Follow these steps:
- `git clone` the TensorFlow repository and `git checkout r1.14`
- ./configure
- The default settings are fine for `./configure`.
- But on a server, we might want XLA, CUDA, TensorRT, and MPI enabled.
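The server-side options above can be pre-answered non-interactively: in TF 1.x, `./configure` reads environment variables before prompting. A sketch (variable names are taken from TensorFlow's `configure.py`; any prompt not covered still falls back to its default):

```shell
# Pre-answer ./configure prompts via environment variables (TF 1.x).
export TF_ENABLE_XLA=1      # enable XLA JIT compilation
export TF_NEED_CUDA=1       # enable CUDA (GPU) support
export TF_NEED_TENSORRT=1   # enable TensorRT support
export TF_NEED_MPI=1        # enable MPI support
# ./configure   # would now skip the prompts answered above
```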
- `pip install numpy keras_preprocessing` to install some requirements.
- `bazel build -c dbg --copt="-DNDEBUG" --config cuda --strip=never //tensorflow/tools/pip_package:build_pip_package` to build the wheel package.
  - `-DNDEBUG`: workaround for an absl string_view problem.
  - `--copt="-Og"`: triggers an internal parser crash; do not use.
- `./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg` to generate the package.
- `pip install tensorflow.whl` to install it.
More build-instruction improvements: it seems easier to just use NVIDIA's build.
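A sketch of that alternative, assuming "NVIDIA's build" means the prebuilt TensorFlow container from NVIDIA's NGC registry (the `<tag>` is a placeholder; pick a release from the NGC catalog):

```shell
# Hypothetical: use NVIDIA's prebuilt TensorFlow container instead of
# building from source. Requires Docker with NVIDIA GPU support.
docker pull nvcr.io/nvidia/tensorflow:<tag>
docker run --gpus all -it --rm nvcr.io/nvidia/tensorflow:<tag>
```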
CLion
- install the Bazel plugin
- run `./configure` to set up the TensorFlow environment
- import the tensorflow folder as a Bazel workspace
- create a new workspace file such as:
directories:
  .
derive_targets_from_directories: false
targets:
  //tensorflow/tools/pip_package:build_pip_package
additional_languages:
  python