You should not need this anymore! As mentioned here, conda now supports TensorFlow. Just use

$ conda install tensorflow-gpu

Compiling TF on a CentOS 6.x system is a pain. Essentially I used the instructions from this github thread; however, I had to make some changes for the newer TF and bazel. I compiled master as of today (r0.12) on a CentOS 6.6 system with CUDA 8 and cuDNN 5. What follows is my edited version of the original instructions.

Update [April 5, 2017]: Follow the updated instructions here (though I’ve also updated the instructions below).

Basic Utils

(Ref. option 4 on this stack overflow thread).

  1. GCC 4.9.2 (was already installed in my system)
  2. Java (JDK 8+). I installed into my anaconda installation by

    $ conda install -c cyclus java-jdk=8.45.14
    $ which javac
    $ javac -version
    javac 1.8.0_45
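
If you want to guard against an older JDK sneaking onto your PATH, the version string can be checked in a couple of lines. This is only a sketch, and it assumes the usual `javac 1.8.0_45`-style output shown above:

```shell
# Require JDK 8+ by parsing the javac version string (sketch; assumes
# the "javac 1.X.Y_ZZ" output format shown above).
ver="javac 1.8.0_45"   # in practice: ver=$(javac -version 2>&1)
minor=$(echo "$ver" | sed -E 's/^javac 1\.([0-9]+).*/\1/')
if [ "$minor" -ge 8 ]; then
  echo "JDK OK ($ver)"
else
  echo "JDK too old ($ver); bazel needs JDK 8+" >&2
fi
```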

Install Bazel

  1. SSH to a node for compiling
  2. mv ~/.cache ~/.cache.bak && mkdir /tmp/cache && ln -s /tmp/cache ~/.cache. The cache cannot live on an NFS mount for the compile to work.
  3. wget && unzip. EDIT (April 11, 2017): the latest TF only compiles with bazel 0.4.5 and up, so compile that instead.
  4. Edit tools/cpp/CROSSTOOL
    • Replace all occurrences of /usr/bin/gcc with gcc path
    • Replace all occurrences of /usr/bin/cpp with cpp path
    • After the tool_path entry containing the gcc path, add these lines:
      1. linker_flag: "-Wl,-R<lib64 path>"
      2. cxx_builtin_include_directory: "<include1 dir>"
      3. cxx_builtin_include_directory: "<include2 dir>"
      4. cxx_builtin_include_directory: "<include3 dir>"
    • In the end, that block of the file looked like this:

       tool_path { name: "ar" path: "/usr/bin/ar" }
       tool_path { name: "compat-ld" path: "/usr/bin/ld" }
       tool_path { name: "cpp" path: "/opt/gcc/4.9.2/bin/cpp" }
       tool_path { name: "dwp" path: "/usr/bin/dwp" }
       tool_path { name: "gcc" path: "/opt/gcc/4.9.2/bin/gcc" }
       cxx_flag: "-std=c++0x"
       linker_flag: "-lstdc++"
       linker_flag: "-B/usr/bin/"
       linker_flag: "-Wl,-R/opt/gcc/4.9.2/lib64"
       cxx_builtin_include_directory: "/opt/gcc/4.9.2/lib/gcc/x86_64-unknown-linux-gnu/4.9.2/include"
       cxx_builtin_include_directory: "/opt/gcc/4.9.2/lib/gcc/x86_64-unknown-linux-gnu/4.9.2/include-fixed"
       cxx_builtin_include_directory: "/opt/gcc/4.9.2/include/c++/4.9.2/"
  5. Edit scripts/bootstrap/
    • Comment out atexit “rm -fr ${DIR}” (an older bazel version probably had this)
    • Comment out

        # eval "cleanup_tempdir_${DIRBASE}() { rm -rf '${DIR}'; }"
        # atexit cleanup_tempdir_${DIRBASE}
  6. export EXTRA_BAZEL_ARGS='-s --verbose_failures --ignore_unsupported_sandboxing --genrule_strategy=standalone --spawn_strategy=standalone --jobs 8'
  7. ./
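
Steps 4, 6, and 7 above can be scripted with sed. The following is a sketch that rewrites the tool paths in a small sample fragment; GCC_HOME is an assumed variable (point it at your toolchain prefix), and in practice you would run the same sed against tools/cpp/CROSSTOOL:

```shell
# Rewrite /usr/bin/gcc and /usr/bin/cpp to a local toolchain (sketch).
GCC_HOME=/opt/gcc/4.9.2          # assumed toolchain prefix; adjust locally
demo=/tmp/CROSSTOOL.demo         # stand-in for tools/cpp/CROSSTOOL
cat > "$demo" <<'EOF'
tool_path { name: "cpp" path: "/usr/bin/cpp" }
tool_path { name: "gcc" path: "/usr/bin/gcc" }
EOF
sed -i "s|/usr/bin/gcc|$GCC_HOME/bin/gcc|g; s|/usr/bin/cpp|$GCC_HOME/bin/cpp|g" "$demo"
cat "$demo"
```

The same substitution pattern applies later to the TensorFlow crosstool files.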
Other notes
  1. Remove any protobuf you may have in your environment variables. I had protobuf 2.6.1 in my env vars, and it kept giving errors like the following:
      third_party/protobuf/3.0.0/src/google/protobuf/stubs/port.h:141:20: error: redefinition of 'const int32 google::protobuf::kint32max'
      static const int32 kint32max = 0x7FFFFFFF;
      In file included from /home/software/protobuf-2.6.1/include/google/protobuf/message_lite.h:42:0,
                  from third_party/protobuf/3.0.0/src/google/protobuf/
      /home/software/protobuf-2.6.1/include/google/protobuf/stubs/common.h:197:20: note: 'const int32 google::protobuf::kint32max' previously defined here
      static const int32 kint32max = 0x7FFFFFFF;

    Simply removing it from my bashrc and logging back into the machine let bazel compile without trouble.

  2. Afterwards, add the bazel binary (bazel-bin/src/bazel) to your $PATH.
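
Note 1 can be checked mechanically before invoking bazel: scan the usual include/library path variables for a stray protobuf. A sketch (the demo assignment stands in for whatever your bashrc exports):

```shell
# Flag any env var whose value mentions protobuf (sketch).
CPLUS_INCLUDE_PATH=/home/software/protobuf-2.6.1/include  # demo value only
found=""
for v in CPATH CPLUS_INCLUDE_PATH LIBRARY_PATH LD_LIBRARY_PATH; do
  eval "val=\${$v:-}"
  case "$val" in
    *protobuf*) echo "$v points at a protobuf install: $val"; found="$found $v" ;;
  esac
done
```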

Install TensorFlow

  1. git clone --recurse-submodules && cd tensorflow
  2. Edit third_party/gpus/crosstool/CROSSTOOL.tpl, making the same changes we made for Bazel. (/usr/bin/gcc likely won’t need to be replaced, though.)
  3. Update (April 11, 2017): Make the same changes in third_party/gpus/crosstool/CROSSTOOL_nvcc.tpl
  4. Edit third_party/gpus/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc.tpl
    • Replace all /usr/bin/gcc with gcc path.
    • Change the python in the first line (the shebang) to the python you use (e.g. anaconda)
    • Undo the temporary “fix” by commenting out the line cmd = 'PATH=' + PREFIX_DIR + ' ' + cmd, which restricts PATH to PREFIX_DIR. (For me, commenting it out was necessary for as to be found.)
  5. Update (April 11, 2017): Apply points (4, 5, 6) from the updated link
  6. ./configure
  7. export EXTRA_BAZEL_ARGS='-s --verbose_failures --ignore_unsupported_sandboxing --genrule_strategy=standalone --spawn_strategy=standalone --jobs 8'
  8. bazel build -c opt --config=cuda --linkopt '-lrt' --copt="-DGPR_BACKWARDS_COMPATIBILITY_MODE" --conlyopt="-std=c99" //tensorflow/tools/pip_package:build_pip_package
    • As mentioned in this issue, this does not work, so use it without all the extra flags:
    • bazel build -c opt --config=cuda --verbose_failures //tensorflow/tools/pip_package:build_pip_package
    • Or build with optimizations (AVX2 (--copt=-mavx2) somehow isn’t working and gives assembler errors, so I removed it)
    • Update: As mentioned here, it compiles on the r0.12 branch, so I used all the optimizations:
    • bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both --config=cuda -k //tensorflow/tools/pip_package:build_pip_package
  9. bazel-bin/tensorflow/tools/pip_package/build_pip_package ~/tensorflow_pkg
  10. pip install ~/tensorflow_pkg/* --upgrade --ignore-installed
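
Before settling on step 8’s optimization flags, it is worth confirming which vector extensions the build host’s CPU actually reports (the AVX2 assembler errors above can also come from an old binutils). A sketch with a stubbed flag list; in real use substitute the grep over /proc/cpuinfo:

```shell
# Derive --copt flags from the CPU's reported features (sketch).
flags="fpu sse sse2 avx fma"   # real use: flags=$(grep -m1 '^flags' /proc/cpuinfo)
copts=""
for f in avx avx2 fma; do
  case " $flags " in
    *" $f "*) copts="$copts --copt=-m$f" ;;
  esac
done
echo "suggested:$copts"
```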

Install cpp optimized protobuf 3.1

  1. Upgrade to the cpp-optimized protobuf, which is 10-50x faster (from here)
    • pip install --upgrade — note this is with GLIBC 2.14; I built from source instead:
$ # install autoconf 2.69, should be easy, just ./configure && make && make install, add to bashrc
$ # install automake 1.15, should be easy, just ./configure && make && make install, add to bashrc
$ # install libtool 2.4, should be easy, just ./configure && make && make install, add to bashrc, including: export ACLOCAL_PATH=$BASEDIR/share/aclocal:$ACLOCAL_PATH
$ # now just follow steps from the above link:
$ git clone
$ cd protobuf
$ ./
$ CXXFLAGS="-fPIC -g -O2" ./configure
$ make -j12
$ export PROTOC=$PWD/src/protoc
$ cd python
$ python bdist_wheel --cpp_implementation --compile_static_extension
$ # IMP: run the following steps from a different directory (not the directory where you compiled)
$ pip uninstall protobuf
$ pip install dist/<wheel file name>
  2. Check using
$ python -c "from google.protobuf.internal import api_implementation; print(api_implementation._default_implementation_type)"
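
The check above can be turned into a pass/fail gate for scripts. This sketch stubs the value; in practice capture it from the python one-liner shown in the last step:

```shell
# Fail loudly if the pure-python protobuf backend is still active (sketch).
impl="cpp"   # real use: capture the output of the python one-liner above
if [ "$impl" = "cpp" ]; then
  echo "protobuf: C++ backend active"
else
  echo "protobuf: pure-python backend; reinstall the compiled wheel" >&2
  exit 1
fi
```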