Compile Tensorflow
TensorFlow packages often assume the processor supports AVX2. Our Xeon processor does not, so TensorFlow must be compiled without that flag.
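Before compiling, it is worth confirming that the CPU really lacks AVX2; on a Linux host the CPU flags can be checked directly:
grep -o avx2 /proc/cpuinfo | sort -u   # prints "avx2" if the CPU supports it, nothing otherwise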
Verify dependencies
sudo apt install python3-dev python3-pip
Install Dependencies
sudo pip3 install numpy
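A quick sanity check that the interpreter used for the build can import numpy:
python3 -c "import numpy; print(numpy.__version__)"   # should print the installed numpy version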
Install Bazel using Bazelisk
sudo npm install -g @bazel/bazelisk
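Bazelisk installs a bazel wrapper that downloads the Bazel release the checkout asks for (via its .bazelversion file, if present). A quick check that it is on the PATH:
bazel version   # bazelisk fetches and runs the matching Bazel release on first use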
Checkout Tensorflow
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
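If you want a specific release rather than the default branch, check out its tag first; the tag below is only an example:
git checkout v2.6.0   # example tag, pick the release you actually want to build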
Configure
Keep all defaults
./configure
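The script prompts interactively, but it can also read answers from environment variables, so a non-interactive run might look like the line below (the variable names come from the TensorFlow build documentation; treat them as assumptions if your version differs):
PYTHON_BIN_PATH=$(which python3) TF_NEED_CUDA=0 ./configure   # CPU-only build, answers supplied via environment variables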
Compile
The jobs parameter may be needed on some servers to limit how many compile jobs run in parallel and keep memory usage down.
bazel build --jobs 2 //tensorflow/tools/pip_package:build_pip_package
I got compilation errors at one point, and some guidance suggested adding the following flag to the command to reduce memory consumption:
--local_ram_resources 2048
or
bazel build --jobs 2 --local_ram_resources 2048 //tensorflow/tools/pip_package:build_pip_package
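If you rebuild often, the same limits can live in your user .bazelrc so every build picks them up automatically; a minimal sketch:
# ~/.bazelrc -- apply the memory limits to all bazel build invocations
build --jobs=2
build --local_ram_resources=2048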
Almost at the end of the build, I got a compiler error:
from tensorflow/core/kernels/pad_op.cc:20:
external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorPadding.h: In member function 'Eigen::TensorEvaluator<const Eigen::TensorPaddingOp<PaddingDimensions, XprType>, Device>::PacketReturnType Eigen::TensorEvaluator<const Eigen::TensorPaddingOp<PaddingDimensions, XprType>, Device>::packetRowMajor(Eigen::TensorEvaluator<const Eigen::TensorPaddingOp<PaddingDimensions, XprType>, Device>::Index) const [with PaddingDimensions = const Eigen::array<Eigen::IndexPair<long int>, 2ul>; ArgType = const Eigen::TensorMap<Eigen::Tensor<const std::complex<float>, 2, 1, long int>, 16, Eigen::MakePointer>; Device = Eigen::ThreadPoolDevice]':
external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorPadding.h:688:7: internal compiler error: in emit_move_insn, at expr.c:3547
  values[i] = coeff(index+i);
I fixed it with this suggestion:
With this workaround, you can compile tensorflow with gcc 6.3.
In external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorReduction.h:801 (you can find the file in a subdirectory of ~/.cache/bazel), replace
values[i] = internal::InnerMostDimReducer<Self, Op>::reduce(*this, firstIndex + i * num_values_to_reduce, num_values_to_reduce, reducer);
with these two statements instead:
Self::CoeffReturnType a = internal::InnerMostDimReducer<Self, Op>::reduce(*this, firstIndex + i * num_values_to_reduce, num_values_to_reduce, reducer);
values[i] = a;
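To locate the cached copy of that Eigen header, something like the following should work (the exact Bazel cache path varies per machine):
find ~/.cache/bazel -path '*eigen_archive*' -name TensorReduction.h   # prints the path of the header to patch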