namespace Eigen {
/** \page TopicMultiThreading Eigen and multi-threading
\section TopicMultiThreading_MakingEigenMT Make Eigen run in parallel
Some of %Eigen's algorithms can exploit the multiple cores present in your hardware.
To this end, it is enough to enable OpenMP on your compiler, for instance:
- GCC: \c -fopenmp
- ICC: \c -openmp
- MSVC: check the respective option in the build properties.
You can control the number of threads that will be used through either the OpenMP API or %Eigen's API, with the following priority, from lowest to highest:
\code
OMP_NUM_THREADS=n ./my_program   // environment variable (lowest priority)
omp_set_num_threads(n);          // OpenMP API
Eigen::setNbThreads(n);          // Eigen's API (highest priority)
\endcode
Unless `setNbThreads` has been called, %Eigen uses the number of threads specified by OpenMP.
You can restore this behavior by calling `setNbThreads(0);`.
You can query the number of threads that will be used with:
\code
int n = Eigen::nbThreads();
\endcode
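Putting these calls together, here is a minimal sketch (the thread count is arbitrary) showing how to override the thread count for a given run and then hand control back to OpenMP:
\code
#include <Eigen/Core>
#include <iostream>

int main()
{
  Eigen::setNbThreads(4);                   // override OpenMP's setting
  std::cout << Eigen::nbThreads() << "\n";  // prints 4

  // ... some Eigen computations ...

  Eigen::setNbThreads(0);                   // restore the default behavior
  std::cout << Eigen::nbThreads() << "\n";  // number of threads reported by OpenMP
  return 0;
}
\endcode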
You can disable %Eigen's multi threading at compile time by defining the \link TopicPreprocessorDirectivesPerformance EIGEN_DONT_PARALLELIZE \endlink preprocessor token.
Currently, the following algorithms can make use of multi-threading:
- general dense matrix-matrix products (see the sketch after this list)
- PartialPivLU
- row-major-sparse * dense vector/matrix products
- ConjugateGradient with \c Lower|Upper as the \c UpLo template parameter.
- BiCGSTAB with a row-major sparse matrix format.
- LeastSquaresConjugateGradient
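For instance, the first item of the list above, the general dense matrix-matrix product, is parallelized transparently; the following sketch (thread count and matrix sizes are arbitrary) merely shows which statement benefits:
\code
#include <Eigen/Dense>
#include <iostream>

int main()
{
  Eigen::setNbThreads(4);  // arbitrary; see the previous section

  // Matrices large enough for the parallel kernel to pay off.
  Eigen::MatrixXd A = Eigen::MatrixXd::Random(2000, 2000);
  Eigen::MatrixXd B = Eigen::MatrixXd::Random(2000, 2000);

  Eigen::MatrixXd C = A * B;  // general dense matrix-matrix product: multi-threaded

  std::cout << "computed with " << Eigen::nbThreads() << " threads\n";
  return 0;
}
\endcode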
\warning On most OS it is <strong>very important</strong> to limit the number of threads to the number of physical cores, otherwise significant slowdowns are expected, especially for operations involving dense matrices.
Indeed, the principle of hyper-threading is to run multiple threads (in most cases 2) on a single core in an interleaved manner.
However, %Eigen's matrix-matrix product kernel is fully optimized and already exploits nearly 100% of the CPU capacity.
Consequently, there is no room for running multiple such threads on a single core, and performance would drop significantly because of cache pollution and other sources of overhead.
At this point you may be wondering why %Eigen does not limit itself to the number of physical cores.
This is simply because OpenMP provides no way to query the number of physical cores, and thus %Eigen will launch as many threads as there are <i>cores</i> reported by OpenMP.
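If you do know the number of physical cores of the target machine, you can enforce the limit yourself, as in the sketch below. The value of 4 physical cores is an assumption for illustration; note that std::thread::hardware_concurrency() reports the number of logical cores (or 0 if unknown), so it can only serve as an upper bound:
\code
#include <Eigen/Core>
#include <algorithm>
#include <thread>

int main()
{
  const int physical_cores = 4;  // assumption: known for the target machine
  const unsigned logical = std::thread::hardware_concurrency();  // logical cores, 0 if unknown
  int n = physical_cores;
  if (logical > 0)
    n = std::min(physical_cores, static_cast<int>(logical));
  Eigen::setNbThreads(n);
  return 0;
}
\endcode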
\section TopicMultiThreading_UsingEigenWithMT Using Eigen in a multi-threaded application
If your own application is multithreaded, and multiple threads make calls to %Eigen, then you have to initialize %Eigen by calling the following routine \b before creating the threads:
\code
#include <Eigen/Core>
int main(int argc, char** argv)
{
Eigen::initParallel();
...
}
\endcode
\note With %Eigen 3.3 and a fully C++11 compliant compiler (i.e., one providing <a href="http://en.cppreference.com/w/cpp/language/storage_duration#Static_local_variables">thread-safe static local variable initialization</a>), calling \c initParallel() is optional.
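As a hypothetical sketch of such an application, each thread below works on its own matrices, and initParallel() is called before any of them is spawned:
\code
#include <Eigen/Dense>
#include <thread>
#include <vector>

// Illustrative per-thread work: each thread owns its data, nothing is shared.
void work(int size)
{
  Eigen::MatrixXd m = Eigen::MatrixXd::Identity(size, size);
  Eigen::MatrixXd r = m * m;
  (void)r;
}

int main()
{
  Eigen::initParallel();  // must be called before creating the threads
  std::vector<std::thread> pool;
  for (int i = 0; i < 4; ++i)
    pool.emplace_back(work, 100 + i);
  for (auto &t : pool)
    t.join();
  return 0;
}
\endcode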
\warning Note that all functions generating random matrices are \b neither re-entrant nor thread-safe. These include DenseBase::Random() and DenseBase::setRandom(), even after a call to `Eigen::initParallel()`. This is because these functions are based on `std::rand`, which is not re-entrant.
For a thread-safe random number generator, we recommend the use of C++11 random generators (\link DenseBase::NullaryExpr(Index, const CustomNullaryOp&) example \endlink) or `boost::random`.
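For instance, here is a minimal sketch (assuming %Eigen 3.3 or later) of a thread-safe alternative to MatrixXd::Random() built from a C++11 generator and DenseBase::NullaryExpr; the engine, seed and distribution are illustrative choices, and each thread should own its own instances:
\code
#include <Eigen/Dense>
#include <random>

int main()
{
  // Per-thread generator state: no hidden global state, unlike std::rand.
  std::mt19937 engine(2021);  // arbitrary seed
  std::uniform_real_distribution<double> dist(-1.0, 1.0);

  Eigen::MatrixXd m = Eigen::MatrixXd::NullaryExpr(4, 4,
      [&]() { return dist(engine); });
  return 0;
}
\endcode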
If your application itself is parallelized with OpenMP, you might want to disable %Eigen's own parallelization as detailed in the previous section.
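For instance, when the outer loop of the application is already distributed over OpenMP threads, each iteration typically works on its own small problem, and letting %Eigen spawn additional threads inside each iteration would only oversubscribe the cores. A sketch of this pattern (the loop body is purely illustrative):
\code
#include <Eigen/Dense>

int main()
{
  Eigen::initParallel();   // see the previous section
  Eigen::setNbThreads(1);  // let OpenMP own the parallelism, not Eigen

  #pragma omp parallel for
  for (int i = 0; i < 100; ++i)
  {
    // Each iteration solves its own small, independent problem.
    Eigen::MatrixXd A = Eigen::MatrixXd::Identity(64, 64) * double(i + 1);
    Eigen::VectorXd b = Eigen::VectorXd::Ones(64);
    Eigen::VectorXd x = A.partialPivLu().solve(b);
    (void)x;
  }
  return 0;
}
\endcode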
\warning Using OpenMP with custom scalar types that might throw exceptions can lead to unexpected behaviour in the event of throwing.
*/
}