#3.0.1
#3.1.1
#3.2.0
3.2.4
#5745:37f59e65eb6c
5891:d8652709345d # introduce AVX
#5893:24b4dc92c6d3 # merge
5895:997c2ef9fc8b # introduce FMA
#5904:e1eafd14eaa1 # complex and AVX
5908:f8ee3c721251 # improve packing with ptranspose
#5921:ca808bb456b0 # merge
#5927:8b1001f9e3ac
5937:5a4ca1ad8c53 # New gebp kernel: up to 3 packets x 4 register-level blocks
#5949:f3488f4e45b2 # merge
#5969:e09031dccfd9 # Disable 3pX4 kernel on Altivec
#5992:4a429f5e0483 # merge
before-evaluators
#6334:f6a45e5b8b7c # Implement evaluator for sparse outer products
#6639:c9121c60b5c7
#6655:06f163b5221f # Properly detect FMA support on ARM
#6677:700e023044e7 # FMA has been wrongly disabled
#6681:11d31dafb0e3
#6699:5e6e8e10aad1 # merge default to tensors
#6726:ff2d2388e7b9 # merge default to tensors
#6742:0cbd6195e829 # merge default to tensors
#6747:853d2bafeb8f # Generalized the gebp APIs
6765:71584fd55762 # Made the blocking computation aware of the L3 cache;<br/> Also optimized the blocking parameters to take<br/> into account the number of threads used for a computation.
6781:9cc5a931b2c6 # generalized gemv
6792:f6e1daab600a # ensured that contractions that can be reduced to a matrix vector product<br/> actually use the matrix vector product code path
#6844:039efd86b75c # merge tensor
6845:7333ed40c6ef # change prefetching in gebp
#6856:b5be5e10eb7f # merge index conversion
6893:c3a64aba7c70 # clean blocking size computation
6899:877facace746 # rotating kernel for ARM only
#6904:c250623ae9fa # result_of
6921:915f1b1fc158 # fix prefetching change for ARM
6923:9ff25f6dacc6 # prefetching
6933:52572e60b5d3 # blocking size strategy
6937:c8c042f286b2 # avoid redundant pack_rhs
6981:7e5d6f78da59 # dynamic loop swapping
6984:45f26866c091 # remove dynamic loop swapping,<br/> adjust lhs's micro panel height to fully exploit L1 cache
6986:a675d05b6f8f # blocking heuristic:<br/> block on the rhs in L1 if the lhs fits in L1.
7013:f875e75f07e5 # organize our default cache sizes a little,<br/> and use a saner default L1 outside of x86 (10% faster on Nexus 5)
7015:8aad8f35c955 # Refactor computeProductBlockingSizes to make room<br/> for the possibility of using lookup tables
7016:a58d253e8c91 # Polish lookup tables generation
7018:9b27294a8186 # actual_panel_rows computation should always be resilient<br/> to parameters not consistent with the known L1 cache size, see comment
7019:c758b1e2c073 # Provide an empirical lookup table for blocking sizes measured on a Nexus 5.<br/> Only for float, only for Android on ARM 32bit for now.
7085:627e039fba68 # Bug 986: add support for coefficient-based<br/> product with 0 depth.
7098:b6f1db9cf9ec # Bug 992: don't select a 3p GEMM path with non-SIMD scalar types.
7591:09a8e2186610 # 3.3-alpha1
7650:b0f3c8f43025 # help clang inlining
7708:dfc6ab9d9458 # Improve numerical accuracy in LLT and triangular solve<br/> by using true scalar divisions (instead of x * (1/y))
#8744:74b789ada92a # Improved the matrix multiplication blocking in the case<br/> where mr is not a power of 2 (e.g on Haswell CPUs)
8789:efcb912e4356 # Made the index type a template parameter to evaluateProductBlockingSizes.<br/> Use numext::mini and numext::maxi instead of <br/> std::min/std::max to compute blocking sizes.
8972:81d53c711775 # Don't optimize the processing of the last rows of<br/> a matrix matrix product in cases that violate<br/> the assumptions made by the optimized code path.
8985:d935df21a082 # Remove the rotating kernel.
8988:6c2dc56e73b3 # Bug 256: enable vectorization with unaligned loads/stores.
9148:b8b8c421e36c # Relax mixing-type constraints for binary coeff-wise operators
9174:d228bc282ac9 # merge
9175:abc7a3600098 # Include the cost of stores in unrolling
9212:c90098affa7b # Fix perf regression introduced in changeset 8aad8f35c955
9213:9f1c14e4694b # Fix perf regression in dgemm introduced by changeset 81d53c711775
9361:69d418c06999 # 3.3-beta2
9445:f27ff0ad77a3 # Optimize expression matching 'd?=a-b*c' as 'd?=a; d?=b*c;'
9583:bef509908b9d # 3.3-rc1
9593:2f24280cf59a # Bug 1311: fix alignment logic in some cases<br/> of (scalar*small).lazyProduct(small)
9722:040d861b88b5 # Disabled part of the matrix matrix peeling code<br/> that's incompatible with 512 bit registers
9792:26667be4f70b # 3.3.0
9891:41260bdfc23b # Fix a performance regression in (mat*mat)*vec<br/> for which mat*mat was evaluated multiple times.
9942:b1d3eba60130 # Operators += and -= do not resize!
9943:79bb9887afd4 # Help the compiler generate clean and efficient code in mat*vec
9946:2213991340ea # Complete rewrite of column-major-matrix * vector product<br/> to deliver higher performance on modern CPUs.
9955:630471c3298c # Improve performance of row-major-dense-matrix * vector products<br/> for recent CPUs.
9975:2eeed9de710c # Revert vec/y to vec*(1/y) in row-major TRSM
10442:e3f17da72a40 # Bug 1435: fix aliasing issue in expressions like: A = C - B*A;
10735:6913f0cf7d06 # Adds missing EIGEN_STRONG_INLINE to support MSVC<br/> properly inlining small vector calculations
10943:4db388d946bd # Bug 1562: optimize evaluation of small products<br/> of the form s*A*B by rewriting them as: s*(A.lazyProduct(B))<br/> to save a costly temporary.<br/> Measured speedup from 2x to 5x.
10961:5007ff66c9f6 # Introduce the macro ei_declare_local_nested_eval to<br/> help allocate local temporaries on the stack via alloca,<br/> and let outer products make good use of it.
11083:30a528a984bb # Bug 1578: Improve prefetching in matrix multiplication on MIPS.
11533:71609c41e9f8 # PR 526: Speed up multiplication of small, dynamically sized matrices
11545:6d348dc9b092 # Vectorize row-by-row gebp loop iterations on 16 packets as well
11579:efda481cbd7a # Bug 1624: improve matrix-matrix product on ARM 64, 20% speedup
11606:b8d3f548a9d9 # do not read buffers out of bounds
11638:22f9cc0079bd # Implement AVX512 vectorization of std::complex<float/double>
11642:9f52fde03483 # Bug 1636: fix gemm performance issue with gcc>=6 and no FMA
11648:81172653b67b # Bug 1515: disable gebp's 3pX4 micro kernel<br/> for MSVC<=19.14 because of register spilling.
11654:b81188e099f3 # fix EIGEN_GEBP_2PX4_SPILLING_WORKAROUND<br/> for non-vectorized types and non-x86/64 targets
11664:71546f1a9f0c # enable spilling workaround on architectures with SSE/AVX
11669:b500fef42ced # Artificially increase L1-blocking size for AVX512.<br/> +10% speedup with current kernels.
11683:2ea2960f1c7f # Make code compile again for older compilers.
11753:556fb4ceb654 # Bug 1633: refactor gebp kernel and optimize for NEON
11761:cefc1ba05596 # Bug 1661: fix regression in GEBP and AVX512
11763:1e41e70fe97b # GEBP: cleanup logic to choose between<br/> 4 packets or 1 packet (=209bf81aa3f3+fix)
11803:d95b5d78598b # gebp: Add new ½ and ¼ packet rows per (peeling) round on the lhs