## Dot + add operation
Is it possible to add a method that performs a dot product and adds the result to an existing matrix in a single operation? Something like `C = dot_add(A, B, C)`, equivalent to `C += A @ B`. This behavior is natively provided by the BLAS `*gemm` primitive.

The goal is to reduce peak memory consumption. During the computation of `C += A @ B`, the maximum allocated memory is twice the size of C, because the product `A @ B` is materialized in a full-size temporary before being added. Using `*gemm` to accumulate the result directly, the maximum memory consumption is less than 1.5x the size of C. This difference is significant for large matrices.

Is anyone interested in this?

_______________________________________________
NumPy-Discussion mailing list
[hidden email]
https://mail.python.org/mailman/listinfo/numpy-discussion
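The temporary described above can be observed directly with `tracemalloc`, since NumPy reports its buffer allocations to it. This is an illustrative sketch added for clarity, not part of the original post:

```python
import numpy as np
import tracemalloc

n = 500
A = np.ones((n, n))
B = np.ones((n, n))
C = np.zeros((n, n))
nbytes = C.nbytes  # 500 * 500 * 8 bytes = 2 MB

# Start tracing *after* A, B, C exist, so the peak reflects only
# memory allocated during the operation itself.
tracemalloc.start()
C += A @ B  # A @ B materializes a full temporary the size of C
_, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

# The temporary accounts for roughly one extra copy of C,
# i.e. peak usage during `C += A @ B` is about 2x the size of C overall.
assert peak >= nbytes
```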
## Re: Dot + add operation

On Wed, Mar 31, 2021 at 2:35 AM Guillaume Bethouart <[hidden email]> wrote:

> Is it possible to add a method to perform a dot product and add the result to an existing matrix in a single operation? [...]

Hi Guillaume, such fused operations cannot easily be done with NumPy alone, and it does not make sense to add separate APIs for that purpose, because there are so many combinations of function calls that one might want to fuse.

Instead, Numba, Pythran, or numexpr can add this to some extent for NumPy code. For example, search for "loop fusion" in the Numba docs.

Cheers,
Ralf
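Short of reaching for a fusion tool, the peak memory can also be capped in plain NumPy by accumulating the product block by block. The helper below is a sketch added for illustration (the name `blocked_dot_add` is made up, not an existing API):

```python
import numpy as np

def blocked_dot_add(A, B, C, block=128):
    """Accumulate A @ B into C one row-block at a time.

    Each iteration materializes only a (block, C.shape[1]) temporary,
    so the extra peak memory is a fraction of a full-size product.
    """
    for i in range(0, A.shape[0], block):
        C[i:i + block] += A[i:i + block] @ B
    return C

rng = np.random.default_rng(0)
A = rng.random((300, 200))
B = rng.random((200, 250))
C = rng.random((300, 250))
expected = C + A @ B

blocked_dot_add(A, B, C)
assert np.allclose(C, expected)
```

The trade-off is a small loss of throughput from the extra Python-level loop, which matters less as the matrices grow.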
## Re: Dot + add operation

Or just use SciPy's `get_blas_funcs` to access `*gemm`, which directly exposes this functionality.

Kevin

On Wed, Mar 31, 2021 at 12:35 PM Ralf Gommers <[hidden email]> wrote:

> [...]
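Kevin's suggestion can be sketched as follows. For `gemm` to update C in place (rather than returning a new array), C must be Fortran-ordered and of the dtype the resolved routine expects:

```python
import numpy as np
from scipy.linalg import get_blas_funcs

rng = np.random.default_rng(1)
A = rng.random((4, 5))
B = rng.random((5, 3))
# C must be Fortran-ordered (and float64 here) for an in-place update;
# otherwise gemm copies C and returns the copy instead.
C = np.asfortranarray(rng.random((4, 3)))
expected = C + A @ B

gemm = get_blas_funcs("gemm", (A, B, C))  # resolves to dgemm for float64
# C <- alpha * (A @ B) + beta * C, written into C's own buffer
result = gemm(alpha=1.0, a=A, b=B, beta=1.0, c=C, overwrite_c=True)

assert np.shares_memory(result, C)  # updated in place, no full-size copy
assert np.allclose(result, expected)
```

This avoids the full-size temporary entirely, at the cost of the stricter layout requirement on C.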