I was reading this and got thinking about whether a ufunc could compute
the sum of squared differences in a single pass without a temporary
array. The Python code below demonstrates a possible approach.

import numpy as np
x = np.arange(10)
c = 1.0
def add_square_diff(x1, x2):
    return x1 + (x2-c)**2
ufunc = np.frompyfunc(add_square_diff, 2, 1)
print(ufunc.reduce(x) - x[0] + (x[0]-c)**2)
print(np.sum(np.square(x-c)))

I have (at least) 4 questions:
1. Is it possible to pass run time constants to a ufunc written in C
for use in its inner loop, and if so how?
2. Is it possible to pass an initial value to reduce to avoid the
clean up required for the first element?
3. Does that ufunc work, or are there special cases which cause it to
fall apart?
4. Would a very specialized ufunc such as this be considered for
incorporating in numpy since it would help reduce time and memory of
functions already in numpy?

Thank you,
Matt
_______________________________________________
NumPy-Discussion mailing list
[hidden email]
https://mail.scipy.org/mailman/listinfo/numpy-discussion
On Fr, 2016-11-04 at 13:11 -0400, Matthew Harrigan wrote:
> I was reading this and got thinking about if a ufunc could compute
> the sum of squared differences in a single pass without a temporary
> array. The python code below demonstrates a possible approach.
>
> import numpy as np
> x = np.arange(10)
> c = 1.0
> def add_square_diff(x1, x2):
>     return x1 + (x2-c)**2
> ufunc = np.frompyfunc(add_square_diff, 2, 1)
> print(ufunc.reduce(x) - x[0] + (x[0]-c)**2)
> print(np.sum(np.square(x-c)))
>
> I have (at least) 4 questions:
> 1. Is it possible to pass run time constants to a ufunc written in C
> for use in its inner loop, and if so how?

I don't think it is anticipated, since a ufunc could in most cases use a
third argument, but a 3 arg ufunc can't be reduced. Not sure if there
might be some trickery possible.

> 2. Is it possible to pass an initial value to reduce to avoid the
> clean up required for the first element?

This is the identity normally. But the identity can only be 0, 1 or -1
right now I think. The identity is what the output array gets
initialized with (which effectively makes it the first value passed
into the inner loop).

> 3. Does that ufunc work, or are there special cases which cause it to
> fall apart?
> 4. Would a very specialized ufunc such as this be considered for
> incorporating in numpy since it would help reduce time and memory of
> functions already in numpy?

Might be mixing up things, however, IIRC the single pass approach has
bad numerical accuracy, so I doubt that it is a good default algorithm.

- Sebastian

> Thank you,
> Matt
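The accuracy issue alluded to here is the textbook one-pass formula sum(x**2) - n*mean**2, which cancels catastrophically when the mean is large relative to the spread. A small illustrative sketch (the offset and float32 dtype are chosen purely to make the effect visible, not taken from the thread):

```python
import numpy as np

# Data with a large offset and a small spread, in float32 so the
# cancellation is easy to see.
x = np.float32(1e4) + np.linspace(0, 1, 1000, dtype=np.float32)

mean = x.mean(dtype=np.float32)
# One-pass textbook formula: subtracts two huge, nearly equal numbers.
one_pass = np.sum(x * x, dtype=np.float32) - x.size * mean**2
# Two-pass formula: subtract the mean first, then square and sum.
two_pass = np.sum(np.square(x - mean), dtype=np.float32)

print(one_pass)  # dominated by rounding error
print(two_pass)  # close to the true value, about 83.5
```

A single-pass algorithm can still be accurate (e.g. Welford's update), but the naive accumulation above is the one that fails.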
I didn't notice identity before. Seems like frompyfunc always sets it
to None. If it were zero maybe it would work as desired here.

In the writing your own ufunc doc, I was wondering if the pointer to
data could be used to get a constant at runtime. If not, what could
that be used for?

    static void double_logit(char **args, npy_intp *dimensions,
                             npy_intp* steps, void* data)

Why would the numerical accuracy be any different? The subtraction
and square operations look identical, and I thought np.sum just calls
np.add.reduce, so the reduction step uses the same code and would
therefore have the same accuracy.

Thanks

On Fri, Nov 4, 2016 at 1:56 PM, Sebastian Berg <[hidden email]> wrote:
> On Fr, 2016-11-04 at 13:11 -0400, Matthew Harrigan wrote:
On Fr, 2016-11-04 at 15:42 -0400, Matthew Harrigan wrote:
> I didn't notice identity before. Seems like frompyfunc always sets
> it to None. If it were zero maybe it would work as desired here.
>
> In the writing your own ufunc doc, I was wondering if the pointer to
> data could be used to get a constant at runtime. If not, what could
> that be used for?
> static void double_logit(char **args, npy_intp *dimensions,
> npy_intp* steps, void* data)
>
> Why would the numerical accuracy be any different? The subtraction
> and square operations look identical and I thought np.sum just calls
> np.add.reduce, so the reduction step uses the same code and would
> therefore have the same accuracy.

Sorry, did not read it carefully, I guess `c` is the mean, so you are
doing the two pass method.

- Sebastian

> Thanks
>
> On Fri, Nov 4, 2016 at 1:56 PM, Sebastian Berg <[hidden email]> wrote:
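To make the identity point concrete: built-in ufuncs carry an identity that seeds reduce, while frompyfunc leaves it as None, which is why the original example needs the first-element cleanup. A quick check (sketch):

```python
import numpy as np

print(np.add.identity)       # 0
print(np.multiply.identity)  # 1

c = 1.0
uf = np.frompyfunc(lambda acc, v: acc + (v - c)**2, 2, 1)
print(uf.identity)           # None

# With no identity, reducing an empty array has nothing to return:
try:
    uf.reduce(np.array([], dtype=object))
except ValueError as err:
    print("empty reduce failed:", err)
```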
I started a ufunc to compute the sum of square differences here
<https://gist.github.com/mattharrigan/6f678b3d6df5efd236fc23bfb59fd3bd>.
It is about 4x faster and uses half the memory compared to
np.sum(np.square(x-c)). This could significantly speed up common
operations like std and var (where c=np.mean(x)). It is faster because
it is a single pass instead of 3, and also because the inner loop is
specialized for the common reduce case, which might be an optimization
worth considering more generally.

I think I have answered my questions #2 and #3 above.

Can someone please point me to an example where "data" was used in a
ufunc inner loop? How can that value be set at runtime? Thanks

On Fri, Nov 4, 2016 at 5:33 PM, Sebastian Berg <[hidden email]> wrote:
> On Fr, 2016-11-04 at 15:42 -0400, Matthew Harrigan wrote:
here is an old unfinished PR adding max_abs which has a similar purpose
as yours: https://github.com/numpy/numpy/pull/5509

As far as I know, data is set during ufunc construction and is part of
the ufunc object, so it is not really suitable for passing in constants
on call. Its intended purpose is to pass in functions to be called in
generic loops like PyUFunc_d_d.

Having an option to mark arguments of a ufunc as special in reductions
could be useful, e.g. it would allow a potential
(fused-)multiply-and-add ufunc to be used to implement a weighted sum.

On 11.11.2016 17:25, Matthew Harrigan wrote:
> I started a ufunc to compute the sum of square differences here
> <https://gist.github.com/mattharrigan/6f678b3d6df5efd236fc23bfb59fd3bd>.
> It is about 4x faster and uses half the memory compared to
> np.sum(np.square(x-c)). This could significantly speed up common
> operations like std and var (where c=np.mean(x)). It is faster because
> it is a single pass instead of 3, and also because the inner loop is
> specialized for the common reduce case, which might be an optimization
> worth considering more generally.
>
> I think I have answered my question #2&3 above.
>
> Can someone please point me to an example where "data" was used in a
> ufunc inner loop? How can that value be set at runtime? Thanks
My possibly poorly thought out API would be something like
np.add_sq_diff.set_c(42.0).reduce(x), basically adding a setter method
which returns self. Thanks for explaining what data was intended to do.

All of this is really just a dance around the requirement that reduce
only works on binary functions. Would there be any interest in
extending reduce to allow for arbitrary numbers of input and output
arguments?

On Fri, Nov 11, 2016 at 1:38 PM, Julian Taylor <[hidden email]> wrote:
> here is an old unfinished PR adding max_abs which has a similar purpose
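Short of new API, the constant can be baked in with a closure around frompyfunc, a Python-level stand-in for the hypothetical set_c(...) method (the factory name below is made up for illustration):

```python
import numpy as np

def make_add_square_diff(c):
    # The constant is captured by the closure instead of a setter method.
    return np.frompyfunc(lambda acc, v: acc + (v - c)**2, 2, 1)

x = np.arange(10)
uf = make_add_square_diff(1.0)

# Prepend an explicit 0.0 "identity" to sidestep the first-element
# cleanup from the original example.
total = uf.reduce(np.concatenate(([0.0], x)))
print(total, np.sum(np.square(x - 1.0)))  # both 205.0
```

This sidesteps the C-level question entirely, of course; each distinct c builds a new Python-loop ufunc, so it demonstrates the shape of the API rather than the performance.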
On Fri, 11 Nov 2016 11:25:58 -0500
Matthew Harrigan <[hidden email]> wrote:

> I started a ufunc to compute the sum of square differences here
> <https://gist.github.com/mattharrigan/6f678b3d6df5efd236fc23bfb59fd3bd>.
> It is about 4x faster and uses half the memory compared to
> np.sum(np.square(x-c)).

Hi Matt,

Using *blas* you win already a factor of two (maybe more depending on
your blas implementation):

% python -m timeit -s "import numpy as np;x=np.linspace(0,1,int(1e7))" "np.sum(np.square(x-2.))"
10 loops, best of 3: 135 msec per loop

% python -m timeit -s "import numpy as np;x=np.linspace(0,1,int(1e7))" "y=x-2.;np.dot(y,y)"
10 loops, best of 3: 70.2 msec per loop

Cheers,
--
Jérôme Kieffer
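For comparison, the same single-pass contraction can also be spelled with np.einsum, which by default does not dispatch to BLAS (the y = x - c temporary is still needed); a quick equivalence check:

```python
import numpy as np

x = np.linspace(0, 1, 10000)
c = 2.0
y = x - c

r_sum = np.sum(np.square(y))
r_dot = np.dot(y, y)
# Contract index i against itself: a sum of elementwise products.
r_einsum = np.einsum('i,i->', y, y)

print(r_sum, r_dot, r_einsum)  # all three agree to rounding error
```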
Yeah,

but it's not so obvious what's happening "under the hood". Consider
this (with an old Win7 machine):

Python 3.5.2 (v3.5.2:4def2a2901a5, Jun 25 2016, 22:18:55) [MSC v.1900 64 bit (AMD64)]

np.__version__
'1.11.1'
x= np.linspace(0, 1, int(1e6))
timeit np.sum(np.square(x- 2.))
10 loops, best of 3: 23 ms per loop
y= x- 2.
timeit np.dot(y, y)
The slowest run took 18.60 times longer than the fastest. This could
mean that an intermediate result is being cached.
1000 loops, best of 3: 1.78 ms per loop
timeit np.dot(y, y)
1000 loops, best of 3: 1.73 ms per loop

Best,
eat

On Mon, Nov 14, 2016 at 10:38 AM, Jerome Kieffer <[hidden email]> wrote:
> On Fri, 11 Nov 2016 11:25:58 -0500
Still slower, and worse, it uses 2x the memory for the intermediate
temporary array. I propose allowing implicit reductions with ufuncs.
Specifically, if out is provided with shape[axis] = 1, then pass it on
to the ufunc with a stride of 0. That should allow this to work:

x = np.arange(10)

def add_square_diff(x1, x2, x3):
    return x1 + (x2-x3)**2

def minmax(x1, x2, x3):
    return min(x1,x3), max(x2,x3)

maxx = np.array([-np.inf])

On Mon, Nov 14, 2016 at 3:38 AM, Jerome Kieffer <[hidden email]> wrote:
> On Fri, 11 Nov 2016 11:25:58 -0500
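The restriction being worked around here is easy to confirm: reduce refuses any ufunc that is not binary, so a three-argument variant cannot carry the constant today (sketch):

```python
import numpy as np

# Three inputs, one output: the constant c would ride along as x3.
add_square_diff = np.frompyfunc(lambda x1, x2, x3: x1 + (x2 - x3)**2, 3, 1)
x = np.arange(10)

try:
    add_square_diff.reduce(x)
except ValueError as err:
    # e.g. "reduce only supported for binary functions"
    print("reduce rejected:", err)
```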
On Mon, Nov 14, 2016 at 5:40 PM, Matthew Harrigan <[hidden email]> wrote:
NumPy already has a generic enough interface for creating such ufuncs.
In fact, it's called a "generalized ufunc":

I think you could already write "implicit reductions" using gufuncs?
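One way to try this suggestion without writing C is np.vectorize with a core signature (the signature argument arrived in NumPy 1.12, just after this thread): a core dimension named on the input but absent from the output is consumed, which is exactly an implicit reduction.

```python
import numpy as np

# "(i),()->()" : the core axis i of the first argument is reduced away.
# A compiled gufunc would be registered with the same signature string.
sq_diff_sum = np.vectorize(lambda v, c: np.sum((v - c)**2),
                           signature='(i),()->()')

x = np.arange(12.0).reshape(3, 4)
print(sq_diff_sum(x, 1.0))  # one sum per row: [6., 86., 294.]
```

np.vectorize loops in Python, so this shows the semantics rather than the speed of a real gufunc.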
On Mon, 14 Nov 2016 22:38:25 +0200
eat <[hidden email]> wrote:

> but it's not so obvious what's happening "under the hood". Consider
> this (with an old Win7 machine):
> Python 3.5.2 (v3.5.2:4def2a2901a5, Jun 25 2016, 22:18:55) [MSC v.1900
> 64 bit (AMD64)]
> np.__version__
> '1.11.1'

What matters is the "blas" library used under the hood, hence the
options passed to numpy at compile time. I noticed 20x differences
depending on the blas version. But more importantly:

* results are the same (at the limit of the numerical precision)
* dot() was always faster than sum(square()), varying from a bit to a
lot. I agree this may change in future versions of numpy.

Cheers,
--
Jérôme Kieffer
Thanks for pointing me to that. I think it is a great match for a 1D
case, like the inner product of two arrays in terms of signature. I
don't see how it works with higher dimensional arrays, especially with
something like the axis parameter in ufunc.reduce. With what I
proposed, for an array of shape (M, N) for example, result.shape could
be (1,) or (M, 1) or (1, N), for reducing over the flattened array or
either axis. Can you do something like that with a gufunc, or do you
need to iterate gufunc calls over slices/views? Thanks again.

On Mon, Nov 14, 2016 at 8:49 PM, Stephan Hoyer <[hidden email]> wrote:
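On the axis question: a gufunc always contracts the trailing core axis, so reducing over a different axis means moving that axis last first, e.g. with np.moveaxis. A sketch using the same np.vectorize stand-in for a compiled gufunc (NumPy >= 1.12 for the signature argument):

```python
import numpy as np

sq_diff_sum = np.vectorize(lambda v, c: np.sum((v - c)**2),
                           signature='(i),()->()')
x = np.arange(12.0).reshape(3, 4)

# The core axis is always the last one; np.moveaxis selects which axis
# that is, playing the role of ufunc.reduce's axis= parameter.
over_rows = sq_diff_sum(x, 1.0)                      # reduce axis 1 -> shape (3,)
over_cols = sq_diff_sum(np.moveaxis(x, 0, -1), 1.0)  # reduce axis 0 -> shape (4,)
print(over_rows.shape, over_cols.shape)
```

So no per-slice iteration is needed in user code; the gufunc machinery broadcasts over the loop dimensions, at the cost of a transpose-like view up front.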