

Hello all,
It says the following:

A      (3d array): 15 x 3 x 5
B      (2d array):      3 x 5
Result (3d array): 15 x 3 x 5
But, the rule did not work for me. Here's my toy example:

>>> a = np.arange(3*4*5).reshape(3,4,5)
>>> b = np.arange(4*5).reshape(4,5)
>>> np.dot(a, b)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: shapes (3,4,5) and (4,5) not aligned: 5 (dim 2) != 4 (dim 0)
Am I misreading something? Thank you in advance!
_______________________________________________
NumPyDiscussion mailing list
[hidden email]
https://mail.python.org/mailman/listinfo/numpydiscussion


On Sat, Apr 20, 2019 at 12:24 AM C W <[hidden email]> wrote:
>
> Am I misreading something? Thank you in advance!
Hey,
You are missing that the broadcasting rules typically apply to
arithmetic operations and methods that are specified explicitly to
broadcast. There is no mention of broadcasting in the docs of np.dot
[1], and its behaviour is a bit more complicated.
Specifically for multidimensional arrays (which you have), the doc says

    If a is an N-D array and b is an M-D array (where M >= 2), it is a
    sum product over the last axis of a and the second-to-last axis of b:
    dot(a, b)[i,j,k,m] = sum(a[i,j,:] * b[k,:,m])

So your (3,4,5) @ (4,5) would want to collapse the length-5 axis of
`a` with the length-4 axis of `b`; this won't work. If you want
elementwise multiplication according to the broadcasting rules, just
use `a * b`:
>>> a = np.arange(3*4*5).reshape(3,4,5)
... b = np.arange(4*5).reshape(4,5)
... (a * b).shape
(3, 4, 5)
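If what's wanted really is a sum-product over the shared length-5 axis, np.tensordot (or matmul against the transpose) lets you name the contracted axes explicitly; a minimal sketch:

```python
import numpy as np

a = np.arange(3*4*5).reshape(3, 4, 5)
b = np.arange(4*5).reshape(4, 5)

# Contract the last axis of `a` with the last axis of `b` (both length 5):
c = np.tensordot(a, b, axes=([2], [1]))
print(c.shape)  # (3, 4, 4)

# The same contraction via matmul: (3,4,5) @ (5,4) -> (3,4,4)
d = a @ b.T
assert np.array_equal(c, d)
```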
[1]: https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html


Thanks, you are right. I overlooked that it's for addition.

The original problem was that I have matrix X (RGB image, 3 layers), and vector y.

I wanted to do np.dot(X, y.T).

>>> X.shape  # 100 of 28 x 28 matrices
(100, 28, 28)
>>> y.shape  # just one 28 x 28 matrix
(1, 28, 28)

But, np.dot() gives me four axes, shown below:

>>> z = np.dot(X, y.T)
>>> z.shape
(100, 28, 28, 1)

The fourth axis is unexpected. Should y.shape be (28, 28), not (1, 28, 28)?
Thanks again!


You may find np.einsum() more intuitive than np.dot() for aligning axes; it's certainly more explicit.


I agree with Stephan; I can never remember how np.dot works for
multidimensional arrays, and I rarely need its behaviour. Einsum, on
the other hand, is both intuitive to me and more general.
Anyway, yes, if y has a leading singleton dimension then its transpose
will have shape (28,28,1) which leads to that unexpected trailing
singleton dimension. If you look at how the shape changes in each step
(first transpose, then np.dot) you can see that everything's doing
what it should (i.e. what you tell it to do).
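To make the step-by-step shapes concrete (zeros standing in for the real data):

```python
import numpy as np

X = np.zeros((100, 28, 28))
y = np.zeros((1, 28, 28))

# Step 1: .T reverses all axes, so (1, 28, 28) becomes (28, 28, 1).
yT = y.T
print(yT.shape)  # (28, 28, 1)

# Step 2: np.dot keeps every axis of X but the last, and every axis of
# yT but the second-to-last: (100, 28) + (28,) + (1,).
z = np.dot(X, yT)
print(z.shape)  # (100, 28, 28, 1)
```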
With np.einsum you'd have to consider that you want to pair the last
axis of X with the first axis of y.T, i.e. the last axis of y
(assuming the latter has only two axes, so it doesn't have that
leading singleton). This would correspond to the rule 'abc,dc->abd',
or if you want to allow arbitrary leading dimensions on y,
'abc,...c->ab...':
>>> X = np.arange(3*4*5).reshape(3,4,5)
... y1 = np.arange(6*5).reshape(6,5)
... y2 = y1[:,None]  # inject a singleton axis
... print(np.einsum('abc,dc->abd', X, y1).shape)
... print(np.einsum('abc,...c->ab...', X, y2).shape)
(3, 4, 6)
(3, 4, 6, 1)
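For the two-axis case, the rule 'abc,dc->abd' is the same contraction as a plain matmul against the transpose, which may read more naturally:

```python
import numpy as np

X = np.arange(3*4*5).reshape(3, 4, 5)
y1 = np.arange(6*5).reshape(6, 5)

e = np.einsum('abc,dc->abd', X, y1)
m = X @ y1.T  # matmul over the last two axes: (3,4,5) @ (5,6) -> (3,4,6)
print(e.shape)  # (3, 4, 6)
assert np.array_equal(e, m)
```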
András
On Sat, Apr 20, 2019 at 1:06 AM Stephan Hoyer <[hidden email]> wrote:
>
> You may find np.einsum() more intuitive than np.dot() for aligning axes; it's certainly more explicit.


Actually, the second version I wrote is inaccurate, because `y.T` will
permute the remaining axes in the result, but the '...' in einsum
won't do this.
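Concretely, the result shapes alone show the mismatch (zeros standing in for real data; note where the singleton axis ends up):

```python
import numpy as np

X = np.zeros((100, 28, 28))
y = np.zeros((1, 28, 28))

# np.dot against y.T appends y's leftover axes in y.T's (reversed) order:
dot_shape = np.dot(X, y.T).shape
print(dot_shape)  # (100, 28, 28, 1)

# einsum's '...' keeps y's leftover axes in their original order:
ein_shape = np.einsum('abc,...c->ab...', X, y).shape
print(ein_shape)  # (100, 28, 1, 28)
```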
On Sat, Apr 20, 2019 at 1:24 AM Andras Deak <[hidden email]> wrote:
>
> This would correspond to the rule 'abc,dc->abd',
> or if you want to allow arbitrary leading dimensions on y,
> 'abc,...c->ab...'

