Value based promotion and user DTypes

Value based promotion and user DTypes

Sebastian Berg
Hi all,

does anyone have a thought about how user DTypes (i.e. DTypes not
currently part of NumPy) should interact with the "value based
promotion" logic we currently have?
For now I could just do anything, and we would find out later.  But I
will have to do something for now, basically with the hope that it all
turns out all right.

But there are multiple options for both what to offer to user DTypes
and where we want to move (I am using `bfloat16` as a potential DType
here).

1. The "weak" dtype option (this is what JAX does), where:

       np.array([1], dtype=bfloat16) + 4.

   returns a bfloat16, because 4. is "lower" than all floating
   point types.
   In this scheme the user defined `bfloat16` knows that the input
   is a Python float, but it does not know its value (if an
   overflow occurs during conversion, it could warn or error but
   not upcast).  For example `np.array([1], dtype=uint4) + 2**5`
   will try `uint4(2**5)` assuming it works.
   NumPy is different: `2.**300` would ensure the result is a `float64`.

   If a DType does not make use of this, it would get the behaviour
   of option 2.

2. The "default" DType option: np.array([1], dtype=bfloat16) + 4. is
   always the same as `bfloat16 + float64 -> float64`.

3. Use whatever NumPy considers the "smallest appropriate dtype".
   This will not always work correctly for unsigned integers, and for
   floats this would be float16, which doesn't help with bfloat16.

4. Try to expose the actual value. (I do not want to do this, but it
   is probably a plausible extension with most other options, since
   the other options can be the "default".)
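
For reference, a small illustration of the value-based behaviour these
options are weighed against, using only builtin dtypes (this assumes
the current value-based rules; `bfloat16` and `uint4` are hypothetical
and do not appear here):

    import numpy as np

    # The *value* of a Python scalar, not just its kind, decides the result:
    a = np.array([1.0], dtype=np.float32)
    print((a + 4.0).dtype)       # float32 (4.0 fits comfortably in float32)
    print((a + 2.0**300).dtype)  # float64 (2.**300 overflows float32)

    # A *typed* 0-D array is currently treated just like a Python scalar,
    # which is the part questioned below:
    print((a + np.array(4.0, dtype=np.float64)).dtype)  # float32, not float64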


Within these options, there is one more difficulty. NumPy currently
applies the same logic for:

    np.array([1], dtype=bfloat16) + np.array(4., dtype=np.float64)

which in my opinion is wrong (the second array is typed). We do have
the same issue with deciding what to do in the future for NumPy itself.
Right now I feel that new (user) DTypes should live in the future
(whatever that future is).

I have said previously that we could distinguish this for universal
functions.  But calls like `np.asarray(4.)` are common, and they would
lose the information that `4.` was originally a Python float.


So, recently, I was considering that a better option may be to limit
this to math Python operators: +, -, /, **, ...

Those are the places where it may make a difference to write:

    arr + 4.         vs.    arr + bfloat16(4.)
    int8_arr + 1     vs.    int8_arr + np.int8(1)
    arr += 4.      (in-place may be the most significant use-case)

while:

    np.add(int8_arr, 1)    vs.   np.add(int8_arr, np.int8(1))

is maybe less significant. On the other hand, it would add a subtle
difference between operators vs. direct ufunc calls...


In general, it may not matter: We can choose option 1 (which the
bfloat16 does not have to use), and modify it if we ever change the
logic in NumPy itself.  Basically, I will probably pick option 1 for
now and press on, and we can reconsider later.  And hope that it does
not make things even more complicated than they are now.

Or maybe it is better to just limit this completely and always use the
default for user DTypes?


But I would be interested if the "limit to Python operators" is
something we should aim for here.  This does make a small difference,
because user DTypes could "live" in the future if we have an idea of
what that future may look like.

Cheers,

Sebastian


Re: Value based promotion and user DTypes

ralfgommers


On Tue, Jan 26, 2021 at 2:01 AM Sebastian Berg <[hidden email]> wrote:
Hi all,

does anyone have a thought about how user DTypes (i.e. DTypes not
currently part of NumPy) should interact with the "value based
promotion" logic we currently have?
For now I can just do anything, and we will find out later.  And I will
have to do something for now, basically with the hope that it all turns
out all-right.

But there are multiple options for both what to offer to user DTypes
and where we want to move (I am using `bfloat16` as a potential DType
here).

1. The "weak" dtype option (this is what JAX does), where:

       np.array([1], dtype=bfloat16) + 4.

   returns a bfloat16, because 4. is "lower" than all floating
   point types.
   In this scheme the user defined `bfloat16` knows that the input
   is a Python float, but it does not know its value (if an
   overflow occurs during conversion, it could warn or error but
   not upcast).  For example `np.array([1], dtype=uint4) + 2**5`
   will try `uint4(2**5)` assuming it works.
   NumPy is different: `2.**300` would ensure the result is a `float64`.

   If a DType does not make use of this, it would get the behaviour
   of option 2.

2. The "default" DType option: np.array([1], dtype=bfloat16) + 4. is
   always the same as `bfloat16 + float64 -> float64`.

3. Use whatever NumPy considers the "smallest appropriate dtype".
   This will not always work correctly for unsigned integers, and for
   floats this would be float16, which doesn't help with bfloat16.

4. Try to expose the actual value. (I do not want to do this, but it
   is probably a plausible extension with most other options, since
   the other options can be the "default".)


Within these options, there is one more difficulty. NumPy currently
applies the same logic for:

    np.array([1], dtype=bfloat16) + np.array(4., dtype=np.float64)

which in my opinion is wrong (the second array is typed). We do have
the same issue with deciding what to do in the future for NumPy itself.
Right now I feel that new (user) DTypes should live in the future
(whatever that future is).

I agree. And I have a preference for option 1. Option 2 is too greedy in upcasting, value-based casting is problematic in multiple ways (e.g., hard for Numba, because the output dtype cannot be predicted from the input dtypes), and it is hard to see a rationale for option 4 (maybe so the user dtype itself can implement option 3?).
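
(A minimal illustration of that predictability problem, using only
builtin dtypes and the current rules; the input dtypes are identical in
both lines, yet the result dtype depends on the scalar's value:)

    import numpy as np

    arr = np.array([1], dtype=np.int8)
    print((arr + 127).dtype)  # int8:  127 still fits in int8
    print((arr + 128).dtype)  # int16: 128 does not, so the result changes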


I have said previously, that we could distinguish this for universal
functions.  But calls like `np.asarray(4.)` are common, and they would
lose the information that `4.` was originally a Python float.

Hopefully the future will have way fewer asarray calls in it. Rejecting scalar input to functions would be nice. This is what most other array/tensor libraries do.



So, recently, I was considering that a better option may be to limit
this to math Python operators: +, -, /, **, ...

+1

This discussion may be relevant: https://github.com/data-apis/array-api/issues/14.


Those are the places where it may make a difference to write:

    arr + 4.         vs.    arr + bfloat16(4.)
    int8_arr + 1     vs.    int8_arr + np.int8(1)
    arr += 4.      (in-place may be the most significant use-case)

while:

    np.add(int8_arr, 1)    vs.   np.add(int8_arr, np.int8(1))

is maybe less significant. On the other hand, it would add a subtle
difference between operators vs. direct ufunc calls...


In general, it may not matter: We can choose option 1 (which the
bfloat16 does not have to use), and modify it if we ever change the
logic in NumPy itself.  Basically, I will probably pick option 1 for
now and press on, and we can reconsider later.  And hope that it does
not make things even more complicated than it is now.

Or maybe better just limit it completely to always use the default for
user DTypes?

I'm not sure I understand why you like option 1 but want to give user-defined dtypes the choice of opting out of it. Upcasting will rarely make sense for user-defined dtypes anyway.



But I would be interested if the "limit to Python operators" is
something we should aim for here.  This does make a small difference,
because user DTypes could "live" in the future if we have an idea of
how that future may look like.

A future with:
- no array scalars
- 0-D arrays have the same casting rules as >=1-D arrays
- no value-based casting
would be quite nice. For "same kind" casting, something like https://data-apis.github.io/array-api/latest/API_specification/type_promotion.html would be a good model. Mixed-kind casting isn't specified there, because it's too different between libraries. The JAX design (https://jax.readthedocs.io/en/latest/type_promotion.html) seems sensible there.
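
(To make the 0-D bullet concrete, under the current rules:)

    import numpy as np

    x = np.ones(3, dtype=np.float32)
    print(np.result_type(x, np.array(1.0)))    # float32: the 0-D float64 is value-based
    print(np.result_type(x, np.array([1.0])))  # float64: the 1-D float64 is typed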

Cheers,
Ralf



Re: Value based promotion and user DTypes

Sebastian Berg
On Tue, 2021-01-26 at 06:11 +0100, Ralf Gommers wrote:

> On Tue, Jan 26, 2021 at 2:01 AM Sebastian Berg <  
> [hidden email]>
> wrote:
>
> > Hi all,
> >
> > does anyone have a thought about how user DTypes (i.e. DTypes not
> > currently part of NumPy) should interact with the "value based
> > promotion" logic we currently have?
> > For now I can just do anything, and we will find out later.  And I
> > will
> > have to do something for now, basically with the hope that it all
> > turns
> > out all-right.
> >
> > But there are multiple options for both what to offer to user
> > DTypes
> > and where we want to move (I am using `bfloat16` as a potential
> > DType
> > here).
> >
> > 1. The "weak" dtype option (this is what JAX does), where:
> >
> >        np.array([1], dtype=bfloat16) + 4.
> >
> >    returns a bfloat16, because 4. is "lower" than all floating
> >    point types.
> >    In this scheme the user defined `bfloat16` knows that the input
> >    is a Python float, but it does not know its value (if an
> >    overflow occurs during conversion, it could warn or error but
> >    not upcast).  For example `np.array([1], dtype=uint4) + 2**5`
> >    will try `uint4(2**5)` assuming it works.
> >    NumPy is different `2.**300` would ensure the result is a
> > `float64`.
> >
> >    If a DType does not make use of this, it would get the behaviour
> >    of option 2.
> >
> > 2. The "default" DType option: np.array([1], dtype=bfloat16) + 4.
> > is
> >    always the same as `bfloat16 + float64 -> float64`.
> >
> > 3. Use whatever NumPy considers the "smallest appropriate dtype".
> >    This will not always work correctly for unsigned integers, and
> > for
> >    floats this would be float16, which doesn't help with bfloat16.
> >
> > 4. Try to expose the actual value. (I do not want to do this, but
> > it
> >    is probably a plausible extension with most other options, since
> >    the other options can be the "default".)
> >
> >
> > Within these options, there is one more difficulty. NumPy currently
> > applies the same logic for:
> >
> >     np.array([1], dtype=bfloat16) + np.array(4., dtype=np.float64)
> >
> > which in my opinion is wrong (the second array is typed). We do
> > have
> > the same issue with deciding what to do in the future for NumPy
> > itself.
> > Right now I feel that new (user) DTypes should live in the future
> > (whatever that future is).
> >
>
> I agree. And I have a preference for option 1. Option 2 is too greedy
> in
> upcasting, the value-based casting is problematic in multiple ways
> (e.g.,
> hard for Numba because output dtype cannot be predicted from input
> dtypes),
> and option 4 is hard to understand a rationale for (maybe so the user
> dtype
> itself can implement option 3?).
Yes, well, the "rationale" for option 4 is that you expose everything
that NumPy currently needs (assuming we make no changes). That would be
the only way that allows a `bfloat16` to work exactly like a
`float16` as currently defined in NumPy.

To be clear: it horrifies me, but defining a "better" way is much
easier than trying to keep everything as is (at least for now) while also
thinking about what it should look like in the future (and making sure
that user DTypes are ready for that future).

My guess is, we can agree on aiming for Option 1 and trying to limit it
to Python operators.  Unfortunately, only time will tell how feasible
that will actually be.

>
>
> > I have said previously, that we could distinguish this for
> > universal
> > functions.  But calls like `np.asarray(4.)` are common, and they
> > would
> > lose the information that `4.` was originally a Python float.
> >
>
> Hopefully the future will have way fewer asarray calls in it.
> Rejecting
> scalar input to functions would be nice. This is what most other
> array/tensor libraries do.
>
Well, right now NumPy has scalars (both ours and Python's), and I would
expect that changing that may well be more disruptive than changing the
value based promotion (assuming we can add good FutureWarnings).

I would probably need a bit of convincing that forbidding `np.add(array,
2)` is worth the trouble, but luckily that is probably an orthogonal
question.  (The fact that we even accept 0-D arrays as "value based" is
probably the biggest difficulty.)

>
> >
> > So, recently, I was considering that a better option may be to
> > limit
> > this to math Python operators: +, -, /, **, ...
> >
>
> +1
>
> This discussion may be relevant:
> https://github.com/data-apis/array-api/issues/14.
>
I have browsed through it; I guess you were also thinking of limiting
scalars to operators (although possibly even more broadly, rather than
just for promotion purposes).  I am not sure I understand this:

    Non-array ("scalar") operands are not permitted to participate in
    type promotion.

They do participate, both in JAX and in what I wrote here.  They
just participate in an abstract way, i.e. as `Floating` or `Integer`,
but not as a specific float or integer.

>
> > Those are the places where it may make a difference to write:
> >
> >     arr + 4.         vs.    arr + bfloat16(4.)
> >     int8_arr + 1     vs.    int8_arr + np.int8(1)
> >     arr += 4.      (in-place may be the most significant use-case)
> >
> > while:
> >
> >     np.add(int8_arr, 1)    vs.   np.add(int8_arr, np.int8(1))
> >
> > is maybe less significant. On the other hand, it would add a subtle
> > difference between operators vs. direct ufunc calls...
> >
> >
> > In general, it may not matter: We can choose option 1 (which the
> > bfloat16 does not have to use), and modify it if we ever change the
> > logic in NumPy itself.  Basically, I will probably pick option 1
> > for
> > now and press on, and we can reconsider later.  And hope that it
> > does
> > not make things even more complicated than it is now.
> >
> > Or maybe better just limit it completely to always use the default
> > for
> > user DTypes?
> >
>
> I'm not sure I understand why you like option 1 but want to give
> user-defined dtypes the choice of opting out of it. Upcasting will
> rarely
> make sense for user-defined dtypes anyway.
>
I never meant this as an opt-out; the question is what we do if the
user DType does not opt in / define the operation.

Basically, we would promote with `Floating` here (or `PyFloating`,
but there should be no difference; for now I will do PyFloating, but it
should probably be changed later).  I was hinting at providing a default
fallback, so that if:

    UserDType + Floating -> Undefined/Error

we automatically try the "default", e.g.:

    UserDType + Float64 -> Something

That would mean users don't have to worry about `Floating` itself.

But I am not opinionated here; a user DType author should be able to
quickly deal with either issue (that Float64 is undesired, or that the
error is undesired if no "default" exists).  Maybe the error is the more
conservative/constructive choice, though.
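
For concreteness, a toy sketch of that lookup order (the class and the
`common_dtype` name are invented purely for illustration; this is not
the actual DType API):

    # Toy sketch only; not the real API.  "PyFloating" stands for the
    # abstract/weak Python-float DType, "Float64" for the concrete default.
    class ToyDType:
        def __init__(self, name, known=None):
            self.name = name
            self.known = known or {}            # other name -> result name

        def common_dtype(self, other):
            return self.known.get(other.name)   # None means "undefined"

    PyFloating = ToyDType("PyFloating")
    Float64 = ToyDType("Float64")

    def promote_with_py_float(user_dtype, strict=True):
        # 1. Ask the user DType about the abstract (weak) Python float.
        res = user_dtype.common_dtype(PyFloating)
        if res is not None:
            return res
        # 2. Otherwise either error (conservative), or fall back to the
        #    concrete default, i.e. the option 2 behaviour.
        if strict:
            raise TypeError(f"{user_dtype.name} + PyFloating is undefined")
        return user_dtype.common_dtype(Float64)

    # A bfloat16 that opts in keeps its own precision:
    bfloat16 = ToyDType("bfloat16", {"PyFloating": "bfloat16", "Float64": "Float64"})
    print(promote_with_py_float(bfloat16))      # -> bfloat16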

>
> >
> > But I would be interested if the "limit to Python operators" is
> > something we should aim for here.  This does make a small
> > difference,
> > because user DTypes could "live" in the future if we have an idea
> > of
> > how that future may look like.
> >
>
> A future with:
> - no array scalars
> - 0-D arrays have the same casting rules as >=1-D arrays
> - no value-based casting
> would be quite nice. For "same kind" casting like
>
I don't think array-scalars really matter here, since they are typed
and behave identically to 0-D arrays anyway.  We can have long opinion
pieces on whether they should exist :).

>  
> https://data-apis.github.io/array-api/latest/API_specification/type_promotion.html
> .
> Mixed-kind casting isn't specified there, because it's too different
> between libraries. The JAX design (
> https://jax.readthedocs.io/en/latest/type_promotion.html)  seems
> sensible
> there.

The JAX design is the "weak DType" design (when it comes to Python
numbers).  Although, the fact that a "weak" `complex` sorts above
all floats means that `bfloat16_arr + 1j` will go to the default
complex dtype as well.
But yes, I like the "weak" approach; I just think JAX also has some
wrinkles to smooth out.


There is a good deal more to this once you have user DTypes and I add
one more important constraint, namely that:

    from my_extension_module import uint24

must not change any existing code that does not explicitly use
`uint24`.

Then my current approach guarantees:

    np.result_type(uint24, int48, int64) -> Error

if `uint24` and `int48` do not know about each other (`int64` is
obviously the right result here, but it is tricky to be quite certain).

The other tricky example I have was:

  The following becomes problematic (order does not matter):
          uint24 +      int16  +           uint32  -> int64
     <==      (uint24 + int16) + (uint24 + uint32) -> int64
     <==                int32  +           uint32  -> int64

With the addition that `uint24 + int32 -> int48` is defined, the first
could be expected to return `int48`, but actually getting there is
tricky (and my current code will not).

If the promotion result of a user DType with a builtin one can itself
be a builtin one, then "amending" the promotion with things like
`uint24 + int32 -> int48` can lead to slightly surprising promotion
results.  This happens when the result of a promotion with another
"category" (builtin) can be either a larger category or a lower one.
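
A toy sketch of why the pairwise ("binary operator") reduction matters
here; the promotion table and the `uint24`/`int48` names are invented
for illustration:

    import numpy as np
    from functools import reduce

    # Hand-written pairwise rules for the hypothetical user DTypes; anything
    # purely builtin falls back to NumPy's own promote_types.
    USER_TABLE = {
        frozenset({"uint24", "int16"}): "int32",
        frozenset({"uint24", "uint32"}): "uint32",
        frozenset({"uint24", "int32"}): "int48",   # the "amended" rule
    }
    USER_DTYPES = {"uint24", "int48"}

    def promote(a, b):
        key = frozenset({a, b})
        if key in USER_TABLE:
            return USER_TABLE[key]
        if a in USER_DTYPES or b in USER_DTYPES:
            raise TypeError(f"no promotion between {a} and {b}")
        return np.promote_types(a, b).name

    # Pairwise, left to right: uint24 + int16 -> int32, then int32 + uint32 -> int64
    print(reduce(promote, ["uint24", "int16", "uint32"]))   # int64
    # With the amended rule one might expect int48, but a pairwise reduction
    # never gets to apply `uint24 + int32 -> int48` here.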

- Sebastian


>
> Cheers,
> Ralf



Re: Value based promotion and user DTypes

ralfgommers


On Tue, Jan 26, 2021 at 10:21 PM Sebastian Berg <[hidden email]> wrote:
On Tue, 2021-01-26 at 06:11 +0100, Ralf Gommers wrote:
> On Tue, Jan 26, 2021 at 2:01 AM Sebastian Berg < 
> [hidden email]>
> wrote:
>
> > Hi all,
> >
> > does anyone have a thought about how user DTypes (i.e. DTypes not
> > currently part of NumPy) should interact with the "value based
> > promotion" logic we currently have?
> > For now I can just do anything, and we will find out later.  And I
> > will
> > have to do something for now, basically with the hope that it all
> > turns
> > out all-right.
> >
> > But there are multiple options for both what to offer to user
> > DTypes
> > and where we want to move (I am using `bfloat16` as a potential
> > DType
> > here).
> >
> > 1. The "weak" dtype option (this is what JAX does), where:
> >
> >        np.array([1], dtype=bfloat16) + 4.
> >
> >    returns a bfloat16, because 4. is "lower" than all floating
> >    point types.
> >    In this scheme the user defined `bfloat16` knows that the input
> >    is a Python float, but it does not know its value (if an
> >    overflow occurs during conversion, it could warn or error but
> >    not upcast).  For example `np.array([1], dtype=uint4) + 2**5`
> >    will try `uint4(2**5)` assuming it works.
> >    NumPy is different `2.**300` would ensure the result is a
> > `float64`.
> >
> >    If a DType does not make use of this, it would get the behaviour
> >    of option 2.
> >
> > 2. The "default" DType option: np.array([1], dtype=bfloat16) + 4.
> > is
> >    always the same as `bfloat16 + float64 -> float64`.
> >
> > 3. Use whatever NumPy considers the "smallest appropriate dtype".
> >    This will not always work correctly for unsigned integers, and
> > for
> >    floats this would be float16, which doesn't help with bfloat16.
> >
> > 4. Try to expose the actual value. (I do not want to do this, but
> > it
> >    is probably a plausible extension with most other options, since
> >    the other options can be the "default".)
> >
> >
> > Within these options, there is one more difficulty. NumPy currently
> > applies the same logic for:
> >
> >     np.array([1], dtype=bfloat16) + np.array(4., dtype=np.float64)
> >
> > which in my opinion is wrong (the second array is typed). We do
> > have
> > the same issue with deciding what to do in the future for NumPy
> > itself.
> > Right now I feel that new (user) DTypes should live in the future
> > (whatever that future is).
> >
>
> I agree. And I have a preference for option 1. Option 2 is too greedy
> in
> upcasting, the value-based casting is problematic in multiple ways
> (e.g.,
> hard for Numba because output dtype cannot be predicted from input
> dtypes),
> and option 4 is hard to understand a rationale for (maybe so the user
> dtype
> itself can implement option 3?).

Yes, well, the "rationale" for option 4 is that you expose everything
that NumPy currently needs (assuming we make no changes). That would be
the only way that allows a `bfloat16` to work exactly comparable to a
`float16` as currently defined in NumPy.

To be clear: It horrifies me, but defining a "better" way is much
easier than trying to keep everything as is (at least for now) while also
thinking about what it should look like in the future (and making sure
that user DTypes are ready for that future).

My guess is, we can agree on aiming for Option 1 and trying to limit it
to Python operators.  Unfortunately, only time will tell how feasible
that will actually be.

That sounds good.

> > I have said previously, that we could distinguish this for
> > universal
> > functions.  But calls like `np.asarray(4.)` are common, and they
> > would
> > lose the information that `4.` was originally a Python float.
> >
>
> Hopefully the future will have way fewer asarray calls in it.
> Rejecting
> scalar input to functions would be nice. This is what most other
> array/tensor libraries do.
>

Well, right now NumPy has scalars (both ours and Python), and I would
expect that changing that may well be more disruptive than changing the
value based promotion (assuming we can add good FutureWarnings).

I would probably need a bit of convincing that forbidding `np.add(array,
2)` is worth the trouble, but luckily that is probably an orthogonal
question.  (The fact that we even accept 0-D arrays as "value based" is
probably the biggest difficulty.)

It probably isn't worth going through the trouble for, indeed. And yes, the "0-D arrays are special" issue is the more important one.


>
> >
> > So, recently, I was considering that a better option may be to
> > limit
> > this to math Python operators: +, -, /, **, ...
> >
>
> +1
>
> This discussion may be relevant:
> https://github.com/data-apis/array-api/issues/14.
>

I have browsed through it, I guess you also were thinking of limiting
scalars to operators (although possibly even more broadly rather than
just for promotion purposes).

Indeed. `x + 1` must work, that's extremely common. `np.somefunc(x, 1)` is not common, and there's little downside (and lots of upside) in not supporting it if you were designing a new numpy-like library.

  I am not sure I understand this:

    Non-array ("scalar") operands are not permitted to participate in
type promotion.

Since they do participate also in JAX and in what I wrote here. They
just participate in an abstract way. I.e. as `Floating` or `Integer`,
but not like a specific float or integer.

You're right, that sentence could use a tweak. I think the intent was to say that doing this in a multi-step way like
- cast scalar to array with some dtype (e.g. Python float becomes numpy float64)
- then apply the `array <op> array` casting rules to that resulting dtype
should not be done.
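
(A concrete illustration of why that two-step route would be
surprising, using builtin float32 since `bfloat16` is hypothetical:)

    import numpy as np

    a = np.array([1.0], dtype=np.float32)

    # Two-step route: Python float -> float64 array, then array-with-array rules:
    print((a + np.array([4.0], dtype=np.float64)).dtype)  # float64

    # What NumPy actually does today, and what the "weak" rule would also give:
    print((a + 4.0).dtype)                                # float32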


> > Those are the places where it may make a difference to write:
> >
> >     arr + 4.         vs.    arr + bfloat16(4.)
> >     int8_arr + 1     vs.    int8_arr + np.int8(1)
> >     arr += 4.      (in-place may be the most significant use-case)
> >
> > while:
> >
> >     np.add(int8_arr, 1)    vs.   np.add(int8_arr, np.int8(1))
> >
> > is maybe less significant. On the other hand, it would add a subtle
> > difference between operators vs. direct ufunc calls...
> >
> >
> > In general, it may not matter: We can choose option 1 (which the
> > bfloat16 does not have to use), and modify it if we ever change the
> > logic in NumPy itself.  Basically, I will probably pick option 1
> > for
> > now and press on, and we can reconsider later.  And hope that it
> > does
> > not make things even more complicated than it is now.
> >
> > Or maybe better just limit it completely to always use the default
> > for
> > user DTypes?
> >
>
> I'm not sure I understand why you like option 1 but want to give
> user-defined dtypes the choice of opting out of it. Upcasting will
> rarely
> make sense for user-defined dtypes anyway.
>

I never meant this as an opt-out, the question is what you do if the
user DType does not opt-in/define the operation.

Basically, we would promote with `Floating` here (or `PyFloating`,
but there should be no difference; for now I will do PyFloating, but it
should probably be changed later). I was hinting at providing a default
fallback, so that if:

    UserDtype + Floating -> Undefined/Error

we automatically try the "default", e.g.:

    UserDType + Float64 -> Something

That would mean users don't have to worry about `Floating` itself.

But I am not opinionated here, a user DType author should be able to
quickly deal with either issue (that Float64 is undesired or that the
Error is undesired if no "default" exists).  Maybe the error is more
conservative/constructive though.

I'd start with the error, and reconsider only if there's a practical problem with it. Going from error to fallback later is much easier than the other way around.

 
> > But I would be interested if the "limit to Python operators" is
> > something we should aim for here.  This does make a small
> > difference,
> > because user DTypes could "live" in the future if we have an idea
> > of
> > how that future may look like.
> >
>
> A future with:
> - no array scalars
> - 0-D arrays have the same casting rules as >=1-D arrays
> - no value-based casting
> would be quite nice. For "same kind" casting like
>

I don't think array-scalars really matter here, since they are typed
and behave identical to 0-D arrays anyway.  We can have long opinion
pieces on whether they should exist :).

Let's not do that :) My summary would be: Travis regrets adding them, all other numpy-like libraries I know of decided not to have them, and that all worked out fine. I don't want to think about touching them in NumPy now.

  
> https://data-apis.github.io/array-api/latest/API_specification/type_promotion.html
> .
> Mixed-kind casting isn't specified there, because it's too different
> between libraries. The JAX design (
> https://jax.readthedocs.io/en/latest/type_promotion.html)  seems
> sensible
> there.

The JAX design is the "weak DType" design (when it comes to Python
numbers). Although, the fact that a "weak" `complex` is sorted above
all floats, means that `bfloat16_arr + 1j` will go to the default
complex dtype as well.
But yes, I like the "weak" approach, just think also JAX has some
wrinkles to smoothen.


There is a good deal more to this if you get user DTypes and I add one
more important constraint that:

    from my_extension_module import uint24

must not change any existing code that does not explicitly use
`uint24`.

Then my current approach guarantees:

    np.result_type(uint24, int48, int64) -> Error

If `uint24` and `int48` do not know each other (`int64` is obviously
right here, but it is tricky to be quite certain).

That makes sense. I'd expect that to be extremely rare anyway. User-defined dtypes need to interact with Python types and NumPy dtypes; anything unknown should indeed just error.


The other tricky example I have was:

  The following becomes problematic (order does not matter):
          uint24 +      int16  +           uint32  -> int64
     <==      (uint24 + int16) + (uint24 + uint32) -> int64
     <==                int32  +           uint32  -> int64

With the addition that `uint24 + int32 -> int48` is defined the first
could be expected to return `int48`, but actually getting there is
tricky (and my current code will not).

If promotion result of a user DType with a builtin one, can be a
builtin one, then "amending" the promotion with things like `uint24 +
int32 -> int48` can lead to slightly surprising promotion results.
This happens if the result of a promotion with another "category"
(builtin) can be both a larger category or a lower one.

I'm not sure I follow this. If uint24 and int48 both come from the same third-party package, is there still a problem here?

Cheers,
Ralf



Re: Value based promotion and user DTypes

Hameer Abbasi
In reply to this post by Sebastian Berg
Hi, Sebastian, all

Please find my answers inlined below.

--
Sent from Canary

On Dienstag, Jan. 26, 2021 at 2:01 AM, Sebastian Berg <[hidden email]> wrote:
Hi all,

does anyone have a thought about how user DTypes (i.e. DTypes not
currently part of NumPy) should interact with the "value based
promotion" logic we currently have?
For now I can just do anything, and we will find out later. And I will
have to do something for now, basically with the hope that it all turns
out all-right.

But there are multiple options for both what to offer to user DTypes
and where we want to move (I am using `bfloat16` as a potential DType
here).

1. The "weak" dtype option (this is what JAX does), where:

np.array([1], dtype=bfloat16) + 4.

returns a bfloat16, because 4. is "lower" than all floating
point types.
In this scheme the user defined `bfloat16` knows that the input
is a Python float, but it does not know its value (if an
overflow occurs during conversion, it could warn or error but
not upcast). For example `np.array([1], dtype=uint4) + 2**5`
will try `uint4(2**5)` assuming it works.
NumPy is different `2.**300` would ensure the result is a `float64`.


This is what I would strongly consider the "correct" behaviour, and would strongly prefer. All new features/user dtypes should NOT have value-based casting (or undesirable things like this in general).
If we need to have a shim for internal dtypes to preserve old behaviour, so be it, but we should aim to deprecate that at some point in the future and get rid of the shim. It should be clearly marked in the
code as a hack.


If a DType does not make use of this, it would get the behaviour
of option 2.

2. The "default" DType option: np.array([1], dtype=bfloat16) + 4. is
always the same as `bfloat16 + float64 -> float64`.

3. Use whatever NumPy considers the "smallest appropriate dtype".
This will not always work correctly for unsigned integers, and for
floats this would be float16, which doesn't help with bfloat16.

4. Try to expose the actual value. (I do not want to do this, but it
is probably a plausible extension with most other options, since
the other options can be the "default".)


Within these options, there is one more difficulty. NumPy currently
applies the same logic for:

    np.array([1], dtype=bfloat16) + np.array(4., dtype=np.float64)

which in my opinion is wrong (the second array is typed). We do have
the same issue with deciding what to do in the future for NumPy itself.
Right now I feel that new (user) DTypes should live in the future
(whatever that future is).

I have said previously, that we could distinguish this for universal
functions. But calls like `np.asarray(4.)` are common, and they would
lose the information that `4.` was originally a Python float.


So, recently, I was considering that a better option may be to limit
this to math Python operators: +, -, /, **, … 


I would strongly oppose any divergence between ufuncs and operators. I'm not a fan of special cases, and I hate divergence between behaviour that should be equivalent in the user's eyes. This would equate to adding more special cases, and would only cause confusion for users and library authors alike, unless they dig up a NEP or a mailing list discussion and go through all of it. Plus, it would be basically impossible to get rid of.

Unless what you mean is limiting value-based casting to Python operators and taking the logic out of ufuncs themselves… In which case, my answer is "yes, but…":

We should aim to get rid of "implicit" casting in most cases. My ideal world would be one where `ufunc(NumPy array/scalar, Python scalar)` raises a TypeError. All asarray calls need a dtype, but that's going to take a LONG time, if it's ever going to happen. So I'm willing to settle for the above.


Those are the places where it may make a difference to write:

    arr + 4.         vs.    arr + bfloat16(4.)
    int8_arr + 1     vs.    int8_arr + np.int8(1)
    arr += 4.      (in-place may be the most significant use-case)

while:

    np.add(int8_arr, 1)    vs.   np.add(int8_arr, np.int8(1))

is maybe less significant. On the other hand, it would add a subtle
difference between operators vs. direct ufunc calls...


In general, it may not matter: We can choose option 1 (which the
bfloat16 does not have to use), and modify it if we ever change the
logic in NumPy itself. Basically, I will probably pick option 1 for
now and press on, and we can reconsider later. And hope that it does
not make things even more complicated than it is now. 


This seems like the right thing to do. If things break, users need to be a bit more careful about choosing types. Which is nice. It eliminates whole classes of bugs.


Or maybe better just limit it completely to always use the default for
user DTypes?


But I would be interested if the "limit to Python operators" is
something we should aim for here. This does make a small difference,
because user DTypes could "live" in the future if we have an idea of
how that future may look like.

Cheers,

Sebastian

Best regards,
Hameer Abbasi


Re: Value based promotion and user DTypes

Sebastian Berg
In reply to this post by ralfgommers
On Wed, 2021-01-27 at 10:33 +0100, Ralf Gommers wrote:
> On Tue, Jan 26, 2021 at 10:21 PM Sebastian Berg <
> [hidden email]>
> wrote:
>
<snip>

Thanks for all the other comments; they are helpful. I am considering
writing a (hopefully short) NEP to define the direction of thinking
here (and clarify what user DTypes can expect).  I don't like doing
that, but the issue turns out to have a lot of traps and confusing
points.  (Our current logic alone is confusing enough...)

>
> > The other tricky example I have was:
> >
> >   The following becomes problematic (order does not matter):
> >           uint24 +      int16  +           uint32  -> int64
> >      <==      (uint24 + int16) + (uint24 + uint32) -> int64
> >      <==                int32  +           uint32  -> int64
> >
> > With the addition that `uint24 + int32 -> int48` is defined the
> > first
> > could be expected to return `int48`, but actually getting there is
> > tricky (and my current code will not).
> >
> > If promotion result of a user DType with a builtin one, can be a
> > builtin one, then "ammending" the promotion with things like
> > `uint24 +
> > int32 -> int48` can lead to slightly surprising promotion results.
> > This happens if the result of a promotion with another "category"
> > (builtin) can be both a larger category or a lower one.
> >
>
> I'm not sure I follow this. If uint24 and int48 both come from the
> same
> third-party package, there is still a problem here?
>
Yes, at least unless you ask `uint24` to take over all of the work
(i.e. pass in all DTypes at once).
So with a binary operator design it is "problematic" (in the sense that
you have to live with the above result). Of course a binary-operator
base probably does not preclude a more complex design.
I like a binary operator (it seems much easier to reason about and is a
common design pattern).  But it would be plausible to have an n-ary
design where you pass all dtypes to each and ask them to handle it
(similar to `__array_ufunc__`).
We could even have both (the binary version for most things, but the
ability to hook into the n-ary "reduction").

Cheers,

Sebastian


> Cheers,
> Ralf



Re: Value based promotion and user DTypes

ralfgommers


On Wed, Jan 27, 2021 at 5:44 PM Sebastian Berg <[hidden email]> wrote:
On Wed, 2021-01-27 at 10:33 +0100, Ralf Gommers wrote:
> On Tue, Jan 26, 2021 at 10:21 PM Sebastian Berg <
> [hidden email]>
> wrote:
>
<snip>

Thanks for all the other comments, they are helpful. I am considering
writing a (hopefully short) NEP, to define the direction of thinking
here (and clarify what user DTypes can expect).  I don't like doing
that, but the issue turns out to have a lot of traps and confusing
points. (Our current logic alone is confusing enough...)

Sounds good, thanks.


>
> > The other tricky example I have was:
> >
> >   The following becomes problematic (order does not matter):
> >           uint24 +      int16  +           uint32  -> int64
> >      <==      (uint24 + int16) + (uint24 + uint32) -> int64
> >      <==                int32  +           uint32  -> int64
> >
> > With the addition that `uint24 + int32 -> int48` is defined the
> > first
> > could be expected to return `int48`, but actually getting there is
> > tricky (and my current code will not).
> >
> > If promotion result of a user DType with a builtin one, can be a
> > builtin one, then "ammending" the promotion with things like
> > `uint24 +
> > int32 -> int48` can lead to slightly surprising promotion results.
> > This happens if the result of a promotion with another "category"
> > (builtin) can be both a larger category or a lower one.
> >
>
> I'm not sure I follow this. If uint24 and int48 both come from the
> same
> third-party package, there is still a problem here?
>

Yes, at least unless you ask `uint24` to take over all of the work
(i.e. pass in all DTypes at once).
So with a binary operator design it is "problematic" (in the sense that
you have to live with the above result). Of course a binary operator
base does probably not preclude a more complex design.
I like a binary operator (it seems much easier to reason about and is a
common design pattern).  But it would be plausible to have an n-ary
design where you pass all dtypes to each and ask them to handle it
(similar to `__array_ufunc__`).
We could even have both (the binary version for most things, but the
ability to hook into the n-ary "reduction").

I'd say just document it and recommend that if >1 custom dtypes are used, then the user should (if they really care about the issue you bring up) determine the output dtype they want via some use of `result_type` and then explicitly cast.

Cheers,
Ralf



Re: Value based promotion and user DTypes

Sebastian Berg
On Wed, 2021-01-27 at 18:16 +0100, Ralf Gommers wrote:

> On Wed, Jan 27, 2021 at 5:44 PM Sebastian Berg <
> [hidden email]>
> wrote:
>
> > On Wed, 2021-01-27 at 10:33 +0100, Ralf Gommers wrote:
> > > On Tue, Jan 26, 2021 at 10:21 PM Sebastian Berg <
> > > [hidden email]>
> > > wrote:
> > >
> > <snip>
> >
> > Thanks for all the other comments, they are helpful. I am
> > considering
> > writing a (hopefully short) NEP, to define the direction of
> > thinking
> > here (and clarify what user DTypes can expect).  I don't like doing
> > that, but the issue turns out to have a lot of traps and confusing
> > points. (Our current logic alone is confusing enough...)
> >
>
> Sounds good, thanks.
>
>
> > >
> > > > The other tricky example I have was:
> > > >
> > > >   The following becomes problematic (order does not matter):
> > > >           uint24 +      int16  +           uint32  -> int64
> > > >      <==      (uint24 + int16) + (uint24 + uint32) -> int64
> > > >      <==                int32  +           uint32  -> int64
> > > >
> > > > With the addition that `uint24 + int32 -> int48` is defined the
> > > > first
> > > > could be expected to return `int48`, but actually getting there
> > > > is
> > > > tricky (and my current code will not).
> > > >
> > > > If promotion result of a user DType with a builtin one, can be
> > > > a
> > > > builtin one, then "ammending" the promotion with things like
> > > > `uint24 +
> > > > int32 -> int48` can lead to slightly surprising promotion
> > > > results.
> > > > This happens if the result of a promotion with another
> > > > "category"
> > > > (builtin) can be both a larger category or a lower one.
> > > >
> > >
> > > I'm not sure I follow this. If uint24 and int48 both come from
> > > the
> > > same
> > > third-party package, there is still a problem here?
> > >
> >
> > Yes, at least unless you ask `uint24` to take over all of the work
> > (i.e. pass in all DTypes at once).
> > So with a binary operator design it is "problematic" (in the sense
> > that
> > you have to live with the above result). Of course a binary
> > operator
> > base does probably not preclude a more complex design.
> > I like a binary operator (it seems much easier to reason about and
> > is a
> > common design pattern).  But it would be plausible to have an n-ary
> > design where you pass all dtypes to each and ask them to handle it
> > (similar to `__array_ufunc__`).
> > We could even have both (the binary version for most things, but
> > the
> > ability to hook into the n-ary "reduction").
> >
>
> I'd say just document it and recommend that if >1 custom dtypes are
> used,
> then the user should (if they really care about the issue you bring
> up)
> determine the output dtype you want via some use of result_type and
> then
> explicitly cast.
>
Right, this is a problem that keeps giving...  Maybe this is more a
point about how tricky units are, but similar things will also apply to
other "families" of dtypes.
If you have units (which can be based on any other NumPy numerical
type), you can break my scheme for working around the associativity
issue in the same way:

    Unit[int16] + uint16 + float16

has no clear hierarchy between them (Unit is the highest, but `float16`
dictates the precision).

So, probably we just shouldn't care too much about this (for now), but
if we want the above to return `Unit[float16]`, we would need additional
logic (beyond a binary operation) to do this reasonably...

I agree that these are all "insignificant" issues in many ways, since
most users will never even notice the subtleties.  So in some ways my
leaning towards binary-op only is that it feels at least small enough
in complexity that it hopefully doesn't make solutions for the above
much more complicated.

Cheers,

Sebastian



> Cheers,
> Ralf

