powerpc yellow dog linux port of numpy


powerpc yellow dog linux port of numpy

Vincent Broman
I reported to this list on August 30,
with some discussion following on September 4 and 5,
about my attempt to build numpy on an ancient powerpc setup.
I'm running Yellow Dog Linux 2.1 with gcc 2.95.3.20010111 on processors from Curtiss-Wright Controls.
Don't tell me to just upgrade; this configuration will be
fighting the good fight for several more years.
I just retried with the latest numpy (svn as of yesterday) and got further than I did before.

umathmodule.c gets many compiler errors from gcc, of two kinds.

The simpler ones looked like

    warning: conflicting types for built-in function `sinl'

repeated for `cosl', `fabsl', and `sqrtl'.
These seem to be caused by npy_longdouble being typedef'ed as double rather than long double,
because the two types have the same size on this platform.
umathmodule.c defines its own sinl, sqrtl, etc. with npy_longdouble arguments and results,
which then conflict with the builtin sinl, sqrtl provided by gcc, which expect long double.
I worked around that by adding "-fno-builtin" to the extra_compile_args in setup.py.
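
To illustrate, here is a minimal standalone file (my own reconstruction, not
the actual numpy source) that shows the same warning on a machine where
double and long double are the same size:

    /* conflict.c: gcc provides sinl() as a builtin taking long double */
    typedef double npy_longdouble;     /* the choice numpy makes here */

    /* local replacement, roughly what umathmodule.c supplies when it
       thinks the C library lacks the long double math functions */
    static npy_longdouble sinl(npy_longdouble x)
    {
        return x;                      /* body is irrelevant to the warning */
    }

    int main(void)
    {
        return (int) sinl(0.0);
    }

Compiling it with "gcc -c conflict.c" should print the warning above, and
adding "-fno-builtin" should silence it.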

The other compiler complaints from the same file were:

    inconsistent operand constraints in an  `asm'

which came from every line that raised a division by zero exception,
the code in each case being "feraiseexcept( FE_DIVBYZERO)" after preprocessing.
That function is defined in fenv.h with a "__THROW" attribute,
but I saw no sign of it being an inline asm or anything.
I don't understand "__THROW".

I'm afraid I would need to find the asm code involved before I could
see which "operand constraints" are "inconsistent".
Any hints where to look?
Any way to make the call go to a nice simple library instead?
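
In case it helps, my guess at the smallest test case for the failing
construct (untested outside the numpy build) is just:

    /* fetest.c */
    #include <fenv.h>

    int main(void)
    {
        feraiseexcept(FE_DIVBYZERO);
        return 0;
    }

compiled with the same options the numpy build uses (presumably -O2 or similar).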

Vincent Broman

Re: powerpc yellow dog linux port of numpy

Robert Kern
On Fri, Apr 18, 2008 at 8:19 PM, Vincent Broman <[hidden email]> wrote:

> I reported back on August 30 to this list,
>  with some discussion following on September 4 and 5,
>  about my attempt to build numpy on an ancient powerpc setup.
>  I'm running yellow dog linux 2.1, gcc 2.95.3.20010111, on processors from Curtiss-Wright Controls.
>  Don't tell me to just upgrade; this configuration will be
>  fighting the good fight for several more years.
>  I just retried with the latest numpy (svn yesterday) and gotten further than I did before.
>
>  umathmodule.c gets many compiler errors from gcc, of two kinds.
>
>  The simpler were like
>
>     warning: conflicting types for built-in function `sinl'
>
>  repeated for `cosl', `fabsl', and `sqrtl'.
>  These seem to be caused by npy_longdouble being typedef'ed as double not long double,
>  due to the latter two types having the same size.
>  umathmodule.c defines its own sinl, sqrtl, etc. with npy_longdouble arguments and results,
>  which then conflict with the builtin sinl, sqrtl provided by gcc that expect long double.

We check for the presence of expl() to determine if all of the rest
are provided and set the HAVE_LONGDOUBLE_FUNCS flag. It is possible
that you don't have expl() but do have these other functions.
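
The probe is essentially a throwaway link test along these lines (a sketch of
the idea, not the exact code that numpy.distutils generates):

    /* if this compiles and links, expl() exists in the C library,
       HAVE_LONGDOUBLE_FUNCS gets defined, and umathmodule.c will not
       define its own sinl/cosl/fabsl/sqrtl replacements */
    char expl ();

    int main (void)
    {
        expl ();
        return 0;
    }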

>  I worked around that by adding the "-fno-builtin" argument to the extra_compiler_args in setup.py.

This is not unreasonable.

>  The other compiler complaints from the same file were:
>
>     inconsistent operand constraints in an  `asm'
>
>  which came from every line that raised a division by zero exception,
>  the code in each case being "feraiseexcept( FE_DIVBYZERO)" after preprocessing.
>  That function is defined in fenv.h with a "__THROW" attribute,
>  but I saw no sign of it being an inline asm or anything.
>  I don't understand "__THROW".
>
>  I'm afraid I would need to find the asm code involved, before I could
>  see what "operand constraints" are "inconsistent".
>  Any hints where to look?
>  Any way to make the call go to a nice simple library instead?

In the file numpy/core/include/numpy/ufuncobject.h, there is a stanza
that looks like this:

  #if defined(__GLIBC__) || defined(__APPLE__) || defined(__MINGW32__) || defined(__FreeBSD__)
  #include <fenv.h>
  #elif defined(__CYGWIN__)
  #include "fenv/fenv.c"
  #endif

I assume that you have __GLIBC__ defined. You would have to track down your
platform's fenv.[ch] files in your libc sources to see what that asm is doing.
Alternatively, you may want to comment out all of that stanza and use our included

  #include "fenv/fenv.c"

Also, edit numpy/core/setup.py to include these files for your
platform in addition to Cygwin.

    # Don't install fenv unless we need them.
    if sys.platform == 'cygwin':
        config.add_data_dir('include/numpy/fenv')
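
That change could look something like this (just a sketch; the 'linux2' test
below is only a placeholder for however you want to identify your machine):

    # Sketch: also ship the bundled fenv sources on this platform.
    if sys.platform in ('cygwin', 'linux2'):
        config.add_data_dir('include/numpy/fenv')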

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco

Re: powerpc yellow dog linux port of numpy

Vincent Broman
I succeeded in working around the other Yellow Dog Linux porting problem,
the one connected with the floating point exception calls.
It turns out that the problematic #include <bits/fenvinline.h> was protected
by an "#ifdef __OPTIMIZE__", so my preprocessing with "gcc -E" never saw its effect.
So, by avoiding optimization of umath, I was able to get a slower but working copy
of numpy compiled, which passed the numpy.test() suite.

The one-line patch that avoids both of my compile problems adds the following
line to numpy/core/setup.py after line 270.

    extra_compile_args = ["-fno-builtin -O0"]

I don't know enough to say whether the quoted args should be split().
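
If it matters, the more usual spelling would presumably be one flag per list
element, since distutils hands each element to the compiler as a separate argument:

    extra_compile_args = ["-fno-builtin", "-O0"]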

I do want to find real fixes, though, not just bandaids.

In numpy/core/include/numpy/ndarrayobject.h is code which causes me one problem.

#if NPY_SIZEOF_LONGDOUBLE == NPY_SIZEOF_DOUBLE
        typedef double npy_longdouble;
        #define NPY_LONGDOUBLE_FMT "g"
#else
        typedef long double npy_longdouble;
        #define NPY_LONGDOUBLE_FMT "Lg"
#endif

I do not see why the size is critical. Couldn't we just use long double
for any compiler that supports long double? If double and long double
have the same format, why do we prefer double?
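
For what it is worth, the typedef and the format macro clearly have to move
together; a toy example of the pairing (my own, not numpy code):

    #include <stdio.h>

    /* stand-in for the choice ndarrayobject.h makes on this platform */
    typedef double npy_longdouble;
    #define NPY_LONGDOUBLE_FMT "g"

    int main(void)
    {
        npy_longdouble x = 1.5;
        /* "g" must go with double and "Lg" with long double,
           or printf reads the argument with the wrong type */
        printf("%" NPY_LONGDOUBLE_FMT "\n", x);
        return 0;
    }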


My other assembler code problem is caused, when optimization is enabled,
by code using the following definition from /usr/include/bits/fenvinline.h,
which I do not understand, never having written ppc assembler.

------------------quote-------------------------------------------------
/* The weird 'i#*X' constraints on the following suppress a gcc
   warning when __excepts is not a constant.  Otherwise, they mean the
   same as just plain 'i'.  */

/* Inline definition for feraiseexcept.  */
# define feraiseexcept(__excepts) \
  ((__builtin_constant_p (__excepts) \
    && ((__excepts) & ((__excepts)-1)) == 0 \
    && (__excepts) != FE_INVALID) \
   ? ((__excepts) != 0 \
      ? (__extension__ ({  __asm__ __volatile__ \
                           ("mtfsb1 %s0" \
                            : : "i#*X"(__builtin_ffs (__excepts))); \
                           0; })) \
      : 0) \
   : (feraiseexcept) (__excepts))
----------------endquote---------------------------------------------------

This definition causes all the occurrences in umathmodule.c of

    feraiseexcept( FE_DIVBYZERO)

to make gcc complain of "inconsistent operand constraints in an  `asm'".
Is there anyone who can parse that hornet's nest of a definition?
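
The only other dodge I can think of, besides -O0, would be to keep the macro
from expanding at all. C only expands a function-like macro when the name is
followed directly by a parenthesis, which is exactly the trick the macro's own
fallback branch uses, so perhaps something like this (untested) would route
the calls to the library routine even with optimization on:

    #include <fenv.h>

    /* wrapping the name in parentheses suppresses the function-like
       macro from bits/fenvinline.h, so this should call the real
       feraiseexcept() in the C library */
    int raise_divbyzero(void)
    {
        return (feraiseexcept)(FE_DIVBYZERO);
    }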

Vincent Broman