MKL and OpenBLAS

MKL and OpenBLAS

Dinesh Vadhia
This topic comes up often with the NumPy developers, but since the requirement for an optimized BLAS is pretty common these days, how about distributing NumPy with OpenBLAS by default?  People who don't want an optimized BLAS, or don't want OpenBLAS, can then edit the site.cfg file to add or remove it.  I can never remember whether NumPy comes with ATLAS by default, but either way, if using MKL is not feasible because of its licensing issues then NumPy has to be recompiled with OpenBLAS (for example).  Why not make it easier for developers to use NumPy with a built-in optimized BLAS?
 
Btw, just in case some folks from Intel are listening: how about releasing MKL binaries for all platforms for developers to do with them what they want, i.e. for free.  You know it makes sense!
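
(For reference, a minimal way to check which BLAS/LAPACK a given NumPy install was actually built against; np.show_config() is a long-standing NumPy function:)

import numpy as np

# Print the BLAS/LAPACK configuration this NumPy build was compiled
# against (e.g. ATLAS, OpenBLAS, MKL, or Accelerate on OS X).
np.show_config()
print(np.__version__)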
 

Re: MKL and OpenBLAS

Pauli Virtanen
On 26.01.2014 14:44, Dinesh Vadhia wrote:
> This conversation gets discussed often with Numpy developers but
> since the requirement for optimized Blas is pretty common these
> days, how about distributing Numpy with OpenBlas by default? People
> who don't want optimized BLAS or OpenBLAS can then edit the
> site.cfg file to add/remove.  I can never remember if Numpy comes
> with Atlas by default but either way, if using MKL is not feasible
> because of its licensing issues then Numpy has to be re-compiled
> with OpenBLAS (for example).  Why not make it easier for developers
> to use Numpy with an in-built optimized Blas.

The NumPy Windows binaries distributed by the numpy project at
sourceforge.net are compiled with ATLAS, which should count as an
optimized BLAS. I don't recall what the situation is with the OSX
binaries, but I believe they're built with ATLAS too.

If you are suggesting bundling OpenBLAS with the NumPy source releases,
the arguments against are:

OpenBLAS is big and still moving rapidly. Moreover, bundling it with
NumPy does not really make it any easier to build.

--
Pauli Virtanen

Re: MKL and OpenBLAS

Stéfan van der Walt
On Sun, 26 Jan 2014 16:40:44 +0200, Pauli Virtanen wrote:
> The Numpy Windows binaries distributed in the numpy project at
> sourceforge.net are compiled with ATLAS, which should count as an
> optimized BLAS. I don't recall what's the situation with OSX binaries,
> but I'd believe they're with Atlas too.

Was a switch made away from Accelerate after this?

http://mail.scipy.org/pipermail/numpy-discussion/2012-August/063589.html

Stéfan
Re: MKL and OpenBLAS

Julian Taylor
On 26.01.2014 18:06, Stéfan van der Walt wrote:

> On Sun, 26 Jan 2014 16:40:44 +0200, Pauli Virtanen wrote:
>> The Numpy Windows binaries distributed in the numpy project at
>> sourceforge.net are compiled with ATLAS, which should count as an
>> optimized BLAS. I don't recall what's the situation with OSX binaries,
>> but I'd believe they're with Atlas too.
>
> Was a switch made away from Accelerate after this?
>
> http://mail.scipy.org/pipermail/numpy-discussion/2012-August/063589.html
>

If this issue disqualifies Accelerate, it also disqualifies OpenBLAS as
a default. OpenBLAS has the same issue; we stuck a big fat warning into
the docs (site.cfg) for this now, as people keep running into it.

OpenBLAS is also a little dodgy concerning stability; in the past it
crashed constantly on pretty standard problems, like dgemm on data > 64
MB. While stability has improved with the latest releases (>= 0.2.9), I
think it's still too early to consider OpenBLAS for a default.

Multithreaded ATLAS, on the other hand, seems to work fine; at least I
have not seen any similar issues with ATLAS in a very long time.
Building an optimized ATLAS is also a breeze on Debian-based systems
(see the README.Debian file), but I admit it is hard on any other platform.
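
(For scale, a minimal sketch of the kind of workload described above; the matrix size below is only an illustrative choice that puts each operand above the ~64 MB mark mentioned:)

import numpy as np

n = 3000                      # 3000*3000 float64 values ~= 72 MB per operand
A = np.random.rand(n, n)
B = np.random.rand(n, n)
C = A.dot(B)                  # dense double-precision product -> BLAS dgemm
print(C.shape, A.nbytes / 1e6, "MB per operand")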
Re: MKL and OpenBLAS

Sturla Molden
Julian Taylor <[hidden email]> wrote:

> if this issue disqualifies accelerate, it also disqualifies openblas as
> a default. openblas has the same issue, we stuck a big fat warning into
> the docs (site.cfg) for this now as people keep running into it.

What? Last time I checked, OpenBLAS (and GotoBLAS2) used OpenMP, not GCD,
on the Mac. Since OpenMP compiles to pthreads, it should not have this
problem (pure POSIX). Accelerate uses GCD, yes, but it's hardly any better
than ATLAS. If OpenBLAS now uses GCD on the Mac, someone in China should
be flogged.

It is sad to hear about stability issues with OpenBLAS; its predecessor
GotoBLAS2 was rock solid.

Sturla

Re: MKL and OpenBLAS

Julian Taylor
On 26.01.2014 22:33, Sturla Molden wrote:

> Julian Taylor <[hidden email]> wrote:
>
>> if this issue disqualifies accelerate, it also disqualifies openblas as
>> a default. openblas has the same issue, we stuck a big fat warning into
>> the docs (site.cfg) for this now as people keep running into it.
>
> What? Last time I checked, OpenBLAS (and GotoBLAS2) used OpenMP, not the
> GCD on Mac. Since OpenMP compiles to pthreads, it should not do this (pure
> POSIX). Accelerate uses the GCD yes, but it's hardly any better than ATLAS.
> If OpenBLAS now uses the GCD on Mac someone in China should be flogged.

The use of GNU OpenMP is probably the problem; forking with GOMP is
only possible in very limited circumstances.
See e.g. https://github.com/xianyi/OpenBLAS/issues/294

Maybe it will work with clang's Intel-based OpenMP, which should be coming
soon.
The current workarounds are single-threaded OpenBLAS, the Python 3.4
forkserver, or using ATLAS.
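
(A minimal sketch of those workarounds, assuming Python 3.4+ and an OpenBLAS-backed NumPy; OPENBLAS_NUM_THREADS is read by OpenBLAS at load time, so it must be set before NumPy is imported:)

import os
# Force single-threaded OpenBLAS so a later fork() does not inherit
# OpenMP/BLAS thread state from the parent process.
os.environ["OPENBLAS_NUM_THREADS"] = "1"

import numpy as np
import multiprocessing as mp

def work(seed):
    rng = np.random.RandomState(seed)
    a = rng.rand(500, 500)
    return float(a.dot(a).sum())

if __name__ == "__main__":
    # Alternatively, the Python 3.4 'forkserver' start method spawns
    # workers from a clean process instead of forking the BLAS-initialized
    # parent.
    mp.set_start_method("forkserver")
    pool = mp.Pool(2)
    print(pool.map(work, range(4)))
    pool.close()
    pool.join()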
Re: MKL and OpenBLAS

Sturla Molden
Julian Taylor <[hidden email]> wrote:
 
> the use of gnu openmp is probably be the problem, forking and gomp is
> only possible in very limited circumstances.
> see e.g. https://github.com/xianyi/OpenBLAS/issues/294
>
> maybe it will work with clangs intel based openmp which should be coming
> soon.
> the current workaround is single threaded openblas, python3.4 forkserver
> or use atlas.


Yes, it seems to be a GNU problem:

http://bisqwit.iki.fi/story/howto/openmp/#OpenmpAndFork

This howto also claims Intel compilers are not affected.

:)

Sturla

Re: MKL and OpenBLAS

Carl Kleffner
In reply to this post by Pauli Virtanen
Did you consider checking the experimental binaries at https://code.google.com/p/mingw-w64-static/ for Python 2.7? These binaries have been built with a customized mingw-w64 toolchain. The builds are fully static (the gcc runtime is linked statically) and are linked against the MSVC90 runtime libraries and OpenBLAS.

Carl



Re: MKL and OpenBLAS

ralfgommers
In reply to this post by Stéfan van der Walt



On Sun, Jan 26, 2014 at 6:06 PM, Stéfan van der Walt <[hidden email]> wrote:
On Sun, 26 Jan 2014 16:40:44 +0200, Pauli Virtanen wrote:
> The Numpy Windows binaries distributed in the numpy project at
> sourceforge.net are compiled with ATLAS, which should count as an
> optimized BLAS. I don't recall what's the situation with OSX binaries,
> but I'd believe they're with Atlas too.

Was a switch made away from Accelerate after this? 

http://mail.scipy.org/pipermail/numpy-discussion/2012-August/063589.html

No, nothing changed. Still using Accelerate for all official binaries.

Ralf
 

Stéfan
Re: MKL and OpenBLAS

Sturla Molden
In reply to this post by Carl Kleffner
On 27/01/14 12:01, Carl Kleffner wrote:
> Did you consider to check the experimental binaries on
> https://code.google.com/p/mingw-w64-static/ for Python-2.7? These
> binaries has been build with with a customized mingw-w64 toolchain.
> These builds are fully statically build and are link against the MSVC90
> runtime libraries (gcc runtime is linked statically) and OpenBLAS.
>
> Carl

Building OpenBLAS and LAPACK is very easy. I used TDM-GCC for Win64.
It's just two makefiles (not even a configure script). OpenBLAS and
LAPACK are probably the easiest libraries to build there are.

The main problem with using OpenBLAS for NumPy and SciPy on Windows is
that Python 2.7 from www.python.org does not ship with libpython27.a for
64-bit Python, so we need to maintain our own. Also, GNU compilers are
required to build OpenBLAS, which means we have to build our own
libgfortran as well; the binary is incompatible with the MSVC runtime we
use. I.e. not impossible, but painful.

http://mail.scipy.org/pipermail/numpy-discussion/2012-August/063740.html


Sturla



Re: MKL and OpenBLAS

Carl Kleffner
I agree, building OpenBLAS with mingw-w64 is a snap. The problem is choosing and adapting a mingw-based gcc toolchain and patching the numpy sources accordingly. For years I was a happy user of the mingw.org-based toolchain. After searching for a 64-bit alternative I stumbled upon mingw-w64 and its derivatives.

I tried out several mingw-w64 based toolchains, e.g. TDM, equation.com and others. All mingw-w64 derivatives have their pros and cons. You may know that you have to choose not only bitness (32 vs. 64 bit) and gcc version, but also exception handling (sjlj, dwarf, seh) and the way threading is supported (win32 vs. posix threads). Not all of these derivatives describe clearly what they use. And the TDM toolchain, for instance, has introduced some API incompatibilities with standard gcc toolchains due to its own patches.

A serious problem is gcc linking against runtimes other than msvcrt.dll. Mingw-w64 HAS import libraries for msvcr80, msvcr90, msvcr100 and msvcr110. However, correct linkage against, say, msvcr90 is more than just adding -lmsvcr90 to the linker command. You have to create a spec file for gcc and adapt it to your needs. It is also very important (especially for msvcr90) to link manifest files into the binaries you create; this has to do with the way Microsoft searches for DLLs. "Akruis" (Anselm Kruis, science + computing AG) did the job of ironing out these problems concerning mingw-w64 and Python. Unfortunately his blog has been gone for some time now.

The maintainers of the mingw-w64 toolchains DO NOT focus on the problem of linking against alternative runtimes. A related problem is that OpenMP and winpthreads use symbols that can be resolved in msvcrt.dll but not in msvcr90.dll, so "_ftime" has to be exchanged for "_ftime64" if you want to use OpenMP or winpthreads.

In the end my solution was to build my own toolchain. This is time-consuming but simple with the help of the set of scripts you can find here: https://github.com/niXman/mingw-builds/tree/develop

With this set of scripts, msys2 (http://sourceforge.net/projects/msys2/) and my own "_ftime" patch, I built a 'static' mingw-w64 toolchain. Let me say a word about static builds: GCC can be built statically, which means that all of the C, C++ and gfortran runtimes are statically linked into every binary. There is not as much bloat as you might expect once the binaries are stripped.

And yes, it is necessary to build an import library for Python. This import lib is specific to the toolchain you are going to use. My idea is to create all the import libs (py2.6 up to py3.4), add them to the toolchain, and not use any of the import libs that might exist in the python/libs/ folder.

My conclusion is: mixing different compiler architectures for building Python extensions on Windows is possible, but it makes it necessary to build a 'vendor' gcc toolchain. Due to my workload I have not found the time to put my latest binaries on the web or make numpy pull requests the GitHub way. Hopefully I'll find some time next weekend.

with best regards

Carl


Re: MKL and OpenBLAS

Sturla Molden
On 30/01/14 12:01, Carl Kleffner wrote:

> My conclusion is: mixing different compiler architectures for building
> Python extensions on Windows is possible but makes it necessary to build
> a 'vendor' gcc toolchain.

Right.

This makes a nice twist on the infamous XML and Regex story:

- There once was a man who had a problem building NumPy. Then he
thought, "I'll just use a custom compiler toolchain." Now he had two
problems.

Setting up a custom GNU toolchain for NumPy on Windows would not be
robust enough, and when there are bugs, we would have two places to look
for them instead of one.

By using a tested and verified compiler toolchain, there is one less
place where things can go wrong. I would rather consider distributing
NumPy binaries linked with MKL, if Intel's license allows it.

Sturla

Re: MKL and OpenBLAS

Carl Kleffner
I fully agree with you. But you have to consider the following:

- the official mingw-w64 toolchains are built almost the same way. The only difference is that they are non-static builds (which would be preferable for C++ development, BTW)
- you won't get the necessary add-ons, like spec files and manifest resource files for msvcr90/100, from there.
- there is an urgent need for a free and portable C, C++ and Fortran compiler for Windows with full BLAS and LAPACK support. You won't get that with numpy-MKL, but you will with a GNU toolchain and OpenBLAS. Not everyone can buy the Intel Fortran compiler or is allowed to install it.
- with such a toolchain you can build 3rd-party extensions that use BLAS/LAPACK directly or via Cython, regardless of whether you use numpy/scipy-MKL or mingw-based numpy/scipy
- the licence question for numpy-MKL is unclear. I know that MKL is linked in statically, but can I redistribute it myself or use it in a commercial context without buying an Intel licence?

Carl


Re: MKL and OpenBLAS

Matthew Brett
Hi,

On Thu, Jan 30, 2014 at 4:29 AM, Carl Kleffner <[hidden email]> wrote:

> I fully agree with you. But you have to consider the following:
>
> - the officially mingw-w64 toolchains are build almost the same way. The
> only difference is, that they have non-static builds (that would be
> preferable for C++ development BTW)
> - you won't get the necessary addons like spec-files, manifest resource
> files for msvcr90,100 from there.
> - there is a urgent need for a free and portable C,C++, Fortran compiler for
> Windows with full blas, lapack support. You won't get that with numpy-MKL,
> but with a GNU toolchain and OpenBLAS. Not everyone can buy the Intel
> Fortran compiler or is allowed to install it.

Thanks for doing this - I'd love to see the toolchain.  If there's
anything I can do to help, please let me know.  The only obvious thing
I can think of is using our buildbots or just the spare machines we
have:

http://nipy.bic.berkeley.edu/

but if you can think of anything else, please let me know.

Cheers,

Matthew
Re: MKL and OpenBLAS

Sturla Molden
In reply to this post by Carl Kleffner
By the way, it seems OpenBLAS builds with clang on Mac OS X, so presumably
it works on Windows as well. Unlike the GNU toolchains, there is a
clang-cl frontend which is supposed to be MSVC compatible. BTW, clang is a
fantastic compiler, but little known among Windows users, where MSVC and
MinGW dominate.

Sturla

Memory leak?

Chris Laumann
Hi all-

The following snippet appears to leak memory badly (about 10 MB per execution):

import numpy as np
from numpy import ndindex
from numpy.random import randint

P = randint(0, 2, (30, 13))

for i in range(50):
    print "\r", i, "/", 50
    for ai in ndindex((2,)*13):
        j = np.sum(P.dot(ai))

If instead you execute (no np.sum call):

P = randint(0,2,(30,13))

for i in range(50):
    print "\r", i, "/", 50
    for ai in ndindex((2,)*13):
        j = P.dot(ai)

There is no leak. 

Any thoughts? I’m stumped.

Best, Chris

-- 
Chris Laumann
Sent with Airmail

Re: MKL and OpenBLAS

Sturla Molden
In reply to this post by Dinesh Vadhia
On 26/01/14 13:44, Dinesh Vadhia wrote:
> This conversation gets discussed often with Numpy developers but since
> the requirement for optimized Blas is pretty common these days, how
> about distributing Numpy with OpenBlas by default?  People who don't
> want optimized BLAS or OpenBLAS can then edit the site.cfg file to
> add/remove.  I can never remember if Numpy comes with Atlas by default
> but either way, if using MKL is not feasible because of its licensing
> issues then Numpy has to be re-compiled with OpenBLAS (for example).
> Why not make it easier for developers to use Numpy with an in-built
> optimized Blas.
> Btw, just in case some folks from Intel are listening:  how about
> releasing MKL binaries for all platforms for developers to do with it
> what they want ie. free.  You know it makes sense!


There is an active discussion on this here:

https://github.com/xianyi/OpenBLAS/issues/294



Sturla


Re: Memory leak?

Julian Taylor
In reply to this post by Chris Laumann
Which version of numpy are you using?
There seems to be a leak in the scalar return path due to the PyObject_Malloc usage in git master, but it doesn't affect 1.8.0.
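
(A hedged way to check whether a given build is affected, using only the standard library: print numpy.__version__ and watch the process's peak RSS grow across iterations of Chris's loop; the resource module is Unix/OS X only.)

import resource
import numpy as np
from numpy.random import randint

print(np.__version__)

P = randint(0, 2, (30, 13))

def peak_rss():
    # ru_maxrss is reported in kilobytes on Linux and bytes on OS X, so
    # only the trend across iterations matters, not the absolute number.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

for i in range(5):
    for ai in np.ndindex((2,) * 13):
        np.sum(P.dot(ai))
    print(i, "peak RSS:", peak_rss())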


Re: Memory leak?

Chris Laumann

Current scipy superpack for OS X, so probably pretty close to master. So it's a known leak? Hmm. Maybe I'll have to work on a different machine for a bit.

Chris

---
Sent from my iPhone using Mail Ninja
Re: Memory leak?

Nathaniel Smith
On Fri, Jan 31, 2014 at 3:14 PM, Chris Laumann <[hidden email]> wrote:
>
> Current scipy superpack for osx so probably pretty close to master.

What does numpy.__version__ say?

-n