[GRASS-dev] BLAS/LAPACK (Part II)

Hello,

I did not hear any response to my question of whether to continue using
BLAS/LAPACK.

This uncertainty has been particularly hard on me: I have been unable to
complete some work while waiting for an answer one way or the other, and I
do not want to implement my own version if it is not needed.

Currently, there is no code in the tree that makes use of either library
other than my own. In fact, others have implemented their own versions.

What I propose is moving the matrix code from v.generalize (in
particular, matrix_inverse() ) to lib/gmath and simplifying the existing
MATRIX structure.

--
73, de Brad KB8UYR/6 <rez touchofmadness com>

Brad Douglas wrote:

I did not hear any response to my question of whether to continue
using BLAS/LAPACK.

This uncertainty has been particularly hard on me, being unable to
complete some work waiting for an answer one way or the other and not
wanting to implement my own version if not needed.

Currently, there is no code in the tree that makes use of either
library other than my own. In fact, others have implemented their own
versions.

If having it there is not hurting anything, I'd say leave it as-is.

It is less work to maintain the configure scripts than it is to stay
current with the latest advancements in the library ourselves; i.e., five
years from now we'd have an unmaintained, stale copy distributed with our
source.

BLAS/LAPACK are in common use elsewhere, so it's not like a user would
have to spend time hunting down and compiling obscure software to use
it.

Take pride in being the first to use it; we've been waiting a while for
someone to. :-)

What I propose is moving the matrix code from v.generalize (in
particular, matrix_inverse() ) to lib/gmath and simplifying the
existing MATRIX structure.

Regardless of whether BLAS/LAPACK stays or goes, consolidation, consistency,
and anything else that makes the code easier to maintain is obviously a
good thing. (But I have no idea about that specific code.)

Hamish

On Fri, 2007-08-17 at 17:12 +1200, Hamish wrote:

Brad Douglas wrote:
>
> I did not hear any response to my question of whether to continue
> using BLAS/LAPACK.
>
> This uncertainty has been particularly hard on me, being unable to
> complete some work waiting for an answer one way or the other and not
> wanting to implement my own version if not needed.
>
> Currently, there is no code in the tree that makes use of either
> library other than my own. In fact, others have implemented their own
> versions.

If having it there is not hurting anything, I'd say leave it as-is.

It is less work to maintain the configure scripts than it is to stay
current with the latest advancements in the library. ie 5 years from
now we'd have an unmaintained stale copy distributed with our source.

? There's nothing to go stale. Or are you making my case for me?

BLAS/LAPACK are in common use elsewhere, so it's not like a user would
have to spend time hunting down and compiling obscure software to use
it.

Take pride in being the first to use it, we've been waiting a while for
someone to. :-)

And then having modules become useless when the libraries aren't
compiled in?

> What I propose is moving the matrix code from v.generalize (in
> particular, matrix_inverse() ) to lib/gmath and simplifying the
> existing MATRIX structure.

regardless of BLAS/LAPACK staying or going, consolidation, consistency,
and anything else that makes the code easier to maintain is obviously a
good thing. (but no idea about that specific code)

There are only a few functions in lib/gmath that make use of
BLAS/LAPACK:

G_matrix_product ()
G_matrix_LU_solve ()
G_vector_norm_euclid ()
G_matrix_inverse () -- calls G_matrix_LU_solve ()

v.generalize solves:
G_matrix_product ()
G_matrix_inverse ()
G_matrix_LU_solve ()

So what's the point of having BLAS/LAPACK?
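
For reference, this is the kind of call such a wrapper boils down to: a full
LU solve reduces to a single LAPACK routine. A minimal sketch of delegating
it to LAPACK's dgesv_() could look like the following; the function name and
error handling are invented for illustration and are not the actual
G_matrix_LU_solve() code.

#include <stdlib.h>

/* Fortran LAPACK routine: solves A*X = B by LU factorization with
 * partial pivoting; A and B are overwritten. */
extern void dgesv_(int *n, int *nrhs, double *a, int *lda,
                   int *ipiv, double *b, int *ldb, int *info);

/* Hypothetical helper: solve a dense n x n system in place.
 * 'a' and 'b' must be stored column-major, as LAPACK expects.
 * Returns 0 on success, -1 on failure (singular matrix or no memory). */
int solve_lu_lapack(int n, double *a, double *b)
{
    int nrhs = 1, lda = n, ldb = n, info = 0;
    int *ipiv = malloc(n * sizeof(int));

    if (!ipiv)
        return -1;

    dgesv_(&n, &nrhs, a, &lda, ipiv, b, &ldb, &info);
    free(ipiv);

    return (info == 0) ? 0 : -1;
}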

--
73, de Brad KB8UYR/6 <rez touchofmadness com>

2007/8/17, Brad Douglas <rez@touchofmadness.com>:

On Fri, 2007-08-17 at 17:12 +1200, Hamish wrote:
> Brad Douglas wrote:
> >
> > I did not hear any response to my question of whether to continue
> > using BLAS/LAPACK.

Well, I was responding ...

> >
> > This uncertainty has been particularly hard on me, being unable to
> > complete some work waiting for an answer one way or the other and not
> > wanting to implement my own version if not needed.
> >
> > Currently, there is no code in the tree that makes use of either
> > library other than my own. In fact, others have implemented their own
> > versions.
>
> If having it there is not hurting anything, I'd say leave it as-is.
>
> It is less work to maintain the configure scripts than it is to stay
> current with the latest advancements in the library. ie 5 years from
> now we'd have an unmaintained stale copy distributed with our source.

? There's nothing to go stale. Or are you making my case for me?

> BLAS/LAPACK are in common use elsewhere, so it's not like a user would
> have to spend time hunting down and compiling obscure software to use
> it.
>
> Take pride in being the first to use it, we've been waiting a while for
> someone to. :-)

And then having modules become useless when the libraries aren't
compiled in?

> > What I propose is moving the matrix code from v.generalize (in
> > particular, matrix_inverse() ) to lib/gmath and simplifying the
> > existing MATRIX structure.
>
> regardless of BLAS/LAPACK staying or going, consolidation, consistency,
> and anything else that makes the code easier to maintain is obviously a
> good thing. (but no idea about that specific code)

There are only a few functions in lib/gmath that make use of
BLAS/LAPACK:

G_matrix_product ()
G_matrix_LU_solve ()
G_vector_norm_euclid ()
G_matrix_inverse () -- calls G_matrix_LU_solve ()

v.generalize solves:
G_matrix_product ()
G_matrix_inverse ()
G_matrix_LU_solve ()

So what's the point of having BLAS/LAPACK?

I think we should keep the LAPACK/BLAS interface in GRASS; especially for
high-performance computing on clusters, the LAPACK interface makes a lot
of sense.

I had a short look at matrix.c and matrix.h in v.generalize. Because the
gpde library functionality is implemented in a similar way, I am able to
port the v.generalize matrix functionality into the gpde library. And I
will implement it multithreaded, like all the linear equation solvers
within the gpde library.

What do you think?

Soeren

--
73, de Brad KB8UYR/6 <rez touchofmadness com>


On 17.08.2007 07:09, Brad Douglas wrote:

What I propose is moving the matrix code from v.generalize

+1

(in particular, matrix_inverse() ) to lib/gmath and simplifying the existing
MATRIX structure.

I think that would be a good idea, especially if you also want to use
that code. It is easier to maintain the code in one place.

Brad, do you know of any additional mathematics or similar things you'd
like to see in lib/gmath? Perhaps next year it could be a Summer of Code
project to add them. ;-)

--Wolf

--

<:3 )---- Wolf Bergenheim ----( 8:>

Hi folks,

-------- Original Message --------

Date: Mon, 20 Aug 2007 12:29:14 +0300
From: Wolf Bergenheim <wolf+grass@bergenheim.net>
To: GRASS Devel <grass5@grass.itc.it>
CC: Daniel Bundala <daniel.bundala@oriel.ox.ac.uk>, Brad Douglas <rez@touchofmadness.com>
Subject: Re: [GRASS-dev] BLAS/LAPACK (Part II)

On 17.08.2007 07:09, Brad Douglas wrote:
>
> What I propose is moving the matrix code from v.generalize

+1

> (in particular, matrix_inverse() ) to lib/gmath and simplifying the existing
> MATRIX structure.
>

I can easily integrate the matrix code from v.generalize into
the gpde library, because the existing matrix structures are quite
similar. Square and sparse matrices are supported.
The gpde library ships with several vector-matrix and vector-vector
functions, but currently as static functions within the Krylov-space
solvers. I can make them public (extern), so they can be accessed from
outside of the Krylov solvers.

Many linear equation solvers are available
within the gpde library:
* direct solvers
** Gauss elimination
** LU decomposition
** Cholesky decomposition
* iterative solvers
** Gauss-Seidel / SOR
** Jacobi
** conjugate gradients (Krylov-space method)
** preconditioned conjugate gradients (Krylov-space method)
** biconjugate gradients stabilized (Krylov-space method)

Everything is multithreaded with OpenMP (the solvers, the matrix and vector
operations, and some array functions).
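
To make the OpenMP point concrete: the usual pattern is to parallelize over
matrix rows. Here is a small sketch of a dense matrix-vector product written
that way; the names are invented for the example and this is not the actual
gpde code.

#include <omp.h>

/* y = A * x for a dense rows x cols matrix stored row by row */
void matvec_omp(const double *A, const double *x, double *y,
                int rows, int cols)
{
    int i, j;

#pragma omp parallel for private(j)
    for (i = 0; i < rows; i++) {
        double sum = 0.0;        /* each thread works on its own rows */

        for (j = 0; j < cols; j++)
            sum += A[i * cols + j] * x[j];

        y[i] = sum;
    }
}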

And as you know, the LU code in the gmath lib is a copy
of the Numerical Recipes algorithm and is not free.

I would like to hear some suggestions.

Best regards
Soeren

I think that would be a good idea, especially if you also want to use
that code. It is easier to maintain the code in one place.

Brad do you know of any additional mathematics or similar things you'd
like to see in lib/gmath? Perhaps next year it could be a Summer of Code
project to add them ;-)

--Wolf

--

<:3 )---- Wolf Bergenheim ----( 8:>



Hi Soeren,

From my user's point of view this sounds excellent. Please go ahead...
You had already suggested it, and there were apparently no
objections.

thanks
Markus

On Mon, Aug 20, 2007 at 05:20:18PM +0200, "Sören Gebbert" wrote:

HI folks,

-------- Original Message --------
> Date: Mon, 20 Aug 2007 12:29:14 +0300
> From: Wolf Bergenheim <wolf+grass@bergenheim.net>
> To: GRASS Devel <grass5@grass.itc.it>
> CC: Daniel Bundala <daniel.bundala@oriel.ox.ac.uk>, Brad Douglas <rez@touchofmadness.com>
> Subject: Re: [GRASS-dev] BLAS/LAPACK (Part II)

> On 17.08.2007 07:09, Brad Douglas wrote:
> >
> > What I propose is moving the matrix code from v.generalize
>
> +1
>
> > (in particular, matrix_inverse() ) to lib/gmath and simplifying the existing
> > MATRIX structure.
> >

I can easily integrate the matrix code from v.generailze into
the gpde library, because the existing matrix structures are quite
similar. Quadratic and sparse matrices are supported.
The gpde library ships several vector-matrix and vector-vector
functions with it, but currently as static functions within the krylov-space solvers. I can make them public (extern),
so they can be accessed from out side of the krylov solvers.

Many linear equation solvers are available
within the gpde library:
* direct solvers
** gauss elimination
** lu decomposition
** cholesky decomposition.
* iterative solvers
** gauss seidel / SOR
** jacobi
** conjugate gradients (krylov space method)
** preconditioned conjugate gradients (krylov space method)
** biconjugate gradients stabilized (krylov space method)

Everything is multithreaded with OpenMP (solver, matrix, vector operations and some array functions).

And as you know, the lu code in gmath lib is a copy
of the numerical recipes algorithm and not free.

I would like to hear some suggestions.

Best regards
Soeren

>
> I think that would be a good idea, especially if you also want to use
> that code. It is easier to maintain the code in one place.
>
> Brad do you know of any additional mathematics or similar things you'd
> like to see in lib/gmath? Perhaps next year it could be a Summer of Code
> project to add them ;-)
>
> --Wolf
>
> --
>
> <:3 )---- Wolf Bergenheim ----( 8:>
>



--
Markus Neteler <neteler itc it> http://mpa.itc.it/markus/
FBK-irst - Centro per la Ricerca Scientifica e Tecnologica
MPBA - Predictive Models for Biol. & Environ. Data Analysis
Via Sommarive, 18 - 38050 Povo (Trento), Italy

Guys,

It is quite interesting, but I have had plans to replace the v.generalize
matrix code with "your" library code. I have not studied the G_matrix_*
code carefully, but it seems to me that it is superior.

Firstly, Soeren wrote that the current code is multithreaded.
Secondly, someone mentioned that it supports sparse matrices.
Support for sparse matrices would increase the efficiency of
v.generalize, since it uses only sparse matrices.
Thirdly, Soeren mentioned that the current code supports many methods
my code doesn't support. To tell the truth, I have never heard of
many of them (well, I am still a (young) student...).

The only thing I am missing in the current code is direct access
to the elements of a matrix. But this is quite dangerous, and I really
doubt whether it is good API design.

On the other hand, it is true that the current code is, let's say, quite
obscure. Also, it is tempting to replace Fortran code with C code.
Therefore, my suggestion is: clean up the library code, and replace the
current code with the v.generalize code only if it is really faster. Some
benchmarks are probably required, but I doubt that my code beats
(optimized) library code.

Daniel

On 8/20/07, Markus Neteler <neteler@itc.it> wrote:

Hi Soeren,

from my users point of view this sounds excellent. please go ahead...
You had already suggested it and there were apparently no
objections.

thanks
Markus

On Mon, Aug 20, 2007 at 05:20:18PM +0200, "Sören Gebbert" wrote:
> HI folks,
>
> -------- Original Message --------
> > Date: Mon, 20 Aug 2007 12:29:14 +0300
> > From: Wolf Bergenheim <wolf+grass@bergenheim.net>
> > To: GRASS Devel <grass5@grass.itc.it>
> > CC: Daniel Bundala <daniel.bundala@oriel.ox.ac.uk>, Brad Douglas <rez@touchofmadness.com>
> > Subject: Re: [GRASS-dev] BLAS/LAPACK (Part II)
>
> > On 17.08.2007 07:09, Brad Douglas wrote:
> > >
> > > What I propose is moving the matrix code from v.generalize
> >
> > +1
> >
> > > (in particular, matrix_inverse() ) to lib/gmath and simplifying the existing
> > > MATRIX structure.
> > >
>
> I can easily integrate the matrix code from v.generailze into
> the gpde library, because the existing matrix structures are quite
> similar. Quadratic and sparse matrices are supported.
> The gpde library ships several vector-matrix and vector-vector
> functions with it, but currently as static functions within the krylov-space solvers. I can make them public (extern),
> so they can be accessed from out side of the krylov solvers.
>
> Many linear equation solvers are available
> within the gpde library:
> * direct solvers
> ** gauss elimination
> ** lu decomposition
> ** cholesky decomposition.
> * iterative solvers
> ** gauss seidel / SOR
> ** jacobi
> ** conjugate gradients (krylov space method)
> ** preconditioned conjugate gradients (krylov space method)
> ** biconjugate gradients stabilized (krylov space method)
>
> Everything is multithreaded with OpenMP (solver, matrix, vector operations and some array functions).
>
> And as you know, the lu code in gmath lib is a copy
> of the numerical recipes algorithm and not free.
>
> I would like to hear some suggestions.
>
> Best regards
> Soeren
>
>
> >
> > I think that would be a good idea, especially if you also want to use
> > that code. It is easier to maintain the code in one place.
> >
> > Brad do you know of any additional mathematics or similar things you'd
> > like to see in lib/gmath? Perhaps next year it could be a Summer of Code
> > project to add them ;-)
> >
> > --Wolf
> >
> > --
> >
> > <:3 )---- Wolf Bergenheim ----( 8:>
> >
>
>

--
Markus Neteler <neteler itc it> http://mpa.itc.it/markus/
FBK-irst - Centro per la Ricerca Scientifica e Tecnologica
MPBA - Predictive Models for Biol. & Environ. Data Analysis
Via Sommarive, 18 - 38050 Povo (Trento), Italy


Hi Daniel,

2007/8/20, Daniel Bundala <bundala@gmail.com>:

Guys,

It is quite interesting, but I have had plans to replace v.generalize
matrix code by "yours" library code. I have not studied G_matrix_*
code carefully, but it seems to me that it is superior.

Unfortunately there are two libraries which handle
the solution of linear equation systems:
the gmath library with the G_math_* and G_vector_* functions,
and the higher-level gpde library with several multithreaded solvers
(N_solver_cg ...).
I have implemented the matrix-vector functionality in the gpde library
again, because I wanted it multithreaded and I had no idea
whether the gmath functions are thread safe and easy to parallelize.

Firstly, Soeren wrote that the current code is multithreaded.

Yes, the code in the gpde library is multithreaded, but you can
link parallelized LAPACK and BLAS libraries to the gmath interface (ScaLAPACK).

Secondly, someone mentioned, that it supports the sparse matrices.

The gpde library supports a simple sparse matrix implementation
and the matching matrix-vector product functions.
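
To illustrate what a "simple sparse matrix implementation" can mean here:
each row stores only its non-zero values and their column indices, and the
matrix-vector product walks exactly those entries. A rough sketch with
invented names (not the actual gpde structures):

/* one sparse row: only the non-zero entries are stored */
typedef struct
{
    int nnz;          /* number of non-zero entries in this row */
    double *values;   /* the non-zero values */
    int *index;       /* column index of each value */
} sparse_row;

/* y = A * x, where A is given as one sparse_row per matrix row */
void sparse_matvec(const sparse_row *A, int nrows,
                   const double *x, double *y)
{
    int i, j;

    for (i = 0; i < nrows; i++) {
        double sum = 0.0;

        for (j = 0; j < A[i].nnz; j++)
            sum += A[i].values[j] * x[A[i].index[j]];

        y[i] = sum;
    }
}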

Support of sparse matrices would increase the efficiency of
v.generalize since it uses only the sparse matrices.
Thirdly, Soeren mentioned that the current code supports many methods
my code doesnt support. To tell the truth, I have never heard about
many of them (Well, I am still (young) student...)

Of what kind are your matrices? There are two very efficient
solvers within the gpde lib:
1.) for sparse, symmetric, positive definite matrices: conjugate gradients,
with and without preconditioning (cg/pcg)
http://en.wikipedia.org/wiki/Conjugate_gradient_method
and
2.) for sparse, non-symmetric, non-definite matrices: stabilized
biconjugate gradients
http://en.wikipedia.org/wiki/Biconjugate_gradient_method

Those two methods are among the most efficient linear equation solvers
available for large matrices.
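
For readers who have not met the method: conjugate gradients only ever
touches the matrix through matrix-vector products, which is why it combines
so well with sparse storage and OpenMP. Below is a compact, unpreconditioned
sketch for a symmetric positive definite system; it is purely illustrative,
the names are invented, and it is not the gpde solver code.

#include <math.h>
#include <stdlib.h>

/* dense y = A*x helper for the sketch (A is n x n, row-major) */
static void dense_matvec(const double *A, const double *x, double *y, int n)
{
    int i, j;

    for (i = 0; i < n; i++) {
        double s = 0.0;

        for (j = 0; j < n; j++)
            s += A[i * n + j] * x[j];
        y[i] = s;
    }
}

/* unpreconditioned CG for symmetric positive definite A;
 * returns the number of iterations performed */
int cg_solve(const double *A, const double *b, double *x,
             int n, int maxit, double tol)
{
    double *r = malloc(n * sizeof(double));
    double *p = malloc(n * sizeof(double));
    double *Ap = malloc(n * sizeof(double));
    double rr = 0.0, rr_new, alpha, beta;
    int i, it;

    dense_matvec(A, x, Ap, n);           /* r = b - A*x, p = r */
    for (i = 0; i < n; i++) {
        r[i] = b[i] - Ap[i];
        p[i] = r[i];
        rr += r[i] * r[i];
    }

    for (it = 0; it < maxit && sqrt(rr) > tol; it++) {
        double pAp = 0.0;

        dense_matvec(A, p, Ap, n);
        for (i = 0; i < n; i++)
            pAp += p[i] * Ap[i];

        alpha = rr / pAp;                /* step length */
        rr_new = 0.0;
        for (i = 0; i < n; i++) {
            x[i] += alpha * p[i];        /* update solution */
            r[i] -= alpha * Ap[i];       /* update residual */
            rr_new += r[i] * r[i];
        }

        beta = rr_new / rr;
        rr = rr_new;
        for (i = 0; i < n; i++)          /* new search direction */
            p[i] = r[i] + beta * p[i];
    }

    free(r);
    free(p);
    free(Ap);
    return it;
}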

The only thing I am missing in the current code is the direct access
to the elements of a matrix. But, this is quite dangerous and I really
doubt whether this is a good API-desing.

The matrix implementation within the gpde library offers direct access
to the matrix entries and supports row shuffling by setting the
row pointers (important for pivoting). So the programmer
has to ensure thread-safe access himself.
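
The row-shuffling trick is worth spelling out, because it is what keeps
pivoting cheap: the matrix holds an array of row pointers, so exchanging two
rows is a pointer swap instead of copying data. A tiny sketch with invented
names (not the actual gpde struct):

typedef struct
{
    int rows, cols;
    double **row;     /* row[i] points to the storage of row i */
} rp_matrix;

/* swap rows i and j in O(1), as needed for partial pivoting */
void rp_swap_rows(rp_matrix *A, int i, int j)
{
    double *tmp = A->row[i];

    A->row[i] = A->row[j];
    A->row[j] = tmp;
}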

On the other hand, it is true that the current code is quite obscure,
say. Also, it is tempting to replace fortran code by C code.
Therefore, my suggestons are: clean library code and replace the
current code by v.generalize code only if it is really faster. Some
benchmarks are probably required, but I doubt that my code beats
(optimized) library code.

The gpde library implementation is IMHO not faster than
the gmath and BLAS/LAPACK stuff. E.g., the gmath LU solver
is 30% faster than the gpde LU solver with pivoting.
But the gmath LU solver is code from Numerical Recipes,
so we have to rewrite this method. And the gpde LU solver runs on
multi-processor machines.
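
As a hedged sketch of what a clean-room replacement for the Numerical
Recipes routine could look like, here is an LU factorization with partial
pivoting that factors A in place and records the row permutation. It is
illustrative only and is neither the gmath nor the gpde code.

#include <math.h>

/* Factor a row-major n x n matrix A in place into L (unit diagonal,
 * stored below the diagonal) and U (on and above the diagonal).
 * perm[i] receives the original index of the row now in position i.
 * Returns 0 on success, -1 if a zero pivot is found. */
int lu_decompose(double *A, int *perm, int n)
{
    int i, j, k;

    for (i = 0; i < n; i++)
        perm[i] = i;

    for (k = 0; k < n; k++) {
        int piv = k;

        /* partial pivoting: largest magnitude entry in column k */
        for (i = k + 1; i < n; i++)
            if (fabs(A[i * n + k]) > fabs(A[piv * n + k]))
                piv = i;

        if (A[piv * n + k] == 0.0)
            return -1;

        if (piv != k) {              /* exchange rows k and piv */
            int t = perm[k];

            perm[k] = perm[piv];
            perm[piv] = t;
            for (j = 0; j < n; j++) {
                double tmp = A[k * n + j];

                A[k * n + j] = A[piv * n + j];
                A[piv * n + j] = tmp;
            }
        }

        /* eliminate below the pivot, keeping the multipliers in place */
        for (i = k + 1; i < n; i++) {
            A[i * n + k] /= A[k * n + k];
            for (j = k + 1; j < n; j++)
                A[i * n + j] -= A[i * n + k] * A[k * n + j];
        }
    }

    return 0;
}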

I will present an implementation of several matrix-vector functions
in a few days. And I'm open to any suggestions about API design. :-)

Best regards
Soeren

Daniel

On 8/20/07, Markus Neteler <neteler@itc.it> wrote:
> Hi Soeren,
>
> from my users point of view this sounds excellent. please go ahead...
> You had already suggested it and there were apparently no
> objections.
>
> thanks
> Markus
>
> On Mon, Aug 20, 2007 at 05:20:18PM +0200, "Sören Gebbert" wrote:
> > HI folks,
> >
> > -------- Original Message --------
> > > Date: Mon, 20 Aug 2007 12:29:14 +0300
> > > From: Wolf Bergenheim <wolf+grass@bergenheim.net>
> > > To: GRASS Devel <grass5@grass.itc.it>
> > > CC: Daniel Bundala <daniel.bundala@oriel.ox.ac.uk>, Brad Douglas <rez@touchofmadness.com>
> > > Subject: Re: [GRASS-dev] BLAS/LAPACK (Part II)
> >
> > > On 17.08.2007 07:09, Brad Douglas wrote:
> > > >
> > > > What I propose is moving the matrix code from v.generalize
> > >
> > > +1
> > >
> > > > (in particular, matrix_inverse() ) to lib/gmath and simplifying the existing
> > > > MATRIX structure.
> > > >
> >
> > I can easily integrate the matrix code from v.generailze into
> > the gpde library, because the existing matrix structures are quite
> > similar. Quadratic and sparse matrices are supported.
> > The gpde library ships several vector-matrix and vector-vector
> > functions with it, but currently as static functions within the krylov-space solvers. I can make them public (extern),
> > so they can be accessed from out side of the krylov solvers.
> >
> > Many linear equation solvers are available
> > within the gpde library:
> > * direct solvers
> > ** gauss elimination
> > ** lu decomposition
> > ** cholesky decomposition.
> > * iterative solvers
> > ** gauss seidel / SOR
> > ** jacobi
> > ** conjugate gradients (krylov space method)
> > ** preconditioned conjugate gradients (krylov space method)
> > ** biconjugate gradients stabilized (krylov space method)
> >
> > Everything is multithreaded with OpenMP (solver, matrix, vector operations and some array functions).
> >
> > And as you know, the lu code in gmath lib is a copy
> > of the numerical recipes algorithm and not free.
> >
> > I would like to hear some suggestions.
> >
> > Best regards
> > Soeren
> >
> >
> > >
> > > I think that would be a good idea, especially if you also want to use
> > > that code. It is easier to maintain the code in one place.
> > >
> > > Brad do you know of any additional mathematics or similar things you'd
> > > like to see in lib/gmath? Perhaps next year it could be a Summer of Code
> > > project to add them ;-)
> > >
> > > --Wolf
> > >
> > > --
> > >
> > > <:3 )---- Wolf Bergenheim ----( 8:>
> > >
> >
> >
>
> --
> Markus Neteler <neteler itc it> http://mpa.itc.it/markus/
> FBK-irst - Centro per la Ricerca Scientifica e Tecnologica
> MPBA - Predictive Models for Biol. & Environ. Data Analysis
> Via Sommarive, 18 - 38050 Povo (Trento), Italy
>
>


On Mon, 2007-08-20 at 22:08 +0200, Daniel Bundala wrote:

Guys,

It is quite interesting, but I have had plans to replace v.generalize
matrix code by "yours" library code. I have not studied G_matrix_*
code carefully, but it seems to me that it is superior.

BLAS/LAPACK are vastly superior.

I have a couple of modules I'm working on that either already use, or are
in the process of being converted to use, the G_matrix_*()/G_vector_*()
functions that call BLAS/LAPACK. I would also like to expand the usage of
BLAS/LAPACK by making additional functions available (I suspect this may
be beneficial to you, too).

Firstly, Soeren wrote that the current code is multithreaded.

Soeren's code does not use BLAS/LAPACK. It probably should.

Secondly, someone mentioned, that it supports the sparse matrices.
Support of sparse matrices would increase the efficiency of
v.generalize since it uses only the sparse matrices.
Thirdly, Soeren mentioned that the current code supports many methods
my code doesnt support. To tell the truth, I have never heard about
many of them (Well, I am still (young) student...)

The only thing I am missing in the current code is the direct access
to the elements of a matrix. But, this is quite dangerous and I really
doubt whether this is a good API-desing.

On the other hand, it is true that the current code is quite obscure,
say. Also, it is tempting to replace fortran code by C code.
Therefore, my suggestons are: clean library code and replace the
current code by v.generalize code only if it is really faster. Some
benchmarks are probably required, but I doubt that my code beats
(optimized) library code.

One way or the other, it doesn't really matter to me. I just don't want
to have modules with dependency requirements that other modules do not have.

BLAS/LAPACK are superior, but there's no sense in having them around if
nobody is going to use them. They just become clutter at that point. IMO,
few will compile them into their build if only a few obscure modules use
them, leaving those with more specific needs at a disadvantage.

--
73, de Brad KB8UYR/6 <rez touchofmadness com>

-------- Original Message --------

Date: Tue, 21 Aug 2007 19:15:48 +0000
From: Brad Douglas <rez@touchofmadness.com>
To: Daniel Bundala <bundala@gmail.com>
CC: Wolf Bergenheim <wolf+grass@bergenheim.net>, "Sören Gebbert" <soerengebbert@gmx.de>, GRASS developers list <grass-dev@grass.itc.it>
Subject: Re: [GRASS-dev] BLAS/LAPACK (Part II)

On Mon, 2007-08-20 at 22:08 +0200, Daniel Bundala wrote:
> Guys,
>
> It is quite interesting, but I have had plans to replace v.generalize
> matrix code by "yours" library code. I have not studied G_matrix_*
> code carefully, but it seems to me that it is superior.

BLAS/LAPACK are vastly superior.

I have a couple modules I'm working on that I've either used or in
process of converting to use G_matrix_*()/G_vector_*() functions that
call BLAS/LAPACK. I would also like to expand the usage of BLAS/LAPACK
by making additional functions available (I suspect this may be
beneficial to you, also).

> Firstly, Soeren wrote that the current code is multithreaded.

Soeren's code does not use BLAS/LAPACK. It probably should.

Well ... :),
a mathematics professor (http://www.math.tu-berlin.de/~schwandt/index_en.html)
told me that some compilers with OpenMP support replace the matrix
and vector operations with highly optimized BLAS/LAPACK functions.

I guess that's what the Intel compiler partly did with my code to get this
nice speedup:
http://www-pool.math.tu-berlin.de/~soeren/grass/modules/screenshots/GRASS_PDE_lib_SGI_bench.png
http://www-pool.math.tu-berlin.de/~soeren/grass/modules/screenshots/sgi_altix_cg_bench.png

I was thinking about this too, but I'm not sure how to implement it.
I don't know if the BLAS/LAPACK wrapper is thread safe, and I don't know if
multithreaded code works correctly together with ScaLAPACK libraries.

There are only a few LAPACK methods available in the gmath library; we need to
extend it. Also, many algorithms within the gmath directory do not make use of
the BLAS/LAPACK stuff, e.g. the LU solver.

Well, I think I can use the G_matrix and G_vector constructs within the gpde library.
I will take a deeper look at the gmath stuff.

Best regards
Soeren

> Secondly, someone mentioned, that it supports the sparse matrices.
> Support of sparse matrices would increase the efficiency of
> v.generalize since it uses only the sparse matrices.
> Thirdly, Soeren mentioned that the current code supports many methods
> my code doesnt support. To tell the truth, I have never heard about
> many of them (Well, I am still (young) student...)
>
> The only thing I am missing in the current code is the direct access
> to the elements of a matrix. But, this is quite dangerous and I really
> doubt whether this is a good API-desing.
>
> On the other hand, it is true that the current code is quite obscure,
> say. Also, it is tempting to replace fortran code by C code.
> Therefore, my suggestons are: clean library code and replace the
> current code by v.generalize code only if it is really faster. Some
> benchmarks are probably required, but I doubt that my code beats
> (optimized) library code.

One way or the other, it doesn't really matter to me. I just don't want
to have modules with dependency requirements that others do not.

BLAS/LAPACK are superior, but there's no since having it around if
nobody is going to use it. It just becomes clutter at that point. IMO,
few will compile it into their build if only a few obscure modules use
it; leaving those with more specific needs at a disadvantage.

--
73, de Brad KB8UYR/6 <rez touchofmadness com>



Hi,

I think that it is a good idea to have one library for matrix/vector
calculations. But it is not a good idea to base it on the v.generalize
matrix code, since the suggested methods are more efficient and
general. I am not an expert in the field of LA solvers, so I really
cannot tell which one is better. But I am pretty sure that
specially designed and optimised solvers are better than my bunch of
LA functions. And also, it would be a great idea, and an increase in
efficiency, to use a specialised LA library for the matrix stuff in
v.generalize.

Daniel

On 8/21/07, "Sören Gebbert" <soerengebbert@gmx.de> wrote:

-------- Original Message --------
> Date: Tue, 21 Aug 2007 19:15:48 +0000
> From: Brad Douglas <rez@touchofmadness.com>
> To: Daniel Bundala <bundala@gmail.com>
> CC: Wolf Bergenheim <wolf+grass@bergenheim.net>, "Sören Gebbert" <soerengebbert@gmx.de>, GRASS developers list <grass-dev@grass.itc.it>
> Subject: Re: [GRASS-dev] BLAS/LAPACK (Part II)

> On Mon, 2007-08-20 at 22:08 +0200, Daniel Bundala wrote:
> > Guys,
> >
> > It is quite interesting, but I have had plans to replace v.generalize
> > matrix code by "yours" library code. I have not studied G_matrix_*
> > code carefully, but it seems to me that it is superior.
>
> BLAS/LAPACK are vastly superior.
>
> I have a couple modules I'm working on that I've either used or in
> process of converting to use G_matrix_*()/G_vector_*() functions that
> call BLAS/LAPACK. I would also like to expand the usage of BLAS/LAPACK
> by making additional functions available (I suspect this may be
> beneficial to you, also).
>
> > Firstly, Soeren wrote that the current code is multithreaded.
>
> Soeren's code does not use BLAS/LAPACK. It probably should.

Well ... :),
a mathematic Professor (http://www.math.tu-berlin.de/~schwandt/index_en.html)
told me, that some compiler with OpenMP support replace the matrix
and vector stuff with high optimized BLASS/LAPACK functions.

I guess thats what the intel compiler partly did with my code to get this
nice speedup:
http://www-pool.math.tu-berlin.de/~soeren/grass/modules/screenshots/GRASS_PDE_lib_SGI_bench.png
http://www-pool.math.tu-berlin.de/~soeren/grass/modules/screenshots/sgi_altix_cg_bench.png

I was thinking about this too, but im not sure how to implement this.
I dont know if the BLAS/LAPACK wrapper is thread safe and i dont know if
multi threaded code works correctly together with scalapack libraries.

There are only a few LAPACK methods available in the gmath library, we need to
extend it. Also many algorithms within the gmath directory do not make use of the
BLAS/LAPACK stuff, eg: the lu solver.

Well i think i can use the G_matrix and G_vector constructs within the gpde library.
I will have a deeper look in the gmath stuff.

Best regards
Soeren

>
> > Secondly, someone mentioned, that it supports the sparse matrices.
> > Support of sparse matrices would increase the efficiency of
> > v.generalize since it uses only the sparse matrices.
> > Thirdly, Soeren mentioned that the current code supports many methods
> > my code doesnt support. To tell the truth, I have never heard about
> > many of them (Well, I am still (young) student...)
> >
> > The only thing I am missing in the current code is the direct access
> > to the elements of a matrix. But, this is quite dangerous and I really
> > doubt whether this is a good API-desing.
> >
> > On the other hand, it is true that the current code is quite obscure,
> > say. Also, it is tempting to replace fortran code by C code.
> > Therefore, my suggestons are: clean library code and replace the
> > current code by v.generalize code only if it is really faster. Some
> > benchmarks are probably required, but I doubt that my code beats
> > (optimized) library code.
>
> One way or the other, it doesn't really matter to me. I just don't want
> to have modules with dependency requirements that others do not.
>
> BLAS/LAPACK are superior, but there's no since having it around if
> nobody is going to use it. It just becomes clutter at that point. IMO,
> few will compile it into their build if only a few obscure modules use
> it; leaving those with more specific needs at a disadvantage.
>
>
> --
> 73, de Brad KB8UYR/6 <rez touchofmadness com>
>
