[GRASS-dev] Object-based image classification in GRASS

Hello,

Based on the great work on i.segment by Eric and MarkusM, I've been trying to put together a complete workflow for object-based image classification in GRASS. Conclusion: it is possible with currently available tools, even though some components would be nice to have in addition. Attached you can find a simple shell script which shows all the steps I went through. I commented it extensively, so it hopefully is easy to understand.

Some remarks:

- This only works in GRASS 7.

- It uses the v.class.mlpy addon module for classification, so that needs to be installed. Kudos to Vaclav for that module ! It currently only uses the DLDA classifier. The mlpy library offers many more, and I think it should be quite easy to add them. Obviously, one could also simply export the attribute table of the segments and of the training areas to csv files and use R to do the classification.

- At the top of the script are a series of parameters that have to be defined before being able to use the script as such (but the script is more meant as a proof-of-concept than as a real script)

- Many other variables could be calculated for the segments: other texture variables (possibly variables by segment, not as average of pixel-based variables, cf [1]), other shape variables (cf the new work of MarkusM on center lines and skeletons of polygons in v.voronoi), band indices, etc. It would be interesting to hear what most people find useful.

- I do the step of digitizing training areas in the wxGUI digitizer using the attribute editing tool and filling in the 'class' attribute for those polygons I find representative. As already mentioned in previous discussions [2], I do think that it would be nice if we could have an attribute editing form that is independent of the vector digitizer.

More generally, it would be great to get feedback from interested people on this approach to object-based image classification to see what we can do to make it better.

Moritz

[1] https://trac.osgeo.org/grass/ticket/2111
[2] http://lists.osgeo.org/pipermail/grass-dev/2013-February/062148.html

(attachments)

grass_ObjectBasedImageClassification.zip (2.58 KB)

Moritz,

Thanks heaps for the script. It really is useful and will facilitate
the adoption of i.segment. It would certainly be a nice addition to
the wiki page.

Unfortunately I can't comment too much on this, as my object-based
classification projects are on hold, but I'll try to give that a shot
sometime soon.

It could also be interesting to try a non-supervised approach using
i.segment to limit the "salt and pepper" noise affecting such
classifications.

Cheers,

Pierre


--
Scientist
Landcare Research, New Zealand

Hi Moritz,

I'm writing some modules (in Python) to do basically the same thing.

I'm trying to apply an object-based classification to quite a big area (the region has more than 14 billion cells).

At the moment I'm working with a smaller area with "only" ~1 billion cells, but it is still quite challenging.

To speed up the segmentation process I wrote the i.segment.hierarchical module [0], which splits the region into several tiles, computes the segments for each tile, patches all the tiles together, and runs i.segment one last time using the patched map as a seed.

For a region of 24k rows by 48k cols it required less than two hours to run and patch all the tiles, and more than 5 hours to run the "final" i.segment over the patched map (using only 3 iterations!).
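The tiling bookkeeping behind this approach can be sketched in plain Python (a sketch only; the function name and signature are mine, not the actual i.segment.hierarchical code — it just shows how a region is covered with fixed-size tiles, optionally grown by an overlap so segments can be matched across tile borders):

```python
def tile_bounds(rows, cols, tile_rows, tile_cols, overlap=0):
    """Split a rows x cols region into tile bounding boxes
    (half-open (r0, r1, c0, c1) intervals), each optionally grown
    by `overlap` cells on every side, clamped to the region."""
    tiles = []
    for r0 in range(0, rows, tile_rows):
        for c0 in range(0, cols, tile_cols):
            r1 = min(r0 + tile_rows, rows)
            c1 = min(c0 + tile_cols, cols)
            tiles.append((max(r0 - overlap, 0), min(r1 + overlap, rows),
                          max(c0 - overlap, 0), min(c1 + overlap, cols)))
    return tiles

# e.g. the 24k x 48k region mentioned above, in 2000x2000 tiles:
tiles = tile_bounds(24000, 48000, 2000, 2000)
```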

From my experience I can say that v.to.db is terribly slow if you want to apply it to a vector map with more than 2.7 million areas. So I've developed a Python function that computes the same values but is much faster than the v.to.db module, and it should be possible to split the operation into several processes for a further speed-up… (It is still under testing.)
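As a rough sketch of the kind of per-area values involved (the real function works on GRASS vector geometries; this simplified stand-alone version, with names of my own choosing, computes area via the shoelace formula and perimeter for a single ring of coordinates):

```python
import math

def area_perimeter(ring):
    """Area (shoelace formula) and perimeter of a single closed ring
    given as (x, y) vertex tuples. The last vertex may or may not
    repeat the first; both cases are handled."""
    if ring[0] != ring[-1]:
        ring = list(ring) + [ring[0]]  # close the ring
    area = 0.0
    perim = 0.0
    for (x0, y0), (x1, y1) in zip(ring, ring[1:]):
        area += x0 * y1 - x1 * y0
        perim += math.hypot(x1 - x0, y1 - y0)
    return abs(area) / 2.0, perim
```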

On Wednesday 30 Oct 2013 21:04:22 Moritz Lennert wrote:

> - It uses the v.class.mlpy addon module for classification, so that
> needs to be installed. Kudos to Vaclav for that module ! It currently
> only uses the DLDA classifier. The mlpy library offers many more, and I
> think it should be quite easy to add them. Obviously, one could also
> simply export the attribute table of the segments and of the training
> areas to csv files and use R to do the classification.

I've extended it to use tree/k-NN/SVM machine learning from MLPY [1] (I've also tried Parzen, but the results were not good enough) and to work with the scikit [2] classifiers as well.

Scikit seems to have a larger community, should be easier to install than MLPY, and, last but not least, seems generally faster [3].
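What makes the classifiers easy to swap is that both libraries expose them through the same fit/predict shape. A toy nearest-centroid stand-in in plain Python (purely illustrative — this is neither mlpy nor scikit code, just the interface they share):

```python
import math
from collections import defaultdict

class NearestCentroid:
    """Minimal nearest-centroid classifier mimicking the fit/predict
    interface of the mlpy and scikit estimators discussed above."""

    def fit(self, X, y):
        sums = defaultdict(lambda: None)
        counts = defaultdict(int)
        for row, label in zip(X, y):
            if sums[label] is None:
                sums[label] = [0.0] * len(row)
            for i, v in enumerate(row):
                sums[label][i] += v
            counts[label] += 1
        # per-class mean feature vector
        self.centroids_ = {lab: [s / counts[lab] for s in vec]
                           for lab, vec in sums.items()}
        return self

    def predict(self, X):
        return [min(self.centroids_,
                    key=lambda lab: math.dist(row, self.centroids_[lab]))
                for row in X]
```

Any classifier with this shape — DLDA, k-NN, SVM, trees — can be dropped into the same slot.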

> - Many other variables could be calculated for the segments: other
> texture variables (possibly variables by segment, not as average of
> pixel-based variables, cf [1]), other shape variables (cf the new work
> of MarkusM on center lines and skeletons of polygons in v.voronoi), band
> indices, etc. It would be interesting to hear what most people find useful.

I'm also working on adding a C function to the GRASS library to compute the barycentre and the polar second moment of area (or moment of inertia), which returns a number that is independent of orientation and dimension.

> - I do the step of digitizing training areas in the wxGUI digitizer
> using the attribute editing tool and filling in the 'class' attribute
> for those polygons I find representative. As already mentioned in
> previous discussions [2], I do think that it would be nice if we could
> have an attribute editing form that is independent of the vector digitizer.

I use g.gui.iclass to generate the training vector map, then use this map to select the training areas and export the final results to a file (at the moment only csv and npy formats are supported).

> More generally, it would be great to get feedback from interested people
> on this approach to object-based image classification to see what we can
> do to make it better.

I'm definitely interested in the topic! :)

Some days ago I discussed with MarkusM that maybe I could do a GSoC project next year to modify the i.segment module to automatically split the domain into tiles, run as a multiprocess, and then "patch" only the segments that are on the border of the tiles; this solution should be much faster than my current one [0]. Moreover, we should consider skipping the conversion of the segments into vector, and instead extract the shape and other parameters (mean, median, skewness, std, etc.) directly as a last step, before freeing the memory of the segment structures, writing a csv/npy file.
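The "patch only the border segments" idea rests on a simple observation: only labels that touch a tile edge can continue into a neighbouring tile, so only those need re-merging. A minimal sketch (names mine, not actual i.segment code) of finding those labels in a labelled tile:

```python
def border_labels(tile):
    """Return the set of segment labels touching the boundary of a
    labelled tile (a list of rows); interior-only segments can be
    kept as-is when tiles are patched together."""
    labels = set()
    rows, cols = len(tile), len(tile[0])
    for c in range(cols):               # top and bottom rows
        labels.add(tile[0][c])
        labels.add(tile[rows - 1][c])
    for r in range(rows):               # left and right columns
        labels.add(tile[r][0])
        labels.add(tile[r][cols - 1])
    return labels
```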

All the best.

Pietro

[0] https://github.com/zarch/i.segment.hierarchical

[1] http://mlpy.sourceforge.net/

[2] http://scikit-learn.org/

[3] http://scikit-learn.org/ml-benchmarks/

Hi Pietro,

On 31/10/13 00:34, Pietro Zambelli wrote:

Hi Moritz,

I'm writing some modules (in python) to basically do the same thing.

Great ! Then I won't continue on that and rather wait for your stuff. Do you have code, yet (except for i.segment.hierarchical) ? Don't hesitate to publish early.

I think once the individual elements are there, it should be quite easy to cook up a little binding module which would allow to choose segmentation parameters, the variables to use for polygon characterization, the classification algorithm, etc and then launch the whole process.

I'm trying to apply a Object-based classification for a quite big area
(the region is more than 14 billions of cells).

At the moment I'm working with a smaller area with "only" ~1 billions of
cells, but it is still quite challenging.

14 billion _is_ quite ambitious ;)

I guess we should focus on getting the functionality first, and then think about optimisation for size...

To speed-up the segmentation process I did the i.segment.hierarchical
module [0]. that split the region in several tiles, compute the segment
for each tile, patch all the tiles together and run a last time i
segment using the patched map as a seed.

Any reason other than preference for git over svn for not putting your module into grass-addons ?

for a region of 24k row for 48k cols it required less than two hour to
run and patch all the tiles, and more than 5 hours to run the "final"
i.segment over the patched map (using only 3 iterations!).

That's still only 7 hours for segmentation of a billion-cell size image. Not bad compared to other solutions out there...

From my experience I can say that the use "v.to.db" is terribly slow if
you want to apply to a vector map with more than 2.7 Millions of areas.
So I've develop a python function that compute the same values, but it
is much faster that the v.to.db module, and should be possible to split
the operation in several processes for further speed up... (It is still
under testing).

Does your Python module load the values into an attribute table ? I would guess that that's the slow part in v.to.db. Generally, I think that's another field where optimization would be great (if possible): database interaction, notably writing to tables. IIUC, in v.to.db there is a separate update operation for each feature. I imagine that there must be a faster way to do this...
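For comparison, the usual way around per-feature updates in plain SQL is to batch all updates into a single prepared statement inside one transaction. A sketch using Python's sqlite3 (the GRASS db drivers work differently; the table and column names are invented — this only illustrates the batching idea):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE segments (cat INTEGER PRIMARY KEY, area REAL)")
cur.executemany("INSERT INTO segments (cat, area) VALUES (?, NULL)",
                [(i,) for i in range(1, 1001)])

# One transaction and one prepared statement for all rows, instead of
# a separate UPDATE (and commit) per feature:
values = [(float(cat) * 2.5, cat) for cat in range(1, 1001)]
with conn:  # single transaction
    conn.executemany("UPDATE segments SET area = ? WHERE cat = ?", values)
```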

On Wednesday 30 Oct 2013 21:04:22 Moritz Lennert wrote:

> - It uses the v.class.mlpy addon module for classification, so that

> needs to be installed. Kudos to Vaclav for that module ! It currently

> only uses the DLDA classifier. The mlpy library offers many more, and I

> think it should be quite easy to add them. Obviously, one could also

> simply export the attribute table of the segments and of the training

> areas to csv files and use R to do the classification.

I'm extended to use tree/k-NN/SVM Machine learning from MLPY [1] (I've
used also Parzen, but the results were not good enough) and to work also
with the scikit [2] classifiers.

You extended v.class.mlpy ? Is that code available somewhere ?

Scikit it seems to have a larger community and should be easier to
install than MLPY, and last but not least it seems generally faster [3].

I don't have any preferences on that. Colleagues here use R machine learning tools.

> - Many other variables could be calculated for the segments: other

> texture variables (possibly variables by segment, not as average of

> pixel-based variables, cf [1]), other shape variables (cf the new work

> of MarkusM on center lines and skeletons of polygons in v.voronoi), band

> indices, etc. It would be interesting to hear what most people find
useful.

I'm working to add also a C function to the GRASS library to compute the
barycentre and the a polar second moment of Area (or Moment of Inertia),
that return a number that it is independent from the orientation and
dimension.

Great ! I guess the more the merrier ;)
See also [1]. Maybe it's just a small additional step to add that at the same time ?

> - I do the step of digitizing training areas in the wxGUI digitizer

> using the attribute editing tool and filling in the 'class' attribute

> for those polygons I find representative. As already mentioned in

> previous discussions [2], I do think that it would be nice if we could

> have an attribute editing form that is independent of the vector
digitizer.

I use the i.gui.class to generate the training vector map, and then use
this map to select the training areas, and export the final results into
a file (at the moment only csv and npy formats are supported).

How do you do that ? Do you generate training points (or small areas) and then select the areas these points fall into ?

I thought it best to select training areas among the actual polygons coming out of i.segment.

Some days ago I've discussed with MarkusM, that may be I could do a GSoC
next year to modify the i.segment module to automatically split the
domain in tiles, run as a multiprocess, and then "patch" only the
segments that are on the border of the tiles, this solution should be
much faster than my actual solution[0].

Great idea !

Moreover we should consider to
skip to transform the segments into vector to extract the shape
parameters and extract shape and others parameters (mean, median,
skewness, std, etc.) directly as last step before to free the memory
from the segments structures, writing a csv/npy file.

I guess it is not absolutely necessary to go via vector. You could always leave the option to vectorize the segments, import the parameter file into a table and then link that table to the vector.

Moritz

[1] https://trac.osgeo.org/grass/ticket/2122

On 30/10/13 21:23, Pierre Roudier wrote:

Moritz,

Thanks heaps for the script. It's really is useful and will facilitate
the adoption of i.segment. It certainly would be a nice addition to
the wiki page.

I can put it there as a proof-of-concept, but apparently Pietro is already much further on this, so that will probably be the way to go.

It could also be interesting to try non-supervised approach using
i.segment to limit the "salt and pepper" noise affecting such
classifications.

AFAIU, both scikit and mlpy offer unsupervised learning and classification techniques, so that should be possible.

Moritz

On Thursday 31 Oct 2013 10:09:20 Moritz Lennert wrote:

> Great ! Then I won't continue on that and rather wait for your stuff. Do
> you have code, yet (except for i.segment.hierarchical) ? Don't hesitate
> to publish early.

I did some stuff here: https://github.com/zarch/ml.class

But I'm working on a major refactoring to integrate my work with "v.class.mlpy". It is still under development.

> I guess we should focus on getting the functionality, first and then
> think about optimisation for size…

I agree, but I'm a PhD student and I need the results now! :)

> > To speed-up the segmentation process I did the i.segment.hierarchical
> > module [0]. that split the region in several tiles, compute the segment
> > for each tile, patch all the tiles together and run a last time i
> > segment using the patched map as a seed.
>
> Any reason other than preference for git over svn for not putting your
> module into grass-addons ?

No, I was worried about adding too much stuff to grass-addons; moreover, it is still under development, so maybe it is not ready for a production environment…

But I think that now I can move i.segment.hierarchical to grass-addons.

> > for a region of 24k row for 48k cols it required less than two hour to
> > run and patch all the tiles, and more than 5 hours to run the "final"
> > i.segment over the patched map (using only 3 iterations!).
>
> That's still only 7 hours for segmentation of a billion-cell size image.
> Not bad compared to other solutions out there…

I have never used other solutions, so I'm not able to compare the results, but I think we have some chance to speed up the process with some parallelization. I've started to study the i.segment code, but I need time.

> > From my experience I can say that the use "v.to.db" is terribly slow if
> > you want to apply to a vector map with more than 2.7 Millions of areas.
> > So I've develop a python function that compute the same values, but it
> > is much faster that the v.to.db module, and should be possible to split
> > the operation in several processes for further speed up… (It is still
> > under testing).
>
> Does your python module load the values into an attribute table ? I
> would guess that that's the slow part in v.to.db. Generally, I think
> that's another field where optimization would be great (if possible):
> database interaction, notably writing to tables. IIUC, in v.to.db there
> is a separate update operation for each feature. I imagine that there
> must be a faster way to do this…

Yes, this is the main problem: GRASS is quite bad/slow at writing to the db. I've skipped the GRASS API and use the Python interface directly, which is faster.

Moreover, v.to.db creates only one column at a time, and if you are using the sqlite driver that means that each time you have to create a new table and copy all the data.

This module is not ready yet either… it is just a Python function.

> > I'm extended to use tree/k-NN/SVM Machine learning from MLPY [1] (I've
> > used also Parzen, but the results were not good enough) and to work also
> > with the scikit [2] classifiers.
>
> You extended v.class.mlpy ? Is that code available somewhere ?

No, I wrote ml.class, and now I'm rewriting it to integrate the two together.

> > I'm working to add also a C function to the GRASS library to compute the
> > barycentre and the a polar second moment of Area (or Moment of Inertia),
> > that return a number that it is independent from the orientation and
> > dimension.
>
> Great ! I guess the more the merrier ;)
> See also [1]. Maybe it's just a small additional step to add that at the
> same time ?

I would love to have this too! :)

> > I use the i.gui.class to generate the training vector map, and then use
> > this map to select the training areas, and export the final results into
> > a file (at the moment only csv and npy formats are supported).
>
> How do you do that ? Do you generate training points (or small areas)
> and then select the areas these points fall into ?
>
> I thought it best to select training areas among the actual polygons
> coming out of i.segment.

Yes, I think so. I've generated some training areas using g.gui.iclass, then I've extracted all the segments that overlap these areas and assigned them the category of the training vector map. I'm working on it (so no code ready yet!)

So I can write here as soon as I have something to test… :)

Best regards

Pietro

On 31 October 2013 00:34, Pietro Zambelli <peter.zamb@gmail.com> wrote:

Hi Moritz,

Hi Pietro

[0] https://github.com/zarch/i.segment.hierarchical

Could I suggest that you use the grass-addons repository? ;)
Thanks

--
ciao
Luca

http://gis.cri.fmach.it/delucchi/
www.lucadelu.org

On Thu, Oct 31, 2013 at 3:03 PM, Luca Delucchi <lucadeluge@gmail.com> wrote:

[0] https://github.com/zarch/i.segment.hierarchical

Could I suggest that you use the grass-addons repository? ;)

Ok, moved (i.segment.hierarchical) to grass-addons (r58137).

Pietro

Dear all,

Some news about the machine learning classification of image segments.

The process described below has been used to classify some RGB images
for two different regions with more than 1 billion pixels and more
than 2.7 million segments. Working with such challenging figures
required optimizing/rewriting part of the pygrass library
[r58622-r58628 and r58634/r58635] and adapting/adding new GRASS
modules; the sequence of modules used/developed is briefly reported below:

    1. i.segment.hierarchical [r58137] => extract the segments
        from the raster group splitting the domain in tiles
        (in grass-addons);

    2. r.to.vect => convert the segments to a vector map;

    3. v.category => transfer the categories of the geometry
        features to the new layer; the module was not working
        for areas but is now fixed [r58202].

    4. v.stats [r58637] => extract statistics from a vector map
       (statistics about shape and about raster maps).
       v.stats internally uses (in grass-addons):
        - v.area.stats [r58636] => extract some statistics about
          the shape (in grass-addons);
        - v.to.rast => re-convert the vector to a raster map using the
          vector categories, to be sure that there is a correspondence
          between vector and raster categories (zones);
        - r.univar2 [r58439] => extract some general statistics from
          a raster using the zones (consumes much less memory than
          r.univar, and computes more general statistics such as
          skewness, kurtosis, and mode; in grass-addons).

    5. v.class.ml [r58638] => classify a vector map; at the moment
        only a supervised classification is tested/supported.
        To select the segments to be used for training the different
        machine-learning techniques, you can define a training
        map using g.gui.iclass.
        The v.class.ml module can:
        - extract the training set;
        - balance and scale the training set;
        - optimize the training set;
        - test several machine learning techniques;
        - explore the SVC domain;
        - export the accuracy of the different ML techniques to a csv file;
        - find and export the optimum training set;
        - classify the vector map using several ML techniques and
          export the results of the classification to a new layer
          of the vector map;
        - export the classification results to several raster maps
          (the vector map coming from a segmented raster map is too
          big to be exported to a shape file; the limit for a shape
          file is 4Gb [0]).
        The module accepts as input a Python file with a list of custom
        classifiers defined by the user, and supports both the
        scikit-learn [1] and mlpy [2] libraries.

Known Issues:
* not all the classifiers are working (but I hope to be able to fix this
during the next weeks);
* so far, only a supervised classification is supported.

Best regards

Pietro

[0] http://www.gdal.org/ogr/drv_shapefile.html
[1] http://scikit-learn.org/
[2] http://mlpy.sourceforge.net/

Dear Pietro,

On 07/01/14 18:33, Pietro Zambelli wrote:

Dear all,

Some news about the machine learning classification of image segments.

Thanks for the great work !!!

Just a few questions/comments:

     3. v.stats [r58637] => Extract statistics from a vector map
        (statistics about shape and about raster maps).
        v.stats internally use (in grass-addons):
         - v.area.stats [r58636] => extract some statistics about
           the shape (in grass-addons);

Looking at the code of v.area.stats, I don't understand what it does differently than v.to.db, except that it outputs all indicators in one go. I think it would be better to avoid module inflation and maybe either make v.area.stats into a script that calls v.to.db several times to collect the different variables, or modify v.to.db to allow upload/output of several variables at once (see [1]).

         - v.to.rast => re-convert the vector to a raster map using the
           vector categories to be sure that there is a correspondence
           between vector and raster categories (zones).
         - r.univar2 [r58439] => extract some general statistics from
           raster using the zones (consume much less memory than
           r.univar, and compute more general statistics like:
           skewness, kurtosis, and mode (in grass-addons);

What is the difference between your r.univar2 and the original r.univar ? Couldn't your modifications be merged directly into r.univar ?

     4. v.class.ml [r58638] => classify a vector map, at the moment
         only a supervisionate classification is tested/supported.
         To select the segment that must use for training the different
         machine-learning techniques you can define a training
         map using the g.gui.iclass.
         The v.class.ml module can:
         - extract the training,
         - balance and scale the training set;
         - optimize the training set;
         - test several machine learning techniques;
         - explore the SVC domain;
         - export the accuracy of different ML to a csv file;
         - find and export the optimum training set,
         - classify the vector map using several ML techniques and
           export to a new layer of the vector map with the results
           of the classification;
         - export the classification results to several raster maps,
           the vector map coming from a segment raster map is too
           big to be exported to a shape file (the limit for a shape file
           is 4Gb [0]).

Wow, this looks great ! I'll test this as soon as possible.

         The module accept as input a python file with a list of custom
         classifiers defined by the user, and support both:
         scikit-learn[1] and mlpy[2] libraries.

Known Issues:
* not all the classifiers are working (but I hope to be able to fix this
during the next weeks);
* so far, only a supervised classification is supported.

What would be needed to make unsupervised classification work ?

Moritz

[1] https://trac.osgeo.org/grass/ticket/2123

Dear Moritz,

On Thu, Jan 9, 2014 at 10:13 AM, Moritz Lennert
<mlennert@club.worldonline.be> wrote:

     3. v.stats [r58637] => Extract statistics from a vector map
        (statistics about shape and about raster maps).
        v.stats internally use (in grass-addons):
         - v.area.stats [r58636] => extract some statistics about
           the shape (in grass-addons);

Looking at the code of v.area.stats, I don't understand what it does
differently than v.to.db, except that it outputs all indicators in one go. I
think it would be better to avoid module inflation and maybe either make
v.area.stats into a script that calls v.to.db several times to collect the
different variables, or modify v.to.db to allow upload/output of several
variables at once (see [1]).

Yes, v.area.stats is just a subset of v.to.db (it works only with
areas); it computes all the parameters and exports them to a csv in
one step, which is much faster than running v.to.db several times.
I agree that it would be better to avoid making a new module... but
the easier and faster solution to my problem was to rewrite it. I
can remove v.area.stats from grass-addons.

         - r.univar2 [r58439] => extract some general statistics from
           raster using the zones (consume much less memory than
           r.univar, and compute more general statistics like:
           skewness, kurtosis, and mode (in grass-addons);

What is the difference between your r.univar2 and the original r.univar ?
Couldn't your modifications be merged directly into r.univar ?

Yes, I think so; it should be possible to merge r.univar2 => r.univar,
but at the moment r.univar2 works only with a map of zones and the
only output is tabular (no "g" and "e" flags)... Moreover, I did not
just add some extra statistical parameters; I've changed the main
logic to reduce the memory footprint, so I prefer to push the change
to grass-addons, in order to avoid breaking the original r.univar.
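The low-memory logic presumably boils down to streaming the raster once and keeping only a small accumulator per zone instead of all cell values. A sketch of that idea using Welford's online algorithm in plain Python (not the r.univar2 C code; names are mine):

```python
def zonal_stats(zones, values):
    """One pass over paired (zone, value) cells, keeping only a
    (count, mean, M2) accumulator per zone -- Welford's online
    mean/variance -- so memory is O(number of zones), not O(cells)."""
    acc = {}  # zone -> (n, mean, M2)
    for z, v in zip(zones, values):
        n, mean, m2 = acc.get(z, (0, 0.0, 0.0))
        n += 1
        d = v - mean
        mean += d / n
        m2 += d * (v - mean)
        acc[z] = (n, mean, m2)
    return {z: {"n": n, "mean": mean,
                "variance": m2 / n if n else 0.0}
            for z, (n, mean, m2) in acc.items()}
```

Higher moments (skewness, kurtosis) can be streamed the same way with two more accumulators per zone.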

     4. v.class.ml [r58638] => classify a vector map, at the moment
         only a supervisionate classification is tested/supported.
         To select the segment that must use for training the different
         machine-learning techniques you can define a training
         map using the g.gui.iclass.
         The v.class.ml module can:
         - extract the training,
         - balance and scale the training set;
         - optimize the training set;
         - test several machine learning techniques;
         - explore the SVC domain;
         - export the accuracy of different ML to a csv file;
         - find and export the optimum training set,
         - classify the vector map using several ML techniques and
           export to a new layer of the vector map with the results
           of the classification;
         - export the classification results to several raster maps,
           the vector map coming from a segment raster map is too
           big to be exported to a shape file (the limit for a shape file
           is 4Gb [0]).

Wow, this looks great ! I'll test this as soon as possible.

The main logic is to use the flags to prepare/test each step; the
command produces several npy files, so you should be able to load the
npy files and play directly with the classifiers, if you like.

* so far, only a supervised classification is supported.

What would be needed to make unsupervised classification work ?

I guess that you only need to make a list of unsupervised classifiers
and add an optional parameter with the number of classes that you want
to extract from your data set.
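As a sketch of how little the unsupervised path needs beyond that class count, here is a tiny k-means in plain Python (illustrative only; in practice one would plug in the mlpy/scikit implementations):

```python
import math
import random

def kmeans(X, k, iters=20, seed=0):
    """Tiny k-means: the only extra input versus the supervised path
    is k, the number of classes to extract from the feature vectors."""
    rng = random.Random(seed)
    centers = rng.sample(X, k)          # init from k distinct rows
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for row in X:                   # assign each row to nearest center
            j = min(range(k), key=lambda i: math.dist(row, centers[i]))
            groups[j].append(row)
        for i, g in enumerate(groups):  # move centers to group means
            if g:
                centers[i] = [sum(col) / len(g) for col in zip(*g)]
    return [min(range(k), key=lambda i: math.dist(row, centers[i]))
            for row in X]
```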

All the best.

Pietro

Hi Moritz,

2013-10-30 Moritz Lennert <mlennert@club.worldonline.be>:

though some components would be nice to have in addition. Attached you can
find a simple shell script which shows all the steps I went through. I
commented it extensively, so it hopefully is easy to understand.

I just wanted to thank you for the script and to the author(s) of
i.segment. Based on your script I was able in one day to prepare a new
lesson for my students [1] (in Czech only) ...

Martin

[1] http://geo.fsv.cvut.cz/gwiki/153ZODH_/_15._cvičení

On 28/01/14 14:47, Martin Landa wrote:

Hi Moritz,

2013-10-30 Moritz Lennert <mlennert@club.worldonline.be>:

though some components would be nice to have in addition. Attached you can
find a simple shell script which shows all the steps I went through. I
commented it extensively, so it hopefully is easy to understand.

I just wanted to thank you for the script and to the author(s) of
i.segment. Based on your script I was able in one day to prepare a new
lesson for my students [1] (in Czech only) ...

The script was written as an example for my students. Glad it was useful to you.

With all the elements in place, especially with Pietro's recent work, it should be quite easy to create a unifying module 'i.segment.classify' which would take as input

- the segments coming out of i.segment
- training zones
- a choice of variables
- a choice of classifier

would then calculate the chosen variables, submit the results to the classifier and then update the segment map attribute table with the classification result. In other words, a frontend combining v.to.db, v.rast.stats, v.class.ml and possibly some others.
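The glue logic of such a frontend can be sketched in a few lines of Python (hypothetical names throughout; the real module would call the GRASS modules listed above rather than the toy 1-NN used here):

```python
def classify_segments(segment_stats, training_classes, fit, predict):
    """Frontend glue: segment_stats maps segment cat -> feature vector
    (the chosen variables), training_classes maps a subset of cats to
    known classes, and fit/predict wrap any classifier. Returns a
    cat -> predicted class mapping, ready to upload to the table."""
    train_cats = sorted(training_classes)
    model = fit([segment_stats[c] for c in train_cats],
                [training_classes[c] for c in train_cats])
    cats = sorted(segment_stats)
    labels = predict(model, [segment_stats[c] for c in cats])
    return dict(zip(cats, labels))

# Toy 1-NN stand-in for a real mlpy/scikit classifier:
def knn_fit(X, y):
    return list(zip(X, y))

def knn_predict(model, rows):
    return [min(model, key=lambda t: sum((a - b) ** 2
                for a, b in zip(t[0], r)))[1] for r in rows]
```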

Moritz

Very useful resource Martin.

FWIW, the translation of the page is very usable for non-Czech speakers:

http://translate.google.com/translate?sl=cs&tl=en&js=n&prev=_t&hl=en&ie=UTF-8&u=http%3A%2F%2Fgeo.fsv.cvut.cz%2Fgwiki%2F153ZODH_%2F_15._cvi%25C4%258Den%25C3%25AD&act=url

Pierre

_______________________________________________
grass-user mailing list
grass-user@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/grass-user

--
Scientist
Landcare Research, New Zealand

Salut GRASSers,

Moritz Lennert wrote:

[..]

With all the elements in place, especially with Pietro's recent work, it
should be quite easy to create a unifying module 'i.segment.classify'
which would take as input

- the segments coming out of i.segment
- training zones
- a choice of variables
- a choice of classifier

would then calculate the chosen variables, submit the results to the
classifier and then update the segment map attribute table with the
classification result. In other words a frontend combining v.to.db,
v.rast.stats, v.class.ml and possibly some others.

As an alternative, so as to stay in the raster world, what do you
think of simply using "r.statistics2" and providing an input "cover=" map
in order to derive segment-oriented statistics and use them further, for example
in an unsupervised classification scheme?
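For illustration, the kind of statistic meant here (e.g. the mean of the cover values per base/segment category) can be emulated outside GRASS with awk on "segment_id value" pairs, similar to the listing r.stats produces for a base,cover raster pair (the column layout is an assumption):

```sh
# Per-segment mean of a cover map, from 'segment_id value' pairs.
zonal_mean() {
  awk '{ sum[$1] += $2; n[$1]++ }
       END { for (s in sum) printf "%s %.2f\n", s, sum[s] / n[s] }' | sort -n
}

printf '1 10\n1 20\n2 5\n2 7\n2 9\n' | zonal_mean
# segment 1 -> 15.00, segment 2 -> 7.00
```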

Nikos

Hi,

2014-02-12 13:41 GMT+01:00 Nikos Alexandris <nik@nikosalexandris.net>:

[...]

think of simply using "r.statistics2" and providing an input "cover=" map

btw, it reminds me that we haven't yet decided about renaming
`r.statistics2` and `r.statistics3` to more reasonable names...

Martin

--
Martin Landa <landa.martin gmail.com> * http://geo.fsv.cvut.cz/gwiki/Landa

On Wed, Feb 12, 2014 at 7:42 AM, Martin Landa <landa.martin@gmail.com> wrote:

Hi,

2014-02-12 13:41 GMT+01:00 Nikos Alexandris <nik@nikosalexandris.net>:

[...]

> think of simply using "r.statistics2" and providing an input "cover=" map

btw, it reminds me that we haven't yet decided about renaming
`r.statistics2` and `r.statistics3` to more reasonable names...

And they get confused with r.stats and r.univar. And r.univar is the most
basic one of these, I would say, yet has the most cryptic name.

Is merging (some of) them together still an option?

Vaclav

Martin

--
Martin Landa <landa.martin gmail.com> * http://geo.fsv.cvut.cz/gwiki/Landa

On 12/02/14 13:41, Nikos Alexandris wrote:

Salut GRASSers,

Moritz Lennert wrote:

[..]

With all the elements in place, especially with Pietro's recent work, it
should be quite easy to create a unifying module 'i.segment.classify'
which would take as input

- the segments coming out of i.segment
- training zones
- a choice of variables
- a choice of classifier

would then calculate the chosen variables, submit the results to the
classifier and then update the segment map attribute table with the
classification result. In other words a frontend combining v.to.db,
v.rast.stats, v.class.ml and possibly some others.

As an alternative, so as to stay in the raster world, what do you
think of simply using "r.statistics2" and providing an input "cover=" map
in order to derive segment-oriented statistics and use them further, for example
in an unsupervised classification scheme?

Well, actually v.rast.stats uses r.univar with zonal stats, so it also goes through the raster world...

The vector approach does make it easier to calculate shape-related variables too. It also has the advantage of having just one vector map with all variables in the form of attributes instead of as many raster maps as you have attributes.

But as always, everyone has to see what suits them best.

Moritz

On 12/02/14 17:28, Vaclav Petras wrote:

On Wed, Feb 12, 2014 at 7:42 AM, Martin Landa <landa.martin@gmail.com
<mailto:landa.martin@gmail.com>> wrote:

    Hi,

    2014-02-12 13:41 GMT+01:00 Nikos Alexandris <nik@nikosalexandris.net
    <mailto:nik@nikosalexandris.net>>:

    [...]

     > think of simply using "r.statistics2" and providing an input "cover=" map

    btw, it reminds me that we haven't yet decided about renaming
    `r.statistics2` and `r.statistics3` to more reasonable names...

And they get confused with r.stats and r.univar. And r.univar is the most
basic one of these, I would say, yet has the most cryptic name.

Is merging (some of) them together still an option?

http://lists.osgeo.org/pipermail/grass-dev/2013-June/064634.html

In summary: r.statistics can be replaced by r.statistics2 and r.statistics3, with both of them renamed (neither of them calculates the mode, though, but I guess that could be added to r.statistics2 for integer cover maps).
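Conceptually, the missing mode for integer cover maps is just a majority vote of cover values per base category. A small awk sketch of the idea, run outside GRASS on "base_id cover_value" pairs (input format assumed, ties resolved arbitrarily):

```sh
# Mode (majority) of integer cover values per base category,
# computed from 'base_id cover_value' pairs.
zonal_mode() {
  awk '{ count[$1 " " $2]++ }
       END {
         for (k in count) {
           split(k, p, " ")
           # keep the most frequent cover value seen for each base category
           if (count[k] > best[p[1]]) { best[p[1]] = count[k]; mode[p[1]] = p[2] }
         }
         for (b in mode) print b, mode[b]
       }' | sort -n
}

printf '1 4\n1 4\n1 9\n2 7\n' | zonal_mode
# base 1 -> 4, base 2 -> 7
```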

r.stats and r.univar are different.

Hamish suggested r.stats.summary for r.stats, but in my eyes r.univar provides more of a "summary" than r.stats does. I don't have a better proposal, though. Maybe r.stats.binned?

Moritz
