Just to chime in on Hamish's r.in.xyz: I just got back from a field survey
of the Bay of Fundy where 56 GB of swath sonar (Simrad EM1002) data was collected.
After exporting each survey day from Caris HIPS as xyz, I've used r.in.xyz +
r.patch to import the entire 56 GB into GRASS with no problems. I've always
felt that r.in.xyz runs very quickly given the size of each xyz dataset.
~ Eric.
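As a sketch of that workflow, the per-day import plus patch could look like the following dry run. It only prints the GRASS commands it would build; the file and map names are hypothetical, so drop the echo inside a real GRASS session:

```shell
#!/bin/sh
# Dry-run sketch of the r.in.xyz + r.patch workflow: one xyz export per
# survey day, each binned to its own raster, then patched together.
# File and map names are hypothetical; this only prints the commands.
MAPS=""
for f in day1.xyz day2.xyz; do
    map="sonar_${f%.xyz}"               # e.g. sonar_day1
    echo "r.in.xyz input=$f output=$map method=mean"
    MAPS="$MAPS$map,"
done
MAPS=${MAPS%,}                          # strip the trailing comma
echo "r.patch input=$MAPS output=sonar_all"
```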
-----Original Message-----
From: grassuser-bounces@grass.itc.it
To: Jonathan Greenberg; David Finlayson
Cc: grassuser@grass.itc.it; Helena Mitasova; Andrew@grass.itc.it; Danner
Sent: 8/24/2006 2:50 AM
Subject: Re: [GRASS-user] RE: [GRASSLIST:1174] Working with very large
datasets
David Finlayson wrote:
I am working with an interferometric sidescan SONAR system that
produces about 2 GB of elevation and amplitude data per hour. Our raw
data density could support resolutions up to 0.1 m, but we currently
can't handle the data volume at that resolution, so we decimate down to
1 m via a variety of filters. Still, even at 1 m resolution, our
datasets run into the hundreds of MB, and most current software just
doesn't handle the data volumes well.
Any thoughts on processing and working with these data volumes (LIDAR
folks)? I have struggled to provide a good product to our researchers
using both proprietary (Fledermaus, ArcGIS) and non-proprietary (GMT,
GRASS, my own scripts) post-processing software. Nothing is working
very well. The proprietary stuff seems easier at first, but becomes
difficult to automate. The non-proprietary stuff is easy to automate,
but often can't handle the data volumes without first downsampling
the data density (GMT does pretty well if you stick to line-by-line
processing, but that doesn't always work).
Just curious what workflows/software others are using. In particular,
I'd love to keep the whole process FOSS if possible. I don't trust
black boxes.
Speaking of large datasets, I have an extremely large number of ArcInfo LIDAR
DEM tiles that I want to import into GRASS and subsequently join together; is
there a way to batch-import files that all follow different naming conventions? I
imagine there is a script somewhere that provides a GUI that allows more than
one file name as input. These files total about 17 GB of data, and leaving them as
separate tiles means that they are unmanageable. But importing one at a time is
a waste of my time if there is a better way out there (especially if there is a
way to import and patch at once?).
Thanks,
Brandon
--
Brandon M. Gabler
Research Associate
Department of Anthropology
1009 E South Campus Drive, Building #30A
University of Arizona
Tucson, AZ 85721
Phone: 520-621-8455
Fax: 520-621-2088
Brandon,
You can create a shell script to batch import; I've done it many times
using the r.in.arc command. It's very easy to do, so let me know if you need
some help with it. The same goes for patching: I would basically import
the tiles, then at the end of the script use the r.patch command.
Correct me if I'm wrong, GRASS world, but you might also want to run
r.fillnulls in case the tiles don't line up perfectly?
Kevin Slover
Coastal / GIS Specialist
2872 Woodcock Blvd Suite 230
Atlanta GA 30341
(P) 678-530-0022
(F) 678-530-0044
Here is one such example that I have used in the past,
with a directory full of misc. data, and a clean new GRASS location so as not
to pollute another:

    for x in *
    do
        r.in.gdal input=$x output=${x}_new   # or r.in.arc for ArcInfo ASCII grids
    done
then use g.region rast=`g.mlist ..... ` to set the region to the extent of all
of the rasters, and then patch away, always remembering that:
- the region defines how r.patch will function
- null values should be identified and marked as such (r.null)
- and, as Kevin mentioned, r.fillnulls might solve gap issues.
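Concretely, the region-setting and patch step might be sketched as the following dry run. It only prints the commands it would issue; the map names and the -9999 null sentinel are assumptions, so drop the echo inside a real GRASS session:

```shell
#!/bin/sh
# Dry-run sketch of the patch-after-import step: set the region to cover
# every imported tile, patch, mark nulls, and fill gaps. Map names and
# the -9999 sentinel are hypothetical; this only prints the commands.
# In a real session the list would come from something like:
#   MAPS=$(g.mlist type=rast pattern='*_new' sep=,)
MAPS="tile1_new,tile2_new,tile3_new"
echo "g.region rast=$MAPS"                              # region spans all tiles
echo "r.patch input=$MAPS output=dem_patched"           # mosaic the tiles
echo "r.null map=dem_patched setnull=-9999"             # mark sentinel as null
echo "r.fillnulls input=dem_patched output=dem_filled"  # interpolate gaps
```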
cheers,
Dyan
I assume that you were in a Linux environment on your ship's network?
Our acquisition and navigation software run only on Windows machines. Sidescan is processed on Windows, RTK GPS is on Windows; I have some flexibility on bathy processing, but you can see the pattern...
I just updated the Cygwin version of GRASS 6.1-cvs and it doesn't have r.in.xyz.
Cygwin GRASS is kind of a mixed bag anyway...
How much work is involved in compiling GRASS on Cygwin? I've only compiled it on Linux.
On Fri, Aug 25, 2006 at 11:43:47AM +0100, Glynn Clements wrote:
Compiling GRASS itself isn't any harder than on Linux, but you
may have to install a few more packages manually.