It has worked for two DEMs, but I have a problem with the third one at
the step where one "verifies that the number of rows in the ASCII file
corresponds to the number of cells in the enlarged region".
At this point, g.region reports 1146474 cells in the region, while I
have 1146370 lines of coordinates in my file.
The first three and last three coordinates in my text file are:
-----
-74575.000000,-3015850.000000,1548.830000
-74575.000000,-3015825.000000,1548.330000
-74575.000000,-3015800.000000,1547.500000
.
.
.
-49324.990000,-2987575.010000,1510.980000
-49324.990000,-2987550.010000,1511.240000
-49324.990000,-2987525.010000,1511.470000
-----
Can someone help me to figure out what's going on?
It has worked for two DEMs, but I have a problem with the third one at
the step where one "verifies that the number of rows in the ASCII file
corresponds to the number of cells in the enlarged region".
At this point, g.region reports 1146474 cells in the region, while I
have 1146370 lines of coordinates in my file.
I'm assuming you already did the "r.in.xyz -s ..." step to get the input data's extent and set your region to match it.
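(Just for reference, the scan-and-verify step I mean would be roughly
r.in.xyz -s input=ascii.txt output=dummy fs=,
g.region n=<N+12.5> s=<S-12.5> e=<E+12.5> w=<W-12.5> res=25 -p
wc -l ascii.txt
where N/S/E/W are the bounds printed by the scan, the 12.5 half-cell
growth assumes your 25 m spacing, "dummy" is just a placeholder name, and
the line count should match the "cells" value printed by g.region.)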
So it looks like there are about 100 coordinates missing from the ASCII file. Maybe "holes" in the data?
One way to work around this might be to import the point data as a 3D vector, then run the usual v.surf.rst interpolation.
v.in.ascii -z in=ascii.txt out=elev_pts z=3 fs=,
then
g.region vect=elev_pts res=<choose appropriate resolution>
v.surf.rst input=elev_pts elev=dem layer=0
The first three and last three coordinates in my text file are:
-----
-74575.000000,-3015850.000000,1548.830000
-74575.000000,-3015825.000000,1548.330000
-74575.000000,-3015800.000000,1547.500000
.
-49324.990000,-2987575.010000,1510.980000
-49324.990000,-2987550.010000,1511.240000
-49324.990000,-2987525.010000,1511.470000
-----
Can someone help me to figure out what's going on?
It has worked for two DEMs, but I have a problem with the third one at
the step where one "verifies that the number of rows in the ASCII file
corresponds to the number of cells in the enlarged region".
At this point, g.region reports 1146474 cells in the region, while I
have 1146370 lines of coordinates in my file.
I'm assuming you already did the "r.in.xyz -s ..." step to get the input
data's extent and set your region to match it.
So it looks like there are about 100 coordinates missing from the ASCII
file. Maybe "holes" in the data?
One way to work around this might be to import the point data as a 3D
vector, then run the usual v.surf.rst interpolation.
v.in.ascii -z in=ascii.txt out=elev_pts z=3 fs=,
then
g.region vect=elev_pts res=<choose appropriate resolution>
v.surf.rst input=elev_pts elev=dem layer=0
Thanks for the suggestion, Micha.
I've tried the interpolation route before and found that the
difference between the resulting interpolated surface and the original
DEM was up to 7m. I want to use the DEMs for a flood application, so
they need to be as accurate as possible.
I was thinking of perhaps importing the points as vectors, converting
them to raster and then doing a nearest neighbour or IDW interpolation
to fill the gaps. At least then I'll be able to see where the gaps are
and limit the interpolated pixels using a mask?
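Something along these lines maybe (untested, and the module/parameter
names are just my guess at the standard GRASS tools, with the region
already set up as for r.in.xyz):
v.in.ascii -z in=ascii.txt out=elev_pts z=3 fs=,
v.to.rast input=elev_pts output=dem_pts use=z type=point
r.mapcalc "gaps = if(isnull(dem_pts), 1, null())"
and then use the "gaps" map as the mask for whatever interpolation fills
them in.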
The first three and last three coordinates in my text file are:
-----
-74575.000000,-3015850.000000,1548.830000
-74575.000000,-3015825.000000,1548.330000
-74575.000000,-3015800.000000,1547.500000
.
.
.
-49324.990000,-2987575.010000,1510.980000
-49324.990000,-2987550.010000,1511.240000
-49324.990000,-2987525.010000,1511.470000
-----
Can someone help me to figure out what's going on?
I was thinking of perhaps importing the points as vectors, converting
them to raster and then doing a nearest neighbour or IDW interpolation
to fill the gaps. At least then I'll be able to see where the gaps are
and limit the interpolated pixels using a mask?
No need to do anything different to find the missing pixels. Inspecting
r.in.xyz's method=n output map with r.univar can be very useful for
troubleshooting.
from the help page:
Gridded data
If data is known to be on a regular grid r.in.xyz can
reconstruct the map perfectly as long as some care is
taken to set up the region correctly and that the
data's native map projection is used. A typical method
would involve determining the grid resolution either by
examining the data's associated documentation or by
studying the text file. Next scan the data with
r.in.xyz's -s (or -g) flag to find the input data's
bounds. GRASS uses the cell-center raster convention
where data points fall within the center of a cell, as
opposed to the grid-node convention. Therefore you will
need to grow the region out by half a cell in all
directions beyond what the scan found in the file.
After the region bounds and resolution are set correctly
with g.region, run r.in.xyz using the n method
and verify that n=1 at all places. r.univar can help.
Once you are confident that the region exactly matches
the data proceed to run r.in.xyz using one of the mean,
min, max, or median methods. With n=1 throughout, the
result should be identical regardless of which of those
methods are used.
with the "n" map you might use r.mapcalc to extract the NULL cells
as some value, then r.out.xyz or r.to.vect on th extracts to highlight
where they are. Or maybe you get lucky with r.colors with "nv" set to
bright magenta on the original data.
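Concretely, something like the following might work (untested; the
dummy/npoints/gaps map names and the 12.5 half-cell growth, which
assumes your 25 m spacing, are just placeholders):
r.in.xyz -s input=ascii.txt output=dummy fs=,
g.region n=<N+12.5> s=<S-12.5> e=<E+12.5> w=<W-12.5> res=25 -p
r.in.xyz input=ascii.txt output=npoints method=n fs=,
r.univar map=npoints
r.mapcalc "gaps = if(isnull(npoints), 1, null())"
r.to.vect input=gaps output=gap_cells feature=point
(N/S/E/W are the bounds printed by the scan; r.to.vect takes type=
instead of feature= in newer GRASS versions.) If r.univar's cell count
"n" equals the region's total cells and min=max=1, the grid is complete;
otherwise the gap_cells points show exactly where the holes sit.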
It has worked for two DEMs, but I have a problem with the third one at
the step where one "verifies that the number of rows in the ASCII file
corresponds to the number of cells in the enlarged region".
At this point, g.region reports 1146474 cells in the region, while I
have 1146370 lines of coordinates in my file.
I'm assuming you already did the "r.in.xyz -s ..." step to get the input
data's extent and set your region to match it.
So it looks like there are about 100 coordinates missing from the ASCII
file. Maybe "holes" in the data?
One way to work around this might be to import the point data as a 3D
vector, then run the usual v.surf.rst interpolation.
v.in.ascii -z in=ascii.txt out=elev_pts z=3 fs=,
then
g.region vect=elev_pts res=<choose appropriate resolution>
v.surf.rst input=elev_pts elev=dem layer=0
Thanks for the suggestion, Micha.
I've tried the interpolation route before and found that the
difference between the resulting interpolated surface and the original
DEM was up to 7m. I want to use the DEMs for a flood application, so
they need to be as accurate as possible.
I wonder if playing with the "tension" and "smooth" parameters would
help? Higher tension (>40) means that the interpolation acts more like a
rubber sheet, so each point influences a smaller area, and smooth=0 means
that the final DEM must pass exactly through each point.
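For example, building on Micha's earlier commands (the values are only
illustrative, not a recommendation):
v.surf.rst input=elev_pts elev=dem layer=0 tension=60 smooth=0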
I was thinking of perhaps importing the points as vectors, converting
them to raster and then doing a nearest neighbour or IDW interpolation
to fill the gaps. At least then I'll be able to see where the gaps are
and limit the interpolated pixels using a mask?
Worth a try. But as you probably know, nearest neighbor won't take trends in the surface into account, and neither will IDW. IDW also usually gives worse terrain results than RST.
The first three and last three coordinates in my text file are:
-----
-74575.000000,-3015850.000000,1548.830000
-74575.000000,-3015825.000000,1548.330000
-74575.000000,-3015800.000000,1547.500000
.
-49324.990000,-2987575.010000,1510.980000
-49324.990000,-2987550.010000,1511.240000
-49324.990000,-2987525.010000,1511.470000
-----
Can someone help me to figure out what's going on?
I don't think it's this bug because this bug discards only one line of
data. I don't get any data in because the number of coordinate pairs
in the file is less than the number of cells in the defined region.
I was thinking of perhaps importing the points as vectors, converting
them to raster and then doing a nearest neighbour or IDW interpolation
to fill the gaps. At least then I'll be able to see where the gaps are
and limit the interpolated pixels using a mask?
No need to do anything different to find the missing pixels. Inspecting
r.in.xyz's method=n output map with r.univar can be very useful for
troubleshooting.
from the help page:
Gridded data
If data is known to be on a regular grid r.in.xyz can
reconstruct the map perfectly as long as some care is
taken to set up the region correctly and that the
data's native map projection is used. A typical method
would involve determining the grid resolution either by
examining the data's associated documentation or by
studying the text file. Next scan the data with
r.in.xyz's -s (or -g) flag to find the input data's
bounds. GRASS uses the cell-center raster convention
where data points fall within the center of a cell, as
opposed to the grid-node convention. Therefore you will
need to grow the region out by half a cell in all
directions beyond what the scan found in the file.
After the region bounds and resolution are set correctly
with g.region, run r.in.xyz using the n method
and verify that n=1 at all places. r.univar can help.
Once you are confident that the region exactly matches
the data proceed to run r.in.xyz using one of the mean,
min, max, or median methods. With n=1 throughout, the
result should be identical regardless of which of those
methods are used.
with the "n" map you might use r.mapcalc to extract the NULL cells
as some value, then r.out.xyz or r.to.vect on th extracts to highlight
where they are. Or maybe you get lucky with r.colors with "nv" set to
bright magenta on the original data.
Thanks, I'll try this to find where the holes in the data are.
I don't think it's this bug because this bug discards only one line of
data. I don't get any data in because the number of coordinate pairs
in the file is less than the number of cells in the defined region.
Weird. In 6.4, r.in.xyz does import a file where the number of
coordinate pairs is far less than the number of cells in the defined
region. (I just did a simple test with two input lines and a region
with 26.5 million cells, and got a clean import and a correct result.)
You can interpolate NULL cells and at the same time keep non-NULL cell
values with r.fillnulls.
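E.g. something like (map names here are only placeholders; r.fillnulls
fills only the NULL cells and keeps all other values as they are):
r.fillnulls input=dem output=dem_filled tension=40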
Make sure you have a clean 25m resolution for both ns and ew in order to
make life easier. The 1 cm difference in the last input lines you posted
cannot possibly make a difference with 25m point spacing and is most
probably some floating point rounding error introduced by some
preprocessing used to generate the ASCII xyz input file.
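A quick check is
g.region -p
making sure nsres and ewres both read exactly 25; if not, re-run
g.region with explicit bounds and res=25 before importing.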
HTH,
Markus M
I was thinking of perhaps importing the points as vectors, converting
them to raster and then doing a nearest neighbour or IDW interpolation
to fill the gaps. At least then I'll be able to see where the gaps are
and limit the interpolated pixels using a mask?
No need to do anything different to find the missing pixels. Inspecting
r.in.xyz's method=n output map with r.univar can be very useful for
troubleshooting.
from the help page:
Gridded data
If data is known to be on a regular grid r.in.xyz can
reconstruct the map perfectly as long as some care is
taken to set up the region correctly and that the
data's native map projection is used. A typical method
would involve determining the grid resolution either by
examining the data's associated documentation or by
studying the text file. Next scan the data with
r.in.xyz's -s (or -g) flag to find the input data's
bounds. GRASS uses the cell-center raster convention
where data points fall within the center of a cell, as
opposed to the grid-node convention. Therefore you will
need to grow the region out by half a cell in all
directions beyond what the scan found in the file.
After the region bounds and resolution are set correctly
with g.region, run r.in.xyz using the n method
and verify that n=1 at all places. r.univar can help.
Once you are confident that the region exactly matches
the data proceed to run r.in.xyz using one of the mean,
min, max, or median methods. With n=1 throughout, the
result should be identical regardless of which of those
methods are used.
with the "n" map you might use r.mapcalc to extract the NULL cells
as some value, then r.out.xyz or r.to.vect on th extracts to highlight
where they are. Or maybe you get lucky with r.colors with "nv" set to
bright magenta on the original data.
Thanks, I'll try this to find where the holes in the data are.