Discussion:
[Geotools-devel] Coverages again; heartburn check
Bryce L Nordgren
2006-11-09 23:05:06 UTC
Permalink
Confluence and Jira are so awesome: can drop a topic for a season, then
pick it up again right where you left off.

This is a pondering email to generate responses which guide my further
thinking.

If there were such a thing as a Map, where the Keys were geometry objects,
and the Values were a particular arbitrary class (possibly containing
geometry objects); this is almost precisely the definition of a coverage
via 19123. (Make the keys "DomainObjects" to be perfect.) Does anyone
object to making the Coverage interface extend Map? The implication is
that the individual entries in the Map are not themselves Features. (This
agrees with 19123.) The coverage itself is the feature. The drawback to
this analogy is that Maps have a size() and can generate a finite keySet,
entrySet, and Collection of Values. A coverage does not necessarily have
any entries (it could calculate Values from an equation which accepts
Keys.) Any opinions on whether this size()/keySet()/entrySet()/values()
issue should influence the extension of Map?
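To make the analogy concrete, here is a minimal sketch (all names illustrative, not GeoAPI interfaces) contrasting a finite, Map-friendly discrete case with an equation-based one where entrySet() has nothing finite to return:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch: a discrete coverage as a finite Map from
// domain positions to range values. GridPos is a stand-in for a
// 19123 DomainObject; nothing here is a real GeoTools/GeoAPI type.
public class CoverageAsMap {
    record GridPos(int x, int y) {}

    public static void main(String[] args) {
        Map<GridPos, Double> coverage = new HashMap<>();
        coverage.put(new GridPos(0, 0), 1.5);
        coverage.put(new GridPos(0, 1), 2.5);

        // Finite case: size()/keySet()/entrySet() are all well defined.
        System.out.println(coverage.size());                 // 2
        System.out.println(coverage.get(new GridPos(0, 1))); // 2.5

        // Continuous case: values come from a function of the key, so
        // there is no finite entrySet() to return -- the sticking point.
        Function<GridPos, Double> continuous = p -> Math.sin(p.x()) + p.y();
        System.out.println(continuous.apply(new GridPos(0, 3))); // 3.0
    }
}
```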

Can someone tell me if coverages would break existing feature code? My
concern is with how coverages expose data. You get data from a coverage by
calling the "evaluate()", "select()", or "list()" methods and providing a
location. (Discrete coverages have more methods to access data and more
associations with data-bearing-entities.) A coverage does not expose data
via attributes. This is obviously only a concern if Coverage extends
Feature, and client code is written assuming all Features expose data as
attributes. Is this issue important or trivial?

The special case of a discrete coverage is just a collection of homogeneous
records (not themselves features) which are indexed by geospatial location.
A discrete coverage is very much like a database table. A discrete
coverage _can_ provide values for size(), can generate the Sets and
Collections listed above, and can therefore fully implement the Map
interface.

An implementation of DiscreteCoverage could also implement TableModel.
(More generically, we could write an adapter which provides TableModel
functionality given a DiscreteCoverage, and vice versa.) DiscreteCoverage
would also lend itself to implementing (or adaptation to)
java.sql.ResultSet. Both options explicitly communicate that a
DiscreteCoverage is a set of entries, each of which has the same
attributes (columns). These adapters probably should be written in terms
of Map.Entries, if Coverage extends Map.
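A hedged sketch of what such an adapter might look like, using a toy homogeneous record type rather than the real DiscreteCoverage interface (the Entry type and column layout are assumptions for illustration):

```java
import java.util.List;
import javax.swing.table.AbstractTableModel;

// Illustrative sketch: adapting a discrete-coverage-like list of
// homogeneous (location, value) records to Swing's TableModel.
public class CoverageTableAdapter extends AbstractTableModel {
    record Entry(double x, double y, double value) {}

    private final List<Entry> entries;
    private static final String[] COLUMNS = {"x", "y", "value"};

    public CoverageTableAdapter(List<Entry> entries) {
        this.entries = entries;
    }

    @Override public int getRowCount() { return entries.size(); }
    @Override public int getColumnCount() { return COLUMNS.length; }
    @Override public String getColumnName(int c) { return COLUMNS[c]; }

    @Override public Object getValueAt(int row, int col) {
        Entry e = entries.get(row);
        return switch (col) {
            case 0 -> e.x();
            case 1 -> e.y();
            default -> e.value();
        };
    }

    public static void main(String[] args) {
        var model = new CoverageTableAdapter(List.of(
                new Entry(0, 0, 10.0), new Entry(1, 0, 20.0)));
        System.out.println(model.getRowCount());    // 2
        System.out.println(model.getValueAt(1, 2)); // 20.0
        System.out.println(model.getColumnName(2)); // value
    }
}
```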

Is TableModel happy with SWT folks or does this display a Swing bias? Any
heartburn with ResultSet, or is there perhaps a better JDBC-related
alternative you would like to propose?

The quadrilateral gridded data coverage has a number of specialized
possibilities. Adaptors to and from java.awt.image.Raster and TableModel
(cells return data values of a single numeric band) spring to mind
immediately.
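For the gridded case, the Java2D side of such an adapter would look roughly like this (a plain single-band java.awt.image.Raster; nothing GeoTools-specific, and the int band type is just for the sketch):

```java
import java.awt.image.DataBuffer;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;

// Illustrative sketch: a single-band 2x2 Raster indexed by (x, y),
// i.e. the kind of object a grid-coverage adapter would target.
public class RasterSketch {
    public static void main(String[] args) {
        WritableRaster raster = Raster.createBandedRaster(
                DataBuffer.TYPE_INT, 2, 2, 1, null);
        raster.setSample(1, 0, 0, 42);                 // x=1, y=0, band 0
        System.out.println(raster.getSample(1, 0, 0)); // 42
        System.out.println(raster.getWidth());         // 2
    }
}
```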

This is probably enough for today. :)

Bryce
Jody Garnett
2006-11-09 23:37:57 UTC
Permalink
Post by Bryce L Nordgren
Confluence and Jira are so awesome: can drop a topic for a season, then
pick it up again right where you left off.
Hiya Bryce :-P
Post by Bryce L Nordgren
If there were such a thing as a Map, where the Keys were geometry objects,
and the Values were a particular arbitrary class (possibly containing
geometry objects); this is almost precisely the definition of a coverage
via 19123. (Make the keys "DomainObjects" to be perfect.) Does anyone
object to making the Coverage interface extend Map? The implication is
that the individual entries in the Map are not themselves Features. (This
agrees with 19123.) The coverage itself is the feature. The drawback to
this analogy is that Maps have a size() and can generate a finite keySet,
entrySet, and Collection of Values. A coverage does not necessarily have
any entries (it could calculate Values from an equation which accepts
Keys.) Any opinions on whether this size()/keySet()/entrySet()/values()
issue should influence the extension of Map?
Well the KeySet does not have to store individual entries; only provide
an iterator to them.
In FeatureCollection land we probably should have the same relationship
between a KeySet (let's call it FidSet) and the contents
that you describe above. Something to consider is a subclass of Set that
supports spatial query...
Post by Bryce L Nordgren
Can someone tell me if coverages would break existing feature code? My
concern is with how coverages expose data.
I have been waiting to hear back on this; RobA has had feedback a couple
times on this - in a similar fashion I need to understand how an
"operation" is used to define a coverage to ensure that the Feature
Model Operation and Coverage Operation can get along.
Post by Bryce L Nordgren
You get data from a coverage by calling the "evaluate()", "select()", or
"list()" methods and providing a location. (Discrete coverages have more
methods to access data and more
associations with data-bearing-entities.) A coverage does not expose data
via attributes. This is obviously only a concern if Coverage extends
Feature, and client code is written assuming all Features expose data as
attributes. Is this issue important or trivial?
Not sure I understand; some points of reference:
- Coverage is intended to extend Feature
- Feature access is safely done via Expression
- SLD allows you to define Expressions against a GridCoverage as part of
a RasterSymbolizer (assume it selects out "record values" representing
different bands for use in mathematical expressions resulting in one of
Red, Green, Blue for visualization).

So as long as Coverage understands Expression (see recent email
messages) you should be fine. What is interesting is ensuring that the
Coverage supports enough description about its contents to allow users
to create useful expressions.
Post by Bryce L Nordgren
The special case of a discrete coverage is just a collection of homogeneous
records (not themselves features) which are indexed by geospatial location.
A discrete coverage is very much like a database table. A discrete
coverage _can_ provide values for size(), can generate the Sets and
Collections listed above, and can therefore fully implement the Map
interface.
Understood; and I wonder in what sense a Coverage cannot? (Perhaps it is
a pure math coverage of infinite precision? In which case the logical
thing to do would be to create a KeySet that can test set membership but
have "Integer.MAX_VALUE" for size()?) But you are correct, the model would
be broken ...
Post by Bryce L Nordgren
An implementation of DiscreteCoverage could also implement TableModel.
(More generically, we could write an adapter which provides TableModel
functionality given a DiscreteCoverage, and vice versa.) DiscreteCoverage
would also lend itself to implementing (or adaptation to)
java.sql.ResultSet. Both options explicitly communicate that a
DiscreteCoverage is a set of entries, each of which have the same
attributes (columns). These adapters probably should be written in terms
of Map.Entries, if Coverage extends Map.
I would be a bit happier if we implement it as what it is; and write
adapters to TableModel (Swing) or DataProvider (SWT) etc... we can use
other code for help in naming of course.
Post by Bryce L Nordgren
Is TableModel happy with SWT folks or does this display a Swing bias? Any
heartburn with ResultSet, or is there perhaps a better JDBC-related
alternative you would like to propose?
ResultSet is fine? But I think I need more understanding of what you are
trying to accomplish first.
Post by Bryce L Nordgren
The quadrilateral gridded data coverage has a number of specialized
possibilities. Adaptors to and from java.awt.image.Raster and TableModel
(cells return data values of a single numeric band) spring to mind
immediately.
Aside: GridCoverage is deprecated on GeoAPI trunk; but no alternative is
provided (!) If we do not have a replacement,
can we remove the deprecation, please?
Post by Bryce L Nordgren
This is probably enough for today. :)
Cheers,
Jody
Martin Desruisseaux
2006-11-11 16:43:13 UTC
Permalink
Post by Jody Garnett
Aside: GridCoverage is deprecated on GeoAPI trunk; but no alternative is
provided (!) If we do not have a replacement,
can we remove the deprecation, please?
Yes, there is too much deprecated stuff in GeoAPI coverage interfaces. I will
look at them while performing the coverage branch merge review (which I started
yesterday).

Talking about deprecated stuff: The following Coverage method from OGC 01-004:

Object evaluate(DirectPosition)

has been replaced by

Set evaluate(DirectPosition)

which is a compatibility break (and breaks the Seagis application, for
example). Simone, could you confirm that this is not an ISO 19123 method
(it doesn't seem to me at first look - the ISO 19123 'evaluate' method has
different arguments), and can I switch back to the original Object return
type before the 2.3 release? I will probably deprecate this method in the
process.

Martin
Martin Desruisseaux
2006-11-11 16:30:45 UTC
Permalink
Post by Bryce L Nordgren
If there were such a thing as a Map, where the Keys were geometry objects,
and the Values were a particular arbitrary class (possibly containing
geometry objects); this is almost precisely the definition of a coverage
via 19123. (Make the keys "DomainObjects" to be perfect.) Does anyone
object to making the Coverage interface extend Map? The implication is
that the individual entries in the Map are not themselves Features. (This
agrees with 19123.) The coverage itself is the feature. The drawback to
this analogy is that Maps have a size() and can generate a finite keySet,
entrySet, and Collection of Values. A coverage does not necessarily have
any entries (it could calculate Values from an equation which accepts
Keys.) Any opinions on whether this size()/keySet()/entrySet()/values()
issue should influence the extension of Map?
As per the Map contract, we have to implement iterators over entrySet(). It
is not clear to me how such an iterator could be implemented if Coverage
can generate an infinite number of values computed from an infinite number
of arbitrary keys. Actually, I think that in terms of ISO, Coverage would
be an extension of a TransfiniteMap (if such an interface existed in Java)
rather than a Map.

Citing ISO 19107:

"TransfiniteSet<T> – a possibly infinite set; restricted only to values.
For example, the integers and the real numbers are transfinite sets.
This is actually the usual definition of set in mathematics, but
programming languages restrict the term set to mean finite set."

Same applies to Map I guess: in mathematics, Map should have been a possibly
infinite map without iterator() or size() methods, and the current
java.util.Map should have been a subclass of that mathematical Map, called
"FiniteMap".

This leads me to believe that a mathematical Map object (or TransfiniteMap)
would have been an appropriate superclass for Coverage, but java.util.Map
is not. I would prefer a MapAdapter class or a "Coverage.mapView(...)"
method, where the 'mapView(...)' arguments are the envelope and the
interval to use for iterating over the domain. Then we get a finite map,
consistent with the java.util.Map contract.
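A minimal 1D sketch of the mapView(...) idea, with all names and signatures assumed for illustration: sample a continuous coverage function over an envelope at a given interval, yielding a finite java.util.Map.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.DoubleUnaryOperator;

// Illustrative sketch only: turn a continuous ("transfinite") coverage
// function into a finite Map by sampling [min, max] at a fixed step.
public class MapViewSketch {
    static Map<Double, Double> mapView(DoubleUnaryOperator coverage,
                                       double min, double max, double step) {
        Map<Double, Double> view = new LinkedHashMap<>();
        for (double x = min; x <= max; x += step) {
            view.put(x, coverage.applyAsDouble(x));
        }
        return view; // finite: honours size(), keySet(), entrySet()
    }

    public static void main(String[] args) {
        Map<Double, Double> view = mapView(x -> x * x, 0.0, 2.0, 1.0);
        System.out.println(view.size());   // 3
        System.out.println(view.get(2.0)); // 4.0
    }
}
```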
Post by Bryce L Nordgren
The special case of a discrete coverage is just a collection of homogeneous
records (not themselves features) which are indexed by geospatial location.
A discrete coverage is very much like a database table. A discrete
coverage _can_ provide values for size(), can generate the Sets and
Collections listed above, and can therefore fully implement the Map
interface.
Right. It would be correct to set "DiscreteCoverage extends Coverage, Map"
(which is allowed for interfaces), but I would avoid "Coverage extends Map".

However, I may wish to try to implement ISO 19123 as it stands before
adding Map to the inheritance (i.e. we may want to see how MapAdapter or
Coverage.mapView(...) work in practice, if we implement them).

Martin
Jody Garnett
2006-11-11 23:05:11 UTC
Permalink
Post by Martin Desruisseaux
"TransfiniteSet<T> – a possibly infinite set; restricted only to values.
For example, the integers and the real numbers are transfinite sets.
This is actually the usual definition of set in mathematics, but
programming languages restrict the term set to mean finite set."
Same applies to Map I guess: in mathematics, Map should have been a possibly
infinite map without iterator() or size() methods, and the current
java.util.Map should have been a subclass of that mathematical Map, called
"FiniteMap".
This leads me to believe that a mathematical Map object (or TransfiniteMap)
would have been an appropriate superclass for Coverage, but java.util.Map
is not. I would prefer a MapAdapter class or a "Coverage.mapView(...)"
method, where the 'mapView(...)' arguments are the envelope and the
interval to use for iterating over the domain. Then we get a finite map,
consistent with the java.util.Map contract.
Thanks Martin, this makes a great deal more sense to me; indeed it also
lends to my understanding of how a coverage can be defined by an
operation. For what is a function but a mapping from parameters to values,
such as you describe.
Post by Martin Desruisseaux
Post by Bryce L Nordgren
The special case of a discrete coverage is just a collection of homogeneous
records (not themselves features) which are indexed by geospatial location.
A discrete coverage is very much like a database table. A discrete
coverage _can_ provide values for size(), can generate the Sets and
Collections listed above, and can therefore fully implement the Map
interface.
Right. It would be correct to set "DiscreteCoverage extends Coverage, Map"
(which is allowed for interfaces), but I would avoid "Coverage extends Map".
However, I may wish to try to implement ISO 19123 as it stands before
adding Map to the inheritance (i.e. we may want to see how MapAdapter or
Coverage.mapView(...) work in practice, if we implement them).
Let's implement it as it stands, and use our knowledge of Java to make
the result approachable to the programming community at large.

Question to refine the problem:
- Am I to understand then that the "values" for this Map are always Records?
- We can always play the statistics game to ask for a "representative
sample" in order to produce a normal Java Map out of a Coverage.

PS. Cory when you are working with your aggregation functions it may do
well to break out this idea of a "representative sample" as a
FeatureSource operation.

Cheers,
Jody
Martin Desruisseaux
2006-11-11 16:49:08 UTC
Permalink
Talking about Map in the context of Coverage, do you have an idea why ISO 19123
methods like:

Set<GeometryValuePair> select(Geometry s, Period t);

Are not

Map<Geometry, Value> select(Geometry s, Period t);

instead? And should we consider using Map instead of Set? This brings us close to
your "Coverage extends Map" proposal (or my Coverage.mapView(...) amendment).
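For illustration, folding such a set of pairs into a Map looks like this (the Pair record and the String stand-in for Geometry are assumptions, not the ISO types):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Illustrative sketch: a select()-style Set of (geometry, value) pairs
// folded into a Map keyed by geometry, as Martin's question suggests.
public class SelectAsMap {
    record Pair(String geometry, double value) {}

    static Map<String, Double> asMap(Set<Pair> pairs) {
        Map<String, Double> m = new LinkedHashMap<>();
        for (Pair p : pairs) {
            m.put(p.geometry(), p.value());
        }
        return m;
    }

    public static void main(String[] args) {
        Set<Pair> selected = Set.of(new Pair("POINT(0 0)", 1.0),
                                    new Pair("POINT(1 1)", 2.0));
        Map<String, Double> m = asMap(selected);
        System.out.println(m.size());            // 2
        System.out.println(m.get("POINT(1 1)")); // 2.0
    }
}
```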

Martin
Bryce L Nordgren
2006-11-14 00:49:01 UTC
Permalink
Post by Jody Garnett
Hiya Bryce :-P
:-P
Post by Jody Garnett
Post by Bryce L Nordgren
If there were such a thing as a Map, where the Keys were geometry objects,
and the Values were a particular arbitrary class (possibly containing
geometry objects); this is almost precisely the definition of a coverage
via 19123. (Make the keys "DomainObjects" to be perfect.) Does anyone
object to making the Coverage interface extend Map? The implication is
that the individual entries in the Map are not themselves Features. (This
agrees with 19123.) The coverage itself is the feature. The drawback to
this analogy is that Maps have a size() and can generate a finite keySet,
entrySet, and Collection of Values. A coverage does not necessarily have
any entries (it could calculate Values from an equation which accepts
Keys.) Any opinions on whether this size()/keySet()/entrySet()/values()
issue should influence the extension of Map?
Post by Jody Garnett
Well the KeySet does not have to store individual entries; only provide
an iterator to them.
Good point. But iterating over an infinite set is also problematic.
Post by Jody Garnett
In FeatureCollection land we probably should have the same relationship
between a KeySet (let's call it FidSet) and the contents
that you describe above. Something to consider is a subclass of Set that
supports spatial query...
Hmmm. I think you're describing TransfiniteSet a la 19103, which is a
superclass of Set. Both have contains(), but Set can have iterators. The
iterators bite us.
Post by Jody Garnett
Post by Bryce L Nordgren
Can someone tell me if coverages would break existing feature code? My
concern is with how coverages expose data.
I have been waiting to hear back on this; RobA has had feedback a couple
times on this - in a similar fashion I need to understand how an
"operation" is used to define a coverage to ensure that the Feature
Model Operation and Coverage Operation can get along.
Post by Bryce L Nordgren
You get data from a coverage by calling the "evaluate()", "select()", or
"list()" methods and providing a location. (Discrete coverages have more
methods to access data and more associations with data-bearing-entities.)
A coverage does not expose data via attributes. This is obviously only a
concern if Coverage extends Feature, and client code is written assuming
all Features expose data as attributes. Is this issue important or trivial?
Post by Jody Garnett
- Coverage is intended to extend Feature
- Feature access is safely done via Expression
- SLD allows you to define Expressions against a GridCoverage as part of
a RasterSymbolizer (assume it selects out "record values" representing
different bands for use in mathematical expressions resulting in one of
Red, Green, Blue for visualization).
We might be able to make Coverage extend Feature. We might also just want
to provide a coverage implementation of the Feature interface. The second
one might be more in line with the spirit of 19100. I haven't really
resolved this issue in my own mind yet.

Regardless, a "Coverage" will be manipulated with the feature interface. A
"Coverage" will advertise a standard set of operations (list(), select(),
find(), evaluate(), evaluateInverse()). It's just like determining methods
using reflection on a class, only we're exposing the "operations" using the
feature interface. At least that's how I recall it should work.

We could make a POJO<->Feature adapter which exposes public fields as
Feature attributes and public method signatures as Feature operations.
Then we just write Coverage as a POJO and adapt it into the Feature
interface. But we'd have to allow run-time extension. I think this is why
I was looking at EMF. I'll have to dwell on this one.
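A minimal reflection-based sketch of the adapter idea: expose a POJO's public methods as named "operations" that can be listed and invoked. Purely illustrative; no GeoTools/EMF types, and all names are made up.

```java
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch: reflecting over a POJO to advertise its public
// methods as feature "operations", then invoking one by name.
public class PojoOperations {
    // A toy coverage-like POJO.
    public static class ToyCoverage {
        public double evaluate(double x) { return 2 * x; }
    }

    static List<String> operations(Object pojo) {
        return Arrays.stream(pojo.getClass().getDeclaredMethods())
                .map(Method::getName)
                .sorted()
                .toList();
    }

    static Object invoke(Object pojo, String op, Object... args) throws Exception {
        Method m = Arrays.stream(pojo.getClass().getDeclaredMethods())
                .filter(x -> x.getName().equals(op))
                .findFirst().orElseThrow();
        return m.invoke(pojo, args);
    }

    public static void main(String[] args) throws Exception {
        ToyCoverage c = new ToyCoverage();
        System.out.println(operations(c));              // [evaluate]
        System.out.println(invoke(c, "evaluate", 3.0)); // 6.0
    }
}
```

The catch Bryce raises, run-time extension, is exactly what plain reflection does not give you: the operation set is fixed by the class.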
Post by Jody Garnett
So as long as Coverage understands Expression (see recent email
messages) you should be fine. What is interesting is ensuring that the
Coverage supports enough description about its contents to allow users
to create useful expressions.
Righto. I need to delve into archives w.r.t. Expression. Gotcha.
Post by Jody Garnett
Post by Bryce L Nordgren
The special case of a discrete coverage is just a collection of homogeneous
records (not themselves features) which are indexed by geospatial location.
A discrete coverage is very much like a database table. A discrete
coverage _can_ provide values for size(), can generate the Sets and
Collections listed above, and can therefore fully implement the Map
interface.
Post by Jody Garnett
Understood; and I wonder in what sense a Coverage cannot? (Perhaps it is
a pure math coverage of infinite precision? In which case the logical
thing to do would be to create a KeySet that can test set membership but
have "Integer.MAX_VALUE" for size()?) But you are correct, the model would
be broken ...
Bingo. Broken model bad. Don't forget can't iterate over an infinite set.
Two broken contracts.
Post by Jody Garnett
Post by Bryce L Nordgren
An implementation of DiscreteCoverage could also implement TableModel.
(More generically, we could write an adapter which provides TableModel
functionality given a DiscreteCoverage, and vice versa.) DiscreteCoverage
would also lend itself to implementing (or adaptation to)
java.sql.ResultSet. Both options explicitly communicate that a
DiscreteCoverage is a set of entries, each of which has the same
attributes (columns). These adapters probably should be written in terms
of Map.Entries, if Coverage extends Map.
Post by Jody Garnett
I would be a bit happier if we implement it as what it is; and write
adapters to TableModel (Swing) or DataProvider (SWT) etc... we can use
other code for help in naming of course.
Adapters are good. I just wanted to emphasize that coverages enforce "all
records contain the same fields" and aren't just arbitrary collections of
heterogeneous features.

If we write an adapter to DataProvider, do we need another library? Is
there an SWT extension module to collect code which interoperates with this
library?
Post by Jody Garnett
Post by Bryce L Nordgren
Is TableModel happy with SWT folks or does this display a Swing bias? Any
heartburn with ResultSet, or is there perhaps a better JDBC-related
alternative you would like to propose?
Post by Jody Garnett
ResultSet is fine? But I think I need more understanding of what you are
trying to accomplish first.
Well, I was thinking of exposing the coverage contents using "normal" Java
idioms. I'm not thinking of this for GeoAPI, mind you, but for the various
implementations. It could also go the other way. A coverage could wrap a
ResultSet (containing domain/range pairs). So, generic geospatial code
using Coverage.evaluate(), Coverage.select(), etc. could actually operate
on a remote data store.
Post by Jody Garnett
Post by Bryce L Nordgren
This is probably enough for today. :)
:)

I'll go look at emails regarding Expression and come back when I'm less
dumb.

Bryce
Jody Garnett
2006-11-14 01:18:06 UTC
Permalink
Post by Bryce L Nordgren
Good point. But iterating over an infinite set is also problematic.
Post by Jody Garnett
In FeatureCollection land we probably should have the same relationship
between a KeySet (let's call it FidSet) and the contents
that you describe above. Something to consider is a subclass of Set that
supports spatial query...
Hmmm. I think you're describing TransfiniteSet a la 19103, which is a
superclass of Set. Both have contains(), but Set can have iterators. The
iterators bite us.
And that is exactly the feedback Martin gave me :-P Nice when people agree.
Post by Bryce L Nordgren
Post by Jody Garnett
- Coverage is intended to extend Feature
- Feature access is safely done via Expression
- SLD allows you to define Expressions against a GridCoverage as part of
a RasterSymbolizer (assume it selects out "record values" representing
different bands for use in mathematical expressions resulting in one of
Red, Green, Blue for visualization).
We might be able to make Coverage extend Feature. We might also just want
to provide a coverage implementation of the Feature interface. The second
one might be more in line with the spirit of 19100. I haven't really
resolved this issue in my own mind yet.
I wonder w/ respect to Coverage extending Feature if they are just
advertising a few attributes (such as bounds?)
Post by Bryce L Nordgren
Regardless, a "Coverage" will be manipulated with the feature interface. A
"Coverage" will advertise a standard set of operations (list(), select(),
find(), evaluate(), evaluateInverse()). It's just like determining methods
using reflection on a class, only we're exposing the "operations" using the
feature interface. At least that's how I recall it should work.
Interesting, okay that can work for me.
Post by Bryce L Nordgren
We could make a POJO<->Feature adapter which exposes public fields as
Feature attributes and public method signatures as Feature operations.
Then we just write Coverage as a POJO and adapt it into the Feature
interface. But we'd have to allow run-time extension. I think this is why
I was looking at EMF. I'll have to dwell on this one.
I have been thinking about that (and you can see the work occurring on
the various Pojo datastores), however I am also attracted to the earlier
suggestion of defining "Mix-ins" where an operation is defined
against a set of specific attributes (captured with Java interface or
data structure?) and then we can make a wrapper for each feature that
has the correct attributes; this would separate the definition of
operation code from the identification of types where the operation can
be used.

This would all be under the covers of the feature model; in much the
same way as the recent XPath support is under the covers of the Filter
implementation.
Bryce L Nordgren
2006-11-14 19:56:17 UTC
Permalink
Post by Jody Garnett
Post by Bryce L Nordgren
We might be able to make Coverage extend Feature. We might also just want
to provide a coverage implementation of the Feature interface. The second
one might be more in line with the spirit of 19100. I haven't really
resolved this issue in my own mind yet.
Post by Jody Garnett
I wonder w/ respect to Coverage extending Feature if they are just
advertising a few attributes (such as bounds?)
They are advertising a few _Properties_. Some properties are attributes
(rangeType, extent, commonPointRule), some properties are operations (list,
select, find, evaluate, evaluateInverse).
Post by Jody Garnett
Post by Bryce L Nordgren
We could make a POJO<->Feature adapter which exposes public fields as
Feature attributes and public method signatures as Feature operations.
Then we just write Coverage as a POJO and adapt it into the Feature
interface. But we'd have to allow run-time extension. I think this is
why I was looking at EMF. I'll have to dwell on this one.
Post by Jody Garnett
I have been thinking about that (and you can see the work occurring on
the various Pojo datastores), however I am also attracted to the earlier
suggestion of defining "Mix-ins" where an operation is defined
against a set of specific attributes (captured with Java interface or
data structure?) and then we can make a wrapper for each feature that
has the correct attributes; this would separate the definition of
operation code from the identification of types where the operation can
be used.
You seem to be thinking of a Java implementation of a Data Product
Specification (19131), which is unique because 19131 exists to help people
write human-readable data product descriptions. Essentially, you have a
pre-existing registry of well-defined Features and another registry of
well-defined properties. A data product is specified by requiring the presence
of items from these registries. So, you can say "the MODIS fire product
contains a raster coverage which is a fire product classification, and a
fire point coverage." The raster classification and the fire point
coverages are defined in external registries, so you know the meanings of
the values in the various attributes. You don't know how these data are
divided into attributes and operations but you know what conceptual
entities the dataset must contain, so the human reader knows they can use
it.

It sounds like you want to be able to write code to manipulate and query a
Java Data Product Specification, and write adapters from a particular
incarnation of that data product to the specification.

I think it's a good idea. I think you can make a Data Product Mapper which
could offer the human user the opportunity to connect concepts from the
data product specification to the actual data in the data store.
Presumably they could save a particular mapping for later use, and perhaps
trade it with their friends. I'm pretty sure that automated mapping would
be hard.
Post by Jody Garnett
Post by Bryce L Nordgren
Righto. I need to delve into archives w.r.t. Expression. Gotcha.
Post by Jody Garnett
The following page should be sufficient (the long and short of it is
that expression is starting to work against features, featuretypes and
- http://docs.codehaus.org/display/GEOTOOLS/Expression+Improvements
Ow my brain hurts. How can you call an operation with this system? Has
the <Property/> tag been appropriated so it only represents attributes?
What is this Expression interface? Do I have to access data with XML
snippets or can I write code?

In the GeoAPI Feature interface, what is putClientProperty? Does that add
a named property to the feature or does it set the value? Does setting the
value of an operation make any sense? Would that mean setting a functor
object to actually perform the operation?

I guess let me start small and slow: how do I call an operation if my
starting point is an org.opengis.feature.Feature?

Mamma, are we there yet? :)
Post by Jody Garnett
Bryce have you looked at the GeoAPI Feature Model recently? We have Complex
(the supertype of Feature) which is very similar to your ISO Record.
Mostly we are breaking apart due to problems with ISO use of Name (and
TypeName literally being the super of Record or some madness).
Just did. Did you write down the tags on the truck that just hit me?
Post by Jody Garnett
I see; so rather than use Complex (or GeoAPI Record) you would back
yourself onto another dynamic type system - namely JDBC ResultSet (and
ResultSetMetaData) or the different table models.
The interface should stay the same. I'd say it's more a matter of what a
particular implementation uses as a data store.
Post by Jody Garnett
Well if you want have a look at RasterSymbolizer (the point of contact
between coverage and expression) and then we can have a nice IRC chat.
I think I need coffee. :) Learning sucks.

Bryce
Rob Atkinson
2006-11-14 20:30:32 UTC
Permalink
OK - you've got the conversation to the point I was making in

http://docs.codehaus.org/pages/viewpage.action?pageId=62876

now - that roadmap is just a first cut that needs to be reviewed by the
implementers, but I was trying to capture the nature of progress towards
the solution from a business perspective.

Can you _please_ review this and suggest alternative milestones towards
implementation - for example, driving operations first.

I think that's good. It's all critical work.

The reason this is worth our while is that folks in the UK Met. Office
may well pick up on this thread to support further development at some
stage - given that they need the ultimate functionality eventually. I'm
trying to keep a coherent path in view as things happen.

Regards

Rob Atkinson
Post by Bryce L Nordgren
Post by Jody Garnett
Post by Bryce L Nordgren
We might be able to make Coverage extend Feature. We might also just want
to provide a coverage implementation of the Feature interface. The second
one might be more in line with the spirit of 19100. I haven't really
resolved this issue in my own mind yet.
Post by Jody Garnett
I wonder w/ respect to Coverage extending Feature if they are just
advertising a few attributes (such as bounds?)
Post by Bryce L Nordgren
They are advertising a few _Properties_. Some properties are attributes
(rangeType, extent, commonPointRule), some properties are operations (list,
select, find, evaluate, evaluateInverse).
Post by Jody Garnett
Post by Bryce L Nordgren
We could make a POJO&lt;-&gt;Feature adapter which exposes public fields as
Feature attributes and public method signatures as Feature operations.
Then we just write Coverage as a POJO and adapt it into the Feature
interface. But we'd have to allow run-time extension. I think this is why
I was looking at EMF. I'll have to dwell on this one.
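The POJO&lt;-&gt;Feature adapter idea could look something like the reflection sketch below. Every name here (PojoFeatureAdapter, ToyCoverage) is invented for illustration; nothing in GeoTools is assumed.

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.util.LinkedHashMap;
import java.util.Map;

public class PojoFeatureAdapter {
    private final Object pojo;
    PojoFeatureAdapter(Object pojo) { this.pojo = pojo; }

    // Public fields become Feature "attributes".
    Map<String, Object> attributes() throws IllegalAccessException {
        Map<String, Object> attrs = new LinkedHashMap<>();
        for (Field f : pojo.getClass().getFields())
            attrs.put(f.getName(), f.get(pojo));
        return attrs;
    }

    // Public method signatures become Feature "operations".
    Object invoke(String operation, Object... args) throws Exception {
        for (Method m : pojo.getClass().getMethods())
            if (m.getName().equals(operation) && m.getParameterCount() == args.length)
                return m.invoke(pojo, args);
        throw new NoSuchMethodException(operation);
    }

    // A toy "coverage" POJO: one public field, one public method.
    public static class ToyCoverage {
        public String rangeType = "temperature";
        public double evaluate(double x, double y) { return x * y; }
    }

    public static void main(String[] args) throws Exception {
        PojoFeatureAdapter f = new PojoFeatureAdapter(new ToyCoverage());
        System.out.println(f.attributes());               // {rangeType=temperature}
        System.out.println(f.invoke("evaluate", 2.0, 3.0)); // 6.0
    }
}
```

Run-time extension (adding operations not present on the POJO) is exactly what this simple reflection approach cannot do, which is presumably where EMF would come in.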
I have been thinking about that (and you can see the work occurring on
the various Pojo datastores); however, I am also attracted to the earlier
suggestion of defining "mix-ins", where an operation is defined
against a set of specific attributes (captured with a Java interface or
data structure?) and then we can make a wrapper for each feature that
has the correct attributes. This would separate the definition of
operation code from the identification of types where the operation can
be used.
You seem to be thinking of a Java implementation of a Data Product
Specification (19131), which is unique because 19131 exists to help people
write human-readable data product descriptions. Essentially, you have a
pre-existing registry of well-defined Features and another registry of well
defined properties. A data product is specified by requiring the presence
of items from these registries. So, you can say "the MODIS fire product
contains a raster coverage which is a fire product classification, and a
fire point coverage." The raster classification and the fire point
coverages are defined in external registries, so you know the meanings of
the values in the various attributes. You don't know how these data are
divided into attributes and operations but you know what conceptual
entities the dataset must contain, so the human reader knows they can use
it.
It sounds like you want to be able to write code to manipulate and query a
Java Data Product Specification, and write adapters from a particular
incarnation of that data product to the specification.
I think it's a good idea. I think you can make a Data Product Mapper which
could offer the human user the opportunity to connect concepts from the
data product specification to the actual data in the data store.
Presumably they could save a particular mapping for later use, and perhaps
trade it with their friends. I'm pretty sure that automated mapping would
be hard.
Post by Jody Garnett
Post by Bryce L Nordgren
Righto. I need to delve into archives w.r.t. Expression. Gotcha.
The following page should be sufficient (the long and short of it is
that expression is starting to work against features, featuretypes and
- http://docs.codehaus.org/display/GEOTOOLS/Expression+Improvements
Ow my brain hurts. How can you call an operation with this system? Has
the <Property/> tag been appropriated so it only represents attributes?
What is this Expression interface? Do I have to access data with XML
snippets or can I write code?
In the GeoAPI Feature interface, what is putClientProperty? Does that add
a named property to the feature or does it set the value? Does setting the
value of an operation make any sense? Would that mean setting a functor
object to actually perform the operation?
I guess let me start small and slow: how do I call an operation if my
starting point is an org.opengis.feature.Feature?
Mamma, are we there yet? :)
Post by Jody Garnett
Bryce have you looked at GeoAPI Feature Model recently? We have Complex
(the supertype of Feature) and is very similar to your ISO Record.
Mostly we are breaking apart due to problems with ISO use of Name (and
TypeName literally being the super of Record, or some madness).
Just did. Did you write down the tags on the truck that just hit me?
Post by Jody Garnett
I see, so rather than use Complex (or GeoAPI Record) you would back
yourself onto another dynamic type system - namely JDBC ResultSet (and
ResultSetMetadata) or the different table models.
The interface should stay the same. I'd say it's more a matter of what a
particular implementation uses as a data store.
Post by Jody Garnett
Well if you want have a look at RasterSymbolizer (the point of contact
between coverage and expression) and then we can have a nice IRC chat.
I think I need coffee. :) Learning sucks.
Bryce
Jody Garnett
2006-11-14 21:00:00 UTC
Permalink
Summary:
- Bryce is looking at Filter / Expression and the Feature Model
- Justin: can you help answer some questions, he will have more than I
can keep up with
- Simone: can you help us understand RasterSymbolizer, in particular how
it knows to call an operation to extract values; and does it use
Expression to do any of that?
Post by Bryce L Nordgren
Post by Jody Garnett
I wonder w/ respect to Coverage extending Feature if they are just
advertising a few attributes (such as bounds?)
They are advertising a few _Properties_. Some properties are attributes
(rangeType, extent, commonPointRule), some properties are operations (list,
select, find, evaluate, evaluateInverse).
lol - I stand corrected. Or maybe by the stage crawl.
Post by Bryce L Nordgren
Post by Jody Garnett
Post by Bryce L Nordgren
We could make a POJO&lt;-&gt;Feature adapter which exposes public fields as
...Then we just write Coverage as a POJO and adapt it into the Feature
interface.
I am pretty sure they were just after a quick way to bridge the gap,
formally having coverage extend feature seems a better option.
Let's move on.
Post by Bryce L Nordgren
You seem to be thinking of a Java implementation of a Data Product
Specification (19131), which is unique because 19131 exists to help people
write human-readable data product descriptions.
I was casting my mind back to a suggestion made back in February, and then
using the words from Sun's "Fortress" language I learned about at
OOPSLA, i.e. define operations in reference to "Traits"; a Trait lists
methods and required attributes etc...
Post by Bryce L Nordgren
Essentially, you have a pre-existing registry of well-defined Features and another registry of well
defined properties. A data product is specified by requiring the presence
of items from these registries. So, you can say "the MODIS fire product
contains a raster coverage which is a fire product classification, and a
fire point coverage." The raster classification and the fire point
coverages are defined in external registries, so you know the meanings of
the values in the various attributes. You don't know how these data are
divided into attributes and operations but you know what conceptual
entities the dataset must contain, so the human reader knows they can use
it.
Interesting; I need to balance what you say with what RobA is always
talking about (there is considerable overlap). Bah I need to take one of
you guys out for beer and sort out what ISO numbers mean what, and which
ones we care about.
Post by Bryce L Nordgren
It sounds like you want to be able to write code to manipulate and query a
Java Data Product Specification, and write adapters from a particular
incarnation of that data product to the specification.
I think it's a good idea. I think you can make a Data Product Mapper which
could offer the human user the opportunity to connect concepts from the
data product specification to the actual data in the data store.
Presumably they could save a particular mapping for later use, and perhaps
trade it with their friends. I'm pretty sure that automated mapping would
be hard.
I am hoping that it would not be a question of automatic mapping; only
of mapping in an operation when specific properties are available
(specific to the point of exact match? or specific in terms of derived
attribute? The answer is that the "Trait" probably has to let us know the
rules of where it can be applied).
Post by Bryce L Nordgren
Post by Jody Garnett
The following page should be sufficient (the long and short of it is
that expression is starting to work against features, featuretypes and
- http://docs.codehaus.org/display/GEOTOOLS/Expression+Improvements
Ow my brain hurts. How can you call an operation with this system? Has
the <Property/> tag been appropriated so it only represents attributes?
What is this Expression interface? Do I have to access data with XML
snippets or can I write code?
a) your poor brain
b) not sure you can call an operation with this system (it is only used
to filter out content, which you could then call an operation with)
c) the property tag has probably been appropriated to mean data access
in the context of filtering
d) the expression interface is what we went to great lengths to preserve
with our feature model work (being sure that the same xpath could be
used to navigate the object model and the xml model)
e) you can write code, a BNF is also available

This spec matters as it is also used for:
a) metadata query (by catalog specification)
b) feature model property constraints (ie restrictions)
c) style layer descriptor, both for engaging rules (ie filter) and
extracting values (expression)
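Point (b) in miniature: a filter only selects content, and any operation call happens afterwards on what was selected. The types below are invented stand-ins, not the real GeoTools Filter/Expression interfaces.

```java
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class FilterThenInvoke {
    record Feature(String name, double area) {}

    // An "expression" extracts a value from a feature; a "filter" is just
    // a boolean-valued expression over those extracted values.
    static final Function<Feature, Double> AREA = Feature::area;

    static List<String> selectThenInvoke(List<Feature> features, double threshold) {
        Predicate<Feature> filter = f -> AREA.apply(f) > threshold;
        return features.stream()
                .filter(filter)          // step 1: filter out content
                .map(Feature::name)      // step 2: call an operation on each survivor
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Feature> features = List.of(new Feature("a", 5), new Feature("b", 50));
        System.out.println(selectThenInvoke(features, 10)); // [b]
    }
}
```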
Post by Bryce L Nordgren
In the GeoAPI Feature interface, what is putClientProperty? Does that add
a named property to the feature or does it set the value? Does setting the
value of an operation make any sense? Would that mean setting a functor
object to actually perform the operation?
putClientProperty is for exactly what it says in the javadoc: giving
programmers a place to store their crap, so we do not see tons of code
with Map&lt;FeatureType,Object&gt;.
Post by Bryce L Nordgren
I guess let me start small and slow: how do I call an operation if my
starting point is an org.opengis.feature.Feature?
Yeah! This is a discussion I have wanted to have with you since February:
AS IT STANDS NOW
------------------------
1. grab the feature type from the feature
2. navigate through the model to find the operation you want for that
feature
3. invoke the operation with the feature as the first parameter, and the
remaining arguments following

WHAT IT SHOULD BE
--------------------------
1. grab the feature
2. feature.invoke( operationName, arg1, arg2, ... )
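Both styles in miniature. The classes below are toy stand-ins, not the actual GeoAPI interfaces; they only exist to contrast the two call sites.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiFunction;

public class InvokeStyles {
    // AS IT STANDS NOW: operations live on the type.
    static class FeatureType {
        Map<String, BiFunction<Feature, Object[], Object>> operations = new HashMap<>();
    }
    static class Feature {
        FeatureType type = new FeatureType();
        // WHAT IT SHOULD BE: a convenience method on the feature itself.
        Object invoke(String name, Object... args) {
            return type.operations.get(name).apply(this, args);
        }
    }

    public static void main(String[] args) {
        Feature feature = new Feature();
        feature.type.operations.put("evaluate",
            (f, a) -> (double) a[0] + (double) a[1]);

        // Current three-step dance: type -> operation -> invoke.
        Object viaType = feature.type.operations.get("evaluate")
                .apply(feature, new Object[] {1.0, 2.0});

        // Proposed one-liner.
        Object viaFeature = feature.invoke("evaluate", 1.0, 2.0);

        System.out.println(viaType + " " + viaFeature); // 3.0 3.0
    }
}
```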
Post by Bryce L Nordgren
Mamma, are we there yet? :)
Nope, we were waiting for you.
Post by Bryce L Nordgren
Post by Jody Garnett
Bryce have you looked at GeoAPI Feature Model recently? We have Complex
(the supertype of Feature) and is very similar to your ISO Record.
Mostly we are breaking apart due to problems with ISO use of Name (and
TypeName literally being the super of Record, or some madness).
Just did. Did you write down the tags on the truck that just hit me?
Better I know the driver, ... Justin are you around?
Post by Bryce L Nordgren
Post by Jody Garnett
I see, so rather than use Complex (or GeoAPI Record) you would back
yourself onto another dynamic type system - namely JDBC ResultSet (and
ResultSetMetadata) or the different table models.
The interface should stay the same. I'd say it's more a matter of what a
particular implementation uses as a data store.
At some point we need to make the information available to others; the
question is in what manner we do so.
Post by Bryce L Nordgren
Post by Jody Garnett
Well if you want have a look at RasterSymbolizer (the point of contact
between coverage and expression) and then we can have a nice IRC chat.
I think I need coffee. :) Learning sucks.
We can always cheat: Simone should have looked at RasterSymbolizer;
perhaps he can answer questions on the subject.

Thanks again Bryce,
Jody
Bryce L Nordgren
2006-11-14 23:43:44 UTC
Permalink
Source:
http://docs.codehaus.org/pages/viewpage.action?pageId=62876

I don't know anything about goals 1 & 2.

Goal 3 seems appropriately positioned.

Prologue to the rest:
"Features" and "Coverages" ala ISO do not seem to correspond to that which
is served by a Web Feature Service / Web Coverage Service. In particular,
WCS/WFS divide the problem into the Raster/Vector problem spaces, and ISO
divides the problem into single-valued/multi-valued problem spaces. A WFS,
serving a particular feature, performs many of the functions which belong
to a DiscreteCoverage in the ISO world (e.g., spatially indexed lookup of
values from a collection of geometry/value pairs, where the values are
homogeneous within the collection; geometry=defaultGeometry;
value=everything else). I remain blissfully ignorant of ISO's efforts to adopt WFS into
their framework (19141, I think).

Short version: if I were pressed to describe what a WFS is, I'd have to say
it is a discrete coverage implementation using Web Services technology. A
WCS would then be an implementation of DiscreteGridPointCoverage.
Messy, because this implies a parent-child relationship
between WFS &amp; WCS.

Goal 4:
Not sure I understand the main point. Why limit ourselves to POJO
databases? Conversely, if we can create featuretypes from inspection of an
arbitrary JDBC data source now, why does this not carry over to POJO
databases? Why do POJO databases require an extra abstraction layer
instead of just a driver? I'm not sure I "get it" enough to ask anything
intelligent.

Goal 5:
Unsure as to the proposal. Is this an addition to a WFS which basically
allows the server to "cascade" to a WCS? Are we trying to wrap the bare
bones raster data with GML representative of a Feature possessing the
characteristics of a 19123 Coverage? Should we also consider repackaging
the whole shootin' match into GML: each returned pixel/grid cell gets its
own record in the returned GML (or shapefile)? In ISO speak, someone might
want to treat DiscreteGridPointCoverage data as if it were a plain
DiscreteCoverage.

Goal 6:
This is the Holy Grail! I know exactly whazzup with this one! For some
thoughts on how vertical/temporal subsetting may be handled, see the
analysis of WMS 1.3.0 on the Multidimensional WCS page. They've provided
for this functionality
(http://docs.codehaus.org/display/GEOS/Multidimensional+WCS). Perhaps future
versions of WMS/WFS will adopt the same strategy? I'm not sure you're
going to be able to use a WFS to query a WCS based on the grid cell values
unless you adopt a cascading mentality. The WFS is going to have to hide
the WCS entirely, and can't just include a link so the user can get the
data directly.

One minor point: if it's served by a WCS, it's a discrete grid point
coverage. No continuous coverages are served over the net.

Bryce
Rob Atkinson
2006-11-15 11:55:51 UTC
Permalink
Thanks Bryce,

comments and clarifications inline.
Post by Rob Atkinson
http://docs.codehaus.org/pages/viewpage.action?pageId=62876
I don't know anything about goals 1 & 2.
These relate to discussions already underway about handling new types.
Post by Rob Atkinson
Goal 3 seems appropriately positioned.
"Features" and "Coverages" ala ISO do not seem to correspond to that which
is served by a Web Feature Service / Web Coverage Service.
Yes - this is true. It's a function of a parochial approach to WCS. WCS
also had a historic, arbitrary refusal to inherit from the emerging ideas of
an OWS_common at the time. One reason to say that if we want to do arbitrary
coverages, we'd be better off starting with WFS and treating WCS as a
"convenience API" for 2D raster grids.
Post by Rob Atkinson
In particular,
WCS/WFS divide the problem into the Raster/Vector problem spaces, and ISO
divides the problem into single-valued/multi-valued problem spaces. A WFS,
serving a particular feature, performs many of the functions which belong
to a DiscreteCoverage in the ISO world.
Feature collections may not represent a coverage - there is no
requirement for a mapping from a domain to a range, but this may be
splitting hairs in practice.
Post by Rob Atkinson
(e.g., spatially indexed lookup of
values from a collection of range/value pairs, where the values are
homogeneous within the collection; range=defaultGeometry; value=everything
else). I remain blissfully ignorant of ISO's efforts to adopt WFS into
their framework (19141, I think).
Short version: if I were pressed to describe what a WFS is, I'd have to say
it is a discrete coverage implementation using Web Services technology. A
WCS would then be an implementation of DiscreteGridPointCoverage.
Messy, because this implies a parent-child relationship
between WFS &amp; WCS.
WCS could also be seen as implementing a
ContinuousQuadrilateralGridCoverage, but not exposing the interpolation
function?
Post by Rob Atkinson
Not sure I understand what is the main point. Why limit ourselves to POJO
databases? Conversely, if we can create featuretypes from inspection of an
arbitrary JDBC data source now, why does this not carry over to POJO
databases? Why do POJO databases require an extra abstraction layer
instead of just a driver? I'm not sure I "get it" enough to ask anything
intelligent.
OK - there are two kinda non-obvious reasons for this. One is that it's
a separate set of business goals that shares the need for operations
etc.; but also because, I hope, we can make POJOs to implement coverage
operations and inject them using this mechanism, allowing us to extend
the types of coverages supported.
Post by Rob Atkinson
Unsure as to the proposal. Is this an addition to a WFS which basically
allows the server to "cascade" to a WCS? Are we trying to wrap the bare
bones raster data with GML representative of a Feature possessing the
characteristics of a 19123 Coverage?
yes
Post by Rob Atkinson
Should we also consider repackaging
the whole shootin' match into GML: each returned pixel/grid cell gets its
own record in the returned GML (or shapefile)? In ISO speak, someone might
want to treat DiscreteGridPointCoverage data as if it were a plain
DiscreteCoverage.
we might want to, but at the moment I have seen no strong drivers for
this. But we do want to shift the rest of the metadata about coverages
through processing chains.
Post by Rob Atkinson
This is the Holy Grail! I know exactly whazzup with this one! For some
thoughts on how vertical/temporal subsetting may be handled, see the
analysis of WMS 1.3.0 on the Multidimensional WCS page. They've provided
for this functionality. (
http://docs.codehaus.org/display/GEOS/Multidimensional+WCS) Perhaps future
versions of WMS/WFS will adopt the same strategy? I'm not sure you're
going to be able to use a WFS to query a WCS based on the grid cell values
unless you adopt a cascading mentality. The WFS is going to have to hide
the WCS entirely, and can't just include a link so the user can get the
data directly.
I think I agree - WCS 1.0 can be used only for subsetting a small
subclass of coverage types we might enable. Maybe a WCS 2.0+ will be
more flexible one day.
Post by Rob Atkinson
One minor point: if it's served by a WCS, it's a discrete grid point
coverage. No continuous coverages are served over the net.
But we want to free ourselves of this very arbitrary limitation without
having to redefine WCS, by using WFS, which is perfectly capable if we
can invoke operations.
Post by Rob Atkinson
Bryce
Bryce L Nordgren
2006-11-15 17:28:48 UTC
Permalink
Post by Rob Atkinson
Post by Bryce L Nordgren
Short version: if I were pressed to describe what a WFS is, I'd have to say
it is a discrete coverage implementation using Web Services technology. A
WCS would then be an implementation of DiscreteGridPointCoverage.
Messy, because this implies a parent-child relationship
between WFS &amp; WCS.
WCS could also be seen as implementing a
ContinuousQuadrilateralGridCoverage, but not exposing the interpolation
function ?
My first statement was uninformed, as my assumption was that WCS didn't
support resampling. Delving into it, WCS allows specification of 5
interpolation functions, the default being nearest neighbor. WCS also
allows for a server or client to specify "no interpolation". I think it
would be unfair to say that it doesn't expose the interpolation function.

Let me modify my previous statement to:
+ WCS implements ContinuousQGC for requests which resample
+ WCS also implements DiscreteGPC for requests which subset with no
resampling.

These are only "approximate" implementations: neither supports
evaluation at a single point unless you fudge by making a grid
with one point in it, and the coverage spec doesn't include an
evaluate(GM_PointGrid) signature.
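The one-point-grid fudge in code form: a point evaluation expressed as a single-cell grid request. The method names and the toy coverage field are invented for illustration; no WCS API is assumed.

```java
public class OnePointGrid {
    // Stand-in for a WCS-style grid subset: returns values on a
    // width x height grid anchored at (x0, y0) with the given cell size.
    static double[][] getGrid(double x0, double y0, int width, int height, double cell) {
        double[][] out = new double[height][width];
        for (int r = 0; r < height; r++)
            for (int c = 0; c < width; c++)
                out[r][c] = sampleCoverage(x0 + c * cell, y0 + r * cell);
        return out;
    }

    // Toy continuous field standing in for the underlying coverage.
    static double sampleCoverage(double x, double y) { return x + y; }

    // evaluate(point) := a grid request with exactly one cell in it.
    static double evaluate(double x, double y) {
        return getGrid(x, y, 1, 1, 1.0)[0][0];
    }

    public static void main(String[] args) {
        System.out.println(evaluate(2.0, 3.0)); // 5.0
    }
}
```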

Sorry for being so pedantic. In order for me to understand, I need a
single vocabulary to describe everything from standards to implementation.
Post by Rob Atkinson
Post by Bryce L Nordgren
Not sure I understand the main point. Why limit ourselves to POJO
databases? Conversely, if we can create featuretypes from inspection of an
arbitrary JDBC data source now, why does this not carry over to POJO
databases? Why do POJO databases require an extra abstraction layer
instead of just a driver? I'm not sure I "get it" enough to ask anything
intelligent.
OK - there are two kinda non-obvious reasons for this. One is that it's
a separate set of business goals that shares the need for operations
etc.; but also because, I hope, we can make POJOs to implement coverage
operations and inject them using this mechanism, allowing us to extend
the types of coverages supported.
All right. I think I see. Reading the proposal again, it looks like you
want to write an adapter between the new feature model and the Hibernate
introspection mechanism?

I had an idea like this long ago, only the adapters I had in mind were
between the new feature model and...
+ EMF
+ standard Java Beans Introspection

I think everyone can coexist happily. My question is where does this fit
in the big picture? Are these really separate adapters or data source
plugins?
Post by Rob Atkinson
Post by Bryce L Nordgren
Unsure as to the proposal. Is this an addition to a WFS which basically
allows the server to "cascade" to a WCS? Are we trying to wrap the bare
bones raster data with GML representative of a Feature possessing the
characteristics of a 19123 Coverage?
yes
How far does this extend? For instance, do you intend to generate GML
which indicates the presence of Coverage operations, or are these
operations assumed to be present since you're describing a coverage? Also,
do you intend to advertise the operations which are specified in 19123, or
the operations which are actually implemented in a WCS, and therefore
available?
Post by Rob Atkinson
Post by Bryce L Nordgren
Should we also consider repackaging
the whole shootin' match into GML: each returned pixel/grid cell gets its
own record in the returned GML (or shapefile)? In ISO speak, someone might
want to treat DiscreteGridPointCoverage data as if it were a plain
DiscreteCoverage.
we might want to, but at the moment I have seen no strong drivers for
this. But we do want to shift the rest of the metadata about coverages
through processing chains.
The coverage schema itself is pretty light on metadata and heavy on
operations. Are you talking mainly about exposing the Domain and Range
associations (WCS: domainSet; rangeSet from describeCoverage)?

Also, what counts as coverage (as opposed to server) metadata? In terms of
CRS, do you count only the native CRS of the coverage, or all the CRSes
that the server will reproject to? The last one almost seems like server
metadata to me. But of course, how you look at it depends on what you're
using the metadata for.
Post by Rob Atkinson
Post by Bryce L Nordgren
This is the Holy Grail! I know exactly whazzup with this one! For some
thoughts on how vertical/temporal subsetting may be handled, see the
analysis of WMS 1.3.0 on the Multidimensional WCS page. They've provided
for this functionality
(http://docs.codehaus.org/display/GEOS/Multidimensional+WCS). Perhaps future
versions of WMS/WFS will adopt the same strategy? I'm not sure you're
going to be able to use a WFS to query a WCS based on the grid cell values
unless you adopt a cascading mentality. The WFS is going to have to hide
the WCS entirely, and can't just include a link so the user can get the
data directly.
I think I agree - WCS 1.0 can be used only for subsetting a small
subclass of coverage types we might enable. Maybe a WCS 2.0+ will be
more flexible one day.
19123 allows queries by values in the range, but does not allow anything so
flexible as the Filter deal. You can provide one tuple of range values and
it'll return all the "matches", where the coverage decides what is meant by
a "match". No expressions allowed. Although I can certainly see that it
would be useful to provide an implementation where you could set a filter
on a coverage, then from that point on, "evaluateInverse" obeyed the
filter.
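The contrast between 19123-style evaluateInverse (match one range tuple, coverage decides what "match" means) and the filtered variant suggested above can be sketched as follows. The DiscreteCoverage here is a toy, not the GeoAPI type; here "match" simply means exact equality.

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class InverseQueries {
    record Entry(double x, double y, double value) {}

    static List<Entry> data = List.of(
        new Entry(0, 0, 10), new Entry(1, 0, 20), new Entry(0, 1, 10));

    // 19123 flavour: one range tuple in, all exact matches out.
    static List<Entry> evaluateInverse(double value) {
        return data.stream()
                .filter(e -> e.value() == value)
                .collect(Collectors.toList());
    }

    // Filtered flavour: an arbitrary predicate over the range values.
    static List<Entry> evaluateInverse(Predicate<Double> filter) {
        return data.stream()
                .filter(e -> filter.test(e.value()))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(evaluateInverse(10).size());          // 2
        System.out.println(evaluateInverse(v -> v > 5).size());  // 3
    }
}
```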

Let me see...I just looked up what you can do with WCS 1.0 and the spec
confuses me. Can anyone explain the optional _PARAMETER_ argument to
GetCoverage to me? My issue is that the examples in Table 9 on Page 32
seem contradictory. "band=1,3,5" means return only the data in bands 1, 3,
and 5; but "age=0/18" means return only those points where the value of the
"age" field is between 0 and 18? Which is it, an index into a "range
vector" or limits on the values in one particular element of that vector?

I think this makes a difference as to whether you can just provide a URL
for the user to go get the data themselves or not.
Post by Rob Atkinson
Post by Bryce L Nordgren
One minor point: if it's served by a WCS, it's a discrete grid point
coverage. No continuous coverages are served over the net.
But we want to free ourselves of this very arbitrary limitation. Without
having to redefine WCS, by using WFS which is perfectly capable, if we
can invoke operations.
When I said this, I think my mindset was that continuous coverages must be
discretized before they can be served. Once this happens, they're no
longer continuous. Perhaps it would be better to say that all WCS-served
data are the result of an evaluate() operation applied to each point on a
regular grid in the domain. This is a sampling of the coverage and not the
coverage itself. :)

In terms of coverage IO, I think all coverages on disk are going to be
initially represented with DiscreteGridPointCoverages. If someone wants to
interpolate, I think it logical to provide a constructor to ContinuousQGC
which takes a DiscreteGPC as an initializer.
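The suggested constructor might look like this: a continuous coverage built from a discrete grid-point coverage plus an interpolation rule (bilinear here, just as an example). The class names follow the 19123 abbreviations used above, but the API itself is invented, and this sketch only evaluates in the grid interior.

```java
public class ContinuousFromDiscrete {
    static class DiscreteGPC {
        final double[][] grid; // value at integer grid points (row = y, col = x)
        DiscreteGPC(double[][] grid) { this.grid = grid; }
    }

    static class ContinuousQGC {
        private final DiscreteGPC source;
        // The constructor takes a DiscreteGPC as its initializer.
        ContinuousQGC(DiscreteGPC source) { this.source = source; }

        // Bilinear interpolation between the four surrounding grid points.
        double evaluate(double x, double y) {
            int c = (int) Math.floor(x), r = (int) Math.floor(y);
            double fx = x - c, fy = y - r;
            double[][] g = source.grid;
            return g[r][c] * (1 - fx) * (1 - fy) + g[r][c + 1] * fx * (1 - fy)
                 + g[r + 1][c] * (1 - fx) * fy + g[r + 1][c + 1] * fx * fy;
        }
    }

    public static void main(String[] args) {
        DiscreteGPC discrete = new DiscreteGPC(new double[][] {{0, 1}, {2, 3}});
        ContinuousQGC continuous = new ContinuousQGC(discrete);
        System.out.println(continuous.evaluate(0.5, 0.5)); // 1.5
    }
}
```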

Coffee. Need. Coffee.

Bryce
Rob Atkinson
2006-11-15 22:18:34 UTC
Permalink
Post by Bryce L Nordgren
+ WCS implements ContinuousQGC for requests which resample
+ WCS also implements DiscreteGPC for requests which subset with no
resampling.
Right on.
Post by Bryce L Nordgren
These are only "approximate" implementations: Neither implementation
supports evaluation at a single point unless you fudge by making a grid
with one point in it, and the coverage spec doesn't include an
evaluate(GM_PointGrid) signature.
Sorry for being so pedantic. In order for me to understand, I need a
single vocabulary to describe everything from standards to implementation.
I'm with you - I think the WCS spec guys were bloody-minded to refuse to
take the ISO model into account (they knew everything you could possibly
know about 2D grids, apparently ;-) ).
This was before OGC was officially implementing ISO specs, but they
should have at least fixed it in the intervening years. Volunteer time
&lt;&gt; mandate again.
Post by Bryce L Nordgren
Post by Rob Atkinson
Post by Bryce L Nordgren
Not sure I understand the main point. Why limit ourselves to POJO
databases? Conversely, if we can create featuretypes from inspection of an
arbitrary JDBC data source now, why does this not carry over to POJO
databases? Why do POJO databases require an extra abstraction layer
instead of just a driver? I'm not sure I "get it" enough to ask anything
intelligent.
OK - there are two kinda non-obvious reasons for this. One is that it's
a separate set of business goals that shares the need for operations
etc.; but also because, I hope, we can make POJOs to implement coverage
operations and inject them using this mechanism, allowing us to extend
the types of coverages supported.
All right. I think I see. Reading the proposal again, it looks like you
want to write an adapter between the new feature model and the Hibernate
introspection mechanism?
I had an idea like this long ago, only the adapters I had in mind were
between the new feature model and...
+ EMF
+ standard Java Beans Introspection
I think everyone can coexist happily. My question is where does this fit
in the big picture? Are these really separate adapters or data source
plugins?
I'm agnostic - I really wanted to put up the business reqs and
understand where it best fitted into the development path.
Post by Bryce L Nordgren
Post by Rob Atkinson
Post by Bryce L Nordgren
Unsure as to the proposal. Is this an addition to a WFS which basically
allows the server to "cascade" to a WCS? Are we trying to wrap the bare
bones raster data with GML representative of a Feature possessing the
characteristics of a 19123 Coverage?
yes
How far does this extend? For instance, do you intend to generate GML
which indicates the presence of Coverage operations, or are these
operations assumed to be present since you're describing a coverage. Also,
do you intend to advertise the operations which are specified in 19123, or
the operations which are actually implemented in a WCS, and therefore
available?
Good question. In general, during data transfer we indicate the
FeatureType, and pass the values. Operations are assumed to be realised
when marshalling the feature back into an object.
You have raised an interesting issue about partial support for the base
FeatureType's operations. My inclination is to disallow this - it's
cheaper to build it once than to explain and understand the explanations, IMHO.
For added operations, maybe this needs to be done by specialising the
FeatureType, so we don't have to have another mechanism.
It's a pity that derivation by restriction is so clunky that it's hard to
specialise a FeatureType to a crippled version.
Post by Bryce L Nordgren
Post by Rob Atkinson
Post by Bryce L Nordgren
Should we also consider repackaging
the whole shootin' match into GML: each returned pixel/grid cell gets its
own record in the returned GML (or shapefile)? In ISO speak, someone might
want to treat DiscreteGridPointCoverage data as if it were a plain
DiscreteCoverage.
we might want to, but at the moment I have seen no strong drivers for
this. But we do want to shift the rest of the metadata about coverages
through processing chains.
The coverage schema itself is pretty light on metadata and heavy on
operations. Are you talking mainly about exposing the Domain and Range
associations (WCS: domainSet; rangeSet from describeCoverage)?
Partly, but also the record schema, the phenomenon, the sampling method,
etc. - i.e., treat a coverage as an Observation - which is probably correct,
as we have few if any purely "asserted" continuous grids.
Post by Bryce L Nordgren
Also, what counts as coverage (as opposed to server) metadata? In terms of
CRS, do you count only the native CRS of the coverage, or all the CRSes
that the server will reproject to? The last one almost seems like server
metadata to me. But of course, how you look at it depends on what you're
using the metadata for.
We definitely need to follow the coverage exploitation in processing
chains Use Cases a bit further.
Post by Bryce L Nordgren
Post by Rob Atkinson
Post by Bryce L Nordgren
This is the Holy Grail! I know exactly whazzup with this one! For some
thoughts on how vertical/temporal subsetting may be handled, see the
analysis of WMS 1.3.0 on the Multidimensional WCS page. They've provided
for this functionality.
(http://docs.codehaus.org/display/GEOS/Multidimensional+WCS) Perhaps
future versions of WMS/WFS will adopt the same strategy? I'm not sure
you're going to be able to use a WFS to query a WCS based on the grid
cell values unless you adopt a cascading mentality. The WFS is going to
have to hide the WCS entirely, and can't just include a link so the user
can get the data directly.
I think I agree - WCS 1.0 can be used only for subsetting a small
subclass of coverage types we might enable. Maybe a WCS 2.0+ will be
more flexible one day.
19123 allows queries by values in the range, but does not allow anything so
flexible as the Filter deal. You can provide one tuple of range values and
it'll return all the "matches", where the coverage decides what is meant by
a "match". No expressions allowed. Although I can certainly see that it
would be useful to provide an implementation where you could set a filter
on a coverage, then from that point on, "evaluateInverse" obeyed the
filter.
Let me see...I just looked up what you can do with WCS 1.0 and the spec
confuses me. Can anyone explain the optional _PARAMETER_ argument to
GetCoverage to me? My issue is that the examples in Table 9 on Page 32
seem contradictory. "band=1,3,5" means return only the data in bands 1, 3,
and 5; but "age=0/18" means return only those points where the value of the
"age" field is between 0 and 18? Which is it, an index into a "range
vector" or limits on the values in one particular element of that vector?
I think this makes a difference as to whether you can just provide a URL
for the user to go get the data themselves or not.
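For concreteness, here is how the two readings of Table 9 would look as GetCoverage KVP requests (hypothetical server and coverage names; only the meaning of the PARAMETER argument differs):

```text
# Reading 1: PARAMETER as an index into the "range vector"
# (return only bands 1, 3, and 5 of each tuple)
http://example.org/wcs?service=WCS&version=1.0.0&request=GetCoverage
  &coverage=modis_refl&band=1,3,5&...

# Reading 2: PARAMETER as limits on the values of one range element
# (return only points whose "age" value lies between 0 and 18)
http://example.org/wcs?service=WCS&version=1.0.0&request=GetCoverage
  &coverage=census_age&age=0/18&...
```

Under reading 1 a plain URL suffices; under reading 2 the server is doing a value query, which is harder to delegate to the user.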
to be honest, the URL is more likely to be useful for something like
JPIP - a streaming protocol - than a REST-ful WCS; in the latter case I'd
be happy to pack binary encoded data, but for, say, a 1 GB model we'd
need a streaming protocol.
Post by Bryce L Nordgren
Post by Rob Atkinson
Post by Bryce L Nordgren
One minor point: if it's served by a WCS, it's a discrete grid point
coverage. No continuous coverages are served over the net.
But we want to free ourselves of this very arbitrary limitation. Without
having to redefine WCS, by using WFS which is perfectly capable, if we
can invoke operations.
When I said this, I think my mindset was that continuous coverages must be
discretized before they can be served. Once this happens, they're no
longer continuous. Perhaps it would be better to say that all WCS-served
data are the result of an evaluate() operation applied to each point on a
regular grid in the domain. This is a sampling of the coverage and not the
coverage itself. :)
In terms of coverage IO, I think all coverages on disk are going to be
initially represented with DiscreteGridPointCoverages. If someone wants to
interpolate, I think it logical to provide a constructor to ContinuousQGC
which takes a DiscreteGPC as an initializer.
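The constructor idea above can be sketched as follows. This is a toy, not GeoAPI: the class and method names (DiscreteGPC, ContinuousQGC, evaluate) are stand-ins, and the interpolation choice (bilinear) is just one plausible option.

```java
// Sketch only: a "continuous" coverage view built from a discrete grid of
// samples, as suggested above. Names are invented for illustration.
public class ContinuousFromDiscrete {

    /** Minimal stand-in for a DiscreteGridPointCoverage: samples on a unit grid. */
    static class DiscreteGPC {
        final double[][] samples;            // samples[row][col]
        DiscreteGPC(double[][] samples) { this.samples = samples; }
        double valueAt(int row, int col) { return samples[row][col]; }
    }

    /** Continuous view: evaluate() interpolates bilinearly between grid points. */
    static class ContinuousQGC {
        private final DiscreteGPC source;
        ContinuousQGC(DiscreteGPC source) { this.source = source; }

        double evaluate(double x, double y) {
            int c0 = (int) Math.floor(x), r0 = (int) Math.floor(y);
            double fx = x - c0, fy = y - r0;
            double v00 = source.valueAt(r0, c0),     v01 = source.valueAt(r0, c0 + 1);
            double v10 = source.valueAt(r0 + 1, c0), v11 = source.valueAt(r0 + 1, c0 + 1);
            // bilinear blend of the four surrounding samples
            return v00 * (1 - fx) * (1 - fy) + v01 * fx * (1 - fy)
                 + v10 * (1 - fx) * fy       + v11 * fx * fy;
        }
    }

    public static void main(String[] args) {
        DiscreteGPC grid = new DiscreteGPC(new double[][] { {0, 1}, {2, 3} });
        ContinuousQGC cont = new ContinuousQGC(grid);
        System.out.println(cont.evaluate(0.5, 0.5));  // midpoint of the four samples
    }
}
```

A real implementation would also need to handle evaluation at or beyond the grid edge, which this sketch does not.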
OK - you're getting deeper into implementation land than my wee brain
can manage today - please annotate the roadmap with any specific ideas
on an optimal game plan - remember all I'm trying to do is to sort out
the dependencies between already floated ideas, longer term options,
match to business value outcomes and then use this to help people decide
if and how to invest in progress.
Post by Bryce L Nordgren
Coffee. Need. Coffee.
Doppio!
Post by Bryce L Nordgren
Bryce
_______________________________________________
Geotools-devel mailing list
https://lists.sourceforge.net/lists/listinfo/geotools-devel
Bryce L Nordgren
2006-11-15 19:12:34 UTC
Permalink
Post by Jody Garnett
Post by Bryce L Nordgren
It sounds like you want to be able to write code to manipulate and query a
Java Data Product Specification, and write adapters from a particular
incarnation of that data product to the specification.
I think it's a good idea. I think you can make a Data Product Mapper which
could offer the human user the opportunity to connect concepts from the
data product specification to the actual data in the data store.
Presumably they could save a particular mapping for later use, and perhaps
trade it with their friends. I'm pretty sure that automated mapping would
be hard.
I am hoping that it would not be a question of automatic mapping; only
of mapping in an operation when specific properties are available
(specific to the point of exact match? or specific in terms of derived
attribute? answer is the "Trait" probably has to let us know the
rules of where it can be applied).
If we have human-directed mapping first, maybe we can use it enough that we
learn how to write an automated mapper. Right now I don't think it's an
option.
Post by Jody Garnett
a) your poor brain
sniffle.
Post by Jody Garnett
b) not sure you can call an operation with this system (it is only used
to filter out content, which you could then call an operation with)
ah, I'll stop trying then.
Post by Jody Garnett
c) the property tag has probably been appropriated to mean data access
in the context of filtering
d) the expression interface is what we went to great lengths to preserve
with our feature model work (being sure that the same xpath could be
used to navigate the object model and the xml model)
We need a nomenclature standard. ;) Property means any of attribute,
association, or operation unless we're in Filter land, where it means only
attribute. Got it.

Here's the big question: Do you intend for Filter/Expression to operate on
individual range values of a coverage? Or should Filter/Expression be
limited to operating on Coverages themselves (not the spatially varying
data)?

Corollary: Does Filter/Expression operate on "Properties"/attributes which
are Lists/arrays, and how are these arrays indexed?
Post by Jody Garnett
e) you can write code, a BNF is also available
yeah, baby. Looking.
Post by Jody Garnett
Post by Bryce L Nordgren
In the GeoAPI Feature interface, what is putClientProperty? Does that add
a named property to the feature or does it set the value? Does setting the
value of an operation make any sense? Would that mean setting a functor
object to actually perform the operation?
putClientProperty is for exactly what it says in the javadoc - giving
programmers a place to store their crap so we do not see tons of code
with Map&lt;FeatureType,Object&gt;.
How is this different than adding an attribute to the FeatureType (and
subsequently to all Feature instances)? Is this a mechanism intended to
allow freeform annotation of individual Feature Instances? Or to allow
Features with different attributes to share a common FeatureType? Is this
crap part of the Feature and stored with it?

If it's not related to a Feature Property (Attributes, Associations, and
Operations), I think it might be best to find another name for it. The
"crapKubby" comes to mind.

While we're talking about GeoAPI interfaces, can we do a global search and
replace? Opperation -> Operation (and lowercase version of same).
Post by Jody Garnett
Post by Bryce L Nordgren
I guess let me start small and slow: how do I call an operation if my
starting point is an org.opengis.feature.Feature?
AS IT STANDS NOW
------------------------
1. grab the feature type from the feature
2. navigate through the model to find the operation you want for that
feature
3. invoke the operation with the feature as the first parameter, and the
remaining arguments following
Icky.
Post by Jody Garnett
WHAT IT SHOULD BE
--------------------------
1. grab the feature
2. feature.invoke( operationName, arg1, arg2, ... )
Yeah baby! When do we get this? This appears to be an easy score. Is
there hidden complexity which prevents us from writing a Feature.invoke()
method which does the 3-step process in "AS IT STANDS NOW"?
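The "easy score" really is just folding the 3-step process into a convenience method. A minimal sketch, using simplified stand-in interfaces rather than the real GeoAPI ones:

```java
// Sketch of Feature.invoke(): wrap the 3-step lookup
// (type -> operation descriptor -> call) in one convenience method.
// Interfaces here are simplified stand-ins, not the real GeoAPI ones.
import java.util.HashMap;
import java.util.Map;

public class InvokeSketch {

    interface Operation {
        Object invoke(Feature target, Object... args);
    }

    static class FeatureType {
        private final Map<String, Operation> operations = new HashMap<>();
        void addOperation(String name, Operation op) { operations.put(name, op); }
        Operation getOperation(String name) { return operations.get(name); }
    }

    static class Feature {
        private final FeatureType type;
        Feature(FeatureType type) { this.type = type; }

        /** Steps 1-3 from "AS IT STANDS NOW", folded into one call. */
        Object invoke(String operationName, Object... args) {
            Operation op = type.getOperation(operationName);   // steps 1 & 2
            if (op == null)
                throw new IllegalArgumentException("no such operation: " + operationName);
            return op.invoke(this, args);                      // step 3
        }
    }

    public static void main(String[] args) {
        FeatureType type = new FeatureType();
        type.addOperation("describe", (f, a) -> "feature with " + a.length + " args");
        Feature f = new Feature(type);
        System.out.println(f.invoke("describe", 1, 2));  // feature with 2 args
    }
}
```

The only real design question hiding in here is how the operation descriptor is found when FeatureTypes inherit from one another, which the flat map above glosses over.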
Post by Jody Garnett
Post by Bryce L Nordgren
[snip]
Let's table the discussion of tabular data for now...
Post by Jody Garnett
Post by Bryce L Nordgren
Well if you want have a look at RasterSymbolizer (the point of contact
between coverage and expression) and then we can have a nice IRC chat.
I think I need coffee. :) Learning sucks.
We can always cheat; Simone should have looked at RasterSymbolizer -
perhaps he can answer questions on the subject.
RasterSymbolizer is a portrayal concern. We may very well have different
implementations of it for a generic grid point coverage and for the special
case of 2D data. To be efficient, it probably needs to know the
implementation details of a particular coverage implementation. We're
probably not going to have just one.

To be honest, I'm not thinking that far ahead yet. :)

Bryce
Jody Garnett
2006-11-15 21:44:15 UTC
Permalink
Post by Bryce L Nordgren
If we have human-directed mapping first, maybe we can use it enough that we
learn how to write an automated mapper. Right now I don't think it's an
option.
Got it; do something first - and then do automation.
Post by Bryce L Nordgren
Post by Jody Garnett
b) not sure you can call an operation with this system (it is only used
to filter out content, which you could then call an operation with)
ah, I'll stop trying then.
That said I am not *sure* - would need to read the specifications again.
Post by Bryce L Nordgren
Post by Jody Garnett
c) the property tag has probably been appropriated to mean data access
in the context of filtering
d) the expression interface is what we went to great lengths to preserve
with our feature model work (being sure that the same xpath could be
used to navigate the object model and the xml model)
We need a nomenclature standard. ;) Property means any of attribute,
association, or operation unless we're in Filter land, where it means only
attribute. got it.
Here's the big question: Do you intend for Filter/Expression to operate on
individual range values of a coverage? Or should Filter/Expression be
limited to operating on Coverages themselves (not the spatially varying
data)?
And this is where we need somebody to look at RasterSymbolizer. I
*strongly* suspect that they use expression to extract out different
bands from a coverage, produce complicated expressions with sine and
cosine etc on the way to mapping into a visible RGB color space. We
simply must look at RasterSymbolizer before going further (as it is part
of the "landscape" we wish to integrate with).

So for my question; if not using Expression how do you extract values
from a coverage?
Post by Bryce L Nordgren
Corollary: Does Filter/Expression operate on "Properties"/attributes which
are Lists/arrays, and how are these arrays indexed?
Yes it does, you can use xpath to grab out values: i.e. VALUES[12],
VALUES[X], VALUES[@index], or VALUES[X*43+Y]. Of course if a coverage is
defined by an operation then RasterSymbolizer must have a way to let us
get at that information ....
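The indexed-access idea can be illustrated with a toy evaluator: resolve the property name to an array, then apply the index, which may be a literal or a bound variable. This is not the Filter/Expression implementation, and it handles only literal or single-variable indices (not arithmetic like X*43+Y):

```java
// Toy illustration of indexed property access like VALUES[12] or VALUES[X].
// Not the real Filter/Expression machinery.
import java.util.Map;

public class IndexedPropertySketch {

    /** Evaluate "NAME[index]" against properties mapping names to double arrays. */
    static double evaluate(String xpathLike, Map<String, double[]> props,
                           Map<String, Integer> vars) {
        int bracket = xpathLike.indexOf('[');
        String name = xpathLike.substring(0, bracket);
        String idx = xpathLike.substring(bracket + 1, xpathLike.length() - 1);
        // index is either a bound variable (e.g. X) or an integer literal
        int i = vars.containsKey(idx) ? vars.get(idx) : Integer.parseInt(idx);
        return props.get(name)[i];
    }

    public static void main(String[] args) {
        Map<String, double[]> props = Map.of("VALUES", new double[] {10, 20, 30});
        System.out.println(evaluate("VALUES[2]", props, Map.of()));       // 30.0
        System.out.println(evaluate("VALUES[X]", props, Map.of("X", 1))); // 20.0
    }
}
```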
Post by Bryce L Nordgren
Post by Jody Garnett
e) you can write code, a BNF is also available
yeah, baby. Looking.
CSW-2 spec for the BNF, There is a FilterFactory if you want to make
java code examples.
Post by Bryce L Nordgren
Post by Jody Garnett
putClientProperty is for exactly what it says in the javadoc - giving
programmers a place to store their crap so we do not see tons of code
with Map&lt;FeatureType,Object&gt;.
How is this different than adding an attribute to the FeatureType (and
subsequently to all Feature instances)? Is this a mechanism intended to
allow freeform annotation of individual Feature Instances? Or to allow
Features with different attributes to share a common FeatureType? Is this
crap part of the Feature and stored with it?
This crap is part of the FeatureType (at least); The javadocs are very
clear that this is not stored.
Post by Bryce L Nordgren
If it's not related to a Feature Property (Attributes, Associations, and
Operations), I think it might be best to find another name for it. The
"crapKubby" comes to mind.
The putClientProperty and getClientProperty conventions come from Swing
- and the names are kept to make people feel at home.
Post by Bryce L Nordgren
While we're talking about GeoAPI interfaces, can we do a global search and
replace? Opperation -> Operation (and lowercase version of same).
sorry I am a terrible speller.
Post by Bryce L Nordgren
Post by Jody Garnett
AS IT STANDS NOW
------------------------
1. grab the feature type from the feature
2. navigate through the model to find the operation you want for that
feature
3. invoke the operation with the feature as the first parameter, and the
remaining arguments following
Icky.
As I said, I was waiting for your feedback; since Justin and I did not
have a need for operations, we did not feel it wise to chase this one
down until driven by someone with a real problem.
Post by Bryce L Nordgren
Post by Jody Garnett
WHAT IT SHOULD BE
--------------------------
1. grab the feature
2. feature.invoke( operationName, arg1, arg2, ... )
Yeah baby! When do we get this? This appears to be an easy score. Is
there hidden complexity which prevents us from writing a Feature.invoke()
method which does the 3-step process in "AS IT STANDS NOW"?
None; you can make the change yourself. We really were just waiting to
talk to you.
Post by Bryce L Nordgren
Post by Jody Garnett
We can always cheat; Simone should have looked at RasterSymbolizer -
perhaps he can answer questions on the subject.
RasterSymbolizer is a portrayal concern. We may very well have different
implementations of it for a generic grid point coverage and for the special
case of 2D data. To be efficient, it probably needs to know the
implementation details of a particular coverage implementation. We're
probably not going to have just one.
True but it is also a specification; so we want to check what
information they provide us when they request data out of the coverage -
it may give us some clues.
Post by Bryce L Nordgren
To be honest, I'm not thinking that far ahead yet. :)
Well let's check - if it does not do any expression (or operation
madness) we may be able to shelve the work/ideas I have brought out
for your learning pleasure.

Jody
Bryce L Nordgren
2006-11-15 22:37:59 UTC
Permalink
Post by Jody Garnett
Post by Bryce L Nordgren
Here's the big question: Do you intend for Filter/Expression to operate on
individual range values of a coverage? Or should Filter/Expression be
limited to operating on Coverages themselves (not the spatially varying
data)?
And this is where we need somebody to look at RasterSymbolizer. I
*strongly* suspect that they use expression to extract out different
bands from a coverage, produce complicated expressions with sine and
cosine etc on the way to mapping into a visible RGB color space. We
simply must look at RasterSymbolizer before going further (as it is part
of the "landscape" we wish to integrate with).
I did a preliminary overview. I think it works this way:

+ RasterSymbolizer really doesn't do anything other than hold data for the
properties defined by SLD. It's just a data structure.
+ There's a renderer implemented specifically for Grid Coverage 2Ds which
interprets these settings (like any colormaps, etc.) and produces a
portrayed image.

I have to verify this (gotta run out the door soon), but I would bet that
it requires a GridCoverage2D because it knows it's backed by Java 2D
graphics objects. (e.g., Raster, SampleModel, ColorModel, etc.) No one
actually "extracts" data from a coverage to portray it, they just make a
ColorModel and blammo, Bob's your uncle.

If we make a different coverage implementation backed by Multiarray2, we're
going to have a different rendering process (or an adapter to
GridCoverage2D).
Post by Jody Garnett
So for my question; if not using Expression how do you extract values
from a coverage?
According to 19123: Coverage.evaluate(DirectPosition) will return a tuple
of values valid at a specific location. Coverage.list() returns a Set of
all of the Domain-Range associations in the coverage.
Coverage.select(GM_Object, TM_Period) subsets a coverage.
Coverage.find(DirectPosition) returns a list of Domain-Range associations
in order of increasing distance from the supplied point.
Coverage.evaluateInverse(Record) will return a Set of DomainObjects
(positions/geometries) which are associated with matching values in the
Range.

In the OGC model? Dunno.
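Two of those 19123 access methods are easy to picture with a minimal in-memory discrete coverage. A sketch under stated assumptions: Position is a stand-in for DirectPosition, the range is a single double rather than a Record, and names are invented for illustration:

```java
// Minimal in-memory discrete coverage sketching two 19123-style access
// methods: evaluate(position) and evaluateInverse(value).
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

public class DiscreteCoverageSketch {

    /** Stand-in for DirectPosition: a simple (x, y) pair with value equality. */
    record Position(double x, double y) {}

    private final Map<Position, Double> entries = new LinkedHashMap<>();

    void add(Position p, double value) { entries.put(p, value); }

    /** 19123-style evaluate: the range value at a domain position. */
    Double evaluate(Position p) { return entries.get(p); }

    /** 19123-style evaluateInverse: all domain objects matching a range value. */
    Set<Position> evaluateInverse(double value) {
        Set<Position> matches = new LinkedHashSet<>();
        for (Map.Entry<Position, Double> e : entries.entrySet())
            if (e.getValue() == value) matches.add(e.getKey());
        return matches;
    }

    public static void main(String[] args) {
        DiscreteCoverageSketch cov = new DiscreteCoverageSketch();
        cov.add(new Position(0, 0), 5.0);
        cov.add(new Position(1, 0), 7.0);
        cov.add(new Position(1, 1), 5.0);
        System.out.println(cov.evaluate(new Position(1, 0)));  // 7.0
        System.out.println(cov.evaluateInverse(5.0).size());   // 2
    }
}
```

Note how evaluateInverse here does exact matching only, which matches the "one tuple, no expressions" limitation described above.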
Post by Jody Garnett
Yes it does, you can use xpath to grab out values: ie VALUES[12] or
defined by an operation then raster symbolizer must have a way to let us
get at that information ....
I think RasterSymbolizer just needs to hold information about how data
should be portrayed. The renderer needs to get at the data (or use some
library which can.)

Gotta run. Bye bye.

Bryce
Jody Garnett
2006-11-15 23:00:21 UTC
Permalink
Post by Bryce L Nordgren
+ RasterSymbolizer really doesn't do anything other than hold data for the
properties defined by SLD. It's just a data structure.
+ There's a renderer implemented specifically for Grid Coverage 2Ds which
interprets these settings (like any colormaps, etc.) and produces a
portrayed image.
I have to verify this (gotta run out the door soon), but I would bet that
it requires a GridCoverage2D because it knows it's backed by Java 2D
graphics objects. (e.g., Raster, SampleModel, ColorModel, etc.) No one
actually "extracts" data from a coverage to portray it, they just make a
ColorModel and blammo, Bob's your uncle.
I came to a similar conclusion - see separate email.
Post by Bryce L Nordgren
If we make a different coverage implementation backed by Multiarray2, we're
going to have a different rendering process (or an adapter to GridCoverage2D).
Perhaps a processing chain that results in something RasterSymbolizer
can use (unless someone can sweet talk
OGC into some funding).
Post by Bryce L Nordgren
Post by Jody Garnett
So for my question; if not using Expression how do you extract values
from a coverage?
According to 19123: Coverage.evaluate(DirectPosition) will return a tuple
of values valid at a specific location. Coverage.list() returns a Set of
all of the Domain-Range associations in the coverage.
Coverage.select(GM_Object, TM_Period) subsets a coverage.
Coverage.find(DirectPosition) returns a list of Domain-Range associations
in order of increasing distance from the supplied point.
Coverage.evaluateInverse(Record) will return a Set of DomainObjects
(positions/geometries) which are associated with matching values in the
Range.
In the OGC model? Dunno.
Agreed; cool well now we are in the clear then.
Post by Bryce L Nordgren
Post by Jody Garnett
Yes it does, you can use xpath to grab out values: ie VALUES[12] or
defined by an operation then raster symbolizer must have a way to let us
get at that information ....
I think RasterSymbolizer just needs to hold information about how data
should be portrayed. The renderer needs to get at the data (or use some
library which can.)
I did expect to see something; aka for each channel I expected to see
expression working against that tuple you get back from the ISO Coverage.

Cheers and thanks for taking the time on this.
Jody
