Re: Observation Dataset Conventions
- Subject: Re: Observation Dataset Conventions
- Date: Wed, 03 May 2006 13:45:49 -0600
The UnidataObsConvention is a recommendation for how to write observation data
into netCDF-3 files. Dapper is a method of serving this kind of data through
OPeNDAP. So the closest analogue would be to make the TDS serve (through
OPeNDAP) UnidataObsConvention files using the same structures as Dapper.
Currently, if you throw those files at the TDS OPeNDAP server, you would see
the linked lists, which kills performance, because each read of an obs
becomes a round trip to the server.
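To make that concrete, here is a rough sketch of the kind of DDS such a file
presents through plain OPeNDAP today; the variable names (firstChild,
nextChild, etc.) and the dimension sizes are illustrative only, not copied
from a real dataset:

  Dataset {
      Float64 lat[station = 100];
      Float64 lon[station = 100];
      Int32 firstChild[station = 100];
      Int32 numChildren[station = 100];
      Int32 nextChild[obs = 50000];
      Float64 time[obs = 50000];
      Float64 temperature[obs = 50000];
  } stationObs;

A client that wants one station's observations has to read firstChild and then
chase the nextChild links one element at a time, so every obs costs its own
request to the server.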
I was thinking of solving this at the "Point Obs" datatype API, but the idea of
using the Dapper doubly-nested sequences seems like a viable alternative. There are
probably a few subtle issues concerning performance and scalability, however, and the
fastest way to discover them is probably just to implement it and see.
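For example, using the mooring_collection structure quoted below, a client
could pull everything for one mooring in a single request by putting the
selection into an OPeNDAP constraint expression; the host, path, and latitude
bounds here are made up purely for illustration:

  http://some.server/dap/moorings.dods?mooring_collection&mooring_collection.lat>30.0&mooring_collection.lat<30.5

The server evaluates the selection and returns only the matching outer rows
together with their nested time_series, so the per-obs round trips go away.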
The docs you point to implicitly describe Dapper "Conventions". Probably with
some interaction between us and Joe Sirott's group, we could create an explicit
Convention. Do you want to inquire whether they are interested, or should I? It's probably
only a few days' work. How high a priority is this from your POV?
Russ Rew wrote:
Ethan,
I don't know enough about the Dapper convention to give a really good
answer to this. Is there a link from the Dapper page that goes into
detail on their convention? I did see where it mentions the GDS, but I
still didn't find any details on the convention. One thing is that
Dapper uses nested structures a fair amount, while the Unidata obs
convention doesn't really go into that much, perhaps because their
backends are database-driven and we're netCDF-3 based. We also haven't dealt
with some data types that might require more sequence-type structures.
My understanding is that Dapper serves doubly-nested sequences, such as:
Sequence {
    lon;
    lat;
    depth;
    Sequence {
        temperature;
        time;
    } time_series;
} mooring_collection;
This is the closest I've seen to describing the Dapper OPeNDAP output
conventions:
http://www.epic.noaa.gov/epic/software/dapper/dapperdocs/metadata.html
For input, it aggregates netCDF data that follows several conventions:
* ARGO GDAC netCDF
* ARGO NODC netCDF
* EPIC netCDF
* OPeNDAP sequence data (must follow the Dapper metadata conventions).
http://www.epic.noaa.gov/epic/software/dapper/dapperdocs/load.html
As far as agreeing on consistent conventions goes, it seems like it would be a
good idea; it would probably take a bit of work on both sides to understand both
data models. John has talked to Joe Sirott about Dapper more than I have
(and has probably looked at Dapper more closely as well), so he may have more
details he can provide.
OK, thanks for the information. I'm just trying to learn enough about
this to be able to defend a recommendation we are making to the IOOS
DMAC Steering Team to use Dapper in a pilot project.
--Russ