[IDV #QAW-317402]: Can't get IDV 2.7u2 to image a gridded field in a netCDF file that (I think) has data stored with best practices
- Subject: [IDV #QAW-317402]: Can't get IDV 2.7u2 to image a gridded field in a netCDF file that (I think) has data stored with best practices
- Date: Wed, 24 Mar 2010 15:01:36 -0600
Hi Jim-
> > What makes you think the IDV isn't using these values?
>
> A couple of reasons. It doesn't list LATITUDE or LONGITUDE among the
> fields, which didn't seem like a good sign.
The IDV does not display these as options since they are used as coordinates
for the data variables. You can see a dump of the file using the Data Source
properties menu under the MetaData tab.
> Displays are getting plotted
> in rectangles with sides parallel to the window. The true grid is
> probably curvilinear on the coordinates chosen for the map.
Not in the file you sent me. If you look at the latitude values, they are the
same across the x dimension (lines) and longitude is the same across the y
dimension (samples). It is a rectilinear grid.
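To make the distinction concrete, here is a small Python sketch (with made-up 2x3 arrays; the helper name is mine) that tests whether a pair of 2-D coordinate arrays is rectilinear, i.e. each coordinate varies with only one index:

```python
# Toy 2x3 coordinate arrays (made-up values) shaped (lines, samples),
# mimicking the layout described above for your file.
LATITUDE = [[25.000, 25.000, 25.000],
            [25.001, 25.001, 25.001]]
LONGITUDE = [[-140.0, -139.9, -139.8],
             [-140.0, -139.9, -139.8]]

def rectilinear(lat, lon):
    """True if each 2-D coordinate array varies with exactly one index,
    and the two coordinates vary with different indices."""
    def axis(grid):
        if all(row == grid[0] for row in grid):
            return 1        # every line identical: varies with samples only
        if all(len(set(row)) == 1 for row in grid):
            return 0        # constant within each line: varies with lines only
        return None         # truly two-dimensional, i.e. curvilinear
    a, b = axis(lat), axis(lon)
    return a is not None and b is not None and a != b

print(rectilinear(LATITUDE, LONGITUDE))  # True
```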
> > The netCDF
> > library converts the packed lat/lon values to unpacked versions. In
> > the process, the lons go from -139.951 to -95.0994 and the lats
> > from 25.0 to 49.851. For example, the latitude is declared as:
> >
> > short LATITUDE(lines, samples) ;
> > LATITUDE:units = "" ;
> > LATITUDE:missing_value = 51.f ;
> > LATITUDE:valid_min = 24.99975f ;
> > LATITUDE:valid_max = 49.90084f ;
> > LATITUDE:scale = 0.001f ;
> > LATITUDE:offset = 24.999f ;
> > LATITUDE:scale_factor = 0.001f ;
> > LATITUDE:add_offset = 24.999f ;
> >
> > The first value is 1, so that would translate to 1*.001+24.999 which
> > would be 25. For the longitudes, the declaration is:
> >
> > int LONGITUDE(lines, samples) ;
> > LONGITUDE:units = "" ;
> > LONGITUDE:missing_value = -94.f ;
> > LONGITUDE:valid_min = -140.0005f ;
> > LONGITUDE:valid_max = -95.0994f ;
> > LONGITUDE:scale = 1.e-007f ;
> > LONGITUDE:offset = -140.0005f ;
> > LONGITUDE:scale_factor = 1.e-007f ;
> > LONGITUDE:add_offset = -140.0005f ;
> >
> > and the first value is 0, so that would make it 0*1E-7 + -140.0005
> > which would be -140.0005.
>
> Yes.
>
> > This is being set to missing.
> > Is that what you are wondering about or is it something different?
>
> I wasn't aware that it is -- or should be -- set to missing. And
> I have questions about the explanation given below.
It looks like roundoff is causing it to be set to missing.
> > latitude is also set to NaN, but there, we have 24902*.001+24.999 =
> > 49.901 which is beyond the valid_max of 49.90084.
> >
> > I got some clarification from the netCDF folks on the use of the
> > missing and valid max/min values. Here's what they replied:
> >
> > <quote>
> > Turns out the problem is missing_value, which must be in packed units:
> >
> > from
> > http://www.unidata.ucar.edu/software/netcdf-java/v4.1/javadoc/index.html
> >
> > *Implementation rules for missing data with scale/offset*
> >
> > 1. Always converted to a float or double type.
> > 2. _FillValue and missing_value values are always in the units of the
> > external (packed) data.
>
> Let me offer an example to see if I understand this. Say the scale factor
> is .001 and the offset is 20. Then will a missing value (in packed units)
> of 15 get displayed by ncdump as 15*.001+20=20.015?
ncdump shows only the raw values as far as I know. I don't see an option
(at least in 3.6.1) to do the scale and offset. If you are using the
netCDF-Java ToolsUI to dump the data, you can dump either the raw data or the
transformed data. That's what I've been using to check out your file.
After the data have been unpacked, any value that equals the unpacked
(scaled/offset) missing value is set to NaN.
> > 3. If valid_range is the same type as scale_factor (actually the
> > wider of scale_factor and add_offset) and this is wider than the
> > external data, then it will be interpreted as being in the units
> > of the internal (unpacked) data. Otherwise it is in the units of
> > the external (packed) data.
> > 4. The dataType is set to float if all attributes used are float
> > (scale_factor, add_offset, valid_min, valid_max, valid_range,
> > missing_value and _FillValue); otherwise the dataType is set to double.
> >
> >
> > technically, valid_min and max should also be in packed units
> > according to the NUG, but we relaxed that in netcdf-java to
> > accommodate some existing datasets that misread the manual.
> >
> > also, he is using floats and is encountering roundoff error;
> > looking in the debugger I see:
> >
> > valid_min = -140.00045776367188
> >
> > which will also cause problems.
>
> Where does this roundoff error occur? It seems it must be just in the
> output
> of ncdump or the debugger would give the same value.
In the netCDF-Java library.
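You can see the single-precision effect directly in Python (a struct round-trip standing in for how a float attribute is stored; the exact stored value depends on how the attribute was originally written):

```python
import struct

def as_float32(x):
    """Round-trip a value through IEEE-754 single precision, mimicking
    how a float attribute is actually stored on disk."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

vmin = as_float32(-140.0005)
print(vmin)               # not exactly -140.0005
print(vmin == -140.0005)  # False
```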
> > so:
> >
> > 1: don't use missing_value or valid min/max unless you need them, e.g.
> > you actually have missing data.
>
> The lats and longitudes should always be there, but there will be
> missing values in some of the data.
That is the best way.
> > 2. missing_value must be in packed units.
>
> See my example above.
Yes.
> > 3. make valid_min and valid_max a little wider than you need to
> > accommodate roundoff.
>
> Or have them in packed units??
Yes, that would work.
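A sketch of that alternative (the bounds here are hypothetical integers I chose for illustration): comparing in packed units is exact integer arithmetic, so roundoff can't push a boundary value out of range:

```python
def in_valid_range_packed(packed, valid_min_packed, valid_max_packed):
    """Validity check done on the packed integers themselves: exact
    integer comparison, so no roundoff.  (Bounds are hypothetical.)"""
    return valid_min_packed <= packed <= valid_max_packed

# The packed latitude 24902 (which unpacks to 49.901 and trips the float
# valid_max) passes cleanly when the bounds are expressed in packed units.
print(in_valid_range_packed(24902, 1, 24902))  # True
```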
> > </quote>
> >
> > Also, take a look at the CF conventions about packed data:
> >
> > http://cf-pcmdi.llnl.gov/documents/cf-conventions/1.4/cf-conventions.html#packed-data
Don
Ticket Details
===================
Ticket ID: QAW-317402
Department: Support IDV
Priority: Critical
Status: Open