Re: NetCDF Java Read API
- Subject: Re: NetCDF Java Read API
- Date: Tue, 25 Nov 2008 16:40:23 -0700
Great, thanks for the update.
Greg Rappa wrote:
> John Caron wrote:
>> It doesn't appear that you are chunking correctly. An h5dump -h shows:
>> DATASET "VIL" {
>>    DATATYPE H5T_STD_I16LE
>>    DATASPACE SIMPLE { ( 24, 1, 3520, 5120 ) / ( 24, 1, 3520, 5120 ) }
>>    ATTRIBUTE "DIMENSION_LIST" {
>>       DATATYPE H5T_VLEN { H5T_REFERENCE }
>>       DATASPACE SIMPLE { ( 4 ) / ( 4 ) }
>>    }
>>
>> Chunking on ( 1, 1, 3520, 5120 ) would probably be a reasonable choice,
>> depending on how you plan to access the data.
>>
>> It does seem to be using excessive memory anyway; I am investigating.
>> Only NetCDF-Java 4.0 is reliable with HDF5/netCDF-4 files, and we are
>> still shaking bugs out, so this may be one.
>>
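A minimal NetCDF-Java sketch of the per-time-step read pattern that ( 1, 1, 3520, 5120 ) chunking favors, assuming the file and variable names from the h5dump output in this thread; the class name and the loop over forecast steps are illustrative, not code from the original exchange:

import java.io.IOException;

import ucar.ma2.Array;
import ucar.ma2.InvalidRangeException;
import ucar.nc2.NetcdfFile;
import ucar.nc2.Variable;

public class ReadVilByStep {
    public static void main(String[] args) throws IOException, InvalidRangeException {
        // File and variable names come from the h5dump output quoted above.
        NetcdfFile ncfile = NetcdfFile.open("VILForecast.20081125T224500Z.nc");
        try {
            Variable vil = ncfile.findVariable("VIL");
            int[] shape = vil.getShape();                 // { 24, 1, 3520, 5120 }
            int[] origin = new int[] {0, 0, 0, 0};
            int[] size   = new int[] {1, 1, shape[2], shape[3]};

            // Read one (1, 1, 3520, 5120) slab per forecast step, so each read
            // maps onto a single chunk and only one chunk is inflated at a time.
            for (int t = 0; t < shape[0]; t++) {
                origin[0] = t;
                Array slab = vil.read(origin, size);
                // ... process the 3520 x 5120 grid for step t ...
            }
        } finally {
            ncfile.close();
        }
    }
}

Reading along the time axis instead (for example, all 24 steps at a single grid point) would force every chunk to be inflated, which is the kind of access pattern that would argue for a different chunk shape.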
> Hi John,
>
> I finally had an opportunity to re-work my NetCDF chunking sizes today
> and I managed to implement the chunking you recommended
> (1,1,3520,5120). My new NetCDF files show the following h5dump output:
>
> h5dump --header --properties VILForecast.20081125T224500Z.nc
> ...
> DATASET "VIL" {
>    DATATYPE H5T_STD_I16LE
>    DATASPACE SIMPLE { ( 24, 1, 3520, 5120 ) / ( 24, 1, 3520, 5120 ) }
>    STORAGE_LAYOUT {
>       CHUNKED ( 1, 1, 3520, 5120 )
>       SIZE 45012209 (19.219:1 COMPRESSION)
>    }
>    FILTERS {
>       COMPRESSION DEFLATE { LEVEL 1 }
>    }
> ...
>
> The new chunking sizes appear to fix the Java read problem that I
> reported back on 14 Nov 2008.
>
> Thanks for your help (and yours too, Russ),
> Greg.
>
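For reference, the sizes in that h5dump output are self-consistent:

  uncompressed VIL  = 24 x 1 x 3520 x 5120 values x 2 bytes = 865,075,200 bytes
  stored, deflate 1 = 45,012,209 bytes
  ratio             = 865,075,200 / 45,012,209 = about 19.22, matching h5dump's "19.219:1 COMPRESSION"

Each (1, 1, 3520, 5120) chunk is 3520 x 5120 x 2 = 36,044,800 bytes uncompressed, so a reader that works through the file one forecast step at a time only has to inflate roughly 36 MB per read rather than handle the full ~865 MB variable at once.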