Re: 20040309: question regarding removing record from dataset
- Subject: Re: 20040309: question regarding removing record from dataset
- Date: Wed, 10 Mar 2004 11:23:48 -0700
>To: address@hidden
>From: "Dai, Feng Wei" <address@hidden>
>Subject: RE: 20040309: question regarding removing record from dataset
>Organization: UCAR/Unidata
>Keywords: 200403091336.i29DaTrV017557
Fengwei,
> Thanks a lot for your response. I think the current version can still
> support our project; meanwhile we'll just wait for the new version to
> become available. I have one more question regarding large file support.
> Currently I have the netcdf-3.5.1 package. The documentation mentions that
> "If you use the unlimited dimension, any number of record variables may
> exceed 2 Gbytes in size, as long as the offset of the start of each record
> variable within a record is less than about 2 Gbytes". Does that mean var1,
> var2, and var3 in the following example can each hold 800 GB (20G records)
> of data? Please enlighten me if I am wrong about this concept.
>
> netcdf bigfile2 {
> dimensions:
>     x = 40;
>     t = UNLIMITED;
>
> variables:
>     char var1(t, x);
>     char var2(t, x);
>     char var3(t, x);
> }
No, the documentation should be corrected to add:

    However, no more than 2147483648 (2**31) records can be
    written to a netCDF file.

Since 20G records is about 2*10**10, well over 2**31 (about 2.1*10**9),
your example exceeds that limit. (Though I still need to do more work to
determine whether it's actually possible to have 2**32 records.)
Our netCDF 3.6.0-alpha release, which we hope to make available in the
next month, removes the restriction to 31-bit offsets by using 64-bit
offsets instead, but the format will still require that the number of
records be storable in a "size_t", which on many platforms is an
unsigned 32-bit integer.
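For example, once 3.6.0 is available, creating your bigfile2 dataset
with 64-bit offsets should look something like the following sketch
(it assumes the planned NC_64BIT_OFFSET create flag; the rest is the
existing netCDF-3 C API):

    #include <stdio.h>
    #include <netcdf.h>

    /* Minimal error check for netCDF calls */
    #define CHECK(s) do { int e = (s); if (e != NC_NOERR) { \
        fprintf(stderr, "netCDF error: %s\n", nc_strerror(e)); \
        return 1; } } while (0)

    int main(void) {
        int ncid, x_dim, t_dim, varid;
        int dims[2];

        /* NC_64BIT_OFFSET requests the 64-bit offset format variant */
        CHECK(nc_create("bigfile2.nc", NC_CLOBBER | NC_64BIT_OFFSET, &ncid));

        CHECK(nc_def_dim(ncid, "t", NC_UNLIMITED, &t_dim));
        CHECK(nc_def_dim(ncid, "x", 40, &x_dim));

        dims[0] = t_dim;   /* record dimension varies slowest */
        dims[1] = x_dim;

        CHECK(nc_def_var(ncid, "var1", NC_CHAR, 2, dims, &varid));
        CHECK(nc_def_var(ncid, "var2", NC_CHAR, 2, dims, &varid));
        CHECK(nc_def_var(ncid, "var3", NC_CHAR, 2, dims, &varid));

        CHECK(nc_enddef(ncid));
        CHECK(nc_close(ncid));
        return 0;
    }

Note that this only enlarges the offsets; the number of records will
still be limited as described above.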
Thanks for pointing out this problem with the documentation!
--Russ
_____________________________________________________________________
Russ Rew UCAR Unidata Program
address@hidden http://www.unidata.ucar.edu/staff/russ