This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
>To: address@hidden
>From: "Alan S. Dawes" <address@hidden>
>Subject: netcdf 3.4 help
>Organization: UCAR/Unidata
>Keywords: 200108082235.f78MZM110902

Hi Alan,

> I use NETCDF v3.4 using Large File support. My data is written
> in the format:-
>
>   Dimension Definitions
>   .
>   .
>   Variable Definitions
>   .
>   .
>   Data
>   .
>   .
>
> On a 32bit platform, can netcdf read/write files > 2GB?

I'm going to have to do a little more research to answer this
question.  The netCDF source does not include transitional
interfaces such as open64() or types such as off64_t, but some
compilers automatically map xxx() source-level interfaces to their
corresponding xxx64() functions (and xxx_t types to xxx64_t types)
when -D_FILE_OFFSET_BITS=64 is passed to the compiler.  That should
make it possible to read/write files > 2GB on such 32-bit
platforms.  However, I have to test this.

> If so, what are its limitations?  As I read it from the web
> pages, the offset from the 'header' information to the
> 'data' that you want to read/write has to be <= 2GB.
>
> Am I right?

Yes, the limitations would be the same as you probably read on the
web page:

  ... the file offset to the beginning of the record variables (if
  any) must be less than 2 Gbytes, and the relative offset to the
  start of each fixed-length variable or each record variable
  within a record must be less than 2 Gbytes.

  Hence, a very large netCDF file might have

    * no record variables, some ordinary fixed-length variables,
      and a very large (exceeding 2 Gbytes) fixed-length variable; or
    * some ordinary fixed-length and record variables, and a very
      large record variable; or
    * some ordinary fixed-length and record variables and a huge
      number of records.

  These constraints are required because the disk format only
  permits 32 bits for these offsets, which are stored in the file.

> I look forward to hearing from you.
>
> Thanks
>
> Alan Dawes
>
> p.s. Does this problem with files > 2GB go away
>      when using a 64-bit platform?

Even when using a 64-bit platform, the file format is still what
limits the size of netCDF variables and files.  The netCDF format
only has space for a 32-bit offset to the first record variable,
and a 32-bit offset from the start of each record to the location
of each record variable within that record.  The total number of
records must also fit in 32 bits.  So the limitations are the same
as given above.

--Russ
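
A quick way to see whether a particular 32-bit compiler and C
library actually perform the mapping described above is to check
the size of off_t when -D_FILE_OFFSET_BITS=64 is defined.  The
following is a minimal sketch, not part of the original exchange;
the file name and compile command are illustrative only.

    /* lfs_check.c -- sketch: does _FILE_OFFSET_BITS=64 give a
     * 64-bit off_t on this 32-bit platform?
     * Compile with something like:
     *   cc -D_FILE_OFFSET_BITS=64 lfs_check.c -o lfs_check
     */
    #include <stdio.h>
    #include <sys/types.h>

    int main(void)
    {
        /* 8 bytes suggests file offsets can exceed 2 GB;
           4 bytes means this build is still limited to 2 GB files. */
        printf("sizeof(off_t) = %u bytes\n", (unsigned) sizeof(off_t));
        return sizeof(off_t) == 8 ? 0 : 1;
    }

If the program reports 4 bytes, the underlying read/write calls are
still using 32-bit offsets and files larger than 2 GB cannot be
handled regardless of what netCDF does.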
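
The first case in the list above (no record variables, small
fixed-length variables, and one very large fixed-length variable)
can be arranged by defining the huge variable last, so that no
variable's starting offset exceeds 2 Gbytes.  Below is a minimal
sketch using the netCDF-3 C interface; it assumes an LFS-enabled
build as discussed above, and the file name, dimension sizes, and
variable names are illustrative, not from the original exchange.

    /* big_fixed.c -- sketch: classic-format layout with one
     * fixed-length variable larger than 2 GB, defined last so the
     * offset to the start of every variable stays below 2 Gbytes.
     */
    #include <stdio.h>
    #include <netcdf.h>

    #define CHECK(e) do { int s = (e); if (s != NC_NOERR) { \
        fprintf(stderr, "%s\n", nc_strerror(s)); return 1; } } while (0)

    int main(void)
    {
        int ncid, xdim, ydim, small_id, big_id, dims[2];

        CHECK(nc_create("big_fixed.nc", NC_CLOBBER, &ncid));

        /* 20000 x 20000 doubles = 3.2 GB, well over 2 Gbytes */
        CHECK(nc_def_dim(ncid, "x", 20000, &xdim));
        CHECK(nc_def_dim(ncid, "y", 20000, &ydim));
        dims[0] = xdim;
        dims[1] = ydim;

        /* Small fixed-length variable first: its offset is tiny. */
        CHECK(nc_def_var(ncid, "scale", NC_DOUBLE, 0, NULL, &small_id));

        /* The huge fixed-length variable goes last; nothing follows
           it, so no stored offset needs to exceed 2 Gbytes. */
        CHECK(nc_def_var(ncid, "field", NC_DOUBLE, 2, dims, &big_id));

        CHECK(nc_enddef(ncid));
        /* ... write data with nc_put_vara_double() etc. ... */
        CHECK(nc_close(ncid));
        return 0;
    }

Defining the large variable anywhere but last would push a later
variable's starting offset past the 32-bit limit, which is exactly
the constraint the format imposes.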