This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Hi Wes,

If you read the netCDF FAQ section on Large File Support:

http://www.unidata.ucar.edu/software/netcdf/docs/faq.html#Large%20File%20Support

(or the section of the Users Guide on this subject), it notes that there are still some limits on netCDF variable sizes, even when using 64-bit offsets:

  Have all netCDF size limits been eliminated?

  No, there are still some limits on sizes of netCDF objects, even with
  the new 64-bit offset format. Each fixed-size variable and the data
  for one record's worth of a record variable are limited in size to a
  little less than 4 GiB, which is twice the size limit in versions
  earlier than netCDF 3.6.

  Why are variables still limited in size?

  While most platforms support a 64-bit file offset, many platforms only
  support a 32-bit size for allocated memory blocks, array sizes, and
  memory pointers. In C developers' jargon, these platforms have a
  64-bit off_t type for file offsets, but a 32-bit size_t type for the
  size of arrays. Changing netCDF to assume a 64-bit size_t would
  restrict netCDF's use to 64-bit platforms.

This is the limit you are running into: you are trying to create a non-record variable with 2^30 4-byte values, which just exceeds the 4 GiB limit on the size of a variable. Using an unlimited dimension should permit much larger variables, so long as a record's worth of data for each record variable does not exceed 2^32 - 4 bytes. For details, see:

http://www.unidata.ucar.edu/software/netcdf/docs/netcdf/NetCDF-64-bit-Offset-Format-Limitations.html

If you still have questions about this or think there is a bug, please let us know. We think these limits are all tested when you run "make check", provided you have run the configure script with the --enable-large-file-tests option and --with-temp-large=<directory> to specify a directory where the large test files will be written.

--Russ

Russ Rew
UCAR Unidata Program
address@hidden
http://www.unidata.ucar.edu

Ticket Details
===================
Ticket ID: VMY-113616
Department: Support netCDF
Priority: Normal
Status: Closed
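To make the suggested workaround concrete, below is a minimal C sketch (not part of the original exchange). It creates a file in the 64-bit offset format and gives the variable an unlimited leading dimension, so the variable as a whole can grow well past 4 GiB while each record stays far below the 2^32 - 4 byte per-record limit. The file name "big.nc", the dimension names, and the sizes are illustrative assumptions, not from the ticket.

/* Sketch: avoid the ~4 GiB fixed-size-variable limit of the 64-bit
 * offset format by making the variable a record variable. */
#include <stdio.h>
#include <stdlib.h>
#include <netcdf.h>

static void check(int status) {
    if (status != NC_NOERR) {
        fprintf(stderr, "netCDF error: %s\n", nc_strerror(status));
        exit(1);
    }
}

int main(void) {
    int ncid, timedim, xdim, varid;
    int dims[2];

    /* NC_64BIT_OFFSET selects the 64-bit offset (CDF-2) format. */
    check(nc_create("big.nc", NC_CLOBBER | NC_64BIT_OFFSET, &ncid));

    /* Unlimited leading dimension: one record here is 2^20 floats,
     * i.e. 4 MiB, far below the 2^32 - 4 byte record-size limit,
     * but the number of records (and so the variable) can keep
     * growing past 4 GiB. */
    check(nc_def_dim(ncid, "time", NC_UNLIMITED, &timedim));
    check(nc_def_dim(ncid, "x", 1048576, &xdim));

    dims[0] = timedim;
    dims[1] = xdim;
    check(nc_def_var(ncid, "var", NC_FLOAT, 2, dims, &varid));

    check(nc_enddef(ncid));
    /* Records would then be written one at a time with
     * nc_put_vara_float(). */
    check(nc_close(ncid));
    return 0;
}

Compiling is typically just "cc example.c -lnetcdf" (exact flags depend on where the netCDF library is installed).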