This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
>To: address@hidden
>From: "Jim Cowie" <address@hidden>
>Subject: netCDF General - Large file problem
>Organization: RAL
>Keywords: 200503022003.j22K3ZjW025634

Jim,

> In a curious twist of fate, it turns out that the application
> reading these last variables did not need that data after all, so
> I don't have a bunch of data to recover, which is good. And
> actually, the data is recoverable in the sense that I could
> regenerate it from the source without too much difficulty. I'm
> a little reluctant to switch to 3.6 due to the backward
> compatibility with existing (3.5.x) apps, even if the code
> changes are small. (I am more interested in your version 4.)
> Anyway, at least I know what the problem is and there are
> some options. Thanks a bunch,

You're welcome! I'm really glad you don't have to do data recovery,
because that saves me work, too.

Just to be clear about the backward compatibility issues: any programs
you have that link with the 3.6 libraries will produce "classic" netCDF
files that are compatible with existing applications. The only way you
can generate a netCDF file in the new 64-bit offset format is by going
out of your way to add an extra (non-default) flag on the nc_create()
call. You can safely use 3.6 without that flag and be assured you won't
generate incompatible files, with one minor exception discussed in the
FAQ:

  Is it possible to create a "classic" format netCDF file with netCDF
  version 3.6.0 that cannot be accessed by applications compiled and
  linked against earlier versions of the library?

No, classic files created with the new library should be compatible with
all older applications, both for reading and writing, with one minor
exception. The exception is due to a correction of a netCDF bug that
prevented creating records larger than 4 GiB in classic netCDF files
with software linked against versions 3.5.1 and earlier.
This limitation in total record size was not a limitation of the classic
format, but an unnecessary restriction due to the use of too small a
type in an internal data structure in the library. If you want to make
sure your classic netCDF files are always readable by older
applications, make sure you don't exceed 4 GiB for the total size of a
record's worth of data. (All records are the same size, computed by
adding up the size of a record's worth of each record variable, with
suitable padding to make sure each record variable begins on a byte
boundary divisible by 4.)

--Russ