This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
>To: address@hidden
>From: "Alan S. Dawes" <address@hidden>
>Subject: Re: 20010808: large file support in netCDF 3.5.0
>Organization: UCAR/Unidata
>Keywords: 200108082235.f78MZM110902, huge files, large file support, record

Alan,

> I have sussed how to do large dumps on an IBM
> running AIX 4.3 + 32bit and 64bit platforms. So
> hold your horses. I will send you a detailed email
> with my solutions next week.

That's great, I'll be curious to hear about it.

I had figured out what the problem was here before I got your message, and
managed to create a large (> 5 Gbyte before I ran out of disk space) netCDF
file with the sample Fortran program you sent.

It turns out that the C library must be compiled with the -DNDEBUG flag in
order to permit writing large files, because of an incorrect assert()
statement left in for debugging purposes at line 277 of libsrc/posixio.c:

    assert(offset < X_INT_MAX); /* sanity check */

Compiling with -DNDEBUG turns off assertion checking and allows large files
to be written. It should not be necessary to compile with -DNDEBUG to read
and write large files, so we'll find and fix all occurrences of these
too-strict assertions before releasing netCDF 3.5.1.

--Russ