This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Mary,

> I'm using hdf5-1.8.2. Does it need to be built with special options?

Not that I know of, but Ed builds the hdf5 library I use, and he won't be
back for a couple of days. You could check the file lib/libhdf5.settings in
your HDF5 installation to make sure the setting

  Linux Large File Support (LFS): yes

says "yes" instead of "no". If it's "no", then I would look for a configure
option that would fix that.

> I have about 67 Gb available, but maybe there's a quota limit?

You should be able to tell if you're being limited by quotas using the
ulimit command. The options to use depend on what shell you're using, since
it's a shell built-in.

> Is there a way to tell if the tests are failing because I don't have a
> large file enabled NetCDF, versus not having enough free disk space?

The fact that your "make check" succeeded on the "quick_large_file" tests,
which create large files with "holes" (unwritten data blocks), means the
netCDF library is working OK with 64-bit file offsets. The first test that
fails tries to write all the values in a variable with more than 2**32
values, so it actually needs more than 2 GiBytes of free disk space.

You might try running this, just to make sure you can write a large file
independent of netCDF:

  dd if=/dev/zero bs=1000000 count=3000 of=./largefile
  ls -l largefile
  rm largefile

which should write a 3 GByte file named "largefile" in the current
directory, verify its size, and remove it.

--Russ

Russ Rew                                         UCAR Unidata Program
address@hidden                                   http://www.unidata.ucar.edu

Ticket Details
===================
Ticket ID: OHV-990519
Department: Support netCDF
Priority: Normal
Status: Closed
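For a netCDF-level version of the same disk-space check, here is a minimal
C sketch (not the test program shipped with netCDF) that writes a file of
about 3 GBytes with every value filled in, so unlike the "holey"
quick_large_file tests it really consumes disk space. The file name
"largefile.nc", the dimension and variable names, and the size constants
are arbitrary choices for illustration; it assumes the netCDF C library is
installed and can be compiled with something like
"cc big_test.c -o big_test -lnetcdf".

  /* Sketch: write a ~3 GByte netCDF file with no holes, using the
   * 64-bit offset (CDF-2) format needed for classic-format files
   * larger than 2 GiBytes. Names and sizes here are arbitrary. */
  #include <stdio.h>
  #include <netcdf.h>

  #define NRECS   3000        /* 3000 records ...                        */
  #define RECSIZE 1000000     /* ... of 1,000,000 bytes each, ~3 GBytes  */

  int main(void)
  {
      int ncid, dimids[2], varid, status;
      static signed char data[RECSIZE];          /* one record of zeros */
      size_t start[2] = {0, 0}, count[2] = {1, RECSIZE};

      /* NC_64BIT_OFFSET selects the 64-bit offset file format. */
      if ((status = nc_create("largefile.nc", NC_CLOBBER | NC_64BIT_OFFSET,
                              &ncid)) != NC_NOERR) {
          fprintf(stderr, "nc_create: %s\n", nc_strerror(status));
          return 1;
      }

      /* Error checks on the metadata calls are omitted for brevity. */
      nc_def_dim(ncid, "rec", NC_UNLIMITED, &dimids[0]);
      nc_def_dim(ncid, "n", RECSIZE, &dimids[1]);
      nc_def_var(ncid, "x", NC_BYTE, 2, dimids, &varid);
      nc_enddef(ncid);

      /* Writing every record allocates real disk blocks, so this fails
       * if the filesystem, quotas, or the library can't handle it. */
      for (start[0] = 0; start[0] < NRECS; start[0]++) {
          if ((status = nc_put_vara_schar(ncid, varid, start, count,
                                          data)) != NC_NOERR) {
              fprintf(stderr, "nc_put_vara_schar: %s\n",
                      nc_strerror(status));
              return 1;
          }
      }
      nc_close(ncid);
      printf("wrote largefile.nc\n");
      return 0;
  }

As with the dd test above, check the resulting size with "ls -l
largefile.nc" and remove the file when you're done.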