This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Becky: Most likely all you need to do is update the 'numrecs' field as Russ outlines in his first suggestion; then you should be able to read the file normally. You shouldn't need to parse things any further than that, as in Russ's second suggestion. Just change the second 4-byte big-endian word in the file to a number greater than or equal to the number of records actually in the file, then read the file as usual. You will get an error when you try to read beyond the end of the file.

If you have a version of the netcdf library that was compiled with the '-g' flag for debugging, you can read off the header information and examine it in the debugger. Stop at the last executable line of NC_open(), then "print *handle". You can use 'handle->begin_rec', 'handle->recsize', and the size of the file (from 'ls -l') to compute the number of records:

    numrecs = (file_size - begin_rec)/recsize

Some debuggers (SGI 'cvd', for example) will allow you to change values right there in the debugger, so you could set handle->numrecs and proceed. The symbols above apply to netcdf-2. The last record to be written is probably garbage.

-glenn
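[Editor's note] A minimal sketch of the fix described above, for reference only. It assumes the file is in the classic netCDF format, where the header starts with the 4-byte magic "CDF\001" followed immediately by the 4-byte big-endian 'numrecs' word, so patching numrecs amounts to overwriting bytes 4-7. The begin_rec and recsize values are assumptions you must supply yourself (for example, read off handle->begin_rec and handle->recsize in the debugger as described); they are not derived here.

    /* patch_numrecs.c -- recompute and overwrite the 'numrecs' word
     * of a classic netCDF file, per the formula in the answer above. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>

    int main(int argc, char **argv)
    {
        if (argc != 4) {
            fprintf(stderr, "usage: %s file.nc begin_rec recsize\n", argv[0]);
            return 1;
        }
        const char *path = argv[1];
        long begin_rec = atol(argv[2]);  /* byte offset of first record (handle->begin_rec) */
        long recsize   = atol(argv[3]);  /* bytes per record (handle->recsize) */

        struct stat st;
        if (stat(path, &st) != 0) { perror("stat"); return 1; }

        /* numrecs = (file_size - begin_rec)/recsize */
        long numrecs = (st.st_size - begin_rec) / recsize;
        printf("computed numrecs = %ld\n", numrecs);

        FILE *fp = fopen(path, "r+b");
        if (!fp) { perror("fopen"); return 1; }

        /* write numrecs as a 4-byte big-endian word at offset 4,
         * i.e. the second 4-byte word of the file */
        unsigned char buf[4];
        buf[0] = (unsigned char)((numrecs >> 24) & 0xff);
        buf[1] = (unsigned char)((numrecs >> 16) & 0xff);
        buf[2] = (unsigned char)((numrecs >>  8) & 0xff);
        buf[3] = (unsigned char)( numrecs        & 0xff);
        if (fseek(fp, 4L, SEEK_SET) != 0 || fwrite(buf, 1, 4, fp) != 4) {
            perror("patch numrecs");
            fclose(fp);
            return 1;
        }
        fclose(fp);
        return 0;
    }

As noted in the answer, because the final record may have been only partially written, it is safer to treat the last record as suspect even after numrecs is corrected.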