This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Forgot to CC: support-netcdf ...

------- Forwarded Message

Date: Tue, 22 Oct 1996 16:28:25 -0600
From: Russ Rew <address@hidden>
To: address@hidden
cc: davis
Subject: Re: 961022: extracting data from improperly closed netCDF

>From: Rebecca McKeown <address@hidden>
>Subject: extracting data from failed run
>Organization: CSU
>Keywords: 199610221727.AA05312 netCDF

Hi Becky,

> The machine I was running on went down. I have some large netCDF
> files which of course didn't get closed properly, and I was hoping to
> salvage part of these. While there is obviously quite a bit of data
> in the file, I can't seem to extract it because the unlimited
> dimension is presently seen as 0. I've read some of your previous
> questions and it sounds as though there should be some way to set up
> an alternate structure for the variables and their dimensions and
> copy the data over. Is it possible to do this (and how would I do it,
> since I have been unable to extract the data so far), or is there
> something else I can try?

I think this problem occurred because you were writing to the file
without ever calling ncsync (NCSNC in FORTRAN). For a long run like
this, you should either close and reopen the files from time to time,
or make sure you call ncsync/NCSNC every once in a while to get the
memory image flushed to disk (a short C sketch of this appears after
the message). If you were running on a Cray, there may be other ways
to do this by setting the disk cache parameters ...

Now that you have the results, you may still be able to salvage the
output if it was only the number of records that didn't get updated.
Here are a couple of suggestions for how to do this (C sketches of
both appear after the message):

1. Use appendix B of the User's Guide to locate exactly where in the
   file the number of records is stored. Actually, even without
   appendix B, I can tell you that it's a 4-byte non-negative integer
   stored in the second 4 bytes of the file, in the form of a 32-bit
   big-endian integer. Now, from the file size, determine how many
   records were written, and just write this number into the file,
   using a binary editor such as emacs, a C program, or any other way
   you know to edit a binary file. For a C program, you would just
   fopen() the file in read-write mode, fseek() to the fifth byte,
   write out a 4-byte integer, and fclose() the file, for example. It
   would be best to use a copy of the file instead of the original, in
   case something goes wrong.

2. Use appendix B of the User's Guide to parse the file header and
   locate where the floating-point values of the variables are, in
   terms of an offset from the beginning of the file. Then just read
   in the floating-point numbers from that point, again using a C
   program that opens, seeks, and reads. You can parse the header if
   you first use a Unix tool such as "od" to print out the bytes as
   hexadecimal, character, and integer values (e.g. "od -xcs file.nc
   | head").

It's still possible the files you have are corrupted beyond recovery,
but you should be able to recover some or all of the data if the only
problem was that the number of records wasn't updated yet.

I'm CC:ing Glenn Davis on this reply, in case he has another
suggestion ...

--Russ

_____________________________________________________________________
Russ Rew                                        UCAR Unidata Program
address@hidden                          http://www.unidata.ucar.edu

------- End of Forwarded Message
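A minimal sketch of the periodic-flush advice above, written against the
modern netCDF-C API (nc_sync() is the current name for netCDF-2's
ncsync()/NCSNC). The file name, variable, and flush interval are
illustrative assumptions, not anything from the original message:

    /* Flush the in-memory header and data to disk every so often, so
     * the record count on disk stays current even if the process dies. */
    #include <stdio.h>
    #include <netcdf.h>

    #define CHECK(e) do { int s_ = (e); if (s_ != NC_NOERR) { \
        fprintf(stderr, "%s\n", nc_strerror(s_)); return 1; } } while (0)

    int main(void)
    {
        int ncid, dimid, varid;
        CHECK(nc_create("run.nc", NC_CLOBBER, &ncid));   /* hypothetical file */
        CHECK(nc_def_dim(ncid, "time", NC_UNLIMITED, &dimid));
        CHECK(nc_def_var(ncid, "t", NC_FLOAT, 1, &dimid, &varid));
        CHECK(nc_enddef(ncid));

        for (size_t rec = 0; rec < 1000; rec++) {
            size_t start[1] = { rec }, count[1] = { 1 };
            float val = (float)rec;
            CHECK(nc_put_vara_float(ncid, varid, start, count, &val));
            if (rec % 100 == 99)        /* assumed interval: every 100 records */
                CHECK(nc_sync(ncid));   /* flush header (record count) and data */
        }
        return nc_close(ncid);
    }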
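For suggestion 1, a sketch of the record-count patch in C. The classic
netCDF format stores the number of records as a big-endian 32-bit
integer at byte offset 4 (the "second 4 bytes" above). The program name
is made up, and, as the message says, operate on a copy of the file:

    /* Usage: fixnumrecs file.nc NUMRECS
     * Overwrites the 4-byte record count at offset 4 of a
     * classic-format netCDF file.  Run this on a COPY. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s file.nc numrecs\n", argv[0]);
            return 1;
        }
        unsigned long n = strtoul(argv[2], NULL, 10);
        unsigned char buf[4] = {              /* big-endian byte order */
            (unsigned char)(n >> 24), (unsigned char)(n >> 16),
            (unsigned char)(n >> 8),  (unsigned char)n
        };
        FILE *fp = fopen(argv[1], "r+b");     /* read-write, keep contents */
        if (fp == NULL) { perror(argv[1]); return 1; }
        if (fseek(fp, 4L, SEEK_SET) != 0 || fwrite(buf, 1, 4, fp) != 4) {
            perror("patch failed");
            fclose(fp);
            return 1;
        }
        return fclose(fp);
    }

The count itself comes from the arithmetic the message suggests: the
file size, minus the offset where the records begin, divided by the
size of one record; appendix B's header layout supplies both numbers.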
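For suggestion 2, a sketch of reading variable values directly once a
header dump (e.g. "od -xcs file.nc | head") has shown where they start.
The path, offset, and count below are placeholder assumptions to be
replaced with values from your own header, and the code assumes the
host uses IEEE-754 floats (netCDF stores them big-endian):

    /* Read big-endian IEEE floats starting at a known byte offset. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        const char *path = "broken.nc";   /* hypothetical file name */
        long offset = 1024L;              /* data offset, from the header */
        FILE *fp = fopen(path, "rb");
        if (fp == NULL) { perror(path); return 1; }
        if (fseek(fp, offset, SEEK_SET) != 0) {
            perror("seek");
            fclose(fp);
            return 1;
        }

        unsigned char raw[4];
        for (int i = 0; i < 10 && fread(raw, 1, 4, fp) == 4; i++) {
            /* reassemble the big-endian bytes portably, then reinterpret */
            uint32_t bits = ((uint32_t)raw[0] << 24) | ((uint32_t)raw[1] << 16)
                          | ((uint32_t)raw[2] << 8)  |  (uint32_t)raw[3];
            float val;
            memcpy(&val, &bits, sizeof val);
            printf("%g\n", val);
        }
        return fclose(fp);
    }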