This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Hi Birgit,

>To: <address@hidden>
>From: "Birgit Heese" <address@hidden>
>Subject: netcdf data problem
>Organization: MIM
>Keywords: 200311171529.hAHFT5Ob010057

In the above email, you wrote:

> I have a problem in replacing some variables in a netcdf-file. I am
> getting some additional zeros at the end of the data. I am just
> opening the file and checking the variable id and then putting a new
> value for this variable. I am not manipulating the data itself at
> all. Nevertheless, the program is adding zeros behind the data.

Thanks, you have identified a bug. I now have a very simple C program that
demonstrates the bug by changing the value of the only variable in a
one-variable, 72-byte netCDF file; the file grows to 4096 bytes when it is
closed. Apparently the size of the file is being rounded up to the nearest
multiple of 4096 with extra zero bytes, although this does not happen for
files that are created, written, and closed without being rewritten.

Using only the netCDF interface, it is not possible to detect whether a file
has extra zero bytes appended to it. It would be possible to shrink all your
files, getting rid of the extra zero bytes, with a program that truncated
them using the ftruncate() call, if you knew how long they were supposed to
be, but that would not be a good long-term solution :-).

Now that we can reproduce the problem with a simple test case, we'll try to
determine where the bug is and provide a fix. Thanks for reporting the
problem!

--Russ

_____________________________________________________________________
Russ Rew                                        UCAR Unidata Program
address@hidden                http://www.unidata.ucar.edu/staff/russ
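The test program Russ mentions is not included in this archive. The following
is a minimal sketch of what such a reproduction might look like, assuming the
standard netCDF-3 C interface (nc_create, nc_put_var_double, nc_open, and so
on) and a hypothetical file name "test.nc": it creates a one-variable file,
closes it, then reopens the same file, rewrites the variable, and reports the
file size after each close.

/* Sketch of a reproduction: create a tiny one-variable netCDF file,
 * then reopen it and rewrite the variable in place.  With the bug
 * described above, the second reported size jumps to 4096 bytes. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <netcdf.h>

static void check(int status) {
    if (status != NC_NOERR) {
        fprintf(stderr, "netCDF error: %s\n", nc_strerror(status));
        exit(1);
    }
}

static long filesize(const char *path) {
    struct stat st;
    return stat(path, &st) == 0 ? (long) st.st_size : -1L;
}

int main(void) {
    const char *path = "test.nc";   /* hypothetical file name */
    int ncid, dimid, varid;
    double val = 1.0;

    /* Create a file with a single one-element variable and close it. */
    check(nc_create(path, NC_CLOBBER, &ncid));
    check(nc_def_dim(ncid, "x", 1, &dimid));
    check(nc_def_var(ncid, "v", NC_DOUBLE, 1, &dimid, &varid));
    check(nc_enddef(ncid));
    check(nc_put_var_double(ncid, varid, &val));
    check(nc_close(ncid));
    printf("size after create:  %ld bytes\n", filesize(path));

    /* Reopen the file and rewrite the variable without changing anything
     * else; the file size should not change when it is closed again. */
    check(nc_open(path, NC_WRITE, &ncid));
    check(nc_inq_varid(ncid, "v", &varid));
    val = 2.0;
    check(nc_put_var_double(ncid, varid, &val));
    check(nc_close(ncid));
    printf("size after rewrite: %ld bytes\n", filesize(path));

    return 0;
}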
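The ftruncate() workaround mentioned above might look roughly like the
following sketch, assuming the correct length of each file is already known;
computing that length from the netCDF header and variable sizes is left out
here, and the command-line interface is purely illustrative.

/* Sketch of the workaround: shrink a padded file to its expected length
 * with ftruncate().  The expected length must be supplied by the caller. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s file expected_length\n", argv[0]);
        return 1;
    }
    long expected_len = atol(argv[2]);   /* known correct file length */

    int fd = open(argv[1], O_WRONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* Drop any extra zero bytes beyond the expected length. */
    if (ftruncate(fd, (off_t) expected_len) != 0) {
        perror("ftruncate");
        close(fd);
        return 1;
    }
    close(fd);
    return 0;
}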