This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Hi Tim,

> Date: Wed, 19 Nov 2003 15:01:19 -0700 (MST)
> From: Tim Hoar <address@hidden>
> To: <address@hidden>
> Subject: proper netcdf creation practice

In the above email, you wrote:

> Can you point me to the example of the proper way to dynamically append
> to the unlimited dimension?
>
> My writing gets much slower as the length of the unlimited dimension
> (in this case, time) gets larger.
>
> I poked around on the unidata netcdf www-site with no success.

Here's the message on the netcdfgroup mailing list showing that the time taken to append data to a large file depends on the amount of data written, not on the size of the file:

  http://www.unidata.ucar.edu/cgi-bin/msgout?/glimpse/netcdfgroup-list/1455

and the C program referenced there is still available from

  http://www.unidata.ucar.edu/packages/netcdf/jg1.c

This was in response to a complaint that appending data to a file took longer and longer as the file grew, but that turned out to be a bug in the Python interface to netCDF, not in the netCDF library itself:

  http://www.unidata.ucar.edu/cgi-bin/msgout?/glimpse/netcdfgroup-list/1458
  http://www.unidata.ucar.edu/cgi-bin/msgout?/glimpse/netcdfgroup-list/1453

Here was another response that independently showed no slowdown:

  http://www.unidata.ucar.edu/cgi-bin/msgout?/glimpse/netcdfgroup-list/1454

The whole thread of discussion, from most recent to oldest, is available here:

  http://www.unidata.ucar.edu/cgi-bin/aglimpse/82?query=degrades&case=on&whole=on&errors=0&maxfiles=20&maxlines=10

I hope this helps ...

--Russ