This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Sjur,

I just constructed a small test case to time appending a record to a file using the C++ interface when there are a large number of records. I used a variable that requires 1 Mbyte for each record, and timed appending a record when the number of records already in the file was 1, 250, 500, 1000, and 2000. I saw only small differences in the time to append a record, though it's possible that the file was cached in memory, so I wasn't really timing I/O properly. I just tried it again after a few minutes of doing other things, and the time was still the same for appending record 2001. I think the effect you saw should be large enough to show up with 2000 times as many records.

So for now, I can't reproduce the problem, and will assume it's not a bug in netCDF. But thanks for checking and reporting the problem; it's actually good to gain confidence that things seem to be working the way I think they should :-).

--Russ

Russ Rew
UCAR Unidata Program
address@hidden
http://www.unidata.ucar.edu

Ticket Details
===================
Ticket ID: GDB-514857
Department: Support netCDF
Priority: Normal
Status: Closed
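For reference, here is a minimal sketch of the kind of timing test described in the reply above, written against the legacy netCDF-3 C++ interface (netcdfcpp.h). The file name, variable name, per-record size (262144 floats, about 1 Mbyte), and the record-count milestones are illustrative assumptions, not the original test program.

    // Sketch: time appending one record when the file already holds
    // 1, 250, 500, 1000, and 2000 records.  Names and sizes here are
    // illustrative assumptions, not the original test case.
    #include <netcdfcpp.h>
    #include <chrono>
    #include <cstdio>
    #include <vector>

    int main() {
        const long NX = 262144;              // 262144 floats ~ 1 Mbyte per record
        NcFile nc("append_test.nc", NcFile::Replace);
        if (!nc.is_valid()) {
            std::fprintf(stderr, "couldn't create append_test.nc\n");
            return 1;
        }
        NcDim* rec = nc.add_dim("rec");      // unlimited (record) dimension
        NcDim* x   = nc.add_dim("x", NX);
        NcVar* var = nc.add_var("data", ncFloat, rec, x);

        std::vector<float> buf(NX, 1.0f);
        for (long r = 0; r <= 2000; r++) {
            auto t0 = std::chrono::steady_clock::now();
            var->put_rec(&buf[0], r);        // append one record at index r
            nc.sync();                       // flush, to reduce the chance of
                                             // timing only cached writes
            auto t1 = std::chrono::steady_clock::now();
            if (r == 1 || r == 250 || r == 500 || r == 1000 || r == 2000) {
                double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
                std::printf("append with %ld records already in file: %.3f ms\n", r, ms);
            }
        }
        return 0;
    }

The nc.sync() call is only there to address the caveat in the reply, that the file may have been cached in memory; without it, the measured times may mostly reflect the operating system's file cache rather than disk I/O. Build and link against the legacy C++ library (typically -lnetcdf_c++ -lnetcdf); the newer netCDF-4 C++ API (netcdf-cxx4) uses a different class hierarchy.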