This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Hi Sjur,

After replying, I just thought of one more possibility that might explain why you were seeing increasing times as you added more records. If you were also changing the schema of the netCDF file, by adding new attributes, dimensions, or variables as you added new records, that would definitely cause a slowdown.

There is a fixed amount of space for the file schema and attributes at the beginning of the file. When more space is required for the file schema (also called the "file header"), the data is recopied after more space is allocated.

This is easy to identify in the C interface, where you have to re-enter "define mode" to change the schema, but the C++ interface tries to hide "define mode" from the user (probably a mistake) by making the calls for you to enter and leave define mode when adding new variables, dimensions, or attributes, renaming anything with longer names, or extending the length of an attribute. So just putting a longer string value in an attribute might be enough to make the I/O appear to slow down, since the data would be recopied when you extended the attribute value to require more space in the file header.

Is it possible you are doing something like that when you add records, making it appear as if writing the records slows down, when actually the time is taken by an implicit call to nc_enddef() in the C interface to rewrite the file header and all the data?

--Russ

Russ Rew
UCAR Unidata Program
address@hidden
http://www.unidata.ucar.edu

Ticket Details
===================
Ticket ID: GDB-514857
Department: Support netCDF
Priority: Normal
Status: Closed