This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
> Barry,
>
> > Since the original question was about a 100 x 100 16-bit integer
> > array, can I linearly extrapolate the 0.02 seconds that you provided
> > to get 0.13 seconds for a 256 x 256 array and 2.1 seconds for a
> > 1024 x 1024 array? Or is the extrapolation non-linear, and what
> > would the correct numbers be?

Howdy Barry!

It is our experience that the netCDF classic library, like the HDF5
library, performs well for large-scale data reads and writes. Both
libraries approach the speeds you would get doing raw binary writes
from a C program.

So for a rough order of magnitude, you could simply write a sample data
file on your test system and get much better numbers than those Russ
has given you. I strongly suspect that the differences between your
target system and the system Russ uses will outweigh the time netCDF
takes to write such a small amount of data. That is, your memory and
disk setup will matter more to your performance than anything going on
inside the netCDF library.

Thanks,
Ed

Ticket Details
===================
Ticket ID: CFH-512758
Department: Support netCDF
Priority: Normal
Status: Closed
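For reference, the 0.13 s and 2.1 s figures in the question are exactly what
linear scaling in element count predicts from the 0.02 s baseline: a 256 x 256
array has 256^2 / 100^2 ~= 6.6 times as many elements, and a 1024 x 1024 array
about 105 times as many. Below is a minimal sketch of the sample-file test Ed
suggests, assuming the standard netCDF C API and POSIX clock_gettime() for
wall-clock timing; the output file name, dimension and variable names, and the
array size N are illustrative choices, not anything from the ticket. It writes
an N x N array of 16-bit integers to a classic-format file and reports the
elapsed time.

    /* Sketch of a netCDF classic write-timing test. Build with something
     * like: cc timing_test.c -o timing_test -lnetcdf */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <netcdf.h>

    #define N 1024                     /* try 100, 256, 1024 to check scaling */
    #define FILE_NAME "timing_test.nc" /* hypothetical output file name */

    /* Abort with a message if a netCDF call fails. */
    #define CHECK(call) do {                                          \
        int rc_ = (call);                                             \
        if (rc_ != NC_NOERR) {                                        \
            fprintf(stderr, "netCDF error: %s\n", nc_strerror(rc_));  \
            exit(EXIT_FAILURE);                                       \
        }                                                             \
    } while (0)

    int main(void)
    {
        static short data[N][N];  /* static so large N won't blow the stack */
        int ncid, dimids[2], varid;
        struct timespec t0, t1;

        /* Fill the array with arbitrary 16-bit values. */
        for (size_t i = 0; i < N; i++)
            for (size_t j = 0; j < N; j++)
                data[i][j] = (short)((i + j) % 32768);

        clock_gettime(CLOCK_MONOTONIC, &t0);

        /* Create a classic-format file, define one N x N short variable. */
        CHECK(nc_create(FILE_NAME, NC_CLOBBER, &ncid));
        CHECK(nc_def_dim(ncid, "x", N, &dimids[0]));
        CHECK(nc_def_dim(ncid, "y", N, &dimids[1]));
        CHECK(nc_def_var(ncid, "data", NC_SHORT, 2, dimids, &varid));
        CHECK(nc_enddef(ncid));

        /* Write the whole array in one call, then close (which flushes). */
        CHECK(nc_put_var_short(ncid, varid, &data[0][0]));
        CHECK(nc_close(ncid));

        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("Wrote %d x %d shorts in %.4f s\n", N, N,
               (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
        return EXIT_SUCCESS;
    }

Running this with N set to 100, 256, and 1024 on the target system gives
directly comparable numbers and shows whether write time actually scales
linearly with element count on that hardware.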