David,

The nf90_close() function does flush buffers in user space and call close(2) on the file descriptor, but that doesn't necessarily mean all the data has been written to disk yet. I still think the behavior you are seeing, since it's erratic and affected by other I/O happening on the system at the same time, is an artifact of the way the operating system schedules and buffers writes to disk. I suspect that if you used plain binary writes instead of netCDF writes, accessing disk space the same way, you would see similar delays and inconsistencies. It's hard to diagnose a performance problem that's not exactly repeatable like this one, but I'm skeptical that it's something we can fix in netCDF.

I appreciate the effort you've put into creating an example program that demonstrates the problem, and after translating your program to C, I've found it useful for benchmarking chunked data access in netCDF-4. The program runs remarkably faster using netCDF-4 when the chunking parameters are set to store the data in 32x64x64x1 chunks, with none of the occasional delays you see in the contiguous storage case with netCDF-3, because it writes each 32x64x64 section with a single write() call rather than with 64x64 write() calls of 32 floats each. The wall-clock times I saw were 49 minutes on local disk with netCDF-3 versus 17 seconds using netCDF-4 with chunked storage. One new call sets the chunk parameters (see the sketch at the end of this message); all subsequent netCDF write and read calls are the same as you already use.

This faster write performance comes at the expense of somewhat slower read performance when accessing the data contiguously, since 8 reads would now be required to get a 256-element row (column in Fortran), instead of one read for all 256 values. But chunking the data means it can be accessed moderately quickly along any dimension, instead of quickly along only one dimension.

You may not be interested in using netCDF-4 now, since it's only available as a beta release, but it does provide complete backward compatibility with the netCDF-3 API as well as with the netCDF-3 format.

I'm not completely done looking at this example, so if I find anything else about it in the future, I'll let you know. For the next week or so, I'll be involved in the AMS meeting ...

--Russ

Russ Rew                        UCAR Unidata Program
address@hidden                  http://www.unidata.ucar.edu

Ticket Details
===================
Ticket ID: NNS-961385
Department: Support netCDF
Priority: Normal
Status: Closed
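
The sketch referred to above: a minimal illustration of the one new chunking call in the Fortran 90 API. The file name, dimension names and sizes, and variable name are hypothetical stand-ins (a 256x64x64 grid plus an unlimited time dimension), and error checking is omitted for brevity; only the nf90_def_var_chunking call is new relative to netCDF-3 code, and it must appear before nf90_enddef.

    program chunking_sketch
      use netcdf
      implicit none
      integer :: ncid, x_dim, y_dim, z_dim, t_dim, varid, status

      ! Hypothetical grid: 256 x 64 x 64 plus an unlimited time dimension.
      status = nf90_create("example.nc", NF90_NETCDF4, ncid)
      status = nf90_def_dim(ncid, "x", 256, x_dim)
      status = nf90_def_dim(ncid, "y", 64, y_dim)
      status = nf90_def_dim(ncid, "z", 64, z_dim)
      status = nf90_def_dim(ncid, "time", NF90_UNLIMITED, t_dim)
      status = nf90_def_var(ncid, "var", NF90_FLOAT, &
                            (/ x_dim, y_dim, z_dim, t_dim /), varid)

      ! The one new call: store the variable in 32x64x64x1 chunks instead
      ! of contiguous order.  Every subsequent nf90_put_var / nf90_get_var
      ! call is unchanged from the netCDF-3 version of the program.
      status = nf90_def_var_chunking(ncid, varid, NF90_CHUNKED, &
                                     (/ 32, 64, 64, 1 /))

      status = nf90_enddef(ncid)
      ! ... nf90_put_var calls as before ...
      status = nf90_close(ncid)
    end program chunking_sketch

With that chunk shape, each 32x64x64 section mentioned above maps to exactly one chunk, which is why it can be written with a single write() call.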