Randy,

> We had a very detailed question on the use of fletcher32 checksums
> that I thought would best be answered by the HDF5 folks. See below.
> The question is:
>
> When writing "chunks" and using Fletcher checksums, are there any
> situations where the HDF5 API (or netCDF API) will do a read of a
> "chunk" under the covers when an application is writing a netCDF-4
> file?

Yes, there are three such situations:

1. Overwriting some existing data, for example correcting a fill value
   or some data written incorrectly. In that case, the existing data is
   read in (with its checksum verified) and then overwritten, creating
   a new checksum for each affected chunk of the data. (See the first
   sketch below.)

2. Extending data along one of its unlimited dimensions. Such
   extensions can require filling in more data in partially written
   chunks, so the partial chunks must first be read (with checksums
   verified) and extended, creating a new checksum for each affected
   chunk of data. (See the second sketch below.)

3. Writing data in an order that does not fill whole chunks before they
   are ejected from the chunk cache. This is like case 2 above: the
   partially written chunk has to be reread into the cache (verifying
   its checksum), more data is written into it while it's still in the
   cache, and a new checksum is generated before it is ejected from the
   cache or flushed to disk. (See the third sketch below.)

At least that's my understanding ...

--Russ

Russ Rew                                UCAR Unidata Program
address@hidden                          http://www.unidata.ucar.edu

Ticket Details
===================
Ticket ID: FKU-129886
Department: Support netCDF
Priority: Normal
Status: Closed
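
The first sketch below illustrates case 1 using the netCDF-C API. The
file name "checksum_demo.nc", the variable "v", the 4x4 chunk shape,
and the data values are illustrative assumptions, not details from the
original question. It closes and reopens the file before the correction
so the affected chunk is guaranteed not to be in the chunk cache.

/* Case 1: overwriting existing data in a chunked variable with
 * fletcher32 checksums enabled forces a read-modify-write of the
 * affected chunk. */
#include <stdio.h>
#include <stdlib.h>
#include <netcdf.h>

#define CHECK(e) do { int s_ = (e); if (s_ != NC_NOERR) { \
    fprintf(stderr, "netCDF error: %s\n", nc_strerror(s_)); \
    exit(EXIT_FAILURE); } } while (0)

int main(void) {
    int ncid, dimids[2], varid;
    size_t chunks[2] = {4, 4};          /* 4x4 chunks (illustrative) */
    size_t idx[2] = {2, 3};             /* element to correct */
    float data[8][8], fix = -1.0f;

    /* Create a netCDF-4 file with a chunked, checksummed variable. */
    CHECK(nc_create("checksum_demo.nc", NC_NETCDF4 | NC_CLOBBER, &ncid));
    CHECK(nc_def_dim(ncid, "y", 8, &dimids[0]));
    CHECK(nc_def_dim(ncid, "x", 8, &dimids[1]));
    CHECK(nc_def_var(ncid, "v", NC_FLOAT, 2, dimids, &varid));
    CHECK(nc_def_var_chunking(ncid, varid, NC_CHUNKED, chunks));
    CHECK(nc_def_var_fletcher32(ncid, varid, NC_FLETCHER32));
    CHECK(nc_enddef(ncid));

    for (int j = 0; j < 8; j++)
        for (int i = 0; i < 8; i++)
            data[j][i] = (float)(j * 8 + i);
    CHECK(nc_put_var_float(ncid, varid, &data[0][0]));
    CHECK(nc_close(ncid));              /* flush all chunks to disk */

    /* Correct one value: the affected chunk is no longer cached, so
     * HDF5 reads it back (verifying its Fletcher checksum), patches the
     * value, and rewrites the chunk with a freshly computed checksum. */
    CHECK(nc_open("checksum_demo.nc", NC_WRITE, &ncid));
    CHECK(nc_inq_varid(ncid, "v", &varid));
    CHECK(nc_put_var1_float(ncid, varid, idx, &fix));
    CHECK(nc_close(ncid));
    return 0;
}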
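
The second sketch illustrates case 2: appending records along an
unlimited dimension with a chunk size of four records. Names and sizes
are again illustrative. Closing the file between appends flushes each
partially filled chunk, so every later append into that chunk triggers
the read-verify-extend-rewrite cycle described above.

/* Case 2: appending along an unlimited dimension when each record
 * fills only part of a chunk. */
#include <stdio.h>
#include <stdlib.h>
#include <netcdf.h>

#define CHECK(e) do { int s_ = (e); if (s_ != NC_NOERR) { \
    fprintf(stderr, "netCDF error: %s\n", nc_strerror(s_)); \
    exit(EXIT_FAILURE); } } while (0)

int main(void) {
    int ncid, dimids[2], varid;
    size_t chunks[2] = {4, 8};          /* 4 records per chunk along "t" */
    size_t start[2] = {0, 0}, count[2] = {1, 8};
    float record[8] = {0.0f};           /* one record of data */

    CHECK(nc_create("append_demo.nc", NC_NETCDF4 | NC_CLOBBER, &ncid));
    CHECK(nc_def_dim(ncid, "t", NC_UNLIMITED, &dimids[0]));
    CHECK(nc_def_dim(ncid, "x", 8, &dimids[1]));
    CHECK(nc_def_var(ncid, "v", NC_FLOAT, 2, dimids, &varid));
    CHECK(nc_def_var_chunking(ncid, varid, NC_CHUNKED, chunks));
    CHECK(nc_def_var_fletcher32(ncid, varid, NC_FLETCHER32));
    CHECK(nc_enddef(ncid));
    CHECK(nc_close(ncid));

    /* Append six records, one per open/close cycle.  Records 1-3 land
     * in a chunk that already holds earlier records, so HDF5 must read
     * the partial chunk (checking its checksum), extend it, and write
     * it back with a new checksum; record 4 starts a fresh chunk. */
    for (size_t t = 0; t < 6; t++) {
        CHECK(nc_open("append_demo.nc", NC_WRITE, &ncid));
        CHECK(nc_inq_varid(ncid, "v", &varid));
        start[0] = t;
        CHECK(nc_put_vara_float(ncid, varid, start, count, record));
        CHECK(nc_close(ncid));
    }
    return 0;
}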
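
Finally, a sketch for case 3. Writing a 256x256 variable one column at
a time touches four 64x64 chunks per column, leaving all of them
partially filled until 64 columns have gone by. A variable this small
still fits in typical default chunk caches, but for realistically large
variables this access pattern evicts partial chunks and forces the
reread-and-rechecksum cycle; nc_set_var_chunk_cache (shown here with
purely illustrative parameters) is the knob for keeping partial chunks
resident.

/* Case 3: a write order that leaves many chunks partially filled.
 * Enlarging the per-variable chunk cache keeps them resident so they
 * are not ejected, reread, and rechecksummed mid-write. */
#include <stdio.h>
#include <stdlib.h>
#include <netcdf.h>

#define CHECK(e) do { int s_ = (e); if (s_ != NC_NOERR) { \
    fprintf(stderr, "netCDF error: %s\n", nc_strerror(s_)); \
    exit(EXIT_FAILURE); } } while (0)

int main(void) {
    int ncid, dimids[2], varid;
    size_t chunks[2] = {64, 64};            /* 4x4 grid of chunks */
    size_t start[2] = {0, 0}, count[2] = {256, 1};
    static float column[256];               /* one column, zero-filled */

    CHECK(nc_create("cache_demo.nc", NC_NETCDF4 | NC_CLOBBER, &ncid));
    CHECK(nc_def_dim(ncid, "y", 256, &dimids[0]));
    CHECK(nc_def_dim(ncid, "x", 256, &dimids[1]));
    CHECK(nc_def_var(ncid, "v", NC_FLOAT, 2, dimids, &varid));
    CHECK(nc_def_var_chunking(ncid, varid, NC_CHUNKED, chunks));
    CHECK(nc_def_var_fletcher32(ncid, varid, NC_FLETCHER32));

    /* Make the cache large enough to hold every partially written chunk
     * of this variable (sizes here are illustrative, not advice). */
    CHECK(nc_set_var_chunk_cache(ncid, varid,
                                 8 * 1024 * 1024, /* cache size, bytes */
                                 1009,            /* hash slots (prime) */
                                 0.75f));         /* preemption, 0..1 */
    CHECK(nc_enddef(ncid));

    /* Each column write touches a 64x1 strip of four different chunks,
     * so four chunks stay partially written until 64 columns are done. */
    for (size_t x = 0; x < 256; x++) {
        start[1] = x;
        CHECK(nc_put_vara_float(ncid, varid, start, count, column));
    }
    CHECK(nc_close(ncid));
    return 0;
}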