This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Hi,

Some file systems are much slower rewriting than they are writing in the first place, often because rewriting requires a read/modify/write if the whole block isn't being written (which it probably wasn't, given the description of your workload). Other file systems are slow if blocks haven't already been allocated, so there's no one "right" way to get the best performance...

Regards,
Rob

On Fri, 11 Jun 2004, James Garnett wrote:

> Thanks to a nudge from Russ Rew, I've solved the problem.
>
> I can now write files right up to the 2GB limit without problems.
>
> Using nc_set_fill() with the parameter NC_NOFILL makes the problem go away.
> I can't claim to understand why, but it works. Also, this has the added
> benefit of creating the file very quickly, since the library
> doesn't fill in the variable section of the file at creation time.
>
> I would have thought that a file prefilled with dummy data could be written to
> faster than a file that has not been prefilled, but I'm apparently wrong.
> I don't know if this situation is specific to Windows, or if it will show
> up on other platforms as well.
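For reference, a minimal sketch of the pattern James describes, using the netCDF C API: switch the dataset to NC_NOFILL before leaving define mode, so the library does not prefill the variable's data section, then write the variable in sequential slabs so each file block is written exactly once. The file name, dimension size, and slab size are illustrative only and not taken from the original exchange.

    #include <stdio.h>
    #include <stdlib.h>
    #include <netcdf.h>

    /* Abort with the netCDF error message if a call fails. */
    static void check(int status)
    {
        if (status != NC_NOERR) {
            fprintf(stderr, "netCDF error: %s\n", nc_strerror(status));
            exit(1);
        }
    }

    int main(void)
    {
        int ncid, dimid, varid, old_fill_mode;
        size_t start[1], count[1];
        static float slab[1000000];     /* one write's worth of data (~4 MB) */
        size_t nslabs = 500;            /* illustrative: ~2 GB total, near the classic-format limit */

        check(nc_create("big.nc", NC_CLOBBER, &ncid));

        /* Turn off prefilling: the variable is not initialized with fill
           values, so creation is fast and no block is fill-then-rewritten. */
        check(nc_set_fill(ncid, NC_NOFILL, &old_fill_mode));

        check(nc_def_dim(ncid, "n", nslabs * 1000000, &dimid));
        check(nc_def_var(ncid, "data", NC_FLOAT, 1, &dimid, &varid));
        check(nc_enddef(ncid));

        /* Write the variable in sequential slabs. */
        for (size_t i = 0; i < nslabs; i++) {
            start[0] = i * 1000000;
            count[0] = 1000000;
            check(nc_put_vara_float(ncid, varid, start, count, slab));
        }

        check(nc_close(ncid));
        return 0;
    }

The trade-off with NC_NOFILL is that any region of the variable the program never writes contains whatever bytes happen to be on disk rather than the fill value, so it is best suited to workloads that write every value, as in the case described above.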