This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Hi Jeff,

How chunking and compression affect file size and read/write performance is a complex issue. I'm going to pass this along to our chunking expert (Russ Rew), who I believe is back in the office on Monday and should be able to provide you with better advice than I can. In the meantime, here's an email he wrote in response to a conversation on the effect of chunking on performance that might be useful:

http://www.unidata.ucar.edu/mailing_lists/archives/netcdfgroup/2013/msg00498.html

Sorry I don't have a better answer for you.

Ethan

Jeff Johnson wrote:
> Ethan-
>
> I made the changes you suggested with the following result:
>
> 10000 records, 8 bytes / record = 80000 bytes raw data
>
> original program (NetCDF4, no chunking): 537880 bytes (6.7x)
> file size with chunk size of 2000 = 457852 bytes (5.7x)
>
> So a little better, but still not good. I then tried different chunk sizes
> of 10000, 5000, 200, and even 1, which I would've thought would give me the
> original size, but all gave the same resulting file size of 457852.
>
> Finally, I tried writing more records to see if it's just a symptom of a
> small data set. With 1M records:
>
> 8 MB raw data, chunk size = 2000
> 45.4 MB file (5.7x)
>
> This is starting to seem like a lost cause given our small data records.
> I'm wondering if you have information I could use to go back to the archive
> group and try to convince them to use NetCDF3 instead.
>
> jeff

Ticket Details
===================
Ticket ID: BNA-191717
Department: Support netCDF
Priority: Normal
Status: Open
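
For readers of this archived exchange, here is a minimal C sketch of the kind of setup being discussed: defining a record variable with an explicit chunk size of 2000 and deflate compression through the netCDF-4 C API. It is illustrative only; the file, dimension, and variable names ("records.nc", "record", "value") and the compression settings are assumptions for the example, not details taken from Jeff's program.

/*
 * Minimal sketch (illustrative only): create a netCDF-4 file with one
 * unlimited record dimension, set an explicit chunk size of 2000 records
 * on a double variable, and enable deflate (zlib) compression.
 */
#include <stdio.h>
#include <stdlib.h>
#include <netcdf.h>

#define CHECK(e) do { int _s = (e); if (_s != NC_NOERR) { \
    fprintf(stderr, "netCDF error: %s\n", nc_strerror(_s)); exit(1); } } while (0)

int main(void)
{
    int ncid, dimid, varid;
    size_t chunksize[1] = {2000};   /* chunk length along the record dimension */

    CHECK(nc_create("records.nc", NC_CLOBBER | NC_NETCDF4, &ncid));
    CHECK(nc_def_dim(ncid, "record", NC_UNLIMITED, &dimid));
    CHECK(nc_def_var(ncid, "value", NC_DOUBLE, 1, &dimid, &varid));

    /* Per-variable chunking: without this call, the library picks a
       default chunk size for variables along an unlimited dimension. */
    CHECK(nc_def_var_chunking(ncid, varid, NC_CHUNKED, chunksize));

    /* shuffle=1, deflate=1, level=4: compression applies only to chunked
       (netCDF-4) variables, which is part of the size trade-off above. */
    CHECK(nc_def_var_deflate(ncid, varid, 1, 1, 4));

    CHECK(nc_enddef(ncid));

    /* Write 10000 records in a single block (matching the small test case). */
    {
        size_t start[1] = {0}, count[1] = {10000};
        double *data = malloc(count[0] * sizeof(double));
        for (size_t i = 0; i < count[0]; i++)
            data[i] = (double)i;
        CHECK(nc_put_vara_double(ncid, varid, start, count, data));
        free(data);
    }

    CHECK(nc_close(ncid));
    return 0;
}

Compile with something like: cc example.c -o example -lnetcdf. The mailing-list message linked above discusses the chunking and compression trade-offs behind the file sizes reported in this ticket.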