This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Hi Nick,

> I am running some tests with a code I am converting from using a flat
> file to netcdf/hdf5. I am using the parallel MPIIO access mode, so I am
> unable to use the deflation calls via the API. I thought I would use
> nccopy -d9 as a post-process on my files to compress them, and therefore
> get some space saving whilst still retaining the ability to do a parallel
> read in other related codes.
>
> However, I find that I get quite poor compression using nccopy, much
> worse than I get if I use the API call. In some cases, nccopy -d9 gives
> little or no compression, whilst using the API gives me 4-5x compression.
>
> Is this something you would expect, or am I missing something critical
> in this case?

No, you should expect exactly the same compression using nccopy as with
the API calls. nccopy calls the API for each variable in the file with
whatever compression level you specify. The API calls are somewhat more
flexible, in that you can specify a different level of compression (or no
compression) for each variable separately, but if you use the same
compression for every variable, there should be no difference.

If you are seeing something different, it sounds like a bug. Can you
provide a sample file that we could use to reproduce the problem and
diagnose the cause?

--Russ

Russ Rew                                         UCAR Unidata Program
address@hidden                      http://www.unidata.ucar.edu

Ticket Details
===================
Ticket ID: PBW-682100
Department: Support netCDF
Priority: Normal
Status: Closed
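The comparison the reply suggests can be checked from the command line. A minimal sketch, assuming the `nccopy` and `ncdump` utilities from a netCDF install are on the PATH; the file name `parallel_out.nc` is a placeholder for the file written by the parallel code:

```shell
# Compress every variable in the file at deflate level 9 (the nccopy
# counterpart of calling nc_def_var_deflate on each variable via the API).
nccopy -d9 parallel_out.nc compressed.nc

# Show header plus special storage attributes; the -s flag prints the
# per-variable _DeflateLevel and _ChunkSizes that were actually applied.
ncdump -hs compressed.nc

# Compare on-disk sizes before and after compression.
ls -l parallel_out.nc compressed.nc
```

If `_DeflateLevel = 9` appears for each variable but the file barely shrinks, the `ncdump -hs` output (particularly the chunk sizes) would be useful diagnostic detail to attach to a bug report along with the sample file.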