This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Hi George,

No, ncdump dumps the whole file, unless you explicitly select a subset of variables to dump with the "-v varlist" option, or explicitly dump only the header metadata with the "-h" or "-c" option.

However, if you have a lot of values in the file that are fill values, either because they were never explicitly written or because they were written as fill values (which can be specified using the "_FillValue" attribute, or which default to a type-specific fill value), then ncdump displays each such value as an underscore character, for example:

 data:
  var1 = _, _, _, _, ... ;

If the original file had lots of doubles (8 bytes each) that were displayed in the ncdump output as fill values (about 3 bytes each, counting the comma and space), the output could approach 37.5% of the size of the original file (3 bytes / 8 bytes = 37.5%), but there is no way a 111 MB file could produce an ncdump output of only 550K. So I agree, something is wrong.

You should be able to compute the size of the netCDF file from just the information in the header (a sketch of that check follows this message). It's also possible that lots of bytes have been appended onto the end of the file; there would be no way to detect that through the netCDF interface alone.

If you're still puzzled, compare the output of "ncdump -c" on the original file with the output of "ncdump -c" on the ncgen'ed file and see what the difference is.

--Russ

Russ Rew                                  UCAR Unidata Program
address@hidden                            http://www.unidata.ucar.edu

Ticket Details
===================
Ticket ID: GCF-527136
Department: Support netCDF
Priority: Normal
Status: Closed
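To illustrate the header-based size check Russ suggests, here is a minimal sketch in Python. It assumes the netCDF4 Python module and a classic-format (netCDF-3) file without compression, since compressed netCDF-4/HDF5 data makes the on-disk size unpredictable from the header alone. The file name is hypothetical, and the total ignores the (small) header overhead and any record padding, so it is a lower bound on the expected file size.

    import os
    import numpy as np
    from netCDF4 import Dataset

    def declared_data_size(path):
        """Total the bytes of variable data declared in the file's header."""
        total = 0
        with Dataset(path) as nc:
            for var in nc.variables.values():
                # Bytes per variable = number of elements * bytes per element.
                total += int(np.prod(var.shape, dtype=np.int64)) * var.dtype.itemsize
        return total

    path = "original.nc"  # hypothetical file name
    declared = declared_data_size(path)
    actual = os.path.getsize(path)
    print("data declared in header:", declared, "bytes")
    print("actual size on disk:    ", actual, "bytes")
    # A large surplus in the on-disk size would suggest extra bytes
    # appended past the end of the netCDF data, which the netCDF
    # interface cannot see.

If the actual size on disk greatly exceeds the declared data size, that is consistent with the suspicion above that extra bytes were concatenated onto the end of the file.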