This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Hi Jerry,

> I am converting an ASCII dataset that has float values into a netCDF file.
> I am facing an issue I am trying to solve, which is to limit the number of
> digits after the decimal point, as I have only 4 significant digits
> scientifically. If I choose to use float as the storage type, the
> conversion process adds some digits to the end.
>
> My questions: 1) is it possible to limit the number of digits after the
> decimal point for "float" or "double" values in the file, and 2) if not,
> is there a standard attribute to indicate that the significant digits are
> 4? I have seen something like the c_format and f_format attributes for C-
> or Fortran-based clients, but is there one attribute that indicates this
> property when the nc file is accessed? By the way, I know I could do
> "compact" and use an integer and a scale factor to achieve that, but I
> want to know whether there is a general solution for floating-point data.

There's nothing in the netCDF library that will change the number of significant digits used to store a value, and the values are stored in binary rather than as decimal digits.

The C_format and FORTRAN_format attributes are defined in the Attribute Conventions section of the NetCDF Users Guide:

  http://www.unidata.ucar.edu/netcdf/docs/netcdf.html#Attribute-Conventions

They indicate to applications how many digits of precision to display, but whether an application uses that information is up to the application. The "ncdump" utility honors the C_format attribute and displays only the number of digits specified.

It's also possible to use the C pow(), floor(), and round() functions, with a little arithmetic, to limit the number of significant digits a floating-point number represents, but numbers like "0.1" can't be represented exactly in binary, so your intention may still be thwarted.

If you want to limit the precision with which the data is stored, the only way I know to do that is, as you pointed out, by packing the data and indicating the packing parameters with the "scale_factor" and "add_offset" variable attributes, as discussed in the same section.

--Russ

Russ Rew
UCAR Unidata Program
address@hidden
http://www.unidata.ucar.edu

Ticket Details
===================
Ticket ID: JNE-101118
Department: Support netCDF
Priority: Normal
Status: Closed
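
A minimal C sketch of the rounding arithmetic described in the reply above: the helper name round_to_sig_digits and the use of log10() are illustrative additions, and the result is still a binary float, so values such as 0.1 remain inexact even after rounding.

    #include <math.h>

    /* Round x to n significant decimal digits using pow(), floor(), and
     * round().  The rounded value is still stored in binary, so it is
     * generally only an approximation of the rounded decimal value. */
    static double round_to_sig_digits(double x, int n)
    {
        if (x == 0.0)
            return 0.0;
        /* Scale so that the leading n digits sit left of the decimal point. */
        double scale = pow(10.0, n - 1 - (int) floor(log10(fabs(x))));
        return round(x * scale) / scale;
    }

For example, round_to_sig_digits(0.123456, 4) yields approximately 0.1235, and round_to_sig_digits(123456.0, 4) yields 123500.0.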
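
Similarly, a sketch of attaching the packing and display-format attributes with the netCDF C API: the ncid and varid handles, the particular scale_factor and add_offset values, and the "%.4g" format string are assumptions for illustration, and error checking is omitted.

    #include <string.h>
    #include <netcdf.h>

    /* Assume ncid and varid refer to an open netCDF file and a packed
     * (e.g. NC_SHORT) variable created elsewhere; the attribute values
     * below are examples only. */
    void add_packing_attributes(int ncid, int varid)
    {
        float scale_factor = 0.0001f;
        float add_offset   = 0.0f;
        const char *fmt    = "%.4g";  /* hint to ncdump: show 4 significant digits */

        /* Writers pack a value x as round((x - add_offset) / scale_factor);
         * readers recover it as packed * scale_factor + add_offset. */
        nc_put_att_float(ncid, varid, "scale_factor", NC_FLOAT, 1, &scale_factor);
        nc_put_att_float(ncid, varid, "add_offset", NC_FLOAT, 1, &add_offset);
        nc_put_att_text(ncid, varid, "C_format", strlen(fmt), fmt);
    }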