> We have both functions so that users can read data into either signed
> or unsigned arrays of char without requiring an ugly cast. If we only
> had nc_get_att_schar() and a user wanted to read NC_BYTE data into an
> array of unsigned char, they would have to use a cast or get a
> compiler complaint.

But as a user looking at a new file, there's no way for me to tell whether
I am dealing with signed or unsigned data? I just have to know that in
advance? Perhaps, someday, we should consider adding a type to netCDF-4 to
allow us to tell the difference? We are saying, are we not, that NC_CHAR is
to be used exclusively for text strings? Or should that also be used for
unsigned char, leaving NC_BYTE to always mean signed data?

> In either case the same 8 bits are read into the same location in
> memory, but we have to provide both schar and uchar versions to allow
> the user to treat byte data as either signed or unsigned. No
> conversion takes place reading/writing a signed or unsigned char in
> memory to or from a byte on disk, so users can still treat NC_BYTE
> data as unsigned char if they want to. To allow them to do this
> without a cast, we provide the convenience function.

I understand that no conversion takes place. In terms of checking for
range errors, as in going from an int to an NC_BYTE, my understanding was
that I treat it as always signed. Is that right?

> For the same reason, we provide both nc_get_var_schar() and
> nc_get_var_uchar(), and similarly for the corresponding put_var
> functions.

Yes.
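For example, a minimal sketch along these lines (the file name "data.nc", variable "t", and two-element NC_BYTE attribute "valid_range" below are hypothetical) reads the same attribute into a signed and an unsigned array without any cast:

#include <stdio.h>
#include <stdlib.h>
#include <netcdf.h>

/* Bail out with the library's error message if a netCDF call fails. */
static void check(int status)
{
    if (status != NC_NOERR) {
        fprintf(stderr, "%s\n", nc_strerror(status));
        exit(1);
    }
}

int main(void)
{
    int ncid, varid;
    signed char   srange[2];   /* the stored bytes viewed as signed */
    unsigned char urange[2];   /* the same bytes viewed as unsigned */

    /* Hypothetical file, variable, and NC_BYTE attribute names. */
    check(nc_open("data.nc", NC_NOWRITE, &ncid));
    check(nc_inq_varid(ncid, "t", &varid));

    /* The same attribute read twice: identical 8-bit values land in
       memory; only the declared type of the destination array differs. */
    check(nc_get_att_schar(ncid, varid, "valid_range", srange));
    check(nc_get_att_uchar(ncid, varid, "valid_range", urange));

    printf("as signed:   %d %d\n", srange[0], srange[1]);
    printf("as unsigned: %u %u\n", (unsigned)urange[0], (unsigned)urange[1]);

    check(nc_close(ncid));
    return 0;
}

As the quoted reply says, the bits are simply copied in both cases; the schar and uchar variants exist only so neither view of the data requires a cast.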