This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
>To: address@hidden
>From: Ben Foster <address@hidden>
>Subject: Re: 20020905: transition from crays to ibm
>Organization: NCAR/HAO
>Keywords: external vs. internal data types

Ben and Rashid,

> On the IBMs we use -qrealsize=8 (8-byte reals, allowing default
> 4-byte integers) and link the r4i4 netcdf library.  When we are
> ready to open a history file, we tell netcdf to define fields going
> onto the history as "doubles", i.e., 8-byte reals:
>
>   istat = nf_def_var(ncid,f%short_name,NF_DOUBLE,4,idimids,idvar)
>
> Then when it comes time to write values to the file, we give it
> 8-byte reals (which were declared "real" in the model):
>
>   istat = nf_put_vara_double(ncid,idvar,start_4d,count_4d,f3d)
>
> The r4i4 netcdf library can do i/o on either 4-byte or 8-byte
> reals.  The fact that the library was compiled with -r4 does not
> affect this ability (Russ: is this strictly true?). ...

Yes, the numeric type of the internal data that is written to a netCDF
file (int, real, double) is independent of the type of the netCDF
variable on disk to which it is written (NF_BYTE, NF_SHORT, NF_INT,
NF_REAL, NF_DOUBLE).  Conversion happens during the write.  If you try
to store 8-byte internal reals into a 4-byte external NF_REAL variable,
you will lose precision.  The only thing affected by the "-r4" with
which the library was compiled is whether internal REAL variables are
4-byte or 8-byte.

> ... When we do
> post-processing, we still link the r4i4 library, compile with -r4,
> and read the doubles into the 4-byte reals with nf_get_vara_real(...).
> For vis, we accept the loss of precision from 8-byte to 4-byte.
> This is probably what your IDL code is doing also.
>
> I'm assuming this is what most large models are doing (e.g., ccm,
> csm, etc.), so I'm surprised the consultants suggested that you
> rebuild the netcdf library yourself!  (I am cc'ing the NCAR
> consultants and Russ Rew.)
>
> --Ben
>
> >Date: Thu, 05 Sep 2002 11:06:33 -0600
> >From: "Rashid Akmaev" <address@hidden>
> >To: Ben Foster <address@hidden>
> >Subject: Re: transition from crays to ibm
> >
> >Hi Ben,
> >
> >I of course waited till the last moment to move from chipeta to bf.
> >At least I was able to convert most of my model input/output to
> >netCDF, as you recommended.  It worked fine, for example, with IDL
> >here.  Now I've just found that the corresponding netCDF library on
> >bf is available only in an r4i4 build.  Naturally, I'd like to run
> >everything at the r8 size to imitate the CRAYs (I'm not sure about
> >the integers yet), but there is no r8 netCDF library.  I tried the
> >r4i4 version and not everything works as expected with r8, and my
> >understanding is that mixing sizes is not recommended.  Consulting
> >says that I should rebuild the library myself for the size I want.
> >How did you go about this?
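
As a self-contained illustration of the conversion behavior described
above, here is a minimal sketch using the Fortran-77 netCDF interface
(netcdf.inc and the nf_ routines) that the messages refer to.  The file
name 'sketch.nc', the dimension 'n', and the variable 'field' are
invented for the example, and only the first call is error-checked.
The program writes 8-byte reals into an NF_DOUBLE variable with
nf_put_vara_double, then reads the same variable back into 4-byte reals
with nf_get_vara_real, letting the library convert during the read:

c     sketch: 8-byte reals out to an NF_DOUBLE variable, then back
c     into 4-byte reals (names and file are hypothetical)
      program sketch
      implicit none
      include 'netcdf.inc'
      integer ncid, dimid, varid, istat, i
      integer dimids(1), start(1), count(1)
      real*8 f8(10)
      real*4 f4(10)

      do i = 1, 10
         f8(i) = dble(i) / 3.0d0
      end do

c     define a 1-d NF_DOUBLE variable and write the 8-byte reals to it
      istat = nf_create('sketch.nc', NF_CLOBBER, ncid)
      if (istat .ne. NF_NOERR) stop 'nf_create failed'
      istat = nf_def_dim(ncid, 'n', 10, dimid)
      dimids(1) = dimid
      istat = nf_def_var(ncid, 'field', NF_DOUBLE, 1, dimids, varid)
      istat = nf_enddef(ncid)
      start(1) = 1
      count(1) = 10
      istat = nf_put_vara_double(ncid, varid, start, count, f8)
      istat = nf_close(ncid)

c     post-processing: read the same NF_DOUBLE data back into 4-byte
c     reals; the library converts (and precision is lost) on the read
      istat = nf_open('sketch.nc', NF_NOWRITE, ncid)
      istat = nf_inq_varid(ncid, 'field', varid)
      istat = nf_get_vara_real(ncid, varid, start, count, f4)
      istat = nf_close(ncid)

      print *, 'read back as 4-byte reals:', f4
      end

Because the put and get routines name the internal type explicitly
(..._double, ..._real), this pattern should work the same whether the
rest of the model is compiled with -r4 or -qrealsize=8, which is
consistent with the answer given in the thread.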