This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
>To: address@hidden
>cc: "Larry A. Schoof" <address@hidden>
>From: "Larry A. Schoof" <address@hidden>
>Subject: future plans for netCDF
>Keywords: 199608121747.AA24109

> In the netCDF 2.4 User's Guide, you mention that "XDR is to be replaced by
> new software under development" and that "other desirable extensions that
> may be added include ... ability to access datasets larger than 2
> Gigabytes and multiple unlimited dimensions." Do you have any
> specifications for these developments and approx when they will become
> part of netCDF?

We have eliminated the use of the ONC XDR library in netCDF 3.0. We are
internally testing the C-language portion of this and will be able to
alpha-release it in a matter of a few weeks. The FORTRAN interface will lag
behind that by maybe a month. If you are interested, I can keep you posted.

The 2 gig change won't come for some time, as it would mean a change in the
file format. We are keeping the file format frozen through the 3.x interface
upheaval.

> I understand the performance hits one takes when the header of a netCDF
> file must be expanded to add new dimensions and variables. We are paying
> significant penalties because our application codes typically repeat the
> ncdimdef...ncvardef...ncvarput sequence many times; with our codes being
> ported to MP machines, "many" is sometimes thousands! We simply must do
> something about this. Have you ever considered adding a function that
> a user could invoke to give an estimate for the size of the header and
> fixed-size data portion of the file? netCDF could use that approximation
> to allocate space in the file, and each time ncredef is called, the sizes
> of the header and fixed-size data portion are checked to determine if any
> file copying is necessary. This would be extremely beneficial to us.

In 3.x, we have changed the 'redef'/'endef' code so that it is much less
costly. Things are shifted around in the file only when necessary, and the
movement happens "in place" rather than by copying to another file. Once this
is in the field, it will be possible to tweak it so that header allocation
occurs on 'block' boundaries; e.g., you would start with a mostly empty header
block and only grow the file when you exceeded this initial allocation. We
won't make this change in the initial netCDF-3 release, because we need people
to be able to do byte-for-byte comparisons and verify the netCDF-3
implementation. Once the new implementation has proven itself in the field, we
can tweak it.

> I ask these questions because three of the DOE National Labs (Sandia, Los
> Alamos, and Lawrence Livermore) are cooperating on a significant effort to
> define a "common data format" that will make it easier to share
> application data among the labs. netCDF is one of the candidate formats
> being considered for the low-level API.
>
> Thanks for your consideration.

Thank you for your feedback.

-glenn
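[Editor's note: for readers unfamiliar with the netCDF-2 calling sequence
discussed above, the following is a minimal sketch of the repeated
ncdimdef...ncvardef...ncvarput pattern the questioner describes. The file
name, variable names, sizes, and loop count are hypothetical; error checking
is omitted for brevity.]

    #include <stdio.h>
    #include "netcdf.h"

    int main(void)
    {
        /* nccreate() makes a new dataset and leaves it in define mode. */
        int ncid = nccreate("example.nc", NC_CLOBBER);
        long start = 0, count = 100;
        float data[100] = {0};
        char name[32];
        int i, dimid, varid;

        ncendef(ncid);                 /* leave the initial define mode */

        for (i = 0; i < 1000; i++) {   /* "many" is sometimes thousands */
            sprintf(name, "var%d", i);
            ncredef(ncid);             /* re-enter define mode */
            dimid = ncdimdef(ncid, name, count);
            varid = ncvardef(ncid, name, NC_FLOAT, 1, &dimid);
            ncendef(ncid);             /* header grows here; before 3.x
                                          this could copy the whole file */
            ncvarput(ncid, varid, &start, &count, data);
        }
        ncclose(ncid);
        return 0;
    }

As the reply explains, each ncendef() in this loop can force the data section
to move when the header outgrows its space; the in-place shifting in 3.x, and
the proposed block-boundary header allocation, are both aimed at reducing that
cost.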