This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Lynton,

> Blitz appears to be very stable. It appears to be particularly good for
> efficient numerically based work.

Good, then I tentatively concur that it should be used as part of the netCDF C++ interface implementation.

> I have noticed that the NETCDF4 C++ interface makes use of the gcc
> extension of "automatic arrays". Essentially this allows you to specify
> temporary working space in a function. However it is not ANSI standard.
> Using Blitz containers I can remove these non-standard bits of code.
> Do you have any particular views on the use of non-standard coding
> techniques?

Conforming to standards to achieve portability is one of the most important priorities in netCDF library development, so no non-standard coding techniques are permitted. I didn't realize that the student developer of the cxx4 prototype code had made use of gcc-specific features, and I'm in favor of replacing them with a standard-conforming implementation.

> A further observation:
> --- An "addCompoundType" method (invokes nc_def_compound) is defined
> in class NcFile. I think it should be present in class NcGroup.

Yes, a user-defined type is supposed to be contained in a group, and different user-defined types with the same name can be defined in different groups. An added complication/feature that comes from the underlying HDF5 layer is that types can be referred to by their pathname from a group different from the one in which the type is defined. I don't think anyone is using this "feature", but it would permit the type of a compound-type member in one group to be declared using a user-defined type in a different group.

> Most C++ programs have their class declarations and definitions in
> individual .h and .cpp files. A separate .h file (hdf5.h) then "includes"
> all the .h files for the constituent objects. This is the case for HDF5.
> Would you agree to me doing the same for NETCDF C++? It would make
> things easier for me to work on!
> If so, please can you provide some guidance about the make process. I
> can't see how the Make process works out which .cpp to pick up, and
> also whether the dependencies are automatically worked out (with
> makedepend).

Yes, please feel free to create a single .h file that includes all the other needed .h files, so the resulting API is easier to use.

As for guidance about the make process: we use autoconf, automake, and libtool, so our Makefiles are generated from a Makefile.am when the configure script (itself generated by autoconf in the top-level netCDF directory) runs. I don't think we use makedepend; instead a "depcomp" script, available from the top-level source directory, maintains dependencies. It's apparently not perfect, because I have occasionally had to do a "make clean" after a change to a header file to get everything that should have depended on the change rebuilt. For more details on the use of depcomp, I'll have to refer you to Ed Hartnett, who has developed the build, testing, and release systems we use.

> One other thing, have you considered using doxygen for the programmers'
> documentation? I find it very effective.

No, but it looks like it would be very useful, especially as a way to get man pages that stay consistent with the code when an interface is added. I'll suggest it to our other two netCDF developers and see what they think. Thanks for the recommendation.

--Russ

Russ Rew                       UCAR Unidata Program
address@hidden                 http://www.unidata.ucar.edu

Ticket Details
===================
Ticket ID: MSV-188900
Department: Support netCDF
Priority: Normal
Status: Closed