This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
> Hello, Russ et al.,
>
> Well, my new work-around is to read attributes and 1- and 2-D data
> from the separate files with netcdf4, but read the 3D data with the
> HDF5 fortran interface. I then write the combined 3-D data with
> netcdf4. This avoids using netcdf4 at all to read the 3D variables,
> and avoids the memory growth and/or having to close/open the files
> with the netcdf4 interface.

Well, that stinks. Let's see if we can do better with netcdf.

>
> Background: I run an atmospheric model in MPI, and each process
> writes its data to a separate netcdf4 file. The problem has been
> combining the data for a given time(s) into a single netcdf4 file.

Why not use parallel I/O to have each process write to the final
output file? Why write intermediate files at all? I believe this
change would be quite easy: simply create the output file metadata
collectively, then change the start arrays of each process's writes to
map to global, rather than local, index space.

I also note that this would solve your current problem, since
presumably each of your intermediate files carries metadata that is
eventually combined in the final output file. Writing the final output
directly from the parallel program would eliminate the need to read
and re-read this metadata from intermediate files.

> I like the idea in a recent email (netcdfgroup?) to provide a stripped-
> down nf_open/read that doesn't read metadata -- basically wrappers on
> h5fopen_f, h5dopen_f, h5dread_f, and h5sclose_f that take care of the
> sequence: h5dget_space_f -- h5sselect_hyperslab_f --
> h5screate_simple_f -- h5sselect_hyperslab_f (then closing out the
> spaces).

This solution presents some coding difficulties. Reading all of the
file metadata at file open is a subtly important part of both netCDF
implementations (C and Java), for both classic and netCDF-4/HDF5
files. So let's see if we can find something easier.

Can you send me one of the input files? Or put it on an FTP site? Or
just send me a CDL dump of the metadata in the file (ncdump -h), and I
will fake up my own. I know you sent a partial dump, with a bunch of
variables taken out, but I want the whole thing.

Thanks,
Ed

Ticket Details
===================
Ticket ID: PEB-847323
Department: Support netCDF
Priority: Critical
Status: Open
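
Below is a minimal C sketch of the parallel-I/O approach suggested
above: all ranks create the combined output file together, define the
metadata collectively, and each rank writes its local slab with start
indices expressed in global index space. The file name, dimension
sizes, decomposition, and variable name ("temp") are placeholders, not
details from the user's model, and error checking is omitted.

/* Sketch only: collective creation of one netCDF-4 output file,
 * each rank writing its slab at a global offset.  Requires a
 * netCDF-4 library built with parallel (MPI-IO) support. */
#include <mpi.h>
#include <netcdf.h>
#include <netcdf_par.h>
#include <stdlib.h>

#define NX_GLOBAL 256   /* assumed global grid sizes */
#define NY 128
#define NZ 64

int main(int argc, char **argv)
{
    int rank, nprocs, ncid, dimids[3], varid;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* All ranks create the file and define the metadata collectively. */
    nc_create_par("combined.nc", NC_NETCDF4 | NC_MPIIO,
                  MPI_COMM_WORLD, MPI_INFO_NULL, &ncid);
    nc_def_dim(ncid, "x", NX_GLOBAL, &dimids[0]);
    nc_def_dim(ncid, "y", NY, &dimids[1]);
    nc_def_dim(ncid, "z", NZ, &dimids[2]);
    nc_def_var(ncid, "temp", NC_FLOAT, 3, dimids, &varid);
    nc_enddef(ncid);

    /* Each rank owns a slab of the x dimension; start[] is in GLOBAL
     * index space -- the only change from writing a per-process file. */
    size_t nx_local = NX_GLOBAL / nprocs;
    size_t start[3] = {rank * nx_local, 0, 0};
    size_t count[3] = {nx_local, NY, NZ};
    float *data = malloc(nx_local * NY * NZ * sizeof(float));
    /* ... fill data with this rank's values ... */

    nc_var_par_access(ncid, varid, NC_COLLECTIVE);
    nc_put_vara_float(ncid, varid, start, count, data);

    nc_close(ncid);
    free(data);
    MPI_Finalize();
    return 0;
}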
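
For comparison, here is a rough sketch of the stripped-down hyperslab
read the user describes, written with the HDF5 C API rather than the
Fortran h5*_f routines quoted above (the calls correspond directly).
The dataset name ("temp") and the rank-3 extents are assumptions for
illustration only.

/* Sketch only: read one 3D hyperslab from an HDF5 file without going
 * through the netCDF open/metadata path. */
#include <hdf5.h>

static void read_3d_slab(const char *path, float *buf,
                         const hsize_t start[3], const hsize_t count[3])
{
    hid_t file = H5Fopen(path, H5F_ACC_RDONLY, H5P_DEFAULT);
    hid_t dset = H5Dopen2(file, "temp", H5P_DEFAULT);

    /* Select the hyperslab to read in the file's dataspace. */
    hid_t filespace = H5Dget_space(dset);
    H5Sselect_hyperslab(filespace, H5S_SELECT_SET, start, NULL, count, NULL);

    /* Memory dataspace matching the slab being read. */
    hid_t memspace = H5Screate_simple(3, count, NULL);

    H5Dread(dset, H5T_NATIVE_FLOAT, memspace, filespace, H5P_DEFAULT, buf);

    /* Close out the spaces, dataset, and file. */
    H5Sclose(memspace);
    H5Sclose(filespace);
    H5Dclose(dset);
    H5Fclose(file);
}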