This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
> > > > The parallel tests did work (see end of attached make_check.log)
> > > >
> > > > Thanks for helping,
> > > > Mathis
> >
> > Howdy Mathis!
> >
> > I have looked at this and I think I see the problem. I will fix this
> > in the code and you can try again in the next release.
> >
> > Meanwhile, your users can go ahead and use netCDF. This is a test
> > program problem, not a core library bug.
> >
> > Thanks!
> >
> > Ed

Howdy Again Mathis!

I have fixed all the parallel I/O issues you raise in this email. The
MPI_Finalize calls are now made in all tests that make the MPI_Init
calls.

I have also rewritten the HDF5 calls which were causing the error
messages. They were not real errors, just several cases in my code
where I checked for the existence of an object by trying to open it.
If it wasn't there, the HDF5 error stack would print out the messages
you were seeing. I think my problem was that I was not turning off the
error stack in all processes, but in any case I also wanted to rewrite
the code, because I do want to expose the HDF5 error stack to the user,
so I cannot clutter it with my false error messages.

How is your use of parallel I/O going with netCDF-4? Any progress? Is
it working for you?

If you get the daily snapshot you can see these build improvements:

ftp://ftp.unidata.ucar.edu/pub/netcdf/snapshot/netcdf-4-daily.tar.gz

(The MPI_Finalize errors do not cause a problem on my test system, so
if I missed any of them, please let me know.)

Thanks,

Ed

Ticket Details
===================
Ticket ID: QVT-485828
Department: Support netCDF
Priority: Normal
Status: Closed
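[Editor's note: the "check for existence by trying to open it" pattern
described above is commonly written by saving HDF5's automatic error
handler, disabling it around the probing open call, and restoring it
afterwards, so that a missing object does not clutter the error stack.
This is a minimal sketch in C of that general technique, not the actual
netCDF-4 internals; the function and variable names are illustrative.]

    #include <hdf5.h>

    /* Return 1 if a dataset named "name" exists under loc_id, else 0.
     * Automatic HDF5 error printing is suppressed around the probe so
     * a missing object does not add false errors to the error stack. */
    static int
    dataset_exists(hid_t loc_id, const char *name)
    {
        H5E_auto2_t old_func;
        void *old_client_data;
        hid_t dset_id;
        int found;

        /* Save the current error handler, then turn off automatic
         * error printing on the default error stack. */
        H5Eget_auto2(H5E_DEFAULT, &old_func, &old_client_data);
        H5Eset_auto2(H5E_DEFAULT, NULL, NULL);

        dset_id = H5Dopen2(loc_id, name, H5P_DEFAULT);
        found = (dset_id >= 0);
        if (found)
            H5Dclose(dset_id);

        /* Restore the saved handler so genuine errors still print. */
        H5Eset_auto2(H5E_DEFAULT, old_func, old_client_data);

        return found;
    }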
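[Editor's note: on the MPI_Finalize point, each parallel test program
that calls MPI_Init must also call MPI_Finalize before it exits. A
minimal test skeleton showing that pairing follows; the body of the
test is only a placeholder and is not part of the netCDF test suite.]

    #include <mpi.h>
    #include <stdio.h>

    int
    main(int argc, char **argv)
    {
        int mpi_rank, mpi_size;

        /* Every test that calls MPI_Init... */
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &mpi_rank);
        MPI_Comm_size(MPI_COMM_WORLD, &mpi_size);

        /* ...would run its parallel I/O checks here... */
        if (mpi_rank == 0)
            printf("running on %d processes\n", mpi_size);

        /* ...and must call MPI_Finalize before exiting. */
        MPI_Finalize();
        return 0;
    }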