This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
> Update: I see now that the #ifndef MPI_INCLUDED block in ncdispatch.h
> is *not* inserted by configure, but is just there by default. From
> looking at mpi.h from a couple of different MPI implementations, it looks
> like MPI_INCLUDED is specific to mpich2. OpenMPI and mvapich have
> different flags, so this block causes errors if something besides
> mpich2 is being used.
>
> Is there a reason not to use the USE_PARALLEL flag? Otherwise, the
> #ifndef MPI_INCLUDED block could be inserted by configure like it used
> to be for other pieces of code.
>
> -- Ted

Howdy Ted!

I have fixed this as you suggested, using USE_PARALLEL. Thanks for pointing this out.

This happened after the beta2 release, so if you want to see this you need to get the daily snapshot release:

ftp://ftp.unidata.ucar.edu/pub/netcdf/snapshot/netcdf-4-daily.tar.gz

I hope you have checked out one of the recent daily snapshots to see improvements in opening lots of large data files and creating metadata objects.

Thanks,

Ed

Howdy Ted!

I think this problem is fixed now, because I changed the way that all of that was done to be a little less tricky. Now the netcdf_par.h include file must be used for parallel I/O, and it contains the parallel I/O definitions.

Ticket Details
===================
Ticket ID: ZGL-400485
Department: Support netCDF
Priority: Critical
Status: Closed
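
[Editor's illustration] As a rough sketch of the netcdf_par.h usage mentioned above, a minimal parallel-creation program might look like the following. The file name, creation flags, and error handling are assumptions added for the example; they are not part of the original exchange.

    #include <stdio.h>
    #include <mpi.h>
    #include <netcdf.h>
    #include <netcdf_par.h>   /* declares nc_create_par(), nc_open_par(), etc. */

    int main(int argc, char **argv)
    {
        int ncid, ret;

        MPI_Init(&argc, &argv);

        /* Create a netCDF-4 file for parallel access over MPI_COMM_WORLD.
           The file name and flag choices here are example values only. */
        ret = nc_create_par("parallel_example.nc", NC_NETCDF4 | NC_MPIIO,
                            MPI_COMM_WORLD, MPI_INFO_NULL, &ncid);
        if (ret != NC_NOERR) {
            fprintf(stderr, "nc_create_par failed: %s\n", nc_strerror(ret));
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        /* ... define dimensions/variables and write data here ... */

        nc_close(ncid);
        MPI_Finalize();
        return 0;
    }

A program like this would be built with an MPI compiler wrapper (for example mpicc) and linked against a netCDF library configured with parallel I/O enabled; the exact build flags depend on the local installation.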