Sebastian,

We'll need a test application that we could run on a system with GPFS, to
see if we can reproduce the problem symptoms. Can you supply such a test,
preferably something self-contained, that we could use?

--Russ

> > I recently stumbled across a performance problem when using netCDF +
> > HDF5 + (Collective) MPI I/O:
> >
> > If the application has a large I/O load imbalance (some processes
> > reading only a few values while others read several thousand values),
> > I get a dramatic performance decrease on our GPFS file system.
> >
> > Reading several thousand values with all processes is done within
> > seconds. However, as soon as I introduce the load imbalance, it takes
> > several minutes.
> >
> > Has anybody experienced the same problem? Any advice?
>
> We haven't seen this problem reported before, but I've forwarded your
> question to another developer who has more parallel I/O expertise than
> we do at Unidata. Either he or I will let you know if we come up with
> anything that might help ...
>
> --Russ
>
> Russ Rew                    UCAR Unidata Program
> address@hidden              http://www.unidata.ucar.edu

Russ Rew                    UCAR Unidata Program
address@hidden              http://www.unidata.ucar.edu

Ticket Details
===================
Ticket ID: ABG-853837
Department: Support netCDF
Priority: Normal
Status: Closed
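For reference, below is a minimal sketch (not an official Unidata test, and not the program exchanged in this ticket) of the kind of self-contained reproducer described above. It assumes a netCDF library built with parallel netCDF-4/HDF5 support and uses the standard parallel calls (nc_create_par, nc_open_par, nc_var_par_access with NC_COLLECTIVE). The file name, variable name, and sizes are placeholders chosen for illustration: all ranks first write one 1-D variable evenly, then reopen it and perform a deliberately imbalanced collective read (rank 0 reads a handful of values, every other rank reads a large block), timing the read on each rank.

/* imbalance_test.c -- sketch of an imbalanced collective read on GPFS.
 * Build (roughly): mpicc imbalance_test.c -o imbalance_test -lnetcdf
 * Run:             mpiexec -n 4 ./imbalance_test
 */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <netcdf.h>
#include <netcdf_par.h>

#define FILE_NAME "imbalance_test.nc"   /* placeholder file name */
#define DIM_LEN   1000000               /* placeholder variable length */

static void check(int stat, int line)
{
    if (stat != NC_NOERR) {
        fprintf(stderr, "error at line %d: %s\n", line, nc_strerror(stat));
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
}

int main(int argc, char **argv)
{
    int rank, nprocs, ncid, dimid, varid;
    size_t start[1], count[1];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Phase 1: create a netCDF-4/HDF5 file with one 1-D double variable,
       written evenly by all ranks so the read phase has data. */
    check(nc_create_par(FILE_NAME, NC_NETCDF4 | NC_CLOBBER,
                        MPI_COMM_WORLD, MPI_INFO_NULL, &ncid), __LINE__);
    check(nc_def_dim(ncid, "x", DIM_LEN, &dimid), __LINE__);
    check(nc_def_var(ncid, "data", NC_DOUBLE, 1, &dimid, &varid), __LINE__);
    check(nc_enddef(ncid), __LINE__);
    check(nc_var_par_access(ncid, varid, NC_COLLECTIVE), __LINE__);

    size_t chunk = DIM_LEN / nprocs;
    start[0] = (size_t)rank * chunk;
    count[0] = (rank == nprocs - 1) ? DIM_LEN - start[0] : chunk;

    double *buf = malloc(count[0] * sizeof(double));
    for (size_t i = 0; i < count[0]; i++)
        buf[i] = (double)(start[0] + i);
    check(nc_put_vara_double(ncid, varid, start, count, buf), __LINE__);
    free(buf);
    check(nc_close(ncid), __LINE__);

    /* Phase 2: reopen and read collectively with a deliberate imbalance:
       rank 0 reads only a few values, every other rank reads a big block. */
    check(nc_open_par(FILE_NAME, NC_NOWRITE,
                      MPI_COMM_WORLD, MPI_INFO_NULL, &ncid), __LINE__);
    check(nc_inq_varid(ncid, "data", &varid), __LINE__);
    check(nc_var_par_access(ncid, varid, NC_COLLECTIVE), __LINE__);

    if (rank == 0) {
        start[0] = 0;
        count[0] = 4;                                   /* tiny read */
    } else {
        size_t big = (DIM_LEN - 4) / (nprocs > 1 ? nprocs - 1 : 1);
        start[0] = 4 + (size_t)(rank - 1) * big;
        count[0] = big;                                 /* large read */
    }

    buf = malloc(count[0] * sizeof(double));
    double t0 = MPI_Wtime();
    check(nc_get_vara_double(ncid, varid, start, count, buf), __LINE__);
    double t1 = MPI_Wtime();
    printf("rank %d read %zu values in %.3f s\n", rank, count[0], t1 - t0);

    free(buf);
    check(nc_close(ncid), __LINE__);
    MPI_Finalize();
    return 0;
}

Running the same program with count[0] made equal on every rank (the balanced case) and comparing the per-rank timings against the imbalanced case above would show whether the slowdown reported in this ticket is reproducible on a given GPFS system.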