This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
>To: address@hidden
>From: Ben Foster <address@hidden>
>Subject: Re: 20000202: slow netcdf access on Cray
>Organization: HAO
>Keywords: 200002042214.PAA09139 j90 NETCDF_FFIOSPEC ffio

Hi Ben,

> Re the slow i/o on the Cray j90se's that I mentioned
> yesterday: I tried various NETCDF_FFIOSPEC settings,
> and they did speed up the i/o noticeably. With the
> default setting, it was so slow I could not wait for
> it to finish, but it would have taken many minutes.
>
> After several experiments, the best i/o wait time
> was with ffio cache:128:8. This took 33.8 secs locked
> i/o wait time. Several other settings took between
> 33.9 and 83 seconds. But this was still *many* times
> slower than reading the same file on SGI dataproc.
>
> So I decided to bite the bullet and read the whole
> thing at once, rather than in the latitude loop,
> which is where I was reading slices. This takes more
> memory, but the locked i/o wait time was only 2.5 secs!
> Needless to say, this is the route I will take. With
> f90 I can deallocate after I'm done processing, so
> memory is not too big a deal. I may be reading many
> histories in a single run, so i/o is much more important
> than memory anyway.
>
> I guess the moral is to read with stride-1, as close as
> possible to the way the file was originally written.
> ffio can help, but only if stride-1 is impossible.

This is a case where the "chunked I/O" of an HDF-based netCDF could
really pay off.  It would make it possible to read data along any
chunked dimension with roughly equal access times, without requiring
the memory needed to read everything in at once.

> P.S.: We use solar geomagnetic indices (ap, kp, f10.7,
> etc) from NGDC for some model runs. In the past, I have
> downloaded ascii files from the NGDC web site and
> written direct access unformatted fortran files for
> use in the model. Now that I'm going with netcdf, I'm
> wondering if this data is already available in netcdf
> form, or if anyone has already written code to write
> these data to netcdf files? I have sent this question
> to NGDC but have not heard back from them yet.

I don't know whether this data is available in netCDF form, but you
might check into whether DODS access is practical for this kind of
data.  If NGDC set up a DODS server for it, the data might be
accessible with ordinary netCDF calls through the DODS client
library, even though it is stored in NGDC's FreeForm format.  See

  http://top.gso.uri.edu/FFND/ffnd.dods.html

or

  http://www.ngdc.noaa.gov/seg/freeform/freeform.shtml

for example.  I know NGDC also has utilities for using FreeForm with
HDF data.

--Russ

_____________________________________________________________________
Russ Rew                                         UCAR Unidata Program
address@hidden                     http://www.unidata.ucar.edu
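
For illustration, here is a minimal C sketch of the two access
patterns discussed above.  The exchange itself contains no code, so
the file name "history.nc", the variable name "T", and the dimension
sizes are hypothetical, and the variable is shown as a 2-D field
T(lat,lon) for simplicity; a Fortran 90 program would make the
roughly corresponding NF_GET_VARA_REAL / NF_GET_VAR_REAL calls.

  #include <stdio.h>
  #include <stdlib.h>
  #include <netcdf.h>

  /* Hypothetical dimension sizes; real model history files differ.
   * On the Cray, the FFIO layer is normally selected in the shell
   * before running, e.g. (csh):  setenv NETCDF_FFIOSPEC cache:128:8
   */
  #define NLAT 64
  #define NLON 128

  static void check(int status)
  {
      if (status != NC_NOERR) {
          fprintf(stderr, "netCDF error: %s\n", nc_strerror(status));
          exit(1);
      }
  }

  int main(void)
  {
      int ncid, varid;
      /* (error checks on malloc omitted for brevity) */
      float *slice = malloc(NLON * sizeof(float));
      float *whole = malloc(NLAT * NLON * sizeof(float));

      check(nc_open("history.nc", NC_NOWRITE, &ncid));
      check(nc_inq_varid(ncid, "T", &varid));

      /* 1. Slice-at-a-time: one small hyperslab read per latitude,
       *    the pattern that was slow on the J90 */
      for (size_t lat = 0; lat < NLAT; lat++) {
          size_t start[2] = {lat, 0};
          size_t count[2] = {1, NLON};
          check(nc_get_vara_float(ncid, varid, start, count, slice));
          /* ... process one latitude ... */
      }

      /* 2. Whole-variable read: one large stride-1 read, then
       *    process in memory (this is what cut the locked i/o wait
       *    from roughly 34 seconds to 2.5 seconds) */
      check(nc_get_var_float(ncid, varid, whole));
      /* ... process, then release the memory when done ... */

      free(slice);
      free(whole);
      check(nc_close(ncid));
      return 0;
  }

Reading the whole variable in a single call preserves the stride-1
order in which the data was written, which is why it is so much
faster than many small hyperslab reads on this system.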