This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
> -----Original Message-----
> From: Lyn Greenhill [mailto:address@hidden]
> Sent: Wednesday, October 06, 1999 10:48 AM
> To: John Caron
> Subject: RE: 19990929: Write/Read of 2D pages
>
> At 04:36 PM 10/5/99 -0600, you wrote:
> >
> > I looked quickly at the Jama docs. The approach that occurs to me (that
> > you are probably referring to) is:
> >
> >   reading from netcdf:
> >     MultiArray ma = netcdfVar.copyout(new int[] {pageno, 0, 0},
> >                                       new int[] {1, nx, ny});
> >     double[] vals = (double[]) ma.toArray();
> >     Matrix(double[] vals, int nx);  // nx or ny ??
> >
> >   writing to netcdf:
> >     double[] vals = matrix.getColumnPackedCopy();
> >     MultiArray ma = new MultiArrayImpl(new int[] {1, nx, ny}, vals);
> >     netcdfVar.copyin(new int[] {pageno, 0, 0}, ma);
> >
> > This might be significantly faster.
> > Note I'm not sure whether Matrix wants nx or ny; part of the confusion is
> > that it wants a Fortran column-major 1D array. In any case, this
> > shouldn't be a problem: just switch the meaning of nx and ny if need be.
>
> If all I were doing is reading from the netCDF file into a Matrix and writing
> it back, your code above would work fine. In fact, I like how you've used
> toArray(), and I will probably switch to this procedure.
>
> However, we're doing something a little more involved. We are actually
> writing/reading two separate netCDF files. The "lower" level one is the
> combination of potentially many 2D pages for various calculation arrays.
> These files are scratch files and are deleted at the end of a session. The
> "upper" level file is the one that is saved and is used to collect all the
> calculation results for a restore. In this save file, we read in the
> various 2D blocks and then write out the pseudo-3D arrays. To restore, we
> read in the 3D arrays and write out the 2D blocks.
>
> Now you may ask why we don't just use the save file to read/write the 2D
> blocks. The answer is that the user may make several different runs before
> actually wanting to save anything, so we want to keep the two file streams
> separate.
>
> Just so you get an idea of what we're doing, I'm attaching the clipped
> source code for the write/read of these 2D blocks in the upper-level file
> (code snippets.txt). I'm also sending you the entire source of the class
> that creates and manages the scratch files (MatrixFile.java). This file is
> not very long. The full class that reads/writes the save file is pretty
> long; I won't send you all of that.
>
> > In terms of netCDF efficiency, just remember that disk reading is done
> > when you copyin/copyout; your cost is in units of disk blocks (typically
> > 4K), and contiguous data will reduce the number of disk blocks. So if
> > nx x ny is reasonably large, this page-oriented access seems not too bad.
>
> I implemented the looping by page, reading a 2D block and writing it to the
> save file, the other evening. The source code I'm attaching has this latest
> method. I did find that if I used the matrix.get() method rather than
> matrix.getArray(), the performance was glacial. So that's why you see the
> dummy 2D array being used. With the current method, the time required to
> write the save file is just as fast as creating a large 3D array and
> writing it all at once. The 2D approach is not memory bound either, so
> that's what we want to use in our production code.
>
> I'm pretty sure I understand what is going on with this package now, but I
> certainly appreciate your comments and advice.

Sounds good; good luck on your project.
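The "nx or ny ??" question in the quoted code comes down to layout: netCDF stores data in C (row-major) order, while Jama's `Matrix(double[] vals, int m)` constructor interprets the 1-D array as Fortran column-packed with `m` rows, which is why swapping nx and ny "fixes" things. A minimal plain-Java sketch of that packing (no Jama or netCDF jars required; `columnPack`/`columnUnpack` are illustrative helpers, not library calls):

```java
// Demonstrates Jama-style column-major (Fortran) packing of a 2-D
// array, the layout Matrix(double[] vals, int m) expects, versus the
// row-major a[row][col] layout Java and netCDF use natively.
public class PackingDemo {
    // Pack a[rows][cols] column by column, as Jama expects.
    static double[] columnPack(double[][] a) {
        int rows = a.length, cols = a[0].length;
        double[] packed = new double[rows * cols];
        for (int c = 0; c < cols; c++)
            for (int r = 0; r < rows; r++)
                packed[c * rows + r] = a[r][c];
        return packed;
    }

    // Unpack a column-packed array back into a[rows][cols].
    static double[][] columnUnpack(double[] packed, int rows) {
        int cols = packed.length / rows;
        double[][] a = new double[rows][cols];
        for (int c = 0; c < cols; c++)
            for (int r = 0; r < rows; r++)
                a[r][c] = packed[c * rows + r];
        return a;
    }

    public static void main(String[] args) {
        double[][] page = { {1, 2, 3}, {4, 5, 6} };  // 2 rows, 3 cols
        double[] packed = columnPack(page);
        // Column order walks down each column first: 1, 4, 2, 5, 3, 6.
        System.out.println(java.util.Arrays.toString(packed));
        double[][] back = columnUnpack(packed, 2);   // round trip
        System.out.println(back[1][2]);
    }
}
```

Feeding a row-major `toArray()` result straight into the column-packed constructor therefore yields the transpose, which is harmless as long as nx and ny are swapped consistently on the way in and out, exactly as the reply suggests.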
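The performance point at the end of the thread, bulk array copies being far faster than element-wise `matrix.get()` calls, can be sketched without the netCDF library. Here `readPageFlat()` is a hypothetical stand-in for `netcdfVar.copyout(...).toArray()` (it is not a real API), and the reusable "dummy" 2-D array mirrors the approach the message describes:

```java
// Sketch of page-oriented copying: pull each 2-D page out as one
// contiguous 1-D slab, then bulk-copy whole rows into a reusable
// 2-D array, instead of reading nx*ny elements one at a time.
public class PageCopyDemo {
    final double[] store;  // simulated scratch file: pages of nx*ny doubles, C order
    final int nx, ny;

    PageCopyDemo(int npages, int nx, int ny) {
        this.nx = nx;
        this.ny = ny;
        this.store = new double[npages * nx * ny];
        for (int i = 0; i < store.length; i++) store[i] = i;  // dummy data
    }

    // Hypothetical stand-in for copyout(new int[]{page,0,0},
    // new int[]{1,nx,ny}).toArray(): one contiguous slab per page.
    double[] readPageFlat(int page) {
        double[] flat = new double[nx * ny];
        System.arraycopy(store, page * nx * ny, flat, 0, flat.length);
        return flat;
    }

    // Unflatten a row-major slab into the reusable 2-D array with one
    // arraycopy per row: nx bulk copies instead of nx*ny element reads.
    void toPage(double[] flat, double[][] dummy) {
        for (int r = 0; r < nx; r++)
            System.arraycopy(flat, r * ny, dummy[r], 0, ny);
    }

    public static void main(String[] args) {
        PageCopyDemo demo = new PageCopyDemo(3, 2, 4);  // 3 pages, 2x4 each
        double[][] dummy = new double[2][4];            // reused across pages
        for (int page = 0; page < 3; page++) {
            demo.toPage(demo.readPageFlat(page), dummy);
            // ... hand dummy to the save-file writer here ...
        }
        System.out.println(dummy[1][3]);
    }
}
```

Because only one page-sized slab and one reusable 2-D buffer are live at a time, memory use stays flat no matter how many pages are processed, matching the observation that the 2D approach is not memory bound.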