This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
> > > Hmmm, unfortunately, the Java netCDF API doesn't allow you to
> > > incrementally update the schema; the schema in a NetcdfFile is
> > > considered immutable.
> >
> > Any chance the Java netCDF could be extended in this direction?
>
> Possibly eventually, but not immediately.
>
> Also, support for ragged arrays would be greatly appreciated.

We are considering using the HDF5 data file format underneath our next
version of netCDF (version 4). You might look at that now; there's no Java
version, though, and we aren't yet decided on that path.

> > Kinda lame, but I think that's the way it is. You could 1) read the
> > data twice, accumulating the schema on the first pass, or 2) write
> > separate files for each Variable, then merge at the end.
>
> How do you merge netCDF files? It seems like the same problem.

You can read in the metadata from the single-variable files without
reading the data, construct the schema, make the big file, then read the
data and fill the big file.

> I have my code working. I simply store the data locally as I build the
> schema and then read back through the data to write it to the file.
> This will work fine for modest data set sizes, but for larger datasets
> this will be a problem.

Sounds like you did the equivalent of what I just suggested. The extra
cost is one local read and the extra scratch disk space.
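The merge strategy described above (read only the metadata from each single-variable file, build the combined schema, create the big file, then copy the data) can be sketched in a few lines. This is a generic illustration using a made-up two-line text format, not any netCDF API; the format and the function name are assumptions for the example only.

```python
import io
import json

def merge_single_variable_files(files, out):
    """Merge 'files', each holding one variable, into 'out'.

    Toy format (illustrative only): line 1 of each input is a JSON
    metadata header, line 2 is the JSON-encoded data.
    """
    # Pass 1: read only the metadata line of each input, skipping the
    # data, and accumulate the combined schema.
    schema = []
    for f in files:
        header = json.loads(f.readline())
        schema.append(header["name"])
    # Create the merged file's header from the now-complete schema.
    out.write("variables: " + ", ".join(schema) + "\n")
    # Pass 2: rewind each input and copy its data into the big file.
    for f in files:
        f.seek(0)
        header = json.loads(f.readline())
        data = json.loads(f.readline())
        out.write("%s = %s\n" % (header["name"], data))
    return schema

# Example: two single-variable "files" merged into one.
f1 = io.StringIO('{"name": "t"}\n[1, 2, 3]\n')
f2 = io.StringIO('{"name": "p"}\n[4, 5, 6]\n')
merged = io.StringIO()
merge_single_variable_files([f1, f2], merged)
```

The point is the same as in the email: the schema comes from the cheap metadata-only pass, so the (potentially large) data is read exactly once per input.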
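The two-pass approach the user settled on (spool the data to scratch storage while the schema accumulates, then re-read it into the final file) can be sketched the same way. Again this is a toy, format-agnostic illustration, not netCDF code; the record format and names are assumptions.

```python
import io

def two_pass_write(records, scratch, out):
    """Write 'records' ((variable_name, value) pairs) to 'out' when the
    full set of variables is not known until every record has been seen.

    'scratch' is a seekable scratch file; the extra cost is one local
    re-read plus the scratch disk space, as noted in the email.
    """
    # Pass 1: spool the data to scratch while the schema (here just an
    # ordered list of variable names) accumulates.
    schema = []
    for name, value in records:
        if name not in schema:
            schema.append(name)  # schema grows as new variables appear
        scratch.write("%s %s\n" % (name, value))
    # Only now is the schema complete, so the output header can be written.
    out.write("variables: " + ", ".join(schema) + "\n")
    # Pass 2: re-read the spooled data and write it into the file.
    scratch.seek(0)
    for line in scratch:
        out.write(line)
    return schema

# Example: the variable set ("t", "p") emerges only while streaming.
scratch, final = io.StringIO(), io.StringIO()
two_pass_write([("t", 1), ("p", 2), ("t", 3)], scratch, final)
```

In practice the scratch stream would be a temporary file on disk rather than an in-memory buffer, which is exactly why this works for modest data sizes but becomes costly for large ones.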