This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
I think I wouldn't use the Java library for this. The C library would be better.

John

> Thanks for testing. The real use case is between 3k and 30k. The files are
> written with Perl + PDL::NetCDF. This is a legacy loader, which has created
> tens of thousands of files over the last 12 years. The time to open a file
> there is independent of the number of vars. The loader needs to update
> the files frequently, because the data comes in blocks of dozens to
> hundreds of vars spread over weeks and months. Therefore a fast file open is
> needed even after the number of vars has grown. However, if it cannot be done
> with the Java API, there is still the option of recompiling the old code for
> the new platforms.
>
> -Bernd
>
> address@hidden> wrote:
>
> > Thanks for the reminder, I just ran the program on Windows 7 and got:
> >
> > 10 vars:    0.0007 sec,  0.000068 sec/var
> > 1000 vars:  0.0291 sec,  0.000029 sec/var
> > 10000 vars: 0.7798 sec,  0.000078 sec/var
> > 50000 vars: 35.5216 sec, 0.000710 sec/var
> >
> > The last one seems to indicate an n^2 effect, probably a linear lookup.
> >
> > However, creating 50000 vars is not what netCDF-3 was designed for. Is
> > this a real use case?
> >
> > John
> >
> > > Hi John,
> > > I'd like to send you the test program regarding ticket GCF-381402 again.
> > > I hope that you can find time to have a look at why
> > > NetcdfFileWriteable.openExisting needs so much time for files with
> > > many variables.
> > > Thanks,
> > > Bernd
> > >
> > > address@hidden> wrote:
> > >
> > > > Hi Bernd:
> > > >
> > > > Can you send me the test program so that I can reproduce? Thanks
> > > >
> > > > John
> > > >
> > > > > Hello NetCDF Java,
> > > > >
> > > > > I am testing the usage of NetCDF Java 4.3 to write netCDF-3 files.
> > > > > I have chosen NetCDF Java for the test because the files will be
> > > > > written from Windows and Linux.
> > > > >
> > > > > These files contain many 3D float arrays and need many updates
> > > > > during their lifetime, both adding new arrays and updating the data
> > > > > of existing ones. It works well so far, but
> > > > > NetcdfFileWriteable.openExisting does not scale well; please see
> > > > > the results below.
> > > > >
> > > > > I had expected an open call without any overhead, based on my
> > > > > NetCDF experience in Perl and R.
> > > > >
> > > > > What is the reason, and is there a workaround?
> > > > >
> > > > > Thanks,
> > > > > Bernd
> > > > >
> > > > > NetcdfFileWriteable.openExisting:  2967 vars =>   0.750 sec, 0.0003 sec/var
> > > > > NetcdfFileWriteable.openExisting:  3625 vars =>   1.484 sec, 0.0004 sec/var
> > > > > NetcdfFileWriteable.openExisting: 15541 vars =>  28.781 sec, 0.0019 sec/var
> > > > > NetcdfFileWriteable.openExisting: 51457 vars => 310.478 sec, 0.0060 sec/var

Ticket Details
===================
Ticket ID: GCF-381402
Department: Support netCDF Java
Priority: High
Status: Closed
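The test program referenced in the thread was not preserved in the archive. For reference, here is a minimal sketch of how such a timing run could look, assuming the NetcdfFileWriteable API from NetCDF-Java 4.3 (createNew, addDimension, addVariable, openExisting). The class name OpenExistingTiming, the file name manyVars.nc, and the dimension layout are illustrative choices, not part of the original report.

import ucar.ma2.DataType;
import ucar.nc2.Dimension;
import ucar.nc2.NetcdfFileWriteable;

// Hypothetical reconstruction (not the original test program) of the
// openExisting timing test discussed in the thread above.
public class OpenExistingTiming {

  public static void main(String[] args) throws Exception {
    int nvars = args.length > 0 ? Integer.parseInt(args[0]) : 10000;
    String path = "manyVars.nc";  // illustrative file name

    // Create a netCDF-3 file containing nvars small float variables.
    NetcdfFileWriteable writer = NetcdfFileWriteable.createNew(path, false);
    Dimension x = writer.addDimension("x", 10);
    for (int i = 0; i < nvars; i++) {
      writer.addVariable("var" + i, DataType.FLOAT, new Dimension[] {x});
    }
    writer.create();
    writer.close();

    // Time how long it takes to reopen the same file for writing.
    long start = System.nanoTime();
    NetcdfFileWriteable ncfile = NetcdfFileWriteable.openExisting(path);
    long elapsedNanos = System.nanoTime() - start;
    ncfile.close();

    double seconds = elapsedNanos / 1.0e9;
    System.out.printf("%d vars: %.4f sec, %.6f sec/var%n",
        nvars, seconds, seconds / nvars);
  }
}

Rerunning the sketch with increasing variable counts (for example 1000, 10000, 50000) should make any superlinear cost of openExisting visible, along the lines of the timings quoted in the thread.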