This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Unidata Support wrote:
------- Forwarded Message
To: address@hidden
From: "Peyush Jain" <address@hidden>
Subject: netCDF Java - Out Of Memory Error
Organization: UCAR/Unidata
Keywords: 200503291926.j2TJQ3Qk025344
Institution: NASA
Package Version: Version 2.1
Operating System: Win XP Pro
Hardware Information: P4 3GHz, 2GB RAM

Inquiry:

Hello,

I was able to figure out how to archive streaming data. In my previous example, I created 5 one-dimensional arrays of "unlimited" length. Then I created origin[1], which stored the offset for incoming data, and used write(name, origin, doubleArray) to write the data, so the file size is now growing.

Now I am running into another problem. By default, all the incoming data are stored in the variables and are not written to the file until I stop the incoming data and close the file. Therefore, after a few minutes, I get a java.lang.OutOfMemoryError. Is there any way to force netCDF to write to the file each time data is received, so that I don't run out of memory? flush() didn't do the job.

Peyush
Looks like our emails crossed. Have a look at the example and see if that solves your problem. The trick is to not store more than one time step in memory at a time.
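The example referred to above did not come through with this message. As a rough sketch of the trick, using the NetcdfFileWriteable API from a later netCDF-Java release (method names in version 2.1 differ in detail), writing each incoming time step straight to disk might look like the code below; the file name, the dimension and variable names, and readNextSample() are hypothetical stand-ins for the real stream.

import ucar.ma2.ArrayDouble;
import ucar.ma2.DataType;
import ucar.nc2.Dimension;
import ucar.nc2.NetcdfFileWriteable;

public class StreamWriter {
    public static void main(String[] args) throws Exception {
        // Create the file with one unlimited (record) dimension.
        NetcdfFileWriteable ncfile = NetcdfFileWriteable.createNew("stream.nc", false);
        Dimension timeDim = ncfile.addUnlimitedDimension("time");
        ncfile.addVariable("value", DataType.DOUBLE, new Dimension[] { timeDim });
        ncfile.create();

        // Reuse one single-element buffer for the current time step,
        // so heap use stays constant no matter how long the stream runs.
        ArrayDouble.D1 oneStep = new ArrayDouble.D1(1);
        int[] origin = new int[1];

        for (int t = 0; t < 1000; t++) {
            oneStep.set(0, readNextSample()); // hypothetical data source
            origin[0] = t;                    // advance along the record dimension
            ncfile.write("value", origin, oneStep); // this step goes to the file now
        }
        ncfile.close();
    }

    // Hypothetical stand-in for the incoming data stream.
    static double readNextSample() {
        return Math.random();
    }
}

Because only the one-element oneStep buffer is kept between writes, and each write() call stores that step in the file at the given origin, nothing accumulates in memory the way it does when whole variables are built up and written at close time.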