Hi guys, I am looking at serving some very large files through THREDDS. I found through trial and error that on one particular server, somewhere between 60 MB and 300 MB of file size, THREDDS stopped being able to start serving a file before the client timed out.
Unfortunately, that machine serves a number of people, so I had to do my testing elsewhere. I have a 579 MB NetCDF file on my desktop machine and tried a local test with it, installing my file server and the THREDDS server there. What I found was that the THREDDS server was running out of heap space. Now, I know I can alter the amount of heap space the JVM has available somehow, and that's what I'll try next, but I don't know whether that's a reliable solution. I don't really know how much memory THREDDS needs on top of the size of the file it's trying to serve, and of course multiple incoming requests might also affect this - I don't know how Tomcat deals with that kind of thing in terms of creating new JVM instances, etc.
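[For reference, a minimal sketch of the heap adjustment mentioned above: Tomcat's startup scripts read extra JVM options from CATALINA_OPTS, which can be set in a $CATALINA_HOME/bin/setenv.sh file. The path and the sizes below are illustrative assumptions, not values taken from this report; they only show where the -Xms/-Xmx flags go.

    # $CATALINA_HOME/bin/setenv.sh -- hypothetical example; adjust sizes to the host
    # -Xms sets the initial heap, -Xmx the maximum heap for the Tomcat JVM
    CATALINA_OPTS="$CATALINA_OPTS -Xms512m -Xmx2048m"
    export CATALINA_OPTS

Note that these options apply to the single JVM Tomcat runs; concurrent requests are handled by threads inside that one process rather than by new JVM instances, so all requests share the same heap.]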
Here is the error from catalina.out:

    DODServlet ERROR (anyExceptionHandler): java.lang.OutOfMemoryError: Java heap space
    requestState: dataset: 'verylarge.nc' suffix: 'dods' CE: '' compressOK: false
    InitParameters: maxAggDatasetsCached: '20' maxNetcdfFilesCached: '100' maxDODSDatasetsCached: '100' displayName: 'THREDDS/DODS Aggregation/NetCDF/Catalog Server'
    java.lang.OutOfMemoryError: Java heap space

So my question is: what's the best way to make a reliable server that can serve these large files?
Cheers, -Tennessee