To clarify somewhat: aggregations only open one file at a time and then close it, so this won't cause "too many open files" problems.
If you see "too many open files" errors, then either there's a file leak (which we would like to know about), or your file cache limit is set too high relative to your OS file handle limit.
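As a quick sanity check, you can compare your OS per-process file handle limit against whatever cache ceiling you plan to configure. A minimal sketch (the headroom factor is just an assumption for illustration):

```python
import resource

# Query the OS per-process open-file limit (soft, hard).
# The server's file cache maximum should stay comfortably below the soft limit.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# Hypothetical rule of thumb: leave half the handles free for sockets,
# logs, jars, and other descriptors the JVM holds open.
suggested_max_files = max(1, soft // 2)
print(soft, hard, suggested_max_files)
```

If `suggested_max_files` is smaller than your configured cache size, either raise the OS limit (e.g. `ulimit -n`) or lower the cache limit.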
There is currently no per-client resource throttle, unfortunately, but we are aware of the eventual need for one. Any given request is single-threaded, so it can't hog too many resources. One can limit the size of OPeNDAP responses, which tends to be the main problem on our server anyway.
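For illustration, OPeNDAP response-size limits go in the server's threddsConfig.xml; a sketch along these lines (element names and the megabyte values are from memory, so check the documentation for your TDS version):

```xml
<threddsConfig>
  <Opendap>
    <!-- Maximum size of an ASCII OPeNDAP response, in megabytes -->
    <ascLimit>50</ascLimit>
    <!-- Maximum size of a binary OPeNDAP response, in megabytes -->
    <binLimit>500</binLimit>
  </Opendap>
</threddsConfig>
```

A request that would exceed these limits is rejected instead of being allowed to consume the server.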
John

On 5/6/2010 6:50 AM, Rich Signell wrote:
You might try increasing the max number of open files. See the top item at http://rsignell.tiddlyspot.com

-Rich

On May 6, 2010, at 1:46, "James T. Potemra" <address@hidden> wrote:

Actually, I think the problem is worse in that over-zealous clients can shut down your TDS.

Jim

Paul Reuter wrote:

This is probably a good reason to limit the duration available, opting for a rolling archive. However, if I recall, TDS can be configured to set limits on a single client for a certain number of files/bytes/time. Either way, the client will crash out when that limit is met, and this may not be expected functionality for the client. We might get more complaints that way than from just limiting what we serve.

Paul

On the other hand, if you use TDS to aggregate the files (presumably what you are doing), then if someone were to try to make a time series (again, for example) at a single point, TDS would have to access all the files and then might crash (and throw an error like "too many open files").
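For reference, the kind of time aggregation being discussed here is typically declared in NcML roughly as follows (the directory path, suffix, and dimension name are hypothetical placeholders):

```xml
<netcdf xmlns="http://www.unidata.ucar.edu/namespaces/netcdf/ncml-2.2">
  <!-- Join many files along the existing "time" dimension -->
  <aggregation dimName="time" type="joinExisting">
    <!-- Scan a directory for the files to aggregate -->
    <scan location="/data/model/" suffix=".nc" />
  </aggregation>
</netcdf>
```

A point time-series request against such an aggregation touches every file in the scan, which is why file-handle behavior matters.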