> RedHat and/or Intel just can't handle a high load average. A load
> average of >10 is sufficient to totally hose a RH system, possibly
> requiring a reboot.
>
> I've seen load averages > 10 on other systems, without user
> performance being affected.

Since this was brought up, I am replying to this point only. IMO this is definitely Linux behavior, not the Intel platform; it has nothing to do with the system being Intel based. It's Linux that can't handle such a high load average. This is one reason why I think it is unsuitable if you REALLY want to run for a year without rebooting while working the machine hard. The fact that people put up with this as the norm is mystifying to me.

SPARC/IRIX/PowerPC systems don't necessarily handle high loads better because of the hardware; the OSes that run on them do, and Solaris x86 has no such issues. I have two development Solaris x86 LDM boxes (Dell 1400SC) that had some processes run amok (my fault), which kept the load average over 14 for a week, and people continued to get data from them. They noticed things were a tad slow, but nobody thought there was an issue until I happened to check on the boxes and saw the runaway processes.

Linux may be cool, and for some things it may be faster, but if you truly push it hard it will fail you every time. Since many people see this as a religious issue, I will not blast the whole group.
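As an aside, for anyone who wants to keep an eye on this sort of thing themselves, here is a minimal sketch of a load-average watcher (illustrative only, not something from those boxes; it assumes Python is available on the host, and the threshold of 10 simply mirrors the figure quoted above):

import os
import time

THRESHOLD = 10.0   # load figure discussed above; adjust to taste

def check_load(threshold=THRESHOLD):
    # os.getloadavg() returns the 1-, 5-, and 15-minute load averages
    # on Unix systems; a high 15-minute value suggests a sustained
    # problem, such as a runaway process, rather than a momentary spike.
    one_min, five_min, fifteen_min = os.getloadavg()
    if fifteen_min > threshold:
        print("sustained high load: 1m=%.1f 5m=%.1f 15m=%.1f "
              "-- check for runaway processes"
              % (one_min, five_min, fifteen_min))
    return one_min, five_min, fifteen_min

if __name__ == "__main__":
    while True:
        check_load()
        time.sleep(60)   # poll once a minute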