This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
>From: David Fitzgerald <address@hidden>
>Organization: Millersville University
>Keywords: 199903261656.JAA28914

Dave,

>Hi there, hope your doing well!

I'm muddling by, thanks.

>This isn't a question for the community as a whole so thats why I'm sending
>directly to you.

OK, I hope I can help out.

>Just wanted to get your opinion on how you think the best
>way to configure a network running NIS+, LDM, and a file server
>serving 90 user accounts, and software such as Garp, Mcidas etc....

Well, for the first thing: we don't use NIS+. Our system administrator has a number of arguments against the use of NIS, but he is better qualified to expound on his reasons than I am (address@hidden).

>Given the new NOAAPORT data and the quantity of processing the LDM
>does, do you feel that it would be best to have the file server, NIS+
>master and LDM server on separate machines,

Each on its own machine?

>or do you think it would be
>best to put all these on one high end server?

We have run into some kind of wall here at Unidata on our Sun Ultra. This is the machine that serves disk (via NFS); runs the LDM; runs all of the decoders (GEMPAK, McIDAS, ldm-mcidas, netCDF, etc.); and handles general compiles. Our system has slowed down considerably due to frequent I/O waits. The CPUs (it is a dual-CPU machine) are never very taxed, but disk throughput has become a BIG issue.

>I'm partial to keeping
>things separate, but with the increase in the LDM data flow, and with
>more products becoming available over time, the increase in software
>processing power needed and a constantly increasing number of users,
>would it be more cost effective to have one high end server handling
>all these jobs instead of upgrading individual servers as they become
>obsolete? (sorry for the grammar!)

If you were to go this route, I would strongly recommend that you look into a disk RAID system. Throughput on a RAID is considerably better than Ultra Wide SCSI (which is what we have).
RAIDs also employ disk striping, which further improves throughput.

As far as the CPU goes, the question is whether or not you are trying to support all of those users with the compute power of the one machine (I hope not). McIDAS, for one, will choke the machine since it uses so much shared memory. GEMPAK is not as bad, but it too is a big user of resources.

>This is something we have to decide here for ourselves, but I was
>hoping to get your opinion, as you have more experience and have a good
>idea on what will be needed for the future as far as Unidata products
>are concerned.

I can tell you that we are looking to offload the LDM and decoder processing onto one of the dual 450 MHz Pentium II PCs that we bought a few months back. We haven't done this yet, so we can't report conclusively on whether or not this is a good idea.

>Thanks for any help you can provide!

If you have more specific questions, I am sure that Mike S. (email above) can answer them better than I can. Please don't hesitate to drop him a note.

Tom
--
+-----------------------------------------------------------------------------+
* Tom Yoksas                                             UCAR Unidata Program *
* (303) 497-8642 (last resort)                                  P.O. Box 3000 *
* address@hidden                                            Boulder, CO 80307 *
* Unidata WWW Service                          http://www.unidata.ucar.edu/   *
+-----------------------------------------------------------------------------+
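For what it's worth, the split described above (LDM and decoders on one machine, the file server on another) is mostly an LDM configuration exercise. A minimal sketch of the two files involved on the dedicated ingest/decode host follows; the hostnames, feed types, and product patterns are purely illustrative and would need to be replaced with your site's actual values:

```
# ldmd.conf on the dedicated LDM/decoder host (illustrative):
#
# Request the feeds you want from your upstream relay
# (host name here is hypothetical):
REQUEST  WMO     ".*"   ldm.upstream.example.edu
REQUEST  MCIDAS  ".*"   ldm.upstream.example.edu

# Run pqact so that decoders fire as products arrive:
EXEC     "pqact"

# pqact.conf on the same host (illustrative pattern/decoder pairing):
# pipe matching products into a decoder, writing output to the
# NFS-mounted disk served by the separate file server:
HDS     ^.*
        PIPE    decoders/some_decoder -d /nfs/data
```

With this arrangement the ingest host does the LDM and decoder work, while the 90 user accounts only read the decoded files over NFS from the file server, so the heavy I/O and the interactive load land on different machines.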