This archive contains answers to questions sent to Unidata support through mid-2025. The archive is no longer being updated; it is provided for reference, and many of the answers remain technically correct even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Hi Pete,

re:
> I have some money (4K or so) to spend before June 1. I'd like to
> upgrade/replace idd.aos.wisc.edu with something that will be able to
> handle the upcoming increases in data through noaaport, NEXRAD, CONDUIT,
> etc.

Good timing on having some money available!

re:
> This will be solely an ingest and relay machine, no data saving or
> processing on this box.

OK.

re:
> I was looking at a single linux machine with dual quad-core opterons, 32
> Gb of RAM, and dual 15K RPM SAS 300 Gb drives. Does this seem
> reasonable, or would you suggest some other setup?

This seems completely reasonable. The newer machines we have been buying
here are all multi-processor (typically dual quad-core) with as much memory
as we can afford (three of the new machines have 72 GB of RAM, for
instance).

re:
> I have heard of
> people running redundant servers (one ldm address but several machines
> running behind it I think..) Is this do-able,

Yes. The top-level IDD relay that we operate, idd.unidata.ucar.edu, is
actually a cluster of machines composed of:

- two front-end "accumulators"
- between 5 and 8 back-end "real servers", each configured identically
- two "directors" (only one of which is active at any one time)

The directors have the IP address of idd.unidata.ucar.edu. They receive
connection requests and farm them out to the real-server back-end machines.
We use the LVS package (a standard part of Linux installations) on the
directors. (A rough, illustrative sketch of what the director does appears
at the end of this message.)

re:
> and something I should
> consider, or should I just go with a single box as described above?

It really depends on what role you want to play in the IDD. If you would
like to see UW become a top-level relay node for most, if not all, of the
available datastreams, you might pursue setting up a cluster fashioned
after the one we run here. Penn State has done this, and I believe that
Texas A&M is considering doing so. If you do not envision a greatly
expanded role in IDD data relay, then I would think more along the lines of
a single, well-configured machine.

re:
> Thanks for any advice you can provide!

No worries. We could provide some information on machines we have recently
purchased for IDD-related activities, but prices change so fast that the
information may not be of much use (plus, if UW has special deals with
certain vendors, those should guide your purchases).

Cheers,

Tom
--
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                           Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************

Ticket Details
===================
Ticket ID: DYP-533202
Department: Support Platforms
Priority: Normal
Status: Closed
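As promised above, here is a minimal, illustrative sketch of the "director"
idea, written as a user-space Python program. This is a toy under stated
assumptions, not how LVS actually works: LVS dispatches at the packet level
inside the Linux kernel, while this sketch simply accepts each TCP
connection on the LDM port and relays it to the next back-end host in
round-robin order. The back-end hostnames are placeholders, not any real
Unidata configuration.

#!/usr/bin/env python3
# Illustrative only: a user-space sketch of the "director" role described
# in the message above. Each incoming TCP connection on the LDM port is
# handed to the next back-end "real server" in round-robin rotation.
# Hostnames below are hypothetical placeholders.

import itertools
import socket
import threading

LDM_PORT = 388  # standard LDM service port; binding to it requires root

# Hypothetical back ends, each running an identically configured LDM.
BACKENDS = ["real1.example.edu", "real2.example.edu", "real3.example.edu"]
backend_cycle = itertools.cycle(BACKENDS)

def pipe(src, dst):
    """Copy bytes from one socket to the other until EOF, then close both."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass  # peer went away; fall through and clean up
    finally:
        src.close()
        dst.close()

def handle(client):
    """Relay one client connection to the next back end in the rotation."""
    backend = next(backend_cycle)
    upstream = socket.create_connection((backend, LDM_PORT))
    # Shuttle traffic in both directions until either side hangs up.
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

def main():
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("", LDM_PORT))
    listener.listen(64)
    while True:
        client, _addr = listener.accept()
        try:
            handle(client)
        except OSError:
            client.close()  # chosen back end unreachable; drop connection

if __name__ == "__main__":
    main()

Note that a user-space relay like this copies every byte through the proxy
process, which is why the kernel-level LVS approach is preferred at IDD
relay volumes.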