This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Hi Xiaofeng,

re:
> Would you please describe current situation of Unidata TIGGE platform for
> us, that will be a good example for us to upgrade our interchange servers.

Actually, we (Unidata) do not have a machine dedicated to TIGGE-related
activities. NCAR/CISL does, and I see that Doug Schuster has already sent
you the configuration for the machine(s) that they are using. Nonetheless,
I want to let you know the specifications for the machines we are beginning
to use for our top-level IDD relay cluster, idd.unidata.ucar.edu. NB: we are
not attempting to recommend specific manufacturers or suggest what pricing
you should expect. Rather, I am trying to give an overview of the class of
machine that we think will be useful for LDM/IDD work going into the future.

 # | Description                               | Price
---+-------------------------------------------+------
 1 | SuperMicro workstation 7046A-T 4U chassis | $ 950
 2 | Intel Xeon QC E5520 LGA 1366 2.26 GHz     | $ 820
 6 | Kingston 4 GB DDR3 1066 MHz CL7           | $ 690
 1 | Western Digital SATA 500 GB 7.2K 32MB HD  | $  65
 1 | Lite-On 20x SATA DVD drive                | $  29
---+-------------------------------------------+------
   | Total                                     | $2554

As you can see, this machine has 24 GB of RAM. What may not be obvious is
that it can take a LOT more RAM than that (128 GB, I believe, but I am not
100% sure). The machine has redundant power supplies and dual Gbps
networking, and supports large frames. We are currently running Fedora 12
Linux, but we will be upgrading to Fedora 13 in the near future.

I hope that this helps...

Cheers,

Tom

> Thanks in advance!
>
> Xiaofeng Bian
>
> From: Doug Schuster [mailto:address@hidden]
> Sent: Wednesday, July 07, 2010 6:27 AM
> To: Xiaofeng Bian
> Cc: Baudouin Raoult; Manuel Fuentes; ??; address@hidden;
> address@hidden; address@hidden; Tom Yoksas
> Subject: Fwd: LDM
>
> Xiaofeng,
>
> Below is an email I had previously sent to Baudouin that describes our
> LDM configuration.
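One detail worth pulling out of the LDM configurations quoted below is the relationship between the queue size and the pq_slots setting. The slot counts given (533333 for an 8 GB queue, 50000 for a 1 GB queue) are consistent with slots = queue size / expected mean product size, for mean product sizes of roughly 15 KB and 20 KB. Those mean sizes are an inference from the numbers, not something stated in the emails; a minimal sketch of the arithmetic, assuming decimal gigabytes:

```python
# Sketch of LDM product-queue slot sizing: slots ~= queue bytes / mean
# product size.  The mean sizes used below (15 KB and 20 KB) are inferred
# from the slot counts quoted in this thread, not stated in it.

def pq_slots(queue_bytes: int, mean_product_bytes: int) -> int:
    """Estimate the slot count for an LDM product queue."""
    return queue_bytes // mean_product_bytes

GB = 10**9  # decimal gigabyte (an assumption that makes the numbers match)

machine1_slots = pq_slots(8 * GB, 15_000)  # Machine 1: 8 GB queue
machine2_slots = pq_slots(1 * GB, 20_000)  # Machine 2: 1 GB queue
print(machine1_slots, machine2_slots)      # 533333 50000
```

If products turn out larger on average than the estimate, the queue exhausts its bytes before its slots; if smaller, it exhausts slots before bytes. So the slot count is normally tuned to the mean product size actually observed on the feed.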
> We're replacing "Machine 2" with a new machine that will copy the built
> "archive files" to a large GPFS disk system and HPSS, in addition to
> ingesting data through LDM. I'll send you the new "Machine 2"
> specifications in a future email.
>
> It may also be a good idea to contact Tom Yoksas at Unidata for his input
> on your situation.
>
> Best,
>
> Doug
>
> Begin forwarded message:
>
> From: Doug Schuster <address@hidden>
> Date: September 30, 2009 9:25:26 AM MDT
> To: address@hidden
> Cc: Doug Schuster <address@hidden>
> Subject: Re: LDM
>
> Baudouin,
>
> We have two machines for our LDM configuration. See below. -Doug
>
> Machine 1 acts as a data relay and distribution machine (data reservoir).
>
> Machine 1 functions:
>  - Request all "near-realtime" data from ECMWF, CMA, and unformatted
>    data from NCEP into the queue (multiple inbound data streams).
>  - Insert reformatted (TIGGE-compliant) NCEP data into the queue.
>  - Allow ECMWF, CMA, and Machine 2 (NCAR) to request data from the
>    queue (multiple outbound data streams).
>  - Do not write anything from the queue out to disk.
>  - Retain data in the queue for 1/2 hour to 1 hour if possible
>    (creating a data reservoir).
>
> Machine 1 specs:
>  - LDM version: 6.8.1
>  - LDM queue size: 8 GB on local disk
>  - pq_slots: 533333
>  - dual Intel (i86pc) machine running SunOS 5.10 (we will be moving to
>    Linux in the future)
>  - 16 GB RAM
>
> Machine 2 ingests all data from Machine 1 and the *.archive machines
> (e.g., tigge-portal.ldm.ecmwf.int), and processes the data out of the
> LDM queue to disk.
>
> Machine 2 functions:
>  - Request all data from Machine 1 (one inbound data stream, i.e., one
>    REQUEST line in ldmd.conf).
>  - Request backfill data from *.archive machines (multiple inbound data
>    streams from ECMWF and CMA).
>  - Process all data out of the queue to disk (data written to a
>    high-performance RAID disk connected directly to Machine 2).
>  - Retain data in the queue for only a few minutes.
>      - Allows for faster data processing out of the queue to disk.
>  - Validate GRIB message integrity and cycle completeness, build archive
>    files, copy archive files to SAN storage disk, tape system, etc.
>
> Machine 2 specs:
>  - LDM version: 6.8.1
>  - LDM queue size: 1 GB on a high-performance RAID disk (directly
>    connected to Machine 2)
>  - pq_slots: 50000
>  - dual Intel (i86pc) machine running SunOS 5.10 (we will be moving to
>    Linux in the future)
>  - 16 GB RAM
>  - RAID disk size: 1.3 TB, increasing to 3 or 4 TB in the near future
>
> address@hidden wrote:
> > Dear all,
> >
> > Because of the limited capacity of our current LDM server, an upgrade
> > of our TIGGE platform is now in the planning phase. We hope to improve
> > our servers' ability to handle the TIGGE data transfer, and we are
> > considering replacing the current Linux cluster with servers such as
> > the IBM p-series, so we are eager for more information. Knowing how
> > other centers have constructed their TIGGE platforms would be very
> > useful guidance for us. If that's all right, would you please describe
> > to us in detail the current state of your TIGGE platform, covering
> > both the LDM servers and the archiving and retrieval servers?
> >
> > Thanks a lot!
> >
> > Regards,
> >
> > Xiaofeng

Cheers,

Tom

--
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                              Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                          http://www.unidata.ucar.edu
****************************************************************************

Ticket Details
===================
Ticket ID: BJO-661283
Department: Support IDD TIGGE
Priority: Normal
Status: Closed