- Subject: [TIGGE #LGY-600646]: Re: Missing fields from CMA
- Date: Mon, 30 Apr 2007 09:02:12 -0600
Hi YangXin,
This is a quick follow-up to what Steve Emmerson wrote the other
day, and some comments on the LDM design that you illustrated in
the TIGGE_Data_Exchange_CMA_Infrastructure_2007.04.28.ppt slide
you sent us previously.
Seeing your slide was most welcome! What you have illustrated
is virtually identical to one of the recommendations we made
after our internal review of the challenges experienced in
moving data to/from the CMA:
- separate the ingest of data from upstream sites (i.e., ECMWF and NCAR)
from the sending of data to downstream sites (currently ECMWF and NCAR)
Moving these activities to separate LDMs helps to mitigate the problems
seen with LDM queues that are too small to hold more than a few hundred
seconds of data.
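For reference, enlarging an LDM queue amounts to recreating it, along
these lines (a sketch only; the exact steps depend on your LDM version,
and the queue size itself is set in the ldmadmin configuration file):

    ldmadmin stop
    ldmadmin delqueue     # remove the existing product queue
    ldmadmin mkqueue      # recreate it at the configured (larger) size
    ldmadmin start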
This does not mean, however, that we are rescinding some of our other
recommendations. We still recommend installing more RAM in each
LDM machine and increasing the LDM queue size. The amount of RAM to add
and the size of the LDM queue may be smaller than we previously
recommended if/when you implement the multiple-LDM approach shown in your
PPT slide. We will need to evaluate the situation as the transition is made.
re:
> > I also have two more suggestions for the mechanisms of the
> > Product Queue. One is: would it be possible for the LDM to run
> > multiple Product Queues concurrently, in order to separate
> > data with different characteristics? For example,
> > incoming data would go into one PQ and outgoing data into
> > another, or the outgoing data could be further split across
> > multiple sending PQs based on feed type or some other
> > property. If so, each PQ would then hold only
> > part of the complete data set that is handled by
> > one PQ today.
As Steve pointed out, it is possible to run more than one LDM
system on a single computer (each LDM would have its own user
account, and each would do data transfers through separate,
possibly virtual, interfaces). We are using this approach on
a top-level IDD injection/relay system at the University of
Wisconsin, and it is working very well.
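To sketch what that looks like (the account names, feed type, and
hostnames below are hypothetical, not our actual configuration):

    # one user account per LDM instance, each with its own queue
    # and ldmd.conf
    ~ldm-ingest/etc/ldmd.conf:
        request  EXP  ".*"  upstream.example.int     # pull from upstream
    ~ldm-send/etc/ldmd.conf:
        allow    EXP  ^downstream\.example\.edu$     # serve downstream

Each LDM server would then bind to its own (possibly virtual) interface
so that both can listen on the standard LDM port, 388.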
That being said, we still think that a cleaner, more direct
solution is to have multiple computers each running their own LDM.
That is why we were delighted to see your PPT slide conceptualization!
re: uptime.tcl
> How does this tool work? What are the prerequisites for
> installing and running the "uptime" script?
> I mean, do I need to prepare something?
Steve was a bit off here. The 'uptime' utility is a simple Tcl
script that does nothing more than run a series of LDM and Linux
system tools, once per minute, to gather information on system
performance. Its prerequisites are:
- installation of Tcl/Tk (run 'which tclsh' to find out if this
is installed on your machine)
- installation of 'iostat' and 'vmstat' ('iostat' comes from the
'sysstat' package and 'vmstat' from 'procps'; 'sysstat' is typically
not installed on Linux machines, but is easily installed using yum)
- the LDM utility 'pqutil'
- creating two crontab entries, one to run the script and the
other to rotate the log file on some regular basis (like once
per month); see the sketch just after this list
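If the script is invoked from cron once per minute, the two entries
would look something like this (the paths are illustrative; yours will
depend on where uptime.tcl and its log file live):

    * * * * * /home/ldm/util/uptime.tcl >> /home/ldm/logs/uptime.log 2>&1
    0 0 1 * * mv /home/ldm/logs/uptime.log /home/ldm/logs/uptime.log.1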
We run the 'uptime' monitoring tool on all LDM/IDD machines under our
control, and a number of our sites are also using the tool to
help keep an eye on system performance. We can install uptime.tcl any
time you like; please let us know if you agree to our doing this.
More later...
Cheers,
Tom
****************************************************************************
Unidata User Support UCAR Unidata Program
(303) 497-8642 P.O. Box 3000
address@hidden Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage http://www.unidata.ucar.edu
****************************************************************************
Ticket Details
===================
Ticket ID: LGY-600646
Department: Support IDD TIGGE
Priority: Normal
Status: Closed