- Subject: [Datastream #IZJ-689237]: Additional Datafeeds
- Date: Tue, 14 Oct 2008 10:03:03 -0600
Hi Jeff,
re:
> First off, thank you for a response! I've got another, related ticket open
> in another area and haven't heard anything back on it - for 3 weeks.
I apologize for the lack of response to the GEMPAK question you submitted. We
have been struggling to provide quality GEMPAK support since our GEMPAK expert
left back in January. This situation is improving, however, because we were
finally able to hire a replacement in August. The GEMPAK-related efforts are
currently dedicated to getting the current distribution put together so sites
can upgrade to the latest release.
> Anyway, you need to know that I'm new to this whole situation. I'm providing
> general tech support for the Atmospheric Sciences department, but I came in
> about a month ago with no weather knowledge and no LDM/Gempak knowledge.
> What I've walked into is a situation where one of the faculty attended your
> LDM workshop a while back and decided, once he was back, that it was time to
> redo our LDM server from the ground up. Needless to say, it hasn't quite
> been the same since!
I hear you! I believe that the problems you are currently experiencing (the
data "holes") will be pretty easy to clean up. More below...
> I do have some Linux knowledge - at least enough to be dangerous, so I've
> been slowly picking my way through things, trying to figure out how it all
> works. That was interesting, but it's now time to get more serious about
> solving the remaining problems, so on to your questions.
Sounds like good, if not painful, progress.
> > you note: "when using Gempak/Garp, we have some holes in our data". What
> > "holes" are you experiencing?
> The "holes" are a lot of various "missing grids" in the Model Plan View.
> When you go into
> Garp and the Model Plan View, and pick, say "GFS thinned" you'll get all of
> the dates
> listed, along with the other two columns. You can pick pretty much anything
> from the General
> dropdown and get data, but if you pick most any other choice from the
> dropdown (convective,
> Isentropic, etc.) and choose one of the column fields, the terminal window
> scrolls by a whole
> list of "Input grid not found" messages. I'm clueless enough about this
> whole thing at the
> present time that I don't know if it's because the info isn't in the
> datafeed, isn't being
> processed once it's here, etc.
I am 99.99% convinced that your problem is due to two things:
1) your LDM queue is not sized to hold enough of the data you are receiving
long enough for the decoders to be able to act on the products received
before they are scoured out of the queue to make room for new products being
received.
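One quick way to check this (a general diagnostic suggestion on my part, not
something based on output you have already sent) is to look at the age of the
oldest product in your queue with the LDM's 'pqmon' utility. If the reported
age is only a few minutes, products are being scoured before the decoders can
get to them:
<as 'ldm'>
pqmon
-- the last value reported is the age, in seconds, of the oldest product in the queue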
Because of comments you make below, I _strongly_ recommend that you increase
the size of your LDM queue from the default of 400 MB to at least 2 GB.
Actually, while you are at it, I suggest that you upgrade to the latest
version of the LDM, LDM-6.7.0. The reason for this is twofold:
- it gets you up to rev with respect to the LDM
- LDM-6.7.0 contains modifications that cut down on the system resources used
by the GEMPAK GRIB/GRIB2 decoder, dcgrib!
<as 'ldm'>
cd ~ldm
ftp ftp.unidata.ucar.edu
<user> anonymous
<pass> your email address
cd pub/ldm
binary
get ldm-6.7.0.tar.gz
quit
tar xvzf ldm-6.7.0.tar.gz
cd ldm-6.7.0/src
./configure
make
make install
sudo make install_setuids
cd ~ldm
ldmadmin stop
ldmadmin delqueue
rm runtime
ln -s ldm-6.7.0 runtime
-- edit ~ldm/etc/ldmadmin-pl.conf and change the queue size from 400M to 2G
change:
$pq_size = "400M";
to:
$pq_size = "2G";
ldmadmin mkqueue
ldmadmin start
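Once the LDM is back up, it is worth double-checking that the new settings
took effect. For example (these are routine checks, nothing specific to your
setup):
<as 'ldm'>
ldmadmin config
-- confirm that the reported queue size is now 2G
ldmadmin watch
-- confirm that products are arriving (Ctrl-C to exit)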
2) you are using a single, combined pqact.gempak pattern-action file to process
all of the data you are receiving. The pattern-action processing routine,
'pqact', works through the entries in its configuration file(s) sequentially.
The more actions that are in the pattern-action file, the longer it will take
to check every product received against the list of actions.
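For reference, the entries that 'pqact' scans for every product have the
general form below. The feed type, pattern, and decoder invocation here are
purely illustrative (the real entries come from the GEMPAK-supplied script),
and the fields must be separated by tabs:
# feedtype <tab> extended regular expression matched against the product ID
#   <tab> action (FILE, PIPE, EXEC, ...) <tab> arguments
CONDUIT	^data/nccf/com/gfs/prod/gfs\..*
	PIPE	decoders/dcgrib2 -d data/gempak/logs/dcgrib2.log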
The Unidata GEMPAK distribution has a script that one can run to create all of
the actions needed to process data ingested via the IDD. Apparently you used
this script to create your ~ldm/etc/pqact.gempak pattern-action file, so you
should be aware of the script I am referring to. When the script is run, the
user (you) is given a choice of creating a single pattern-action file (a
combined file) or multiple pattern-action files. Sites that are ingesting
CONDUIT should choose to create multiple pattern-action files. Also, the
script will list the entries that one needs to add to one's ~ldm/etc/ldmd.conf
file. The entries that are needed are different when one chooses to use a
combined pattern-action file than when one chooses to use multiple
pattern-action files.
Do the following _before_ restarting the LDM with a larger queue as was noted
above:
- rerun the GEMPAK script and choose to create multiple pattern-action files
- remove the current GEMPAK-related 'exec' lines from your ~ldm/etc/ldmd.conf
file
- include the list of 'exec' lines suggested by the script in
~ldm/etc/ldmd.conf (illustrated below)
- copy the multiple pattern-action files created to the ~ldm/etc directory,
making sure that the files are readable and writable by the user running your
LDM
- restart the LDM
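To give you a feel for what the script suggests, the ldmd.conf entries are
'exec' lines that start one 'pqact' per pattern-action file, each restricted
to the relevant feed types. The file names and feed type groupings below are
only illustrative; use the exact lines the script prints:
exec    "pqact -f IDS|DDPLUS          etc/pqact.gempak_nwx"
exec    "pqact -f HRS|NGRID|CONDUIT   etc/pqact.gempak_decoders"
exec    "pqact -f NIMAGE|UNIWISC      etc/pqact.gempak_images"
exec    "pqact -f NEXRAD3             etc/pqact.gempak_nexrad"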
> > you are ingesting the full CONDUIT (aka NMC2) datastream. Do you
> > really want all of the products in CONDUIT?
> > what are your objectives? For instance, is one of the holes you are
> > experiencing related to you not ingesting NEXRAD Level II data?
> This probably sounds stupid, but again, it goes back to my inexperience with
> the system
No questions are stupid. You are facing a tough situation in having to
configure things for which you have not received training!
> - The main thing that the Chair of the department is concerned with is that
> all of the Garp stuff works. I guess that's my present objective. He wants
> the missing Model data. If that means that we need all of CONDUIT, then yes.
> If that means we cut back the CONDUIT information to reduce latency so the
> necessary feed can be processed, then no. Sorry I don't have a better answer.
This is a perfectly good answer. The solution is what I listed above.
> > what is the size of your LDM queue?
> > Please send us the output of 'ldmadmin config'.
>
> The queue appears to be 400Mb. I uploaded the ldmadmin config output file to
> the support portal.
The default 400 MB queue is good for sites that are ingesting the UNIDATA set
of datastreams (UNIDATA == IDS|DDPLUS|HRS|UNIWISC). It is not large enough for
sites ingesting CONDUIT or NGRID.
re: splitting your CONDUIT feed request into fifths
> If we can determine that we need all of the CONDUIT data, I'll go ahead and
> split it.
I suggest splitting the CONDUIT feed into fifths as I indicated in the previous
email. This will help keep the latencies as low as possible. Just so you know:
the site you are requesting the CONDUIT data from, idd.unl.edu, splits its own
CONDUIT request to its upstream feeder into fifths.
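As a sketch of what the five-way split looks like in ~ldm/etc/ldmd.conf
(assuming you keep requesting from idd.unl.edu), the single CONDUIT request is
replaced by five requests whose patterns divide the products by the last digit
of their sequence number:
request CONDUIT "[09]$" idd.unl.edu
request CONDUIT "[18]$" idd.unl.edu
request CONDUIT "[27]$" idd.unl.edu
request CONDUIT "[36]$" idd.unl.edu
request CONDUIT "[45]$" idd.unl.edu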
re: the NGRID datastream
> Does this sound like a possibility, with my description of our "holes"?
No, not really. I believe that your data "holes" are caused by data not
being processed out of the queue fast enough.
re: simplifying your NNEXRAD request
> Thanks. I'll probably go ahead and change that.
Very good. Again, this will not change data latencies. It will make your
~ldm/etc/ldmd.conf NNEXRAD 'request' entry simpler and, presumably, easier to
understand for the next person who needs to pay attention to the LDM. Also,
in the current version of the LDM, the primary name for the NEXRAD Level III
products is NEXRAD3, not NNEXRAD. NNEXRAD continues as an alias for the feed,
so not changing it in your ~ldm/etc/ldmd.conf file will not have any
detrimental effects.
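For illustration, a simplified Level III request can be as short as the single
line below; the upstream host shown is an assumption on my part, so keep
whichever host currently feeds you NNEXRAD:
request NEXRAD3 ".*" idd.unl.edu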
re: description of your machine
> As far as I know, this is a dual 64-bit processor IBM server with 4Gb RAM,
> running CentOS 5.
OK. This should be a great platform on which to ingest and process all of the
data you are getting off of the IDD.
> Hopefully it's all right that I re-opened the ticket and uploaded the
> ldmd.conf file, ldmd.log file, plus a file with df, free, and uname
> information in it.
It is exactly what I was hoping for. Aside: my profile in our inquiry tracking
system is set to close tickets automatically when I reply. A follow-up by a
user automatically reopens the ticket. The indication that the ticket has been
closed is not meant to be interpreted to mean that there is no more to be said
on the subject.
> Hopefully these will provide some insight into how things are presently setup.
They did, thanks!
> Any suggestions will be more than welcome.
Like I said above, I am 99.99% sure that your situation will improve
dramatically when you start using a larger LDM queue, multiple GEMPAK
pattern-action files, and the current release of the LDM.
> Thank you again for the help.
No worries.
Cheers,
Tom
--
****************************************************************************
Unidata User Support UCAR Unidata Program
(303) 497-8642 P.O. Box 3000
address@hidden Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage http://www.unidata.ucar.edu
****************************************************************************
Ticket Details
===================
Ticket ID: IZJ-689237
Department: Support Datastream
Priority: Normal
Status: Closed