- Subject: [CONDUIT #NEV-407710]: RUC grib1 increase in size
- Date: Thu, 11 Jun 2009 14:55:22 -0600
Hi Brian,
re:
> We are obtaining RUC model data in grib1 format from CONDUIT data feed.
> Over the past week, we have noticed that the file sizes have in some cases
> doubled. Over a year ago we saw a similar issue in that many variables
> were repeated within the files, causing the sizes to be larger than
> necessary (see below). In order to prepare for disk storage of these
> files, is the increased size of the files something that we can expect
> to see in the future, or is it more of a temporary glitch?
The CONDUIT feed contains individual GRIB messages, and GRIB messages
from individual model runs are inserted into the CONDUIT feed once. If
you are seeing a doubling of output file sizes, it indicates that your
system is writing twice the number of GRIB messages to the output file(s).
This could happen if you were feeding CONDUIT redundantly from two upstream
sites, one of which is delivering the data products significantly more
slowly than the other, AND your LDM queue is too small to still be holding
the products from the faster upstream connection when the same products
arrive from the slower one. If your machine is idd.nrcc.cornell.edu, this
situation would not apply, since you are requesting data from a single
upstream, idd.unidata.ucar.edu:
http://www.unidata.ucar.edu/cgi-bin/rtstats/iddstats_topo_nc?CONDUIT+idd.nrcc.cornell.edu
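Whichever the cause, you can check an output file directly. The sketch below
is a rough example of mine, not an official Unidata tool: it walks the raw
bytes of a file, tallies the GRIB edition of each message it finds, and
counts how many messages are byte-for-byte repeats. The file name in the
usage comment is hypothetical.

#!/usr/bin/env python3
"""Rough duplicate-message check for a file of concatenated GRIB messages."""
import hashlib
import sys
from collections import Counter


def iter_messages(buf):
    """Yield (edition, raw_bytes) for each GRIB1/GRIB2 message found in buf."""
    pos = 0
    while True:
        start = buf.find(b"GRIB", pos)
        if start < 0 or start + 16 > len(buf):
            return
        edition = buf[start + 7]
        if edition == 1:
            # GRIB1 section 0: total message length is octets 5-7 (big-endian)
            length = int.from_bytes(buf[start + 4:start + 7], "big")
        elif edition == 2:
            # GRIB2 section 0: total message length is octets 9-16 (big-endian)
            length = int.from_bytes(buf[start + 8:start + 16], "big")
        else:
            pos = start + 4
            continue
        if length <= 0:
            pos = start + 4
            continue
        yield edition, buf[start:start + length]
        pos = start + length


def main(path):
    with open(path, "rb") as f:
        buf = f.read()
    editions = Counter()
    digests = Counter()
    for edition, msg in iter_messages(buf):
        editions[edition] += 1
        digests[hashlib.md5(msg).hexdigest()] += 1
    duplicates = sum(n - 1 for n in digests.values() if n > 1)
    total = sum(editions.values())
    print(f"{total} messages "
          f"(GRIB1: {editions[1]}, GRIB2: {editions[2]}), "
          f"{duplicates} exact duplicates")


if __name__ == "__main__":
    main(sys.argv[1])   # e.g. python check_dups.py ruc_output.grib1 (hypothetical)

If the duplicate count is close to half of the total, that points to each
product being filed twice rather than to larger products arriving on the feed.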
Questions:
- is the machine on which you are seeing the problem, in fact,
  idd.nrcc.cornell.edu?
- what is the size of your LDM queue? (a rough sizing sketch follows the
  volume summary below)
I ask because I see that idd.nrcc.cornell.edu is receiving up to 6 GB of
data per hour:
http://www.unidata.ucar.edu/cgi-bin/rtstats/rtstats_summary_volume?idd.nrcc.cornell.edu
Data Volume Summary for idd.nrcc.cornell.edu
Maximum hourly volume 6051.883 M bytes/hour
Average hourly volume 2697.925 M bytes/hour
Average products per hour 161598 prods/hour
Feed             Average           Maximum      Products
               (M byte/hour)     (M byte/hour)  (number/hour)
CONDUIT      1727.240 [ 64.021%]    4776.542     49037.370
NNEXRAD       363.689 [ 13.480%]     438.140     35075.022
HDS           357.835 [ 13.263%]     688.848     29917.696
FSL3          141.315 [  5.238%]     155.960        34.261
IDS|DDPLUS     50.761 [  1.881%]      76.595     47369.435
UNIWISC        26.669 [  0.988%]      31.928        31.174
FSL2           25.640 [  0.950%]      33.028       126.674
DIFAX           4.775 [  0.177%]      25.454         6.457
- are you intending to write each CONDUIT product to an individual file, or
  to write multiple products to a single output file?
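As a rough rule of thumb (my assumption, not an official sizing
recommendation), the queue should be big enough to hold at least an hour of
data at your peak rate, so that products are not pushed out before pqact has
had a chance to process them. With the peak volume in the summary above,
that works out to roughly 6 GB:

# Back-of-the-envelope queue sizing from the volume summary above.
# Assumption (mine): the queue should hold at least an hour of data at the
# peak rate so that slow processing, or a lagging redundant feed, cannot push
# products out of the queue before they have been handled.
peak_mbytes_per_hour = 6051.883     # "Maximum hourly volume" from the table
hours_to_hold = 1.0                 # how long products should stay resident
queue_bytes = peak_mbytes_per_hour * 1e6 * hours_to_hold
print(f"suggested minimum queue size: ~{queue_bytes / 1e9:.1f} GB")   # ~6.1 GB

If your queue is much smaller than that, products arriving late from a slower
redundant feed could easily show up after the originals have already been
deleted from the queue.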
> Additionally, we would like to move to the grib2 versions of these files
> but had noticed about a year ago that certain hours were consistently
> missing (see below). Is this still a problem that occurs with the grib2
> files?
We are not aware of consistently missing data in the CONDUIT feed. Can you
verify instances of missing forecast hours?
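One way to verify is to list the forecast hours actually present in each
file. The sketch below is a minimal example of mine, assuming the pygrib
package is installed; the file-name pattern in the usage comment is
hypothetical.

#!/usr/bin/env python3
"""List the forecast hours present in each GRIB file matching a pattern."""
import glob
import sys

import pygrib   # assumption: pygrib is installed


def forecast_hours(path):
    """Return the set of forecast hours found in one GRIB file."""
    hours = set()
    grbs = pygrib.open(path)
    for grb in grbs:
        # 'forecastTime' is the forecast step of the message; for hourly RUC
        # output this is in hours (check stepUnits if in doubt)
        hours.add(int(grb.forecastTime))
    grbs.close()
    return hours


if __name__ == "__main__":
    # usage (hypothetical pattern):  python check_hours.py 'ruc2.t??z.pgrb2*'
    for path in sorted(glob.glob(sys.argv[1])):
        print(path, sorted(forecast_hours(path)))

Comparing the printed hours against what you expect for each cycle should
make any consistently missing times obvious.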
Also, the transition from GRIB1 to GRIB2 for all but ICWF grids was completed
by January 23, 2008:
http://www.unidata.ucar.edu/Projects/CONDUIT/NewOrleans_20080123.html
At the CONDUIT meeting in New Orleans in 2008 it was decided that the ICWF grids
should be removed from CONDUIT as soon as possible:
http://www.unidata.ucar.edu/Projects/CONDUIT/summary_0123.html
We were under the impression that the ICWF grids had been removed from
the feed.
Question:
- are the GRIB1 grids you are referring to the ICWF ones?
If yes, we will work with NCEP to get them removed from the feed.
Comments:
- the only GRIB1 messages in IDD feeds should be ones from the HRS
(aka HDS) datastream from NOAAPort
- it is possible that the problem of missing forecast times you have seen
  was caused by received GRIB products being overwritten in your LDM queue
  by newly received data before they could be processed. This could
  happen if:
  - your LDM queue is too small to hold the data you are receiving for
    long enough for it to be processed
  - your machine becomes bogged down during processing activities.
  You can check whether this is the case by looking for warning messages
  in your LDM log file, ~ldm/etc/ldmd.log.
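For example, a quick scan like the following will pull out the lines worth
looking at. This is a minimal sketch of mine, not an LDM utility; the
severity keywords and the default log path are assumptions you may need to
adjust for your installation.

#!/usr/bin/env python3
"""Print LDM log lines that carry a warning/error severity token."""
import os
import sys

# Candidate severity tokens; the exact wording of LDM log messages varies
# between versions, so treat these as adjustable.
KEYWORDS = ("WARN", "ERROR")


def scan(path):
    with open(path, errors="replace") as log:
        for line in log:
            if any(key in line for key in KEYWORDS):
                sys.stdout.write(line)


if __name__ == "__main__":
    # log path taken from the note above; adjust to where your LDM writes logs
    scan(sys.argv[1] if len(sys.argv) > 1 else
         os.path.expanduser("~ldm/etc/ldmd.log"))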
> Thanks for your help again -
No worries.
Cheers,
Tom
--
****************************************************************************
Unidata User Support UCAR Unidata Program
(303) 497-8642 P.O. Box 3000
address@hidden Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage http://www.unidata.ucar.edu
****************************************************************************
Ticket Details
===================
Ticket ID: NEV-407710
Department: Support CONDUIT
Priority: High
Status: Closed