- Subject: [Support #VXV-940104]: Re: [conduit] missing and incomplete GFS grids on CONDUIT feed
- Date: Thu, 27 Jun 2019 18:32:22 -0600
Hi,
re:
> One more question. We are writing out the FT16 feed on freshair2, and
> everyone feeds from it.
"writing out"... do you mean processing all of the data via an
LDM pattern-action file action? That processing should not
affect the receipt latencies for products.
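Just so we are picturing the same thing: by a pattern-action file action
I mean a pqact.conf entry along the lines of the sketch below. The
feedtype, regular expression, and output path are made-up placeholders,
not what freshair2 is necessarily running:

  # Hypothetical pqact.conf entry that FILEs each matching product to
  # disk.  The feedtype, pattern, and path are placeholders; the fields
  # must be separated by tab characters, and the continuation line must
  # begin with whitespace.
  EXP	^(some/product-id/pattern/.*)
  	FILE	-close	/data/ldm/ft16/\1

An entry like this just copies products out of the LDM queue as they
arrive, which is why I would not expect it, on its own, to change what
downstream REQUESTs see.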
re:
> Currently freshair1 is not writing the L1B files,
> and is not feeding anybody. Could this account for some of the difference?
REQUESTing the high-volume SATELLITE feed could very well affect
the latencies of other feeds if bandwidth is limited.
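If bandwidth is the limiting factor, one mitigation that is often used
for CONDUIT (and that can be adapted to other feeds when the product IDs
have a usable pattern) is to split a single high-volume REQUEST into
several REQUESTs with disjoint patterns, so the products move over
parallel connections. A sketch, with a placeholder upstream hostname:

  # ldmd.conf: five-way split of a CONDUIT REQUEST on the trailing
  # sequence-number digit (replace the hostname with your upstream).
  REQUEST CONDUIT "[09]$" idd.upstream.example.edu
  REQUEST CONDUIT "[18]$" idd.upstream.example.edu
  REQUEST CONDUIT "[27]$" idd.upstream.example.edu
  REQUEST CONDUIT "[36]$" idd.upstream.example.edu
  REQUEST CONDUIT "[45]$" idd.upstream.example.edu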
For reference, here are the relative volumes of the feeds being
relayed by our clusters:
Data Volume Summary for uni20.unidata.ucar.edu

Maximum hourly volume      101106.477 M bytes/hour
Average hourly volume       65979.921 M bytes/hour
Average products per hour       496482 prods/hour

Feed           Average                 Maximum      Products
         (M byte/hour)           (M byte/hour)   number/hour
SATELLITE    15302.750  [ 23.193%]   19874.859      6511.231
CONDUIT      11307.617  [ 17.138%]   33227.046    103887.962
NGRID         9989.784  [ 15.141%]   14642.619     61919.538
NEXRAD2       8466.684  [ 12.832%]   10234.515     94048.731
NOTHER        7276.466  [ 11.028%]   10162.294     11712.038
NIMAGE        6724.477  [ 10.192%]    9312.298      5726.038
NEXRAD3       2622.432  [  3.975%]    3134.633    118478.385
FNMOC         2582.920  [  3.915%]    8671.995      6087.692
HDS           1179.896  [  1.788%]    1664.591     41480.846
GEM            223.194  [  0.338%]    1519.538      1283.731
FNEXRAD        112.568  [  0.171%]     130.607       100.731
UNIWISC         99.489  [  0.151%]     141.982        49.385
IDS|DDPLUS      69.176  [  0.105%]      81.052     44723.692
EXP             12.171  [  0.018%]      18.515       127.346
LIGHTNING       10.197  [  0.015%]      16.669       344.077
GPS              0.100  [  0.000%]       1.092         1.000
As you can see, SATELLITE is currently the most voluminous feed
on average.
re:
> I can turn off the FT16 processing for now, as we are only experimenting
> with the files currently.
I don't think that the processing will have any effect on the receipt
latencies of other feeds unless writing the products out is causing
a large I/O problem on the (virtual) machine. The comments directed
to David O in your email suggest that this is not the case, however.
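If you want to look at receipt latencies independently of any local
processing, a notifyme(1) run against your upstream will report the
products it would send without actually transferring the data. The
hostname below is a placeholder:

  # List the CONDUIT products the upstream has inserted over the last
  # hour (-o 3600) without requesting the data itself; comparing the
  # reported times against the products' creation times gives a feel
  # for the latency.
  notifyme -vl- -f CONDUIT -o 3600 -h idd.upstream.example.edu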
By the way, and for what it is worth: one user was experiencing
very weird problems with very high system load and (we think)
related latencies. We eventually figured out that the SSDs
backing the file system that his LDM actions were writing to
were "worn out", so writes were taking longer and longer. All
of his problems went away when he switched to writing to spinning
disk (as a test) and then to new SSDs. I just thought I'd throw
that in, in case it rings any alarm bells there.
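If you want to rule that sort of thing out on the machine doing the
writing, something along these lines will show whether writes are
unusually slow and what the drives report about their own wear (the
device name is a placeholder):

  # Extended per-device I/O statistics every 5 seconds; persistently
  # large write-wait times point to an I/O bottleneck.
  iostat -dx 5

  # SMART data for the drive; the wear-related attribute name varies
  # by vendor (e.g. Media_Wearout_Indicator, Wear_Leveling_Count).
  smartctl -a /dev/sda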
Cheers,
Tom
--
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                             Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage http://www.unidata.ucar.edu
****************************************************************************
Ticket Details
===================
Ticket ID: VXV-940104
Department: Support CONDUIT
Priority: Normal
Status: Closed
===================
NOTE: All email exchanges with Unidata User Support are recorded in the Unidata
inquiry tracking system and then made publicly available through the web. If
you do not want to have your interactions made available in this way, you must
let us know in each email you send to us.