- Subject: [LDM #OGU-667706]: vetQueueSize()
- Date: Fri, 30 Oct 2015 09:05:02 -0600
Hi Clint,
I see that your machine, squall.unl.edu, is reporting real-time
stats back to us. Here is a snapshot:
Data Volume Summary for squall.unl.edu
Maximum hourly volume 49730.952 M bytes/hour
Average hourly volume 25036.984 M bytes/hour
Average products per hour 314983 prods/hour
Feed          Average       [% of total]   Maximum        Products
              (M byte/hour)                (M byte/hour)  (number/hour)
NEXRAD2        8657.391     [ 34.578%]     33668.628      91814.040
NGRID          3952.504     [ 15.787%]      9861.327      27368.600
NOTHER         3108.861     [ 12.417%]      6053.396       5999.520
CONDUIT        2657.245     [ 10.613%]     20844.490      25412.000
EXP            2516.261     [ 10.050%]      3901.883       4801.520
NEXRAD3        1133.638     [  4.528%]      1757.540      56164.560
NIMAGE          825.738     [  3.298%]      1636.710      35969.560
FNMOC           786.975     [  3.143%]      3337.827       2094.480
UNIWISC         768.335     [  3.069%]      2644.977        500.360
HDS             302.088     [  1.207%]       650.656      28166.200
GEM             123.202     [  0.492%]       494.808       1307.000
IDS|DDPLUS       92.898     [  0.371%]       221.400      34840.080
FNEXRAD          72.188     [  0.288%]       141.079        113.520
GPS              23.117     [  0.092%]       176.845        234.920
LIGHTNING        16.542     [  0.066%]       164.965        196.800
Your numbers seem high given what is flowing into the real server
backend nodes of our IDD top-level relay cluster, idd.unidata.ucar.edu:
Data Volume Summary for uni15.unidata.ucar.edu
Maximum hourly volume 38711.646 M bytes/hour
Average hourly volume 20927.039 M bytes/hour
Average products per hour 336448 prods/hour
Feed          Average       [% of total]   Maximum        Products
              (M byte/hour)                (M byte/hour)  (number/hour)
NEXRAD2        7520.412     [ 35.936%]     26939.991      84155.575
NGRID          4131.920     [ 19.744%]      8892.729      28362.775
NOTHER         3050.060     [ 14.575%]      6118.982       6082.275
CONDUIT        2223.008     [ 10.623%]     16631.640      23669.475
NEXRAD3        1464.974     [  7.000%]      2505.249      77746.700
FNMOC           997.410     [  4.766%]      3337.827       2622.125
GEM             513.476     [  2.454%]      2099.202      35113.125
NIMAGE          472.785     [  2.259%]      1395.747      17067.125
HDS             301.681     [  1.442%]       689.683      22990.375
IDS|DDPLUS       76.998     [  0.368%]       193.651      37556.325
FNEXRAD          69.892     [  0.334%]       135.656         77.350
EXP              46.391     [  0.222%]        88.477        385.425
UNIWISC          32.532     [  0.155%]        82.668         37.475
GPS              24.801     [  0.119%]       278.129        251.775
LIGHTNING         0.698     [  0.003%]         1.695        329.950
Your peak of ~50 GB/hr is about 11 GB/hr larger than what is being
relayed by idd.unidata.ucar.edu. This does indicate that your LDM
queue size is too small -- the extra volume reported is most likely
caused by receipt of "second trip" products (ones that were received,
deleted from the queue, and then received again).
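
To make that arithmetic concrete, here is a quick back-of-the-envelope
check in Python, using only the peak figures from the two summaries
above (no LDM internals involved):

    # Compare the peak hourly volumes from the two rtstats summaries
    # above (values in M bytes/hour, as reported).
    SQUALL_PEAK_MB = 49730.952   # squall.unl.edu
    UNI15_PEAK_MB  = 38711.646   # uni15.unidata.ucar.edu (upstream relay)

    excess_gb = (SQUALL_PEAK_MB - UNI15_PEAK_MB) / 1000.0
    print(f"Excess over upstream peak: {excess_gb:.1f} GB/hr")
    # -> about 11 GB/hr of volume the upstream relay never sent in
    #    that hour, consistent with "second trip" duplicates coming
    #    from a too-small queue.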
But a suggested queue size of 83 GB is _way_ too large; a queue size
of 40 GB would be more like it.
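
For context, here is a minimal sketch of the sizing rule of thumb
behind that number, assuming the goal is simply to keep products
resident in the queue for about an hour at the observed peak rate
(an illustrative heuristic, not the actual vetQueueSize() algorithm):

    # Illustrative heuristic only: size the queue to hold roughly one
    # hour of peak inbound volume, so products are not deleted and
    # then received a second time. Not the real vetQueueSize() logic.
    def suggested_queue_gb(peak_mbytes_per_hour: float,
                           residence_hours: float = 1.0) -> float:
        """Queue size (GB) holding `residence_hours` of data at the
        observed peak rate."""
        return peak_mbytes_per_hour * residence_hours / 1000.0

    # Using the upstream relay's peak (which excludes the duplicated
    # "second trip" volume) gives ~39 GB, in line with the ~40 GB
    # suggested above:
    print(f"{suggested_queue_gb(38711.646):.0f} GB")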
I am sure that Steve will chime in with other observations...
Cheers,
Tom
--
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                            Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                                http://www.unidata.ucar.edu
****************************************************************************
Ticket Details
===================
Ticket ID: OGU-667706
Department: Support LDM
Priority: Normal
Status: Open