20050411: nexrad2 data on rasp (cont.)
- Subject: 20050411: nexrad2 data on rasp (cont.)
- Date: Mon, 11 Apr 2005 17:08:36 -0600
>From: Celia Chen <address@hidden>
>Organization: NCAR/RAP
>Keywords: 200504111650.j3BGoJv2002156 IDD latency
Hi Celia,
>Thanks for all your help.
No worries.
>I was looking at the volume of NEXRAD2 data on rasp, and it looks like
>there was a huge amount of data coming through this morning. Is this
>what you called the "storm mode" of the radar operations?
>
>http://my.unidata.ucar.edu/cgi-bin/rtstats/iddstats_vol_nc?NEXRAD2+rasp.rap.ucar.edu
The NEXRADs operate in one of three modes:
- clear air mode
- precipitation mode
- storm mode
The scanning strategies differ among the modes both in frequency (clear
air mode: a volume scan every 10 minutes; precipitation mode: every 6
minutes; storm mode: every 5 minutes) and in the number of beam tilts.
The idea is to gather more information about "interesting" weather: the
stronger the weather system, the more data each volume scan contains.
So, as large systems move across the country, more and more radars
operate in storm mode, and the corresponding volume of data in the
NEXRAD2 feed increases significantly.
As you will notice from the volume plot of NEXRAD2 data received by
rasp (the URL you supplied above), the peak data volume over the past
two days was just over 900 MB/hour, and the minimum was over 400
MB/hour.
Instead of plotting the volume of an individual feed, you can list the
Cumulative Volume Summary (the link at the bottom left of the page for
rasp):
http://my.unidata.ucar.edu/cgi-bin/rtstats/rtstats_summary_volume?rasp.rap.ucar.edu
This gives some very interesting numbers:
Data Volume Summary for rasp.rap.ucar.edu

Maximum hourly volume      5377.889 M bytes/hour
Average hourly volume      2819.041 M bytes/hour
Average products per hour  124533 prods/hour

Feed         Average (M byte/hour)   Maximum (M byte/hour)   Products (number/hour)
CONDUIT      1309.511 [ 46.452%]     3651.614                25389.292
NEXRAD2       699.635 [ 24.818%]      939.768                44708.083
NIMAGE        191.678 [  6.799%]      755.870                  172.958
HDS           184.776 [  6.555%]      686.733                14067.750
FSL2          141.153 [  5.007%]      157.349                  174.812
NNEXRAD       138.875 [  4.926%]      178.977                20110.271
FSL3           73.020 [  2.590%]       83.564                  221.958
FNEXRAD        28.424 [  1.008%]       37.700                  560.458
UNIWISC        19.373 [  0.687%]       32.360                   26.771
IDS|DDPLUS     13.764 [  0.488%]       26.340                18913.354
PCWS            6.736 [  0.239%]       11.601                   23.458
DIFAX           5.218 [  0.185%]       19.343                    6.667
WMO             4.445 [  0.158%]       10.295                  117.812
WSI             2.319 [  0.082%]        3.609                   29.833
NLDN            0.113 [  0.004%]        0.396                    9.542
Notice that rasp received an average of 2.8 GB of data per hour over
the two-day period and that the indicated peak rate exceeded 5 GB per
hour. This is a little odd, given that the machine feeding rasp shows
different and smaller numbers:
Data Volume Summary for uni3.unidata.ucar.edu

Maximum hourly volume      3524.747 M bytes/hour
Average hourly volume      2656.313 M bytes/hour
Average products per hour  125681 prods/hour

Feed         Average (M byte/hour)   Maximum (M byte/hour)   Products (number/hour)
CONDUIT      1222.307 [ 46.015%]     2051.581                24485.125
NEXRAD2       710.722 [ 26.756%]      939.768                45381.979
NIMAGE        197.874 [  7.449%]      755.870                  177.042
HDS           176.372 [  6.640%]      435.189                13543.042
NNEXRAD       140.709 [  5.297%]      178.977                20390.146
NGRID         137.894 [  5.191%]      507.282                 2499.417
FNEXRAD        28.831 [  1.085%]       37.700                  571.625
UNIWISC        19.745 [  0.743%]       32.360                   27.271
IDS|DDPLUS     13.403 [  0.505%]       17.603                18537.479
DIFAX           5.228 [  0.197%]       19.343                    6.708
FSL2            2.211 [  0.083%]        2.323                   11.646
GEM             0.901 [  0.034%]       10.808                   39.583
NLDN            0.117 [  0.004%]        0.396                    9.708
We see that the cause of this discrepancy is that you are requesting
CONDUIT data redundantly: one copy from level and the other from clamp.
level is requesting its CONDUIT data from f5.aos.wisc.edu, and clamp is
requesting its copy from thelma.ucar.edu. Everything would be OK with
this setup _if_:
- the latencies for the different feeds were equivalent
- your LDM queue on rasp were large enough to hold an hour of
data
Unfortunately, neither of these conditions is met:
- the latency of the CONDUIT feed from f5 to level climbed to over 1000
seconds on numerous occasions in the past two days
- the queue on rasp is only large enough to hold about 1000 seconds of
data (a quick way to check this is sketched below)
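A quick way to confirm how long products are lasting in the queue is
the LDM utility 'pqmon', which reports, among other statistics, the age
in seconds of the oldest product in the queue. A minimal check,
assuming the default queue location on rasp:

<as 'ldm' on rasp>
pqmon

If the reported oldest-product age stays well below 3600, the queue is
not holding an hour's worth of data.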
So, what is happening is that you get the CONDUIT data from thelma via
clamp in a timely manner, and that data is eventually scoured out of
the queue to make room for everything else you are ingesting. At some
times of the day, you then start receiving the same CONDUIT data from
f5 via level that you had already gotten from thelma, and your CONDUIT
processing actions end up redoing work they have already done -- not a
good situation.
The solution to this problem is either to stop requesting the CONDUIT
data redundantly, or to increase the size of your LDM queue so that it
can hold an hour's worth of data.
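For illustration only (the host names and patterns below are
hypothetical; your actual ldmd.conf entries on rasp will differ),
dropping the redundant feed amounts to commenting out one of the two
CONDUIT request lines and restarting the LDM:

# hypothetical excerpt from ~ldm/etc/ldmd.conf on rasp
# keep a single CONDUIT request:
request CONDUIT ".*" clamp.rap.ucar.edu
# redundant CONDUIT request, commented out:
#request CONDUIT ".*" level.rap.ucar.edu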
Also, you commented to me that you had turned off CONDUIT processing on
rasp. If you are not processing that data there, you would do well to
turn off its ingestion as well. That would free room in your queue for
the other data and give you a better chance of keeping an hour's worth
of data in the queue.
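If you do decide to grow the queue (instead of, or in addition to,
dropping the redundant request), a minimal sketch of recreating it at a
larger size follows. It assumes you have first raised the configured
queue size in your ldmadmin configuration to something comfortably
above the ~2.8 GB/hour average shown above:

<as 'ldm' on rasp>
ldmadmin stop
ldmadmin delqueue
ldmadmin mkqueue
ldmadmin start

Here 'delqueue' removes the existing (too small) queue and 'mkqueue'
recreates it at the newly configured size.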
If you would like us to review your IDD ingestion setup(s) and product
processing, we would try to set aside some time to do so. I have the
impression that data ingestion and processing at RAP could be made more
efficient AND robust by some tuning.
Cheers,
Tom
>On Mon, Apr 11, 2005 at 02:45:49PM -0600, Unidata Support wrote:
>> >From: Celia Chen <address@hidden>
>>
>> Hi Celia,
>>
>> >Thanks for the info. I did check the latency status of rasp on the
>> >realtime statistics but didn't check the status of thelma.
>>
>> OK. The other thing you could do in a situation like this is use the
>> LDM application 'notifyme' to see if your upstream feed host is
>> getting the data you want and what sorts of latencies it is seeing.
>> For instance:
>>
>> <as 'ldm' on rasp>
>> notifyme -vxl- -f NEXRD2 -h thelma.ucar.edu
>>
>> >Could you set up thelma to feed the NEXRAD2 data to level at RAL in
>> >addition to feeding rasp?
>>
>> You need to set up the request on 'level'. thelma.ucar.edu already has
>> an allow line for this.
>>
>> >Thanks in advance.
>>
>> No worries.
>>
>> Cheers,
>>
>> Tom
>> --
>> ****************************************************************************
>> Unidata User Support UCAR Unidata Program
>> (303)497-8643 P.O. Box 3000
>> address@hidden Boulder, CO 80307
>> ----------------------------------------------------------------------------
>> Unidata WWW Service http://my.unidata.ucar.edu/content/support
>> ----------------------------------------------------------------------------
>
--
NOTE: All email exchanges with Unidata User Support are recorded in the
Unidata inquiry tracking system and then made publicly available
through the web. If you do not want to have your interactions made
available in this way, you must let us know in each email you send to us.