- Subject: [IDD #UQG-528842]: Puzzling IDD latencies from idd.unidata to metfs1.agron.iastate.edu
- Date: Fri, 14 Jun 2019 09:08:27 -0600
Hi again Daryl,
Another comment:
I took a look at the real-time stats being reported by
both metfs1.agron.iastate.edu and metfs2.agron.iastate.edu,
and it seems that the high latencies are being reported for
the highest-volume feeds being REQUESTed: CONDUIT and NGRID.
Here is a listing of the volumes being reported by metfs1:
Data Volume Summary for metfs1.agron.iastate.edu

Maximum hourly volume      32333.782 M bytes/hour
Average hourly volume      17422.871 M bytes/hour
Average products per hour     306925 prods/hour

Feed                 Average              Maximum        Products
                     (M byte/hour)        (M byte/hour)  number/hour
NGRID                9261.417 [ 53.157%]     14084.641     58195.975
CONDUIT              3407.846 [ 19.560%]     12784.612     49240.600
NEXRAD3              2440.436 [ 14.007%]      2914.372    112083.575
HDS                  1204.240 [  6.912%]      1639.485     39299.700
EXP                   525.464 [  3.016%]      1457.611      4047.350
FSL2                  200.574 [  1.151%]       650.261        48.800
FNEXRAD               117.786 [  0.676%]       139.598       100.650
NIMAGE                 84.579 [  0.485%]       144.219       109.050
UNIWISC                68.057 [  0.391%]       136.957        43.575
IDS|DDPLUS             65.436 [  0.376%]        79.650     43682.025
NEXRAD2                46.742 [  0.268%]       114.076        17.250
LIGHTNING               0.295 [  0.002%]         0.682        56.000
Seeing high latencies on the high-volume feeds but low latencies on the
low-volume feeds is the "classic" signature of bandwidth limiting
imposed on a per-connection basis.
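To put rough numbers on it (the cap value below is hypothetical, just
for illustration): NGRID peaks at about 14084 M bytes/hour, which works
out to roughly 14084 * 8 / 3600 ~= 31 Mbit/s sustained. If a shaper were
capping each TCP connection at, say, 20 Mbit/s, a single NGRID
connection could not keep up during peak hours and its latency would
climb steadily, while a feed like IDS|DDPLUS (under 80 M bytes/hour,
i.e., well under 1 Mbit/s) would never notice the cap.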
The only ways we have found to work around this kind of imposed
bandwidth limiting are:
- work with the campus network group to see if the limiting has been
imposed locally by mistake and can be altered
- split the feed REQUEST(s) into multiple, disjoint REQUESTs
  This is easily done for CONDUIT, since each product ID ends with a
  monotonically increasing sequence number, but it is not so easily
  done for NGRID (see the sketch just after this list).
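As a sketch only (the upstream hostname here is taken from the subject
line of this ticket, and the number of ways is arbitrary; adjust both
to your setup), a five-way CONDUIT split in ldmd.conf keys off the last
digit of the sequence number that ends each CONDUIT product ID:

  REQUEST CONDUIT "[05]$" idd.unidata.ucar.edu
  REQUEST CONDUIT "[16]$" idd.unidata.ucar.edu
  REQUEST CONDUIT "[27]$" idd.unidata.ucar.edu
  REQUEST CONDUIT "[38]$" idd.unidata.ucar.edu
  REQUEST CONDUIT "[49]$" idd.unidata.ucar.edu

Each REQUEST line opens its own connection, and the patterns are
disjoint, so no product is requested twice. If the limiting really is
per connection, aggregate throughput should scale roughly with the
number of ways the feed is split.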
The one thing you can do immediately is isolate the REQUEST(s) for
the highest-volume feeds (i.e., don't REQUEST any other feed in the
same REQUEST). The next step would be to start splitting those feed
REQUEST(s) and then run for a while to see the effect on the reported
latencies.
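For example (the hostname is again assumed to be idd.unidata.ucar.edu;
substitute your actual upstream), isolating the two largest feeds while
leaving the lower-volume feeds on a shared connection would look
something like:

  # each high-volume feed gets its own connection
  REQUEST NGRID   ".*" idd.unidata.ucar.edu
  REQUEST CONDUIT ".*" idd.unidata.ucar.edu
  # lower-volume feeds can continue to share one REQUEST/connection
  REQUEST HDS|IDS|DDPLUS ".*" idd.unidata.ucar.edu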
I would guess that this latency issue is fairly new. If this is the
case, I would suspect that the network folks at IASTATE have imposed
some bandwidth controls either intentionally or unintentionally. We
have seen several instances where a campus started using software from
Palo Alto Networks and unintentionally began limiting legitimate
network traffic.
Cheers,
Tom
--
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                             Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************
Ticket Details
===================
Ticket ID: UQG-528842
Department: Support IDD
Priority: Normal
Status: Closed
===================
NOTE: All email exchanges with Unidata User Support are recorded in the Unidata
inquiry tracking system and then made publicly available through the web. If
you do not want to have your interactions made available in this way, you must
let us know in each email you send to us.