More latency noted overnight, courtesy of Kyle Griffin @ UWisc-Madison:
-----------------------------------------------------------------------------------
http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+idd.aos.wisc.edu
http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+flood.atmos.uiuc.edu
http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+idd.meteo.psu.edu
The latency shows up clearly downstream as well. This smells like an NCEP problem, since UWisc and UIUC have the same (~1800 second) latency while others (PSU, Albany) are above 2000 seconds.
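For a quick side-by-side look at those rtstats pages, a short script can pull each one; this is only a rough sketch, and it assumes the CGI endpoints above return a text page that can be eyeballed (the exact output format isn't shown in this thread).

    # Rough sketch: pull the rtstats CONDUIT latency pages listed above so the
    # sites can be compared side by side. Assumes the CGI returns plain text;
    # adjust if your rtstats server responds differently.
    from urllib.request import urlopen

    HOSTS = [
        "idd.aos.wisc.edu",        # UWisc-Madison
        "flood.atmos.uiuc.edu",    # UIUC
        "idd.meteo.psu.edu",       # PSU
    ]
    BASE = "http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+"

    for host in HOSTS:
        url = BASE + host
        print(f"=== {host} ===")
        try:
            with urlopen(url, timeout=30) as resp:
                body = resp.read().decode("utf-8", errors="replace")
            # Show only the tail so the most recent latency samples are visible.
            print("\n".join(body.splitlines()[-10:]))
        except OSError as exc:
            print(f"could not fetch {url}: {exc}")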
Comparing Albany and UWisc, the same GFS files come up short (some are more than 30% missing, one as much as 70%), but the file sizes differ between the two sites, implying the downstream servers were each getting slightly different subsets of data from their upstream options.
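One way to quantify how short a given GFS file is, and to compare two sites' copies of the same file, is to count the GRIB2 messages it contains. The sketch below walks the GRIB2 indicator sections using only the standard library; the file paths are whatever you pass on the command line.

    # Rough sketch: count GRIB2 messages in a file by walking the indicator
    # sections (each message starts with "GRIB" and carries its total length
    # in octets 9-16). Useful for comparing a "short" file against a full one.
    import sys

    def count_grib2_messages(path):
        count = 0
        with open(path, "rb") as f:
            while True:
                header = f.read(16)
                if len(header) < 16:
                    break
                if header[0:4] != b"GRIB":
                    # Not aligned on a message boundary; stop rather than guess.
                    print(f"warning: unexpected bytes after message {count}", file=sys.stderr)
                    break
                total_len = int.from_bytes(header[8:16], "big")
                f.seek(total_len - 16, 1)  # skip the remainder of this message
                count += 1
        return count

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            print(path, count_grib2_messages(path))

Running it over the same f096 file pulled at two different sites would show whether the missing pieces line up.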
Just wanted to send this out in case either of you has a couple of minutes this busy Monday morning to check it out... this might be turning into an annoying problem to chase...
Kyle
----------------------------------------
Kyle S. Griffin
Department of Atmospheric and Oceanic Sciences
University of Wisconsin - Madison
Room 1407
1225 W Dayton St, Madison, WI 53706
Email: address@hidden
_____________________________________________
Kevin Tyle, Systems Administrator
Dept. of Atmospheric & Environmental Sciences
University at Albany
Earth Science 235, 1400 Washington Avenue
Albany, NY 12222
Email: address@hidden
Phone: 518-442-4578
_____________________________________________

From: address@hidden <address@hidden> on behalf of Michael Shedlock <address@hidden>
Sent: Friday, November 13, 2015 2:53 PM
To: Mike Dross; Arthur A Person
Cc: Bentley, Alicia M; _NCEP.List.pmb-dataflow; Michael Schmidt; address@hidden; Daes Support
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] How's your GFS?

All,
NCEP is indeed on Internet2, which I presume would apply here. A couple of noteworthy things...

I see some latency, but not for everyone, and it doesn't seem to matter which conduit machine a client is connected to. For example, with today's and yesterday's gfs.t12z.pgrb2.0p25.f096 (hour 96) file, here are the latencies per client that I see:

11/12

Another correlation is that UIUC and PSU (the ones with latency) are only using one thread to connect to our conduit, whereas Wisc. and Unidata use multiple threads. At the moment this sort of has the appearance of a bottleneck outside of NCEP.

It might also be useful to see traceroutes from UIUC and PSU to NCEP's CONDUIT. I know I saw some traceroutes below. Can you try that and share with us?

Mike Shedlock
NCEP Central Operations
Dataflow Team
301.683.3834
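For the traceroutes requested above, something as simple as the sketch below (or just running traceroute by hand and pasting the output) would do; the host name here is a placeholder for whichever NCEP CONDUIT address each site actually requests from.

    # Rough sketch: run traceroute toward the upstream CONDUIT host and capture
    # the output so it can be pasted into a reply. The host below is a
    # placeholder -- substitute the NCEP CONDUIT address your site requests from.
    import subprocess

    UPSTREAM = "your-ncep-conduit-host.example"  # placeholder, not a real hostname

    result = subprocess.run(
        ["traceroute", UPSTREAM],
        capture_output=True,
        text=True,
        timeout=300,
    )
    print(result.stdout)
    if result.returncode != 0:
        print(result.stderr)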
On 11/13/2015 11:42 AM, Mike Dross wrote:
_______________________________________________
conduit mailing list
address@hidden
For list information or to unsubscribe, visit:
http://www.unidata.ucar.edu/mailing_lists/