This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
>From: Unidata Support <address@hidden>
>Organization: UCAR/Unidata
>Keywords: 200403252022.i2PKMZrV027835 IDD latency

James, Larry, and Mike,

Previously I wrote:

>The latency seen on Larry's machine is a result of a network outage
>here at UCAR this morning (Larry feeds from thelma.ucar.edu).  Our
>network connection has been restored, but it appears like it might not
>be at our normal bandwidth.  We are investigating.

This turned out _not_ to have been the problem.  thelma.ucar.edu was at
its capacity for feeding from upstream and to downstream sites when the
network outage occurred at UCAR this morning.  When the network was
brought back up, the load on thelma was just enough to keep it from
catching up on its ingest processes.  The latencies being seen at all
sites feeding from thelma were caused not by thelma's inability to feed
data to those sites, but, rather, by thelma's inability to get input
data in a timely manner.  This can be seen from thelma's IDD realtime
latency plots for feeds like IDS|DDPLUS, HDS, and CONDUIT:

IDS|DDPLUS
http://my.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?IDS|DDPLUS+thelma.ucar.edu

HDS
http://my.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?HDS+thelma.ucar.edu

CONDUIT
http://my.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+thelma.ucar.edu

The high latencies for the IDS|DDPLUS and HDS feeds demonstrate the
problem most clearly, since the NOAAPORT ingestors for that data are on
the same UCAR gigabit network.

Our solution was to offload all Unidata feeds from thelma except those
of one machine, emo.unidata.ucar.edu (see the ldmd.conf sketch after
this message).  As soon as we rerouted the other Unidata data requests
from thelma to emo, thelma was able to rapidly catch up on its ingest
duties.  Since thelma was no longer adding any latency to the streams
it was relaying, the latencies on all downstream machines decreased in
lock step with the decrease in ingest latency.

All in all, this was an interesting learning experience for us here at
the UPC!

Cheers,

Tom

--
NOTE: All email exchanges with Unidata User Support are recorded in the
Unidata inquiry tracking system and then made publicly available
through the web.  If you do not want to have your interactions made
available in this way, you must let us know in each email you send to
us.
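A downstream site's feed requests are expressed as REQUEST lines in its
LDM ldmd.conf file, so repointing a feed from thelma to emo amounts to
editing those lines and restarting the LDM.  The excerpt below is a
hypothetical sketch only, not the actual configuration used at any of
the sites mentioned in this message; the feed types and product-ID
pattern are assumed for illustration.

    # Hypothetical ldmd.conf excerpt at a downstream site.
    # Old entries, pointing at thelma (now commented out):
    #REQUEST IDS|DDPLUS  ".*"  thelma.ucar.edu
    #REQUEST HDS         ".*"  thelma.ucar.edu
    # New entries, repointed to emo (put into effect with
    # an "ldmadmin restart"):
    REQUEST  IDS|DDPLUS  ".*"  emo.unidata.ucar.edu
    REQUEST  HDS         ".*"  emo.unidata.ucar.edu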