This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
>From: Mike Voss <address@hidden>
>Organization: SJSU
>Keywords: 200402010010.i110ACp1023488 IDD test

Mike,

>Thanks for the feedback. Late in the day yesterday I realized I did not
>have the CONDUIT feed on rossby split out into different requests,
>i.e:
>
>request CONDUIT "[05]$" thelma.ucar.edu
>request CONDUIT "[16]$" thelma.ucar.edu
>request CONDUIT "[27]$" thelma.ucar.edu
>request CONDUIT "[38]$" thelma.ucar.edu
>request CONDUIT "[49]$" thelma.ucar.edu

When I wrote the reply this morning, I didn't realize that you had added
the CONDUIT feed to rossby (i.e., I hadn't looked at the rtstats pages).

>I found out how important that is.

Yes, splitting the full CONDUIT feed is important when the sending LDM is
not electronically "close" to the receiver. Combining all feeds with a:

  request ANY ".*" upstream.feed.host

really can only be done when machines are on the same LAN.

>After I did split the feeds, rossby
>was able to catch up quickly as can be seen in the dramatic drop in
>latency yesterday afternoon (~22Z).

The drop in latency from 10000 seconds down to 30 seconds is dramatic
indeed!

>So, I'm a little hesitant to say this, but it appears our tests are
>complete. I can get the full feed into my office here with limited
>latency, in fact, I have two full feeds coming in now, rossby and
>methost24.

Two feeds of all data in the IDD into a domain with little to no latency
is good proof that there is no packet shaping going on.

>Once again, thanks so much for your help. By having multiple machine
>names allowed on emo, I was able to move my machine around campus and
>really prove to the network authorities that we have issues. It turns
>out that our Alcatel switches are really the problem, and not any
>traffic shaping.

Ah Ha!

>And even though I'm finally getting the full feed now,
>we are getting all new Cisco switches and a 10 Gig backbone in the next
>few months. Hopefully it's clear sailing from here out.

It really should be clear sailing with a 10 Gb backbone. Congrats!

Cheers,

Tom
--
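
For reference, a minimal downstream ldmd.conf sketch of the split-request
approach discussed above, using the same upstream host (thelma.ucar.edu)
as in the exchange. Each pattern is an extended regular expression matched
against the CONDUIT product identifier; anchoring on the final digit of
the trailing sequence number divides the stream into five roughly equal
parts, each carried on its own connection:

  # Split the CONDUIT request across five connections, partitioned by
  # the last digit at the end of each product identifier:
  request CONDUIT "[05]$" thelma.ucar.edu
  request CONDUIT "[16]$" thelma.ucar.edu
  request CONDUIT "[27]$" thelma.ucar.edu
  request CONDUIT "[38]$" thelma.ucar.edu
  request CONDUIT "[49]$" thelma.ucar.edu

After restarting the LDM, one way to confirm that a given pattern matches
products available from the upstream host is the notifyme utility, for
example:

  notifyme -v -l- -h thelma.ucar.edu -f CONDUIT -p "[05]$"

and the resulting latencies can be monitored on the rtstats pages
mentioned above.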