This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Hi Luis,

I apologize for taking so long to get back to you...

re:
> First of all: I'm sorry for, every year, sending you an email about the
> issue on the subject. It's frustrating to face the cyclical nature of
> this problem, to keep asking the communications guys (one of them on
> copy) for responsibilities and get a "sorry, it's not our fault/there's
> nothing we can do about it" over and over again.

Do you know where the bottleneck is?

re:
> As you can see, our host atm77-fis.clients.ua.pt is having (or keeps
> having...) very high latencies for the 06, 12 and 18Z cycles of the
> CONDUIT stream.

These correspond to the volume peaks for the global GFS data. If there is some sort of volume limiting on the connection (e.g., packet shaping), it would show up during these data volume peaks.

re:
> Lately, the machine was migrated to a new network (a "server" network),
> and my fingers were crossed, hoping that this migration could raise
> latencies.

I trust that you mean lower latencies... We define latency as the difference between the time a product is first made available in the IDD and the time it is received at the end-user site.

re:
> Not at all... the song remains the same.
> For instance, at the time I'm writing this email to you (11:56
> local time) I get, from ldmwatch:
>
> Apr 01 10:56:25 pqutil INFO: 244450 20110401102330.955 CONDUIT 011
> data/nccf/com/gfs/prod/gfs.2011040106/gfs.t06z.pgrb2f153
> !grib2/ncep/GFS/#000/201104010600F153/HGHT/30 hPa PRES! 000011

And no more? This is only one product, but the apparent latency for this product is on the order of 30 minutes. This is, of course, not good.

re:
> Since this high latency is compromising my daily operational activities,
> I would deeply appreciate it if you could suggest something that I might
> ask the communications guys about to tackle this issue. I'm really tired
> of not being able to fully understand the nature of this problem and,
> moreover, its solution.
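To illustrate where the "on the order of 30 minutes" figure comes from, here is a short Python sketch that parses the two timestamps in the ldmwatch/pqutil line above: the first field is the receipt time at the downstream host (syslog style, so the year 2011 is assumed from context) and the later field is the product's IDD insertion time in `YYYYMMDDhhmmss.sss` form. This is only an illustration of the arithmetic, not part of the LDM tools themselves.

```python
from datetime import datetime

# The log line quoted above (identifier portion trimmed for brevity).
log = ("Apr 01 10:56:25 pqutil INFO: 244450 20110401102330.955 CONDUIT 011 "
       "data/nccf/com/gfs/prod/gfs.2011040106/gfs.t06z.pgrb2f153")

fields = log.split()

# Receipt time at the downstream host; syslog timestamps omit the year,
# so we prepend 2011 (known from the product's date in this example).
received = datetime.strptime("2011 " + " ".join(fields[:3]),
                             "%Y %b %d %H:%M:%S")

# Time the product entered the IDD; drop fractional seconds for simplicity.
created = datetime.strptime(fields[6].split(".")[0], "%Y%m%d%H%M%S")

# Latency = receipt time minus insertion time.
latency = received - created
print(latency)  # 0:32:55, i.e. roughly half an hour
```

Running this on the quoted line yields a latency of about 33 minutes, matching the estimate above.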
First, a couple of questions:

- Can you send us the ~ldm/etc/ldmd.conf REQUEST line(s) you are using to
  request CONDUIT data? In particular, I would like to know which IDD
  upstream machine(s) you are requesting CONDUIT data from, and whether you
  are splitting your feed REQUEST into multiple, mutually exclusive subsets.

- Can you tell us exactly which portion of the CONDUIT data you want/need
  for your activities?

re:
> Mind to give me a hand on this?

Not at all. Responses may be somewhat slow this week, as I am heading to Vienna, Austria for the EGU 2011 General Assembly. I will check email as much as possible, but there will be times when I am occupied with the conference.

re:
> Thank you very much in advance.

No worries. We will do all we can on this end to get you the data in a timely manner.

Cheers,

Tom
--
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                          Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                          http://www.unidata.ucar.edu
****************************************************************************

Ticket Details
===================
Ticket ID: AZJ-636742
Department: Support IDD
Priority: Normal
Status: Closed
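For reference, a split of the kind asked about above divides the CONDUIT feed into mutually exclusive subsets by pattern-matching the sequence number at the end of each product identifier, so each REQUEST carries roughly a fifth of the volume over its own connection. The upstream hostname below is a placeholder, not a real server; a minimal ldmd.conf sketch of a five-way split might look like:

```
# ~ldm/etc/ldmd.conf -- five-way split of the CONDUIT feed.
# Each pattern matches products whose trailing sequence number ends in one
# of the listed digits, so the five REQUESTs are mutually exclusive and
# together cover the entire feed.
# "idd.upstream.example.edu" is hypothetical; use your actual upstream host.
REQUEST CONDUIT "[09]$" idd.upstream.example.edu
REQUEST CONDUIT "[18]$" idd.upstream.example.edu
REQUEST CONDUIT "[27]$" idd.upstream.example.edu
REQUEST CONDUIT "[36]$" idd.upstream.example.edu
REQUEST CONDUIT "[45]$" idd.upstream.example.edu
```

Splitting like this helps most when a single TCP connection is the bottleneck during the 06/12/18Z GFS volume peaks, since the subsets are transferred in parallel.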