This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Robert,

You would restrict the data you request in your ldmd.conf file, so that
instead of a request line specifying CONDUIT ".*", you would create a
pattern matching only the data you are interested in. For example, if you
wanted the ETA grid 212, the AVN/GFS through 84 hours, and all of the RUC,
you would use a pattern such as:

"MT.eta.*#212|MT.avn.*fh.00([0-7].|8[0-4])|MT.ruc"

Steve Chiswell
Unidata User Support

On Sun, 2004-08-08 at 18:50, Robert Dewey wrote:
> Hello,
>
> I figured out part of the problem, and it was related to connection
> problems. I have 2 network cards on the same machine, and both were
> using the same MAC address... This caused quite a bit of packet loss,
> with connection speeds dropping as low as 56K!
>
> I am still having CONDUIT latency greater than 6000 seconds (I believe
> data is stored up to 3600 seconds, so anything past that would be long
> gone), but I seem to have all of the data without any losses. At the
> moment, I am only requesting a few model grids (my hourly average is
> ~328MB for CONDUIT). I believe it is the AVN/GFS that may be causing
> the latency, though, since these files get as large as 400-500MB... Is
> there something I could add to my pqact.conf to only allow data out to
> 84 hours? I am kind of new to the LDM, and not exactly sure how/if this
> could be done...
>
> Thanks!
> Robert Dewey
>
> On Sun, 2004-08-08 at 17:36, Steve Chiswell wrote:
> > The most likely problem is that you don't have enough bandwidth to
> > receive all of CONDUIT (which can be as much as 1.8 GB an hour).
> >
> > Looking at the reception stats for what I believe is your LDM:
> >
> > http://my.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+d14-69-10-190.try.wideopenwest.com
> >
> > you will see frequent excursions, even on the weekend, where your
> > latencies are so large that your upstream likely no longer has the data
> > by the time your LDM could receive it. You have values as large as
> > 10,000 seconds, and even today will see peaks of around 4,000 seconds
> > of latency during the main model times.
> >
> > The best bet is to reduce the amount of data you are requesting to fit
> > within your limitations.
> >
> > Steve Chiswell
> > Unidata User Support
> >
> >
> > On Fri, 6 Aug 2004, Robert Dewey wrote:
> >
> > > Hello,
> > >
> > > I am having some trouble with the LDM, particularly the CONDUIT feed. I
> > > am always missing random grids from a gridded set (ETA/GFS/etc.). Here
> > > is an example from the 06Z ETA this morning; I used GARP to loop
> > > hours 00 through 84:
> > >
> > > GEMPAK: [DG -7] Input grid EMSL ^040806/0600F063 @0 %NONE cannot be found.
> > > GEMPAK: [DG -7] Input grid EMSL ^040806/0600F069 @0 %NONE cannot be found.
> > > GEMPAK: [DG -7] Input grid EMSL ^040806/0600F084 @0 %NONE cannot be found.
> > > GEMPAK: [DG -7] Input grid EMSL ^040806/0600F063 @0 %NONE cannot be found.
> > > GEMPAK: [DG -7] Input grid EMSL ^040806/0600F069 @0 %NONE cannot be found.
> > > GEMPAK: [DG -7] Input grid EMSL ^040806/0600F084 @0 %NONE cannot be found.
> > > GEMPAK: [DG -7] Input grid WND ^040806/0600F057 @500 %PRES cannot be found.
> > > GEMPAK: [DG -7] Input grid WND ^040806/0600F060 @500 %PRES cannot be found.
> > > GEMPAK: [DG -7] Input grid WND ^040806/0600F063 @500 %PRES cannot be found.
> > > GEMPAK: [DG -7] Input grid WND ^040806/0600F069 @500 %PRES cannot be found.
> > > GEMPAK: [DG -7] Input grid WND ^040806/0600F084 @500 %PRES cannot be found.
> > >
> > > As you can see, I am missing random grids. What could the problem be? I
> > > have set my product queue in the ldmadmin script to 2GB (thinking that
> > > maybe the data is being lost), with my physical memory being 1GB (I have
> > > an additional 2GB of swap, but this never gets used; the entire system
> > > usually never uses more than 900MB). I do not have any "accept" lines in
> > > my ldmd.conf file, but I don't think that is the problem. I checked my
> > > ldmd.log files, and found these mixed in (quite frequently):
> > > _________________________________________________________________________
> > > Aug 06 22:03:38 LDM weather2[2648]: Connected to upstream LDM-6
> > > Aug 06 22:03:38 LDM weather2[2648]: Upstream LDM is willing to feed
> > > Aug 06 22:11:47 LDM bigbird[2725]: ERROR: requester6.c:206: Connection
> > >   to upstream LDM closed
> > > Aug 06 22:12:02 LDM weather2[2649]: ERROR: requester6.c:206: Connection
> > >   to upstream LDM closed
> > > Aug 06 22:12:17 LDM bigbird[2725]: Desired product class:
> > >   20040806221107.220 TS_ENDT {{CRAFT, ".(KDTX|KGRR|KIWX)"}}
> > >
> > > Aug 06 22:47:39 LDM weather2[2646]: Connected to upstream LDM-6
> > > Aug 06 22:47:39 LDM weather2[2646]: Upstream LDM is willing to feed
> > > Aug 06 22:48:46 LDM bigbird[2650]: ERROR: requester6.c:206: Connection
> > >   to upstream LDM closed
> > > Aug 06 22:49:16 LDM bigbird[2650]: Desired product class:
> > >   20040806213319.386 TS_ENDT {{CONDUIT, ".*"}}
> > > _________________________________________________________________________
> > >
> > > I checked the RUC2 data from the CONDUIT feed, and all of the grids
> > > seem to be there, so I am not sure what exactly the problem is, or what
> > > these error messages mean...
> > >
> > > Here are the LDM system stats:
> > >
> > > O/S:        Fedora Core 2
> > > Processor:  1.8GHz AMD Duron
> > > System RAM: 1GB DDR-2100 RAM (2GB swap)
> > > Hard Drive: 80GB HDD (7200RPM)
> > > Connection: 3-4Mbps connection
> > >
> > > Thanks!
> > > Robert Dewey
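
[Editor's note] For readers finding this thread later: the pattern Steve
suggests belongs on the CONDUIT request line in ldmd.conf, not in
pqact.conf. A minimal sketch of such a request line is shown below; the
upstream hostname is a placeholder, so substitute the host your site
actually feeds from:

    # Request only ETA grid #212, AVN/GFS through forecast hour 84, and
    # all RUC from the CONDUIT feed (replace your.upstream.host.edu with
    # your actual upstream LDM).
    REQUEST CONDUIT "MT.eta.*#212|MT.avn.*fh.00([0-7].|8[0-4])|MT.ruc" your.upstream.host.edu

After editing ldmd.conf, the LDM must be restarted (for example, with
"ldmadmin restart") for the new request to take effect. Before committing
to a pattern, one way to see which product headers it would match is the
LDM's notifyme utility, along the lines of:

    notifyme -v -l- -h your.upstream.host.edu -f CONDUIT -p "MT.eta.*#212|MT.avn.*fh.00([0-7].|8[0-4])|MT.ruc"

which reports matching products from the upstream host without actually
requesting the data.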