This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
CONDUIT data users,

We are actively working on a solution to increase the throughput of data products and decrease the queueing time of these products from the NWS file server to the LDM queue.

On January 5th, product insertion for the CONDUIT data stream was changed to allow products to be inserted in parallel. This increased the aggregate throughput from a maximum of 1.8 GB per hour to 2.5-2.7 GB per hour at the four peak times. However, a number of users reported that this method was troublesome for their processing of forecast-hour products: while the aggregate data volume increased, the arrival times of specific forecast hours were intermixed, and the earliest forecast hours were no longer guaranteed to complete before later ones.

Several alternatives were tried to maintain product order while preserving the increased throughput, but products were lost in queueing order, so this process was halted as soon as those consequences were observed.

On Thursday morning, the CONDUIT insertion queue was restored to the synchronous remote method that was in place at the beginning of December, when the 0.5 degree GFS files were added. Based on input from the conduit mailing list, this rollback seemed the preferable option heading into a holiday weekend, so that products would not go missing while an improvement in timeliness was being sought. The queueing backlog for this method was known to exceed an hour and a half.

The largest backlog occurs when the twice-daily ensembles post at the same time as the four-times-daily GFS and ETA, along with the hourly RUC. The backlog begins by delaying the posting of the 12Z GFS and 18Z NAM, and it continues through the peak, when grids are posting to the file server at a rate of several models and multiple grids per minute.

Early next week, a change will be made to isolate the underlying file system from product insertion, which should improve timeliness.

Steve Chiswell
Unidata User Support

On Fri, 2005-01-14 at 14:47, Robert Mullenax wrote:
> It would be nice to go back to pre-addition of 0.5 deg GFS.
>
> -----Original Message-----
> From: address@hidden
> To: Jerrold Robaidek
> Cc: Conduit Users; address@hidden
> Sent: 1/14/2005 1:16 PM
> Subject: Re: Late 1 degree GFS and late eta's on CONDUIT
>
> Oh yes, we are seeing huge delays. As an example, we received last
> night's 0Z run 72-hour GFS forecast 2 hours and 54 minutes later than
> I was able to obtain the file via FTP (23:01:58 PST vs. 20:08:15 PST).
>
> On Fri, Jan 14, 2005 at 11:56:50AM -0600, Jerrold Robaidek wrote:
> >
> > The 12Z GFS 1 degree just started coming in and the NAMs (etas) are
> > very late....
> >
> > Things seem to have gone from bad to worse....
> >
> > (I do have some latency issues, but only 10 minutes, not the hours
> > late that we are seeing.)
> >
> > Anybody else seeing these problems?
> >
> > Jerry
> >
> > --
> > Jerrold Robaidek          Email: address@hidden
> > SSEC Data Center          Phone: (608) 262-6025
> > University of Wisconsin   Fax:   (608) 263-6738
> > Madison, Wisconsin
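For readers unfamiliar with LDM product insertion, the tradeoff Steve describes can be sketched around the stock pqinsert utility. The sketch below is a minimal illustration, not the actual NWS ingest code: the queue path, data directory, feedtype flag usage, and worker count are all assumptions made for the example.

    # Hypothetical sketch of the two CONDUIT insertion modes discussed above.
    # Paths and worker count are illustrative assumptions, not the real setup.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor
    from pathlib import Path

    QUEUE = "/usr/local/ldm/var/queues/ldm.pq"  # site-specific assumption

    def insert(grib_file: Path) -> int:
        """Run one pqinsert; each call blocks until its product is queued."""
        return subprocess.call(
            ["pqinsert", "-q", QUEUE, "-f", "CONDUIT", str(grib_file)]
        )

    files = sorted(Path("/data/conduit/incoming").glob("*.grib"))  # assumed dir

    # Synchronous method: one file at a time, order preserved, lower throughput.
    # for f in files:
    #     insert(f)

    # Parallel method: higher aggregate throughput, but the completion order
    # is no longer deterministic -- the forecast-hour intermixing users saw.
    with ThreadPoolExecutor(max_workers=4) as pool:
        pool.map(insert, files)

Running several pqinsert processes at once raises aggregate throughput because multiple products are copied into the queue concurrently, but which forecast hour finishes first is then up to the scheduler, which is exactly the ordering problem reported on the list.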
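The nearly three-hour figure in the quoted message checks out against the two timestamps given. A quick computation, assuming both timestamps fall on the same evening (January 13th, local PST):

    # Worked check of the latency quoted above: LDM delivery at 23:01:58 PST
    # versus the same file fetched by FTP at 20:08:15 PST. The date is an
    # assumption ("last night's 0Z run" relative to the January 14th email).
    from datetime import datetime

    ldm = datetime(2005, 1, 13, 23, 1, 58)   # received via CONDUIT/LDM
    ftp = datetime(2005, 1, 13, 20, 8, 15)   # same file obtained via FTP

    print(ldm - ftp)  # 2:53:43, i.e. roughly the "2 hours and 54 minutes"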