This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Hi Pete (with CC to Donna Cote (TAMU), Art Person (PSU) and Patrick Francis (MLG)),

re:
> Just curious as to the plans for getting the 0.25 GFS data onto the
> CONDUIT or some other data stream.

We are preparing a survey that will be sent to CONDUIT users subscribed to the
address@hidden email list to see if enough of them want the 0.25 degree GFS
data to warrant the data's inclusion in CONDUIT.

re:
> I've had a few downstream sites asking me why the 0.25 degree data isn't
> on the CONDUIT feed yet,

Can you let us know which sites have contacted you about the 0.25 degree GFS
data? This will help in determining the need to add the data to the
datastream.

re:
> and I've been responding that I thought the
> reason that data isn't available yet was primarily a size issue. Is that
> accurate?

Yes. The size issue is really whether or not the CONDUIT top level relays
(like UW/AOS) can handle the additional data volume while still keeping
enough residency time in their LDM queues so that end-user sites experiencing
short LDM outages or slow links will not lose data.

We heard back from you that you thought you might be able to handle the
increased volume, but we heard back from Penn State that they are nearly at
capacity in the network where their top level IDD relay cluster resides. They
expect that this situation will be mitigated in the near term (e.g., a couple
of months) and eventually become a non-issue when they connect to a research
network that will have 40 Gbps capacity. We never heard back from UIllinois.

re:
> Tom at Unidata had mentioned in one email to me that their experience so
> far through some testing is that the 0.25 deg data is about 20 Gb per
> run (4 runs per day), and he had asked my thoughts on replacing the 0.5
> deg data set with the 0.25 data set.

Correct. Penn State sent a very thoughtful response to the inquiry that I
sent them, and they suggested keeping the 0.5 degree data and only including
the 0.25 degree forecasts out to 120 hours. We should have a better idea of
what end users want/need after we get the survey results. A subcommittee of
the Users Committee will be involved in evaluating user responses and making
recommendations on the addition of the 0.25 degree data and possibly other
data.

re:
> I know there are some people in our department who are using the 0.5 deg
> data (also others that use the 1 deg) and would probably need to rework
> some things for their processes to work with the 0.25 deg data instead.

Penn State also mentioned this; they were concerned about the extra
processing that would be needed to move from the 0.5 degree data to the 0.25
degree data. Do you/your users have a similar concern?

re:
> In the same context, Tom also asked whether I thought my relay would be
> able to handle the additional capacity if the 0.25 data were added, how
> big my IDD relay queue is, and if it could be increased. My queue is 20
> Gb right now, and I'm not sure how much bigger it can go. Memory on my
> IDD relay is 32 Gb, but I've found that if my ldm queue is much bigger
> than about half the RAM on the machine, my data throughput can go way
> down. I don't know if that machine could handle an additional 20 Gb
> every 6 hours or not.

The ability of multiple sites to relay the data is one of the biggest issues
that must be solved in order to relay all of the 0.25 degree data while still
keeping everything else that is already in CONDUIT.
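On the queue-size question: resizing an LDM product queue is, in general, a
matter of stopping the LDM, changing the registered queue size, and remaking
the queue. The following is a sketch only (in recent LDM releases the queue
size is the /queue/size registry parameter; please check the LDM
documentation for the version you are running before using these commands):

  ldmadmin stop                        # stop the LDM before touching the queue
  regutil -s <new_size> /queue/size    # e.g. 30G; record the new size in the LDM registry
  ldmadmin delqueue                    # delete the existing product queue
  ldmadmin mkqueue                     # recreate the queue at the new size
  ldmadmin start                       # restart the LDM

For scale: at about 20 GB per run and 4 runs per day, the full 0.25 degree
GFS would add on the order of 80 GB per day to CONDUIT, so a 20 GB queue
would provide correspondingly less residency time unless it were enlarged.
And since the product queue is a memory-mapped file, your point about keeping
the queue under roughly half of the machine's RAM is well taken; a bigger
queue may also mean adding memory to the relay.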
Question:
- are you evaluating options that would allow your system(s) to handle the
  higher data volumes?

re:
> Those questions lead me to believe that there is concern about the size
> of adding the 0.25 deg data to CONDUIT, especially for downstream sites
> that may have lower bandwidth and smaller machines ingesting the data.

The key issue in my mind is the ability of the top level relays to handle the
higher volumes. End users that do not have enough bandwidth to handle the
data can simply _not_ REQUEST the 0.25 degree data until their infrastructure
is updated to be able to handle the flow and processing.

re:
> Thoughts?

We (UPC) need to get the data so that it can be made available via the TDS;
there have been a number of requests from users for this service. The easiest
and most robust way that this can happen is for the 0.25 degree GFS data to
be included in CONDUIT. This, in turn, is dependent on the top level relays
being able to handle the CONDUIT and other data they are already relaying.

Question to all:
- what is your view on what should be done?

Comment:
- perhaps a conference call with all CONDUIT top level relays to discuss the
  situation is in order

We will put together a Doodle (tm) poll to see when folks could be available
for such a conference call.

Cheers,

Tom
--
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                             Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                          http://www.unidata.ucar.edu
****************************************************************************

Ticket Details
===================
Ticket ID: YLK-904648
Department: Support CONDUIT
Priority: Normal
Status: Closed
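A note on the point above about end-user sites simply not REQUESTing the 0.25
degree data: that selection happens in the extended regular expression of a
site's ldmd.conf REQUEST line(s). As a sketch only, and assuming the CONDUIT
product identifiers for GFS carry the NCEP GRIB2 file names (e.g.,
"pgrb2.0p25" for the 0.25 degree grids versus "pgrb2.0p50" for the 0.5 degree
grids), a site that wants the 0.5 degree GFS but not the 0.25 degree GFS
could use a pattern along these lines, alongside whatever other CONDUIT
REQUEST lines it already has, with <upstream-host> standing in for its actual
feed host:

  # CONDUIT GFS: 0.5 degree grids only, no 0.25 degree grids
  REQUEST CONDUIT "gfs.*pgrb2\.0p50" <upstream-host>

The actual product identifiers should be verified with the LDM notifyme
utility against the upstream host before settling on any pattern.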