This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Hi Brian,

re:
> metofis will not be unplugged but is aging untended (amazingly gracefully...)

OK. This provides a bit of breathing room.

re:
> To make firewall openings for weather, our systems people need a list of
>
> 1. Ports and Targets

Ports:

  80   - web server or THREDDS
  22   - ssh
  112  - McIDAS ADDE
  388  - LDM
  8080 - RAMADDA

Targets? What exactly do you mean? If you mean specific hosts that will be
able to connect to the ports above, I was assuming/hoping that RSMAS would be
playing a role where it could serve data to other Unidata sites. To do this,
all ports except 22 and 388 would need to be open to the world.

re:
> 2. Protocols (UDP or TCP)

McIDAS ADDE, LDM, THREDDS and RAMADDA are all TCP, and so is SSH.

re:
> 3. sending or receiving? (how does LDM work?)

To serve data to other sites (ADDE, RAMADDA, THREDDS, LDM), the ports would
need to be open to inbound requests.

re: did you save off the LDM configuration files before wiping weather?
> Nope, gone, sorry. Oops

No big worries, especially since you really want to redo the setup.

re: objective is to set up weather just like our motherlode?
> Sure, and with a nice long rolling archive (like 3 mos) ideally.
> Capacity permitting. (and we will replace it with a new machine when
> our Unidata ship comes in, so dream big, right?).

The amount of disk space on weather right now is _not_ enough to keep 3 months
of all data. As a comparison, motherlode has 10 TB of disk, and as the data
volumes increase it is routinely over 90% full. When this happens we decrease
the number of days of data we keep available. We are down to less than one
month for model and radar data (in fact, it is currently 20 days).

re: 12 GB/hr input average with peaks that exceed 20 GB/hr in the IDD
> Will these numbers scare our firewall or network people,

Perhaps. These numbers are "healthy".

re:
> or just me, the weather machine person?

The IDD datastreams contain LOTS of data. Processing it all and making it
available by server technologies requires LOTS of disk space and fast
processors. Being scared of the numbers I quoted is reasonable, but just
remember that they are certainly going to go UP from here.

re:
> If just the machine, then when we get a new one we will want it all and
> let's set it up aggressively now, then comment out a few things temporarily
> for the current machine's weaknesses.

OK.

re:
> I hope a lot of your setup work will transfer seamlessly to a new machine?

It should.

re:
> As an image even, or will we have to rebuild it all within a new system?

It is always best to build software distributions on the machine(s) that they
are going to run on. This is not the case for things like THREDDS and RAMADDA,
which are deployed as Java war files.

re: GEMPAK decoding?
> Sounds good, turn off the decoding into GEMPAK format.

OK.

re:
> Or is that what we will want for AWIPS-II when it comes along?

No, AWIPS-II does its own thing (but it is supposed to be able to use
GEMPAK-decoded files in the future).

re:
> > Musing about the above: I think I will start off by processing all
> > of the NOAAPort data first. This excludes processing of the high resolution
> > NCEP model output in the CONDUIT datastream and the full volume scan
> > radar data in the NEXRAD2 datastream. Please let me know if you
> > agree or disagree with this approach.
>
> Sounds like a good start.

OK.
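To put the disk-space discussion above in perspective, here is a
back-of-the-envelope sizing sketch in Python. It only uses the figures quoted
earlier (12 GB/hr average, 20 GB/hr peaks, a 90-day archive, 10 TB on
motherlode), and since IDD volumes keep growing it should be read as a rough
lower bound rather than a firm number:

    # Rough sizing of a rolling archive of the full IDD input,
    # using the rates quoted earlier in this exchange.
    avg_gb_per_hr = 12    # average IDD input rate
    peak_gb_per_hr = 20   # peak rate that is occasionally exceeded
    days = 90             # the "3 mos" rolling archive being considered

    avg_tb = avg_gb_per_hr * 24 * days / 1000
    peak_tb = peak_gb_per_hr * 24 * days / 1000
    print(f"~{avg_tb:.0f} TB for {days} days at the average rate")   # ~26 TB
    print(f"~{peak_tb:.0f} TB if the peak rate were sustained")      # ~43 TB
    # Either way, well beyond the 10 TB that motherlode has in total,
    # which is why a full 3-month archive is not practical on weather today.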
> Is there a big table somewhere of LDM's available streams and their sizes and
> so on?

The best thing for volumes of IDD datastreams is to look at the real-time
stats being reported by top-level IDD relay nodes:

Unidata HomePage
http://www.unidata.ucar.edu

Projects -> Internet Data Distribution
http://www.unidata.ucar.edu/projects/index.html#idd

IDD Current Operational Status
http://www.unidata.ucar.edu/software/idd/rtstats/

Statistics by Host
http://www.unidata.ucar.edu/cgi-bin/rtstats/siteindex

Look at uni14.unidata.ucar.edu, for example.

Cheers,

Tom
--
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                              Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                           http://www.unidata.ucar.edu
****************************************************************************

Ticket Details
===================
Ticket ID: DLW-777823
Department: Support IDD
Priority: Normal
Status: Closed