This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Gabe,

The GFS data for grids #2, 3, and 4 (e.g., 2.5 deg, 1 deg, and 0.5 deg) received here all have the 10, 20, 30, 50, 70, 100, and 150 mb levels:

http://motherlode.ucar.edu/cgi-bin/ldm/statusgen.csh?SL.us008001/ST.opnl/MT.gfs_CY.00/RD.20050916
http://motherlode.ucar.edu/cgi-bin/ldm/statusgen.csh?SL.us008001/ST.opnl/MT.gfs_CY.06/RD.20050916

The 12Z run is just starting to arrive, but the grids that have arrived so far also have the upper levels.

Your IDD reception on polarmet12 looks good through last night, with a short latency blip this morning that might have been a restart of your LDM or a brief network outage:

http://my.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+polarmet12.mps.ohio-state.edu

If you have specific times and grids, I can double-check specific arrivals.

Steve Chiswell
Unidata User Support

On Fri, 2005-09-16 at 09:17, Gabe Langbauer wrote:
> Hello,
>
> The GFS data I received today did not have all the vertical levels (it only
> went up to 200 mb). I was just wondering if this was a problem on my
> end... Did anyone else have the same experience?
>
> pqact call:
> CONDUIT	MT.gfs_CY.../(.*) !(.*)!
> 	FILE	data/conduit/GFS/\1
>
> --Gabe Langbauer
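[Archive note: for reference, the behavior of the regular expression in the quoted pqact entry can be checked outside the LDM, e.g. with Python's re module. The product ID below is illustrative only, modeled on the SL.us008001/ST.opnl/... directory paths in the status URLs above; real CONDUIT product IDs may differ in detail.]

```python
import re

# The pattern from the quoted pqact entry, verbatim. As in pqact,
# the unescaped dots are single-character wildcards: "CY..." matches
# "CY." followed by any two characters (the cycle hour), then "/".
# \1 captures the rest of the path; \2 captures the GRIB metadata
# between the "!" delimiters.
PATTERN = re.compile(r'MT.gfs_CY.../(.*) !(.*)!')

# Hypothetical sample product ID for illustration.
pid = ('SL.us008001/ST.opnl/MT.gfs_CY.12/RD.20050916/PT.grid_DF.gr1/'
       'fh.0012_tl.press_gr.onedeg '
       '!grib/ncep/GFS/#000/200509161200/F012/HGT!')

m = PATTERN.search(pid)
if m:
    # The FILE action would write the product to data/conduit/GFS/\1
    print('file path:', 'data/conduit/GFS/' + m.group(1))
    print('grib info:', m.group(2))
```

Running this shows which portion of the product ID ends up in the FILE path (everything after the MT.gfs_CY.NN/ component), which can help confirm that the entry files all forecast hours and levels rather than filtering any out.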