Re: Extended GFS 0.5 degree data
- Subject: Re: Extended GFS 0.5 degree data
- Date: Mon, 12 Jun 2006 17:33:39 -0500
Steve,
Thanks for the quick reply, I'll compare your files to mine and get
back to you with what I find.
Pete
In a previous message to me, you wrote:
>Pete,
>
>The GRIB2 data does take more CPU to decode and store in GEMPAK
>at the moment because it must uncompress the product and then repack
>it into the GEMPAK GRIB packing. I'm working to modify the storage so
>that it can simply write out the compressed bits just as GRIB1 decoding
>does (when there is no bit mask or tiling involved).
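>
>As a toy illustration of why the current path costs more (a minimal
>sketch only: zlib below is just a stand-in for GRIB2's field
>compression, not anything GEMPAK actually uses):
>
>    import time
>    import zlib
>
>    field = b"0123456789abcdef" * 250_000   # ~4 MB of compressible data
>    packed = zlib.compress(field)           # as received in the product
>
>    t0 = time.perf_counter()
>    passthrough = packed                    # GRIB1-style: write bits as-is
>    t1 = time.perf_counter()
>    repacked = zlib.compress(zlib.decompress(packed))  # GRIB2 path today
>    t2 = time.perf_counter()
>
>    print(f"pass-through: {t1 - t0:.6f}s   unpack+repack: {t2 - t1:.6f}s")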
>
>As for the output file size, it should be the same as the data stream
>contents. If it is not, then your pqact FILE action may be:
>1) writing the .status file into the raw data file (see the sample
>   entry below),
>2) receiving the data more than once because the queue is too small, or
>3) generating disk I/O too large, so that pqact opens a second I/O
>   stream.
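>
>A sample FILE entry in that spirit (pattern and path are illustrative
>only, and the fields must be tab-separated; check your actual CONDUIT
>product IDs with notifyme before relying on the anchors):
>
>    CONDUIT	^.*(fh\.[0-9]*_tl\.press_gr\.0p5deg)$
>    	FILE	-close	data/gfs0p5/\1
>
>The trailing $ is what keeps the .status products, whose IDs carry a
>.status suffix, from being appended into the raw GRIB2 file.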
>
>If your LDM feed has more than one CONDUIT request (see the split
>request example below), you may get the individual GRIB products in a
>different order, but that wouldn't affect decoding an individual GRIB
>bulletin.
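>
>A typical split keys on the sequence number at the end of the CONDUIT
>product ID, e.g. in ldmd.conf (the upstream host is a placeholder):
>
>    request CONDUIT "[05]$" idd.upstream.example.edu
>    request CONDUIT "[16]$" idd.upstream.example.edu
>    request CONDUIT "[27]$" idd.upstream.example.edu
>    request CONDUIT "[38]$" idd.upstream.example.edu
>    request CONDUIT "[49]$" idd.upstream.example.edu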
>
>As a quick check, you can see what we have received here:
>ftp://unidata:address@hidden/native/conduit/SL.us008001/ST.opnt/MT.gfs_CY.12/RD.20060612/PT.grid_DF.gr2
>and tell me if your decoding works on those files and/or if your file
>size is different.
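>
>If the sizes don't match, a quick byte-level comparison (plain Python;
>the paths are placeholders) will show whether the CONDUIT copy diverges
>mid-file or just has extra bytes appended:
>
>    import sys
>
>    def first_divergence(path_a, path_b):
>        # Read both files whole; fine for GRIB files of a few MB.
>        with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
>            a, b = fa.read(), fb.read()
>        print(f"{path_a}: {len(a)} bytes, {path_b}: {len(b)} bytes")
>        n = min(len(a), len(b))
>        for i in range(n):
>            if a[i] != b[i]:
>                print(f"first differing byte at offset {i}")
>                return
>        print(f"identical for {n} bytes; size difference is trailing data")
>
>    first_divergence(sys.argv[1], sys.argv[2])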
>
>Steve Chiswell
>Unidata User Support
>
>On Mon, 2006-06-12 at 15:48, Pete Pokrandt wrote:
>> Steve, et al,
>>
>> One other thing I noticed: it takes a LOT more CPU power and
>> disk I/O bandwidth to decode the 0.5 deg GFS files in real time
>> than it does to decode the 1 deg GFS. In fact, f5.aos.wisc.edu
>> (dual PIII 1 GHz, 3 GB RAM, data being written to a 3-disk SCSI
>> RAID 0 filesystem, 4 GB LDM queue) was not able to keep up decoding
>> the 0.5 deg GRIB2 files into GEMPAK format. The GEMPAK files were
>> truncated, as if the GRIB2 data were being overwritten in the queue
>> before the decoder had gotten to them.
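>>
>> A back-of-the-envelope check on that theory (the ingest rate below is
>> an assumed burst rate for illustration, not a measurement):
>>
>>     # A product survives in the queue roughly queue_size / ingest_rate
>>     # seconds before being overwritten by newer data.
>>     queue_bytes = 4 * 1024**3        # 4 GB LDM queue
>>     ingest_rate = 6 * 1024**2        # assumed 6 MB/s burst during GFS
>>     print(f"queue residence: {queue_bytes / ingest_rate / 60:.1f} min")
>>
>> If the decoder falls further behind than that, the data is gone before
>> it gets read.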
>>
>> I've currently got the GRIB2 -> GEMPAK decoding running on another
>> machine (dual Athlon MP 2600+, 2 GB RAM, SATA RAID 1 filesystem,
>> 4 GB LDM queue), and as far as I can tell it is able to keep up.
>> The exception, from my prior email, is that the raw GRIB2 files from
>> CONDUIT do not match the sizes of the same files ftp'd from tgftp;
>> the CONDUIT files give wgrib2 errors and will not work with the WRF.
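>>
>> A structural check that doesn't need wgrib2 (plain Python; it only
>> walks GRIB2 Section 0 headers, where octet 8 is the edition number
>> and octets 9-16 give the total message length):
>>
>>     import sys
>>
>>     buf = open(sys.argv[1], "rb").read()
>>     pos = ok = 0
>>     while True:
>>         idx = buf.find(b"GRIB", pos)
>>         if idx < 0:
>>             break
>>         if idx + 16 > len(buf) or buf[idx + 7] != 2:
>>             pos = idx + 4            # not a GRIB2 indicator section
>>             continue
>>         msglen = int.from_bytes(buf[idx + 8:idx + 16], "big")
>>         end = idx + msglen
>>         if end <= len(buf) and buf[end - 4:end] == b"7777":
>>             ok += 1                  # complete message, ends in "7777"
>>             pos = end
>>         else:
>>             print(f"broken message at offset {idx}, length {msglen}")
>>             pos = idx + 4
>>     print(f"{ok} intact GRIB2 messages")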
>>
>> Pete
>>
>> In a previous message to me, you wrote:
>>
>> >
>> >CONDUIT data users,
>> >
>> >We would like to increase the availability of the 0.5 degree (grid #4)
>> >GFS data in the CONDUIT data stream, as discussed at the working group
>> >meeting in January:
>> >http://www.unidata.ucar.edu/projects/CONDUIT/Unidata_CONDUIT_Status_and_Implementation_Update.pdf
>> >
>> >Currently, the CONDUIT feed provides the F000 through F084 fields for
>> >the GFS at half degree resolution in GRIB2 format. We now have access
>> >to the grids through F180 (an additional 1.2 GB of data per model
>> >cycle) and plan to add these to the data stream.
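>> >
>> >(Across the four model cycles per day, that works out to roughly
>> >4 x 1.2 GB = 4.8 GB of additional data daily.)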
>> >
>> >The addition of these files will involve changing the source file
>> >names of both the existing and the additional data files from the NWS
>> >ftp server naming convention:
>> >ftp://tgftp.nws.noaa.gov/SL.us008001/ST.opnt/MT.gfs_CY.HH/RD.YYYYMMDD/PT.grid_DF.gr2/fh.FFFF_tl.press_gr.0p5deg
>> >to the NCEP ftp server naming convention:
>> >ftp://ftpprd.ncep.noaa.gov/pub/data/nccf/com/gfs/prod/gfs.YYYYMMDD/gfs.tHHz.pgrb2fFFF
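>> >
>> >For scripted retrieval, the mapping between the two conventions is
>> >mechanical; a sketch following the templates above (field widths per
>> >the FFFF and FFF placeholders):
>> >
>> >    def tgftp_name(yyyymmdd, hh, fhr):
>> >        # Old NWS tgftp style path components.
>> >        return (f"SL.us008001/ST.opnt/MT.gfs_CY.{hh:02d}/RD.{yyyymmdd}/"
>> >                f"PT.grid_DF.gr2/fh.{fhr:04d}_tl.press_gr.0p5deg")
>> >
>> >    def ncep_name(yyyymmdd, hh, fhr):
>> >        # New NCEP ftpprd style path components.
>> >        return f"gfs.{yyyymmdd}/gfs.t{hh:02d}z.pgrb2f{fhr:03d}"
>> >
>> >    print(tgftp_name("20060612", 12, 84))
>> >    print(ncep_name("20060612", 12, 84))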
>> >
>> >If you are currently using the 0.5 degree GFS products and the change
>> >of file naming would cause problems for your use of the data, please
>> >contact address@hidden as soon as possible.
>> >
>> >Eventually, we would like to remove the legacy 1.0 degree GRIB1 format
>> >(grid #3) products currently being sent in CONDUIT for the same
>> >F000-F180 time period to allow for additional new data sets. We
>> >encourage all users to transition to GRIB2 along with the NWS
>> >announced changes so that new data sets can be utilized to their
>> >fullest extent.
>> >
>> >Steve Chiswell
>> >Unidata User Support
>> >
>>
>>
--
+>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>+<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<+
^ Pete Pokrandt V 1447 AOSS Bldg 1225 W Dayton St^
^ Systems Programmer V Madison, WI 53706 ^
^ V address@hidden ^
^ Dept of Atmos & Oceanic Sciences V (608) 262-3086 (Phone/voicemail) ^
^ University of Wisconsin-Madison V 262-0166 (Fax) ^
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<+>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>+