- Subject: [McIDAS #EGS-225845]: McIDAS GRID decoder
- Date: Thu, 18 May 2006 18:41:01 -0600
Hi Hsie,
re:
> I have 2 linux machines (both are running RedHat EL 3.0 and McIDAS 2005) :
>
> (1) The first machine, squall with :
> request HDS "/mNAM.*" rainbow.al.noaa.gov PRIMARY
OK, this machine receives a small subset of the grids available in the HDS
datastream.
> The file size is much small and GRDLIST gives me the correct listing:
File size, or number of files listed? The reason I ask is because of the
following comparison in file sizes (from your listing):
squall ('ls -l' listing)                        GRIDnnnn  rainbow size
==============================================  ========  ============
-rw-rw-r-- 1 ldm unidata 429603068 May 16 20:08 GRID5507 429603068
-rw-rw-r-- 1 ldm unidata 431716796 May 17 20:08 GRID5508 431716796
-rw-rw-r-- 1 ldm unidata 433193764 May 16 20:38 GRID5517 443148228
-rw-rw-r-- 1 ldm unidata 433193764 May 17 20:36 GRID5518 443148228
-rw-rw-r-- 1 ldm unidata 216666656 May 16 20:38 GRID5527 216666656
-rw-rw-r-- 1 ldm unidata 216666656 May 17 20:36 GRID5528 216666656
-rw-rw-r-- 1 ldm unidata 325901836 May 17 02:26 GRID5537 325901836
-rw-rw-r-- 1 ldm unidata 325901836 May 18 02:23 GRID5538 325901836
-rw-rw-r-- 1 ldm unidata 288841940 May 17 02:26 GRID5547 288841940
-rw-rw-r-- 1 ldm unidata 288841940 May 18 02:23 GRID5548 288841940
-rw-rw-r-- 1 ldm unidata 437537564 May 17 08:07 GRID5567 437537564
-rw-rw-r-- 1 ldm unidata 438224924 May 18 08:06 GRID5568 438224924
-rw-rw-r-- 1 ldm unidata 433193764 May 17 08:35 GRID5577 443148228
-rw-rw-r-- 1 ldm unidata 433193764 May 18 08:34 GRID5578 443148228
-rw-rw-r-- 1 ldm unidata 216666656 May 17 08:36 GRID5587 89277324
-rw-rw-r-- 1 ldm unidata 216666656 May 18 08:34 GRID5588 89277324
-rw-rw-r-- 1 ldm unidata 325901836 May 17 14:25 GRID5597 325901836
-rw-rw-r-- 1 ldm unidata 325901836 May 18 14:25 GRID5598 325901836
-rw-rw-r-- 1 ldm unidata 288841940 May 17 14:25 GRID5607 288841940
-rw-rw-r-- 1 ldm unidata 288841940 May 18 14:26 GRID5608 288841940
There are 6 files that are a different size on rainbow:
squall ('ls -l' listing)                        GRIDnnnn  rainbow size
==============================================  ========  ============
-rw-rw-r-- 1 ldm unidata 433193764 May 16 20:38 GRID5517 443148228
-rw-rw-r-- 1 ldm unidata 433193764 May 17 20:36 GRID5518 443148228
-rw-rw-r-- 1 ldm unidata 433193764 May 17 08:35 GRID5577 443148228
-rw-rw-r-- 1 ldm unidata 433193764 May 18 08:34 GRID5578 443148228
-rw-rw-r-- 1 ldm unidata 216666656 May 17 08:36 GRID5587 89277324
-rw-rw-r-- 1 ldm unidata 216666656 May 18 08:34 GRID5588 89277324
All of the other files are exactly the same size.
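For reference, the mismatches can be recovered mechanically from the listing; here is a small Python sketch with a subset of the sizes transcribed from the table above (file names and numbers are straight from the listing, nothing else is assumed):

```python
# (squall size, rainbow size) pairs transcribed from the listing above;
# only a representative subset of the twenty files is included.
sizes = {
    "GRID5507": (429603068, 429603068),
    "GRID5508": (431716796, 431716796),
    "GRID5517": (433193764, 443148228),
    "GRID5518": (433193764, 443148228),
    "GRID5527": (216666656, 216666656),
    "GRID5528": (216666656, 216666656),
    "GRID5577": (433193764, 443148228),
    "GRID5578": (433193764, 443148228),
    "GRID5587": (216666656,  89277324),
    "GRID5588": (216666656,  89277324),
}

# Keep only the files whose squall and rainbow sizes disagree.
mismatched = sorted(f for f, (s, r) in sizes.items() if s != r)
print(mismatched)
```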
I understand why GRID551[78] and GRID557[78] are larger on rainbow.
The contents of the files are identical until near the end where the
GRID files on rainbow have a number of fields that were already decoded
in the beginning of the file. The explanation for this is that these
fields were resent in the HDS datastream after the originals had been
scoured out of the LDM queue on rainbow (it is getting a LOT more data
than squall).
'squall', on the other hand, is not getting much data at all, so the
original products were still in the LDM queue when the duplicates in the
HDS stream were received. Since the products were already in the queue,
they were rejected as duplicates and, therefore, not processed.
Why GRID558[78] are smaller on rainbow is a bit of a mystery _unless_
the processing on rainbow fell behind and lots of products were deleted
from the queue before they were ever processed. Please note that
products are relayed as soon as they are received, so squall evidently
did get them and was able to process them out of its queue before they
were deleted.
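The queue behavior described above can be illustrated with a toy model. This is a deliberate simplification, not LDM's actual implementation: products are identified by a signature, duplicates still in the queue are rejected, and the oldest products are scoured when the queue fills.

```python
from collections import OrderedDict

def run(products, queue_capacity):
    """Toy model of LDM duplicate rejection: a product is decoded only
    if its signature is no longer present in the (finite) queue."""
    queue = OrderedDict()              # signature -> None, oldest first
    decoded = []
    for sig in products:
        if sig in queue:
            continue                   # duplicate still in queue: rejected
        if len(queue) >= queue_capacity:
            queue.popitem(last=False)  # scour the oldest product
        queue[sig] = None
        decoded.append(sig)            # decoder appends the field to the file
    return decoded

feed = ["A", "B", "C", "D", "A"]       # product "A" is resent at the end

# squall-like: large enough retention, the resent "A" is rejected.
print(len(run(feed, queue_capacity=10)))   # 4 fields decoded

# rainbow-like: "A" was scoured before the resend, so it is decoded twice,
# making the output file larger.
print(len(run(feed, queue_capacity=2)))    # 5 fields decoded
```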
By the way, it is possible that the products got scoured
out of rainbow's queue because of the 'exec pqexpire' invocation.
I recommend that you comment out the 'exec pqexpire' line in
rainbow's ldmd.conf file and restart its LDM just to make sure
that this is not the culprit.
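The suggested change amounts to prefixing the pqexpire entry with a comment character. A minimal sketch of that edit, assuming the entry looks like a typical ldmd.conf 'exec' line (the sample contents here are hypothetical):

```python
import re

# Hypothetical ldmd.conf fragment; the real file lives at ~ldm/etc/ldmd.conf.
ldmd_conf = '''exec    "pqbinstats"
exec    "pqexpire"
exec    "pqact"
'''

# Comment out only the 'exec "pqexpire"' line, leaving other entries alone.
edited = re.sub(r'(?m)^(exec\s+"pqexpire".*)$', r'# \1', ldmd_conf)
print(edited)
```

After making the change to the real file, the LDM would need to be restarted (e.g. with 'ldmadmin restart') for it to take effect.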
> squall:[49]% grdlist.k RTGRIDS/NAM FOR=FILE
> DATASET NAME: RTGRIDS/NAM
> Dataset Position Creation Date Max Grids Directory Title
> ---------------- ------------- --------- -------------------------------
> 7 2006137 5000 ALL 00Z NAM 0 HR<=VT<=24 HR
> 8 2006138 5000 ALL 00Z NAM 0 HR<=VT<=24 HR
> 17 2006137 5000 ALL 00Z NAM 24 HR< VT<=48 HR
> 18 2006138 5000 ALL 00Z NAM 24 HR< VT<=48 HR
> 27 2006137 5000 ALL 00Z NAM 48 HR< VT<=72 HR
> 28 2006138 5000 ALL 00Z NAM 48 HR< VT<=72 HR
> 37 2006137 5000 ALL 06Z NAM 0 HR<=VT<=24 HR
> 38 2006138 5000 ALL 06Z NAM 0 HR<=VT<=24 HR
> 47 2006137 5000 ALL 06Z NAM 24 HR< VT<=48 HR
> 48 2006138 5000 ALL 06Z NAM 24 HR< VT<=48 HR
> 67 2006137 5000 ALL 12Z NAM 0 HR<=VT<=24 HR
> 68 2006138 5000 ALL 12Z NAM 0 HR<=VT<=24 HR
> 77 2006137 5000 ALL 12Z NAM 24 HR< VT<=48 HR
> 78 2006138 5000 ALL 12Z NAM 24 HR< VT<=48 HR
> 87 2006137 5000 ALL 12Z NAM 48 HR< VT<=72 HR
> 88 2006138 5000 ALL 12Z NAM 48 HR< VT<=72 HR
> 97 2006137 5000 ALL 18Z NAM 0 HR<=VT<=24 HR
> 98 2006138 5000 ALL 18Z NAM 0 HR<=VT<=24 HR
> 107 2006137 5000 ALL 18Z NAM 24 HR< VT<=48 HR
> 108 2006138 5000 ALL 18Z NAM 24 HR< VT<=48 HR
> GRDLIST - done
> squall:[50]%
>
> ...
>
> rainbow:[50]% grdlist.k RTGRIDS/NAM FOR=FILE
> DATASET NAME: RTGRIDS/NAM
> Dataset Position Creation Date Max Grids Directory Title
> ---------------- ------------- --------- -------------------------------
> 7 2006137 5000 ALL 00Z NAM 0 HR<=VT<=24 HR
> 8 2006138 5000 ALL 00Z MAPS
> 17 2006137 5000 ALL 00Z NAM 24 HR< VT<=48 HR
> 18 2006138 5000 ALL 00Z NGM 24 HR< VT<=48 HR
> 27 2006137 5000 ALL 00Z NAM 48 HR< VT<=72 HR
> 28 2006138 5000 ALL 00Z NAM 24 HR< VT<=48 HR
> 37 2006137 5000 ALL 06Z NAM 0 HR<=VT<=24 HR
> 38 2006138 5000 ALL 06Z NAM 0 HR<=VT<=24 HR
> 47 2006137 5000 ALL 06Z NAM 24 HR< VT<=48 HR
> 48 2006138 5000 ALL 06Z NAM 24 HR< VT<=48 HR
> 67 2006137 5000 ALL 12Z NAM 0 HR<=VT<=24 HR
> 68 2006138 5000 ALL 12Z NAM 0 HR<=VT<=24 HR
> 77 2006137 5000 ALL 12Z NAM 24 HR< VT<=48 HR
> 78 2006138 5000 ALL 12Z NAM 24 HR< VT<=48 HR
> 87 2006137 5000 ALL 12Z NAM 48 HR< VT<=72 HR
> 88 2006138 5000 ALL 12Z NAM 48 HR< VT<=72 HR
> 97 2006137 5000 ALL 18Z NAM 0 HR<=VT<=24 HR
> 98 2006138 5000 ALL 18Z NAM 0 HR<=VT<=24 HR
> 107 2006137 5000 ALL 18Z NAM 24 HR< VT<=48 HR
> 108 2006138 5000 ALL 18Z NAM 24 HR< VT<=48 HR
> GRDLIST - done
> rainbow:[51]%
So, the headers in GRID files 5508 and 5518 were somehow written
incorrectly when the files were first created:
GRID5508 -> MAPS instead of NAM and no time range registered
GRID5518 -> NGM instead of NAM
I don't understand how this could happen unless the dmgrid.k executable
is somehow damaged on rainbow or there was some sort of a hiccup. It is
interesting that the size of the files is the same on both systems, and a
side-by-side comparison of file contents (using GRDLIST) is identical.
So, the data all got decoded correctly, but the file headers got damaged
upon creation.
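One way to confirm that only the headers differ would be a byte-by-byte comparison of the two files. The sketch below uses made-up in-memory stand-ins for the real GRID files (the header strings are placeholders, not the actual GRID header format); with real files you would read each with open(path, "rb").read().

```python
# Placeholder bytes standing in for the two versions of the same GRID file.
good = b"NAM 00Z  0<=VT<=24" + b"\x00" * 64   # header as written on squall
bad  = b"MAPS 00Z          " + b"\x00" * 64   # damaged header on rainbow

# Find the offset of the first differing byte, or None if identical.
first_diff = next(
    (i for i, (a, b) in enumerate(zip(good, bad)) if a != b), None)
print(first_diff)   # 0: these stand-ins diverge in the very first byte
```

With the real files, a first difference inside the directory/header region, followed by identical grid data, would match what GRDLIST shows.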
For the record: the configuration setup for running XCD is identical
on rainbow and squall. The only differences I can see between the two
systems are:
- rainbow receives a lot more data than squall
- rainbow has an 'exec pqexpire' entry in ~ldm/etc/ldmd.conf; this
is not needed
- rainbow - dual 3.2 GHz Xeon processors w/4 GB RAM; OS supports
hyperthreading
- squall - single 2.2 GHz P4 w/1.5 GB RAM; no hyperthreading
Both systems are running 4 GB queues.
> Could you help find out what's wrong in rainbow?
I wish I had a simple explanation for what happened in the writing
of the GRID file headers you noted, but I don't. I believe that
I understand how a GRID file on rainbow can be larger than the
same one on squall _if_ duplicate products were received in the
IDD HDS stream and the previous ones of the same kind had been
scoured out of the queue.
I suspect that there was some sort of a hiccup on rainbow when
GRID5508 and GRID5518 were being created. What that hiccup might
have been, I don't know.
Cheers,
Tom
****************************************************************************
Unidata User Support UCAR Unidata Program
(303) 497-8642 P.O. Box 3000
address@hidden Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage http://www.unidata.ucar.edu
****************************************************************************
Ticket Details
===================
Ticket ID: EGS-225845
Department: Support McIDAS
Priority: Normal
Status: Closed