19990827: decoding grib by XCD (cont.)
- Subject: 19990827: decoding grib by XCD (cont.)
- Date: Fri, 27 Aug 1999 14:49:14 -0600
>From: "Jennie L. Moody" <address@hidden>
>Organization: UVa
>Keywords: 199907281535.JAA02200 McIDAS GRIB DMGRID
Jennie,
re: finishing off the LDM style of directory structure on aeolus
I did the following to finish off what Clay had started:
<login as 'mcidas'>
ln -s runtime/workdata workdata
ln -s runtime/admin admin
ln -s runtime/help help
ln -s runtime/savedata savedata
ln -s runtime/tcl tcl
re: lots of files in the /incoming/data/grib directory
>Right, and they are all supposed to be global model runs....
>mrf grids are what we requested, and what Dolores said she was
>giving us...more about this below.
OK.
re: directory setup
>well, let's see, the directory structure is
>/home/mcidas/710
>mcidas: /home/mcidas/710 $ ls -al
>total 112
>drwxr-xr-x 11 mcidas usr 512 Jan 2 1997 ./
>drwxrwxrwx 17 mcidas sys 1536 Aug 27 08:52 ../
>drwxr-xr-x 2 mcidas usr 512 Jan 2 1997 admin/
>drwxr-xr-x 2 mcidas usr 7168 Mar 15 16:43 bin/
>drwxr-xr-x 2 mcidas usr 6144 Nov 18 1997 data/
>drwxr-xr-x 2 mcidas usr 6144 Jan 2 1997 help/
>drwxr-xr-x 2 mcidas usr 512 Jan 2 1997 inc/
>drwxr-xr-x 3 mcidas usr 512 Jan 2 1997 lib/
>drwxr-xr-x 2 mcidas usr 512 Jan 2 1997 savedata/
>drwxr-xr-x 6 mcidas usr 512 Jan 2 1997 tcl/
>drwxrwxrwx 2 mcidas usr 2048 Aug 26 17:22 workdata/
>
> lrwxrwxrwx 1 mcidas usr 16 Jan 3 1997 runtime@ -> /home/mcidas/710/
>
>runtime links to /home/mcidas/710 and all the subdirectories.
Yes, but the subdirectories themselves were not all called out at the top
level. The process is:
<login as mcidas>
cd
ln -s 710 runtime
ln -s runtime/admin admin
ln -s runtime/bin bin
ln -s runtime/data data
ln -s runtime/help help
ln -s runtime/inc inc
ln -s runtime/lib lib
ln -s runtime/savedata savedata
ln -s runtime/tcl tcl
ln -s runtime/workdata workdata
(This, at least, is the complete list for 7.1.)
The 'mcidas' user's McIDAS environment variables then become:
MCDATA=/home/mcidas/workdata
MCPATH=${MCDATA}:/home/mcidas/data:/home/mcidas/help
MCGUI=/home/mcidas/bin
etc.
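For example, in the 'mcidas' user's .kshrc (or .profile) that could be
written as follows (just a sketch; the PATH line is my assumption about
how you pick up the McIDAS binaries):
MCDATA=/home/mcidas/workdata
MCPATH=${MCDATA}:/home/mcidas/data:/home/mcidas/help
MCGUI=/home/mcidas/bin
PATH=${MCGUI}:${PATH}
export MCDATA MCPATH MCGUI PATH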
These environment variables will stay this way from then on. All that will
change when switching to a new distribution is the runtime link:
rm runtime
ln -s 760 runtime
etc.
>It's actually the src directory (which you found hiding) that is
>or appears to be missing.
No, the src directory does not get called out to the home directory.
This remains buried under the distribution's tree (i.e. mcidas7.1/src,
mcidas7.4/src, etc.).
>Anyway, I agree, it's not a nice clean setup.
It was soooo close...
>Okay, I may have a question about trying to retrofit the old version
>(7.4) under the runtime setup,
What I would do to retrofit 7.4 into the new structure is:
o login as 'ldm' and stop the LDM:
ldmadmin stop
o login as 'mcidas'
cd
mkdir 740
mv admin 740/admin
mv bin 740/bin
mv data 740/data
mv help 740/help
mv inc 740/inc
mv lib 740/lib
mv man 740/man
mv savedata 740/savedata
mv tcl 740/tcl
mv workdata 740/workdata
This will move everything from the old installation point (defined
by McINST_ROOT as /home/mcidas) to the new installation point
of /home/mcidas/740
o set McINST_ROOT to be /home/mcidas/740 (for example; what you call
the install point is somewhat arbitrary, so it could just as well be
/home/mcidas/mcidas740)
o cd to the mcidas7.4/src directory and redo the Tcl/Tk stuff. The reason
for this is that the installation target is going to be changed from
/home/mcidas to /home/mcidas/740. The Tcl/Tk module 'mcwish' gets the
location of the installation directory for Tcl "burned" into it when
it gets built:
cd mcidas7.4/src
rm tclcomp
cd ../tcl8.0/unix
make distclean
cd ../../tk8.0/unix
make distclean
cd ../../src
make
<wait until the rebuild of the Tcl/Tk stuff finishes>
At this point, you can reinstall just the Tcl/Tk stuff:
make install.tcl
o create the new runtime link structure in /home/mcidas
rm -f runtime
ln -s 740 runtime
ln -s runtime/admin admin
ln -s runtime/bin bin
ln -s runtime/data data
ln -s runtime/help help
ln -s runtime/inc inc
ln -s runtime/lib lib
ln -s runtime/man man
ln -s runtime/savedata savedata
ln -s runtime/tcl tcl
ln -s runtime/workdata workdata
o if 7.40 was installed in the standard location (i.e. /home/mcidas),
then the next few things will not have to be done:
change the shell configuration file (.profile or .kshrc in your
case) to reflect the new directory structure:
MCDATA=/home/mcidas/workdata
MCPATH=${MCDATA}:/home/mcidas/data:/home/mcidas/help
MCGUI=/home/mcidas/bin
etc.
o modify the setup for your remote ADDE server. This will be done
in the .mcenv file in the home directory of 'mcidas'. Change
MCDATA, etc. to reflect the new runtime links.
<as the user 'ldm'>
o edit the copy of 'xcd_run' that the LDM uses for XCD decoding. Modify
MCDATA, MCPATH, etc. to match the new McIDAS runtime links (e.g.
MCDATA=/home/mcidas/workdata) IF you want to use the new version each
time you install it. If you want to keep using a specific version
of the code for XCD and ROUTE PostProcess BATCH invocations, then change
MCDATA to /home/mcidas/740/workdata, MCPATH to
${MCDATA}:/home/mcidas/740/data:/home/mcidas/740/help, etc.
(There is a short sketch of both settings just after these steps.)
o modify the copy of the Bourne shell script, batch.k, that the LDM kicks
off for PostProcess BATCH files. The modifications will be the same
as for xcd_run.
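To make the xcd_run choice concrete, here is a minimal sketch of the two
settings (the same values would also go into batch.k and into .mcenv for
the remote ADDE server; whatever other variables your copies of those
scripts set would follow the same pattern):
o to always track whatever 'runtime' currently points at:
MCDATA=/home/mcidas/workdata
MCPATH=${MCDATA}:/home/mcidas/data:/home/mcidas/help
export MCDATA MCPATH
o to stay pinned to the 7.4 code no matter where 'runtime' points:
MCDATA=/home/mcidas/740/workdata
MCPATH=${MCDATA}:/home/mcidas/740/data:/home/mcidas/740/help
export MCDATA MCPATH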
At this point, I think that you should be able to restart your LDM
and run McIDAS sessions as 'mcidas'. If other users
were set up to use the McIDAS installation, things should keep working
if the 7.4 installation was in the recommended location (i.e. /home/mcidas).
I believe that takes care of the retrofit.
Now, when you build 7.60, your McINST_ROOT would be set to /home/mcidas/760.
After you install in that directory tree, you will be able to switch
from 7.40 to 7.60 with a simple change of the runtime link:
cd
rm runtime
ln -s 760 runtime
I believe that that just about covers things.
>but I think I know how to set up the
>new version (7.6) to have the right structure (on windfall).
re: existing GRID files
>Yeah, it was okay, we had already decoded this spool. So all you
>did was add to the existing GRID files, I believe.
OK, should be no problem.
>Once I was able to reset the GRIBDEC.PRO pointer, I tried to move on:
>
>As the ldm-user, I cp /dev/null to HRS.SPL (to clear out the old
>eta data files), then I cat'ed new grib files into the spool, these
>should be the mrf grids I mentioned at the beginning of this note.
>
>cat us008_gf078_97090500_YxAAx | xcd_run GRID
OK.
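Just to restate that sequence for the record, the clear-and-refill as
'ldm' would look something like this (the wildcard assumes the 22 files
for that run all share the us008_gf078_97090500_ prefix, and the HRS.SPL
path should be wherever your spool actually lives):
cp /dev/null HRS.SPL
cat us008_gf078_97090500_* | xcd_run GRID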
>This started to add new grib data to the spool. Since I thought
>I would only have to read the first part of a file to at least
>start a grid file, I stopped building the spool and went to try
>to read it.
You were correct in your assumption.
>That is, I logged in as user mcidas, started a mcidas
>session, poked the value of GRIBDEC.PRO (to be 0, so the pointer
>would not have the value of where you left off in the old spool
>yesterday, but would have a value that should make it look at the
>top of the new spool I had just created).
I don't think that POKEing the value to 0 was bad, but I would have
set it to be 4096.
>The poke worked fine.
Good.
>I ran DMGRID, and watched the processes with top, and I could see
>dmgrid.k start, run and eventually stop (although the DMGRID process
>continues to exist with a PID, but it is not an active process via top).
>No grids came out.
Rats!
>So, I thought my theory about this first piece
>at least writing out the header of a grid file was wrong.
No, you were/are correct.
>So then I catted all the files for the same time period above (97090500)
>to xcd_run (there were 22 of them, all around 220k, so a total of about
>4.6MB, less than the 30MB spool allocation). I exited from McIDAS,
>killed off the still running DMGRID process, restarted McIDAS,
>reset the pointer with the POKE command, and started DMGRID
>again....then I went off to smoke a pack of cigarettes...
OK (except for the cigarettes, that is).
>(just kidding, but I did go off to other windows to work, since
>I knew that it would take aeolus a while to chew through this
>spool.) Before I left, I started top, and could see dmgrid.k
>actively consuming CPU, chewing... when I came back, it was not
>active, BUT, there were still no grid files.
Double Rats!
>I had already looked through RTMODELS.CFG and NOGRIB.CFG to make
>sure I was not prohibiting the decoding of any grids, and I
>wasn't! SO, I don't get it? Can you provide any insight into
>why the grib decoder might not work (i.e., might not be able to
>decode the files I got from Dolores)? At this point I am
>pretty well stuck.
I saw that your copy of NOGRIB.CFG had no grids prohibited, so checking
RTMODELS.CFG and NOGRIB.CFG is exactly what I would have done too.
>Is it possible that since aeolus is running an older version
>of the decoders, and these are 1997 data files, that somehow the
>headers changed and it just doesn't recognize anything in these
>grib files?
I don't think so.
>I guess the only way I could tell this would be
>to try to set up the decoding to happen over on windfall.
>(Or maybe you could grab the first piece of one of these
>grib-sets of files, and see if you could decode it out there??)
OK, I will try this later.
>I guess this is a big pain in the ass, huh?
All problems are a pain until they are solved.
>Well, I do appreciate your assistance. Until I can read these
>new file types, any discussion of mechanizing the process is
>at a standstill (well, we will eventually get more eta data
>for other purposes, but I guess I have to deal with these data
>and this purpose first!)
OK.
>Where are the beginning, ending and fill pointers defined? I just looked
>through all the CFG files and I cannot find anything other than the
>name and size of the spool and the pointer GRIBDEC.PRO.
They are not defined in any of the configuration files; I just read the
code to see how things work. This kind of stuff is supposed to be hidden
from the end user.
>You can see the change in top, but I agree, this doesn't seem to lend
>itself to a script.
Right. Checking the fill pointer against the decode pointer (the one
in GRIBDEC.PRO) is really the only way to proceed.
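If you ever do want to script it, something along these lines would be
the idea. The byte offsets below are placeholders (I would have to dig
the real ones out of the code), so treat this strictly as a sketch:
#!/bin/sh
# compare the spool fill pointer with the XCD decode pointer
# NOTE: the byte offsets passed to 'word' are hypothetical placeholders
SPOOL=/home/mcidas/workdata/HRS.SPL      # adjust to your spool location
PRO=/home/mcidas/workdata/GRIBDEC.PRO
# print the 4-byte unsigned integer at byte offset $2 of file $1
word() { od -A n -t u4 -j $2 -N 4 "$1" | tr -d ' '; }
FILL=`word $SPOOL 0`        # placeholder offset for the fill pointer
DECODE=`word $PRO 0`        # placeholder offset for the decode pointer
if [ "$FILL" -eq "$DECODE" ]; then
    echo "DMGRID has caught up with the spool"
else
    echo "still decoding: fill=$FILL decode=$DECODE"
fi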
>Thanks again Tom.
I will give decoding one of the MRF grids a whirl here and see what
happens.
Tom