20010418: 20010404: 20010403: 20010321: rebuilding gempak for large gribs
- Subject: 20010418: 20010404: 20010403: 20010321: rebuilding gempak for large gribs
- Date: Wed, 18 Apr 2001 15:24:47 -0600
Art,
I rebuilt GEMPAK on Solaris 5.7, modifying $GEMPAK/include/gemprm.h,
MCHPRM.SunOS, and GEMPRM.PRM with these values:
gemprm.h:
#define LLMXGD ( 1200000 )
/* Max # grid points */
#define LLMDGG ( 3600000 )
/* Max mem for intern grids */
MCHPRM.SunOS:
PARAMETER ( LLMXGD = 1200000 )
C! Max # grid points
GEMPRM.PRM:
PARAMETER ( LLMDGG = 3600000 )
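A quick way to confirm the C and Fortran headers agree after editing them
(a sketch, assuming the include files live where the paths above indicate):

   cd $GEMPAK/include
   grep -E 'LLMXGD|LLMDGG' gemprm.h MCHPRM.SunOS GEMPRM.PRM
   # the value printed for each name should match across files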
The limits on my account:
[86]chiz on laraine --> limit
cputime unlimited
filesize unlimited
datasize 2097148 kbytes
stacksize 8192 kbytes
coredumpsize unlimited
descriptors 64
memorysize unlimited
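If a program dies at startup under a small stack limit, raising the limit
before running it is the usual first step (csh/tcsh syntax, matching the
"limit" output above; the sh/ksh equivalent is commented):

   limit stacksize unlimited      # csh/tcsh
   # ulimit -s unlimited          # sh/ksh/bash equivalent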
I placed the decoded file in ~gbuddy/incoming/2000050523_stage2.gem
GEMPAK-GDINFO>r
GRID FILE: /dist/gbuddy/incoming/2000050523_stage2.gem
GRID NAVIGATION:
PROJECTION: STR
ANGLES: 90.0 255.0 0.0
GRID SIZE: 1160 880
LL CORNER: 22.77 -120.38
UR CORNER: 45.62 -60.07
GRID ANALYSIS BLOCK:
UNKNOWN ANALYSIS TYPE
Number of grids in file: 1
Maximum number of grids in file: 5000
NUM TIME1 TIME2 LEVL1 LEVL2 VCORD PARM
1 000505/2300F001 0 NONE P01M
I also copied my current ldmbridge/dcgrib2 development tree files into
~gbuddy/nawips-5.6/patches/dcgrib2.
In particular, as I mentioned in previous exchanges with you, I reformulated
the decode_grib.c routine to dynamically allocate the space rather than
declaring it at compile time. You can update your tree accordingly
so that you are comparing apples to apples. Rebuild with:
cd $NAWIPS/unidata/ldmbridge/dcgrib2
make clean
make all
make install
make clean
I copied my dcgrib2 solaris executable to ~gbuddy/incoming/dcgrib2_sol_1200000.
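To exercise that binary against the sample data, an invocation along the
lines of the dbx run shown later in this thread should work (a sketch; point
GEMTBL at whatever tables directory your install uses):

   ~gbuddy/incoming/dcgrib2_sol_1200000 -v 3 -d dcgrib2.log \
       -e GEMTBL=$GEMPAK/tables < ~gbuddy/incoming/mul4.20000506.00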
Steve Chiswell
Unidata User Support
>From: "Arthur A. Person" <address@hidden>
>Organization: UCAR/Unidata
>Keywords: 200104181930.f3IJUdL00452
>On Wed, 18 Apr 2001, Unidata Support wrote:
>
>>
>> Art,
>>
>> Can you stick a sample of your grib in ~gbuddy/incoming, or tell me
>> where you are downloading the grids from?
>
>Okay, I uploaded mul4.20000506.00 to ~gbuddy/incoming. The data come from
>http://www.emc.ncep.noaa.gov/mmb/stage2/ but were retrieved on tape from the
>long-term archives by a PhD student here.
>
> Thanks again...
>
> Art.
>
>> I am at a point where I can rebuild my distribution for testing.
>>
>> Since your dbx output shows that it is at a line in the variable
>> declarations, and not at a point in executing a statement, I believe
>> that the stack size is the culprit. It may be possible to rearrange
>> space allocation.
>>
>> Steve Chiswell
>>
>>
>>
>> >From: "Arthur A. Person" <address@hidden>
>> >Organization: UCAR/Unidata
>> >Keywords: 200104181716.f3IHGCL20851
>>
>> >Steve,
>> >
>> >Here's what I get when I run dcgrib2 with dbx:
>> >
>> >(dbx) run
>> >Running: dcgrib2 -v 3 -d dcgrib2.log -e
>> >GEMTBL=/opt1/gempak/NAWIPS-5.6.a-big/gempak/tables < mul4.20000506.00
>> >(process id 6546)
>> > PDS bytes 1- 3 (pds.length) = 28
>> > PDS byte 4 (pds.version) = 2
>> > PDS byte 5 (pds.center) = 7
>> > PDS byte 6 (pds.process) = 152
>> > PDS byte 7 (pds.grid_id) = 240
>> > PDS byte 8 (pds.flag) = 192
>> > PDS byte 9 (pds.parameter) = 61
>> > PDS byte 10 (pds.vcoord) = 1
>> > PDS bytes 11 (pds.level_1) = 0
>> > PDS bytes 12 (pds.level_2) = 0
>> > PDS bytes 11-12 (pds.level) = 0
>> > PDS byte 13 (pds.year) = 100
>> > PDS byte 14 (pds.month) = 5
>> > PDS byte 15 (pds.day) = 5
>> > PDS byte 16 (pds.hour) = 23
>> > PDS byte 17 (pds.minute) = 0
>> > PDS byte 18 (pds.time_unit) = 1
>> > PDS byte 19 (pds.time_p1) = 0
>> > PDS byte 20 (pds.time_p2) = 1
>> > PDS byte 21 (pds.time_range) = 4
>> > PDS bytes 22-23 (pds.avg_num) = 0
>> > PDS byte 24 (pds.avg_miss) = 0
>> > PDS byte 25 (pds.century) = 20
>> > PDS byte 26 (pds.izero) = 0
>> > PDS bytes 27-28 (pds.dec_scale) = 1
>> > PDS EXT FLAG (1-app,0-nc,-1-rep) = 0
>> > PDS EXT STRING =
>> > GDS bytes 1 - 3 (gds.length) = 32
>> > GDS byte 4 (gds.NV) = 0
>> > GDS byte 5 (gds.PV) = 255
>> > GDS byte 6 (gds.grid_proj) = 5
>> > GDS bytes 7 - 8 (Nx) = 1160
>> > GDS bytes 9 - 10 (Ny) = 880
>> > GDS bytes 11 - 13 (La1) = 22774
>> > GDS bytes 14 - 16 (Lo1) = 239624
>> > GDS byte 17 (flag1) = 8
>> > GDS bytes 18 - 20 (LoV) = 255000
>> > GDS bytes 21 - 23 (Dx) = 4763
>> > GDS bytes 24 - 26 (Dy) = 4763
>> > GDS byte 27 (flag2) = 0
>> > GDS byte 28 (scan_mode) = 64
>> > GDS bytes 29 - 32 (skipped)
>> > BDS bytes 1 - 3 (bds.length) = 274394
>> > BDS byte 4 (bds.flag) = 4
>> > BDS bytes 5 - 6 (bds.binary_scale) = 0
>> > BDS bytes 7 - 10 (bds.ref_value) = 0.000000
>> > BDS byte 11 (bds.num_bits) = 10
>> > Changing center table to cntrgrib1.tbl
>> > Changing vertical coord table to vcrdgrib1.tbl
>> > Changing WMO parameter table to wmogrib2.tbl
>> > Changing center parameter table to ncepgrib2.tbl
>> >signal SEGV (access to address exceeded protections) in dcsubgrid at line
>> >37 in file "dcsubgrid.c"
>> > 37 unsigned char *bmptr, *indxb, kbit=0;
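A segfault reported at a declaration line like this is the classic signature
of a stack overflow: automatic arrays are carved out of the stack the moment
the function is entered, before any statement executes. A minimal
illustration with hypothetical names and sizes (not the actual dcsubgrid.c
declarations):

   /* With LLMXGD = 1200000, an automatic array of 3*LLMXGD floats needs
    * roughly 14.4 MB of stack, far beyond an 8192-kbyte limit, so the
    * "faulting line" the debugger reports is the declaration itself. */
   void dcsubgrid_like( void )
   {
       float grid[ 3 * 1200000 ];    /* claimed on function entry */
       grid[0] = 0.0f;               /* SIGSEGV can fire before this runs */
   }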
>> >
>> >
>> >Does this help?
>> > Thanks.
>> >
>> > Art.
>> >
>> >
>> >On Tue, 10 Apr 2001, Unidata Support wrote:
>> >
>> >> Art,
>> >>
>> >> The problem is still likely the array sizes when the program runs and
>> >> calls the subroutines that use the large arrays.
>> >>
>> >> I have changed decode_grib.c for:
>> >>
>> >> static unsigned char *gribbul=NULL;
>> >> static int *xgrid=NULL;
>> >>
>> >> if(gribbul == NULL)
>> >> {
>> >> gribbul = (unsigned char *)malloc(MAXGRIBSIZE);
>> >> xgrid = (int *)malloc(3*LLMXGD*sizeof(int));
>> >> }
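The same first-use allocation with the failure case checked might look like
this (a sketch; ensure_buffers is a hypothetical helper name, and MAXGRIBSIZE
and LLMXGD are assumed to come from the include files as above):

   #include <stdlib.h>

   static unsigned char *gribbul = NULL;
   static int           *xgrid   = NULL;

   /* Allocate the large buffers once, on first call, instead of
    * declaring them as fixed-size arrays at compile time. */
   static int ensure_buffers( void )
   {
       if ( gribbul == NULL )
       {
           gribbul = (unsigned char *) malloc( MAXGRIBSIZE );
           xgrid   = (int *) malloc( 3 * LLMXGD * sizeof(int) );
       }
       return ( gribbul != NULL && xgrid != NULL );   /* 0 on failure */
   }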
>> >>
>> >>
>> >> The largest use of space remaining is the dcfillgrid.c routine, which
>> >> should not be called since this isn't an Arakawa grid, but the float
>> >> arrays can be rearranged to allocate and free upon use of this routine,
>> >> as is done in dcsubgrid.c. Since you aren't entering that routine, the
>> >> core dump shouldn't be there.
>> >>
>> >> If dbx shows "where" the program is when the core dump occurs, that would
>> >> be useful. I believe it will be at the point of a subroutine where space
>> >> is being allocated.
>> >>
>> >> I looked at the table reading and it appears to be reading the last
>> >> entry, but I will double-check this.
>> >>
>> >>
>> >> Steve Chiswell
>> >> Unidata User Support
>> >>
>> >>
>> >>
>> >> >From: "Arthur A. Person" <address@hidden>
>> >> >Organization: UCAR/Unidata
>> >> >Keywords: 200104041804.f34I4iL00470
>> >>
>> >> >On Tue, 3 Apr 2001, Unidata Support wrote:
>> >> >
>> >> >> Art,
>> >> >>
>> >> >> If the program core dumps without any input etc, then it is likely
>> >> >> running into stack size problems in allocating the space required.
>> >> >>
>> >> >> You might try checking any stack size limits you have.
>> >> >> You can compile with -g to see where the dcgrib2 program is dumping.
>> >> >> It may be that an array size is larger than allowed by the system.
>> >> >>
>> >> >> I have defined MAXGRIBSIZE as 1000000 here for an ECMWF data set
>> >> >> (0.5 degree global grid) and that works for the product size; however,
>> >> >> the data size is only 720x361, so it is not using very large arrays
>> >> >> relative to the gemlib-defined LLMXGD.
>> >> >
>> >> >Okay, I increased the stack limit and that solved that problem. However,
>> >> >I now observe the following:
>> >> >
>> >> >/opt1/gempak/NAWIPS-5.6.a-big/unidata/ldmbridge/dcgrib2/dcgrib2 -v 3 -d
>> >> >dcgrib2.log -e GEMTBL=/opt1/gempak/NAWIPS-5.6.a-big/gempak/tables
>> >> > PDS bytes 1- 3 (pds.length) = 28
>> >> > PDS byte 4 (pds.version) = 2
>> >> > PDS byte 5 (pds.center) = 7
>> >> > PDS byte 6 (pds.process) = 152
>> >> > PDS byte 7 (pds.grid_id) = 240
>> >> > PDS byte 8 (pds.flag) = 192
>> >> > PDS byte 9 (pds.parameter) = 61
>> >> > PDS byte 10 (pds.vcoord) = 1
>> >> > PDS bytes 11 (pds.level_1) = 0
>> >> > PDS bytes 12 (pds.level_2) = 0
>> >> > PDS bytes 11-12 (pds.level) = 0
>> >> > PDS byte 13 (pds.year) = 100
>> >> > PDS byte 14 (pds.month) = 5
>> >> > PDS byte 15 (pds.day) = 5
>> >> > PDS byte 16 (pds.hour) = 23
>> >> > PDS byte 17 (pds.minute) = 0
>> >> > PDS byte 18 (pds.time_unit) = 1
>> >> > PDS byte 19 (pds.time_p1) = 0
>> >> > PDS byte 20 (pds.time_p2) = 1
>> >> > PDS byte 21 (pds.time_range) = 4
>> >> > PDS bytes 22-23 (pds.avg_num) = 0
>> >> > PDS byte 24 (pds.avg_miss) = 0
>> >> > PDS byte 25 (pds.century) = 20
>> >> > PDS byte 26 (pds.izero) = 0
>> >> > PDS bytes 27-28 (pds.dec_scale) = 1
>> >> > PDS EXT FLAG (1-app,0-nc,-1-rep) = 0
>> >> > PDS EXT STRING =
>> >> > GDS bytes 1 - 3 (gds.length) = 32
>> >> > GDS byte 4 (gds.NV) = 0
>> >> > GDS byte 5 (gds.PV) = 255
>> >> > GDS byte 6 (gds.grid_proj) = 5
>> >> > GDS bytes 7 - 8 (Nx) = 1160
>> >> > GDS bytes 9 - 10 (Ny) = 880
>> >> > GDS bytes 11 - 13 (La1) = 22774
>> >> > GDS bytes 14 - 16 (Lo1) = 239624
>> >> > GDS byte 17 (flag1) = 8
>> >> > GDS bytes 18 - 20 (LoV) = 255000
>> >> > GDS bytes 21 - 23 (Dx) = 4763
>> >> > GDS bytes 24 - 26 (Dy) = 4763
>> >> > GDS byte 27 (flag2) = 0
>> >> > GDS byte 28 (scan_mode) = 64
>> >> > GDS bytes 29 - 32 (skipped)
>> >> > BDS bytes 1 - 3 (bds.length) = 274394
>> >> > BDS byte 4 (bds.flag) = 4
>> >> > BDS bytes 5 - 6 (bds.binary_scale) = 0
>> >> > BDS bytes 7 - 10 (bds.ref_value) = 0.000000
>> >> > BDS byte 11 (bds.num_bits) = 10
>> >> > Changing center table to cntrgrib1.tbl
>> >> > Changing vertical coord table to vcrdgrib1.tbl
>> >> > Changing WMO parameter table to wmogrib2.tbl
>> >> > Changing center parameter table to ncepgrib2.tbl
>> >> >Segmentation Fault (core dumped)
>> >> >
>> >> >
>> >> >I added a line as follows to gribkey.tbl before running the above:
>> >> >
>> >> >007 x 152 240 data/YYYYMMDDHH_pcn@@@.gem 2000
>> >> >
>> >> >By-the-way... I added the above line to the bottom of the gribkey.tbl file
>> >> >and it had no effect until I moved it to the top of the file, so I think
>> >> >there may be a bug in dcgrib2 that prevents it from reading the last
>> >> >line(s) or doesn't notify of an array shortage for entries or some such
>> >> >thing, FYI.
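One common cause of a symptom like this, sketched hypothetically (this is
not the actual dcgrib2 table reader): a read loop that requires a trailing
newline silently drops a final entry that ends at end-of-file without one.

   #include <stdio.h>
   #include <string.h>

   /* Hypothetical illustration only: counts table entries, but drops a
    * last line that lacks a terminating newline. */
   int count_entries( const char *path )
   {
       FILE *fp = fopen( path, "r" );
       char  line[256];
       int   n = 0;

       if ( fp == NULL ) return -1;
       while ( fgets( line, sizeof(line), fp ) != NULL )
           if ( strchr( line, '\n' ) != NULL )    /* <-- skips that line */
               n++;
       fclose( fp );
       return n;
   }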
>> >> >
>> >> >The output log file contains:
>> >> >
>> >> >[28000] 010404/1347 [DC 3] Starting up.
>> >> >[28000] 010404/1347 [DC -11] No command line arguments found.
>> >> >[28000] 010404/1347 [DC 2] read 10240/16383 bytes strt 0 newstrt 10240
>> >> >[28000] 010404/1347 [DC 2] read 6144/6144 bytes strt 10240 newstrt 16384
>> >> >[28000] 010404/1347 [DC 2] read 9216/10238 bytes strt 0 newstrt 9216
>> >> >[28000] 010404/1347 [DC 2] read 7168/7168 bytes strt 9216 newstrt 16384
>> >> >[28000] 010404/1347 [DC 2] read 9214/9214 bytes strt 0 newstrt 9214
>> >> >[28000] 010404/1347 [DC 2] read 7170/7170 bytes strt 9214 newstrt 16384
>> >> >[28000] 010404/1347 [DC 2] read 9212/9212 bytes strt 0 newstrt 9212
>> >> >[28000] 010404/1347 [DC 2] read 7172/7172 bytes strt 9212 newstrt 16384
>> >> >[28000] 010404/1347 [DC 2] read 9210/9210 bytes strt 0 newstrt 9210
>> >> >[28000] 010404/1347 [DC 2] read 7174/7174 bytes strt 9210 newstrt 16384
>> >> >[28000] 010404/1347 [DC 2] read 9208/9208 bytes strt 0 newstrt 9208
>> >> >[28000] 010404/1347 [DC 2] read 7176/7176 bytes strt 9208 newstrt 16384
>> >> >[28000] 010404/1347 [DC 2] read 9206/9206 bytes strt 0 newstrt 9206
>> >> >[28000] 010404/1347 [DC 2] read 7178/7178 bytes strt 9206 newstrt 16384
>> >> >[28000] 010404/1347 [DC 2] read 9204/9204 bytes strt 0 newstrt 9204
>> >> >[28000] 010404/1347 [DC 2] read 7180/7180 bytes strt 9204 newstrt 16384
>> >> >[28000] 010404/1347 [DC 2] read 9202/9202 bytes strt 0 newstrt 9202
>> >> >[28000] 010404/1347 [DC 2] read 7182/7182 bytes strt 9202 newstrt 16384
>> >> >[28000] 010404/1347 [DC 2] read 9200/9200 bytes strt 0 newstrt 9200
>> >> >[28000] 010404/1347 [DC 2] read 7184/7184 bytes strt 9200 newstrt 16384
>> >> >[28000] 010404/1347 [DC 2] read 9198/9198 bytes strt 0 newstrt 9198
>> >> >[28000] 010404/1347 [DC 2] read 7186/7186 bytes strt 9198 newstrt 16384
>> >> >[28000] 010404/1347 [DC 2] read 9196/9196 bytes strt 0 newstrt 9196
>> >> >[28000] 010404/1347 [DC 2] read 7188/7188 bytes strt 9196 newstrt 16384
>> >> >[28000] 010404/1347 [DC 2] read 9194/9194 bytes strt 0 newstrt 9194
>> >> >[28000] 010404/1347 [DC 2] read 7190/7190 bytes strt 9194 newstrt 16384
>> >> >[28000] 010404/1347 [DC 2] read 9192/9192 bytes strt 0 newstrt 9192
>> >> >[28000] 010404/1347 [DC 2] read 7192/7192 bytes strt 9192 newstrt 16384
>> >> >[28000] 010404/1347 [DC 2] read 9190/9190 bytes strt 0 newstrt 9190
>> >> >[28000] 010404/1347 [DC 2] read 7194/7194 bytes strt 9190 newstrt 16384
>> >> >[28000] 010404/1347 [DC 2] read 9188/9188 bytes strt 0 newstrt 9188
>> >> >[28000] 010404/1347 [DC 2] read 7196/7196 bytes strt 9188 newstrt 16384
>> >> >[28000] 010404/1347 [DC 2] read 9186/9186 bytes strt 0 newstrt 9186
>> >> >[28000] 010404/1347 [DC 2] read 7198/7198 bytes strt 9186 newstrt 16384
>> >> >[28000] 010404/1347 [DC 2] read 9184/9184 bytes strt 0 newstrt 9184
>> >> >[28000] 010404/1347 [DC 2] read 7200/7200 bytes strt 9184 newstrt 16384
>> >> >[28000] 010404/1347 [DC 2] read 9182/9182 bytes strt 0 newstrt 9182
>> >> >[28000] 010404/1347 [DC 2] read 7202/7202 bytes strt 9182 newstrt 16384
>> >> >[28000] 010404/1347 [DC 2] read 9180/9180 bytes strt 0 newstrt 9180
>> >> >[28000] 010404/1347 [DC 2] read 7204/7204 bytes strt 9180 newstrt 16384
>> >> >[28000] 010404/1347 [DC 2] read 9178/9178 bytes strt 0 newstrt 9178
>> >> >[28000] 010404/1347 [DC 2] read 7206/7206 bytes strt 9178 newstrt 16384
>> >> >[28000] 010404/1347 [DC 2] read 9176/9176 bytes strt 0 newstrt 9176
>> >> >[28000] 010404/1347 [DC 2] read 7208/7208 bytes strt 9176 newstrt 16384
>> >> >[28000] 010404/1347 [DC 2] read 9174/9174 bytes strt 0 newstrt 9174
>> >> >[28000] 010404/1347 [DC 2] read 7210/7210 bytes strt 9174 newstrt 16384
>> >> >[28000] 010404/1347 [DC 2] read 9172/9172 bytes strt 0 newstrt 9172
>> >> >[28000] 010404/1347 [DC 2] read 7212/7212 bytes strt 9172 newstrt 16384
>> >> >[28000] 010404/1347 [DC 2] read 8856/9170 bytes strt 0 newstrt 8856
>> >> >[28000] 010404/1347 [DCGRIB 0] grib tables [cntr 7 edtn 1 vers 2]
>> >> >[28000] 010404/1347 [DCGRIB 0] Opened data/2000050523_pcn240.gem model:152
>> >> >[28000] 010404/1347 [DCGRIB 0] grid:240
>> >> >
>> >> >
>> >> >A gempak file is produced, but I'm not sure if it's usable. gdinfo shows:
>> >> >
>> >> > GEMPAK-GDINFO>list
>> >> > GDFILE = 2000050523_pcn240.gem
>> >> > LSTALL = YES
>> >> > OUTPUT = T
>> >> > GDATTIM = ALL
>> >> > GLEVEL = ALL
>> >> > GVCORD = ALL
>> >> > GFUNC = ALL
>> >> > GEMPAK-GDINFO>r
>> >> >
>> >> > GRID FILE: 2000050523_pcn240.gem
>> >> >
>> >> > GRID NAVIGATION:
>> >> > UNKNOWN GRID NAVIGATION
>> >> > GRID ANALYSIS BLOCK:
>> >> > UNKNOWN ANALYSIS TYPE
>> >> >
>> >> > Number of grids in file: 0
>> >> >
>> >> > Maximum number of grids in file: 2000
>> >> >
>> >> > [GDU 2] Did not find any matching grids.
>> >> > Parameters requested: GDFILE,LSTALL,OUTPUT,GDATTIM,GLEVEL,GVCORD,GFUNC.
>> >> >
>> >> >
>> >> >Can you give me some ideas? Is there a problem trying to use the 240
>> >> >grid, or something else? I have a description for the 240 grid if it's
>> >> >needed.
>> >> >
>> >> > Thanks again.
>> >> >
>> >> > Art.
>> >> >>
>> >> >> Steve Chiswell
>> >> >> Unidata User Support
>> >> >>
>> >> >>
>> >> >>
>> >> >> >From: "Arthur A. Person" <address@hidden>
>> >> >> >Organization: UCAR/Unidata
>> >> >> >Keywords: 200104032111.f33LBAL14985
>> >> >>
>> >> >> >On Wed, 21 Mar 2001, Unidata Support wrote:
>> >> >> >
>> >> >> >> >From: "Arthur A. Person" <address@hidden>
>> >> >> >> >Organization: UCAR/Unidata
>> >> >> >> >Keywords: 200103211815.f2LIFnL03774
>> >> >> >>
>> >> >> >> >Hi,
>> >> >> >> >
>> >> >> >> >I want to decode 4km precip grib data from NCEP using dcgrib2 but
>> >> >> >> >need to make mods to allow for larger gribs and grids (1160 x 880).
>> >> >> >> >The NCEP instructions said to use nagrib, but I should be able to
>> >> >> >> >use dcgrib2, correct? I've identified that I need to make
>> >> >> >> >MAXGRIBSIZE larger in decode_grib.c, and I think I need to make
>> >> >> >> >LLMXGD larger for the overall grid size. Can you tell me if LLMXGD
>> >> >> >> >is the only other thing I need to modify to make this work
>> >> >> >> >correctly? Do I then need to rebuild the whole gempak package, or
>> >> >> >> >can I just build part of it to get dcgrib2 to work?
>> >> >> >> >
>> >> >> >> > Thanks.
>> >> >> >> >
>> >> >> >> > Art.
>> >> >> >> >
>> >> >> >> >Arthur A. Person
>> >> >> >> >Research Assistant, System Administrator
>> >> >> >> >Penn State Department of Meteorology
>> >> >> >> >email: address@hidden, phone: 814-863-1563
>> >> >> >> >
>> >> >> >>
>> >> >> >>
>> >> >> >> Art,
>> >> >> >>
>> >> >> >> You need to rebuild the entire package (including all the $GEMLIB
>> >> >> >> library files) whenever changing any array sizes defined in the
>> >> >> >> $GEMPAK/include files (since this will change common block sizes and
>> >> >> >> the sizes passed in subroutine calls). You can run "make distclean"
>> >> >> >> from $NAWIPS to remove the $GEMLIB files as well as any other build
>> >> >> >> files from the tree.
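Spelled out, a rebuild after an include-file change would run along these
lines (a sketch; the top-level target names follow the dcgrib2 steps earlier
in this thread and may differ in your tree):

   cd $NAWIPS
   make distclean     # removes $GEMLIB libraries and other build files
   make all           # rebuilds everything with the new array sizes
   make install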
>> >> >> >>
>> >> >> >> LLMXGD should be changed in both the Fortran GEMPRM.xxx file and
>> >> >> >> the C gemprm.h file. The computation heap for grids, defined as
>> >> >> >> LLMDGG, is related, since this is the amount of space used by
>> >> >> >> computations that use more than one grid (each grid in the
>> >> >> >> computation uses this space), so it should probably be large enough
>> >> >> >> to hold at least 4x the grid size.
>> >> >> >>
>> >> >> >> The MAXGRIBSIZE parameter in decode_grib.c is the size of the buffer
>> >> >> >> for the largest "grib message" (that is, the GRIB packed message)
>> >> >> >> you will encounter in the data stream.
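As a worked example with the numbers from this thread: LLMXGD = 1200000
makes four grids' worth of heap 4 x 1200000 = 4800000 (the values actually
tried here were 3600000, i.e. 3x, and 7200000, i.e. 6x), and the largest BDS
in the sample data is 274394 bytes, so MAXGRIBSIZE = 500000 leaves
comfortable headroom.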
>> >> >> >
>> >> >> >Okay, I made a new gempak installation tree called NAWIPS-5.6.a-big and
>> >> >> >put a fresh copy of gempak there. I modified the following:
>> >> >> >
>> >> >> > gempak/include/MCHPRM.SunOS -> LLMXGD=1200000
>> >> >> > gempak/include/gemprm.h -> LLMXGD=1200000
>> >> >> > gempak/include/GEMPRM.PRM -> LLMDGG=7200000
>> >> >> > unidata/ldmbridge/dcgrib2/decode_grib.c -> MAXGRIBSIZE=500000
>> >> >> >
>> >> >> >I re-make'd and re-install'd (on Solaris 2.7) and tried running the
>> >> >> >dcgrib2 decoder, and it core dumps. I tried just an empty command to see
>> >> >> >what it would do and I get:
>> >> >> >
>> >> >> > [16863] 010403/1658 [DC 3] Starting up.
>> >> >> > [16863] 010403/1658 [DC -11] No command line arguments found.
>> >> >> > Segmentation Fault (core dumped)
>> >> >> >
>> >> >> >I've apparently missed something else that needs to be modified... any
>> >> >> >clues?
>> >> >> >
>> >> >> > Thanks.
>> >> >> >
>> >> >> > Art.
>> >> >> >
>> >> >> >
>> >> >> >Arthur A. Person
>> >> >> >Research Assistant, System Administrator
>> >> >> >Penn State Department of Meteorology
>> >> >> >email: address@hidden, phone: 814-863-1563
>> >> >> >
>> >> >>
>> >> >
>> >> >Arthur A. Person
>> >> >Research Assistant, System Administrator
>> >> >Penn State Department of Meteorology
>> >> >email: address@hidden, phone: 814-863-1563
>> >> >
>> >>
>> >
>> >Arthur A. Person
>> >Research Assistant, System Administrator
>> >Penn State Department of Meteorology
>> >email: address@hidden, phone: 814-863-1563
>> >
>>
>> ****************************************************************************
>> Unidata User Support UCAR Unidata Program
>> (303)497-8644 P.O. Box 3000
>> address@hidden Boulder, CO 80307
>> ----------------------------------------------------------------------------
>> Unidata WWW Service http://www.unidata.ucar.edu/
>> ****************************************************************************
>>
>
>Arthur A. Person
>Research Assistant, System Administrator
>Penn State Department of Meteorology
>email: address@hidden, phone: 814-863-1563
>