19990916: McIDAS: Disk Partitioning
- Subject: 19990916: McIDAS: Disk Partitioning
- Date: Thu, 16 Sep 1999 18:39:06 -0600
>From: Michael Keables <address@hidden>
>Organization: DU
>Keywords: 199909152035.OAA16060 LDM ldm-mcidas
Mike,
>First, thanks so much for all of your help with cyclone and the LDM. Being
>a unix novice, I make a lot of mistakes and as such, I really appreciate
>not only your assistance but the explanations as well.
After I got your voicemail, we ran amok on cyclone :-)
>Here's the deal with cyclone. It has two hard drives installed in it with
>/export/home residing on one and everything else on the other:
>
> df -k
>Filesystem kbytes used avail capacity Mounted on
>/proc 0 0 0 0% /proc
>/dev/dsk/c0t0d0s0 96455 40768 46042 47% /
>/dev/dsk/c0t0d0s6 877790 665487 150858 82% /usr
>fd 0 0 0 0% /dev/fd
>/dev/dsk/c0t0d0s1 413639 250169 122107 68% /var
>/dev/dsk/c0t1d0s7 8509324 339115 8085116 5% /export/home
>/dev/dsk/c0t0d0s5 3007086 917816 2029129 32% /opt
>/dev/dsk/c0t0d0s7 4031022 140916 3849796 4% /usr/local
>swap 643192 400 642792 1% /tmp
This was the original setup on cyclone. As you read through this email,
you will see that things have changed (for the better).
>Given the above, how do you suggest I deal with the matter of /var being
>set to 400 MB? Just looking at the allocation, I can reduce the storage
>allocated to /usr/local and transfer it to /var, or I can have the data
>stored on /export/home which you indicated was a less desirable solution.
Here is the implementation (i.e. what we did) of our recommendation:
<login as ldm>
ldmadmin stop
<login as root>
<move everything in /usr/local into /opt>
cd /opt
ufsdump 0f - /dev/dsk/c0t0d0s7 | ufsrestore rf -
sync
rm restoresymtable
umount /usr/local
newfs -i 8192 /dev/dsk/c0t0d0s7
<move everything in /export/home to what was /usr/local>
mount /dev/dsk/c0t0d0s7 /mnt
cd /mnt
ufsdump 0f - /dev/dsk/c0t1d0s7 | ufsrestore rf -
sync
cd /mnt
rm restoresymtable
cd /
umount /mnt
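An aside, in case the ufsdump/ufsrestore pipelines above are unfamiliar.
The general pattern (the device name below is only a placeholder) is:

  cd /destination                 # ufsrestore writes into the current directory
  ufsdump 0f - /dev/dsk/cXtYdZsN | ufsrestore rf -
                                  # level 0 (full) dump of the source slice to
                                  # stdout, piped into ufsrestore reading from
                                  # stdin; ownership, modes and links come along
  sync                            # flush the copied data to disk
  rm restoresymtable              # bookkeeping file left behind by ufsrestore

A quick 'df -k' on the source and destination is a reasonable sanity check
before newfs'ing the source slice.

Also, one step not shown explicitly above, but which the 'cd /usr/local/ldm'
further down assumes, is making /usr/local point at /opt now that it no
longer has a filesystem of its own.  Presumably something like:

  rmdir /usr/local
  ln -s /opt /usr/local

was done here; that is what '/usr/local is now the same as /opt' near the
end of this note refers to.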
cd /etc
<edit vfstab>
change:
/dev/dsk/c0t0d0s7  /dev/rdsk/c0t0d0s7  /usr/local    ufs  2  yes  -
to:
/dev/dsk/c0t0d0s7  /dev/rdsk/c0t0d0s7  /export/home  ufs  2  yes  -
Do the same thing for the /export/home entry in vfstab:
change:
/dev/dsk/c0t1d0s7  /dev/rdsk/c0t1d0s7  /export/home  ufs  2  yes  -
to:
/dev/dsk/c0t1d0s7  /dev/rdsk/c0t1d0s7  /data         ufs  2  yes  -
reboot
newfs -i 8192 /dev/dsk/c0t1d0s7
After this, what was /export/home will be empty.
cd /
mkdir /data
mount /data
mkdir /data/ldm
mkdir /data/ldm/logs
mkdir /data/ldm/mcidas
chown ldm:ldmgroup /data/ldm /data/ldm/logs /data/ldm/mcidas
chmod 775 /data/ldm/mcidas
cp -p /var/data/mcidas/* /data/ldm/mcidas
cd /var
rm -rf data
cd /usr/local/ldm
rm data
ln -s /data/ldm data
ln -s /data/ldm/logs logs
<login as 'ldm'>
<edit etc/pqact.conf>
change all occurrences of /var/data/mcidas to data/mcidas
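If you would rather not make that change by hand, a sed one-liner (run from
the 'ldm' home directory; this is just a convenience, not a requirement) does
the same thing and lets you review the result before putting it in place:

  sed 's|/var/data/mcidas|data/mcidas|g' etc/pqact.conf > etc/pqact.conf.new
  diff etc/pqact.conf etc/pqact.conf.new
  mv etc/pqact.conf.new etc/pqact.conf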
ldmadmin mkqueue
touch data/logs/ldmd.log
hupsyslog
ldmadmin start
You see, piece of cake ;-)
This leaves you with the following disk configuration:
cyclone# df -k
Filesystem kbytes used avail capacity Mounted on
/proc 0 0 0 0% /proc
/dev/dsk/c0t0d0s0 96455 40767 46043 47% /
/dev/dsk/c0t0d0s6 877790 665199 151146 82% /usr
fd 0 0 0 0% /dev/fd
/dev/dsk/c0t0d0s1 413639 22750 349526 7% /var
/dev/dsk/c0t0d0s5 3007086 1058723 1888222 36% /opt
/dev/dsk/c0t0d0s7 4031022 340533 3650179 9% /export/home
swap 673024 16 673008 1% /tmp
/dev/dsk/c0t1d0s7 8509324 180144 8244087 3% /data
Now, you can put LOTS of data in /data/ldm... Users' home directories
will go into /export/home where there is plenty of room. /usr/local
is now the same as /opt.
There were several services you don't need, as well as several known
security holes; we fixed these for you by commenting out the following
entries in /etc/inetd.conf as 'root':
name (these are just annoying)
uucp (these are just annoying)
discard (these are just annoying)
daytime (these are just annoying)
chargen (these are just annoying)
100083/1 (tooltalk database is a known security problem)
100221/1 (KCMS Profiler Server is a known security problem)
100068/2-5 (calendar manager is a known security problem)
kill -HUP 148 (148 is the process ID of inetd)
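For reference, 'commenting out' just means putting a '#' at the front of the
relevant lines in /etc/inetd.conf; a disabled daytime entry, for example,
looks something like:

  #daytime  stream  tcp  nowait  root  internal

(the exact fields in your file may differ).  The inetd process ID will be
different on any other machine; it can be found and signalled with:

  ps -ef | grep inetd
  kill -HUP <pid_of_inetd>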
>My current plan is to still have cyclone ingest and decode the data and
>then to have students using the linux machines to access and display the
>data products.
This sounds like a good, usable plan.
The next thing to do is to finish off the McIDAS installation:
<login as 'mcidas'>
cd mcidas7.6/update
ftp ftp.unidata.ucar.edu
<user>
<pass>
cd unix/760/bugfix
bin
get mcupdate.tar.Z
quit
./mcunpack
cd ../src
make all
make install.all
cd ~/data
cp EXAMPLE.NAM LOCAL.NAM
cp DSSERVE.BAT LSSERVE.BAT
cp DATALOC.BAT LOCDATA.BAT
<edit LOCDATA.BAT and change all occurrences of fully_qualified_hostname
to cyclone.natnet.du.edu>
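As with pqact.conf above, the hostname substitution can be scripted if you
prefer (purely optional; the edit by hand works just as well):

  sed 's/fully_qualified_hostname/cyclone.natnet.du.edu/g' LOCDATA.BAT > LOCDATA.new
  mv LOCDATA.new LOCDATA.BAT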
<edit LOCAL.NAM and set the directories for McIDAS data files to
/data/ldm/mcidas>
cd ~/workdata
<start a McIDAS-X session>
REDIRECT REST LOCAL.NAM
TE XCDDATA "/data/ldm/mcidas
BATCH XCD.BAT
BATCH XCDDEC.BAT
DECINFO SET DMGRID ACTIVE
BATCH LSSERVE.BAT
ROUTE REL C
ROUTE REL N
EXIT
The TE XCDDATA... set of commands sets up the McIDAS environment
for running the XCD decoders.
The BATCH LSSERVE.BAT line sets up the ADDE data sets for the remote
ADDE server.
The ROUTE REL commands enable the creation of composite images by
way of ROUTE PostProcess BATCH invocations. This works because I
configured ~ldm/decoders/batch.k yesterday.
<edit mcscour.sh and set MCHOME to /export/home/mcidas. This sets up
McIDAS data file scouring>
setenv EDITOR vi
crontab -e
<add entry for scouring McIDAS data files>
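The crontab line itself is up to you; just as an illustration (the path to
mcscour.sh below is a guess; use wherever it actually lives under the
'mcidas' account), an entry like:

  30 3 * * * /export/home/mcidas/bin/mcscour.sh

would run the scouring once a day at 3:30 am local time.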
<login as 'ldm'>
ldmadmin stop
cd etc
<edit ldmd.conf and add the following entry>
exec    "xcd_run MONITOR"
<edit pqact.conf and uncomment the entries for the XCD stuff that I added
yesterday>
# Entries for XCD decoders
DDPLUS|IDS      ^.*     PIPE
        xcd_run DDS
HRS     ^.*     PIPE
        xcd_run HRS
Restart the LDM:
ldmadmin start
You now have XCD decoders producing McIDAS data files. The data will
be scoured by virtue of the crontab entry above made as the user 'mcidas'.
Your remote ADDE server is working; I was able to point to it and list
and load data back to my machine here at Unidata.
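If you want to repeat that check from one of the Linux machines yourself,
pointing a McIDAS-X session at cyclone and listing point data is enough; for
example, using the RTPTSRC group that shows up in the PTLIST output below:

  DATALOC ADD RTPTSRC cyclone.natnet.du.edu
  PTLIST RTPTSRC/PTSRCS.ALL FORM=FILE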
>Thanks in advance.
You are welcome. There may be a couple of issues related to proper
execution of some McIDAS commands. I have discovered a bug in the
Sun SC5.0 Fortran compiler that affects several routines here at Unidata.
I can see that the same problem is showing up on your system.  For example:
PTLIST RTPTSRC/SFCHOURLY
Row : 1 Col : 8
Program terminated, segmentation violation
The other programs that will probably have problems are GRDCOPY and GRDDISP.
I am working on finding a fix/workaround for the SC5.0 bug.
>P.S. NLDN data are now being ingested as well as the mcidas and FSL2
>datastreams.
OK, I see that:
PTLIST RTPTSRC/PTSRCS.ALL FORM=FILE
Pos Description Schema NRows NCols Date
------ -------------------------------- ------ ----- ----- -------
9 SAO/METAR data for 16 SEP 1999 ISFC 72 4500 1999259
10 SAO/METAR data for 17 SEP 1999 ISFC 72 4500 1999260
19 Mand. Level RAOB for 16 SEP 1999 IRAB 8 1300 1999259
20 Mand. Level RAOB for 17 SEP 1999 IRAB 8 1300 1999260
29 Sig. Level RAOB for 16 SEP 1999 IRSG 16 6000 1999259
30 Sig. Level RAOB for 17 SEP 1999 IRSG 16 6000 1999260
39 SHIP/BUOY data for 16 SEP 1999 ISHP 24 2000 1999259
40 SHIP/BUOY data for 17 SEP 1999 ISHP 24 2000 1999260
59 SYNOPTIC data for 16 SEP 1999 SYN 8 6000 1999259
60 SYNOPTIC data for 17 SEP 1999 SYN 8 6000 1999260
69 PIREP/AIREP data for 16 SEP 1999 PIRP 24 1500 1999259
70 PIREP/AIREP data for 17 SEP 1999 PIRP 24 1500 1999260
79 NLDN data for 16 Sep 1999 NLDN 1000 1000 1999259
80 NLDN data for 17 Sep 1999 NLDN 1000 1000 1999260
89 Hourly Prof data for 16 Sep 1999 WPRO 48 35 1999259
90 Hourly Prof data for 17 Sep 1999 WPRO 48 35 1999260
99 6-min Prof data for 16 Sep 1999 WPR6 480 35 1999259
100 6-min Prof data for 17 Sep 1999 WPR6 480 35 1999260
PTLIST: Done
I will look at your setup more tomorrow...
Tom