[LDM #DYA-291592]: Status of SD Mines LDM/GEMPAK installation (cont.)
- Subject: [LDM #DYA-291592]: Status of SD Mines LDM/GEMPAK installation (cont.)
- Date: Mon, 10 Jul 2017 12:50:27 -0600
Hi Bill,
re:
> We seem to be having some trouble with our LDM server. It seems to be
> stopping periodically and I can't determine why.
Are there any informative messages at the time(s) of the stop(s) in the
LDM log file (~ldm/logs/ldmd.log I think)?
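A quick way to scan that log for clues is sketched below. The log path is an assumption (older LDMs log to ~ldm/logs/ldmd.log, newer ones to ~ldm/var/logs/ldmd.log), so adjust it to your installation:

```shell
# Sketch: scan the LDM log for error-level lines near a stop.
# The default path here is an assumption -- point LDM_LOG at the
# file your LDM actually writes.
LOG="${LDM_LOG:-$HOME/logs/ldmd.log}"

if [ -f "$LOG" ]; then
    # Lines that usually explain an unexpected shutdown
    grep -iE 'error|fatal|assert|exit' "$LOG" | tail -n 20
    # The last entries written before the most recent stop
    tail -n 50 "$LOG"
else
    echo "log not found: $LOG"
fi
```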
re:
> I’ve run yum
> updates and rebooted more than once as well as made a fresh queue but
> nothing is working.
Can we log in to your machine? (I tried to SSH to hurricane, but I can't
get to it:
% ssh address@hidden
ssh: connect to host hurricane.ias.sdsmt.edu port 22: No route to host
I don't remember whether you use a different port for SSH...
> Bill
>
> ------------------------------------------------
> Bill Capehart <address@hidden>
> Atmospheric and Environmental Sciences Program Coordinator
> Civil and Environmental Engineering
> 201 Mineral Industries Building
> South Dakota School of Mines and Technology
> 501 East St Joseph Street
> Rapid City, SD 57701-3995
> Ph: +1-605-394-1994 Mobile: +1-605-484-5692
>
> On 5/9/17, 15:14 MDT, "Tom Yoksas" <address@hidden> wrote:
>
> Hi Bill,
>
> On 05/09/2017 02:59 PM, Capehart, William J wrote:
> > Yes. What was hurricane is now named squall. What was squall has
> > been turned off for now.
>
> OK.
>
> re:
> > Our big test will start in 15 minutes when our 18Z wrf cycle starts.
>
> So, you are running WRF locally? I didn't see anything that would
> get the WRF output and run it through the LDM and/or GEMPAK decoders.
> Am I missing something, or are your WRF efforts outside of the
> LDM/GEMPAK setup I did on the new squall?
>
> Cheers,
> Tom
>
> > ------------------------------------------------
> > Bill Capehart <address@hidden>
> > Atmospheric and Environmental Sciences Program Coordinator
> > Civil and Environmental Engineering
> > 201 Mineral Industries Building
> > South Dakota School of Mines and Technology
> > 501 East St Joseph Street
> > Rapid City, SD 57701-3995
> > Ph: +1-605-394-1994 Mobile: +1-605-484-4692
> >
> > On 5/9/17, 14:56 MDT, "Tom Yoksas" <address@hidden> wrote:
> >
> > Hi Bill,
> >
> > On 05/09/2017 12:18 PM, Capehart, William J wrote:
> > > And we have changed the names and all is good!
> >
> > Very good. I assume you mean that the machine that was hurricane
> > is now squall, and vice versa? I know that the first part of this
> > is true as I just logged in to the new squall. While I was there,
> > I took care of updating the hostname in the LDM registry,
> > ~ldm/etc/registry.xml (I mentioned the need to do this in a
> > previous email).
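For reference, that hostname update can also be done with the LDM's regutil utility rather than by editing registry.xml by hand. A sketch, assuming the hostnames from this thread and that it is run as the 'ldm' user:

```shell
# Sketch: update the LDM registry hostname after a machine rename.
# regutil ships with the LDM; the guard lets this run harmlessly on
# a machine that has no LDM installed.
if command -v regutil >/dev/null 2>&1; then
    regutil -s squall.ias.sdsmt.edu /hostname  # set the new name
    regutil /hostname                          # print it back to verify
else
    echo "regutil not on PATH (run this on the LDM host as 'ldm')"
fi
```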
> >
> > re:
> > > We are ingesting data. The big test is the real-time model and
> > > if it engages this afternoon!
> >
> > You are already receiving model output, and it is being converted
> > into GEMPAK format:
> >
> > <as 'ldm' on the new squall>
> >
> > ls -alt /squall1/gempak-data/model
> >
> > total 16
> > drwxr-xr-x. 2 ldm fxalpha 4096 May 9 14:53 rap
> > drwxr-xr-x. 2 ldm fxalpha 4096 May 9 14:25 qpf
> > drwxr-xr-x. 2 ldm fxalpha 207 May 9 14:25 nam
> > drwxr-xr-x. 2 ldm fxalpha 4096 May 9 13:50 ffg
> > drwxr-xr-x. 2 ldm fxalpha 196 May 9 12:59 nww
> > drwxrwxr-x. 15 ldm fxalpha 4096 May 9 11:47 ..
> > drwxr-xr-x. 2 ldm fxalpha 94 May 9 11:45 ecmwf
> > drwxr-xr-x. 12 ldm fxalpha 123 May 9 11:42 .
> > drwxr-xr-x. 2 ldm fxalpha 36 May 9 11:41 ens
> > drwxr-xr-x. 2 ldm fxalpha 33 May 9 10:12 gfs
> > drwxr-xr-x. 2 ldm fxalpha 61 May 9 09:32 ukmet
> > drwxr-xr-x. 2 ldm fxalpha 64 May 9 09:24 nam-ak
> >
> > You may, of course, want/need to update your LDM configuration
> > file (~ldm/etc/ldmd.conf) REQUEST(s) to get more model output of
> > interest, and your LDM pattern-action file (~ldm/etc/pqact.conf)
> > to process the new model output REQUESTed.
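As an illustration of the two files working together (the feedtype, pattern, and output path below are hypothetical placeholders, not taken from this setup): a REQUEST line in ldmd.conf names a feedtype, an extended-regular-expression product pattern, and an upstream host, and a pqact.conf entry matches arriving products and dispatches them with an action line that begins with a literal tab:

```
# ~ldm/etc/ldmd.conf -- request all NGRID products from an upstream host
REQUEST NGRID ".*" idd.unidata.ucar.edu

# ~ldm/etc/pqact.conf -- file each matching product under a
# hypothetical data root; \1 is the first captured group from
# the pattern, and the action line must start with a tab
NGRID	(.*)
	FILE	-close	/squall1/gempak-data/model/ngrid/\1
```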
> >
> > re:
> > > Thanks for everything
> >
> > No worries.
> >
> > Cheers,
> >
> > Tom
> >
> > > ------------------------------------------------
> > > Bill Capehart <address@hidden>
> > > Atmospheric and Environmental Sciences Program Coordinator
> > > Civil and Environmental Engineering
> > > 201 Mineral Industries Building
> > > South Dakota School of Mines and Technology
> > > 501 East St Joseph Street
> > > Rapid City, SD 57701-3995
> > > Ph: +1-605-394-1994 Mobile: +1-605-484-4692
> > >
> > > On 5/9/17, 12:00 MDT, "Tom Yoksas" <address@hidden> wrote:
> > >
> > > Hi Bill,
> > >
> > > On 05/09/17 11:15, Capehart, William J wrote:
> > > > Thanks Much!
> > >
> > > No worries.
> > >
> > > re:
> > > > Is it time then to finally put squall out of its misery and
> > > > move to Hurricane full time?
> > >
> > > That is something only you can answer since I don't know all of
> > > the things that squall was used for, AND I have not made an
> > > exhaustive study of the various files/directories in squall's
> > > 'ldm' account to see if there are things that need to be saved
> > > before decommissioning.
> > >
> > > One comment that I feel safe in making is that it would be a
> > > good idea to wait for a few days to make sure that the 'ldm'
> > > installation/setup on hurricane is doing the things you want it
> > > to do AND all of the things that the same account was used for
> > > on squall.
> > >
> > > Cheers,
> > >
> > > Tom
> > >
> > > > ------------------------------------------------
> > > > Bill Capehart <address@hidden>
> > > > Atmospheric and Environmental Sciences Program Coordinator
> > > > Civil and Environmental Engineering
> > > > 201 Mineral Industries Building
> > > > South Dakota School of Mines and Technology
> > > > 501 East St Joseph Street
> > > > Rapid City, SD 57701-3995
> > > > Ph: +1-605-394-1994 Mobile: +1-605-484-4692
> > > >
> > > > On 5/9/17, 09:54 MDT, "Tom Yoksas" <address@hidden> wrote:
> > > >
> > > > Hi Bill,
> > > >
> > > > On 05/09/17 07:37, Capehart, William J wrote:
> > > > > Squall is now set to the same password
> > > >
> > > > Excellent, thanks. Access to the 'ldm' account on squall
> > > > allowed me to really see how it was set up, LDM/GEMPAK-wise,
> > > > and this, in turn, helped guide me on the setup on hurricane.
> > > >
> > > > re:
> > > > > Unfortunately, I think Squall is on its last leg. Nothing on
> > > > > it seems to be working at this point, from ypbind to ldm
> > > > > itself.
> > > >
> > > > hurricane is now running an LDM and decoding data into the
> > > > same (I think!) hierarchy as on squall.
> > > >
> > > > Comments:
> > > >
> > > > - I had to adjust the number of days of decoded data to keep
> > > >   down to 7, since I did not believe that the available space
> > > >   on the '/' file system (/squall1 is in the '/' file system)
> > > >   was enough to keep the 7, 15, 20 and 30 days that was the
> > > >   setup on squall. We will know better how much data can be
> > > >   kept after a couple of days of running in the current
> > > >   configuration.
> > > >
> > > > - GEMPAK was installed in the 'ldm' account on squall; it is
> > > >   installed in the 'gempak' account on hurricane
> > > >
> > > > - Unidata McIDAS-X v2016 is installed on hurricane in the
> > > >   'mcidas' account; it was not installed on squall
> > > >
> > > > - I consolidated LDM REQUEST lines in the LDM configuration
> > > >   file, ~ldm/etc/ldmd.conf, since hurricane has a good (Gbps)
> > > >   network connection.
> > > >
> > > >   I did not alter what was/is being REQUESTed beyond
> > > >   REQUESTing all of the HDS feed. Given the good networking
> > > >   situation on hurricane, and given that hurricane appears to
> > > >   be a much more powerful machine than squall, you may want to
> > > >   reconsider what data you REQUEST, decode and keep on
> > > >   hurricane.
> > > >
> > > > - I reactivated a REQUEST for LIGHTNING data from UAlbany,
> > > >   but no data is flowing since their top-level distribution
> > > >   machine, striker.atmos.albany.edu, does not have an ALLOW
> > > >   for hurricane.
> > > >
> > > >   You need to contact UAlbany to request that they allow
> > > >   hurricane to REQUEST LIGHTNING (aka NLDN) data. You may
> > > >   want to wait to do this if you still plan on renaming
> > > >   hurricane to squall at some point.
> > > >
> > > > - If you rename hurricane to squall, you will need to edit
> > > >   the hostname setting in the LDM registry,
> > > >   ~ldm/etc/registry.xml.
> > > >
> > > > Let's keep an eye on disk usage on hurricane for a couple of
> > > > days and make a decision about how much decoded GEMPAK data
> > > > can be reasonably kept.
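Keeping an eye on that usage is easy to script. A sketch, assuming the data root from the directory listing earlier in this thread (substitute the directory your pqact actions actually write into):

```shell
# Sketch: summarize decoded-data disk usage, largest first.
# DATA is an assumption -- point it at your decoded-data root.
DATA="${GEMPAK_DATA:-/squall1/gempak-data}"

du -sh "$DATA"/model/* 2>/dev/null | sort -rh   # per-model usage
df -h "$DATA" 2>/dev/null || df -h /            # free space on that file system
```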
> > > >
> > > > Cheers,
> > > >
> > > > Tom
> > > >
> > > > > On 5/8/17, 18:35 MDT, "Tom Yoksas" <address@hidden> wrote:
> > > > >
> > > > > Hi Bill,
> > > > >
> > > > > I've created both 'gempak' and 'mcidas' accounts on
> > > > > hurricane, and have built both packages. I also downloaded
> > > > > the latest ldm-mcidas distribution and built it in the
> > > > > ~ldm/ldm-mcidas/ldm-mcidas-2012 directory (I built McIDAS-X
> > > > > v2016 to make building the ldm-mcidas decoders easier/more
> > > > > up to date).
> > > > >
> > > > > What I need now is for you to remind me where you want
> > > > > decoded GEMPAK files written. Here is a 'df' listing that
> > > > > shows the various options:
> > > > >
> > > > > Filesystem                       Size  Used Avail Use% Mounted on
> > > > > /dev/mapper/cl_foehn-root        523G  6.4G  517G   2% /
> > > > > devtmpfs                         7.7G     0  7.7G   0% /dev
> > > > > tmpfs                            7.7G  100K  7.7G   1% /dev/shm
> > > > > tmpfs                            7.7G  834M  6.9G  11% /run
> > > > > tmpfs                            7.7G     0  7.7G   0% /sys/fs/cgroup
> > > > > /dev/sda1                       1014M  350M  665M  35% /boot
> > > > > /dev/mapper/cl_foehn-home        400G  6.6G  394G   2% /home
> > > > > kyrill.ias.sdsmt.edu:/media/hd   5.9T  4.8T  765G  87% /IAS_RAID
> > > > > kyrill.ias.sdsmt.edu:/media/hd2   35T   28T  7.7T  79% /IAS_RAID2
> > > > > tmpfs                            1.6G   16K  1.6G   1% /run/user/42
> > > > > tmpfs                            1.6G     0  1.6G   0% /run/user/750
> > > > >
> > > > >
> > > > > I seem to recall us discussing use of /IAS_RAID and/or
> > > > > /IAS_RAID2, but I can't remember if either of these two was
> > > > > to be used, or if you wanted the data to go somewhere else.
> > > > >
> > > > > After I know where you want data to be written, it will
> > > > > take me a couple of hours to configure the LDM and GEMPAK
> > > > > decoders to reflect that decision. The majority of the time
> > > > > will come from trying to sort out and update the LDM
> > > > > configuration file (~ldm/etc/ldmd.conf) and LDM
> > > > > pattern-action file (~ldm/etc/pqact.conf) that came over
> > > > > from your previous machine.
> > > > >
> > > > > Cheers,
> > > > >
> > > > > Tom
> > > > >
> > > > > On 05/08/2017 10:26 AM, Tom Yoksas wrote:
> > > > > > Hi Bill,
> > > > > >
> > > > > > On 05/08/17 10:12, Capehart, William J wrote:
> > > > > >> Just wondering how things are going with the LDM system?
> > > > > >
> > > > > > After a frustrating week last week of not being able to
> > > > > > get after the GEMPAK installation, I should now have time
> > > > > > to get to this either this afternoon or tomorrow.
> > > > > >
> > > > > > Many apologies for the delay!
> > > > > >
> > > > > > Cheers,
> > > > > >
> > > > > > Tom
> --
> +----------------------------------------------------------------------+
> * Tom Yoksas                                      UCAR Unidata Program *
> * (303) 497-8642 (last resort)                           P.O. Box 3000 *
> * address@hidden                                     Boulder, CO 80307 *
> * Unidata WWW Service                     http://www.unidata.ucar.edu/ *
> +----------------------------------------------------------------------+
>
>
Cheers,
Tom
--
****************************************************************************
Unidata User Support UCAR Unidata Program
(303) 497-8642 P.O. Box 3000
address@hidden Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage http://www.unidata.ucar.edu
****************************************************************************
Ticket Details
===================
Ticket ID: DYA-291592
Department: Support LDM
Priority: Normal
Status: Closed
===================
NOTE: All email exchanges with Unidata User Support are recorded in the Unidata
inquiry tracking system and then made publicly available through the web. If
you do not want to have your interactions made available in this way, you must
let us know in each email you send to us.