- Subject: 20040209: McIDAS: Data Problem on cyclone caused by ufsdump output
- Date: Mon, 09 Feb 2004 11:06:16 -0700
>From: Michael Keables <address@hidden>
>Organization: DU
>Keywords: 200402091525.i19FPNp1028382 McIDAS decode mail /var full
Hi Mike,
>Hope all is well.
Things keep moving along thanks :-)
>I've run into dataflow problems again on cyclone...no point data since 4 Feb.
>It appears that /var is full again, but I can't find the offending file(s).
>I assume this is the problem, but I'm not sure.
>I've stopped, flushed, and re-started the LDM without results.
>
>Would you mind taking a look at cyclone to see what's up?
I logged onto cyclone this morning and found that /var was getting
filled by the log output of /opt/etc/hostdump.cron, which is run out of
root's cron. What is happening is that the dump script generates
extremely verbose logging of the ufsdump activity and then tries to
mail it to you (mkeables) and cgibbons. Your sendmail is set up to
reject mail messages larger than 10000 kilobytes, and the ufsdump
output is on the order of 10 MB. When that mail fails, delivery is
attempted to root instead, but this fails because of the same size
limit in sendmail. The delivery attempt is then rescheduled for 15
minutes later, and the process starts over. When I got on, virtually
all of /var was used by files in /var/spool/mqueue. Since you have to
be root to look in that directory, you (mkeables) would not have been
able to see the files, and the output of 'df -k' only shows that /var
is full, not what is filling it.
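In case it helps next time, a quick way to confirm this kind of thing
as root is something along these lines (the sendmail.cf location may
differ on your Solaris release, so treat the paths as a guess):

    df -k /var                                  # overall /var usage
    du -sk /var/spool/mqueue                    # space taken by queued mail
    mailq                                       # what is stuck in the queue
    grep MaxMessageSize /etc/mail/sendmail.cf   # size limit, in bytes, if one is set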
So, the bottom line is that the file-system-dump-to-tape routine
generates extremely verbose log output which it then tries to mail. To
see if the majority of the log output was being written to stdout, I
redirected the standard output of the dump script to /dev/null. I am
hoping that any problems and the overall status are written to stderr,
so that you (and cgibbons) will continue to get email about the dump.
If, however, all of the output goes to stdout, you will get no mail
notification of the dump run; if it all goes to stderr, the same 10 MB
files will be generated every 15 minutes and /var will fill once
again.
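For reference, that redirection looks roughly like the following,
whether it is done in root's crontab entry or inside
/opt/etc/hostdump.cron itself (the schedule shown here is made up;
only the '> /dev/null' part is the actual change):

    # discard the verbose ufsdump log on stdout; cron will still mail
    # anything the script writes to stderr
    0 2 * * 0 /opt/etc/hostdump.cron > /dev/null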
So that you can take a look at the files that were filling
/var/spool/mqueue, I copied two sets (the files come in triplets) to
/opt/scratch. After you or someone else at DU (cgibbons?) has had a
look at the saved files, you should delete them to regain the 20 MB
they are using.
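Once you have looked them over, something like the following (as root)
will show what was saved and reclaim the space; double check the
listing first in case anything else lives in /opt/scratch:

    ls -l /opt/scratch                # review the saved message triplets
    rm /opt/scratch/<copied files>    # remove the copies to get the ~20 MB back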
As I logged off, I noted that data was again flowing to cyclone and
being decoded.
>Thanks, Tom.
No worries.
Tom
--
NOTE: All email exchanges with Unidata User Support are recorded in the
Unidata inquiry tracking system and then made publicly available
through the web. If you do not want to have your interactions made
available in this way, you must let us know in each email you send to us.