This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Sitler Jeffrey L Civ AFIT/ENP wrote:
>
> Hi Anne,
> That is fine, if we get things working properly, I can move it over to
> Nimbus at a later date and time, when I have more experience in such
> matters.
> Jeff

Hi Jeff,

I have to backpedal. I assumed that the ldm account was on blizzard and was puzzled by how the queue could have been built on fujita. Now I see that the ldm account is remotely mounted on blizzard and actually exists on fujita, so the queue is indeed already local to the ldm account. So, I will *not* relocate the queue.

I just rlogin'd onto fujita and am in the process of looking around. Also, adding the local0 facility to /etc/syslog.conf on blizzard was for naught - it doesn't need to be there because the LDM is not running on blizzard. I see that the proper entry already exists in /etc/syslog.conf on fujita, and logging is successful.

In looking at the logs, I saw a major problem with space that may have been resolved by now. Still, to clean the slate, I removed all the stats files in ~ldm/logs. More will be generated; see below for how to keep those from piling up.

I wanted to see the log, but it was too large to view. To handle this, I split the log into smaller files. In ~ldm/logs you'll see several smaller files named 'xaa' through 'xam'.

I just made a crontab for you. If you do 'crontab -l' you will see the jobs that I put in there. Do 'man crontab' for more information about how this works. Of course, feel free to change these in any way that better suits your needs; they are intended just to get you started.

The first job listed in the crontab sends in the statistics. This happens once per hour, at 35 minutes after the hour. Not only will this allow us to see information about product latencies at your site, it will also clean up all but the most recent 24 *.stats files in ~ldm/logs, which will help save space.

The second job is commented out. It runs 'scour', which deletes old products.
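The log-splitting step mentioned above can be sketched as follows. The demo directory and stand-in log file are illustrative only; on the actual system, plain `split` was run against the oversized LDM log in ~ldm/logs, producing the pieces 'xaa' through 'xam'.

```shell
# Sketch of splitting an oversized log for viewing. By default,
# 'split' cuts a file into 1000-line pieces named xaa, xab, xac, ...
# The file below is a stand-in for a real ~ldm/logs log file.
mkdir -p /tmp/ldm-split-demo && cd /tmp/ldm-split-demo
seq 1 2500 > ldmd.log      # stand-in log with 2500 lines
split ldmd.log             # produces xaa (1000), xab (1000), xac (500)
ls x??                     # list the pieces; view one with 'less xaa'
```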
I left it commented out so you could decide how long you want to keep products and configure scour accordingly. I'll help you with that later.

The third job rotates the logs once per day, at 10 minutes after midnight. This will keep the log files from getting too large.

I need to leave for a meeting in a few minutes, but I wanted to let you know what I found and what I've done so far.

Anne

--
***************************************************
Anne Wilson                    UCAR Unidata Program
address@hidden                 P.O. Box 3000
                               Boulder, CO 80307
----------------------------------------------------
Unidata WWW server    http://www.unidata.ucar.edu/
****************************************************
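Taken together, the three cron jobs described above might look something like the sketch below. The exact commands and paths vary by LDM version and installation, so these entries are illustrative assumptions, not the actual crontab that was installed.

```crontab
# Illustrative sketch of the crontab described; command names and
# paths are assumptions -- check ~ldm/bin on your own system.

# 1) Send latency statistics hourly, at 35 minutes after the hour
#    (also trims all but the newest 24 *.stats files in ~ldm/logs).
35 * * * * bin/ldmadmin dostats

# 2) Delete old products with 'scour' -- left commented out until a
#    retention policy is decided and scour is configured accordingly.
#0 1 * * * bin/ldmadmin scour

# 3) Rotate the LDM logs daily, at 10 minutes after midnight.
10 0 * * * bin/ldmadmin newlog
```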