===============================================================================
Robb Kambic                                Unidata Program Center
Software Engineer III                      Univ. Corp for Atmospheric Research
address@hidden                             WWW: http://www.unidata.ucar.edu/
===============================================================================

---------- Forwarded message ----------
Date: Thu, 29 Jul 1999 11:46:00 -0600
From: Jim Cowie <address@hidden>
To: Robb Kambic <address@hidden>
Subject: Re: 19990727: ldmadmin isrunning

Robb Kambic wrote:
>
> Jim,
>
> Another solution would be to add the command "ldmadmin clean" to the
> start-up script; that clears out the lock file and the ldmd.pid file. I
> decided to go with your solution because it doesn't cause any changes to
> the LDM preinstall configurations.

Yes, "ldmadmin clean" would work if this is used in a boot-up script, but I
actually do this from a cron job that runs every half hour to check on the
LDM, so I could potentially wreck a live LDM if I did that (or maybe not,
if it looks for the process ID the "hard way"?).

> I talked to Russ about the cost of syncing netCDF files. He seems to think
> that the current mechanism is adequate for almost all our users, so I'm
> not going to change it. One is always welcome to make mods for their own
> site's needs, as you have already done.

As long as the data is actually available after it's written, before a sync,
I guess that's OK. I was only using ncdump to look at the record dimension
(stations) periodically. Maybe the data is all available and the sync just
updates the dimension variables... I don't know.

For the synoptic decoder especially, given what we were decoding and the way
we were invoking the decoder, the decoder had a very long life span and used
the same output file for a very long time. Thus hours would go by without a
sync, the record (stations) value would not change, and it was just
confusing. The actual file size was growing this whole time, of course, so
maybe the data really is accessible during that time.

-jim
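
The "hard way" alluded to above is to check whether the process recorded in
ldmd.pid is actually alive before clearing anything out, rather than trusting
the mere existence of the pid and lock files. A minimal sketch of such a
check, in Python, suitable for a half-hourly cron job; the pid-file path is
an assumption and should be adjusted for the local LDM installation:

#!/usr/bin/env python
# Sketch: check whether the pid recorded in ldmd.pid belongs to a live
# process before deciding to run "ldmadmin clean".  The pid-file path is
# hypothetical; use the path appropriate to the local LDM installation.

import os

LDMD_PID_FILE = "/usr/local/ldm/ldmd.pid"   # hypothetical location

def ldm_is_running(pid_file=LDMD_PID_FILE):
    """Return True only if the pid in the pid file is a live process."""
    try:
        with open(pid_file) as f:
            pid = int(f.read().strip())
    except (IOError, ValueError):
        return False                 # no pid file, or garbage in it
    try:
        os.kill(pid, 0)              # signal 0: existence check, nothing is sent
        return True
    except OSError:
        return False                 # stale pid file; the process is gone

if __name__ == "__main__":
    if ldm_is_running():
        print("LDM appears to be running; leaving lock and pid files alone.")
    else:
        print("LDM is not running; safe to clean up and restart.")

A cron job could run this check and only invoke "ldmadmin clean" and a
restart when it fails, avoiding the risk of wrecking a live LDM.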
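
On the sync question: the confusion described above comes from a long-lived
decoder holding one netCDF output file open for hours, so the record
(stations) count seen by ncdump lags behind the data actually written. A
writer that syncs after each record makes the current count visible to
readers right away. A minimal illustration using the netCDF4 Python module
(the decoders discussed here predate it; the file and variable names are
made up):

from netCDF4 import Dataset

# Minimal sketch: a long-lived writer that appends station records and
# syncs after each one so readers (e.g. ncdump) see the growing record
# dimension without waiting for the file to be closed.
ds = Dataset("synoptic.nc", "w")
ds.createDimension("station", None)               # unlimited (record) dimension
temp = ds.createVariable("temperature", "f4", ("station",))

for i, value in enumerate([271.3, 272.8, 270.1]):
    temp[i] = value                               # append one record
    ds.sync()                                     # flush so the new record count is visible now

ds.close()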