Hi Martha,

re: phantom disk usage until LDM is stopped

> I don't quite understand. I understand that our scour process thinks
> that it should delete files that LDM needs, and that the files are not
> really deleted until LDM restarts.

OK.

> But if we need the disk space, how are we supposed to reclaim it except
> by restarting LDM?

There is no problem in deleting files that are not open for writing. The
problem occurs when there is an open file descriptor for a file and some
sort of scouring process deletes it. Because the file descriptor is still
open, the file does not really get deleted. Instead, a bunch of garbage is
likely to be seen in the file, followed by the newly written stuff. We
experienced this kind of situation a long time ago, before it sank in that
the files to be deleted must be closed first.

I would check the following types of situations:

- A scouring process is deleting the LDM log file, ~ldm/logs/ldmd.log.
  This file should only be deleted/rotated by:

  - renaming the file
  - running 'hupsyslog' to tell syslogd to close all of its open file
    descriptors
  - creating a new copy of the file

  This is the process followed by the LDM 'newlog' (or 'ldmadmin newlog')
  procedure.

- A 'pqact' action has a file open for writing (i.e., the FILE action with
  no '-close' flag).

- Some other decoder/process has a file open for writing (e.g., 'ingetext'
  or 'ingebin', which are McIDAS ingesters run from 'xcd_run').

I am betting that you have a 'scour' action (defined in
~ldm/etc/scour.conf) that is walking through directories where files are
being written and deleting a file that is open. Exactly what that file
might be is unknown to me, since I am not intimately familiar with your
processing setup (I have looked at it, but not that closely).

> Just telling scour not to delete the files won't solve our disk space
> problem.

Correct. You should be able to find the directory in which the open
file(s) is (are) being deleted and then adjust your processing (e.g., your
pattern-action file action(s)) and/or your scouring so that the file is
either not deleted or is closed first so that it can be deleted. I would
use the 'du' facility to see how much space is being used in each
directory. Look for a directory whose indicated usage is high even though
a long listing of the files in it does not look like it should be
occupying that much disk space.

Cheers,

Tom
--
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                             Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                              http://www.unidata.ucar.edu
****************************************************************************

Ticket Details
===================
Ticket ID: WDB-706909
Department: Support LDM
Priority: Normal
Status: Closed
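
A quick way to confirm the deleted-but-still-open situation described
above is 'lsof', which on most Unix systems can list open files whose
on-disk link count has dropped to zero. A minimal sketch (the process
names in the filter are assumptions; adjust them for your site):

    # List open files that have been unlinked but are still held open;
    # their space is not freed until the owning process closes the
    # descriptor or exits.
    lsof +L1

    # Narrow the output to LDM-related processes (names assumed):
    lsof +L1 | egrep 'ldmd|pqact'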
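
The three-step log rotation listed above can be sketched as a shell
sequence; this is roughly what 'ldmadmin newlog' does for you, so the
command itself is the safer choice. The single rotation level and the
paths here are assumptions:

    cd ~ldm/logs
    mv ldmd.log ldmd.log.1   # rename: the open descriptor follows the file
    touch ldmd.log           # create the new, empty log file
    hupsyslog                # have syslogd close and reopen its log files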
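
For the 'pqact' case, a hypothetical pattern-action entry shows the
'-close' flag, which makes 'pqact' close the output file after each
product is written instead of holding it open. The feed type, pattern,
and output path are made up for illustration; fields in pqact.conf are
separated by tabs:

    # feedtype<tab>pattern<tab>action<tab>options/arguments
    IDS|DDPLUS	^(..)(..).*	FILE	-close	data/text/\1\2.txt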
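
Entries in ~ldm/etc/scour.conf name a directory, a retention period in
days, and an optional filename pattern, so one way to keep scour away
from files that are held open is to tighten the pattern or drop the
directory. A hypothetical excerpt (directories, ages, and the pattern
are all assumptions):

    # directory            days-old    optional-filename-pattern
    ~ldm/var/data/text     2
    ~ldm/var/data/grib     1           *.grib2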
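
Finally, the 'du' hunt suggested above can be sharpened by comparing
'du' with 'df': space held only by deleted-but-open files shows up in
'df' but not in 'du'. A minimal sketch, assuming the data tree lives
under ~ldm/var/data:

    # Per-directory usage, largest last ('sort -h' requires GNU sort):
    du -sh ~ldm/var/data/* | sort -h

    # If 'df' reports notably more usage than 'du' can account for, the
    # difference is typically space held by deleted-but-open files.
    df -h ~ldm/var/data
    du -sh ~ldm/var/data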