This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Mike,

If pqcheck(1) didn't terminate in 40 minutes, then your product-queue was corrupted. The question is how. Did you have any power outages? Is the LDM system running -- via an "ldmadmin start" command -- while the perl(1) script executes?

Your pqact(1) configuration file looks OK. Your perl(1) script looks OK. The pqinsert(1) processes should log any problems in the LDM log file.

The product-queue library, which is used by pqinsert(1), automatically deletes the oldest products in the queue to make room for a new product, so adding a product after the queue is full isn't a problem.

> Good Morning Steve:
>
> The perl script that does the insert was logging, but it runs from cron, so
> all I have is an overwritten log file stating that the wget processes, which
> are the pre-triggers to the pqinsert, were already running.
>
> I don't have the product queue. I tried to get pqcheck to validate the
> product queue, but I was in a time crunch, so after 40 minutes of run time I
> started over with a fresh queue.
>
> I have the pqact.conf file and the ftp insert script, which I am attaching.
>
> My theory is that, for some reason, pqinsert was called on a file that never
> got closed by the wget command. I'm still not sure how that could have been
> triggered, but I'm scratching my head over this.
>
> My coworker's theory is that this will happen again when the product queue
> reaches its maximum size and pqinsert tries to add the next product. I don't
> believe that is the case, but he has an alarm set to check the queue in 5
> days, when it fills up.
>
> Thanks for the interest.
> -Mike
>
> This is a paste of the cron job that fired the script, for reference to the
> variables.
>
> > 02 * * * * $HOME/perllib/wget_netr8_http -host
> > UNAVCO:jimunavco@200.42.227.108 -lpath $HOME/data/rdsd_24089/ -days 31
> > -dbm -ldm -search YYYYMM/session -session M -site RDSD -hourly -fmt M.T02
> > > $HOME/logs/rdsd_24089_hr.log 2>&1

Regards,
Steve Emmerson

Ticket Details
===================
Ticket ID: HCO-891524
Department: Support LDM
Priority: Normal
Status: Closed
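
As a general illustration of the failure modes discussed above (not part of the original exchange), the sketch below shows one way a cron-driven fetch script might avoid both overlapping runs and handing pqinsert(1) a file that wget is still writing. All paths, the URL, the feedtype, and the product ID are placeholder assumptions; flock(1) is the util-linux locking utility and is not required by the LDM itself.

    #!/bin/sh
    # Hypothetical cron wrapper; names and paths are illustrative only.

    PQ=$HOME/var/queues/ldm.pq          # assumed product-queue path
    LOCK=$HOME/var/run/fetch_rdsd.lock  # lock file against overlapping cron runs

    # Exit quietly if the previous run is still going (the "already running"
    # condition reported in the log).
    exec 9>"$LOCK"
    if ! flock -n 9; then
        echo "previous fetch still running; exiting" >&2
        exit 0
    fi

    # Download to a temporary name, then rename. rename(2) within one
    # filesystem is atomic, so pqinsert(1) never sees a partial file.
    TMP=$HOME/data/incoming/RDSD.M.T02.part
    OUT=$HOME/data/incoming/RDSD.M.T02
    wget -q -O "$TMP" "http://example.invalid/session/RDSD.M.T02" || exit 1
    mv "$TMP" "$OUT"

    # Insert the completed file; failures are reported via the exit status
    # and the LDM log file.
    pqinsert -q "$PQ" -f EXP -p "RDSD.M.T02" "$OUT" || exit 1

    # Optional: verify the queue with pqcheck(1) when nothing has it open
    # for writing; a nonzero exit status indicates a problem.
    # pqcheck -q "$PQ"

The key point is the rename step: because the file appears under its final name only after the download has finished, pqinsert(1) either sees the complete file or does not see it at all. See the pqinsert(1) and pqcheck(1) man pages for the exact options and exit statuses of your LDM version.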