This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Pete,

> We have noticed recently on f5.aos.wisc.edu that during times of
> high data throughput (particularly when the GFS grib1 and grib2
> data are coming through on CONDUIT) our DDPLUS data saved to
> disk falls behind.
>
> Looking at the products coming in via ldmadmin watch, it appears
> that the products are arriving in a timely manner, but the pqact
> process is falling behind in parsing through the queue.
>
> I had the thought today that I could run a second pqact process
> with a different pqact.conf that would handle only DDPLUS data,
> and take the DDPLUS out of the main pqact. That way the DDPLUS
> pqact could skip all the other data and handle just the DDPLUS,
> hopefully more quickly. Does this make sense? Do you think it would
> solve this problem (if that is indeed the problem)?

That's a good idea. A pqact(1) process matches every data-product that
it receives against every entry in its configuration-file.
Consequently, the fewer matches it has to do, the less CPU it uses.

If you're going to do this, however, then be sure to add feedtype and
product-identifier patterns to each invocation of pqact(1) by an EXEC
entry in the LDM configuration-file. The objective is to ensure that
every product a pqact(1) process sees is matched by something in its
configuration-file.

Regards,
Steve Emmerson

Ticket Details
===================
Ticket ID: COH-141109
Department: Support LDM
Priority: Normal
Status: Closed
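
The split described above can be sketched as a pair of EXEC entries in
the LDM configuration-file. This is an illustrative example only: the
file names (pqact-ddplus.conf) and the exact feedtype expressions are
assumptions, not taken from the original exchange, and should be
adapted to the site's actual setup.

```
# Hypothetical ldmd.conf entries (file names and feedtype
# expressions are illustrative).

# Second pqact: handles only DDPLUS products, using a small
# configuration-file that contains only the DDPLUS entries.
EXEC "pqact -f DDPLUS etc/pqact-ddplus.conf"

# Main pqact: excludes DDPLUS so it never has to match those
# products against its (larger) configuration-file.
EXEC "pqact -f ANY-DDPLUS etc/pqact.conf"
```

With this arrangement each pqact(1) process receives only the feedtypes
it can act on, so neither wastes CPU matching products that no entry in
its configuration-file will ever handle.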