This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Jim,

The LDM queue can be bigger than the amount of RAM, since the OS will page it in from swap, and it should therefore be managed efficiently. The overhead of pqexpire is that it forces the entire queue to be inspected every time it runs. I started the NMC2 feed with 128MB of RAM. I now have 384MB of RAM, but I am running an 800MB queue. The high-water mark in my queue is currently about 614MB, but I created the queue a little larger so that IRIX wouldn't have to increase the queue size (Solaris defaults to deleting the oldest product, but IRIX allows the growable memory map). I run pqexpire with -i 1200 to decrease the number of times pqexpire runs.

The current bottleneck in the NMC2 feed is the T1 line from NCEP to GSFC, so the maximum amount of data in an hour should be ~550MB. If you remade the LDM queue and restarted with a good connection, you could probably get about 2x the T1 volume in the first hour: the backlog from your upstream plus the T1's own rate. At any rate, the pqexpire interval accounts for the queue being larger than the T1 bandwidth. Even if your queue would fit into RAM, the OS would probably page out some of that memory for other tasks running on the computer. But keeping the queue smaller means that pqexpire is less of a pig.

Steve Chiswell
Unidata User Support

>From: Jim Cowie <address@hidden>
>Organization: .
>Keywords: 199907061500.JAA10934

>Celia Chen wrote:
>>
>> Steve,
>>
>> Thanks so much for the quick reply. I have reset the
>> pq_size to 800MB already and ldm.pq shows 818790400 now.
>> I will watch it over the weekend to see what is
>> going to happen.
>>
>
>Hi Chiz,
>
>Looks like your suggestion helped. The only thing I'm
>wondering about along these lines is what happens if you raise
>the queue size to larger than the amount of RAM you have on the
>system.
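[A back-of-envelope check of the sizing figures above. This is a sketch, not LDM code: the only inputs are the nominal T1 line rate of 1.544 Mbit/s and the ~550 MB/hour and 2x first-hour figures quoted in the message; real payload throughput is lower than the raw line rate once protocol overhead is subtracted.]

```python
# Back-of-envelope sizing for an LDM queue fed over a T1 line.
# The nominal T1 line rate is 1.544 Mbit/s; observed payload
# throughput (~550 MB/hour in the message) is lower after
# protocol overhead.

T1_BITS_PER_SEC = 1.544e6
SECONDS_PER_HOUR = 3600

raw_mb_per_hour = T1_BITS_PER_SEC * SECONDS_PER_HOUR / 8 / 1e6
print(f"raw T1 ceiling:      ~{raw_mb_per_hour:.0f} MB/hour")  # ~695 MB/hour

observed_mb_per_hour = 550  # figure quoted in the message

# After a queue rebuild and restart, the first hour can carry roughly
# the upstream backlog plus the line's normal hourly volume, i.e. ~2x:
first_hour_mb = 2 * observed_mb_per_hour
print(f"first-hour catch-up: ~{first_hour_mb} MB")             # ~1100 MB
```

This is why an 800MB queue with a ~614MB high-water mark leaves headroom for the catch-up burst without the OS having to grow the mapping.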
>Obviously the entire queue cannot be mapped into memory
>if the queue is larger than system RAM, so I'm wondering if that
>causes inefficiencies in the LDM or not. You've mentioned that
>pqexpire causes the machine to thrash a bit, as the entire queue
>has to be swapped in to remove old products, but what about the
>general receipt of new products into the queue and the actions
>of pqact? If new products are received and written into the
>portion of the queue (and acted on by pqact) that is swapped in,
>then I guess I could see it working OK without thrashing to disk
>too much.
>
>I've always tried to keep the queue size smaller than RAM, but maybe
>I don't need to. Any thoughts you have on this would be appreciated.
>
>-jim
>
>--
>Jim Cowie                 Software Engineer
>WITI Corporation          address@hidden
>3300 Mitchell Lane
>Boulder, CO 80301         (303) 497-8584
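[Jim's intuition in the question above is essentially how memory-mapped files behave: the whole queue file is mapped into the process's address space regardless of RAM size, but the OS only keeps recently touched pages resident. Inserting new products and running pqact over them touches a small working set, while a full scan (as pqexpire does) faults in every page. A minimal illustration using Python's mmap module — this is a generic sketch of page-on-demand access, not the LDM queue format:]

```python
import mmap
import os
import tempfile

# A sparse 64 MB file stands in for a product queue that could be
# larger than we want resident in RAM.
QUEUE_SIZE = 64 * 1024 * 1024

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.truncate(QUEUE_SIZE)
    path = f.name

fd = os.open(path, os.O_RDWR)
try:
    mm = mmap.mmap(fd, QUEUE_SIZE)

    # Writing a "product" near the end of the mapping touches only the
    # pages backing that region; the rest of the file is never read in.
    mm[QUEUE_SIZE - 16:QUEUE_SIZE - 8] = b"PRODUCT!"
    assert mm[QUEUE_SIZE - 16:QUEUE_SIZE - 8] == b"PRODUCT!"

    # A full scan, by contrast, faults in every page of the mapping --
    # the mmap analogue of pqexpire walking the whole queue.
    total = sum(mm[i] for i in range(0, QUEUE_SIZE, mmap.PAGESIZE))

    mm.close()
finally:
    os.close(fd)
    os.remove(path)
```

So receipt of new products and pqact processing stay within a small, recently-touched region of the queue, while a periodic whole-queue scan is what forces the paging traffic; lengthening the pqexpire interval (-i 1200 above) reduces how often that happens.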