This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
>To: address@hidden
>From: David Wojtowicz <address@hidden>
>Subject: observation about LDM product queue
>Organization: UCAR/Unidata
>Keywords: 200208142221.g7EMLOK12910

David,

> Here's an observation I've made that may or may not be of interest to
> you all.... our main LDM machine is now flood.atmos.uiuc.edu.... it
> is doing *tons* of traffic (UNIDATA, NNEXRAD, NMC2, plus several other
> 3rd party model feeds)
>
> It is a dual-processor AMD machine with 2GB of RAM and a 1GB product
> queue, running Linux. This is handy because the whole product queue
> can fit into RAM cache, speeding things up. But I still found that it
> spends a lot of time flushing changes to the cache back to disk...
>
> I tried an experiment and put ldm.pq on /dev/shm... which is defined
> by default on recent Red Hat Linux releases as a tmpfs
> filesystem... which is similar to a ramdisk, but has only a fixed
> limit, not a fixed size. This is similar to using a regular file
> that can stay in cache, except that no writes to the disk need be
> done. Although I am having difficulty quantifying it, there seems to
> be an increase in performance. The machine would previously bog
> down during heavy traffic periods, but doesn't anymore.
>
> This wouldn't work for everyone... for one thing, you need about 2x
> as much RAM as your product queue and must be willing to devote half
> of it to this purpose. That works fine in our case. And, while the
> ldm.pq file persists between LDM invocations, it does not persist
> between system reboots. However, our system is quite stable, and when
> an administrative reboot is necessary, the file can be copied to a
> standard disk file and then copied back after the reboot, provided the
> downtime is short enough that it's worth preserving what's in the
> queue.
>
> As I said, not for everyone, but an interesting experiment you might
> want to know about.

Thanks for reporting on this. With memory so cheap now, I think it's of
enough interest to warrant a posting to the ldm-users mailing list. As
you've implied, this technique could be used on non-Linux systems as
well, for example with the tmpfs file system on Solaris systems.

I just tried timing the creation of a 1 Gbyte product queue on a Solaris
tmpfs file system versus a local disk, and it's more than three times as
fast on the tmpfs file system, so I assume other queue operations would
see a similar speedup from not needing to write to disk:

  /tmp$ time pqcreate -v -c -q test.pq -s 1000000000   # on tmpfs file system
  Creating test.pq, 1000000000 bytes, 244140 products.
  real    0m14.60s
  user    0m0.19s
  sys     0m10.50s

  /buddy/russ$ time pqcreate -v -c -q test.pq -s 1000000000   # on local disk
  Creating test.pq, 1000000000 bytes, 244140 products.
  real    0m45.87s
  user    0m0.17s
  sys     0m9.38s

As you point out, this should only be done on a stable system, and the
queue should be copied to disk before rebooting.

--Russ
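
For reference, here is a minimal sketch of the save-and-restore step David
and Russ describe for carrying a tmpfs-resident queue across a planned
reboot. The paths are examples only: it assumes the LDM has been configured
to use /dev/shm/ldm.pq as its queue and that /var/tmp/ldm.pq.save is scratch
space on a regular disk. Adjust both to match your installation.

  # Before a planned reboot: stop the LDM so the queue is quiescent,
  # then save the tmpfs-resident queue to a file on a regular disk.
  ldmadmin stop
  cp /dev/shm/ldm.pq /var/tmp/ldm.pq.save

  # After the reboot: copy the saved queue back into tmpfs and restart.
  cp /var/tmp/ldm.pq.save /dev/shm/ldm.pq
  ldmadmin start

If the saved copy is missing or stale (for example, after an unplanned
outage), simply recreate an empty queue with pqcreate instead of restoring
the old one.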