YangXin,

> Regarding the Product Queue, I really appreciate the concept
> and implementation of the Product Queue. I understand it in
> this way: based on the LDM design, the size of the Product Queue
> must be smaller than the size of RAM. My question is, would it
> be possible for the LDM to use a Product Queue larger than the
> RAM, or would the Product Queue then not be tightly mapped to
> physical memory?

The LDM product-queue can be larger than physical memory *in theory*.
In practice, however, this can greatly reduce the performance of the
LDM due to time spent swapping the product-queue into and out of
physical memory. This is especially true in high-volume situations
such as yours. We therefore strongly recommend that you have
sufficient memory to prevent the product-queue from swapping to and
from disk.

> I also have two more suggestions for the mechanisms of the
> Product Queue. One is: would it be possible for the LDM to run
> multiple Product Queues concurrently, in order to separate data
> with different characteristics? For example, incoming data could
> go into one PQ and outgoing data could be ingested into another
> PQ, or the outgoing data could be further separated into multiple
> sending PQs based on feed type or some other property. If so,
> each PQ would then hold only part of the complete data set that
> is handled by one PQ today.

Unfortunately, the LDM system is designed around one, and only one,
product-queue. It is possible, however, to run more than one LDM
system on a single computer (each LDM would have its own user
account, of course). A cleaner, more direct solution, however, would
be to have multiple computers, each running its own LDM.

> The other is: would it be possible to change the PQ to handle
> only the metadata of each product, such as the feed type, the
> product's file location, etc.? The PQ would then probably be
> able to deal with many more products than when each complete
> product is put into the PQ.

Good idea. We've thought about it here. Unfortunately, it would take
some months to implement.

> > * run UPC's "uptime" script so that we can view time-series plots of
> >   operational parameters (we can install this monitoring tool whenever
> >   permission is granted)
>
> How does this tool work? What are the prerequisites for the
> installation and for the "uptime" script to run? I mean, do I
> need to prepare something?

The "uptime" tool is a Python script that's run out of the LDM user's
"crontab" file, usually once per minute. It collects operational
parameters such as the age of the oldest product in the queue, the
number of connections, etc., and appends that information to a file.
This allows us to make time-series plots in order to understand how
the LDM is performing.

Regards,
Steve Emmerson

Ticket Details
===================
Ticket ID: LGY-600646
Department: Support IDD TIGGE
Priority: Normal
Status: On Hold
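[Editor's note] The UPC "uptime" script itself is not included in this
archive. As a rough illustration of the kind of cron-driven monitoring
described in the reply above, the following Python sketch collects the
age of the oldest product (via the LDM "pqmon" utility) and a count of
TCP connections on the LDM port, and appends one sample per run to a
file. This is not the actual UPC script: the pqmon output parsing, the
netstat-based connection count, and the file paths are assumptions that
would need to be checked against a real installation.

    #!/usr/bin/env python
    # Illustrative sketch of an LDM monitoring script run from cron.
    # NOT the actual UPC "uptime" script: the pqmon parsing, the
    # connection count, and the paths below are assumptions.

    import subprocess
    import time

    LOGFILE = "/home/ldm/var/ldm_metrics.txt"   # hypothetical output file

    def oldest_product_age():
        """Return the age (seconds) of the oldest queued product, per pqmon.

        Assumes the age is the last whitespace-separated field of the
        output that pqmon writes; verify against pqmon on your system.
        """
        out = subprocess.run(["pqmon"], capture_output=True, text=True)
        text = (out.stdout + out.stderr).strip()
        try:
            return int(text.split()[-1])
        except (IndexError, ValueError):
            return -1   # could not parse pqmon output

    def ldm_connection_count():
        """Count established TCP connections on the LDM port (388)."""
        out = subprocess.run(["netstat", "-tn"], capture_output=True, text=True)
        return sum(1 for line in out.stdout.splitlines()
                   if ":388 " in line and "ESTABLISHED" in line)

    def main():
        now = int(time.time())
        age = oldest_product_age()
        conns = ldm_connection_count()
        # Append one sample per run; a crontab entry such as
        #   * * * * * /home/ldm/bin/ldm_metrics.py
        # yields one sample per minute for later time-series plotting.
        with open(LOGFILE, "a") as f:
            f.write("%d %d %d\n" % (now, age, conns))

    if __name__ == "__main__":
        main()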