This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Hi Tim,

>Has anyone noticed any performance problems with the LDM running under
>2.6.11-1.27_FC3smp?

We are using the 64-bit version of this FC3 distribution on the data server backends of our top-level IDD relay node cluster, idd.unidata.ucar.edu, and we see no performance problems as long as the LDM queue size is less than real memory. In fact, in a stress test of the cluster the weekend before last, we fed a sustained 500 Mbps (5.4 TB/day) off of the cluster with no introduction of latencies or other problems. The data server backends were essentially idling during this stress test.

>We're not sure if the kernel is the problem, but
>for a while now, we've noticed extreme performance issues (taking 25
>seconds to rlogin etc) while the LDM is running. The LDM seems to be
>doing its job, but many other things (logging in, copying files, etc.)
>drag unless we shut the LDM down.

What is your queue size in relation to your RAM? How much swap do you have?

>Unless I've missed something, our setup is that recommended by Unidata
>(queue on the same filesystem as LDM home, no NFS mounts between LDM
>home and data, etc...). The only thing we do differently is that our
>hostname in ldmadmin.pl.conf is "ldm.comet.ucar.edu" but the machine's
>real name is oneu1.comet.ucar.edu, with ldm being a CNAME. (This is
>temporary, as we migrate to new servers - also, changing the conf file
>to read "oneu1.comet.ucar.edu" didn't make a difference)

We have noticed that oneu1.comet.ucar.edu is connecting to and disconnecting from idd.unidata.ucar.edu frequently. Perhaps something else is amiss on this machine?

>If anyone has any clues, or has seen similar problems, I'd appreciate
>hearing about them.

Cheers,
Tom
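[Archive note: the queue-versus-RAM check suggested above can be sketched as a small shell function. This is an illustrative sketch, not part of the original exchange: the function name `check_pq_fits` and the queue path you pass it are assumptions, and it reads physical memory from Linux's /proc/meminfo, so it applies to Linux systems such as the FC3 hosts discussed here.]

```shell
# check_pq_fits: warn if the LDM product queue file is at least as large
# as physical RAM, in which case the queue cannot stay memory-resident
# and the host will page heavily. Pass the path to your ldm.pq file
# (its location depends on your LDM installation).
check_pq_fits() {
    pq="$1"

    # Physical RAM: /proc/meminfo reports MemTotal in kB (Linux-specific).
    ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
    ram_bytes=$((ram_kb * 1024))

    # Queue file size in bytes; 0 if the file is missing.
    pq_bytes=$(stat -c %s "$pq" 2>/dev/null || echo 0)

    if [ "$pq_bytes" -ge "$ram_bytes" ]; then
        echo "WARNING: queue ${pq_bytes} B >= RAM ${ram_bytes} B; expect paging"
    else
        echo "OK: queue ${pq_bytes} B fits in RAM ${ram_bytes} B"
    fi
}
```

To answer the swap question, `free -m` shows total and used swap alongside RAM on the same Linux hosts.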