This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Justin,

Wow! A segmentation violation in the product-queue module. That's about as likely as an error in the C runtime. It's possible that the queue became corrupt somehow and that this manifested itself in the segfault. In any case, this fault is not the same as the other segfault I mentioned, so a more recent version of the LDM might not fix the problem. Please keep me apprised after you upgrade.

> Steve,
>
> Carissa has left for the day, so I'm jumping in. Here is the stack trace:
>
> (gdb) bt
> #0  0x00002ad5ac00be2a in tq_get_tqelem (tq=0x2ad624a0f2d8,
>     offset=1146355592) at pq.c:518
> #1  tq_add (tq=0x2ad624a0f2d8, offset=1146355592) at pq.c:587
> #2  0x00002ad5ac00c0d9 in pq_insertNoSig (pq=0xb71bd0, prod=0x7fff7dc376a0)
>     at pq.c:5604
> #3  0x00002ad5ac00c466 in pq_insert (pq=<value optimized out>,
>     prod=<value optimized out>) at pq.c:5636
> #4  0x00002ad5ac013e73 in dh_saveDataProduct (pq=<value optimized out>,
>     info=0xb84160, data=<value optimized out>, wasHereis=1,
>     notifyAutoShift=1) at DownHelp.c:159
> #5  0x00002ad5ac0191fd in hereis_6_svc (prod=<value optimized out>,
>     rqstp=0x7fff7dc37ce0) at ldm6_server.c:697
> #6  0x00002ad5ac017861 in ldmprog_6 (rqstp=0x7fff7dc37ce0, transp=0xb71d00)
>     at ldm6_svc.c:99
> #7  0x00002ad5ac02cc69 in svc_getreqsock (sock=<value optimized out>)
>     at svc.c:541
> #8  0x00002ad5ac01cc32 in one_svc_run (xp_sock=4,
>     inactive_timeo=<value optimized out>) at one_svc_run.c:91
> #9  0x00002ad5ac01f75a in run_service (upName=0xb71b30 "205.156.51.46",
>     port=12000512, request=<value optimized out>, inactiveTimeout=60,
>     pqPathname=0x7fff7dc37fb4 "?*",
>     pq=<value optimized out>, isPrimary=1) at requester6.c:229
> #10 req6_new (upName=0xb71b30 "205.156.51.46", port=12000512,
>     request=<value optimized out>, inactiveTimeout=60,
>     pqPathname=0x7fff7dc37fb4 "?*", pq=<value optimized out>, isPrimary=1)
>     at requester6.c:671
> #11 0x00002ad5ac01137b in prog_requester (ldmPort=<value optimized out>)
>     at acl.c:1627
> #12 run_requester (ldmPort=<value optimized out>) at acl.c:1813
> #13 new_requester (ldmPort=<value optimized out>) at acl.c:1869
> #14 requester_add (ldmPort=<value optimized out>) at acl.c:1912
> #15 invert_request_acl (ldmPort=<value optimized out>) at acl.c:1978
> #16 0x0000000000405bd0 in read_conf (conf_path=0x2ad5ac252400
>     "/iodprod/dbnet/ldm/etc/ldmd.conf", doSomething=1, defaultPort=388)
>     at parser.y:593
> #17 0x0000000000406d0b in main (ac=<value optimized out>,
>     av=<value optimized out>) at ldmd.c:1037
>
> We had actually already planned to upgrade to 6.11.6 this Wednesday
> (07/03), so it is good to hear that it may resolve this issue.
>
> Thanks,
> Justin

Regards,
Steve Emmerson

Ticket Details
===================
Ticket ID: BPI-711373
Department: Support LDM
Priority: Normal
Status: Closed
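A note on the suspected queue corruption mentioned above: the usual way to check and, if necessary, rebuild an LDM product queue is with the standard ldmadmin and pqcheck utilities. The commands below are a minimal sketch only; the queue path is illustrative (not the path from this ticket), and option names should be verified against the man pages of the installed LDM release.

    # Stop the LDM so nothing is writing to the product queue
    ldmadmin stop

    # Check the product queue for consistency.
    # The path here is a placeholder; use the queue path from your
    # LDM configuration/registry.
    pqcheck -q /path/to/ldm.pq

    # If the queue is reported corrupt, delete and recreate it,
    # then restart the LDM.
    ldmadmin delqueue
    ldmadmin mkqueue
    ldmadmin start

Recreating the queue discards any products it still holds, so downstream sites may see a gap in data until the feed catches up.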