This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Hello,

Version 4.3.2 is fairly old at this point; would it be possible to see if there is an issue using the latest netCDF, 4.6.1? I'm afraid I'm not familiar with debugging tools in an MPI environment, so I can only be of limited help there. Does this issue happen with other compute nodes, or when running different netCDF code?

-Ward

> Hello netCDF support,
>
> One of the users of our SLURM cluster is having a problem with
> netCDF-4.3.2, and he sent the following log. Could you help me find a
> way to debug this?
>
> ##### Log starts #####
> MPI_Irecv(165)......................: MPI_Irecv(buf=0x235ca40, count=4,
> MPI_DOUBLE_PRECISION, src=151, tag=7, MPI_COMM_WORLD, request=0x77d4bc)
> failed
> MPIDI_CH3U_Request_unpack_uebuf(624): Message truncated; 3968 bytes
> received but buffer size is 32
> srun: error: foobar: task 128: Exited with exit code 1
> ##### Log ends #######
>
> p.s. The compute node and slurmd look fine.
>
> Thank you and best regards,
> Koji

Ticket Details
===================
Ticket ID: XYY-601507
Department: Support netCDF
Priority: Normal
Status: Closed
===================

NOTE: All email exchanges with Unidata User Support are recorded in the Unidata inquiry tracking system and then made publicly available through the web. If you do not want to have your interactions made available in this way, you must let us know in each email you send to us.