This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Kent,

> It indeed helps. If we can have the parameters passed to H5Dread, that
> will be more helpful. Since h5dump can dump the data correctly, I suspect
> that the parameters passed to H5Dread are not quite right. Note this is a
> fixed-size HDF5 string. It seems netCDF-4 is using a variable-length HDF5
> string to store a string.

Here are the parameters passed to H5Dread at the point where the segmentation violation occurs:

(gdb) p *var
$24 = {name = 0x779080 "StructMetadata.0", hdf5_name = 0x0, ndims = 0, dimids = 0x0, dim = 0x0, varid = 0, natts = 0, next = 0x0, prev = 0x0, dirty = 0, created = 1, written_to = 0, dimscale_attached = 0x0, type_info = 0x78ed50, xtype = 2, hdf_datasetid = 83886087, att = 0x0, no_fill = 1, fill_value = 0x0, chunksizes = 0x0, contiguous = 1, parallel_access = 0, dimscale = 0, dimscale_hdf5_objids = 0x0, deflate = 0, deflate_level = 0, shuffle = 0, fletcher32 = 0, options_mask = 0, pixels_per_block = 0, chunk_cache_size = 1048576, chunk_cache_nelems = 521, chunk_cache_preemption = 0.75, sdsid = 0, hdf4_data_type = 0, diskless_data = 0x0}
(gdb) p *var->type_info
$25 = {next = 0x0, prev = 0x0, nc_typeid = 2, hdf_typeid = 50332112, native_typeid = 50332113, size = 1, committed = 0, name = 0x78db70 "char", class = 3, num_enum_members = 0, enum_member = 0x0, field = 0x0, num_fields = 0, base_nc_type = 0, base_hdf_typeid = 0, close_hdf_typeid = 1, endianness = 0}
(gdb) p mem_spaceid
$27 = 67108867
(gdb) p file_spaceid
$28 = 67108866
(gdb) p xfer_plistid
$29 = 167772253
(gdb) p bufr
$30 = (void *) 0x76c680

It appears to me that netCDF-4 is trying to read just a single character of data, because it thinks the HDF5 dataset named StructMetadata.0 is a single character in size. Is that the dataset that is actually 32000 characters?

--Russ

> address@hidden> wrote:
> >
> > Kent,
> >
> > I tried running ncdump on this file using valgrind, and got some more
> > information that might be of use in debugging the problem.
> >
> > --Russ
> >
> > $ valgrind ./ncdump grid_1_3d_xyz_aug.h5
> > group: HDFEOS\ INFORMATION {
> >   variables:
> >     char StructMetadata.0 ;
> >
> >   // group attributes:
> >     :HDFEOSVersion = "HDFEOS_5.1.13" ;
> >   data:
> >
> > ==19314== Source and destination overlap in memcpy(0x5c97b90, 0x5c99550, 32000)
> > ==19314==    at 0x4A075E6: memcpy (mc_replace_strmem.c:635)
> > ==19314==    by 0x4EB5ED2: H5D_contig_readvv_sieve_cb (H5Dcontig.c:796)
> > ==19314==    by 0x50780A2: H5V_opvv (H5V.c:1453)
> > ==19314==    by 0x4EB5B1F: H5D_contig_readvv (H5Dcontig.c:887)
> > ==19314==    by 0x4EC57FC: H5D_select_io (H5Dselect.c:149)
> > ==19314==    by 0x4EC5C3C: H5D_select_read (H5Dselect.c:273)
> > ==19314==    by 0x4EB57CC: H5D_contig_read (H5Dcontig.c:559)
> > ==19314==    by 0x4EC13E8: H5D_read (H5Dio.c:447)
> > ==19314==    by 0x4EC1848: H5Dread (H5Dio.c:173)
> > ==19314==    by 0x44F888: nc4_get_vara (nc4hdf.c:1051)
> > ==19314==    by 0x45D4C5: nc4_get_vara_tc (nc4var.c:1467)
> > ==19314==    by 0x45D56B: NC4_get_vara (nc4var.c:1484)
> > ==19314==
> > ==19314== Invalid write of size 8
> > ==19314==    at 0x4A0765B: memcpy (mc_replace_strmem.c:635)
> > ==19314==    by 0x4EB5ED2: H5D_contig_readvv_sieve_cb (H5Dcontig.c:796)
> > ==19314==    by 0x50780A2: H5V_opvv (H5V.c:1453)
> > ==19314==    by 0x4EB5B1F: H5D_contig_readvv (H5Dcontig.c:887)
> > ==19314==    by 0x4EC57FC: H5D_select_io (H5Dselect.c:149)
> > ==19314==    by 0x4EC5C3C: H5D_select_read (H5Dselect.c:273)
> > ==19314==    by 0x4EB57CC: H5D_contig_read (H5Dcontig.c:559)
> > ==19314==    by 0x4EC13E8: H5D_read (H5Dio.c:447)
> > ==19314==    by 0x4EC1848: H5Dread (H5Dio.c:173)
> > ==19314==    by 0x44F888: nc4_get_vara (nc4hdf.c:1051)
> > ==19314==    by 0x45D4C5: nc4_get_vara_tc (nc4var.c:1467)
> > ==19314==    by 0x45D56B: NC4_get_vara (nc4var.c:1484)
> > ==19314==  Address 0x5c97b90 is 0 bytes inside a block of size 1 alloc'd
> > ==19314==    at 0x4A06031: malloc (vg_replace_malloc.c:236)
> > ==19314==    by 0x41055F: emalloc (utils.c:41)
> > ==19314==    by 0x40AB4D: vardata (vardata.c:484)
> > ==19314==    by 0x408896: do_ncdump_rec (ncdump.c:1579)
> > ==19314==    by 0x408A56: do_ncdump_rec (ncdump.c:1618)
> > ==19314==    by 0x408BC7: do_ncdump (ncdump.c:1655)
> > ==19314==    by 0x409E1E: main (ncdump.c:2432)
> > ==19314==
> > ==19314== Invalid read of size 4
> > ==19314==    at 0x4F481B8: H5I_object_verify (H5I.c:2130)
> > ==19314==    by 0x4FA5B47: H5Sclose (H5S.c:364)
> > ==19314==    by 0x44FB87: nc4_get_vara (nc4hdf.c:1134)
> > ==19314==    by 0x45D4C5: nc4_get_vara_tc (nc4var.c:1467)
> > ==19314==    by 0x45D56B: NC4_get_vara (nc4var.c:1484)
> > ==19314==    by 0x414D37: NC_get_vara (dvarget.c:34)
> > ==19314==    by 0x415963: nc_get_vara (dvarget.c:460)
> > ==19314==    by 0x40ADD4: vardata (vardata.c:537)
> > ==19314==    by 0x408896: do_ncdump_rec (ncdump.c:1579)
> > ==19314==    by 0x408A56: do_ncdump_rec (ncdump.c:1618)
> > ==19314==    by 0x408BC7: do_ncdump (ncdump.c:1655)
> > ==19314==    by 0x409E1E: main (ncdump.c:2432)
> > ==19314==  Address 0x4a424f5f444e4509 is not stack'd, malloc'd or (recently) free'd
> > ==19314==
> > ==19314==
> > ==19314== Process terminating with default action of signal 11 (SIGSEGV)
> > ==19314==  General Protection Fault
> > ==19314==    at 0x4F481B8: H5I_object_verify (H5I.c:2130)
> > ==19314==    by 0x4FA5B47: H5Sclose (H5S.c:364)
> > ==19314==    by 0x44FB87: nc4_get_vara (nc4hdf.c:1134)
> > ==19314==    by 0x45D4C5: nc4_get_vara_tc (nc4var.c:1467)
> > ==19314==    by 0x45D56B: NC4_get_vara (nc4var.c:1484)
> > ==19314==    by 0x414D37: NC_get_vara (dvarget.c:34)
> > ==19314==    by 0x415963: nc_get_vara (dvarget.c:460)
> > ==19314==    by 0x40ADD4: vardata (vardata.c:537)
> > ==19314==    by 0x408896: do_ncdump_rec (ncdump.c:1579)
> > ==19314==    by 0x408A56: do_ncdump_rec (ncdump.c:1618)
> > ==19314==    by 0x408BC7: do_ncdump (ncdump.c:1655)
> > ==19314==    by 0x409E1E: main (ncdump.c:2432)
> > StructMetadata.0 = ==19314==
> > ==19314== HEAP SUMMARY:
> > ==19314==     in use at exit: 1,833,234 bytes in 2,909 blocks
> > ==19314==   total heap usage: 10,658 allocs, 7,749 frees, 2,383,459 bytes allocated
> > ==19314==
> > ==19314== LEAK SUMMARY:
> > ==19314==    definitely lost: 976 bytes in 9 blocks
> > ==19314==    indirectly lost: 0 bytes in 0 blocks
> > ==19314==      possibly lost: 0 bytes in 0 blocks
> > ==19314==    still reachable: 1,832,258 bytes in 2,900 blocks
> > ==19314==         suppressed: 0 bytes in 0 blocks
> > ==19314== Rerun with --leak-check=full to see details of leaked memory
> > ==19314==
> > ==19314== For counts of detected and suppressed errors, rerun with: -v
> > ==19314== ERROR SUMMARY: 692 errors from 3 contexts (suppressed: 4 from 4)
> > Memory fault(coredump)
> >
> > > New Ticket: segmentation fault of ncdump in netCDF-4
> > >
> > > Hi Russ,
> > >
> > > For some time we've observed that ncdump cannot dump some HDF5 string
> > > data properly; it causes a segmentation fault. However, ncdump -h can
> > > dump the header successfully.
> > > The files are augmented HDF-EOS5 files that are netCDF-4 compliant.
> > > We observed this problem with netCDF 4.1.1, and with the current
> > > netCDF 4.1.3 we still see the same problem.
> > >
> > > You can find two example files here:
> > >
> > > ftp://ftp.hdfgroup.uiuc.edu/pub/outgoing/donglm/netcdf_seg_fault/grid_1_3d_xyz_aug.h5
> > > ftp://ftp.hdfgroup.uiuc.edu/pub/outgoing/donglm/netcdf_seg_fault/za_1_3d_yztd_aug.h5
> > >
> > > A detailed report can be found here:
> > > ftp://ftp.hdfgroup.uiuc.edu/pub/outgoing/donglm/netcdf_seg_fault/error_report.txt
> > >
> > > The h5dump output of one file can be found here:
> > > ftp://ftp.hdfgroup.uiuc.edu/pub/outgoing/donglm/netcdf_seg_fault/output_StructMetadata.0_h5dump
> > >
> > > We tested it on a 32-bit Linux machine; I think it happens on all
> > > platforms.
> > >
> > > This string is just a simple fixed-size string. Although the size is
> > > relatively large (32000), it should be handled.
> > > Let me know if you need more information.
> > >
> > > Thanks,
> > >
> > > Kent
> > > --
> > > Kent Yang     The HDF Group
> > > 1800 So. Oak St., Suite 203, Champaign, IL 61820
> > > 217.531.6107
> > > http://hdfgroup.org  http://hdfeos.org
> > >
> > > Ticket Details
> > > ===================
> > > Ticket ID: ETC-878093
> > > Department: Support netCDF
> > > Priority: Normal
> > > Status: Open
> > > Link: https://www.unidata.ucar.edu/esupport/staff/index.php?_m=tickets&_a=viewticket&ticketid=19510
> >
> > Ticket Details
> > ===================
> > Ticket ID: ETC-878093
> > Department: Support netCDF
> > Priority: Normal
> > Status: Closed
>
> --
> Kent Yang     The HDF Group
> 1800 So. Oak St., Suite 203, Champaign, IL 61820
> 217.531.6107
> http://hdfgroup.org  http://hdfeos.org

Russ Rew
UCAR Unidata Program
address@hidden
http://www.unidata.ucar.edu

Ticket Details
===================
Ticket ID: ETC-878093
Department: Support netCDF
Priority: Normal
Status: Closed