> Ed: Thanks for the explanation. My email to netcdf-group today explained
> my dilemma in a little more detail. Basically, it's hard for me to
> ensure that the data is packed according to the default compiler
> alignment. Sometimes this is because the data is packed, and sometimes
> it is because the client program was compiled with a different compiler
> (with a different default alignment) than the one that was used to
> compile HDF5. I don't think you should use H5Tpack. I would really like
> netcdf to just use the offsets I provide, instead of ignoring those and
> using the default compiler alignment. AFAICT, this is what HDF5 does.
> I've looked at your commit_type code and I can't figure out how the
> offsets are being changed from what the user specifies to something
> consistent with the default compiler alignment. See the C program I
> sent in my message to netcdf-group today for an example that shows that
> the user-specified offsets are being ignored.

Howdy Jeff!

OK, I think we have answered this issue, and the answer is: you can't get there from here.

NetCDF-4 does not allow you to use things like __attribute__ ((__packed__)) to modify the packing of a struct that is going to be used to read or write compound types. For that kind of thing, you will have to use HDF5.

But it is not clear to me whether there is a use case for this anyway. In other words, why do you want to do it? As for the data on disk, you don't need it to have any particular offsets; you just need it to be readable on any machine. So leave the question of the file type, and the difference between memory types on different systems, to the HDF5 layer. Does that make sense?

The reason this is not obvious in the commit_type function is that the offsets you specify actually are committed to the file; the problem is in the data write, which uses the native HDF5 type (not your packed version). So the data are garbled on writing.

To write a struct with this packing attribute you must use HDF5, which allows you to specify two types for every read/write operation: one for the data in memory and one for the data in the file. With netCDF-4 only one type may be involved, the "native" type of the struct.

Please let me know if you think I have missed something here.

Thanks,

Ed

Ticket Details
===================
Ticket ID: LRS-412716
Department: Support netCDF
Priority: Normal
Status: Closed
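
A minimal sketch of the two-type approach described above, assuming a GCC/Clang packed struct; the struct layout, member names, and file/dataset names here are hypothetical, not from the original exchange. The memory datatype is built from the packed struct's actual offsets, the file datatype is a separate packed copy, and H5Dwrite converts between the two:

/* Sketch: write a packed struct through HDF5 using separate
 * memory and file datatypes.  Struct and names are illustrative. */
#include <stddef.h>
#include <hdf5.h>

struct obs {
    char   tag;
    double val;
} __attribute__((__packed__));   /* packed: tag at offset 0, val at offset 1 */

int main(void)
{
    struct obs data[2] = { {'a', 1.5}, {'b', 2.5} };
    hsize_t dims[1] = {2};

    /* Memory type: offsets come from the packed struct itself,
     * not from the compiler's default alignment. */
    hid_t mem_type = H5Tcreate(H5T_COMPOUND, sizeof(struct obs));
    H5Tinsert(mem_type, "tag", offsetof(struct obs, tag), H5T_NATIVE_CHAR);
    H5Tinsert(mem_type, "val", offsetof(struct obs, val), H5T_NATIVE_DOUBLE);

    /* File type: a packed copy; its on-disk layout need not match memory. */
    hid_t file_type = H5Tcopy(mem_type);
    H5Tpack(file_type);

    hid_t file  = H5Fcreate("packed.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(1, dims, NULL);
    hid_t dset  = H5Dcreate2(file, "obs", file_type, space,
                             H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    /* Passing mem_type here is what makes the packed offsets work:
     * HDF5 converts from the memory layout to the file layout. */
    H5Dwrite(dset, mem_type, H5S_ALL, H5S_ALL, H5P_DEFAULT, data);

    H5Dclose(dset);
    H5Sclose(space);
    H5Fclose(file);
    H5Tclose(file_type);
    H5Tclose(mem_type);
    return 0;
}

Reading works the same way: pass the same packed memory type to H5Dread, and HDF5 converts from the file layout back into the packed struct. This per-operation memory type is exactly the second type that the netCDF-4 API does not expose.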