> From: address@hidden (John Sheldon)
> Subject: Re: 2 GB netCDF limit
> To: address@hidden (Russ Rew)

Hi again John,

> I just took 3 "real" data files, each 1.2 GB, and concatenated them,
> then used ncrcat to pick off the last 20 timesteps (about 20% of the
> 3.6GB file).  I was able to ncview the result on my SGI and everything
> looked OK! (?)
>
> Any idea how this is possible?  How does that "32-bit slot for a
> variable offset" work?  Is it actually a byte count, or a record count,
> from which a byte count can be computed knowing the record size?  It
> still would bomb on a 32-bit system, but be OK on a 64-bit system.
>
> As you might surmise, I'm still hopeful....

I guess if the length is all in record variables, the offsets can be
computed in 64-bit ints from the 32-bit starting point in the 0th
record, and everything just works, because the large integers are
computed, not stored.

If this works for you, you're in luck.  I'm forwarding this to Glenn
for an authoritative answer.

--Russ
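[To illustrate the point about computed offsets, here is a minimal sketch in C.
The names `begin`, `recsize`, and `rec` are illustrative, not the netCDF
library's API: `begin` stands for the stored 32-bit offset of a record
variable's data within record 0, and `recsize` for the byte length of one
whole record.  The offset of the same variable in a later record is derived
by arithmetic, so it can exceed 2 GB even though the stored field is 32-bit.]

    #include <stdio.h>

    int main(void) {
        unsigned int begin = 2000000000U;   /* stored offset, fits in 32 bits */
        long long recsize  = 40000000LL;    /* assumed size of one record, 40 MB */
        long long rec      = 50;            /* record number we want to read */

        /* The offset for record `rec` is computed, not stored, so the
         * arithmetic can be carried out in 64-bit integers: */
        long long offset = (long long)begin + rec * recsize;

        printf("offset of record %lld: %lld bytes\n", rec, offset);
        return 0;
    }

[As long as the actual seek is done with a 64-bit file offset (a 64-bit
system or large-file support), a read past the 2 GB mark can succeed, which
matches the observation that the same file would fail on a 32-bit system.]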