>To: address@hidden
>From: John Caron <address@hidden>
>Subject: 500MB file
>Organization: .
>Keywords: 199712081818.LAA05231
Hi John,
> I have a 500MB file, which ncdump is hanging on when I say:
> ncdump -v time <name>
>
> Are there any problems with files of that size?
No problem that I know of. Files larger than 2 Gbytes may cause a
problem, though, because the classic netCDF format stores each
variable's starting offset as a signed 32-bit integer, which leaves
only 31 bits for the offset itself.
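If you want to check how close a file is to that limit, a quick
sketch like the following would do; note that NC_CLASSIC_MAX_OFFSET
is just a local name for 2**31 - 1 here, not a constant from
netcdf.h:

    #include <stdio.h>
    #include <sys/stat.h>

    /* Classic netCDF files store offsets as signed 32-bit values, so
     * nothing past 2**31 - 1 bytes is addressable.  This macro is
     * local to this sketch, not part of netcdf.h. */
    #define NC_CLASSIC_MAX_OFFSET 2147483647L

    int main(int argc, char **argv)
    {
        struct stat sb;

        if (argc != 2) {
            fprintf(stderr, "usage: %s file.nc\n", argv[0]);
            return 1;
        }
        if (stat(argv[1], &sb) != 0) {
            perror("stat");
            return 1;
        }
        printf("%s: %ld bytes, %s the classic-format offset limit\n",
               argv[1], (long) sb.st_size,
               (long) sb.st_size > NC_CLASSIC_MAX_OFFSET
                   ? "over" : "within");
        return 0;
    }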
> Any way to confirm possible corruption?
There's no checksum or similar redundancy in netCDF files, so corruption
may not be detectable.
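About the best you can do is try to read the data and see whether any
read fails. Here is a rough sketch, using the netCDF-3 C interface,
that reads the last element of every variable; a truncated file tends
to fail first at the far end, so this exercises the offsets most
likely to be bad. (The program is illustrative, not a Unidata tool,
and assumes a classic-format file.)

    #include <stdio.h>
    #include <netcdf.h>

    int main(int argc, char **argv)
    {
        int ncid, nvars, status;

        if (argc != 2) {
            fprintf(stderr, "usage: %s file.nc\n", argv[0]);
            return 1;
        }
        status = nc_open(argv[1], NC_NOWRITE, &ncid);
        if (status != NC_NOERR) {
            fprintf(stderr, "nc_open: %s\n", nc_strerror(status));
            return 1;
        }
        status = nc_inq_nvars(ncid, &nvars);
        if (status != NC_NOERR) {
            fprintf(stderr, "nc_inq_nvars: %s\n", nc_strerror(status));
            return 1;
        }
        for (int varid = 0; varid < nvars; varid++) {
            char name[NC_MAX_NAME + 1];
            nc_type xtype;
            int ndims, dimids[NC_MAX_VAR_DIMS];
            size_t index[NC_MAX_VAR_DIMS], len;
            int empty = 0;

            status = nc_inq_var(ncid, varid, name, &xtype, &ndims,
                                dimids, NULL);
            if (status != NC_NOERR) {
                fprintf(stderr, "var %d: %s\n", varid,
                        nc_strerror(status));
                continue;
            }
            /* Index of the last element along each dimension. */
            for (int i = 0; i < ndims; i++) {
                status = nc_inq_dimlen(ncid, dimids[i], &len);
                if (status != NC_NOERR || len == 0) {
                    empty = 1;
                    break;
                }
                index[i] = len - 1;
            }
            if (empty) {
                printf("%s: no data, skipped\n", name);
                continue;
            }
            /* Read one value; character data needs the text call. */
            if (xtype == NC_CHAR) {
                char c;
                status = nc_get_var1_text(ncid, varid, index, &c);
            } else {
                double d;
                status = nc_get_var1_double(ncid, varid, index, &d);
            }
            printf("%s: %s\n", name,
                   status == NC_NOERR ? "ok" : nc_strerror(status));
        }
        nc_close(ncid);
        return 0;
    }

If every variable reads cleanly, the file is at least internally
consistent, though that still isn't proof against corruption in the
middle of the data.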
Can you just do
ncdump <name> > /dev/null
to verify that ncdump doesn't hang when you don't specify the variable
"time"? Redirecting the output to /dev/null makes ncdump read the
whole file without requiring any disk space for the output.
Alternatively, you might try
ncdump <name> | grep "}"
to verify whether ncdump hangs in this case. The closing "}" is the
last thing ncdump prints, so if grep finds it, ncdump made it through
the entire file.
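You could also take ncdump out of the picture and read the "time"
variable directly through the C library; if a small program like this
one also hangs, the problem is in the file or the library rather than
in ncdump. This sketch assumes "time" is a one-dimensional numeric
variable:

    #include <stdio.h>
    #include <stdlib.h>
    #include <netcdf.h>

    int main(int argc, char **argv)
    {
        int ncid, varid, dimid, status;
        size_t len;
        double *vals;

        if (argc != 2) {
            fprintf(stderr, "usage: %s file.nc\n", argv[0]);
            return 1;
        }
        /* Short-circuits on the first failing call. */
        if ((status = nc_open(argv[1], NC_NOWRITE, &ncid)) != NC_NOERR
            || (status = nc_inq_varid(ncid, "time", &varid)) != NC_NOERR
            || (status = nc_inq_vardimid(ncid, varid, &dimid)) != NC_NOERR
            || (status = nc_inq_dimlen(ncid, dimid, &len)) != NC_NOERR) {
            fprintf(stderr, "netCDF error: %s\n", nc_strerror(status));
            return 1;
        }
        vals = (double *) malloc(len * sizeof(double));
        if (vals == NULL) {
            fprintf(stderr, "out of memory\n");
            return 1;
        }
        /* nc_get_var_double converts any numeric type to double. */
        status = nc_get_var_double(ncid, varid, vals);
        printf("read %lu time values: %s\n", (unsigned long) len,
               status == NC_NOERR ? "ok" : nc_strerror(status));
        free(vals);
        nc_close(ncid);
        return 0;
    }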
If you could make the file available for FTP, we could try to reproduce
and diagnose the problem.
--Russ
_____________________________________________________________________
Russ Rew UCAR Unidata Program
address@hidden http://www.unidata.ucar.edu