Re: 20050302: netCDF General - Large file problem
- Date: Thu, 03 Mar 2005 10:04:46 -0700
>To: address@hidden
>From: "Jim Cowie" <address@hidden>
>Subject: netCDF General - Large file problem
>Organization: RAL
>Keywords: 200503022003.j22K3ZjW025634
Jim,
> Actually, that large sample file compresses pretty well, so I
> have put it up for you at:
>
> ftp.rap.ucar.edu (anonymous)
> cd /pub/dicast
> get gfs00_dmos_emp.20050217.0040.nc.gz
>
> Also, I installed 3.5.1 and built it with the -D_FILE_OFFSET_BITS=64
> -D_LARGEFILE_SOURCE switches, and now I get an assertion from
> ncdump and my C++ app, so I'm sure I'm getting the new library:
>
> ncx.c:1773: ncx_get_off_t: Assertion `(*cp & 0x80) == 0' failed.
>
> This happens when I try to read the 3.1GB file but not with
> a file smaller than 2GB.
OK, I got it, uncompressed it, and have verified there is a problem
even trying to read a variable in the last third of the file with the
latest ncdump:
$ ncdump -v cprob_snow gfs00_dmos_emp.20050217.0040.nc > snow.cdl
Assertion failed: offset >= 0, file posixio.c, line 366
Abort(coredump)
I'm going to try to figure out what is going wrong, and whether this
is just a bug in ncdump or a bug in the netCDF library, although it
sounds like it may be the latter from what you have tried with the C++
interface. It is also important to determine whether the bug affects
only reading the data, in which case there may be a workaround. I'll
let you know when I have more information. We have run tests that
successfully write and access data beyond 2 GiB with 3.5.1, so I need
to also determine under what particular circumstances the bug occurs.
I have to leave this afternoon, but will consider this a high priority
until I can diagnose the problem.
Thanks for reporting this!
--Russ