This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
> Unidata netCDF Support wrote:
> > Hi Jan,
> >
> >> I would like to inquire on the status of my support request regarding a
> >> failure of the netCDF library that produces the error message
> >>
> >>   wrf.exe: posixio.c:232: px_pgin: Assertion `*posp == ((off_t)(-1)) ||
> >>   *posp == lseek(nciop->fd, 0, 1)' failed.
> >>
> >> What is the cause of this failure?
> >
> > I'm sorry it's taken us so long to respond to your support request from
> > 26 days ago; it seems to have slipped through the cracks in our support
> > system.
> >
> > A previous time we saw this assertion violation, it was diagnosed as a
> > problem with AIX returning an erroneous value from an lseek() call when
> > it was unable to write a large file, due to file system misconfiguration:
> >
> >   http://www.unidata.ucar.edu/support/help/MailArchives/netcdf/msg02656.html
> >
> > We saw this assertion violation in another case, on an HP-UX platform,
> > when it was trying to write a file larger than the 2 GiB limit of an
> > older version of the netCDF library.
> >
> > From that analysis, I suspect the file system on which you're trying to
> > write this file on your Linux 2.6.18-92.1.13.el5 SMP x86_64 platform
> > might not be properly configured for writing files larger than 2 GiB.
> >
> > Can you try running the following commands to write a large file on the
> > same file system you were using when you got the error? If it was
> > remotely mounted from a server rather than a local file system, please
> > try to use it in the same way when running these commands, just to
> > verify whether you can write large files:
> >
> >   dd if=/dev/zero bs=1000000 count=3000 of=./largefile
> >   ls -l largefile
> >   rm largefile
> >
> > That should write a 3 GByte file named "largefile" in the current
> > directory, verify its size, and remove it.
> >
> > If that works, then I would suggest upgrading to netCDF 3.6.3 or 4.0.1
> > and seeing if the problem still exists.
> > (4.0.1 built without --enable-netcdf4 will still support your netCDF-3
> > programs and files by default.)
> >
> > Otherwise, if you could provide us with a small program that
> > demonstrates the assertion violation, we could try to reproduce it here
> > to diagnose the problem more completely, although we don't have access
> > to the exact platform on which you are encountering the problem.
> >
> > --Russ
>
> Hi Russ,
>
> Thanks for the response. I did the test, and the system is able to write
> large files. Unfortunately, I can't provide test code that reproduces the
> problem reliably, because it does not occur every time in my production
> code. As a temporary workaround, I will split the output into files
> smaller than 2 GB, which fortunately is easy to do, and see if the
> problem still occurs. If it does, I will compile a more recent version of
> the netCDF library on the production machine, as you suggest. The problem
> does not occur on my workstation, where I am using netCDF 4.0, but that
> is also a different architecture.
>
> Jan

Howdy Jan,

I strongly suggest that you upgrade to the latest version of netCDF and
see if this problem goes away.

Thanks,

Ed

Ticket Details
===================
Ticket ID: GCB-165173
Department: Support netCDF
Priority: Critical
Status: Closed