[netCDF #XRV-955693]: core dump preparing a moderate size netcdf file
- Date: Tue, 30 Jun 2009 14:57:59 -0600
Hi Stephen,
> As I have problems writing a moderate-size netCDF file, I decided
> to try one of your examples, pres_temp_4D_wr.f90. I compiled this
> code on our AIX POWER6 machine, and with the parameter settings you
> provide with the code there is no problem. The problem starts when I
> increase the dimensions of the arrays to something more interesting: in
> my case nlats = 300, nlons = 300, and nlvls = 1738. This causes a core
> dump.
Hmmm, I just tried this and it worked fine, writing a netCDF "classic"
format file of about 2.5 Gbytes:
-rw-r--r-- 1 russ ustaff 2502722808 Jun 30 14:39 ptw.nc
You should be able to write such a file with no special flags from any
version of netCDF since about 3.5.1. Check the possible causes in the
answer to this FAQ:
Why do I get an error message when I try to create a file larger than
2 GiB with the new library?
http://www.unidata.ucar.edu/netcdf/docs/faq.html#Large%20File%20Support12
You don't need a 64-bit library (or a 64-bit computer) in order to write
larger (netCDF 64-bit offset) files, as explained here:
http://www.unidata.ucar.edu/netcdf/docs/faq.html#Large%20File%20Support7
So you should be able to create even larger files using the 64-bit offset
format by using the correct creation mode in the call to nf90_create, as
in:
  ! Create the file, using the 64-bit offset format.
  call check( nf90_create(FILE_NAME, ior(nf90_noclobber, nf90_64bit_offset), &
                          ncid) )
> No special compilation flags (on our machine I use the IBM compiler,
> so xlf90); however, in order to make use of the maximum RAM I edit the
> executable with /usr/ccs/bin/ldedit -bmaxdata:0x60000000 name.of.executable
You should not need larger RAM, but you have to have a file system that
supports large files, as explained in the first FAQ above.
> The question I have is: do I hit a limit within netCDF itself?
>
> To my knowledge we are running a standard netCDF library (so not the
> one for 64 bits).
Do you know what version library you have? If it's pre-3.6 (about Feb 2005),
there were some problems with large file support that were fixed in later
releases.
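If you're not sure which version you have, the library can report its own
version string at run time. A minimal sketch, assuming the Fortran-90
interface is installed and the program is linked against the netCDF library
(e.g. with -lnetcdf; the exact link flags depend on your installation):

  program netcdf_version
    use netcdf            ! Fortran-90 netCDF module
    implicit none
    ! nf90_inq_libvers returns the version string of the linked library,
    ! e.g. one beginning with "3.6" for a 3.6-series release.
    print *, trim(nf90_inq_libvers())
  end program netcdf_version

Alternatively, "ncdump" from the same installation prints its library
version with the -h option applied to any file, or via "nc-config --version"
in newer releases.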
--Russ
Russ Rew UCAR Unidata Program
address@hidden http://www.unidata.ucar.edu
Ticket Details
===================
Ticket ID: XRV-955693
Department: Support netCDF
Priority: Normal
Status: Closed