Re: [Fwd: Re: netCDF file question]
- Subject: Re: [Fwd: Re: netCDF file question]
- Date: Tue, 11 May 2004 08:19:29 -0600
Wei,
> > Were you able to successfully write large files? If you can write
> > them successfully, then the library is OK and you should also be able
> > to read them. If you can't write them successfully, what was the
> > error you got?
>
> I am not sure if the file was successfully written. I can see the file
> got to the size that is bigger than 2 Gb - without the option, the model
> will not write a file that is that big. However, I cannot read it back,
> at least the complete fields inside it - that makes me wonder if it
> is written successfully. I will have to go back and check
> if it has problems writing the files.
The actual size limit without large file support is 2**31 or
2147483648 bytes, about 2.15 Gbytes, so the file would have to be
bigger than this to show it was successful in writing a large file.
In case it helps, I've appended a small test program in C that writes
a 3 Gbyte netCDF file. If it succeeds, it prints nothing, and you get
a file in the directory from which it was invoked that's larger than 3
Gbytes:
-rw-r--r-- 1 russ ustaff 3000000096 May 11 08:11 big.nc
If it fails because it's linked with a library not compiled to support
large files, or run on a file system not configured to support large
files, it will print an error message something like:
line 79 of lfs.c: Invalid argument
after creating a file near the limit in size:
-rw-r--r-- 1 russ ustaff 2147475456 May 11 07:57 big.nc
You could try compiling this, linking it against the netCDF library on
bluesky, and letting us know if it gets an error or successfully
writes a 3 Gbyte file.
To test whether you can read such a file, you can try the "ncdump"
utility, compiled with the same netCDF library you are using, on the 3
Gbyte netCDF file big.nc. If it succeeds, you will see something
like:
$ /home/russ/netcdf/build/buddy64-cdf2/bin/ncdump -h big.nc
netcdf big {
dimensions:
n = 1000000 ;
r = UNLIMITED ; // (750 currently)
variables:
float x(r, n) ;
}
If it fails because ncdump was built with a netCDF library compiled
without the flags needed for large file support, you will get an
error message something like:
$ /local/bin/ncdump -h big.nc
/local/bin/ncdump: big.nc: Value too large for defined data type
--Russ
#include <stdio.h>
#include <stdlib.h>
#include <netcdf.h>

void
check_err(const int stat, const int line, const char *file) {
    if (stat != NC_NOERR) {
        (void) fprintf(stderr, "line %d of %s: %s\n", line, file,
                       nc_strerror(stat));
        exit(1);
    }
}

#define NFIXED 1000000
#define NRECS 750       /* 100 -> 400 MB, 500 -> 2GB, 750 -> 3GB */

int
main() {                /* create big.nc */
    int ncid;           /* netCDF id */

    /* dimension ids */
    int n_dim;
    int r_dim;

    /* dimension lengths */
    size_t n_len = NFIXED;
    size_t r_len = NC_UNLIMITED;

    /* variable ids */
    int x_id;

    /* rank (number of dimensions) for each variable */
#define RANK_x 2

    /* variable shapes */
    int x_dims[RANK_x];

    /* enter define mode */
    int stat = nc_create("big.nc", NC_CLOBBER, &ncid);
    check_err(stat,__LINE__,__FILE__);

    /* define dimensions */
    stat = nc_def_dim(ncid, "n", n_len, &n_dim);
    check_err(stat,__LINE__,__FILE__);
    stat = nc_def_dim(ncid, "r", r_len, &r_dim);
    check_err(stat,__LINE__,__FILE__);

    /* define variables */
    x_dims[0] = r_dim;
    x_dims[1] = n_dim;
    stat = nc_def_var(ncid, "x", NC_FLOAT, RANK_x, x_dims, &x_id);
    check_err(stat,__LINE__,__FILE__);

    /* leave define mode */
    stat = nc_enddef(ncid);
    check_err(stat,__LINE__,__FILE__);

    {   /* store x */
        static size_t x_start[RANK_x];
        static size_t x_count[RANK_x];
        float x[NFIXED];
        int i;
        int r;

        r_len = NRECS;  /* number of records of x data */
        for (i = 0; i < NFIXED; i++) {
            x[i] = i;
        }
        for (r = 0; r < NRECS; r++) {
            x_start[0] = r;
            x_start[1] = 0;
            x_count[0] = 1;
            x_count[1] = n_len;
            stat = nc_put_vara_float(ncid, x_id, x_start, x_count, x);
            check_err(stat,__LINE__,__FILE__);
        }
    }

    stat = nc_close(ncid);
    check_err(stat,__LINE__,__FILE__);
    return 0;
}