
Re: Benchmarking for netCDF

  • Subject: Re: Benchmarking for netCDF
  • Date: Mon, 29 Jan 2001 09:45:01 -0700

Hi Paul,

> A search of your website did not reveal any performance
> figures for the various versions of netCDF. Do you have
> any benchmarks that you run to check the upgrades made?
> If so, would you be willing to forward these numbers to
> me? I'm interested in benchmarks for the latest version,
> 3.5-beta6, and any more recent versions you have.

Just timing "make test" after building the library and utilities from
source serves as a crude check that we haven't made anything slower,
but improvements typically don't show up in that coarse benchmark,
which is probably dominated by process startup time.

There is an old timing benchmark I developed that is still included
in the distribution, in nctest/nctime.c, but it's not compiled by
default.  I haven't checked it lately, because I know the performance
improvements we made in netCDF-3.5 were for special circumstances
that it doesn't test, namely

 - making the nc__enddef() function work as intended to reserve extra
   space in the file header, so that attributes and variables can
   later be added without copying all the data to a new file (see
   the sketch below).
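
As a minimal sketch (not part of the original message or the netCDF
distribution, and assuming the standard netCDF-3 C API), here is how
a program might call nc__enddef() to pad the header at creation time,
then later reopen the file and add an attribute without forcing the
library to copy all the data; the filename, padding size, and
attribute are illustrative:

    /* sketch.c -- reserve header space with nc__enddef() so metadata
     * can be added later without rewriting the data section.
     * Build (library path may differ):  cc sketch.c -lnetcdf
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <netcdf.h>

    static void check(int status)
    {
        if (status != NC_NOERR) {
            fprintf(stderr, "netCDF error: %s\n", nc_strerror(status));
            exit(EXIT_FAILURE);
        }
    }

    int main(void)
    {
        int ncid, dimid, varid;

        /* Create a file with one dimension and one variable. */
        check(nc_create("padded.nc", NC_CLOBBER, &ncid));
        check(nc_def_dim(ncid, "x", 1000, &dimid));
        check(nc_def_var(ncid, "data", NC_DOUBLE, 1, &dimid, &varid));

        /* Instead of nc_enddef(), call nc__enddef() with a nonzero
         * h_minfree to leave 4096 bytes of free space after the
         * header; the alignment arguments keep their usual values. */
        check(nc__enddef(ncid, 4096, 4, 0, 4));
        check(nc_close(ncid));

        /* Later: reopen, re-enter define mode, and add a global
         * attribute.  Because header space was reserved, the header
         * can grow in place instead of the data being copied. */
        check(nc_open("padded.nc", NC_WRITE, &ncid));
        check(nc_redef(ncid));
        check(nc_put_att_text(ncid, NC_GLOBAL, "history",
                              strlen("padded example"), "padded example"));
        check(nc_enddef(ncid));
        check(nc_close(ncid));
        return 0;
    }

Without the reserved space, the nc_enddef() after nc_redef() would
have to move the entire data section to make room for the larger
header, which is the copying cost the netCDF-3.5 change avoids.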

--Russ