This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
>To: address@hidden
>From: Mark Davis <address@hidden>
>Subject: netCDF and Windows
>Organization: .
>Keywords: 199610311812.AA23503

Hi Mark,

> I am currently porting my app to all Microsoft Windows platforms
> from X/Motif... Not my idea of progress... Anyway, I use the netCDF
> file format for elevation data that is stored as short integers
> (2 byte). I originally downloaded the 2.3.2 libraries that you have
> compiled for MSDOS and recompiled them for Windows 16-bit...
> Everything works fine except ncvarget. What happens is that the
> values returned to me are incorrect, yet when I dump the file the
> values are OK. My dataset is 1200x1200, and the strangest thing
> happens when I try to load a hyperslab of 100x100: the first
> [0][0-100] come back correct and the rest of the values are
> incorrect... Neat feature... I have done some digging and have
> found that the xdr library might be the cause. I know xdr is not
> really supported on DOS/Windows, and that is why you so generously
> included the ported xdr, but I believe that it mistreats short
> integers... So far I have no solution.

You may be running into a problem that's documented in a footnote to
the ncvarget and ncvarput documentation in the NetCDF User's Guide:

  The current implementation of XDR on MSDOS systems restricts the
  amount of data accessed to no more than 64 Kbytes for each call to
  ncvarput (or NCVPT or NCVPTC for FORTRAN).

This restriction should really say "16-bit systems" rather than
"MSDOS systems", but it means you would only be able to access about
32000 2-byte integers in a single call. But 100x100 = 10000 integers
should be OK, so this may not really be the problem.

I know the test code in nctest does varput and varget on arrays of
short integers and checks that the expected values are retrieved, but
it doesn't test large arrays. Does nctest run correctly with your
2.3.2 library?
If nctest fails, that indicates a bad port; if it runs correctly,
this may indicate a bug we haven't seen.

> What I have done is go ahead and move to version 2.4.3, and I was
> going to port that to the different Windows platforms, since I know
> you do not have those capabilities due to the lack of PC
> equipment. I have kept good records of what I have updated in the
> 2.4.3 libraries, and most of it was just adding the #define
> statements for each different Windows platform to the include
> files. I did notice, though, that there are hundreds upon hundreds
> of warnings using Borland C++ 5.01. Most of the warnings can be
> ignored, since they are of the form "no prototype for function
> (x)". Some are a little more serious, and I have made the necessary
> provisions to remove them. Now that I have the libraries, xdr and
> netcdf, compiled (in the large memory model for Win16), every time
> I try to access one of my data files from 2.3.2 I get the message
> from ncopen that "this is not a netCDF file". If I am not mistaken,
> you supply backward compatibility... Correct?

Yes, there has been no change to the file format, so the netCDF 2.4.3
library should be able to read any file generated by the netCDF 2.3.2
library (or even the netCDF 1.0 library!).

> I looked up the error, and it seems to happen if the MAGIC number
> is not correct... Did you change the MAGIC number??? What did I
> miss or do incorrectly??? Any information provided would be greatly
> appreciated...

The "magic number" is just the first four bytes of the file, which
should be 'C', 'D', 'F', and a VERSION_BYTE for the file format
version (not the library version) that is currently 1, so the fourth
byte is '\001'. If the first four bytes are 'D', 'C', '\001', 'F',
for example, that would indicate swapped bytes, probably pointing to
an XDR problem.

> Also I will be more than happy to supply you with the diff files
> and the new libraries once I have them working 100%.
> Just for your information too, when you increased the size of
> NC_MAX_?????? you induced a bug for DOS/Windows environments, since
> they have a very limited local stack segment... So now the nctest
> program will not compile, since in many functions you exceed this
> local stack space... Declaring them static does clear up some of
> the errors, but if you declare them all static you will run out of
> space in DGROUP. This is something that needs to be looked into
> when there is time... I will see what I can do if you wish to point
> me in the right direction...

Isn't there a compiler flag that can be used to increase the stack
size? My vague recollection is that we had to use something like
that, but we did succeed in running nctest on an MSDOS platform with
Microsoft compilers and the higher NC_MAX_* limits, so maybe this
problem is compiler-specific. Otherwise: I'm sure that nctest doesn't
get anywhere close to any of these maximums, and expediency probably
explains why I used those limits in the nctest code. Maybe the
easiest solution would be to find out where they're used and lower
them to what's really needed.

> ... Please let me know on the information regarding the MAGIC
> number, and also if you would be interested in me providing you
> with the diff and library files...

Before you do too much work on the 2.4.3 libraries: we're actually
working on netCDF version 3 now and have made source and binaries
available for a netCDF-3 prerelease (alpha release) of the
C-interface-only part of the library. The prerelease includes a
completely new implementation of nctest that tests the library
interfaces much more rigorously. The netCDF-3 C library has already
been ported to Win32 platforms, and it is independent of any vendor
XDR libraries (we implement XDR completely portably within the
library), so it's easier to port. Getting rid of the dependence on
XDR also improved performance significantly.
So I guess I'd recommend that if you are still having problems with
2.4.3, it might be worthwhile to pick up the new alpha release and
try it instead. It supports complete backward compatibility with the
netCDF-2 interfaces and file format, so you should be able to link
old applications against it and just run with the old files. If it
requires any modifications for DOS/Windows that we haven't already
incorporated, we would be very interested in getting your diffs.

You can check out the prerelease notes at

  http://www.unidata.ucar.edu/packages/netcdf/3.1a.html

download the source from

  ftp://ftp.unidata.ucar.edu/pub/netcdf/netcdf-3.1a.tar.Z

download a binary release for 32-bit Windows platforms from

  ftp://ftp.unidata.ucar.edu/pub/netcdf/nc31a.zip

or get the source and .mak files for a Win32 DLL from

  ftp://ftp.unidata.ucar.edu/pub/netcdf/nc31a.Win32DLL.zip

--Russ

_____________________________________________________________________
Russ Rew                                         UCAR Unidata Program
address@hidden                           http://www.unidata.ucar.edu