Steve,

I looked at the ACARS decoding, and there are 2 separate issues for duplicates that I have found. For example, I have the Tail ID FSL01313 today at 1343Z:

    Station FSL01313 : 969630180 000922/1343 Lat: 46.8167 Lon: 3.1000 Elev: 9387.84
    Station FSL01313 : 969630180 000922/1343 Lat: 46.8833 Lon: 3.0833 Elev: 9204.96

There are 2 reports from this plane in 1 minute. The time of these observations is identical down to the second; in reality, the time is only given to the whole minute. However, the location and elevation are different, so we want to keep both of these reports... but filter out duplications of these reports in subsequent netCDF files.

First, the problem of roundoff will exist with subsequent reception of this file. In write_gempak.c, I have a duplicate test, so that if a station exists at the same time, it is treated as the same report if the latitude and longitude are identical to +/- .001 degrees. However, slat and slon are stored by SF_WSDD() to the nearest .01 degrees, so my test is too stringent for this. You can change this from:

    if((latdif < .0010)&&(londif < .0010))

to:

    if((latdif < .0100)&&(londif < .0100))

Now, the second problem is that once both observations are in the ship file for the same station name and time, SF_STIM followed by SF_SSTN will always return the first instance. So the duplication of the first report will be caught, but the second report at the same time will be added again. This problem will only occur for aircraft with more than 1 observation per minute. I will probably have to recode this to check all reports at a given time, to see whether the station exists more than once. That should be a smaller fraction of the duplicates, so changing the line of code above should cut down on the majority of them. I will let you know when I have a fix for this.

Steve Chiswell
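
For illustration, here is a minimal C sketch of the loosened duplicate test described above. The wrapper function and its parameter names are hypothetical; only the latdif/londif comparison against the .01-degree tolerance comes from the message.

    #include <math.h>

    /* Sketch of the position-match test: treat two reports for the same
     * station name and time as duplicates when their positions agree to
     * within the precision at which SF_WSDD() stores slat/slon (.01 degree).
     * This wrapper is illustrative, not the actual write_gempak.c code. */
    static int same_position(float lat1, float lon1, float lat2, float lon2)
    {
        float latdif = fabsf(lat1 - lat2);
        float londif = fabsf(lon1 - lon2);

        /* Loosened from .0010 to .0100 to match the stored precision. */
        return (latdif < .0100) && (londif < .0100);
    }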
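
A sketch of the second fix described above: scan every report already filed at a given time rather than relying on the first match that SF_STIM/SF_SSTN returns. The report structure and array-based lookup are hypothetical stand-ins for the GEMPAK surface-file calls; only the check-all-reports-at-one-time logic comes from the message.

    #include <math.h>
    #include <string.h>

    /* Hypothetical in-memory view of the reports already filed at one time;
     * the real decoder would read these back through the GEMPAK SF library. */
    struct acars_report {
        char  stid[9];      /* tail ID, e.g. "FSL01313" */
        float slat, slon;   /* stored to the nearest .01 degree */
    };

    /* Return nonzero if the incoming report duplicates ANY report already
     * stored for the same station name at this time, not just the first one
     * a station search would return. */
    static int already_filed(const struct acars_report *incoming,
                             const struct acars_report *stored, int nstored)
    {
        int i;
        for (i = 0; i < nstored; i++) {
            if (strcmp(incoming->stid, stored[i].stid) != 0)
                continue;                        /* different aircraft */
            if (fabsf(incoming->slat - stored[i].slat) < .0100f &&
                fabsf(incoming->slon - stored[i].slon) < .0100f)
                return 1;                        /* duplicate position */
        }
        return 0;   /* same tail ID but a new position: keep the report */
    }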