20041220: IDD data requests from pp42 to atm.geo.nsf.gov (cont.)
- Subject: 20041220: IDD data requests from pp42 to atm.geo.nsf.gov (cont.)
- Date: Mon, 20 Dec 2004 15:14:11 -0700
>From: "Yoshihiro Yamasaki" <address@hidden>
>Organization: Universidade de Aveiro
>Keywords: 200412201553.iB8F1MlI015376 IDD NIMAGE clock
Hi Yoshihiro,
re: feed requests
>Actually I have set:
>
>1. - atm77.fis.ua.pt --> CONDUIT
>2. - pp42.fis.ua.pt --> HDS ; IDS|DDPLUS ; NIMAGE
I have allowed both machines to request all of these feeds from
atm.geo.nsf.gov, so you can now pick and choose which machine
ingests which feed. A sketch of the allow entries is below.
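For the record, the relevant allow entries in ~ldm/etc/ldmd.conf on
atm.geo.nsf.gov now look something like the following (a sketch, not
the literal entries):

allow CONDUIT|HDS|IDS|DDPLUS|NIMAGE ^atm77\.fis\.ua\.pt$
allow CONDUIT|HDS|IDS|DDPLUS|NIMAGE ^pp42\.fis\.ua\.pt$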
>Looking at the clocks of pp42 and atm77, they seem OK, although
>they are not being set by ntpdate, because I could not manage my
>proxy restriction.
I stand corrected. The clock on pp42 looks OK. The clock on atm77,
however, looks like it is off by 1000 seconds or so.
The latency plots indicate a substantial difference in times between
atm77 and atm.geo.nsf.gov:
atm77.fis.ua.pt feeding CONDUIT from atm.geo.nsf.gov:
http://my.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+atm77.fis.ua.pt
I was disturbed that the lowest latency for the CONDUIT data was
something over 1000 seconds. Given the traceroute times from
atm.geo.nsf.gov to atm77.fis.ua.pt:
/local/ldm% traceroute atm77.fis.ua.pt
traceroute to atm77.fis.ua.pt (193.137.81.77), 30 hops max, 40 byte packets
1 colima.nsf.gov (198.181.231.1) 1 ms 1 ms 0 ms
2 arlg-nsf.maxgigapop.net (206.196.177.137) 1 ms 1 ms 5 ms
3 dcne-so3-0-0.maxgigapop.net (206.196.178.42) 1 ms 2 ms 1 ms
4 abilene-rtr.maxgigapop.net (206.196.177.2) 10 ms 1 ms 1 ms
5 abilene.de2.de.geant.net (62.40.103.253) 95 ms 95 ms 96 ms
6 * de.it1.it.geant.net (62.40.96.62) 105 ms 104 ms
7 it.es1.es.geant.net (62.40.96.185) 127 ms 156 ms 127 ms
8 es.pt1.pt.geant.net (62.40.96.78) 121 ms 160 ms 119 ms
9 fccn-gw.pt1.pt.geant.net (62.40.103.178) 119 ms 119 ms 119 ms
10 ROUTER8.GE.Lisboa.fccn.pt (193.137.0.8) 119 ms 119 ms 119 ms
11 ROUTER11.GE.Lambda.Porto.fccn.pt (193.137.1.242) 124 ms 124 ms 124 ms
12 Router15.FE0-0-1.2.Porto.fccn.pt (193.137.4.19) 124 ms 124 ms 124 ms
13 ROUTER4.FE.Porto.fccn.pt (193.136.4.10) 126 ms 126 ms 127 ms
14 UA.Aveiro.fccn.pt (193.136.4.114) 127 ms 126 ms 127 ms
15 fw1-ext.core.ua.pt (193.137.173.253) 127 ms 126 ms 127 ms
16 gt-cicua.core.ua.pt (193.136.86.193) 127 ms 127 ms 127 ms
17 * * *
18 atm77.fis.ua.pt (193.137.81.77) 127 ms 128 ms 127 ms
I would expect to see the CONDUIT latencies decrease to numbers much
smaller than 1000 seconds. Also, if the latency really were about 1000
seconds, I would expect to see variation both above and below that
value. Instead, there are no latencies below a hard lower limit of
about 1000 seconds. Since IDD latency is computed as the difference
between the time a product was inserted into the queue upstream and
the time it is received downstream, a constant floor like this is the
signature of a clock offset rather than of network delay.
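Regarding the ntpdate problem: if your proxy/firewall is blocking
standard NTP, it might be worth trying ntpdate's unprivileged-port
mode (just a suggestion; 'pool.ntp.org' below is a stand-in for any
NTP server you can actually reach):

ntpdate -q pool.ntp.org    <- query only; reports the clock offset
                              without setting anything
ntpdate -u pool.ntp.org    <- sets the clock using an unprivileged
                              source port, which sometimes gets through
                              firewalls that block port 123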
As far as pp42 goes, the latest latency plots for the IDS|DDPLUS feed
show that its clock looks correct.
pp42.fis.ua.pt feeding IDS|DDPLUS from atm.geo.nsf.gov:
http://my.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?IDS|DDPLUS+pp42.fis.ua.pt
>Finally, concerning the NIMAGE, you are right, people
>here are more interested in a global composite image.
OK. There are three Northern Hemisphere composite images in the NIMAGE
datastream:
24 km VIS
24 km IR
24 km WV
You could request just those images with the following ~ldm/etc/ldmd.conf
request line:
request NIMAGE "NHEM-MULTICOMP" atm.geo.nsf.gov
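(Remember that the LDM must be restarted -- e.g., with 'ldmadmin
restart' -- before a change to ldmd.conf takes effect.)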
This would save you and atm.geo.nsf.gov a good bit of bandwidth: the
North American VIS sectors are on the order of 17 MB each; four are
sent each hour for each satellite; and they are sent for both
GOES-East and GOES-West. The total savings for the VIS products alone
would therefore be about 17 MB x 4 per hour x 2 satellites = 136
MB/hour.
Also, there are global composite IR and WV images in the UNIWISC
IDD stream. You could get just those images using the following
~ldm/etc/ldmd.conf request line:
request UNIWISC "pnga2area Q. U[XY]" atm.geo.nsf.gov
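If you want to see what that pattern will match before committing to
the request, you can run notifyme against the upstream host:

notifyme -vl- -h atm.geo.nsf.gov -f UNIWISC -p "pnga2area Q. U[XY]"

This prints the headers of matching products to your terminal without
transferring the data itself.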
If you will be using GEMPAK to view/use the images, you can save
them unaltered on disk, as GEMPAK can use the compressed images
directly. Here are example ~ldm/etc/pqact.conf actions that
FILE the images:
# UW Mollweide composite (IR and WV) -- raw
# NB: the fields below must be separated by actual tab characters,
# and each continuation line must begin with a tab
UNIWISC	^pnga2area Q. (UX) (.*) (.*)_IMG (.*) (.*) (........) (....)
	FILE	data/gempak/images/sat/MOLLWEIDE/raw/IR/IR_\6_\7
UNIWISC	^pnga2area Q. (UY) (.*) (.*)_IMG (.*) (.*) (........) (....)
	FILE	data/gempak/images/sat/MOLLWEIDE/raw/WV/WV_\6_\7
If you want the images to be converted back into their original AREA
format, you will need to uncompress them using the ldm-mcidas
utility 'pnga2area'. Example ~ldm/etc/pqact.conf actions to decode these
images would look something like:
# UW Mollweide composite (IR & WV)
# NB: as above, fields must be tab-separated and continuation
# lines must begin with a tab
UNIWISC	^pnga2area Q. (UX) (.*) (.*)_IMG (.*) (.*) (........) (....)
	PIPE	-close
	pnga2area -vl logs/ldm-mcidas.log
	-a etc/SATANNOT
	-b etc/SATBAND
	data/gempak/images/sat/MOLLWEIDE/24km/IR/IR_\6_\7
UNIWISC	^pnga2area Q. (UY) (.*) (.*)_IMG (.*) (.*) (........) (....)
	PIPE	-close
	pnga2area -vl logs/ldm-mcidas.log
	-a etc/SATANNOT
	-b etc/SATBAND
	data/gempak/images/sat/MOLLWEIDE/24km/WV/WV_\6_\7
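After adding pqact.conf entries, it is a good idea to verify the
syntax and then tell the running pqact to reread its configuration:

ldmadmin pqactcheck
ldmadmin pqactHUP

pqactcheck will catch the most common problem: spaces where tabs are
required.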
Thanks for your quick reply!
Cheers,
Tom
--
NOTE: All email exchanges with Unidata User Support are recorded in the
Unidata inquiry tracking system and then made publicly available
through the web. If you do not want to have your interactions made
available in this way, you must let us know in each email you send to us.