- Subject: 20050610: please change moingobe feed requests (cont.)
- Date: Sat, 11 Jun 2005 10:49:04 -0600
>From: Waldenio Almeida <address@hidden>
>Organization: INPE/CPTEC
>Keywords: 200506101506.j5AF6uZu006399 IDD LDM feed request
Hi Waldenio,
>I made the changes that you requested.
Thank you for the quick action! The only site left feeding from
emo is LNCC:
tcp4 0 44 emo.ldm trindade.lncc.br.55562 ESTABLISHED
>You can feed UBA from
>me. The "new" moingobe (Opteron) is working very well. :-)
>Unfortunately, I found a high traceroute time to UBA, ~ 1000 ms
OK, thanks. I just did a traceroute from the UBA machine to moingobe
and found that the times were on the order of 250 ms. This is not
great, but it is probably good enough for a backup.
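For reference, the kind of check I ran is just the following (the
hostname below is a placeholder for moingobe's fully qualified name):

    traceroute <moingobe-hostname>
    ping -c 10 <moingobe-hostname>

The times traceroute reports for the final hop, or ping's round-trip
times, are the ~250 ms figure I am quoting.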
>Last week David came here, and told me about the problem in
>the UFRJ network. He doesn't feed from atm anymore, so I will feed
>from there.
The UFRJ network connections are now very bad. The path from the US to
UFRJ goes through GEANT (the European high speed backbone), while the
path back to the US goes through Sprintlink. This asymmetric routing
makes feeding data to UFRJ hard. Because of this network change, CPTEC
is currently the only top level IDD feed site in IDD-Brasil. I
would like to be able to change this! If the network connection from
Buenos Aires to Sao Paulo were better, I would consider making UBA a
top tier site.
>We have compared the delays for CONDUIT feeds from UFRJ and
>CPTEC. We are getting the same high delays. It looks like we
>have packet-shaping in the communication to the US?!...
This could be the case, but I can't say more right now since I am
currently unable to get to the real-time statistics plots on
www.unidata.ucar.edu.
>Today I am finishing the documentation to get a new team
>working here. Soon I will have 4 people working with me. :-)
You are moving up fast! Congratulations!!
>From address@hidden Fri Jun 10 15:30:44 2005
>About the LNCC feed from the US: if there is some problem
>with 3 sites in Brazil feeding from the US, I can ask them
>to feed only from me, and you can close the LNCC allow.
There is no problem with three Brazilian sites feeding from IDD relays
in the US. However, I would greatly appreciate LNCC switching their
feed from emo.unidata.ucar.edu to idd.unidata.ucar.edu and never
feeding from emo again. Another reason for this is that the allow lines
on emo may be removed, and the machine may be used for something else.
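For concreteness: these are ALLOW entries in emo's LDM configuration
file (ldmd.conf). A hypothetical entry letting the LNCC machine feed
would look something like:

    ALLOW ANY ^trindade\.lncc\.br$

Once LNCC has switched over to idd.unidata.ucar.edu, entries like this
can be deleted from emo.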
>If there is no problem with more feeds directly to
>Brazil, there is no problem for me also :-)
There are no problems from our end.
>I asked them to feed only from CPTEC and UFRJ, but
>they get lower latencies directly from the US, and the
>door was open...
No worries. Just have them switch the machine they are feeding from.
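On their side the switch is a one-line edit to the REQUEST entry in
ldmd.conf (the feed type and pattern here are illustrative), followed
by an LDM restart:

    # before
    REQUEST ANY ".*" emo.unidata.ucar.edu
    # after
    REQUEST ANY ".*" idd.unidata.ucar.edu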
>Oh yes, the "old" moingobe had
>problems, so I was unable to deliver lots of data.
>Now it is different.
Excellent. I think I told you a little about this before, but in case
I didn't, I will repeat it here. We have been experimenting with
a cluster approach for a top level IDD relay: idd.unidata.ucar.edu.
The cluster currently consists of:
- a single 'director' front end, whose role is to distribute incoming
  IDD feed requests across the data server backend machines (sketched
  below)
- three data server backend machines
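To make the division of labor concrete, here is a minimal sketch in
Python of what a round-robin director could look like. This is only an
illustration of the idea, not our actual implementation, and the
backend hostnames are placeholders:

    # Hypothetical round-robin "director": accept incoming LDM feed
    # connections and hand each one off to the next backend data
    # server. Backend hostnames are placeholders; 388 is the
    # registered LDM port.
    import itertools
    import socket
    import threading

    BACKENDS = itertools.cycle([("backend1", 388),
                                ("backend2", 388),
                                ("backend3", 388)])

    def relay(src, dst):
        # Copy bytes one way until either side closes.
        try:
            while True:
                data = src.recv(65536)
                if not data:
                    break
                dst.sendall(data)
        finally:
            src.close()
            dst.close()

    def handle(client):
        # Pick the next backend and splice the two sockets together.
        server = socket.create_connection(next(BACKENDS))
        threading.Thread(target=relay, args=(client, server),
                         daemon=True).start()
        threading.Thread(target=relay, args=(server, client),
                         daemon=True).start()

    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("", 388))   # a port below 1024 needs privileges
    listener.listen(64)
    while True:
        conn, _addr = listener.accept()
        handle(conn)

A production director would hand connections off more efficiently than
this byte-copying proxy, but the round-robin idea is the same.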
Each data server is a dual Opteron SunFire V20Z (fast, but not as fast
as current models) with 12 GB of RAM. We increased the RAM from 4 GB
to 12 GB so that the LDM queue would always reside completely in
memory. One of these machines is able to handle at least 220
downstream connections (I say at least since we haven't tested with
more) while idling (not working hard at all). Our design will allow us
to feed literally every IDD connection request in the world without
running the data servers hard! On Thursday evening we started a
stress test of the cluster to see how much data we could send out
without introducing latencies. When I left yesterday afternoon, we
were sending out a peak of 780 Mbps! I am expecting that the average
output for the two-day period over this weekend will be on the order of
400-500 Mbps!! To put that in different terms, 100 Mbps is about 1 TB
per day, so we expect the cluster to be moving about 5 TB per day
during our test _without_ working hard. The only limit on how much
data we could move would be our network connection, which is currently
1 Gbps.
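To check the arithmetic: 100 Mbps x 86,400 seconds/day = 8.64 Tb/day,
or roughly 1.08 TB/day, so a 400-500 Mbps average works out to about
4.3-5.4 TB/day, and the 1 Gbps link caps us near 10.8 TB/day.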
>LNCC opened their firewall also, and now we have 3 relay
>sites here: us (CPTEC), UFRJ, and LNCC.
This sounds excellent! How good is the network connection at LNCC?
Cheers,
Tom
--
NOTE: All email exchanges with Unidata User Support are recorded in the
Unidata inquiry tracking system and then made publicly available
through the web. If you do not want to have your interactions made
available in this way, you must let us know in each email you send to us.