Re: 20011008: IDD latencies at PSU (cont.) address@hidden
- Subject: Re: 20011008: IDD latencies at PSU (cont.) address@hidden
- Date: Fri, 12 Oct 2001 09:43:51 -0600
Jeff,
That doesn't sound promising! Any other ways to get on your network
besides the OTC connectivity?
mike
On Oct 12, 11:17am, Jeff Wolfe wrote:
> Subject: Re: 20011008: IDD latencies at PSU (cont.)
> In message <address@hidden>, Mike Schmidt writes:
> > Jeff,
> >
> > We use iperf regularly for bandwidth testing. I've started servers
> > running on:
> >
> > motherlode.ucar.edu (LDM server system in question)
> > milton.unidata.ucar.edu (test system Anne suggested)
> >
> > and agree that it may be appropriate to bump the window size to 64K
> > or more given the circumstances.
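>
> If I'm reading the iperf flags right, the window probably has to be raised
> on the listening end as well when those servers are started, e.g. something
> like:
>
>   ./iperf -s -w 64K
>
> since otherwise the receive window advertised back to the sender stays at
> the default. I gave the client-side -w a try further down.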
> >
> > I'm not intimately familiar with the LDM code, but I'm relatively sure
> > that it doesn't do any sort of on-the-fly connection tuning. On our
> > end, we turn up the system defaults on the (Solaris) LDM server.
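>
> For what it's worth, I'm assuming "turning up the system defaults" on a
> Solaris box means something along the lines of:
>
>   ndd -set /dev/tcp tcp_xmit_hiwat 65536
>   ndd -set /dev/tcp tcp_recv_hiwat 65536
>
> with tcp_max_buf as the ceiling. If that's already in place on motherlode,
> then the limiting window in these tests is most likely on our end.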
> >
> > Are you working on the Web100 collaboration at PSC?
>
> No. In fact, I'm not even sure what that is. :)
>
> Networking here at PSU is divided between the College of Earth and Mineral
> Sciences and the University's IT organization. I work for the college, which
> provides intra-building and some inter-building connectivity to the various
> departments. The University provides a "Backbone" that we connect to, which
> supplies our I1 and I2 connections along with inter-college and
> inter-building connectivity. Among other things, that means we "don't get out
> much" to participate in projects like that. :(
>
> Since last night I've been rechecking all of the College's networks, and
> within our operation things seem to be functioning as expected. However,
> when we hit OTC's (the University IT people's) network we experience a
> substantial loss of available bandwidth.
>
> Finally, things don't look too good between NCAR and PSU:
>
> From a network topologically "close" to Art's network:
> [root@nutone iperf-1.2]# ./iperf -c motherlode.ucar.edu
> ------------------------------------------------------------
> Client connecting to motherlode.ucar.edu, TCP port 5001
> TCP window size: 16.0 KByte (default)
> ------------------------------------------------------------
> [ 3] local 128.118.52.7 port 32770 connected with 128.117.13.119 port 5001
> [ ID] Interval Transfer Bandwidth
> [ 3] 0.0-10.1 sec 3.4 MBytes 2.8 Mbits/sec
> [root@nutone iperf-1.2]# ./iperf -c milton.unidata.ucar.edu
> ------------------------------------------------------------
> Client connecting to milton.unidata.ucar.edu, TCP port 5001
> TCP window size: 16.0 KByte (default)
> ------------------------------------------------------------
> [ 3] local 128.118.52.7 port 32771 connected with 128.117.140.32 port 5001
> [ ID] Interval Transfer Bandwidth
> [ 3] 0.0-10.1 sec 3.3 MBytes 2.7 Mbits/sec
> [root@nutone iperf-1.2]# ./iperf -c milton.unidata.ucar.edu -t 20
> ------------------------------------------------------------
> Client connecting to milton.unidata.ucar.edu, TCP port 5001
> TCP window size: 16.0 KByte (default)
> ------------------------------------------------------------
> [ 3] local 128.118.52.7 port 32772 connected with 128.117.140.32 port 5001
> [ ID] Interval Transfer Bandwidth
> [ 3] 0.0-20.1 sec 8.3 MBytes 3.5 Mbits/sec
> [root@nutone iperf-1.2]#
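>
> Those numbers look window-limited to me. I haven't pasted ping times here,
> but assuming a round-trip time to NCAR somewhere in the 45-50 ms range, a
> 16 KByte window caps a single TCP stream at roughly
>
>   16 KBytes * 8 bits/byte / 0.047 sec ~= 2.8 Mbits/sec
>
> which is about what we see no matter how much capacity the path actually
> has.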
>
> From a network topologically "not close" to Art's:
> dirtdevil:wolfe[102]% ./iperf -c motherlode.ucar.edu
> ------------------------------------------------------------
> Client connecting to motherlode.ucar.edu, TCP port 5001
> TCP window size: 17.1 KByte (default)
> ------------------------------------------------------------
> [ 3] local 146.186.163.19 port 1106 connected with 128.117.13.119 port 5001
> [ ID] Interval Transfer Bandwidth
> [ 3] 0.0-10.0 sec 3.6 MBytes 3.0 Mbits/sec
> dirtdevil:wolfe[103]% ./iperf -c motherlode.ucar.edu -t 20
> ------------------------------------------------------------
> Client connecting to motherlode.ucar.edu, TCP port 5001
> TCP window size: 17.1 KByte (default)
> ------------------------------------------------------------
> [ 3] local 146.186.163.19 port 1107 connected with 128.117.13.119 port 5001
> [ ID] Interval Transfer Bandwidth
> [ 3] 0.0-20.1 sec 8.2 MBytes 3.4 Mbits/sec
> dirtdevil:wolfe[104]% ./iperf -c milton.ucar.edu -t 20
> milton.ucar.edu: Unknown host
> dirtdevil:wolfe[105]% ./iperf -c milton.unidata.ucar.edu -t 20
> ------------------------------------------------------------
> Client connecting to milton.unidata.ucar.edu, TCP port 5001
> TCP window size: 17.1 KByte (default)
> ------------------------------------------------------------
> [ 3] local 146.186.163.19 port 1108 connected with 128.117.140.32 port 5001
> [ ID] Interval Transfer Bandwidth
> [ 3] 0.0-20.0 sec 7.3 MBytes 3.0 Mbits/sec
> dirtdevil:wolfe[106]%
>
> Cranking up the window gives:
>
> [root@nutone iperf-1.2]# ./iperf -c motherlode.ucar.edu -t 20 -w 64K
> ------------------------------------------------------------
> Client connecting to motherlode.ucar.edu, TCP port 5001
> TCP window size: 128 KByte (WARNING: requested 64.0 KByte)
> ------------------------------------------------------------
> [ 3] local 128.118.52.7 port 32776 connected with 128.117.13.119 port 5001
> [ ID] Interval Transfer Bandwidth
> [ 3] 0.0-20.2 sec 8.1 MBytes 3.4 Mbits/sec
> [root@nutone iperf-1.2]#
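>
> Two notes on that run: I believe iperf reports 128 KBytes because Linux
> doubles whatever buffer size is requested with -w, so the WARNING is
> expected. And since the throughput barely moved, either the receiving end
> is still advertising a small window (in which case the servers would need
> to be restarted with something like ./iperf -s -w 128K) or we're simply
> losing packets somewhere upstream, presumably in OTC's network.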
>
>
> And finally, between the two PSU networks:
>
> [root@nutone iperf-1.2]# ./iperf -c dirtdevil -t 20
> ------------------------------------------------------------
> Client connecting to dirtdevil, TCP port 5001
> TCP window size: 16.0 KByte (default)
> ------------------------------------------------------------
> [ 3] local 128.118.52.7 port 32775 connected with 146.186.163.19 port 5001
> [ ID] Interval Transfer Bandwidth
> [ 3] 0.0-20.0 sec 145 MBytes 60.7 Mbits/sec
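>
> That's consistent with the window math above: 60-odd Mbits/sec through a
> 16 KByte window works out to a round-trip time of about 2 ms
> (16 KBytes * 8 / 60.7 Mbits/sec ~= 2.2 ms), which is what I'd expect for a
> couple of campus hops, so the local path looks healthy.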
>
> I'm talking to the OTC folks about issues on their hardware. They're kinda
> like the old Telephone Company, so things can take some time.
>
> -Jeff
>
>-- End of excerpt from Jeff Wolfe