[IDD ~MQY-650936]: Test from idd.unidata.ucar.edu to NPAD server
- Subject: [IDD ~MQY-650936]: Test from idd.unidata.ucar.edu to NPAD server
- Date: Thu, 11 May 2006 12:23:45 -0600
> Art,
>
> If you look at the public logs on the NCAR npad, FRGP npad, and the PSC npad,
> you'll see results from testing I've done today. Ignore the January 2006 tests
> from Unidata on the FRGP node. The IDD cluster director (uni6) is getting
> 160-250 Mbps to the PSC npad, and the real nodes (uni1, uni2, untest4) are all
> in the 90-150 Mbps range to the PSC npad. I've noticed some very curious
> results that bear additional investigation, including poor long-haul
> performance from various Unidata systems, including yakov.
>
> We ran an LDM test from yakov to ECMWF a couple of months back (when yakov
> was FC4) where we sustained 70+ Mbps to Europe for several days. We know of
> one Unidata site experimenting with the pluggable TCP congestion modules in
> Fedora Core due to strange networking problems after an OS upgrade.
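[For reference: the pluggable congestion-control modules mentioned above are exposed through sysctl on the Linux 2.6 kernels that FC4/FC5 shipped. A minimal sketch of inspecting and switching them, assuming root access and that the alternate module is built into the kernel or loadable:]

```shell
# List the congestion-control algorithms currently available to the kernel
sysctl net.ipv4.tcp_available_congestion_control

# Show the algorithm used for new TCP connections
sysctl net.ipv4.tcp_congestion_control

# Switch to plain Reno for comparison testing (revert after the test run)
sysctl -w net.ipv4.tcp_congestion_control=reno
```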
>
> Testing a number of different scenarios calls into question the network
> throughput performance of the Intel PRO/1000 chipset(s), possibly FC4/FC5,
> and possibly jumbo frames. All of our systems with Broadcom gigabit chipsets
> seem to perform reasonably well to very well regardless of operating system.
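[For anyone reproducing this: one quick way to rule jumbo frames in or out is to toggle the interface MTU and re-run the throughput test. A sketch using iproute2; the interface name eth0 is an example, and the commands assume root:]

```shell
# Show the current MTU on the gigabit interface
ip link show eth0

# Drop back to the standard 1500-byte MTU to test without jumbo frames
ip link set eth0 mtu 1500

# Restore 9000-byte jumbo frames once the full path is confirmed to support them
ip link set eth0 mtu 9000
```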
>
> Let me know if I can be of assistance.
>
> mike
>
> > Hi...
> >
> > In our continuing quest to find a data bottleneck to meteo.psu.edu, would
> > it be possible for you to run a test to your NPAD server
> > (http://npad.ucar.edu:8000/) from idd.unidata.ucar.edu and report the
> > results back to us?
> >
> > Thanks.
> >
> > Art
> >
> > Arthur A. Person
> > Research Assistant, System Administrator
> > Penn State Department of Meteorology
> > email: address@hidden, phone: 814-863-1563
> >
> >
>
Ticket Details
===================
Ticket ID: MQY-650936
Department: Support IDD
Priority: Normal
Status: Closed