Re: Oops, incomplete patch. (fwd)
- Subject: Re: Oops, incomplete patch. (fwd)
- Date: Thu, 05 Sep 2002 09:01:36 -0600
------- Forwarded Message
Date: Wed, 07 Aug 2002 12:44:37 -0600
From: Russ Rew <address@hidden>
To: "Stonie R. Cooper" <address@hidden>
cc: Russ Rew <address@hidden>, address@hidden, address@hidden,
    "Cannon, Declan" <address@hidden>, "Wells, Tim" <address@hidden>
Subject: Re: Oops, incomplete patch.
>To: Russ Rew <address@hidden>
>From: "Stonie R. Cooper" <address@hidden>
>Subject: Re: 20020730: The significance of pbuf_flush messages.
>Organization: Planetary Data
>Keywords: pbuf_flush-problem
Stonie,
> I replaced my svc.c with the code you sent me, did a "make install", then su'ed
> to do a "make install_setuids". I also did an "ls -l" to make sure rpc.ldmd was
> updated, and per my note yesterday . . . everything seemed fine.
>
> Again, I don't know whether the pbuf_flush log messages are any cause for
> concern, but here is a snippet from a dual 800 MHz machine:
>
> Aug 07 17:50:12 helium pqact[22207]: pbuf_flush 12: time elapsed 2.799201
> Aug 07 17:51:23 helium pqact[22207]: pbuf_flush 12: time elapsed 3.107711
> Aug 07 17:51:37 helium pqact[22207]: pbuf_flush 12: time elapsed 4.080028
> Aug 07 17:55:03 helium pqact[22207]: pbuf_flush 12: time elapsed 2.163155
> Aug 07 17:55:06 helium pqact[22207]: pbuf_flush 12: time elapsed 2.587394
> Aug 07 17:55:16 helium pqact[22207]: pbuf_flush 12: time elapsed 2.084197
> Aug 07 17:55:30 helium pqact[22207]: pbuf_flush 12: time elapsed 3.187534
> Aug 07 17:55:58 helium pqact[22207]: pbuf_flush 12: time elapsed 3.287892
>
> Workload on this machine is a little higher than normal:
> $ w
> 5:58pm up 2 days, 1:13, 13 users, load average: 4.07, 3.86, 3.01
>
> Interestingly enough, our four-channel NOAAPort box is an old single 400 MHz
> PII, running our software and LDM 5.1.4 per request of a customer, and I
> can't find any pbuf_flush messages . . . and its load is light.
>
> $ w
> 6:02pm up 2 days, 1:17, 3 users, load average: 0.24, 0.25, 0.18
>
> Using top, I really don't see anything running amok, and the output shows
> the CPUs at 80% idle or more.
The load average on helium looks high; could it be a result of the 13
users logged in and someone running a compute-intensive task, rather
than LDM 5.2? I'm concerned that something in version 5.2 may be
causing more CPU usage. We'll have to look at load averages under
controlled test conditions here ...
> Again, I'm not complaining . . . just was curious when I started seeing these
> messages more than with 5.1.4. One other thing . . . the pbuf_flush messages
> appear at peak NOAAPort moments - i.e. when a lot of data is coming in on NWSTG.
> It may simply be a "conservation of mass" issue - something has to give
> someplace.
Thanks for the information. It may help us track down a problem. So
far your site is the only one reporting this, but that may be because
you're especially observant.
--Russ
------- End of Forwarded Message
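[Editor's note: for readers wondering what the elapsed times above represent,
the pbuf_flush messages come from pqact timing how long it takes to flush its
buffered output down a pipe to a decoder, and logging the flush when it runs
long (the "12" presumably being the pipe's file descriptor). The sketch below
is a minimal illustration of that idea only, not the actual LDM pbuf code; the
function name, logging call, and 1-second threshold are assumptions made for
the example.]

    /*
     * Illustrative sketch only -- not the LDM source.  Shows the general
     * pattern behind a "pbuf_flush N: time elapsed S" style message:
     * time a blocking write of buffered data to a decoder pipe and log
     * the flush when it takes longer than a threshold.
     */
    #include <stdio.h>
    #include <sys/time.h>
    #include <syslog.h>
    #include <unistd.h>

    #define SLOW_FLUSH_SECS 1.0     /* assumed threshold, for illustration */

    /* Write 'len' bytes from 'buf' to pipe 'fd', logging slow flushes. */
    int
    flush_to_pipe(int fd, const char *buf, size_t len)
    {
        struct timeval start, stop;

        gettimeofday(&start, NULL);

        while (len > 0) {
            ssize_t n = write(fd, buf, len);
            if (n < 0)
                return -1;          /* error writing to the decoder pipe */
            buf += n;
            len -= (size_t)n;
        }

        gettimeofday(&stop, NULL);

        double elapsed = (stop.tv_sec - start.tv_sec)
            + (stop.tv_usec - start.tv_usec) / 1e6;

        if (elapsed > SLOW_FLUSH_SECS)
            syslog(LOG_INFO, "pbuf_flush %d: time elapsed %f", fd, elapsed);

        return 0;
    }

[If writes to a decoder pipe block because the decoder is not reading fast
enough, the elapsed time grows, which would fit the observation above that
the messages cluster around peak NOAAPort data volume.]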