- Subject: [TIGGE #BGR-338705]: A problem when perform LDM tests
- Date: Thu, 21 Jun 2007 16:51:58 -0600
Hi Yangxin,
I apologize for not responding to your questions before now (I did not know
that there
was a question pending for me; this was my oversight).
re:
> It's been successful when I try ldmping. Moreover, I can see data coming from
> yakov via
> "ldmadmin watch".
Very good.
> Tom: if I want to save the incoming CONDUIT files from my PQ, how do I edit
> the statements in my pqact.conf? I tried the following, but it was not
> successful:
>
> CONDUIT ^(2007)(.*) FILE -overwrite closed
> data/tigge_test/conduit/\1\2
> Maybe because I don't understand the regular expression very well.
Your regular expression is anchored with '^', so it selects only products whose
identifiers begin with '2007'. There are no product identifiers in CONDUIT that
begin with the year.
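To illustrate (my own example, using the leading portion of one of the product
identifiers shown in the 'notifyme' listing below), an anchored pattern finds
no match while an unanchored one does:

  $ echo 'data/nccf/com/gfs/prod/gfs.2007062118/gfs.t18z.pgrb2f48' | egrep '^(2007)'
    (no output: the identifier does not begin with '2007')
  $ echo 'data/nccf/com/gfs/prod/gfs.2007062118/gfs.t18z.pgrb2f48' | egrep '(2007)'
  data/nccf/com/gfs/prod/gfs.2007062118/gfs.t18z.pgrb2f48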
The best way to create regular expressions for one's pqact.conf file is as
follows:
- use the LDM utility 'notifyme' to list out the products in the datastream of
interest
and look at the structure of the product identifiers. For instance, here is
a 'notifyme' run for the CONDUIT feed:
[ldm@yakov ~]$ notifyme -vxl- -f CONDUIT -o 3600
Jun 21 22:16:33 notifyme[8952] NOTE: Starting Up: localhost: 20070621211633.513
TS_ENDT {{CONDUIT, ".*"}}
Jun 21 22:16:33 notifyme[8952] NOTE: LDM-5 desired product-class:
20070621211633.513 TS_ENDT {{CONDUIT, ".*"}}
Jun 21 22:16:33 notifyme[8952] INFO: Resolving localhost to 127.0.0.1 took 0.00037 seconds
Jun 21 22:16:33 DEBUG: NOTIFYME(localhost) returns OK
Jun 21 22:16:33 notifyme[8952] NOTE: NOTIFYME(localhost): OK
Jun 21 22:16:34 notifyme[8952] INFO: 759485465446a5ed148c1114cf341091 308405
20070621214642.465 CONDUIT 248
data/nccf/com/gfs/prod/gfs.2007062118/gfs.t18z.pgrb2f48
!grib2/ncep/GFS/#000/200706211800F048/PRES06/0 - HCTL! 000248
Jun 21 22:16:34 notifyme[8952] INFO: 78e0d470446aa3e0db7ae71168cc604c 57098
20070621214646.104 CONDUIT 084
data/nccf/com/gfs/prod/gfs.2007062118/gfs.t18z.pgrbf54
!grib/ncep/GFS/#003/200706211800/F054/RH/550_mb! 000084
Jun 21 22:16:34 notifyme[8952] INFO: 16eb53b171fa67596bb0b526eed16fc4 57098
20070621214646.104 CONDUIT 085
data/nccf/com/gfs/prod/gfs.2007062118/gfs.t18z.pgrbf54
!grib/ncep/GFS/#003/200706211800/F054/RH/500_mb! 000085
Jun 21 22:16:34 notifyme[8952] INFO: 34caab7af717f1dc5dab10018a619a8a 98523
20070621214642.460 CONDUIT 246
data/nccf/com/gfs/prod/gfs.2007062118/gfs.t18z.pgrb2f48
!grib2/ncep/GFS/#000/200706211800F048/SWRD06/0 - NONE! 000246
Jun 21 22:16:34 notifyme[8952] INFO: 822f90454f80643d256a90587cfcac99 95812
20070621214642.509 CONDUIT 251
data/nccf/com/gfs/prod/gfs.2007062118/gfs.t18z.pgrb2f48
!grib2/ncep/GFS/#000/200706211800F048/CLD06/0 - MCLY! 000251
Jun 21 22:16:34 notifyme[8952] INFO: 026b85520eaf4ab50e23204389fa7d6d 177911
20070621214641.511 CONDUIT 069
data/nccf/com/gfs/prod/gfs.2007062118/gfs.t18z.pgrb2f48
!grib2/ncep/GFS/#000/200706211800F048/OMEG/250 Pa PRES! 000069
Jun 21 22:16:34 notifyme[8952] INFO: ddca9a30b016875b2dc08cb7e5e7208e 89678
20070621214646.164 CONDUIT 094
data/nccf/com/gfs/prod/gfs.2007062118/gfs.t18z.pgrbf54
!grib/ncep/GFS/#003/200706211800/F054/ABSV/1000_mb! 000094
Jun 21 22:16:34 notifyme[8952] INFO: 244c8185a69c23190451f50021adcfbc 89678
20070621214646.176 CONDUIT 095
data/nccf/com/gfs/prod/gfs.2007062118/gfs.t18z.pgrbf54
!grib/ncep/GFS/#003/200706211800/F054/ABSV/975_mb! 000095
Jun 21 22:16:34 notifyme[8952] INFO: 86eb53c2f99d4095ed1f243204879f2e 153949
20070621214642.588 CONDUIT 258
data/nccf/com/gfs/prod/gfs.2007062118/gfs.t18z.pgrb2f48
!grib2/ncep/GFS/#000/200706211800F048/TMPK06/0 - LCTL! 000258
The '-f' flag value specifies the desired feed type to monitor and the '-o'
flag specifies the
time offset to use ('-o 3600' says to look for products received within the
past 3600 seconds).
Product identifiers included in the listing above are:
data/nccf/com/gfs/prod/gfs.2007062118/gfs.t18z.pgrb2f48
!grib2/ncep/GFS/#000/200706211800F048/PRES06/0 - HCTL! 000248
data/nccf/com/gfs/prod/gfs.2007062118/gfs.t18z.pgrbf54
!grib/ncep/GFS/#003/200706211800/F054/RH/550_mb! 000084
data/nccf/com/gfs/prod/gfs.2007062118/gfs.t18z.pgrbf54
!grib/ncep/GFS/#003/200706211800/F054/RH/500_mb! 000085
data/nccf/com/gfs/prod/gfs.2007062118/gfs.t18z.pgrb2f48
!grib2/ncep/GFS/#000/200706211800F048/SWRD06/0 - NONE! 000246
data/nccf/com/gfs/prod/gfs.2007062118/gfs.t18z.pgrb2f48
!grib2/ncep/GFS/#000/200706211800F048/CLD06/0 - MCLY! 000251
data/nccf/com/gfs/prod/gfs.2007062118/gfs.t18z.pgrb2f48
!grib2/ncep/GFS/#000/200706211800F048/OMEG/250 Pa PRES! 000069
data/nccf/com/gfs/prod/gfs.2007062118/gfs.t18z.pgrbf54
!grib/ncep/GFS/#003/200706211800/F054/ABSV/1000_mb! 000094
data/nccf/com/gfs/prod/gfs.2007062118/gfs.t18z.pgrbf54
!grib/ncep/GFS/#003/200706211800/F054/ABSV/975_mb! 000095
data/nccf/com/gfs/prod/gfs.2007062118/gfs.t18z.pgrb2f48
!grib2/ncep/GFS/#000/200706211800F048/TMPK06/0 - LCTL! 000258
So, if you want your regular expression to use the portion of these headers
immediately after the
'gfs.', then the following pqact.conf action would work:
CONDUIT (2007)(.*) .*
FILE -close -overwrite data/tigge_test/conduit/\1\2
Note: you must be very careful to use TAB characters for certain whitespace in
pqact.conf
actions. This action should be read as:
CONDUIT<tab>(2007)(.*)<space>.*
<tab>FILE<tab>-close<tab>-overwrite<space>data/tigge_test/conduit/\1\2
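A handy way to check a pattern before installing it in pqact.conf (my
suggestion, not something from your note) is to hand the same pattern to
'notifyme' and confirm that it lists the products you expect, e.g.:

  notifyme -vl- -f CONDUIT -o 3600 -p "(2007)(.*) .*"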
> When I was monitoring the network transfer rate, I found that the average
> rate is about 700-800 Kbytes/s; sometimes it reaches 1.0-1.3 MBytes/s.
Do you mean kilobytes per second or kilobits per second? The factor of 8
difference
is significant in understanding system performance.
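For example, 800 Kbytes/second of data corresponds to 6.4 Mbits/second on the
wire, whereas 800 Kbits/second is only 100 Kbytes/second of data.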
From the volume listing for the Unidata toplevel IDD relay node,
idd.unidata.ucar.edu:
http://www.unidata.ucar.edu/cgi-bin/rtstats/rtstats_summary_volume?idd.unidata.ucar.edu
Data Volume Summary for idd.unidata.ucar.edu
Maximum hourly volume 6097.048 M bytes/hour
Average hourly volume 4027.473 M bytes/hour
Average products per hour 183643 prods/hour
Feed            Average (Mbyte/hour)   Maximum (Mbyte/hour)   Products (number/hour)
CONDUIT         2143.224 [ 53.215%]        4170.488               48747.250
NEXRAD2          911.966 [ 22.644%]        1251.658               58075.083
NGRID            313.417 [  7.782%]         768.949               10829.229
NIMAGE           195.738 [  4.860%]         366.819                 204.729
HDS              191.865 [  4.764%]         449.666               18527.438
NNEXRAD          158.021 [  3.924%]         193.183               23031.542
FNEXRAD           68.765 [  1.707%]          80.716                  75.771
UNIWISC           22.665 [  0.563%]          32.784                  25.938
IDS|DDPLUS        17.779 [  0.441%]          22.098               23965.646
FSL2               1.941 [  0.048%]           2.129                  22.333
GEM                1.854 [  0.046%]          22.248                 128.000
NLDN               0.237 [  0.006%]           0.604                   9.708
we can see that the average volume in CONDUIT is 2143.224 megabytes per hour
and the peak hourly rate is 4170.488 megabytes per hour. Expressed per second,
the average is 0.595 megabytes per second (609.6 kilobytes per second), and
the peak is 1.158 megabytes per second (about 1186 kilobytes per second). So,
if your 700-800 Kbyte and 1.0-1.3 Mbyte numbers are really bytes (not bits)
per second, then your values are in line with what should be expected.
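(For reference, the per-second figures can be reproduced with a quick 'bc'
calculation; the factor of 1024 assumes 1 Mbyte = 1024 Kbytes:)

  echo "scale=1; 2143.224 * 1024 / 3600" | bc   # average: ~609.6 Kbytes/second
  echo "scale=1; 4170.488 * 1024 / 3600" | bc   # peak:    ~1186.2 Kbytes/second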
Since we are not receiving statistics from your machine, it is hard for us to
say whether you are receiving the full volume of the CONDUIT datastream or
what the product latencies are.
> This LDM Server has one dual-core 3.0Ghz CPU, 2GB RAM, two Gb ethernet
> adapters, I set
> the PQ to the size of 1500MB.
For the transfer tests the size of the LDM queue on the downstream is not
important.
When you start writing every product received to disk, however, you may find
that a single
pqact.conf action will not be fast enough to write every product received to
disk before
it is deleted from the queue by the receipt of new data.
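If that happens, one common remedy (a sketch only; the configuration file
names below are made up, and you would divide your pqact.conf actions between
them) is to run more than one pqact process from ~ldm/etc/ldmd.conf, each
handling a different feed type, e.g.:

  EXEC "pqact -f CONDUIT etc/pqact_conduit.conf"
  EXEC "pqact -f NEXRAD2 etc/pqact_nexrad2.conf"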
> Do you think current network performance is OK?
Yes, if your numbers are really bytes (not bits) per second.
> Maybe it's a little slower than expected. Are you pacing the ingesting of data
> at yakov?
No. yakov is receiving the full volume of the CONDUIT datastream as it is being
sent by its upstream feeder (idd.unidata.ucar.edu).
> If yes, does this transfer rate match your pacing configuration?
This is not applicable since I am not pacing the data.
> And what if I add more REQUESTs? Is it possible that the transfer rate will
> increase?
Yes, it is possible, but, again, if your numbers are really bytes (not bits)
per second, then they already match what is expected from the average and peak
volumes in CONDUIT. This conjecture would be easier to support or refute if
we could get statistics from your machine. Did you set up the 'exec' of
rtstats in your ~ldm/etc/ldmd.conf? For reference, this entry would look
like:
EXEC "rtstats -h rtstats.unidata.ucar.edu"
If DNS is not resolving 'rtstats.unidata.ucar.edu', you can use:
EXEC "rtstats -h 128.117.149.64"
> Thanks.
Again, I apologize for the delay in responding!
Cheers,
Tom
****************************************************************************
Unidata User Support UCAR Unidata Program
(303) 497-8642 P.O. Box 3000
address@hidden Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage http://www.unidata.ucar.edu
****************************************************************************
Ticket Details
===================
Ticket ID: BGR-338705
Department: Support IDD TIGGE
Priority: Critical
Status: Closed