This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Hi Mike,

re:
> I'm filing the GOES ABI data according to:
> https://www.unidata.ucar.edu/data/rtgoesr_pqact.html

OK.  Remember, the ABI imagery in the SATELLITE feed are all L1b files.
If you are planning on using GOES-16/17 imagery in GEMPAK, these will
_not_ work, as there is no support for the format of the L1b netCDF4
files in GEMPAK.

re:
> And I'm using the grbfile.sh script out of the box.

OK.

re:
> I'm trying to figure out why the CONUS is working, but the FullDisk and
> Mesoscale are not. Running notifyme shows patterns are matching. I ran
> pqact with the -v option here to attempt to troubleshoot. To me the
> results appear identical for CONUS, FullDisk, and Mesoscale, but the
> FullDisk and Mesoscale exit with status 1. It appears that the arguments
> are passed in proper form, and the grblog shows only that CONUS data are
> filed properly. Any tips on further troubleshooting?

Are there any useful comments in the log file?

re:
> (BTW - I'm filing all the Unidata adjusted L2 files just fine with
> L2prodfile.sh.)

Very good.

re:
> Below are some ldmd.log examples with pqact -f. Second are my pqact
> entries.
>
> -------------------
>
> CONUS Works:
> 20190730T164417.642316Z pqact[242369] palt.c:processProduct:1340
> INFO 12874305 20190730164415.714566 SATELLITE 000
> /data/cspp-geo/WEST/OR_ABI-L1b-RadC-M6C05_G17_s20192111641196_e20192111643569_c20192111644025.nc
> 20190730T164417.686982Z pqact[242369] filel.c:fl_removeAndFree:425
> INFO Deleting closed PIPE entry: pid=361688, cmd="-close
> /usr/local/ldm/util/grbfile.sh
> /usr/local/ldm/var/data/images/sat2/G17/ABI/CONUS
> OR_ABI-L1b-RadC-M6C05_G17_s20192111641196_e20192111643569_c20192111644025.nc
> ABI 05 2019 211 576 /usr/local/ldm/var/logs/grbG17.log"
> 20190730T164417.726536Z pqact[242369] filel.c:reap:3035 INFO
> Child 361688 exited with status 0
>
> Mesoscale-1 does not work:
> 20190730T164418.037286Z pqact[242369] palt.c:processProduct:1340
> INFO 386783 20190730164415.153410 SATELLITE 000
> /data/cspp-geo/GRB-R/OR_ABI-L1b-RadM1-M6C07_G16_s20192111643521_e20192111643590_c20192111644034.nc
> 20190730T164418.048542Z pqact[242369] filel.c:fl_removeAndFree:425
> INFO Deleting closed PIPE entry: pid=361715, cmd="-close
> /usr/local/ldm/util/grbfile.sh
> /usr/local/ldm/var/data/images/sat2/G16/ABI/Mesoscale-1
> OR_ABI-L1b-RadM1-M6C07_G16_s20192111643521_e20192111643590_c20192111644034.nc
> ABI 07 2019 211 720 /user/local/ldm/var/logs/grbG16.log"
> 20190730T164418.070242Z pqact[242369] filel.c:reap:3035 ERROR
> Child 361715 exited with status 1
>
> FullDisk does not work:
> 20190730T164128.602664Z pqact[242369] palt.c:processProduct:1340
> INFO 86156255 20190730164039.857097 SATELLITE 000
> /data/cspp-geo/GRB-R/OR_ABI-L1b-RadF-M6C05_G16_s20192111630477_e20192111640185_c20192111640238.nc
> 20190730T164128.816699Z pqact[242369] filel.c:fl_removeAndFree:425
> INFO Deleting closed PIPE entry:
> pid=349631, cmd="-close /usr/local/ldm/util/grbfile.sh
> /usr/local/ldm/var/data/images/sat2/G16/ABI/FullDisk
> OR_ABI-L1b-RadF-M6C05_G16_s20192111630477_e20192111640185_c20192111640238.nc
> ABI 05 2019 211 192 /user/local/ldm/var/logs/grbG16.log"
> 20190730T164128.820545Z pqact[242369] filel.c:reap:3035 ERROR
> Child 349631 exited with status 1
> ---------------------
>
> pqact entries:
>
> SATELLITE
> /data/cspp-geo/.*/(OR_(ABI)-L1b-RadC-M.C(..)_(G..)_s(....)(...).*)
> PIPE -close
> /usr/local/ldm/util/grbfile.sh
> /usr/local/ldm/var/data/images/sat2/\4/\2/CONUS \1 \2 \3 \5 \6 576
> /usr/local/ldm/var/logs/grb\4.log
> #
> SATELLITE
> /data/cspp-geo/.*/(OR_(ABI)-L1b-RadF-M.C(..)_(G..)_s(....)(...).*)
> PIPE -close
> /usr/local/ldm/util/grbfile.sh
> /usr/local/ldm/var/data/images/sat2/\4/\2/FullDisk \1 \2 \3 \5 \6 192
> /user/local/ldm/var/logs/grb\4.log
> #
> SATELLITE
> /data/cspp-geo/.*/(OR_(ABI)-L1b-RadM1-M.C(..)_(G..)_s(....)(...).*)
> PIPE -close
> /usr/local/ldm/util/grbfile.sh
> /usr/local/ldm/var/data/images/sat2/\4/\2/Mesoscale-1 \1 \2 \3 \5 \6 720
> /user/local/ldm/var/logs/grb\4.log
> #
> SATELLITE
> /data/cspp-geo/.*/(OR_(ABI)-L1b-RadM2-M.C(..)_(G..)_s(....)(...).*)
> PIPE -close
> /usr/local/ldm/util/grbfile.sh
> /usr/local/ldm/var/data/images/sat2/\4/\2/Mesoscale-2 \1 \2 \3 \5 \6 720
> /user/local/ldm/var/logs/grb\4.log
> #

The first thing that comes to mind is a write permission error.  Please
send the output of:

<as 'ldm'>
ls -alt /usr/local/ldm/var/data/images/sat2/G16/ABI

The next thing is to ask if you would send us the following files that
are being used:

LDM configuration file - ~ldm/etc/ldmd.conf
pattern-action file    - whatever pattern-action file you are using
~ldm/util/grbfile.sh   - just to make sure that it is not munged somehow

The other alternative is to get access as 'ldm' on the machine on which
you are trying to do the processing.  If the machine is titan, then I
already have the needed credentials to log in (as Sen).  If it is some
other machine, then I likely do not have the proper credentials (unless
they match Sen's for titan).

Cheers,

Tom
--
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                             Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                          http://www.unidata.ucar.edu
****************************************************************************

Ticket Details
===================
Ticket ID: VOZ-961042
Department: Support IDD
Priority: Normal
Status: Closed
===================
NOTE: All email exchanges with Unidata User Support are recorded in the
Unidata inquiry tracking system and then made publicly available through
the web.  If you do not want to have your interactions made available in
this way, you must let us know in each email you send to us.
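For reference, a minimal sketch of the kind of notifyme check Mike describes
("Running notifyme shows patterns are matching").  notifyme only reports
product IDs that match an extended regular expression; it does not run any
pqact action.  The host, offset, and pattern below are illustrative
assumptions, not values taken from this exchange:

<as 'ldm'>
notifyme -vl- -h localhost -f SATELLITE -o 3600 -p 'OR_ABI-L1b-Rad(F|M1|M2)'

Each matching product ID is printed as it arrives (with -o 3600, also those
received during the previous hour), so the same regular expressions used in
the pattern-action file can be tested one at a time.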
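Along the same lines as the write-permission check Tom suggests, one way to
surface the reason behind an "exited with status 1" is to re-run the failing
PIPE command by hand as 'ldm', feeding it a product on standard input the way
pqact would.  This is only a sketch: it assumes grbfile.sh reads the product
from stdin, 'sample_RadM1.nc' is a placeholder for any previously saved ABI
L1b file, and the remaining arguments are copied from the quoted
"Mesoscale-1 does not work" log entry:

<as 'ldm'>
cat sample_RadM1.nc | /usr/local/ldm/util/grbfile.sh \
    /usr/local/ldm/var/data/images/sat2/G16/ABI/Mesoscale-1 \
    OR_ABI-L1b-RadM1-M6C07_G16_s20192111643521_e20192111643590_c20192111644034.nc \
    ABI 07 2019 211 720 \
    /user/local/ldm/var/logs/grbG16.log
echo "grbfile.sh exit status: $?"

# Also worth confirming that every directory the script is handed exists and
# is writable by 'ldm'; note that the failing FullDisk and Mesoscale entries
# pass a log path under /user/local/..., while the working CONUS entry uses
# /usr/local/...
ls -ld /usr/local/ldm/var/data/images/sat2/G16/ABI/Mesoscale-1
ls -ld /user/local/ldm/var/logs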