This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.

Hi Mike,

re:
> As you can see by the attached log file, I plainly misunderstand the
> use of the PIPE command in pqact.conf. I certainly thought I did
> understand, but nope, guess not.
>
> The bash script I'm attempting to use simply gunzips the incoming
> product stream, then uses andx to save a file of selected fields from
> the data product. From that point I can use the data for further
> analysis.

I am unfamiliar with 'andx'.  Is this 'ARM NetCDF Data eXtract' as described
in:

http://engineering.arm.gov/~sbeus/andx-web/html/

> Can you make some suggestions regarding the process used to accomplish
> my simple task?

Your current 'expand_and_dump.sh' script is:

#!/bin/bash
# This small script simply unzips an incoming data product, extracts
# fields used for wx/net, then writes the result to a file.
# This file will be used as input into a MySQL database.
gunzip > ~/MADIS_data/utilities/andx -c sfc_cfg
exit

I think you want to pipe the output of gunzip to ~/MADIS_data/utilities/andx,
correct?  If yes, I would be explicit in telling gunzip to write its output
to STDOUT (just good style) and replace '>' with '|':

gunzip -c | ~/MADIS_data/utilities/andx -c sfc_cfg

However, after (quickly) reading the 'andx' web page above, I am left with
the impression that 'andx' reads data from a disk file, not STDIN (this
makes sense, since a netCDF file is accessed randomly).  If this really is
the case, it seems like your script should read:

gunzip -c > $1
~/MADIS_data/utilities/andx $1 -c sfc_cfg

NB: this presupposes that the directory hierarchy in which the output file
is to be written already exists.  If it doesn't, or if you are unsure
whether it does, you could be more explicit and create the directory(ies)
yourself:

# Create directory structure
fname=`basename $1`
dirs=`echo $1 | sed s/$fname//`
mkdir -p $dirs

# Uncompress the input file and use 'andx' to extract values
gunzip -c > $1
~/MADIS_data/utilities/andx $1 -c sfc_cfg

# Done
exit

The inclusion of the directory creation step would make your script look a
lot like one I wrote for LDM use, ldmfile.sh:

#!/bin/sh
#--------------------------------------------------------------------------
#
# Name:    ldmfile.sh
#
# Purpose: file an LDM product and log the receipt of the product
#
# Note:    modify the 'LOG' file to suit your needs
#
# History: 20030815 - Created for Zlib-compressed GINI image filing
#
#--------------------------------------------------------------------------

SHELL=sh
export SHELL

# Set log file
LOG=/home/ldm/logs/ldm-mcidas.log
exec >>$LOG 2>&1

# Create directory structure
fname=`basename $1`
dirs=`echo $1 | sed s/$fname//`
mkdir -p $dirs

# Write stdin to the designated file and log its filing
echo `date -u +'%b %d %T'` `basename $0`\[$$\]: FILE $1
cat > $1

# Done
exit 0

> Thanks again.

No worries.

Cheers,

Tom
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                              Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************


Ticket Details
===================
Ticket ID: UKU-190526
Department: Support LDM
Priority: Normal
Status: Closed
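
For reference, the PIPE action in pqact.conf writes the matched data product
to the decoder's standard input and passes the remaining fields of the entry
to the decoder as command-line arguments, which is why a script written in
the style suggested above receives the output pathname as $1.  The entry
below is a minimal sketch of that arrangement only; the feedtype, regular
expression, and all pathnames are hypothetical placeholders and would have
to be adapted to the actual MADIS products being requested.

# Hypothetical pqact.conf entry (fields separated by tabs).
# PIPE sends the matched, still-gzipped product to the script's STDIN;
# the final field becomes the script's first argument ($1), i.e. the
# pathname the uncompressed product should be written to.
EXP	^sfc_obs\.(.*)\.gz$
	PIPE	-close
	/home/mike/util/expand_and_dump.sh
	/home/mike/MADIS_data/decoded/\1

With an entry like this, each matched product is uncompressed to the
pathname built from the pattern match, and 'andx' is then run on that file,
exactly as in the two-line script shown in the reply above.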