This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
Hi Alex,

re:
> I un-commented the EXEC "pqact -f WMO etc/TDS/pqact.obsData" line in the
> ldmd.conf file and restarted the LDM successfully.

Very good.

re:
> However, the /data/ldm/pub/decoded directory still does not exist.

Hmm... Now I have no idea what is going on on your system. I found the need
to install the Perl packages by replicating the environment I provided to
you in my CentOS 6.8 x86_64 development environment. After installing the
Perl packages, the netCDF decoders started working as expected.

re:
> To help figure out what is going on, I looked in the log file using the
> following command: less ~/var/logs/ldmd.log. A screen shot is attached,
> but errors are popping up.

Thanks for including the snapshot of the LDM log file output. It does show a
problem, but the problem reported has nothing to do with the netCDF Perl
decoders. In fact, it seems to indicate that the LDM FILE action for FNMOC
model output is failing, which means that either your file system is full
(which would indicate that your data scouring is not working; see the check
sketched below) or the read/write permissions on the target output directory
are no longer set to allow the LDM user to write.

Questions:

- How much space is left on the file system to which LDM processes are
  writing output? Please send the output of 'df -h'. (A sketch of this and
  the permissions check follows below.)

- What are the current read/write permissions on:

  /data
  /data/ldm
  /data/ldm/pub
  /data/ldm/pub/native
  /data/ldm/pub/native/grid
  /data/ldm/pub/native/grid/FNMOC
  /data/ldm/pub/native/grid/FNMOC/NAVGEM
  /data/ldm/pub/native/grid/FNMOC/NAVGEM/Global_0p5deg

re:
> In addition, I have the following lines un-commented in the ldmd.conf file
>
> EXEC "pqact -f NGRID|CONDUIT|HRS|FNMOC|SPARE etc/TDS/pqact.forecastModels"
> EXEC "pqact -f NGRID|CONDUIT etc/TDS/pqact.forecastProdsAndAna"
>
> I noticed in the directory /data/ldm/native/grid I have the following
> directories FNMOC NCEP NPVU. A couple questions/comments:
>
> 1) What is in the NPVU directory?

National Precipitation Verification Unit (NPVU). This is a product from
River Forecast Centers. The action that FILEs the products is in the pattern
action file:

~ldm/etc/TDS/pqact.forecastModels

re:
> inside of that directory, the sub directories are RFC/[KALR KFWR KKRF KMSR
> KORN KPTR KRHA KSTR KTAR KTIR KTUA PACR]. Each of the sub directories in
> the square brackets has a variety of grib files inside.

As it should.

re:
> 2) Within the directory /data/ldm/native/grid/NCEP/GFS are three
> directories: CONUS_80km/ Global_0p5deg/ Global_0p5deg_ana/. The CONUS_80km/
> directory contains grib files through 10/09/2016, but nothing since that
> time. Similarly, the Global_0p5deg/ and Global_0p5deg_ana/ directories
> have data through 09/27/2016 and 09/25/2016, respectively. Any thoughts on
> why this data is not up to date?

The possibilities are:

- Your LDM is not receiving the data that would be written to the various
  directories. We would be able to tell if this is the case if your system
  were reporting real-time stats back to us. (You can also check locally
  with 'notifyme'; see the sketch below.)

- The file system where the data products are to be written is full. The
  output from 'df -h' requested above should shed light on this.

re:
> In addition, what is the difference between the data in the Global_0p5deg/
> and Global_0p5deg_ana/ directories?

The products in Global_0p5deg_ana are analysis fields.

re:
> 3) Similarly, in the /data/ldm/native/grid/NCEP/NAM/CONUS_20km/noaaport/
> directory, the files are logged until 09/23/2016 (and through 09/30/2016
> in the CONUS_80km sub directories).

Same comments as above.

re:
> 4) Some of the FNMOC products are up to date (NAVGEM, COAMPS) but others
> like WW3 are not (up through 09/23/2016).

This would suggest that the output file system is not completely full, or
that it is very close to full so that only some files can still be written.
Again, we need to see the output of 'df -h'.
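For reference, the disk-space and permissions checks requested above could
look something like the following sketch (the directory chain is copied from
earlier in this thread; adjust the paths if your layout differs):

    # Show free space on all mounted file systems; check whether the
    # one holding /data is at or near 100% use.
    df -h

    # Show ownership and permissions for each directory in the chain
    # down to one of the FNMOC output directories. Every directory in
    # the chain must be writable by the user the LDM runs as
    # (typically 'ldm').
    ls -ld /data \
           /data/ldm \
           /data/ldm/pub \
           /data/ldm/pub/native \
           /data/ldm/pub/native/grid \
           /data/ldm/pub/native/grid/FNMOC \
           /data/ldm/pub/native/grid/FNMOC/NAVGEM \
           /data/ldm/pub/native/grid/FNMOC/NAVGEM/Global_0p5deg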
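If the file system does turn out to be full, the next thing to verify is
that scouring is actually running. On most LDM installations scouring is
driven from the LDM user's crontab via 'ldmadmin scour', which deletes files
older than the ages listed in ~ldm/etc/scour.conf. A sketch of the check
(the crontab schedule shown is only illustrative; your site may differ):

    # As the 'ldm' user, confirm that scouring is scheduled at all:
    crontab -l | grep -i scour

    # A typical entry runs scouring several times a day, for example:
    #   0 1,7,13,19 * * * bin/ldmadmin scour

    # Confirm that the directories that are filling up are listed in
    # the scour configuration file along with a retention age in days:
    cat ~ldm/etc/scour.conf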
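To check locally whether the "missing" products are arriving at all, you can
point the LDM's 'notifyme' utility at your own server. A sketch, assuming
the LDM binaries are on your PATH and that your LDM accepts connections from
localhost (adjust the feed type to the products you are interested in):

    # List NGRID products received in the last hour (-o 3600); if
    # nothing prints, the data is not arriving and the FILE actions
    # have nothing to write.
    notifyme -vl- -h localhost -f NGRID -o 3600

    # The same check for the FNMOC feed:
    notifyme -vl- -h localhost -f FNMOC -o 3600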
re:
> Thanks for your help!

If your output file system is full, it would explain why the netCDF Perl
decoders are not working as I expected them to after the installation of the
needed Perl packages.

Cheers,

Tom
--
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                             Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************

Ticket Details
===================
Ticket ID: FGZ-978731
Department: Support IDD
Priority: Normal
Status: Closed
===================
NOTE: All email exchanges with Unidata User Support are recorded in the
Unidata inquiry tracking system and then made publicly available through the
web. If you do not want to have your interactions made available in this
way, you must let us know in each email you send to us.