This archive contains answers to questions sent to Unidata support through mid-2025. Note that the archive is no longer being updated. We provide the archive for reference; many of the answers presented here remain technically correct, even if somewhat outdated. For the most up-to-date information on the use of NSF Unidata software and data services, please consult the Software Documentation first.
>From: address@hidden
>Organization: St. Cloud State
>Keywords: 199904091934.NAA23884 McIDAS-XCD

Alan,

>Well, after reading your latest message of late yesterday, I feel
>like I may have surrendered almost entirely;

I hope you don't mean that you did not want me to go through the various
configuration steps. Please let me know if I have stepped on some toes.

>seriously, I appreciate
>everything, you people are great and always have been.

Hmm... sounds like beer time to me :-)

>My assistant snooped a little, but I did not get to talk with him
>when he finished.

I snooped some also, and did not get too far.

>Mcidas starts fine, but initially I got a message about not being
>able to start the GUI due to a lack of colors.

You are probably already running one or more X applications that are using
up your color table. I am willing to bet that the culprit is something like
Netscape (tm). To get around this, you can exit the other applications,
start the GUI/McIDAS, and then restart the other applications. The other
thing that can be done is to configure CDE (the Common Desktop Environment
you are probably using) to use fewer colors.

>So, I looked at .mcidasrc
>and changed no. of colors to only 32 for frames, leaving 16 for graphics.

Since the default now going out with the 7.5 distribution is 64 image
colors, this means that you really are out of colors. Again, shutting down
other applications will allow you to start with more colors for images.
They will look a lot better with 64 than with 32. Beyond that, however, it
takes a keen eye to notice a difference.

>Upon restart, the message about colors was gone, but now, when I typed
>MCGUI (I see where you have commented out the line for an auto start
>of the GUI in .mcidasrc) there is another error message that I do not
>understand. I expect you can identify the solution more easily by looking
>yourself, so I will not copy it here.

OK, the error was related to $str(?GFILE) not being defined.
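To make the colormap arithmetic concrete, here is a back-of-the-envelope
sketch. The numbers are illustrative, not measured: an 8-bit PseudoColor
display has 256 colormap cells, and a browser of that era could easily
claim a couple hundred of them before McIDAS asked for its share.

```shell
# Hypothetical colormap budget on an 8-bit display; all numbers illustrative.
TOTAL=256            # cells in an 8-bit PseudoColor colormap
IN_USE=200           # cells already allocated by other apps (e.g. Netscape)
WANTED=$((64 + 16))  # 64 image colors + 16 graphics colors, as in .mcidasrc
FREE=$((TOTAL - IN_USE))
if [ "$WANTED" -gt "$FREE" ]; then
  echo "not enough free colors: $FREE free, $WANTED wanted"
fi
```

With these made-up numbers, the 64+16 request cannot be satisfied, which is
exactly the failure mode the GUI reported.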
The reason for this was that the Fkey/GUI configuration file, UNIMENU.DEF,
had not been copied to the /home/mcidas/workdata directory. This file needs
to be copied to each user's McIDAS working directory for the Fkey menu and
UPC GUI application to work. After seeing this on your system, I started
redesigning how this is used in my GUI. In a future Addenda (an upgrade to
code in an existing distribution), I will have this more error-proofed.

>I will be happy to do the editing
>if that is the problem, just let me know.

No problem. This one was due to me not finishing off the installation as
per the instructions in my own web pages.

>Just remember, I can buy some of the beer but not all the beer.

Hey, why _can't_ you buy all of the beer ;-)?

Anyway, your system had been running smoothly for a day, so I decided to
take the next steps in its configuration. Here is what I did this
afternoon:

o copied UNIMENU.DEF from /home/mcidas/data to /home/mcidas/workdata

  Again, this step will disappear in an upcoming Addenda; sorry for the
  problem.

As 'ldm' I:

o edited /usr/local/ldm/etc/ldmd.conf and modified which routines get
  started at LDM startup:

  # Programs that share a queue with rpc.ldmd
  # are started by it and are in the same process group.
  #
  # exec "pqexpire"
  exec    "xcd_run MONITOR"
  exec    "pqbinstats"
  exec    "pqact"
  #exec   "pqsurf"

  The addition is the line 'exec "xcd_run MONITOR"'.

o modified which feeds are being requested from the upstream host:

  # LDM5 servers we ask for data
  #
  # request <feedset> <pattern> <hostname pattern>
  #
  request DDPLUS|IDS|FSL2|MCIDAS ".*" hobbes.stcloudstate.edu  #this is primary upstream site

  The additional feed requested is FSL2. It is likely that you will need
  to modify hobbes to request this feed from its upstream host. If/when
  you decide that you want to ingest NCEP model data, you will need to
  further modify this line and add '|HRS' to the request.
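After hand-editing a file like ldmd.conf, it is easy to lose an edit to a
typo. A quick sanity check can confirm that the two changes above "took".
The function name and the grep checks here are my own illustration, not
part of the LDM:

```shell
# Sketch: confirm that ldmd.conf now starts the XCD monitor and requests
# the FSL2 feed. check_ldm_conf is an illustrative helper, not an LDM tool.
check_ldm_conf() {
  conf="$1"
  grep -q 'xcd_run MONITOR' "$conf" || { echo "MONITOR exec line missing"; return 1; }
  grep -q 'FSL2' "$conf"            || { echo "FSL2 request line missing"; return 1; }
  echo "ldmd.conf looks OK"
}
```

Usage would be something like: check_ldm_conf /usr/local/ldm/etc/ldmd.conf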
o modified the ACCEPT entry:

  # ACCEPT: Who can feed us
  #
  #accept <feedset> <pattern> <hostname pattern>
  accept DDPLUS|IDS|FSL2 ".*" chinook.unl.edu
  accept DDPLUS|IDS|FSL2 ".*" aqua.atmos.uah.edu

  Again, this simply added the FSL2 feed, but it won't be used until waldo
  starts requesting data from one or the other of these machines.

o edited pqact.conf and uncommented the xcd_run invocation for DDPLUS|IDS
  data that I added yesterday. This will start sending all textual data
  (i.e., not HRS data) to the McIDAS-XCD ingester, ingetext.k (which is
  run by xcd_run).

o stopped and restarted the LDM:

  ldmadmin stop
  <wait until all LDM processes have stopped>
  ldmadmin start

Now, the McIDAS-XCD point source data monitors/decoders are actively
producing output MD files from the FOS/NOAAPORT textual data. It is this
step that allows you to not have to get the MD products in the
Unidata-Wisconsin datastream.

What I have not turned on yet is the NCEP model data decoding. By default
in the McIDAS-XCD configuration, this decoding is turned off. One reason I
do this is that the model output is a LOT of data, and some sites simply
cannot handle the volume (no disk space). The other reason is that I want
sites to get other things running smoothly before they turn this on.

Sitting here looking at your system, I see that XCD is happily producing a
LOT of data files in the /var/data/mcidas directory. As an example of the
MD files that are now being created, consider the following listing:

cd /home/mcidas/workdata
mdu.k LIST 1 100

  MD# CREATED SCHM PROJ   NR   NC      ID DESCRIPTION
----- ------- ---- ---- ---- ---- ------- -----------
    9 1999119 ISFC    0   72 4500 1999119 SAO/METAR data for 29 APR 1999
   10 1999119 ISFC    0   72 4500 1999120 SAO/METAR data for 30 APR 1999
   19 1999119 IRAB    0    8 1300 1999119 Mand. Level RAOB for 29 APR 1999
   20 1999119 IRAB    0    8 1300 1999120 Mand. Level RAOB for 30 APR 1999
   29 1999119 IRSG    0   16 6000 1999119 Sig. Level RAOB for 29 APR 1999
   30 1999119 IRSG    0   16 6000 1999120 Sig. Level RAOB for 30 APR 1999
   38 1999119 ISHP    0   24 2000 1999118 SHIP/BUOY data for 28 APR 1999
   39 1999119 ISHP    0   24 2000 1999119 SHIP/BUOY data for 29 APR 1999
   40 1999120 ISHP    0   24 2000 1999120 SHIP/BUOY data for 30 APR 1999
   51 1999120 SYN     0    8 6000 1999121 SYNOPTIC data for 01 MAY 1999
   58 1999120 SYN     0    8 6000 1999118 SYNOPTIC data for 28 APR 1999
   59 1999119 SYN     0    8 6000 1999119 SYNOPTIC data for 29 APR 1999
   60 1999119 SYN     0    8 6000 1999120 SYNOPTIC data for 30 APR 1999
   69 1999119 PIRP    0   24 1500 1999119 PIREP/AIREP data for 29 APR 1999
   70 1999119 PIRP    0   24 1500 1999120 PIREP/AIREP data for 30 APR 1999
-- END OF LISTING

MD files 9, 39, 59, 69, 10, 20, 30, etc. are all being produced by XCD
decoders. This list will expand as soon as the FSL2 data (wind profiler)
and NLDN data (lightning) start arriving on waldo.

I assume that you are going to want to get lightning data. I see that you
are not currently getting the NLDN data (I checked the IDD current
operational status for NLDN data and don't see any machine with stcloud in
its name in the list). To get access to this data, you will need to send
an email to Dr. David Knight of SUNY Albany: "David J. Knight"
<address@hidden>. Dave will set you up with a point-to-point feed of the
lightning data. To get it, you will have to request it from the SUNYA
machine in your ldmd.conf file.
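If I am reading the listing right, the MD file numbers follow a 10-day
rotation within a decade of slots per schema (0 for ISFC, 10 for IRAB, 20
for IRSG, 30 for ISHP, 50 for SYN, 60 for PIRP): day 119 of 1999 lands in
ISFC slot 9, day 120 in slot 10, and so on. This is my reading of the
listing above, not a statement from the XCD documentation, but it can be
sketched as:

```shell
# Map a schema's base slot and a day-of-year onto an MD file number,
# assuming the 10-day rotation apparent in the mdu.k listing.
md_slot() {  # usage: md_slot BASE DAY_OF_YEAR
  base=$1
  doy=$2
  echo $(( base + (doy - 1) % 10 + 1 ))
}
```

For example, md_slot 0 119 gives MD file 9 (the ISFC file for 29 APR 1999),
and md_slot 50 121 gives 51 (the SYN file for 01 MAY 1999).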
Regarding file sizes, one thing to watch out for: the surface MD files
produced by XCD are a LOT bigger than the ones in the Unidata-Wisconsin
datastream:

/home/mcidas/workdata% dmap.k MDXX

PERM      SIZE LAST CHANGED FILENAME DIRECTORY
---- --------- ------------ -------- ---------
-rw-  27201088 Apr 30 00:46 MDXX0009 /var/data/mcidas
-rw-   1376380 Apr 30 00:50 MDXX0010 /var/data/mcidas
-rw-   4288304 Apr 30 00:46 MDXX0019 /var/data/mcidas
-rw-    549724 Apr 30 00:50 MDXX0020 /var/data/mcidas
-rw-   1456832 Apr 30 00:31 MDXX0029 /var/data/mcidas
-rw-    497136 Apr 30 00:50 MDXX0030 /var/data/mcidas
-rw-     18832 Apr 30 00:00 MDXX0038 /var/data/mcidas
-rw-   5023660 Apr 30 00:45 MDXX0039 /var/data/mcidas
-rw-   2608780 Apr 30 00:50 MDXX0040 /var/data/mcidas
-rw-    151832 Apr 30 00:45 MDXX0051 /var/data/mcidas
-rw-   3946480 Apr 30 00:03 MDXX0058 /var/data/mcidas
-rw-   7044544 Apr 30 00:49 MDXX0059 /var/data/mcidas
-rw-    917424 Apr 30 00:50 MDXX0060 /var/data/mcidas
-rw-   6057140 Apr 30 00:29 MDXX0069 /var/data/mcidas
-rw-    275876 Apr 30 00:50 MDXX0070 /var/data/mcidas
61414032 bytes in 15 files

The surface file, MDXX0009, is 27 MB. The reason is that these files
contain not only SAs and RSes, but also SPs and CORs. Additionally, they
have data from the entire world. The same is true of the upper air MD
files, but they are not nearly as much larger, since the US has a
substantial portion of the world's upper air reporting stations. The ones
listed, however, do not have very many of the upper air obs yet, since I
started the decoder at 23 Z. We will see them grow substantially as the 0Z
obs continue to come in.

Since I set up scouring yesterday, I am not worried about the amount of
disk that will be used by the XCD decoders. In fact, things are looking so
good that I will turn on the model data decoding now.
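For the record, the idea behind the scouring is simply "delete decoder
output older than N days." A minimal sketch of that idea, as a preview
(list, don't delete) over an arbitrary directory; real LDM installations
usually drive this from scour.conf and cron, and the 10-day retention here
is just an example:

```shell
# List files under DIR older than DAYS days; a dry-run scour sketch.
scour_preview() {  # usage: scour_preview DIR DAYS
  find "$1" -type f -mtime +"$2" -print
}
```

Something like scour_preview /var/data/mcidas 10 shows what would go; once
the listing looks right, the -print can be replaced with a delete step.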
As the user 'ldm' I:

o edited /usr/local/ldm/etc/pqact.conf and uncommented the xcd_run
  invocation for HRS data.

  Changed:

  #### Entries for XCD decoders
  #
  ##
  DDPLUS|IDS      ^.*     PIPE
          xcd_run DDS
  #HRS    ^.*     PIPE
  #       xcd_run HRS

  to:

  #### Entries for XCD decoders
  #
  ##
  DDPLUS|IDS      ^.*     PIPE
          xcd_run DDS
  HRS     ^.*     PIPE
          xcd_run HRS

o sent a HUP to pqact to tell it to reread pqact.conf:

  ps -eaf | grep pqact
       ldm 22213 22209  0 23:05:42 ?        0:01 pqact
  kill -HUP 22213

This does not yet turn on the XCD grid decoder. It does, however, start
having HRS data sent to the XCD ingester, ingebin.k. ingebin.k, in turn,
writes the data to the spool file, HRS.SPL, in the /var/data/mcidas
directory.

The next step in getting the model decoding to work in XCD is to enable it
(as per the web instructions). As 'mcidas':

  cd /home/mcidas/workdata
  decinfo.k SET DMGRID ACTIVE

OK. The gridded data will now be decoded into McIDAS GRID files in the
/var/data/mcidas directory IF/WHEN you start requesting the data from
hobbes (recall that the request line in ldmd.conf did not specify the HRS
data) and hobbes starts requesting the data from its upstream host.

This whole model data setup is tricky. The volumes of model data in the
NOAAPORT datastream are huge. It may be that you do not have the bandwidth
to get all of the data. What we need to do before turning the feed on,
therefore, is to figure out what model data you really want to get and
decode. This is best done through what you request from the upstream
feeder, not by what you decide to decode.

By the way, back on the subject of starting McIDAS, try the following:

o logon as 'mcidas'
o set your DISPLAY environment variable
o run: mcidas config

This brings up a GUI that allows you to set various parameters (read from
.mcidasrc) before starting the real GUI (MCGUI). It also allows you to
start the Fkey menu, or nothing at all. When you start the UPC GUI from
this interface, the McIDAS text window is not started.
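One more note on the pqact.conf edit, while I think of it: pqact requires
the fields of an action line to be separated by TABs, not spaces, and a
space where a tab belongs is a classic way for an entry to silently stop
matching. A quick check along these lines can flag suspect lines; the
function name is my own, not part of the LDM:

```shell
# Flag non-comment pqact.conf lines that contain runs of spaces but no
# tab -- a common symptom of a space-for-tab editing mistake.
check_tabs() {  # usage: check_tabs PQACT_CONF
  grep -n -v '^#' "$1" | grep '  ' | grep -v "$(printf '\t')" \
    || echo "no suspicious lines"
}
```

Running check_tabs /usr/local/ldm/etc/pqact.conf after any hand edit is a
cheap way to catch this before HUPing pqact.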
Even without the McIDAS text window, the command mode is still available,
through three different methods: a GUI replacement for the command mode;
multiple instances of the McIDAS text window; or multiple instances of a
GUI command mode. Sounds complicated, but in practice you don't have to
start up any of these.

To view the command mode associated with the GUI, simply click your left
mouse button on the keyboard icon near the top of the GUI (several folks
have commented that they can't tell that the icon is a keyboard; it is the
one just to the left of the button with the red Z on it). In this command
mode, you can see and reexecute the commands that the GUI runs to produce
plots. This is a handy way for the user to see which McIDAS command
produces which output.

Tomorrow, I think that we should turn off decoding of the MD files in the
Unidata-Wisconsin datastream. I just plotted several surface maps from your
system and see that the XCD decoders are working well.

As a general question, what are your plans vis-a-vis which machine(s) will
ingest and decode data?

Tom