- Subject: [IDD #MIY-175774]: "WARN" Log Message in Concatenate NEXRAD2 Script
- Date: Fri, 22 Jan 2016 14:45:55 -0700
Hi Ziv,
re:
> Yes, writing to RAM takes away all the 'WARN' messages.
Excellent!
re:
> I am trying to get the NEXRAD2 data in the same format that Amazon has
> their NEXRAD2 data so our data can be used interchangeably with theirs.
OK. This means that you will either need to modify the processing you are
currently doing, or replace the process entirely.
re:
> It
> seems that the concatenated NEXRAD2 scans are zipped, and a bit of research
> reveals that Unidata has a decoder through their GEMPAK distribution.
We are _not_ using the GEMPAK routine in the processing we are doing in AWS.
Instead, we (Ryan May of our group) wrote some Python scripts that do the
end-to-end processing. In order to get this to work correctly, Ryan had
to do quite a bit of tweaking to make sure that the system wouldn't get
overloaded.
re:
> I was
> thinking about unzipping the NEXRAD2 files and then gzipping them. Do you
> know how Amazon processes their NEXRAD2 to get the files like the ones here
> <https://s3.amazonaws.com/noaa-nexrad-level2/index.html>?
Yes, because we are the ones writing real-time NEXRAD2 data to the S3 bucket
that contains the Level 2 archive that was moved up to AWS as part of the
NOAA Big Data project.
re:
> If you think this is a good idea could you also offer me some guidance in
> how to install/use this decoder?
I don't think it is a good idea.
So, what needs to change from what you are doing now? A couple of extra
steps:
- unbzip2 the volume scan chunks after writing them (pattern-action FILE
action) to (RAM) disk
It may be the case (I say "may" since I haven't tested the following) that
instead of FILEing the volume scan pieces straight to disk, you could PIPE
them to a script, and the script would write the bzipped product to disk
and then unbzip it. BUT, I am worried about the overhead of running a
script for each volume scan piece received, given your bad experience in
trying to simply write (again, FILE) individual products to disk in
your VM.
The only way we will know if this approach will work in your VM environment
is to create a script and then test it (there is an untested sketch of such
a script after these steps).
- gzip the reconstituted volume scan that would be put together from the
unbzipped pieces
This step could easily be added to the procedure you (should have)
developed to copy the reconstituted volume scans from RAM disk to
your desired end-point.
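To make the two steps above a bit more concrete, here is a rough, untested
sketch in Python of what a PIPEd-to script and the gzip step could look like.
All of the names, the illustrative pqact-style entry in the comment, and the
assumption that each volume scan piece is a plain bzip2 stream are mine, not
a description of the processing we run in AWS; if the pieces carry any extra
framing or headers, the script would need to account for that before
decompressing.

#!/usr/bin/env python3
# chunk_handler.py -- hypothetical target of an LDM PIPE action that passes
# the destination path on the RAM disk as the script's single argument, e.g.
# (illustrative only; check the pqact documentation for the exact syntax):
#   NEXRAD2  ^(L2-BZIP2/.*)  PIPE  /path/to/chunk_handler.py  /dev/shm/nexrad2/\1
import bz2
import gzip
import sys

def handle_chunk(chunk_path):
    """Read one volume scan piece from stdin, save the bzipped bytes to the
    RAM disk, then write an unbzipped copy alongside it."""
    raw = sys.stdin.buffer.read()
    with open(chunk_path, "wb") as f:          # the product exactly as received
        f.write(raw)
    # Assumes the piece is a plain bzip2 stream; any extra framing would
    # have to be stripped before decompressing.
    with open(chunk_path + ".unbz2", "wb") as f:
        f.write(bz2.decompress(raw))

def gzip_volume(piece_paths, volume_path):
    """Concatenate the unbzipped pieces (already in the right order) into
    one gzipped volume file -- the second extra step above."""
    with gzip.open(volume_path, "wb") as out:
        for piece in piece_paths:
            with open(piece, "rb") as f:
                out.write(f.read())

if __name__ == "__main__":
    handle_chunk(sys.argv[1])

The gzip_volume() helper would be called from whatever procedure you use to
copy the reconstituted volume scans off of the RAM disk, once all of the
pieces for a volume have arrived.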
Cheers,
Tom
--
****************************************************************************
Unidata User Support UCAR Unidata Program
(303) 497-8642 P.O. Box 3000
address@hidden Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage http://www.unidata.ucar.edu
****************************************************************************
Ticket Details
===================
Ticket ID: MIY-175774
Department: Support IDD
Priority: Normal
Status: Closed