Opsgenie has an interesting post up about using the Amazon stack, including its transcription service, to decode amateur radio traffic and send alerts. In the post, entitled Convert Radio Waves to Alerts using SDR, AWS Lambda and Amazon Transcribe, Farhi Yardimci uses ham2mon and a stack of AWS services to recognize the words "red" and "blue" in transcribed speech and generate Opsgenie alerts from them.
Thinking about this stack, it's impressive that it can be done at all, but it would get expensive if you have a lot of traffic to monitor: the listening device has to ship all of its audio up to AWS for processing, with the attendant costs and lag.
As a design comparison, it's worth considering replacing the general-purpose speech recognition back end with a very small model tuned to a specific "wake word" that could run on the device itself, with no need for external processing.
The useful version of this I can imagine is a system listening to the local Skywarn channel and triggering only on the word "Skywarn". If it were trained on some actual traffic, it should wake up during severe weather as well as during the weekly practice nets.
Some tooling that looks promising, at least to start from:

- Porcupine, from Picovoice, which has some built-in wake words and can be trained from a console
- Precise, from Mycroft AI, which is open source
Hackaday has some videos of the Porcupine demo, which will run in as little as 512KB of memory; a minimal sketch of the on-device loop follows.
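For the on-device version, here is a minimal sketch using Porcupine's Python bindings (pvporcupine) and PvRecorder for audio capture. The access key, the custom "Skywarn" keyword file (which would be trained in the Picovoice console), and the alert hook are all assumptions; in a real build the audio would come from the SDR rather than a microphone.

```python
# Minimal sketch of an on-device wake-word loop with Porcupine.
# The access key, "skywarn" keyword file, and alert hook are assumptions;
# process() returns a non-negative keyword index when a wake word fires.
import pvporcupine
from pvrecorder import PvRecorder

porcupine = pvporcupine.create(
    access_key="YOUR_PICOVOICE_ACCESS_KEY",   # from the Picovoice console
    keyword_paths=["skywarn_wake_word.ppn"],  # hypothetical custom model
)

recorder = PvRecorder(frame_length=porcupine.frame_length)
recorder.start()

try:
    while True:
        pcm = recorder.read()            # one frame of 16-bit PCM samples
        if porcupine.process(pcm) >= 0:  # >= 0 means the keyword was heard
            print("Heard 'Skywarn' -- fire the alert here")
finally:
    recorder.stop()
    recorder.delete()
    porcupine.delete()
```

The appeal of this design is that nothing leaves the device until the wake word actually fires, which is what eliminates the upload costs and lag of the cloud pipeline.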