@sdetweil I finally managed to make it work:
On the client side (an old Android Lollipop box) I am using the RtpMic app: it is good, simple and effective. It can start at boot and immediately begin streaming mic input via RTP.
I configured it with the remote MagicMirror IP, port 47474 (just picked one) and the g722 codec (a good balance between quality and bandwidth; it uses only about 0.09 Mbps)
On the server side (MM running on a Debian cloud-hosted VM), I initially wanted to rely only on PulseAudio’s native RTP capabilities.
Unfortunately, I still couldn’t get PulseAudio to receive the RTP stream from RtpMic directly (I’m quite sure it is possible, but I keep getting an “Unsupported SAP” error), so for now I work around it using ffplay:
sudo apt install ffmpeg
ffplay -nodisp -acodec g722 rtp://[local IP]:47474
(note that the local MM IP goes here, not the client IP)
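Since RtpMic already starts at boot on the client, the ffplay receiver can be made persistent too. A minimal sketch as a systemd user unit, assuming a systemd-based Debian (the unit name rtp-mic.service is made up, and the ffplay path assumes the apt install above):

```ini
# ~/.config/systemd/user/rtp-mic.service  (hypothetical unit name)
[Unit]
Description=Receive the RtpMic RTP stream via ffplay
After=network-online.target

[Service]
ExecStart=/usr/bin/ffplay -nodisp -acodec g722 rtp://[local IP]:47474
Restart=always
RestartSec=5

[Install]
WantedBy=default.target
```

It can be enabled with systemctl --user enable --now rtp-mic.service, and running loginctl enable-linger for the MM user lets the unit start at boot without an interactive login.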
You can check that the stream is flowing in pavucontrol.
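pavucontrol needs a graphical session, so on a headless cloud VM a CLI alternative is to list the streams currently feeding PulseAudio (the `|| echo` guard just keeps the command harmless when the daemon isn’t up):

```shell
# The ffplay RTP stream should appear in this list once audio is flowing
pactl list short sink-inputs 2>/dev/null || echo "PulseAudio is not running"
```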
Now that the RTP stream is flowing correctly, I configure the ALSA layer to use PulseAudio by default (this is needed because the GA module relies on ALSA, and the Alexa module relies on SoX). This is achieved by creating the file “.asoundrc” in the MM user’s home directory and writing the following configuration in it:
pcm.!default pulse
ctl.!default pulse
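The two lines above are ALSA’s shorthand for redefining the defaults; the equivalent long form, which is sometimes easier to extend later, is:

```
pcm.!default {
    type pulse
}
ctl.!default {
    type pulse
}
```

Both forms require the ALSA PulseAudio plugin (libasound2-plugins on Debian) to be installed.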
As a last step, I install and configure GA like this:
micConfig: { // put here the configuration generated by the auto-installer
  recorder: "arecord",
},
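Before starting the GA module, a quick end-to-end sanity check is to record a few seconds through the ALSA default device, which now goes through PulseAudio (the file name is just an example, and the guards keep this harmless on a machine without alsa-utils or a working device):

```shell
# Record 3 seconds from the ALSA default device (routed to PulseAudio)
# and play it back; falls through to a message if recording is unavailable
command -v arecord >/dev/null 2>&1 &&
  arecord -f S16_LE -r 16000 -d 3 /tmp/mic-test.wav 2>/dev/null &&
  aplay /tmp/mic-test.wav 2>/dev/null ||
  echo "recording not available on this machine"
```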
When started, the GA module (and virtually any other module) is able to receive voice commands from the remote client.
Please note that this solution only covers one direction (MM input), from the client (running the browser) to the server (running MM in serveronly mode); the other direction (MM output) is already covered, since MM plays its audio output inside the browser by default.
This way you can have MM running on any machine in any remote location, and have your mirror displayed by any browser-capable client (even an Android one).
Also, I had to use RtpMic/ffplay because my client is Android (so no PulseAudio), but I’m quite confident the same scheme can be applied natively on any other PulseAudio-capable device.