3 different Alexa modules, which one is the best?
-
@lucallmon Great, what is your timezone? When am I most likely to find you? Even though you're not available to gitter us your magic right now, I'd like to be ready when you are.
-
@romain I’m US eastern.
-
@rchase said in 3 different Alexa modules, which one is the best?:
I think MMM-Alexa by sakirtemel is the best, because it's better than the others.
Thanks :) -
@romain said in 3 different Alexa modules, which one is the best?:
…ncomplete instructions to me. They explain how to set up some stuff but not how to test it. So…
Hi @romain, did you get to test all the modules and conclude which one is the best? I am totally lost with MMM-Alexa. I installed it and can see the yellow square in the MM interface, but I don't know how to get it working. The documentation says:
" You can easily develop your own module and control this module or get notified about events happening"
But it does not say how to develop this module :(
Any clues will be greatly appreciated.
Thanks
-
@borrigan Hello. I ended up successfully setting up MMM-AlexaPi and MMM-alexa.
They both have advantages and inconveniences:
MMM-AlexaPi
- This module isn't a stand-alone module. You need to install AlexaPi on your Pi ( https://github.com/alexa-pi/AlexaPi ).
- This module is ONLY there to show the status of AlexaPi; it is AlexaPi itself that lets you talk to Alexa, meaning you don't need the mirror at all to use it. But it's neat because you can "see" what's happening, and Alexa says "yes" when you say her name.
- It might or might not be easy to set up. Some people won't have any issue following the setup instructions, and others will have to tweak some things.
- Slower than MMM-alexa.
MMM-alexa
- It is a stand-alone module, meaning you just need this module to talk to Alexa.
- The square indicates the status: yellow is a bad configuration if I recall correctly, red is ready to listen, green is listening.
- It is not activated by voice. You need either a button or to send the right notification to the module to make Alexa listen to you.
- It seems to not answer as often as MMM-AlexaPi.
- You need to add https://sakirtemel.github.io/MMM-alexa/ to the allowed return URLs in the AVS security settings of your device, and you need to generate the first token by going to that URL and entering the requested information.
- Faster than MMM-AlexaPi.
In both cases I had to tweak some of my sound configuration files so my microphone is picked up and the output goes to my speaker.
Because MMM-alexa needs to be activated manually or by another module (and because it seems to provide fewer answers), I think MMM-AlexaPi is better, even though AlexaPi seems a bit slower to answer. To trigger MMM-alexa I used the voicecontrol module (it's a module that converts voice into notifications; I set up the word "Alexa" to send the right notification to MMM-alexa), but the voicecontrol module seems to have a lot of false positives. It was activated way too much, even when I wasn't talking sometimes. I didn't successfully get mirrormirroronthewall to work though (I might try again later; I haven't tried since I got the other two working).
I think mirrormirroronthewall is probably the best module of all three, since it supposedly allows you to do more than the other two, which only let you ask Alexa questions and get answers. As for the:
But it does not say how to develop this module :(
You can either use an already existing module like "voicecontrol" to do that, or develop something yourself. To do that:
You can, for example, take the helloworld module in ~/MagicMirror/modules/default as a template. Copy/paste it into ~/MagicMirror/modules, rename the folder to something else, let's say "toto" as an example, then rename the file inside it to match that name: helloworld.js should now be toto.js. Now open toto.js and replace all the "helloworld" occurrences with "toto".
Then delete the text: "Hello World!" default config entry
and the wrapper.innerHTML = this.config.text; line,
since you don't need to display anything.
Now you can write this.sendNotification('ALEXA_START_RECORDING', {});
where the wrapper.innerHTML = this.config.text; line was.
That sends the notification to MMM-alexa to start recording what you are saying. However, that example will only work once (see the sketch below). If you have never coded anything before, I don't recommend developing a module like this yourself; it's better to use an already existing module.
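To make the steps above concrete, here is a minimal sketch of what the resulting toto.js could look like ("toto" is just the example name from above, and the comments are mine; it only sends the notification once, when the module first builds its display):

Module.register("toto", {
    // nothing to configure, this module doesn't display anything
    defaults: {},

    // MagicMirror calls getDom() to build the module's display;
    // we return an empty element and use the call to fire the notification
    getDom: function () {
        var wrapper = document.createElement("div");
        // ask MMM-alexa to start listening
        this.sendNotification('ALEXA_START_RECORDING', {});
        return wrapper;
    }
});

A real module would send the notification from notificationReceived() or on a timer instead of from getDom(), which is why this toy version only works once, as said above.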
/!\ If your sound configuration isn't good, you might not be able to use voicecontrol and MMM-alexa at the same time.
Explaining the sound configuration here would be pointless, since you might not have those issues. -
Thank you very very much @romain! I will go for AlexaPi with MMM-AlexaPi. I spent all of yesterday trying to set it up, and I will continue today; your input is very valuable. Just a few extra newbie questions:
-
Is the AlexaPi service, when running, "always on"? Is it always listening? How can I test it directly without having it integrated with MMM-AlexaPi yet?
-
Where can the activation word be configured? If I just say “Alexa”, should it work?
-
Finally, I spent hours troubleshooting the microphone. It is a very standard Logitech Microphone. I just could not get it working.
Here are the details.
Result from the lsusb command:
pi@raspberrypi:~ $ lsusb
Bus 001 Device 006: ID 17ef:6019 Lenovo
Bus 001 Device 005: ID 04b3:3025 IBM Corp. NetVista Full Width Keyboard
Bus 001 Device 004: ID 046d:0a03 Logitech, Inc. Logitech USB Microphone
Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. SMSC9512/9514 Fast Ethernet Adapter
Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp.
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Result from the arecord -L command:
pi@raspberrypi:~ $ arecord -L
null
Discard all samples (playback) or generate zero samples (capture)
pulse
PulseAudio Sound Server
sysdefault:CARD=Microphone
Logitech USB Microphone, USB Audio
Default Audio Device
front:CARD=Microphone,DEV=0
Logitech USB Microphone, USB Audio
Front speakers
surround21:CARD=Microphone,DEV=0
Logitech USB Microphone, USB Audio
2.1 Surround output to Front and Subwoofer speakers
surround40:CARD=Microphone,DEV=0
Logitech USB Microphone, USB Audio
4.0 Surround output to Front and Rear speakers
surround41:CARD=Microphone,DEV=0
Logitech USB Microphone, USB Audio
4.1 Surround output to Front, Rear and Subwoofer speakers
surround50:CARD=Microphone,DEV=0
Logitech USB Microphone, USB Audio
5.0 Surround output to Front, Center and Rear speakers
surround51:CARD=Microphone,DEV=0
Logitech USB Microphone, USB Audio
5.1 Surround output to Front, Center, Rear and Subwoofer speakers
surround71:CARD=Microphone,DEV=0
Logitech USB Microphone, USB Audio
7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
iec958:CARD=Microphone,DEV=0
Logitech USB Microphone, USB Audio
IEC958 (S/PDIF) Digital Audio Output
dmix:CARD=Microphone,DEV=0
Logitech USB Microphone, USB Audio
Direct sample mixing device
dsnoop:CARD=Microphone,DEV=0
Logitech USB Microphone, USB Audio
Direct sample snooping device
hw:CARD=Microphone,DEV=0
Logitech USB Microphone, USB Audio
Direct hardware device without any conversions
plughw:CARD=Microphone,DEV=0
Logitech USB Microphone, USB Audio
Hardware device with all software conversions
Log from AlexaPi:
● AlexaPi.service - Alexa client for all your devices
Loaded: loaded (/usr/lib/systemd/system/AlexaPi.service; enabled)
Active: active (running) since Mon 2017-04-24 10:59:14 CDT; 7s ago
Docs: https://github.com/alexa-pi/AlexaPi/wiki
Main PID: 1626 (python)
CGroup: /system.slice/AlexaPi.service
├─1626 /usr/bin/python /opt/AlexaPi/src/main.py --daemon
├─1638 /usr/bin/pulseaudio --start --log-target=syslog
└─1657 sox -q /opt/AlexaPi/src/resources/hello.mp3 -t alsa default vol -6 dB pad 0 0
Apr 24 10:59:20 raspberrypi pulseaudio[1638]: [pulseaudio] module-udev-detect.c: Tried to configure /devices… 10s
Apr 24 10:59:21 raspberrypi python[1626]: Exception in thread Thread-1:
Apr 24 10:59:21 raspberrypi python[1626]: Traceback (most recent call last):
Apr 24 10:59:21 raspberrypi python[1626]: File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
Apr 24 10:59:21 raspberrypi python[1626]: self.run()
Apr 24 10:59:21 raspberrypi python[1626]: File "/usr/lib/python2.7/threading.py", line 763, in run
Apr 24 10:59:21 raspberrypi python[1626]: self.__target(*self.__args, **self.__kwargs)
Apr 24 10:59:21 raspberrypi python[1626]: File “/opt/AlexaPi/src/alexapi/triggers/pocketsphinxtrigger.py”, …hread
Apr 24 10:59:21 raspberrypi python[1626]: inp = alsaaudio.PCM(alsaaudio.PCM_CAPTURE, alsaaudio.PCM_NORMAL, …ce’])
Apr 24 10:59:21 raspberrypi python[1626]: ALSAAudioError: Input/output error [front:CARD=Microphone,DEV=0]
Hint: Some lines were ellipsized, use -l to show in full.
Thank you so much for your help. I don't mean to take too much of your time, but I am really frustrated :(
Gerardo
Mexico City -
-
Is the AlexaPi service, when running, "always on"? Is it always listening? How can I test it directly without having it integrated with MMM-AlexaPi yet?
You should hear "hello" when it starts, and then if you say "Alexa" you should hear "yes", and then you can ask a question. It is always listening.
A little tip for you: before using the service, try to run AlexaPi yourself. It gives you more debugging info that way.
To do that, first deactivate the service from the terminal with sudo systemctl stop AlexaPi.service; sudo systemctl disable AlexaPi.service
(you can reactivate it later with sudo systemctl enable AlexaPi.service).
Then you can run AlexaPi yourself by typing /opt/AlexaPi/src/main.py -d in the terminal
(if you put it in that location anyway).
Where can the activation word be configured? If I just say "Alexa", should it work?
In the yaml file /etc/opt/AlexaPi/config.yaml there is a pocketsphinx: section (this is what is used to detect words, I believe). In that section you have a keyphrase: key with "alexa" as its value. You can simply change the word to another word. Do not choose an over-complicated one though; I think pocketsphinx tries to guess what it is supposed to sound like based on the spelling, so if you choose a word that is not English it might not guess correctly. You should leave it as "alexa" until you get it to work though, because we know that one works for sure. You can test other words later.
Finally, I spent hours troubleshooting the microphone. It is a very standard Logitech Microphone. I just could not get it working.
I am no expert in sound, but I am going to try to help you.
First, I would rather have the output from arecord -l than from arecord -L; the latter gives too much information, to my understanding.
The output of my arecord -l looks like this:
pi@raspberrypi:~ $ arecord -l
**** List of CAPTURE Hardware Devices ****
card 1: Device [USB Audio Device], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
I can see it's "card 1" and "device 0"; that's what I am interested in.
Then, my aplay -l gives me:
pi@raspberrypi:~ $ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: ALSA [bcm2835 ALSA], device 0: bcm2835 ALSA [bcm2835 ALSA]
  Subdevices: 8/8
  Subdevice #0: subdevice #0
  Subdevice #1: subdevice #1
  Subdevice #2: subdevice #2
  Subdevice #3: subdevice #3
  Subdevice #4: subdevice #4
  Subdevice #5: subdevice #5
  Subdevice #6: subdevice #6
  Subdevice #7: subdevice #7
card 0: ALSA [bcm2835 ALSA], device 1: bcm2835 ALSA [bcm2835 IEC958/HDMI]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 1: Device [USB Audio Device], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
I am using the default speaker output, which is the bcm2835 ALSA device, i.e. "card 0" and "device 0".
Knowing that, you can test whether your devices are working.
You should be able to do aplay -D hw:0,0 /usr/share/sounds/alsa/Front_Center.wav
to test your speaker, where "hw:0,0" (you could also put "plughw:0,0") is your output device. I'm not sure exactly what "hw" and "plughw" mean (the arecord -L output above describes them as the direct hardware device and the one with software conversions), but the numbers are the card number and the device number: 0,0 in my example (remember the aplay -l output above).
This will use the VLC handler, I believe. You can test the sox handler the same way with play instead of aplay.
You can test your microphone in the same way by doing
arecord -r 48000 -f S16_LE -D hw:1,0 -d 5 test.wav
This tells it to record from the device hw:1,0 (in my example, card 1, device 0) for 5 seconds at a rate of 48000 Hz in the S16_LE format (signed 16-bit little-endian samples), and the result is written to a file named test.wav. You can then play or aplay that file to check whether the recording went well.
If you manage to make those commands work: in /etc/opt/AlexaPi/config.yaml there is a sound: section with an input_device: key and an output_device: key. Both of those keys have "default" as their value. For the output it's easy: the default is the system default output, which is hw:0,0 in my example, so you can leave "default" or put "hw:0,0" instead if you want to use that output like me.
The tricky part is the input. The input is your microphone, but there is no "default" microphone on the Pi, right? So the chances that AlexaPi understands which one you are talking about are pretty slim, in my opinion.
But when I wrote "hw:1,0" or "plughw:1,0" instead of default, I got an error. For some reason it didn't want to take that value. That's where some trickery was needed for me. I'm sure some people don't have to do that, but I did.
I created an ALSA configuration file named "asound.conf" with the following in it:
pcm.myTest {
    type dsnoop
    ipc_key 816357492
    ipc_key_add_uid 0
    ipc_perm 0666
    slave {
        pcm "hw:1,0"
        channels 1
    }
}
pcm.!default {
    type asym
    playback.pcm {
        type plug
        slave.pcm "hw:0,0"
    }
    capture.pcm {
        type plug
        slave.pcm "myTest"
    }
}
This overrides the "default" configuration.
type asym means my playback and capture aren't on the same sound device.
The playback.pcm block describes my output device. In my example you can see I put "hw:0,0" because I use the default output.
The capture.pcm block describes the input device (microphone). I put "myTest" as the name, which is defined above in the file (the name isn't very explicit, but I was testing and never changed it >.>).
I am not going to go into the details of the myTest part, but basically, it says that the microphone can be used by multiple applications at the same time and that it is "hw:1,0" (remember the arecord -l output above in the post).
This asound.conf file should be put in the /etc/ folder.
Well, that's it. That's what I had to do to make my microphone work for AlexaPi. Adapt this to your own devices and it might work for you like it did for me.
Remember to test it without the service first; once that works, enable the service and see if it still works or not, and if not we'll try to understand why. -
@romain thank you very very much for all this feedback. I was definitely missing some pieces of the puzzle. I will try this again today and post my findings here. Have a great day!
-
@johnnyboy I struggled because of this at first too. But the lambda service works both with US and Ireland (west-eu or eu-west, I don't remember exactly now).
I ended up with a skill that was working, on the Amazon interface anyway, by choosing that region instead of US (even though I'm in France, I no longer had errors in the Amazon test area).
I had to change the hard-coded region in the code though (an easy fix).
I think I had no more errors at the end (I don't remember for sure), but nothing was happening when I talked (maybe it was a microphone issue again; I'm not really sure about that).
I also did not understand how to use npm start dev correctly at that time, so maybe I would be able to do a better job of finding the issue now.
Maybe I'll try today, maybe I'll never try again; we'll see (it's not my Raspberry Pi that I'm using, so I am limited in when I can use it, so I would rather do what I am asked to first before trying again).