Alexa Goes Handsfree!
-
@d3r not quite - I’m almost there, but I’m trying to log in to my AWS mirror profile after authorising it and I keep failing their ridiculous ‘type the characters you see in this image’ challenge :)
Edit: At last! I’m through…
-
It’s awesome :)
I’ve had Alexa tell me the day and a joke, and I’ve asked her a maths question.
The detection is spot on and the response time is pretty acceptable.
The sooner this can get made into a module the better :)
-
The changes to the lightdm config file are probably what it’s warning you about. Remember, during the MM installation you should’ve edited that file to turn off the screen blanking. So always compare your file against what the vendor is releasing and re-apply your changes as needed.
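If I remember the install step correctly, the edit was roughly this (assuming the default Raspbian lightdm setup; the section name can vary between lightdm versions):

```
# /etc/lightdm/lightdm.conf
# Added during the MagicMirror setup to stop the screen from blanking;
# an upgrade warns because the file no longer matches the vendor's version.
[SeatDefaults]
xserver-command=X -s 0 -dpms
```

If the upgrade asks, keep (or merge) your version so that line survives.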
-
That’s what it was. Thanks @KirAsh4!
-
I didn’t look into this very much, so maybe one of you can answer this: is it possible to let Alexa execute commands or code, so we can incorporate it into MagicMirror²?
-
I don’t think it can from what I’ve seen @MichMich - I could be wrong, but it isn’t obvious that it could.
Running the Alexa sample requires 3 terminal sessions (a rough sketch of the commands is below):
one runs the companion service, which stores the application token information and runs a listener on port 3000
one runs the Java client under Maven
one runs the wake word agent, using one of two engines - kitt_ai or sensory (I opted for kitt_ai)

I can’t tell at the moment what information is returned by AVS, so it’s difficult to know if and how usable it is. It does support IFTTT, however. Combining it with the MMM-IFTTT module, whilst not elegant, may be a workaround?
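Roughly how the three sessions get started, assuming the stock alexa-avs-sample-app layout checked out under ~/alexa-avs-sample-app (paths on your install may differ):

```bash
# Terminal 1 - companion service: holds the application tokens, listens on port 3000
cd ~/alexa-avs-sample-app/samples/companionService
npm start

# Terminal 2 - the Java client, launched through Maven
cd ~/alexa-avs-sample-app/samples/javaclient
mvn exec:exec

# Terminal 3 - the wake word agent, with either the kitt_ai or sensory engine
cd ~/alexa-avs-sample-app/samples/wakeWordAgent/src
./wakeWordAgent -e kitt_ai
```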
For what it’s worth, the install, whilst lengthy, was fully automated and incredibly trouble-free - it worked straight out of the box on a fresh Raspbian install which already had a barebones MagicMirror² install.
Also of note: every time I need to restart the Java client, I am required to re-authenticate my device to get a new bearer/authentication token. These tokens do not seem to last that long, and the authentication process involves logging into Amazon through a web page with an image security verification. No doubt that’s down to how this sample is implemented, but it might be worth knowing about.
-
It is possible to tell Alexa to run code. There are several projects using AWS Lambda to process the requests. This guy, for example, did it with his mirror.
[alexa mirror](https://forum.magicmirror.builders/topic/641/the-mystic-mirror-an-alexa-powered-magic-mirror)
-
@d3r said in Alexa Goes Handsfree!:
@bhepler Are you running it from your pi or do you have the echo/dot?
If you are using the pi then yes, you can customize the wake word which will trigger it into listening mode.

How did you customize the wake word? I’m using the Alexa AVS sample app on my RPi 3, which uses snowboy as the wake word agent, and everything is working great, but I would love it if I didn’t have to use the name “alexa”!
-
@carteblanche [Recompile for custom wake word](https://groups.google.com/a/kitt.ai/forum/m/#!topic/snowboy-discussion/UIX2eK4ScjI)
You need to create your own voice model and recompile the code. Check the above link. It is pretty straightforward, though I found that I get lots of false hits - but that’s because my model doesn’t have a lot of samples.
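Roughly the steps, as far as I can tell - the folder names below are guesses based on my install of the AVS sample app, so double-check them against the link above:

```bash
# 1. Train a personal model (.pmdl) for the new wake word at snowboy.kitt.ai
#    and copy it onto the Pi, next to the default alexa model that the
#    kitt_ai engine loads (the resources folder under wakeWordAgent here).
cp ~/my_wake_word.pmdl ~/alexa-avs-sample-app/samples/wakeWordAgent/ext/resources/

# 2. Change the hard-coded model path in the kitt_ai engine source to point
#    at the new .pmdl, then rebuild and restart the agent.
cd ~/alexa-avs-sample-app/samples/wakeWordAgent/src
cmake . && make
./wakeWordAgent -e kitt_ai
```
-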
@d3r this is awesome…thanks for sharing!