Alexa Responses Displayed on Mirror
lucallmon last edited by lucallmon
I have the MM2 and AlexaPi modules installed on my RPi. What I would like to do is what happens when you talk to Alexa on a Fire tablet or Fire TV Stick: the words and results of what she says come up on the screen. If I were to say, “Alexa, show me a picture of George Washington,” a picture would come up on the screen with the words, “Here’s a picture of George Washington.”
Does anyone have any ideas of how to do this?
You would have to read the card information coming back from Alexa.
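As a rough illustration, “reading the card” could look something like the sketch below, assuming the Alexa client hands you a JSON response shaped like an AVS Standard card. The function name, field layout, and sample payload here are illustrative assumptions, not taken from any of the modules mentioned in this thread:

```javascript
// Hypothetical helper: pull the display-worthy parts out of an Alexa
// response, assuming it carries an AVS-style "card" object.
function extractCard(avsResponse) {
  // Not every response includes a card (e.g. plain speech-only answers)
  const card = avsResponse && avsResponse.card;
  if (!card) return null;
  return {
    title: card.title || "",
    text: card.content || "",
    image: (card.image && card.image.smallImageUrl) || null
  };
}

// Example payload shaped like an AVS "StandardCard" (assumed layout)
const sample = {
  card: {
    type: "StandardCard",
    title: "George Washington",
    content: "Here's a picture of George Washington.",
    image: { smallImageUrl: "https://example.com/gw.jpg" }
  }
};

console.log(extractCard(sample));
```

In a MagicMirror module you would then relay the extracted card to the front end, e.g. via `this.sendSocketNotification("ALEXA_CARD", extractCard(response))` from a `node_helper`, and render the title, text, and image in `getDom()`.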
I’m currently in the middle of a project doing just that!
I’ve come to a point where I can communicate between Alexa and the mirror. Passing on info from the card is one of my next steps.
Maybe we can keep each other posted on our progress? See mine on my blog:
lucallmon last edited by
@bartalluyn Are you starting a module from the ground up or are you building a fork to one of the other modules?
The reason I ask is because I have a PERFECTLY running interface with MM and MMM-AlexaPi that is very responsive.
I installed MM, then installed AlexaPi separately from https://github.com/alexa-pi/AlexaPi, then installed the module from https://github.com/dgonano/MMM-AlexaPi. After ensuring that root access was enabled for all apps and modules, it started working flawlessly.
If you could possibly create a fork to the MMM-AlexaPi module I think you might have some good success and have more people out there to help out with problems.
BTW, I’m no coder. I’m just a user. I don’t know if I can help much. But good luck!
As a matter of fact, I am. I’m using the sample software provided by Amazon and adapting it to talk to the mirror.
It has plenty of advantages :
- Not using AlexaPi, which is a third-party software module, possibly containing extra bugs (with all due respect to its developers, of course), whereas I’m supposing the Amazon samples all run pretty well.
- Amazon supports it, and when new features arrive, new samples should theoretically be easier to integrate without me having to code them myself or wait for the AlexaPi community.
- All features are supported out of the box: music playback, alerts, feedback cards. These features are not all supported by AlexaPi, I think (though I could be mistaken).
- Wake word detection provided by Amazon, rather than Sphinx etc. (or a hardware button, for that matter). Something provided by the vendor, free of charge (for personal use).
- Automatic integration of custom skills.
- My interface will use a simple communication platform in which practically no external code is required, except for some Java code extending the Java client provided by Amazon. But that is rather straightforward.
Any questions, tips, pointers, etc.? Keep them coming, and subscribe to the blog to keep up to date with my progress. Eventually I’ll upload the whole shebang to GitHub, but you can already see all the code and detailed explanations on the blog as well.
lucallmon last edited by
@bartalluyn It sounds like you have a plan!
Here’s something interesting; you might be able to grab something from the code: https://www.hackster.io/mexitek/magic-mirror-amazon-echo-and-aws-iot-29bba5
This guy was able to connect a physical Amazon Echo to his mirror and have it display pictures on the mirror using the Alexa skill function. Maybe this will help somehow.
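For anyone curious how that Echo-plus-AWS-IoT route hangs together: a custom Alexa skill publishes to an MQTT topic, and the mirror subscribes and turns each message into something displayable. The sketch below is my own hedged take on that idea; the topic name and payload fields are assumptions, not taken from the linked project:

```javascript
// Hypothetical: convert an incoming IoT MQTT message (a Buffer) into a
// display command for the mirror. Field names are assumed, not standard.
function messageToDisplayCommand(payloadBuffer) {
  const msg = JSON.parse(payloadBuffer.toString());
  return {
    text: msg.speech || "",          // what Alexa said
    imageUrl: msg.imageUrl || null   // optional picture to show
  };
}

// Wiring sketch with the real aws-iot-device-sdk package (certs and
// endpoint are placeholders you would get from the AWS IoT console):
//
//   const awsIot = require("aws-iot-device-sdk");
//   const device = awsIot.device({
//     keyPath: "private.pem.key", certPath: "certificate.pem.crt",
//     caPath: "root-CA.crt", clientId: "magic-mirror",
//     host: "YOUR-ENDPOINT.iot.us-east-1.amazonaws.com"
//   });
//   device.on("connect", () => device.subscribe("mirror/alexa"));
//   device.on("message", (topic, payload) =>
//     showOnMirror(messageToDisplayCommand(payload)));
```

The appeal of this route is that it works with a physical Echo (no microphone or wake-word handling on the Pi at all); the trade-off is that each behavior needs its own Alexa skill.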
Thanks for the tip!
I’ll be sure to check that out…
schlachtkreuzer6 last edited by
@lucallmon That’s awesome! I’ve got an Echo Dot, maybe I’ll give this a try.