Which Voice Control module is fit for me?
This might be a silly question.
There are already several voice-control-related modules in this project, but I got lost in the maze after reading the docs for all of them. I think there must be a better way than trying each one.
I need these functions:
- Sleep and wake the LCD with magic words (e.g. say "My Mirror").
- Send a notification to any notification-receiving module with customized or predefined words (e.g. say "Do Something", "Do Something to SomeModule", or just "Something").
- Execute external scripts and programs (e.g. say "Reboot" for 'sudo reboot', or "Run Something" for something.sh).
I’ve been considering these:
- Matzefication/MMM-Hello-Mirror: Seemed so easy, but as far as I know, Google's Annyang has a quota limitation.
- joanaz/MMM-MirrorMirrorOnTheWall: Awesome. But it can only hide and show modules?
- whyjustin/magic-mirror-voice: Sorry, it looks too hard for me.
- fewieden/MMM-voice: Some functions are suitable, but I can’t understand how to expand COMMANDS.
- alexyak/voicecontrol: Maybe this is appropriate, but the Snowboy training…
- dr4ke616/MMM-Voice-Control: Another Annyang-based module.
So… Which is your recommendation?
I’m in exactly the same boat and would be very interested in the answer to this.
I tried MMM-voice, but after spending a few hours getting PocketSphinx to work, I found the speech recognition didn't like my accent, and when it did work, the commands didn't do anything.
I'm going to give MirrorMirrorOnTheWall a go now, but I really don't want to have to say such a long phrase to get it to accept commands.
fewieden/MMM-voice: Some functions are suitable, but I can’t understand how to expand COMMANDS.
Just read the docs… it's really not that hard to expand. That's the one I'm using for my 32" touch mirror; it works well and is easy to hook into other modules.
The biggest bonus for me is that it doesn't need to go through Amazon's or Google's servers.
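On "how to expand COMMANDS": as far as I understand the docs (double-check the README, the names here are from memory and may be wrong), other modules register the sentences they understand, and MMM-voice builds its PocketSphinx vocabulary from them. Stripped of the MagicMirror plumbing, the idea is roughly this:

```javascript
// Hedged sketch, not the module's real API: expanding COMMANDS amounts to
// contributing sentences, from which a recognizer word list is built.
function buildVocabulary(modules) {
  const words = new Set();
  for (const mod of modules) {
    for (const sentence of mod.sentences || []) {
      for (const word of sentence.toUpperCase().split(/\s+/)) {
        if (word) words.add(word); // deduplicate across modules
      }
    }
  }
  return [...words].sort();
}

// Example: two hypothetical modules registering their command sentences.
const vocab = buildVocabulary([
  { mode: "MIRROR", sentences: ["WAKE UP", "GO TO SLEEP"] },
  { mode: "SYSTEM", sentences: ["REBOOT", "RUN SOMETHING"] },
]);
// vocab is the deduplicated word list the offline recognizer would need.
```

Because recognition is offline, every new sentence you add has to end up in that vocabulary, which is why expanding commands feels more involved than with the cloud-backed modules.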
Hi Sean, what did you finally end up using?
As you might have seen, I started a thread to try to better understand all the different voice modules. As a MM beginner, none of them seems very straightforward, since almost all of them require two or three other dependencies.
I built my own after all with Google Assistant, Cloud Speech API and Snowboy.
I have tried to install your module several times, but I keep running into the same problem during the Google SDK part of the installation.
Everything is fine until the "Register the Device Model" step: when I use the registration tool, it fails.
I tried the command:
(env) $ googlesamples-assistant-devicetool register-model --manufacturer "Assistant SDK developer" --product-name "Assistant SDK light" --type LIGHT --model my-project-1505046981535-my-model1
Each time I have the following error message:
Error: Error loading client secret: [Errno 2] No such file or directory: 'client_secret_xxxxx-m88slnqk24bqul282n0nnogs10sck6qo.apps.googleusercontent.com.json'. Run the device tool with --client-secrets or --project-id option. Or copy the client_secret_xxx-xxxx.apps.googleusercontent.com.json file in the current directory.
Could you help me with this procedure?
Thank you in advance,
Why is adding voice so difficult?
After a lot of trouble I finally had MMM-voice working, but it did not react very well.
Then I tried Alexa and Awesome Alexa, but I could not get either working.
After trying Alexa, MMM-voice also stopped working, so I ended up with no voice control at all.
I am thinking about starting all over.
I finally have all the modules working the way I like, and the PIR sensor works, but voice…
Maybe somebody can give me some advice (which module, microphone, speakers, etc).
Or should I start with a clean Jessie, install some sort of voice control first, and add MM later?
@Peter Hi Peter! It seems we are all in the same position as you. Trying to add voice interaction (not just command-and-control) to your Pi is a serious PITA. I've tried to collect some info on these modules, but now I'm more confused and disappointed than ever. There simply isn't any case where you can just install a MM module and an npm package and have it up and running. We believe it is all about monetizing this technology: all of the serious contenders, like Amazon Alexa, Google Assistant, Bing, whatnot, and countless companies, require a bunch of back-and-forth APIs, keys, and files, while collecting everything in between about you.
In an ideal world we should be able to just make a request like this:
arecord "Yo! Alexa, can you tell me when ISS will pass over London next time?" | theMagicAPI | aplay "Hey Buddy, the international spacestation will pass over london at 03:00 tomorrow morning."
In fact I suggest we hack together such an API. Basically we would do this:
- We create a pool API that pools together all the effin' AI APIs and requests. (A wrapper!)
- Everyone signs up to all the services (GA, Alexa, Bing, etc.); we each get an API token, but we send it to the pool, where all requests are mixed together, so the API user always gets a response.
- Tracking of individual users' voice requests is anonymized by the mixing (much like DuckDuckGo).
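To make the proposal concrete, here is a toy sketch of that pool wrapper. The backends are stand-in functions, not real Alexa/GA/Bing clients; the point is only the mixing: requests are rotated round-robin across providers, and no user identity is forwarded.

```javascript
// Toy sketch of the proposed "pool" API: every request goes to one of the
// pooled assistant backends in rotation, so no single provider sees any
// one user's full request history. Backends here are fake placeholders.
function createPool(backends) {
  let next = 0;
  return {
    ask(text) {
      const backend = backends[next];
      next = (next + 1) % backends.length; // rotate providers per request
      return backend(text);                // no user identity forwarded
    },
  };
}

// Example with two fake backends:
const pool = createPool([
  (q) => "alexa: " + q,
  (q) => "google: " + q,
]);
```

The real work, of course, would be normalizing the wildly different auth flows and audio formats of each provider behind that one `ask()` call.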
@Peter BTW, Jessie is way outdated; you should be on Stretch. Alexa sucks, if only because they insist on using Java and three concurrent terminals, bringing your RPi to its knees.
@E3V3A I learned from this forum that Jessie works better with MM than Stretch; that's why I use Jessie.
But voice is still a problem …