Which Voice Control module is fit for me?
-
I’m in exactly the same boat and would be very interested in the answer to this.
I tried MMM-voice, but after spending a few hours getting PocketSphinx to work, I found the speech recognition didn't like my accent, and when it did, the commands didn't do anything.
I'm going to give MirrorMirrorOnTheWall a go now, but I really don't want to have to say such a long phrase to get it to receive commands.
-
@Sean said in Which Voice Control module is fit for me?:
fewieden/MMM-voice: Some functions are suitable, but I can't understand how to expand COMMANDS.
Just read the docs… it's really not that hard to expand. That is the one I am using for my 32" touch mirror… it works well and is easy to add to other modules…
The biggest bonus for me is that it doesn't need to run through an Amazon or Google server to work…
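For what it's worth, hooking another module into it is basically just a couple of notifications. Something along these lines should do it, but the notification names and payload shape here are from my reading of the MMM-voice README, so double-check them there before copying:
// Minimal sketch of another MagicMirror module registering sentences with MMM-voice.
// 'REGISTER_VOICE_MODULE' / 'VOICE_EXAMPLE' and the payload shape are assumptions -- verify against the MMM-voice README.
Module.register('MMM-example', {
  notificationReceived(notification, payload) {
    if (notification === 'ALL_MODULES_STARTED') {
      // Tell MMM-voice which sentences this module wants to react to.
      this.sendNotification('REGISTER_VOICE_MODULE', {
        mode: 'EXAMPLE',
        sentences: ['SHOW EXAMPLE', 'HIDE EXAMPLE'],
      });
    } else if (notification === 'VOICE_EXAMPLE') {
      // MMM-voice hands back the recognised sentence for our mode.
      if (payload === 'SHOW EXAMPLE') { this.show(1000); }
      if (payload === 'HIDE EXAMPLE') { this.hide(1000); }
    }
  },
});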
-
@Sean
Hi Sean, what did you finally end up using?
As you might have seen, I started a thread to try to better understand all the different voice modules. As an MM beginner, none of them seem very straightforward, since almost all of them require 2-3 other dependencies. -
@E3V3A
I built my own after all with Google Assistant, Cloud Speech API and Snowboy. -
@Sean
Hello Sean,
I have tried to install your module several times, but I keep running into problems during the installation. I always encounter the same problem at this step of the Google SDK setup:
(https://developers.google.com/assistant/sdk/guides/library/python/embed/register-device) Everything is fine until the step "Register the Device Model".
When I use the registration tool, it does not work.
I tried the command:
(env) $ googlesamples-assistant-devicetool register-model --manufacturer "Assistant SDK developer" \
    --product-name "Assistant SDK light" --type LIGHT --model my-project-1505046981535-my-model1
Each time I get the following error message:
Error: Error loading client secret: [Errno 2] No such file or directory: 'client_secret_xxxxx-m88slnqk24bqul282n0nnogs10sck6qo.apps.googleusercontent.com.json'. Run the device tool with --client-secrets or --project-id option. Or copy the client_secret_xxx-xxxx.apps.googleusercontent.com.json file in the current directory.
Could you help me with this procedure?
Thank you in advance,
Cordially.
-
Why is adding voice so difficult?
After a lot of trouble I finally had MMM-voice working, but it did not react very well.
Then I tried Alexa and Awesome Alexa, but I did not get them working.
After trying Alexa, Voice also stopped working, so I ended up with no voice control at all.
I am thinking about starting all over.
I finally have all the modules working the way I like, and the PIR is working, but voice…
Maybe somebody can give me some advice (which module, microphone, speakers, etc).
Or should I start with a clean Jessie, install some sort of voice first and add MM later?
Peter -
@Peter Hi Peter! It seems we are all in the same position as you. Trying to add voice interaction (not just command and control) to your Pi is a serious PITA obstacle. I've tried to collect some info on these modules etc., but now I'm more confused and disappointed than ever. It seems there simply isn't any case where you can just install an MM module and some npm package and have it up and running. We believe it is all about monetizing this technology, as all of the serious contenders like Amazon Alexa, Google Assistant, Bing and whatnot, and countless companies, all require a bunch of back-and-forth APIs, keys and files, while collecting everything in between about you.
In an ideal world we should be able to just make a request like this:
arecord "Yo! Alexa, can you tell me when ISS will pass over London next time?" | theMagicAPI | aplay "Hey Buddy, the international spacestation will pass over london at 03:00 tomorrow morning."
In fact I suggest we hack together such an API. Basically we would do this (see the sketch after the list):
- We create a pool API that pools together all effin' AI APIs and requests. (A wrapper!)
- Everyone signs up to all the services (GA, Alexa, Bing, etc.), we all get API tokens, but then we send them to the pool, all requests get mixed together, and the API user always gets a response.
- Tracking of individual users' voice requests is anonymized by the mixing (much like DuckDuckGo).
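Just to make the idea concrete, here's a rough Node sketch of the client side of such a wrapper. The host voice-pool.example.com and the /ask route are completely made up (the pool doesn't exist), it's only meant to show the arecord → pool → aplay plumbing:
// Hypothetical client for the "pool" idea above: record 5 seconds of audio,
// POST it to an imaginary aggregation endpoint, and play back whatever audio comes back.
// Host, path and protocol are invented for illustration only.
const { spawn } = require('child_process');
const https = require('https');

// 16 kHz mono signed 16-bit WAV is what most speech APIs expect.
const record = spawn('arecord', ['-f', 'S16_LE', '-r', '16000', '-c', '1', '-d', '5', '-t', 'wav']);
const play = spawn('aplay', []);          // aplay reads WAV from stdin

const req = https.request({
  host: 'voice-pool.example.com',         // made-up pool endpoint
  path: '/ask',
  method: 'POST',
  headers: { 'Content-Type': 'audio/wav' },
}, (res) => {
  res.pipe(play.stdin);                   // answer audio straight to the speakers
});

req.on('error', (err) => console.error('pool request failed:', err.message));
record.stdout.pipe(req);                  // stream the microphone capture up as we record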
-
@Peter BTW, Jessie is way outdated, you should be on Stretch. Alexa sucks, if only because they insist on using Java and 3 concurrent terminals, bringing your RPi to its knees.
-
@E3V3A I learned from this forum that Jessie works better with MM than Stretch, that's why I use Jessie.
But voice is still a problem …
Peter -
@Chris
It seems you tried to install the Python library for Google Assistant. Mine is a standalone Node module with a gRPC API, so the command below has no relation to my module:
(env) $ googlesamples-assistant-devicetool register-model --manufacturer "Assistant SDK developer" \
    --product-name "Assistant SDK light" --type LIGHT --model my-project-1505046981535-my-model1