@makssie Definitely free. Check out IBM’s Watson TJBot. It also has really great documentation and step-by-step tutorials.
Latest posts made by prostko
-
RE: Voice Control using IBM's Watson
-
RE: Voice Control using IBM's Watson
@makssie Yeah, so check out the GitHub associated with this forum; it has great documentation and step-by-step instructions on how to get the mirror installed. There are a ton of videos on how to do this as well. Depending on your comfort level with JavaScript, getting Watson to work on your machine will either be difficult or very, very difficult. I used Watson’s conversation app. Sticking points were acclimating to the Raspberry Pi itself and getting it properly configured, then understanding the existing code base here (the module structure and everything), and then actually getting Watson to fit in there and respond with usable data. After that, it was smooth sailing.
-
RE: Voice Control using IBM's Watson
@strawberry-3.141 Oh crap, good call. Doing that now. Thanks again.
-
RE: Voice Control using IBM's Watson
Attributing this success to alexyak’s super clean code here:
https://github.com/alexyak/voicecontrol
Very nice. -
RE: Voice Control using IBM's Watson
YES!
Watson is listening on startup and printing the text from speech into the console!!!
YESSSSSS -
RE: Voice Control using IBM's Watson
Ok, so I’ve managed to get the Magic Mirror to recognize the new node_helper, and it now runs when the Magic Mirror starts.
https://github.com/OniDotun123/MirrorMirror/tree/watson_listen_on_startup/modules/conversation -
RE: Voice Control using IBM's Watson
First, thanks for taking the time to give it a look over.
Oops, you’re right… changing that now…
I’m checking out something that seems to be along the lines of what I’m trying to accomplish, https://github.com/dgonano/MMM-AlexaPi , which just runs the voice service on the mirror. I’m also poring over the docs, but I can’t seem to grasp the connection between node_helper and module.js, or the purpose of that relationship.
Also, for simplicity (mine), I’d like to focus on sending the speech picked up by the mic to the Watson STT API.
You said that the browser can’t handle ‘require’; does that mean that node_helper is where we require things?
So, if I’m catching up correctly:
node_helper is where the requires happen (the Watson cloud service, npm mic…), where the Watson config (username, password, etc.) to send along with the API request lives, and where the micInstance finally gets started, listening.
Then the audio gets piped to the API, and the generated response (text) gets handed to the module.js (conversation.js) via sendSocketNotification and socketNotificationReceived, which will then turn around and resend that same text to all the other modules. -
Voice Control using IBM's Watson
Hello guys, we’re in school and have chosen to make a Smart Mirror for our final project. Since none of us have really gotten to play with the Pi before, we decided to download Magic Mirror, lean on the existing code base, and develop a new voice recognition module. We downloaded and configured the Watson conversation app, which is working well. We run it from the command line inside the modules folder and have gotten the loader to recognize it on startup.
Here’s my issue: I’m getting a ‘ReferenceError: require is not defined’…
What I’m trying to do now is just run the thing in the background, listening for the hot word (it can run from the command line with ‘node conversation’). I have read about the node_helper, but I’m finding that process very confusing…
Thanks guys
Great forum! Here’s a link to my GitHub and the conversation module I’m working on.
edit: https://github.com/OniDotun123/MirrorMirror/tree/master/modules/conversation