Voice Control using IBM's Watson



  • Hello guys, we're in school and have chosen to make a Smart Mirror for our final project. Since none of us have really gotten to play with the Pi before, we decided to download MagicMirror, lean on the existing code base, and develop a new voice recognition module. We downloaded and configured the Watson conversation app, which is working well. We run it from the command line inside the modules folder, and have gotten the loader to recognize it on startup.

    Here’s my issue: I’m getting a ‘ReferenceError: require is not defined’…
    What I’m trying to do now is just run the thing in the background, listening for the hot word. (It can be run from the command line with ‘node conversation’.)

    I have read about the node helper, but I’m finding that process very confusing…

    Thanks guys
    Great forum

    Here’s a link to my github, the conversation module I’m working on.
    edit : https://github.com/OniDotun123/MirrorMirror/tree/master/modules/conversation


  • Module Developer

    ReferenceError: require is not defined…

    The npm modules get require’d from the node_helper… I’m sure one of the REALLY great programmers here can point you in the right direction…



  • @prostko first of all, you shouldn’t publish any passwords or usernames on GitHub.

    The module.js, in your case conversation.js, gets loaded into the browser. The browser can’t handle require; that’s Node.js-only functionality. Did you check the development guide? https://github.com/MichMich/MagicMirror/tree/master/modules Or try looking at how other modules are constructed.
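    To make the split concrete, here’s a toy sketch of the node_helper side. In a real module the first line would be `const NodeHelper = require("node_helper");` — since that only exists inside MagicMirror, I’ve stubbed it with a tiny stand-in so the snippet runs under plain `node`:

    ```javascript
    // Stand-in for MagicMirror's NodeHelper, just so this file runs on its own.
    const NodeHelper = {
      create(def) {
        return Object.assign({ name: "conversation" }, def);
      },
    };

    // node_helper.js runs inside Node, so `require` works here. Anything that
    // needs npm packages (mic, the Watson SDK, ...) belongs on this side,
    // never in conversation.js, which the browser loads.
    const helper = NodeHelper.create({
      start() {
        this.started = true; // called once when MagicMirror boots the helper
      },
    });

    helper.start();
    console.log(helper.started); // true
    ```

    The real API is the same shape: you export `NodeHelper.create({...})` from node_helper.js and put your requires at the top of that file.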



  • First - Thanks for taking the time to give it a look over.
    Oops, you’re right… changing that now…
    I’m checking out something that seems to be along the lines of what I’m trying to accomplish, https://github.com/dgonano/MMM-AlexaPi , which just runs the voice service on the mirror, and I’m also poring over the docs, but I can’t seem to grasp the connection between node_helper and module.js, or the purpose of that relationship.
    Also, for (because of my) simplicity, I’d like to focus on sending the speech picked up by the mic to the Watson STT API.
    You said that the browser can’t handle ‘require’, does that mean that node_helper is where we require things?
    So, if I’m catching on correctly:
    node_helper is where the requires happen (Watson cloud service, npm mic…), where the Watson config (username, password, etc.) to send along with the API request lives, and where the micInstance finally gets started, listening.
    Then the audio gets piped to the API, and the generated response (text) gets handed to the module.js (conversation.js) via sendSocketNotification and socketNotificationReceived, which will then turn around and resend that same text to all the other modules.
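    That flow can be sketched end to end. This is a toy model using Node’s built-in EventEmitter as a stand-in for the socket layer MagicMirror wires up between the two files; the notification name `WATSON_TEXT` and the transcript are made up for illustration:

    ```javascript
    const { EventEmitter } = require("events");

    // Stand-in for the socket MagicMirror provides between helper and module.
    const socket = new EventEmitter();

    // --- node_helper side (Node: require, mic, Watson STT live here) ---
    function sendSocketNotification(notification, payload) {
      socket.emit("notification", notification, payload);
    }

    // Pretend Watson STT just returned a transcript:
    function onWatsonTranscript(text) {
      sendSocketNotification("WATSON_TEXT", text);
    }

    // --- conversation.js side (browser: no require allowed) ---
    const received = [];
    socket.on("notification", (notification, payload) => {
      // This mirrors socketNotificationReceived(notification, payload).
      if (notification === "WATSON_TEXT") {
        received.push(payload);
        // From here the module could rebroadcast to all other modules,
        // e.g. this.sendNotification("VOICE_COMMAND", payload);
      }
    });

    onWatsonTranscript("show me the weather");
    console.log(received); // [ 'show me the weather' ]
    ```

    So yes: requires and the mic on the helper side, and the text crossing over as a socket notification that conversation.js reacts to.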



  • OK, so I’ve managed to get MagicMirror to recognize the new node_helper, and it runs when MagicMirror starts.
    https://github.com/OniDotun123/MirrorMirror/tree/watson_listen_on_startup/modules/conversation



  • YES!
    Watson is listening on startup and printing the text from speech into the console!!!
    YESSSSSS



  • @prostko glad you did it. Keep in mind that someone can browse an old commit/state of the repo, so I strongly suggest changing the passwords at the specific services.



  • Attributing this success to alexyak’s super clean code here:
    https://github.com/alexyak/voicecontrol
    Very nice.



  • @strawberry-3.141 Oh crap, good call. Doing that now. Thanks again.



  • @prostko Hi man, I would like to create one for my final project too.

    Could you help me? Could you pass me your e-mail, just to get started?

    Thanks :D

