MMM-voice
-
@dmwilsonkc I built MMM-Mobile for my bachelor thesis to connect to the MagicMirror from a mobile device and be able to control it and change its config. The QR code was meant for pairing the devices, so that not everyone is able to hook into my socket channel and send commands. But you have to be aware of the difference between socket notifications and normal notifications. Socket notifications cover only the communication between your module and its node_helper (or in my case the mobile app, because I extended the socket server); they give you no communication with other modules like the clock. For that you need normal notifications, which cover the communication between modules and the MM core. Be aware that normal notifications can only be sent from the module, not from the node_helper.
My workflow was like this: the node_helper extends the socket server, the mobile app sends a command to the node_helper, the node_helper sends it to the module, and the module broadcasts it to all other modules.
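In code, that relay looks roughly like this (a minimal sketch; the 'MOBILE_COMMAND' id and the payload shape are just illustrative, see the API docs linked below):

// node_helper.js (sketch): forward a received command to the module
const NodeHelper = require('node_helper');

module.exports = NodeHelper.create({
    relayCommand(command) {
        // node_helper -> module: socket notification
        this.sendSocketNotification('MOBILE_COMMAND', command);
    }
});

// MMM-Mobile.js (sketch): broadcast the command to all other modules
Module.register('MMM-Mobile', {
    socketNotificationReceived(notification, payload) {
        if (notification === 'MOBILE_COMMAND') {
            // module -> other modules: normal notification
            this.sendNotification(payload.notification, payload.payload);
        }
    }
});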
https://github.com/MichMich/MagicMirror/blob/master/modules/README.md#thissendnotificationnotification-payload
https://github.com/MichMich/MagicMirror/blob/master/modules/README.md#thissendsocketnotificationnotification-payload
-
@strawberry-3.141 Thanks for the clarification. The code that I copied came from the node_helper as you are probably aware.
One more question for you concerning extending the socket with the node_helper. The code you have in the MMM-Mobile module is written like this:

appSocket() {
    const namespace = `${this.name}/app`;
    this.io.of(namespace).use((socket, next) => {
        const hash = crypto.createHash('sha256').update(socket.handshake.query.token).digest('base64');
        if (this.mobile.user && this.mobile.user === hash) {
            console.log(`${this.name}: Access granted!`);
            next();
        } else {
            console.log(`${this.name}: Authentication failed!`);
            next(new Error('Authentication failed!'));
        }
    });
    this.io.of(namespace).on('connection', (socket) => {
        console.log(`${this.name}: App connected!`);
        // …
    });
}
Mycroft in my case is on the same RPi, so I realize the code will be different, and you created this code in a way to "pair" with the mobile device. Again, I am very new to writing code like this, but can you point me in the right direction as far as how to extend the socket in the MMM-mycroft module's node_helper.js? I do not understand how to extend the socket, or why the code starts with appSocket() { const namespace = `${this.name}/app`; … }. Would the same code work for the MMM-mycroft module?
So my understanding, thanks to your help, is that the flow would be like this:
The node_helper extends the socket server. Mycroft-core sends a command to the node_helper, the node_helper sends it to the MMM-mycroft module via this.sendSocketNotification(), and the MMM-mycroft module broadcasts it to all other modules via this.sendNotification().
I am just not sure how to code the node_helper.js to extend the socket in my case.
You have been extremely helpful so far and I truly appreciate all of your help.
Also, since this is all done on the RPi, would the ipWhitelist even come into play here? My guess is no.
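(If I read the sample config correctly, localhost is already whitelisted by default, so a process running on the same Pi should be covered:)

// config.js (stock sample): local connections are allowed out of the box
ipWhitelist: ['127.0.0.1', '::ffff:127.0.0.1', '::1'],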
-
@dmwilsonkc If you run your Mycroft core from the node_helper, as I do with pocketsphinx in MMM-voice, then there is no need at all to mess with the socket server.
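Something like this in the node_helper would do (a rough sketch; the launcher path and notification names are placeholders for however you start Mycroft in your virtualenv):

const NodeHelper = require('node_helper');
const { spawn } = require('child_process');

module.exports = NodeHelper.create({
    socketNotificationReceived(notification) {
        if (notification === 'START_MYCROFT') {
            // hypothetical launcher script for the Mycroft virtualenv
            this.mycroft = spawn('/home/pi/start-mycroft.sh');
            this.mycroft.stdout.on('data', (data) => {
                // relay whatever Mycroft prints back to the module
                this.sendSocketNotification('MYCROFT_OUTPUT', data.toString());
            });
        }
    }
});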
-
@strawberry-3.141 I didn't realize that was a possibility. I currently start Mycroft in a virtualenv when the Pi boots up, as was recommended by the Mycroft folks. Both the MM and Mycroft run exactly as they should. The only exception is the microphone issue: not being able to access the microphone from two processes at the same time. I have no idea how I would start Mycroft from the node_helper.js.
At this point I am very confused about the best way to go. To me, it seems like socket communication between two concurrently running processes would be easier. But you are the expert.
Any advice you can offer would be welcome. Thanks again.
-
Hi everyone. I’m having trouble with my voice module.
It doesn’t seem to register any of my commands. The microphone icon is blinking but whenever I say “hide modules” or any other command it doesn’t register it.
In the terminal it says "Starting pocketsphinx." I don't know if that is of any concern. If anybody has any tips, they are more than welcome.
-
@ime90 Can you check the .lm and .dic files in your module folder and see whether they are empty or actually include your commands?
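For example, from the module folder (a quick Node sketch that just lists any .lm/.dic files and their sizes; 0 bytes means your commands were not compiled in):

const fs = require('fs');

// list the language model (.lm) and dictionary (.dic) files with their sizes
fs.readdirSync('.')
    .filter((f) => f.endsWith('.lm') || f.endsWith('.dic'))
    .forEach((f) => console.log(f, fs.statSync(f).size, 'bytes'));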
-
@strawberry-3.141 Hi strawberry, thanks for the response.
I've managed to make it work, but the problem is that the operations were ultra slow.
I've timed the response on 10 different occasions.
After saying "MAGIC MIRROR" it takes an average of 7 seconds for the microphone icon to start blinking.
After saying "HIDE" it takes an average of 15 seconds (I hope this isn't normal) to hide the modules.
Is this due to my RPi3? Any and all tips and experiences are welcome.
-
@ime90 I am running MMM-voice on an RPi 3B, and while it is not as fast as my Mycroft-core in responding to its wake word, it is much faster than the times you have stated. I set debug: true so that I could see what it is doing. I would say the times you have stated are at least double what MMM-voice takes on my Pi. I changed my wake word to "Hey Jarvis"; I don't know if that made any difference in how fast it recognizes the wake word. I also added the Hello-Lucy modifications to the MMM-voice module. It is not "fast" in responding to commands, but it only takes a few seconds (5 to 6 s) to hide/show modules, if I remember right.
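For reference, my config entry looks roughly like this (a sketch from memory; check the module README for the exact option names):

{
    module: 'MMM-voice',
    position: 'bottom_bar',
    config: {
        microphone: 1,          // arecord device id of your mic
        keyword: 'HEY JARVIS',  // the changed wake word
        debug: true             // show what pocketsphinx understood
    }
}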
-
I wrote this in a different thread not too long ago, actually, but I noticed a vast increase in detection/response rate when moving from a cheap USB mic to a decent webcam microphone. I would say the response now is on the order of 1 to 2 seconds, with effectively no errors in voice interpretation. I'm sure this is helped immensely by the much cleaner audio going into the RPi; the cheap USB mic had terribly noisy input.
-
I am running MMM-voice on an ODROID-XU4, which is very fast.
I also use another voice-enabled smart mirror on this same device, with a Logitech webcam as the mic.
MMM-voice gets the hotword correct every time, but almost none of the following words.
The same hardware setup works correctly with arecord as the WAV source, but that setup uses Google speech recognition for everything after the hotword.
It appears the system is using the default language model (.lm) and dictionary (.dic) files.
Anyone have a suggestion on debugging this?