Creating Custom Voice Commands for Hello-Lucy...?
-
@sdetweil Thanks for the breakdown. So does the pocketsphinx library have a particular set of full words already defined (i.e., no new words can be added on our end), comparing specific combinations of phonetic symbols against that library to determine which words are being spoken? Or does the engine strictly recognize phonetic symbols, so that it can recognize any word as long as that word is associated with a locally defined string of phonetic symbols?
In other words, does the library already contain the word “history”, defined by specific combinations of phonetic symbols, so that when it detects one of those combinations it ‘hears’ the word “history”? Or does it simply ‘listen’ for symbols in the phonetic alphabet, and upon hearing a combination defined anywhere (in the .dic file, for example), it can ‘hear’ the word? Given that “Jarvis” is a word already recognized by the Hello-Lucy module, and that it seems unique enough not to be included in a full library of every word sphinx can recognize, I would think that recognizing “Jarvis” must work by picking up the combination of phonetic symbols that represents the word (in the .dic file it appears as JH AA R V AH S).
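For reference, pronunciation entries in a CMU-style .dic file are just word/phoneme pairs; the JARVIS line below is quoted from this thread, and the others follow standard CMUdict notation:

```text
HISTORY  HH IH S T ER IY
JARVIS   JH AA R V AH S
LUCY     L UW S IY
```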
If this is the case, then as long as the tools on the sphinx website are used to generate the content in the .dic and .lm files (I assume the .lm file works in conjunction with the .dic file in terms of recognizing phonetic symbols), new words should be able to be added to those files for use with Hello-Lucy. Then it would just be a matter of adding code to the relevant .js and .json files included with Hello-Lucy to define the commands associated with those words and certain combinations of them with any other words defined in the .lm and .dic files. Looking at the code for Hello-Lucy’s functionality, it appears that the words.json and sentences.json files define the full words and phrases used in commands, and the primary .js file issues commands based on those.
Am I on the right track or am I missing something?
-
@Doctor0ctoroc the tool uses a compiler of sorts to build the dictionary and language model
this is an unlimited-vocabulary voice recognition engine. You can make it a limited-vocabulary type by changing the dictionary
see the lmtool info
http://www.speech.cs.cmu.edu/tools/FAQ.html
-
@sdetweil Okay, cool - so it does seem like the tool generates pronunciations (strings of phonetic symbols) based on a dictionary of actual words, but can also create pronunciations for new words (like proper names, e.g. Jarvis) so long as they are not too complicated (or over 35 characters). However, it seems the tool is less reliable when creating pronunciations for uncommon words than for existing ones. And the tool is also used to create complete sets of words to be referenced locally, if I’m not mistaken. This would mean that I can theoretically create a text file including all of the new words I want to add, upload it to both the lmtool and lextool, and then add the output content to the .lm and .dic files included with Hello-Lucy. I think…
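For example, the file uploaded to lmtool is just a plain-text corpus with one utterance per line (these particular phrases are illustrative):

```text
HEY JARVIS
SHOW HISTORY
HIDE HISTORY
CHEER ME UP
```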
I may just create copies of all the related files in Hello-Lucy and do some experimenting with the option to revert back to the copied files.
-
@Doctor0ctoroc yep, you got it
-
@sdetweil said in Creating Custom Voice Commands for Hello-Lucy...?:
@Doctor0ctoroc yep, you got it
Yes! So now that I got a handle on that, I need to figure out how the .js and .json files utilize the local sphinx library.
@Mykle1 - can you lend a hand here? From the looks of it, I believe the words.json and sentences.json files contain a reference list of all of the words and phrases used in the checkCommands.json file, and they’re referenced by the node_helper.js and Hello-Lucy.js files to implement the hide/show commands, yes? Something like that? A basic hierarchy should suffice to point me in the right direction.
-
@Doctor0ctoroc it builds the library from the sentences and words files…
then calls lmtool to generate the .lm and .dic files
-
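A rough sketch of how the sentences and words files could be combined into the text corpus that lmtool compiles into .lm and .dic files. The file names come from this thread, but the helper function itself is hypothetical, not Hello-Lucy’s actual code:

```javascript
// Stand-ins for the contents of words.json and sentences.json.
const words = ["JARVIS", "HISTORY"];
const sentences = ["HEY JARVIS", "SHOW HISTORY"];

// lmtool expects one utterance per line, all upper case.
function buildCorpus(words, sentences) {
  return [...words, ...sentences]
    .map((s) => s.toUpperCase())
    .join("\n");
}

const corpus = buildCorpus(words, sentences);
console.log(corpus);
```

In the real module, node_helper.js would write this corpus out and submit it to the lmtool service on startup, then save the returned .lm and .dic files for pocketsphinx.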
@sdetweil So are the .lm and .dic files generated in real time? Like, does whatever is added to the words.json and sentences.json files propagate into the .lm and .dic files or are you saying that both the words and sentences files are the basis for generating the .lm and .dic files through the sphinx tool?
-
@Doctor0ctoroc on module startup
-
@Doctor0ctoroc module sends a message to node_helper “START”
and then you can read the code in node_helper
-
@sdetweil Ah, that’s fantastic. That would explain why, when I changed “Hello Lucy” to “Hey Jarvis” in the Hello-Lucy.js and config file, it was added to the .dic and .lm files… I thought it was included there from the get-go (assuming the code was written to include an alternative, ‘familiar’ AI name that users might want), but all this time it was my change to the code that put it there - and it totally works when I say “Hey Jarvis” instead of “Hello Lucy”!
So there’s no need to even edit the .lm or .dic files directly then?
-
@Doctor0ctoroc no, they are generated each time the module starts
-
I think, for your purposes, you can simply do this, although I have not tested it:
-
In the sentences.json file, add your command(s): “CHEER ME UP” and “THANKS I FEEL BETTER” (MUST ALL BE CAPS).
-
In checkCommands.json, sample:
["SHOW","COMMAND","","","true","MMM-ModuleName",""],
["HIDE","COMMAND","","","false","MMM-ModuleName",""],
Changed to:
["CHEER","ME","UP","","true","compliments",""],
["THANKS","I","FEEL","BETTER","false","compliments",""],
This should now make sense to you. I used the hide/show pairs mostly because I found the success rate higher when using shorter commands. I’ve “learned” how to talk to Lucy, so my success rate is pretty darn good. I know others who have struggled for success.
Using what @sdetweil has told you and this simple format above, you should be on your way.
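To illustrate how rows in that format could line up with a recognized utterance, here is a hedged sketch; the matching function and its exact logic are assumptions, not the module’s actual code:

```javascript
// Each row: up to four command words, a show/hide flag, and a target
// module name, mirroring the checkCommands.json format above.
const checkCommands = [
  ["CHEER", "ME", "UP", "", "true", "compliments", ""],
  ["THANKS", "I", "FEEL", "BETTER", "false", "compliments", ""],
];

// Return the first row whose non-empty words match the spoken words in
// order, or null when nothing matches.
function matchCommand(utterance, rows) {
  const spoken = utterance.toUpperCase().split(/\s+/);
  return rows.find((row) => {
    const wordsInRow = row.slice(0, 4).filter((w) => w !== "");
    return wordsInRow.every((w, i) => spoken[i] === w);
  }) || null;
}

const hit = matchCommand("cheer me up", checkCommands);
```

Here `hit` would be the first row, whose flag and module name tell the module to show "compliments".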
-
Ok, I just tested this and it works. You may find some commands work better than others. You’ll have to do some experimenting.
-
@Doctor0ctoroc said in Creating Custom Voice Commands for Hello-Lucy...?:
when I changed “Hello Lucy” to “Hey Jarvis” in the Hello-Lucy.js
You would only need to change it in the config entry
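For example, something along these lines in config.js; the option name used here is a placeholder, so check Hello-Lucy’s README for the exact parameter:

```javascript
// Only the module's config block needs the new wake phrase.
// "keyword" is a hypothetical option name, not confirmed by the thread.
{
  module: "Hello-Lucy",
  position: "top_center",
  config: {
    keyword: "HEY JARVIS", // was "HELLO LUCY"
  }
},
```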
-
@Mykle1 said in Creating Custom Voice Commands for Hello-Lucy...?:
@Doctor0ctoroc said in Creating Custom Voice Commands for Hello-Lucy...?:
when I changed “Hello Lucy” to “Hey Jarvis” in the Hello-Lucy.js
You would only need to change it in the config entry
Ah, good to know so I’m not being redundant. And thanks for the direction on the coding, that helps a lot! Do I need to add the extra words to the words.json file as well?
-
@Doctor0ctoroc said in Creating Custom Voice Commands for Hello-Lucy...?:
Do I need to add the extra words to the words.json file as well?
No sir. Re-read the instructions in my post above
-
@Mykle1 Okay, got it to work! Although it seems when I say “cheer me up” it will 25% of the time pull up the help menu haha. Is there a function that pulls up help if it can’t ‘understand’ the input? I can’t imagine it’s reading my voice command as “show help”!
-
‘False positives’ do occur. That’s why I told you that ‘some commands work better than others, you’ll have to experiment’, and that Lucy will never be as accurate as an Amazon Echo or Google Home. You wanted custom commands. The longer they are (4 words), the higher the risk of false positives. Also, the Pi is hard put to handle Lucy. That’s the reason I moved on to laptop boards. Fantastic response time with a non-USB microphone (3.5mm) and higher accuracy.
-
@Mykle1 Fair enough! Out of curiosity, if I were to remove all the commands that I know I won’t use, would that improve performance by limiting the number of possible commands, or is the bulk of the delay just a result of the Pi’s limited processing power? For my mic, I’ve actually been using the ReSpeaker 2-Mic Pi HAT. Not sure how much more responsive that is than a USB mic, but it definitely picks up my voice better than the $6 USB mic most people get on Amazon. With a breakout ribbon cable, I can place it anywhere I want up to a foot or so from the Pi itself, so it doesn’t have to sit directly on top!
-
@Doctor0ctoroc no, the sphinx lib is just not that good, in my opinion. I stopped using it, as I am spoiled by Google speech recognition.
i warned about this back at the beginning