Creating Custom Voice Commands for Hello-Lucy...?
-
@Doctor0ctoroc so, Lucy uses the pocketsphinx_continuous_node model, which returns text strings from the recognized audio (WAV), using the open-source CMU Sphinx engine (Carnegie Mellon).
recognition is nothing magic; it's a brute-force comparison of the incoming signal against anticipated models (examples).
now, the MM module's major function runs in the browser, and due to all kinds of restrictions, physical and security-related,
the browser cannot directly access a lot of stuff (files, hardware…)
(a note on messaging: notifications go to ALL modules in the browser; SOCKET notifications travel only between a module and its helper… and if the helper sends while more than one browser is connected, ALL browsers get the message at once)
(module layout diagram from the original post not preserved here)
in the MM model, there is a server-side component (node_helper) which CAN access files and hardware etc…
so, the helper interfaces to the voice-recognition module and then sends the words up to the browser component (we call it modulename.js, because its filename matches the module name).
MM passes the config to modulename.js, and if there is a helper that needs info from the config, modulename.js sends it down to the helper at some point.
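roughly, the two halves talk like this (a minimal sketch of the MM pattern, not Hello-Lucy's actual code; the module name and notification strings are made up):

```javascript
/* modulename.js — runs in the browser; illustrative only */
Module.register("modulename", {
  start: function () {
    // hand the config down to the helper, which is the side
    // that can actually touch files and hardware
    this.sendSocketNotification("CONFIG", this.config);
  },
  // words coming back up from the node_helper
  socketNotificationReceived: function (notification, payload) {
    if (notification === "WORDS_RECOGNIZED") {
      // plain notifications go to ALL modules in this browser
      this.sendNotification("VOICE_COMMAND", payload);
    }
  }
});
```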
the pocketsphinx library interfaces to the software that interfaces to the mic, so all that is hidden except for the config.
the library is looking for 'hotwords'… we once recorded EVERY sound and passed it to the server for processing (the local systems were underpowered), but that really put load on the networks, and there are security and privacy concerns… so now we listen for the hotword ('hey you', 'alexa', 'computer', whatever) and only then start capturing the short phrases to process…
making the hotword a 'phrase' makes accurate detection of ALL the sounds a lot more difficult.
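to make the hotword idea concrete, a helper can run pocketsphinx in keyword-spotting mode and only act when the keyphrase shows up. a minimal sketch, assuming the stock pocketsphinx_continuous binary is installed; the dictionary path and threshold are examples, not the module's real values:

```javascript
/* node_helper side — spotting a hotword; illustrative only */
const { spawn } = require("child_process");

const ps = spawn("pocketsphinx_continuous", [
  "-inmic", "yes",               // listen on the microphone
  "-keyphrase", "HELLO LUCY",    // the hotword to spot
  "-kws_threshold", "1e-30",     // looser threshold = more hits (and false alarms)
  "-dict", "lucy.dic"            // example path
]);

ps.stdout.on("data", (chunk) => {
  if (chunk.toString().includes("HELLO LUCY")) {
    // hotword heard — now start capturing the short command phrase
  }
});
```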
-
@sdetweil Thanks for the breakdown. So does the pocketsphinx library have a particular set of full words already defined (i.e. no new words can be added on our end), and does it compare specific combinations of phonetic symbols to that library to determine what words are being spoken? Or does the engine strictly recognize phonetic symbols, so it can recognize any word as long as that word is associated with a locally defined string of phonetic symbols?
In other words, does the library already have the word “history” and it is defined by specific combinations of phonetic symbols so when it detects any of those combinations it ‘hears’ the word “history” or does it simply ‘listen’ for symbols in the phonetic alphabet and upon hearing a combination defined anywhere (in the .dic file, for example), it can ‘hear’ the word? Based on the fact that “Jarvis” is a word already recognized by the Hello-Lucy module, I would think that is a unique enough word that it wouldn’t be included in a full library of all words that sphinx can recognize and that in order to recognize when a user says “Jarvis”, it would have to work by picking up the combination of phonetic symbols that represents the word (in the .dic file it appears as JH AA R V AH S).
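For reference, a .dic file is just plain text: one word per line followed by its phones. The JARVIS line below is the one quoted above; the HISTORY line is the standard CMUdict pronunciation:

```
HISTORY  HH IH S T ER IY
JARVIS   JH AA R V AH S
```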
If this is the case, then as long as the tools on the sphinx website are used to generate the content in the .dic and .lm files (I assume the .lm file works in conjunction with the .dic file in terms of recognizing phonetic symbols), new words should be able to be added to those files for use with Hello-Lucy. Then it would just be a matter of adding code to the relevant .js and .json files included with Hello-Lucy to determine the commands associated with those words, and certain combinations of them with any other words defined in the .lm and .dic files. Looking at the code for Hello-Lucy's functionality, it appears that the words.json and sentences.json files define the full words and phrases used in commands, and the primary .js file issues commands based on those.
Am I on the right track or am I missing something?
-
@Doctor0ctoroc the tool uses a compiler of sorts to build the dictionary and language model
this is an unlimited-vocabulary voice recognition engine. you can make it a limited-vocabulary type by changing the dictionary.
see the lmtool info
http://www.speech.cs.cmu.edu/tools/FAQ.html
-
@sdetweil Okay cool - so it does seem like the tool generates pronunciations (strings of phonetic symbols) based on a dictionary of actual words, but can also create pronunciations for new words (like proper names, e.g. Jarvis) so long as they are not too complicated (or over 35 characters). However, it seems to be less reliable at creating pronunciations for uncommon words than for existing ones. And the tool is also used to create complete sets of words to be referenced locally, if I'm not mistaken. This would mean that I can theoretically create a text file including all of the new words I want to add and upload it to both the lmtool and lextool, then add the output content to the .lm and .dic files included with Hello-Lucy. I think…
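If that's right, the corpus file would just be plain text with one word or phrase per line, something like this (example entries, not Hello-Lucy's actual vocabulary):

```
HELLO LUCY
JARVIS
SHOW HISTORY
HIDE HISTORY
```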
I may just create copies of all the related files in Hello-Lucy and do some experimenting with the option to revert back to the copied files.
-
@Doctor0ctoroc yep, you got it
-
@sdetweil Yes! So now that I've got a handle on that, I need to figure out how the .js and .json files utilize the local sphinx library.
@Mykle1 - can you lend a hand here? From the looks of it, I believe the words.json and sentences.json files contain a reference list of all of the words and phrases used in the checkCommands.json file, and they're referenced by the node_helper.js and Hello-Lucy.js files to implement the hide/show commands, yes? Something like that? A basic hierarchy should suffice to point me in the right direction.
-
@Doctor0ctoroc it builds the library from the sentences and words files…
then calls lmtool to generate the .lm & .dic files
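something like this, as a rough sketch of that startup step (assuming words.json and sentences.json hold flat arrays of strings; file names and the function are illustrative, not Hello-Lucy's actual code):

```javascript
/* node_helper startup — build the corpus that lmtool consumes */
const fs = require("fs");

function buildCorpus() {
  const words = JSON.parse(fs.readFileSync("words.json", "utf8"));
  const sentences = JSON.parse(fs.readFileSync("sentences.json", "utf8"));
  // lmtool expects one word or phrase per line
  fs.writeFileSync("corpus.txt", [...words, ...sentences].join("\n"));
}
```
-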
@sdetweil So are the .lm and .dic files generated in real time? Like, does whatever is added to the words.json and sentences.json files propagate into the .lm and .dic files or are you saying that both the words and sentences files are the basis for generating the .lm and .dic files through the sphinx tool?
-
@Doctor0ctoroc on module startup
-
@Doctor0ctoroc the module sends a "START" message to node_helper,
and then you can read the code in node_helper.
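a minimal sketch of that handshake on the helper side (the real node_helper does more; this just shows the shape):

```javascript
/* node_helper.js — receiving START from the browser module */
const NodeHelper = require("node_helper");

module.exports = NodeHelper.create({
  socketNotificationReceived: function (notification, payload) {
    if (notification === "START") {
      this.config = payload; // config handed down from modulename.js
      // build the corpus, generate/load the .lm and .dic files,
      // then start the recognizer
    }
  }
});
```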