MagicMirror Forum

    Posts by Doctor0ctoroc

    • RE: Creating Custom Voice Commands for Hello-Lucy...?

      @sdetweil Thanks for the breakdown. So does the PocketSphinx library have a particular set of full words already defined (i.e., no new words can be added on our end), comparing specific combinations of phonetic symbols against that library to determine which words are being spoken? Or does the engine strictly recognize phonetic symbols, so that it can recognize any word as long as that word is associated with a locally defined string of phonetic symbols?

      In other words, does the library already contain the word “history”, defined by specific combinations of phonetic symbols, so that when it detects any of those combinations it ‘hears’ the word “history”? Or does it simply ‘listen’ for symbols in the phonetic alphabet, and upon hearing a combination defined anywhere (in the .dic file, for example), it can ‘hear’ the word? Given that “Jarvis” is a word already recognized by the Hello-Lucy module, and that it seems unique enough that it wouldn’t be included in a full library of every word Sphinx can recognize, I would think that in order to recognize when a user says “Jarvis”, it has to work by picking up the combination of phonetic symbols that represents the word (in the .dic file it appears as JH AA R V AH S).
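
      For reference, this is roughly what entries in the .dic file look like - each line is just a word followed by its phoneme sequence, which is the standard CMU Sphinx dictionary format (the exact entries in Hello-Lucy’s copy may differ):

        JARVIS    JH AA R V AH S
        SHOW      SH OW
        HIDE      HH AY D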

      If this is the case, then as long as the tools on the Sphinx website are used to generate the content in the .dic and .lm files (I assume the .lm file works in conjunction with the .dic file in terms of recognizing phonetic symbols), new words should be able to be added to those files for use with Hello-Lucy. Then it would just be a matter of adding code to the relevant .js and .json files included with Hello-Lucy to define the commands associated with those words, and with certain combinations of them and any other words defined in the .lm and .dic files. Looking at the code for Hello-Lucy’s functionality, it appears that the words.json and sentences.json files define the full words and phrases used in commands, and the primary .js file then issues commands based on those.
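
      To make my mental model concrete, I imagine the JSON files looking something like this - purely a guess on my part, and the actual schema in Hello-Lucy could be structured differently:

        words.json (hypothetical) - spoken keywords mapped to module names:
          { "COMPLIMENTS": "compliments", "CLOCK": "clock" }

        sentences.json (hypothetical) - the full phrases the recognizer is allowed to match:
          [ "SHOW COMPLIMENTS", "HIDE COMPLIMENTS", "GO TO SLEEP", "PLEASE WAKE UP" ]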

      Am I on the right track or am I missing something?

      posted in Troubleshooting
      Doctor0ctoroc
    • RE: Creating Custom Voice Commands for Hello-Lucy...?

      @Mykle1 Thanks for your response, I was hoping I would hear from you! I’m not looking to add any additional functionality per se, or even to show/hide new modules - mainly, I would like to expand the word/sentence list used to show and hide the modules Hello-Lucy already supports.

      So to answer your question, one example is that I would like to be able to say “cheer me up” to show the compliments module, then say “thanks, I feel better” to hide it again. Ideally, I would also be able to add other, more unique phrases to show and hide the other modules in the list rather than saying “show” and “hide” before each one.

      As the saying goes, “give a man a fish and he’ll be fed for a day, teach a man to fish and he’ll be fed for life” - and in that spirit, I’m mainly trying to figure out:
      a) how the Hello-Lucy module accomplishes the show/hide tasks and
      b) which files do what within that functionality
      so I can copy existing code and modify it to do this. I’m not proficient at coding, but I know enough to figure out, to some degree, how lines relate to each other and to other files. I think I’ve figured out which files do what and how to generate new content for the .lm and .dic files using the tools at http://www.speech.cs.cmu.edu/tools/ … I just need a bit of direction as to which lines of code do what so I can expand the word/sentence list.

      However, from what you said, it sounds like it is programmed to work on the specific combination of “show” or “hide” plus a keyword for each module, so it may not be easy to add phrases that depart too far from that pattern - in other words, it may be easy to add “show TIME” and “hide TIME” to show/hide the clock (instead of “show clock” and “hide clock”), but not so easy to add the phrase “what time is it” to show the clock module. The reason I think it may still be possible is that there are other phrases in the code, like “go to sleep” and “please wake up”, which suggests it should be relatively simple to use other phrases to show and hide modules rather than requiring every show/hide command to begin with “show” or “hide”. Does that make sense?
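
      Just to illustrate what I’m picturing, here’s a rough sketch of how custom phrases could map to show/hide actions using the standard MagicMirror front-end API (MM.getModules(), show() and hide()) - this is my own sketch, not Hello-Lucy’s actual code, and the lookup table is hypothetical:

        // Hypothetical lookup table: full recognized phrase -> module + action
        const phraseActions = {
          "CHEER ME UP":          { module: "compliments", action: "show" },
          "THANKS I FEEL BETTER": { module: "compliments", action: "hide" }
        };

        // Called with whatever full sentence the recognizer matched
        function handleSentence(sentence) {
          const entry = phraseActions[sentence];
          if (!entry) { return; } // not one of our custom phrases
          MM.getModules()
            .withClass(entry.module)
            .enumerate(function (mod) {
              // show/hide with a 1000ms animation, per the MagicMirror module API
              if (entry.action === "show") { mod.show(1000); } else { mod.hide(1000); }
            });
        }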

      I guess I’m mainly looking to understand how all the files work together so that I can fiddle around with them on my end and try to change things up a bit - though depending on how everything works, this may not be such an easy task for me. The way I see it, the functionality to show and hide modules is present, and the functionality to understand phrases (up to four words) is present; I’d like to combine them in different ways so I can add as many custom commands as I see fit to perform the tasks already present in the code.

      posted in Troubleshooting
      Doctor0ctoroc
    • Creating Custom Voice Commands for Hello-Lucy...?

      This being my first foray into the world of MagicMirror, I’ve been having a blast customizing everything for the mirror I’m building for my girlfriend for her birthday. After some tinkering and some trial and error on various snippets of code, I finally got the Hello-Lucy module up and running with custom colors, sounds and text. While checking out the contents of every single file in the module’s directory, I came across the .lm and .dic files, as well as the “checkCommands.json”, “sentences.json” and “words.json” files. If I’m not mistaken, there should be a way to create custom voice commands with these files and the help of the website http://www.speech.cs.cmu.edu/tools/, which, according to the header in the .lm file, was originally used to create the specific voice commands used in the module… does that sound about right to anyone? I know the GitHub repo includes a guide for adding modules, but it doesn’t cover new words or commands of more than two words (only the ‘show/hide’ pairs are listed), and something tells me edits to other .js files are needed as well to prompt the execution of new commands…

      In my attempt thus far, I went to the site linked above and input new words to generate content for both the .lm and .dic files, but when I inserted it where it seemed to belong and added new commands to the “checkCommands.json” file, it broke Lucy. In a panic, I reverted all three files to their previous state.

      Basically, I want to add new ways of issuing the same command - for example, instead of saying “show compliments” and “hide compliments”, I want to be able to say “cheer me up” and “thanks, I feel better”, respectively. I’d also like to add a few new ways of saying certain words to get a better ‘catch’ during the recognition process, which should theoretically improve the success rate. As an example - and this will make zero sense to anyone who hasn’t looked at the .dic file before - the word “history” is represented phonetically as both [HH IH S T ER IY] and [HH IH S T R IY], the first said as ‘his-stir-ee’ and the second as ‘hiss-tree’, to account for two different ways one may say the word. However, I believe that adding [HH IH S T AR IY] and [HH IH S T AO IY] (spoken as ‘his-star-ee’ and ‘his-story’) would broaden the module’s ability to recognize the word, and I think that just adding those to the .dic file should cover the alternate pronunciations.
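
      In .dic terms, the extra pronunciations I’m describing would just be additional numbered lines for the same word, the way the CMU dictionary format handles alternates (assuming Hello-Lucy’s file follows that convention - and whether every phoneme string above is valid in the Sphinx phone set is worth double-checking on the CMU tools page):

        HISTORY       HH IH S T ER IY
        HISTORY(2)    HH IH S T R IY
        HISTORY(3)    HH IH S T AR IY
        HISTORY(4)    HH IH S T AO IY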

      Anyway, this may either be the start of a long and fruitful discussion or someone who already knows all of this will chime in and help with some direction. Either way, I’m happy to be a part of this community finally (been stalking for a month now while waiting for all the stuff I ordered to arrive).

      posted in Troubleshooting
      Doctor0ctoroc
    • RE: Hardware Compatibility Question: Using the Grove Adjustable PIR Sensor & ReSpeaker Pi HAT in the Same Build?

      After looking into it a bit more, I believe the GP12 port is the way to go. While the ReSpeaker 4-Mic Linear Array is a different version of the ReSpeaker mic HAT, it has the leads labeled on both of the Grove-compatible ports on its main board, and from what I can tell, the SIG pin on the Grove PIR sensor should be connected to GPIO 12 on the Pi via the HAT when plugged into that GPIO port.

      [Image: ReSpeaker 4-Mic Linear Array board, showing the labeled Grove ports]

      I’m assuming that the connector ports are configured the same way on both the 2-Mics Pi HAT and the 4-Mic Linear Array (even though they’re labeled slightly differently, they’re both supposed to be compatible with Grove components, so I think that’s a safe assumption), so the leads should all be identical as well. If I’m not mistaken, the SIG pin would send the voltage from the PIR sensor to GPIO 12 and could therefore be used to indicate to the Pi whether there is motion. Am I on the right track?

      I think I’ll just continue posting my progress here for anyone with the same questions so that when I eventually figure it out, the answers will follow - unless someone else chimes in to help out! I’ve ordered the parts already and will be testing in a week or so once I’ve had some time to play around with them.

      posted in Hardware
      Doctor0ctoroc
    • Hardware Compatibility Question: Using the Grove Adjustable PIR Sensor & ReSpeaker Pi HAT in the Same Build?

      I’m very new to the MagicMirror world and my head is spinning with all of the possibilities (I’m so far down the rabbit hole right now), so I have some questions about using various pieces of hardware in the same build. In short, I need to know whether the Grove Adjustable PIR Motion Sensor (or any of the Grove products, for that matter) can be controlled from the Pi if it’s connected to the ReSpeaker 2-Mics Pi HAT rather than directly to the Pi, and if so, which port on the Pi HAT should be used.

      There are two connector ports on the Pi HAT - one labeled “GP12” and the other “I2C” - and according to the documentation for the HAT, both are Grove compatible. Additionally, it states: “GPIO12: Grove digital port, connected to GPIO12 & GPIO13” and “I2C: Grove I2C port, connected to I2C-1”.

      Here are the Pi HAT and Grove PIR Sensor boards for reference:

      [Images: ReSpeaker 2-Mics Pi HAT and Grove Adjustable PIR Motion Sensor boards]

      I also found a handy chart for the Pi HAT that gives additional info, so I’m hoping that even if no one knows from experience, someone can determine whether this is possible based on general knowledge of the Pi.

      As far as I can tell, it should be possible to plug the Grove PIR into the “GP12” port and reference GPIO 12 (pin 32) and/or GPIO 13 (pin 33) in the code to receive input from the PIR sensor without interfering with the functionality of the HAT - can anyone confirm?
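
      And if that’s right, reading the sensor from Node should be as simple as something like this sketch - it assumes the onoff npm package and BCM pin numbering, and I haven’t tested it with the HAT attached:

        // Sketch: watch the PIR's SIG line on GPIO 12 (BCM numbering, physical pin 32)
        const Gpio = require("onoff").Gpio;
        const pir = new Gpio(12, "in", "both"); // interrupt on both rising and falling edges

        pir.watch(function (err, value) {
          if (err) { throw err; }
          console.log(value === 1 ? "Motion detected" : "No motion");
        });

        process.on("SIGINT", function () {
          pir.unexport(); // release the pin on exit
          process.exit();
        });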

      I’m also open to other suggestions for implementing PIR Motion and Voice Control into my MM build but these just seemed like the best way to do that.

      posted in Hardware
      Doctor0ctoroc