3 different Alexa modules, which one is the best?
-
@borrigan Hello. I ended up successfully setting up MMM-AlexaPi and MMM-alexa.
They both have advantages and drawbacks.
MMM-AlexaPi:
- This isn’t a standalone module. You need to install AlexaPi on your Pi ( https://github.com/alexa-pi/AlexaPi ).
- The module ONLY shows the status of AlexaPi; it is AlexaPi itself that lets you talk to Alexa, so you don’t need the mirror at all to use it. But it’s neat because you can “see” what’s happening, including Alexa saying “yes” when you say her name.
- It might or might not be easy to set up. Some people have no issue just following the setup instructions, while others have to tweak a few things.
- Slower than MMM-alexa.
MMM-alexa:
- It is a standalone module, meaning you only need this module to talk to Alexa.
- The square shows the status: yellow is a bad configuration if I recall correctly, red is ready to listen, and green is listening.
- It is not activated by voice. You need either a button, or to send the right notification to the module, to make Alexa listen to you.
- It seems to answer less reliably than MMM-AlexaPi.
- You need to add https://sakirtemel.github.io/MMM-alexa/ to the allowed return URLs in the AVS security settings of your device, and you need to generate the first token by going to that URL and entering the requested information.
- Faster than MMM-AlexaPi.
In both cases I had to tweak some of my sound configuration files so that my microphone is picked up and the output goes to my speaker.
Because MMM-alexa needs to be activated manually or by another module (and because it seems to answer less reliably), I think MMM-AlexaPi is better, even though AlexaPi seems a bit slower to answer. To trigger MMM-alexa I used the voicecontrol module (a module that converts voice into notifications; I set up the word “Alexa” to send the right notification to MMM-alexa), but voicecontrol seems to have a lot of false positives: it was triggered way too often, sometimes even when I wasn’t talking. I didn’t manage to get mirrormirroronthewall working, though (I might try again later; I haven’t tried since I got the other two working).
I think mirrormirroronthewall is probably the best module of the three, since it supposedly lets you do more than the other two, which only let you ask Alexa questions and get answers back. As for the question:
But it does not say how to develop this module :(
You can either use an already existing module like “voicecontrol” to do that, or develop something yourself. To do that:
You can, for example, take the helloworld module in ~/MagicMirror/modules/default as a template. Copy/paste it into ~/MagicMirror/modules, rename the folder to something else, let’s say “toto” as an example, then rename the file inside to match that name: helloworld.js should now be toto.js. Now open toto.js and replace every occurrence of “helloworld” with “toto”.
Then delete the text: "Hello World!" line and the wrapper.innerHTML = this.config.text; line, since you don’t need to display anything.
Now you can write this.sendNotification('ALEXA_START_RECORDING', {}); where the wrapper.innerHTML = this.config.text; line was.
That sends the notification telling MMM-alexa to start recording what you are saying. However, that example will only trigger once. If you have never coded anything before, I don’t recommend developing a module like this yourself; it’s better to use an already existing module.
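For illustration, a stripped-down toto.js along those lines could look like the sketch below (the ALEXA_START_RECORDING notification is the one mentioned above; as noted, sending it from getDom() means it effectively fires only once):
// ~/MagicMirror/modules/toto/toto.js -- minimal sketch adapted from the default helloworld module
Module.register("toto", {
  getDom: function () {
    var wrapper = document.createElement("div");
    // Nothing to display; just tell MMM-alexa to start listening.
    this.sendNotification("ALEXA_START_RECORDING", {});
    return wrapper;
  }
});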
/!\ If your sound configuration isn’t right, you might not be able to use voicecontrol and MMM-alexa at the same time.
Explaining the sound configuration here would be pointless, since you might not have those issues. -
Thank you very much @romain! I will go for AlexaPi with MMM-AlexaPi. I spent all of yesterday trying to set it up and will continue today; your input is very valuable. Just a few extra newbie questions:
- The AlexaPi service, when running, is it “always on”? Is it always listening? How can I test it directly without having integrated it with MMM-AlexaPi yet?
- Where can the activation word be configured? If I just say “Alexa”, should it work?
- Finally, I spent hours troubleshooting the microphone. It is a very standard Logitech microphone, and I just could not get it working.
Here is what I have.
Result from the lsusb command:
pi@raspberrypi:~ $ lsusb
Bus 001 Device 006: ID 17ef:6019 Lenovo
Bus 001 Device 005: ID 04b3:3025 IBM Corp. NetVista Full Width Keyboard
Bus 001 Device 004: ID 046d:0a03 Logitech, Inc. Logitech USB Microphone
Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. SMSC9512/9514 Fast Ethernet Adapter
Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp.
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Result from the arecord -L command:
pi@raspberrypi:~ $ arecord -L
null
Discard all samples (playback) or generate zero samples (capture)
pulse
PulseAudio Sound Server
sysdefault:CARD=Microphone
Logitech USB Microphone, USB Audio
Default Audio Device
front:CARD=Microphone,DEV=0
Logitech USB Microphone, USB Audio
Front speakers
surround21:CARD=Microphone,DEV=0
Logitech USB Microphone, USB Audio
2.1 Surround output to Front and Subwoofer speakers
surround40:CARD=Microphone,DEV=0
Logitech USB Microphone, USB Audio
4.0 Surround output to Front and Rear speakers
surround41:CARD=Microphone,DEV=0
Logitech USB Microphone, USB Audio
4.1 Surround output to Front, Rear and Subwoofer speakers
surround50:CARD=Microphone,DEV=0
Logitech USB Microphone, USB Audio
5.0 Surround output to Front, Center and Rear speakers
surround51:CARD=Microphone,DEV=0
Logitech USB Microphone, USB Audio
5.1 Surround output to Front, Center, Rear and Subwoofer speakers
surround71:CARD=Microphone,DEV=0
Logitech USB Microphone, USB Audio
7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
iec958:CARD=Microphone,DEV=0
Logitech USB Microphone, USB Audio
IEC958 (S/PDIF) Digital Audio Output
dmix:CARD=Microphone,DEV=0
Logitech USB Microphone, USB Audio
Direct sample mixing device
dsnoop:CARD=Microphone,DEV=0
Logitech USB Microphone, USB Audio
Direct sample snooping device
hw:CARD=Microphone,DEV=0
Logitech USB Microphone, USB Audio
Direct hardware device without any conversions
plughw:CARD=Microphone,DEV=0
Logitech USB Microphone, USB Audio
Hardware device with all software conversions
Log from AlexaPi:
● AlexaPi.service - Alexa client for all your devices
Loaded: loaded (/usr/lib/systemd/system/AlexaPi.service; enabled)
Active: active (running) since Mon 2017-04-24 10:59:14 CDT; 7s ago
Docs: https://github.com/alexa-pi/AlexaPi/wiki
Main PID: 1626 (python)
CGroup: /system.slice/AlexaPi.service
├─1626 /usr/bin/python /opt/AlexaPi/src/main.py --daemon
├─1638 /usr/bin/pulseaudio --start --log-target=syslog
└─1657 sox -q /opt/AlexaPi/src/resources/hello.mp3 -t alsa default vol -6 dB pad 0 0
Apr 24 10:59:20 raspberrypi pulseaudio[1638]: [pulseaudio] module-udev-detect.c: Tried to configure /devices… 10s
Apr 24 10:59:21 raspberrypi python[1626]: Exception in thread Thread-1:
Apr 24 10:59:21 raspberrypi python[1626]: Traceback (most recent call last):
Apr 24 10:59:21 raspberrypi python[1626]: File “/usr/lib/python2.7/threading.py”, line 810, in __bootstrap_inner
Apr 24 10:59:21 raspberrypi python[1626]: self.run()
Apr 24 10:59:21 raspberrypi python[1626]: File “/usr/lib/python2.7/threading.py”, line 763, in run
Apr 24 10:59:21 raspberrypi python[1626]: self.__target(*self.__args, **self.__kwargs)
Apr 24 10:59:21 raspberrypi python[1626]: File “/opt/AlexaPi/src/alexapi/triggers/pocketsphinxtrigger.py”, …hread
Apr 24 10:59:21 raspberrypi python[1626]: inp = alsaaudio.PCM(alsaaudio.PCM_CAPTURE, alsaaudio.PCM_NORMAL, …ce’])
Apr 24 10:59:21 raspberrypi python[1626]: ALSAAudioError: Input/output error [front:CARD=Microphone,DEV=0]
Hint: Some lines were ellipsized, use -l to show in full.
Thank you so much for your help. I don’t mean to take too much of your time, but I am really frustrated :(
Gerardo
Mexico City -
-
The AlexaPi service, when running, is it “always on”? Is it always listening? How can I test it directly without having integrated it with MMM-AlexaPi yet?
You should hear “hello” when it starts, and then if you say “Alexa” you should hear “yes”, after which you can ask a question. It is always listening.
A little tip for you: before using the service, try to run AlexaPi yourself. You get more debugging information that way.
To do that, first deactivate the service from the terminal with sudo systemctl stop AlexaPi.service; sudo systemctl disable AlexaPi.service
(you can re-enable it later with sudo systemctl enable AlexaPi.service).
Then you can run AlexaPi yourself by typing /opt/AlexaPi/src/main.py -d in the terminal (assuming you installed it in that location, anyway).
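Put together, the debug workflow looks like this (a sketch, assuming the install path used in this thread):
sudo systemctl stop AlexaPi.service      # stop the running service
sudo systemctl disable AlexaPi.service   # keep it from starting on its own
/opt/AlexaPi/src/main.py -d              # run AlexaPi yourself and watch the debug output
sudo systemctl enable AlexaPi.service    # re-enable the service when you are done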
Where can the activation word be configured? If I just say “Alexa”, should it work?
In the YAML file /etc/opt/AlexaPi/config.yaml there is a pocketsphinx: section (this is what is used to detect words, I believe). In that section there is a keyphrase: key with "alexa" as its value. You can simply change the word to another one. Don’t choose an overly complicated one, though: I think pocketsphinx tries to guess what the word is supposed to sound like based on its spelling, so if you pick a word that isn’t English it might not guess correctly. You should keep “alexa” until you get everything working, because we know that one works for sure; you can test other words later.
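The relevant snippet of config.yaml would look roughly like this (a sketch; the exact nesting may differ between AlexaPi versions, but the keyphrase value is the wake word):
pocketsphinx:
  keyphrase: "alexa"   # the wake word; keep "alexa" until everything works, then experiment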
Finally, I spent hours troubleshooting the microphone. It is a very standard Logitech microphone, and I just could not get it working.
I am no expert in sound, but I am going to try to help you.
First, I would rather have the output of arecord -l than arecord -L; the latter gives too much information for my understanding.
The output of my arecord -l looks like this:
pi@raspberrypi:~ $ arecord -l
**** List of CAPTURE Hardware Devices ****
card 1: Device [USB Audio Device], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
I can see it’s “card 1” and “device 0”; that’s what I’m interested in.
Then my aplay -l gives me:
pi@raspberrypi:~ $ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: ALSA [bcm2835 ALSA], device 0: bcm2835 ALSA [bcm2835 ALSA]
  Subdevices: 8/8
  Subdevice #0: subdevice #0
  Subdevice #1: subdevice #1
  Subdevice #2: subdevice #2
  Subdevice #3: subdevice #3
  Subdevice #4: subdevice #4
  Subdevice #5: subdevice #5
  Subdevice #6: subdevice #6
  Subdevice #7: subdevice #7
card 0: ALSA [bcm2835 ALSA], device 1: bcm2835 ALSA [bcm2835 IEC958/HDMI]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 1: Device [USB Audio Device], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
I am using the default speaker output, which is the bcm2835 ALSA device, i.e. “card 0” and “device 0”.
Knowing that, you can test whether your devices work. You should be able to run
aplay -D hw:0,0 /usr/share/sounds/alsa/Front_Center.wav
to test your speaker, where “hw:0,0” (you could also put “plughw:0,0”) is your output device. I’m not sure exactly what “hw” and “plughw” mean, but the numbers are the card number and the device number, 0,0 in my example (remember the aplay -l output above). I believe this corresponds to the vlc handler; you can test the sox handler the same way with play instead of aplay. You can test your microphone in the same way by running
arecord -r 48000 -f S16_LE -D hw:1,0 -d 5 test.wav
This records from device hw:1,0 (in my example: card 1, device 0) for 5 seconds at a rate of 48000 Hz with the S16_LE sample format (signed 16-bit little-endian), and writes the result to a file named test.wav. You can then play or aplay that file to check that the recording went well.
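Putting those together, a quick test sequence looks like this (a sketch; swap hw:0,0 and hw:1,0 for the card/device numbers from your own aplay -l and arecord -l output):
aplay -D hw:0,0 /usr/share/sounds/alsa/Front_Center.wav   # speaker test: you should hear "Front Center"
arecord -r 48000 -f S16_LE -D hw:1,0 -d 5 test.wav        # record 5 seconds from the USB microphone
aplay test.wav                                            # play the recording back to check it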
If you manage to make those commands work: in /etc/opt/AlexaPi/config.yaml there is a sound: section with an "input_device:" key and an "output_device:" key. Both of those keys have the value "default". For the output it’s easy: the default is the system default output, which is hw:0,0 in my example, so you can leave "default" or put "hw:0,0" instead if you want to use that output like me.
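In other words, that section looks roughly like this (a sketch; the key names are the ones described above, the values are examples):
sound:
  output_device: "default"   # or e.g. "hw:0,0" to name the built-in bcm2835 output explicitly
  input_device: "default"    # the tricky one; see below for how I made "default" reach the USB microphone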
The tricky part is the input. The input is your microphone, but there is no “default” microphone on the Pi, right? So the chances that AlexaPi understands which device you mean are pretty slim, in my opinion.
But when I wrote "hw:1,0" or "plughw:1,0" instead of default, I got an error; for some reason it didn’t want to accept that value. That’s where some trickery was needed for me. I’m sure some people don’t have to do this, but I did.
I created an ALSA configuration file named “asound.conf” with the following in it:
pcm.myTest {
    type dsnoop
    ipc_key 816357492
    ipc_key_add_uid 0
    ipc_perm 0666
    slave {
        pcm "hw:1,0"
        channels 1
    }
}
pcm.!default {
    type asym
    playback.pcm {
        type plug
        slave.pcm "hw:0,0"
    }
    capture.pcm {
        type plug
        slave.pcm "myTest"
    }
}
This overrides the “default” configuration.
type asym means my playback and capture aren’t on the same sound device.
The playback.pcm block describes my output device; in my example you can see I put "hw:0,0" because I use the default output.
The capture.pcm block describes the input device (the microphone). I pointed it at "myTest", which is defined above in the file (the name isn’t very descriptive, but I was testing and never changed it >.>).
I’m not going to go into detail about the myTest part, but basically it says that the microphone can be used by multiple applications at the same time (that’s what the dsnoop type is for) and that it is "hw:1,0" (remember the arecord -l output earlier in the post).
This asound.conf file should be put in the /etc/ folder.
Well, that’s it. That’s what I had to do to make my microphone work with AlexaPi. Adapt it to your own devices and it might work for you like it did for me.
Remember to test it without the service first; when that works, enable the service and see whether it still works, and if not we’ll try to understand why. -
This post is deleted! -
@romain thank you very very much for all this feedback. I was definitely missing some pieces of the puzzle. I will try this again today and post my findings here. Have a great day!
-
@johnnyboy I struggled with this at first too. But the Lambda service works with both US and Ireland (west-eu or eu-west, I don’t remember exactly which now).
I ended up with a skill that was working, on the Amazon interface at least, by choosing that region instead of US (even though I’m in France, I no longer had errors in the Amazon test area).
I had to change the hard-coded region in the code though (an easy fix).
I think I had no more errors in the end (I don’t remember for sure), but nothing happened when I talked (maybe it was a microphone issue again; I’m not really sure).
I also did not understand how to use npm start dev correctly at that time, so maybe I would do a better job of finding the issue now.
Maybe I’ll try today, maybe I’ll never try again, we’ll see (it’s not my Raspberry Pi I’m using, so I’m limited in when I can use it, and I’d rather do what I’m asked to do first before trying again). -
This post is deleted! -
@johnnyboy Yes, but it’s not your fault. I read an entire, very long thread about mirror mirror on the wall (and maybe some others; I don’t remember whether all the information was in one place), and it seems one person talked about trying that (changing the region), and the creator of the module himself didn’t know Ireland was possible. He did point out which file the region setting is in, though (it wasn’t hard to find anyway).
So I do think he wrote “that one ‘must’ be in us-east” because he didn’t know there was another option.
And he didn’t change his tutorial to reflect that (I’m guessing because he wasn’t able to test it that way himself, or maybe he wasn’t willing to). -
@romain hi again!!
I have made some progress. I followed your instructions, created the asound.conf file, and set up myTest as the input device in the config.yaml file. When I run AlexaPi now, it no longer gives me a microphone error, but a strange “Capture data too large” error instead. I do hear the “hello” sound, but when I say “alexa” nothing happens.
Really weird. Here is the full output of main.py execution:
pi@raspberrypi:/opt/AlexaPi/src $ /opt/AlexaPi/src/main.py -d INFO: pocketsphinx.c(152): Parsed model-specific feature parameters from /usr/local/lib/python2.7/dist-packages/pocketsphinx/model/en-us/feat.params Current configuration: [NAME] [DEFLT] [VALUE] -agc none none -agcthresh 2.0 2.000000e+00 -allphone -allphone_ci no no -alpha 0.97 9.700000e-01 -ascale 20.0 2.000000e+01 -aw 1 1 -backtrace no no -beam 1e-48 1.000000e-48 -bestpath yes yes -bestpathlw 9.5 9.500000e+00 -ceplen 13 13 -cmn live batch -cmninit 40,3,-1 41.00,-5.29,-0.12,5.09,2.48,-4.07,-1.37,-1.78,-5.08,-2.05,-6.45,-1.42,1.17 -compallsen no no -debug 0 -dict /usr/local/lib/python2.7/dist-packages/pocketsphinx/model/cmudict-en-us.dict -dictcase no no -dither no no -doublebw no no -ds 1 1 -fdict -feat 1s_c_d_dd 1s_c_d_dd -featparams -fillprob 1e-8 1.000000e-08 -frate 100 100 -fsg -fsgusealtpron yes yes -fsgusefiller yes yes -fwdflat yes yes -fwdflatbeam 1e-64 1.000000e-64 -fwdflatefwid 4 4 -fwdflatlw 8.5 8.500000e+00 -fwdflatsfwin 25 25 -fwdflatwbeam 7e-29 7.000000e-29 -fwdtree yes yes -hmm /usr/local/lib/python2.7/dist-packages/pocketsphinx/model/en-us -input_endian little little -jsgf -keyphrase alexa -kws -kws_delay 10 10 -kws_plp 1e-1 1.000000e-01 -kws_threshold 1 1.000000e-10 -latsize 5000 5000 -lda -ldadim 0 0 -lifter 0 22 -lm -lmctl -lmname -logbase 1.0001 1.000100e+00 -logfn -logspec no no -lowerf 133.33334 1.300000e+02 -lpbeam 1e-40 1.000000e-40 -lponlybeam 7e-29 7.000000e-29 -lw 6.5 6.500000e+00 -maxhmmpf 30000 30000 -maxwpf -1 -1 -mdef -mean -mfclogdir -min_endfr 0 0 -mixw -mixwfloor 0.0000001 1.000000e-07 -mllr -mmap yes yes -ncep 13 13 -nfft 512 512 -nfilt 40 25 -nwpen 1.0 1.000000e+00 -pbeam 1e-48 1.000000e-48 -pip 1.0 1.000000e+00 -pl_beam 1e-10 1.000000e-10 -pl_pbeam 1e-10 1.000000e-10 -pl_pip 1.0 1.000000e+00 -pl_weight 3.0 3.000000e+00 -pl_window 5 5 -rawlogdir -remove_dc no no -remove_noise yes yes -remove_silence yes yes -round_filters yes yes -samprate 16000 1.600000e+04 -seed -1 -1 -sendump -senlogdir -senmgau -silprob 0.005 5.000000e-03 -smoothspec no no -svspec 0-12/13-25/26-38 -tmat -tmatfloor 0.0001 1.000000e-04 -topn 4 4 -topn_beam 0 0 -toprule -transform legacy dct -unit_area yes yes -upperf 6855.4976 6.800000e+03 -uw 1.0 1.000000e+00 -vad_postspeech 50 50 -vad_prespeech 20 20 -vad_startspeech 10 10 -vad_threshold 2.0 2.000000e+00 -var -varfloor 0.0001 1.000000e-04 -varnorm no no -verbose no no -warp_params -warp_type inverse_linear inverse_linear -wbeam 7e-29 7.000000e-29 -wip 0.65 6.500000e-01 -wlen 0.025625 2.562500e-02 INFO: feat.c(715): Initializing feature stream to type: '1s_c_d_dd', ceplen=13, CMN='batch', VARNORM='no', AGC='none' INFO: acmod.c(166): Using subvector specification 0-12/13-25/26-38 INFO: mdef.c(518): Reading model definition: /usr/local/lib/python2.7/dist-packages/pocketsphinx/model/en-us/mdef INFO: mdef.c(531): Found byte-order mark BMDF, assuming this is a binary mdef file INFO: bin_mdef.c(336): Reading binary model definition: /usr/local/lib/python2.7/dist-packages/pocketsphinx/model/en-us/mdef INFO: bin_mdef.c(516): 42 CI-phone, 137053 CD-phone, 3 emitstate/phone, 126 CI-sen, 5126 Sen, 29324 Sen-Seq INFO: tmat.c(149): Reading HMM transition probability matrices: /usr/local/lib/python2.7/dist-packages/pocketsphinx/model/en-us/transition_matrices INFO: acmod.c(117): Attempting to use PTM computation module INFO: ms_gauden.c(127): Reading mixture gaussian parameter: /usr/local/lib/python2.7/dist-packages/pocketsphinx/model/en-us/means INFO: ms_gauden.c(242): 42 codebook, 3 
feature, size: INFO: ms_gauden.c(244): 128x13 INFO: ms_gauden.c(244): 128x13 INFO: ms_gauden.c(244): 128x13 INFO: ms_gauden.c(127): Reading mixture gaussian parameter: /usr/local/lib/python2.7/dist-packages/pocketsphinx/model/en-us/variances INFO: ms_gauden.c(242): 42 codebook, 3 feature, size: INFO: ms_gauden.c(244): 128x13 INFO: ms_gauden.c(244): 128x13 INFO: ms_gauden.c(244): 128x13 INFO: ms_gauden.c(304): 222 variance values floored INFO: ptm_mgau.c(476): Loading senones from dump file /usr/local/lib/python2.7/dist-packages/pocketsphinx/model/en-us/sendump INFO: ptm_mgau.c(500): BEGIN FILE FORMAT DESCRIPTION INFO: ptm_mgau.c(563): Rows: 128, Columns: 5126 INFO: ptm_mgau.c(595): Using memory-mapped I/O for senones INFO: ptm_mgau.c(838): Maximum top-N: 4 INFO: phone_loop_search.c(114): State beam -225 Phone exit beam -225 Insertion penalty 0 INFO: dict.c(320): Allocating 138824 * 20 bytes (2711 KiB) for word entries INFO: dict.c(333): Reading main dictionary: /usr/local/lib/python2.7/dist-packages/pocketsphinx/model/cmudict-en-us.dict INFO: dict.c(213): Dictionary size 134723, allocated 1016 KiB for strings, 1679 KiB for phones INFO: dict.c(336): 134723 words read INFO: dict.c(358): Reading filler dictionary: /usr/local/lib/python2.7/dist-packages/pocketsphinx/model/en-us/noisedict INFO: dict.c(213): Dictionary size 134728, allocated 0 KiB for strings, 0 KiB for phones INFO: dict.c(361): 5 words read INFO: dict2pid.c(396): Building PID tables for dictionary INFO: dict2pid.c(406): Allocating 42^3 * 2 bytes (144 KiB) for word-initial triphones INFO: dict2pid.c(132): Allocated 21336 bytes (20 KiB) for word-final triphones INFO: dict2pid.c(196): Allocated 21336 bytes (20 KiB) for single-phone word triphones INFO: kws_search.c(406): KWS(beam: -1080, plp: -23, default threshold -225, delay 10) 2017-04-26 19:58:32 DEBUG: Setting up playback handler: SoxHandler 2017-04-26 19:58:32 INFO: Checking Internet Connection ... 2017-04-26 19:58:32 DEBUG: Starting new HTTPS connection (1): api.amazon.com 2017-04-26 19:58:33 DEBUG: https://api.amazon.com:443 "GET /auth/o2/token HTTP/1.1" 404 29 2017-04-26 19:58:33 INFO: Connection OK 2017-04-26 19:58:33 INFO: AVS token: Requesting a new one 2017-04-26 19:58:33 DEBUG: Starting new HTTPS connection (1): api.amazon.com 2017-04-26 19:58:34 DEBUG: https://api.amazon.com:443 "POST /auth/o2/token HTTP/1.1" 200 980 2017-04-26 19:58:34 INFO: AVS token: Obtained successfully 2017-04-26 19:58:34 DEBUG: Stopping audio play 2017-04-26 19:58:34 DEBUG: Stopped play. No SoX process to stop 2017-04-26 19:58:34 DEBUG: Playing audio: /opt/AlexaPi/src/resources/hello.mp3 Exception in thread Thread-1: Traceback (most recent call last): File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner self.run() File "/usr/lib/python2.7/threading.py", line 763, in run self.__target(*self.__args, **self.__kwargs) File "/opt/AlexaPi/src/alexapi/triggers/pocketsphinxtrigger.py", line 70, in thread _, buf = inp.read() ALSAAudioError: Capture data too large. Try decreasing period size 2017-04-26 19:58:34 DEBUG: Started play. sox -q /opt/AlexaPi/src/resources/hello.mp3 -t alsa default vol -6 dB pad 0 0 2017-04-26 19:58:36 DEBUG: Finished play.
Thank you very much for your patience @romain
Best regards,
Gerardo
-
@borrigan Try leaving “default” as the input device rather than myTest. The default device will take care of calling myTest itself (I think putting myTest in directly doesn’t work, for some weird reason I don’t know about).
Do you hear the “hello” when you run the script?
If you still have the error after using default, then try changing your input handler from sox to vlc. I don’t think that’s related, but at this point trying things might be the right approach x) Maybe that will fix it.
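For reference, the input line in /etc/opt/AlexaPi/config.yaml would then go back to something like this (a sketch; I’m not showing the handler setting, since its exact key name depends on your AlexaPi version):
sound:
  input_device: "default"   # "default" capture is routed to the USB microphone (myTest) by the asound.conf from earlier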