
A few days after upgrading to 2.4.1, my disk filled up and now my mirror won't run. Sad face!



  • Upgraded. Did all the config changes. Worked great. Went out of town. Came back to a blank mirror. Was confused to find the screen and RPi still working fine. SSH’ed in, and every pm2 command I tried gave me

    [PM2] Spawning PM2 daemon with pm2_home=/home/pi/.pm2
    

    Much googling ensued. Tried changing permissions on a couple of files. Didn’t work. Finally decided it must mean my disk is full.

    pi@raspberrypi:~ $ df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/root        13G   13G     0 100% /
    devtmpfs        333M     0  333M   0% /dev
    tmpfs           462M     0  462M   0% /dev/shm
    tmpfs           462M   12M  450M   3% /run
    tmpfs           5.0M  4.0K  5.0M   1% /run/lock
    tmpfs           462M     0  462M   0% /sys/fs/cgroup
    /dev/mmcblk0p6   68M   21M   47M  31% /boot
    tmpfs            93M     0   93M   0% /run/user/1000
    

    So suspicions confirmed I guess?
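
    (Side note for anyone following along: ncdu isn’t on Raspbian out of the box. If I remember right, installing it and pointing it at the root filesystem was just the two commands below, assuming apt still cooperates on a nearly full disk. The -x flag keeps it from wandering onto the tmpfs mounts.)

    pi@raspberrypi:~ $ sudo apt-get install ncdu
    pi@raspberrypi:~ $ sudo ncdu -x /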

    I used ncdu to try to figure out if something had ballooned out of control, and that’s when I realized I’m a complete beginner and have no idea what to look for. Here’s what I saw:

    --- / ------------------------------------------------------------------------------------------------------------------------------
    .   7.4 GiB [##########] /var                                                                                                       
        2.8 GiB [###       ] /usr
        1.0 GiB [#         ] /home
      945.6 MiB [#         ] /opt
      182.2 MiB [          ] /lib
       21.0 MiB [          ] /boot
    .  11.9 MiB [          ] /run
        7.3 MiB [          ] /sbin
        6.8 MiB [          ] /bin
    .   5.7 MiB [          ] /etc
    . 228.0 KiB [          ] /tmp
    !  16.0 KiB [          ] /lost+found
    .  12.0 KiB [          ] /media
    e   4.0 KiB [          ] /srv
    !   4.0 KiB [          ] /root
    e   4.0 KiB [          ] /mnt
    .   0.0   B [          ] /sys
    .   0.0   B [          ] /proc
        0.0   B [          ] /dev
    

    Selecting that top directory (assuming the culprit is somewhere in there), I see this:

    --- /var ---------------------------------------------------------------------------------------------------------------------------
                             /..
    .   7.1 GiB [##########] /log                                                                                                       
    . 145.6 MiB [          ] /cache
    . 115.3 MiB [          ] /lib
      100.0 MiB [          ]  swap
      296.0 KiB [          ] /backups
    .  68.0 KiB [          ] /tmp
    .  28.0 KiB [          ] /spool
    e   4.0 KiB [          ] /opt
    e   4.0 KiB [          ] /mail
    e   4.0 KiB [          ] /local
    @   0.0   B [          ]  lock
    @   0.0   B [          ]  run  
    

    Hm. Let’s drill into that top one too. Here’s what we get (truncated to the entries that take up any substantial space):

    --- /var/log -----------------------------------------------------------------------------------------------------------------------
                             /..                                                                                                        
        2.4 GiB [##########]  kern.log
        1.8 GiB [#######   ]  messages
        1.4 GiB [#####     ]  messages.1
      534.7 MiB [##        ]  syslog.1
      533.4 MiB [##        ]  kern.log.1
      287.9 MiB [#         ]  syslog
       24.9 MiB [          ]  syslog.3.gz
    
    

    And that’s where I’m stuck. Is my culprit in there? Can I delete it? How can I prevent this from happening in the future?



  • I think you should be able to delete everything from /var/log as these are just log files. But the question remains why the logs are inflated so much. Something is getting triggered to write to the logs over and over. I wouldn’t really know how to pin down what it is, as I’m a total n00b myself (even after years of using Linux).
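
    The only thing I can think of (just how I’d poke at it myself, not something I’ve tested on your setup) is to look at the tail of the biggest file, spot which line keeps repeating, and then count how often it shows up. The grep pattern below is only a placeholder for whatever line you actually find:

    pi@raspberrypi:~ $ sudo tail -n 20 /var/log/kern.log
    pi@raspberrypi:~ $ sudo grep -c 'the line that keeps repeating' /var/log/kern.log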



  • Thanks for the response!

    Based on that, I went ahead and deleted those guys. I did this:

    pi@raspberrypi:~ $ sudo rm /var/log/kern* &>/dev/null
    pi@raspberrypi:~ $ sudo rm /var/log/messages* &>/dev/null
    

    …then rebooted and everything came back to life.
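
    (One thing I found out afterwards, for anyone copying this: rsyslog keeps those files open, so just rm’ing them doesn’t actually give the space back until the service restarts or you reboot, which is probably why the reboot was needed. Supposedly truncating them in place avoids that, something like:)

    pi@raspberrypi:~ $ sudo truncate -s 0 /var/log/kern.log /var/log/messages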

    I’ve now got a little breathing room on my disk:

    pi@raspberrypi:~ $ df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/root        13G   11G  1.4G  89% /
    devtmpfs        333M     0  333M   0% /dev
    tmpfs           462M     0  462M   0% /dev/shm
    tmpfs           462M   12M  450M   3% /run
    tmpfs           5.0M  4.0K  5.0M   1% /run/lock
    tmpfs           462M     0  462M   0% /sys/fs/cgroup
    /dev/mmcblk0p6   68M   21M   47M  31% /boot
    tmpfs            93M     0   93M   0% /run/user/1000
    

    And I guess I need to implement some kind of solution to “rotate my logs” if that’s the correct nomenclature?
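
    From what I’ve been reading, Raspbian already rotates these through logrotate (there’s an /etc/logrotate.d/rsyslog file), but only on a weekly schedule with no size cap, so a runaway logger can still fill the card between runs. The usual fix seems to be adding a maxsize line to the existing stanza. Rough sketch below; the 100M figure is my own guess, your file’s contents may differ, and logrotate only runs once a day from cron, so it’s a soft cap:

    # in /etc/logrotate.d/rsyslog, in the stanza that lists kern.log and messages:
            rotate 4
            weekly
            maxsize 100M    # added: also rotate whenever the file grows past 100 MB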

    The question remains: why did this happen? I know one of my modules is kind of noisy in the logs – it updates once a minute, and even though it’s working, it sends an error message to the logs every time. But it’s hard to imagine that one-line error adding up to multiple gigs in a couple of months. I dunno.
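
    Doing the math on it: even a long-ish log line is maybe 200 bytes, so one line a minute is roughly 200 × 60 × 24 ≈ 288 KB a day, under 10 MB a month. To produce 7 GiB in a couple of months, whatever is responsible has to be writing on the order of 100 MB a day, which is several lines every second, around the clock, not one line a minute.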

    Maybe “log rotation” should be addressed in the config instructions?



  • For future generations looking to solve this problem:

    I believe one of my modules (MMM-CTA, for the record) poops out a ton of log entries, and those filled up my SD card after a few weeks.

    I ended up installing a pm2 module called pm2-logrotate. https://www.npmjs.com/package/pm2-logrotate

    I’ve set it to rotate the pm2 log files (the ones under ~/.pm2/logs, where the modules’ console output ends up) before they get too big. Hopefully this will solve future problems.

    pi@raspberrypi:~ $ pm2 install pm2-logrotate
    pi@raspberrypi:~ $ pm2 set pm2-logrotate:max_size 2G
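
    (Worth noting: pm2-logrotate only handles the pm2 logs under ~/.pm2/logs; the /var/log files above are rotated by the system’s logrotate instead. A couple of other pm2-logrotate settings looked useful too, though I haven’t lived with these values long enough to vouch for them:)

    pi@raspberrypi:~ $ pm2 set pm2-logrotate:retain 7
    pi@raspberrypi:~ $ pm2 set pm2-logrotate:compress true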