November 27, 2022

Catching a mouse using a Raspberry Pi

Chapter 0: Introduction

To begin with, I live in Amsterdam, The Netherlands. It is a beautiful country with lots of canals, picturesque streets and cozy houses. Cozy houses are also old. Like, really old. The oldest house in Amsterdam, which is located around 100m from my office, was built in 1425.

There is one downside though: mice. There are mice in almost every old house. It's okay if they just live in the walls, and much worse if they find a way into your apartment. And please forget about mouse traps; the only efficient way to get rid of them is to find the mouse's entrance to your apartment and close it. Sometimes that's not easy at all.

This is the moment where a Raspberry Pi, Python and OpenCV come to the rescue. My idea was to build a piece of software that would detect the mouse's motion in the kitchen and notify me about it. It should also work in a dark room, because the mouse normally came at night and I didn't plan to leave the lights on for the whole night. And last but not least, the process of building the tool should be fun!

Chapter 1: Hardware

I already had a Raspberry Pi 3B, which I had used for a lot of other fun projects, so I decided to buy a camera and a display (so I could see that little monster). So I went to good old pimoroni.com. Luckily it was the Black Friday period, so everything I needed was nicely discounted.

So, here is the list of equipment required to find the mouse:

  1. Raspberry Pi 3B
  2. Night vision camera module
  3. HyperPixel 4.0 - Hi-Res Display

I will not go into details about how to connect and install everything; there is lots of info about it on the internet. So here is just a picture of how it looks. Cute, isn't it?

Raspberry Pi + night vision camera module
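
Before wiring anything into the detector, it is worth checking that OpenCV can actually see the camera. Here is a minimal sketch (not part of the project code, the file name is just an example) that grabs a single frame from the first video device and saves it to disk:

import cv2

# Open the first video device (/dev/video0); depending on the OS version the Pi
# camera may need to be enabled first (e.g. via raspi-config) so it shows up there
cap = cv2.VideoCapture(0)

ok, frame = cap.read()
if ok:
    cv2.imwrite("test_frame.jpg", frame)
    print("Captured a frame:", frame.shape)
else:
    print("Could not read from the camera, check the connection and the drivers")

cap.release()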

Chapter 2: Software

2.0: Requirements

numpy==1.23.5
opencv-python==4.6.0.66
python-telegram-bot==13.14
imageio==2.22.4
python-dotenv==0.21.0
imageio-ffmpeg==0.4.7
huey==2.4.4
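
All settings are read from a .env file via python-dotenv. For reference, such a file could look like the sketch below; the numeric values simply mirror the defaults used in the code, and the Telegram token and chat id are placeholders you have to replace with your own:

CONTOUR_THRESHOLD=20
WITH_SCREEN=1
MIN_GIF_LENGTH=30
GIF_SENDING_THRESHOLD=3
MAX_CONTOURS=0
TELEGRAM_BOT_TOKEN=<your bot token>
TELEGRAM_CHAT_ID=<your chat id>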

2.1: Motion detection

import os
import time
import cv2
import logging
import numpy as np

from dotenv import load_dotenv, dotenv_values
from bg import process_frames

load_dotenv()
CONTOUR_THRESHOLD = int(os.getenv("CONTOUR_THRESHOLD", 20))
WITH_SCREEN = int(os.getenv("WITH_SCREEN", 1))
MIN_GIF_LENGTH = int(os.getenv("MIN_GIF_LENGTH", 30))
GIF_SENDING_THRESHOLD = int(os.getenv("GIF_SENDING_THRESHOLD", 3))
MAX_CONTOURS = int(os.getenv("MAX_CONTOURS", 0))


def run():
    frames_with_motion = []
    previous_frame = None
    cap = cv2.VideoCapture(0)
    last_video_timestamp = None
    
    while True:
        # 1. Capture a frame from the camera
        _, frame = cap.read()
        img_rgb = cv2.cvtColor(src=frame, code=cv2.COLOR_BGR2RGB)

        # 2. Prepare image: grayscale and blur
        prepared_frame = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2GRAY)
        prepared_frame = cv2.GaussianBlur(src=prepared_frame, ksize=(5, 5), sigmaX=0)

        # 3. Calculate the difference with the previous frame;
        #    set the previous frame and continue if there is none yet
        if previous_frame is None:
            # First frame; there is no previous one yet
            previous_frame = prepared_frame
            continue

        diff_frame = cv2.absdiff(src1=previous_frame, src2=prepared_frame)
        previous_frame = prepared_frame

        # 4. Dilate the image a bit to make the differences more visible and more suitable for contour detection
        kernel = np.ones((6, 6))
        diff_frame = cv2.dilate(diff_frame, kernel, iterations=1)

        # 5. Only take different areas that are different enough (>CONTOUR_THRESHOLD / 255)
        thresh_frame = cv2.threshold(src=diff_frame, thresh=CONTOUR_THRESHOLD, maxval=255, type=cv2.THRESH_BINARY)[1]

        # 6. Find and optionally draw contours
        contours, _ = cv2.findContours(image=thresh_frame, mode=cv2.RETR_EXTERNAL, method=cv2.CHAIN_APPROX_SIMPLE)

        # Draw contours
        cv2.drawContours(image=img_rgb, contours=contours, contourIdx=-1, color=(0, 255, 0), thickness=2, lineType=cv2.LINE_AA)

        # If contours were detected, collect the frame; skip frames with too many
        # contours at once (MAX_CONTOURS == 0 disables that check)
        if contours and (not MAX_CONTOURS or len(contours) <= MAX_CONTOURS):
            frames_with_motion.append(img_rgb)
        else:
            len_frames = len(frames_with_motion)
            if len_frames > MIN_GIF_LENGTH:
                current_timestamp = int(time.time())

                # We don't want to send the gifs too often
                if last_video_timestamp and current_timestamp - last_video_timestamp < GIF_SENDING_THRESHOLD:
                    logging.info("Skipping, too soon")
                else:
                    last_video_timestamp = current_timestamp
                    # Process in background
                    process_frames(frames_with_motion, f'{current_timestamp}.mp4')

            frames_with_motion = []

        if WITH_SCREEN:
            cv2.imshow('Motion detector', img_rgb)

        if (cv2.waitKey(30) == 27):
            break

    # Cleanup
    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    config = dotenv_values()
    logging.info(f"Starting Jerry with threshold setting {config}")
    run()

2.2: Notifications

import telegram
import os
import imageio
import logging

from huey import SqliteHuey
from dotenv import load_dotenv, dotenv_values

load_dotenv()
TELEGRAM_BOT_TOKEN = os.getenv("TELEGRAM_BOT_TOKEN")
TELEGRAM_CHAT_ID = os.getenv("TELEGRAM_CHAT_ID")
config = dotenv_values()
logging.info(f"Starting Jerry with threshold setting {config}")

huey = SqliteHuey(filename='q.db')


@huey.task()
def send_gif(file_name, text):
    try:
        bot = telegram.Bot(token=TELEGRAM_BOT_TOKEN)
        with open(file_name, 'rb') as video_file:
            bot.send_video(
                chat_id=TELEGRAM_CHAT_ID,
                video=video_file,
                caption=text,
                supports_streaming=True
            )
    except Exception as e:
        logging.error(f"Error when sending video: {e}")

    try:
        os.remove(file_name)
    except Exception as e:
        logging.error(f"Error when deleting video: {e}")


@huey.task()
def process_frames(frames, filename):
    try:
        imageio.mimsave(filename, frames, fps=24)
    except Exception as e:
        logging.error(f"Error when saving video: {e}")
    else:
        # Calling the decorated function enqueues send_gif as another huey task
        send_gif(filename, str(len(frames)))

2.3: Process management

I used systemd to manage the processes on my Raspberry Pi, but there are other options as well (e.g. cron, bashrc, etc.). Here is the unit file I use for the motion detection part:

[Unit]
Description=Jerry the Mouse
After=multi-user.target

[Service]
Type=idle
ExecStart=/home/pi/jerry/venv/bin/python main.py
WorkingDirectory=/home/pi/jerry
Restart=always

[Install]
WantedBy=multi-user.target

...and for the notifications part:

[Unit]
Description=Jerry the Mouse (Background)
After=multi-user.target

[Service]
Type=idle
ExecStart=/home/pi/jerry/venv/bin/huey_consumer bg.huey
WorkingDirectory=/home/pi/jerry
Restart=always

[Install]
WantedBy=multi-user.target
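
Assuming the two unit files are saved under /etc/systemd/system/ as, say, jerry.service and jerry-bg.service (the names are just an example, not necessarily the ones from my setup), enabling them looks roughly like this:

# reload systemd so it picks up the new unit files
sudo systemctl daemon-reload

# start both services now and on every boot
sudo systemctl enable --now jerry.service jerry-bg.service

# check that they are actually running
systemctl status jerry.service jerry-bg.service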

Chapter 3: Results

(Un)fortunately, while I was writing this app, the mouse stopped coming. Probably I had just fixed the holes in the kitchen cabinet through which it might have been coming. However, to test the setup I attached a thread to a small bicycle light and pulled the thread. The result is exactly what I expected at the beginning!

Not a real mouse, just a box with a thread attached, pulled by me :)