Face detection with OpenCV and Python on Raspberry Pi

Many articles here and there describe how to use OpenCV on the Raspberry Pi. However, most of them are about setting up the environment by hand, meaning compiling OpenCV from sources. There are two main disadvantages to this approach. Firstly, you have to spend time compiling it: on a Raspberry Pi 3 it takes quite a while, not to mention the earlier versions of this mini PC. Secondly, keeping the installation up to date costs additional time. You can go for a middle ground, cross-compilation, which is faster but requires a properly configured build environment. With the above in mind, I will show you how to use OpenCV with the Python interface installed from pre-compiled packages. If I have your attention, keep reading 😉

Installing libraries

First of all you have to install two packages: libopencv-dev and python-opencv. The first one contains the OpenCV libraries themselves. This is the key part: it is all you need to use OpenCV from, for example, C/C++. If you do not need the examples or other extras, you do not have to compile anything yourself. The second one is a bridge between OpenCV and Python, an interface which lets you use OpenCV libraries, functions, etc. from Python. You can install both by typing:

sudo apt-get install libopencv-dev python-opencv

in the console. It will pull in additional packages along the way; allow it to do so.
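To verify that the Python bindings work, you can run a two-line script like the one below; it simply imports the module and prints the version that came with the package:

import cv2

print(cv2.__version__)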

RPi camera support

I am using the Raspberry Pi camera module, but you can use any other webcam. The most important thing is that it is able to capture some photos ;-). The only thing that changes is the way you interact with the camera, i.e. how you capture an image. For this part I will use the Python interface along with the PiCamera module. This allows you to easily access the device and get some photos.

To get a photo with PiCamera module you need to do the following things:

from picamera import PiCamera
camera = PiCamera()
camera.capture(imagepath)

The first line loads PiCamera so you can use it in the Python script. The second line creates an object through which you get access to the physical device, while the last one captures a single image. As a parameter to the capture() method you pass a path to the file where the photo should be stored. And that’s all: these three lines let you use the camera and capture images. In addition to the above, I sometimes use other options of the interface, e.g.

camera.rotation = 180
camera.hflip = False
camera.vflip = True

You can rotate the image by a given number of degrees, and you can flip it horizontally and vertically. To read more about the PiCamera module or the device itself you can refer to this page, this one, this one and this one :).
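Putting the above together, a complete capture routine could look like the short sketch below. The resolution, the two-second warm-up and the output path are my own example values, not anything the camera requires:

from time import sleep
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (1024, 768)  # example resolution, adjust to your needs
camera.rotation = 180            # compensate for how the module is mounted
camera.vflip = True

sleep(2)                         # give the sensor a moment to adjust exposure
camera.capture('/home/pi/test.jpg')  # example output path
camera.close()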

OpenCV and face detection

As you probably know, the OpenCV library is not only a collection of image processing algorithms. A great part of OpenCV is machine learning: a set of algorithms that gives a computer a hint of human intelligence. Sorry for the vague definition; one of its purposes is to encourage you to look it up yourself. Back to face detection in OpenCV… One of the algorithms used for face detection, or object detection in general, is the Haar feature-based cascade classifier. Here you can find a good example.

Generally there are two stages in a classifier’s life cycle: training and recognition. In the training stage the algorithm is given a set of features and, generally speaking but not always, the labels assigned to them. Based on this input it tries to find a function that gives a quantitative measure to distinguish one set of features from another. The second stage is when we feed the trained algorithm with unknown data and it gives us an output: the class to which the input feature set is closest.
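In our case the training stage has already been done for us: OpenCV ships pre-trained Haar cascades as XML files, so our script only performs the recognition stage. A minimal sketch of that stage on a single image could look like this (the image path is just an example):

import cv2

# Training stage: already done, we just load the pre-trained cascade
face_cascade = cv2.CascadeClassifier('./haarcascade_frontalface_alt.xml')

# Recognition stage: feed an unknown image and get the detected regions back
img = cv2.imread('/home/pi/test.jpg')  # example input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
print("Found %d face(s)" % len(faces))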

Connect the dots

Before we put all the parts together there is one more thing to be done, namely creating a temporary directory in RAM. Since the RPi runs on an SD card, which is relatively slow and wears out, it is better not to write data constantly to the same place. We will create a temporary directory for this. It is the same technique I presented in one of my previous posts. We will use tmpfs to facilitate the process. Simply mount a temporary directory with tmpfs, or add a line to /etc/fstab to mount it during system boot. In either case, you have to create a dedicated directory first. Run this:

sudo mkdir /mnt/ram

The created directory will be used as the mount point. To mount it, use the command below:

sudo mount -t tmpfs -o size=8M tmpfs /mnt/ram

Or add this line to /etc/fstab and reboot to apply the change:

tmpfs /mnt/ram tmpfs nodev,nosuid,size=8M 0 0

The above will use RAM, 8 MB of it to be exact, to create a virtual directory. After that you can use it as if it were a normal directory on disk. Keep in mind that after a reboot the data saved there will no longer be available.
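You can check whether the mount succeeded with df; an 8.0M tmpfs entry for /mnt/ram should show up:

df -h /mnt/ram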

import cv2
from picamera import PiCamera

print("Load classifier")
face_cascade = cv2.CascadeClassifier('./haarcascade_frontalface_alt.xml')

scf = 0.5
image = "/mnt/ram/image.jpg"

camera = PiCamera()

while True:
    print("Capture image")
    camera.capture(image)

    print("Get a frame")
    frame = cv2.imread(image)

    print("Resizing")
    frame = cv2.resize(frame, None, fx=scf, fy=scf, interpolation=cv2.INTER_AREA)

    print("Transformation to grayscale")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    print("Face detection")
    face_rects = face_cascade.detectMultiScale(gray, 1.3, 5)

    print("Face rectangle")
    print(face_rects)

    print("Drawing rectangle")
    for (x, y, w, h) in face_rects:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 3)

    print("Show image")
    cv2.imshow('Face Detector', frame)

    c = cv2.waitKey(1)
    if c == 27:
        break

camera.close()
cv2.destroyAllWindows()

The code above lets you detect faces. Press the Escape key while the window is focused to terminate the program. And here is the effect …

You can enhance it with another classifier, this time for detecting the eyes. You have to make two slight modifications. First, add this line

eye_cascade = cv2.CascadeClassifier('./haarcascade_eye.xml')

before or after creating the cascade for face detection (face_cascade).

After this you have to modify the “Drawing rectangle” procedure. Simply change the for loop from above to this one:

    for (x, y, w, h) in face_rects:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 3)
        roi_gray = gray[y:y+h, x:x+w]
        roi_color = frame[y:y+h, x:x+w]
        eyes = eye_cascade.detectMultiScale(roi_gray)
        for (ex, ey, ew, eh) in eyes:
            cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)

Now you have it 😉 and it looks like this

PS

If you are looking for the trained classifiers (haarcascade_frontalface_alt.xml and haarcascade_eye.xml) you can find them here (https://github.com/opencv/opencv/tree/master/data/haarcascades).
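For example, you can download them directly from that repository; the URLs below follow GitHub’s usual raw-file pattern, so double-check them if the download fails:

wget https://raw.githubusercontent.com/opencv/opencv/master/data/haarcascades/haarcascade_frontalface_alt.xml
wget https://raw.githubusercontent.com/opencv/opencv/master/data/haarcascades/haarcascade_eye.xml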
