myR: The Pi Camera

The Pi camera is a topic that could fill several posts.

The hardware installation was pretty simple, even for me. I followed the instructions reported at https://www.raspberrypi.org/help/camera-module-setup/ and https://www.raspberrypi.org/documentation/usage/camera/README.md.

Then I tested that it worked using raspivid: https://www.raspberrypi.org/documentation/usage/camera/raspicam/raspivid.md
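For example, a short test clip like this (the filename is arbitrary) is enough to confirm the camera is working:

raspivid -t 5000 -o test.h264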

So, I was ready to develop my camera.py module.

Requirements are:

  • stream images to a browser
  • recognize items in the image (ball tracking, as a first attempt)
  • stream the result to the browser
  • run the camera code in a parallel thread

My camera.py module  includes 4 objects (classes):

  • camera: collects the images from the camera and processes them, in particular performing the ball tracking.
  • camserver: serves the images to the browser. It controls the camHandler.
  • camHandler: the web server handler class that packs the data and returns it to the browser.
  • camera_data: the class holding the data to share (camera parameters, results, images).
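To give an idea, a minimal sketch of the shared object could look like this (the field names are my guess at what "camera parameters, results, images" maps to, not the actual code):

class camera_data(object):
    # data shared between the camera thread and the web server
    def __init__(self):
        self.config = ""      # command string used to tune camera / HSV parameters
        self.image = None     # latest (processed) frame to serve to the browser
        self.result = None    # ball position and estimated distance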

 

camera(threading.Thread):

It first initializes the camera with:

self.camera = picamera.PiCamera()

Then the main loop of the camera can be started with camera.start().

This loop includes calls to the following (a sketch of the whole loop follows the list):

  • the procedure self.configure(): here it is possible to set up the camera parameters by writing a command into the variable self.data.config. This is useful for tuning the color thresholds during the ball tracking.
  • self.camera.capture(rawCapture, format='bgr', use_video_port=True), which simply takes the picture.
  • finally self.balltracking(image), where the ball tracking is performed, returning some information such as the position of the ball in the image and the estimated distance of the ball from the rover.
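Put together, the run() loop of the camera thread looks roughly like the sketch below (the PiRGBArray buffer handling and the stub methods are my assumptions about the structure, not the original code):

import threading
import picamera
from picamera.array import PiRGBArray

class camera(threading.Thread):
    def __init__(self, data):
        threading.Thread.__init__(self)
        self.data = data
        self.camera = picamera.PiCamera()

    def run(self):
        # reusable buffer that exposes each frame as a numpy BGR array
        rawCapture = PiRGBArray(self.camera)
        while True:
            self.configure()           # apply any command found in self.data.config
            rawCapture.truncate(0)     # reset the buffer before the next frame
            self.camera.capture(rawCapture, format='bgr', use_video_port=True)
            self.balltracking(rawCapture.array)

    def configure(self):
        pass   # parse self.data.config and update camera / HSV parameters

    def balltracking(self, image):
        pass   # see the OpenCV sketch further down

Creating the object and calling camera.start() then runs this loop in parallel with the rest of the rover code, as required.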

 

The ball tracking is done with the well-known OpenCV library: http://docs.opencv.org/2.4/

Googling around, I found quite a lot of complicated posts about installing this library; in the end I decided to follow this rough method, which worked (at least for me):

sudo apt-get install python-numpy
sudo apt-get install python-scipy
sudo apt-get install python-imaging
sudo apt-get install libopencv-dev
sudo apt-get install python-opencv
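A quick sanity check that the bindings are importable (it should print something like 2.4.x):

python -c "import cv2, numpy; print(cv2.__version__)"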

The ball tracking itself is a well-known exercise. I used this blog post to start my development: http://www.pyimagesearch.com/2015/09/14/ball-tracking-with-opencv/

 

[Image: Cam1]

Summing up, the image is first filtered.

Then I decided to search for a green ball, so I set up my HSV parameters (I'll show how later).

The image then becomes a black and white mask: all the pixels inside my HSV thresholds are set to white, all the rest are set to black.

With cv2.findContours(…) you can get the list of contours of the white blobs in the image. So, look for the biggest one (I assume there is only one green item in the image: my ball), find its center, calculate its radius, and estimate the distance (by a proportion). Finally, add some info and draw some lines. Done!
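A condensed sketch of the routine, in the spirit of the pyimagesearch tutorial linked above (the HSV bounds and the calibration constant for the distance proportion are placeholder values, not the ones I actually use):

import cv2
import numpy as np

def balltracking(image, lower=(29, 86, 6), upper=(64, 255, 255)):
    # blur, convert to HSV and keep only the pixels inside the thresholds
    blurred = cv2.GaussianBlur(image, (11, 11), 0)
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
    mask = cv2.erode(mask, None, iterations=2)
    mask = cv2.dilate(mask, None, iterations=2)

    # contours of the white blobs; keep the biggest one (the ball)
    contours = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)
    (x, y), radius = cv2.minEnclosingCircle(c)

    # distance estimated by proportion: the apparent radius is inversely
    # proportional to the distance (RADIUS_AT_1M is a calibration value)
    RADIUS_AT_1M = 20.0
    distance = RADIUS_AT_1M / radius if radius > 0 else None

    # draw the enclosing circle and the center on the frame
    cv2.circle(image, (int(x), int(y)), int(radius), (0, 255, 255), 2)
    cv2.circle(image, (int(x), int(y)), 3, (0, 0, 255), -1)
    return (int(x), int(y)), radius, distance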

[Image: Cam2]

In the next post I'll explain how to expose the images on the web and how to find the right thresholds for Hue, Saturation and Value (HSV).

 

 

 
