Tag Archives: rover

myR: The Pi Camera p2

camserver and CamHandler

In order to stream the video to a browser, I added this camserver object, which takes care of passing the last available image.

This webserver listens on port 9094. You can easily test it by running:

sudo python camera.py

then typing in a browser:

http://192.168.0.10:9094/image.mjpg

Cam4

and (you need to close the previous page first)

http://192.168.0.10:9094/mask.mjpg

They are, respectively, the last image with the result of the ball tracking, and the result of the filters before searching for the contours.

This helps to define which HSV values are correct for detecting the green color. This part can be quite tricky, so I also created an HTML page that helps me by moving sliders. You can open it in the browser, but for the moment it does not apply the changes to the mask (this will work only when I introduce server.py and rc.py -remote control- in future posts).

Cam3

You can now find this part of the code uploaded to my GitHub.

PS: Note that there is a bug I have not solved yet: since camHandler.do_GET contains an infinite loop that continuously updates the image, you cannot open two pages connected to the same server. You first need to close one webpage, then open the new one.
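For reference, here is a minimal sketch of what such an MJPEG handler could look like, assuming Python 2's BaseHTTPServer and a shared data object holding the latest JPEG-encoded frames (class and attribute names here are illustrative, not the exact code from my repo):

import time
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler


class SharedData(object):
    """Stand-in for camera_data: the camera thread keeps these up to date."""
    image_jpg = ''   # latest annotated frame, JPEG-encoded
    mask_jpg = ''    # latest HSV mask, JPEG-encoded


class CamHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if not self.path.endswith('.mjpg'):
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header('Content-Type',
                         'multipart/x-mixed-replace; boundary=--frame')
        self.end_headers()
        # Infinite loop: the handler keeps pushing the latest frame, which is
        # why only one page at a time can stay connected to the server.
        while True:
            data = self.server.data
            jpg = data.mask_jpg if self.path.endswith('mask.mjpg') else data.image_jpg
            self.wfile.write('--frame\r\n')
            self.wfile.write('Content-Type: image/jpeg\r\n')
            self.wfile.write('Content-Length: %d\r\n\r\n' % len(jpg))
            self.wfile.write(jpg)
            self.wfile.write('\r\n')
            time.sleep(0.1)  # roughly 10 fps


if __name__ == '__main__':
    server = HTTPServer(('', 9094), CamHandler)   # same port as above
    server.data = SharedData()                    # in the real module, the camera thread fills this
    server.serve_forever()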

 


myR: The Pi Camera

The Pi camera is a topic that will take a few posts.

The hardware installation was pretty simple, even for me. I followed the instructions reported in https://www.raspberrypi.org/help/camera-module-setup/ and https://www.raspberrypi.org/documentation/usage/camera/README.md.

Then I verified it worked using raspivid: https://www.raspberrypi.org/documentation/usage/camera/raspicam/raspivid.md

So, I was ready to develop my camera.py module.

Requirements are:

  • stream images to a browser
  • recognize items in the image (ball tracking, as a first attempt)
  • stream the result to the browser
  • run the camera code in a parallel thread

My camera.py module includes 4 objects (classes):

  • camera: dedicated to collecting images from the camera and elaborating this information, in particular performing the ball tracking feature.
  • camserver: used to serve the images to the browser. It controls the camHandler.
  • camHandler: the webserver class that packs the data and returns it to the browser.
  • camera_data: the class including the data to share (camera parameters, results, images); a possible shape is sketched just below.
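
For illustration, the shared camera_data object could look like this (the field names here are my assumptions, not necessarily the ones used in the repo):

class camera_data(object):
    def __init__(self):
        self.config = ''        # pending configuration command for the camera thread
        self.results = None     # latest ball tracking result (position, radius, distance)
        self.image_jpg = ''     # latest annotated frame, JPEG-encoded
        self.mask_jpg = ''      # latest HSV mask, JPEG-encoded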

 

camera(threading.Thread):

It first initializes the camera with:

self.camera = picamera.PiCamera()

Then it is possible to start the main loop of the camera with camera.start().

This loop includes a call to:

  • the procedure self.configure(). Here it is possible to set up the camera parameters by writing a command into the variable self.data.config. This is useful for tuning the color thresholds during the ball tracking.
  • self.camera.capture(rawCapture, format='bgr', use_video_port=True). This just takes the picture.
  • finally self.balltracking(image), where the ball tracking is done, returning some information, for example the position of the ball in the image and the estimated distance of the ball from the rover (see the sketch just after this list).
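
A minimal sketch of this loop, assuming the picamera and picamera.array modules (method and attribute names are illustrative, not the exact code from the repo):

import threading
import picamera
import picamera.array


class camera(threading.Thread):
    def __init__(self, data):
        threading.Thread.__init__(self)
        self.data = data                          # shared camera_data instance
        self.camera = picamera.PiCamera()
        self.camera.resolution = (320, 240)

    def run(self):
        while True:
            self.configure()                      # apply any pending command from self.data.config
            with picamera.array.PiRGBArray(self.camera) as rawCapture:
                self.camera.capture(rawCapture, format='bgr', use_video_port=True)
                image = rawCapture.array          # numpy array in BGR order, ready for OpenCV
            self.data.results = self.balltracking(image)

    def configure(self):
        pass  # read self.data.config and update camera parameters / HSV thresholds

    def balltracking(self, image):
        return None  # see the OpenCV sketch later in this post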

 

The ball tracking is obtained by using the famous OpenCV library: http://docs.opencv.org/2.4/

In order to install this library I found quite a lot of complicated posts by googling around; finally I decided to follow this raw method, which worked (at least for me):

sudo apt-get install python-numpy
sudo apt-get install python-scipy
sudo apt-get install python-imaging
sudo apt-get install libopencv-dev
sudo apt-get install python-opencv

The ball tracking is also a well-known exercise. I used this blog to start my dev: http://www.pyimagesearch.com/2015/09/14/ball-tracking-with-opencv/

 

Cam1

Summing up, the image is first filtered.

Then I decided to search for a green ball, so I set up my HSV parameters (I'll show how later).

The image then becomes a black-and-white mask: all the pixels inside my HSV thresholds are set to white, all the rest are set to black.

With cv2.findContours(…) you get a list of contours of the white blobs in the mask. So look for the biggest one (I assume there is only one green item in the image: my ball), find its center, calculate its radius, and estimate the distance (by a simple proportion). Finally add some info and draw some lines. Done!
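
Putting the steps together, a minimal ball tracking sketch in the style of the pyimagesearch tutorial could look like this (the HSV limits and the distance constant are example values, not the tuned numbers from my code):

import cv2

GREEN_LOWER = (29, 86, 6)       # example HSV lower bound for "green"
GREEN_UPPER = (64, 255, 255)    # example HSV upper bound
RADIUS_PX_AT_1M = 20.0          # example calibration: ball radius in pixels at 1 m


def track_ball(image):
    blurred = cv2.GaussianBlur(image, (11, 11), 0)            # filter the image
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, GREEN_LOWER, GREEN_UPPER)          # white = inside the HSV thresholds
    mask = cv2.erode(mask, None, iterations=2)
    mask = cv2.dilate(mask, None, iterations=2)

    contours, _ = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)    # OpenCV 2.4 signature
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)                     # biggest blob = my ball
    (x, y), radius = cv2.minEnclosingCircle(c)
    distance = RADIUS_PX_AT_1M / radius                        # rough proportional estimate, in metres
    cv2.circle(image, (int(x), int(y)), int(radius), (0, 255, 255), 2)  # draw some lines
    return (int(x), int(y)), radius, distance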

Cam2

In the next post I'll explain how to expose the images on the web and how to find the right thresholds for Hue, Saturation and Value (HSV).

 

 

 

myR: How to control DC Motor part3

This 3rd part is dedicated to the software.

I use an object-oriented approach, so I prefer to create an equivalent software class for each real component. This means that I try to include in the code all the parameters necessary to describe the item and its behavior.

The advantage of this approach is that if I need 20 motors, I just initialize 20 instances of my DCmotor class.

The translation of this concept is very simple: which parameters can describe my motor?

The fundamental information is: what is its speed? And what are the limits of this speed?

Also, how can I physically control the motor, i.e. which pins are used to set the speed?

All of this information is included in the __init__() of my class:

def __init__(self, name, MBack, MForw, channel1, channel2, WMin=-100, WMax=100, Wstall=30, debug=True,simulation=False):

MBack is the GPIO pin used to move the motor backward; channel1 is the DMA channel used for it.

MForw is the pin for moving the motor forward; channel2 is the DMA channel used for it.

WMin and WMax are intuitive…

Wstall is the minimum speed below which the motor does not move.

The actions that can be performed on my motor are:

1) Start the motor – in reality this just initializes the GPIO channels and pins.

def start(self):

2) Stop the motor – stop the GPIO channels.

def stop(self):

3) Set the speed W – check that the speed is inside the limits, check whether the motor is requested to move backward or forward, reset the unused pin and set the pulse width of the correct pin (a sketch of the whole class follows below).

 def setW(self, W)
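
Here is a minimal sketch of such a class, assuming the RPIO.PWM library (which the notes below suggest, via DELAY_VIA_PCM); the subcycle math and some details are illustrative, and the Wstall handling is omitted:

from RPIO import PWM

SUBCYCLE_US = 20000        # 20 ms PWM subcycle (50 Hz)
PULSE_INCR_US = 10         # pulse granularity chosen in PWM.setup() (see note 3)


class DCmotor(object):
    def __init__(self, name, MBack, MForw, channel1, channel2,
                 WMin=-100, WMax=100, Wstall=30):
        self.name = name
        self.MBack, self.MForw = MBack, MForw                # GPIO pins
        self.channel1, self.channel2 = channel1, channel2    # one DMA channel per pin (see note 1)
        self.WMin, self.WMax, self.Wstall = WMin, WMax, Wstall

    def start(self):
        # PWM.setup() must have been called once beforehand (see note 3)
        PWM.init_channel(self.channel1, SUBCYCLE_US)
        PWM.init_channel(self.channel2, SUBCYCLE_US)

    def stop(self):
        PWM.clear_channel(self.channel1)
        PWM.clear_channel(self.channel2)

    def setW(self, W):
        W = max(self.WMin, min(self.WMax, W))                # keep the speed inside the limits
        width = int(abs(W) / 100.0 * (SUBCYCLE_US / PULSE_INCR_US - 1))
        if W >= 0:
            PWM.clear_channel_gpio(self.channel1, self.MBack)            # reset the unused pin
            PWM.add_channel_pulse(self.channel2, self.MForw, 0, width)   # pulse width on the forward pin
        else:
            PWM.clear_channel_gpio(self.channel2, self.MForw)
            PWM.add_channel_pulse(self.channel1, self.MBack, 0, width)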

 

Finally, by calling those functions in a main routine, you can move your motor:

myDCmotor = DCmotor('myMotor', 18, 23, 11, 12)
myDCmotor.start()
myDCmotor.setW(30)
# accelerate
myDCmotor.setW(80)
# decelerate
myDCmotor.setW(50)
# move backward
myDCmotor.setW(-40)
# brake
myDCmotor.setW(0)
# stop
myDCmotor.stop()

You can download the code from Github-solenerotech/myRover

NOTES: During the programming, especially when I added other devices like the Pi camera and the ultrasound sensor, I noticed some bad behaviour of the GPIO system. For this reason you can find in the code some choices that are explained by these problems.

1) I noticed that it is necessary to use one DMA channel for each pin. Otherwise the pulse width is inverted (if I set W=90% it runs at 10%, and so on). That explains channel1 and channel2.

2) The DMA channels are used to create the PWM. But the DMA channels on the RPi are also used for other activities, so you need to avoid some of them: in particular DMA channel 0 is used by the system and DMA channel 2 is used by the SD card. So avoid those two channels.

3) In order to generate a PWM you can use the hardware clock or not. The hardware PWM clock is also used by the Pi camera, so this can generate interference with the picam itself. To avoid this I'm generating the PWM via the PCM peripheral instead. This is obtained by using the option delay_hw=PWM.DELAY_VIA_PCM in the PWM setup (sketched below).
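
As a sketch of that one-time setup (again assuming RPIO.PWM; the channel numbers match the example above):

from RPIO import PWM

# generate the PWM pulses via the PCM peripheral so the hardware PWM clock
# stays free for the Pi camera (note 3); call this once, before init_channel()
PWM.setup(pulse_incr_us=10, delay_hw=PWM.DELAY_VIA_PCM)

# pick DMA channels other than 0 (system) and 2 (SD card), as in note 2:
PWM.init_channel(11, subcycle_time_us=20000)
PWM.init_channel(12, subcycle_time_us=20000)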

 

myR: How to control DC Motor -part 2

So, let’s first complete the wiring.

Each motor needs 2 signals, one for the clockwise movement, one for the counterclockwise movement.

I'm using GPIOs 18, 23, 24, 25, connected directly to the 4 inputs of the bridge. See the diagram below.

DCmotor4

Consider Motor 1. When GPIO 18 is high, the bridge sets Out1 high and Out2 to ground, and the motor turns clockwise (cw). When GPIO 23 is high the bridge inverts the outputs, so Out1 becomes ground and Out2 is high, and the motor turns counterclockwise (ccw).

Finally, as we saw in the previous post, by modulating the pulse width of the signal we can control the speed of the motors.
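
Just to make the direction logic concrete, a quick sketch with RPi.GPIO (illustration only; the rover actually drives these pins with PWM, as shown in part 3):

import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(18, GPIO.OUT)    # bridge input for Motor 1, clockwise
GPIO.setup(23, GPIO.OUT)    # bridge input for Motor 1, counterclockwise

GPIO.output(18, GPIO.HIGH)  # Out1 high, Out2 grounded: Motor 1 turns cw
GPIO.output(23, GPIO.LOW)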

myR: Under the Christmas tree…

Under the Xmas tree, Santa put a present for me, including:

  • Dagu 4D Magician Chassis (including 4 DC motors, 2 plastic plates, 4 x AA battery holder) - 21 Eur
  • HC-SR04 sensor (an ultrasound sensor for measuring distance) - 4 Eur
  • L298N bridge (a bridge for controlling up to 2 motors) - 4.5 Eur

In addition I had already:

  • raspberry pi
  • Pi cam
  • a wifi adapter
  • an omniwheel (spherical)
  • an empty  tic-tac candy holder
  • 4 x AA batteries
  • a smartphone battery pack (2.5 A)

So I put everything together and out popped myR: a super Rover!!!

WP_20151204_008

Nothing special under the sun, but an interesting project to test autonomous vehicles. So, to reach that goal, I'm now working on a rover that can act in these 4 modes:

  • Jog mode: moved by an operator (my 5-year-old son…).
  • Program mode: it is possible to create a sequence of movements and the rover can repeat them.
  • Discover mode: the rover can move randomly around the apartment, avoiding any obstacle in front of it.
  • Search mode: the rover can search for and reach a ball placed or moved around.

As always, I'm developing these features using Python and object-oriented programming: for each item I create a module that implements all the necessary features for that item.

In the next weeks I’ll post the development steps.

Happy new year and Keep in touch!