myR: the rover module

Finally, the last module is rover.py, where I put together all the development described in the previous posts.

First, there is the initialization of all the objects, which run in parallel threads and share information through the datacollector:

def __init__(self, name, M0b=18, M0f=23, M1b=24, M1f=25, T1=4, E1=17, simulation=False):

    self.data = datacollector()                         # shared data store for all threads
    self.camera = camera(self.data)                     # picam acquisition
    self.camserver = camserver(self.data)               # streams the camera images to the webpage
    self.sensor = us_sensor('S1', T1, E1, self.data)    # ultrasonic distance sensor
    self.motorLeft = DCmotor('MLeft', M0b, M0f, 11, 12)
    self.motorRight = DCmotor('MRight', M1b, M1f, 13, 14)
    self.rc = rc(self.data)                             # remote control input
    self.server = server(self.data)                     # web server for the control page
    #self.speaker = speaker(self.data)
    #self.display is initialized in start() so that it can read the printed output of the other classes

The basic functions of this module are the ones used to move the motors:

def brake(self)
def right(self, time=0)
def left(self, time=0)
def down(self, time=0)
def up(self, time=0)

With time=0 the command stays active until a new one arrives; otherwise the movement runs for the given number of seconds.
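For example, up() might be implemented like the following. This is a minimal sketch: the forward() method on the motor objects is an assumption, not the real DCmotor API.

from time import sleep

def up(self, time=0):
    # drive both motors forward (method name is an assumption)
    self.motorLeft.forward()
    self.motorRight.forward()
    if time > 0:
        sleep(time)    # run for the requested duration...
        self.brake()   # ...then stop
    # with time=0 the motors keep running until the next command arrives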

To update the motor status, updatemotors(self) is called. This function checks which command has arrived from outside.
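A minimal sketch of such a dispatcher, assuming the datacollector exposes a get() accessor and the commands are plain strings (both are assumptions, not the actual interface):

def updatemotors(self):
    # read the latest command from the shared datacollector and dispatch it
    command = self.data.get('command')   # accessor and key name are assumptions
    if command == 'up':
        self.up()
    elif command == 'down':
        self.down()
    elif command == 'left':
        self.left()
    elif command == 'right':
        self.right()
    elif command == 'brake':
        self.brake()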

At a higher level, the rover can work in 4 different modes:

  1. jog mode: the user controls the rover and moves it with the arrow keys on the keyboard or the buttons on the webpage (192.168.0.10:9094).
  2. discover mode: the rover moves around the floor and uses the distance sensor to avoid obstacles. The rule is really simple: move forward until an obstacle is found (measured distance < min distance), then turn left and try to move forward again (see the sketch after this list).
  3. search mode: the rover moves around the floor and uses the picam to search for a green ball placed somewhere on the floor and move towards it.
  4. program mode: the user defines a list of commands (move left, move right, etc.) and the rover executes them (not implemented yet).
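The discover rule from point 2 could look like the following sketch; the sensor key in the datacollector and the turn duration are assumptions:

def discover_step(self, min_distance=100):
    # read the last ultrasonic measurement in mm (key name is an assumption)
    distance = self.data.get('S1')
    if distance is not None and distance < min_distance:
        self.brake()     # obstacle found: stop...
        self.left(0.5)   # ...turn left for half a second (illustrative value)
    else:
        self.up()        # no obstacle: keep moving forward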

 

The search mode is currently the most interesting.

This mode is still under development. Up to now, the rover takes a picture; if it finds a green ball, it turns so that the ball is in the center of the picture, then moves forward until the distance measured by the sensor is < 100 mm. A minimal sketch of this behaviour follows.
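In this sketch I assume the camera thread publishes the ball position and radius in the datacollector; the key names, the dead band, and the 640-pixel frame width are all assumptions:

def search_step(self):
    ball = self.data.get('ball')   # (x, radius) in pixels, or None if not found
    if ball is None:
        return                     # no green ball in this frame
    x, radius = ball
    center = 320                   # horizontal center of a 640 px wide frame
    if x < center - 20:
        self.left()                # ball left of center: turn left
    elif x > center + 20:
        self.right()               # ball right of center: turn right
    elif self.data.get('S1') > 100:
        self.up()                  # ball centered: move forward
    else:
        self.brake()               # closer than 100 mm: stop at the ball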

 

What I'm working on right now is using vision to estimate the distance of the ball. The distance can be estimated with a proportional calculation based on the apparent size of the ball, so I calibrated the camera to obtain distances in this way.

I took a picture with the ball positioned at a known distance (meas_distance, in mm), then I got the ball radius from the system (meas_radius, in pixels). Since the ball looks smaller the farther away it is, the distance is inversely proportional to the measured radius:

meas_distance * meas_radius = current_distance * current_radius

and so:

current_distance = (meas_distance * meas_radius) / current_radius
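In code, the calibration and the estimate reduce to a few lines; the calibration values below are example numbers, not the real ones:

MEAS_DISTANCE = 500.0   # calibration distance in mm (example value)
MEAS_RADIUS = 40.0      # ball radius in pixels measured at that distance (example value)

def estimate_distance(current_radius):
    # the farther the ball, the smaller its radius in the picture
    return MEAS_DISTANCE * MEAS_RADIUS / current_radius

print(estimate_distance(20.0))   # ball looks half as big: twice the distance, 1000 mm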

Then I want to add more intelligence to the algorithm: for example, if the ball is not found, the rover can rotate and take a new picture.

 
