not so Grand Challenge

Picture of the robot.

Hardware

We are using a model car resembling a Hummer from White Magic, Canada. All remote control equipment has been taken out and replaced with our own hardware.

Figure 1. Shaft encoder on the left back wheel.

Way Point Following Using Path Planner

The robot travels autonomously to user-specified way points, using a collision-free path planner.
  1. The graphical user interface on a client computer shows the map that is available to the robot (see Figure 2). The robot's position and heading are indicated by a blue triangle, and the robot trace is outlined by a red curve.
  2. The user adds one or several way points (yellow stars), i.e. positions where the robot should travel autonomously, and sends them to the robot by clicking 'Send Waypoints'.
  3. The robot plans a collision-free path from the current position to the first way point, and between consecutive way points, using the heuristic best-first search algorithm A* (see the sketch after this list). The planned path is simplified to a few intermediate way points (shown as green stars in the user interface) that connect the current position with the user-provided way points.
  4. The robot starts traveling to the way points, one at a time.
  5. A priori unknown obstacles that are registered by the PSD sensors while traveling are added to the map as temporary obstacles and shown as purple rectangles. The size of the rectangles corresponds to the resolution of the occupancy grid that constitutes the internal map. Similarly, sites that according to the map are occupied by an obstacle but in fact are empty (as sensed by the PSD sensors) are registered as temporarily empty, shown as white rectangles in the user interface.
  6. If the planned path (consisting of the list of way points) is no longer traversable as a consequence of the added obstacles, a new path is planned that avoids these obstacles as well.
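
As an illustration of step 3, a minimal A* planner over such an occupancy grid could look like the sketch below. The grid encoding, cost function, and names are our illustrative assumptions, not the robot's actual code; the real planner additionally simplifies the resulting cell path into a few intermediate way points.

    import heapq
    import itertools

    def plan_path(grid, start, goal):
        """A* search on a 2D occupancy grid (0 = free, 1 = occupied).
        Returns a list of (row, col) cells from start to goal, or None."""
        rows, cols = len(grid), len(grid[0])
        heuristic = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
        tie = itertools.count()                  # breaks ties in the heap
        frontier = [(heuristic(start), next(tie), start, None)]
        parent, g_cost = {}, {start: 0}
        while frontier:
            _, _, cell, prev = heapq.heappop(frontier)
            if cell in parent:                   # already expanded
                continue
            parent[cell] = prev
            if cell == goal:                     # walk back to the start
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = parent[cell]
                return path[::-1]
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (cell[0] + dr, cell[1] + dc)
                if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                        and grid[nxt[0]][nxt[1]] == 0):
                    g = g_cost[cell] + 1
                    if g < g_cost.get(nxt, float("inf")):
                        g_cost[nxt] = g
                        heapq.heappush(frontier,
                                       (g + heuristic(nxt), next(tie), nxt, cell))
        return None                              # no collision-free path exists
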
Communication
The client computer (running the user interface) and the robot are connected to the same Wireless Local Area Network (WLAN) and communicate using the User Datagram Protocol (UDP).
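
A minimal sketch of how way points could be sent over UDP is shown below. The IP address, port number, and message format are illustrative assumptions; the actual wire format is not described here.

    import socket

    ROBOT_ADDR = ("192.168.0.10", 5000)   # assumed robot IP and port

    # Client side: send the user-defined way points as one text datagram.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    waypoints = [(1.0, 2.5), (3.0, 4.0)]
    msg = ";".join("%.2f,%.2f" % wp for wp in waypoints)
    sock.sendto(msg.encode(), ROBOT_ADDR)

    # Robot side: receive datagrams. UDP is connectionless and unreliable,
    # so a lost packet is simply never seen.
    recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv.bind(("", 5000))
    data, client = recv.recvfrom(1024)
    print("way points from", client, ":", data.decode())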

Localization
The initial position and heading are known to the robot. Dead reckoning is used to update the current position estimate. The dead reckoning algorithm uses the sensory input from the two shaft encoders mounted on the back wheels.
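
The pose update itself is standard differential-drive odometry. The sketch below shows one update step; the wheel base and encoder resolution are placeholder values, not the robot's actual calibration.

    import math

    WHEEL_BASE = 0.30        # distance between the back wheels in metres (assumed)
    METRES_PER_TICK = 0.001  # encoder resolution (assumed)

    def dead_reckon(x, y, theta, ticks_left, ticks_right):
        """Update the pose (x, y, theta) from the two shaft encoder deltas."""
        d_left = ticks_left * METRES_PER_TICK
        d_right = ticks_right * METRES_PER_TICK
        d = (d_left + d_right) / 2.0              # distance of the robot centre
        dtheta = (d_right - d_left) / WHEEL_BASE  # change in heading
        # Approximate the arc with a straight segment at the mean heading.
        x += d * math.cos(theta + dtheta / 2.0)
        y += d * math.sin(theta + dtheta / 2.0)
        return x, y, theta + dtheta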

Speech Feedback
Speech feedback is given whenever a user-defined way point is reached, e.g. "Reached first way point" or "Reached third way point. Mission was successfully completed".
Festival, a free Linux speech synthesizer, is used to achieve this.
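
One way to produce such feedback is to pipe the message into Festival's command-line text-to-speech mode, for example:

    import subprocess

    def say(text):
        """Speak a message through Festival's --tts mode, which reads
        the text from standard input."""
        subprocess.run(["festival", "--tts"], input=text.encode(), check=True)

    say("Reached third way point. Mission was successfully completed")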

Figure 2. Screenshot of the graphical user interface on the client PC. The blue triangle shows the position of the robot. The red curve is the robot trace. Yellow stars are way points added by the user. Small green stars are intermediate way points added by the planner in order to avoid obstacles. Purple rectangles are obstacles that were registered by the PSDs during the run.



Object Tracking

The color and size of the object and the background noise are measured during the learning phase.
In the tracking phase, the detected object is tracked while avoiding obstacles. One of the cameras is used for tracking the object and the 5 PSDs are used for obstacle avoidance.

Learning phase
The object is shown to the robot, which measures and records a hue (color) histogram. The histogram is used to construct a color filter. The output of the filter is a grey-scale image, where the intensity of each pixel is proportional to the occurrence of that color (hue) in the measured histogram (see Figure 3).
The object is placed approximately 1 meter from the camera and the size of the object is recorded (the sum of the pixel values in the filtered image). The object is then removed and the background noise is measured.
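
The sketch below illustrates this learning step with a normalized hue histogram; the function names, the number of bins, and the [0, 1) hue encoding are illustrative assumptions.

    NBINS = 64  # number of hue bins (assumed)

    def learn_histogram(object_hues):
        """Build a normalized hue histogram from the object's pixels.
        Hue values are assumed to lie in [0, 1)."""
        hist = [0] * NBINS
        for h in object_hues:
            hist[int(h * NBINS)] += 1
        peak = max(hist) or 1
        return [v / peak for v in hist]   # filter response per hue, in [0, 1]

    def filter_image(hue_image, hist):
        """Map every pixel's hue to its histogram value -> grey-scale image."""
        return [[hist[int(h * NBINS)] for h in row] for row in hue_image]

    def image_sum(filtered):
        return sum(sum(row) for row in filtered)

    # reference_size = image_sum(filter_image(image_at_1m, hist))
    # background_noise = image_sum(filter_image(empty_scene, hist))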

Tracking phase
The color filter is applied to the camera image. Colors that are similar to the tracked object get a high intensity (light grey/white) and others get a low intensity (dark grey/black) (see Figure 3b). The intensity values in the filtered image are summed up row-wise (blue curve in Figure 3b) and column-wise (red curve in Figure 3b). The two curves are low-pass filtered. The maximum of each curve is taken as the estimated object position. The robot turns left if the object is in the left third of the image, right if it is in the right third, and goes straight otherwise.
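
The following sketch illustrates the projection and steering logic; the low-pass filter constant and the names are illustrative assumptions.

    def low_pass(values, alpha=0.3):
        """Simple exponential low-pass filter over a 1D curve."""
        out, acc = [], values[0]
        for v in values:
            acc = alpha * v + (1 - alpha) * acc
            out.append(acc)
        return out

    def estimate_position(filtered):
        """Row- and column-wise sums of the filtered image; the maximum
        of each low-passed curve gives the estimated object position."""
        rows = low_pass([sum(r) for r in filtered])
        cols = low_pass([sum(c) for c in zip(*filtered)])
        return rows.index(max(rows)), cols.index(max(cols))  # (row, col)

    def steering(col, width):
        """Turn towards the object based on its horizontal position."""
        if col < width / 3:
            return "left"
        if col > 2 * width / 3:
            return "right"
        return "straight"
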
The speed of the robot depends on the estimated distance to the object. The distance is estimated from the square root of the sum of pixel intensities in the filtered image. The speed is higher if the object is far away. If the sum of the pixel intensities in the filtered image is less than 2 times the background noise (as measured during the learning phase), the object is regarded as absent and the robot stops. If the sum of the pixel intensities in the filtered image is greater than the value measured during learning with the object at 1 meter distance, the object is regarded as close and the robot stops. Obstacles are avoided using the 5 PSDs and motor stall detection while tracking the object.
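
The stop thresholds in the sketch below follow the text directly, while the gain and the exact speed law are illustrative assumptions:

    def drive_speed(pixel_sum, background_noise, reference_size,
                    gain=0.5, max_speed=1.0):
        """Speed command from the filtered-image pixel sum."""
        if pixel_sum < 2 * background_noise:
            return 0.0   # object regarded as absent -> stop
        if pixel_sum > reference_size:
            return 0.0   # closer than the 1 m reference -> stop
        # Apparent size shrinks with distance, so the square root of the
        # pixel-sum ratio gives a rough relative distance estimate.
        distance = (reference_size / pixel_sum) ** 0.5   # approx. metres
        return min(max_speed, gain * distance)           # faster when far away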


Figure 3. (a) Camera image with the tracked object in the foreground.
(b) Image after the color filter has been applied. The green crosshair indicates the estimated object position.


Simple Obstacle Avoidance

Obstacle avoidance using the 5 PSDs and motor stall detection.
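
A reactive scheme of this kind could look like the sketch below; the sensor layout, threshold, and action names are illustrative assumptions.

    SAFE_MM = 250   # minimum clearance in millimetres (assumed)

    def avoid(psd, stalled):
        """Pick a motion command from five PSD readings and a stall flag.
        psd: dict mapping sensor name -> distance in millimetres."""
        if stalled:                          # the motors hit something unseen
            return "back_up_and_turn"
        if psd["front"] < SAFE_MM:
            # Turn towards the more open side.
            return "turn_left" if psd["left"] > psd["right"] else "turn_right"
        if psd["front_left"] < SAFE_MM:
            return "turn_right"
        if psd["front_right"] < SAFE_MM:
            return "turn_left"
        return "forward"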


Currently working on the project


Thomas Bräunl