MEET TEAM GIGA-BOT

MAKING AN AUTONOMOUS BALL COLLECTING ROBOT

Meet the Team


Nolan Graves

Path Planning
Obstacle Avoidance
Collector Design

Brandon Gigous

Path Planning
Obstacle Avoidance
Audio Feedback

Benjamin "Status" Kuo

LabVIEW
Ball Detection
Web Page

Ryan Newquist

Ball Detection
Ball Tracking
Testing

Results


FIRST PLACE!


Final Run Statistics

Run Time = 76.50 seconds
Balls Deposited = 5
Balls Identified = 5
Total Score = -83.50 seconds

Path Planning


The robot navigates the course using the A* path-planning algorithm. Kalman filtering fuses dead-reckoning data (optical encoders on the wheel motors) with Optitrack data (a camera system that tracks the robot's motion) to update the robot's position and orientation throughout the course. The course is divided into 2’ x 2’ grid spaces that are prepopulated with the course walls. The A* algorithm plans the optimal path from the robot's current position to the target waypoint, and the robot follows this path through the course. The implemented A* algorithm uses Manhattan distance as its heuristic and does not allow diagonal movement, which keeps the robot from clipping obstacles. An extra movement cost is also charged for turns to minimize the number of turns in the planned path.

As the robot moves through the course, it uses LADAR measurements to detect new obstacles. When a new obstacle is detected, the robot computes a new path using the updated obstacle map. A new path is also computed if the robot diverges from the current path to collect a golf ball, and again once the current target waypoint is reached. This process repeats until the robot has visited all five waypoints and completed the course.
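To make the planning step concrete, below is a minimal C sketch of A* on a small 4-connected occupancy grid with a Manhattan heuristic and a simple turn penalty. It is an offline illustration rather than the team's DSP code: the grid size, cost values, and toy map are placeholder assumptions, and the turn penalty is applied from the parent's incoming direction as a simplification.

/*
 * Minimal A* sketch on a 4-connected occupancy grid (offline illustration,
 * not the robot's actual firmware). GRID_W, GRID_H, TURN_COST, and the
 * example map in main() are hypothetical placeholders.
 */
#include <stdio.h>
#include <stdlib.h>

#define GRID_W 8
#define GRID_H 8
#define NCELLS (GRID_W * GRID_H)
#define TURN_COST 2                          /* extra cost when the path changes direction */

static const int dr[4] = { -1, 1, 0, 0 };    /* N, S, W, E */
static const int dc[4] = { 0, 0, -1, 1 };

static int manhattan(int r, int c, int gr, int gc)
{
    return abs(r - gr) + abs(c - gc);
}

/* Returns path length in cells (start through goal), or -1 if no path exists.
 * path[] receives cell indices r * GRID_W + c from start to goal.            */
int astar(int grid[GRID_H][GRID_W], int sr, int sc, int gr, int gc, int path[NCELLS])
{
    int g[NCELLS], parent[NCELLS], dir_in[NCELLS], closed[NCELLS], open[NCELLS];
    int i;
    for (i = 0; i < NCELLS; i++) {
        g[i] = 1 << 29; parent[i] = -1; dir_in[i] = -1;
        closed[i] = 0; open[i] = 0;
    }
    int start = sr * GRID_W + sc, goal = gr * GRID_W + gc;
    g[start] = 0; open[start] = 1;

    for (;;) {
        /* pick the open cell with the lowest f = g + h (linear scan keeps it simple) */
        int best = -1, best_f = 1 << 30;
        for (i = 0; i < NCELLS; i++) {
            if (open[i] && !closed[i]) {
                int f = g[i] + manhattan(i / GRID_W, i % GRID_W, gr, gc);
                if (f < best_f) { best_f = f; best = i; }
            }
        }
        if (best < 0) return -1;             /* open set empty: no path */
        if (best == goal) break;
        closed[best] = 1;

        int r = best / GRID_W, c = best % GRID_W, d;
        for (d = 0; d < 4; d++) {
            int nr = r + dr[d], nc = c + dc[d];
            if (nr < 0 || nr >= GRID_H || nc < 0 || nc >= GRID_W) continue;
            if (grid[nr][nc]) continue;      /* occupied: wall or detected obstacle */
            int n = nr * GRID_W + nc;
            int step = 1;
            if (dir_in[best] >= 0 && dir_in[best] != d)
                step += TURN_COST;           /* penalize turns to keep paths straight */
            if (g[best] + step < g[n]) {
                g[n] = g[best] + step;
                parent[n] = best;
                dir_in[n] = d;
                open[n] = 1;
            }
        }
    }

    /* walk parents back from the goal, then reverse into path[] */
    int rev[NCELLS], len = 0, cur = goal;
    while (cur >= 0) { rev[len++] = cur; cur = parent[cur]; }
    for (i = 0; i < len; i++) path[i] = rev[len - 1 - i];
    return len;
}

int main(void)
{
    int grid[GRID_H][GRID_W] = { 0 };        /* 0 = free, 1 = wall/obstacle */
    int path[NCELLS];
    int r, len;

    for (r = 1; r < 7; r++) grid[r][4] = 1;  /* a wall segment to route around */
    len = astar(grid, 0, 0, 7, 7, path);
    for (r = 0; r < len; r++)
        printf("(%d,%d) ", path[r] / GRID_W, path[r] % GRID_W);
    printf("\n");
    return 0;
}

In this sketch, replanning after a new obstacle or a collected ball would simply mean marking the affected grid cells and calling astar again from the robot's current cell.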

Obstacle Avoidance





The robot detects obstacles using LADAR measurements. As the robot navigates the course, LADAR measurements are recorded and the obstacle map is updated. Extra distance is added to every LADAR measurement so that each data point is pushed into a definite grid space away from the robot. On every LADAR sweep, the detection algorithm sums the number of LADAR data points in each grid space and assigns a hit to that grid space if the sum exceeds a threshold. If the number of hits for a grid space exceeds a second threshold over time, the obstacle is confirmed and the obstacle map is updated. This two-layered system adds robustness to obstacle detection and minimizes false obstacle identifications.
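The two-layer thresholding can be summarized with a short C sketch; the grid size and threshold values below are placeholders for illustration, not the values used on the robot.

/* Sketch of the two-layer LADAR thresholding idea (illustrative only;
 * the grid size and thresholds here are assumptions, not the team's values). */
#define GRID_CELLS              (12 * 12)
#define POINTS_PER_SWEEP_THRESH 5     /* layer 1: min LADAR points in a cell per sweep */
#define HITS_THRESH             8     /* layer 2: min accumulated hits to mark an obstacle */

static int hit_count[GRID_CELLS];     /* persists across sweeps */

/* Call once per LADAR sweep with the number of points that landed in each grid cell. */
void update_obstacles(const int points_in_cell[GRID_CELLS],
                      unsigned char obstacle_map[GRID_CELLS])
{
    int i;
    for (i = 0; i < GRID_CELLS; i++) {
        /* layer 1: enough points this sweep -> count one hit for the cell */
        if (points_in_cell[i] > POINTS_PER_SWEEP_THRESH)
            hit_count[i]++;
        /* layer 2: enough hits over time -> commit the cell as an obstacle */
        if (hit_count[i] > HITS_THRESH)
            obstacle_map[i] = 1;
    }
}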


Ball Collection

The robot detects and collects golf balls using image processing and a camera mounted on the front of the robot. A feedback control law aligns the robot with the centroid of the identified golf ball as the robot moves toward it. A second feedback control law uses LADAR to steer the robot away from obstacles along the way. The robot rolls over the golf ball, opens one of the collector flaps, and sorts the ball into the appropriate compartment underneath the robot based on its color. Once the golf ball is underneath the robot, the collector flap is closed, and the color and location of the ball are sent to LabVIEW. Finally, the robot releases the golf balls into their respective chutes at the end of the course.
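A proportional steering law of the kind described might look like the C sketch below. The image width, gain, nominal speed, and wheel-command convention are assumptions for illustration only, and the LADAR-based obstacle-repulsion term is omitted.

/* Sketch of a proportional steering law toward the ball centroid
 * (gains and image width are illustrative placeholders). */
#define IMAGE_WIDTH   320.0f     /* camera image width in pixels (assumed) */
#define KP_BALL       0.004f     /* steering gain on centroid error (assumed) */
#define FORWARD_SPEED 0.6f       /* nominal forward command (assumed) */

/* centroid_x: ball centroid column in the image, in pixels.
 * Writes differential wheel commands; positive turn steers right by convention. */
void steer_to_ball(float centroid_x, float *left_cmd, float *right_cmd)
{
    float error = centroid_x - IMAGE_WIDTH / 2.0f;  /* pixels off image center */
    float turn  = KP_BALL * error;                  /* proportional correction */
    *left_cmd  = FORWARD_SPEED + turn;
    *right_cmd = FORWARD_SPEED - turn;
}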


LabVIEW GUI


A LabVIEW Virtual Instrument was developed to provide visual feedback on the robot’s navigation through the course. The VI displays the course map with walls and grid spaces, the robot’s current position with a trail of past positions, detected obstacles, and collected balls with their respective positions and colors. This functionality is extremely valuable for debugging, especially for obstacle detection, because it shows exactly what the robot is seeing. The VI communicates with the robot over a wireless TCP/IP connection, so real-time data can be viewed while the robot navigates the course.
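For illustration, a status update could be streamed to the VI with ordinary TCP sockets, as in the POSIX-style C sketch below. The port number, host address, and comma-separated message format are assumptions for the sketch, not the team's actual protocol, and the robot-side networking stack may differ.

/* Sketch of streaming a status line to a LabVIEW VI over TCP
 * (POSIX sockets; message format and port are hypothetical). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int send_status(const char *host, int port, float x, float y, float theta)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(port);
    inet_pton(AF_INET, host, &addr.sin_addr);

    if (connect(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(sock);
        return -1;
    }

    /* hypothetical comma-separated status line parsed by the VI */
    char msg[128];
    int len = snprintf(msg, sizeof(msg), "POS,%.2f,%.2f,%.2f\n", x, y, theta);
    if (len > 0)
        send(sock, msg, (size_t)len, 0);
    close(sock);
    return 0;
}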


Audio Feedback


Not only can the robot sense and navigate its environment, but it does so with personality! By sending commands across chips (from the DSP chip to an OMAP chip running Linux), we can call a function to make Giga-bot speak. We can even play sound clips, such as a coin dropping on a table. When the robot sees various things while navigating the course, our algorithm randomly chooses one of the robot's "catchphrases" that reflects what's going on in the course. For example, when Giga-bot sees a ball, it'll say "Oh my, an orange ball." Giga-bot also shares a thought when it reaches any one of the checkpoints. From pointless gibberish to musings of the Land of a Thousand Golf Balls, Giga-bot is sure to bring joy to people of all ages!
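A sketch of the catchphrase-selection idea in C is shown below; the phrase list (beyond the quoted example), function name, and event handling are hypothetical placeholders, and the actual cross-chip call that makes Giga-bot speak is not shown.

/* Sketch of picking a random catchphrase for a "ball seen" event (illustrative only;
 * phrases other than the quoted one are hypothetical placeholders). */
#include <stdlib.h>

static const char *ball_phrases[] = {
    "Oh my, an orange ball.",            /* quoted in the write-up */
    "Another one for the collection!",   /* hypothetical placeholder */
};

const char *pick_ball_phrase(void)
{
    int n = sizeof(ball_phrases) / sizeof(ball_phrases[0]);
    return ball_phrases[rand() % n];     /* choose one of the catchphrases at random */
}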