Program a robot to navigate to five different (x, y) points in numbered order. Along the way to these five points, collect at least two of the five golf balls dispersed in the course while avoiding the obstacles in the robot's path. Once all five points have been reached, collect any remaining golf balls if needed, then deposit the orange golf balls in their deposit circle and the blue golf balls in theirs.
To run through the course while avoiding obstacles, we used the widely known pathfinding algorithm A* (pronounced "A star"). Its purpose is to determine the shortest path from a starting point to a specified endpoint. The algorithm uses the Manhattan distance to the destination as its heuristic, an admissible estimate of the minimum remaining cost. Movement is restricted to four directions: north, south, east, and west. Unfortunately, we did not program the robot to make diagonal movements, which would have allowed it to progress through the course much faster. One very important component of our working system is the proper integration of the OptiTrack camera system with the position data from the wheel encoders to determine the robot's position at all times. We then employed Kalman filtering to fuse these two sets of data, which resulted in a more precise estimate of our robot's position and orientation. The last remaining task for a functional A* algorithm was using LADAR data to detect when obstacles (walls) were in the robot's path.
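The heuristic and priority described above can be sketched as follows. This is a minimal illustration, not our robot's actual code; the function names and the unit step cost are assumptions.

```c
#include <stdlib.h>

/* Manhattan distance between two grid cells. It matches the
   four-direction (N/S/E/W) movement model, so it never overestimates
   the true remaining path cost (i.e., it is admissible). */
int manhattan(int x1, int y1, int x2, int y2) {
    return abs(x1 - x2) + abs(y1 - y2);
}

/* A* ranks open cells by f = g + h, where g is the accumulated step
   cost from the start and h is the heuristic estimate to the goal.
   The cell with the smallest f is expanded next. */
int astar_priority(int g, int x, int y, int goal_x, int goal_y) {
    return g + manhattan(x, y, goal_x, goal_y);
}
```

With unit step costs, a cell two steps from the start and adjacent to the goal would be ranked as f = 2 + 1 = 3.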
Figure 1. Heuristic (H) and cost (G) mapping of a sample path using A*
Once an obstacle is detected, a flag is set and the course map is updated with the location of the wall. By adding obstacles to the course map, the A* algorithm knows to steer the robot away from known obstacle locations. This functionality is accomplished by continuously checking the LADAR readings (similar to wall detection) and adding the location of each wall to an array. Because we want to avoid accidentally adding a false obstacle, our obstacle avoidance only updates the map once a wall has been seen more than five times; at that point we record the x and y position of the obstacle.
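The sighting-count filter can be sketched as below. The grid dimensions, array names, and function signature are assumptions for illustration; only the "more than five sightings" rule comes from our design.

```c
#define GRID_W 20
#define GRID_H 20
#define CONFIRM_THRESHOLD 5   /* must be seen MORE than this many times */

/* Per-cell sighting counters and the confirmed-wall map that A* reads.
   A LADAR hit at cell (x, y) only becomes a wall in the course map
   after repeated sightings, which filters out one-off noisy readings. */
static int sightings[GRID_W][GRID_H];
static int wall_map[GRID_W][GRID_H];   /* 1 = confirmed obstacle */

/* Record one LADAR sighting; returns 1 once the wall is confirmed. */
int report_wall(int x, int y) {
    if (x < 0 || x >= GRID_W || y < 0 || y >= GRID_H) return 0;
    if (++sightings[x][y] > CONFIRM_THRESHOLD) {
        wall_map[x][y] = 1;            /* update the course map */
    }
    return wall_map[x][y];
}
```

The first five sightings of a cell leave the map unchanged; the sixth confirms the obstacle so A* will route around it.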
Given that our task required collecting two different colors of balls, we used a vision algorithm. Most of the processing for this functionality is done in our ColorVision.c file, which detects multiple colors using HSV thresholds for the bright orange and blue balls. Once the robot has seen roughly 20-30 pixels of a specific color, RobotControl abandons A* and shifts to our ball detection and capture routine. The robot decides whether to pursue a blue or an orange ball based on which color's pixel threshold is met first. We then command the robot to chase the ball by sending vref and turn parameters through a feedback control loop, which steers the robot until the difference between the ball's column centroid and the center of the LCD screen is driven to zero. Once the robot has identified the ball and is heading toward it, it stops one tile away from the ball (determined by checking the ball's row centroid) and adjusts the color gate (separator) and the front gate appropriately. The final motion simply drives the robot forward and closes the front gate.
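The centering feedback can be sketched as a simple proportional controller. The image width, gain value, and function name below are assumptions, not our tuned parameters; the idea of driving the column error to zero is from our design.

```c
#define IMG_CENTER_COL 80.0f   /* assumed 160-pixel-wide camera image */
#define KP_TURN 0.125f         /* assumed proportional gain */

/* Proportional steering sketch: the turn command is proportional to
   the difference between the ball's column centroid and the image
   center, so the loop drives that difference toward zero. A positive
   error means the ball is right of center, so we turn right (negative
   turn here, by an assumed sign convention). */
float ball_chase_turn(float ball_col) {
    float error = ball_col - IMG_CENTER_COL;
    return -KP_TURN * error;
}
```

When the ball's centroid sits at the image center the turn command is zero and the robot drives straight at the ball with its vref forward speed.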
To collect the balls, we designed our own mechanical device from white PVC board. To keep the design simple and minimize the number of servos used, we included only a front gate (which blocks the balls from being released) and a color separator.
Figure 2. Front gate physical design
Figure 3. Ball separator gate with front gate in the up position
The LabVIEW application served two purposes: to continuously display the robot's position on the course, and to map the location of each ball, in its appropriate color, as soon as it is picked up. At the same time, we display the walls the robot sees from an obstacle array. Communication with LabVIEW is done through a TCP/IP connection, the ComWithLinux function, and the terminal. As soon as an obstacle is detected, we determine whether the wall should be drawn vertically or horizontally based on the x coordinate of the detected obstacle.
Figure 4. LabVIEW block diagram
Figure 5. Dan Block (left), Andrew Blanco, Omar Ayala (middle), Nicolas Sierro, Bruno Calogero (Right)