Sunday, February 7, 2016

Robotics Challenge 4: Computer Vision and Automatic Targeting

In our upcoming competition, our driver on the joystick needs to hit the targets with high accuracy, and hopefully the robot can score on its own in the initial 15-second autonomous mode. Yes, I'm helping the children of tomorrow design Skynet. Image recognition on the robot's webcam can help in a wide range of areas: automatically traversing the see-saw obstacle by recognizing the raised board to lower, detecting field position via landmarks, and fine-tuning the robot's position at the last moment for a better shot at the goal.



How do we recognize an object in a camera image? We start with several photos of the item in question under realistic game conditions. To the left, you see a plywood model of the game goal. Note the special 3M reflective tape along the bottom, which bounces the green LED light on our robot straight back at the camera. To make the tape stand out more, we lower the camera's brightness setting and turn off white balancing.

Step one is converting the image from RGB to HSV: hue, saturation, and value (brightness/intensity). Hue covers the rainbow spectrum; this shade of green falls between .33 and .6 depending on the photo we analyzed. We'll leave the range loose until the competition, when we can fine-tune the precise value under field lighting. Saturation is the amount of color present; since the reflected light is mostly white (all colors combined), the saturation value will be under .2. At a value of at least .9, the target glows brightly, though not as much as a direct light source (.98).

In step two, we filter the image into a black-and-white mask, where white marks pixels that match our parameters and black is everything excluded. Note that the second image has false positives.

Our third step, therefore, is to build a list of contiguous blobs; the image on the right has about 12. The goal is a box with proportions of 14 by 19.5 inches, and based on the possible distances on the field we know the maximum and minimum box size. We also know that about 28% of the box should be lit, since the tape outlines the opening rather than filling it. We filter out everything that isn't goal-shaped on these criteria and outline the matches in red.
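For anyone who wants to experiment, here is a minimal sketch of steps one and two in Python with OpenCV. The threshold numbers are just the placeholder values from above, rescaled to OpenCV's 0-179 hue and 0-255 saturation/value ranges; we'll tune them at the competition.

```python
import cv2
import numpy as np

# HSV bounds from the analysis above, scaled to OpenCV's ranges:
# hue .33-.6 -> ~59-108, saturation <= .2 -> <= 51, value >= .9 -> >= 229.
LOWER = np.array([59, 0, 229])
UPPER = np.array([108, 51, 255])

def threshold_goal(frame_bgr):
    """Step two: return a mask that is white where pixels match the tape."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)  # step one: BGR -> HSV
    return cv2.inRange(hsv, LOWER, UPPER)
```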
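And here is a sketch of the blob-filtering step. It assumes the 19.5 in dimension is the box's width, and the MIN_AREA/MAX_AREA bounds are placeholders you would measure from the real minimum and maximum shooting distances.

```python
import cv2

MIN_AREA, MAX_AREA = 400, 40000  # pixel-area bounds; placeholders to measure on the field

def find_goal(mask, frame_bgr):
    """Step three: keep only blobs that look like the goal and outline them in red."""
    found = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = found[0] if len(found) == 2 else found[1]  # OpenCV 3 vs 4 return values
    matches = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if not MIN_AREA < w * h < MAX_AREA:             # size bounds from known distances
            continue
        if abs(w / float(h) - 19.5 / 14.0) > 0.3:       # goal proportions, with some slack
            continue
        if abs(cv2.contourArea(c) / float(w * h) - 0.28) > 0.1:  # ~28% of the box lit
            continue
        cv2.rectangle(frame_bgr, (x, y), (x + w, y + h), (0, 0, 255), 2)  # red in BGR
        matches.append((x, y, w, h))
    return matches
```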

This view can then be converted into a distance and an angle of approach, which we use to adjust our robot's position.
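A rough way to do that conversion is the pinhole-camera model: similar triangles give the range from the box's width in pixels, and the horizontal offset from the image center gives the bearing. The image size, field of view, and target width below are assumptions to replace with your camera's actual numbers.

```python
import math

IMG_W = 640         # image width in pixels (assumed)
HFOV_DEG = 60.0     # horizontal field of view in degrees (measure your camera)
F_PX = (IMG_W / 2.0) / math.tan(math.radians(HFOV_DEG / 2.0))  # focal length in pixels
TARGET_W_IN = 19.5  # physical width of the goal box (assumed orientation)

def distance_and_angle(x, y, w, h):
    """Estimate range in inches and bearing in degrees from a bounding box."""
    distance = TARGET_W_IN * F_PX / w                   # similar triangles
    cx = x + w / 2.0                                    # box center, in pixels
    angle = math.degrees(math.atan2(cx - IMG_W / 2.0, F_PX))  # positive = to the right
    return distance, angle
```

With these assumed numbers, a pixel or two of noise in the box width only moves the range estimate by an inch or two at typical shooting distances, and averaging over a few frames smooths it further.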




Here are some other filters and target types, like balls and blue lights.
Now all we need is a drive train, and for the kids to integrate the vision recognition into the driving software.
