How do we recognize an object in a camera image? We start with several photos of the item in question under realistic game conditions. To the left is a plywood model of the game goal. Note the special 3M reflective tape on the bottom, which reflects the light from the green LED on our robot. To make the tape stand out, we lower the camera's brightness setting and turn off automatic white balancing.

Next, we convert the goal color from RGB to HSV: hue, saturation, and value (brightness/intensity). Hue covers the rainbow spectrum; this shade of green falls between .33 and .6 depending on which photo we analyzed. We'll leave the range loose until the competition, when we can fine-tune the precise values. Saturation measures how much color is present. Since the reflected light is mostly white (all colors combined), the saturation value will be under .2. At a value of at least .9, the target glows brightly, but not as brightly as a light source (.98).

In step two, we filter the image into a black-and-white image: white marks pixels that match our parameters, and black marks everything else. Note that the second image has false positives. Our next step, therefore, is to build a list of contiguous blobs; the right image has about 12. Our goal is a box with proportions of 14 by 19.5 inches. From the range of possible distances, we know the maximum and minimum box size, and we also know that about 28% of the box should be lit. We filter out everything that isn't goal-shaped by these criteria and outline the matches in red.
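The steps above can be sketched in Python. The thresholds come straight from the text (hue between .33 and .6, saturation under .2, value at least .9, about 28% of a 19.5-by-14-inch box lit, assuming 19.5 in is the width); everything else, including the function names, the NumPy-only HSV conversion, and the size and tolerance parameters, is an illustrative assumption rather than our actual competition code.

```python
import numpy as np

def rgb_to_hsv(rgb):
    """Convert an (H, W, 3) float RGB image in [0, 1] to HSV in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    maxc = rgb.max(axis=-1)
    minc = rgb.min(axis=-1)
    delta = maxc - minc
    v = maxc
    s = np.where(maxc > 0, delta / np.maximum(maxc, 1e-12), 0.0)
    # Hue is piecewise, depending on which channel is largest.
    h = np.zeros_like(maxc)
    mask = delta > 0
    rmax = mask & (maxc == r)
    gmax = mask & (maxc == g) & ~rmax
    bmax = mask & ~rmax & ~gmax
    h[rmax] = ((g - b)[rmax] / delta[rmax]) % 6
    h[gmax] = (b - r)[gmax] / delta[gmax] + 2
    h[bmax] = (r - g)[bmax] / delta[bmax] + 4
    return np.stack([h / 6.0, s, v], axis=-1)

def goal_mask(rgb, h_lo=0.33, h_hi=0.60, s_hi=0.2, v_lo=0.9):
    """Step two: black-and-white mask of pixels matching our parameters."""
    hsv = rgb_to_hsv(rgb)
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return (h >= h_lo) & (h <= h_hi) & (s <= s_hi) & (v >= v_lo)

def looks_like_goal(w_px, h_px, lit_px, min_w=20, max_w=400, tol=0.25):
    """Reject blobs whose bounding box doesn't match the goal.

    min_w/max_w stand in for the box sizes possible at game distances
    (assumed numbers); about 28% of the box should be lit tape.
    """
    if not (min_w <= w_px <= max_w):
        return False
    if abs(w_px / h_px - 19.5 / 14.0) > tol:      # 19.5 x 14 in proportions
        return False
    return abs(lit_px / (w_px * h_px) - 0.28) <= 0.10
```

A bright, nearly white green patch (high value, low saturation, hue near .33) passes the mask; saturated green, pure white, and dark pixels all fail, which is exactly the behavior the brightness and white-balance adjustments are meant to encourage.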
This view can then be converted into a distance and an angle of approach, which we use to adjust our robot's position.
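One common way to turn the matched box into a distance and angle is the pinhole-camera model: the farther the goal, the narrower its box appears, and its horizontal offset from the image center gives the heading. A minimal sketch, assuming a hypothetical focal length and image center (the 19.5 in target width is from the text; the camera constants are made up and would be calibrated on the real camera):

```python
import math

TARGET_WIDTH_IN = 19.5    # goal width, from the text (inches)
FOCAL_LENGTH_PX = 500.0   # assumed focal length in pixels (calibrate!)
IMAGE_CENTER_X = 320.0    # assumed center of a 640-pixel-wide frame

def distance_and_angle(box_center_x, box_width_px):
    """Estimate range (inches) and approach angle (degrees) from the box."""
    distance_in = TARGET_WIDTH_IN * FOCAL_LENGTH_PX / box_width_px
    angle_rad = math.atan2(box_center_x - IMAGE_CENTER_X, FOCAL_LENGTH_PX)
    return distance_in, math.degrees(angle_rad)
```

For example, a 100-pixel-wide box centered in the frame would put the goal about 97.5 inches away, dead ahead; a box left of center yields a negative angle, telling the drive code to turn left.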
Now all we need is a drive train and for the kids to integrate the vision recognition into the driving software.