Sometimes The Rules Are Bad

Image recognition is a common application for AI, because it's a complex task that's hard to accomplish with a conventional, rule-based computer program. Most of us can recognize a cat in a picture instantly, but it's surprisingly difficult to write down the rules that characterize a cat. Do we tell the software that cats have two eyes, a nose, two ears, and a tail? A mouse and a giraffe fit that description too.
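
To see the problem concretely, here's a minimal sketch of a hand-written cat detector. The feature flags are invented for illustration; real photos don't come with tidy labels like these, which is part of the difficulty.

```python
# A naive rule-based "cat detector". The animal descriptions below are
# made up for illustration; they stand in for whatever features we could
# somehow extract from a photo.
def looks_like_cat(animal):
    # Our best attempt at explicit cat rules.
    return (animal["eyes"] == 2 and animal["ears"] == 2
            and animal["has_nose"] and animal["has_tail"])

cat     = {"eyes": 2, "ears": 2, "has_nose": True, "has_tail": True}
mouse   = {"eyes": 2, "ears": 2, "has_nose": True, "has_tail": True}
giraffe = {"eyes": 2, "ears": 2, "has_nose": True, "has_tail": True}

print(looks_like_cat(cat), looks_like_cat(mouse), looks_like_cat(giraffe))
# -> True True True: the rules match all three animals equally well.
```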

Instead, an AI develops the rules that let it accomplish its goal through trial and error. And because it picks up its knowledge implicitly rather than being taught explicitly, its tactics are sometimes wildly inventive. The clever problem-solving rules it arrives at can also rest on faulty assumptions.
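
As a rough illustration of what trial and error means here, imagine each photo boiled down to two made-up numbers and a simple model fit to labeled examples. This is a toy sketch with invented features and data, not how any particular commercial system works, but it shows the key point: nobody writes the rules.

```python
# A toy "learning by trial and error" sketch using scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training set: each photo reduced to two invented numbers,
# average "greenness" and average "fluffiness".
sheep = rng.normal([0.8, 0.7], 0.1, size=(100, 2))  # sheep photos
other = rng.normal([0.3, 0.3], 0.1, size=(100, 2))  # everything else
X = np.vstack([sheep, other])
y = np.array([1] * 100 + [0] * 100)  # 1 = "sheep", 0 = "not sheep"

# We never tell the model what a sheep looks like; it adjusts its
# weights until its guesses match the labeled examples.
model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.coef_, model.intercept_)  # the learned weights ARE the rules
```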

Microsoft's image recognition service, which lets you upload any image for an AI to classify and label, usually does a good job. But it kept tagging sheep in images that plainly contained no sheep at all. A closer look revealed the pattern: whether or not sheep were present, it tended to see them in lush, green fields.

During training, this AI was probably shown sheep mostly in fields like these, so it may never have worked out that the "sheep" label referred to the animals rather than the grassy scenery. Or it may simply have been looking at the wrong thing. Sure enough, it tended to get confused when I gave it examples of sheep that weren't standing in beautiful green fields.

When I showed it pictures of sheep riding in cars, it tended to label them as dogs or cats instead. Sheep held in people's arms, or standing in living rooms, also collected dog and cat labels, and sheep on leashes were identified as dogs. Goats, being climbers, posed a similar set of problems. My guess is that the AI had learned rules along the lines of "green grass = sheep" and "fur in a car or a kitchen = dog or cat." These rules had served it well during training, but they fell apart when it confronted the real world, with its bewildering variety of sheep-related situations.
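
Here's the same toy setup with that training bias baked in: every "sheep" example happens to sit on a green background, so the model leans almost entirely on greenness, and a perfectly fluffy sheep in a car gets rejected. As before, the features and numbers are invented for illustration, not taken from Microsoft's actual system.

```python
# A toy "green grass = sheep" failure, continuing the earlier sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Biased training data: every sheep photo has a green field behind it,
# so greenness and the "sheep" label are perfectly entangled.
sheep = np.column_stack([rng.normal(0.9, 0.05, 200),   # greenness
                         rng.normal(0.7, 0.10, 200)])  # fluffiness
other = np.column_stack([rng.normal(0.2, 0.05, 200),
                         rng.normal(0.4, 0.10, 200)])
X = np.vstack([sheep, other])
y = np.array([1] * 200 + [0] * 200)

model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.coef_)  # greenness gets by far the larger weight

# A sheep in a car: just as fluffy, but not a blade of grass in sight.
sheep_in_car = [[0.1, 0.7]]
print(model.predict(sheep_in_car))  # -> [0], confidently "not sheep"
```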

Training mistakes like these are common in image recognition AIs, but sometimes the consequences are more serious. A team at Stanford University once trained an AI to tell images of healthy skin apart from images of skin cancer. The researchers later realized that, because many of the tumors in their training data had been photographed next to rulers for scale, what they had actually trained was a ruler detector. The AI had discovered that checking for a ruler in the image was much easier than learning the subtle differences between healthy cells and cancerous ones.