Usually, AI proves very successful when its goals are focused on a single task, like playing a game with clearly defined rules. But setting up a system that can handle greater complexity has proved elusive for AI. Some researchers believe studying how animals learn can open the way to more comprehensive mastery of tasks for AI.
An appreciation for the cognitive abilities demonstrated by animals is the motivation for the Animal-AI Olympics. As described in its YouTube video, “Instead of providing a problem to solve, we will provide an arena in which we will test your entry for many simple cognitive abilities using methods from the animal cognition literature.” (To learn about AI’s origins, check out A Brief History of AI.)
How Birds Use Their Brains
“Birdbrain” is generally understood to be an insult directed at a person who has demonstrated a lack of intelligence. But in fact, birds use their brains very effectively to solve problems like gaining access to food that is out of their immediate reach.
As the video below demonstrates, birds can be quite clever at working out practical solutions.
The challenge of raising the food particles by throwing pebbles in the water was inspired by one of Aesop’s Fables, “The Crow and the Pitcher.” The story’s moral is: “In a pinch a good use of our wits may help us out.” There are other examples of birds demonstrating superior thinking, so much so that this video argues that we should rethink our use of “birdbrain.”
The goals of the competition are the following:
- Benchmark current AI against multiple animal species using a range of established animal cognition tasks.
- Present tests aimed at identifying the cognitive abilities of AI systems.
- Determine which AI approaches are most promising for these types of tasks.
- Create an ongoing benchmark and data repository for artificial cognition.
- Determine which aspects of intelligence are challenging for current AI and which AI already excels at.
- Create new experiments to feed back into the animal cognition community that can later be tested with animals.
- Bring together two different disciplines to share methods and developments.
Testing, Testing, 1, 2, 3
The tests assume only that the entrant agents have three basic capabilities:
- Vision (the ability to see their surroundings)
- Navigation (the ability to move — in whatever format this takes)
- Food retrieval (an internal drive to retrieve food when hungry, as animals have)
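These three assumptions map naturally onto the standard reinforcement-learning loop: the agent receives a visual observation, emits a movement action, and collects reward when it reaches food. The sketch below is a hypothetical illustration of that loop only, not the competition’s actual API (the real arena is a Unity-based 3D environment); every class and name here is invented for illustration.

```python
import random


class RandomAgent:
    """Placeholder agent: picks a movement action at random."""
    ACTIONS = ["forward", "back", "left", "right"]

    def act(self, observation):
        # In the real arena 'observation' would be a camera image;
        # this toy agent ignores it.
        return random.choice(self.ACTIONS)


class ToyArena:
    """Hypothetical stand-in for a test arena: the agent is rewarded
    (+1) whenever it happens to step 'forward' toward the food."""

    def __init__(self, steps=10):
        self.steps_left = steps

    def reset(self):
        return "pixel_frame_0"  # placeholder for a visual observation

    def step(self, action):
        self.steps_left -= 1
        reward = 1.0 if action == "forward" else 0.0  # food retrieval
        done = self.steps_left <= 0
        return "pixel_frame", reward, done


# The interaction loop: vision in, navigation out, food as reward.
arena = ToyArena()
obs = arena.reset()
agent = RandomAgent()
total_reward, done = 0.0, False
while not done:
    action = agent.act(obs)
    obs, reward, done = arena.step(action)
    total_reward += reward
print(total_reward)
```

The point of the sketch is that the competition constrains only this interface; everything about *how* the agent chooses actions, whether by learned policy, planning, or memorized patterns, is left to the entrant.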
In an interview with IEEE Spectrum, Crosby explained that, as demonstrated in the video above, animals can often figure out what they need to do to get at food. The question is: Did they analyze the situation to arrive at the solution, or only do what they remember works, essentially “just repeating the pattern that it learned through trial and error”? That is the difference between true “understanding [and] rote memorization.”
Referring back to the crow’s solution to get at the food, it’s not completely clear how the crow arrives at the solution it does. Is it possible it has intuited the dynamics of displacement in water? Or has it simply learned from its own experience that the pebbles will raise the water level?
The video demonstrates one simple example: figuring out how to get a stick that is wider than a house’s opening through the doorway. It illustrates the problem first with a dog, and then with a simple box shape that picks up the stick and angles it so it can fit through the opening.
“In each case, the idea is to develop tests that will show just how the animal brain understands, interprets, and reasons about the world,” Crosby said in his blog.
That adds up to 100 tests spread across 10 different categories. While the exact nature of the tests is being kept secret, they will include tests of object permanence and spatial skills. Winners have to excel not in just one area, but in all. (Hear what others are saying about AI in 11 Quotes About AI That’ll Make You Think.)
Failure Is an Option
Of course, even an agent’s failure can still be a success in discovery. As Crosby told Technology Review, “What we are actually interested in is discovering how to translate between different types of intelligence.” If the tests show that “this translation fails, that’s a success as far as we’re concerned.”
Online submissions from the competition will run from July 8 through November 1, 2019. The original prize offered was $10,000, but it has now more than tripled to $32,000.
Results are to be announced at the end of the year. For 2020 and beyond, there are also plans to make the data and testing platform available to other researchers in the field for benchmarking and large-scale comparative analysis, as well as to hold future competitions.