During the Cognitive Computing Challenge, we’ve examined many ways machine learning will shape the future of finance and technology, and we’ve tried to focus on topics not typically associated with cognitive computing. While artificial intelligence in popular culture centers on independently conscious computers, new research suggests that computers may in fact be able to tell us something about our own consciousness. Because of its adaptive properties, cognitive computing is well suited to modeling human and animal behavior.
Even mice display body language, and research on the subject has so far relied on human observation to collect data for analysis. However, researchers at Harvard Medical School recently announced a new computational technique that allows them to model animal behavior in three-dimensional space and categorize body movements into “notions of syllables and grammar”, as reported by Scientific Computing.
Graduate student Alexander Wiltschko had the idea to use Microsoft’s Kinect depth camera to capture movement. He and his fellow researchers were examining how mice react to odors, particularly the smell of a fox. In reaction to the scent, the mice would freeze and roll into a ball. The scientists wanted to know exactly how many times the mice did this, how long each action lasted, and exactly what it looked like. Wiltschko’s idea also solved a problem common to scientists who study nocturnal animals: they usually rely on infrared lights, which would sometimes fall from their supporting scaffolds.

The team then used the three-dimensional images to build a model that could analyze the data mathematically. Developed by Matthew J. Johnson, Harvard Medical School research fellow in neurobiology and study co-author, the model found relationships between the various poses the mice made, as well as the transitions between those poses. Using the model, the authors noticed that the movements could be broken down into distinct poses that appeared in reaction to many different stimuli, not just the fox odor. That reminded the Harvard team of language: a series of distinct and independent symbols that combine into something communicative and meaningful.
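The pose-and-transition idea can be illustrated with a toy example: treat each pose as a discrete “syllable” and estimate a transition matrix, the “grammar”, from an observed sequence. This is only a minimal sketch with made-up labels, not the authors’ actual model, which works on three-dimensional depth-camera data.

```python
import numpy as np

# Hypothetical frame-by-frame sequence of discretized pose "syllables"
# (labels 0..3). In the real study these come from 3-D depth data.
syllables = [0, 0, 1, 1, 2, 0, 1, 2, 2, 3, 0, 1]

n_states = 4
counts = np.zeros((n_states, n_states))

# Count how often each syllable is followed by each other syllable.
for a, b in zip(syllables[:-1], syllables[1:]):
    counts[a, b] += 1

# Row-normalize to get transition probabilities: the "grammar"
# describing which poses tend to follow which.
transitions = counts / counts.sum(axis=1, keepdims=True)
```

Each row of `transitions` sums to one and gives the probability of the next syllable given the current one, so frequent pose-to-pose patterns stand out directly from the data.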
When a mouse smells a fox, then, its existing behaviors simply become more intense rather than different, much like the difference between talking quietly and shouting. The Kinect can also pick up on behavior that is not apparent to humans. The researchers tested the model on mice engineered to carry two copies of a gene mutation that makes them waddle when they walk. While this behavior is easy to see, mice that carry just a single copy of the mutated gene were long thought to be behaviorally normal; the model, however, showed that these mice produce abnormal syllables in their walking patterns.
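The “talking quietly versus shouting” idea, the same syllables used at different rates, can be sketched by comparing how often each syllable occurs under each condition. The labels and counts below are hypothetical, purely for illustration:

```python
from collections import Counter

# Hypothetical per-frame syllable labels for one mouse under two
# conditions; in the study these would come from the depth-camera model.
baseline = ["walk", "walk", "pause", "walk", "freeze", "pause"]
fox_odor = ["freeze", "freeze", "pause", "freeze", "freeze", "pause"]

def usage(frames):
    """Fraction of frames spent in each syllable."""
    counts = Counter(frames)
    return {syllable: n / len(frames) for syllable, n in counts.items()}

base_usage = usage(baseline)
odor_usage = usage(fox_odor)

# Both conditions draw on the same vocabulary of syllables; the odor
# shifts the frequencies (here, "freeze" becomes far more common).
```

Comparing the two usage dictionaries shows no new syllables appearing under the fox odor, only a shift in how often the existing ones occur.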
Johnson hopes that others will build on and improve his model. The research team’s work gives the world a unique opportunity to draw clear connections between specific environmental stimuli and behavior, something the scientific community has long found difficult. The research shows that it is possible to break complex behavioral data down into smaller units that can be counted and tracked over time. It also reduces observer bias, since humans are no longer relied on to judge whether a particular behavior is occurring.
The winner of the cognitive computing challenge here at HeroX will have a great opportunity to help shape the future, so make sure to get your projects in soon! The submission deadline is January 11.