May 15, 2019
Two recent studies independently tapped into the power of artificial neural networks (ANNs) to probe how our visual system perceives reality, according to reporting by Shelly Fan on SingularityHub.
The first study, published in Cell, used generative networks to evolve DeepDream-like images (DeepDream is a computer vision program) that hyper-activate complex visual neurons in monkeys. These machine artworks revealed a fundamental “visual hieroglyph” that may form a basic rule for how we piece together visual stimuli to process sight into perception.
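The evolutionary approach described above can be sketched in a few lines: a population of candidate images is scored, the highest-scoring ones survive, and mutated copies fill the next generation. The sketch below is illustrative only — the toy `neuron_response` function, the 16×16 image size, and all parameters stand in for the recorded monkey neurons and the actual generative-network setup used in the Cell study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a recorded neuron: a fixed "preferred
# pattern" the toy neuron responds to. In the study, scores came
# from real neural recordings, not a known target.
PREFERRED = rng.normal(size=(16, 16))

def neuron_response(image):
    """Toy firing rate: similarity between the image and the preferred pattern."""
    return float(np.sum(image * PREFERRED))

def evolve_images(pop_size=50, n_generations=100, mutation_scale=0.1):
    """Genetic-algorithm sketch: score, keep the elite, mutate, repeat."""
    population = rng.normal(size=(pop_size, 16, 16))
    for _ in range(n_generations):
        scores = np.array([neuron_response(img) for img in population])
        elite = population[np.argsort(scores)[-10:]]  # top 10 survive unchanged
        # Offspring are noisy copies of randomly chosen elite images.
        offspring = elite[rng.integers(0, 10, size=pop_size - 10)]
        offspring = offspring + rng.normal(scale=mutation_scale,
                                           size=offspring.shape)
        population = np.concatenate([elite, offspring])
    scores = np.array([neuron_response(img) for img in population])
    return population[int(np.argmax(scores))]

best = evolve_images()
```

Because the elite images are carried over unchanged, the best score never decreases — after enough generations the evolved image converges toward whatever pattern drives the neuron hardest, without the experimenter ever specifying that pattern in advance.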
In the second study, a team used a deep ANN model—one thought to mimic biological vision—to synthesize new patterns tailored to control specific networks of visual neurons in the monkey brain. When the synthetic images were shown directly to monkeys, they reliably activated the predicted populations of neurons. Future, improved ANN models could allow even finer control, giving neuroscientists a powerful noninvasive tool for studying the brain. The work was published in Science.
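The key difference from the first study is that here the model itself is differentiable, so a controller image can be synthesized by gradient ascent on the model's predicted response, then shown to the animal. The sketch below uses a tiny random two-layer network as a placeholder for the trained deep ANN of the ventral visual stream; the network, image size, and all parameters are assumptions for illustration, not details from the Science paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder two-layer network standing in for a deep ANN model
# of visual cortex (the study used a network trained on object
# recognition; these weights are random for illustration).
W1 = rng.normal(scale=0.1, size=(64, 256))   # pixels -> hidden units
W2 = rng.normal(scale=0.1, size=(8, 64))     # hidden -> model "neural sites"

def model_sites(image):
    """Predicted responses of 8 model neural sites to a 16x16 image."""
    hidden = np.maximum(0.0, W1 @ image.ravel())  # ReLU
    return W2 @ hidden

def synthesize(target_site, steps=200, lr=0.5):
    """Gradient ascent on the target site's predicted response."""
    image = rng.normal(scale=0.01, size=(16, 16))
    for _ in range(steps):
        hidden_pre = W1 @ image.ravel()
        mask = (hidden_pre > 0).astype(float)        # ReLU derivative
        grad = (W2[target_site] * mask) @ W1         # chain rule back to pixels
        image = image + lr * grad.reshape(16, 16)
        image = np.clip(image, -1.0, 1.0)            # keep pixel values bounded
    return image

stimulus = synthesize(target_site=3)
```

In the actual experiment, the image synthesized against the model was then presented to the monkey, and the real neurons' responses were compared with the model's prediction — the better the model, the more precisely such stimuli can steer neural activity.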
The studies’ individual results illustrate how scientists are now striving to use AI to probe animal and human intelligence. “Vision is only the beginning—the tools can potentially be expanded into other sensory domains. And the more we understand about natural brains, the better we can engineer artificial ones,” writes Fan.