Primate brain’s visual system could improve AI algorithms
A team of scientists, led by Sydney-based neuroscientists, recently found new evidence that revises the traditional view of the primate brain’s visual system and could inform AI algorithms for machine vision.
This remapping of the brain could serve as a reference for understanding how the highly complex visual system works, and could influence the design of artificial neural networks for machine vision.
While mapping whole-brain connectivity in marmosets, the team found that parts of the primate visual system may work differently than previously thought.
Mapping out how distinct types of cells connect can help researchers understand how groups of cells work in concert to relay and process sensory information from the outside environment to the brain.
For their research, the team looked at the thalamus, a brain structure located above the brainstem that consists of different nuclei thought to relay and coordinate sensory information to the cerebral cortex, typically conceived of as the seat of higher cognitive function.
Researchers have traditionally categorized different thalamic nuclei as either relay nuclei or association nuclei. The visual thalamus, for example, contains the lateral geniculate nucleus (LGN), considered to be a relay of information from the retina to the visual cortex, and the visual pulvinar, which is thought to be responsible for multisensory coordination and attention.
The research is also significant because it is the first time this type of brain mapping has been conducted in primates, whose brain structures closely resemble those of humans.
Beyond the basic-science implications of the findings, Mitra also suggested possible applications in artificial intelligence (AI) algorithms.
“People are basing the algorithms that they develop on a dated view of the visual system’s anatomy,” Mitra said. “As we understand it better, maybe that will allow for new thinking about network algorithms for machine vision.”