Remarkable progress has been made in recent years on how machines ‘perceive’ the world. This progress promises potentially life-changing applications, from improved industrial robotics and self-driving cars to more accurate medical imaging. Recently, neuroscientists have argued that a subclass of these machine vision techniques, known as deep learning neural networks, is organised in the same way and performs the same kinds of computations as the human visual system.
This has immense implications: we could study these artificial networks instead of humans or other animals to better understand the human brain; we could develop prosthetics to help with brain damage; and in turn we could improve artificial neural networks using our knowledge of the human brain. However, this claim has been challenged by other neuroscientists. It is therefore important to systematically test whether human and machine visual systems are truly analogous. As a first step, we built the simplest artificial neural network that can recognise the simplest symmetrical stimuli. We identified the minimal set of parameters necessary to classify symmetrical patterns; these parameters are remarkably similar to those that humans use.
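To give a flavour of what such a minimal network might look like, here is a hypothetical sketch (not the actual model from this work): a hand-wired feedforward network in which each hidden unit pair detects a mismatch between one mirror pair of inputs, and a single output unit fires only when every mismatch detector is silent. The function name and the choice of ReLU units are illustrative assumptions.

```python
import numpy as np

def minimal_symmetry_net(x):
    """Hand-wired feedforward sketch (illustrative, not the paper's model):
    two ReLUs per mirror pair compute |left - right|; the output unit
    fires only when the summed mismatch is below its bias of 0.5."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    left, right = x[: n // 2], x[::-1][: n // 2]
    # Hidden layer: |left - right| built from two ReLUs per mirror pair.
    hidden = np.maximum(left - right, 0.0) + np.maximum(right - left, 0.0)
    # Output unit: linear sum of mismatches, thresholded at 0.5.
    return float(hidden.sum() < 0.5)  # 1.0 = symmetric, 0.0 = asymmetric

print(minimal_symmetry_net([1, 0, 1, 1, 0, 1]))  # mirror-symmetric → 1.0
print(minimal_symmetry_net([1, 0, 0, 1, 0, 1]))  # asymmetric → 0.0
```

The point of the sketch is that very few parameters suffice: one comparison per mirror pair plus a single readout threshold, which is the spirit of the minimal parameter set described above.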
We will, in the near future, expand this approach to other human psychophysical tasks. The outcomes of these tests will in time allow us to evaluate a broad range of more powerful networks.