I am aware the human brain has many functionally distinct components, but let us specifically consider the human visual cortex: could Artificial Neural Networks (ANNs) be "trained" (through, e.g., backpropagation) in a way analogous to how the visual cortex "learns"?
Is backpropagation, as used in ANNs, a phenomenon actually observed in the human brain?
Answer
I recommend Yoshua Bengio's recent work, e.g. https://arxiv.org/abs/1502.04156 and his slides from the NIPS 2016 Brains and Bits workshop.
Also, Timothy Lillicrap's work: http://www.nature.com/articles/ncomms13276
This is still a big open question. In short, we (the neuroscience community) have little idea how the brain learns at the circuit/systems level in general. We know a thing or two about how individual synapses change under specific experimental protocols, but those are phenomenological models (e.g. STDP; there have been many normative models of learning that conclude their learning rule is STDP-like). For some systems, such as the cerebellum, we know relatively more, but the learning there seems to be specific to motor timing.
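To make the STDP point concrete, here is a minimal sketch of a phenomenological pair-based STDP window, the kind of curve fitted to spike-pairing experiments. The parameter values are illustrative assumptions, not fitted to any particular dataset:

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP window (phenomenological, not normative).

    dt = t_post - t_pre in milliseconds. Pre-before-post (dt >= 0)
    potentiates the synapse; post-before-pre (dt < 0) depresses it,
    with exponential decay in |dt|. All parameters are illustrative.
    """
    dt = np.asarray(dt, dtype=float)
    return np.where(
        dt >= 0,
        a_plus * np.exp(-dt / tau_plus),    # potentiation branch
        -a_minus * np.exp(dt / tau_minus),  # depression branch
    )
```

Note this only describes *how much* a synapse changes given spike timing; it says nothing about what objective, if any, the circuit is optimizing, which is exactly the gap between phenomenological and normative models.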
The main techniques we use in machine learning and artificial neural networks are mostly gradient descent (or some stochastic optimization algorithm), and people have been trying to find out whether biological neural networks learn in a similar way. There are several issues:
There's no error gradient being propagated according to the chain rule. Information gets mushed together within a neuron, so there's no plausible way to retrace the forward path exactly during a backward pass.
There are a lot of recurrent connections, but they are not symmetric, whereas backpropagation requires feedback weights that mirror the feedforward ones.
Temporally, it is implausible that there's a clocked signal to backpropagate after each forward pass. Neuronal networks don't seem to operate with clocks (some think there are precise clocks, but I think there's little evidence).
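The symmetry issue above can be seen directly in the backprop equations. In this toy two-layer linear network (sizes and data are arbitrary, just for illustration), the error reaching the hidden layer travels through the *transpose* of the forward weights, which is exactly what a biological circuit would struggle to provide:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer linear network: x -> h = W1 x -> y = W2 h
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))

x = rng.normal(size=3)
target = rng.normal(size=2)

h = W1 @ x
y = W2 @ h
e = y - target          # dL/dy for squared loss L = 0.5 * ||y - target||^2

# Chain rule: the gradient at the hidden layer is carried back through
# W2.T, i.e. feedback connections exactly symmetric to the feedforward
# ones. This is the part with no obvious biological counterpart.
delta_h = W2.T @ e
```

`delta_h` here equals the true gradient dL/dh; the point is that computing it requires the backward path to "know" the forward weights.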
Recent efforts have made some progress with random backprojections, decoupled updates using synthetic gradients, target propagation, autoencoders, weight/signal quantization, deep belief networks, etc. But several more jumps seem to be needed to make these biologically plausible, so that they agree with the neural architecture and information flow. Even then, there's no guarantee that the neocortex learns by a similar principle.
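Of these, the random-backprojection idea (feedback alignment, from the Lillicrap et al. paper linked above) is easy to sketch: route the error back through a *fixed random* matrix instead of the transpose of the forward weights. The toy linear regression task, dimensions, and learning rate below are illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy task: learn y = M x with a two-layer linear net.
M = rng.normal(size=(2, 3))
W1 = rng.normal(size=(4, 3)) * 0.1
W2 = rng.normal(size=(2, 4)) * 0.1
B = rng.normal(size=(4, 2)) * 0.1   # fixed random feedback weights, NOT W2.T

lr = 0.02
losses = []
for step in range(2000):
    x = rng.normal(size=3)
    target = M @ x
    h = W1 @ x
    y = W2 @ h
    e = y - target
    losses.append(0.5 * np.sum(e ** 2))
    # Feedback alignment: the hidden-layer error signal uses the fixed
    # random B instead of W2.T, avoiding the weight-symmetry requirement.
    delta_h = B @ e
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)
```

Empirically the forward weights tend to come into alignment with the random feedback during training, so `B @ e` still carries useful gradient information; that removes the symmetry objection, but the other objections (clocking, separate backward signals) still stand.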