Aside from its obvious biological plausibility, what is the computational motivation for using the Neural Engineering Framework (NEF) instead of Artificial Neural Networks (ANNs) for computing functions? From what I can tell, they both approximate functions, can learn new functions, and can create self-organizing networks. What makes the NEF different?
First, let's take a look at the three basic principles of the NEF: representation, computation (transformation), and dynamics.
The first two principles (Representation and Computation) do seem analogous to trained ANN models. Additionally, with the hPES learning rule that I've described here, they seem to have the same learning (gradient-descent) capabilities.
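For concreteness, the representation and computation principles boil down to equations roughly like the following (this is my own summary of the standard NEF formulation, so take the notation as a sketch rather than a quote from the linked answer):

$$a_i(x) = G_i\!\left[\alpha_i \langle e_i, x\rangle + J_i^{\text{bias}}\right], \qquad \hat{x} = \sum_i d_i\, a_i(x), \qquad \hat{f}(x) = \sum_i d_i^{f}\, a_i(x)$$

where $G_i$ is the neuron nonlinearity, $e_i$ the encoder, $\alpha_i$ and $J_i^{\text{bias}}$ the gain and bias, and the decoders $d_i$ (or $d_i^f$ for a transformation $f$) are found by least-squares optimization rather than by backpropagation.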
Where the NEF differentiates itself most from ANNs is in its description of dynamics (oscillators, attractors). For a description of how it does this and what it accomplishes, check out Terry Stewart's course notes found here.
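In brief, for a desired linear system the dynamics principle maps the system matrices onto the recurrent and input connections of a population (again my own summary, with $\tau$ denoting the synaptic time constant):

$$\frac{dx}{dt} = A x + B u \quad\Longrightarrow\quad A' = \tau A + I, \qquad B' = \tau B$$

so the recurrent connection computes $A'x$, the input connection computes $B'u$, and the synaptic filter itself supplies the integration.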
Describing dynamics is important in the brain, since simple oscillators and integrators seem to be the foundation of many different cognitive abilities such as working memory and motor control.
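As a concrete illustration, here is a minimal sketch of such an integrator, assuming the Nengo library (the software implementation of the NEF); the neuron count, time constants, and input signal are just placeholder values:

```python
import nengo

tau = 0.1  # synaptic time constant used on the recurrent connection

model = nengo.Network(label="NEF integrator sketch")
with model:
    # Brief input pulse to be accumulated by the integrator
    stim = nengo.Node(lambda t: 1.0 if t < 0.2 else 0.0)

    # Population of spiking neurons representing the 1-D state x
    x = nengo.Ensemble(n_neurons=100, dimensions=1)

    # Dynamics principle for dx/dt = u (A = 0, B = 1):
    # input transform B' = tau, recurrent transform A' = identity
    nengo.Connection(stim, x, transform=tau, synapse=tau)
    nengo.Connection(x, x, synapse=tau)

    probe = nengo.Probe(x, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(1.0)

# sim.data[probe] should settle near 0.2 (the integral of the pulse) after the input ends
```

Once the input pulse ends, the population holds its decoded value, which is exactly the working-memory-like behaviour mentioned above.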
– Seanny123
So of course, ANNs, being the more general class, have more capabilities... but once you choose a specific instance of an ANN for a particular purpose, you begin to limit its capabilities to that purpose.
– Keegan Keplinger Oct 15 '14 at 19:03