The epidemic of Artificial Intelligence misinformation, centred on the idea that brains work like neural nets, seems to be coming to a head, with researchers pivoting to new forms of discovery: a focus on neural coding that could unlock the possibility of brain-computer interfaces.
Things are moving very quickly in neural networks for two reasons: cheaper parallel hardware, and incremental improvements in algorithms. Processing power hasn't necessarily got much faster over the past five years, but it has got exponentially cheaper and more efficient, allowing us to pack loads of processors into a small area and run them in parallel. Since neural networks have no physical substrate like the brain's, they require a huge processing overhead; these hardware gains, combined with small improvements in the algorithms we use, have allowed us to create relatively large neural networks that don't take years to train. We can train them quickly enough, and implement them quickly enough, that we can start to do these cool real-time things, like this video, and the recent NNs that can play video games.
Generally, at this stage, we don't provide these networks with any specific structure beyond picking an algorithm to construct them. We let them form their own structures as we train them. This creates black boxes that vary each run but have very similar functionality: no two networks look the same, yet they all produce essentially the same outputs. This is much like how biological learning works.
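One concrete reason trained networks can differ internally while behaving identically is hidden-unit permutation symmetry. The toy network below is a hypothetical illustration, not any specific system from the text: shuffling the hidden units of a small two-layer net produces a different-looking set of weights that computes exactly the same function.

```python
# Minimal sketch (pure NumPy, toy network chosen for illustration):
# permuting a network's hidden units changes the weights but not the function.
import numpy as np

rng = np.random.default_rng(42)
W1 = rng.normal(size=(3, 5)); b1 = rng.normal(size=5)   # input -> hidden
W2 = rng.normal(size=(5, 2)); b2 = rng.normal(size=2)   # hidden -> output

def forward(x, W1, b1, W2, b2):
    h = np.tanh(x @ W1 + b1)   # hidden activations
    return h @ W2 + b2         # linear readout

# Reorder the five hidden units (any non-identity permutation works).
perm = np.array([4, 0, 1, 2, 3])
W1p, b1p, W2p = W1[:, perm], b1[perm], W2[perm, :]

x = rng.normal(size=(4, 3))    # a batch of random inputs
same_outputs = np.allclose(forward(x, W1, b1, W2, b2),
                           forward(x, W1p, b1p, W2p, b2))
same_weights = np.allclose(W1, W1p)
print(same_outputs)  # True  - identical behaviour
print(same_weights)  # False - different internal structure
```

This is only one source of the variation; independently trained nets also settle into genuinely different solutions, but the permutation case shows cleanly how "no two networks look the same" is compatible with "essentially the same outputs".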
However, in most animals, the dynamic learning that occurs during the animal's lifetime, and is responsible for creating unique neural networks, is very limited. Most of the networks are laid down by DNA; they've essentially been trained by natural selection rather than through an individual's experiences. This process is responsible for the remarkable coherence and integration of animal brains. It's responsible for the fact that we all have essentially identical visual processing, language acquisition, and basic instinctual behaviours.
These fixed structures are extremely important for creating an intelligence we can relate to. The visual cortex, for example, is unlikely to arise spontaneously in an individual neural network; we'll have to build one. The good news is that neuroscience is teaching us a lot about it. It appears to be a feed-forward network with increasing levels of abstraction.
Essentially, at the lowest level, neurons detect edges (diagonal, vertical, horizontal) and curves, at a very small scale. The second layer combines these, looking for continuous edges or curves that represent the edge of an object. Another layer is likely looking for large changes in value. As you go up the layers, they look for 2D geometric shapes and planes, and then likely 3D shapes. What's fascinating is that the higher layers seem to literally be built as 3D fields, with neurons representing a box, sphere, etc. lighting up when we visualise, or view, one.
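The lowest stage described above can be sketched with small convolutional filters. The kernels below are standard Sobel edge detectors, used here purely as a stand-in for low-level edge-sensitive units; they are an assumption for illustration, not a claim about actual cortical weights.

```python
# Rough sketch (NumPy): tiny filters that respond to vertical vs horizontal
# edges, analogous to the first layer of visual processing described above.
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D cross-correlation (as most NN libraries compute it)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

# Toy image: left half dark, right half bright -> a single vertical edge.
img = np.zeros((6, 6))
img[:, 3:] = 1.0

sobel_v = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)  # responds to vertical edges
sobel_h = sobel_v.T                            # responds to horizontal edges

v_resp = conv2d(img, sobel_v)
h_resp = conv2d(img, sobel_h)
print(np.abs(v_resp).max())  # strong response at the vertical edge
print(np.abs(h_resp).max())  # 0.0 - no horizontal edge in this image
```

Higher stages would then pool and combine these local responses into longer contours and shapes, which is the layered abstraction the paragraph describes.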
So... Lots of structures in our brain are pre-built by evolution. Those structures are very ordered, with complex layers of hierarchy and feedback that allow us to perceive and simulate various things, from simple shapes and sounds to complex polygons and complex language.
Secondly, there's the fact that even the seemingly chaotic, random learning process of our neuronal structures, during childhood and puberty, is actually extremely ordered. It follows complex procedures we don't understand, but which ultimately seem to amount to a sort of layering, like an onion: the basics are nailed down first, then added to in progressive layers of abstraction, much like our visual system. Until we have algorithms that can do this, as opposed to the dumb ones we use today, and until we can literally build neural networks that mimic brain structures, we're not going to see anything highly integrated.
But in the very near term, neural nets will become extremely good at specific skills, as demonstrated in the video. As long as the inputs are very constrained and the outputs don't need to be very complex, we'll find more than enough uses to fuel a boom in the field. As fortunes are poured in, we will see huge leaps over the next 20 years, and I wouldn't be surprised if we see human-level intelligence in 30-40 years. It will happen quickly once we know how to solve the problems above.