Neuromorphic Computing: What Is It and Where Are We At?

For the last hundred or so years, collectively as humanity, we’ve been dreaming, thinking, writing, singing, and producing movies about a machine that could think, reason, and be intelligent in a similar way to us. Stories such as Samuel Butler’s “Erewhon” (1872), Edgar Allan Poe’s “Maelzel’s Chess Player,” and the 1927 film “Metropolis” explored the idea that a machine could think and reason like a person, not in a magical or fantastical way. They drew from the automata of ancient Greece and Egypt and combined them with the notions of philosophers such as Aristotle, Ramon Llull, Hobbes, and thousands of others.

Their notions of the human mind led them to believe that all rational thought could be expressed as algebra or logic. Later, the arrival of circuits, computers, and Moore’s law led to continual speculation that human-level intelligence was just around the corner. Some have heralded it as the savior of humanity, while others portray a calamity in which a second intelligent entity rises to crush the first (humans).

The flame of computerized artificial intelligence has burned brightly a few times before, such as in the 1950s, 1980s, and 2010s. Unfortunately, both prior AI booms were followed by an “AI winter,” in which the field fell out of fashion for failing to deliver on expectations. These winters are often blamed on a lack of computing power, an inadequate understanding of the brain, or hype and over-speculation. In the midst of our current AI summer, most AI researchers focus on using the steadily increasing compute available to increase the depth of their neural nets. Despite their name, neural nets are only loosely inspired by the neurons in the brain and share only surface-level similarities with them.

Some researchers believe that human-level general intelligence can be achieved by simply adding more and more layers to these simplified convolutional systems fed by an ever-increasing trove of data. This point is backed up by the incredible things these networks can produce, which get a little better every year. However, despite the wonders deep neural nets produce, they still specialize and excel at just one thing. A superhuman Atari-playing AI cannot make music or reason about weather patterns without a human adding those capabilities. Furthermore, the quality of the input data dramatically impacts the quality of the net, and their ability to make inferences is limited, producing disappointing results in some domains. Some think that recurrent neural nets will never gain the sort of general intelligence and flexibility that our brains offer.

However, some researchers are trying to create something more brainlike by, you guessed it, more closely emulating the brain. Given that we are in a golden age of computer architecture, now seems like the time to create new hardware. This type of hardware is known as neuromorphic hardware.

What Is Neuromorphic Computing?

Neuromorphic is a fancy term for any software or hardware that tries to emulate or simulate a brain. While there are many things we don’t yet understand about the brain, we have made some wonderful strides in the past few years. One generally accepted theory is the columnar hypothesis, which states that the neocortex (widely thought to be where decisions are made and information is processed) is formed from millions of cortical columns, or cortical modules. Other parts of the brain, such as the hippocampus and the structures of the hindbrain, have recognizable structures of their own that differ from the neocortex.

The neocortex is rather different from the hindbrain in terms of structure. There are general areas where we know specific functions occur, such as vision and hearing, but the actual brain matter looks very similar from a structural point of view across the neocortex. From a more abstract point of view, the vision section is almost identical to the hearing section, whereas the hindbrain portions are unique and structured based on function. This insight led Vernon Mountcastle to speculate that a single central algorithm or structure drives all processing in the neocortex. A cortical column is a distinct unit: it generally has six layers and is much more densely connected vertically between its layers than horizontally to neighboring columns. This means that a single unit could be copied repeatedly to form an artificial neocortex, which bodes well for very-large-scale integration (VLSI) techniques. Our fabrication processes are particularly well suited to creating a million copies of something in a small surface area.

While a recurrent neural network (RNN) is fully connected, a real brain is picky about what gets connected to what. A common model of visual processing is a layered pyramid, with the bottom layer extracting simple features and each subsequent layer extracting progressively more abstract ones. In reality, most analyzed brain circuits show a wide variety of hierarchies, with connections looping back on themselves. Feedback and feedforward connections link multiple levels within the hierarchy. This “level skipping” is the norm, not the exception, suggesting that this structure could be key to the properties that our brains exhibit.
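To make the contrast concrete, here is a minimal sketch in Python/NumPy. Everything in it is invented for illustration (the four-level hierarchy, the layer sizes, and the connection densities); it just shows how a sparse hierarchy with feedforward, feedback, and level-skipping links differs from the everything-to-everything wiring of a fully connected net.

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [8, 8, 8, 8]              # four hypothetical levels in the hierarchy
n = sum(layer_sizes)
offsets = np.cumsum([0] + layer_sizes)  # start index of each layer

def block(mask, src, dst, density):
    """Randomly connect a fraction of neurons from layer src to layer dst."""
    rows = slice(offsets[src], offsets[src + 1])
    cols = slice(offsets[dst], offsets[dst + 1])
    mask[rows, cols] = rng.random((layer_sizes[src], layer_sizes[dst])) < density

# A fully connected recurrent net: every unit talks to every other unit.
fully_connected = np.ones((n, n), dtype=bool)

# A brain-like hierarchy: mostly feedforward, some feedback, some level skipping.
hierarchical = np.zeros((n, n), dtype=bool)
for i in range(len(layer_sizes) - 1):
    block(hierarchical, i, i + 1, density=0.30)  # feedforward to the next level
    block(hierarchical, i + 1, i, density=0.10)  # feedback to the level below
block(hierarchical, 0, 2, density=0.05)          # "level skipping" upward
block(hierarchical, 3, 1, density=0.05)          # long-range feedback downward

print("fully connected synapses:", int(fully_connected.sum()))
print("hierarchical synapses:   ", int(hierarchical.sum()))
```

The specific numbers don’t matter; the point is that which levels talk to which, and how densely, is a structural choice, one a vanilla fully connected network never makes.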

This leads us to the next point: how the neurons actually fire. Most neuromorphic networks use a leaky integrate-and-fire neuron model. In an RNN, each node emits a signal at every timestep, whereas a real neuron only fires once its membrane potential reaches a threshold (reality is a little more complex than that). More biologically accurate artificial neural networks (ANNs) with this property are known as spiking neural networks (SNNs). The leaky integrate-and-fire model isn’t as biologically accurate as other models, such as the Hindmarsh-Rose or Hodgkin-Huxley models, which simulate neurotransmitter chemistry and synaptic gaps, but those are much more expensive to compute. Given that the neurons aren’t always firing, numbers need to be represented as spike trains, with values encoded as rate codes, time to spike, or frequency codes.
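Here’s a minimal leaky integrate-and-fire neuron as a rough illustration, written in plain Python. The time constant, threshold, and input currents are arbitrary values chosen for the sketch, not parameters from any particular chip or paper.

```python
# Arbitrary illustrative parameters -- not taken from any real neuromorphic system.
dt = 1.0          # timestep, in arbitrary "ms"
tau = 20.0        # membrane time constant
v_rest = 0.0      # resting potential
v_thresh = 1.0    # firing threshold
v_reset = 0.0     # potential immediately after a spike

def lif_run(input_current, steps=200):
    """Simulate one leaky integrate-and-fire neuron driven by a constant current.

    Returns the timesteps at which the neuron spiked.
    """
    v = v_rest
    spike_times = []
    for t in range(steps):
        # Leak back toward rest while integrating the input.
        v += (-(v - v_rest) + input_current) * (dt / tau)
        if v >= v_thresh:          # threshold crossed: emit a spike, then reset
            spike_times.append(t)
            v = v_reset
    return spike_times

# Rate coding: a larger input value shows up as a higher spike rate.
for current in (1.2, 2.0, 4.0):
    spikes = lif_run(current)
    print(f"input {current:.1f} -> {len(spikes)} spikes in 200 steps")
```

Note how the same neuron encodes a scalar value purely as a spike rate; a time-to-spike code would instead read off how early the first spike arrives.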

Where Are We At in Terms of Progress?

A few groups have been emulating neurons directly, such as the OpenWorm project, which emulates the 302 neurons of the roundworm Caenorhabditis elegans. The current goal of many of these projects is to keep increasing the neuron count, improving simulation accuracy, and improving program efficiency. For example, SpiNNaker is a low-grade supercomputer at the University of Manchester that simulates a billion neurons in real time. The project reached a million cores in late 2018, and in 2019 a large grant was announced to fund the construction of a second-generation machine (SpiNNcloud) in Dresden, Germany.

Many companies, governments, and universities are looking at exotic materials and techniques to create artificial neurons, such as memristors, spin-torque oscillators, and magnetic Josephson junction (MJJ) devices. While many of these seem incredibly promising in simulation, there is a large gap between a dozen neurons in simulation (or on a small development board) and the thousands, if not billions, required to achieve true human-level abilities.

Shown in 2019, this 8-million-neuron neuromorphic system used 64 Intel Loihi chips. (Image source: Tim Herman/Intel)

Other groups, such as IBM, Intel, BrainChip, and various universities, are trying to create hardware-based SNN chips with existing CMOS technology. One such platform, Intel’s Loihi chip, can be meshed into larger systems. Early last year (2020), Intel researchers used 768 Loihi chips in a grid to implement a nearest-neighbor search. The 100-million-neuron machine showed promising results, offering lower latency than systems built on large precomputed indexes while allowing new entries to be inserted in O(1) time.
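To see why O(1) insertion is worth bragging about, here’s a tiny conventional-software sketch of the usual trade-off. It has nothing to do with how Loihi actually works (the Intel system encodes patterns as spikes in hardware); it just contrasts a brute-force store, which inserts instantly but scans everything at query time, with index-based approaches that pay that cost up front when the index is built or updated.

```python
import numpy as np

class BruteForceNN:
    """Naive nearest-neighbor store: O(1) insert, O(n) query.

    Purely illustrative; a precomputed index (k-d tree, IVF, etc.)
    flips the trade-off, making queries fast but insertions costly.
    """

    def __init__(self):
        self.vectors = []                 # plain list: appending is O(1)

    def insert(self, v):
        self.vectors.append(np.asarray(v, dtype=float))

    def query(self, q):
        # Every query scans all stored vectors.
        dists = [np.linalg.norm(v - q) for v in self.vectors]
        best = int(np.argmin(dists))
        return best, dists[best]

rng = np.random.default_rng(1)
store = BruteForceNN()
for _ in range(1000):
    store.insert(rng.random(4))           # constant work per new entry
index, dist = store.query(rng.random(4))
print(f"nearest entry {index} at distance {dist:.3f}")
```

The interesting claim for the Loihi demo is that it keeps insertion as cheap as the brute-force store while bringing query latency down to, or below, what the index-based systems manage.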

The Human Brain Project is a large-scale project working to further our understanding of biological neural networks. One of its systems, BrainScaleS-1, is a wafer-scale machine that relies on analog and mixed-signal emulation of neurons. Twenty 8″ wafers make up BrainScaleS-1, each wafer hosting around 200,000 simulated neurons. A follow-up system, BrainScaleS-2, is currently in development, with an estimated completion date of 2023.

The Blue Brain Project is a Swiss-led research effort to simulate a biologically detailed mouse brain. While a mouse brain is not a human brain, the papers and models the project has published are invaluable in furthering our progress towards useful neuromorphic ANNs.

The consensus is that we are very, very early in our efforts towards creating something that can do meaningful amounts of work. The biggest roadblock is that we still don’t know much about how the brain is connected and how it learns. And once you start getting to networks of this size, the biggest challenge becomes how to train them.

Do We Even Need Neuromorphic?

A counterargument can be made that we don’t even need neuromorphic hardware. Techniques like inverse reinforcement learning (IRL) have the machine learn the reward function itself rather than just the network that acts on it. By simply observing behavior, you can model what the behavior intends to do and recreate it via a learned reward function under which the expert (the actor being observed) does best. Further research is being done on handling suboptimal experts, inferring both what they were doing and what they were trying to do.
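As a flavor of the idea, here’s a drastically simplified, purely illustrative sketch in Python. The toy world, the “expert” trajectories, and the reward-recovery rule are all invented; real IRL algorithms (max-margin, maximum-entropy, and so on) are considerably more involved. The sketch assumes the reward is linear in state features and recovers weights from the feature expectations of the observed behavior.

```python
import numpy as np

n_states = 5

def features(s):
    """One-hot feature vector for a state in a tiny 5-state world."""
    f = np.zeros(n_states)
    f[s] = 1.0
    return f

# Hypothetical expert demonstrations: the expert spends its time in the
# higher-numbered states (we don't know why; that's what we infer).
expert_trajectories = [
    [2, 3, 4, 4, 4],
    [3, 4, 4, 3, 4],
    [1, 2, 3, 4, 4],
]

# Expert feature expectations: average feature vector over visited states.
mu_expert = np.mean(
    [features(s) for traj in expert_trajectories for s in traj], axis=0
)

# Crudest possible reward recovery: weight each feature by how often the
# expert's behavior exhibits it. States the expert frequents score high.
reward_weights = mu_expert

def learned_reward(s):
    return float(reward_weights @ features(s))

print([round(learned_reward(s), 2) for s in range(n_states)])
print("best state under learned reward:",
      int(np.argmax([learned_reward(s) for s in range(n_states)])))
```

Crude as it is, it captures the inversion: observed behavior goes in, a reward function comes out, and an agent maximizing that reward ends up preferring the same states the expert did.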

Many will continue to push forward with the simplified networks we already have, paired with better reward functions. For example, a recent IEEE article about copying parts of a dragonfly brain with a simple three-layer neural net showed great results from a methodical, informed approach. While the trained network doesn’t perform as well as dragonflies in the wild, it is hard to say whether that is simply down to the dragonfly’s superior flight capabilities compared to other insects.

Each year we see deep learning techniques producing better and more powerful results. It seems that in just one or two papers, a given area goes from interesting to amazing to jaw-dropping. Given that we don’t have a crystal ball, who knows? Maybe if we just continue on this path, we will stumble on something more generalizable that can be adapted into our existing deep learning nets.

What Can a Hacker Do Now?

If you want to get involved in neuromorphic efforts, many of the projects mentioned in this article are open source, with their datasets and models available on GitHub and other code-hosting services. There are incredible open-source projects out there, such as NEURON from Yale or the NEST SNN simulator. Many folks share their experiments on OpenSourceBrain. You could even create your own neuromorphic hardware, like the 2015 Hackaday Prize project NeuroBytes. If you want to read up on more, this survey of neuromorphic hardware from 2017 is an incredible snapshot of the field as of that time.
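To give a feel for the barrier to entry, here’s roughly what a “hello world” looks like in NEST’s Python interface, along the lines of its introductory tutorials: one leaky integrate-and-fire neuron driven by Poisson noise. Treat the details as approximate; model and device names differ between NEST versions (older releases call the recorder a spike_detector, for instance), and the rate and weight below are just plausible tutorial-style values.

```python
import nest  # PyNEST, the Python interface to the NEST simulator

nest.ResetKernel()

# One leaky integrate-and-fire neuron, a Poisson noise source, and a recorder.
neuron = nest.Create("iaf_psc_alpha")
noise = nest.Create("poisson_generator", params={"rate": 80000.0})
recorder = nest.Create("spike_recorder")  # "spike_detector" in NEST 2.x

# Drive the neuron with the noise source and record its output spikes.
nest.Connect(noise, neuron, syn_spec={"weight": 1.2})
nest.Connect(neuron, recorder)

nest.Simulate(1000.0)  # one second of biological time, in ms

events = recorder.get("events")
print(f"{len(events['times'])} spikes in one simulated second")
```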

While it is still a long road ahead, the future of neuromorphic computing looks promising.
