Intel’s Loihi: A Neuromorphic Chip Modelled on the Human Brain

2 weeks ago by Lianne Frith

Neuroscience, with its huge volume of complex interactions, offers enormous insight into the potential of hardware architectures and algorithms. The behaviors and properties of biological neurons have, to date, proven difficult to model, let alone replicate. However, while the vast majority of biological neuron interactions will remain a mystery for some time, simplified abstractions of neural networks are now possible.

Neuromorphic computing turns computer architecture on its head to create chips that function like the human brain. Using spikes and synapses, the chips are being championed as a solution to diverse and challenging machine learning problems.

By being able to self-organize and make decisions based on patterns and associations, the chips could yield great practical value in the fields of robotics, manufacturing and many other functions that require continuous adaptation in response to real-world data.

The Loihi Neuromorphic Chip

The Loihi neuromorphic chip, designed by Intel, is the result of the union of neuromorphic advances, computational neuroscience, and advanced algorithms.

In short, the Loihi chip mimics the functioning of neurons and synapses in the brain, albeit in a simplified form. In the same way that the human brain creates neural pathways over time to build our problem-solving abilities, Loihi has the power to learn. The chip is the first of its kind to combine neuromorphic features with efficiency and on-chip learning potential. The initial test chip, released back in September 2017, has been the focus of Intel’s Neuromorphic Research Community (INRC) program. The fifth chip in Intel’s neuromorphic family, Loihi has been pitched as a key driver in the use of probabilistic computing in artificial intelligence. Its ability to mimic the brain’s basic mechanics has the potential to make machine learning faster and more efficient while requiring less computing power.




Loihi’s Key Features

Loihi uses asynchronous spiking neural networks (SNNs) to implement adaptive, self-modifying, event-driven learning with high efficiency. This means that, instead of continuously manipulating signal values, the chip sends discrete spikes along active synapses.
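The spiking idea can be sketched with the classic leaky integrate-and-fire model: a neuron accumulates incoming spikes into a decaying membrane potential and only fires when a threshold is crossed. This is a simplified illustration of event-driven spiking in general, not Loihi's actual (more elaborate) neuron model.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a sketch of event-driven
# spiking, not Intel's actual Loihi neuron model.

def simulate_lif(input_spikes, leak=0.9, weight=0.4, threshold=1.0):
    """Return the time steps at which the neuron fires.

    input_spikes: list of 0/1 values, one per time step.
    """
    v = 0.0                           # membrane potential
    out = []
    for t, s in enumerate(input_spikes):
        v = v * leak + s * weight     # decay, then integrate incoming spike
        if v >= threshold:            # threshold crossing -> emit a spike
            out.append(t)
            v = 0.0                   # reset after firing
    return out

print(simulate_lif([1, 1, 1, 0, 1, 1, 1, 1]))  # → [2, 6]
```

Note that between spikes nothing happens except a passive decay, which is why event-driven hardware can stay idle (and save energy) whenever the network is quiet.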

The 60 mm² chip has around 130,000 artificial neurons and 130 million synapses, fabricated on Intel’s 14 nm process technology. The chip advances the state of the art in modeling spiking neural networks in silicon.

The Loihi test chip’s key features include:

  • Many-core mesh—128 neuromorphic cores, three embedded x86 processor cores, and off-chip communication interfaces allow each neuron to communicate with thousands of others.
  • Hierarchical connectivity—the chip is able to exploit localized sub-networks within the mesh and significantly reduce the chip-wide connectivity and synaptic resources necessary to map the networks.
  • Programmable learning engine—within each neuromorphic core, the learning engine can adapt synaptic state variables over time based on historical spike activity.
  • Spiking neural networks—allowing one or more neurons to send impulses at any given time to surrounding neurons through synapses.
  • Digital asynchronous network—all logic is functionally deterministic, and spikes are bundled into packets so they can be generated, directed, and received in an event-driven manner.
  • High algorithmic efficiency—the ability to develop and test several algorithms for problems such as path planning, constraint satisfaction, sparse coding, and dynamic pattern learning.
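The learning engine described above adjusts synapses based on the history of spikes. One widely used rule of this kind is spike-timing-dependent plasticity (STDP): a synapse strengthens when the presynaptic spike precedes the postsynaptic one and weakens otherwise. The sketch below is illustrative of that general principle; Loihi's engine actually runs programmable rules over filtered spike traces, and the parameter values here are arbitrary.

```python
import math

# Pairwise STDP sketch: strengthen a synapse when the presynaptic spike
# precedes the postsynaptic spike, weaken it otherwise. Illustrative only;
# parameter values are arbitrary, not taken from Loihi.

def stdp_dw(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for one pre/post spike pairing."""
    dt = t_post - t_pre
    if dt > 0:                               # pre before post: potentiation
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:                             # post before pre: depression
        return -a_minus * math.exp(dt / tau)
    return 0.0

w = 0.5
w += stdp_dw(t_pre=10, t_post=15)   # causal pairing -> weight grows
w += stdp_dw(t_pre=30, t_post=25)   # anti-causal pairing -> weight shrinks
print(round(w, 4))                  # → 0.4844
```

The key property the bullet list points at is that the update depends only on locally available spike times, which is what makes such rules practical to run on-chip.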


What Makes the Loihi Chip Stand Out?

While the Loihi chip may be less flexible and powerful than the leading general-purpose chips, its specialization offers excellent potential. Although the best AI algorithms already use artificial neural networks, relying on parallel processing, the Loihi chip takes this to a new level by etching the workings of neural networks in silicon.

Spiking neural networks are poorly served by conventional architectures, yet when they are flexible and well provisioned, as is the case with Loihi, they are able to support a broad range of workloads. Loihi is the first fully integrated SNN chip that doesn’t rely on storing synapses in dense matrix forms, which has historically restrained the possibilities for programmers.
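A back-of-envelope comparison shows why avoiding dense matrix storage matters: a dense weight matrix for N neurons needs N² entries regardless of how many connections actually exist, while a sparse, adjacency-style layout needs only one entry per real synapse. The figures below are rough assumptions for illustration, not Loihi's actual memory layout.

```python
# Why sparse synapse storage matters: for N neurons a dense weight matrix
# needs N*N entries, while an adjacency-list layout needs one entry per
# actual synapse. Rough illustration, not Loihi's real data layout.

def dense_entries(n_neurons):
    return n_neurons * n_neurons

def sparse_entries(n_neurons, synapses_per_neuron):
    return n_neurons * synapses_per_neuron

n = 130_000   # roughly Loihi's neuron count
k = 1_000     # assumed average synapses per neuron (~130M total)

print(dense_entries(n))        # 16,900,000,000 entries
print(sparse_entries(n, k))    # 130,000,000 entries -- 130x fewer
```

Under these assumptions the sparse layout is two orders of magnitude smaller, which is what frees programmers from the dense-matrix constraint the paragraph describes.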

The chip’s flexible learning engine enables programmers to experiment with various learning methods while all learning takes place on-chip. Loihi has been used to solve the shortest path problem of a weighted graph as well as a one-dimensional, non-Markovian sequential decision-making problem.
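The shortest-path result has an intuitive spiking interpretation: inject a spike at the source and let it propagate through the graph, with each edge's weight acting as a transmission delay; the first time a spike reaches a node equals that node's shortest distance. The event-queue sketch below mimics that wavefront idea in ordinary software; it is not Intel's implementation, and the example graph is made up.

```python
import heapq

# Spike-wavefront intuition for shortest path: edge weights act as spike
# delays, and a node's first-spike time equals its shortest distance from
# the source. Software sketch only, not Intel's on-chip algorithm.

def spike_shortest_path(graph, source):
    """graph: {node: [(neighbor, delay), ...]}; returns first-spike times."""
    arrival = {}
    events = [(0, source)]              # (spike time, node)
    while events:
        t, node = heapq.heappop(events)
        if node in arrival:             # node already spiked; ignore echoes
            continue
        arrival[node] = t               # first spike == shortest distance
        for nbr, delay in graph.get(node, []):
            if nbr not in arrival:
                heapq.heappush(events, (t + delay, nbr))
    return arrival

g = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 4)], "C": [("D", 1)]}
print(spike_shortest_path(g, "A"))      # → {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```

On neuromorphic hardware this exploration happens physically in parallel across the mesh rather than through a serial event queue, which is where the speed and efficiency claims come from.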

The development of Loihi’s algorithmic potential is really just in its infancy, with only a fraction of the available resources and features tested so far. However, with energy efficiency up to 1,000 times that of general-purpose computing, the team at Intel aims to scale up its research into Loihi’s networks.

Intel’s Neuromorphic Research Program


Intel’s research community, including academic, government, and corporate research groups, is set to tackle the challenges facing the adoption of neuromorphic architectures into mainstream computing. With the Loihi chip at the centre of its research and development, the program aims to deliver findings that will drive the technology, and eventually its commercialization, forward.

In October of last year, the INRC gathered to discuss the progress made so far with the chip and the following developments were reported:

  • Audio keyword recognition—Loihi may provide up to 50 times better energy efficiency, depending on the architecture it is compared against.
  • Long short-term memory networks—spiking neural networks running on neuromorphic hardware promise significantly improved efficiency.
  • Signal restoration and identification—based on the mammalian olfactory system, the algorithms have demonstrated state-of-the-art learning and classification performance.

Loihi’s Future Potential

With researchers able to use Intel’s software development kit to develop their algorithms, software, and applications within its cloud service, there is a huge potential for further progress.

Members are using the hardware for research in areas such as robotics and have access to ‘Kapoho Bay’, a USB form-factor device that provides an interface to peripherals such as the DAVIS 240C DVS silicon retina camera. Over the course of this year, both Intel and the INRC members are expected to contribute much of their software and research to the public domain. When this happens, interest will grow, and more real-world applications are bound to arise.

However, while Intel works toward its goal of building a Loihi-based multi-chip system with more than 100 billion synapses, the relative synaptic complexity of the human brain is still a long way off.

Although Intel’s plans approach the 125 billion synapses of the common mouse, true artificial intelligence on the scale of the human brain’s roughly 100 trillion synapses is currently far beyond the technology’s abilities.