A Short Timeline on the Progression of AI
In 1956, computing pioneers Allen Newell and Herbert Simon demonstrated the Logic Theorist, a program designed to perform automated reasoning that became known as the first artificial intelligence program. The late 1950s to early 1960s ushered in the production of second-generation computers and the world's first industrial robot: 'Unimate'.
Third-generation computers then followed between the late 1960s and early 1970s, boosting AI capabilities significantly. By the 1980s, AI hardware and software, such as machine vision systems for cameras, computers, and automated assemblies, had become commercially available. From the late 1980s to date, computing capabilities have exploded thanks to the miniaturisation of semiconductors (as predicted by Moore's Law). Now, however, scientists are exploring more novel materials to overcome many of the limitations of conventional computing technology. This is where neuromorphic systems come in.
A pictorial representation of human brain activity. Image Credit: Pixabay.
Neuromorphic Systems and Their Challenges
The first generation of AI systems utilised rule-based logic to make decisions from a defined problem domain. They were suitable for monitoring and improving the efficiency of processes, but they lacked the ‘intuitiveness’ of the human brain. Today, the main purpose of neuromorphic computing research is to increase the sensory and perceptual capabilities of machines using deep learning networks, computer vision, and other frameworks.
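To make the contrast concrete, a first-generation rule-based system can be sketched in a few lines. This is an illustrative example, not code from any system mentioned above: the rules, sensor names, and thresholds are all hypothetical.

```python
# Illustrative sketch of first-generation, rule-based AI: each rule is a
# (condition, action) pair evaluated against a fixed, predefined problem
# domain. There is no learning and no adaptation to unfamiliar inputs.
RULES = [
    (lambda s: s["temperature"] > 90, "reduce furnace power"),
    (lambda s: s["pressure"] > 8.0, "open relief valve"),
    (lambda s: s["throughput"] < 100, "increase feed rate"),
]

def decide(sensor_readings):
    """Fire every rule whose condition matches the current readings."""
    return [action for condition, action in RULES if condition(sensor_readings)]

print(decide({"temperature": 95, "pressure": 7.2, "throughput": 80}))
# -> ['reduce furnace power', 'increase feed rate']
```

Anything outside the hand-written rules simply triggers nothing, which is exactly the lack of 'intuitiveness' described above.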
Neuromorphic systems follow the model of the human brain, storing and processing vast amounts of information in parallel (the parallel-computing implications are discussed later on). The main challenge facing neuromorphic artificial intelligence, however, lies in designing artificial neural networks that can adapt to natural data events, which are rife with randomness and uncertainty. Biological brains, by contrast, are highly flexible and adapt readily to such unstructured stimuli.
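The basic building block of most brain-inspired hardware is the spiking neuron. A minimal sketch of the standard leaky integrate-and-fire model, with illustrative parameter values chosen here for demonstration, shows the idea:

```python
import math

def lif_neuron(inputs, tau=10.0, threshold=1.0, dt=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks toward
    rest, integrates incoming current, and emits a spike (then resets)
    whenever it crosses the firing threshold."""
    v, spikes = 0.0, []
    for t, current in enumerate(inputs):
        v = v * math.exp(-dt / tau) + current  # leak, then integrate
        if v >= threshold:
            spikes.append(t)                   # spike ...
            v = 0.0                            # ... and reset
    return spikes

# A steady sub-threshold drive accumulates into periodic spikes.
print(lif_neuron([0.3] * 20))  # -> [3, 7, 11, 15, 19]
```

Information is carried by the timing of spikes rather than by clocked binary values, which is one reason such elements can operate with very low energy per event.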
Improved Materials Show Promise for Brain-like Computing
Creating algorithms that can respond to natural events as humans do also requires broad perception and understanding of the physical environment, which in turn demands the storage and swift processing of copious amounts of data in real time (as in autonomous vehicles, for example). The next sections describe new materials that are already showing promise.
In research reported by Science Daily, scientists from Tohoku University in Japan and the University of Cambridge found a way to improve the response time of neuromorphic devices using unique ion-conducting polymer materials. Controlling response time had puzzled researchers for years; it is a critical element in mimicking synapses in the human brain, which operate at high speed with low energy consumption.
Neural network concept. A graphic of multiple digital connections to represent neural networking. Image Credit: Freepik.
The joint universities' new method combines the polymers PSS-Na and PEDOT:PSS (the PSS, poly(styrenesulfonate), enhances ion diffusivity in the active layer of a neuromorphic element). PSS-Na, a low-cost polymer, transports ions only, whereas PEDOT:PSS transports both ions and electrons. By blending these two polymers, and thereby increasing the rate of ion diffusion in the active layer, the researchers improved the measured computational response time significantly.
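The physical intuition can be sketched with the standard diffusion timescale, tau ~ L^2 / D, for ions crossing an active layer of thickness L with effective diffusivity D. The layer thickness and diffusivity values below are made-up placeholders, not figures from the study; the point is only the scaling, i.e. that raising D (as blending PSS-Na into PEDOT:PSS aims to do) shortens the response time:

```python
# Illustrative only: characteristic ion-diffusion time across an active layer.
# All numbers are hypothetical; the study's actual device parameters differ.
def diffusion_time(thickness_m, diffusivity_m2_s):
    """tau ~ L^2 / D: time for ions to diffuse across a layer of thickness L."""
    return thickness_m**2 / diffusivity_m2_s

L = 100e-9                       # 100 nm active layer (assumed)
for D in (1e-12, 1e-11, 1e-10):  # assumed diffusivities in m^2/s
    print(f"D = {D:.0e} m^2/s  ->  tau ~ {diffusion_time(L, D) * 1e6:.0f} us")
```

A tenfold increase in effective diffusivity gives a tenfold faster response, which is why engineering ion transport in the polymer blend pays off directly in device speed.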
This breakthrough not only revealed a new method of controlling the response speed of devices but also opened the near-term possibility of designing artificial neural networks from multiple neuromorphic elements.
Phase-change Photonic Materials
Researchers from several universities, including the University of Exeter and the University of Oxford, have shown in a study published in APL Materials that behavioural computer models could pave the way for the large-scale production of neuromorphic chips. Phase-change materials and integrated photonics can be combined and fabricated into photonic integrated circuits to produce 'brain-like' synapses capable of both supervised and unsupervised learning. However, physically realistic modelling of integrated phase-change photonic devices using conventional finite-element or finite-difference time-domain methods requires prodigious amounts of computing power (and a lot of time) to simulate even basic processes.
Such high power demands have now been reduced by the researchers' new approach to behavioural modelling, which combines three models: electromagnetic, thermal (heat transfer), and phase-change. Together, these models capture the essential characteristics of neuromorphic devices while producing near-instantaneous results.
As covered in the APL Materials study, the electromagnetic model calculates the amplitude and phase of optical propagation within the cell and the heat generated by optical absorption there. The thermal model determines the temperatures in the various layers of the photonic phase-change element. Finally, the phase-change model uses the JMAK (Johnson-Mehl-Avrami-Kolmogorov) approach to simulate read, write, and erase operations to and from the phase-change cell.
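The JMAK law itself is compact enough to sketch. It gives the crystallised fraction of the cell as a function of anneal time; the rate constant K and Avrami exponent n below are illustrative placeholders, not values from the study:

```python
import math

# Sketch of the JMAK (Johnson-Mehl-Avrami-Kolmogorov) crystallisation law.
# K (rate constant) and n (Avrami exponent) are assumed, illustrative values.
def jmak_fraction(t, K=0.2, n=3.0):
    """Crystallised fraction X(t) = 1 - exp(-K * t**n) after anneal time t."""
    return 1.0 - math.exp(-K * t ** n)

# Longer 'write' pulses crystallise more of the cell, changing its optical
# transmission; a melt-quench 'erase' pulse resets X back toward zero.
for t in (0.5, 1.0, 2.0, 4.0):
    print(f"t = {t:3.1f}  ->  X = {jmak_fraction(t):.3f}")
```

The sigmoidal shape of X(t) is what lets a single phase-change cell store a graded, synapse-like weight rather than just a binary value.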
Memristors and Memristive Materials
The conventional Von Neumann architecture for designing computer systems faces what is known as the 'Von Neumann bottleneck': the latency and additional power consumption incurred when data is shuttled between processors, memory units, and peripheral devices.
Neuromorphic architectures, on the other hand, maintain parallel connectivity between processor and memory, which eliminates the Von Neumann bottleneck. Memristive materials are among the most promising candidates for achieving high computational parallelism, fewer runtime errors, and high power efficiency at lower cost. Moreover, memristors are highly scalable because their production is compatible with the standard complementary metal-oxide-semiconductor (CMOS) fabrication process. Memristors can store and process vast amounts of data in real time, which makes them suitable for data-intensive applications such as the Internet of Things and Big Data.
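The in-memory parallelism can be illustrated with a memristor crossbar, where a matrix-vector multiply happens in the analogue domain: row voltages drive currents through programmed conductances, and each column wire sums them by Kirchhoff's current law. The conductance and voltage values below are arbitrary illustrative numbers, not a model of any specific device:

```python
import numpy as np

# Illustrative crossbar: stored weights are conductances G (siemens);
# applying row voltages V yields column currents I = G^T @ V in one
# analogue step, with no data shuttled between memory and processor.
G = np.array([[1.0, 0.2],
              [0.5, 0.8],
              [0.1, 0.4]])
V = np.array([0.3, 0.6, 0.9])  # input voltages on the three rows

I = G.T @ V                    # Kirchhoff summation on each column wire
print(I)                       # -> [0.69 0.9 ]
```

A digital machine would need one multiply-accumulate per weight; the crossbar performs them all simultaneously, which is the parallelism the paragraph above refers to.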
Human-like AI concept. Pictured: a robotic hand prepares to connect with a human hand—representing the increasing comparability between computer processing capabilities and human intelligence. Image Credit: Bigstock.
The Future Potential of Neuromorphic Systems
Neuromorphic systems attempt to replicate brain-like synapses and learning capabilities so that machines can perceive and respond to the natural world much as humans do. Given the R&D outlined above, together with rapid advances in data storage and processing and new, improved methods of implementing computer architectures, the mainstream application of machines capable of both supervised and unsupervised learning may well become a reality in the near future.