For context, the speed of a processor refers to its clock frequency: the number of cycles it completes every second, measured in millions or billions of hertz (MHz or GHz). Each cycle is an opportunity to execute instructions, so a higher clock frequency generally means more computations per second.
The higher the clock frequency, the faster a computer can perform tasks and handle resource-intensive applications. Faster processors allow computers to multi-task seamlessly, run complex algorithms, and analyse large volumes of data in real time. This year especially, leading chipmakers are employing different strategies to boost processor efficiency—such as increasing signal speed and integrating new software (e.g. machine-learning-based optimisation).
Processor speed can also be improved through parallel computing (aka parallelism) at advanced process nodes. In parallelism, a processor breaks a task down into several discrete parts that are executed simultaneously.
Each part is in turn broken into a series of instructions that execute concurrently on separate cores. To fully exploit the potential of hardware parallelism, there is a growing emphasis on software parallelism, namely programs designed to split tasks into discrete computations that run on separate processors—while, ideally, communicating with each other as little as possible.
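The split-into-independent-parts idea above can be sketched in a few lines of Python. This is a toy illustration, not production code: a large summation is divided into contiguous chunks that share no state, each chunk runs in its own worker process, and the only communication is each worker returning its partial result.

```python
# Toy data parallelism: split a big summation into independent chunks
# and compute them on separate worker processes.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """Sum the integers in [start, stop) -- one independent sub-task."""
    start, stop = bounds
    return sum(range(start, stop))

def make_chunks(n, workers):
    """Split the range [0, n) into `workers` contiguous pieces."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # last chunk absorbs any remainder
    return chunks

def parallel_sum(n, workers=4):
    # Each chunk shares no state with the others, so the workers need
    # to communicate only their final partial results.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, make_chunks(n, workers)))

if __name__ == "__main__":
    print(parallel_sum(1_000_000))  # equals sum(range(1_000_000))
```

On a multi-core machine the chunks genuinely run at the same time; the speed-up is limited by how evenly the work divides and by the overhead of starting workers and collecting results.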
Chipmakers are achieving such a feat by focusing on multi-core processors over single-core processors. But with increasing design complexity, parallelism requires expensive power management to prevent the individual cores from overheating (particularly for architectures below 14 nm).
Semiconductor companies, such as Arteris and NetSpeed Systems (acquired by Intel in 2018), are achieving performance improvements by sizing the individual cores differently—a heterogeneous design that achieves more than a single-core system, but consumes less power than a multi-core processor with identical cores.
Building Advanced Software Stacks
An effective means for boosting processor speeds—while keeping power requirements low—involves building more effective software stacks. Conventionally, software has been built around general-purpose hardware, with performance boosts coming from accelerators.
A new trend in the industry is for manufacturers to build market- and application-specific hardware, upon which software stacks are layered to define the ideal specifications. That said, the software isn't exactly being used to augment or replace hardware resources, but rather to exploit their capabilities. (For example, a processor's performance-monitoring units can be read by software drivers, which can then turn one of the cores off to lower power consumption and prevent overheating.)
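The driver behaviour in the parenthetical example can be sketched as a simple policy function. Everything here is a hypothetical illustration—the headroom factor and core counts are invented, and a real driver would act through OS and firmware interfaces rather than plain Python—but it shows the shape of the decision the software makes.

```python
# Hypothetical sketch of a software power-management policy: given a
# utilisation sample (as a driver might read from performance-monitoring
# units), decide how many cores to keep powered on.
import math

def cores_to_keep_online(utilisation, total_cores):
    """Return how many cores to leave powered, given demand.

    utilisation: fraction of total capacity currently in use (0.0-1.0).
    """
    # Cover current demand with ~25% headroom (an invented figure),
    # but never power down the last core.
    needed = math.ceil(utilisation * total_cores * 1.25)
    return max(1, min(total_cores, needed))
```

For instance, on an idle 8-core chip the policy keeps a single core online, and only as utilisation climbs does it bring the remaining cores back up.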
Artificial intelligence concept graphic: electronic connections that collectively resemble a brain. Image courtesy of VPNs R Us, via Flickr.
Leveraging Artificial Intelligence
AI is a promising means for increasing the clock speed of processors. Machine learning can improve hardware acceleration using software libraries and algorithms to optimise individual cores. This also presents manufacturers with an opportunity to solve the challenges of implementation, sign-off, and verification. Machine learning improves chip design by using data mining to understand design complexity, train predictive models at the design level, and analyse problems that traditional algorithms are unable to handle.
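The "mine data, then train a predictive model at the design level" loop can be illustrated with a deliberately tiny example. Both the feature (gate count) and the target (a congestion score) below are synthetic stand-ins invented for illustration; real EDA flows use far richer features and far more capable models than a least-squares line.

```python
# Toy version of the mine-then-predict loop: fit a least-squares line
# to synthetic "mined" design data, then use it to predict a property
# of an unseen design.
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Synthetic data: (gate count in thousands, routing-congestion score).
gates = [10, 20, 40, 80, 160]
congestion = [1.1, 2.0, 4.2, 7.9, 16.1]

slope, intercept = fit_line(gates, congestion)

def predict_congestion(gate_count):
    """Estimate congestion for a design before running the full flow."""
    return slope * gate_count + intercept
```

The value of the real thing is the same as the value of the toy: a cheap model trained on past designs lets the tool flag likely problem designs without paying for a full implementation run each time.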
Companies such as NetSpeed Systems and Cadence are using machine learning to mine data from the verification phase and utilise that information to build faster processors. Data mining, alongside computer vision, is also being implemented in several applications to increase processor speed while minimising power consumption. In 2016, Intel acquired Movidius, a company that specialises in producing low-power, high-performance SoC processors for computer vision applications, such as autonomous cars.
Processors Blazing the High-Speed Trail
When it comes to the processors pushing computing to its limits of speed, efficiency, and power consumption, Intel and Advanced Micro Devices (AMD) are currently dominating the chipset market. Below are 3 next-gen processors that are leading the way in 2019.
Intel’s Ice Lake Processors
Intel’s first 10 nm processors, named ‘Ice Lake’, form a range of ultra-high-performance chipsets that will enable PC companies to create thinner and more powerful devices. Ice Lake represents the 10th generation of Intel Core technology and a significant upgrade from its earlier 14 nm processors. Intel says the new processors’ Gen11 integrated graphics will deliver a two-fold performance boost over Gen9 graphics (although the CPUs will have only 4 cores).
Ice Lake processors, which are split into the U-series and Y-series, are based on the 10 nm Sunny Cove architecture, which is expected to achieve speeds in the teraflops (10¹² floating-point operations per second) range.
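A back-of-envelope way to interpret throughput figures like this uses the standard formula cores × clock × FLOPs per cycle per core. The example numbers below (4 cores, 3.9 GHz, 32 single-precision FLOPs per cycle via an AVX-512 fused multiply-add unit) are illustrative assumptions, not official Ice Lake specifications.

```python
# Theoretical peak throughput: cores x clock (GHz) x FLOPs/cycle/core
# gives GFLOPS. All input figures below are assumed for illustration.
def peak_gflops(cores, clock_ghz, flops_per_cycle_per_core):
    return cores * clock_ghz * flops_per_cycle_per_core

print(peak_gflops(4, 3.9, 32))  # ~499 GFLOPS, approaching half a teraflop
```

Real workloads land well below such peaks, which assume every core issues a full-width fused multiply-add on every cycle; the formula is still useful for sanity-checking headline throughput claims.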
An AMD Ryzen processor. Image courtesy of Bigstock.
Intel’s Core i9-9900K
The Intel Core i9-9900K is easily one of the fastest consumer processors on the planet. It’s an octa-core processor with 16 threads, a base clock speed of 3.6 GHz, a boost clock speed of 5 GHz, and a 95 W TDP (thermal design power).
AMD’s Ryzen 9 3900X Processor
Arguably the most powerful mainstream processor in the world, the Ryzen 9 3900X is a 12-core processor with 24 threads, a base clock speed of 3.8 GHz, and a boost clock speed of 4.6 GHz. Its 105 W TDP sits slightly above the Intel Core i9-9900K’s 95 W.
Ultimately, leading chipmakers are striving to commercialise faster processors to keep up with the current and predicted trends in technology—all while retaining a sizable portion of the market share.
As Moore's law arguably draws to an end, we will continue to see novel techniques for chip design that will push the envelope for performance and efficiency.