But before moving onto the Neoverse N1 technology itself, let’s start by looking at the limitations of datacentres, and why they have prompted both Arm’s development of the N1 and its partnership with Docker in the first place.
Introducing The Neoverse Platform and The Arm-Docker Partnership
The Neoverse N1 Platform (the latest entry in Arm’s Neoverse family, which was first introduced in October 2018) enables data processing on a far more scalable level than the traditional infrastructure offered by datacentres. ‘Neoverse N1’ is a catch-all name for a variety of heterogeneous computing elements—including the throughput-focused CPU, ‘Neoverse E1’—that form Arm’s latest approach to realising the “next generation [of] cloud-to-edge infrastructure”.
Image courtesy of Bigstock.
The reason for such technology is clear when you consider Arm’s own prediction that there will be a trillion smart, connected devices by 2035.
It’s that level of IoT growth that makes it insufficient to rely on general-purpose datacentres alone to manage an organisation’s data flow: processing capability needs to be localised, i.e. placed as close as possible to the device in question. Keeping up with modern demands on data traffic—especially with 5G just around the corner—calls for processing that is both low-power and low-latency.
Fundamentally, these requirements for edge-based, and therefore IoT-friendly, data processing have driven both the development of the Neoverse N1 and the partnership between chip designer Arm and the market leader in containerisation, Docker.
The collaboration between these two tech giants aims, as Arm explains, to deliver a “frictionless cloud-native software development and delivery model for cloud, edge, and IoT”.
Having discussed the ‘why’ behind the Neoverse N1’s development, let’s now see how the platform achieves such power and speed efficiency through its cloud-to-edge data processing capabilities.
Image courtesy of Arm.
Where the Neoverse Platform Comes in
As Arm puts it, its goal is to offer a more “heterogeneous and distributed infrastructure” through its cloud-to-edge approach. The elements that make this possible give developers the right technologies for a future of ubiquitous connected devices.
This section looks at Neoverse’s two cores that make this possible: the Neoverse N1, and the Neoverse E1.
The Neoverse N1 Core
The key to accomplishing Arm’s goal is efficiency, which rests on two major qualities: low power and high throughput. While the E1 core delivers the latter, the former is the N1’s territory: the N1 is designed with, to quote Arm, “server-class features and thread performance with cutting-edge low-power design techniques”.
The N1 is co-optimised with the CoreLink CMN-600 (Arm’s mesh interconnect designed for networking infrastructure, high-performance computing, and more), which facilitates an extreme level of scalability: from 8 to 16 cores for networking, storage, security, and edge compute nodes, up to 128 or more cores for hyperscale servers.
The platform achieves chip-to-chip connectivity via the Cache Coherent Interconnect for Accelerators (CCIX) standard. The CCIX Consortium maintains a set of specifications that align with Arm’s stated goal of ushering in the next generation of heterogeneous computing: CCIX enables faster interconnects and hardware cache coherency, allowing not only CPUs but accelerators, too, to share memory coherently.
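To see what cache coherency buys in a heterogeneous system, consider a toy write-invalidate protocol, sketched below in Python. This is purely an illustration of the general idea—two coherent agents (say, a CPU and an accelerator) always observing each other’s latest writes—and not a model of the CCIX protocol itself; the class and agent names are hypothetical.

```python
# Toy write-invalidate coherency sketch (illustrative only; not CCIX).
# Two "agents" cache lines from a shared memory; a write by one agent
# invalidates the other's cached copy, so stale data is never read.

class Memory:
    def __init__(self):
        self.data = {}                             # address -> value

class Agent:
    def __init__(self, name, memory):
        self.name = name
        self.memory = memory
        self.cache = {}                            # address -> cached value
        self.peers = []                            # other coherent agents

    def read(self, addr):
        if addr not in self.cache:                 # cache miss: fetch from memory
            self.cache[addr] = self.memory.data.get(addr, 0)
        return self.cache[addr]

    def write(self, addr, value):
        for peer in self.peers:                    # invalidate peers' copies
            peer.cache.pop(addr, None)
        self.cache[addr] = value                   # update own cache...
        self.memory.data[addr] = value             # ...and write through

mem = Memory()
cpu = Agent("cpu", mem)
acc = Agent("accelerator", mem)
cpu.peers.append(acc)
acc.peers.append(cpu)

cpu.write(0x10, 1)
assert acc.read(0x10) == 1      # accelerator sees the CPU's write
acc.write(0x10, 2)
assert cpu.read(0x10) == 2      # CPU's stale copy was invalidated
```

Without the invalidation step, the CPU’s second read would return its stale cached value of 1—which is exactly the class of bug that hardware coherency, as standardised by CCIX, removes from the software’s concern.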
Again, such scalability and networking improvements reflect Arm’s ambition of achieving low-power, higher-efficiency solutions—which leads us to the other side of the same coin: the Neoverse E1’s high-throughput offerings.
The Neoverse E1 Core
While the Neoverse N1 covers high-performance processing, the Neoverse E1 is the first mainstream Arm core to use the company’s new simultaneous multithreading (SMT) microarchitecture, in which a single core executes multiple threads of execution—independent streams of computing work—at the same time (see the below diagram).
A basic diagram that represents multithreading in action: two processing threads are executed simultaneously and independently of one another. Image courtesy of Wikimedia Commons.
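The idea in the diagram can be sketched in software terms. The minimal Python example below runs two threads of execution concurrently, each working through its own independent workload—a software analogue of the hardware threads that SMT interleaves within a single core’s pipelines (the worker function and workloads are, of course, illustrative).

```python
# Two threads of execution running concurrently, each with its own
# independent workload: a software-level analogue of SMT's hardware
# threads sharing a core.
import threading

results = {}

def worker(name, items):
    # Each thread processes its own task, independently of the other.
    results[name] = sum(i * i for i in items)

t1 = threading.Thread(target=worker, args=("thread-1", range(100)))
t2 = threading.Thread(target=worker, args=("thread-2", range(100, 200)))
t1.start(); t2.start()          # both threads are live at the same time
t1.join(); t2.join()            # wait for both to finish

assert results["thread-1"] == sum(i * i for i in range(100))
assert results["thread-2"] == sum(i * i for i in range(100, 200))
```

The distinction worth keeping in mind: here the operating system schedules the threads, whereas in an SMT core the hardware itself issues instructions from multiple threads to keep its execution units busy.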
According to Arm, improvements such as this give the E1 the following enhancements over its predecessor, the Cortex-A53:
- 2.1 times the compute performance.
- 2.7 times the throughput performance.
- 2.4 times the throughput efficiency.
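Taken together, the last two figures imply the power cost of that extra performance. If ‘throughput efficiency’ is read as throughput per watt—an assumption on our part, as Arm does not spell out the denominator—then simple arithmetic gives the implied power ratio:

```python
# Implied power draw from Arm's published E1-vs-Cortex-A53 ratios,
# ASSUMING "throughput efficiency" means throughput per watt.
throughput_ratio = 2.7      # E1 throughput relative to Cortex-A53
efficiency_ratio = 2.4      # E1 throughput-per-watt relative to Cortex-A53

# efficiency = throughput / power  =>  power = throughput / efficiency
power_ratio = throughput_ratio / efficiency_ratio
print(f"Implied power vs Cortex-A53: {power_ratio:.3f}x")   # 1.125x
```

In other words, under that reading, the E1 delivers 2.7 times the throughput for only about 12.5% more power.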
The introduction of multithreading marks a turning point for Arm, which has traditionally avoided SMT in favour of its heterogeneous multi-core ‘big.LITTLE’ approach. Now, however, as network and communications technology becomes ever more prevalent, SMT-based parallel task processing is more important than ever before.
This again goes back to scalability in terms of meeting modern demands: Arm says that such an architectural design is able to “support [today’s] throughput demands for next-generation edge to core data transport”.
Arm’s infographic that shows the efficiency increases, software compatibilities, and general scalability of Arm’s Neoverse E1 Platform. Image courtesy of Arm.
Arm’s efforts—and by extension, Docker’s—to introduce a cloud-to-edge infrastructure are a sign of the changing times: given the exponential growth in connected devices, it is no longer enough to rely on remote, general-purpose datacentres alone, and the Neoverse Platform as a whole rises to those enormous data demands.
This is chiefly thanks to the efficiency breakthroughs that the technology has brought to the table through Neoverse’s two cores: the N1 and the E1—milestones in power efficiency and throughput respectively, and in scalability overall.
The result of such leaps in heterogeneous computing is that not only is Arm preparing for its predicted one trillion connected devices by 2035, but it has also paved the way for partners to integrate their own custom-built silicon around licensed Neoverse IP. To end with a quote from Drew Henry, SVP of Arm’s Infrastructure Line of Business:
“This incredible scalability gives our partners the flexibility to build diverse compute solutions by adding accelerators or other features with their own on-chip custom silicon. All of this enables our partners to deliver solutions with a lower total cost of ownership for infrastructure customers.”
For more information on Arm's connected technology developments, read our interview with its senior VP of IoT Cloud Services, Himagiri Mukkamala.