Blaize (formerly Thinci) is a U.S. start-up with UK-based development teams working on the creation and deployment of novel chipsets that accelerate AI workloads. This month, Blaize emerged from the shadows with over $87 million in funding and an exciting announcement: a new graph streaming architecture with impressive system-on-chip performance, backed by investors including Denso and Daimler.
Driven by leaps forward in energy efficiency, flexibility, and usability, Blaize’s products aid and enable existing and new use cases in industries such as enterprise computing, automotive, and smart vision.
As Blaize co-founder and CEO Dinakar Munagala puts it, “Blaize was founded on a vision of a better way to compute the workloads of the future by rethinking the fundamental software and processor architecture.”
The World’s First True Graph-Native Silicon Architecture
Since its announcement, Blaize has remained tight-lipped about the specific metrics of its new AI chip. What we do know is that Blaize describes it as “the first true graph-native silicon architecture and software platform built to process neural networks and enable AI applications with unprecedented efficiency.” There is, however, nothing available yet to quantify these claims. Blaize also credits the Blaize GSP architecture and the Blaize Picasso SDP with enabling the breakthrough in computational efficiency delivered by the new chip.
Block diagram of the GSP architecture. Image courtesy of Blaize.
It is also known that the processor, referred to as a “graph streaming processor,” is optimized for the graph processing needed to train both convolutional and recurrent neural networks. Although there are many different types of neural networks, they are all similar in that they are graphs. The processor is also multi-core and multi-issue, and it is controlled by a hardware scheduler that dynamically maps the graph onto the cores.
Because data is kept on-die rather than moved back and forth to external memory, the processor is both power- and cost-efficient. This is made possible by the processor’s compiler and software platform, which look inside networks and identify points where partial results can be passed straight to the next processing stage.
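The idea of passing partial results between stages can be illustrated with a minimal sketch. Note that this is purely conceptual and not Blaize’s actual implementation: the stage functions, tile shapes, and generator-based pipeline below are all invented for illustration. The point is that each tile of data flows through every stage before the next tile is read, so only a small amount of intermediate data is “live” at any moment, rather than a whole layer’s output being written out to external memory.

```python
# Conceptual sketch only (not Blaize's implementation): streaming partial
# results through a graph of processing stages, tile by tile.

def conv_stage(tiles):
    # Stand-in for a convolution node: emits a partial result per tile.
    for tile in tiles:
        yield [x * 2 for x in tile]

def relu_stage(tiles):
    # Stand-in for an activation node: consumes tiles as they arrive.
    for tile in tiles:
        yield [max(0, x) for x in tile]

def streamed_pipeline(input_tiles):
    # Chaining generators means each tile passes through all stages
    # before the next tile is read; no full intermediate layer is stored.
    return relu_stage(conv_stage(input_tiles))

tiles_in = [[-1, 2], [3, -4]]
print(list(streamed_pipeline(iter(tiles_in))))  # [[0, 4], [6, 0]]
```

In a real graph-streaming compiler, this kind of fusion would be decided by analyzing the network graph rather than written by hand, but the memory-traffic benefit is the same.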
Built from the ground up to be 100% graph-native, the AI SoC can be used by developers and engineers to build multiple neural networks and comprehensive workflows on a single architecture applicable to several use cases. End-to-end applications can integrate non-neural functions with neural network functions, all represented as graphs and processed up to 100 times more efficiently than with existing solutions.
This enables AI application developers to build entire applications faster and run them efficiently with automated data-streaming methods.
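To make the “everything is a graph” idea concrete, here is a hypothetical sketch in which pre-processing, a neural-network stage, and post-processing all live in one application graph. The node functions and the trivial graph runner are invented for illustration; a graph-native compiler would schedule all such nodes, neural or not, on the same silicon.

```python
# Hypothetical sketch: neural and non-neural functions as nodes of one
# application graph. All names here are invented for illustration.

def resize(x):
    # Non-neural pre-processing node (e.g., image scaling).
    return [v / 10 for v in x]

def dense(x):
    # Stand-in for a neural-network node.
    return [sum(x)]

def threshold(x):
    # Non-neural post-processing node (e.g., a detection decision).
    return [v > 1.0 for v in x]

# The end-to-end application is just a graph (here, a simple chain).
graph = [resize, dense, threshold]

def run(graph, data):
    # Execute each node in dependency order, streaming data through.
    for node in graph:
        data = node(data)
    return data

print(run(graph, [5, 10, 15]))  # [True]
```

Real application graphs are DAGs rather than simple chains, but the principle is the same: one representation covers the whole workflow.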
Blaize AI Chip Products
Though we are yet to hear more about the specifics of Blaize’s AI chip, products are set to include the AI SoC itself, a module based on it, and larger products such as server cards for AI cloud processing in data centers.
One of the potential use cases Blaize provided was in shopping centers: surveillance cameras using its AI SoC could detect individual shoppers and pass this data to local computers and, via Blaize AI boards, across different sites, allowing those shoppers to be tracked as they move.
The initial focus for Blaize and its AI SoC will be on automotive, smart vision, and enterprise computing.