Axera Wins “Most Innovative Product Award” and Launches AXera-Pi Pro at the AI Artificial Intelligence Summit Forum 2023
On August 23, 2023, the 7th AI Artificial Intelligence Summit Forum, jointly organized by elecfans.com and elexcon (Shenzhen International Electronics Exhibition), was held in Shenzhen. Under the theme “Intelligence in Unlimited Forms to Enable a Grand Future,” the event brought together leading experts, scholars, and industry representatives to explore new opportunities in the development of the artificial intelligence industry.
At the event, Qi Tang, Axera’s Director for AI Inference Engines, delivered a keynote speech titled “Deploying Vision Transformer Models at the Edge,” shedding light on practical cases for implementing vision Transformer (ViT) models using Axera’s Axera Tongyuan NPU. The presentation culminated in the official launch of the developer kit, AXera-Pi Pro. In a parallel session, the 4th Outstanding AI Innovation Awards ceremony took place, where Axera was recognized with the “Most Innovative Product Award” for its pioneering AX650N chip.
Rapid Adoption of Terminal Intelligence Elevates ViT Model Applications to New Heights
Since 2015, when AI models first outperformed humans in object recognition in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), the AI industry has entered a new era. In recent years, advancements in chip manufacturing and the trend towards smaller, more lightweight AI models have made it possible to deploy AI models locally in products such as robotic vacuum cleaners, home security cameras, and smart speakers. This has led to the widespread adoption of terminal intelligence.
“Up to now, the basic cloud–edge–terminal computing framework built on traditional CNN models has largely taken shape,” Tang stated in his speech. As AI models move from the cloud to terminal devices, the Transformer architecture has advanced rapidly and gained significant attention, especially with the rise of ChatGPT. ViT models have reached new heights in their applications, from Transformer-based semantic segmentation models used in autonomous driving to the Segment Anything Model (SAM) and the vision foundation model DINOv2.
Dedicated to becoming a world leader in AI vision chips, Axera focuses on developing high-performance, low-power AI chips for edge and terminal devices. The company is strategically positioned in the fields of smart cities, smart driving, and AIoT. Driven by the upgrade and transformation of smart cities, the large-scale implementation of L2/L2+ smart driving applications, and the growing demand for smart terminal devices, Axera will continue to invest in edge perception chips, consistently enhancing its capabilities in perception and foundational computation.
Accelerating AI Development and Deployment Efficiency: Connecting the Digital and Physical Worlds Through Vision Technology
From smart cities to smart driving and AIoT, Axera’s ability to cover these three major application scenarios relies on its two core self-developed technologies: the Axera Zhimou AI-ISP and the Axera Tongyuan NPU.
The Axera Zhimou AI-ISP combines deep learning algorithms with traditional ISP units, acting as the “eyes” of Intelligence of Everything (IoE) to perceive more information, especially meeting the essential need for full-color vision in dark environments. It boasts six key technological highlights: AI Starlight Full-Color Imaging, AI HDR Imaging, AI Multispectral Fusion, AI Image Stabilization, AI Scene Enhancement, and AI Multi-Sensor Fusion. These features significantly improve image quality and deliver superior visual effects in various driving scenarios, such as dark roads, tunnel exits, and bumpy roads.
The Axera Tongyuan NPU, another core self-developed technology of Axera, supports INT4, INT8, and INT16 computation precision. It stands out with its high performance, low cost, and ease of use. Notably, after incorporating market feedback from the first two generations and aligning with industry-leading technological trends, the third-generation mixed-precision NPU has further expanded the range of supported operators and mixed-precision computation. It has also optimized the internal memory scheduling mechanism, efficiently supporting both CNN and Transformer models. The Pulsar2 toolchain, developed specifically for the Axera Tongyuan NPU 3.0, further enhances its functionality and usability by supporting model deployment from mainstream deep learning training frameworks, as well as both PTQ (Post-Training Quantization) and QAT (Quantization-Aware Training) workflows. As a result, it can cater to various quantization tuning needs across different scenarios.
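The PTQ path mentioned above can be illustrated with a minimal, framework-agnostic sketch of standard affine INT8 quantization with min/max calibration. All function names here are illustrative, and this is not Pulsar2's actual implementation, which handles full models rather than single tensors:

```python
# Minimal post-training quantization (PTQ) sketch: affine INT8
# quantization of a weight tensor using a min/max calibration pass.
# Illustrative only -- a toolchain like Pulsar2 does this (and much
# more) internally across an entire model graph.

def calibrate(values):
    """Derive scale and zero-point from observed min/max (asymmetric INT8)."""
    lo, hi = min(values), max(values)
    lo, hi = min(lo, 0.0), max(hi, 0.0)      # quantized range must include zero
    scale = (hi - lo) / 255.0 or 1.0         # map [lo, hi] onto [-128, 127]
    zero_point = round(-128 - lo / scale)
    return scale, zero_point

def quantize(values, scale, zero_point):
    """Map floats to INT8 codes, clamping to the representable range."""
    return [max(-128, min(127, round(v / scale) + zero_point)) for v in values]

def dequantize(codes, scale, zero_point):
    """Recover approximate floats from INT8 codes."""
    return [(c - zero_point) * scale for c in codes]

weights = [-0.42, 0.0, 0.13, 0.98, -1.05]
scale, zp = calibrate(weights)
q = quantize(weights, scale, zp)
recovered = dequantize(q, scale, zp)
```

QAT differs in that the quantize/dequantize round trip is simulated during training so the model learns to compensate for the rounding error, rather than being calibrated after the fact.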
Deploying Transformer models on the Axera Tongyuan NPU is now remarkably simple and efficient. Take the Swin Transformer (SwinT) model, for instance. Users can download the ONNX (Open Neural Network Exchange) version straight from PyTorch’s official ModelZoo. With no modifications needed to the model or its operators, it can be quantized, compressed, and compiled, achieving an energy efficiency of 199 FPS/W. The AX650N, which won the “Most Innovative Product Award” at the event, is the high-performance chip Axera launched this year. Built on the third-generation NPU, it combines high performance and precision, easy deployment, and low power consumption for running Transformers on edge and terminal devices, making it a leading platform in the industry for Transformer deployment.
Building a Chip Developer Ecosystem: AXera-Pi Pro Facilitates Vision Model Deployment
At the event, Axera officially launched the developer kit “AXera-Pi Pro.” Co-created with hardware ecosystem partners, this kit allows community developers to experience the convenient deployment of vision models on edge and terminal devices at a low cost. The AXera-Pi Pro, powered by the AX650N chip, offers high computing power and exceptional encoding and decoding capabilities. It meets the industry’s demands for high-performance edge intelligence computing, enabling applications such as video structuring, behavior analysis, and status detection. As a result, it efficiently supports both CNN and ViT models.
Alongside the release of the AXera-Pi Pro, Axera also launched the community version of Pulsar2, its new-generation AI toolchain that integrates model quantization, compilation, and deployment. Accompanied by comprehensive development documentation, it enables users to quickly prototype and further develop their products, enhancing their value in smart city, smart transportation, smart education, and smart manufacturing applications.
So far, Axera has mass-produced four generations of vision chips across multiple product lines tailored to different sectors. Committed to the vertical integration of algorithms, chips, and products, Axera provides full-stack solutions to its partners, enabling rapid deployment of the latest technologies for their customers. In response to the rapid growth of the AI industry, Axera is committed to continuous technological innovation to enhance chip performance and reduce costs, making edge and terminal intelligence more accessible. The ultimate goal is to achieve the company’s mission of “AI for All, AI for a Better World.”