EdgeCortix unveils SAKURA AI accelerator and MERA compiler software framework

EdgeCortix, a Japanese semiconductor design company, has unveiled SAKURA, an energy-efficient AI co-processor for edge intelligence. At TechInsights' Linley Spring Processor Conference, the company provided more details on the architecture, performance, and delivery timing of the new AI inference co-processor. SAKURA is best suited to industrial segments such as transportation, autonomous vehicles, defense, security, 5G communications, augmented and virtual reality, smart manufacturing, retail, and robotics.

SAKURA delivers up to 40 TOPS in its single-chip version and up to 200 TOPS in a multi-chip version. Built on TSMC's 12nm FinFET process, it will be available as a low-power PCIe development board. From July 2022, access to these boards will be limited to companies participating in the EdgeCortix Early Access Program. The 40 TOPS of performance comes from the single-core Dynamic Neural Accelerator (DNA), EdgeCortix's own intellectual property, which features a built-in reconfigurable data path connecting all of the compute engines.

"SAKURA is revolutionary from both a technical and competitive perspective, delivering well over a 10X performance-per-watt advantage compared to current AI inference solutions based on traditional graphics processing units (GPUs), especially for real-time edge applications," said Sakyasingha Dasgupta, CEO and Founder of EdgeCortix.

The in-house Dynamic Neural Accelerator lets applications run multiple deep neural network models with ultra-low latency. This allows the hardware to deliver higher processing speed and energy efficiency while extending the longevity of the system-on-chip. The DNA processing engine inside SAKURA provides over 24K MACs in a single core at an 800 MHz clock frequency and includes a relatively large on-chip memory. The engine maximizes compute utilization, exploits multiple degrees of parallelism, and provides extremely low latency.

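For context, the quoted figures are roughly consistent with each other. The short back-of-the-envelope check below (a sketch, assuming "24K MACs" means 24,576 MAC units and counting each multiply-accumulate as two operations) lands close to the advertised 40 TOPS.

```python
# Back-of-the-envelope check of the quoted ~40 TOPS figure.
# Assumption: "24K MACs" is read as 24,576 units and each MAC counts as 2 ops.
macs = 24_576            # MAC units in the single-core DNA engine (assumed count)
clock_hz = 800e6         # 800 MHz clock frequency
ops_per_mac = 2          # one multiply + one accumulate per cycle

tops = macs * clock_hz * ops_per_mac / 1e12
print(f"Peak throughput ~ {tops:.1f} TOPS")  # ~39.3 TOPS, in line with the ~40 TOPS claim
```
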
"After validating our AI processor architecture design with multiple field-programmable gate array (FPGA) customers in production, we designed SAKURA as a co-processor that can be plugged in alongside a host processor in nearly all existing systems to significantly accelerate AI inference. Using our patented runtime-reconfigurable interconnect technology, SAKURA is inherently more flexible than traditional processors and can achieve near-optimal compute utilization, in contrast to most AI processors developed over the last 40+ years," Dasgupta added.

Along with the SAKURA AI accelerator, EdgeCortix also announced the open-source release of its MERA compiler software framework. MERA enables seamless acceleration of complex, compute-intensive AI applications, letting developers target both SAKURA and FPGAs powered by the DNA IP. The SAKURA AI co-processor will be publicly available for purchase in multiple hardware form factors.

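As a rough illustration of the compile-and-deploy workflow a framework like MERA targets (take a trained model, compile it for the accelerator, then run inference from the host application), here is a minimal sketch. The `mera_sdk` module and its functions are hypothetical stand-ins invented for this example, not the documented MERA API, and the model file name is likewise illustrative.

```python
# Illustrative sketch only: `mera_sdk`, `load_model`, `compile_for`, and `run`
# are hypothetical stand-ins for a MERA-style compile-and-deploy flow,
# not the actual MERA API.
import numpy as np
import mera_sdk  # hypothetical package name

# Load a trained model (e.g. an exported vision network).
model = mera_sdk.load_model("resnet50_int8.onnx")

# Compile it for the SAKURA co-processor (or a DNA-IP-powered FPGA).
compiled = mera_sdk.compile_for(model, target="sakura")

# Offload inference to the accelerator from the host application.
dummy_input = np.random.rand(1, 3, 224, 224).astype("float32")
output = compiled.run(dummy_input)
print(output.shape)
```
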
About Abhishek Jadhav

Abhishek Jadhav is an engineering student, RISC-V ambassador, and freelance technology and science writer with bylines at Wevolver, Electromaker, Embedded Computing Design, Electronics-Lab, Hackster, and EdgeIR.
