
Customer Success Story

Compiler Service: Translation of ONNX operations to a custom dialect

Summary

Helprack has been helping a Bay Area-based enterprise lower code from the ONNX dialect of MLIR to a proprietary custom dialect targeted at large-scale acceleration of ML training workloads.

Details

This customer has developed an AI/ML chip with a compute fabric akin to a GPU's that can compute at the edge. The goal is to translate all 192 ONNX operations to the custom dialect of MLIR.

Helprack delivered lowerings of the first 50 ONNX operations, those needed by the RESNET50 network, into the custom dialect. These are progressively lowered to a processor-specific dialect of MLIR and then to native instructions. The RESNET50 model has been shown to run on the custom processor.
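For readers unfamiliar with MLIR's progressive lowering, the sketch below shows roughly what one such rewrite looks like. It is illustrative only: the accel dialect and accel::ReluOp are hypothetical stand-ins for the customer's proprietary dialect, and the ONNXReluOp class and accessor names follow the open-source onnx-mlir project rather than anything specific to this engagement.

    #include "mlir/IR/PatternMatch.h"
    #include "mlir/Transforms/DialectConversion.h"
    #include "src/Dialect/ONNX/ONNXOps.hpp"  // ONNX dialect ops from onnx-mlir
    // #include "Accel/AccelOps.h"           // hypothetical custom dialect (not shown)

    namespace {
    // Rewrites onnx.Relu into a hypothetical accel.relu op, keeping the tensor
    // type unchanged so later passes can lower it to the processor-specific
    // dialect and, eventually, to native instructions.
    struct ReluLowering : public mlir::OpConversionPattern<mlir::ONNXReluOp> {
      using OpConversionPattern::OpConversionPattern;

      mlir::LogicalResult
      matchAndRewrite(mlir::ONNXReluOp op, OpAdaptor adaptor,
                      mlir::ConversionPatternRewriter &rewriter) const override {
        // accel::ReluOp is a placeholder for the customer's custom dialect op.
        rewriter.replaceOpWithNewOp<accel::ReluOp>(op, op.getType(),
                                                   adaptor.getX());
        return mlir::success();
      }
    };
    } // namespace

In practice, patterns like this are collected in a conversion pass and applied with MLIR's dialect conversion framework, one ONNX operation at a time; repeating this for all 192 operations is what completes the translation.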


Read about other customer implementations.

If you are building an AI/ML chip, we can help you build a compiler for it. Let's connect.