Optimal for your device
I want to create an AI model
I have concerns about implementation on the device
The model is deployed to the device, but
We listen to the customer's requirements and profile the current inference process. After clarifying the required latency and accuracy, we define the requirements for the optimal model. If running everything at the edge is not necessary, we also consider splitting the work with the cloud.
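The profiling step above can be sketched as a simple latency measurement. This is a minimal illustration, not our actual tooling; `infer` and `inputs` are hypothetical stand-ins for the customer's model and data.

```python
import statistics
import time

def profile_inference(infer, inputs, warmup=5, runs=20):
    """Measure per-call latency statistics for an inference callable."""
    for x in inputs[:warmup]:          # warm-up calls, excluded from stats
        infer(x)
    latencies = []
    for _ in range(runs):
        for x in inputs:
            start = time.perf_counter()
            infer(x)
            latencies.append((time.perf_counter() - start) * 1000.0)
    latencies.sort()
    return {
        "mean_ms": statistics.mean(latencies),
        "p95_ms": latencies[int(len(latencies) * 0.95)],
        "max_ms": latencies[-1],
    }

# Dummy "model" standing in for a real inference call.
stats = profile_inference(lambda x: sum(v * v for v in x),
                          [list(range(100))] * 10)
```

Numbers like `mean_ms` and `p95_ms` from such a run are what we compare against the customer's latency requirement.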
We receive model files and code from the customer and convert, optimize, and evaluate the model for candidate edge devices. If no model file is available, we can develop a model that meets the required accuracy and speed.
Myriad X, Edge TPU, Hailo, Ambarella
Raspberry Pi, NXP i.MX RT
Arria 10 SoC, Zynq UltraScale+ MPSoC
NVIDIA Jetson Series
iPhone, iPad, Android, SNPE
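One part of the evaluation step above is checking that a converted model still matches the original numerically. A minimal sketch of that parity check, with hypothetical callables standing in for the two models (rounding simulates precision loss from conversion):

```python
def max_abs_diff(original, converted, samples):
    """Largest output difference between the original model and the
    device-converted model on the same inputs (both are placeholders)."""
    return max(abs(original(x) - converted(x)) for x in samples)

# Stand-ins: the "converted" model runs at reduced precision.
original = lambda x: x * 0.1234567
converted = lambda x: round(x * 0.1234567, 3)   # simulated precision loss

diff = max_abs_diff(original, converted, range(100))
accepted = diff < 1e-3   # accept the conversion if within tolerance
```

In practice the tolerance comes from the accuracy requirement defined with the customer, and the check runs over a representative validation set.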
Upon receiving the trained model, input/output data, training code, etc., we propose methods to make the model smaller. In addition to optimization with each edge device's development tools, various compression techniques can be applied, such as pruning model branches and quantizing weights selectively.
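The two techniques named above, pruning and quantization, can be sketched in a few lines on a toy weight list. This is an illustration of the ideas only, not device-specific tooling; the weight values are made up.

```python
def prune(weights, sparsity=0.3):
    """Magnitude pruning: zero out the smallest-magnitude weights."""
    k = int(len(weights) * sparsity)
    threshold = sorted(abs(w) for w in weights)[k]
    return [0.0 if abs(w) < threshold else w for w in weights]

def quantize_int8(weights):
    """Symmetric int8 quantization: map floats onto [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    return [q * scale for q in quantized]

weights = [0.9, -0.05, 0.4, -0.7, 0.02, 0.6]   # toy example
pruned = prune(weights)                          # small weights become 0
q, scale = quantize_int8(pruned)                 # int8 values + one scale
restored = dequantize(q, scale)
quant_error = max(abs(a - b) for a, b in zip(pruned, restored))
```

Pruning trades a controlled accuracy loss for sparsity, and int8 quantization cuts weight storage to a quarter of float32; real deployments tune both per layer and re-validate accuracy afterwards.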
Experienced engineers use development tools appropriately to maximize performance.
Challenge: A lightweight yet more accurate model was needed to move factory anomaly detection tasks to the edge.
Problem: The customer's product detection model had a processing-time issue because of the large number of images to be processed.