Do you have any of these issues?


  • I want to create an AI model that is optimal for my device
  • I have concerns about implementing my model on the device
  • My model is deployed to the device, but it is not fast enough

With our wealth of experience in edge device implementation, we offer proposals tailored to our customers' issues.

case 01
I've trained a model, but I'm new to implementing it on a device, so I'm worried...

Total support tailored to your objectives, from requirement definition onward

We listen to the customer's requirements and profile the current inference process. After sorting out the required latency and accuracy, we define the requirements for the optimal model. If the processing does not have to run entirely at the edge, we also consider dividing the work with the cloud.

Key perspectives and implementation tasks when defining edge AI requirements

  • Profiling of the current inference processing time (see the sketch below)
  • Definition of the required latency / FPS
  • Definition of the required accuracy performance
  • Consideration of how to deploy and update the model
  • Definition of the division of work between edge and cloud
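
As a rough, hedged illustration of the profiling step, the sketch below measures single-image inference latency for a generic PyTorch model; the model, input shape, and iteration counts are placeholder assumptions, not a specific customer configuration.

```python
# Minimal latency-profiling sketch (assumptions: a placeholder torchvision
# model and a 224x224 input; substitute the actual model and input size).
import time

import torch
import torchvision.models as models

model = models.mobilenet_v3_small(weights=None).eval()  # placeholder model
dummy = torch.randn(1, 3, 224, 224)                     # placeholder input

with torch.no_grad():
    for _ in range(10):                  # warm-up runs, excluded from timing
        model(dummy)

    latencies_ms = []
    for _ in range(100):                 # measured runs
        start = time.perf_counter()
        model(dummy)
        latencies_ms.append((time.perf_counter() - start) * 1000.0)

latencies_ms.sort()
mean_ms = sum(latencies_ms) / len(latencies_ms)
print(f"median latency: {latencies_ms[len(latencies_ms) // 2]:.1f} ms")
print(f"p95 latency:    {latencies_ms[int(len(latencies_ms) * 0.95)]:.1f} ms")
print(f"approx. FPS:    {1000.0 / mean_ms:.1f}")
```

The same measurement is then repeated on the candidate edge hardware, since workstation numbers rarely transfer directly.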

case 02
Selecting the appropriate device and implementing for each device is difficult...

We optimize the appropriate model for the edge environment you want to deploy to

We receive model files and code from the customer and convert, optimize, and evaluate the models for candidate edge devices. If no model file is available, we can develop a model that meets the required accuracy and speed.

Flow of the model optimization study

  • Prepare a model (provided by the customer or developed by us)
  • Convert and optimize the model for candidate devices (see the sketch below)
  • Evaluate whether the required inference speed is achieved
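
As one hedged example of the conversion step, many of the toolchains for the devices listed below (OpenVINO for Myriad X, TensorRT for Jetson, Hailo's compiler) accept ONNX as input, so a common first move is to export a trained PyTorch model to ONNX; the model, file name, and opset in this sketch are placeholders.

```python
# Sketch: export a trained PyTorch model to ONNX as a common intermediate
# format before device-specific conversion (names and paths are placeholders).
import torch
import torchvision.models as models

model = models.mobilenet_v3_small(weights=None).eval()   # placeholder model
dummy = torch.randn(1, 3, 224, 224)                      # expected input shape

torch.onnx.export(
    model,
    dummy,
    "model.onnx",                 # placeholder output path
    input_names=["input"],
    output_names=["output"],
    opset_version=13,             # pick an opset the target toolchain supports
)
print("exported model.onnx")
```

Edge TPU targets go through TensorFlow Lite instead, and each vendor's compiler then produces the binary that actually runs on the device.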

We have experience with a variety of edge devices
  • NPU

    Myriad X, Edge TPU, Hailo, Ambarella

  • CPU

    Raspberry Pi, NXP i.MX RT

  • FPGA

    Arria 10 SoC, Zynq UltraScale+ MPSoC

  • GPU

    NVIDIA Jetson Series

  • Smartphone

    iPhone, iPad, Android, SNPE

  • Finished products

    AITRIOS™-compatible models (with IMX500)

    Jetson AGX Orin-equipped models

    Models with Hailo-8

AITRIOS™ is an edge AI sensing platform provided by Sony Semiconductor Solutions Inc.
AITRIOS™ is a registered trademark or trademark of Sony Group Inc. or its affiliates.
case 03
The model is too big and won't fit on the intended device...

We review the current implementation and reduce the model's weight and size

Upon receiving the created model, input/output data, training code, etc., we propose methods to make the model smaller. In addition to optimization using each edge device's development tools, various lightweighting and miniaturization techniques, such as pruning and quantization, can be applied individually.

Five typical methods for making models lighter and smaller

Experienced engineers use the appropriate development tools to maximize performance.

  • Pruning: lightens the model by setting the values of less important weights to zero (sketched below)
  • Quantization: expresses parameters such as weights in fewer bits by converting the data type of the tensors
  • Knowledge distillation: trains a small model to high accuracy using the output of a large model as teacher data
  • Layer fusion: speeds up inference by combining the processing of multiple layers into a single layer
  • Architecture change: switches to a model architecture designed for low-resource devices
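
As a minimal sketch of the first method, the snippet below applies PyTorch's built-in L1 magnitude pruning to a toy model; the 50% sparsity level and the model itself are placeholder assumptions, and real projects typically follow pruning with fine-tuning.

```python
# Sketch: L1-magnitude pruning, which zeroes the least important weights
# (placeholder model; 50% sparsity chosen only for illustration).
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1),
)

for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.5)  # zero 50% of weights
        prune.remove(module, "weight")  # make the pruned weights permanent

zeros = sum((p == 0).sum().item() for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"overall parameter sparsity after pruning: {zeros / total:.1%}")
```

Whether the zeroed weights turn into real size or speed gains depends on the device toolchain, which is why these methods are combined and evaluated per device.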

Edge AI application examples


Lightweight model training using distillation

Industry: Manufacturing

Challenge: A lightweight yet accurate model was needed to move factory anomaly detection tasks to the edge.
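
The distillation idea behind this case can be sketched as a loss that blends the teacher's softened outputs with the ground-truth labels; the temperature, weighting, and tensor shapes below are illustrative placeholders, not the values used in the actual project.

```python
# Sketch: knowledge-distillation loss where a small "student" model learns
# from a large "teacher" model's softened outputs (placeholder hyperparameters).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.7):
    # Soft targets: match the teacher's output distribution at temperature T.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: ordinary cross-entropy on the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Example shapes: batch of 8 samples, 10 classes.
s = torch.randn(8, 10)
t = torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
print(distillation_loss(s, t, y))
```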


Quantization/TensorRT for model acceleration

Industry: Retail

Challenge: The customer's product detection model had a processing-time problem because of the large number of images to be processed.
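
For the quantization side of this kind of acceleration work, a minimal sketch is PyTorch's post-training dynamic quantization, shown below with a placeholder model; on Jetson-class GPUs the heavier lifting is typically done by building a TensorRT engine from an ONNX export instead, which is outside this sketch.

```python
# Sketch: post-training dynamic quantization, which stores Linear-layer weights
# in INT8 and dequantizes on the fly (placeholder model for illustration).
import io

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
).eval()

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def serialized_size(m):
    # Serialize the state dict in memory to compare stored weight sizes.
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes

print(f"fp32 model:      {serialized_size(model)} bytes")
print(f"quantized model: {serialized_size(quantized)} bytes")
```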

Araya has a wealth of support experience, including partnerships with manufacturers, IT companies, and trading companies.

Partner logos: SingPost, Samsung, ANA, Daikin, ISID, Tohoku, Toyo, Toyota, JT, Hitachi, Honda, and others

Listed in Japanese alphabetical order
