TECHNOLOGY

DEEPCUBE’S TECHNOLOGY IN A NUTSHELL

DeepCube focuses on research and development of deep learning technologies that result in improved real-world deployment of AI systems. The company’s numerous patented innovations include methods for faster and more accurate training of deep learning models, and drastically improved inference performance.

DeepCube’s proprietary framework can be deployed on top of any existing hardware in both datacenters and edge devices, resulting in over 10x speed improvement and memory reduction. DeepCube provides the only technology that allows efficient deployment of deep learning models on intelligent edge devices.

PROBLEM: DEEP LEARNING DEPLOYMENTS REMAIN VERY LIMITED

LATENCY AND MEMORY

After the deep learning training phase, the resulting model typically requires huge amounts of processing and consumes lots of memory.

DEDICATED HARDWARE

Today’s deployments of deep learning solutions typically use dedicated hardware (e.g. GPUs).

OPTIMIZED FOR CLOUD

Due to the significant amount of memory and processing requirements, today’s deep learning deployments are limited mostly to the cloud.

COSTS OF CLOUD INFERENCE

The deployment involves huge costs for expensive hardware, large amounts of memory, and heavy power consumption due to the intensive computing requirements.

DEEPCUBE’S PILLARS

SPARSIFICATION

A model-agnostic process (works with MLPs, CNNs, and RNNs) that results in a significantly more lightweight model, represented as a sparse graph instead of a dense tensor.

PROPRIETARY INFERENCE MODEL

DeepCube developed its techniques and methods from scratch; they are highly optimized for running sparse deep learning models at inference, resulting in dramatic speedup and memory reduction on any existing hardware.

FULLY AUTOMATED

DeepCube’s platform reduces the size of any deep learning model (any training data and model type) in a completely automated way and without any manual intervention.
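DeepCube's inference methods are proprietary, but the core reason sparsity helps at inference time can be sketched in plain Python. In this toy example (hypothetical layer size, and a random 10% of weights standing in for a sparsified model), a dense layer costs one multiply-add per matrix entry, while a sparse layer only touches the weights that survived sparsification:

```python
import random

random.seed(1)

n = 64
# Dense weight matrix for a hypothetical fully connected layer.
dense = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]

# Stand-in for a sparsified model: keep a random ~10% of the weights as
# (row, col, value) triples -- a sparse graph instead of a dense tensor.
sparse = [(i, j, w) for i, row in enumerate(dense)
          for j, w in enumerate(row) if random.random() < 0.1]

x = [random.gauss(0.0, 1.0) for _ in range(n)]

# Dense inference: every entry costs a multiply-add, n*n operations total.
y_dense = [sum(dense[i][j] * x[j] for j in range(n)) for i in range(n)]

# Sparse inference: only stored weights are touched, so the work (and the
# memory traffic) scales with the number of surviving connections.
y_sparse = [0.0] * n
for i, j, w in sparse:
    y_sparse[i] += w * x[j]

print(n * n, "dense ops vs", len(sparse), "sparse ops")
```

In practice, realizing this theoretical saving on real hardware requires specialized sparse kernels and data layouts, which is the gap a "software accelerator" aims to close.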

DEEP LEARNING WITH DEEPCUBE

Traditional Neural Network (before) vs. Neural Network with DeepCube (after): ~90% size reduction during training.
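DeepCube's training-time method is proprietary, but magnitude pruning is one standard way a model can end up roughly 90% smaller. The sketch below (plain Python, hypothetical layer size) zeroes out the lowest-magnitude weights and stores the survivors as (row, col, value) triples instead of a full dense tensor:

```python
import random

random.seed(0)

rows, cols = 32, 32
# Toy dense weight matrix standing in for one layer of a trained network.
dense = [[random.gauss(0.0, 1.0) for _ in range(cols)] for _ in range(rows)]

# Magnitude pruning: zero out the ~90% of weights with the smallest
# absolute value, keeping only the most significant connections.
flat = sorted(abs(w) for row in dense for w in row)
threshold = flat[int(0.9 * len(flat))]
pruned = [[w if abs(w) >= threshold else 0.0 for w in row] for row in dense]

# Sparse storage: only surviving weights are kept as (row, col, value)
# triples rather than a dense tensor.
sparse = [(i, j, w) for i, row in enumerate(pruned)
          for j, w in enumerate(row) if w != 0.0]

print(len(sparse), "of", rows * cols, "weights survive")
```

Pruning during training (rather than after it) lets the remaining weights adapt and recover accuracy, which is why in-training sparsification can reach high sparsity levels with little quality loss.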

DEEP LEARNING DEPLOYMENT ON ANY DEVICE

SPEED IMPROVEMENT & MEMORY REDUCTION

DeepCube’s proprietary framework can be deployed in data centers and on edge devices on top of any existing hardware, resulting in over 10x speed improvement and memory reduction.

OS & HARDWARE AGNOSTIC

DeepCube’s patented “software accelerator” technology can be deployed on top of any existing hardware (CPU, GPU, ASIC).

EFFICIENT DEPLOYMENT ON EDGE DEVICES

DeepCube provides the only technology that allows efficient deployment of deep learning models on intelligent edge devices, enabling them to make truly autonomous decisions.

USE CASES

SEMICONDUCTORS
DeepCube’s patented “software accelerator” technology enables AI hardware providers to drastically improve their speed.
DATA CENTERS
DeepCube’s inference accelerator applies to large-scale AI deployments across all industries.
EDGE DEVICES
DeepCube’s technology allows the deployment of state-of-the-art deep learning models on edge devices.

TRANSFORM YOUR BUSINESS

With DeepCube, benefit from massive gains in performance.