Neuronix AI Labs: Pioneering a Smarter Way to Use Neural Network Sparsity for AI-Based Computer Vision

Yaron Raz, Co-Founder & CEO; Asher Hazanchuk, Co-Founder & CTO
Sleep has been shown to help the brain's synapses, the connections among neurons, shrink back by nearly 20 percent. In this state of slumber, the brain distinguishes between significant and insignificant connections and prunes the less important ones to save energy and space.

A similar mechanism, known as neural network sparsity, is mimicked in artificial neural networks to reduce storage, communication, and computation requirements. AI-based inference for computer vision demands enormous numbers of mathematical operations to perform tasks such as image classification, object detection, and semantic segmentation. Against this backdrop, Israel-based Neuronix AI Labs demonstrates that neural network sparsity can eliminate 90 percent of the weights and matrix multiplications in deep neural networks, enabling inference devices to use 90 percent fewer resources and perform more than ten times as many calculations.
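
As a back-of-the-envelope sketch (illustrative numbers only, not drawn from Neuronix AI Labs' published benchmarks), the arithmetic behind that claim is simple: if 90 percent of a layer's weights are zero and the hardware can skip the corresponding multiplications, only a tenth of the multiply-accumulate operations remain.

```python
# Hypothetical dense layer: 1,024 inputs x 1,024 outputs (illustrative numbers only)
in_features, out_features = 1024, 1024
total_macs = in_features * out_features            # multiply-accumulates per input vector

sparsity = 0.90                                    # 90% of weights pruned to zero
effective_macs = int(total_macs * (1 - sparsity))  # only nonzero weights contribute

print(f"dense MACs:  {total_macs:,}")
print(f"sparse MACs: {effective_macs:,} (~{total_macs // effective_macs}x fewer)")
```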

Neural network sparsity is achieved through a process called network pruning, which zeros out the large portion of the network weights that have a negligible effect on the overall result. Reaching 90 percent sparsity or more requires “unstructured pruning,” which leaves a set of weights that is sporadic in nature and lacks any regular pattern. Because most GPUs, CPUs, and accelerators are designed to perform bulk calculations in parallel, they cannot skip or reuse the multiplications by zero and therefore cannot turn sparsity into real performance gains.

To overcome this persistent industry problem, Neuronix AI Labs has built an efficient and flexible neural network accelerator that exploits sparsity to deliver a 90 percent reduction in cost and power consumption for AI-based computer vision. It dramatically reduces the number of neural network multiplications while maintaining the same level of accuracy, and it further cuts multiplications and memory accesses with additional techniques that augment sparsity and reduce the resources required. Neuronix enables next-generation chip vendors to build their own custom silicon without having to research and develop in-house accelerator solutions, and it lets computer vision device and system vendors use off-the-shelf hardware accelerators known as FPGAs to accelerate computer vision inference.
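
To make the pruning step concrete, here is a minimal sketch of magnitude-based unstructured pruning in NumPy; it illustrates the general technique only and is not Neuronix AI Labs' proprietary method:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.9) -> np.ndarray:
    """Zero out the `sparsity` fraction of weights with the smallest magnitudes."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask

# Example: prune a 512x512 weight matrix to roughly 90% sparsity
w = np.random.randn(512, 512).astype(np.float32)
w_sparse = magnitude_prune(w, sparsity=0.9)
print(f"sparsity: {1.0 - np.count_nonzero(w_sparse) / w_sparse.size:.1%}")
```

The surviving weights land in no regular pattern, which is precisely why conventional parallel hardware cannot skip the zeros without purpose-built support.
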
By cutting the number of calculations performed in silicon, Neuronix AI Labs allows semiconductor manufacturers and computer vision system vendors to improve performance, deploy more robust neural networks, significantly reduce power consumption, and extend battery life.

“We have designed a neural network accelerator from the ground up that takes maximum advantage of sparsity,” says Yaron Raz, Co-Founder & CEO of Neuronix AI Labs.

Neuronix AI Labs' solution is offered as an IP core that can be easily integrated into next-generation ASICs or System-on-Chip (SoC) devices, or run on hardware acceleration devices such as Field-Programmable Gate Arrays (FPGAs), which are available as acceleration cards, embedded devices, or public cloud instances. Such FPGAs are often used in industrial computer vision devices, surveillance, and smart city and smart retail applications. The company provides an architectural solution that integrates with the customer's existing AI tools and processes. Neuronix AI Labs supplies clients with an optimized version of their neural network so they can evaluate its performance against their existing tools, ensuring higher performance gains and rapid deployment of devices to market. Its tools interface with existing workflows such as TensorFlow and PyTorch, commonly used for developing neural networks, to ingest a network, apply the required modifications, and then adapt it to Neuronix AI Labs' accelerator.
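
As an illustration of how that kind of workflow can look on the PyTorch side (a generic sketch under assumed settings, not Neuronix AI Labs' actual toolchain), unstructured pruning can be applied to an existing model before it is handed to an accelerator-specific backend:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# A stand-in for a customer's existing computer vision model
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
)

# Apply 90% L1-magnitude unstructured pruning to every convolution layer
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.9)
        prune.remove(module, "weight")  # bake the zeros into the weight tensor

zeros = sum((m.weight == 0).sum().item() for m in model.modules() if isinstance(m, nn.Conv2d))
total = sum(m.weight.numel() for m in model.modules() if isinstance(m, nn.Conv2d))
print(f"overall conv-weight sparsity: {zeros / total:.1%}")
```

From there, a vendor-specific compiler would typically map the pruned network onto the target accelerator.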


We have designed a neural network accelerator from the ground up that takes maximum advantage of sparsity

With its expertise in machine learning, computer vision, video compression, parallel computing, software optimization, and FPGA/ASIC design, Neuronix AI Labs enables the semiconductor, surveillance, smart city, automotive, retail, and other industries to cut power consumption. For instance, Neuronix AI Labs' solutions help retail shops significantly extend the life cycle of battery-powered shelf cameras: by reducing power usage by 90 percent, they greatly extend battery life.

Looking to the future, Neuronix AI Labs is exploring new architectures that aim to provide strong support for sparse computation. Its innovations are opening the door to computer vision on end devices at 90 percent lower cost and power.

The company is working with leading FPGA providers, chip vendors, computer vision system providers, and ASIC IP vendors on integrating the technology into next-generation designs, as well as on offering joint solutions.

Company
Neuronix AI Labs

Headquarters
San Francisco Bay Area and Israel

Management
Yaron Raz, Co-Founder & CEO; Asher Hazanchuk, Co-Founder & CTO

Description
Neuronix AI Labs offers an efficient and flexible neural network accelerator that uses sparsity to deliver a 90 percent reduction in cost and power consumption for AI-based computer vision. The accelerator significantly reduces neural network multiplications while maintaining the same level of accuracy. The company also enables next-generation chip vendors to build their own custom silicon without having to research and develop in-house accelerator solutions. By reducing the number of calculations performed in silicon, Neuronix AI Labs allows semiconductor manufacturers to improve performance and save power.

Neuronix AI Labs