Intel Introduces Loihi – A Self Learning Processor That Mimics Brain Functions

Intel has developed a first-of-its-kind self-learning neuromorphic chip – codenamed Loihi. It mimics brain function by learning to operate based on various modes of feedback from the environment. Unlike convolutional neural network (CNN) and other deep learning processors, Intel’s Loihi uses an asynchronous spiking model to mimic neuron and synapse behavior, a much closer analog to how animal brains work.

Loihi – Intel’s self-learning chip

Machine learning models based on CNNs require large training sets to learn to recognize objects and events. This extremely energy-efficient chip instead uses incoming data to learn and make inferences, getting smarter over time without needing to be trained in the traditional way. The Loihi chip includes digital circuits that mimic the brain’s basic mechanics, making machine learning faster and more efficient while requiring much less computing power.

The chip offers highly flexible on-chip learning and combines training and inference on a single chip. This allows machines to be autonomous and to adapt in real time instead of waiting for the next update from the cloud. On the same tasks, the Loihi test chip uses many fewer resources than convolutional and deep learning neural networks. Researchers have demonstrated learning at a rate roughly 1 million times faster than that of other typical neural network devices.
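To make the spiking idea concrete, here is a minimal sketch (plain Python, not Loihi's actual silicon model) of a leaky integrate-and-fire neuron, the classic abstraction behind spiking hardware: the membrane potential accumulates input, "leaks" over time, and emits a discrete spike when it crosses a threshold.

```python
def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Simulate one leaky integrate-and-fire neuron over discrete steps.

    Each step, the membrane potential decays by the leak factor and
    integrates the incoming current. Crossing the threshold emits a
    spike (1) and resets the potential -- a crude sketch of the
    event-driven behavior neuromorphic chips model in hardware.
    """
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

# A weak steady input makes the neuron fire rarely; a strong one, often.
weak = simulate_lif([0.3] * 10)
strong = simulate_lif([0.8] * 10)
print(sum(weak), sum(strong))  # → 2 5
```

Information is carried in the timing and rate of these spikes rather than in dense matrix multiplications, which is why spiking hardware can be so power-efficient: a silent neuron costs almost nothing.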

The self-learning capabilities prototyped by this test chip have huge potential to improve automotive and industrial applications as well as personal robotics – any application that would benefit from autonomous operation and continuous learning in an unstructured environment, such as recognizing the movement of a car or a bike from an autonomous vehicle. More importantly, the chip is up to 1,000 times more energy-efficient than general-purpose computing.

Features

  • Fully asynchronous neuromorphic many-core mesh.
  • Each neuron capable of communicating with thousands of other neurons.
  • Each neuromorphic core includes a learning engine that can be programmed to adapt network parameters during operation.
  • Fabrication on Intel’s 14 nm process technology.
  • A total of 130,000 neurons and 130 million synapses.
  • Development and testing of several algorithms with high algorithmic efficiency for problems including path planning, constraint satisfaction, sparse coding, dictionary learning, and dynamic pattern learning and adaptation.
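The per-core learning engine mentioned above adapts synaptic weights as spikes arrive. Loihi's learning rules are programmable, but a textbook example of the kind of rule involved is spike-timing-dependent plasticity (STDP): a synapse is strengthened when the presynaptic neuron fires just before the postsynaptic one, and weakened when it fires just after. The sketch below is a generic pair-based STDP update, not Intel's actual implementation.

```python
import math

def stdp_update(weight, pre_spike_t, post_spike_t,
                lr=0.1, tau=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP weight update (generic textbook rule).

    dt > 0 means the presynaptic spike preceded the postsynaptic one
    (causal pair), so the synapse is potentiated; dt < 0 depresses it.
    The magnitude of the change decays exponentially with the gap.
    """
    dt = post_spike_t - pre_spike_t
    if dt > 0:        # pre before post: strengthen
        weight += lr * math.exp(-dt / tau)
    elif dt < 0:      # post before pre: weaken
        weight -= lr * math.exp(dt / tau)
    return min(max(weight, w_min), w_max)  # clamp to valid range

w_causal = stdp_update(0.5, pre_spike_t=10.0, post_spike_t=12.0)
w_acausal = stdp_update(0.5, pre_spike_t=12.0, post_spike_t=10.0)
print(round(w_causal, 3), round(w_acausal, 3))  # → 0.59 0.41
```

Because the update depends only on locally observed spike times, a rule like this can run continuously on-chip during operation – exactly the property that lets Loihi combine training and inference without a cloud round-trip.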

Movidius Deep Learning USB Stick by Intel

Last week, Intel launched the Movidius Neural Compute Stick, which is a deep learning processor on a USB stick.

The USB stick itself was not an Intel invention: Intel acquired Movidius, the company that last year produced the world’s first deep learning processor on a USB stick, built around its Myriad 2 vision processor.

The Neural Compute Stick is based around the Movidius MA2150, the entry-level chip in the Movidius Myriad 2 family of vision processing units (VPUs). The stick lets you add artificial visual intelligence to applications such as drones and security cameras.

The Movidius Neural Compute Stick’s form factor enables you to prototype and tune your deep neural network, and its USB connector plugs into existing hosts and other prototyping platforms. At the same time, the VPU provides machine learning on a low-power inference engine.

The stick comes into play after you have trained your algorithm and it is ready to face real data. All you have to do is translate your trained neural network from the desktop into an embedded application on the stick using the Movidius toolkit, which optimizes the network to run on the Myriad 2 VPU. Note that your trained network must be compatible with the Caffe deep learning framework.

It is a simple process:

  1. Feed a trained Caffe feed-forward convolutional neural network (CNN) into the toolkit.
  2. Profile it.
  3. Compile a tuned version ready for embedded deployment using the Neural Compute Platform API.
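Part of what the compile step has to do is bridge numeric formats: desktop frameworks like Caffe train in 32-bit floating point, while the Myriad 2 VPU computes in 16-bit floating point. As a toy illustration of that conversion (this is the general concept, not the actual toolkit's behavior), the snippet below casts some hypothetical trained weights down to fp16 and measures the rounding cost.

```python
import numpy as np

# Hypothetical trained layer weights, float32 as a desktop framework
# like Caffe would store them.
rng = np.random.default_rng(0)
weights_fp32 = rng.standard_normal((64, 128)).astype(np.float32)

# An embedded compile step converts weights to the VPU's native fp16.
weights_fp16 = weights_fp32.astype(np.float16)

# Measure how much precision the down-conversion costs.
max_err = np.max(np.abs(weights_fp32 - weights_fp16.astype(np.float32)))
print(f"max absolute rounding error: {max_err:.6f}")
```

For weights on the order of 1, fp16's ~10-bit mantissa keeps the rounding error around a thousandth or less, which is why inference (as opposed to training) usually tolerates the reduced precision well.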

An outstanding feature is that the stick works without any cloud or network connection, making it possible to add smart features to very small, low-power devices. This may be one of the revolutionary ideas that brings IoT and machine learning together.

Neural Compute Stick Features

  • Supports CNN profiling, prototyping, and tuning workflow
  • All data and power provided over a single USB Type A port
  • Real-time, on device inference – cloud connectivity not required
  • Run multiple devices on the same platform to scale performance
  • Quickly deploy existing CNN models or uniquely trained networks
  • Features the Movidius VPU with energy-efficient CNN processing

“The Myriad 2 VPU housed inside the Movidius Neural Compute Stick provides powerful, yet efficient performance — more than 100 gigaflops of performance within a 1W power envelope — to run real-time deep neural networks directly from the device. This enables a wide range of AI applications to be deployed offline.” — Remi El-Ouazzane, VP and General Manager of Movidius.

At the moment, the stick’s SDK is only available for x86, though there are hints of broader platform support to come. Developers are hoping for ARM support in particular, since many IoT applications rely on ARM processors. However, this may not be possible, given that the stick is an Intel product.

The stick is available for sale now and costs $79. More information about getting started is available on the Movidius developer site, along with an introductory video by Movidius.