Making AI Projects Easier With NVIDIA Jetson

Hardware development boards have become a key enabler of many recent hardware projects. Boards such as the Arduino and Raspberry Pi are great for beginners and hobbyists who want to kick-start an idea and bring it to reality.

Artificial intelligence and machine learning are the technologies of the future, so it is important to know how the process works and what type of hardware to use. With the limited computing capabilities of current boards, developers need powerful and easy-to-use tools.

NVIDIA provides a good solution with its Jetson boards, which are siblings to NVIDIA’s Drive PX boards for autonomous driving. The first board, the TX1, was released in November 2015, and NVIDIA has just released the more powerful and power-efficient Jetson TX2.

Image credit: Android Central

The TX2 is a complete supercomputer: both a development tool and a field-ready module for powering AI-based equipment. Developers can build equipment around it, or use the board itself to run demos and simulations.

Jetson TX2 is built on NVIDIA’s Pascal™ GPU architecture, fabricated on 16-nanometer FinFET process technology.

Some of the technical specifications:

  • NVIDIA Parker series Tegra X2: 256-core Pascal GPU and two 64-bit Denver CPU cores paired with four Cortex-A57 CPUs in an HMP configuration
  • 8GB of 128-bit LPDDR4 RAM
  • 32GB eMMC 5.1 onboard storage
  • 802.11b/g/n/ac 2×2 MIMO Wi-Fi
  • Bluetooth 4.1
  • USB 3.0 and USB 2.0
  • Gigabit Ethernet
  • SD card slot for external storage
  • SATA 2.0
  • Complete multi-channel PMIC
  • 400 pin high-speed and low-speed industry standard I/O connector
NVIDIA Jetson TX1 and TX2 comparison

The TX2 has two performance operating modes: Max-Q and Max-P. Max-Q is the TX2’s energy-efficiency mode: at 7.5 W, it clocks the Parker SoC for efficiency over performance (essentially placing it right before the bend in the power/performance curve), and NVIDIA claims this mode offers 2x the energy efficiency of the Jetson TX1. In Max-Q, the TX2 should deliver performance similar to the TX1 running in its own maximum-performance mode.

Meanwhile, Max-P is the board’s maximum-performance mode. Here NVIDIA sets the board’s TDP to 15 W, allowing the TX2 to hit higher performance at the cost of some energy efficiency. NVIDIA claims that Max-P offers up to 2x the performance of the Jetson TX1, though since GPU clock speeds are not double those of the TX1, actual gains will vary on an application-by-application basis.
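These claims reduce to simple performance-per-watt arithmetic. The sketch below works through the numbers under stated assumptions — normalized performance, TX1 drawing roughly 15 W at full tilt — which are illustrative figures, not official measurements:

```python
# Back-of-the-envelope check of the efficiency claims above.
# Assumptions (illustrative, not official figures): TX1 draws ~15 W
# at max performance; TX2 Max-Q matches TX1 performance at 7.5 W;
# TX2 Max-P doubles TX1 performance at ~15 W.

TX1_PERF = 1.0        # normalized TX1 max performance
TX1_POWER_W = 15.0

MAXQ_PERF = 1.0       # claimed: matches TX1 performance
MAXQ_POWER_W = 7.5

MAXP_PERF = 2.0       # claimed: up to 2x TX1 performance
MAXP_POWER_W = 15.0

def perf_per_watt(perf, watts):
    """Normalized performance per watt."""
    return perf / watts

tx1_eff = perf_per_watt(TX1_PERF, TX1_POWER_W)
maxq_eff = perf_per_watt(MAXQ_PERF, MAXQ_POWER_W)
maxp_eff = perf_per_watt(MAXP_PERF, MAXP_POWER_W)

print(f"Max-Q efficiency vs TX1: {maxq_eff / tx1_eff:.1f}x")  # 2.0x
print(f"Max-P efficiency vs TX1: {maxp_eff / tx1_eff:.1f}x")  # 2.0x
```

Under these assumptions both modes come out at 2x the TX1’s performance per watt — Max-Q by halving the power, Max-P by doubling the throughput.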

Image credit: AnandTech

Devices such as robots, drones, 360 cameras, and medical equipment can use Jetson for “edge” machine learning. The ability to process data locally and within a limited power envelope is useful where connectivity bandwidth is limited or spotty (as in remote locations), where latency is critical (real-time control), or where privacy and security are concerns.

Jetson TX2 is available as a developer kit for $500 at arrow.com. The kit comes with design guides and documentation, and is pre-flashed with a Linux development environment. It also supports the NVIDIA JetPack SDK, which includes the board support package (BSP) and libraries for deep learning, computer vision, GPU computing, multimedia processing, and more.

Finally, this video compares Jetson TX1 and TX2 boards:

Intel Introduces Loihi – A Self Learning Processor That Mimics Brain Functions

Intel has developed a first-of-its-kind self-learning neuromorphic chip, codenamed Loihi. It mimics how animal brains function, learning to operate based on various modes of feedback from the environment. Unlike convolutional neural network (CNN) and other deep-learning processors, Loihi uses an asynchronous spiking model to mimic neuron and synapse behavior, a much closer analog to the behavior of animal brains.

Loihi – Intel’s self-learning chip

Machine-learning models based on CNNs rely on large training sets to learn to recognize objects and events. Loihi, an extremely energy-efficient chip, instead uses incoming data to learn and make inferences: it gets smarter over time and does not need to be trained in the traditional way. The chip includes digital circuits that mimic the brain’s basic mechanics, making machine learning faster and more efficient while requiring far less computing power.

The chip offers highly flexible on-chip learning and combines training and inference on a single chip. This allows machines to be autonomous and to adapt in real time instead of waiting for the next update from the cloud. Compared to convolutional and deep-learning neural networks, the Loihi test chip uses far fewer resources for the same task, and researchers have demonstrated learning at a rate 1 million times faster than that of other typical neural networks.
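To make the spiking idea concrete, here is a toy leaky integrate-and-fire neuron in plain Python, with a crude spike-driven weight update standing in for on-chip learning. This is purely illustrative — it is not Intel’s implementation, and every parameter here is made up:

```python
# Toy illustration of the spiking-neuron idea behind chips like Loihi.
# NOT Intel's implementation: a minimal leaky integrate-and-fire (LIF)
# neuron plus a simple spike-driven weight update, to show how a
# synapse can adapt during operation rather than in a training phase.

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold  # potential needed to fire
        self.leak = leak            # fraction of potential kept each step
        self.potential = 0.0

    def step(self, current):
        """Integrate input current; return True if the neuron spikes."""
        self.potential = self.potential * self.leak + current
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return True
        return False

def run(spike_train, weight, lr=0.05):
    """Feed a binary input spike train through one synapse + neuron.

    When input and output spikes coincide, strengthen the synapse —
    a crude stand-in for spike-timing-dependent plasticity."""
    neuron = LIFNeuron()
    out_spikes = []
    for s in spike_train:
        fired = neuron.step(s * weight)
        out_spikes.append(fired)
        if fired and s:             # co-active -> potentiate
            weight += lr * (1.0 - weight)
    return out_spikes, weight

spikes, learned_w = run([1, 1, 0, 1, 1, 1, 0, 1], weight=0.4)
print("output spikes:", sum(spikes), "final weight:", learned_w)
```

Each input spike nudges the membrane potential up while the leak pulls it down; only when spikes arrive close together does the neuron fire, and each co-active firing strengthens the synapse — learning happens inline with inference, which is the property the Loihi paragraphs above describe.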

The self-learning capabilities prototyped by this test chip have huge potential to improve automotive and industrial applications as well as personal robotics: any application that would benefit from autonomous operation and continuous learning in an unstructured environment, such as an autonomous vehicle recognizing the movement of a car or bike. More importantly, the chip is up to 1,000 times more energy-efficient than general-purpose computing.

Loihi test chip features:

  • Fully asynchronous neuromorphic many-core mesh.
  • Each neuron capable of communicating with thousands of other neurons.
  • Each neuromorphic core includes a learning engine that can be programmed to adapt network parameters during operation.
  • Fabrication on Intel’s 14 nm process technology.
  • A total of 130,000 neurons and 130 million synapses.
  • Development and testing of several algorithms with high algorithmic efficiency for problems including path planning, constraint satisfaction, sparse coding, dictionary learning, and dynamic pattern learning and adaptation.

Movidius Deep Learning USB Stick by Intel

Last week, Intel launched the Movidius Neural Compute Stick, which is a deep learning processor on a USB stick.

The USB stick was not originally an Intel invention: Intel acquired Movidius, the company that last year produced the world’s first deep-learning processor on a USB stick, built around its Myriad 2 vision processor.

The Neural Compute Stick is based around the Movidius MA2150, the entry-level chip in the Movidius Myriad 2 family of vision processing units (VPUs). The stick lets you add visual intelligence to applications such as drones and security cameras.

The Movidius Neural Compute Stick’s compact form factor makes it easy to prototype and tune deep neural networks, and its USB connector plugs into existing hosts and other prototyping platforms. At the same time, the VPU provides machine learning through a low-power inference engine.

The stick comes into play after training, when your network is ready to run on real data. You translate the trained neural network from the desktop into an embedded application on the stick using the Movidius toolkit, which optimizes it to run on the Myriad 2 VPU. Note that the trained network must be compatible with the Caffe deep-learning framework.

It is a simple process:

  1. Enter a trained Caffe feed-forward convolutional neural network (CNN) into the toolkit.
  2. Profile it.
  3. Compile a tuned version ready for embedded deployment using the Neural Compute Platform API.
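On the deployment side, the flow looks roughly like the sketch below. The `mvnc` API names follow Movidius’ published NCSDK samples, and the compiled `graph` file is assumed to already exist from the toolkit step above — treat this as a sketch, since actually running `infer()` requires a stick plugged in:

```python
# Hedged sketch: running a compiled network on a Neural Compute Stick.
# Assumes a 'graph' file already compiled by the Movidius toolkit from
# a trained Caffe model. Only preprocess() runs without hardware.

import numpy as np

def preprocess(image):
    """Scale 8-bit pixels to [0, 1] and cast to FP16, since the
    Myriad 2 VPU runs inference in half precision."""
    return (np.asarray(image, dtype=np.float32) / 255.0).astype(np.float16)

def infer(graph_path, image):
    """Run one forward pass on the first attached stick (NCSDK v1 API)."""
    from mvnc import mvncapi as mvnc      # imported here: hardware-only
    devices = mvnc.EnumerateDevices()
    if not devices:
        raise RuntimeError("No Neural Compute Stick found")
    device = mvnc.Device(devices[0])
    device.OpenDevice()
    with open(graph_path, "rb") as f:
        graph = device.AllocateGraph(f.read())
    graph.LoadTensor(preprocess(image), "user object")
    output, _ = graph.GetResult()         # blocks until inference completes
    graph.DeallocateGraph()
    device.CloseDevice()
    return output

# The preprocessing step works anywhere:
x = preprocess(np.full((224, 224, 3), 128, dtype=np.uint8))
print(x.dtype, x.shape)
```

With hardware attached, `infer("graph", frame)` would return the network’s output tensor; without it, only the preprocessing helper is exercised.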

An outstanding feature is that the stick can work without any cloud or network connection, making it possible to add smart features to very small, low-power devices. This capability may be one of the revolutionary ideas that brings IoT and machine-learning devices together.

Neural Compute Stick Features

  • Supports CNN profiling, prototyping, and tuning workflow
  • All data and power provided over a single USB Type A port
  • Real-time, on-device inference – cloud connectivity not required
  • Run multiple devices on the same platform to scale performance
  • Quickly deploy existing CNN models or uniquely trained networks
  • Features the Movidius VPU with energy-efficient CNN processing

“The Myriad 2 VPU housed inside the Movidius Neural Compute Stick provides powerful, yet efficient performance — more than 100 gigaflops of performance within a 1W power envelope — to run real-time deep neural networks directly from the device. This enables a wide range of AI applications to be deployed offline.” — Remi El-Ouazzane, VP and General Manager of Movidius.

At the moment, the stick’s SDK is only available for x86, though there are hints that platform support will expand. Developers are hoping for ARM support, since many IoT applications rely on ARM processors; however, this may not happen, given that the stick is an Intel product.

The stick is available for sale now and costs $79. More information about how to get started is available on the Movidius developer site. Also check out this video by Movidius: