Tag Archives: Machine learning

The UP AI Core – Mini-PCIe Board For Machine Learning

Billed as the “first embedded ultra-compact artificial intelligence processing card,” the UP AI Core is a mini-PCI Express module built around the same Intel Movidius™ Myriad™ 2 2450 VPU as Intel’s own Neural Compute Stick, bringing artificial intelligence to the edge.

The UP AI Core board

The UP AI Core has 512MB of DDR SDRAM and 4GB of onboard storage, on a standard mini-PCIe board measuring 51 × 30 mm. The onboard Movidius™ chip supports both the TensorFlow and Caffe frameworks, symbolic math libraries widely used for machine learning applications such as neural networks.

To support the board, the host computer needs at least 1GB of RAM and 4GB of free storage space. Right now, only 64-bit x86 boards running Ubuntu 16.04 are fully supported. Nonetheless, that restriction comes from the Movidius™ VPU toolchain rather than from the design of the UP board itself.

However, there has been a lot of effort since the release of the Movidius™ Neural Compute Stick to get it working on the Raspberry Pi, so the module may well work with an Arm-based board that has an appropriate mini-PCIe slot, such as the Pine H64. Without official support, however, it remains limited.

The UP AI Core is now available for $69. It is designed for the UP Core Plus but should work with any single-board computer that has a mini-PCIe interface, although users should check toolchain support for the Movidius™ chip.
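Because the AI Core carries the same Myriad 2 VPU as the Neural Compute Stick, it is programmed through the Movidius NCSDK. A quick way to check that the host sees the module is to enumerate devices with the NCSDK v1 Python module `mvnc` (a sketch: the `summarize` helper is mine, and the script degrades gracefully when the NCSDK is not installed):

```python
# Hedged sketch: probe for Movidius devices via the NCSDK v1 Python API.
def summarize(names):
    """Format the list of enumerated device names for display."""
    if not names:
        return "no Movidius devices found"
    return f"{len(names)} Movidius device(s): " + ", ".join(names)

try:
    from mvnc import mvncapi as mvnc  # installed with the Movidius NCSDK
    names = mvnc.EnumerateDevices()
except ImportError:
    names = []  # NCSDK not installed on this host

print(summarize(names))
```

On a supported Ubuntu 16.04 host with the module seated, the NCSDK should report at least one device; an empty list usually points to a driver or slot issue.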

Specifications for the AI Core

  • SoC: Intel® Movidius™ Myriad™ 2 VPU 2450
  • Supported Frameworks: TensorFlow, Caffe
  • Form Factor: Mini PCI-Express
  • Dimensions: 51 x 30 mm
  • System Requirements:
    • x86_64 computer running Ubuntu 16.04
    • Available mPCI-E slot
    • 1GB RAM
    • 4GB free storage space

More information about the board can be found at UP AI Core’s Order Page.

Google Unveils USB Type-C Version Of Its Edge TPU AI Chip

Google has followed up its Edge TPU machine learning chip announcement by revealing a USB Type-C device that can be plugged into any Linux or Android Things computer, including a Raspberry Pi. The company also published more details on the upcoming Edge TPU development kit, which is built around NXP’s i.MX8M SoC.

Two views of the Edge TPU dev kit
Google’s Edge TPU dev kit

The Edge TPU Accelerator uses the same mini-scaled Edge TPU neural network coprocessor that is built into the upcoming dev kit. It has a USB Type-C port to connect with any Debian Linux or Android Things computer to accelerate machine learning (ML) inferencing for local edge analytics. The 65 x 30mm device has mounting holes for host boards such as a Raspberry Pi Zero.

Same as the Edge TPU development kit, the Edge TPU Accelerator enables the processing of machine learning (ML) inference data directly on-device. This local ML accelerator increases privacy, removes the need for persistent connections, reduces latency, and allows for high performance using less power.

The Edge TPU Accelerator competes with products like Intel’s Neural Compute Stick, previously known as the Fathom, a USB device built around the Movidius Myriad 2 VPU and its neural network accelerator.

The Edge TPU dev kit details

The Edge TPU Accelerator is scheduled to ship in October this year along with the Edge TPU chip and development kit. The computer-on-module that features the Edge TPU will run either Debian Linux or Android Things on NXP’s i.MX8M. The 1.5GHz, Cortex-A53 based i.MX8M integrates a Vivante GC7000Lite GPU and VPU, as well as a 266MHz Cortex-M4 MCU.

The yet-unnamed, 48 x 40mm module will ship with 1GB of LPDDR4 RAM, 8GB of eMMC, dual-band 802.11ac WiFi, and Bluetooth 4.1. The dev kit’s baseboard will add a microSD slot, as well as single USB Type-C OTG, Type-C power (5V input), USB 3.0 host, and micro-USB serial console ports.

The Edge TPU development kit baseboard is further provided with GbE and HDMI 2.0a ports, as well as a 39-pin FPC connector for 4-lane MIPI-DSI and a 24-pin FPC for 4-lane MIPI-CSI2. There’s also a 40-pin expansion connector, but with no claims for Raspberry Pi compatibility. The 85 x 56mm board also provides an audio jack, a digital mic, and a 4-pin terminal for stereo speakers.

More information may be found in the Edge TPU Accelerator announcement, as well as the original Edge TPU announcement.

IoT Projects Are Now Easier With The Bolt IoT Platform

The Internet of Things (IoT) is one of the most important technologies of our time, and it has become a core component of many hardware projects. To make things easier for developers, the Bolt IoT platform has appeared as a complete solution for IoT projects.

Bolt is a combination of hardware and cloud services that lets users control their devices and collect data safely and securely. It can also produce actionable insights using machine learning algorithms with just a few clicks.

The platform consists of three main components: the Bolt hardware module, the Bolt cloud, and analytics. The hardware module is a WiFi chip with a built-in 80 MHz 32-bit RISC CPU that operates at 3.3V. It also serves as an interface to a set of sensors and actuators through GPIO and UART pins, collecting data and acting on it.

Bolt Hardware

The next part is the Bolt cloud, which is used mainly for configuring, monitoring, and controlling connected devices. Its visual interface enables users to set up hardware and prepare the system quickly and easily. In addition, there is a code editor for writing and editing the hardware’s code. A notable feature is that you can reprogram the system remotely!

Finally, the analysis and monitoring unit provides visualized insights based on machine learning algorithms. The collected data is stored securely in the cloud, and reports are presented as graphs, charts, or any customized visualization.
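Bolt modules are driven from the cloud side through a simple HTTP API. The sketch below builds and sends a GPIO command URL; the endpoint pattern follows Bolt’s documented scheme, but treat the exact names (`digitalWrite`, `deviceName`) and the placeholder key/device ID as assumptions to verify against the official docs:

```python
import urllib.request

BASE = "https://cloud.boltiot.com/remote"

def bolt_url(api_key, device_id, command, **params):
    """Build a Bolt Cloud request URL, e.g. a digitalWrite on a GPIO pin."""
    query = "&".join(f"{k}={v}" for k, v in sorted(params.items()))
    return f"{BASE}/{api_key}/{command}?{query}&deviceName={device_id}"

def digital_write(api_key, device_id, pin, state):
    """Send the command to the cloud; the response is a small JSON status."""
    url = bolt_url(api_key, device_id, "digitalWrite", pin=pin, state=state)
    with urllib.request.urlopen(url) as resp:
        return resp.read()

# No network call here -- just show the URL that would be requested:
print(bolt_url("MY_API_KEY", "BOLT12345", "digitalWrite", pin=0, state="HIGH"))
```

Because the device itself stays behind the cloud, the same URL works from any machine that knows the API key, which is what makes remote control and reprogramming possible.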

Bolt IoT Platform Features

  • A WiFi or a GSM chip
    An easy interface to quickly connect your hardware to the cloud over GPIO, UART, and ADC. It also connects to MODBUS, I2C, and SPI with an additional converter.
  • Robust Communication
    Bolt is equipped with industry-standard protocols to ensure secure and fast communication of your device data with the cloud.
  • Security
    Bolt has built-in safeguards to secure all user data from unwanted third party intrusions and hacks.
  • Machine Learning
    Deploy machine learning algorithms with just a few clicks to detect anomalies as well as predict sensor values.
  • Alerts
    Utilize Bolt’s quick alert system, which sends invaluable information directly to your phone or email. You can configure the contact details and set the threshold.
  • Mobile App Ready
    Customize and control your devices through mobile apps. Bolt gives you full freedom to design your own mobile app around your monitoring and control requirements.
  • Global Infrastructure and Easy Scalability
    Bolt lets you scale from a prototype to millions of devices in just a few weeks’ time.
  • Over the air updates
    Simultaneously program or update all your Bolt powered IoT devices wherever they are. Bolt offers you unparalleled scalability and elasticity to help your business grow.
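The anomaly-detection feature listed above can be approximated with a classic z-score check over recent readings (a generic sketch in plain Python, not Bolt’s actual algorithm):

```python
import statistics

def anomaly_bounds(history, z=3.0):
    """Return (low, high) bounds: readings outside mean +/- z*stdev are flagged."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    return mean - z * stdev, mean + z * stdev

def is_anomaly(value, history, z=3.0):
    """True when a new reading falls outside the historical bounds."""
    low, high = anomaly_bounds(history, z)
    return value < low or value > high

temps = [21.0, 21.4, 20.9, 21.2, 21.1, 21.3, 20.8, 21.0]  # sensor history
print(is_anomaly(21.1, temps))  # → False (a normal reading)
print(is_anomaly(35.0, temps))  # → True (a spike outside the bounds)
```

Tying a check like this to a threshold-based alert is essentially what the platform’s "Alerts" feature automates for you.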

The scope of applications that may benefit from Bolt is very wide, including environmental applications, smart cities, electricity management, and much more. Bolt is available for ordering in two packages: one for developers and one for enterprises. The developer package contains one Bolt unit with three free months of cloud services and costs about $75.

Finally, Bolt’s makers launched a Kickstarter campaign on the 3rd of November 2017. If you are interested and want to know more about this platform, take a look at the official website and read this detailed features document. Update 6-11-2017: they achieved their $10,000 USD funding goal in just 5 hours from launch!

Making AI Projects Easier With NVIDIA Jetson

Hardware development boards have become a key enabler of many recent hardware projects. Boards such as the Arduino and Raspberry Pi are great for beginners and hobbyists to kick-start ideas and bring them to reality.

Artificial intelligence and machine learning are the technologies of the future, so it is important to know how the process works and what type of hardware to use. But given the limited computing capabilities of current boards, developers need powerful and easy-to-use tools.

Nvidia provides a good solution with its Jetson boards, siblings to NVIDIA’s Drive PX boards for autonomous driving. The first board, the TX1, was released in November 2015, and Nvidia has now released the more powerful and power-efficient Jetson TX2.

Image credit: Android central

The TX2 is a complete supercomputer on a module: both a development tool and a field-ready module to power AI-based equipment. Developers can build equipment around it, and can also use it directly to run demos and simulations.

Jetson TX2 is built on NVIDIA’s Pascal™ GPU architecture, fabricated on a 16-nanometer FinFET process.

Some of the technical specifications

  • NVIDIA Parker series Tegra X2: 256-core Pascal GPU and two 64-bit Denver CPU cores paired with four Cortex-A57 CPUs in an HMP configuration
  • 8GB of 128-bit LPDDR4 RAM
  • 32GB eMMC 5.1 onboard storage
  • 802.11b/g/n/ac 2×2 MIMO Wi-Fi
  • Bluetooth 4.1
  • USB 3.0 and USB 2.0
  • Gigabit Ethernet
  • SD card slot for external storage
  • SATA 2.0
  • Complete multi-channel PMIC
  • 400 pin high-speed and low-speed industry standard I/O connector
Nvidia Jetson TX1 and TX2 comparison

The TX2 has two performance operating modes: Max-Q and Max-P. Max-Q is the TX2’s energy-efficiency mode: at 7.5W, it clocks the Parker SoC for efficiency over performance (essentially placing it right before the bend in the power/performance curve), with NVIDIA claiming that this mode offers 2x the energy efficiency of the Jetson TX1. In this mode, the TX2 should deliver performance similar to the TX1 in the latter’s maximum performance mode.

Meanwhile, Max-P is the board’s maximum performance mode. In this mode NVIDIA sets the board’s TDP to 15W, allowing the TX2 to hit higher performance at the cost of some energy efficiency. NVIDIA claims that Max-P offers up to 2x the performance of the Jetson TX1, though as GPU clock speeds aren’t double those of the TX1, the gain will vary on an application-by-application basis.
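On the TX2 itself, switching between these modes is done with NVIDIA’s `nvpmodel` utility. A thin Python wrapper might look like this; the mode numbers follow NVIDIA’s published TX2 table, but treat them as assumptions and verify them on your board with `sudo nvpmodel -q`:

```python
import subprocess

# Jetson TX2 power modes as numbered in NVIDIA's nvpmodel documentation
# (an assumption to verify on your board with `sudo nvpmodel -q`).
TX2_MODES = {
    "MAXN": 0,           # both CPU clusters at maximum clocks
    "MAXQ": 1,           # 7.5 W energy-efficiency mode
    "MAXP_CORE_ALL": 2,  # 15 W performance mode, all cores enabled
}

def set_power_mode(name, modes=TX2_MODES):
    """Switch the board's power mode via nvpmodel (needs root privileges)."""
    mode = modes[name]  # raises KeyError for an unknown mode name
    return subprocess.run(["sudo", "nvpmodel", "-m", str(mode)], check=True)

print(TX2_MODES["MAXQ"])  # → 1
```

A deployment script can pick Max-Q for battery-powered field use and Max-P when the extra performance is worth the power budget.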

Image credit: anandtech

Devices such as robots, drones, 360 cameras, and medical equipment can use Jetson for “edge” machine learning. The ability to process data locally and with limited power is useful when connectivity bandwidth is limited or spotty (as in remote locations), when latency is critical (real-time control), or when privacy and security are concerns.

Jetson TX2 is available as a developer kit for $500 at arrow.com. The kit comes with design guides and documentation, and is pre-flashed with a Linux development environment. It also supports the NVIDIA JetPack SDK, which includes the BSP and libraries for deep learning, computer vision, GPU computing, multimedia processing, and more.

Finally, this video compares Jetson TX1 and TX2 boards:

Movidius Deep Learning USB Stick by Intel

Last week, Intel launched the Movidius Neural Compute Stick, which is a deep learning processor on a USB stick.

This USB stick was not an Intel invention. In fact, Intel had acquired Movidius, the company that produced the world’s first deep learning processor on a USB stick last year, built around its Myriad 2 vision processor.

The Neural Compute Stick is based around the Movidius Myriad 2 MA2450, one of the Movidius Myriad 2 family of vision processing units (VPUs). Using this stick allows you to add visual intelligence to applications such as drones and security cameras.

The Movidius Neural Compute Stick enables you to prototype and tune your deep neural network, and its USB form factor connects to existing hosts and other prototyping platforms, while the VPU provides machine learning on a low-power inference engine.

The stick comes into play after you have trained your algorithm and it is ready for real data. All you have to do is translate the trained neural network from the desktop, using the Movidius toolkit, into an embedded application inside the stick; the toolkit then optimizes it to run on the Myriad 2 VPU. Note that your trained network must be compatible with the Caffe deep learning framework.

It is a simple process

  1. Enter a trained Caffe feed-forward Convolutional Neural Network (CNN) into the toolkit
  2. Profile it
  3. Compile a tuned version ready for embedded deployment using the Neural Compute Platform API.
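The steps above end with a compiled graph file that the stick executes at runtime. A minimal runtime sketch using the NCSDK v1 Python module (`mvnc`) might look like the following; the graph file name and the normalization constant are assumptions for illustration, and real deployments use the model’s own input size and preprocessing:

```python
# Hedged sketch of on-stick inference with the NCSDK v1 Python API ("mvnc").
# The "graph" path and mean value are illustrative assumptions.

def preprocess(pixels, mean=127.5):
    """Scale raw 0-255 pixel values into the [-1, 1] range many CNNs expect."""
    return [(p - mean) / mean for p in pixels]

def run_inference(pixels, graph_path="graph"):
    """Run one forward pass on the first attached Neural Compute Stick."""
    import numpy as np
    from mvnc import mvncapi as mvnc  # provided by the Movidius NCSDK

    devices = mvnc.EnumerateDevices()
    if not devices:
        raise RuntimeError("no Neural Compute Stick detected")
    device = mvnc.Device(devices[0])
    device.OpenDevice()
    try:
        with open(graph_path, "rb") as f:        # graph produced by the toolkit
            graph = device.AllocateGraph(f.read())
        tensor = np.array(preprocess(pixels), dtype=np.float16)
        graph.LoadTensor(tensor, "user-object")  # queue input for the VPU
        output, _ = graph.GetResult()            # blocking read of the result
        graph.DeallocateGraph()
    finally:
        device.CloseDevice()
    return output

print(preprocess([0, 127.5, 255]))  # → [-1.0, 0.0, 1.0]
```

The same pattern scales to several sticks on one host: enumerate the devices, allocate one graph per device, and round-robin the input tensors between them.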

An outstanding feature is that the stick can work without any cloud or network connection, allowing smart features to be added to really small devices with low power consumption. This may be one of the revolutionary ideas that brings IoT and machine learning devices together.

Neural Compute Stick Features

  • Supports CNN profiling, prototyping, and tuning workflow
  • All data and power provided over a single USB Type A port
  • Real-time, on device inference – cloud connectivity not required
  • Run multiple devices on the same platform to scale performance
  • Quickly deploy existing CNN models or uniquely trained networks
  • Features the Movidius VPU with energy-efficient CNN processing

“The Myriad 2 VPU housed inside the Movidius Neural Compute Stick provides powerful, yet efficient performance — more than 100 gigaflops of performance within a 1W power envelope — to run real-time deep neural networks directly from the device. This enables a wide range of AI applications to be deployed offline.” — Remi El-Ouazzane, VP and General Manager of Movidius.

At the moment, the stick’s SDK is only available for x86, though there are hints of expanding platform support. Developers are hoping for ARM processor support, since many IoT applications rely on ARM processors; however, that may not happen, given that the stick is an Intel product.

The stick is available for sale now and costs $79. More information about how to get started with the stick is available on the Movidius developer site. Also check this video by Movidius: