Tag Archives: Machine learning

IoT Projects Are Now Easier With the Bolt IoT Platform

The Internet of Things (IoT) is one of the most important technologies today and has become an essential component at the core of many hardware projects. To make it easier for developers, the Bolt IoT platform has appeared as a complete solution for IoT projects.

Bolt is a combination of hardware and a cloud service that allows users to control their devices and collect data safely and securely. It can also provide actionable insights using machine learning algorithms with just a few clicks.

The platform consists of three main components: the Bolt hardware module, the Bolt cloud, and analytics. The hardware module is a WiFi chip with a built-in 80 MHz 32-bit RISC CPU that operates at 3.3 V. It also acts as an interface to a set of sensors and actuators through GPIO and UART pins, collecting data and reacting to it.

Bolt Hardware

The next part is the Bolt cloud, which is used mainly for configuring, monitoring, and controlling connected devices. Its visual interface enables users to set up hardware and prepare the system quickly and easily. In addition, there is a code editor to write and edit code for the hardware. A special feature is that you can reprogram the system remotely!
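
To give a feel for how device control over the Bolt cloud works, here is a minimal Python sketch using the requests library. The endpoint pattern follows the general form of Bolt's documented Cloud API, but the API key, device name, and pin values are placeholders, so treat this as an assumption-laden illustration rather than copy-paste code.

```python
# Minimal sketch: reading a sensor and toggling a pin through the Bolt Cloud
# HTTP API. API key, device name, and pin values are illustrative placeholders.
import requests

API_KEY = "your-bolt-cloud-api-key"   # placeholder
DEVICE = "BOLT1234567"                # placeholder device name
BASE = f"https://cloud.boltiot.com/remote/{API_KEY}"

def analog_read(pin: str = "A0") -> dict:
    """Read an analog value, e.g. a sensor wired to pin A0."""
    r = requests.get(f"{BASE}/analogRead",
                     params={"pin": pin, "deviceName": DEVICE})
    return r.json()

def digital_write(pin: str, state: str) -> dict:
    """Drive a GPIO pin HIGH or LOW, e.g. to switch a relay."""
    r = requests.get(f"{BASE}/digitalWrite",
                     params={"pin": pin, "state": state, "deviceName": DEVICE})
    return r.json()

if __name__ == "__main__":
    print(analog_read("A0"))
    print(digital_write("0", "HIGH"))
```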

Finally, the analysis and monitoring unit provides visual insights based on machine learning algorithms. The collected data is stored securely in the cloud, and reports are presented as graphs, charts, or any customized visualization.

Bolt IoT Platform Features

  • A WiFi or GSM chip
    An easy interface to quickly connect your hardware to the cloud over GPIO, UART, and ADC. It also connects to MODBUS, I2C, and SPI with an additional converter.
  • Robust Communication
Bolt is equipped with industry-standard protocols to ensure secure and fast communication of your device data with the cloud.
  • Security
Bolt has built-in safeguards to secure all user data from unwanted third-party intrusions and hacks.
  • Machine Learning
    Deploy machine learning algorithms with just a few clicks to detect anomalies as well as predict sensor values.
  • Alerts
Utilize Bolt’s quick alert system, which sends invaluable information directly to your phone or email. You can configure the contact details and set the threshold (see the sketch after this list).
  • Mobile App Ready
    Customize and control your devices through Mobile apps. Bolt gives you full freedom to design your own mobile app centered around your requirements to monitor and control.
  • Global Infrastructure and Easy Scalability
Bolt lets you scale from a prototype to millions of devices in just a few weeks' time.
  • Over the air updates
    Simultaneously program or update all your Bolt powered IoT devices wherever they are. Bolt offers you unparalleled scalability and elasticity to help your business grow.
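
As a hedged illustration of the alerts idea from the client side, the sketch below polls a sensor reading through the Bolt Cloud HTTP API and sends an email when a threshold is crossed. The endpoint pattern, API key, device name, threshold, and SMTP details are all placeholder assumptions, not values from Bolt's documentation.

```python
# Minimal sketch: client-side threshold alert polling a Bolt-connected sensor.
# API key, device name, threshold, and SMTP host are placeholder assumptions.
import time
import smtplib
from email.message import EmailMessage

import requests

API_KEY = "your-bolt-cloud-api-key"
DEVICE = "BOLT1234567"
THRESHOLD = 600   # raw ADC value to alert on (placeholder)

def read_sensor() -> int:
    url = f"https://cloud.boltiot.com/remote/{API_KEY}/analogRead"
    resp = requests.get(url, params={"pin": "A0", "deviceName": DEVICE}).json()
    return int(resp["value"])

def send_alert(value: int) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"Sensor threshold crossed: {value}"
    msg["From"] = "alerts@example.com"
    msg["To"] = "you@example.com"
    msg.set_content(f"Pin A0 reported {value} (threshold {THRESHOLD}).")
    with smtplib.SMTP("smtp.example.com") as server:  # placeholder SMTP host
        server.send_message(msg)

while True:
    reading = read_sensor()
    if reading > THRESHOLD:
        send_alert(reading)
    time.sleep(60)   # poll once a minute
```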

The scope of applications that may benefit from Bolt is very wide, including environmental applications, smart cities, electricity management, and much more. Bolt is available for ordering in two packages: one for developers and one for enterprises. The developer package contains one Bolt unit with three free months of cloud services and costs about $75.

Finally, Bolt's makers are launching a Kickstarter campaign on 3 November 2017. If you are interested and want to know more about this platform, take a look at the official website and read the detailed features document. Update 6-11-2017: they reached their $10,000 funding goal just 5 hours after launch!

Making AI Projects Easier With NVIDIA Jetson

Hardware development boards have become a key enabler for many recent hardware projects. Boards such as the Arduino and Raspberry Pi are great for beginners and hobbyists to kick-start their ideas and bring them to reality.

Artificial intelligence and machine learning are the technologies of the future, so it is important to know how the development process works and what type of hardware to use. With the limited computing capabilities of current boards, however, developers need more powerful yet easy-to-use tools.

Nvidia provides a good solution with its Jetson boards, which are siblings to NVIDIA's Drive PX boards for autonomous driving. The first board, the TX1, was released in November 2015, and Nvidia has now released the more powerful and power-efficient Jetson TX2.

Image credit: Android Central

The TX2 is a complete supercomputer on a module. It is both a development tool and a field-ready module for powering AI-based equipment. Developers can build equipment around it and also use the module itself to run demos and simulations.

Jetson TX2 is based on NVIDIA's Pascal™ GPU architecture and is built on 16-nanometer FinFET fabrication technology.

Some of the technical specifications:

  • NVIDIA Parker series Tegra X2: 256-core Pascal GPU and two 64-bit Denver CPU cores paired with four Cortex-A57 CPUs in an HMP configuration
  • 8GB of 128-bit LPDDR4 RAM
  • 32GB eMMC 5.1 onboard storage
  • 802.11b/g/n/ac 2×2 MIMO Wi-Fi
  • Bluetooth 4.1
  • USB 3.0 and USB 2.0
  • Gigabit Ethernet
  • SD card slot for external storage
  • SATA 2.0
  • Complete multi-channel PMIC
  • 400 pin high-speed and low-speed industry standard I/O connector

Nvidia Jetson TX1 and TX2 comparison

The TX2 has two performance operating modes: Max-Q and Max-P. Max-Q is the TX2's energy-efficiency mode. At 7.5 W, it clocks the Parker SoC for efficiency over performance (essentially placing it right before the bend in the power/performance curve), with NVIDIA claiming that this mode offers twice the energy efficiency of the Jetson TX1. In this mode, the TX2 should deliver performance similar to the TX1 running in the latter's maximum-performance mode.

Meanwhile, the board's Max-P mode is its maximum-performance mode. In this mode NVIDIA sets the board's TDP to 15 W, allowing the TX2 to hit higher performance at the cost of some energy efficiency. NVIDIA claims that Max-P offers up to twice the performance of the Jetson TX1, though since GPU clock speeds aren't double the TX1's, the gain will vary on an application-by-application basis.
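
To switch between these modes in practice, the TX2's nvpmodel utility is used. Below is a minimal Python sketch that wraps it; the mode-number mapping shown (0 = MAXN, 1 = Max-Q, 2 = Max-P with all cores) is the commonly cited TX2 mapping and should be verified on your own board with nvpmodel -q.

```python
# Minimal sketch: switching Jetson TX2 power modes via the nvpmodel utility.
# Mode numbers follow the commonly documented TX2 mapping (verify with nvpmodel -q).
import subprocess

MODES = {
    "maxn": 0,   # all cores, maximum clocks
    "maxq": 1,   # ~7.5 W energy-efficiency mode
    "maxp": 2,   # ~15 W maximum-performance mode (all cores)
}

def query_mode() -> str:
    """Return the currently active power mode as reported by nvpmodel."""
    result = subprocess.run(["sudo", "nvpmodel", "-q"],
                            capture_output=True, text=True)
    return result.stdout

def set_mode(name: str) -> None:
    """Select a power mode by name, e.g. set_mode('maxq')."""
    subprocess.run(["sudo", "nvpmodel", "-m", str(MODES[name])], check=True)

if __name__ == "__main__":
    print(query_mode())
```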

Image credit: AnandTech

Devices such as robots, drones, 360-degree cameras, and medical equipment can use Jetson for “edge” machine learning. The ability to process data locally with limited power is useful when connectivity bandwidth is limited or spotty (as in remote locations), when latency is critical (real-time control), or where privacy and security are a concern.
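
As a rough illustration of on-device ("edge") inference with no cloud round-trip, here is a minimal sketch that classifies a single camera frame locally. It assumes OpenCV and a CUDA-enabled PyTorch/torchvision build are installed on the Jetson; those packages are an assumption of this sketch, not something the board ships with by default.

```python
# Minimal sketch: local ("edge") image classification on a Jetson-class device.
# Assumes OpenCV and a CUDA-enabled PyTorch/torchvision install; no cloud needed.
import cv2
import torch
from torchvision import models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.squeezenet1_1(pretrained=True).eval().to(device)  # small CNN

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

cap = cv2.VideoCapture(0)            # on-board or USB camera
ok, frame = cap.read()
if ok:
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    batch = preprocess(rgb).unsqueeze(0).to(device)
    with torch.no_grad():
        class_id = int(model(batch).argmax())
    print("Predicted ImageNet class id:", class_id)
cap.release()
```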

The Jetson TX2 is available as a developer kit for $500 at arrow.com. The kit comes with design guides and documentation, and is pre-flashed with a Linux development environment. It also supports the NVIDIA JetPack SDK, which includes the BSP and libraries for deep learning, computer vision, GPU computing, multimedia processing, and more.

Finally, this video compares Jetson TX1 and TX2 boards:

Movidius Deep Learning USB Stick by Intel

Last week, Intel launched the Movidius Neural Compute Stick, which is a deep learning processor on a USB stick.

This USB stick was not originally an Intel invention. In fact, Intel had acquired Movidius, the company that produced the world's first deep learning processor on a USB stick last year, built around its Myriad 2 vision processor.

The Neural Compute Stick is based on the Movidius MA2150, the entry-level chip in the Movidius Myriad 2 family of vision processing units (VPUs). Using this stick allows you to add visual intelligence to applications such as drones and security cameras.

The Movidius Neural Compute Stick enables you to prototype and tune your deep neural network. Its USB form factor connects to existing hosts and other prototyping platforms, while the VPU provides machine learning on a low-power inference engine.

The stick's role begins after you have trained your algorithm and it is ready to try real data. All you have to do is translate your trained neural network from the desktop into an embedded application on the stick using the Movidius toolkit. The toolkit then optimizes the network to run on the Myriad 2 VPU. Note that your trained network should be compatible with the Caffe deep learning framework.

It is a simple process (sketched in code after this list):

  1. Enter a trained Caffe feed-forward Convolutional Neural Network (CNN) into the toolkit.
  2. Profile it.
  3. Compile a tuned version ready for embedded deployment using the Neural Compute Platform API.
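
As a hedged sketch of what the deployment step can look like, the code below uses the NCSDK's Python API (the mvnc module) to load a graph compiled by the toolkit and run one inference on the stick. The graph filename and input shape are placeholders; consult the Movidius developer site for the exact, current API.

```python
# Minimal sketch: running a compiled network on the Neural Compute Stick with
# the NCSDK Python API. "compiled_network.graph" is a placeholder produced by
# the toolkit's compiler (mvNCCompile).
import numpy as np
from mvnc import mvncapi as mvnc

devices = mvnc.EnumerateDevices()
if not devices:
    raise RuntimeError("No Neural Compute Stick found")

device = mvnc.Device(devices[0])
device.OpenDevice()

with open("compiled_network.graph", "rb") as f:   # output of mvNCCompile
    graph = device.AllocateGraph(f.read())

# Dummy 224x224 RGB input; a real application would feed a preprocessed image.
image = np.random.rand(224, 224, 3).astype(np.float16)
graph.LoadTensor(image, "user object")
output, _ = graph.GetResult()
print("Top class id:", int(np.argmax(output)))

graph.DeallocateGraph()
device.CloseDevice()
```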

An outstanding feature is that the stick can work without any cloud or network connection, allowing smart features to be added to very small devices with low power consumption. This may be one of the revolutionary ideas that starts bringing IoT and machine learning together in devices.

Neural Compute Stick Features

  • Supports CNN profiling, prototyping, and tuning workflow
  • All data and power provided over a single USB Type A port
Real-time, on-device inference – cloud connectivity not required
  • Run multiple devices on the same platform to scale performance
  • Quickly deploy existing CNN models or uniquely trained networks
  • Features the Movidius VPU with energy-efficient CNN processing

“The Myriad 2 VPU housed inside the Movidius Neural Compute Stick provides powerful, yet efficient performance — more than 100 gigaflops of performance within a 1W power envelope — to run real-time deep neural networks directly from the device. This enables a wide range of AI applications to be deployed offline.” — Remi El-Ouazzane, VP and General Manager of Movidius.

At the moment, the stick's SDK is only available for x86, and there are some hints that platform support will expand. Meanwhile, developers are hoping for ARM processor support, since many IoT applications rely on ARM processors. However, this may not happen, since the stick is an Intel product.

The stick is available for sale now and costs $79. More information about how to get started with it is available on the Movidius developer site. Also check out this video by Movidius: