Tag Archives: AI

Nvidia’s Jetson Xavier is an AI Computer Boasting $10,000 Worth of Power for Your Machines and Robots

NVIDIA Jetson Xavier is the latest addition to the Jetson platform: an AI computer for autonomous machines that delivers the performance of a GPU workstation in an embedded module while consuming under 30W. With operating modes at 10W, 15W, and 30W, Jetson Xavier offers more than 10x the energy efficiency and more than 20x the performance of its predecessor, the Jetson TX2.

Nvidia Jetson Xavier Computer on Module (CoM)

Jetson is an Nvidia product and one of the most powerful embedded platforms for computer vision and AI at the edge. The Jetson platform is a range of compute boards consisting of the Jetson TK1, TX1, and TX2. They are powered by Nvidia Tegra SoCs built around ARM CPU cores. Various operating systems can run on them, such as Linux distros and QNX, a commercial real-time operating system (RTOS) designed primarily for embedded systems. Nvidia is now adding a new, more powerful member to the Jetson platform.

Nvidia has announced the release of Jetson Xavier, an artificial-intelligence computer for autonomous machines that delivers the performance of a GPU workstation in an embedded module, now available as the Jetson Xavier Developer Kit for $1,299 (USD). It delivers close to 30 trillion operations per second (TOPS).

The Nvidia Jetson Xavier Developer Kit

Jetson Xavier is designed for robots, drones, and other autonomous machines that need maximum compute at the edge to run modern AI workloads and solve problems in manufacturing, logistics, retail, service, agriculture, and more. Jetson Xavier is also suitable for smart-city applications and portable medical devices. Alongside it, Nvidia CEO Jensen Huang launched the Nvidia Isaac Platform at Computex 2018 in Taiwan; it includes new hardware, software, and a virtual-world robot simulator that make it easy for developers to create new kinds of robots.

Jensen Huang said at Nvidia’s Monday press conference at Computex in Taiwan,

“This is the single longest processor project we have ever done in our company. Xavier has roughly the same processing power as a $10,000 workstation equipped with graphics processing units. Plus, it’s easy on the power consumption,” he added.

Jetson Xavier is capable of more than 30 TOPS (trillion operations per second) for deep learning and computer vision tasks. The 512-core Volta GPU, with support for Tensor Cores and mixed-precision compute, is capable of up to 10 TFLOPS FP16 and 20 TOPS INT8. Jetson Xavier’s dual NVDLA engines are capable of up to 5 TOPS each. It also has a high-performance eight-core ARM64 CPU, a dedicated image processor, a video processor, and a vision processor for accelerating computer vision tasks.
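The headline 30 TOPS figure lines up with the sum of the individual engines’ INT8 throughput. A quick back-of-envelope check (this breakdown is an inference from the numbers above, not an official NVIDIA accounting):

```python
# Back-of-envelope check: the Volta GPU's INT8 rate plus the two NVDLA
# engines roughly account for the quoted ~30 TOPS deep-learning throughput.
# (Assumed breakdown for illustration, not an official NVIDIA figure.)

gpu_int8_tops = 20        # 512-core Volta GPU, INT8
nvdla_tops_each = 5       # per NVDLA engine
nvdla_engines = 2

total_tops = gpu_int8_tops + nvdla_engines * nvdla_tops_each
print(total_tops)  # 30
```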

Nvidia also announced Isaac, a software development platform for robots and other autonomous machines that run on its Linux-friendly octa-core Jetson Xavier module. The NVIDIA Isaac Software Development Kit (SDK) provides a comprehensive set of frameworks, tools, APIs, and libraries to accelerate development of robotics algorithms and software.

The Isaac robotics software consists of:

  • Isaac SDK — a collection of APIs and tools to develop robotics algorithm software and runtime framework with fully accelerated libraries
  • Isaac IMX — Isaac Intelligent Machine Acceleration applications, a collection of NVIDIA-developed robotics algorithm software
  • Isaac Sim — a highly realistic virtual simulation environment for developers to train autonomous machines and perform hardware-in-the-loop testing with Jetson Xavier

The Jetson Xavier Developer Kit will be available for early access in August and open to the public in October. Developers using a Jetson TX2 or TX1 to develop autonomous machines with the JetPack SDK can sign up to be notified when they can apply for early access by completing a survey. More information can be found on the Xavier product page.

Google Launches New DIY Artificial Intelligence Kits Powered by the Raspberry Pi Zero WH

The Google AIY (AI Yourself) project team is not new and has been around for a while now. Its job is to address two significant parts of the AI community: voice and image recognition. Although the team launched the first generation of AIY Vision and Voice kits, built around a Raspberry Pi, last year, it has now revised them, leading to a new generation of AIY Vision and Voice kits. Unlike the previous kits, which used the Raspberry Pi 3, the new kits are smarter and more cost-effective, and are based on the smaller Raspberry Pi Zero WH.

AN INTELLIGENT CAMERA

Due to the “continued demand” for the Voice and Vision kits, mostly from parents and teachers in the STEM environment, Google decided to “help educators integrate AIY into STEM lesson plans and challenges of the future by launching a new version of our AIY Kits.” The new Vision Kit includes a Raspberry Pi Camera Module V2 and can be easily assembled into a do-it-yourself intelligent camera that can not only capture images but also recognize faces and objects.

The Vision Kit comes with a USB cable and a pre-provisioned microSD card. The Raspberry Pi Zero WH, on which the new kit is based, has the same features as the Raspberry Pi Zero W but adds a soldered 40-pin GPIO header. It is also more flexible and less expensive than the Raspberry Pi 3. Thanks to the Pi Zero WH, the Vision Kit costs less than the previous version and can be bought for just $90. Other parts of the Vision Kit include the cardboard case, a speaker, a wide-lens kit, standoffs, and more.

A SMART SPEAKER


The Voice Kit has most of the features found in the Vision Kit, with a few differences: there is no camera module, and it instead includes the Voice Bonnet and Voice HAT stereo microphone boards. If you thought cardboard couldn’t talk, the AIY Voice Kit proves otherwise. The kit comes enclosed in cardboard and costs $50. It also includes a speaker, wires, and even an arcade button.

Linked to the Google Cloud Speech API and the Google Assistant SDK, the Voice Kit can answer questions and perform the tasks it has been programmed to do.

The new AIY Kits are available for purchase at US retailer Target.

The kit is expected to be available in the UK this summer.

Alongside the traditional monitor, keyboard, and mouse, the Google team is introducing a new way to interact with the kits: a companion app for Android devices that aims to make wireless setup and configuration a snap. The app will launch on the Google Play store alongside the new kits. Google is also working on iOS and Chrome companion apps, which should follow soon.

More information about this development can be found on the Google AIY website.

Google offers AI vision kit for Raspberry Pi owners

Google’s Vision Kit lets you build your own computer-vision system for $45, plus your own Raspberry Pi.

The company has now launched the AIY (AI yourself) Vision Kit that lets you turn Raspberry Pi equipment into an image-recognition device. The kit is powered by Google’s TensorFlow machine-learning models and will soon gain an accompanying Android app for controlling the device.

According to Google, Vision Kit features “on-device neural network acceleration”, allowing a Raspberry Pi-based box to do computer vision without processing in the cloud. The AIY Voice Kit relies on the cloud for natural-language processing.
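Whether inference runs on-device or in the cloud, an image-recognition model typically outputs a vector of raw class scores (logits) that must be converted into ranked probabilities. A minimal sketch of that post-processing step in plain Python (the class names and logit values are hypothetical, and this stands in for what TensorFlow would do on real model output):

```python
import math

def softmax(scores):
    """Convert raw model scores (logits) into probabilities summing to 1."""
    m = max(scores)                           # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def top_k(labels, scores, k=3):
    """Return the k most likely (label, probability) pairs."""
    probs = softmax(scores)
    ranked = sorted(zip(labels, probs), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

# Hypothetical logits for a 4-class model
labels = ["cat", "dog", "bicycle", "person"]
logits = [2.0, 4.0, 0.5, 1.0]
for label, prob in top_k(labels, logits, k=2):
    print(f"{label}: {prob:.2f}")
```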


Making AI Projects Become Easier With NVIDIA Jetson

Hardware development boards such as the Arduino and Raspberry Pi have become key enablers of many recent hardware projects. These boards are great for beginners and hobbyists to kick-start ideas and bring them to reality.

Artificial intelligence and machine learning are the technologies of the future, so it is important to know how the development process works and what type of hardware to use. With the limited computing capabilities of current boards, however, developers need powerful yet easy-to-use tools.

Nvidia provides a good solution with its Jetson boards, which are siblings of NVIDIA’s Drive PX boards for autonomous driving. The first board, the Jetson TX1, was released in November 2015, and Nvidia has now released the more powerful and power-efficient Jetson TX2.

Image credit: Android Central

The TX2 is a supercomputer on a module: both a development tool and a field-ready part for powering AI-based equipment. Developers can build products around it, or use it directly to run demos and simulations.

Jetson TX2 is built on NVIDIA’s Pascal™ architecture, fabricated on a 16-nanometer FinFET process.

Some of the technical specifications:

  • NVIDIA Parker series Tegra X2: 256-core Pascal GPU and two 64-bit Denver CPU cores paired with four Cortex-A57 CPUs in an HMP configuration
  • 8GB of 128-bit LPDDR4 RAM
  • 32GB eMMC 5.1 onboard storage
  • 802.11b/g/n/ac 2×2 MIMO Wi-Fi
  • Bluetooth 4.1
  • USB 3.0 and USB 2.0
  • Gigabit Ethernet
  • SD card slot for external storage
  • SATA 2.0
  • Complete multi-channel PMIC
  • 400 pin high-speed and low-speed industry standard I/O connector

Nvidia Jetson TX1 and TX2 comparison

The TX2 has two performance operating modes: Max-Q and Max-P. Max-Q is the TX2’s energy-efficiency mode: at 7.5W, it clocks the Parker SoC for efficiency over performance (essentially placing it right before the bend in the power/performance curve), with NVIDIA claiming that this mode offers 2x the energy efficiency of the Jetson TX1. In this mode, the TX2 should deliver performance similar to the TX1 in the latter’s max-performance mode.

Meanwhile, Max-P is the board’s maximum-performance mode. Here NVIDIA sets the board’s TDP to 15W, allowing the TX2 to hit higher performance at the cost of some energy efficiency. NVIDIA claims that Max-P offers up to 2x the performance of the Jetson TX1, though since GPU clock speeds aren’t double the TX1’s, actual gains will vary on an application-by-application basis.
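Taking NVIDIA’s claims at face value, the two modes trade performance for efficiency. A rough performance-per-watt comparison, with performance normalized so that the TX1 at max performance equals 1.0 (the TX1’s ~15W figure and the relative-performance numbers are assumptions based on the claims above, not measurements):

```python
# Illustrative comparison of the TX2's operating modes, normalizing
# performance so TX1 at max performance = 1.0. Numbers reflect NVIDIA's
# claimed figures, not measurements.

modes = {
    # name: (relative_performance, power_watts)
    "TX1 max-perf": (1.0, 15.0),   # assumed ~15W for TX1 at full clocks
    "TX2 Max-Q":    (1.0, 7.5),    # ~TX1-level performance at half the power
    "TX2 Max-P":    (2.0, 15.0),   # claimed up to 2x TX1 performance
}

for name, (perf, watts) in modes.items():
    print(f"{name}: {perf / watts:.3f} perf/W")

# Max-Q matching the TX1's performance at half the power is exactly
# where the claimed "2x energy efficiency" figure comes from.
```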

Image credit: AnandTech

Devices such as robots, drones, 360 cameras, and medical equipment can use Jetson for “edge” machine learning. The ability to process data locally on limited power is useful when connectivity bandwidth is limited or spotty (as in remote locations), when latency is critical (real-time control), or when privacy and security are concerns.
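The three reasons above can be sketched as a simple decision helper. The function name, thresholds, and parameters here are purely hypothetical, chosen only to illustrate the trade-off:

```python
def prefer_edge_inference(bandwidth_mbps, latency_budget_ms, privacy_sensitive):
    """Decide whether to run a model on-device (edge) or in the cloud.

    Hypothetical rule of thumb mirroring the article's three reasons:
    spotty connectivity, tight latency budgets, or privacy concerns
    all push inference onto the device.
    """
    if privacy_sensitive:
        return True                     # keep sensitive data local
    if bandwidth_mbps < 1.0:            # assumed "spotty link" threshold
        return True
    if latency_budget_ms < 50:          # assumed real-time control budget
        return True
    return False                        # otherwise the cloud is fine

# A drone doing real-time obstacle avoidance over a weak link:
print(prefer_edge_inference(bandwidth_mbps=0.5,
                            latency_budget_ms=20,
                            privacy_sensitive=False))  # True
```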

Jetson TX2 is available as a developer kit for $500 at arrow.com. The kit comes with design guides and documentation and is pre-flashed with a Linux development environment. It also supports the NVIDIA JetPack SDK, which includes the BSP and libraries for deep learning, computer vision, GPU computing, multimedia processing, and more.

Finally, this video compares Jetson TX1 and TX2 boards: