Distributor Mouser Electronics is now shipping the QueSSence Intelligent Connected Platform from Redpine Signals. The ultra-low-power, edge-based artificial intelligence (AI) application development platform consists of hardware, software and secure cloud technologies.
Based on a low-power Arm Cortex-M4 processor with a floating point unit (FPU) and an AI accelerator, the platform provides the large computation capacity required by AI algorithms. The QueSSence platform’s 6-axis accelerometer and gyroscope let engineers easily integrate sensor data, and with 400 Kbytes of RAM and up to 4 Mbytes of dedicated flash memory, the platform can store up to 25,000 sensor values. Support for voice activity detection (VAD) and up to eight capacitive touch sensor inputs enables AI applications that incorporate voice and touch interfaces. QueSSence can serve a wide range of IoT applications through certified support for multiple wireless protocols, including 802.11a/b/g/n Wi-Fi, dual-mode Bluetooth 5, 802.15.4 (capable of running Thread or Zigbee), and 802.11ah. The platform offers multiple levels of security, including a physically unclonable function (PUF), crypto hardware accelerators, and a secure bootloader, to create a highly secure system.
Language is one of the most fundamental means of communication among people. It is not the only one, of course, but language-based communication becomes critical when people work and collaborate on tasks together, and it becomes difficult when they don’t understand each other’s language.
A common language has often been advocated, but the chances of that happening are limited by differences in cultural heritage and history. People who travel to countries that speak a different language, whether for vacation or work, usually end up relying on human translators, which can be expensive, inefficient, and not always available. One way around this language barrier is a dedicated translation device, and Smark might be your best tool for that.
Smark is a modular language translator device capable of real-time translation of 37+ languages through a simple, intuitive interface. There are several ways of doing language translation, including smartphone apps, but Smark goes a step further: it gives you an immersive experience of a foreign culture and lets you communicate like a local.
Smark supports 37+ languages, providing on-the-go support for English, Spanish, Portuguese, French, German, Italian, Japanese, Korean, Arabic, Thai, Mandarin, Cantonese, Russian, Greek, Dutch, Polish, Danish, Finnish, Czech, Romanian, Swedish, Hungarian, Malay, Turkish, and more.
Smark brings back memories of using walkie-talkies; it is an efficient way for two people to talk to each other. The device is separable: it splits into two halves that are passed back and forth like simple walkie-talkies, making it very easy for anyone to use.
Smark performs speech recognition, machine translation, and speech synthesis seamlessly and fluently in near real time. The device features a microphone at each of its two opposite ends, accompanied by a trigger switch; recording can be done with either microphone. The product handles not only Speech to Speech (S2S) but also Speech to Text (S2T). Smark’s translation and recognition are based on four main cloud translation engines: Google Translate, Microsoft Translator, Baidu, and Alibaba. In the future, Smark is expected to offer offline translation, making it more usable where there is no network coverage or when traveling in remote areas.
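Conceptually, an S2S pipeline like the one described chains three stages: speech recognition, machine translation, and speech synthesis. The sketch below is only an illustration of that flow; the function names are hypothetical placeholders, not Smark’s actual API, and the stubbed return values stand in for real cloud-engine results.

```python
# Hypothetical sketch of a speech-to-speech (S2S) translation pipeline.
# The three stage functions are placeholders, not Smark's real API: a
# production device would call a cloud engine (Google, Microsoft, Baidu,
# or Alibaba) in the machine-translation stage.

def recognize_speech(audio: bytes, lang: str) -> str:
    """Stage 1: speech recognition -- audio in, text out (stubbed)."""
    return "hello, where is the station?"  # pretend ASR result

def translate_text(text: str, src: str, dst: str) -> str:
    """Stage 2: machine translation via a cloud engine (stubbed)."""
    return "hola, ¿dónde está la estación?"  # pretend MT result

def synthesize_speech(text: str, lang: str) -> bytes:
    """Stage 3: speech synthesis -- text in, audio out (stubbed)."""
    return text.encode("utf-8")  # pretend TTS waveform

def speech_to_speech(audio: bytes, src: str, dst: str) -> bytes:
    text = recognize_speech(audio, src)          # S2T happens here
    translated = translate_text(text, src, dst)  # cloud translation
    return synthesize_speech(translated, dst)    # back to audio

out = speech_to_speech(b"<mic capture>", "en", "es")
```

Stopping after the second stage is exactly the S2T mode the article mentions, which is one reason such pipelines are typically built as separate stages.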
Below are the device’s specifications:
Network – Wi-Fi
Network Bands – GSM, WCDMA, TD-SCDMA, TDD-LTE, FDD-LTE, EVDO
Smark is made for travel. It can also be used as a network station for multiple devices while on-the-go. With a built-in SIM card, a flat rate roaming plan can be purchased to create a shareable Wi-Fi hotspot for your phone, tablet or laptop. The roaming plan covers 100+ countries and regions in total, so you’ll never have to worry about being out of range or out of touch with your family and friends.
Smark is a product of Misway, a startup focused on cutting-edge Artificial Intelligence, with an emphasis on Natural Language Processing and Speech Intelligence. More information about Smark is available on the product website, and a Kickstarter campaign is live.
Promoted as the “first embedded ultra-compact artificial intelligence processing card,” UP’s AI Core is a mini-PCI Express module that enables artificial intelligence on the edge. It is built around the same Intel Movidius™ Myriad™ 2 2450 VPU as Intel’s own Neural Compute Stick.
The UP AI Core has 512MB of DDR SDRAM and 4 GB of onboard storage. It is a standard-looking mini-PCIe board measuring 51×30 mm. The onboard Movidius™ chip supports both the TensorFlow and Caffe frameworks, symbolic math libraries used for machine-learning applications such as neural networks.
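Frameworks like TensorFlow and Caffe ultimately describe layered tensor math, which is the workload a VPU like the Myriad 2 accelerates. As a rough illustration (plain NumPy, not framework or Movidius code, with random rather than trained weights), a two-layer forward pass looks like this:

```python
import numpy as np

# Minimal two-layer neural-network forward pass in plain NumPy -- an
# illustration of the tensor math that TensorFlow/Caffe graphs express.
# Weights are random, not a trained model.
rng = np.random.default_rng(0)

x = rng.standard_normal((1, 64)).astype(np.float32)    # input vector
W1 = rng.standard_normal((64, 32)).astype(np.float32)  # layer-1 weights
W2 = rng.standard_normal((32, 10)).astype(np.float32)  # layer-2 weights

h = np.maximum(x @ W1, 0.0)           # dense layer + ReLU activation
logits = h @ W2                       # output layer
probs = np.exp(logits - logits.max())
probs /= probs.sum()                  # softmax over 10 classes

print(probs.shape)  # (1, 10)
```

A framework compiles a graph of exactly these matrix multiplies and activations, which an accelerator can then execute far more efficiently than a host CPU.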
To support the board, the host computer needs at least 1GB of RAM and 4GB of free storage space. Right now, only 64-bit x86 boards running Ubuntu 16.04 are fully supported. Nonetheless, that is a requirement of the Movidius™ VPU toolchain rather than something inherent in the design of the UP board itself.
However, there has been a lot of community effort since the release of the Movidius™ Neural Compute Stick to get it working on the Raspberry Pi. It is now plausible that the module could be used with an Arm-based board that has an appropriate mini-PCIe slot, such as the Pine H64, but without official support it remains limited.
The UP AI Core is now available for $69. It is compatible with the UP Core Plus but should work with any single-board computer that has a mini-PCIe interface, though users should be careful about toolchain support for the Movidius™ chip.
NVIDIA Jetson Xavier is the latest addition to the Jetson platform. It’s an AI computer for autonomous machines, delivering the performance of a GPU workstation in an embedded module while consuming under 30W. With multiple operating modes at 10W, 15W, and 30W, Jetson Xavier has greater than 10x the energy efficiency and more than 20x the performance of its predecessor, the Jetson TX2.
Jetson is a product of Nvidia (Nvidia Jetson) and one of the most powerful embedded platforms for computer vision and AI on the edge. The Jetson platform is a range of compute boards consisting of the Jetson TK1, TX1, and TX2, powered by Nvidia Tegra SoCs built around Arm Central Processing Units (CPUs). They can run various operating systems, such as Linux distributions and QNX, a commercial Real-Time Operating System (RTOS) designed primarily for embedded systems. Nvidia is now adding a new, more powerful member to the Jetson platform.
Nvidia has announced the release of Jetson Xavier, an artificial intelligence computer for autonomous machines that delivers the performance of a GPU workstation in an embedded module, now available as the Jetson Xavier Developer Kit for $1,299 (USD). It delivers close to 30 trillion operations per second (TOPS).
Jetson Xavier is designed for robots, drones and other autonomous machines that need maximum compute at the edge to run modern AI workloads and solve problems in manufacturing, logistics, retail, service, agriculture and more. Jetson Xavier is also suitable for smart city applications and portable medical devices. Launched at Computex 2018 in Taiwan by Nvidia CEO Jensen Huang, the Nvidia Isaac Platform includes new hardware, software, and a virtual-world robot simulator that makes it easy for developers to create new kinds of robots.
Jensen Huang said at Nvidia’s Monday press conference at Computex in Taiwan:
“This is the single longest processor project we have ever done in our company. Xavier has roughly the same processing power as a $10,000 workstation equipped with graphics processing units,” he said, adding that it’s also easy on power consumption.
Jetson Xavier is capable of more than 30 TOPS (trillion operations per second) for deep learning and computer vision tasks. The 512-core Volta GPU, with support for Tensor Cores and mixed-precision compute, is capable of up to 10 TFLOPS FP16 and 20 TOPS INT8. Jetson Xavier’s dual NVDLA engines are capable of up to 5 TOPS each. It also has a high-performance eight-core ARM64 CPU, a dedicated image processor, a video processor, and a vision processor for accelerating computer vision tasks.
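The FP16/INT8 figures reflect mixed-precision compute: trading numeric range for throughput by running parts of a network in 8-bit integers. A tiny NumPy sketch of symmetric INT8 quantization shows the basic idea (this is only illustrative, not NVIDIA’s actual implementation):

```python
import numpy as np

# Illustrative symmetric INT8 quantization -- the kind of reduced-precision
# arithmetic behind Xavier's 20 TOPS INT8 figure. Map float weights to int8
# with a per-tensor scale, then dequantize to see the approximation error.
w = np.array([0.50, -1.25, 0.03, 2.00], dtype=np.float32)

scale = np.abs(w).max() / 127.0               # one scale for the whole tensor
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_hat = q.astype(np.float32) * scale          # dequantized approximation

err = np.abs(w - w_hat).max()                 # bounded by scale / 2
print(q, float(err))
```

Because each value is stored in 8 bits instead of 16 or 32, the hardware can process more multiply-accumulates per cycle, at the cost of a small, bounded rounding error per weight.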
Nvidia also announced “Isaac,” a software development platform for robots and other autonomous machines that run on its Linux-friendly octa-core Jetson Xavier module. The NVIDIA Isaac Software Development Kit (SDK) gives you a comprehensive set of frameworks, tools, APIs, and libraries to accelerate development of robotics algorithms and software.
The Isaac robotics software consists of:
Isaac SDK — a collection of APIs and tools to develop robotics algorithm software and runtime framework with fully accelerated libraries
Isaac IMX — Isaac Intelligent Machine Acceleration applications, a collection of NVIDIA-developed robotics algorithm software
Isaac Sim — a highly realistic virtual simulation environment for developers to train autonomous machines and perform hardware-in-the-loop testing with Jetson Xavier
The Jetson Xavier Developer Kit will be available for early access in August and open to the public in October. Developers using a Jetson TX2 or TX1 to develop autonomous machines using the JetPack SDK can sign up to be notified when they can apply for early access by completing a survey. More information may be found in the Xavier product page.
The Google AIY (“AI Yourself”) project team is not new and has been in existence for a while now. Their job is to deal with two significant parts of the AI community: voice and image recognition. Last year they launched the first generation of AIY Vision and Voice kits, designed to pair with a Raspberry Pi; they have now revised the kits, leading to a new generation of AIY Vision and Voice kits. Unlike the previous kits, which used the Raspberry Pi 3, the new kits are smarter and more cost-effective and are based on the smaller Raspberry Pi Zero WH.
AN INTELLIGENT CAMERA
Due to the “continued demand” for the Voice and Vision kits, mostly from parents and teachers in the STEM environment, Google decided to “help educators integrate AIY into STEM lesson plans and challenges of the future by launching a new version of our AIY Kits.” The new Vision Kit has a Raspberry Pi Camera Module V2 which can be easily assembled to create a do-it-yourself intelligent camera that can not only capture images but also recognize faces and objects.
The Vision Kit comes with a USB cable and a pre-provisioned microSD card. The Raspberry Pi Zero WH, on which the new kit is based, has the same features as the Raspberry Pi Zero W but adds a pre-soldered 40-pin GPIO header, and it is more flexible and less expensive than the Raspberry Pi 3. Because it uses the Pi Zero WH, the Vision Kit costs less than the previous version and can be bought for just $90. Other parts of the Vision Kit include the cardboard case, a speaker, a wide-lens kit, standoffs, and more.
A SMART SPEAKER
The Voice Kit shares most of the Vision Kit’s features, with a few differences, such as the absence of a camera module and the presence of the Voice Bonnet and Voice HAT stereo microphone boards. If you argued that cardboard cannot talk, then you were wrong: the AIY Voice Kit has accomplished that already. The kit comes enclosed in cardboard and costs $50. It also includes a speaker, wires, and even an arcade button.
The Voice Kit connects to the Google Cloud Speech API and the Google Assistant SDK, so it can answer questions and perform the tasks it has been programmed to do.
The new AIY Kits are available for purchase at US retailer Target.
The kit is expected to be available in the UK this summer.
The Google team is introducing a new way to interact with the Kits alongside the traditional use of “monitor, keyboard, and mouse” using a companion app for Android devices. The app aims to make wireless setup and configuration a snap. The app will be available alongside the launch of the new kits from the Google Play store. Google is also working on iOS and Chrome companion apps, which should be coming along soon.
Google’s Vision Kit lets you build your own computer-vision system for $45, plus the cost of your own Raspberry Pi.
The company has now launched the AIY (AI yourself) Vision Kit that lets you turn Raspberry Pi equipment into an image-recognition device. The kit is powered by Google’s TensorFlow machine-learning models and will soon gain an accompanying Android app for controlling the device.
According to Google, the Vision Kit features “on-device neural network acceleration,” allowing a Raspberry Pi-based box to do computer vision without processing in the cloud. The AIY Voice Kit, by contrast, relies on the cloud for natural-language processing.
Google offers AI vision kit for Raspberry Pi owners
Hardware development boards have become a key enabler for many recent hardware projects. Boards such as the Arduino and Raspberry Pi are great for beginners and hobbyists to kick-start ideas and bring them to reality.
Artificial intelligence and machine learning are the technologies of the future, so it is important to know how the process works and what type of hardware to use. But with the limited computing capabilities of current boards, developers need powerful and easy-to-use tools.
Nvidia provides a good solution with its Jetson boards, which are siblings to NVIDIA’s Drive PX boards for autonomous driving. The first board TX1 was released in November, 2015, and now Nvidia has just released the more powerful and power-efficient Jetson TX2 board.
The TX2 is a complete supercomputer: both a development tool and a field-ready module to power AI-based equipment. Developers can build equipment around it, or use it directly to run demos and simulations.
NVIDIA Parker series Tegra X2: 256-core Pascal GPU and two 64-bit Denver CPU cores paired with four Cortex-A57 CPUs in an HMP configuration
8GB of 128-bit LPDDR4 RAM
32GB eMMC 5.1 onboard storage
802.11b/g/n/ac 2×2 MIMO Wi-Fi
USB 3.0 and USB 2.0
SD card slot for external storage
Complete multi-channel PMIC
400 pin high-speed and low-speed industry standard I/O connector
TX2 has two performance operating modes: Max-Q and Max-P. Max-Q is the TX2’s energy-efficiency mode: at 7.5W, it clocks the Parker SoC for efficiency over performance (essentially placing it right before the bend in the power/performance curve), with NVIDIA claiming that this mode offers 2x the energy efficiency of the Jetson TX1. In this mode, the TX2 should perform similarly to a TX1 in the latter’s max-performance mode.
Meanwhile the board’s Max-P mode is its maximum performance mode. In this mode NVIDIA sets the board TDP to 15W, allowing the TX2 to hit higher performance at the cost of some energy efficiency. NVIDIA claims that Max-P offers up to 2x the performance of the Jetson TX1, though as GPU clock speeds aren’t double TX1’s, it’s going to be a bit more sensitive on an application-by-application basis.
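The two claims are consistent with each other, which a bit of back-of-envelope arithmetic makes clear. The TX1 power figure below is an assumption for illustration (roughly 15 W under full load), not a number from the article:

```python
# Back-of-envelope check of NVIDIA's efficiency claims, in relative units.
# Assumption (not from the article): Jetson TX1 delivers 1.0 "unit" of
# performance at roughly 15 W in its max-performance mode.
tx1_perf, tx1_power = 1.0, 15.0

# Max-Q: roughly TX1-level performance at 7.5 W
maxq_perf, maxq_power = 1.0, 7.5
# Max-P: up to 2x TX1 performance at 15 W
maxp_perf, maxp_power = 2.0, 15.0

tx1_eff = tx1_perf / tx1_power      # baseline perf per watt
maxq_eff = maxq_perf / maxq_power   # same perf at half the power
maxp_eff = maxp_perf / maxp_power   # double perf at the same power

print(maxq_eff / tx1_eff, maxp_eff / tx1_eff)  # both ~2x
```

Under these assumed numbers, both modes come out to roughly 2x the TX1’s performance per watt: Max-Q gets there by halving power at constant performance, Max-P by doubling performance at constant power.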
Devices such as robots, drones, 360 cameras, and medical equipment can use Jetson for “edge” machine learning. The ability to process data locally and within a limited power budget is useful when connectivity bandwidth is limited or spotty (as in remote locations), when latency is critical (real-time control), or where privacy and security are a concern.
Jetson TX2 is available as a developer kit for $500 at arrow.com. In fact, this kit comes with design guides and documentation, and is pre-flashed with a Linux development environment. It also supports the NVIDIA Jetpack SDK, which includes the BSP, libraries for deep learning, computer vision, GPU computing, multimedia processing, and more.
Finally, this video compares Jetson TX1 and TX2 boards: