Tag Archives: computer vision

UDOO BOLT, A Supercomputer with twice the Power of a MacBook Pro 13

If technology has taught us anything in the last few years, it is that the so-called powerful devices of yesterday will not match the devices of today or tomorrow, and the hardware industry is no exception. Maker boards have seen drastic improvement ever since the first Arduino and the Raspberry Pi single-board computer were launched. Startups, makers, engineers, and even big corporations like Intel and Nvidia have joined in improving the maker ecosystem by launching boards of their own.


Improvements keep coming, and one board poised to redefine the maker ecosystem is the newly crowdfunded UDOO Bolt. We have seen high-performance boards like the Raspberry Pi 3, Asus Tinker Board, and Nvidia Jetson, but the UDOO Bolt brings new authority to this space: a maker board that packs an exceptional punch, a supercomputer in a maker footprint.

UDOO, an indie developer company, has released a new maker board following the UDOO x86 Ultra, and the new board reached its funding target on Kickstarter within four hours of launch. That is no shock considering its specifications. The 12 cm by 12 cm board, called the UDOO BOLT, is claimed to be almost twice as powerful as the board used in a MacBook Pro 13. The UDOO BOLT is a quantum leap compared to current maker boards: a portable, breakthrough supercomputer that boosts up to 3.6 GHz thanks to the brand-new AMD Ryzen™ Embedded V1000 SoC, a top-notch multicore CPU with a mobile GPU on par with the GTX 950M, and an integrated Arduino™-compatible platform, all wrapped into one.

The first and most amazing feature, considering the size of the board, is its SoC (System on Chip). The tiny maker PC comes with an AMD Ryzen Embedded V1000 SoC with an integrated Radeon Vega graphics processing unit on the chip. The GPU is impressive: it supports triple-A (AAA) gaming, high dynamic range (HDR), which preserves detail in both the bright and dark areas of an image, and Radeon FreeSync 2, and it can drive 4K video at 60 frames per second (fps) on four screens simultaneously.

This brings us to the next feature: four-screen output is possible thanks to two HDMI 2.0 ports and two USB-C ports. Other connectors include two USB 3.1 Type-A ports, an audio jack, a Gigabit Ethernet jack, a 19 V DC power input, and the Arduino-compatible pinout.

You may be wondering why there is an Arduino pinout. The board offers the same pin functionality as an Arduino Uno, and then some: up to 12 analog inputs instead of 6, 7 PWM pins, and an internal USB connection that can implement functions beyond serial UART, such as MIDI or keyboard emulation. Building IoT tools just got easier for robotics engineers, since the Arduino-compatible platform exposes complete I/Os for both the CPU and the onboard Arduino. Best of all, you can work with sensors on the Arduino platform without soldering, because the board comes with Grove connectors.
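Because the pinout follows Arduino Uno conventions, existing Uno sketches should largely carry over. Here is a minimal illustrative sketch; the pin names A0 and 9 below are the usual Uno conventions, assumed here rather than taken from UDOO's documentation:

```cpp
// Minimal Arduino-style sketch: read an analog input and mirror it on a
// PWM pin. Pin choices are illustrative Uno defaults, not UDOO-specific.
const int SENSOR_PIN = A0;  // one of the Bolt's (up to 12) analog inputs
const int PWM_PIN    = 9;   // a PWM-capable output

void setup() {
  pinMode(PWM_PIN, OUTPUT);
  Serial.begin(115200);
}

void loop() {
  int raw = analogRead(SENSOR_PIN);  // 0..1023
  analogWrite(PWM_PIN, raw / 4);     // rescale to the 0..255 PWM range
  Serial.println(raw);               // report the reading over serial
  delay(10);
}
```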

The UDOO BOLT supports two operating systems, Linux and Windows, which means you can run virtually any application or software on the board. It also comes in two variants based on the GPU: one with an AMD Radeon Vega 3 and the other with an AMD Radeon Vega 8. The starting price is $229, and shipping begins in December.

The UDOO Bolt should comfortably outperform the likes of the Nvidia Jetson TX2 in computer vision and deep learning, and its Windows support gives it extra leverage, though this won't be an easy fight. A worthier comparison would be between the UDOO Bolt and the new NVIDIA Jetson Xavier.

If you are looking for a board to buy right now, the UDOO Bolt should be at the top of your list.

Pixy 2 – Computer Vision at a Whole New Level

Computer vision started as a way for computers to understand their surroundings, which requires giving a computer a high-level understanding of digital images or video. A computer-vision device needs to acquire, process, and analyze images, extracting data from the real world and turning it into numerical information that can be acted on. The main application for this technology has always been artificial intelligence, since giving a computer the ability to understand its surroundings (and learn from them) is a huge step toward decision making, a fundamental part of AI.

Makers have also started using this type of technology, which led Charmed Labs to create Pixy in 2013. Pixy is a small, easily programmable device that recognizes certain things in its sight: it can be taught objects, and it can also recognize color codes. This year, Pixy 2 was announced, and it can do everything Pixy could, plus some additional features.

Pixy 2 has a custom pan-tilt mechanism, making it easy to look around, and image processing now runs at 60 frames per second. It includes new line-detection algorithms, so it can follow lines, identify intersections, and read signals to make decisions. Signals are simple barcodes that can be printed out and mapped to an instruction to be performed at the sight of that specific barcode.

The device includes a cable to plug directly into an Arduino, or it can be connected to a Raspberry Pi via USB. It can also communicate over SPI, I2C, and UART, giving makers a wide range of options. Finally, the new version has an LED light for use in dark spaces.
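To give a feel for how little code a detection loop takes, here is a sketch modeled on the hello-world example that ships with Charmed Labs' Pixy2 Arduino library; the API names (Pixy2, ccc.getBlocks(), and the m_* block fields) follow that library and should be checked against the version you install:

```cpp
#include <Pixy2.h>  // Charmed Labs' Pixy2 Arduino library

Pixy2 pixy;

void setup() {
  Serial.begin(115200);
  pixy.init();  // talk to Pixy 2 over the bundled cable (SPI)
}

void loop() {
  // Query the color connected components (CCC) algorithm for detections
  // of the color signatures Pixy 2 has been taught.
  pixy.ccc.getBlocks();
  for (int i = 0; i < pixy.ccc.numBlocks; i++) {
    Serial.print("signature ");
    Serial.print(pixy.ccc.blocks[i].m_signature);
    Serial.print(" at ");
    Serial.print(pixy.ccc.blocks[i].m_x);
    Serial.print(",");
    Serial.println(pixy.ccc.blocks[i].m_y);
  }
}
```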

Plenty of Pixy projects can be found on the internet, and with the new features Pixy 2 offers, there will soon be plenty of applications for this device too. Pixy 2 is smaller, faster, and smarter, and makers will find creative ways to exploit these characteristics in their projects. Finally, Pixy can also be used with Lego Mindstorms (NXT and EV3).

The first Pixy was launched on Kickstarter, but Pixy 2 skipped crowdfunding and is already available for purchase on Amazon or on its official website.

Role Of Vision Processing With Artificial Neural Networks In Autonomous Driving

In the next 10 years, technological advancement will bring more change to the automotive industry than we have seen in the last 50. One of the largest changes will be the move to autonomous vehicles, usually known as self-driving cars. Scientists at many universities are striving to combine vision processing with artificial neural networks to provide driver assistance in self-driving cars.

Vision processing using convolutional artificial neural networks

Vision processing and artificial neural networks have both been around for many years. Convolutional neural networks (CNNs) are sets of algorithms that extract meaningful information from sensor input. CNNs are computationally efficient at analyzing a scene and can identify objects such as cars, people, animals, road signs, road junctions, and road markings, enabling them to build a relevant picture of the scene. Because the system runs in real time, a decision can be made as soon as the sensing is complete.
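The workhorse operation behind all of this is the convolution itself: sliding a small kernel over the image and summing weighted neighborhoods. The toy C++ program below shows the mechanics with a single hand-picked edge-detecting kernel; in a real CNN the kernels are learned rather than chosen, and many of them are stacked in layers:

```cpp
// Toy 2D convolution: the core operation of a CNN, shown with one
// hand-picked 3x3 vertical-edge kernel over a tiny 5x5 "image".
#include <cstdio>

int main() {
    const int H = 5, W = 5;
    // Dark region on the left, bright region on the right.
    float image[H][W] = {
        {0, 0, 1, 1, 1},
        {0, 0, 1, 1, 1},
        {0, 0, 1, 1, 1},
        {0, 0, 1, 1, 1},
        {0, 0, 1, 1, 1},
    };
    // Sobel-like vertical-edge kernel, the kind of feature detector
    // early CNN layers tend to learn on their own.
    float k[3][3] = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};

    for (int y = 1; y < H - 1; y++) {          // skip the border
        for (int x = 1; x < W - 1; x++) {
            float acc = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    acc += k[dy + 1][dx + 1] * image[y + dy][x + dx];
            printf("%5.1f ", acc);             // strong response = edge
        }
        printf("\n");
    }
    return 0;
}
```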

One of the major steps in visual environment understanding for automotive applications is tracking key points and subsequently estimating ego-motion and environment structure from the trajectories of those key points. A propagation-based tracking (PBT) method is popularly used to obtain the 2D trajectories from a sequence of images in a monocular camera setup.
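PBT itself is a specific research method, but the general idea of extracting 2D key-point trajectories from a monocular image sequence can be sketched with a more common stand-in, OpenCV's pyramidal Lucas-Kanade tracker. The following is an illustrative sketch, not the PBT algorithm:

```cpp
// Key-point tracking sketch using OpenCV's pyramidal Lucas-Kanade
// optical flow (a stand-in for PBT). Requires OpenCV; camera 0 assumed.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return 1;

    cv::Mat frame, gray, prevGray;
    std::vector<cv::Point2f> prevPts, nextPts;

    cap >> frame;
    cv::cvtColor(frame, prevGray, cv::COLOR_BGR2GRAY);
    // Pick up to 200 corners that are easy to track.
    cv::goodFeaturesToTrack(prevGray, prevPts, 200, 0.01, 10);

    while (cap.read(frame) && !prevPts.empty()) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        std::vector<uchar> status;
        std::vector<float> err;
        cv::calcOpticalFlowPyrLK(prevGray, gray, prevPts, nextPts,
                                 status, err);
        // Each surviving (prevPts[i] -> nextPts[i]) pair extends one
        // key point's 2D trajectory; these trajectories are what
        // ego-motion and structure estimation consume downstream.
        std::vector<cv::Point2f> kept;
        for (size_t i = 0; i < status.size(); i++)
            if (status[i]) kept.push_back(nextPts[i]);
        prevGray = gray.clone();
        prevPts = kept;  // lost points drop out; re-detection omitted
    }
    return 0;
}
```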

Inputs from one or all of the sensors (LIDAR, RADAR, camera, IR, etc.) are evaluated, and decisions are taken accordingly. For example, if the car in front suddenly brakes, the onboard computer instantly measures the distance and calculates the closing speed using the available sensors, then applies the brakes faster than any human could. This method is claimed to prevent such accidents with 90% efficiency.
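A back-of-the-envelope version of that braking decision (with made-up numbers, purely for illustration) is to estimate the closing speed from successive range readings and brake when the time to collision falls below a safety threshold:

```cpp
// Illustrative time-to-collision (TTC) check from successive range
// readings to the car ahead. All values are invented for the example.
#include <cstdio>

int main() {
    const float dt = 0.1f;            // 10 Hz range updates
    const float ttcThreshold = 2.0f;  // brake if collision < 2 s away
    float ranges[] = {20.0f, 18.5f, 17.0f, 15.5f};  // metres

    for (int i = 1; i < 4; i++) {
        float closing = (ranges[i - 1] - ranges[i]) / dt;  // m/s
        if (closing <= 0) continue;   // gap is growing: nothing to do
        float ttc = ranges[i] / closing;
        printf("range %.1f m, closing %.1f m/s, TTC %.2f s\n",
               ranges[i], closing, ttc);
        if (ttc < ttcThreshold)
            printf("-> apply brakes\n");
    }
    return 0;
}
```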

The use of vision processing with CNNs is rapidly increasing in automotive applications to enable camera-based autonomous driving. This technology sets a new driving standard, promising fewer accidents, fewer fatalities, and less pollution. Vision processing in autonomous driving also enables more efficient journeys, reduced congestion, car sharing, and packing cars more tightly together via vehicle-to-vehicle communication.

JeVois, The Open-Source Smart Vision Camera

JeVois, French for "I see", is an open-source quad-core smart camera that connects easily to your project, whether you are using an Arduino, a Raspberry Pi, or just your PC. JeVois packs a video sensor, quad-core CPU, USB video, and a serial port into only 1.7 cubic inches. To start working with JeVois, you only need to insert a microSD card loaded with the provided open-source machine vision algorithms and connect it to your computer; it works immediately as soon as you open any camera software.

The pipeline is as follows: video is captured by the camera sensor, processed on the JeVois processor, and the results are sent over USB to the host computer or over serial to a microcontroller.

On your computer, you can use any camera software to see the results, and you can try different vision algorithms by selecting different resolutions and frame rates.

Its software and hardware framework is described as follows:

“For ease of programming and configuration, all of the operating system, core JeVois software, and any necessary data files are stored on a single high-speed Micro-SD card that can easily be removed and plugged into a desktop or laptop computer. The JeVois software framework combines custom Linux kernel drivers for camera sensor and for USB output, written in C, and a custom high-level vision processing framework, written in C++-17.”

It is easy to integrate with other open-source libraries, including tiny-dnn, OpenCV, Boost, ZBar, Eigen, TurboJPEG, etc. The framework is also scalable: the operating system infrastructure is built with Buildroot, which makes adding and using different libraries easy, and new vision modules can be added to the JeVois core because the core software is managed by CMake. Thus, you can customize the vision algorithm you would like to run on your JeVois.

In addition, it is easy to use: for example, only 4 wires are needed to connect it to an Arduino: 5 V or 3.3 V, GND, TX, and RX!
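On the Arduino side, that hookup boils down to reading lines from the serial port. The sketch below is a generic skeleton: the exact message format depends on which vision module and serial style are configured on the JeVois side, so the parsing is left as a stub, and the 115200 baud rate is an assumption:

```cpp
// Skeleton for receiving JeVois detection messages over the 4-wire
// serial hookup (JeVois TX -> Arduino RX, JeVois RX -> Arduino TX,
// plus power and GND).
String line;

void setup() {
  Serial.begin(115200);  // assumed baud rate; match the JeVois config
}

void loop() {
  while (Serial.available() > 0) {
    char c = (char)Serial.read();
    if (c == '\n') {
      // 'line' now holds one complete message from JeVois, e.g. a
      // detected object and its coordinates. Parse it here to drive
      // motors, servos, LEDs, and so on.
      line = "";
    } else if (c != '\r') {
      line += c;
    }
  }
}
```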

JeVois is now live in a Kickstarter campaign; check this video for a better understanding:

For more information about the specifications and technical details, check the campaign page. You can pre-order your JeVois now for $45; there are still 20 days to go.

JeVois started as an educational project to encourage the study of machine vision, computational neuroscience, and machine learning as part of introductory programming and robotics courses at all levels (from K-12 to Ph.D.). It is funded by the National Science Foundation (NSF) and the Defense Advanced Research Projects Agency (DARPA).

If you are interested in developing the JeVois core, check the documentation provided here.