Tag Archives: Intel

Intel Introduces Loihi – A Self Learning Processor That Mimics Brain Functions

Intel has developed a first-of-its-kind self-learning neuromorphic chip, codenamed Loihi. It mimics brain function by learning to operate based on feedback from its environment. Unlike convolutional neural network (CNN) and other deep-learning processors, Intel's Loihi uses an asynchronous spiking model to mimic neuron and synapse behavior, a much closer analog to how animal brains work.

Loihi – Intel’s self-learning chip

Machine-learning models based on CNNs require large training sets to learn to recognize objects and events. Loihi, by contrast, is an extremely energy-efficient chip that uses incoming data to learn and make inferences, getting smarter over time without needing to be trained in the traditional way. The chip includes digital circuits that mimic the brain’s basic mechanics, making machine learning faster and more efficient while requiring much less computing power.

The chip offers highly flexible on-chip learning and combines training and inference on a single die. This allows machines to be autonomous and to adapt in real time instead of waiting for the next update from the cloud. Compared to convolutional and deep-learning neural networks, the Loihi test chip uses far fewer resources on the same task, and researchers have demonstrated learning rates up to a million times higher than those of other typical neural network devices.
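The spiking behavior described above can be illustrated with a leaky integrate-and-fire (LIF) neuron, the textbook model that spiking chips like Loihi approximate in silicon. This is a minimal sketch for intuition only; the parameters and the model itself are simplified stand-ins, not Intel's actual neuron circuit:

```python
# Illustrative leaky integrate-and-fire (LIF) neuron.
# All parameters are arbitrary teaching values, not Loihi's.

def lif_run(inputs, v_rest=0.0, v_thresh=1.0, leak=0.9, weight=0.5):
    """Simulate one neuron over a sequence of input spikes (0/1).

    Returns the list of time steps at which the neuron fired.
    """
    v = v_rest
    spikes = []
    for t, inp in enumerate(inputs):
        v = leak * v + weight * inp   # membrane leaks toward rest, integrates input
        if v >= v_thresh:             # threshold crossed: emit a spike
            spikes.append(t)
            v = v_rest                # reset membrane potential after firing
    return spikes

# A steady input charges the membrane until it spikes, then the cycle
# repeats -- information is carried in the timing of spikes, not in
# dense matrix multiplications as in a CNN.
print(lif_run([1, 1, 1, 1, 1, 1, 1, 1]))  # -> [2, 5]
```

The key contrast with CNN-style hardware is that computation happens only when spikes arrive, which is where the energy-efficiency claims for neuromorphic designs come from.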

The self-learning capabilities prototyped by this test chip have huge potential to improve automotive and industrial applications as well as personal robotics – any application that would benefit from autonomous operation and continuous learning in an unstructured environment, such as an autonomous vehicle recognizing the movement of a car or a bike. More importantly, it is up to 1,000 times more energy-efficient than general-purpose computing.

Features

  • Fully asynchronous neuromorphic many-core mesh.
  • Each neuron capable of communicating with thousands of other neurons.
  • Each neuromorphic core includes a learning engine that can be programmed to adapt network parameters during operation.
  • Fabrication on Intel’s 14 nm process technology.
  • A total of 130,000 neurons and 130 million synapses.
  • Development and testing of several algorithms with high algorithmic efficiency for problems including path planning, constraint satisfaction, sparse coding, dictionary learning, and dynamic pattern learning and adaptation.

Overclocked Intel CPU draws 1kW

by Thomas Scherer @ elektormagazine.com

The impressively powerful i9-7980XE CPU from Intel boasts not just 18 cores and 36 threads but also an unlocked clock multiplier. The spec was clearly an open invitation to Roman Hartung, who has something of a reputation when it comes to overclocking processors. In tests he was able to crank the clock of this beast up to 5.7 GHz, at which point the CPU draws around 1 kW of power.

Overclocked Intel CPU draws 1kW – [Link]

Inside Intel’s first product: the 3101 RAM chip held just 64 bits

Ken Shirriff takes a look inside the 3101 RAM chip from Intel. He writes:

Intel’s first product was not a processor, but a memory chip: the 3101 RAM chip, released in April 1969. This chip held just 64 bits of data (equivalent to 8 letters or 16 digits) and had the steep price tag of $99.50. The chip’s capacity was way too small to replace core memory, the dominant storage technology at the time, which stored bits in tiny magnetized ferrite cores. However, the 3101 performed at high speed due to its special Schottky transistors, making it useful in minicomputers where CPU registers required fast storage. The overthrow of core memory would require a different technology—MOS DRAM chips—and the 3101 remained in use into the 1980s.

Inside Intel’s first product: the 3101 RAM chip held just 64 bits – [Link]

Intel Optane, Intel’s Next-Generation SSD Technology

In July 2015, Intel and Micron Technology announced a new technology for memory and storage solutions called “3D XPoint™ technology”. It is a new category of nonvolatile memory that addresses the need for high-performance, high-endurance, and high-capacity memory and storage.

Now Intel has produced its Optane™ technology, which provides an unparalleled combination of high throughput, low latency, high quality of service, and high endurance. The new technology combines 3D XPoint™ memory media, Intel memory and storage controllers, Intel interconnect IP, and Intel® software.

From system acceleration and fast caching to storage and memory expansion, Intel Optane delivers a revolutionary leap forward in decreasing latency and accelerating systems for workloads demanding large capacity and fast storage.

3D XPoint memory structure. Source: Intel Corp

The first product with this technology is the Intel Optane SSD DC P4800X. It is a 375GB add-in card that communicates via NVMe over a four-lane PCIe 3.0 link, and it is available for $1,520 or $4.05 per GB.

Optane™ storage could be used in many sectors and domains. It will help healthcare researchers work with larger data sets in real time, financial institutions speed up trading, and retailers detect fraud patterns more quickly. Optane™ technology can also be used at home to optimize a personal computer for an immersive gaming experience.

3D XPoint’s innovative, transistor-less cross-point architecture creates a three-dimensional checkerboard where memory cells sit at the intersections of word lines and bit lines, allowing each cell to be addressed individually. As a result, data can be written and read in small sizes, leading to fast and efficient read/write operations.

Memory cells are written or read by varying the amount of voltage sent to each selector. This eliminates the need for transistors, increasing capacity and reducing cost. The initial technology stores 128Gb per die across two stacked memory layers. Future generations of this technology can increase the number of memory layers and/or use traditional lithographic pitch scaling to increase die capacity.
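The addressing scheme described above can be sketched as a toy model: selecting one word line and one bit line picks out exactly one cell, with no page write or block erase involved. This is purely illustrative; real 3D XPoint selectors, stacked layers, and timing are far more involved:

```python
# Toy model of a cross-point memory array: each cell sits at the
# intersection of a word line and a bit line and can be read or
# written on its own (unlike NAND flash, which works in pages/blocks).

class CrossPointArray:
    def __init__(self, word_lines, bit_lines):
        # One storage value per word-line/bit-line intersection.
        self.cells = [[0] * bit_lines for _ in range(word_lines)]

    def write(self, word_line, bit_line, value):
        # Driving one word line and one bit line selects exactly
        # one cell, so a single cell can be updated in place.
        self.cells[word_line][bit_line] = value

    def read(self, word_line, bit_line):
        return self.cells[word_line][bit_line]

mem = CrossPointArray(word_lines=4, bit_lines=4)
mem.write(2, 3, 1)       # flip a single cell
print(mem.read(2, 3))    # -> 1
print(mem.read(2, 2))    # untouched neighbor -> 0
```

The point of the sketch is the granularity: byte-addressable media like this sit between DRAM and NAND in the storage hierarchy, which is what makes the caching and memory-expansion use cases above possible.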

3D XPoint Technology Wafer

You can get more detailed information about 3D XPoint and Intel Optane technologies through their official websites. You can also take a look at these two Intel P4800X reviews: Billy Tallis from AnandTech and Paul Alcorn from Tom’s Hardware.

tinyTILE, An Intel Development Board Based on Intel Curie Module

In the past year, Intel announced the low-power development board “tinyTILE”, built around the Intel Curie module. It offers quick and easy identification of actions and motions, a capability needed by always-on applications.

tinyTILE was designed for use in wearable devices and rapid prototyping. The 35 x 26 mm board has an Intel Curie module on top and a flat reverse side. Its 20 general-purpose I/O pins (four of which can serve as PWM outputs) operate at 3.3 V with a maximum of 20 mA per pin.

The Intel Curie Module is a low-power compute module featuring the low-power 32-bit Intel Quark microcontroller with 384kB flash memory and 80kB SRAM, low-power integrated DSP sensor hub and pattern matching technology, Bluetooth® Low Energy (BLE), and 6-axis combo sensor with accelerometer and gyroscope.

Intel Curie Module Block Diagram

Features of the tinyTILE include:

  • Intel® Curie™ module dual-core (Intel® Quark* processor core and ARC* core)
  • Bluetooth® low energy, 6-axis combo sensor and pattern matching engine
  • 14 digital input/output pins (four can be used as PWM output pins)
  • Six analog input pins
  • Strictly 3.3 V I/Os only
  • 20 mA DC current per I/O pin
  • 196 kB Flash memory
  • 24 kB SRAM
  • 32 MHz clock speed
  • USB connector for serial communication and firmware updates (DFU protocol)
  • 35 mm length and 26 mm width

tinyTILE can be powered over the USB connection or by an external battery, and it is compatible with three development environments: the Arduino IDE, Intel’s Curie Open Developer Kit (ODK), and Anaren Atmosphere.

The board is available for around $40 on element14. All related documents, specifications, BOM, BSP and other needed information are available at the official page.

You can view this project that invades your dog’s privacy with impressive ease while you’re at work!

Analyzing the vintage 8008 processor from die photos

Ken Shirriff writes:

The revolutionary Intel 8008 microprocessor is 45 years old today (March 13, 2017), so I figured it’s time for a blog post on reverse-engineering its internal circuits. One of the interesting things about old computers is how they implemented things in unexpected ways, and the 8008 is no exception. Compared to modern architectures, one unusual feature of the 8008 is that it had an on-chip stack for subroutine calls, rather than storing the stack in RAM. And instead of using normal binary counters for the stack, the 8008 saved a few gates by using shift-register counters that generated pseudo-random values. In this article, I reverse-engineer these circuits from die photos and explain how they work.
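The shift-register counter trick mentioned above can be demonstrated with a linear-feedback shift register (LFSR): the states come out in a scrambled but fixed cycle, which is perfectly fine for a stack pointer as long as every state is visited exactly once per cycle. The width and feedback taps below are illustrative choices, not the 8008's actual feedback network:

```python
# Sketch of a shift-register (LFSR) counter, the kind of circuit the
# 8008 used instead of a binary counter to save gates. Taps here are
# for a maximal-length 3-bit Fibonacci LFSR, chosen for illustration.

def lfsr_states(width=3, taps=(2, 1), seed=1):
    """Return the full state cycle of a Fibonacci LFSR of the given width."""
    state = seed
    seen = []
    while True:
        seen.append(state)
        # XOR the tapped bits together to form the new input bit.
        bit = 0
        for t in taps:
            bit ^= (state >> t) & 1
        # Shift left, feed the new bit in, mask to the register width.
        state = ((state << 1) | bit) & ((1 << width) - 1)
        if state == seed:
            return seen

cycle = lfsr_states()
print(cycle)       # -> [1, 2, 5, 3, 7, 6, 4]: pseudo-random order, fixed cycle
print(len(cycle))  # -> 7 (2**3 - 1; the all-zeros state is excluded)
```

Because the sequence is deterministic, incrementing and decrementing the "pointer" just means shifting forward or backward through this cycle; the values themselves never need to be in numeric order.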

Analyzing the vintage 8008 processor from die photos – [Link]

Premier Farnell partners with Intel on IoT

Farnell element14’s tinyTILE is an Intel Curie module based board created by the distributor in partnership with Intel. by Julien Happich @ edn-europe.com:

Measuring only 35x26mm, the tinyTILE has been specifically designed for use in wearable and IoT designs for consumer and industrial edge products. It runs a software platform created specifically for the Intel Curie module and as such, can be programmed using either the Arduino IDE, Intel’s own software, Intel Curie Open Developer Kit (ODK), or Anaren Atmosphere, a cloud-based ecosystem that offers a complete end-to-end IoT solution.

Premier Farnell partners with Intel on IoT – [Link]

From Sand to Circuits – How Intel makes integrated circuits [PDF]


Here is a nice PDF document from Intel explaining how integrated circuits are made.

From Sand to Circuits – How Intel makes integrated circuits [PDF] – [Link]

Brillo, the new OS for IoT by Google

Google has launched Brillo, a new Android-based OS for embedded development, in particular for low-power IoT devices. Brillo brings the simplicity and speed of software development to IoT hardware with an embedded OS, core services, a developer kit, and a developer console.


Brillo works in conjunction with Weave, an open, standardized communications protocol that supports various discovery, provisioning, and authentication functions. Weave enables device setup, phone-to-device-to-cloud communication, and user interaction from mobile devices and the web. The chief benefit is allowing a “standardized” way for consumers to set up devices.

Brillo Structure

The big challenge is unifying and facilitating communication among the estimated 200 billion smart devices expected by 2020. Whether you’re building a simple DIY project or an enterprise-scale M2M (machine-to-machine) system, Google’s new tools will be a big help. Fortunately, Brillo appears easy to pick up for developers already familiar with Android.

Check out this video by Google about Brillo and its features; you can also watch another video about Weave.

Brillo supports a trio of ARM, Intel, and MIPS hacker SBCs (single-board computers) called “made for Brillo” hardware kits. One of these is the Edison kit for Brillo by Intel, which includes an Edison IoT module plugged into a baseboard that offers convenient, Arduino-style expansion compatibility.

Edison for Brillo SBC

One of the great things about Brillo is that it tackles IoT security head-on, using secure boot and signed over-the-air updates and providing timely patches at the OS level.

If you are interested in developing Brillo itself, check the Brillo developer portal, where code, development tools, and documentation for the Android-based Brillo embedded OS for Internet of Things devices can be obtained. You first request an invitation; once you gain access, you will have everything needed for your next project.
A high-level introduction was presented by Intel at the Open IoT Summit in April 2016; you can check it here.
As Intel, the UN, and IDC noted in their joint report, there will be an average of 26 smart devices for every human within just five years, so we can expect rapidly growing development of IoT systems, devices, and protocols.

Running Intel x86 apps on Raspberry Pi 1 and 2

ExaGear Desktop is a virtualization solution that opens up a host of new possibilities for running apps across platforms. It makes it possible to run Intel x86 applications on ARM-based devices, and is targeted both at individuals running ARM-based mini PCs and at businesses deploying ARM-based devices to cut costs. For example, you can run Skype on ARM devices (YouTube: https://youtu.be/4GUP27TJ5w4). You can even run x86 Windows applications on ARM-based devices by installing Wine.

ExaGear Desktop also boasts strong performance: in tests it runs five times faster than QEMU.

Running Intel x86 apps on Raspberry Pi 1 and 2  – [Link]