Why machine vision matters for the Internet of Things

Connecting machine vision systems to the IoT creates a powerful network capability. Being able to identify objects from camera data allows the local node to be more intelligent and act with greater autonomy, reducing the processing load on central servers and enabling a more distributed control architecture. This in turn provides more efficient operation that requires far less external input.

by Mark Patrick, Mouser Electronics

Machine vision has made great strides over the last decade. State-of-the-art algorithms capable of detecting edges and movement within video frames, alongside advances in silicon technology relating to image sensors, programmable logic, microcontrollers and graphics processing units (GPUs), have helped bring it into a wide range of embedded applications. More sophisticated designs that can be downloaded to an FPGA are being used in conjunction with software libraries such as OpenCV, making machine vision much more accessible to embedded system designers.
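To give a feel for the edge-detection algorithms mentioned above, the sketch below implements a minimal Sobel gradient filter in plain NumPy. It is purely illustrative: a production system would use an optimised library routine such as OpenCV's Canny detector rather than this hand-rolled loop, and the threshold value is an arbitrary choice for the synthetic test frame.

```python
import numpy as np

def sobel_edges(img, thresh=1.0):
    """Tiny Sobel edge detector returning a boolean edge mask.
    Illustrative only -- real systems would use cv2.Canny or similar."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()  # horizontal gradient
            gy[i, j] = (patch * ky).sum()  # vertical gradient
    mag = np.hypot(gx, gy)               # gradient magnitude
    return mag > thresh

# Synthetic frame: dark left half, bright right half -> one vertical edge.
frame = np.zeros((8, 8))
frame[:, 4:] = 1.0
edges = sobel_edges(frame, thresh=1.0)
```

The same convolution structure is what maps so naturally onto FPGA fabric: each 3x3 window operation is independent, so many can be evaluated in parallel per clock cycle.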

This growing proliferation of machine vision is converging with the trend of linking industrial systems to the Internet of Things (IoT). As sensors become increasingly intelligent, driven in part by the supporting computer vision algorithms, the data produced is offering valuable insights into the operation of industrial systems. This in turn is opening up new ways of monitoring equipment, with autonomous robotic systems (such as drones) being connected to IoT infrastructure.

Part of the move to machine vision is driven by bandwidth considerations; the other major motivation is the prospect of automating more parts of an industrial operation. One of the key applications for machine vision is in inspection systems. High-performance camera systems with CMOS image sensors have fallen considerably in price over the last ten years, allowing higher-resolution examination of boards and systems during manufacturing. These camera modules are combined with FPGAs to add more processing and decision-making. This allows the camera to respond to the received data locally, reducing the need to send video over the network and enhancing overall operational efficiency.

Robotic machine vision system

Connecting the machine vision elements of inspection equipment to the IoT provides more data for the enterprise systems that analyse the performance of the factory. Rather than raw data, machine vision can provide information at a level of abstraction suitable for such enterprise systems. This markedly reduces the bandwidth overhead both for the servers and for the network as a whole - the enterprise systems are handling millions of data points coming from the IoT, so any reduction in the load on the servers helps them make more timely decisions.
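The abstraction step described above amounts to replacing a video stream with a compact result message. The sketch below shows what such a summary might look like; the field names and message shape are invented for illustration and do not represent any real protocol.

```python
import json

def summarise_inspection(frame_id, defects):
    """Pack a local inspection result into a compact message for the
    IoT backhaul. 'defects' is a list of (x, y, label) tuples from the
    local vision pipeline; all field names here are illustrative."""
    msg = {
        "frame": frame_id,
        "pass": len(defects) == 0,
        "defects": [{"x": x, "y": y, "label": lbl} for (x, y, lbl) in defects],
    }
    return json.dumps(msg, separators=(",", ":"))

# One flagged solder joint instead of a full video frame over the network.
payload = summarise_inspection(1042, [(120, 88, "solder_bridge")])
```

A message like this is a few hundred bytes at most, versus megabits per second of raw video, which is the bandwidth saving the article describes.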

The rise in machine vision uptake has also opened up the market for robot guidance systems in automated factories. Inspection machines can thus bypass the central enterprise servers and communicate directly with other equipment in the factory, based on the results derived from the machine vision systems. This increases efficiency and again reduces the load on the network and servers. Machine vision is also being used to control automated equipment, particularly in materials handling. This encompasses everything from the control systems for autonomous robots transferring material around a plant through to the automated picking machines in warehouses identifying products.

For autonomous materials handling robots, machine vision can be as simple as identifying a line on the floor to follow from one location to the next. However, it can also be used to detect people or obstacles in the way, allowing factory operatives and robots to work together safely and more efficiently. As already mentioned, product picking now also employs machine vision - items are identified via their barcodes, and a robotic gripper is then aligned to capture a particular item and place it into a basket. The camera and the local processing that accompanies it are both essential, and the pickers/autonomous robots are also constantly monitored as part of the wider IoT.
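The line-following case is simple enough to sketch directly: threshold one scanline of the camera image, find the dark line pixels, and steer based on their offset from the image centre. This is a minimal illustration under assumed conventions (dark line on a light floor, normalised pixel values); a real robot would filter over many rows and frames.

```python
import numpy as np

def line_offset(row, threshold=0.5):
    """Return the line's horizontal offset from image centre, in pixels.
    'row' is one grayscale scanline (values in 0..1); pixels darker than
    'threshold' are treated as the line. Illustrative sketch only."""
    line_pixels = np.where(row < threshold)[0]
    if line_pixels.size == 0:
        return None  # line lost -- stop or search
    centre = (row.size - 1) / 2.0
    return float(line_pixels.mean() - centre)

row = np.ones(9)
row[6:8] = 0.0            # dark line sits right of centre
offset = line_offset(row)  # positive offset -> steer right
```

A controller would feed this offset into a simple proportional steering loop; the sign convention (positive means steer right) is an assumption of this sketch.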

This has even extended to the air, with Unmanned Aerial Vehicles (UAVs) increasingly relying on machine vision. UAVs have proven to be a highly effective method for carrying out inspections in hard to reach areas, such as oil pipelines and gas installations. As well as allowing the UAV to identify a particular target area and approach it, so that it can be examined more closely, machine vision is being utilised for anti-collision purposes - avoiding fixed obstacles and even other UAVs by linking the camera system to the on-board flight controller.


Then there is the surveillance market, where the increasing use of machine vision has tremendous implications. Instead of feeding back megabits of video data every second for an operator to examine, video can be processed locally and alarms triggered without any human intervention. The machine vision algorithms running on FPGAs are becoming increasingly accurate. As a consequence, they are much better able to differentiate between the movement of an intruder, an animal or leaves on a tree (for example), allowing an operator to simultaneously support a larger number of surveillance nodes. Furthermore, surveillance cameras can themselves instruct other machines to respond to an alarm.

Combining autonomous ground and air vehicles, such as UAVs, potentially changes the whole way in which surveillance operates. Rather than fixed cameras that can be avoided, imaging systems are instead mounted on airborne craft that constantly monitor the area while in flight. These UAVs then return to a charging base as their batteries run down, and other drones are sent out to replace them, so constant surveillance cover can be provided. More advanced machine vision algorithms are able to identify potential threats and then summon other air and ground craft to the area to further monitor the situation - all without the involvement of an operator.

The same type of scenario applies equally well to agricultural applications, where the machine vision algorithms on an airborne craft can monitor the condition of crops and direct an operator (or an autonomous tractor) to the target area if an issue arises that needs some form of responsive action.
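A crude version of the intruder-versus-leaves discrimination can be sketched with background subtraction plus an area threshold: only raise an alarm when enough pixels change at once. The thresholds below are arbitrary illustrative values; real systems use adaptive background models such as OpenCV's MOG2 subtractor.

```python
import numpy as np

def motion_alarm(background, frame, diff_thresh=0.2, area_thresh=10):
    """Trigger only when enough pixels differ from the background model.
    This crudely ignores small movement (a leaf, a small animal) while
    catching a person-sized object. Thresholds are illustrative only."""
    moving = np.abs(frame - background) > diff_thresh
    return int(moving.sum()) >= area_thresh

bg = np.zeros((16, 16))
leaf = bg.copy()
leaf[0, 0] = 1.0               # 1 changed pixel: ignored
person = bg.copy()
person[4:12, 6:10] = 1.0       # 32 changed pixels: alarm
```

The point of running this on the node is exactly the one the article makes: only the alarm decision, not the video, needs to cross the network.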

The applications outlined here have been enabled by progress in the underlying hardware and software technologies. Machine vision algorithms are increasingly sophisticated, and they can be downloaded to the latest FPGAs and GPUs. These devices can handle 8 or 16 channels at a time, supporting rates of 60 frames per second, and can be coupled with high-level software such as OpenCV.

Originally focussed mainly on research and prototyping, in recent years OpenCV has increasingly been used in deployed products on a wide range of platforms - from cloud to mobile. The latest version, OpenCV 3.1, has just been released. The previous version, 3.0, was a major overhaul, bringing OpenCV up to modern C++ standards and incorporating expanded support for 3D vision and augmented reality. The new 3.1 release introduces improved algorithms for important functions such as calibration, optical flow, image filtering, segmentation and feature detection.

Future possibilities

Machine learning is the obvious next stage after machine vision. Computer vision algorithms provide deterministic analysis of still images and video content, whereas machine learning applies neural network approaches to ‘teach’ a system what to look for. The latest version of OpenCV, for example, now supports deep neural networks for machine learning.

The increased performance of FPGAs and GPUs is opening up new opportunities for machine learning. Machine learning relies on a training phase, usually handled by a large server system in a lab or in the cloud, in which the neural network is shown many different images tagged with the objects of interest. Training produces a set of weights and bias data that is then applied to the same network implemented in the embedded design. This ‘inference engine’ uses those weights to assess whether the new data it sees contains those objects. For example, the latest surveillance cameras are using neural network machine learning algorithms to go beyond traditional functions (like monitoring and recording) and offer additional video analysis features (such as crowd density monitoring, stereoscopic vision, facial recognition, people counting and behaviour analysis). This local processing can then be delivered into the IoT and thereby integrated into broader analysis software within the cloud.
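The train-then-deploy split described above can be illustrated with a toy inference engine: fixed weights and biases (which in practice would come from offline training on a server) are applied to new input data on the embedded node. The network, its parameters and the feature values below are entirely made up for illustration.

```python
import numpy as np

def infer(x, weights, biases):
    """Run a tiny fixed-weight feed-forward network -- the 'inference
    engine' described above. In a real deployment the weights and biases
    would be produced by a cloud/server training phase; the values used
    below are hypothetical."""
    for w, b in zip(weights, biases):
        x = np.maximum(x @ w + b, 0.0)  # dense layer followed by ReLU
    return x

# Hypothetical trained parameters for a 2-input, 1-output detector.
weights = [np.array([[1.0], [1.0]])]
biases = [np.array([-1.5])]

score = infer(np.array([1.0, 1.0]), weights, biases)  # both features present
```

Only the cheap forward pass runs on the node; the expensive training never leaves the server, which is why modest embedded FPGAs and GPUs can host these networks.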

 
