Matrox Design Assistant X Color Analysis

Digital cameras with color image sensors are now commonplace. The same is true for the computing power and device interfaces necessary to handle the additional data from color images. What’s more, as users become familiar and comfortable with machine vision technology, they seek to tackle more difficult or previously unsolvable applications. These circumstances combine to make color machine vision an area of mounting interest. Color machine vision poses unique challenges, but it also brings some unique capabilities for manufacturing control and inspection.


The color challenge

Color is the manifestation of light from the visible part of the electromagnetic spectrum. It is perceived by an observer and is therefore subjective – two people may discern a different color from the same object in the same scene. This difference in interpretation also extends to camera systems with their lenses and image sensors. A camera system’s response to color varies not only between different makes and models for its components but also between components of the same make and model. Scene illumination adds further uncertainty by altering a color’s appearance. These subtleties come about from the fact that light emanates with its own color spectrum. Each object in a scene absorbs and reflects (i.e., filters) this spectrum differently and the camera system responds to (i.e., accepts and rejects) the reflected spectrum in its own way. The challenge for color machine vision is to deliver consistent analysis throughout a system’s operation – and between systems performing the same task – while also imitating a human’s ability to discern and interpret colors.

The majority of today’s machine vision systems successfully restrict themselves to grayscale image analysis. In certain instances, however, it is unreliable or even impossible to just depend upon intensity and/or geometric (i.e., shape) information. In these cases, the flexibility of color machine vision software is needed to:

  •  optimally convert an image from color to monochrome for proper analysis using grayscale machine vision software tools
  •  calculate the color difference to identify anomalies
  •  compare the color within a region in an image against color samples to assess if an acceptable match exists or to determine the best match
  •  segment an image based on color to separate objects or features from one another and from the background

Color images contain a greater amount of data to process (i.e., typically three times more) than grayscale images and require more intricate handling. Efficient and optimized algorithms are needed to analyze these images in a reasonable amount of time. This is where Matrox Design Assistant X color analysis tools come to the fore.

Matrox Design Assistant X color analysis steps

Matrox Design Assistant X includes a set of tools to identify parts, products, and items using color, assess quality from color, and isolate features using color.

The ColorMatcher step determines the best matching color from a collection of samples for each region of interest within an image. A color sample can be specified either interactively from an image—with the ability to mask out undesired colors—or using numerical values. A color sample can be a single color or a distribution of colors (i.e., a histogram). The color matching method and the interpretation of color differences can be manually adjusted to suit particular application requirements. The ColorMatcher step can also match each image pixel to color samples to segment the image into appropriate elements for further analysis using other steps such as BlobAnalysis.
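As a rough illustration of what such matching involves (this is not the Matrox implementation, and the sample colors are invented), the sketch below assigns a region of interest to the closest of several reference colors by Euclidean distance:

```python
import numpy as np

# Hypothetical reference samples: name -> RGB triplet (0-255).
SAMPLES = {
    "red":   np.array([200.0, 40.0, 40.0]),
    "green": np.array([40.0, 170.0, 60.0]),
    "blue":  np.array([40.0, 60.0, 190.0]),
}

def best_match(roi):
    """Return the sample whose color is closest (Euclidean distance
    in RGB) to the mean color of a region of interest."""
    mean_color = roi.reshape(-1, 3).mean(axis=0)
    distances = {name: np.linalg.norm(mean_color - ref)
                 for name, ref in SAMPLES.items()}
    return min(distances, key=distances.get)

# A small synthetic ROI that is predominantly red.
roi = np.full((8, 8, 3), [190.0, 50.0, 45.0])
print(best_match(roi))  # -> red
```

The same nearest-sample test applied per pixel, rather than per region, is what turns matching into segmentation.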

Color Matcher step


The ImageProcessing step includes operations to calculate the color distance and perform color projection. The distance operation reveals the extent of color differences within and between images, while the projection operation enhances color to grayscale image conversion for analysis using other grayscale processing steps.
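Conceptually, the two operations can be sketched as follows; this is an illustrative numpy version, not the Matrox code, and the reference color and projection axis are arbitrary:

```python
import numpy as np

def color_distance(image, reference):
    """Per-pixel Euclidean distance from a reference color; large
    values flag pixels whose color deviates from the expected one."""
    return np.linalg.norm(image - reference, axis=-1)

def color_projection(image, axis):
    """Project each pixel onto a color direction, producing a
    grayscale image with contrast concentrated along that direction."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    return image @ axis

img = np.zeros((2, 2, 3))
img[0, 0] = [255.0, 0.0, 0.0]                   # one red pixel
dist = color_distance(img, np.array([255.0, 0.0, 0.0]))
gray = color_projection(img, [1.0, 0.0, 0.0])   # emphasize the red channel
print(dist[0, 0], gray[0, 0])                   # 0.0 for the matching pixel; 255.0
```

Choosing a good projection axis is what "enhances" the color-to-grayscale conversion: a well-chosen direction separates the feature of interest from the background better than a plain luminance average would.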

The color analysis tools included in the Matrox Design Assistant X interactive development environment (and the Matrox Imaging Library (MIL) software development kit) offer the accuracy, robustness, flexibility, and speed to tackle color applications with confidence. The color tools are complemented with a comprehensive set of field-proven grayscale analysis tools (i.e., pattern recognition, blob analysis, gauging and measurement, ID mark reading, OCR, etc.). Moreover, application development is backed by the Matrox Imaging Vision Squad, a team dedicated to helping developers and integrators with application feasibility, best strategy, and even prototyping.


The Use of Artificial Intelligence in Machine Vision

The use of artificial intelligence (specifically, machine learning by way of deep learning) in machine vision is an incredibly powerful technology with an impressive range of practical applications, including:

  • Giving virtual assistants the ability to process natural language;
  • Enhancing the e-commerce experience through recommendation engines;
  • Assisting medical practitioners with computer-aided diagnoses; and
  • Performing predictive maintenance in the aerospace industry.

Deep learning technology is also fundamental to the fourth industrial revolution, the ongoing automation of traditional manufacturing and industrial processes with smart technology, a movement in which machine vision has much to contribute.

Deep learning alone, however, cannot tackle all types of machine vision tasks, and it requires careful preparation and upkeep to be truly effective. In this article we look at how machine vision—the automated, computerized process of acquiring and analyzing digital images, primarily for ensuring quality and for tracking and guiding production—benefits from deep learning, which is making machine vision both more accessible and more capable.

Machine vision and deep learning: The challenges

Machine vision deals with identification, inspection, guidance and measurement tasks commonly encountered in the manufacturing and processing of consumer and industrial goods. Conventional machine vision software addresses these tasks with specific algorithm- and heuristic-based methods, which often require specialized knowledge, skill and experience to be implemented properly. Moreover, these methods or tools sometimes fall short in their ability to handle and adapt to complex and varying conditions. Deep learning is of great help but requires a painstaking training process, based on previously collected sample data, to produce the results generally required in industry. Furthermore, more training is occasionally needed to account for unforeseen situations that can adversely affect production. It is important to appreciate that deep learning is primarily employed to classify data, and not all machine vision tasks lend themselves to this approach.

Where deep learning does and does not excel

As noted, deep learning is primarily a classification process, through which data—such as images or their constituent pixels—are sorted into two or more categories. Deep learning is particularly well suited to recognizing objects or object traits, such as identifying that widget A is different from widget B. The technology is also especially good at detecting defects, whether the presence of a blemish or foreign substance, or the absence of a critical component in or on a widget that is being assembled. It also comes in handy for recognizing text characters and symbols such as expiry dates and lot codes.

While deep learning excels in complex and variable situations, such as finding irregularities in non-uniform or textured image backgrounds or within an image of a widget whose presentation changes in a normal and acceptable manner, deep learning alone cannot locate patterns with an extreme degree of positional accuracy and precision. Analysis using deep learning is a probability-based process and is, therefore, not practical or even suitable for jobs that require exactitude. High-accuracy, high-precision measurement is still very much the domain of traditional machine vision software. The decoding of barcodes and two-dimensional symbologies, which is inherently based on specific algorithms, is also not an area appropriate for deep learning technology.
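The probabilistic nature of deep learning output can be seen in the softmax function found at the end of most classification networks. In this illustrative sketch (the scores and class names are invented), the network reports confidences that sum to one rather than an exact measurement:

```python
import math

def softmax(scores):
    """Convert raw network scores into class probabilities."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for classes "good", "scratch", "stain".
probs = softmax([4.1, 1.2, 0.3])
print(probs)  # the top class gets a high probability, never exact certainty
```

Even a confident network outputs something like 0.93, not a guaranteed answer, which is why probability-based classification complements rather than replaces exact algorithmic measurement.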


Where deep learning excels: Identification (left), defect detection (middle) and OCR (right)


Where deep learning does not excel: High-accuracy, high-precision pattern matching (left), metrology (middle), and code reading (right)

Matrox Imaging software

Matrox Imaging offers two established software development packages that include classic machine vision tools as well as image classification tools based on deep learning. Matrox Imaging Library (MIL) X is a software development kit for creating applications by writing program code. Matrox Design Assistant X is an integrated development environment where applications are created by constructing and configuring flowcharts (see graphic below). Both software packages include image classification models that are trained using the MIL CoPilot interactive environment, which also has the ability to generate program code. Users of either software package get full access to the Matrox Vision Academy online portal, offering a collection of video tutorials on using the software, including image classification, that are viewable on demand. Users can also opt for Matrox Professional Services to access application engineers as well as machine vision and machine learning experts for application-specific assistance.

 

What is Deep Learning?


Answering the question “What is deep learning?” requires us to stick our heads down a rabbit hole. We say this because deep learning is a type of machine learning—which, in turn, is a type of artificial intelligence (AI). You now get the reference to the rabbit hole . . . Time now for some definitions to provide clarity.

Artificial intelligence: The simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with a human mind, such as learning and problem-solving.

Machine learning: The use and development of computer systems (hardware and software) that are able to learn and adapt without following explicit instructions, by using algorithms and statistical models to analyze and draw inferences from patterns in data.

Deep learning: A subset of machine learning based on artificial neural networks in which multiple layers of processing are used to extract progressively higher level features from data.
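A toy forward pass (illustrative only: random weights, no training) shows what "multiple layers of processing" looks like in code, with each layer re-representing the output of the one before it:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    """One dense layer: a linear transform followed by a ReLU."""
    return np.maximum(0.0, x @ w + b)

# Each stage extracts features from the previous stage's output,
# which is what "progressively higher level features" means above.
x  = rng.random(16)                                  # flattened input (e.g., pixels)
h1 = layer(x,  rng.random((16, 8)), np.zeros(8))     # low-level features
h2 = layer(h1, rng.random((8, 4)),  np.zeros(4))     # higher-level features
out = h2 @ rng.random((4, 2))                        # scores for 2 classes
print(out.shape)  # (2,)
```

Real networks stack many such layers (convolutional rather than dense for images) and learn the weights from data instead of drawing them at random.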

What distinguishes deep learning is that it empowers machines to learn from unstructured, unlabeled data, as well as labeled and categorized data. With all the rapid developments in deep learning, a lot of new applications for machine vision have been introduced. Time now for another definition:

Machine vision: The technology and methods used to provide imaging-based automatic inspection and analysis for such applications as automatic inspection, process control, and robot guidance, usually in industry. Machine vision refers to many technologies, software and hardware products, integrated systems, actions, methods and expertise. A machine vision system uses a camera to view an image. Computer vision algorithms then process and interpret the image, before instructing other components in the system to act upon that data. Computer vision can be used alone, without needing to be part of a larger machine system.

GPUs for computer vision applications

Many technology companies have discovered the benefit of using GPUs (graphics processing units) for computer vision applications due to their ability to handle the rapid parallel processing of images. Traditional GPUs from companies like NVIDIA are large, power-hungry PCIe boards running in the cloud or in temperature-conditioned environments. So how do industrial companies take advantage of GPU technology in the field, or what’s often called ‘the edge’?

NVIDIA Jetson

Introducing NVIDIA Jetson, the world’s leading small-footprint GPU platform for running AI in harsh environments at the edge of the action. Its high-performance, low-power computing for deep learning and computer vision makes it the ideal platform for compute-intensive projects in the field. Some of Integrys’ most valued partners provide nimble solutions in this space. But before we look at these companies and their products, it’s advisable to ask, and answer, the question below.

What’s the difference between carriers and Jetson modules?

A carrier board is specifically designed to work with one of the NVIDIA Jetson modules, allowing users to connect I/O, cameras, power, etc., to their devices. Together with the JetPack SDK, the combination of carrier and module is used to develop and test software for specific needs.

Our Deep Learning Partners

DIAMOND SYSTEMS
Stevie: Carrier for NVIDIA Jetson AGX Xavier. Used in PPE and temperature monitoring, robotics, deep learning, and smart intersections/traffic control.

 


Featured product: JETBOX-STEVIE JETSON AGX XAVIER SYSTEM

Floyd: Carrier for Nvidia Jetson Nano & Xavier NX. Used in industrial safety, drone video surveillance and facial recognition.


Featured product: JETBOX-FLOYD JETSON NANO / NX SYSTEM

Sentry-X Rugged Embedded System: Built for the NVIDIA® Jetson AGX Xavier™, Sentry-X is ideal for aerospace and defense applications, or for any market that can benefit from the Jetson AGX Xavier’s incredible performance in a rugged enclosure.


Featured product: SENTRY-X RUGGED EMBEDDED SYSTEM POWERED BY NVIDIA® JETSON AGX XAVIER™

Rogue: A full-featured carrier board for the NVIDIA® Jetson™ AGX Xavier™ module, the Rogue is specifically designed for commercially deployable platforms and has an extremely small footprint of 92 mm x 105 mm.


Featured product: ROGUE CARRIER FOR NVIDIA® JETSON™ AGX XAVIER™

Leveraging convolutional neural network (CNN) technology, the Matrox classification tool within its computer vision library, MIL (Matrox Imaging Library), categorizes images of highly textured, naturally varying, and acceptably deformed goods. The inference is performed exclusively by Matrox Imaging-written code on a mainstream CPU, eliminating the dependence on third-party neural network libraries and the need for specialized GPU hardware.


Featured product: MATROX IMAGING LIBRARY X

The Condor product line of GPGPU and video capture cards feature NVIDIA Quadro® GPUs with Pascal™ and Turing™ architecture. These processing powerhouses leverage the latest GPGPU advancements from NVIDIA for machine-learning and artificial intelligence applications, as well as standard rendering pipelines.


Featured product: CONDOR 4107XX XMC GRAPHICS & GPGPU CARD

FREE OFFER

We have an NVIDIA Jetson AGX Xavier AI-at-the-edge computing platform, the JETBOX-STEVIE from Diamond Systems (diamondsystems.com), in our demo lab. Fill out our form to request a free demo.


Recent Trade Shows: Plant Expo and CANSEC

Over the years Integrys has consistently exhibited at prominent trade shows, which provide a powerful platform for meeting new customers, reaching out to our existing clientèle, and reinforcing the Integrys brand, as well as those of our partners. This spring and summer we are continuing this valuable practice by exhibiting at the Plant Expo and CANSEC.

Plant Expo

On April 23 Integrys participated in the Plant Expo at the Mississauga Convention Centre in Mississauga, Ontario with our partners Matrox Imaging, Baumer and Advanced Illumination. We presented Matrox’s latest Design Assistant software release (DA X), an integrated development environment (IDE) for Microsoft® Windows® where vision applications are created by constructing an intuitive flowchart instead of writing traditional program code. The IDE enables users to design a graphical web-based operator interface for the application. Since DA X is hardware independent, you can choose any computer with GigE Vision® or USB3 Vision® cameras. This field-proven software is also a perfect match for a Matrox 4Sight embedded vision controller or the Matrox Iris GTR smart camera.

DA X creates an HTML5 interface for production use that can be deployed and locked to Matrox imaging devices, protecting the IP in the device. DA X communicates via TCP/IP, Modbus, EtherNet/IP and PROFINET. DA X interfaces to robots and connects to a variety of 3D scanners. It has photometric stereo designs built in to get 3D vision projects started fast. It even offers CNN capabilities.

At the Plant Expo we demonstrated color processing and pattern matching to determine the differences in fuses. We also showcased a metrology measurement solution ideal for automating quality control checks of machined parts.


Pattern matching with DA X to determine differences in fuses

Integrys will also be exhibiting at the Plant Expo in Sherbrooke, Quebec on June 19.

CANSEC

On May 27-28 Integrys exhibited at CANSEC, held this year at the EY Centre in Ottawa, Ontario. CANSEC is Canada’s global defence and security trade show, held annually in Ottawa since 1998 by the Canadian Association of Defence and Security Industries (CADSI). At CANSEC, Integrys, along with partners NAII, GMS, Connect Tech, Eizo, Imperx, Cohu and RGB Spectrum, showcased military computing and video solutions featuring GPU processing, H.265 video encoding, and small form factor (SWaP: size, weight and power) computing and video systems.


Integrys at CANSEC


VISION 2018

VISION, the world’s leading machine vision trade fair, was held November 6-8, 2018 at Stuttgart Messe in Stuttgart, Germany.

A marketplace for component manufacturers, and a platform for system suppliers and integrators, VISION is where OEMs, mechanical engineering companies and system houses learn about the latest innovations from the world of machine vision components, and where they initiate their investments. In total, there were 440 exhibitors from 31 countries and 10,000 visitors from 56 countries—including Integrys engineering manager Eric Buckley.

VISION 2018 was infused with several fascinating developing themes in the machine vision field, including three that Eric found particularly germane to Integrys and the companies we work with:

  • Deep/machine learning
  • 3D imaging
  • Polarization sensors and cameras

Deep learning

Deep learning is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Two Integrys suppliers, Connect Tech and Matrox Imaging, made their presence felt in this realm at VISION 2018.

At the Connect Tech/Samtec booth there was action aplenty as visitors caught up with Connect Tech’s show-stopping embedded NVIDIA Jetson solutions for machine vision and deep learning. Connect Tech is NVIDIA’s largest Jetson ecosystem partner and a leading provider of Jetson carrier products. It also does custom carrier designs.

Meanwhile, at the Matrox booth, the focus was on the newest version of Matrox Design Assistant flowchart-based software, including new tools for image classification using deep learning and image registration using photometric stereo. These tools enable inspection and OCR of hard-to-see text. Matrox also gave a fascinating deep learning demo in which the dental floss near the cutter of its container was inspected to determine whether or not its appearance was appropriate.

3D imaging

There are a host of 3D sensor applications, some of which are industrial, such as pick and place, palletizing/de-palletizing, and warehouse robots, and others of which are more consumer oriented, such as drones, people counting or patient monitoring. Of note at the event was Photoneo’s MotionCam-3D, for which Photoneo won the VISION Award 2018 at the Exhibitors’ Night. It’s the company’s flagship based on Photoneo’s own patented technology called Parallel Structured Light, implemented by a custom CMOS image sensor. MotionCam-3D is the best close and mid-range 3D camera to date for sensing in rapid motion.

At the fair, Integrys supplier Hikvision, the world’s leading manufacturer of robust, high-quality video surveillance products, introduced a new member of their 3D laser scan camera product family, while Zivid Labs, a market-leading provider of 3D machine vision cameras and software for next generation robotics and industrial automation systems, demonstrated why its Zivid One and Zivid One Plus products are regarded as the world’s most accurate real-time 3D cameras.

Polarization sensors and cameras

Polarization imaging can be used to detect stress or defects in the manufacturing of materials such as plastic, glass and carbon fiber, as it uncovers hidden material properties that allow better inspection and classification in industrial applications.

Integrys supplier Imperx develops and manufactures advanced, rugged imaging products qualified to MIL-STD 810-G for shock and vibration. Among them is their renowned Cheetah line of cameras, and at the fair Imperx illustrated the polarization features of its 5MP Cheetah camera, which helps eliminate glare from reflective surfaces and can help visualize internal tension/stress within transparent materials. Featuring advanced Sony Pregius IMX250 sensors with global shutter technology, the camera has excellent sensitivity and exceptional dynamic range with fast frame rates in a small form factor, making it ideal for a wide variety of uses in the world of machine vision.

JAI, another Integrys supplier, launched a new polarization area scan camera in the Go Series. The new model (GO-5100MP-USB) is built around Sony’s IMX250MZR CMOS image sensor. With 5.1-megapixel resolution and an innovative 4-way polarized filter design, it is ideal for inspecting plastics, glass, and other shiny materials in industrial vision applications. The monochrome-only USB3 interface delivers up to 74 fps and supports multiple data output modes.

At VISION 2018 Baumer introduced polarization camera models to its CX series with GigE and USB 3.0 interface options. The cameras are based on the 5 MPixel IMX250MZR global shutter sensor from Sony, which has a polarization layer consisting of four polarization filters (0°, 90°, 45°, 135°) at the pixel level. Through an integrated evaluation algorithm and the Baumer GAPI SDK, it is possible to only output polarization information. Output formats, however, are not restricted to polarization angles.
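The four per-pixel filter intensities described above can be combined into the standard Stokes parameters to recover the degree and angle of linear polarization. The sketch below uses the textbook formulas and is independent of any particular camera's SDK:

```python
import math

def polarization(i0, i45, i90, i135):
    """Compute degree and angle of linear polarization from the four
    filter orientations (0°, 45°, 90°, 135°) of a sensor super-pixel."""
    s0 = i0 + i90                  # total intensity
    s1 = i0 - i90                  # Stokes parameter S1
    s2 = i45 - i135                # Stokes parameter S2
    dolp = math.sqrt(s1 ** 2 + s2 ** 2) / s0   # degree of linear polarization
    aolp = 0.5 * math.atan2(s2, s1)            # angle of linear polarization (rad)
    return dolp, aolp

# Fully polarized light aligned with the 0° filter:
dolp, aolp = polarization(100.0, 50.0, 0.0, 50.0)
print(dolp, aolp)  # 1.0 0.0
```

Stress in transparent materials shows up as local variation in these two quantities, which is what makes polarization cameras useful for the inspection tasks mentioned here.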

Hikvision Launches 14 New GigE Area Scan Cameras, Offering Greater Selection with Rapid Data Transmission for Factory Automation and Detection


Hikvision’s machine vision and robotics team has announced the launch of 14 new GigE Area Scan Cameras in its CE, CA and CH categories, offering customers more high-resolution cameras with rapid data transmission for factory automation, detection and related applications.

The new batch of cameras has a Gigabit Ethernet interface providing 1 Gbps bandwidth and a transmission distance of 100 m without relay. Other features include a 128 MB on-board buffer for image burst transmission and retransmission; support for auto exposure control, LUT and gamma correction; the capability to synchronize with hardware and software triggers; and various exposure modes for image capture.
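Gamma correction via a LUT, one of the features listed above, is simple to sketch. This illustrative 8-bit version shows the principle only and is not Hikvision's implementation:

```python
def gamma_lut(gamma):
    """Build an 8-bit lookup table applying a gamma curve, as a camera
    would when LUT/gamma correction is enabled: each of the 256 input
    values maps to a precomputed output value."""
    return [round(255 * (v / 255) ** gamma) for v in range(256)]

lut = gamma_lut(0.45)        # gamma < 1 brightens mid-tones
pixels = [0, 64, 128, 255]
print([lut[p] for p in pixels])
```

Doing the correction as a table lookup rather than a per-pixel power computation is what lets cameras apply it at full frame rate.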

Hikvision’s GigE Area Scan Camera models also feature the latest innovations in CMOS sensors to deliver excellent image quality and high speed scanning with lower noise. This includes the latest global shutter Sony Pregius CMOS sensor IMX265 (MV-CA032-10GM/MV-CA032-10GC), IMX267 (MV-CH089-10GM/MV-CH089-10GC), and IMX304 (MV-CH120-10GM/MV-CH120-10GC).

For budget-conscious applications, certain Hikvision GigE models feature the Sony STARVIS IMX226 rolling shutter sensor, an affordable option that performs well in low-light, mobile or flat panel inspection conditions.

For exceptional value, customers can opt for models with the Aptina MT9P031 sensor (MV-CE050-30GM) or the high-frame-rate MV-CE003-20GM, which delivers 173 fps at VGA resolution.

For more information, click here to see Hikvision’s machine vision area scan camera selection.


Integrys Announces Integrys Public Safety and Security

Integrys is proud to announce that we have established the sub-brand Integrys Public Safety and Security to provide market-leading technology solutions to police departments, border security agencies, correctional facilities and other such operations.

Click here to learn more about this dynamic group dedicated to providing hard-working people in harm’s way with the equipment they need to execute with utmost efficiency and security.