Solve Any Vision Challenge

Zebra’s acquisition of Matrox Imaging lets you tackle any vision application with a single portfolio of hardware and software.

Today’s supply chain, manufacturing and distribution challenges require innovative automation solutions that do more than accelerate your operations. And that’s exactly what Zebra’s acquisition of Matrox Imaging delivers: a combined portfolio of machine vision and fixed industrial scanning solutions with the flexibility, simplicity and value you need to carry your operations into the future.

  1. Scale for Any Application and Any Specification:
    Merging Matrox Imaging’s vision tools into Zebra’s portfolio creates a single platform that can solve any vision challenge, from simple track-and-trace to complex inspection, recognition and guidance tasks. This powerful new combination creates the industry’s widest, most flexible ecosystem of vision hardware and software, allowing you to effortlessly scale up with single-supplier simplicity.
  2. Install and Deploy with Ease:
    Zebra’s interoperable and streamlined vision tools are easy to install, command a smaller footprint and integrate with third-party or existing systems, so you can create a truly connected environment—with minimal investment and downtime.
  3. Future-Proof with Confidence:
    Zebra’s comprehensive vision portfolio combines effortlessly to evolve at the pace of your business. From hardware to software, Zebra’s products deliver the flexibility, expandability and integration you need to ensure product longevity and maximize ROI.

    Integrys supports Zebra’s entire portfolio of vision solutions, including a wide range of machine vision cameras and fixed scanners for any application.

Flexible. Expandable. Integrated.

A comprehensive suite of vision software solutions

Zebra’s acquisition of Matrox Imaging creates an industry-leading portfolio of Zebra Aurora vision software products that enable users of all experience levels to solve their track-and-trace and vision inspection needs. Experienced users appreciate how easy it is to develop, refine and customize jobs, while first-time users take advantage of step-by-step guidance to develop powerful applications for a wide range of industries.

  • Aurora Focus:
    Aurora Focus runs on Zebra’s fixed scanners and smart cameras and comes ready-made for specific tasks like barcode reading and verification, OCR, and presence/absence inspection.
  • Aurora Vision Library:
    Designed for experienced programmers, Aurora Vision Library presents the sophisticated functionality of Aurora Vision Studio in a programming language.
  • Aurora Vision Studio:
    Aurora Vision Studio software enables machine and computer vision engineers to quickly create, integrate and monitor powerful vision applications without writing a single line of code.
  • Aurora Imaging Library:
    Aurora Imaging Library, formerly Matrox Imaging Library, is a software development kit (SDK) with a deep collection of image capture, processing, analysis, annotation, display and archiving tools.
  • Aurora Design Assistant:
    Aurora Design Assistant, formerly Matrox Design Assistant, is an integrated development environment (IDE) that offers a flowchart-based platform for building applications.

Connecting Technology and Innovation:

Over the past 50 years, Integrys has become the trusted source of imaging and video solutions for the aerospace, defense, healthcare, manufacturing, public safety, telecom and transportation industries. As a Zebra partner, we have the strategic insight and technical savvy to help you deploy industry-leading vision solutions with the power to transform your operations.

 

Click the link below and contact us today to learn how we can help your business connect technology with innovation.

Contact Us

 

What is the Best Software for Digital Image Processing?

Image processing is the use of algorithms and mathematical models to process and analyze digital images. The goal of image processing is to enhance the quality of images, extract meaningful information from them, and automate image-based tasks. Image processing is important in many areas, like computer vision, medical imaging, and multimedia. This article discusses important areas in image processing and mentions Zebra Technologies’ Aurora Design Assistant software.

Understanding Deep Learning (DL) and Its Functioning:


The fields of image processing and Deep Learning (DL) are complementary, especially in the context of computer vision and machine learning tasks. DL is a subset of machine learning, which is a subset of artificial intelligence (AI). DL algorithms are designed to reach similar conclusions as humans would by constantly analyzing data with a given logical structure. To achieve this, DL uses a multi-layered structure of algorithms called neural networks.

The design of the neural network relies on the structure of the human brain. Just as we use our brains to identify patterns and classify different types of information, we can teach neural networks to perform the same tasks on data. DL has succeeded in AI applications, advancing technology and contributing to breakthroughs in computer vision, language understanding, and reinforcement learning.
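
To make the idea of a multi-layered network concrete, here is a minimal, self-contained sketch of a two-layer feedforward network in Python using NumPy; the layer sizes, random weights and input are purely illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Non-linear activation applied between layers
    return np.maximum(0.0, x)

def softmax(x):
    # Convert raw scores into class probabilities
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# A tiny network: input layer -> hidden layer -> output layer
n_inputs, n_hidden, n_classes = 64, 32, 3   # e.g. an 8x8 grayscale patch, 3 classes
W1, b1 = rng.normal(size=(n_inputs, n_hidden)) * 0.1, np.zeros(n_hidden)
W2, b2 = rng.normal(size=(n_hidden, n_classes)) * 0.1, np.zeros(n_classes)

def forward(x):
    # Each layer transforms the representation produced by the previous one
    hidden = relu(x @ W1 + b1)
    return softmax(hidden @ W2 + b2)

x = rng.normal(size=(1, n_inputs))          # stand-in for a flattened image patch
print(forward(x))                           # class probabilities, summing to 1
```

Training would adjust W1, b1, W2 and b2 from labeled examples; deep learning simply stacks many more such layers so the network can learn progressively higher-level features.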

 

 

Widely used deep learning applications:

DL has applications in a vast array of fields, including:

• Image recognition and speech recognition: DL is excellent at image classification, object detection, and facial recognition. It is used for tagging images, recognizing faces for security, and converting speech to text.

• Healthcare: DL is used for medical image analysis, disease diagnosis, and prognosis prediction. It aids in identifying patterns in medical images, such as detecting tumors in radiology scans.

• Autonomous vehicles: DL plays a key role in developing self-driving cars. It uses live data from sensors, cameras, and other sources to decide on steering, braking, and acceleration.

• Manufacturing and industry: DL is applied to predictive maintenance, quality control, and process optimization in manufacturing. It detects defects in products and predicts equipment failures using vision computers.

• Robotics: DL enables robots to perceive and respond to their environments, helping them perform complex tasks.

 

Deep Learning applications in computer vision:


Computer vision is a part of AI that helps computers analyze and process digital images. It uses algorithms and techniques to make decisions or suggestions based on the images. DL has made significant contributions to computer vision, including:

 

• Image classification: Categorization of images into predefined classes, fundamental to applications such as object recognition (a brief illustrative sketch follows this list).

• Object detection: Detection of objects within images by providing bounding boxes around them, crucial where it’s necessary to identify and locate multiple objects in a single image, such as in autonomous vehicles or surveillance systems.

• Facial recognition: Key to identity verification, access control, and security. Deep networks can accurately identify and match facial features against a database of known faces.

• Image segmentation: Segmentation of images into meaningful regions or objects, valuable in medical imaging for identifying and isolating specific structures within images.

• Medical image analysis: Used in medical imaging tasks, such as detecting and diagnosing diseases from X-rays, MRIs, and CT scans.

• Augmented reality (AR): Enhances the capabilities of AR applications by enabling real-time recognition and tracking of objects.
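
As a brief illustration of the image-classification task above, the sketch below runs a pretrained convolutional network from torchvision on a single image. The file name is a hypothetical placeholder, and any ImageNet-style classifier could stand in for the model chosen here.

```python
import torch
from PIL import Image
from torchvision import models
from torchvision.models import ResNet18_Weights

# Load a small pretrained CNN and its matching preprocessing pipeline
weights = ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()

# "part.jpg" is a placeholder path for the image to classify
image = Image.open("part.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)      # shape: (1, 3, H, W)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

# Report the three most likely ImageNet categories
top = probs.topk(3)
for p, idx in zip(top.values, top.indices):
    print(f"{weights.meta['categories'][int(idx)]}: {p.item():.2%}")
```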

 

What is the role of Deep Learning in machine vision?


DL plays a crucial role in machine vision by providing advanced techniques for processing and understanding visual information. Key roles in machine vision include:

• Feature learning: DL excels at automatically learning hierarchical features from raw visual data, essential in machine vision applications where identifying relevant patterns and features in images is crucial for decision-making.

• Object recognition and classification: DL enables accurate and efficient object recognition and classification. Machine vision systems can use deep neural networks to categorize objects in images, valuable in quality control in manufacturing.

• Object detection: DL is used for object detection tasks in machine vision. It can identify and locate multiple objects within an image, important in robotics and autonomous vehicles.

• Image segmentation: DL techniques are used for image segmentation in machine vision. This involves dividing an image into meaningful segments, useful in medical image analysis and scene understanding.

• Anomaly detection: DL models can recognize normal patterns and detect anomalies in visual data, a capability that quality control, surveillance, and monitoring systems use to identify deviations (a minimal sketch follows this list).

• 3D vision: DL supports 3D vision tasks by processing multiple images or using depth-sensing technologies, which is vital in applications like robotic navigation.

• Document and text recognition: DL models are used for optical character recognition (OCR) and document analysis, aiding in the automatic extraction of information from textual content in images.

• Biometric recognition: DL enhances biometric recognition systems by providing accurate algorithms for face recognition, fingerprint recognition, and other biometric modalities.
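
As a minimal sketch of the anomaly-detection idea noted above (a generic illustration, not any specific Zebra or Matrox tool), the example below trains a tiny autoencoder on "normal" samples and flags inputs whose reconstruction error is unusually high; random tensors stand in for real images.

```python
import torch
from torch import nn

torch.manual_seed(0)

# Stand-in for flattened 16x16 grayscale patches of defect-free product
normal = torch.rand(512, 256)

# A tiny autoencoder: it learns to reconstruct only "normal" appearance
model = nn.Sequential(
    nn.Linear(256, 32), nn.ReLU(),    # encoder
    nn.Linear(32, 256), nn.Sigmoid()  # decoder
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(normal), normal)
    loss.backward()
    optimizer.step()

# Reconstruction error on normal data sets the threshold for "anomalous"
with torch.no_grad():
    base_err = ((model(normal) - normal) ** 2).mean(dim=1)
    threshold = base_err.mean() + 3 * base_err.std()

    candidate = torch.rand(1, 256) * 2.0     # a sample unlike the training data
    err = ((model(candidate) - candidate) ** 2).mean(dim=1)
    print("anomaly" if err.item() > threshold.item() else "normal")
```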

 

How can machine learning benefit image recognition?

Machine learning brings efficiency, accuracy, and adaptability to image recognition tasks, making it a powerful tool for a wide range of applications, such as:

• Automated feature extraction: Machine learning, especially DL, automates the feature extraction process by learning relevant features directly from the data.

• Improved accuracy: Machine learning algorithms perform exceptionally well in image recognition tasks. They can learn hierarchical features, allowing them to recognize patterns and objects in images with great accuracy.

• Adaptability to varied data: Machine learning models generalize well to new and diverse datasets. This adaptability is crucial in image recognition situations where appearances may vary as a result of lighting conditions, angles, and background variations.

• Object detection and localization: Machine learning algorithms enable the classification of objects and the localization of their positions within an image. This is essential for autonomous vehicles, robotics, and surveillance.

• Semantic segmentation: Machine learning techniques can perform semantic segmentation by classifying each pixel in an image. This promotes understanding of the spatial relationships and boundaries between different objects.

 

Which is the top-rated software for machine vision?


Integrys considers Aurora Design Assistant (Aurora DA) the best software for digital image processing on the market today for the following reasons:

Flowchart-based development: Aurora DA helps you create applications quickly without coding, using flowchart steps to build and configure them. Aurora DA offers no-code computer vision, allowing anyone to apply artificial intelligence without writing a line of computer code. The IDE also lets users design a custom web-based operator interface.

Flexible deployment options: Select your platform from a hardware-neutral environment that is compatible with both branded and third-party smart cameras, vision controllers, and PCs. It supports CoaXPress, GigE Vision, or USB3 Vision camera interfaces.

Streamlined communication: Easily share actions and results with other machines using I/Os and various communication protocols in real time.

Increased productivity and reduced development costs: The Vision Academy offers online and on-site training that helps users sharpen their software skills on specific topics, shortening development time.

 

Contact Us:

To learn more about Integrys’ computer vision projects and products such as Aurora Design Assistant software, or to request a quote, click here to contact us.

 

How Zebra’s Comprehensive Machine Vision Portfolio Can Reshape Every Stage of Automotive Manufacturing

With more than 30,000 distinct parts from hundreds of suppliers, a typical new car presents one of modern manufacturing’s biggest challenges. The rapid adoption of innovative new technologies and components like electric drivetrains and sophisticated driver assistance systems isn’t making the manufacturer’s job any easier.

The demand for new solutions has virtually every carmaker and every player in the $2.1 billion automotive parts industry searching for ways to keep pace with calls for greater efficiency, higher quality and better traceability. In many applications, the answer is automation.

Automation is hardly new to the auto industry—after all, the industry is widely considered to be the birthplace of the modern assembly line. What’s changing, however, is the penetration of automation technologies into more manufacturing processes. One of the most impactful changes is the widespread deployment of advanced vision systems, using fixed cameras and machine vision systems to streamline data capture and handle sophisticated visual inspections throughout automotive manufacturing.

From a brake component manufacturer’s need to track parts through the supply chain to the electronics manufacturer’s need to perform detailed quality-control inspections to the carmaker’s need for complex 3D analysis, the range of solutions required throughout the automotive manufacturing industry demands a diverse portfolio of hardware and software solutions.

Fortunately, Zebra’s acquisition of Matrox Imaging has created a single-supplier solution with a full range of hardware and software tools to cover almost any vision application. Zebra has partnered with Integrys Limited in Canada to bring these machine vision applications to industry across the country.

Here’s a brief look at some of the inspection tasks that automotive manufacturers can accomplish with Zebra’s end-to-end portfolio of machine vision systems:

  • Wire Harness Inspection: Today’s passenger cars and light trucks have a dozen or more wiring harnesses, hundreds of connectors, and 2.5 miles of wiring. Leading manufacturers are using machine vision tools to inspect and confirm every wire’s color, gauge, and sequence.
  • Pin Inspection: Since the slightest inaccuracies in pin height or alignment can lead to glitchy performance or failure of electronic systems, manufacturers use machine vision solutions to verify that each connector is manufactured to precise specifications before components go on to final assembly.
  • Conformal Coating: Innovative machine vision tools can instantly detect inconsistencies like cracks, bubbles, insufficient coverage, incomplete adhesion, and other potential problems in conformal coatings that protect printed circuit boards (PCBs) from corrosion and moisture.
  • PCB Inspection: It takes hundreds of PCBs incorporating thousands of microchips and other electronic components to support a modern vehicle. Machine vision technology provides a high-speed, high-precision solution to ensure each critical PCB meets exacting specifications.
  • Bead Inspection: Today, machine vision systems evaluate coverage, location, and continuity of the adhesive gaskets on high-speed production lines, detecting many flaws that would escape even the most experienced human inspectors.
  • Display Inspections: The number and complexity of electronic displays increase with every new generation of passenger vehicles and light trucks. Machine vision tools can automatically inspect everything from orientation (is it properly installed) and function (is the display properly sequenced) to quality (are there failed pixels) and performance (does it meet standards for brightness, color, and more).
  • Color Inspections: Machine vision tools can perform high-speed color inspections to confirm the correct color of everything from body panels and accessories to the color of packaging used for OEM parts that will be shipped to dealers’ service departments.

That’s a diverse list of machine vision applications. Still, it’s only a fraction of what manufacturers can accomplish with Zebra’s impressive portfolio of fixed industrial scanners, machine vision smart cameras, and software tools.

To learn more about Zebra’s machine vision solutions, please contact our representative in Canada. Integrys’ advanced machine vision systems are reshaping quality control, production efficiency, and automation. Our solutions encompass object location, defect detection, and much more. With 20+ years of experience, we enhance productivity, assure quality, and reduce costs. To learn more about these cutting-edge machine vision solutions, please contact us by clicking the button below.

Contact Us

Integrys: Revolutionizing Server Efficiency with Custom Design and Engineering Solutions

Introduction:

In today’s fast-paced technology industry, companies are constantly seeking ways to optimize their systems and improve efficiency. Integrys, a leading engineering firm, recently embarked on a challenging project to help a client enhance the performance of their existing servers without the need for a complete computing system overhaul. With their expertise in needs assessment, design solutions, custom configuration, test and validation, and post-application support, Integrys proved once again why they are at the forefront of innovation.

The challenges in custom design and engineering:

The client’s servers, when operating at full capacity, generated a significant amount of heat due to two main components: two powerful GPUs and an accelerated Network Interface Card from Intel. The excessive heat threatened the stability and longevity of the electronics within the chassis, necessitating a creative solution. Integrys faced several critical challenges in this project:

  1. Preserving the Existing Architecture: The project required finding a solution without redesigning the already-built chassis and motherboard and maintaining the positioning of the GPUs and Network Card.
  2. Cooling Multiple Heat Sources: Integrys needed to devise a method to bring in cool air from the environment and direct it to three different locations within the system to effectively cool the GPUs and Network Card.
  3. Precise Airflow Requirements: The engineering team had to ensure specific flow rates and temperatures were achieved across the pre-existing architecture, which posed further complexities.
  4. Seamless Integration: The most demanding mechanical challenge was developing a duct that could be easily inserted into the pre-built chassis without the need for dismantling the entire system.

The Custom Solution:

Led by Electro-Mechanical Designer, Daniyal Jafri, and under the guidance of Engineering Manager, Eric Buckley, the engineering team at Integrys combined their expertise in fluid mechanics, thermal science, and mechanical design to tackle these challenges head-on. By applying scientific principles and innovative thinking, they devised an ingenious solution that achieved the desired flow rates and temperatures while seamlessly integrating into the existing infrastructure.

The team’s initial design iterations involved a compact 80mm duct, which was 3D printed due to the complexity of its geometry. While it met the flow rate requirements, it exceeded the acceptable acoustic noise levels. To address this issue, the team explored a larger 120mm fan, which provided a 4x increase in airflow at the same RPM. Running the 120mm fan at a lower speed to provide that same airflow as the 80mm solution reduced noise levels while still meeting the necessary cooling requirements.
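
As a rough consistency check only, assuming geometrically similar fans and the standard fan affinity relation (which the project write-up does not state), the airflow gain from moving to the larger fan at the same speed is:

```latex
Q \propto N D^{3}
\quad\Longrightarrow\quad
\frac{Q_{120\,\mathrm{mm}}}{Q_{80\,\mathrm{mm}}} = \left(\tfrac{120}{80}\right)^{3} \approx 3.4
\qquad \text{at equal rotational speed } N .
```

That is the same order as the roughly fourfold gain described above, which is why the 120mm fan could then be slowed down, cutting acoustic noise, while still delivering the required airflow.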

Overcoming the size constraints imposed by the larger fan, the final duct design from Integrys successfully met all the client’s requirements. The innovative design allowed the duct to be seamlessly slid into the system, eliminating the need to remove metal panels within the chassis for assembly. This breakthrough significantly reduced assembly time, saving the client substantial labour costs.

 

Conclusion:

Integrys has once again showcased its ability to connect technology and innovation. By leveraging their engineering expertise, the team at Integrys devised a ground-breaking solution to enhance the efficiency of the client’s servers. Overcoming the challenges of preserving existing architecture, cooling multiple heat sources, achieving precise airflow requirements, and seamless integration, Integrys delivered a state-of-the-art 3U server solution that exceeded expectations.

Integrys’s dedication and expertise enabled the client to optimize their system without requiring a complete redesign, saving both time and resources. This successful collaboration exemplifies Integrys’ commitment to delivering cutting-edge engineering solutions and reaffirms its position as an industry leader.

 

Click here to view more engineering services from Integrys or Contact Us for your personalized engineering solution.

COSA Technology from North Atlantic Industries

The long-standing goal at North Atlantic Industries (NAI) is to accelerate your time-to-mission—to get you to market faster. NAI’s COSA technology, short for Configurable Open Systems Architecture, helps you do just that. A distributed, intelligent, software-driven architecture, COSA lets you rethink the way you engineer power-critical and I/O-intensive mission systems and satisfies an impressive range of complex and time-critical requirements.

 


How NAI’s COSA technology works:


  • Select I/O boards, single board computers, power supplies or rugged systems to meet your requirements.
  • Customize it in modular fashion, selecting from more than 100 available, high-density, fully tested I/O, communications, measurement and simulation smart function modules.
  • Leverage NAI’s free software libraries, source-code and comprehensive API to jump-start development and speed your time to test.
  • Easily adjust board configuration to add or swap functional capabilities if requirements change.

 

 

 

 

NAI’s COSA technology works by providing a framework for building software systems that are composed of independent software components, or “modules.” These modules can be configured and combined in various ways to create a customized software solution that meets specific requirements.

The COSA framework provides a set of standard interfaces and protocols that allow the modules to communicate and interact with each other, regardless of the programming language or platform on which they are implemented. This allows for greater flexibility in choosing and integrating different software components, and enables the creation of highly configurable and adaptable software systems.
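
Purely as a conceptual illustration of this module-composition idea (all names here are hypothetical, and this is not NAI’s actual framework or API), the sketch below shows independent modules that expose a common interface being configured and combined into a single system:

```python
from abc import ABC, abstractmethod


class FunctionModule(ABC):
    """A standard interface that every (hypothetical) smart function module exposes."""

    @abstractmethod
    def configure(self, **settings) -> None: ...

    @abstractmethod
    def read(self) -> dict: ...


class AnalogInputModule(FunctionModule):
    def configure(self, **settings) -> None:
        self.channels = settings.get("channels", 8)

    def read(self) -> dict:
        return {"type": "analog", "values": [0.0] * self.channels}


class DiscreteIOModule(FunctionModule):
    def configure(self, **settings) -> None:
        self.banks = settings.get("banks", 2)

    def read(self) -> dict:
        return {"type": "discrete", "states": [False] * self.banks}


class System:
    """Composes whichever modules a particular configuration calls for."""

    def __init__(self):
        self.slots = []

    def add(self, module: FunctionModule, **settings) -> None:
        module.configure(**settings)       # same standard interface, any module type
        self.slots.append(module)

    def poll(self) -> list:
        return [module.read() for module in self.slots]


# One possible configuration built from the same catalog of modules
system = System()
system.add(AnalogInputModule(), channels=16)
system.add(DiscreteIOModule(), banks=4)
print(system.poll())
```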

COSA technology applications include industrial control systems, military systems, and telecommunications networks. It is designed to be highly modular, scalable, and adaptable to changing requirements, making it an attractive solution for complex and dynamic software systems.

 

COSA modules, boards and systems

Over 100 high-density COSA smart function modules are available for placement on NAI’s intelligent I/O boards, which can operate as standalone systems. Further, NAI’s OpenVPX, VME, cPCI and PCIe boards can be placed into rugged COSA systems that range from a single module to high-density systems supporting up to 10 motherboards and 60 smart modules (and virtually everything in between).
 

COSA software and processing advantages

Dedicated FPGAs embedded on NAI’s smart modules provide unique software and processing advantages that drive time and cost out of design, development and qualification schedules, accelerating your time-to-mission. Their programmability, intelligence and self-monitoring put more I/O capability into the modules themselves, so you can reduce the processing load on the SBC and deliver more capability at the edges of your applications.

Configurability of NAI’s COTS systems 

  • 100+ smart function modules
  • Maximum of 6 slots per card
  • Maximum of 18 functions per interface unit

NAI’s COSA architecture is massively configurable, providing more than 1.4 quadrillion possible system configurations. For perspective, that staggering number is roughly 175,000 times the global population.

NAI’s COSA architecture is the most modular, agile, and rugged portfolio of its kind. It enables you to leverage NAI’s portfolio of pre-integrated modules, boards, systems and power supplies to quickly and easily meet complex mission processing requirements, today and down the road, allowing you to out-pace, out-adapt and out-last your competition.

Find a team of specialists from Integrys Limited at booth #526 at CANSEC 2023 in Ottawa, Ontario on May 31st and June 1st 2023 to learn more about COSA.

 

Contact


To learn more about how NAI’s COSA technology can accelerate your time-to-mission or to request a quote, click here to contact us.

Matrox Deep learning and its role in machine vision

A leader in the machine vision industry, Matrox® Imaging leverages our vision expertise to apply deep learning technology when and where it is most appropriate, and to help our customers find the best solution for their applications.

Artificial intelligence, specifically machine learning by way of deep learning, is making machine vision technology for automated visual inspection more accessible and capable. Deep learning technology mimics how the human brain processes visual input but performs this task with the speed and robustness of a computerized system. The technology works to ensure quality in manufacturing industries, controlling production costs and enhancing customer satisfaction.

Deep learning technology excels at certain applications, such as identification and defect detection, specifically in instances where there are complex and varying imaging conditions. The technology still benefits from conventional image processing and analysis to locate regions of interest within images, speeding up the overall process and making it even more robust.

Real-world examples

Identification


Image classification using deep learning categorizes images or image regions to distinguish between similar-looking objects, including those with subtle imperfections. Image classification can, for example, determine whether the lips of glass bottles are safe or defective.


Defect detection

Image segmentation using deep learning categorizes image neighborhoods to pinpoint features like defects, such as dents and scratches on sheet metal. The located features can then be further analysed and measured using traditional machine vision tools.

Deep learning software and hardware

Matrox Imaging’s software offerings—Matrox Imaging Library (MIL) X and Matrox Design Assistant® X—include vision tools to classify or segment images for inspection using deep learning. Both software packages deliver optimized convolutional neural networks (CNNs) or models for the task.

Key to deep learning is the training of a neural network model. MIL CoPilot’s interactive environment provides the platform for training these models for use in machine vision applications. MIL CoPilot delivers all the functionality needed for this task, so you can create and label the training image dataset; augment the image dataset, if necessary; and train, analyze, and test the neural network model.
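
MIL CoPilot itself is an interactive environment, so no programming is required; purely to illustrate the same label/augment/train/evaluate workflow in familiar terms, here is a minimal PyTorch sketch (the dataset folder, class layout and hyperparameters are hypothetical, and this is not MIL CoPilot code):

```python
import torch
from torch import nn
from torchvision import datasets, models, transforms

# 1. Labeled training images arranged one folder per class, e.g. dataset/good, dataset/defect
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),            # 2. simple augmentation
    transforms.ColorJitter(brightness=0.2),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
data = datasets.ImageFolder("dataset", transform=augment)
loader = torch.utils.data.DataLoader(data, batch_size=16, shuffle=True)

# 3. A small CNN adapted to the number of classes found in the dataset
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(data.classes))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# 4. Train, then 5. evaluate (use a held-out test set in practice, not the training data)
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

model.eval()
correct = total = 0
with torch.no_grad():
    for images, labels in loader:
        correct += (model(images).argmax(dim=1) == labels).sum().item()
        total += labels.numel()
print(f"training accuracy: {correct / total:.1%}")
```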

We also offer hardware products that facilitate deep learning training and deployment. A suitably equipped and configured model of the Matrox 4Sight XV6 industrial computer comes ready for deep learning training. Another Matrox 4Sight XV6 model, as well as the Matrox 4Sight EV6 vision controller and Matrox Iris GTX smart cameras, are available to run both traditional machine vision workloads and deep learning inference.

Matrox Imaging’s team of vision experts knows where and when to leverage machine and deep learning technologies to your best advantage. Our specialists can help identify your needs and find a customized vision solution for your requirements.


 

Matrox Design Assistant X Color Analysis

Digital cameras with color image sensors are now commonplace. The same is true for the computing power and device interfaces necessary to handle the additional data from color images. What’s more, as users become familiar and comfortable with machine vision technology, they seek to tackle more difficult or previously unsolvable applications. These circumstances combine to make color machine vision an area of mounting interest. Color machine vision poses unique challenges, but it also brings some unique capabilities for manufacturing control and inspection.


The color challenge

Color is the manifestation of light from the visible part of the electromagnetic spectrum. It is perceived by an observer and is therefore subjective – two people may discern a different color from the same object in the same scene. This difference in interpretation also extends to camera systems with their lenses and image sensors. A camera system’s response to color varies not only between different makes and models for its components but also between components of the same make and model. Scene illumination adds further uncertainty by altering a color’s appearance. These subtleties come about from the fact that light emanates with its own color spectrum. Each object in a scene absorbs and reflects (i.e., filters) this spectrum differently and the camera system responds to (i.e., accepts and rejects) the reflected spectrum in its own way. The challenge for color machine vision is to deliver consistent analysis throughout a system’s operation – and between systems performing the same task – while also imitating a human’s ability to discern and interpret colors.

The majority of today’s machine vision systems successfully restrict themselves to grayscale image analysis. In certain instances, however, it is unreliable or even impossible to just depend upon intensity and/or geometric (i.e., shape) information. In these cases, the flexibility of color machine vision software is needed to:

  •  optimally convert an image from color to monochrome for proper analysis using grayscale machine vision software tools
  •  calculate the color difference to identify anomalies
  •  compare the color within a region in an image against color samples to assess if an acceptable match exists or to determine the best match
  •  segment an image based on color to separate objects or features from one another and from the background

Color images contain a greater amount of data to process (i.e., typically three times more) than grayscale images and require more intricate handling. Efficient and optimized algorithms are needed to analyze these images in a reasonable amount of time. This is where Matrox Design Assistant X color analysis tools come to the fore.

Matrox Design Assistant X color analysis steps

 

Matrox Design Assistant X includes a set of tools to identify parts, products, and items using color, assess quality from color, and isolate features using color.
The ColorMatcher step determines the best matching color from a collection of samples for each region of interest within an image. A color sample can be specified either interactively from an image—with the ability to mask out undesired colors—or using numerical values. A color sample can be a single color or a distribution of colors (i.e., a histogram). The color matching method and the interpretation of color differences can be manually adjusted to suit particular application requirements. The ColorMatcher step can also match each image pixel to color samples to segment the image into appropriate elements for further analysis using other steps such as BlobAnalysis.

Color Matcher step
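
As a generic illustration of this kind of color matching (plain NumPy, not the ColorMatcher step’s actual implementation or API), the sketch below assigns each pixel, and the average color of a region of interest, to the closest of a few reference color samples:

```python
import numpy as np

# Reference color samples (RGB), e.g. the expected cap colors on a production line
samples = {
    "red":   np.array([200.0, 30.0, 30.0]),
    "green": np.array([30.0, 180.0, 60.0]),
    "blue":  np.array([40.0, 60.0, 200.0]),
}
names = list(samples)
palette = np.stack([samples[name] for name in names])     # shape (3, 3)

# Synthetic 4x4 RGB image standing in for a camera frame
rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(4, 4, 3)).astype(float)

# Pixel-wise match: distance from every pixel to every sample, keep the closest
dists = np.linalg.norm(image[:, :, None, :] - palette[None, None, :, :], axis=-1)
label_map = dists.argmin(axis=-1)                         # index of best-matching sample
print(np.vectorize(lambda i: names[i])(label_map))        # a simple segmentation map

# Region match: compare the mean color of a region of interest to the samples
roi_mean = image[0:2, 0:2].reshape(-1, 3).mean(axis=0)
best = int(np.linalg.norm(palette - roi_mean, axis=1).argmin())
print("ROI best match:", names[best])
```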

The ImageProcessing step includes operations to calculate the color distance and perform color projection. The distance operation reveals the extent of color differences within and between images, while the projection operation enhances color to grayscale image conversion for analysis using other grayscale processing steps.
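
Likewise, as a rough NumPy illustration of these two operations (not the actual ImageProcessing step), the snippet below computes a per-pixel distance map against a reference color and a color "projection" that converts the image to grayscale while emphasizing that color:

```python
import numpy as np

rng = np.random.default_rng(2)
image = rng.integers(0, 256, size=(4, 4, 3)).astype(float)   # stand-in RGB frame

reference = np.array([220.0, 40.0, 40.0])   # the color of interest (RGB)

# Color distance: how far each pixel is from the reference color
distance_map = np.linalg.norm(image - reference, axis=-1)

# Color projection: dot each pixel onto the unit-length reference direction,
# producing a grayscale image in which reference-colored regions appear brightest
direction = reference / np.linalg.norm(reference)
projection = np.clip(image @ direction, 0, 255).astype(np.uint8)

print(distance_map.round(1))
print(projection)
```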

The color analysis tools included in the Matrox Design Assistant X interactive development environment (and the Matrox Imaging Library (MIL) software development kit) offer the accuracy, robustness, flexibility, and speed to tackle color applications with confidence. The color tools are complemented with a comprehensive set of field‐proven grayscale analysis tools (i.e., pattern recognition, blob analysis, gauging and measurement, ID mark reading, OCR, etc.). Moreover, application development is backed by the Matrox Imaging Vision Squad, a team dedicated to helping developers and integrators with application feasibility, best strategy and even prototyping.


The Use of Artificial Intelligence in Machine Vision

The use of artificial intelligence (specifically, machine learning by way of deep learning) in machine vision is an incredibly powerful technology with an impressive range of practical applications, including:

  • Giving virtual assistants the ability to process natural language;
  • Enhancing the e-commerce experience through recommendation engines;
  • Assisting medical practitioners with computer-aided diagnoses; and
  • Performing predictive maintenance in the aerospace industry.

Deep learning technology is also fundamental to the fourth industrial revolution, the ongoing automation of traditional manufacturing and industrial processes with smart technology, a movement in which machine vision has much to contribute.

Deep learning alone, however, cannot tackle all types of machine vision tasks, and requires careful preparation and upkeep to be truly effective. In this article we look at how machine vision—the automated computerized process of acquiring and analyzing digital images primarily for ensuring quality, tracking and guiding production—benefits from deep learning as the latter is making the former more accessible and capable.

Machine vision and deep learning: The challenges

Machine vision deals with identification, inspection, guidance and measurement tasks commonly encountered in the manufacturing and processing of consumer and industrial goods. Conventional machine vision software addresses these tasks with specific algorithm and heuristic-based methods, which often require specialized knowledge, skill and experience to be implemented properly. Moreover, these methods or tools sometimes fall short in terms of their ability to handle and adapt to complex and varying conditions. Deep learning is of great help but requires a painstaking training process based on previously collected sample data to produce results generally required in industry. Furthermore, more training is occasionally needed to account for unforeseen situations that can adversely affect production. It is important to appreciate that deep learning is primarily employed to classify data, and not all machine vision tasks lend themselves to this approach.

Where deep learning does and does not excel

As noted, deep learning is the process through which data—such as images or their constituent pixels—are sorted into two or more categories. Deep learning is particularly well suited to recognizing objects or object traits, such as identifying that widget A is different from widget B. The technology is also especially good at detecting defects, whether the presence of a blemish or foreign substance, or the absence of a critical component in or on a widget that is being assembled. It also comes in handy for recognizing text characters and symbols such as expiry dates and lot codes.

While deep learning excels in complex and variable situations, such as finding irregularities in non-uniform or textured image backgrounds or within an image of a widget whose presentation changes in a normal and acceptable manner, deep learning alone cannot locate patterns with an extreme degree of positional accuracy and precision. Analysis using deep learning is a probability-based process and is, therefore, not practical or even suitable for jobs that require exactitude. High-accuracy, high-precision measurement is still very much the domain of traditional machine vision software. The decoding of barcodes and two-dimensional symbologies, which is inherently based on specific algorithms, is also not an area appropriate for deep learning technology.


Where deep learning excels: identification (left), defect detection (middle) and OCR (right)


Where deep learning does not excel: High-accuracy, high-precision pattern matching (left), metrology (middle), and code reading (right)

Matrox Imaging software

Matrox Imaging offers two established software development packages that include classic machine vision tools as well as image classification tools based on deep learning. Matrox Imaging Library (MIL) X is a software development kit for creating applications by writing program code. Matrox Design Assistant X is an integrated development environment where applications are created by constructing and configuring flowcharts. Both software packages include image classification models that are trained using the MIL CoPilot interactive environment, which can also generate program code. Users of either software development package get full access to the Matrox Vision Academy online portal, offering a collection of on-demand video tutorials on using the software, including image classification. Users can also opt for Matrox Professional Services to access application engineers as well as machine vision and machine learning experts for application-specific assistance.

 

What is Deep Learning?


Answering the question “What is deep learning?” requires us to stick our heads down a rabbit hole. We say this because deep learning is a type of machine learning—which, in turn, is a type of artificial intelligence (AI). You now get the reference to the rabbit hole . . . Time now for some definitions to provide clarity.

Artificial intelligence: The simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with a human mind, such as learning and problem-solving.

Machine learning: The use and development of computer systems (hardware and software) that are able to learn and adapt without following explicit instructions, by using algorithms and statistical models to analyze and draw inferences from patterns in data.

Deep learning: A subset of machine learning based on artificial neural networks in which multiple layers of processing are used to extract progressively higher level features from data.

What distinguishes deep learning is that it empowers machines to learn from unstructured, unlabeled data as well as from labeled and categorized data. The rapid developments in deep learning have introduced many new applications for machine vision. Time now for another definition:

Machine vision: The technology and methods used to provide imaging-based automatic inspection and analysis for applications such as process control and robot guidance, usually in industry. Machine vision refers to many technologies, software and hardware products, integrated systems, actions, methods and expertise. A machine vision system uses a camera to view an image. Computer vision algorithms then process and interpret the image, before instructing other components in the system to act upon that data. Computer vision can be used alone, without needing to be part of a larger machine system.

GPUs for computer vision applications

Many technology companies have discovered the benefit of using GPUs (graphics processing units) for computer vision applications due to their ability to handle the rapid parallel processing of images. Traditional GPUs from companies like NVIDIA are large, power-hungry PCIe boards running in the cloud or in temperature-conditioned environments. So how do industrial companies take advantage of GPU technology in the field, or what’s often called ‘the edge’?

NVIDIA Jetson

Introducing NVIDIA Jetson, the world’s leading small-footprint GPU platform for running AI in harsh environments at the edge of the action.  Its high-performance, low-power computing for deep learning and computer vision makes it the ideal platform for compute-intensive projects in the field. Some of Integrys’ most valued partners provide nimble solutions in this space. But before we look at these companies and their products, it’s advisable to ask, and answer, the question below.

What’s the difference between carriers and Jetson modules?

A carrier board is specifically designed to work with one of the NVIDIA Jetson modules, allowing users to connect I/O, cameras, power and other peripherals to their devices. Together with the JetPack SDK, the combination of carrier and module is used to develop and test software for specific use cases.

Our Deep Learning Partners

DIAMOND SYSTEMS
Stevie: Carrier for NVIDIA Jetson AGX Xavier. Used in PPE and temperature monitoring, robotics, deep learning, and smart intersections/traffic control.

 


Featured product: JETBOX-STEVIE JETSON AGX XAVIER SYSTEM

Floyd: Carrier for Nvidia Jetson Nano & Xavier NX. Used in industrial safety, drone video surveillance and facial recognition.


Featured product: JETBOX-FLOYD JETSON NANO / NX SYSTEM

CONNECT TECH
Sentry-X Rugged Embedded System: Built for the NVIDIA® Jetson AGX Xavier™, Sentry-X is ideal for aerospace and defense applications, or for any market that can benefit from the Jetson AGX Xavier’s incredible performance in a rugged enclosure.


Featured product: SENTRY-X RUGGED EMBEDDED SYSTEM POWERED BY NVIDIA® JETSON AGX XAVIER™

Rogue: A full-featured carrier board for the NVIDIA® Jetson™ AGX Xavier™ module, the Rogue is specifically designed for commercially deployable platforms and has an extremely small footprint of 92 mm x 105 mm.


 

Featured product: ROGUE CARRIER FOR NVIDIA® JETSON™ AGX XAVIER™

MATROX IMAGING
Leveraging convolutional neural network (CNN) technology, the classification tool within the Matrox Imaging Library (MIL) categorizes images of highly textured, naturally varying, and acceptably deformed goods. Inference is performed exclusively by Matrox Imaging-written code on a mainstream CPU, eliminating the dependence on third-party neural network libraries and the need for specialized GPU hardware.

Featured product: MATROX IMAGING LIBRARY X

EIZO RUGGED SOLUTIONS
The Condor product line of GPGPU and video capture cards features NVIDIA Quadro® GPUs built on the Pascal™ and Turing™ architectures. These processing powerhouses leverage the latest GPGPU advancements from NVIDIA for machine-learning and artificial intelligence applications, as well as standard rendering pipelines.

Featured product: CONDOR 4107xX XMC GRAPHICS & GPGPU CARD

FREE OFFER

We have a Diamond Systems Jetbox-Stevie, an NVIDIA Jetson AGX Xavier AI-at-the-edge computing platform (diamondsystems.com), in our demo lab. Fill in the form to request a free demo.

 


Recent Trade Shows: Plant Expo and CANSEC

 Over the years Integrys has consistently exhibited at prominent trade shows, which provide a powerful platform for meeting new customers, reaching out to our existing clientèle, and reinforcing the Integrys brand, as well as those of our partners. This spring and summer we are continuing this valuable practice by exhibiting at the Plant Expo and CANSEC.

Plant Expo

On April 23 Integrys participated in the Plant Expo at the Mississauga Convention Centre in Mississauga, Ontario with our partners Matrox Imaging, Baumer and Advanced Illumination. We presented Matrox’s latest Design Assistant software release (DA X), an integrated development environment (IDE) for Microsoft® Windows® where vision applications are created by constructing an intuitive flowchart instead of writing traditional program code. The IDE enables users to design a graphical web-based operator interface for the application. Since DA X is hardware independent, you can choose any computer with GigE Vision® or USB3 Vision® cameras. This field-proven software is also a perfect match for a Matrox 4Sight embedded vision controller or the Matrox Iris GTR smart camera.

DA X creates an HTML5 interface for production use that can be deployed and locked to Matrox imaging devices, protecting the IP in the device. DA X communicates via TCP/IP, Modbus, EtherNet/IP and PROFINET. DA X interfaces to robots and connects to a variety of 3D scanners. It has photometric stereo designs built in to get 3D vision projects started fast. It even offers CNN capabilities.
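
As one illustrative sketch of how a downstream application might consume results from a vision system over plain TCP/IP (the port and the newline-delimited, comma-separated message format here are hypothetical; the actual protocol settings are configured in DA X itself), consider:

```python
import socket

HOST, PORT = "0.0.0.0", 5050   # hypothetical listener address for inspection results

# Accept one connection from the vision controller and log each result line
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((HOST, PORT))
    server.listen(1)
    connection, address = server.accept()
    print("connected:", address)
    with connection, connection.makefile("r", encoding="utf-8") as stream:
        for line in stream:                       # e.g. "fuse_42,PASS,score=0.98"
            part_id, verdict, *extras = line.strip().split(",")
            print(part_id, verdict, extras)
```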

At the Plant Expo we demonstrated color processing and pattern matching to determine the differences in fuses.  We also showcased a metrology measurement solution ideal for automating quality control checks of machined parts.


Pattern matching with DA X to determine differences in fuses

Integrys will also be exhibiting at the Plant Expo in Sherbrooke, Quebec on June 19.

CANSEC

On May 27-28, Integrys exhibited at CANSEC, held this year at the EY Centre in Ottawa, Ontario. CANSEC is Canada’s global defence and security trade show, held annually in Ottawa since 1998 by the Canadian Association of Defence and Security Industries (CADSI). At CANSEC, Integrys, along with partners NAII, GMS, Connect Tech, Eizo, Imperx, Cohu and RGB Spectrum, showcased military computing and video solutions featuring GPU processing, H.265 video encoding, and small form factor (SWaP: size, weight and power) computing and video systems.


Integrys at CANSEC