
Emerging Trends in
Machine Vision

Evolutionary and revolutionary—developments in machine vision technology are rapidly moving us toward the day when "faster, cheaper, more accurate, more robust" will no longer be mutually exclusive demands.

Steve Geraghty, Coreco Imaging, Inc.

Faster, cheaper, more accurate, more robust—since the inception of machine vision systems, this has been the refrain that has driven innovation. Vision system suppliers have always designed products to help you address the pressing need for a combination of these elements. When you look at how the technology will evolve in coming years, all advances can be seen in the context of these features.

Machine vision systems usually involve tradeoffs—one element increases at the expense of another. For example, systems designed to be more accurate or robust have generally sacrificed speed. Today, those traditional tradeoffs are diminishing. High performance and low cost are no longer mutually exclusive.

Platform Evolution
Machine vision systems began as proprietary, embedded systems designed to capture and process images. Systems had their own dedicated operating system and programming language, with processing functions performed by custom-designed ASICs. This approach made early systems expensive and difficult to use.

One way to reduce costs is to adopt open standards. In recent years, the move to Windows-based systems has provided standard interfaces, components, and design rules. Proprietary technology has been replaced by board-level solutions for systems based on Windows NT. Today, a camera takes a picture and delivers it to a board that plugs into a bus on the PC. This shift has led to steady increases in system performance—and major cost reductions.

Machine vision platforms are now returning to the embedded systems model, but the new systems operate in an open environment based on Windows platforms. Image-processing functions that were difficult and time-consuming for the Pentium are now performed by field-programmable gate arrays (FPGAs), which provide major performance improvements.

Regardless of the design, a key issue is how to most efficiently use the CPU. When image-processing tasks can be offloaded, the CPU is free to handle other activities (e.g., motion control). Solutions will increasingly be available either as embedded systems, board-level products, or intelligent cameras and will focus on offloading tasks from the main CPU, freeing it for more advanced image processing.

Integrated Vision and Motion
Photo 1. Applications such as bottle inspection rely on machine vision systems for quality control. Today, machine vision is used in all aspects of factory automation, verification, and inspection.
Control is another issue being affected by changes in machine vision technologies (see Photo 1). Vision solutions involve looking for something and then doing something with the results, which often means a control function. A good example of this is alignment.

Consider a plain circuit board before components are installed. The first task is to screen-print the solder base to pinpoint the location where components can be mounted. The board and screen must be aligned precisely before screening. The alignment system uses machine vision to identify the position of the board and screen and then uses motion control to adjust one relative to the other.

Today’s alignment systems contain a vision board and at least one motion board. Currently, however, there’s no predefined connection between the two; rather, solutions are dependent on the specific system.

The vision board sees things in pixel space—it may have a good coordinate system but no clue as to the connection between the information and the real world. For example, if an object is rotated 360°, how does the amount of rotation and displacement affect pixel space?

To coordinate pixel space with real-world space, you need to calibrate the two boards. Currently, vision and motion aren’t calibrated to each other’s units. This task is left to the customer to handle. Typically, a customer buys a vision board from one vendor and a motion board from another and then has to come up with the mathematics to make the two boards talk to each other in the same space. Weeks of engineering time are often required before vision and motion work in sync. If the customer worked with one company and could get all three components—vision, motion, and calibration—developed together, the task would be completed much more efficiently.
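As a rough sketch of what that shared calibration involves, the snippet below fits an affine map from pixel coordinates to stage coordinates using a handful of fiducial points seen by both the vision and motion sides. The fiducial values, units, and function name are hypothetical, and a real alignment package would also account for lens distortion and camera mounting angle.

import numpy as np

def fit_pixel_to_world(pixel_pts, world_pts):
    """Fit an affine map  world = A @ pixel + t  by least squares.

    pixel_pts, world_pts: (N, 2) arrays of matching fiducial locations,
    pixels on the vision side, stage units (e.g., mm) on the motion side.
    """
    pixel_pts = np.asarray(pixel_pts, dtype=float)
    world_pts = np.asarray(world_pts, dtype=float)

    # Build the design matrix [x, y, 1] so the solve recovers A and t together.
    ones = np.ones((pixel_pts.shape[0], 1))
    design = np.hstack([pixel_pts, ones])          # shape (N, 3)
    coeffs, *_ = np.linalg.lstsq(design, world_pts, rcond=None)

    A = coeffs[:2].T                               # 2x2 scale/rotation/shear
    t = coeffs[2]                                  # 2-vector translation
    return A, t

# Hypothetical fiducials: where the vision board saw them (pixels)
# and where the motion stage reports them (mm).
pixels = [(102.4, 88.7), (536.1, 91.2), (530.8, 410.5)]
world  = [(10.0, 10.0), (60.0, 10.0), (60.0, 47.5)]

A, t = fit_pixel_to_world(pixels, world)

# Map a detected feature from pixel space into stage coordinates,
# which is the correction the motion controller actually needs.
feature_px = np.array([300.0, 250.0])
feature_mm = A @ feature_px + t
print(feature_mm)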

In the future, look for vision and motion to be used together to perform applications. The same supplier will provide both systems, either as a single board or as an embedded system, depending on the level of performance required. When vision and motion are tightly integrated as a single product, then speed, accuracy, robustness, and cost will all be far better. This tight integration will eliminate delays and inefficiencies that occur when integration and calibration are the customer’s responsibility—and will result in another cost saver.

Camera Technology
The four essential elements—greater speed, lower cost, enhanced accuracy, and increased robustness—are also being affected by changes in camera technology. As components have become smaller, the field of view of area-scan cameras has shrunk to capture greater detail. Unfortunately, the area that must be covered has grown larger. In many cases, the image area has grown beyond the range of a standard 640 by 480 area-scan camera. The traditional options have been to take more time (unlikely), install additional cameras, or buy a camera with a wider field of view.
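To put rough numbers on that coverage problem, the sketch below estimates how many standard 640 by 480 cameras it would take to tile an inspection area once the required resolution per pixel is fixed. All of the figures are made up for illustration.

import math

def cameras_needed(area_w_mm, area_h_mm, mm_per_pixel,
                   sensor_w_px=640, sensor_h_px=480, overlap=0.1):
    """Rough count of area-scan cameras needed to tile a region.

    mm_per_pixel is the resolution the inspection demands; overlap is the
    fraction of each field of view sacrificed so neighboring images can be
    stitched or cross-checked.
    """
    fov_w = sensor_w_px * mm_per_pixel * (1.0 - overlap)
    fov_h = sensor_h_px * mm_per_pixel * (1.0 - overlap)
    return math.ceil(area_w_mm / fov_w) * math.ceil(area_h_mm / fov_h)

# Hypothetical example: a 300 mm x 200 mm area inspected at 0.2 mm/pixel.
print(cameras_needed(300, 200, 0.2))   # prints 9 for these made-up numbers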

The most efficient option is to use a line-scan camera. When an object moves past one of these cameras, the camera builds an image by capturing the entire object, line by line. The image is then fed into a frame grabber for input to the PC.
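Here is a minimal sketch of that line-by-line acquisition, assuming a hypothetical grab_line() call in place of whatever the frame grabber's real API provides. In hardware, each line is typically triggered by an encoder pulse so that line spacing tracks the object's motion past the camera.

import numpy as np

LINE_WIDTH = 2048          # pixels per line for a typical line-scan sensor
LINES_PER_FRAME = 4000     # how many triggered lines make up one image

def grab_line(width):
    """Stand-in for the real acquisition call; returns one row of pixels."""
    return np.random.randint(0, 256, size=width, dtype=np.uint8)

def acquire_frame():
    # Pre-allocate the full image so each incoming line is just a row copy.
    frame = np.empty((LINES_PER_FRAME, LINE_WIDTH), dtype=np.uint8)
    for row in range(LINES_PER_FRAME):
        # In hardware this loop is paced by encoder pulses as the object moves.
        frame[row, :] = grab_line(LINE_WIDTH)
    return frame

image = acquire_frame()    # a single seamless image of the moving object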

Although prices change daily, line-scan cameras cost about the same as, or less than, the multiple area-scan cameras that would be needed to provide the same coverage. The real story, however, is the dramatic drop in complexity as compared with area-scan solutions. For example, some inspection systems use banks of eight or more area-scan cameras, all of which must be coordinated and managed. These systems require two or more frame grabbers, thereby consuming precious PC slots. And image data from each camera has to be read into RAM in sequence and synchronized, one camera after another.

Look for line-scan cameras to replace traditional area-scan cameras for many machine vision applications. Not only is the configuration more efficient, but new and faster inspection and alignment techniques are emerging from this technology. For example, a line-scan camera can scan an entire moving tray of parts, rather than each part individually. This capability means that parts can be examined together in memory, which speeds processing time.

Adaptive Software
Vision systems are commonly perceived to be difficult to program and cumbersome to adapt to changes. When the shape or color of an object changes, humans immediately recognize the object. However, a machine vision system facing the same situation will fail. So how do you design an adaptable vision system that performs quickly, accurately, and robustly without massive programming (in other words, cheaply)?

Photo 2. Determining the optimum position and orientation of patterns and images is a challenge in many machine vision applications. Manufacturers find that geometric-based algorithms provide the robustness and accuracy they require.

The solution is adaptive algorithms—software models that adapt to change. For example, these algorithms allow factory inspection to adapt robustly to parts of different colors. The technology takes various forms, such as neural nets, artificial intelligence, and geometric correlation (see Photo 2). Not surprisingly, there are tradeoffs: adaptive algorithms require more processing horsepower.
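As a toy stand-in for the correlation family, the snippet below uses plain normalized cross-correlation (via OpenCV's matchTemplate, not any vendor-specific tool) to locate a part template in a scene image. The normalization buys some tolerance to lighting changes but not to large rotation or scale changes, which is where the geometric and learning-based approaches mentioned above justify their extra processing cost. The file names and the 0.8 acceptance threshold are arbitrary.

import cv2

# Hypothetical file names; any grayscale scene and part template will do.
scene = cv2.imread("tray.png", cv2.IMREAD_GRAYSCALE)
part  = cv2.imread("part_template.png", cv2.IMREAD_GRAYSCALE)

# Score every placement of the template against the scene, then take the best.
scores = cv2.matchTemplate(scene, part, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_xy = cv2.minMaxLoc(scores)

if best_score > 0.8:                     # acceptance threshold is arbitrary
    print("part located at", best_xy, "score", round(best_score, 3))
else:
    print("no confident match; a geometric or trained model would be needed")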

Human–Machine Interface
Although prices continue to drop, vision systems will not achieve widespread acceptance if you have to be an engineer to make them work. One of the benefits of the emerging adaptive technology will be to make vision systems increasingly easy to use, and that means putting customized high-performance solutions into the hands of users without the cost of extended development work.

Adaptive technology for vision systems is part of an evolutionary progression that has improved user interaction with machines. The old DOS, Unix, and proprietary operating systems each had its own unique interface that usually required a skilled programmer to operate. The move to open standards and a Windows-based common interface has made systems more friendly and intuitive for inexperienced users. Ultimately, configuring and operating a vision system will be as easy as operating a camcorder.

The Ultimate Goal
So where are we headed? The next step for machine vision is to accommodate custom manufacturing and one-off production without extensive programming. Even if you have no idea what one-off characteristics will be needed tomorrow, machine vision will be able to automatically adapt. Just as you don’t have to tell a human who is physically inspecting parts when a new shape or color comes by, soon you will no longer have to go through an elaborate programming exercise to convey the same type of information to a vision system.

Manufacturing is quickly moving into a build-to-order world. Flexible inspection and alignment systems will soon be able to keep up with rapidly changing fabrication processes. Adaptive algorithms built into vision software will enable a new class of highly responsive image-processing solutions. These advances will fulfill our needs for performance—faster, cheaper, more accurate, and more robust—in a world that just can’t wait.




Steve Geraghty is Vice President of Operations, Coreco Imaging, Inc., 55 Middlesex Tnpk., Bedford, MA 01730; 781-275-2700, fax 781-297-9590, sgeraghty@corecoimaging.com.


 