Emerging Trends in Machine Vision
Evolutionary and revolutionary developments in machine vision technology are rapidly moving us toward the day when "faster, cheaper, more accurate, more robust" will no longer be mutually exclusive demands.
Steve Geraghty, Coreco Imaging, Inc.
Faster, cheaper, more accurate, more robust: since the inception of machine vision systems, this has been the refrain that has driven innovation. Vision system suppliers have always designed products to help you address the pressing need for a combination of these elements. When you look at how the technology will evolve in coming years, all advances can be seen in the context of these features.
Machine vision systems usually involve tradeoffs: one element increases at the expense of another. For example, systems designed to be more accurate or robust have generally sacrificed speed. Today, those traditional tradeoffs are diminishing. High performance and low cost are no longer mutually exclusive.
One way to reduce costs is to adopt open standards. In recent years, we've seen the evolution to Windows-based systems provide standard interfaces, components, and design rules. Proprietary technology has been replaced by board-level solutions for systems based on Windows NT. Today, a camera takes a picture and delivers it to a board that plugs into a bus on the PC. This shift has led to steady increases in system performance, along with major cost reductions.
Machine vision platforms are now returning to the embedded systems model, but the new systems operate in an open environment based on Windows platforms. Image-processing functions that were difficult and time consuming for the Pentium are now performed by field-programmable gate array architecture, which provides major performance improvements.
Regardless of the design, a key issue is how to most efficiently use the CPU. When image-processing tasks can be offloaded, the CPU is free to handle other activities (e.g., motion control). Solutions will increasingly be available either as embedded systems, board-level products, or intelligent cameras and will focus on offloading tasks from the main CPU, freeing it for more advanced image processing.
Integrated Vision and Motion
Consider a plain circuit board before components are installed. The first task is to screen-print the solder base to pinpoint the location where components can be mounted. The board and screen must be aligned precisely before screening. The alignment system uses machine vision to identify the position of the board and screen and then uses motion control to adjust one relative to the other.
Today's alignment systems contain a vision board and at least one motion board. Currently, however, there's no predefined connection between the two; rather, solutions depend on the specific system.
The vision board sees things in pixel space: it may have a good coordinate system but no clue how that information connects to the real world. For example, if an object is rotated 360°, how do the rotation and displacement translate into pixel space?
To coordinate pixel space with real-world space, you need to calibrate the two boards. Currently, vision and motion aren't calibrated to each other's units; this task is left to the customer. Typically, a customer buys a vision board from one vendor and a motion board from another, then has to work out the mathematics to make the two boards talk to each other in the same space. Weeks of engineering time are often required before vision and motion work in sync. If the customer could work with one company and get all three components (vision, motion, and calibration) developed together, the task would be completed much more efficiently.
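The mathematics the customer is left to work out can be surprisingly simple in the basic case. Here is a minimal sketch (not any vendor's actual API) that calibrates pixel coordinates to real-world coordinates from two reference fiducials, assuming the mapping is a similarity transform (uniform scale, rotation, and translation). Modeling 2-D points as complex numbers lets the transform world = a·pixel + b be solved in closed form:

```python
# Minimal sketch: calibrating pixel space to real-world space from two
# reference points (fiducials). Assumes a similarity transform
# (uniform scale + rotation + translation), modeled with complex
# numbers as: world = a * pixel + b. All names here are illustrative.

def calibrate(p1, p2, w1, w2):
    """Solve world = a*pixel + b from two point correspondences.
    Points are (x, y) tuples; returns a pixel-to-world function."""
    zp1, zp2 = complex(*p1), complex(*p2)
    zw1, zw2 = complex(*w1), complex(*w2)
    a = (zw2 - zw1) / (zp2 - zp1)   # encodes scale and rotation
    b = zw1 - a * zp1               # encodes translation

    def pixel_to_world(p):
        z = a * complex(*p) + b
        return (z.real, z.imag)
    return pixel_to_world

# Two fiducial marks seen in the image, with known board positions (mm):
to_world = calibrate(p1=(100, 100), p2=(500, 100),
                     w1=(10.0, 10.0), w2=(50.0, 10.0))
print(to_world((300, 300)))   # → (30.0, 30.0)
```

A production system would use more than two points and a least-squares fit to average out measurement noise (and an affine or perspective model if the camera is not square to the work surface), but the principle is the same.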
In the future, look for vision and motion to be used together to perform applications. The same supplier will provide both systems, either as a single board or as an embedded system, depending on the level of performance required. When vision and motion are tightly integrated as a single product, speed, accuracy, robustness, and cost will all improve substantially. This tight integration will eliminate the delays and inefficiencies that occur when integration and calibration are the customer's responsibility, yielding yet another cost saving.
Line-Scan Cameras
For many inspection applications, the most efficient imaging option is a line-scan camera. When an object moves past one of these cameras, the camera builds an image by capturing the entire object, line by line. The image is then fed into a frame grabber for input to the PC.
Although prices change daily, line-scan cameras now cost about the same as, or less than, the multiple area-scan cameras needed to provide equivalent coverage. The real story, however, is the dramatic drop in complexity compared with area-scan solutions. For example, some inspection systems use banks of eight or more area-scan cameras, all of which must be coordinated and managed. These systems require two or more frame grabbers, thereby consuming precious PC slots. And image data from each camera must be read into RAM in sequence and synchronized, one camera after another.
Look for line-scan cameras to replace traditional area-scan cameras for many machine vision applications. Not only is the configuration more efficient, but new and faster inspection and alignment techniques are emerging from this technology. For example, a line-scan camera can scan an entire moving tray of parts, rather than each part individually. This capability means that parts can be examined together in memory, which speeds processing time.
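The line-by-line acquisition described above can be sketched in a few lines of code. This is a hedged illustration, not real acquisition software: `capture_line` is a hypothetical stand-in for the camera/frame-grabber interface, which in practice would be triggered by an encoder pulse as the object moves past the sensor:

```python
# Minimal sketch of how a frame grabber assembles a 2-D image from a
# line-scan camera: each trigger yields one row of pixels as the
# object moves past the sensor, and rows are stacked into the frame.
# capture_line is a hypothetical stand-in for real acquisition hardware.

def capture_line(row, width=8):
    # Stand-in: return one row of synthetic 8-bit pixel intensities.
    return [(row * width + x) % 256 for x in range(width)]

def assemble_frame(num_lines, width=8):
    frame = []
    for row in range(num_lines):        # one line per encoder trigger
        frame.append(capture_line(row, width))
    return frame

frame = assemble_frame(num_lines=4)
print(len(frame), len(frame[0]))        # → 4 8
```

Because the resulting frame covers the whole moving tray, downstream software can locate and inspect every part in one pass through memory, rather than juggling separate images from a bank of area-scan cameras.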
Adaptive Algorithms
One answer to rapidly changing parts and processes is adaptive algorithms: software models that adapt to change. For example, these algorithms allow factory inspection to adapt to parts of different colors in a robust way. The technology takes various forms, such as neural nets, artificial intelligence, and geometric correlation (see Photo 2). Not surprisingly, there are tradeoffs: adaptive algorithms require more processing horsepower.
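To make the idea concrete, here is a minimal sketch in the spirit of the article: an inspection gate whose reference color drifts toward the parts it accepts, so gradual color changes in production do not trigger false rejects. The class name, thresholds, and update rate are all illustrative assumptions, not a real product's algorithm:

```python
# Minimal sketch of an "adaptive" inspection check: instead of a fixed
# golden template, the reference color is updated with an exponential
# moving average of accepted parts. Names and numbers are illustrative.

class AdaptiveColorGate:
    def __init__(self, reference, tolerance=30.0, rate=0.2):
        self.reference = list(reference)  # mean RGB of a known-good part
        self.tolerance = tolerance        # max allowed color distance
        self.rate = rate                  # how quickly the model adapts

    def inspect(self, rgb):
        dist = sum((a - b) ** 2
                   for a, b in zip(rgb, self.reference)) ** 0.5
        if dist > self.tolerance:
            return False                  # reject; model unchanged
        # Accept, and pull the reference toward this sample.
        self.reference = [r + self.rate * (s - r)
                          for r, s in zip(self.reference, rgb)]
        return True

gate = AdaptiveColorGate(reference=(200, 40, 40))
print(gate.inspect((205, 45, 38)))   # close to reference → True
print(gate.inspect((90, 180, 60)))   # very different → False
```

The extra "processing horsepower" the article mentions comes from scaling this idea up: real adaptive systems update far richer models (neural-net weights, geometric templates) on every part, at line rate.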
Adaptive technology for vision systems is part of an evolutionary progression that has improved user interaction with machines. The old DOS, Unix, and proprietary operating systems each had its own unique interface that usually required a skilled programmer to operate. The move to open standards and a Windows-based common interface has made systems more friendly and intuitive for inexperienced users. Ultimately, configuring and operating a vision system will be as easy as operating a camcorder.
The Ultimate Goal
Manufacturing is quickly moving into a build-to-order world. Flexible inspection and alignment systems will soon be able to keep up with rapidly changing fabrication processes. Adaptive algorithms built into vision software will enable a new class of highly responsive image-processing solutions. These advances will fulfill our needs for performance (faster, cheaper, more accurate, and more robust) in a world that just can't wait.
Steve Geraghty is Vice President of Operations, Coreco Imaging, Inc., 55 Middlesex Tnpk., Bedford, MA 01730; 781-275-2700, fax 781-297-9590, firstname.lastname@example.org.