UK Industrial Vision Association director Don Braggins looks at two areas of vision system technology of importance to the packaging industry: choosing the correct illumination and transferring image data from the camera to the processing computer
Looking in a positive light
Illumination sources in the first machine vision systems were based largely on those available from microscopy or photographic copying markets. However, dedicated illumination systems have evolved over the years. A key point to recognise is that the imaging device is receiving the light that leaves the object.
We generally consider ‘light’ to be radiation in the visible range of the spectrum [around 400-700nm wavelength]. In machine vision applications, the goal is for the object of interest to be clearly distinguishable from all the other features around it.
Even more desirable is to have maximum contrast for the features of interest and minimum contrast from features of no interest, combined with minimum sensitivity to feature and environment variations. The major weapon in the armoury of machine vision illumination is the ability to vary the direction and wavelength of the incident light.
In order to decide the best type of illumination for a particular application, it is important to remember that light travels in straight lines and, at a smooth surface, is reflected at an angle equal to the angle of incidence. A ‘dark’ area in the image could be due to genuine absorption of light by a feature, or because light reflecting from the feature is simply not reaching the camera.
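The law of reflection described above can be sketched as a short calculation. This is an illustrative example only (the `reflect` function and the example surface normals are not from the article): in two dimensions, a reflected direction r is obtained from an incident direction d and a unit surface normal n by r = d − 2(d·n)n.

```python
def reflect(incident, normal):
    """Reflect an incident light direction off a surface with the given normal.

    Implements the law of reflection: the reflected ray leaves at the same
    angle to the surface normal as the incident ray arrives.
    r = d - 2 (d . n) n, with n normalised to unit length.
    """
    nx, ny = normal
    length = (nx * nx + ny * ny) ** 0.5
    nx, ny = nx / length, ny / length
    dx, dy = incident
    dot = dx * nx + dy * ny
    return (dx - 2.0 * dot * nx, dy - 2.0 * dot * ny)

# Light travelling straight down onto a flat, horizontal surface is
# reflected straight back up - a camera directly above receives it.
print(reflect((0.0, -1.0), (0.0, 1.0)))

# The same light striking a tilted facet (hypothetical normal) is
# reflected sideways, away from the camera, producing a dark region.
print(reflect((0.0, -1.0), (0.6, 0.8)))
```

This is why a wrinkled or tilted facet appears dark even though it absorbs nothing: the reflected ray simply misses the lens.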
The situation is further complicated by the fact that diffuse, rough textured surfaces reflect light at multiple angles and some surfaces have a combination of a textured and smooth surface [such as a varnish coating on paper].
The problem faced with textured surfaces is illustrated in Figure 1, which shows the multiple directions that light must come from to give an even illumination of the textured surface. If a typical ring lamp illuminates this type of surface, dark areas will arise in the image where light is reflected away from the camera.
The effects can be seen in Figure 2, which shows some lettering on a wrinkled foil surface – a typical problem experienced in the packaging industry. There are many dark regions in the image that make it difficult to differentiate the lettering of interest.
There are a host of different illumination techniques available for machine vision applications, many of which make use of beam splitters and reflection surfaces to determine the direction of incident light. Backlit sources illuminate the object from the opposite side to the camera.
Bright-field lighting illuminates the object from the same side as the camera, with the light source within the camera’s field of view as reflected in the surface under examination. Dark-field illumination also lights the object from the same side as the camera, but with the light source positioned outside the camera’s field of view.
Figures 3 and 4 indicate the optical arrangements for collimated on-axis light and continuous diffuse illumination, sometimes known as cloudy day illumination. Cloudy day illumination reduces the effects of surface texture and emphasises genuine absorption effects. When this illumination method is applied to the wrinkled foil from Figure 2, the image shown in Figure 5 is obtained. The difference is dramatic.
Although the direction of illumination is very important, there is also the consideration of wavelength. The colour of a solid object is determined by the wavelength of light that it reflects. A yellow object reflects yellow light and absorbs the rest, while a red object reflects red light and absorbs the rest.
‘White light’ sources consist of a mixture of all wavelengths, and different white light sources may have slightly different intensities of particular wavelengths. This, in turn, may result in the image contrast appearing different from one light source to another.
This is illustrated in Figures 6 and 7. They show the different contrast on the packaging for a set of crayons illuminated by room light (Figure 6) and a fluorescent source (Figure 7). In some cases it may be advantageous to use light from a different region of the spectrum in conjunction with a camera sensitive to that particular wavelength. Figures 8 and 9 compare the results using a white light source (Figure 8) and an infrared source (Figure 9). The contrast in the infrared image is much more uniform.
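The gain from matching the illumination wavelength to the scene, as in Figures 8 and 9, can be quantified with a standard contrast measure. The grey-level values below are hypothetical, chosen purely for illustration; the Michelson formula itself is standard.

```python
def michelson_contrast(feature, background):
    """Michelson contrast between mean feature and background grey levels."""
    return abs(feature - background) / (feature + background)

# Hypothetical mean grey levels (0-255) for dark lettering on a coloured
# label. Under an unmatched white source the background reflects weakly;
# under a source matched to the background's reflectance it appears bright,
# so the lettering stands out more strongly.
print(michelson_contrast(40, 90))    # unmatched source: modest contrast
print(michelson_contrast(40, 220))   # matched-wavelength source: higher contrast
```

A vision system thresholding on grey level benefits directly from the higher figure: the wider the separation between feature and background, the more tolerant the system is to lighting drift and surface variation.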
Transferring the data
While it is clear that the illumination is an important part of the machine vision set up, how is the image then transferred for processing? Although a new generation of intelligent cameras (whole vision systems in a single enclosure) is now on the market there is still a significant requirement for transferring image data to computers for processing and analysis.
As CCD camera technology became more widespread, camera manufacturers developed their own interfaces to transfer image data to frame-grabbers in PCs. These generally consisted of multiple coax analogue cables or complex parallel digital cables, and each camera had different timing, connector and control protocols designed to best deliver the specific features of the particular camera.
The result was that every frame-grabber/camera combination required a custom cable and the resulting combination needed to be tested and ‘debugged’. This in turn made it difficult to switch between cameras. To overcome these problems, three new camera interfaces have evolved. The first is an analogue approach for standard applications.
The second is called Camera Link and is a digital transmission standard for connecting digital cameras to frame-grabbers, agreed in late 2000 by manufacturers specifically for the machine vision industry. The third is based on the IEEE 1394 [Firewire] digital connection standard.
The new analogue approach brings a simplification of cabling, operating modes and configuration, overcoming the problems of complex interaction of signals. The use of RS232 means that no jumpers are required, camera power can be provided from the frame-grabber and new trigger modes can reduce complexity and thus system cost.
These advantages do not apply to all analogue cameras, so the standardisation offered by Camera Link and Firewire becomes increasingly attractive.
The Camera Link specification includes data transmission as well as camera control and asynchronous serial communications, all on a standard cable.
Camera Link is a point-to-point system used to connect individual cameras to the frame-grabber in the computer. Unlike standard digital parallel interfaces, which may require up to 100 wires to transmit all the information [for high-end, high-resolution, multi-tap cameras], Camera Link requires only 26 wires [11 differential lines and four shield connections] to transmit the same amount of information in high-speed serial mode to the frame-grabber.
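The wire count quoted above is easily checked: each differential line is carried on a twisted pair, so the total is the pair count doubled plus the shield connections. A minimal sketch of the arithmetic:

```python
# Camera Link cable arithmetic, per the figures quoted in the text:
differential_lines = 11   # each differential line needs a twisted pair (2 wires)
shield_connections = 4
total_wires = differential_lines * 2 + shield_connections
print(total_wires)  # → 26
```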
Firewire can be used for multi-camera/multi-PC applications, making use of a bus configuration. The IEEE 1394 digital connection standard has been applied with a growing success in consumer video and in broadcast (DVCAM) and now in industrial vision applications, although it is suitable for high speed and low cost network transfer of any digital data.
IEEE 1394 is a high-speed, non-proprietary, scalable digital serial bus that transports data at current rates of 100, 200 and 400Mbps (megabits per second), with 800Mbps in the near future. Up to 63 nodes may be connected, with a maximum of 16 hops between any two nodes.
Data and timing signals are carried on a low-cost cable made of two shielded twisted pairs working in low-voltage differential mode. For industrial video, the camera may have any resolution or frame rate, transmits raw data without any compression, and can be remotely controlled for DSP functions.
The camera becomes a plug-and-play computer peripheral, using a generic, low-cost and high-performance connection standard. The 200 or 400Mbps bus speed allows uncompressed live pictures to be transmitted in isochronous mode.
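Whether an uncompressed video stream fits within the bus speeds quoted above is a simple bandwidth calculation. The camera parameters below (VGA resolution, 8-bit monochrome, 30 frames/s) are an illustrative assumption, not figures from the article:

```python
def required_mbps(width, height, bits_per_pixel, fps):
    """Raw, uncompressed video bandwidth in megabits per second."""
    return width * height * bits_per_pixel * fps / 1e6

# Hypothetical VGA-resolution, 8-bit monochrome camera at 30 frames/s:
rate = required_mbps(640, 480, 8, 30)
print(f"{rate:.1f} Mbps")  # → 73.7 Mbps, well within a 400 Mbps bus
```

Note that in practice the isochronous payload is a fraction of the raw bus rate, so real headroom is somewhat smaller, but the margin here is comfortable either way.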
Asynchronous commands may set up and reconfigure camera operation at the same time, interactively with the application. Raw CCD or CMOS image data is transmitted directly to computer RAM, with direct pixel-to-byte equivalence.
A frame-grabber board is unnecessary. One generic 1394 host adapter card connects the computer to the network. Camera data and processing may be network-shared.