Wednesday, 21 February 2018

OPTIMAL LIGHTING FOR LINE SCAN CAMERA APPLICATIONS


The speed of line scan cameras has increased greatly in recent years. Modern line scan cameras operate with integration times in the range of 15 µs. In order to achieve excellent image quality, illuminance levels of over 1 million lux are required in some cases. One of the most important criteria for assessing image quality is noise (white noise). There are various noise sources in image processing systems, and the most dominant one is called “shot noise”.
Shot noise has a physical cause and has nothing to do with the quality of the camera: it arises from the quantum nature of light itself, i.e. from individual photons. Image quality depends on the number of photons that hit the object and ultimately on the number of photons that reach the camera sensor.
In a set-up with a defined signal transmission, there are three parameters which influence shot noise when capturing an image:
  •  integration time (scanning speed)
  •  aperture (depth of field and maximum sharpness)
  •  amount of light on the scanned object
The choice of lens aperture largely determines the required light intensity. If, for instance, the aperture is stopped down from f/4 to f/5.6, twice the amount of light is required in order to maintain the same signal-to-noise ratio (SNR) – see fig. 01. Stopping down to a higher f-number also increases the depth of field and improves image quality, since vignetting is reduced with the majority of lenses.
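The interplay of aperture, integration time and illuminance can be made concrete with a quick shot-noise estimate. The following Python sketch is illustrative only: the lux-to-photon conversion factor, pixel size and quantum efficiency are rough assumptions, not manufacturer data.

```python
import math

def photon_snr(illuminance_lux, f_number, integration_time_s,
               pixel_area_m2=49e-12, quantum_efficiency=0.5):
    """Rough shot-noise-limited SNR for a single pixel (all constants assumed)."""
    # Order-of-magnitude conversion: ~4e15 photons per second per m^2 per lux
    # for broadband white light (an assumption for illustration).
    photon_flux = 4e15 * illuminance_lux
    # Light gathered by the lens scales with 1/f_number^2.
    photons = (photon_flux / f_number**2) * pixel_area_m2 \
              * integration_time_s * quantum_efficiency
    return math.sqrt(photons)  # shot-noise limit: SNR = sqrt(photon count)

# Stopping down from f/4 to f/5.6 (one stop) halves the collected light,
# so doubling the illuminance restores the original SNR:
print(photon_snr(1e6, 4.0, 15e-6))   # 1 Mlux at f/4
print(photon_snr(2e6, 5.6, 15e-6))   # 2 Mlux at f/5.6 -> nearly identical
```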

LIGHT FOR ALL APPLICATIONS


LEDs are available in a variety of colors: red, green, blue, yellow or amber. Even UV and IR LEDs are obtainable. The choice of a specific color, and thus a specific wavelength, determines how object properties are made visible on surfaces with differing spectral responses.
In the past, red light was often used wherever high intensity was required. Today, however, the most significant performance gains in LED technology are being made with white LEDs. These high-performance LEDs are used, for example, in car headlights and street lamps. The core of a white LED actually consists of a blue LED: using fluorescent substances, part of the light from the blue LED is converted into other visible spectral ranges in order to produce a 'white' light.
UV LEDs are frequently used to make fluorescent effects visible. In many cases a wavelength of approx. 400 nm is sufficient. UV LEDs with shorter wavelengths may be suitable for curing paints, adhesives or varnishes. Compared to blue or white LEDs, UV LEDs are less efficient, although focusing the light with a reflector can improve this. IR lighting is used in food inspection, typically at wavelengths of 850 nm or 940 nm. When sorting recyclable material, wavelengths from 1,200 nm to 1,700 nm are used to identify the different material types. In this range, however, IR LEDs cannot yet match classic halogen lamps with appropriate filters in terms of radiant output.

KEEP COOL


The compact design of LEDs enables a very short warm-up phase, but it also presupposes good thermal dissipation in order to maintain appropriate operating temperatures. As a rule: the better the cooling, the longer the LED lifetime. Apart from lifetime, LED temperature also influences spectral behavior (possible color shift) and overall output (luminance).
In systems where precise color reproduction is required, it is recommended to keep the lighting’s temperature steady at a predetermined value. At present, efficient control systems can regulate the LED temperature to within less than 2°C.
Modern lighting systems, such as the Corona II lighting system developed by CHROMASENS, provide numerous cooling options, including passive cooling with thermal dissipation via convection, compressed-air cooling, water cooling and fan ventilation. Active ventilation, compressed air or water cooling are good choices for measuring applications in surroundings with high temperatures. By monitoring the temperature of the LEDs and regulating the cooling system, shifts in color reproduction can be avoided entirely or at least greatly reduced.
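Such a regulation loop can be as simple as a proportional controller that adjusts fan or coolant power from the measured LED temperature. The sketch below is a generic illustration with assumed setpoint and gain values; it does not represent the actual Corona II control electronics.

```python
def cooling_duty(t_led_c, t_setpoint_c=45.0, gain_per_c=0.10):
    """Proportional controller: map temperature error to cooling power (0..1).

    t_setpoint_c and gain_per_c are assumed tuning values for illustration.
    """
    error = t_led_c - t_setpoint_c
    duty = 0.5 + gain_per_c * error   # 50% duty at setpoint, more when hotter
    return min(1.0, max(0.0, duty))

# Example: as the LED warms past the setpoint, cooling ramps up.
for t in (40.0, 45.0, 47.0, 50.0):
    print(f"{t:.0f} degC -> cooling duty {cooling_duty(t):.0%}")
```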

FOCUS ON THE ESSENTIAL


If a flat object at a known, fixed distance is to be illuminated, selecting the appropriate focus is relatively simple. Selecting the right lighting is more complicated if the object is not at a predetermined distance from the light or does not have a flat surface. In such cases, ensuring permanently sufficient image brightness is a challenge. Here, reflector technology helps to collect the light from an LED (a greater coverage angle of the reflected light) and to distribute it better over depth.
In contrast to backlighting or bright field lighting, focused lighting is normally used for top lighting. Customary lighting systems use rod lenses or Fresnel lenses in order to achieve the necessary lighting intensity. CHROMASENS adopts a completely different approach: while the use of rod lenses causes color deviations due to refraction, the mirror (reflector) principle developed and patented by CHROMASENS has no such problem.
Shiny or reflective materials are a challenge for lighting, because unwanted reflections often appear in the image. In combination with a polarizing filter rotated 90 degrees in front of the camera, these unwanted reflections can be suppressed. When using such filters, certain factors have to be considered. One is the temperature stability of the filter; many polarizing filters can only be used up to a limited temperature. Another criterion is effectiveness: with such a setup, only about 18-20 % of the original amount of light reaches the sensor. The amount of light provided by the lighting must therefore be great enough to minimize noise and still achieve sufficiently good image quality.
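The light budget implied by that 18-20 % figure is easy to quantify. The short sketch below combines it with the one-stop aperture rule from the first section; the arithmetic is purely illustrative.

```python
# Crossed-polarizer setup: only ~18-20 % of the light reaches the sensor,
# so the source must be roughly 5-5.5x brighter for the same photon count.
transmission = 0.18                      # assumed worst case from the text
boost_polarizer = 1 / transmission       # ~5.6x

# Combined with stopping down one stop (f/4 -> f/5.6, factor 2):
boost_total = boost_polarizer * 2
print(f"Polarizer alone: {boost_polarizer:.1f}x more light needed")
print(f"Polarizer + one stop down: {boost_total:.1f}x")
```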

SUMMARY


When selecting the correct lighting for line scan camera applications, the following factors should be considered:
  •  The lens aperture and the amount of light significantly influence the signal-to-noise ratio
  •  LED systems offer definite advantages compared to traditional lighting technologies such as halogen or fluorescent lamps
  •  Good cooling ensures a long service life, consistent spectral behavior and a high level of brightness
  •  The use of reflectors assures optimal lighting, even at varying distances
  •  Color LEDs, UV LEDs and IR LEDs are extremely versatile
  •  Polarizing filters prevent unwanted reflections on shiny surfaces; the amount of light provided by the lighting must still be sufficient


TO KNOW MORE ABOUT INDUSTRIAL MACHINE VISION SYSTEMS CONTACT MENZEL VISION AND ROBOTICS PVT LTD AT (+ 91) 22 67993158 OR EMAIL US AT INFO@MVRPL.COM


Source - CHROMASENS.DE


Tuesday, 13 February 2018

COAXIAL BRIGHTFIELD LIGHT FOR 3DPIXA APPLICATIONS


Choosing the right illumination for the application is critical for acquiring the high-quality images needed for calculating 3D data. We compare the imaging results of a directional coaxial brightfield illumination with a Corona tube light in terms of color image quality and height map for different samples. It can be shown that for materials that exhibit considerable subsurface scattering, a coaxial lighting geometry benefits 3D measurement with the 3DPIXA. In practice, it has to be kept in mind that introducing the beam splitter into the light path shifts the working distance of the camera system and slightly reduces image quality.

1. INTRODUCTION


An illumination scheme in which the source rays are reflected from a flat sample directly into the camera is called a brightfield illumination. With line scan cameras there are two possible ways to realize such a setup: either by tilting the camera and the light source to equal but opposite angles with respect to the surface normal, or by using a beam splitter. The first method is not recommended, as it can lead to occlusion and keystone effects. We therefore discuss the brightfield setup using a beam splitter.
Figure 1 shows the principle of this setup in comparison to a setup with a tubelight. The tubelight is the superior illumination choice for a wide array of possible applications. It reduces the intensity of specular reflections and evenly illuminates curved glossy materials. Most of the time the tubelight should be your first choice and only some materials require the use of a coaxial brightfield illumination.
One such example is material that exhibits strong subsurface scattering: light beams partially penetrate the material, are scattered multiple times inside it, and then exit at a different location, possibly in a different direction. The result is a translucent material appearance. Examples of such materials are marble, skin, wax and some plastics.
Using tube light on such materials results in a very homogeneous appearance with little texture, which is problematic for 3D reconstruction. Using coaxial brightfield illumination results in relatively more direct reflection from the surface to the camera, as compared to a tube light illumination. This first surface reflection contributes to the image texture; the relative amount of sub-surface scattered light entering the camera is thereby reduced.
There are some specific properties that have to be taken into consideration when using a coaxial setup with a 3DPIXA. Firstly, at most 25% of the source intensity can reach the camera: with a 50/50 beam splitter, half the light is lost at each of the two transits, and the rest is directed elsewhere. Secondly, the glass is an active optical element that influences the imaging and 3D calculation quality. In chapter 3 we take a closer look at these factors and offer some guidelines for mechanical system design to account for the resulting effects. Before that, we discuss the effects of brightfield illumination on a selection of samples to give an idea of when this type of illumination should be used.

2. COMPARING BRIGHTFIELD AND TUBELIGHT ILLUMINATION


In this chapter we want to give you some impression of the differences between a coaxial illumination and a tubelight using different samples. As tubelight we used the CHROMASENS CORONA II Tube light (CP000200-xxxT), and for the brightfield we used a CORONA II Top light (CP000200-xxxB) with diffuser glass together with a beam splitter made from 1.1 mm “borofloat” glass.
In figure 2 we show a scanned image of a candle made of paraffin, a material that exhibits strong subsurface scattering. With coaxial illumination (right image) the surface texture is clearly visible, and the height image shows the slightly curved shape of the candle. In comparison, the tube light image (left) contains very little texture, and height information could not be recovered for most of the surface (black false-colored region). The texture is only visible with coaxial illumination because under this condition the light reflected from the surface dominates the final image over the subsurface-scattered light. However, the ratio between these two effects varies with surface inclination: the more the surface normal deviates from the camera observation angle, the less light is reflected directly from the first surface, and the weaker the image texture becomes. For the candle sample, deviations of more than 15° resulted in a failure to recover height information, as can be seen at the outer edges of the candle in the right image.
Figure 3 shows a similar comparison for a sample with balls on a substrate. The substrate area in the tube light image (left) shows low texture, resulting in partially poor height reconstruction (black points in the false-colored image overlay). With coaxial illumination (right image), the amount of source light reflected back into the camera from the surface of the material is larger than the subsurface-scattered light. The image texture is higher and the height reconstruction performance improves.
However, if the application is to measure the height of the balls rather than to inspect the substrate, the situation becomes more complex, as the coaxial illumination produces specular reflections on the tops of the balls. If these areas are saturated, height measurements are negatively affected as well.
The best illumination therefore strongly depends on the measurement task and materials used and can often only be determined by testing. If you are unclear which light source is best for your application, please feel free to contact our sales personnel to discuss options and potentially arrange for initial testing with your samples at our lab.

3. OPTICAL INFLUENCE


The beam splitter is essentially a plane-parallel glass plate which offsets each ray passing through it without changing its direction. The size of this offset depends on the angle of incidence, the thickness of the glass and its refractive index. The beam splitter should therefore be no thicker than stability requires. In the following analysis we assume a beam splitter of d = 1.1 mm “borofloat” glass.
The effect of the beam splitter is to shift the point from which the sharpest image can be acquired in all three spatial coordinates. The change along the sensor direction (called the x-direction) leads to a magnification change of the imaging system that is negligibly small (<0.4%, with a small dependence on camera type).
The change along the scan direction (called the y-direction) only offsets the starting point of the image. If the exact location of the scan line is important (e.g. when looking at a roller), the camera needs to be displaced relative to the intended scan line by
Δy = d*(0.30n – 0.12).
The equation is valid for all glass thicknesses d and is a linear approximation of the real dependency on n, where n is the refractive index of the glass material introduced into the light path. The approximation is valid in the interval of n= [1.4, 1.7] and for all types of 3DPIXAs. The direction of the displacement is towards the end of the beam splitter that is nearer to the sample, so in the scheme in figure 1 the camera has to be moved to the left.
The change of the working distance is different along the x- and y-axis of the system because of the 45° tilt of the beam splitter leading to astigmatism. In y-direction the working distance is increased by
Δzy = d*(0.24n + 0.23).
As above, the formula is valid for all d and for n = [1.4, 1.7]. The change of the working distance along the x-direction is not constant but varies with the position of the imaged point, which leads to field curvature. Both astigmatism and field curvature slightly lower the image quality, which influences the imaging of structures near the resolution limit. However, they should not influence the 3D algorithm, as generally only height structures that are several pixels in size can be computed.
In addition to the optical effects discussed above, the beam splitter also changes the absolute height values computed by the 3D algorithm (i.e. the absolute distance to the camera). The exact value of this height change is slightly different for each camera. Generally, the measured distance between camera and sample decreases, so that structures appear nearer to the camera than they really are. This change is constant over the whole height range (simulations show a 0.2% change) and also constant over the whole field of view. In summary, relative height measurements are not influenced at all, and absolute measurements are shifted by a constant offset.
As the precise change of the calculated height is not known, the zero plane of the height map can’t be used to adjust the camera to the correct working distance. We advise you instead to set up your camera using the free working distance given in the data sheet and to correct it with Δzy from above.
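The two approximation formulas are easy to evaluate for a concrete design. The sketch below applies them to the 1.1 mm borofloat beam splitter used here; the refractive index n ≈ 1.47 is an assumed typical value for borosilicate glass, not a quoted specification.

```python
def beam_splitter_offsets(d_mm, n):
    """Linear approximations from the text, valid for n in [1.4, 1.7]."""
    dy  = d_mm * (0.30 * n - 0.12)   # scan-line displacement (y-direction)
    dzy = d_mm * (0.24 * n + 0.23)   # working-distance increase (y-direction)
    return dy, dzy

dy, dzy = beam_splitter_offsets(d_mm=1.1, n=1.47)  # n assumed for borofloat
print(f"Scan line shifts by {dy:.2f} mm towards the splitter edge nearer the sample")
print(f"Free working distance increases by {dzy:.2f} mm")
```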

4. SUMMARY


On certain translucent materials (those exhibiting considerable subsurface scattering of light), using coaxial illumination can result in a significant increase in image texture, which greatly benefits 3D height reconstruction. However, the additional glass of the beam splitter in the optical path of the camera system negatively influences the optical quality. Furthermore, the working distance of the system changes slightly, and the absolute measured distances are offset by a constant value. This does not affect relative measurements, which are generally recommended with the 3DPIXA.




TO KNOW MORE ABOUT INDUSTRIAL MACHINE VISION SYSTEMS IN MUMBAI CONTACT MENZEL VISION AND ROBOTICS PVT LTD AT (+ 91) 22 67993158 OR EMAIL US AT INFO@MVRPL.COM



Source - CHROMASENS.DE

Tuesday, 7 November 2017

WHAT ARE VISION INSPECTION SYSTEMS?


VISION INSPECTION SYSTEMS (sometimes referred to as machine vision systems) provide automated, image-based inspection for a variety of industrial and manufacturing applications. Though not a new technology, 2D and 3D machine vision systems are now commonly used for automated inspection, robot guidance, quality control, sorting, and much more.

WHAT VISION INSPECTION SYSTEMS CAN DO

These intelligent inspection systems come equipped with one or more cameras, and sometimes with video and lighting as well. Vision systems are capable of measuring parts, verifying that parts are in the correct position, and recognizing the shape of parts. Vision systems can also measure and sort parts at high speeds. Computer software processes the captured images to extract data about the process you are trying to assess. The vision system can be intelligent enough to make decisions that affect that process, often in a pass/fail capacity that triggers an operator to act. These systems can be embedded into your lines to provide a constant stream of information.

APPLICATIONS FOR VISION INSPECTION SYSTEMS

VISION INSPECTION SYSTEMS can be used in any number of industries in which quality control is necessary. For example, vision systems can assist robotic systems to obtain the positioning of parts to further automate and streamline the manufacturing process. Data collected by a vision system can help improve efficiency in manufacturing lines, sorting, packing and other applications. In addition, the information captured by the vision system can identify problems with the manufacturing line or other function you are examining in an effort to improve efficiency, stop inefficient or ineffective processes, and identify unacceptable products.

INDUSTRIES USING VISION SYSTEMS FOR INSPECTION

Because vision inspection systems combine various technologies, the design of these systems can be customized to meet the needs of many industries. Thus, many companies enjoy the use of this technology for quality control purposes, and even security purposes. Industries using vision inspection systems include automation, robotics, pharmaceuticals, packaging, automotive, food and beverage, semiconductors, life sciences, medical imaging, electronics, consumer goods among other kinds of manufacturing and non-manufacturing companies.

BENEFITS OF VISION INSPECTION SYSTEMS

Overall, the benefits of VISION INSPECTION SYSTEMS include, but are not limited to, production improvements, increased uptime, and reduced expenses. Vision systems allow companies to conduct 100% inspection of parts for quality control purposes. This ensures that all products will meet the customers’ specifications. If you want to improve the quality and efficiency of your operation, a vision inspection system could be the answer for you.






TO KNOW MORE ABOUT VISUAL INSPECTION SYSTEM IN INDIA, CONTACT MENZEL VISION AND ROBOTICS PVT LTD AT (+ 91) 22 67993158 OR EMAIL US AT INFO@MVRPL.COM


Tuesday, 24 October 2017

VISION INSPECTION SYSTEMS: WHAT TO KNOW BEFORE IMPLEMENTATION




VISION INSPECTION SYSTEMS are popular in the industrial sector because of their accuracy, repeatability and efficiency. They provide numerous advantages over human inspection of parts during production.
However, vision inspection systems are complicated systems with many variables, and they need to be implemented correctly in order to realize their long-term benefits.
So what do you need to know before implementation to realize the full benefits of VISION INSPECTION SYSTEMS?

KNOW YOUR EQUIPMENT AND ENVIRONMENT

Implementation of vision inspection systems will often involve integration with existing production equipment and processes, so it’s important to understand how your cameras will fit in with this equipment and the production environment.
Will the integration involve conveyors, product rejection mechanisms, pick and place robotics or rugged environmental factors like extreme heat or low light?
It may take mechanical engineering, robotics and programming experts to figure out exactly how your vision inspection system will fit into existing production environments.

START TO NARROW DOWN CAMERAS FOR VISION INSPECTION SYSTEMS

There are a lot of vision systems on the market that are suitable for a variety of inspection applications. Trying to narrow them down can seem like a daunting task, but to start, you can ask yourself a simple question: do we need a single sensor camera system (smart camera) or multiple sensor camera system (multi-camera vision system)?
For production lines with fewer inspection points, where inspection data does not need heavy processing, smart cameras may be a wise choice, as they’re self-contained and easily programmed to perform a specific task.
On the other hand, for production lines with dozens of inspection points, especially where centralized inspection data could be useful, a multi-camera vision system would be most beneficial.
There are many other considerations to take into account, but starting with the question of single or multi sensor camera systems is a good start for narrowing down which type of vision system would be best for you.
There's a lot to understand about your application and the pros and cons of various vision inspection systems before implementation. Taking into account the tips above, you can increase your chances of successful implementation and see the full benefits of vision inspection systems for years to come.



TO KNOW MORE ABOUT MACHINE VISION SYSTEM IN MUMBAI INDIA CONTACT MENZEL VISION AND ROBOTICS PVT LTD AT (+ 91) 22 67993158 OR EMAIL US AT INFO@MVRPL.COM


Thursday, 28 September 2017

MACHINE VISION KEEPS AN EYE ON FACIAL RECOGNITION



While privacy concerns have been a factor for years, it turns out that if you put a useful application in front of the machine vision algorithm — i.e., you make it fun — everyone’s happy. For example, a Russian music festival used a facial recognition algorithm to supply attendees with photos of themselves from the event, while a firm in Singapore is developing a transport ticketing system that uses voluntary facial recognition to charge commuters as they pass through fare gates.
It helps that consumers have face detection technology in the palm of their hands. Mobile applications such as FaceLock scan a user’s face in order to unlock apps on their smartphone or tablet. Furthermore, a recent patent filed by Apple suggests that the next generation iPhone will have “enhanced face detection using depth information.” Users also are relying on facial recognition for critical tasks such as mobile banking and commerce.
The projected growth of facial recognition and other biometrics usage reflects these trends. Facial recognition market size is estimated to rise from $3.3 billion in 2016 to $6.84 billion in 2021. Analysts attribute the growth to an expanding surveillance market, increasing government deployment, and other applications in identity management.
The machine vision industry is starting to find ways to capitalize on the growth opportunities in facial recognition, whether it’s a camera calibrated to work in low light or a mobile app that helps police officers catch suspects. But the technology needs to overcome a few hiccups first.
To Redact and Serve
Suspect Technologies, a startup in Cambridge, Massachusetts, has developed advanced facial recognition algorithms, but for two very different purposes within law enforcement. One use addresses the privacy considerations around body cameras worn by police officers. The most frequently cited goal of body worn video (BWV) is to improve law enforcement accountability and transparency. When someone files a Freedom of Information Act request to acquire one of these videos, law enforcement agencies must promptly comply.
But they can’t do that without first blurring the identities of victims, minors, and innocent bystanders, which typically has been a slow, tedious process restricted to video specialists. Suspect Technologies’ automated video redaction (AVR) software, available on cameras manufactured by VIEVU, is optimized for the real-world conditions of BWV — most notably high movement and low lighting. The technology, which can track multiple objects simultaneously, features a simple interface that allows users to add or adjust redacted objects. AVR reduces the time it takes to redact video footage by tenfold over existing methods.
Unlike AVR, which covers up identities, Suspect Technologies is rolling out a mobile facial recognition app to identify suspects. “As it stands now, there’s no simple way for law enforcement to tell if someone is a wanted criminal,” says Jacob Sniff, CEO and CTO of Suspect Technologies.
Compatible with iPhone and Android devices, the company’s cloud-based watchlist recognition software has been tested on 10 million faces. The algorithm takes advantage of better facial recognition accuracy, which increases tenfold every four years. “Our goal is to be 100% accurate on the order of 10,000 identities,” Sniff says.
Suspect Technologies will start by customizing the product for regional law enforcement agencies in midsized towns, which typically have about 100 wanted felons. The company also plans to introduce its software to schools and businesses for attendance-oriented applications. 
Cameras That Recognize
On the hardware side, the specifications of a facial recognition application drive machine vision camera selection. “Monochrome cameras offer better sensitivity to light, so they are ideal in low-light conditions indoors and outdoors,” says Mike Fussell, product marketing manager of the integrated imaging division at FLIR Systems, Inc. (Wilsonville, Oregon). “If someone is strongly backlit or shadowed, cameras with the latest generation of high-performance CMOS sensors really shine in those difficult situations.”
For customers seeking better performance in low light, FLIR offers higher-end sensors with high frame rates and global shutter. All pixels are read out at the same time, eliminating the distortion caused by the rolling-shutter readout found on less expensive sensors, Fussell says. Rolling-shutter cameras show distortion caused by movement of the subject relative to the shutter movement, but they present a lower-cost alternative for low-light conditions.
Most cameras used in facial recognition are in the 3–5 MP range, according to Fussell. But in an application like a passport kiosk, where all of the variables are controlled, a lower-resolution camera is suitable. FLIR also offers stereo vision products that customers calibrate for optical tracking, which measures eye movement relative to the head. Some companies are taking the concept of facial recognition to the next level with gait analysis, the study of human motion. “In a building automation application, where you want to learn people’s habits, you could track their gait to turn lights on and off or have elevators waiting in advance for them,” Fussell says.
Facing Obstacles Head-on
For all its potential, facial recognition technology must address fundamental challenges before an algorithm reaches a camera or mobile device. According to one study, face recognition systems are 5–10 percent less accurate when trying to identify African Americans compared to white subjects. What’s more, female subjects were more difficult to recognize than males, and younger subjects were more difficult to identify than adults. 
As such, algorithm developers must focus more on the content and quality of the training data so that data sets are evenly distributed across demographics. Testing the face recognition system, a service currently offered by the National Institute of Standards and Technology (NIST), can improve accuracy. 
Once the algorithm reaches the camera, facial recognition’s accuracy depends on the number and quality of photos in the comparison database. And even though most facial recognition technology is automated, most systems require human examination to make the final match. Without specialized training, human reviewers make the wrong decision about a match half the time.
The machine vision industry, however, is no stranger to waiting for a technology to mature. Once facial recognition does that, camera makers and software vendors will be ready to supply the equipment and services for secure, accurate identity verification.




TO KNOW MORE ABOUT MACHINE VISION SYSTEM, CONTACT MENZEL VISION AND ROBOTICS PVT LTD AT (+ 91) 22 67993158 OR EMAIL US AT INFO@MVRPL.COM




Wednesday, 12 July 2017

WHAT IS EMBEDDED VISION



In recent years, a miniaturization trend has been established in many areas of electronics. For example, ICs have become more and more integrated and circuit boards in the electrical industry have become smaller and more powerful. This has also made PCs, mobile phones and cameras more and more compact and powerful. This trend can also be observed in the world of vision technology.
A classic machine vision system consists of an industrial camera and a PC: both were significantly larger a few years ago. But within a short time, smaller and smaller PCs became possible, and in the meantime the industry saw the introduction of single-board computers, i.e. complete computers implemented on a single board. At the same time, camera electronics became more compact and cameras successively smaller. On the way to even higher integration, small cameras without housings are now offered, which can be easily integrated into compact systems.

Due to these two developments, the reduction in size of the PC and the camera, it is now possible to design highly compact camera vision systems for new applications. Such systems are called embedded (vision) systems.

Design and use of an embedded vision system

An embedded vision system consists, for example, of a camera, a so-called board level camera, which is connected to a processing board. Processing boards take over the tasks of the PC from the classic machine vision setup. As processing boards are much cheaper than classic industrial PCs, vision systems can become smaller and also more cost-effective. The interfaces for embedded vision systems are primarily USB or Basler BCON for LVDS.


Embedded vision systems are used in a wide range of applications and devices, such as in medical technology, in vehicles, in industry and in consumer electronics. Embedded systems enable new products to be created and thereby create innovative possibilities in several areas.

Which embedded systems are available?

As embedded systems, there are popular single-board computers (SBCs), such as the Raspberry Pi®. The Raspberry Pi® is a mini-computer with established interfaces and offers a similar range of features as a classic PC or laptop.

Embedded vision solutions can also be implemented with so-called system on modules (SoM) or computer on modules (CoM). These modules represent a computing unit. For the adaptation of the desired interfaces to the respective application, a so-called individual carrier board is needed. This is connected to the SoM via specific connectors and can be designed and manufactured relatively simply. The SoMs or CoMs (or the entire system) are cost-effective on the one hand since they are available off-the-shelf, while on the other hand they can also be individually customized through the carrier board.

For large production quantities, individually developed processing boards are a good option.

All modules, single-board computers, and SoMs, are based on a system on chip (SoC). This is a component on which the processor(s), controllers, memory modules, power management and other components are integrated on a single chip.

It is thanks to these efficient components, the SoCs, that embedded vision systems have become available in such small sizes and at such low cost as today.

Characteristics of embedded vision systems versus standard vision systems

Most of the above-mentioned single-board computers and SoMs do not include the x86 family processors common in standard PCs. Rather, the CPUs are often based on the ARM architecture. 

The open-source Linux operating system is widely used as an operating system in the world of ARM processors. For Linux, there is a large number of open-source application programs, as well as numerous freely-available program libraries.

Increasingly, however, x86-based single-board computers are also spreading.

A consistently important criterion for the computer is the space available for the embedded system.
For the SW developer, program development for an embedded system differs from that for a standard PC. As a rule, the target system does not provide a user interface suitable for programming. The SW developer must either connect to the embedded system via an appropriate interface, if available (e.g. a network interface), or develop the SW on a standard PC and then transfer it to the target system.

When developing the SW, it should be noted that the HW concept of the embedded system is oriented to a specific application and thus differs significantly from the universally usable PC.

However, the boundary between embedded and desktop computer systems is sometimes difficult to define. Just think of the mobile phone, which on the one hand has many features of an embedded system (ARM-based, single-board construction), but on the other hand can cope with very different tasks and is therefore a universal computer.

What are the benefits of embedded vision systems?

In some cases, much depends on how the embedded vision system is designed. A single-board computer is often a good choice as this is a standard product. It is a small compact computer that is easy to use. This solution is also useful for developers who have had little to do with embedded vision. 

On the other hand, however, the single-board computer is a system which contains unused components and thus generally does not allow the leanest system configuration. This solution is suitable for small to medium quantities. The leanest setup is obtained through a customized system. Here, however, higher integration effort is a factor. This solution is therefore suitable for large unit numbers.

The benefits of embedded vision systems at a glance:
  • Lean system design
  • Light weight
  • Cost-effective, because there is no unnecessary hardware
  • Lower manufacturing costs
  • Lower energy consumption
  • Small footprint

Which interfaces are suitable for an embedded vision application?

Embedded vision is the technology of choice for many applications. Accordingly, the design requirements are widely diversified. Depending on the specification, Basler offers a variety of cameras with different sensors, resolutions and interfaces.

The two interface technologies that Basler offers for embedded vision systems in the portfolio are:
  • USB3 Vision for easy integration and
  • Basler BCON for LVDS for a lean system design
Both technologies work with the same Basler pylon SDK, making it easier to switch from one interface technology to the other.

USB3 Vision

USB 3.0 is the right interface for a simple plug and play camera connection and ideal for camera connections to single-board computers. The Basler pylon SDK gives you easy access to the camera within seconds (for example, images and settings), since USB 3.0 cameras are standard-compliant and GenICam compatible.
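As a sketch of how little code such access requires, here is a minimal grab-and-configure example using Basler's open-source pypylon wrapper for the pylon SDK. The exact GenICam node names (e.g. ExposureTime, Gain) follow the SFNC naming used by current Basler USB3 Vision cameras but can vary by model, so treat this as an illustrative outline rather than a definitive recipe.

```python
from pypylon import pylon

# Open the first camera found by the pylon transport layer (e.g. a USB3 camera).
camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

# Configure exposure and gain via GenICam nodes (names assumed per SFNC).
camera.ExposureTime.SetValue(5000.0)   # microseconds
camera.Gain.SetValue(0.0)

# Grab a single frame.
camera.StartGrabbingMax(1)
result = camera.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
if result.GrabSucceeded():
    frame = result.Array               # image data as a numpy array
    print("Grabbed frame with shape", frame.shape)
result.Release()
camera.Close()
```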

Benefits
  • Easy connection to single-board computers with USB 2.0 or USB 3.0 connection
  • Field-tested solutions with Raspberry Pi®, NVIDIA Jetson TK1 and many other systems
  • Profitable solutions for SoMs with associated base boards
  •  Stable data transfer with a bandwidth of up to 350 MB/s (see the rough throughput estimate below)
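What such a bandwidth figure means in practice can be estimated with simple arithmetic. The numbers below (resolution, 8-bit mono, zero protocol overhead) are illustrative assumptions, not camera specifications:

```python
# Rough frame-rate ceiling from interface bandwidth (no protocol overhead).
bandwidth_bytes_s = 350e6                      # USB3 Vision figure quoted above
width, height, bytes_per_px = 1920, 1080, 1    # assumed 8-bit mono stream
frame_bytes = width * height * bytes_per_px
print(f"~{bandwidth_bytes_s / frame_bytes:.0f} fps max")  # about 169 fps
```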

BCON for LVDS

BCON, Basler's proprietary LVDS-based interface, allows a direct camera connection with processing boards and thus also to on-board logic modules such as FPGAs (field programmable gate arrays) or comparable components. This allows a lean system design to be achieved, and you benefit from a direct board-to-board connection and data transfer.

The interface is therefore ideal for connecting to a SoM on a carrier / adapter board or with an individually-developed processor unit.

If your system is FPGA-based, you can fully use its advantages with the BCON interface.
BCON is designed with a 28-pin ZIF connector for flat flex cables. It carries the 5 V power supply together with the LVDS lanes for image data transfer and image triggering. You can configure the camera via lanes that work with the I²C standard.

Basler's pylon SDK is tailored to work with the BCON for LVDS interface. Therefore, it is easy to change settings such as exposure control, gain, and image properties using your software code and the pylon API. The image acquisition part of the application must be implemented individually, as it depends on the hardware used.

Benefits
  • Image processing directly on the camera. This results in the highest image quality, without compromising the very limited resources of the downstream processing board.
  • Direct connection via LVDS-based image data exchange to FPGA
  • With the pylon SDK, camera configuration is possible via the standard I²C bus without further programming. Compatibility with the GenICam standard is ensured.
  • The image data software protocol is openly and comprehensively documented
  • Development kit with reference implementation available
  • Flexible flat flex cable and small connector for applications with maximum space limitations
  • Stable, reliable data transfer with a bandwidth of up to 252 MB/s

How can an embedded vision system be developed and how can the camera be integrated?

Even for developers who have not yet had much to do with embedded vision, there are many ways to develop an embedded vision system. In particular, the switch from a standard machine vision system to an embedded vision system can be made easy. In addition to its embedded product portfolio, Basler offers many tools that simplify integration.

Find out how you can develop an embedded vision system and how easy it is to integrate a camera in our simpleshow video.

Machine learning in embedded vision applications

Embedded vision systems often have the task of classifying images captured by the camera: on a conveyor belt, for example, into round and square biscuits. In the past, software developers spent a lot of time and energy developing intelligent algorithms designed to classify a biscuit as type A (round) or type B (square) based on its characteristics (features). In this example this may sound relatively simple, but the more complex the features of an object, the more difficult classification becomes.

Machine learning algorithms (e.g., Convolutional Neural Networks, CNNs), however, do not require any hand-crafted features as input. If the algorithm is presented with large numbers of images of round and square biscuits, together with the information about which image shows which variety, it automatically learns how to distinguish the two types of biscuits. If the algorithm is then shown a new, unknown image, it decides on one of the two varieties based on its "experience" of the images already seen. These algorithms run particularly fast on graphics processing units (GPUs) and FPGAs.
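To make the biscuit example concrete, here is a minimal CNN sketch in PyTorch. The architecture, input size and class labels are illustrative assumptions, not a Basler reference design; the network would of course need to be trained on labeled biscuit images before its predictions mean anything.

```python
import torch
import torch.nn as nn

class BiscuitNet(nn.Module):
    """Tiny CNN mapping a 64x64 grayscale image to 2 classes (round/square)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 16 * 16, 2)  # 16 channels x 16 x 16 after pooling

    def forward(self, x):                 # x: (batch, 1, 64, 64)
        return self.classifier(self.features(x).flatten(1))

model = BiscuitNet()
dummy_frames = torch.randn(4, 1, 64, 64)          # 4 stand-in camera frames
predictions = model(dummy_frames).argmax(dim=1)   # 0 = round, 1 = square (by convention)
print(predictions)
```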




To Know More About Basler Camera Distributor in India, Contact Menzel Vision and Robotics Pvt Ltd at (+ 91) 22 67993158 or Email us at info@mvrpl.com


 

Contact Details



Address: 4, A-Wing, Bezzola Complex,
Sion Trombay Road, Chembur

400071 Mumbai, India
Tel: (+91) 22 67993158
Fax: (+91) 22 67993159
Mobile: +91 9323786005 / 9820143131
E-mail: info@mvrpl.com