Thursday, 18 April 2019

THE FUTURE OF INDUSTRIAL AUTOMATION


APRIL, 2019
The global recession that followed the turn of the century has affected most businesses, including industrial automation. Four years into the new millennium, here are my views on the directions in which the automation industry is moving.

THE REAR-VIEW MIRROR

Because of the relatively small production volumes and huge varieties of applications, industrial automation typically utilizes new technologies developed in other markets. Automation companies tend to customize products for specific applications and requirements. So the innovation comes from targeted applications, rather than any hot, new technology.

Over the past few decades, some innovations have indeed given industrial automation new surges of growth: The programmable logic controller (PLC) – developed by Dick Morley and others – was designed to replace relay-logic; it generated growth in applications where custom logic was difficult to implement and change. The PLC was a lot more reliable than relay-contacts, and much easier to program and reprogram. Growth was rapid in automobile test-installations, which had to be re-programmed often for new car models. The PLC has had a long and productive life – some three decades – and (understandably) has now become a commodity.

At about the same time that the PLC was developed, another surge of innovation came through the use of computers for control systems. Mini-computers replaced large central mainframes in central control rooms, and gave rise to "distributed" control systems (DCS), pioneered by Honeywell with its TDC 2000. But, these were not really "distributed" because they were still relatively large clumps of computer hardware and cabinets filled with I/O connections.

The arrival of the PC brought low-cost PC-based hardware and software, which provided DCS functionality with significantly reduced cost and complexity. There was no fundamental technology innovation here—rather, these were innovative extensions of technology developed for other mass markets, modified and adapted for industrial automation requirements.

On the sensor side, there were indeed some significant innovations and developments that generated good growth for specific companies. With better specifications and good marketing, Rosemount's differential pressure flow-sensor quickly displaced lesser products. And there were a host of other smaller technology developments that caused pockets of growth for some companies. But few grew beyond a few hundred million dollars in annual revenue.

Automation software has had its day, and can't go much further. No "inflection point" here. In the future, software will embed within products and systems, with no major independent innovation on the horizon. The plethora of manufacturing software solutions and services will yield significant results, but all as part of other systems.

So, in general, innovation and technology can and will reestablish growth in industrial automation. But, there won't be any technology innovations that will generate the next Cisco or Apple or Microsoft.

We cannot figure out future trends merely by extending past trends; it’s like trying to drive by looking only at a rear-view mirror. The automation industry does NOT extrapolate to smaller and cheaper PLCs, DCSs, and supervisory control and data acquisition systems; those functions will simply be embedded in hardware and software. Instead, future growth will come from totally new directions.

NEW TECHNOLOGY DIRECTIONS

Industrial automation can and will generate explosive growth with technology related to new inflection points: nanotechnology and nanoscale assembly systems; MEMS and nanotech sensors (tiny, low-power, low-cost sensors) that can measure anything and everything; and the pervasive Internet with machine-to-machine (M2M) networking.

Real-time systems will give way to complex adaptive systems and multi-processing. The future belongs to nanotech, wireless everything, and complex adaptive systems.

Major new software applications will be in wireless sensors and distributed peer-to-peer networks – tiny operating systems in wireless sensor nodes, and the software that allows nodes to communicate with each other as a larger complex adaptive system. That is the wave of the future.

THE FULLY-AUTOMATED FACTORY

Automated factories and processes are too expensive to be rebuilt for every modification and design change – so they have to be highly configurable and flexible. To successfully reconfigure an entire production line or process requires direct access to most of its control elements – switches, valves, motors and drives – down to a fine level of detail.

The vision of fully automated factories has already existed for some time now: customers order online, with electronic transactions that negotiate batch size (in some cases as low as one), price, size and color; intelligent robots and sophisticated machines smoothly and rapidly fabricate a variety of customized products on demand.

The promise of remote-controlled automation is finally making headway in manufacturing settings and maintenance applications. The decades-old machine-based vision of automation – powerful super-robots without people to tend them – underestimated the importance of communications. Today, this is purely a matter of networked intelligence, which is now well developed and widely available.

Communications support of a very high order is now available for automated processes: lots of sensors, very fast networks, quality diagnostic software and flexible interfaces – all with high levels of reliability and pervasive access to hierarchical diagnosis and error-correction advisories through centralized operations.

The large, centralized production plant is a thing of the past. The factory of the future will be small and movable, sited where the resources and the customers are. For example, there is really no need to transport raw materials long distances to a plant for processing, and then transport the resulting product long distances to the consumer. In the old days, this was done because of localized know-how and investments in equipment, technology and personnel. Today, those things are available globally.

HARD TRUTHS ABOUT GLOBALIZATION

The assumption has always been that the US and other industrialized nations will keep leading in knowledge-intensive industries while developing nations focus on lower skills and lower labor costs. That's now changed. The impact of the wholesale entry of 2.5 billion people (China and India) into the global economy will bring big new challenges and amazing opportunities.

Beyond just labor, many businesses (including major automation companies) are also outsourcing knowledge work such as design and engineering services. This trend has already become significant, causing joblessness not only for manufacturing labor, but also for traditionally high-paying engineering positions.

Innovation is the true source of value, and that is in danger of being dissipated – sacrificed to a short-term search for profit, the capitalistic quarterly profits syndrome. Countries like Japan and Germany will tend to benefit from their longer-term business perspectives. But, significant competition is coming from many rapidly developing countries with expanding technology prowess. So, marketing speed and business agility will be offsetting advantages.

THE WINNING DIFFERENCES

In a global market, there are three keys that constitute the winning edge:
  • 1. Proprietary products: developed quickly and inexpensively (and perhaps globally), with a continuous stream of upgrades and adaptations to maintain leadership.
  • 2. High-value-added products: proprietary products and knowledge offered through effective global service providers, tailored to specific customer needs.
  • 3. Global yet local services: the special needs and custom requirements of remote customers must be handled locally, giving them the feeling of partnership and proximity.
Implementing these directions demands management and leadership abilities that are different from old, financially driven models. In the global economy, automation companies have little choice – they must find more ways and means to expand globally. To do this they need to minimize the domination of central corporate cultures and maximize responsiveness to local customer needs. Multicultural countries, like the U.S., will have significant advantages in these important business aspects.

In the new and different business environment of the 21st century, the companies that can adapt, innovate and utilize global resources will generate significant growth and success.



TO KNOW MORE ABOUT ALLIED VISION INDUSTRIAL AUTOMATION CONTACT MENZEL VISION AND ROBOTICS PVT LTD CONTACT US AT (+ 91) 22 67993158 OR EMAIL US AT INFO@MVRPL.COM


Thursday, 21 February 2019

THE INFLUENCE OF TEMPERATURE ON IMAGE QUALITY IN NIR / SWIR CAMERAS


Within the electromagnetic spectrum, infrared radiation is located between visible light and microwaves. It covers a wide range from 750 nm to 14,000 nm in wavelength. It is common to separate it into near-infrared (NIR), short-wave infrared (SWIR), mid-wave infrared (MWIR), and long-wave infrared (LWIR). Although infrared radiation in the SWIR region is not visible to the human eye, it interacts with objects in a similar manner as visible wavelengths.

Most SWIR camera sensors are based on InGaAs material and work like silicon-based CCD or CMOS sensors by converting photons into electrons – so-called quantum detectors. InGaAs sensors are made of indium gallium arsenide photodiodes and silicon-based read-out integrated circuits (ROICs).

The band gap energy is typically smaller in InGaAs photodiodes than in silicon-based pixels. As a result, the dark current is higher at similar temperatures.

INFLUENCE OF DARK CURRENT ON IMAGE QUALITY


Dark current is current that flows (i.e., produces a signal) even if no light is hitting the sensor. It is caused by thermal excitation of electrons in the InGaAs material. The absolute value of the dark current can vary considerably from one sensor to another. Dark current increases strongly with sensor temperature; as a rule of thumb, it doubles approximately every 9°C.

For example, when the same picture is taken twice with the same camera at 20°C and 45°C sensor temperature, the histograms show very different results. At 45°C sensor temperature the minimum value is higher (0 being black to 255 being white) as well as the average value. At 20°C there are no saturated pixels (max. value 254) whereas at 45°C there are quite a few.
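The doubling rule of thumb can be sketched numerically. This is a minimal illustration, assuming a clean exponential law with a 9°C doubling constant; the reference current of 1.0 is an arbitrary unit, not measured Goldeye data.

```python
def dark_current(i_ref, temp_c, ref_temp_c=20.0, doubling_c=9.0):
    """Estimate dark current at temp_c, given a reference current i_ref
    (arbitrary units) at ref_temp_c. Rule of thumb: dark current doubles
    roughly every 9 degrees C of sensor temperature."""
    return i_ref * 2.0 ** ((temp_c - ref_temp_c) / doubling_c)

# Relative increase between the two sensor temperatures in the example above:
ratio = dark_current(1.0, 45.0) / dark_current(1.0, 20.0)  # roughly 6.9x
```

A 25°C rise thus raises dark current by almost a factor of seven, which is consistent with the brighter black level and saturated pixels seen in the 45°C histogram.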

TEMPERATURE IMPACT ON SPECTRAL SENSITIVITY


Sensor temperature also has a strong influence on the spectral sensitivity of SWIR cameras. Lowering the sensor temperature by 40°C (from 25°C to -15°C) shifts the spectral sensitivity by about 25 nm towards shorter wavelengths. This can be of great importance for applications operating at the low or high end of the sensitivity curve. When sensitivity above 1,700 nm is required, the sensor temperature should not be too low (i.e., not below 20°C). Even though dark current (and noise) increase with higher temperature, the signal might still be better at the higher end of the spectrum.
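Assuming the shift is roughly linear in temperature (about 25 nm per 40°C, per the figures above), the effect on the cutoff can be estimated in one line; the 1,700 nm baseline below is an illustrative assumption, not a datasheet value.

```python
def cutoff_estimate_nm(baseline_nm, delta_t_c, nm_per_40c=25.0):
    """Estimate the shifted sensitivity cutoff for a temperature change
    delta_t_c (degrees C); cooling (negative delta) shifts sensitivity
    toward shorter wavelengths, warming toward longer ones."""
    return baseline_nm + nm_per_40c * delta_t_c / 40.0

# Cooling from 25 C to -15 C (delta of -40 C) from a 1,700 nm baseline:
shifted = cutoff_estimate_nm(1700.0, -40.0)  # 1675.0 nm
```

Under these assumptions, an application that needs signal at 1,700 nm would lose roughly 25 nm of long-wavelength response when cooled by 40°C.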

SENSOR TEMPERATURE CONTROL IS CRUCIAL


The temperature level influences the spectral sensitivity and the dark current. The dark current has a high impact on image quality (black level and noise). For applications where image quality is important and when operating at the low or high end of the sensitivity curve, temperature control is crucial.

To accurately monitor and control the temperature, all our SWIR Goldeye cameras incorporate three temperature sensors:
  • - inside the InGaAs sensor housing
  • - on the sensor board
  • - on the mainboard
In addition, our cameras provide advanced features to correct and minimize those effects (for example advanced background correction).

To counterbalance the temperature difference between ambient and sensor temperature and to stabilize the set sensor temperature, most of the Goldeye cameras are equipped with a thermo-electric cooling device. Available cooling devices are:
  • - TEC1: single-stage thermo-electric sensor cooling (for example: Goldeye G/CL-033 TEC1)
  • - TEC2: dual-stage thermo-electric sensor cooling (for example: Goldeye G/CL-032 Cool TEC2)
  • - Additional cooling fan (for example: Goldeye G/CL-032 Cool TEC2)
In addition, the Goldeye Cool models enclose the sensor in a nitrogen-filled cooling chamber (for example: Goldeye G/CL-008 Cool TEC1). This protects it from condensation and makes it suitable for environments where condensation is likely to occur (i.e., high humidity or high ambient temperatures).


TO KNOW MORE ABOUT ALLIED VISION INDUSTRIAL CAMERA DISTRIBUTOR IN INDIA CONTACT MENZEL VISION AND ROBOTICS PVT LTD CONTACT US AT (+ 91) 22 67993158 OR EMAIL US AT INFO@MVRPL.COM


Thursday, 24 January 2019

A LOOK AT THE PROGRESSION OF MACHINE VISION TECHNOLOGY OVER THE LAST THREE YEARS


Machine vision represents a diverse and growing global market, one that can be difficult to keep up with in terms of the latest technology, standards, and product developments as they become available from hundreds of different organizations around the world.

If you are looking for an example of how fast the market moves, and how quickly trends and new technologies emerge, our Innovators Awards provide a good reference point. In 2015, we launched our first annual Innovators Awards program, which celebrates the disparate and innovative technologies, products, and systems found in the machine vision and imaging market. In comparing the products that received distinction in 2015 to this past year's crop of honorees, it does not take long to draw some obvious conclusions. Let's start with the most noticeable difference: the cameras that received awards.

In 2015, five companies received awards for cameras. These cameras performed various functions and offered disparate capabilities, including pixel shifting, SWIR sensitivity, multi-line CMOS time delay integration, high-speed operation, and high dynamic range operation. In 2018, 13 companies received awards for their cameras, but the capabilities and features of these cameras look much different.


CAMERAS THAT RECEIVED AWARDS IN 2018 OFFERED THE FOLLOWING FEATURES:

Polarization, 25GigE interface, 8K line scan, scientific CMOS sensor, USB 3.1 interface, fiber interface, embedded VisualApplets software, 3-CMOS prism design, and subminiature design. Like in 2015, a few companies were also honored for high-speed cameras, but overall, it is evident that most of the 2018 camera honorees are offering much different products than those from our inaugural year.
Two other main categories stand out when comparing 2018 with 2015, the first of which is software products. In 2015, two companies received awards for their software—one for a deep learning software product and another for machine learning-based quality control software. In 2018, eight companies received awards for software.

THESE SOFTWARE PRODUCTS OFFERED THE FOLLOWING FEATURES OR CAPABILITIES:

Deep learning (three honorees), data management, GigE Vision simulation, neural network software for autonomous vehicles, machine learning-based desktop software for autonomous vehicle vision system optimization, and a USB3 to 10GigE software converter.

Lastly, the category of embedded vision looked much different in 2018 than it did in 2015. In the embedded vision category—which I am combining with smart cameras due to overlap—there were two companies that received awards in 2015, both of which were for smart cameras that offered various capabilities. This year, however, there were 12 companies that were honored for their embedded vision innovations, for products that offered features including: embedded software running on Raspberry Pi, computer vision and deep learning hardware and software platform, embedded vision development kits, embedded computers, 3D bead inspection, as well as various smart cameras.

Throughout the other categories, there was an equal or similar number of honorees in both years, but several interesting technologies and applications appeared among the 2018 products. These include a lens for virtual reality/augmented reality applications, a mobile hyperspectral camera, a 3D color camera, and various lighting products targeting multispectral and hyperspectral imaging applications.

This is all to say that, looking back from 2015 to today, machine vision technology has grown quite a bit. With the rapid pace of advancements, the growing needs of customers and end users, and the miniaturization and falling cost of components, it is exciting to think about what machine vision products in 2021 might look like.

TO KNOW MORE ABOUT VISION INSPECTION SYSTEMS IN INDIA CONTACT MENZEL VISION AND ROBOTICS PVT LTD CONTACT US AT (+ 91) 22 67993158 OR EMAIL US AT INFO@MVRPL.COM


Monday, 14 January 2019

BITFLOW PREDICTS VISION-GUIDED ROBOTICS TO BECOME MAJOR DISRUPTIVE FORCE IN GLOBAL MANUFACTURING




As the plant floor has become more digitally connected, the relationship between robots and machine vision has merged into a single, seamless platform, setting the stage for a new generation of more responsive vision-driven robotic systems. BitFlow, Inc., a global innovator in frame grabbers used in industrial imaging, predicts vision-guided robots will be one of the most disruptive forces in all areas of manufacturing over the next decade.

"Since the 1960s robots have contributed to automation processes, yet they've done so largely blind," said Donal Waide, Director of Sales for BitFlow, Inc. "Vision-equipped robots are different. Now, just like a human worker, robots can see a specific part to validate whether it is being placed correctly in a pick and place application, for example. Cost savings will be realized since less hard fixturing is required and the robot is more flexible in its ability to locate a variety of different parts with the same hardware."


HOW ROBOTIC VISION WORKS

Using a combination of camera, cables, frame grabber and software, a vision system will identify a part, its orientation and its relationship to the robot. Next, this data is fed to the robot and motion begins, such as pick and place, assembly, screw driving or welding tasks. The vision system will also capture information that would be otherwise very difficult to obtain, including small cosmetic details that let the robot know whether or not the part is acceptable. Error-proofing reduces expensive quality issues with products. Self-maintenance is another benefit. In the event that alignment of a tool is off because of damage or wear, vision can compensate by performing machine offset adjustment checks on a periodic basis while the robot is running.
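The hand-off from vision to motion described above hinges on mapping part locations from camera coordinates into the robot's frame. The sketch below uses a simple 2D rigid transform with made-up calibration values; a real cell would obtain the rotation and translation from a hand-eye calibration routine, and often needs a full 3D transform.

```python
import math

def camera_to_robot(x_cam, y_cam, theta_rad, tx, ty):
    """Map a part location from camera coordinates to robot coordinates
    using a 2D rigid transform: rotate by theta_rad, then translate by
    (tx, ty). The parameters come from a prior hand-eye calibration."""
    x_r = x_cam * math.cos(theta_rad) - y_cam * math.sin(theta_rad) + tx
    y_r = x_cam * math.sin(theta_rad) + y_cam * math.cos(theta_rad) + ty
    return x_r, y_r

# A part seen at (10, 0) in camera space, with a 90-degree rotation and a
# (100, 50) offset between the frames, lands at about (100, 60) in robot space.
px, py = camera_to_robot(10.0, 0.0, math.pi / 2, 100.0, 50.0)
```

The robot controller would then be commanded to move its tool to (px, py) for the pick.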

DUAL MARKET GROWTH

It should come as no surprise that the machine vision and robotics markets are moving in tandem. According to the Association for Advancing Automation (A3), robot sales in North America last year surpassed all previous records. Customers purchased 34,904 total units, representing $1.896 billion in total sales. Meanwhile, total machine vision transactions in North America increased 14.8%, to $2.262 billion. The automotive industry accounts for approximately 50% of total sales.

THE ROLE OF FRAME GRABBERS

Innovations in how vision-guided robots perceive and respond to their environments are exactly what manufacturers are looking for as they develop automation systems to improve quality, productivity and cost efficiencies. These types of advancements rely on frame grabbers being paired with high-resolution cameras to digitize analog video, thus converting the data to a form that can be processed by software.

BitFlow has responded to the demands of the robotics industry by introducing frame grabbers based on the CoaXPress (CXP) machine vision standard, currently the fastest and most powerful interface on the market. In robotics applications, the five-to-seven-meter limit of a USB cable connection is insufficient. BitFlow CXP frame grabbers allow up to 100 meters between the frame grabber and the camera, without any loss in quality. To minimize cabling costs and complexity, BitFlow frame grabbers require only a single piece of coax to transmit high-speed data, supply power, and send control signals.

BitFlow's latest model, the Aon-CXP frame grabber, is engineered for simplified integration into a robotics system. Although small, the Aon-CXP receives 6.25 Gb/s of data over its single link, almost twice the real-world data rate of the USB3 Vision standard and significantly faster than the latest GigE Vision data rates. The Aon-CXP is designed for use with a new series of single-link CXP cameras that are smaller, less expensive and cooler running than previous models, making them ideal for robotics.

TO KNOW MORE ABOUT BITFLOW FRAME GRABBER CARDS DEALER MUMBAI INDIA CONTACT MENZEL VISION AND ROBOTICS PVT LTD CONTACT US AT (+ 91) 22 67993158 OR EMAIL US AT INFO@MVRPL.COM




Friday, 21 December 2018

AN INTRODUCTION TO MACHINE VISION SYSTEMS


Machine vision is the incorporation of computer vision into industrial manufacturing processes, although it does differ substantially from computer vision. In general, computer vision revolves around image processing. Machine vision, on the other hand, uses digital input and output to manipulate mechanical components. Devices that depend on machine vision are often found at work in product inspection, where they often use digital cameras or other forms of automated vision to perform tasks traditionally performed by a human operator. However, the way machine vision systems ‘see’ is quite different from human vision.

THE COMPONENTS OF A MACHINE VISION SYSTEM CAN VARY, BUT THERE ARE SEVERAL COMMON FACTORS FOUND IN MOST. THESE ELEMENTS INCLUDE:

  • - Digital or analog cameras for acquiring images

  • - A means of digitizing images, such as a camera interface

  • - A processor
When these three components are combined into one device, it’s known as a smart camera. A machine vision system can consist of a smart camera with the following add-ons:
  • - Input and output hardware

  • - Lenses

  • - Light sources, such as LED illuminators or halogen lamps

  • - An image processing program

  • - A sensor to detect and trigger image acquisition

  • - Actuators to sort defective parts

HOW MACHINE VISION SYSTEMS WORK

Although each of these components serves its own individual function and can be found in many other systems, when working together they each have a distinct role in a machine vision system.

To understand how a machine vision system works, it may be helpful to envision it performing a typical function, such as product inspection. First, the sensor detects if a product is present. If a product does pass by the sensor, the sensor triggers a camera to capture the image, and a light source to highlight key features. Next, a digitizing device called a frame grabber takes the camera's image and translates it into digital output, which is then stored in computer memory so it can be manipulated and processed by software.

In order to process an image, computer software must perform several tasks. First, the image is reduced in gradation to a simple black and white format. Next, the image is analyzed by system software to identify defects and proper components based on predetermined criteria. After the image has been analyzed, the product will either pass or fail inspection based on the machine vision system’s findings.
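The threshold-and-analyze steps above can be sketched in a few lines. This is a deliberately minimal illustration: the 128 threshold and the 5-pixel defect limit are arbitrary assumed values, the image is a plain nested list, and a production system would use a vision library and far richer criteria than a dark-pixel count.

```python
def inspect(image, black_threshold=128, max_defect_pixels=5):
    """Binarize an 8-bit grayscale image (nested lists, 0=black, 255=white)
    and count pixels darker than the threshold as defects. The product
    passes inspection if the defect count stays within the allowed limit."""
    binary = [[0 if px < black_threshold else 255 for px in row] for row in image]
    defects = sum(px == 0 for row in binary for px in row)
    return defects <= max_defect_pixels, defects

# A mostly-bright part with one dark blemish passes; a dark part fails.
ok, n = inspect([[200, 200], [200, 40]])
bad_ok, bad_n = inspect([[10] * 4 for _ in range(4)])
```

The pass/fail boolean is what would ultimately drive the actuators that sort defective parts off the line.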

GENERAL APPLICATIONS

Beyond product inspection, machine vision systems have numerous other applications. Systems that depend on visual stock control and management, such as barcode reading, counting, and store interfaces, often use machine vision systems. Large-scale industrial product runs also employ machine vision systems to assess the products at various stages in the process and also work with automated robotic arms. Even the food and beverage industry uses machine vision systems to monitor quality. In the medical field, machine vision systems are applied in medical imaging as well as in examination procedures.

TO KNOW MORE ABOUT INDUSTRIAL MACHINE VISION SYSTEM CONTACT MENZEL VISION AND ROBOTICS PVT LTD CONTACT US AT (+ 91) 22 67993158 OR EMAIL US AT INFO@MVRPL.COM


Monday, 23 April 2018

DEMAND FOR NEAR INFRARED IMAGING IS HEATING UP


Near infrared (NIR) imaging is growing in demand around the globe, typically as a replacement for thermal or far-infrared (FIR) imaging in night vision. NIR cameras are able to detect the wavelengths of light directly adjacent to the visible light spectrum. Unlike thermal cameras, NIR cameras still detect photons like a camera in the visible light spectrum, just at a different wavelength. In the NIR spectrum, there are actually more detectable photons at night, which is what makes NIR cameras so valuable for night vision.
But how does better night vision lead to global growth in NIR imaging? And what types of applications are using NIR cameras?

WHAT’S DRIVING DEMAND FOR NEAR INFRARED IMAGING?

According to a recent study, the market for all infrared devices will be worth $11.36 billion by 2022. The market will grow at a steady compound annual growth rate (CAGR) of 8.32% between 2016 and 2022. Surveillance applications and long-wavelength infrared (LWIR) devices were expected to see the most growth, but NIR devices will still represent a solid portion of this growth.
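The projected figure can be cross-checked with the standard compound-growth formula. The implied 2016 base value below is back-calculated from the $11.36B / 8.32% figures for illustration; it is not a number taken from the study.

```python
def compound_growth(start_value, cagr, years):
    """Value after `years` of constant compound annual growth (CAGR
    expressed as a fraction, e.g. 0.0832 for 8.32%)."""
    return start_value * (1.0 + cagr) ** years

# Backing out the implied 2016 market size: 11.36 / 1.0832**6 is roughly 7.0,
# so a ~$7.0B market growing at 8.32% per year reaches ~$11.36B in 6 years.
implied_2016 = 11.36 / (1.0 + 0.0832) ** 6
```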
Technological advances in NIR imaging will be a major driver of growth in the coming years. NIR sensors often have low quantum efficiencies (QE), topping out at around 58%. Recent breakthroughs, such as thicker silicon and extended deep trench isolation (DTI) architecture for increased photon absorption, can bring QEs as high as 90%. This makes them much more suitable for a wide range of applications, especially applications where range and accuracy are vital.


NEAR INFRARED IMAGING APPLICATIONS – ADVANCED DRIVER ASSISTANCE SYSTEMS (ADAS)

One of the best examples of NIR imaging in commercial applications would be their use in advanced driver assistance systems (ADAS). The U.S. and the E.U. have both mandated that all vehicles have some form of ADAS by 2020, so this technology will proliferate quickly.
When compared to thermal vision, NIR vision is a clear winner. Thermal vision only detects heat. It’s not good at producing crisp, clear images to facilitate semi- or full-autonomy.
NIR vision, on the other hand, is independent of an object's heat and captures clear images that allow ADAS to function properly at night. The only technical problem currently facing NIR vision deployment in ADAS is its limited effective range – often less than 600 feet.
Despite this shortcoming, recent technological advances have addressed this problem and NIR imaging is still the best solution for applications like ADAS. 

Demand for NIR imaging will grow around the globe for the foreseeable future. NIR cameras are particularly adept at functioning at night with little to no sources of light and will start making their way into a number of applications like imaging in ADAS systems. 

TO KNOW MORE ABOUT VISION INSPECTION CAMERA BLOG INDIA, AUTONOMOUS SYSTEMS ARTICLES, RESOURCES AND NEWS VISIT MENZEL VISION AND ROBOTICS PVT LTD CONTACT US AT (+ 91) 22 67993158 OR EMAIL US AT INFO@MVRPL.COM



Source - VISIONONLINE.ORG

Thursday, 29 March 2018

LENS SEES CLEARLY EVEN THROUGH SHOCK AND VIBRATION



JESSICA GEHLHAR, VISION SOLUTIONS ENGINEER, AND CORY BOONE, OPTICAL ENGINEER, EDMUND OPTICS

Historically, Industrial and Ingress Protection Ruggedized imaging lenses have addressed many environmental and application challenges. But as imaging systems gain more moving elements, and as products move through inspection systems faster for higher throughput, these movements demand tighter calibration and higher image performance. As applications like factory automation, measurement, robotics and autonomous vehicles continue to expand and develop, the need for Stability Ruggedized imaging lenses will increase along with the progression of the industry.
Each of these applications presents environmental challenges–such as shock, vibration, and contamination–to imaging systems. Unlike lab and observatory setups, which tend to have relatively controlled environments, manufacturing facilities can be rife with environmental operating difficulties.
To address these challenges, ruggedized imaging lenses have a number of features and benefits. But to determine the best ruggedized lens for an application, first let’s clearly define the various ruggedization techniques. 

STANDARD IMAGING

A standard imaging lens can be insufficient in some applications and environments due to the large number of moving parts within the lens assembly, such as the double-threaded focus adjustment, the multi-leaf iris diaphragm, and their corresponding thumb screws. For example, the thin overlapping iris leaves are especially susceptible to high shock and vibration, which can cause them to spring out of place and be damaged. By replacing the iris leaves with a fixed aperture, the survivability of the lens can be greatly improved.
Another component of the lens that may come loose during shock and vibration is the thumbscrews. Although they may not completely fall off, they can loosen enough that the focus changes, potentially degrading image quality. On a machine vision inspection system, faulty image quality increases the potential to reject passing units or pass failing units. Debris and contaminants in the area can compound these effects.

RUGGEDIZATION

Historically, there have been two primary ruggedization techniques to address these environmental difficulties – Industrial Ruggedization and Ingress Protection Ruggedization.

IN AN INDUSTRIAL RUGGEDIZED IMAGING LENS, MANY OF THE MOVING PARTS OF A STANDARD IMAGING LENS ARE ELIMINATED:


  • the multi-leaf iris is replaced with a fixed aperture stop
  • the focus adjustment is replaced with a simple single thread
  • the thumb screws are replaced with set screws.
Industrial Ruggedization prevents many of the unintentional movements and focus shifts described above, and therefore maintains ideal image quality. Industrial Ruggedized imaging lenses can also prevent a user from accidentally changing the focus and iris settings.

In an Ingress Protection Ruggedized imaging lens, the lens assembly is either fully enclosed or sealed with O-rings (or RTV silicone) to withstand environmental contaminants. IP66 and IP67 environmental ratings are the best-known standards for particulate and water resistance.

Enclosures and seals can be especially critical for lenses used to inspect food quality. The lenses must withstand direct exposure to liquids and humidity in such wash down applications.

Many manufacturing, processing, and packaging applications are in unfavorable environments where dust, debris, dirt, adhesives or fluids are commonplace. Ingress Protection Ruggedized imaging lenses are designed to withstand these harsh environments.

STABILITY RUGGEDIZATION

Some of today’s more demanding applications in factory automation, measurement, robotics, and autonomous vehicles levy additional requirements on imaging systems beyond those of Industrial Ruggedization and Ingress Protection Ruggedization. In such situations, there’s another type of ruggedization: Stability Ruggedization. The Edmund Optics TECHSPEC Compact Ruggedized (Cr) Series Fixed Focal Length Lenses are an example of Stability Ruggedized imaging lenses.

While Ingress Protection Ruggedization prevents contamination and Industrial Ruggedization eliminates moving parts, Stability Ruggedization maintains (or stabilizes) optical pointing and positioning even after heavy shock, vibration and temperature change. In a Stability Ruggedized imaging lens, the individual lens elements are glued into place to prevent them from moving within the housing (see Figure 1).

In an optical system, lens elements sit within the inner bore of the barrel. The clearance between the outer diameter of each lens element and the inner diameter of the barrel is typically less than 50 microns; decenters of even tens of microns are enough to significantly affect the pointing of the lens (see Figures 2, 3, and 4).

With a Stability Ruggedized lens, an object point that falls on the exact center pixel will continue to fall there even after the lens has been heavily vibrated; pixel shift is therefore minimized and the image remains stable.

Even in clean inspection environments with well-controlled robotic movements, there are challenges. Conveyor lines and robotic systems move at higher speeds and handle heavier products than ever before. Vibration arises from the operating speeds and the weight of the objects being handled, or from neighboring lines and systems. These high-intensity environments impose tough shock and vibration requirements on imaging systems, making lens performance even more critical.

Additionally, users have higher resolution and image quality expectations. As camera pixel sizes become smaller, even slight misalignments in imaging systems become apparent. Pointing and alignment changes that were once unnoticeable with a camera having 5.6 micron pixels may be very obvious, either over time or after a strong shock, with a camera having 1.4 micron pixels.
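The effect of pixel size is simple arithmetic: the same physical shift on the sensor spans far more of the smaller pixels. A minimal sketch, with a hypothetical 20 micron image-plane shift chosen purely for illustration:

```python
def shift_in_pixels(image_shift_um: float, pixel_size_um: float) -> float:
    """Convert a physical image-plane shift (in microns) into a pixel shift."""
    return image_shift_um / pixel_size_um

image_shift_um = 20.0  # hypothetical lateral image shift after shock

for pixel_size_um in (5.6, 1.4):
    px = shift_in_pixels(image_shift_um, pixel_size_um)
    print(f"{pixel_size_um} um pixels: {px:.1f} px shift")
```

The same 20 micron shift is roughly 3.6 pixels on the 5.6 micron sensor but over 14 pixels on the 1.4 micron sensor, which is why shrinking pixels make mechanical stability visible.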

Stability Ruggedization is important in applications where the field of view must be calibrated, such as measurement equipment, 3D stereo vision, lenses used for robotic sensing, and lenses used for tracking object locations. These applications often require the pointing to be stabilized to values much smaller than a single pixel.

In 3D stereo vision, two imaging lenses are used to image a pattern that has been projected onto a 3D object. The two images are then compared to extract 3D information about the object, but to do this, the angle between the two lenses and their fields of view must be well calibrated.

Once calibrated, any shift in the pixel mapping will offset the information in the 3D model, affecting the measurements. These systems often experience heavy shock and temperature shifts during shipping or relocation. Recalibrating after every relocation would require a costly onsite technician visit; setting up and calibrating the system once keeps costs down.
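The sensitivity of stereo measurements to pixel shift can be seen in the standard pinhole stereo relation, depth Z = f·B/d (focal length in pixels, baseline, disparity in pixels). A minimal sketch with hypothetical numbers, showing how even a half-pixel pointing shift in one lens biases every depth measurement:

```python
def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Pinhole stereo depth: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

f_px = 2000.0        # focal length in pixels (hypothetical)
baseline_m = 0.10    # 10 cm baseline between the two lenses (hypothetical)
disparity_px = 40.0  # measured disparity for some object point (hypothetical)

z_true = depth_from_disparity(f_px, baseline_m, disparity_px)
# Suppose shock shifted the pointing of one lens by half a pixel,
# silently biasing every disparity measurement:
z_shifted = depth_from_disparity(f_px, baseline_m, disparity_px + 0.5)

print(f"calibrated depth:  {z_true:.3f} m")    # 5.000 m
print(f"after 0.5 px shift: {z_shifted:.3f} m")  # 4.938 m
```

A sub-pixel shift that would be invisible in a single image produces a depth error of several centimeters at this range, which is why these systems need pointing stabilized to well under a pixel.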

A similar application is distortion calibration. An imaging system with distortion does not lose information – the information is simply displaced, so the distortion can be mapped and calibrated out. If pixel shift occurs, the stored distortion map no longer matches the actual distortion; applying the offset map displaces the measured values and degrades accuracy.
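A minimal sketch of this failure mode, using a simple radial distortion model (the model and coefficient are hypothetical, chosen only to illustrate the idea): calibration records where a known point lands, and a later mechanical shift means the correction is applied at the wrong pixel.

```python
import numpy as np

K1 = -0.2  # radial distortion coefficient (barrel distortion, hypothetical)

def distort(xy: np.ndarray) -> np.ndarray:
    """Apply r_d = r * (1 + K1 * r^2) in normalized image coordinates."""
    r2 = np.sum(xy**2, axis=-1, keepdims=True)
    return xy * (1.0 + K1 * r2)

# Calibration: record where a known true point images through the lens.
point = np.array([0.5, 0.5])      # true normalized coordinates
calibrated = distort(point)       # position stored in the distortion map

# After shock, the whole image shifts slightly (hypothetical offset);
# the stored map is now applied at the wrong location:
shifted = calibrated + 0.002
map_error = shifted - calibrated  # residual error baked into every measurement

print("calibrated position:", calibrated)
print("residual map error: ", map_error)
```

The correction itself is still mathematically valid; it is the mechanical shift underneath it that silently turns a calibrated system into a biased one.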

Another recent challenge for machine vision is that traditional robotic imaging systems are increasingly crossing over into autonomous systems, and embedded vision solutions are moving beyond relatively stationary conveyor-line and robotic setups.

Vision-enabled robots rely on imaging systems to know where they are in space. As the robots move to perform their tasks, the constant motion can cause pixel shift, degrading both their positional awareness and their accuracy. Machine vision is also expanding beyond the final pack-out robots on a line. After products are packaged and boxed up, vision-guided robots can transport them and load them on and off trucks – even the trucks themselves may be autonomous vehicles, guided by many sensors, including vision.

Back in the factories and warehouses, more autonomous robots are driving or flying around to move products or inspect storage locations. Large distribution centers often carry a wide variety of goods, requiring inspection and handling systems that accommodate a range of weights, sizes, and packaging materials. As industrial machine vision systems move beyond typical applications and onto autonomous vehicles, environments that were once ‘controlled’, however challenging, are in many ways no longer controlled.




Source - DESIGNWORLDONLINE.COM