Monday 28 November 2022

THE ADVANTAGE OF CHANGEABLE SPECTRAL FILTERS IN YOUR OPTICAL GAS IMAGING (OGI) CAMERA

There are two kinds of OGI cameras: cooled and uncooled. Cooled cameras, like the EyeCGas 2.0, allow us to detect small to large leaks (methane and over 400 VOCs) from various distances, up to 10 kilometers away or more.


Uncooled OGI technology, such as the EyeCGas Mini, is more limited in its capabilities, detecting medium to large leaks (methane) closer to the source.

Inside the EyeCGas 2.0 OGI camera, we use a spectral filter to enhance compound detection in a specific wavelength band. The EyeCGas 2.0 is the only OGI camera that lets you change the spectral filter, ensuring improved detection of VOCs and CO2 within the same camera.

Without the option to change the spectral filter, you are limited to the standard filter (see chart below), which is good enough only in ideal weather conditions, such as no wind or humidity.

In humid conditions or at longer distances, having the option to change to a heavy alkanes spectral filter vastly improves the camera’s VOC detection capabilities.

The chart below shows the different filters and the absorption characteristics of some of the compounds they detect. Methane is detected better with the standard filter, whereas VOCs are detected better with the heavy alkanes filter.

As for CO2, the chart shows that the detector of the EyeCGas 2.0 (VOC) camera responds to wavelengths up to 4.3 microns; therefore, by installing the CO2 filter, we can also detect CO2. The table below summarizes the three filters offered with the EyeCGas 2.0.
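In brief (the CO2 filter works near the 4.3-micron upper end of the detector’s response, where CO2 absorbs strongly):

Filter          Center     Best suited for
Standard        ~3.3 µm    Methane and VOCs in good weather
Heavy alkanes   ~3.4 µm    VOCs in humid conditions or at longer range
CO2             ~4.3 µm    CO2 detection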

To summarize, for optimal performance with your OGI camera, you should always have the option to change the filters (it takes about one minute).

Use the standard filter (center ~3.3 µm) for methane and VOCs in good weather conditions. For improved detection of VOCs (hexane, octane, butane, pentane, heptane, etc.) in humid conditions and at longer range, switch to a heavy alkanes filter (center ~3.4 µm); for CO2, change to the CO2 filter.

TO KNOW MORE ABOUT HIGH RESOLUTION CAMERA DISTRIBUTOR IN MUMBAI INDIA, CONTACT MENZEL VISION AND ROBOTICS PVT LTD OR CALL US AT (+ 91) 22 35442505 OR EMAIL US AT INFO@MVRPL.COM

WHAT IS IMAGING?

Over several decades, advancements in imaging technology have brought improvements to both the industrial and consumer market spaces and have accelerated growth in almost all industries including Factory Automation, Autonomous Systems, Logistics and Supply Chain Management, Life Sciences and Particle Analysis, Electronics and Semiconductors Inspection, Aerospace and Defense, and Metrology and 3D Measurement.


These advancements are seen every day in the form of high-quality consumer electronics like smartphones, 4K TVs, and personal computers, which have become easier to manufacture because of the improved reliability and repeatability that imaging systems afford to manufacturing processes.

The benefits of these advancements are also passed on to consumers when purchasing goods online, in the form of extremely reduced shipping and delivery times due to the simplification and optimization of logistical processes used in storage facilities and product warehouses.

These advancements have also enabled the high-throughput production of life-saving pharmaceuticals and have enabled the creation of novel, complex medical devices and procedures, which reduce patient recovery times and allow for patients to live much longer and healthier lives.

The fundamental components of an imaging system are illumination, an imaging lens, and a camera. Illumination is used to properly light the object and/or highlight features of interest; it helps the imaging system properly “see” the object. The imaging lens takes the object information and reproduces it onto a camera sensor. Although software and motion control may be needed to tie these three components together, choosing the proper three fundamental components builds the foundation of a successful imaging system.

It is important to understand how decisions and tradeoffs impact the final performance of the imaging system and the end application. Should a monochrome or color camera be used? What is the optimal illumination geometry? Does the camera come with a lens? Which lens works best for the application at hand? Whether your application is in factory automation, autonomous systems, life sciences, or something else, understanding the three fundamental components eases the development and deployment of these sophisticated imaging systems.

FACTORY AUTOMATION

Factory automation is the use of controllers, algorithms, and sensors to automate repetitive tasks and reduce human oversight. Commonly automated tasks include sorting, inspection, and defect detection. In general, factory automation is what springs to mind when thinking of “Machine Vision.”

AUTONOMOUS SYSTEMS

Autonomous means having the ability to self-govern. Common autonomous systems include self-driving cars and trucks, flying taxis, agricultural robots, and delivery robots. Vision systems are an incredibly important piece of the future of autonomous systems.

LOGISTICS AND SUPPLY CHAIN MANAGEMENT

Logistics processes often use robots for automated warehousing. Robots perform OCR or scan barcodes to rapidly identify products on shelves or items packaged and ready to be shipped.
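As a minimal illustration of the barcode-reading step, here is a sketch using the open-source OpenCV and pyzbar libraries; the image filename is hypothetical, and real warehouse robots run their own software stacks:

import cv2
from pyzbar import pyzbar

# Load a picture of a package and convert it to grayscale for decoding.
image = cv2.imread("package.png")  # hypothetical input frame
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# pyzbar finds and decodes every barcode visible in the frame.
for code in pyzbar.decode(gray):
    print(f"{code.type}: {code.data.decode('utf-8')} at {code.rect}")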

LIFE SCIENCES AND PARTICLE ANALYSIS

Life sciences include fields related to biology, medicine, physiology, and much more. Besides X-ray imaging and MRIs, this space also uses a wide range of imaging techniques like microscopy and special labeling to view, count, sort, and perform other cytometry methods on cells.

ELECTRONICS AND SEMICONDUCTORS INSPECTION

More circuitry can be integrated on semiconductors than ever before, and flat panel displays have extremely high resolutions. To manufacture such complex devices, electronics and displays must be inspected for chip placement and defects at very high resolution.

AEROSPACE AND DEFENSE

Unmanned aerial, ground, and marine vehicles, fixed-wing and rotary-wing aircraft, and many other autonomous systems are used for target acquisition and designation, intelligence, surveillance, reconnaissance, and general situational awareness.

METROLOGY AND 3D MEASUREMENT

Information about an inspection sample, like a characteristic dimension or color, must be measured with repeatable accuracy and reliability. Techniques that enable accurate measurement include time-of-flight imaging, Scheimpflug scanning, 3D imaging, and LIDAR.

ADVANCED DIAGNOSTICS

Advanced diagnostics make life-changing medical science and devices possible. These devices are used in the life and medical sciences to improve quality of life and extend it.

TO KNOW MORE ABOUT IMAGING SOURCE MACHINE VISION CAMERAS IN INDIA, CONTACT MENZEL VISION AND ROBOTICS PVT LTD OR CALL US AT (+ 91) 22 35442505 OR EMAIL US AT INFO@MVRPL.COM

Thursday 22 September 2022

TOP 5 ADVANCES IN AUTOMATION

Automation is picking up quickly as technology rapidly develops, and things we did not think were possible even ten years ago are suddenly possible. Here are our top five picks for the biggest advances happening in automation in 2022.

3D PRINTING IN MANUFACTURING FOR FINISHED COMPONENTS

3D printers have come a long way since their start in 1988. Originally, they were intended only for rapid prototyping. Now, they can be used to create finished parts in manufacturing and even to 3D print actual homes, like “House Zero” from ICON. Of course, today’s 3D printers still have a long way to go in refining their processes, but they have made enormous progress in just the past couple of decades.

SELF-DRIVING CARS

Big tech companies like Tesla and Google have been designing and producing driverless car technology. According to a report by Securing America’s Future Energy (SAFE), cited by Investopedia, self-driving cars are expected to create approximately $800 billion worth of opportunities for automakers and technology developers by 2050. High-resolution machine vision cameras and LiDAR technology capable of sensing objects and people around the car will be the key to deciding the winner of the Full Self-Driving (FSD) / autonomous vehicle race among the world’s big tech companies.

24-HOUR MANUFACTURING OPERATIONS

Industrial robots can operate 24/7, performing repeatable and tiresome processes. Additionally, the cost of industrial robots has dropped relative to human labor over the past two decades. This allows manufacturers to achieve higher productivity and efficiency without additional labor costs.

FINE-TUNING CAPABILITIES

Industrial robots are now equipped with additional sensing, measurement, and process-control capabilities that help to guide increasingly dexterous machines. The sensors in today’s robots can detect touch, light, pressure, temperature, vibration, humidity, sound, and more.

‘SEMANTIC AUTOMATION’ REVOLUTIONIZES ROBOTIC PROCESS AUTOMATION (RPA)

Currently, automation developers must tell robots what to do step-by-step: “Move this, open that, bring this…” so even in drag-and-drop, low-code environments, building intricate automation can be complicated.

But semantic automation lets developers move away from rules-based approaches. Semantic software robots use AI to simply observe activity and begin to match it without step-by-step instructions. They will recognize the process, understand what data is required, know where to get it, and where to move it.

With all the new developments in automation and robotics, it is just as important that each system features the best machine vision optics. Data flows from the lens first, and Computar’s Machine Vision lenses come in multiple sizes and focal lengths, with convenient capabilities perfect for automation and robotic applications.

TO KNOW MORE ABOUT HIGH RESOLUTION CAMERA DISTRIBUTOR IN MUMBAI INDIA, CONTACT MENZEL VISION AND ROBOTICS PVT LTD OR CALL US AT (+ 91) 22 35442505 OR EMAIL US AT INFO@MVRPL.COM


Thursday 1 September 2022

HOW AUTOMATION AND ROBOTICS ARE CHANGING THE AGRICULTURE INDUSTRY

In recent years, the agricultural industry has increasingly—and unsurprisingly—turned to robotics, automation, and AI to increase productivity and efficiency. Without robotics, traditional farming methods struggle to keep up. In addition, many farmers are struggling to find an adequate workforce. The need for automated farming has never been more urgent. Here are the top 10 ways automation and robotics are changing the industry.


1. AUTONOMOUS PLANTING

Nurseries can be the first step in the food journey for many crops. Nursery automation solutions are used for intelligent seeding, planting, potting, and plant inspection. In addition, mobile bots can move plants through each stage of development, then be used for harvesting, packaging, and palletizing.

2. SEEDING

The traditional method for sowing seeds uses a "broadcast spreader" attached to a tractor, throwing the seeds while the tractor is in motion. This method is inefficient and wasteful.

With machine vision, farmers use geomapping and "agrobots" for autonomous precision seeding. First, the geomapping generates a map with the field's soil properties (quality, density, etc.). Then, a tractor with a robotic-seeding attachment places the seeds at varied locations and depths to optimize germination.

3. CROP MONITORING

Farmers can collect real-time data autonomously and continuously to analyze their fields using IoT sensors, ground robots, and drones.

4. CROP ANALYSIS

Machine vision combined with deep learning algorithms can detect soil conditions, analyze aerial views of the agricultural land, and assess crop health based on geo-sensing information.

In addition, ground-based robots can provide detailed monitoring by getting closer to the crops.

5. FERTILIZING AND IRRIGATION

By targeting specific plants, robot-assisted irrigation can reduce wasted water. Robots can access areas where other machines cannot and autonomously navigate between rows of crops, then selectively water the plants where it is most needed.

6. CROP WEEDING AND SPRAYING

Robots are efficient for targeted spraying of pesticides and weed killers onto crops. In addition, "micro-spraying" significantly reduces the amount of herbicide used. This is less wasteful and is kinder to the environment. There are also weeding robots that use lasers to kill weeds.

According to Vietnam National University, micro-spraying robots use machine vision technology to detect weeds and target them with the needed herbicide.

7. THINNING AND PRUNING

Pruning can be complex. The winemaking industry currently uses autonomous vineyard robots to prune grape vines. The robots create 3D models and identify what needs pruning. Intelligent software then directs the robot where to cut. Some pruning robots use a spinning cutting tool to ensure precision and use AI to learn the specific parts to prune by reviewing examples. Machine vision then detects which plants to keep and which to remove.

8. AUTONOMOUS TRACTORS

It is now commonplace for tractors to be equipped with robotics. However, there are also fully autonomous tractors using machine vision. Autonomous tractors can gather, identify, and sort crops. Autonomous harvesting helps farmers reduce costs and increase efficiency.

9. SHEPHERDING AND HERDING

Although most agricultural robots are currently applied in crop growing, some are used for sheep and cattle farming. For example, robotic herding systems using barking drones have been developed to herd animals without human input. This rapidly developing automated technology can also be used for monitoring, protecting, and conserving individual animals or entire species.

10. INSPECTION AND SORTING

Using SWIR (Short-Wave Infrared) lenses, machine vision provides the strong contrast and high-resolution imaging needed for agricultural sorting and inspection. Hyperspectral imaging can provide valuable information, including the size, shape, color, and even the chemical composition (ripeness, fungal content, decay, etc.) of the objects being inspected.

CONCLUDING THOUGHTS

Over the past few decades, there has been an enormous expansion of automation, machine vision, and robotics in the agricultural industry. The advancements in robotic crop farming, monitoring, analysis, harvesting, weed control, and more, will continue to revolutionize agriculture and prove to be more efficient and beneficial for both the farmer and the consumer.

When setting up these automated vision systems, lens choice is vital. That makes the machine vision lens choice one of the most impactful decisions that affect how well your system will work for you.


TO KNOW MORE ABOUT MACHINE VISION LENS DISTRIBUTORS IN MUMBAI INDIA, CONTACT MENZEL VISION AND ROBOTICS PVT LTD OR CALL US AT (+ 91) 22 35442505 OR EMAIL US AT INFO@MVRPL.COM


Friday 22 July 2022

WHAT’S THE DIFFERENCE BETWEEN VISIBLE AND SWIR LENSES?

Short-wave infrared (SWIR) lenses are designed to operate in the 0.9-1.7 µm wavelength region. SWIR is close to visible light in that photons are reflected or absorbed by an object, providing the strong contrast needed for high-resolution imaging. SWIR is great for the machine vision and health and sciences industries because water vapor, fog, and certain materials such as silicon are transparent at these wavelengths. SWIR imaging is also helpful because colors that look similar to the human eye are easily differentiated using SWIR lenses.

HOW DOES IT WORK?

SWIR cameras are like visible cameras in the way they detect reflected light: photons in the SWIR band are reflected or absorbed by objects, allowing for high-resolution imaging with strong contrast. SWIR is also able to pierce through cloud cover and capture a well-defined image.

According to Mr. Katsuya Hirano, Chief Optical Designer, CBC Group, the ViSWIR series fully corrects focus shift across the visible and SWIR range (400 nm-1,700 nm): “By using ultra-low dispersion glass and low partial dispersion glass paired with superior design technology developed from Computar’s extensive optics experience, the focus shift is minimized to within a few microns across a super wide range of wavelengths. With this, spectral imaging is achievable with a single-sensor camera by simply syncing the lighting.”

With Computar's ViSWIR HYPER-APO lens series, it is unnecessary to adjust focus for differences in wavelength or working distance. By adopting an APO floating design, the focus shift is reduced at any wavelength and any working distance. This makes these SWIR lenses ideal for multiple applications, including machine vision, UAVs, and remote sensing.

WHICH LENS IS THE BEST FOR MY INDUSTRY?

For the machine vision industry as well as the life sciences industry, we recommend our ViSWIR series. These lenses achieve a clear and precise image from the visible to the SWIR range by applying a multilayer coating that absorbs the specific light. A higher-resolution lens gives you greater specificity in designing and implementing the most efficient vision solutions, so for medical devices and robotics, this series is great for detail work and other short-range imaging.

For the Intelligent Transport Systems Industry and Government and Defense, a blend of visible and SWIR would be most helpful—visible imaging for distance and SWIR for detailed imaging.

Some lenses, such as ours, are designed to perform well for both visible and SWIR, enabling cost-effective, high-performance imaging systems for a range of applications.

TO KNOW MORE ABOUT HIGH RESOLUTION CAMERA DEALER IN INDIA CONTACT MENZEL VISION AND ROBOTICS PVT LTD OR CONTACT US AT (+ 91) 22 67993158 OR EMAIL US AT INFO@MVRPL.COM

Thursday 23 June 2022

HOW DEEP LEARNING AUTOMATES PACKAGING SOLUTION INSPECTIONS

Increasingly, packaging products require their own custom inspection systems to perfect quality, eliminate false rejects, improve throughput, and eliminate the risk of a recall. Foundational machine vision applications along a packaging line include verifying that a label on a package is present, correct, straight, and readable. Other simple label inspections cover presence, position, quality (no flags, tears, or bubbles), and readability (barcode and date/lot codes present and scannable).


But packaging like bottles, cans, cases, and boxes—present in many industries, including food and beverage, consumer products, and logistics—can’t always be accurately inspected by traditional machine vision. For applications that present variable, unpredictable defects on confusing surfaces, such as those that are highly patterned or suffer from specular glare, manufacturers have typically relied on the flexibility and judgment-based decision-making of human inspectors. Yet human inspectors come with one very large tradeoff for the modern consumer packaged goods industry: they aren’t necessarily scalable.

For applications which resist automation yet demand high quality and throughput, deep learning technology is a flexible tool that application engineers can have confidence in as their packaging needs grow and change. Deep learning technology can handle all different types of packaging surfaces, including paper, glass, plastics, and ceramics, as well as their labels. Be it a specific defect on a printed label or the cutting zone for a piece of packaging, Cognex Deep Learning can identify all of these regions of interest simply by learning the varying appearance of the targeted zone. Using an array of tools, Cognex Deep Learning can then locate and count complex objects or features, detect anomalies, and classify said objects or even entire scenes. And last but not least, it can recognize and verify alphanumeric characters using a pre-trained font library.

Here, we are going to explore how Cognex Deep Learning does all of the above for packagers and manufacturers.

PACKAGING DEFECT DETECTION

Machine vision is invaluable to packaging inspections on bottles and cans. In fact, in most factories, it is machine vision which not only inspects the placement of labels and wrapping but also places and aligns them during manufacturing.

Labeling defects are well-handled by traditional machine vision, which can capably detect wrinkles, rips, tears, warpage, bubbles, and printing errors. High-contrast imaging and surface extraction technology can capture defects, even when they occur on curved surfaces and under poor lighting conditions. Yet the metal surface of a typical aluminum can might confuse traditional machine vision with its glare as well as the unpredictable, variable nature of its defects, not all of which need to be rejected. Add to those challenging surface inspections countless forms and types of defects—for example, long scratches and shallow dents—and it quickly becomes untenable to explicitly search for all types of potential defects.

Using a novel deep learning-based approach, it’s possible to precisely and repeatably inspect all sorts of challenging metal packaging surfaces. With Cognex Deep Learning, rather than explicitly programming an inspection, the deep learning algorithm trains itself on a set of known “good” samples to create its reference models. Once this training phase is complete, the inspection is ready to start. Cognex Deep Learning can identify and report all defective areas on the can’s surface that deviate outside the range of a normal, acceptable appearance.
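The general idea of training only on good samples can be sketched with a small convolutional autoencoder in PyTorch. This is a generic illustration of the technique, not Cognex's proprietary implementation; the training data and threshold below are stand-ins:

import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Compress an image and reconstruct it; trained on good samples only."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid())
    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Training phase: learn to reconstruct known-good surfaces (stand-in data).
good_batches = [torch.rand(8, 1, 64, 64) for _ in range(10)]
for images in good_batches:
    opt.zero_grad()
    loss = loss_fn(model(images), images)
    loss.backward()
    opt.step()

# Inspection phase: high reconstruction error means the surface deviates
# from the learned "normal" appearance, i.e. a candidate defect.
def is_defective(image, threshold=0.01):  # image shape (1, 1, 64, 64)
    with torch.no_grad():
        return loss_fn(model(image), image).item() > threshold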

PACKAGING OPTICAL CHARACTER RECOGNITION

Hiding somewhere on almost all consumable packages, regardless of material or type, lies a date/lot code. Having these codes printed cleanly and legibly is important not only for end-users and consumers doing their shopping but also for manufacturers during the verification stage. A misprinted, smeared, or deformed date/lot code printed onto a label on a bottle or package of cookies, for example, causes problems for both.

Typically, traditional machine vision could easily recognize and/or verify that codes are readable and correct before they leave the facility, but certain challenging surfaces make this too difficult. In these cases, a smeared or slanted code printed on specular material like a metal soda case could be read with some effort by a human inspector but not with much reliability by a machine vision inspection system. Packagers then need an inspection system that can judge readability by human standards but, critically, with the speed and robustness of a computerized system. Enter deep learning.

Cognex's deep learning OCR tool is able to detect and read the plain text in date/lot codes, verifying that their chains of numbers and letters are correct even when they are badly deformed, skewed, or—in the case of metal surfaces—poorly etched. The tool minimizes training because it leverages a pre-trained font library. This means that Cognex Deep Learning can read most alphanumeric text out-of-the-box, without programming. Training is limited to specific application requirements to recognize surface details or retrain on missed characters. All of these advantages help ease and speed implementation and contribute to successful OCR and OCV application results without the involvement of a vision expert.

PACKAGING ASSEMBLY VERIFICATION

Visually dependent assembly verification can be challenging for multi-pack goods which may have purposeful variation, as in the case of holiday-themed or seasonal offerings. These packs showcase different items and configurations in the same case or box.

For these sorts of inspections, manufacturers need highly flexible inspection systems which can locate and verify that individual items are present and correct, arranged in the proper configuration, and match their external packaging. To do this, the inspection system needs to be able to locate and segment several regions of interest within a single image, possibly in multiple configurations that can be inspected line-by-line to account for variations in packaging.

To locate individual items by their unique and varying identifiable characteristics, a deep learning-based system is ideal because it generalizes each item’s distinguishing characteristics based on size, shape, color, and surface features. The Cognex Deep Learning software can be trained quickly to build an entire database of items. Then, the inspection can proceed by region, whether by quadrant or line-by-line, to verify that the package has been assembled correctly.

PACKAGING CLASSIFICATION

Kitting inspections require multiple capabilities of their automated inspection systems. Consumer product multi-packs need to be inspected for the right number and type of inclusions before being shipped. Counting and identification are long-standing strengths of traditional machine vision. But ensuring that the right items are included in a multi-part unit requires classifying included products by category—for example, does a sunblock multi-pack contain two types of sunblock, or does it contain an extra sunblock lip balm?

This categorization is important yet remains out of reach for traditional machine vision. Luckily, Cognex's deep learning classification tool can easily be combined with traditional location and counting machine vision tools, or with deep learning-based location and counting tools if the kitting inspection deals with variable product types and requires artificial intelligence to distinguish the generalizing features of these types.

Deep learning-based classification works by separating different classes based on a collection of labelled images, then identifying products by these packaging differences. If any of the classes are trained as containing anomalies, the system can learn to classify them as acceptable or unacceptable.
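As a generic sketch of this labelled-image approach (again, an illustration rather than Cognex's implementation; the three classes and the random stand-in batch are hypothetical), a pretrained backbone can be fine-tuned on packaging images:

import torch
import torch.nn as nn
from torchvision import models

# Start from a pretrained backbone and replace the final layer with
# one output per packaging class (e.g. sunblock A / sunblock B / lip balm).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 3)

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# One training step on a stand-in batch of labelled 224x224 RGB images.
images = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, 3, (8,))
opt.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
opt.step()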

New deep learning-enabled vision systems differ from traditional machine vision because they are essentially self-learning, trained on labeled sample images without explicit application development. These systems can also be trained on new images for new inspections at any time, which makes them a valuable long-term asset for growing businesses.

Deep learning-based software is also quick to deploy and uses human-like intelligence that can appreciate nuances like deviation and variation, outperforming even the best quality inspectors at making reliably correct judgments. Most important, however, is that it can solve more complex, previously un-programmable automation challenges.

Manufacturers in the packaging industry are increasingly demanding faster, more powerful machine vision systems, and for good reason: they are expected to make a great number of products at a higher quality threshold and for less cost. Cognex is meeting customers’ rigorous requirements head-on by offering automated inspection systems that marry the power of machine vision with deep learning in order to manufacture packaging more cost effectively and robustly.

TO KNOW MORE ABOUT MACHINE VISION DEALER INDIA FOR PACKAGING SOLUTIONS CONTACT MENZEL VISION AND ROBOTICS PVT LTD OR CONTACT US AT (+ 91) 22 67993158 OR EMAIL US AT INFO@MVRPL.COM

Wednesday 1 June 2022

3D VISION SOLUTIONS FOR FOOD AND BEVERAGE APPLICATIONS

In the food and beverage industry, packaging quality verification protects a brand’s image and prevents product spoilage, but such systems require precision. The following three vignettes highlight food and beverage inspection challenges and how they are solved with the right machine vision solution.

CAN INSPECTION

Inspecting aluminum cans for missing or damaged features must be done quickly to prevent bottlenecks. A small dent in a can or a tab lifted by just two degrees can result in a failure. Defects this small are difficult or impossible to identify with 2D imaging systems. However, moving to a 3D solution may disrupt production and require retraining workers. Every second counts in high-speed applications, where a few seconds may represent hundreds or thousands of products.


To achieve higher levels of detection without disrupting production or implementing long training programs, Cognex developed the In-Sight 3D-L4000 vision system. This high-performance smart camera delivers the best-quality patented laser imaging. It can detect:


  • Blobs or volumes.
  • Edges.
  • Surface angles.
  • Step heights, etc.

With both 2D capabilities and true 3D vision, the In-Sight 3D-L4000 can simultaneously run both a 2D and 3D inspection of the part. The In-Sight 3D-L4000 is available in three models to meet specific requirements for a range of applications, such as can, packaging, and product inspection.


FINAL PRODUCT INSPECTION

Food comes in many shapes and sizes. Identifying different candies, verifying that frostings and decorations are correct, and determining whether a finished product will fit inside its packaging are all complex tasks for automated imaging systems. To keep up with the industry, 3D solutions and easy-to-use software work together to make the food and beverage industry even sweeter.

Brand image is important. If a customer sees a missing cookie, broken cereal bars, or cupcake frosting smashed into the lid of a container, they may not purchase the product, and instead associate the brand with poor quality. To verify a product’s quality, 3D vision solutions are needed. Tasks include:


  • Detecting defects.
  • Identifying parts – e.g., cookie versus frosting.
  • Verifying heights.
  • Ensuring proper volume.
  • Checking flatness.
  • Verifying presence and absence of components.

High-quality optics and smart cameras are needed to accurately detect features and to determine volumes. The In-Sight 3D-L4000 provides the performance needed to guarantee product quality. However, the biggest challenges may not be solved with the camera alone. The next application demonstrates the need for software interfaces that are easy to set up, operate and maintain.

CEREAL BAR INSPECTION

For a cereal bar application, the key to success is finding software that works easily and effectively without extended training or third-party technicians. Intuitive In-Sight software allows in-house technicians to quickly set up tools. Then the software handles the rest — determining every pixel above and below the set plane with linear measurements and highlighting features in a simple interface to communicate results clearly to every user.
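The “pixels above and below the set plane” idea can be sketched in a few lines of numpy. This is a conceptual illustration only (In-Sight performs this internally through its spreadsheet tools), and the height map here is random stand-in data:

import numpy as np

height_map = np.random.rand(480, 640)  # stand-in for a real 3D height scan (mm)

# Fit a reference plane z = a*x + b*y + c to the scan by least squares.
ys, xs = np.mgrid[0:height_map.shape[0], 0:height_map.shape[1]]
A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
coeffs, *_ = np.linalg.lstsq(A, height_map.ravel(), rcond=None)

# Signed deviation of every pixel: positive = above plane, negative = below.
plane = (A @ coeffs).reshape(height_map.shape)
deviation = height_map - plane

print("max height above plane:", deviation.max())
print("max depth below plane:", deviation.min())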

Many production line workers are already familiar with In-Sight’s spreadsheet programming paradigm, while new operators can learn the system in minutes. In one example, 50 new users were trained on the 3D In-Sight program in less than three hours. Additionally, the In-Sight 3D-L4000 high-performance smart camera detects products that have been rotated or tilted. Its 2K resolution, scan rates up to 4 kHz, and patented speckle-free blue laser optics provide fast, accurate, and repeatable results across a range of machine vision solutions.


The In-Sight 3D-L4000’s unique blue-laser optical design has several benefits including:

  • Class 2M eye-safe operation.
  • More light delivered to the surface than competing solutions.
  • Accurate 3D point clouds for measurements.
  • Capability to capture a scan even when a percentage of the laser is blocked by debris.

This last feature is an unprecedented achievement in 3D laser scanning, made possible by the patented speckle-free laser optics. Most laser scanning applications limit the designer’s option to mount the scanner upside down because of concerns about debris blocking the laser light.

A HEALTHIER TECHNOLOGY

When it comes to the food and beverage industry, inspection can be the difference between success and millions in lost revenue. Having less product in a container than advertised can damage a brand’s image, and too much product may cause packaging errors downstream. Challenges increase as food production lines become faster, more automated, and more dynamic.

Even a simple packaging or product line may require advanced solutions for inspecting various volumes, surfaces, and features. With easy-to-use spreadsheets for effective communication, the In-Sight 3D-L4000 delivers accurate data for inspection and keeps the food and beverage lines moving.


TO KNOW MORE ABOUT MACHINE VISION SYSTEM PRODUCT DEALER IN MUMBAI INDIA CONTACT MENZEL VISION AND ROBOTICS PVT LTD CONTACT US AT (+ 91) 22 67993158 OR EMAIL US AT INFO@MVRPL.COM

Monday 23 May 2022

FIVE EDGE INSPECTION TECHNIQUES IN MACHINE VISION TECHNOLOGY

Finding an edge is one of the most critical functions in machine vision systems, whose algorithms comb through the pixels in a digital image in search of lines, arcs and geometric shapes. Software translates this data into edges, which tell machine vision software which areas to focus on and which ones to ignore.

Thus, edge-inspection tools bear a substantial responsibility for the accuracy and efficiency of machine vision systems. The fundamentals of edge inspection tools illustrate some of the core functions of machine vision.

HOW EDGE DETECTION WORKS IN A FACTORY SETTING

Here’s a common edge-inspection scenario based on Cognex’s machine vision software:
A completed piston assembly must be inserted into a V-8 engine block. A machine vision application takes a photograph of the piston assembly and uses machine vision algorithms to identify its edges. Another picture finds the edges within the engine block that reveal the piston assembly’s installation location.

Edge-inspection tools are configured to direct the machine vision system to focus its attention on specific areas of the piston assembly and engine block while filtering out everything else. This is crucial because computer processors must scan every pixel within an image, which requires processing time and energy. The system runs best if it scans only the required pixels.

In our example, a machine vision system uses edge inspection data to set up a quality-control application that scans images of the piston assembly and engine block for evidence of defects. Once they pass inspection, they proceed down the assembly line to a robot arm that uses edge-inspection data to tell the robot exactly where to place the piston within the engine block.
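To make this concrete, here is a minimal, generic edge-detection sketch using OpenCV's Canny detector restricted to a region of interest. It illustrates the general technique rather than Cognex's software, and the filename, ROI coordinates, and thresholds are made-up values:

import cv2

image = cv2.imread("piston_assembly.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image

# Restrict processing to a region of interest so the system scans only
# the required pixels (x, y, width, height are illustrative values).
x, y, w, h = 100, 50, 400, 300
roi = image[y:y+h, x:x+w]

# Canny marks edge pixels from intensity gradients between two thresholds.
edges = cv2.Canny(roi, threshold1=50, threshold2=150)

# Fit straight line segments to the edge pixels with a Hough transform.
lines = cv2.HoughLinesP(edges, rho=1, theta=3.1416 / 180, threshold=80,
                        minLineLength=30, maxLineGap=5)
print(0 if lines is None else len(lines), "line segments found")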

Operations like this play out in almost infinite variety, given the widespread prevalence of machine vision technology in distribution centers and factory automation.


5 TOOLS FOR ACQUIRING ACCURATE EDGE INSPECTION DATA

Edge inspections typically feed a pass/fail decision. A machine vision system sets up a series of parameters to determine whether an item being scanned should progress through the production environment or be rerouted to an area for addressing defects. Every item photographed or scanned gets a pass or fail rating.

Edge inspections can be configured to establish tolerances. Any object falling outside these tolerances can be rejected, while everything within the tolerances passes.

To visualize how edge inspections work, imagine a modern-day factory creating reproductions of old-fashioned wagon wheels, which have three principal parts: the outer rim, the spokes, and the hub. Edge inspection parameters are critical to using industrial robots to automate the manufacturing process.

These five edge inspection techniques come into play:

  • Distance. In a wagon wheel, the distance between spokes, rims and hubs must fall within tight tolerances. Edge inspection tools measure the distance between these components in a scanned image, enabling both quality control and alignment for robotic production.
  • Angle. Each spoke of the wagon wheel has to be installed at an exact angle. An angle edge inspection tool gives the robot accurate guidance on spoke alignment.
  • Circle diameter. Manufacturing or distribution flaws might deliver the wrong rims or hubs to the robot. A circle diameter edge inspection measures the diameter (twice the center-to-edge distance), creating data for flagging production errors.
  • Circle concentricity. The wagon wheel’s rim and hub share the same center, which makes them concentric. A circle concentricity edge inspection helps the robot align the rims and hubs.
  • Radius. The radius of each rim and hub provides more data to ensure that the robotic automation gets them into precise alignment.

Manufactured components as simple as a wagon wheel or as complex as a smartphone circuit board all benefit from these kinds of edge inspection applications.
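A small numpy sketch shows how these five measurements fall out of the circle and line data that an edge-detection step (for example, cv2.HoughCircles) would produce; all numbers here are made up for illustration:

import numpy as np

# Detected geometry (stand-ins for values an edge-detection step returns).
hub_center, hub_radius = np.array([320.0, 240.0]), 40.0
rim_center, rim_radius = np.array([321.0, 241.0]), 200.0
spoke_end = np.array([461.4, 381.4])  # where one spoke meets the rim

# 1. Distance: length of a spoke from hub edge to rim edge.
spoke_length = rim_radius - hub_radius

# 2. Angle: spoke direction relative to horizontal, in degrees.
dx, dy = spoke_end - hub_center
spoke_angle = np.degrees(np.arctan2(dy, dx))

# 3. Circle diameter: flag rims outside a nominal tolerance.
rim_diameter_ok = abs(2 * rim_radius - 400.0) <= 2.0  # nominal 400 px, tol 2 px

# 4. Circle concentricity: hub and rim centers should coincide.
concentricity_error = np.linalg.norm(rim_center - hub_center)

# 5. Radius: both radii feed the robot's alignment checks.
print(spoke_length, spoke_angle, rim_diameter_ok,
      concentricity_error, hub_radius, rim_radius)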


CHOOSING THE RIGHT EDGE INSPECTION TECHNOLOGY

At Cognex, we’ve been perfecting the art and science of machine vision for decades. Our InspectEdge tool is one of the core assets in our machine vision software suite, which was designed to make it easy for anybody to set up a vision application, even if they don’t have advanced certifications or college degrees.

Other tools in our software suite accomplish essential tasks like bead inspection, pattern matching, identification and image processing. Whether you’re running a distribution center or automating a factory environment, these tools will give you an edge in quality control and efficiency.

TO KNOW MORE ABOUT MACHINE VISION SYSTEM PRODUCT DEALER IN MUMBAI INDIA CONTACT MENZEL VISION AND ROBOTICS PVT LTD CONTACT US AT (+ 91) 22 67993158 OR EMAIL US AT INFO@MVRPL.COM


Thursday 19 May 2022

HOW TO SELECT THE CORRECT MACHINE VISION LENS FOR YOUR APPLICATION

When setting up your automated vision system, the lens may be one of the last components you choose. However, once your system is up and running, your data flows from the lens first. That makes your lens choice one of the most impactful decisions that affect how well your vision system works for you.

Resolution is a priority. A higher resolution lens gives you greater specificity in designing and implementing the most efficient vision solutions.

Don't let the lens be the weak link in your Machine Vision (MV) system. Choosing a great lens tailored to your system's needs can be daunting. To select the ideal lens, one should consider several factors. So, what is the best way to choose the right lens for a machine vision application?

SELECTING A MACHINE VISION LENS: A CHECKLIST

1. What is the distance between the object to be inspected and the camera, i.e., the Working Distance (WD)? Does the distance affect the focus and focal length of the lens?

2. What is the size of the object? Object size determines the Field of View (FOV).

3. What resolution is needed? This determines the required image sensor and pixel size.

4. Is camera motion or special fixturing required?

5. What are the lighting conditions? Can the lighting be controlled, or is the object luminous or in a bright environment?

6. Is the object or camera moving or stationary? If it is moving, how fast? Motion between the object and camera has shutter speed implications, affecting the light entering the lens and the f-Number.

These variables and more make selecting the proper lens a challenge, but an excellent place to start is with three significant features: type of focusing, iris, and focal length.
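As a worked example of how the first two checklist items interact, the common thin-lens rule of thumb estimates focal length from sensor size, working distance, and field of view (all numbers below are hypothetical):

# Rule of thumb: focal_length ≈ sensor_size * working_distance / FOV,
# valid when the working distance is much larger than the focal length.
sensor_width_mm = 8.8         # horizontal width of a 2/3" sensor
working_distance_mm = 500.0   # hypothetical camera-to-object distance
fov_width_mm = 120.0          # hypothetical required horizontal field of view

focal_length_mm = sensor_width_mm * working_distance_mm / fov_width_mm
print(f"choose a lens near {focal_length_mm:.0f} mm focal length")
# -> about 37 mm; round to the nearest stock focal length (e.g. 35 mm)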

Choosing a great lens tailored to your system's needs can be daunting, but we are here to help. Talk to a lens specialist at Computar today and find out how we can assist in selecting the correct lens for you.

TO KNOW MORE ABOUT MACHINE VISION SYSTEM PRODUCT DEALER IN MUMBAI INDIA CONTACT MENZEL VISION AND ROBOTICS PVT LTD CONTACT US AT (+ 91) 22 67993158 OR EMAIL US AT INFO@MVRPL.COM

Monday 25 April 2022

HOW A VISION SYSTEM WORKS


The architecture of a vision system is strongly related to the application it is meant to solve. Some systems are “stand-alone” machines designed to solve specific problems (e.g. measurement or identification), while others are integrated into a more complex framework that can include mechanical actuators, sensors, and so on. Nevertheless, all vision systems are characterized by these fundamental operations:


Image acquisition. The first and most important task of a vision system is to acquire an image, usually by means of a light-sensitive sensor. This image can be a traditional 2-D image, a 3-D point set, or an image sequence. A number of parameters can be configured in this phase, such as image triggering, camera exposure time, lens aperture, lighting geometry, and so on.

Feature extraction. In this phase, specific characteristics are extracted from the image: lines, edges, angles, and regions of interest (ROIs), as well as more complex features such as motion, shapes, and textures.

Detection/segmentation. At this point in the process, the system must decide which of the previously collected information will be passed up the chain for further processing.

High-level processing. The input at this point usually consists of a narrow set of data. The purpose of this last step can be to:


  • Classify objects or object features into a particular class
  • Verify that the input meets the specifications required by the model or class
  • Measure/estimate/calculate specific parameters, such as the position or dimensions of objects or object features
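A minimal sketch of these stages in OpenCV terms may help; it illustrates the flow rather than any particular commercial system, and the filename, thresholds, and tolerance are made-up values:

import cv2

# Image acquisition: here we simply load from disk; a real system would
# trigger a camera with configured exposure, aperture, and lighting.
image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

# Feature extraction: edge pixels, then contours as candidate regions.
edges = cv2.Canny(image, 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

# Detection/segmentation: keep only contours large enough to matter.
candidates = [c for c in contours if cv2.contourArea(c) > 100.0]

# High-level processing: measure each candidate and verify a specification.
for c in candidates:
    x, y, w, h = cv2.boundingRect(c)
    ok = 90 <= w <= 110  # hypothetical width tolerance in pixels
    print(f"object at ({x},{y}): {w}x{h} px -> {'PASS' if ok else 'FAIL'}")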

TO KNOW MORE ABOUT MACHINE VISION SYSTEM PRODUCT DEALER IN MUMBAI INDIA CONTACT MENZEL VISION AND ROBOTICS PVT LTD CONTACT US AT (+ 91) 22 67993158 OR EMAIL US AT INFO@MVRPL.COM