Tuesday, 1 September 2020

ROCK AND ROLL! MACHINE VISION CAMERAS FOR VR IN LIVE CONCERTS

 AUGUST, 2020  ARTICLE



Not long ago, virtual reality was little more than the stuff of science fiction books and movies. Today, virtual reality is making inroads not only in high-end science and technology, but also in ways that affect the lives of everyday people: interactive gaming, data-driven sports broadcasting, video conferencing, education and training, and live music and concerts.

Many music and concert lovers have had the experience of being shoved around in the audience, or of struggling for a limited view of their favorite artists from the crowd. Some prefer to avoid the front crowds and sit back in the lawn area or on the back benches, where rowdy crowd behavior is minimal. But choosing this option risks missing out on the details of the artist's performance and the expressions conveyed to listeners within eye contact of the performer. Such moments add to the live feel and reality of a concert experience.

In addition, since the quasi-collapse of recorded music, in which revenues from physical music sales tumbled drastically between 2001 and 2018, the industry has turned to live music as its main source of income. With hundreds of live concerts taking place around the world each year, it is impossible for fans to be physically present at all of them.

Another challenge for live concerts is that they place a natural limit on the number of people who can attend them. This again, has made it harder for music fans to see their favorite artists. To meet these challenges, virtual reality has stepped in and is now playing a key role in bringing live concert experiences from the front rows to the living rooms of fans and audiences worldwide.

Virtual reality concerts are a win-win situation for the music industry and the music fans. In addition to those who pay to actually be present at a concert, the music industry can monetize everyone who couldn't obtain tickets, or who didn’t always feel like going out to see their favorite musicians perform. Virtual reality lets the music industry combine the best of both worlds: the apparent spontaneity and singularity of live music with the reproducibility and accessibility of recorded music.

Camera technology plays a key role in enabling virtual reality in live concerts, because the concerts are captured live from various angles using high-end cameras. The live images are then processed in near real time so that remotely located audiences can choose the position from which they view the concert (e.g., from the front rows, from beside different accompanists such as percussionists, guitarists, or pianists, or facing the crowd), all while delivering an immersive experience that goes far beyond that provided by a traditional concert DVD.

From a display perspective, like in sports imaging, the horizontal pixel resolution plays an important role in the quality of virtual reality. This resolution can either be actual or interpolated from a higher or lower raw image format.

4K horizontal resolution for VR has been around for quite some time. 4K in its cinema (DCI) form has a pixel resolution of 4,096 x 2,160 pixels, while the closely related consumer standard, Ultra HD, is 3,840 x 2,160. When video shot at 4K is compressed down to an HD-streamable format, the resulting images are clearer, sharper, and cleaner. Shooting at such a high resolution also gives editors and image processing engineers room to zoom far into images and reframe without losing information.

Using an 8K horizontal resolution, also known as Full Ultra HD, allows the user to zoom in twice as much and still get a 4K image. However, achieving real 8K horizontal resolution for virtual reality applications is difficult even if the cameras support 8K horizontal resolutions. This is because most virtual reality installations prefer each camera to have an ultra-wide field of view to give viewers a panoramic or hemispherical view while reducing the equipment handling complexities during live concerts.

The only way to achieve such ultra-wide fields of view is by using fish-eye lenses. Combining a fish-eye lens with a camera using a rectangular sensor is only possible by having an image circle that is smaller than the sensor. Today, sensors with 8K horizontal resolution are able to achieve 5324 pixels in real horizontal resolution when paired with a fish-eye lens of 4.3 mm focal length. This helps to achieve an angle of view of 250° with 21 pixels per degree, which is a good number of pixels for high quality image processing and enhancement. Interpolation can then be used to achieve an 8K horizontal screen resolution.
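The figures above can be sanity-checked with simple arithmetic. The sketch below uses the numbers quoted in the article (5,324 real horizontal pixels over a 250° field of view); the 8K target width of 7,680 pixels is the consumer 8K UHD standard and is our assumption for the interpolation step.

```python
# Back-of-the-envelope check of the fish-eye resolution figures quoted above.

def pixels_per_degree(horizontal_pixels: int, field_of_view_deg: float) -> float:
    """Angular resolution delivered across the lens's field of view."""
    return horizontal_pixels / field_of_view_deg

real_px = 5324          # real horizontal pixels through the fish-eye lens
fov = 250.0             # angle of view in degrees
ppd = pixels_per_degree(real_px, fov)
print(f"{ppd:.1f} pixels per degree")            # ~21.3, matching the article

# Interpolation factor needed to reach an 8K UHD screen width (assumed target)
target_8k = 7680
print(f"upscale factor: {target_8k / real_px:.2f}x")   # ~1.44x
```

A modest ~1.44x upscale is why interpolated 8K from such a sensor still looks convincing: most of the on-screen detail is real rather than synthesized.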

Obviously, the higher the camera resolution, the closer one can get to achieving real 8K horizontal resolution. But it is important to remember that these are live action events. Higher VR resolution is only useful if a camera speed of at least 30 FPS can be maintained. This limits the choice of cameras that can be used for VR applications.

One final requirement for cameras used in virtual reality applications is reliable data transmission at low noise levels over long distances. Concert venues are typically quite large, and cameras may need to be placed at locations far away from the crowds. CXP and optical interfaces (e.g., SFP+) are reliable and well-known interfaces that can handle both the distance and the data-rate requirements of such installations.

QUALITY INSPECTION OF PHARMACEUTICALS USING HIGH SPEED MULTISPECTRAL IMAGING

 AUGUST, 2020  ARTICLE



Pharmaceutical manufacturing is a complex process which mainly deals with the production of drugs and medicines. As a fully automated, high-speed manufacturing process, pharmaceutical production is especially challenging, and it is subject to strict regulations issued by public health authorities.

Defective containers, incorrect or missing medicine, mislabelling, inefficient packaging, or discoloration are risks to consumers, and thus the different stages of the manufacturing process need to be critically inspected. To produce safe medicines that minimize consumer risk while succeeding in a competitive market, highly effective, versatile, and sensitive quality control systems are required. Optical quality control using camera technology plays an important role in fulfilling these challenging inspection tasks.

Pharmaceutical products come in various forms and packages, the most common being blister-packaged tablets. A blister package consists mainly of three parts: the cavity, the seal, and the tablet or drug itself. The cavity is made from synthetic material or aluminium and holds the drug.

Cavity and drug are sealed with a synthetic material, aluminium, paper, or soft foil. Though each component is closely monitored prior to packaging, shortcomings still occur during the primary packaging process. Damage to the package or content, including incorrect placement, coloring, or labelling, must be identified and eventually followed by removal of the defective product.

Production numbers are extremely high for most pharmaceutical products. Optical quality control systems, combined with sophisticated machine learning algorithms, can handle these volumes while offering high sensitivity for defect recognition. With high speed optical control systems, the whole sample can be inspected, a major advantage over manual or mechanical inspection, which can end up destroying the sample during the inspection process. Mechanical inspection systems also place limits on the size of the sample that can be handled.

For many years, inspection of pharmaceutical packages has been carried out with conventional RGB cameras, using only visible features to detect flaws. With the advent of multispectral cameras, one can now move beyond the visible spectrum. Multispectral cameras capture information from multiple discretely positioned spectral bands, including bands outside the visible region.

In addition to visible R-G-B imaging, the additional spectral bands in multispectral imaging can assist in distinguishing different tablets based on their chemical composition, even if they are already enclosed and sealed. Furthermore, the quantity and uniformity of the active pharmaceutical ingredient (API) in the tablet can be measured.

The possibility to assess the extrinsic and intrinsic properties at the same time has major advantages compared to conventional quality control inspection systems. Extrinsic properties such as package condition, labelling and dosage instructions, and color coding can be inspected using the visible spectrum. Intrinsic properties of medicinal packages such as breakage of pills, fill levels of liquids, foreign objects and quantity of pills can be captured using specific spectral bands – typically in the near infrared (NIR) region.
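As a concrete illustration of one intrinsic check, fill level can be estimated from a single vertical intensity profile in an NIR band, since liquid absorbs strongly in parts of the NIR and appears dark below the fill line. This is a minimal sketch with synthetic numbers, not a vendor algorithm; a production system would work on real NIR images (e.g., with OpenCV or NumPy).

```python
# Hypothetical fill-level check from one NIR column profile (top to bottom).
# Rows where liquid absorbs the NIR illumination appear darker than headspace.

def fill_level_fraction(nir_column, threshold):
    """Fraction of the container height occupied by liquid (dark rows)."""
    filled_rows = sum(1 for value in nir_column if value < threshold)
    return filled_rows / len(nir_column)

# Synthetic 100-row profile: 40 bright headspace rows, 60 dark liquid rows.
profile = [200.0] * 40 + [60.0] * 60
print(fill_level_fraction(profile, threshold=120.0))   # 0.6
```

The same thresholding idea, applied band by band, is how intrinsic properties like fill levels and foreign objects become separable from extrinsic print and packaging features.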

Multispectral imaging can also be used in applications related to mistaken identities of defects. For example, in parenteral (injectable) drugs, inspection is critical to verify that there are no particles in the parenteral solution. Multispectral imaging can more easily differentiate between bubbles and particles to minimize waste while ensuring the purity of the injectable medicine.

Advanced multispectral imaging also assists in inspecting the chemical composition of pharmaceuticals. Both intrinsic and extrinsic information can be combined for quality assessment. This allows the producer to have a single quality control setup, which is generally more robust and simpler to operate and maintain.

Personalized medicine is going to be an important area of pharmaceuticals in the future where medicines would be manufactured based on an individual’s underlying health conditions, reaction to specific chemicals, and effectiveness for a specific patient. Camera technology combined with artificial intelligence will continue to play an important role in the quality inspection of personalized medicines.

To support high throughput in pharmaceutical production lines, modern inspection systems will need to be equipped with high speed multispectral cameras, which include the ability to inspect multiple spectral bands at high speeds simultaneously. High performance interfaces such as 10GBASE-T (10 GigE Vision) not only have the bandwidth for high frame rates but also support multi-stream output over a single cable with independent control of each waveband for separate analysis or for fusing together on the host processor.
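To see why a 10GBASE-T interface matters, it helps to estimate the raw throughput a multi-band stream demands. The sensor geometry, band count, and frame rate below are illustrative assumptions, not the specifications of any particular camera.

```python
# Rough bandwidth estimate for an uncompressed multi-band video stream.

def required_gbps(width, height, bands, bits_per_pixel, fps):
    """Raw data rate in gigabits per second for all spectral bands combined."""
    bits_per_frame = width * height * bands * bits_per_pixel
    return bits_per_frame * fps / 1e9

# Example: a 2448 x 2048 sensor, 4 spectral bands, 8-bit pixels, 60 fps
gbps = required_gbps(2448, 2048, 4, 8, 60)
print(f"{gbps:.2f} Gbps")   # ~9.63 Gbps, close to the 10 GigE ceiling
```

Even this modest configuration nearly saturates a 10 Gbps link, which is why independent per-band streams and on-host fusion, rather than higher frame rates alone, are the practical way to scale such systems.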

Another important consideration is the spatial resolution of the camera device. There are a variety of multispectral techniques used in cameras. Some use pixel-level filter arrays or multiple optical paths that sacrifice spatial details for spectral diversity.

Pharmaceutical inspection systems demand high spatial resolution per channel to ensure that small defects such as cracks or foreign particles on pill surfaces, air bubbles in liquids, dosage instructions on extrinsic packaging, etc. are clearly identifiable.

Accurate alignment and overlap of the individual spectral bands assist in precisely identifying the position and size of the defects. It also helps to simultaneously trace and correlate the defect characteristics seen through different spectral bands. A multispectral camera with full sensor resolution and a single optical axis for all spectral bands is often the most precise method to achieve such results.

Lastly, builders of future pharmaceutical inspection systems will benefit from new customization technology that allows them to precisely specify the size and location of the spectral bands needed for their particular application. In this way they can keep the number of wavebands to a minimum in order to maximize the efficiency of the system. Having more spectral bands than needed can result in challenging light source requirements and can drastically reduce the speed of the multispectral system.

Vision system builders can use the customization approach to create the right balance between the number of bands, the speed of the system and effectiveness of the inspection process.

Saturday, 16 May 2020

FACE MASK INSPECTION MADE BETTER AND FASTER WITH BASLER ACE CAMERAS



CUSTOMER

  •  O-Net Industry
  •  Location: Shenzhen, China
  •  Industry: Medical Supply Inspection
  •  Implementation: 2020

APPLICATION

An acute shortage of face masks caused by fear of the spreading coronavirus pandemic has been straining global medical supplies since the start of 2020. The smart face mask inspection system designed by O-Net Industry boosts productivity and increases product conformity rate for the manufacturers. By making the inspection process faster and more effective, this solution can both ease the pressing market need and help face mask manufacturers drive production cost down.

Headquartered in Shenzhen, China, O-Net Industry is a leading company dedicated to machine vision automation. The vision systems designed by O-Net Industry are used in various applications including visual inspection, geometry measurement and OCR, among others; the company is also able to provide customized solutions tailored to the products to be inspected.


In a traditional production line, a high scrap rate is inevitable due to interference by environmental factors and the inconsistent working conditions of face mask making machines, resulting in lower efficiency and conformity rates. The application of vision inspection in the production process, however, can significantly improve the situation.

All parts of a face mask need to be inspected, including the covering, the edges, the ear loops and the metal strip that lets the wearer bend the mask around the bridge of the nose (Figure 1). Quality control needs to identify and remove masks that are overlapping, broken, contaminated, askew or in the wrong size.
Face mask inspection is also made more complex by factors including:
  •  Uneven illumination during inspection, due to the grainy surface of the masks' non-woven fabric
  •  Masks that are moving, with random positions on the conveyor
  •  Edges, ear loops and metal strips that are difficult to distinguish in inspection images

SOLUTION AND BENEFITS

With the help of customized lighting and the Basler ace 5 MP camera, the smart face mask inspection system developed by O-Net can obtain excellent images of each mask. The system can then use the alignment algorithm to check whether the face mask meets standards.

In the inspection process, the system finds the center and corners of the covering part of the face mask in the acquired image (Figure 2) to identify products that are misshapen. Exact measurement of face masks can also be done: with the center confirmed, the software defines the region of interest (ROI) as well as the baseline, to measure the specific size of a face mask and determine whether it meets standards.

Inspection of ear loops focuses on whether the length of loops and the positions of the fixation points meet the set standards. In the image analysis process, ear loops can be defined as curved lines. The software will detect and extract these curved lines and determine whether they are broken (Figure 6), and if not, calculate their length (Figure 7). The system can detect the fixing points of the ear loops in the image (Figure 8) and measure the distances between the fixation points and their respective neighboring edges, to determine whether they meet standards.

Non-woven fabric allows some light to get through, but extra layers can significantly increase its opacity. Thus the folded section of a face mask will appear much darker than the rest in the image. In Figure 4, the upper and bottom part of the face mask appear pale; O-Net's software is configured to accept an image where the paler area is 374,550 pixels in size. By contrast, when two face masks overlap (Figure 5), the paler area drops to only 28,894 pixels, roughly a thirteenth of that. By using such features, O-Net's system determines whether the face masks are overlapping.
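The pale-area test described above boils down to thresholding and pixel counting. The sketch below shows the idea on synthetic frames; the brightness threshold and pixel cutoff are placeholders, not O-Net's actual parameters, and a real system would operate on camera images via a library such as OpenCV.

```python
# Hypothetical overlap check: count bright ("pale") pixels and flag overlap
# when extra fabric layers have blocked enough light to shrink the pale area.

def is_overlapping(gray, brightness_threshold=180, min_pale_pixels=100_000):
    """gray is a 2D grid of 0-255 intensity values (rows of pixel values)."""
    pale_pixels = sum(1 for row in gray for v in row if v > brightness_threshold)
    return pale_pixels < min_pale_pixels

# Synthetic 700x700 frames: a single mask transmits light (bright field),
# stacked masks absorb it (dark field).
single = [[220] * 700 for _ in range(700)]    # 490,000 pale pixels
overlap = [[90] * 700 for _ in range(700)]    # 0 pale pixels
print(is_overlapping(single), is_overlapping(overlap))   # False True
```

The strength of such a feature is its speed: one pass over the image yields a robust yes/no answer without any geometric reasoning.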

Lastly, the edges of a face mask also need inspection. The system needs to check whether the pitting on the edges is well aligned. Two green baselines are defined based on the outer margins of a face mask. Then the vertical distance from each pitting line to the baselines is measured, so that the system can tell if the pitting on the edges is well aligned.

Inspection of the length and position of the metal strip in a face mask is also required. By using the customized lighting, the inspection image can show both the metal strip inside and the non-woven fabric wrapping it. Once the ends of the metal strip are found in the image, the length can be calculated (Figure 10). Meanwhile, two baselines are drawn to check that the position of the metal strip is centered.

The vision inspection systems developed by O-Net can effectively automate the tedious quality check process and significantly improve product conformity rates. On average, each system can replace up to four skilled human inspectors. In factory applications, the visual inspection system usually runs uninterrupted for long periods; therefore system stability is essential. O-Net decided on the Basler ace acA2440-20gm camera due to the well-known stability of this key vision component. Mr. Wang, sales manager of O-Net, explains that “the stability of Basler cameras has helped save considerable maintenance costs. Our system development is quite smooth thanks to the Basler pylon Camera Software Suite, as it’s genuinely a developer-friendly software suite, and a short time-to-market gives us competitive advantages. The vision market is booming in China and speed is vital. Our customers are demanding ever-faster delivery, so the fast and reliable lead time ensured by Basler China is another attractive reason for us to work together.”

The smart software system offers high compatibility and can be customized, as O-Net develops everything from operator interface to architecture. This type of vision inspection software solution can apply to many applications.

TECHNOLOGIES USED

  •  Camera: Basler ace acA2440-20gm
  •  Lighting: Customized BT series lighting
  •  Software: SV Smart Vision System by O-Net

TO KNOW MORE ABOUT BASLER ACE CAMERAS FOR FACE MASK INSPECTION IN INDIA, CONTACT MENZEL VISION AND ROBOTICS PVT LTD AT (+91) 22 67993158 OR EMAIL INFO@MVRPL.COM



Thursday, 14 May 2020

THERMAL IMAGING FOR DETECTING ELEVATED BODY TEMPERATURE


Can thermal cameras be used to detect a virus or an infection? The quick answer to this question is no, but thermal imaging cameras can be used to detect Elevated Body Temperature. FLIR thermal cameras have a long history of being used in public spaces—such as airports, train terminals, businesses, factories, and concerts—as an effective tool to measure skin surface temperature and identify individuals with Elevated Body Temperature (EBT).

In light of the global outbreak of the coronavirus (COVID-19), which is now officially a pandemic, society is deeply concerned about the spread of infection and seeking tools to help slow and ultimately stop the spread of the virus. Although no thermal cameras can detect or diagnose the coronavirus, FLIR cameras can be used as an adjunct to other body temperature screening tools for detecting elevated skin temperature in high-traffic public places through quick individual screening.



If the temperature of the skin in key areas (especially the corner of the eye and forehead) is above average temperature, then the individual may be selected for additional screening. Identifying individuals with EBT, who should then be further screened with virus-specific diagnostic tests, can help reduce or dramatically slow the spread of viruses and infections.
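The screening logic described above can be sketched as a comparison against a rolling baseline of recent readings rather than a fixed absolute threshold (ambient conditions shift skin temperature for everyone). This is an illustrative sketch, not FLIR's algorithm; the window size and 0.5 °C offset are assumptions.

```python
from collections import deque

# Hypothetical elevated-skin-temperature screening: flag anyone whose
# inner-canthus reading exceeds the rolling average of recent (normal)
# readings by a configurable offset, for follow-up diagnostic screening.

class EBTScreener:
    def __init__(self, window: int = 10, offset_c: float = 0.5):
        self.recent = deque(maxlen=window)   # recent normal skin temperatures
        self.offset_c = offset_c             # flag threshold above baseline

    def check(self, temp_c: float) -> bool:
        baseline = sum(self.recent) / len(self.recent) if self.recent else temp_c
        flagged = temp_c > baseline + self.offset_c
        if not flagged:                      # only normal readings update baseline
            self.recent.append(temp_c)
        return flagged

s = EBTScreener()
readings = [34.1, 34.0, 34.2, 34.1, 35.3]    # last person is noticeably warmer
print([s.check(t) for t in readings])        # [False, False, False, False, True]
```

Note that a flag here means only "send for secondary screening": as the article stresses, skin temperature is an adjunct signal, never a diagnosis.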

Using thermal cameras, officials can be more discreet, efficient, and effective in identifying individuals that need further screening with virus-specific tests. A variety of institutions, including transportation agencies, businesses, factories, and first responders are using thermal screening as an EBT detection method and as part of employee health and screening (EH&S).

Airports in particular are actively employing FLIR thermal cameras as part of their screening measures for passengers and flight crews. The screening procedures implemented at airports and in other public places are just the first step when it comes to detecting a possible infection: it’s a quick way to screen for anyone who might be sick, and must always be followed up with further screening before authorities decide to quarantine a person.

WHAT FLIR CAMERAS ARE USED FOR THERMAL SCREENING?

While governments outside the United States may choose from many different cameras, FLIR has a 510(k) filing (K033967) with the US Food and Drug Administration (FDA) for select camera models for use as an adjunct to other body temperature screening tools to detect differences in skin surface temperatures. These cameras include the FLIR Exx-Series, FLIR T-Series, FLIR A320, and Extech IR200.


TO KNOW MORE ABOUT FLIR THERMAL BODY TEMPERATURE SCREENING CAMERAS IN MUMBAI, CONTACT MENZEL VISION AND ROBOTICS PVT LTD AT (+91) 22 67993158 OR EMAIL INFO@MVRPL.COM



Friday, 10 April 2020

WHAT'S THE DIFFERENCE BETWEEN VISION SENSORS AND VISION SYSTEMS?




The difference between vision sensors and vision systems is fairly basic:

A vision sensor performs simple inspections, like answering a yes-no question on a production line. A vision system handles something complex, like helping a robot arm weld parts together in an automated factory.

Machine vision sensors capture light waves from a camera’s lens and work together with digital signal processors (DSPs) to translate light data into pixels that generate digital images. Software analyzes pixel patterns to reveal critical facts about the object being photographed.



Automated production doesn’t have to mean robots building pickup trucks and smartphones. Many automated factory tasks require simple, straightforward kinds of vision sensor data:

  1. Presence or absence. Is there a part within the sensor’s field of view? If the sensor answers yes, then machine vision software gives the OK to move the part to its correct place in the production process.
  2. Inspection. Is the part damaged or flawed? If the sensor sees defects, then the part gets routed out of production.
  3. Optical character recognition (OCR). Does the part contain specific words or text? Answering this question can help automated systems sort products by brand name or product description.
Cognex machine vision systems use multiple sensors to perform all of these basic tasks and to meet many more complicated challenges:
  1. Guides/alignment: When parts require an exact position or alignment, vision systems use sensors to identify the correct parts and place them exactly where they need to go.
  2. Code reading: Codes on packages and individual components contain vital data that vision systems acquire in real time to sort finished goods and differentiate between parts within a production process.
  3. Gauges/measurement: Sensors can ensure that machined parts are cut to the proper dimensions.
  4. 3D imaging: Sensors create three-dimensional representations of parts and products. These images can help automate inspections and tell robotic arms where to pick up and place parts.
Every company has to decide whether they need simple vision sensors or more advanced vision systems. Vision sensors are designed to be easy to install and implement, so factory personnel typically can set them up and configure them without a lot of outside assistance. When the imaging job requires a simple go/no-go decision, vision sensors may be all the company needs.

Vision systems, by contrast, require more expertise and a significant investment of time and money for configuration, installation and training. Often, companies turn to third-party integrators who have deep expertise in vision system installations.

Every company in the machine vision sector has its own way of defining the difference between machine vision sensors and systems. Cognex, for instance, builds vision sensors that perform specific kinds of tasks, like quality control in food processing. Our vision systems combine advanced software with industrial-strength cameras to enable a broad spectrum of factory automation applications.

One way to distinguish between vision systems and sensors is to imagine hundreds of beer bottles on a conveyor belt in a bottling plant. A vision sensor can make sure every bottle has a cap. If the cap is there, then the bottle gets approved and sent to packaging, where another sensor makes sure every six-pack has six bottles.

But the bottling company may want to identify when a bottle cap is skewed past a certain angle. Or, perhaps they want to ensure that the six-pack doesn’t accidentally mix multiple beer varieties. That’s more likely to require a vision system.
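The bottling example captures the distinction well: a sensor answers one yes/no question, while a system also quantifies geometry. The toy sketch below makes that concrete; both checks operate on made-up measurements (a pixel count and two edge points), not real camera output.

```python
import math

# Sensor-style check: a go/no-go decision from a single feature.
def cap_present(cap_pixel_count: int, min_pixels: int = 500) -> bool:
    """Enough cap-colored pixels in the ROI means a cap is there."""
    return cap_pixel_count >= min_pixels

# System-style check: measure the skew angle of the cap's top edge.
def cap_skew_ok(cap_edge: tuple, max_skew_deg: float = 5.0) -> bool:
    """cap_edge is ((x1, y1), (x2, y2)); compare its angle to horizontal."""
    (x1, y1), (x2, y2) = cap_edge
    angle = math.degrees(math.atan2(y2 - y1, x2 - x1))
    return abs(angle) <= max_skew_deg

print(cap_present(820))                    # True  -- cap detected
print(cap_skew_ok(((0, 0), (100, 2))))     # True  -- ~1.1 degrees of skew
print(cap_skew_ok(((0, 0), (100, 15))))    # False -- ~8.5 degrees, reject
```

The first function is all a vision sensor needs; the second requires calibrated geometry, which is where full vision systems, and the integration effort they entail, come in.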



TO KNOW MORE ABOUT COGNEX MACHINE VISION SYSTEM CAMERAS IN INDIA, CONTACT MENZEL VISION AND ROBOTICS PVT LTD AT (+91) 22 67993158 OR EMAIL INFO@MVRPL.COM



Monday, 23 March 2020

HOW MACHINE VISION AND DEEP LEARNING ENABLE FACTORY AUTOMATION




The pace of technological change over the last decade has been nearly unprecedented in human history, and it is poised to become even more breathtaking in the years ahead: blockchain, robotics, edge computing, artificial intelligence (AI), big data, 3D printing, sensors, machine vision, and the internet of things are just some of the massive technological shifts on the cusp for industries.


Credits: pexels.com

Strategically planning for the adoption and leveraging of some or all these technologies will be crucial in the manufacturing industry. In the United States, manufacturing accounts for $2.17 trillion in annual economic activity, but by 2025 – just half a decade away – McKinsey forecasts that “smart factories” could generate as much as $3.7 trillion in value. In other words, the companies that can quickly turn their factories into intelligent automation hubs will be the ones that win long term from those investments.

“If you’re stuck to the old way and don’t have the capacity to digitalize manufacturing processes, your costs are probably going to rise, your products are going to be late to market, and your ability to provide distinctive value-add to customers will decline,” Stephen Ezell, an expert in global innovation policy at the Information Technology and Innovation Foundation, says in a report from Intel on the future of AI in manufacturing.

These technologies as applied in a factory or manufacturing setting are no longer nice to have; they are business critical. According to a recent research report from Forbes Insights, 93% of respondents from the automotive and manufacturing sectors classified AI as ‘highly important’ or ‘absolutely critical to success’. And yet, 56% of these respondents plan to increase their spending on artificial intelligence by less than 10%.

The disconnect between recognizing the importance of new technologies that allow for more factory automation and the willingness to spend on them will be the difference between those companies that win and those that lose. Perhaps this reticence to invest in something like AI could be attributed to the lack of understanding of its ROI, capabilities, or real-world use cases. Industry analyst Gartner, Inc. still slots many of AI’s applications into the “peak of inflated expectations” after all.

But AI, specifically deep learning or examples-based machine vision, combined with traditional rules-based machine vision can give a manufacturing factory and its teams superpowers. Take a process such as the complex assembly of a modern smartphone or other consumer electronic devices. The combination of rules-based machine vision and deep learning can help robotic assemblers identify the correct parts, identify differences like missing screws or misaligned casings, help detect if a part was present or missing or assembled in a different place on the product, and more quickly determine if those were problems. And they can do this at an unfathomable scale.

The combination of machine vision and deep learning is the on-ramp for companies to adopt smarter technologies that will give them the scale, precision, efficiency, and financial growth for the next generation. But understanding the nuanced differences between traditional machine vision and deep learning, and how they complement rather than replace each other, is essential to maximizing those investments.

TO KNOW MORE ABOUT COGNEX MACHINE VISION SYSTEM CAMERAS IN INDIA, CONTACT MENZEL VISION AND ROBOTICS PVT LTD AT (+91) 22 67993158 OR EMAIL INFO@MVRPL.COM



Thursday, 13 February 2020

THERMAL IMAGING FOR SAFER AUTONOMOUS VEHICLES

For the automotive industry, pedestrian safety has been a serious concern since the horseless carriage. Londoner Arthur Edsall was the first driver to strike and kill a pedestrian in 1896 at a speed of four miles per hour. It took the U.S. Congress almost seventy years to impose automotive safety standards and mandate the installation of safety equipment and another thirty years before airbags became a required safety feature. Automotive safety standards in the United States are promulgated by a process of reviewing accidents after they have occurred.

Credits: pexels.com

In 2019, the National Transportation Safety Board (“NTSB”) finally addressed this standards-promulgation process in its Most Wanted List of transportation safety improvements, calling for an increase in the implementation of collision-avoidance systems in all new highway vehicles. This change in policy derived from the 2015 study (SIR-15/01) that described the benefits of forward-collision-avoidance systems and their ability to prevent thousands of accidents.

After that report was published, an agreement was reached with the National Highway Traffic Safety Administration (“NHTSA”) and the Insurance Institute for Highway Safety that would require compliance with the Automatic Emergency Braking standard (“AEB”) on all manufactured vehicles by 2022. However, the agreement did not identify the specific technology that would enable AEB, and the question remains whether such technology is readily available and economically viable for industry-wide adoption.

RAPIDLY IMPROVING SENSOR TECHNOLOGY


The pace of technology over the last thirty years has been astronomical, yet technology to make driving safer has not kept pace. A computer that not too long ago was the size of a garage now fits into the palm of your hand. Today driving should be safer than ever, but the reality is that without the implementation of available modern technologies, the uncertainties of the road will always be with us. According to the NHTSA, there were 37,461 traffic fatalities in 2016 in the United States.

In 2015, there were a total of 6,243,000 passenger car accidents. Globally, there is a fatality every twenty-five seconds and an injury every 1.25 seconds. In the United States there is a fatality every thirteen minutes and an injury every thirteen seconds. These statistics are mind-blowing. For comparison, when two Boeing 737 MAX 8 airplanes crashed, killing 346 people, the entire Boeing 737 MAX 8 fleet was grounded; yet the same number of people die as a result of automobile accidents every 144 minutes.

The cost of automotive accidents is high. According to the National Safety Council, in the United States the annual cost of health care resulting from cigarette smoking is approximately $300 billion, whereas the annual cost of health care for injuries arising from automobile accidents is roughly $415 billion.

Technology to protect automobile occupants has reduced the number of driver and passenger fatalities. However, the number of people who die outside the automobile as a result of an accident continues to climb at an alarming rate. Pedestrians are at the greatest risk, especially after dark.

The NHTSA reports that in 2018, 6,227 pedestrians were killed in United States traffic accidents, with seventy-eight percent of pedestrian deaths occurring at dusk, dawn, or night.[2] In the United States, pedestrian fatalities have increased forty-one percent since 2008. Solutions that address pedestrian fatalities are needed to meet the 2022 standards.

TECHNOLOGY IN THE DRIVER’S SEAT


Ultimately, it is safer cars and safer drivers that make driving safer, and automotive designers need to deploy every possible technological tool to improve driver awareness and make cars more automatically responsive to impending risks. Today’s safest cars can be equipped with a multitude of cameras and sensors to make them hyper-sensitive to the world around them and intelligent enough to take safe evasive action as needed. Microprocessors can process images and identify subject matter 1,000,000 times faster than a human being.

Advanced Driver Assist Systems (“ADAS”) are becoming the norm, spotting potential problems ahead of the automobile and making auto travel safer for drivers, passengers, and pedestrians, not to mention the more than one million reported animals struck by automobiles in the United States annually, resulting in $4.2 billion in insurance claims each year. The advances we have seen so far are the first steps toward a future of truly autonomous vehicles that will revolutionize both personal and commercial transportation.

Drivers need no longer rely on eyes alone to maintain situational awareness. Early generations of vision-assisting cameras were innovative, but they were not particularly intelligent and could do little to perceive the environment around the car and communicate information that could be used for driver decision-making.

Today, with tools such as radar, light detection and ranging (“LIDAR”), cameras, and ultrasound installed, a car knows much more about the environment than the driver does and can control the vehicle faster and safer than the human driver. Risky driving conditions such as rain, fog, snow, and glare, are less hazardous when a driver is assisted by additional onboard sensors and data processors.

One of the most advanced automotive sensors is a thermal sensor that allows a driver and the automobile to perceive the heat signature of anything ahead of the driver. Previously used mainly for military and commercial applications, early forms of night vision first came to the mainstream automotive market in the 2000 Cadillac DeVille, albeit as a cost-prohibitive accessory priced at nearly $3,000.

Since then, thermal cameras and sensors have become smaller, lighter, faster, and cheaper. After years of exclusive availability in luxury models, thermal sensors are now ready to take their place among other automotive sensors to provide a first line of driving defense that reaches far beyond the reach of headlights in all vehicles, regardless of the cost of the vehicle.


TO KNOW MORE ABOUT SEEK THERMAL CONTACT MENZEL VISION AND ROBOTICS PVT LTD CONTACT US AT (+ 91) 22 67993158 OR EMAIL US AT INFO@MVRPL.COM



Friday, 31 January 2020

THREE TRENDS DRIVING INDUSTRIAL AUTOMATION


Since its inception in the 1980s, machine vision has concerned itself with two things: improving the technology’s power and capability and making it easier to use. Today, machine vision is turning to higher-resolution cameras with greater intelligence to empower new automated solutions both on and off the plant floor — all with a simplicity of operation approaching that of the smartphone, which significantly reduces engineering requirements and associated costs.

And, just as other industries are benefiting from rapid advancements in technologies like big data, the cloud, artificial intelligence (AI), and mobile, so too will manufacturers, logistics operations, and other enterprises benefit from three key advances in machine vision for automation.


RAPIDLY IMPROVING SENSOR TECHNOLOGY


While 1-, 2-, and 5-megapixel (MP) cameras continue to make up the bulk of machine vision camera shipments, we’re seeing considerable interest in even higher-resolution smart cameras, up to 12 MP. High-resolution sensors mean that a single smart camera inspecting an automobile engine can do the work of several lower-resolution smart cameras while maintaining high-accuracy inspections.

Cognex’s patent-pending High Dynamic Range Plus (HDR+) image processing technology provides even better image fidelity than typical HDR, helping smart cameras inspect multiple areas across large objects where lighting uniformity is less than ideal. In the past, lighting variations could be mistaken for defects, or a feature might not be visible at all. Today, HDR+ reduces the effects of lighting variations, enabling applications in challenging environments that were beyond the capability of machine vision technology just a few years ago.
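HDR+ itself is proprietary, but the general principle behind HDR imaging, merging differently exposed frames of the same scene so that no region is lost to under- or over-exposure, can be sketched in a few lines. The weighting scheme below is a generic exposure-fusion illustration, not the Cognex algorithm:

```python
def fuse_exposures(frames):
    """Merge differently exposed frames (lists of pixel rows, values 0-255).

    Pixels near mid-gray carry the most information, so each frame is
    weighted by its distance from under/over-exposure. This is a generic
    exposure-fusion sketch, not the actual Cognex HDR+ algorithm.
    """
    h, w = len(frames[0]), len(frames[0][0])
    fused = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for frame in frames:
                v = frame[y][x]
                # Weight is 1.0 at mid-gray (127.5), near 0 at 0 and 255.
                weight = max(1e-6, 1.0 - abs(v - 127.5) / 127.5)
                num += weight * v
                den += weight
            fused[y][x] = round(num / den)
    return fused

# A dark frame and a bright frame of the same 2x2 scene:
dark   = [[10, 40], [200, 30]]
bright = [[80, 250], [255, 120]]
print(fuse_exposures([dark, bright]))
```

Well-exposed pixels dominate the merge, so detail survives in regions where either single frame would have been crushed to black or clipped to white.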

While advanced smart cameras run HDR+ technology on field-programmable gate arrays (FPGAs) to improve the quality of the acquired image at frame-rate speeds, complementary sensor technology, such as time-of-flight (ToF) sensors, is being incorporated to enable “distance-based dynamic focus”. The new high-powered integrated torch (HPIT) image formation system, which combines ToF distance measurement with high-speed liquid lens technology, is also making an impact by enabling dynamic autofocus at frame rate.
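As a rough illustration of distance-based dynamic focus, a ToF reading can be converted into the extra optical power a liquid lens must supply using the thin-lens relation. The function and parameter names below are illustrative, not a real camera API:

```python
def focus_power_for_distance(distance_mm):
    """Map a ToF distance reading to the added optical power a liquid
    lens needs to bring the object into focus.

    Thin-lens model: relative to the infinity-focus setting, the lens
    must add roughly 1/s diopters for an object at distance s (meters),
    i.e. 1000/distance_mm. Illustrative sketch, not a vendor API.
    """
    if distance_mm <= 0:
        raise ValueError("invalid ToF reading")
    return 1000.0 / distance_mm  # diopters of added power

# A tall parcel (300 mm away) vs. a distant tote (1200 mm away):
for d in (300, 1200):
    print(f"object at {d} mm -> add {focus_power_for_distance(d):.2f} D")
```

Because the ToF measurement arrives per frame, the lens drive value can be updated at frame rate, which is exactly the scenario the tunnel-sortation example below requires.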

The newest barcode readers incorporate HPIT capability for applications such as high-speed tunnel sortation and warehouse management in situations where packages and product size can vary significantly, requiring the camera to quickly adapt to different focal ranges.

INTEGRATION WITH DEEP LEARNING


Just like AI’s impact in other industries, deep learning vision software for factory automation is allowing enterprises to automate inspections that previously could only be done manually, or to more efficiently solve complex inspection challenges that are cumbersome or time-consuming with traditional rule-based machine vision.

The biggest use case driving investment in deep learning is the potential to redeploy, in many cases, hundreds of human inspectors by replacing manual checks with deep learning-based inspection systems. For the first time, manufacturers have an inspection technology that can achieve performance comparable to that of a human.

One example of how deep learning will benefit organizations is in defect detection inspection. Every manufacturer wants to eliminate industrial defects as much as possible and as early as possible in the manufacturing process to reduce downstream impacts that cost time and money.

Defect detection is challenging because it is nearly impossible to account for the sheer amount of variation in what constitutes a defect or what anomalies might fall within the range of acceptable variation. As a result, many manufacturers utilize human inspectors at the end of the process to perform a final check for unacceptable product defects. With deep learning, quality engineers can train a machine vision system to learn what is an acceptable or unacceptable defect from a data set of reference pictures rather than program the vision system to account for the thousands of defect possibilities.
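The idea of learning “acceptable” from reference images rather than programming rules can be caricatured in a few lines. The sketch below uses a trivial mean-and-tolerance model purely to illustrate the train-from-good-samples workflow; a real deep-learning system is vastly more capable:

```python
def train_good_model(good_images):
    """Learn what 'acceptable' looks like from reference images alone.

    Each image is a flat list of pixel values. We store the mean image
    and a tolerance derived from the training set, a drastically
    simplified stand-in for a deep-learning defect detector.
    """
    n, size = len(good_images), len(good_images[0])
    mean = [sum(img[i] for img in good_images) / n for i in range(size)]

    def deviation(img):
        """Average per-pixel distance from the learned mean image."""
        return sum(abs(p - m) for p, m in zip(img, mean)) / size

    # Tolerance: worst deviation among known-good samples, plus margin.
    threshold = max(deviation(img) for img in good_images) * 1.5
    return mean, threshold, deviation

good = [[100, 102, 99, 101], [101, 100, 100, 102], [99, 101, 102, 100]]
mean, threshold, deviation = train_good_model(good)

scratched = [100, 101, 30, 101]       # one dark pixel: a simulated defect
print("defect" if deviation(scratched) > threshold else "ok")  # prints "defect"
```

The key point mirrors the text: the engineer supplies labeled examples, not an enumeration of every possible defect.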

THE INTERNET OF THINGS


An important development for smart camera vision systems enabling Industry 4.0 initiatives is Open Platform Communications Unified Architecture (OPC UA). With contributions from all major machine vision trade associations around the world, OPC UA is an industrial interoperability standard developed to facilitate machine-to-machine communication.

Combined with advanced sensor technology and trends such as deep learning, OPC UA will help machine vision technology transition from a point solution to a bridge between the industrial world inside the plant and the physical world outside it. Today, vision systems and barcode readers are key sources of data for modern enterprises.

TO KNOW MORE ABOUT COGNEX MACHINE VISION SYSTEM CAMERAS IN INDIA CONTACT MENZEL VISION AND ROBOTICS PVT LTD CONTACT US AT (+ 91) 22 67993158 OR EMAIL US AT INFO@MVRPL.COM




WHAT ARE THE BENEFITS OF CMOS BASED MACHINE VISION CAMERAS VS CCD?




Industrial machine vision cameras have historically used CCD image sensors, but the industrial imaging marketplace is transitioning to CMOS imagers. Why? Sony, the primary supplier of image sensors, announced in 2015 that it would stop making CCD image sensors and is already past its last-time-buy date. The market was nervous at first, until the new CMOS image sensor designs arrived. The latest Sony Pregius image sensors provide increased performance at lower cost, making a compelling case for updating systems that use older CCD image sensors.


WHAT IS THE DIFFERENCE BETWEEN CCD AND CMOS IMAGE SENSORS IN MACHINE VISION CAMERAS?


Both produce an image by collecting light energy (photons) and converting it into an electrical charge, but the process is done very differently.
In CCD image sensors, each pixel collects light, but the charge is then moved across the circuit through vertical and horizontal shift registers, and the light level is sampled in the readout circuitry. Essentially, it's a bucket brigade that moves the pixel information around, which takes time and power. In CMOS sensors, each pixel has its readout circuitry located at the photosensitive site. The analog-to-digital circuit samples the information very quickly and eliminates artifacts such as smear and blooming. The pixel architecture has also changed radically, repositioning the photosensitive electronics to collect light more efficiently.
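The difference in readout architecture can be sketched in code. The shift count below is a deliberately simplified model, intended only to show why the CCD bucket brigade's cost grows with sensor size while CMOS readout happens in place:

```python
def ccd_readout(sensor):
    """CCD: charge is shifted through vertical and horizontal registers
    (the 'bucket brigade') before being sampled. We count the transfer
    steps to show why readout time grows with sensor size.
    Simplified model for illustration only."""
    rows, cols = len(sensor), len(sensor[0])
    values, shifts = [], 0
    for r in range(rows):
        for c in range(cols):
            # Each pixel's charge travels roughly r vertical + c
            # horizontal steps plus one sampling step.
            shifts += r + c + 1
            values.append(sensor[r][c])
    return values, shifts

def cmos_readout(sensor):
    """CMOS: every pixel has its own readout circuit, so all values are
    sampled at the photosite with no charge transfer."""
    return [v for row in sensor for v in row], 0

frame = [[5, 9], [3, 7]]
print(ccd_readout(frame))   # same values, but transfer steps accumulate
print(cmos_readout(frame))  # same values, zero transfers
```

Both paths recover identical pixel values; the CCD simply pays a transfer cost (time, power, and transfer-related artifacts such as smear) that the CMOS architecture avoids.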


Courtesy of Automated Imaging Association

6 ADVANTAGES OF CMOS IMAGE SENSORS VS CCD


    There are many advantages of CMOS versus CCD, outlined below: 

  • 1 – Higher Sensitivity due to the latest pixel architecture which is beneficial in lower light applications.

  • 2 – Lower dark noise will contribute to a higher fidelity image.

  • 3 – Pixel well depth (saturation capacity) is improved providing higher dynamic range.

  • 4 – Lower Power consumption. This becomes important as lower heat dissipation equals a cooler camera and less noise.

  • 5 – Lower cost! 5-megapixel cameras used to cost ~$2,500 and only achieve 15 fps; they now cost ~$450 with increased frame rates.

  • 6 – Smaller pixels reduce the sensor format, decreasing the lens cost.
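The link between points 2 and 3 and dynamic range can be made concrete: dynamic range is the ratio of saturation capacity (well depth) to the noise floor, usually expressed in decibels. The sensor figures below are illustrative orders of magnitude, not specifications for any particular device:

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Dynamic range = 20 * log10(saturation capacity / noise floor),
    both expressed in electrons. Standard sensor-characterization
    formula; the example figures are illustrative, not device specs."""
    return 20 * math.log10(full_well_e / read_noise_e)

# Hypothetical modern CMOS: moderate well, very low dark/read noise.
print(f"CMOS example: {dynamic_range_db(15000, 2):.1f} dB")
# Hypothetical older CCD: deeper well, but much higher noise floor.
print(f"CCD example:  {dynamic_range_db(20000, 10):.1f} dB")
```

The arithmetic shows why lower dark noise matters as much as well depth: cutting the noise floor by 5x buys more dynamic range than a modest increase in saturation capacity.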

WHAT CMOS IMAGE SENSORS CROSS OVER FROM EXISTING CCD IMAGE SENSORS?

MVRPL can help in the transition, starting by crossing over from CCD to CMOS using the following cross-reference chart. Once the sensor is identified, use the camera selector and select it from the pull-down menu.
CCD to CMOS cross reference chart

TO KNOW MORE ABOUT INDUSTRIAL MACHINE VISION CAMERAS DEALER IN MUMBAI INDIA CONTACT MENZEL VISION AND ROBOTICS PVT LTD CONTACT US AT (+ 91) 22 67993158 OR EMAIL US AT INFO@MVRPL.COM



HOW CAN A SYSTEM INTEGRATOR SUPPORT ENVIRONMENTAL SUSTAINABILITY?



Sustainability is on the mind of many companies, but are they using one of their most valuable assets? A system integrator may be the missing link to making your organization green.

I will not write about the effects of climate change, because we all know how critical the situation is. The focus here is on how everyone can contribute to reducing the environmental impact of our activities. We are all focused on recycling, reducing the use of plastic, avoiding waste, and keeping transportation as ecological as possible.

BUT HOW CAN A COMPANY BE MORE AND MORE SUSTAINABLE, AND HOW CAN A SYSTEM INTEGRATOR SUPPORT THESE INITIATIVES?


The Intergovernmental Panel on Climate Change (IPCC), the United Nations body for assessing the science related to climate change, states in its 2019 Special Report “Global Warming of 1.5 °C,” which focused on how reducing CO2 emissions can help contain global warming: “The industry sector is the largest end-use sector, both in terms of final energy demand and GHG [greenhouse gas] emissions. Its direct CO2 emissions currently account for about 25% of total energy-related and process CO2 emissions, and emissions have increased at an average annual rate of 3.4% between 2000 and 2014, significantly faster than total CO2 emissions”.

Most emissions are due to the combustion of fossil fuels, non-energy uses of fossil fuels in the petrochemical industry, and metal smelting—but transportation and electricity production also contribute.

In addition to emissions reduction, there are a lot of other guidelines to follow in order to have less of an impact on the environment. The four areas a company can intervene in to improve sustainability are:
  • Business: At the highest level, a sustainability strategy has to be implemented, with tradeoffs and priorities evaluated against the defined goals. For example, when planning new construction, the location of the plants has to be considered so that natural resources can be leveraged to lessen the impact on the environment, and so that the transportation that will later be required can be optimized early. Upgrades to existing facilities should also improve the infrastructure.

  • Supply chain: The supply chain can be designed to optimize the network together with the carbon footprint of routes. Policies regarding supplier packaging can be introduced, and the right tradeoffs between just-in-time (JIT) delivery and emissions should be evaluated. Transport can, in some cases, be eliminated entirely using additive manufacturing, where a supplier sends a file to be 3D printed instead of shipping a good: a completely digital process that produces no transport emissions.

  • Design and engineering: When designing a product, many factors can support the company's journey toward sustainability: energy efficiency, carbon footprint, the use of alternative materials, and an energy bill of materials (BOM) that complements the product BOM. Packaging and end-of-life disposal (of both the product and its packaging) can also be optimized during design.

    When engineering the process, some of the goals should be reducing asset energy, water, carbon, and waste burdens and improving flexibility.

  • Operations: On the operations side, production scheduling and detailed scheduling are fundamental instruments, along with energy, water, and waste management.

    Production efficiency is one of the keys of sustainability and can be achieved only by integrating an efficient production process with asset energy monitoring and maintenance.

    Integrating an overall equipment effectiveness (OEE) calculation system can help optimize efficiency. The higher the efficiency, the lower the waste and consumption.
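For reference, the standard OEE formula multiplies availability, performance, and quality; the shift figures in the sketch below are invented for illustration:

```python
def oee(planned_time, run_time, ideal_cycle_time, total_count, good_count):
    """Overall Equipment Effectiveness = Availability x Performance x Quality.

    Standard OEE definition: availability is run time over planned time,
    performance is ideal output time over actual run time, and quality
    is the good-unit yield. Input figures below are illustrative.
    """
    availability = run_time / planned_time
    performance = (ideal_cycle_time * total_count) / run_time
    quality = good_count / total_count
    return availability * performance * quality

# An 8-hour (480 min) shift: 60 min of downtime, 1.0 min ideal cycle,
# 380 units produced, 361 of them good.
score = oee(planned_time=480, run_time=420, ideal_cycle_time=1.0,
            total_count=380, good_count=361)
print(f"OEE = {score:.1%}")  # prints OEE = 75.2%
```

Tracking the three factors separately shows whether losses come from downtime, slow cycles, or scrap, which is what makes OEE useful for targeting waste.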


SEVERAL BIG COMPANIES HAVE ALREADY STARTED A SUSTAINABILITY JOURNEY:

    • More than 10 years ago, Walmart began its journey. In 2017, it started Project Gigaton, involving its suppliers with the goal of avoiding the production of a gigaton of greenhouse gases. All the goals a supplier sets must be SMART (Specific, Measurable, Achievable, Relevant, Time-limited), and for each goal reached, a supplier gains “credit”.

    • Google has the goal of operating entirely on renewable energy. Google also requires its supply chain to be sustainable, improving its energy performance and scaling the deployment of renewable energy sources.

    • Amazon is committed to meeting the Paris Agreement 10 years early, using 100% renewable energy by 2030. It is also raising awareness among suppliers by requiring them to use proper packaging to reduce packaging waste throughout the supply chain.

    There are many other examples of smaller companies pursuing sustainability. Energy consumption is often monitored and measured, together with water consumption and waste production, in order to optimize use. Many other initiatives are rolled out to influence people's behavior: many companies promote the reuse of materials and reduce the use of plastic (removing plastic cups and providing water bottles to employees, for example).

    Others are encouraging remote working to reduce the CO2 emissions related to commuting. We can probably say that most companies have already implemented some kind of sustainability program.

    For many years, system integrators have been involved in traditional efficiency-improvement activities like motor replacement, inverter installation, and many others strictly related to reducing energy usage. But there is much more a system integrator can do to help a company reduce its environmental impact through operations optimization.

    Combining information from the production plant with energy consumption data and point-of-sale (POS) details can produce reports that identify possible optimizations that would not be apparent from any one data set alone. Implementing OEE systems on production lines helps increase efficiency and availability. Scheduling can also be improved thanks to data collected from the field. These are only a few examples of the benefits an interconnected system can deliver.
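A minimal sketch of such a cross-source report, joining hourly production counts with hourly energy readings to compute kWh per good unit, is shown below; the data and field names are invented for illustration:

```python
def energy_per_unit(production_log, energy_log):
    """Join hourly production counts with hourly energy readings and
    compute kWh per good unit, the kind of cross-source report the text
    describes. Data layout and field names are illustrative."""
    energy_by_hour = {rec["hour"]: rec["kwh"] for rec in energy_log}
    report = {}
    for rec in production_log:
        kwh = energy_by_hour.get(rec["hour"])
        if kwh is not None and rec["good_units"] > 0:
            report[rec["hour"]] = kwh / rec["good_units"]
    return report

production = [{"hour": "08:00", "good_units": 120},
              {"hour": "09:00", "good_units": 80}]
energy = [{"hour": "08:00", "kwh": 60.0},
          {"hour": "09:00", "kwh": 56.0}]
print(energy_per_unit(production, energy))
# {'08:00': 0.5, '09:00': 0.7} -> the 09:00 run used 40% more energy per unit
```

Neither log reveals the inefficiency on its own; only the joined view shows that the slower run also consumed more energy per unit, which is the kind of insight an interconnected system surfaces.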

    Regulations will probably change to hold companies accountable whenever they aren't “green.” Can you imagine the future of your company without a sustainability project?

    TO KNOW MORE ABOUT CCTV CAMERA DEALER IN INDIA CONTACT MENZEL VISION AND ROBOTICS PVT LTD CONTACT US AT (+ 91) 22 67993158 OR EMAIL US AT INFO@MVRPL.COM