In recent years, electronics have followed a clear miniaturization trend. ICs have become ever more highly integrated, and circuit boards have become smaller and more powerful, which in turn has made PCs, mobile phones and cameras increasingly compact and capable. The same trend can be observed in the world of vision technology.
A classic machine vision system consists of an industrial camera and a PC: both were significantly larger just a few years ago. Within a short time, ever smaller PCs became possible, and the industry saw the introduction of single-board computers: complete computers built on a single circuit board. At the same time, camera electronics became more compact and cameras successively smaller. As a further step toward integration, small cameras without housings are now offered that can easily be built into space-constrained systems.
Thanks to these two developments, the shrinking of both the PC and the camera, it is now possible to design highly compact vision systems for new applications. Such systems are called embedded (vision) systems.
Design and use of an embedded vision system
An embedded vision system typically consists of a board level camera connected to a processing board. The processing board takes over the tasks of the PC in the classic machine vision setup. Because processing boards are much cheaper than classic industrial PCs, vision systems can become both smaller and more cost-effective. The interfaces used for embedded vision systems are primarily USB and Basler BCON for LVDS.
Embedded vision systems are used in a wide range of applications and devices, such as medical technology, vehicles, industry and consumer electronics. Embedded systems make new products possible and open up innovative opportunities in many areas.
Which embedded systems are available?
Popular single-board computers (SBCs), such as the Raspberry Pi®, are widely used as embedded systems. The Raspberry Pi® is a mini-computer with established interfaces that offers a range of features similar to a classic PC or laptop.
Embedded vision solutions can also be implemented with so-called systems on modules (SoMs) or computers on modules (CoMs). These modules contain the computing unit; to adapt the desired interfaces to the respective application, an individual carrier board is needed. The carrier board is connected to the SoM via specific connectors and can be designed and manufactured relatively simply. SoMs and CoMs are cost-effective because they are available off the shelf, yet they can still be individually customized through the carrier board. For large production quantities, fully individual processing boards are a good choice.
All of these modules, single-board computers and SoMs alike, are based on a system on chip (SoC): a single chip that integrates the processor(s), controllers, memory, power management and other components. It is thanks to these efficient SoCs that embedded vision systems have become available at today's small size and low cost.
Characteristics of embedded vision systems versus standard vision systems
Most of the above-mentioned single-board
computers and SoMs do not include the x86 family processors common in
standard PCs. Rather, the CPUs are often based on the ARM architecture.
The open-source Linux operating system is the most widely used in the world of ARM processors. For Linux, there is a large number of open-source application programs as well as numerous freely available program libraries.
Increasingly, however, x86-based single-board computers are also spreading.
A consistently important criterion for the computer is the space available for the embedded system.
For the software developer, program development for an embedded system differs from that for a standard PC. As a rule, the target system does not provide a user interface suitable for programming. The developer must either connect to the embedded system via an appropriate interface, if one is available (e.g. a network interface), or develop the software on a standard PC and then transfer it to the target system.
When developing the software, keep in mind that the hardware concept of an embedded system is oriented to a specific application and thus differs significantly from the universally usable PC.
However, the boundary between embedded and
desktop computer systems is sometimes difficult to define. Just think of
the mobile phone, which on the one hand has many features of an
embedded system (ARM-based, single-board construction), but on the other
hand can cope with very different tasks and is therefore a universal
computer.
What are the benefits of embedded vision systems?
Much depends on how the embedded vision system is designed. A single-board computer is often a good choice because it is a standard product: a small, compact computer that is easy to use. This option is also useful for developers who have had little prior experience with embedded vision.
On the other hand, a single-board computer contains unused components and thus generally does not allow the leanest system configuration, which makes it best suited for small to medium quantities. The leanest setup is obtained with a customized system; here, however, the integration effort is higher, so this solution is suited to large unit numbers.
The benefits of embedded vision systems at a glance:
- Lean system design
- Light weight
- Cost-effective, because there is no unnecessary hardware
- Lower manufacturing costs
- Lower energy consumption
- Small footprint
Which interfaces are suitable for an embedded vision application?
Embedded vision is the technology of choice for
many applications. Accordingly, the design requirements are widely
diversified. Depending on the specification, Basler offers a variety of cameras with different sensors, resolutions and interfaces.
The two interface technologies that Basler offers for embedded vision systems in the portfolio are:
- USB3 Vision for easy integration and
- Basler BCON for LVDS for a lean system design
Both technologies work with the same Basler pylon SDK, making it easier to switch from one interface technology to the other.
USB3 Vision
USB 3.0 is the right interface for a simple plug-and-play camera connection and is ideal for connecting cameras to single-board computers. The Basler pylon SDK gives you easy access to the camera (for example, to images and settings) within seconds, since USB 3.0 cameras are standard-compliant and GenICam compatible.
Benefits
- Easy connection to single-board computers with USB 2.0 or USB 3.0 connection
- Field-tested solutions with Raspberry Pi®, NVIDIA Jetson TK1 and many other systems
- Cost-effective solutions for SoMs with associated base boards
- Stable data transfer with a bandwidth of up to 350 MB/s
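To get a feel for what a 350 MB/s interface means in practice, a quick back-of-envelope calculation relates bandwidth to frame rate. The sketch below is plain Python; the resolution, pixel format and zero-overhead assumption are illustrative choices, not Basler figures.

```python
# Back-of-envelope: interface bandwidth alone caps the achievable frame rate.
# Assumptions (ours, not from the text): 8-bit mono pixels, 1 MB = 10^6 bytes,
# and no protocol overhead -- real throughput will be somewhat lower.

def max_fps(bandwidth_mb_s, width, height, bytes_per_pixel=1):
    """Upper bound on frames per second for a given interface bandwidth."""
    frame_bytes = width * height * bytes_per_pixel
    return bandwidth_mb_s * 1_000_000 / frame_bytes

# USB3 Vision at ~350 MB/s driving a 1920x1080 mono8 sensor:
print(round(max_fps(350, 1920, 1080)))  # about 169 frames/s
```

The same formula shows why higher resolutions or multi-byte pixel formats quickly eat into the available bandwidth.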
BCON for LVDS
BCON, Basler's proprietary LVDS-based interface, allows a direct camera connection to processing boards and thus also to on-board logic modules such as FPGAs (field programmable gate arrays) or comparable components. This enables a lean system design, and you benefit from a direct board-to-board connection and data transfer.
The interface is therefore ideal for connecting to an SoM on a carrier/adapter board or to an individually developed processor unit.
If your system is FPGA-based, you can fully use its advantages with the BCON interface.
BCON is designed with a 28-pin ZIF connector for flat flex cables. It carries the 5 V power supply together with the LVDS lanes for image data transfer and image triggering. The camera is configured via lanes that comply with the I²C standard.
Basler's pylon SDK is tailored to work with the BCON for LVDS interface. It is therefore easy to change settings such as exposure control, gain and image properties from your software code via pylon's API. The image acquisition itself must be implemented individually, as it depends on the hardware used.
Benefits
- Image processing directly on the camera. This results in the highest image quality, without compromising the very limited resources of the downstream processing board.
- Direct connection via LVDS-based image data exchange to FPGA
- With the pylon SDK, camera configuration is possible via the standard I²C bus without further programming; compatibility with the GenICam standard is given.
- The image data software protocol is openly and comprehensively documented
- Development kit with reference implementation available
- Flexible flat flex cable and small connector for applications with maximum space limitations
- Stable, reliable data transfer with a bandwidth of up to 252 MB/s
How can an embedded vision system be developed and how can the camera be integrated?
Developing an embedded vision system may be new territory for developers who have not had much to do with embedded technology, but there are many ways to approach it. In particular, the switch from a standard machine vision system to an embedded vision system can be made easy. In addition to its embedded product portfolio, Basler offers many tools that simplify integration.
Find out how you can develop an embedded vision system and how easy it is to integrate a camera in our simpleshow video.
Machine learning in embedded vision applications
Embedded vision systems often have the task of classifying the images captured by the camera: sorting biscuits on a conveyor belt into round and square ones, for example. In the past, software developers spent a lot of time and energy on intelligent algorithms designed to classify a biscuit by its characteristics (features) as type A (round) or type B (square). In this example that may sound relatively simple, but the more complex an object's features are, the more difficult the task becomes.
Machine learning algorithms (e.g., convolutional neural networks, CNNs), by contrast, do not require hand-crafted features as input. If the algorithm is presented with large numbers of images of round and square biscuits, together with the information about which image shows which variety, it automatically learns how to distinguish the two types. When shown a new, unknown image, it decides for one of the two varieties based on its "experience" of the images already seen. These algorithms run particularly fast on graphics processing units (GPUs) and FPGAs.
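The learning-from-examples idea can be shown with a deliberately tiny sketch. The code below is not a CNN; it is a nearest-neighbour toy in plain Python on synthetic 16x16 biscuit images. All names and image sizes are illustrative. The key point survives even at this scale: the classifier receives only labelled raw images, no hand-crafted "round vs. square" features, and still sorts an unseen image correctly.

```python
# Toy illustration of learning from labelled examples (deliberately NOT a CNN):
# a nearest-neighbour classifier that sees only raw pixels plus labels.

def make_biscuit(shape, size=16, r=5, offset=(0, 0)):
    """Render a synthetic binary image of a round or square biscuit."""
    cy, cx = size // 2 + offset[0], size // 2 + offset[1]
    if shape == "round":
        inside = lambda y, x: (y - cy) ** 2 + (x - cx) ** 2 <= r * r
    else:  # square
        inside = lambda y, x: abs(y - cy) <= r and abs(x - cx) <= r
    return [[1.0 if inside(y, x) else 0.0 for x in range(size)]
            for y in range(size)]

def distance(a, b):
    """Squared pixel-wise distance between two images."""
    return sum((pa - pb) ** 2
               for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

# "Training": labelled example images at slightly different positions.
train = [(make_biscuit(s, offset=o), s)
         for s in ("round", "square")
         for o in ((0, 0), (1, 0), (0, 1), (-1, -1))]

def classify(img):
    """The label of the nearest training image wins."""
    return min(train, key=lambda t: distance(img, t[0]))[1]

# A previously unseen, shifted biscuit is still classified correctly:
print(classify(make_biscuit("square", offset=(1, 1))))  # square
```

A real CNN replaces the raw-pixel distance with learned convolutional features, but the workflow is the same: labelled examples in, a decision rule out.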
To Know More About Basler Camera Distributor in India, Contact Menzel Vision and Robotics Pvt Ltd at (+ 91) 22 67993158 or Email us at info@mvrpl.com
Contact Details
Address: 4, A-Wing, Bezzola Complex,
Sion Trombay Road, Chembur
400071 Mumbai, India
Tel: (+91) 22 67993158
Fax: (+91) 22 67993159
Mobile: +91 9323786005 / 9820143131
E-mail: info@mvrpl.com