Machine vision has long been used in industrial automation systems to improve production quality and yield by replacing traditional manual inspection. From pick and place and object tracking to metering, defect detection, and more, visual data can improve overall system performance by providing simple pass/fail information or closed-loop control.
The use of vision is not limited to industrial automation; cameras have also become common in everyday life, appearing in computers, mobile devices, and especially automobiles. Cameras were introduced into cars only a few years ago, yet vehicles are now equipped with numerous cameras that give the driver a complete 360° view around the vehicle.
But arguably the biggest technological advance in machine vision has been processing power. With processor performance doubling roughly every two years and a sustained industry focus on parallel processing technologies such as multicore CPUs, GPUs, and FPGAs, vision system designers can now apply highly sophisticated algorithms to visual data and create more intelligent systems.
Advances in processing technology have created new opportunities beyond smarter or more powerful algorithms. Consider, for example, adding vision capability to a manufacturing machine. Such systems have traditionally been designed as a network of intelligent subsystems that together form a cooperative distributed system, allowing a modular design (see Figure 1).
However, as system performance requirements increase, this hardware-centric approach becomes difficult to sustain, because these subsystems are typically coupled using a mix of time-critical and non-time-critical protocols. Stitching the different systems together across multiple communication protocols can create bottlenecks in latency, determinism, and throughput.
For example, a designer attempting to develop an application on this distributed architecture while maintaining tight integration between the vision and motion systems, as required in visual servoing, may run into serious performance challenges caused by insufficient processing power. In addition, because each subsystem has its own controller, overall processing efficiency suffers.
Finally, because of this hardware-centric distributed approach, designers must use different design tools: vision-specific software for the vision subsystem and motion-specific software for the motion subsystem. This is especially challenging for smaller design teams, where a small group, or even a single engineer, is responsible for many parts of the design.
Fortunately, there is a better way to design these advanced machines and equipment, one that reduces complexity, improves integration, lowers risk, and shortens time to market. What if we shifted our thinking from a hardware-centric to a software-centric design approach (see Figure 2)? With a single design tool capable of handling the different tasks, designers can mirror the modularity of the mechanical system in their software.
Figure 2: A software-centric design approach allows designers to simplify the control system architecture by consolidating different automation tasks (including visual inspection, motion control, I/O, and HMI) into a single powerful embedded system.
This allows designers to simplify the control system architecture by consolidating different automation tasks (including visual inspection, motion control, I/O, and HMI) into a single powerful embedded system (see Figure 3). It also eliminates the subsystem communication challenge, because all subsystems now run in the same software stack on a single controller. High-performance embedded vision systems are ideal candidates for this centralized controller, because the required capabilities are already built into these devices.
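To make this pattern concrete, here is a minimal sketch, written in Python rather than the LabVIEW environment the article describes, of vision, motion, and HMI tasks running as parallel loops inside a single application and exchanging data through in-memory queues instead of network protocols. All device functions (grab_image, find_part, move_to) are hypothetical stand-ins, not a real driver API.

```python
import queue
import random
import threading
import time

def grab_image():
    """Stand-in for a camera driver call."""
    return None

def find_part(image):
    """Stand-in for a vision algorithm; returns a part pose (x, y, theta)."""
    return (random.uniform(0, 100), random.uniform(0, 100), random.uniform(-180, 180))

def move_to(x, y):
    """Stand-in for a motion driver call."""
    time.sleep(0.01)

# In-memory queues replace the Ethernet links of the distributed design.
vision_to_motion = queue.Queue(maxsize=1)
status_to_hmi = queue.Queue()

def vision_loop():
    while True:
        vision_to_motion.put(find_part(grab_image()))  # in-process handoff, no network hop

def motion_loop():
    while True:
        x, y, theta = vision_to_motion.get()
        move_to(x, y)
        status_to_hmi.put(("picked", round(x, 1), round(y, 1)))

def hmi_loop():
    while True:
        print("HMI status:", status_to_hmi.get())

for task in (vision_loop, motion_loop, hmi_loop):
    threading.Thread(target=task, daemon=True).start()
time.sleep(0.1)  # let the loops run briefly before the demo exits
```

Because the three loops share one process, the handoffs between them are ordinary memory operations rather than network transactions, which is the source of the determinism discussed below.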
Figure 3: A heterogeneous architecture combining a processor with an FPGA and I/O is ideal not only for designing high-performance vision systems but also for integrating motion control, HMI, and I/O.
Let's look at some of the benefits of this centralized processing architecture, taking as an example a vision-guided motion application such as flexible feeding, where the vision system guides the motion system. Here, the position and orientation of incoming parts are random. At the start of the task, the vision system captures an image of the part to determine its position and orientation and passes this information to the motion system.
The motion system then translates the image coordinates into actuator coordinates, moves to the part, and picks it up. It can also use the orientation information to correct the part's rotation before placing it. In this way, the designer can eliminate any fixtures previously used to orient and position parts. This not only reduces cost but also lets the application adapt to new part designs with software modifications alone, as the sketch below illustrates.
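Below is a minimal sketch of that locate-and-pick flow, assuming a simple 2D camera calibration (scale, rotation, and offset between the image frame and the robot frame). The calibration constants and the motion calls are hypothetical, chosen only to illustrate the coordinate transformation.

```python
import math

# Assumed (hypothetical) calibration constants obtained offline:
MM_PER_PIXEL = 0.25        # pixel size expressed in robot-frame millimeters
CAM_ROT_DEG = 1.5          # camera rotation relative to the robot frame
OFFSET_MM = (120.0, 80.0)  # camera origin expressed in robot coordinates

def pixel_to_robot(px, py):
    """Map image coordinates (pixels) into robot coordinates (mm)."""
    r = math.radians(CAM_ROT_DEG)
    x, y = px * MM_PER_PIXEL, py * MM_PER_PIXEL
    return (OFFSET_MM[0] + x * math.cos(r) - y * math.sin(r),
            OFFSET_MM[1] + x * math.sin(r) + y * math.cos(r))

def pick_part(part_px, part_py, part_angle_deg):
    """Move to the part located by vision and undo its random orientation."""
    x, y = pixel_to_robot(part_px, part_py)
    print(f"move_to({x:.1f}, {y:.1f})")          # stand-in for a motion API call
    print(f"rotate_gripper({-part_angle_deg})")  # correct orientation before placing
    print("close_gripper()")

# Pose as reported by the vision system (illustrative values only):
pick_part(412, 287, 33.0)
```

Note that adapting to a new part design would change only the vision template and these software parameters; no mechanical fixture would need to be rebuilt.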
A key advantage of the hardware-centric architecture is its scalability, due largely to the Ethernet links between systems. However, communication over these links deserves special attention: as mentioned earlier, the challenges with this approach are the non-determinism and limited bandwidth of Ethernet.
This is acceptable for most vision-guided motion tasks in which guidance is provided only at the start of the task; in other situations, however, variation in latency can become a problem. Moving such a design to a centralized processing architecture brings many advantages.
First, because the vision and motion systems can be developed in the same software environment, designers do not need to be familiar with multiple programming languages or environments, which reduces development complexity. Second, potential performance bottlenecks on the Ethernet network are eliminated, because data is now passed between loops within a single application rather than across a physical layer.
This makes the operation of the entire system deterministic, because everything shares the same process. That is especially valuable when vision is brought directly into the control loop, as in visual servoing applications. Here, the vision system continuously captures images of the actuator and the target part during the motion until the motion is complete, and these images provide feedback on how the move is progressing. With this feedback, designers can increase the accuracy and precision of existing automation without upgrading to higher-performance motion hardware.
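As a rough illustration of this closed loop, the sketch below simulates a visual servoing cycle: each iteration "captures" a measurement, compares it against a tolerance, and commands a small corrective move. The proportional-gain scheme and the stand-in functions measure_error and jog are assumptions for illustration, not the implementation described in the article.

```python
TOLERANCE_MM = 0.05  # stop when the remaining offset is below this
GAIN = 0.5           # proportional gain: correct a fraction of the error per cycle

# Simulated actuator-to-target offset; in a real system this would come
# from processing each newly captured image.
position = [10.0, -4.0]

def measure_error():
    """Stand-in: capture an image and measure the actuator-to-target offset (mm)."""
    return tuple(position)

def jog(dx, dy):
    """Stand-in: issue a small relative corrective move to the actuator."""
    position[0] -= dx
    position[1] -= dy

def visual_servo():
    cycles = 0
    while True:
        dx, dy = measure_error()              # vision runs inside the control loop
        if (dx * dx + dy * dy) ** 0.5 < TOLERANCE_MM:
            return cycles                     # target reached within tolerance
        jog(GAIN * dx, GAIN * dy)             # corrective move from image feedback
        cycles += 1

print("converged after", visual_servo(), "cycles")
```

The key point is the loop rate: because image capture, processing, and the motion command all run on one controller, each cycle completes in a bounded, repeatable time, which is what makes closing the loop through vision practical.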
This raises a question: what does such a system look like? To meet the computational and control needs of the machine vision system while connecting seamlessly to other systems such as motion control, HMI, and I/O, designers need a hardware architecture with the required performance, together with the intelligence and control that each of these systems demands.
A good choice for this type of system is a heterogeneous processing architecture that combines a processor and an FPGA with I/O. The industry has already invested heavily in this architecture, as seen in Xilinx's Zynq All Programmable SoC (which combines ARM processors with Xilinx 7-series FPGA fabric) and Intel's multibillion-dollar acquisition of Altera.
For vision systems, FPGAs are particularly beneficial because of their inherent parallelism: an algorithm can be broken apart and run along thousands of parallel paths that remain completely independent of one another. Moreover, the benefits of this architecture are not limited to vision; they extend to motion control systems and I/O as well.
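As a small illustration of why image algorithms map so well onto FPGA fabric, consider a 3x3 convolution (a Python sketch only; an FPGA version would be written in LabVIEW FPGA, HDL, or HLS): every output pixel depends only on its local neighborhood, so the iterations are mutually independent and can be unrolled into parallel pipelined hardware.

```python
def convolve3x3(img, k):
    """3x3 convolution; every output pixel depends only on its local neighborhood."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):        # each (y, x) iteration is independent:
        for x in range(1, w - 1):    # no iteration reads another iteration's output,
            out[y][x] = sum(         # so an FPGA can unroll them into parallel pipelines
                img[y + i][x + j] * k[i + 1][j + 1]
                for i in (-1, 0, 1) for j in (-1, 0, 1)
            )
    return out

img = [[(x * y) % 7 for x in range(6)] for y in range(6)]
kernel = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]  # Laplacian edge-detection kernel
print(convolve3x3(img, kernel)[2])           # one row of the filtered image
```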
Processors and FPGAs can be used to perform advanced processing, calculations, and decision making, and designers can connect to virtually any sensor or actuator on any bus through analog and digital I/O, industrial protocols, custom protocols, and relays. This architecture also addresses requirements such as timing and synchronization, along with business challenges such as productivity. Everyone wants to develop products faster, and this architecture eliminates the need for large, specialized design teams.
Unfortunately, while this architecture offers substantial performance and scalability, implementing it traditionally requires deep expertise, especially with FPGAs. That poses a significant risk to the designer and can make the architecture impractical or even impossible to adopt. With integrated software such as NI LabVIEW, however, designers can increase productivity and reduce risk, because the low-level complexity is abstracted away and all the required technologies are combined in a single development environment.
Theory is one thing; putting it into practice is another. Master Machinery is a Taiwanese company that produces semiconductor processing equipment (see Figure 4). This particular machine uses a combination of machine vision, motion control, and industrial I/O to remove chips from a silicon wafer and package them. It is an example of a machine that could use the distributed architecture of Figure 1, in which each subsystem is developed separately and then integrated over a network.
Figure 4: Using a centralized, software-centric approach, Master Machinery integrated its host controller, machine vision and motion systems, I/O, and HMI into a single controller with 10 times the performance of its competitors.
Machines of this type typically process about 2,000 parts per hour in this industry, but Master Machinery took a different approach. They designed a centralized, software-centric architecture, integrating the host controller, machine vision and motion systems, I/O, and HMI into a single controller, all programmed with LabVIEW. Beyond the cost savings of consolidating subsystems, this approach delivered a decisive performance advantage: approximately 20,000 parts per hour, 10 times the throughput of competing products.
One of the keys to Master Machinery's success is combining multiple subsystems, specifically the machine vision and motion control systems, into a single software stack. With this unified approach, Master Machinery simplified not only the design of the machine vision system but the design of the entire machine.
Machine vision is a complex task that demands substantial processing power. As Moore's Law continues to raise the performance of processing components such as CPUs, GPUs, and FPGAs, designers can use them to develop highly sophisticated algorithms. They can also use that processing power to improve the performance of other parts of the design, particularly motion control and I/O.
As the performance of all these subsystems grows, the traditional distributed architecture used to build such machines comes under pressure. Consolidating these tasks onto a single controller running a single software environment eliminates those bottlenecks, letting designers focus on innovation rather than implementation.