On July 28, the "2021 (4th) Gaogong Robot Integrator Conference and Top Ten System Integrators Award Ceremony," hosted by Gaogong Robot and the Gaogong Robot Industry Research Institute (GGII), was held in Shenzhen. Yu Shufan, Sales Director of Shikepu, delivered a keynote speech titled "The Robot's Intelligent 3D 'Eye': The Integration of Knowledge and Action," sharing the current state of the machine vision market and Shikepu's exploration and applications in 3D vision.
Benefiting from the continued expansion of the manufacturing sector, rising levels of intelligent automation, and favorable national policies, China's machine vision market keeps growing. GGII projects that machine vision's momentum will continue in 2021, with annual growth of 20% to 30%, and that the market will reach 15.56 billion yuan by 2023. Against this positive backdrop, Yu Shufan noted that factory automation (FA) accounts for the largest share of industrial vision, and that combining 3D vision technology with robots can effectively solve factory automation problems. Focusing on industrial scenarios, Shikepu has launched a complete set of standardized industrial solutions covering four parts: first, recognition, i.e., how 3D vision identifies parts; second, motion path planning, i.e., how to enable the robot to avoid collisions and reach parts along a better, faster path; third, grasping, i.e., how the gripper coordinates with the robot's "eyes" and "brain"; and fourth, precise placement.
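The four-stage flow described above can be sketched as a simple pipeline. The function and field names below are illustrative assumptions for this sketch, not Shikepu's actual software or API; the path planner is a stub that merely returns waypoints.

```python
# Illustrative sketch of a four-stage 3D bin-picking pipeline:
# recognition -> motion path planning -> grasping -> precise placement.
# All names here are hypothetical, not Shikepu's actual API.
from dataclasses import dataclass


@dataclass
class Part:
    pose: tuple          # (x, y, z) position of the recognized part
    picked: bool = False
    placed: bool = False


def recognize(scene: list) -> list:
    """Stage 1: 3D vision identifies parts and their poses in the scene."""
    return [Part(pose=p) for p in scene]


def plan_path(part: Part, obstacles: list) -> list:
    """Stage 2: plan a collision-free path to the part (straight-line stub)."""
    # A real planner would route around obstacles; here we just return waypoints.
    return [(0, 0, 0), part.pose]


def grasp(part: Part) -> Part:
    """Stage 3: the gripper closes on the part at the end of the planned path."""
    part.picked = True
    return part


def place(part: Part, target: tuple) -> Part:
    """Stage 4: place the part precisely at the target pose."""
    part.pose = target
    part.placed = True
    return part


def pick_cycle(scene: list, target: tuple) -> list:
    """Run all four stages for every part found in the scene."""
    results = []
    for part in recognize(scene):
        plan_path(part, obstacles=[])
        results.append(place(grasp(part), target))
    return results
```

The point of the sketch is the separation of concerns: each stage can be swapped (a different sensor, planner, or gripper) without touching the others.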
Yu Shufan said, "To help manufacturers make better use of Shikepu's industrial solutions, Shikepu has launched a customized product line comprising the Easy-Pick system, the Rui-Pick system, and customized picking solutions." The Easy-Pick system is a plug-and-play solution for stacked, layered, or randomly piled picking tasks. It suits automation in any industry, including small businesses and companies with no automation experience, and any setting where people do manual, repetitive work moving components from pallets or bins into machines, fixtures, or conveyors. Independent of the six-axis robot model, cell configuration, and application requirements, the Easy-Pick system can also be deployed in any production or warehousing system. Manufacturers that want to move from simple picking to complex picking can upgrade to the Rui-Pick system, which offers greater flexibility and functionality in random bin picking. For precise product placement, manufacturers can use the Rui Cai system to deliver components with higher accuracy and reliability.
A customized picking solution includes gripper and tooling units, 3D vision sensors, a training software package, part flippers, and secondary positioning stations.
For customized grippers and tooling units, the bin-picking scenarios include stacked, layered, and scattered parts. For these scenarios, the Shikepu software can select the ideal grasp from thousands of candidate grasps and precisely control the robot's position through Shikepu motion planning, providing additional process safety against parts being dropped. Shikepu also offers tooling units in different sizes, so users can choose the optimal size for their requirements. Each tooling unit can carry up to five different grippers and can provide up to 100 standard pick types.
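Selecting one grasp from thousands of candidates can be framed as a filter-and-score problem. The scoring heuristic and candidate fields below are assumptions made for this sketch; the talk does not describe how Shikepu's software actually ranks grasps.

```python
# Toy sketch of choosing the best grasp from many candidates.
# The heuristic (favor large contact area, penalize steep approach tilt)
# and the candidate fields are illustrative, not Shikepu's real algorithm.

def grasp_score(candidate: dict) -> float:
    """Score a grasp: prefer large contact area and a shallow approach tilt."""
    tilt_penalty = abs(candidate["tilt_deg"]) / 90.0  # steep approaches score lower
    return candidate["contact_area"] * (1.0 - tilt_penalty)


def best_grasp(candidates: list):
    """Return the highest-scoring reachable grasp, or None if none is reachable."""
    reachable = [c for c in candidates if c["reachable"]]
    return max(reachable, key=grasp_score, default=None)
```

Filtering unreachable grasps before scoring mirrors the article's point that grasp choice and motion planning work together: a stable grasp is useless if the robot cannot reach it without collision.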
For customized 3D vision sensors, three types are offered for different application scenarios. The first is the hand-eye camera, mounted at the end of the robot arm: it is simple to install, can recognize very smooth surfaces, and can scan from multiple angles. The second is the fixed camera, which scans with high precision and requires no robot movement. The third is the sliding camera, with dual-angle high-speed scanning: it can pre-scan multiple bins without robot movement and can cover six or more bins.
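The three sensor options can be summarized as a small lookup table; the field names below are descriptive labels chosen for this sketch, not product terminology.

```python
# The three sensor mounting styles from the talk, as a lookup table.
# Field names are descriptive labels for this sketch, not product terms.
SENSOR_TYPES = {
    "hand_eye": {"mounting": "end of robot arm", "multi_angle_scan": True,
                 "needs_robot_motion": True, "pre_scans_multiple_bins": False},
    "fixed":    {"mounting": "static frame", "multi_angle_scan": False,
                 "needs_robot_motion": False, "pre_scans_multiple_bins": False},
    "sliding":  {"mounting": "linear rail", "multi_angle_scan": True,
                 "needs_robot_motion": False, "pre_scans_multiple_bins": True},
}


def sensors_without_robot_motion() -> list:
    """Sensor types that can scan without moving the robot."""
    return sorted(n for n, p in SENSOR_TYPES.items()
                  if not p["needs_robot_motion"])
```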
Yu Shufan then used factory footage to show how the training software package, part flipper, and secondary positioning station fit into the picking solution.
From theory to practice, Shikepu's customized product line has accumulated more than 80 real-world customer cases, one example being Shikepu's cooperation with the BMW Group.
This year, Shikepu officially launched a luggage-compartment-floor handling project and a reinforcement-beam project at BMW Group's Leipzig plant, deploying a total of four Shikepu fixed-vision workstations. Specifically, the luggage compartment floors are semi-structured stacked parts that are located and gripped with high precision, then directly transferred or loaded. The reinforcement beams are completely unordered parts with the same goal of high-precision loading into the fixture; that station is equipped with a Shikepu OC table and a flipping mechanism while meeting a 15-second cycle time. Beyond BMW, automotive industry giants such as Audi, Volkswagen, Volvo, and Ford have also adopted systems developed by Shikepu, which have earned broad customer recognition.
Yu Shufan said, "In exploring 3D vision, Shikepu has always insisted on realizing the integration of knowledge and action. This integration is reflected mainly in uniting the company's product technology with customer needs, and in uniting the robot's brain and eyes into a coherent whole. Most importantly, we hope that in the future, collaboration between robots and humans can likewise achieve the integration of knowledge and action."