Application

Vision-guided robotics

Vision-guided robotics applied to mixed palletizing

Artificial Vision (1D/2D/3D/color)

Vision-guided robotics refers to the use of vision systems and optical sensors that let robots interact with their environment based on visual information. Using cameras or other vision devices, these robots can perceive their surroundings, detect objects, follow trajectories, recognize shapes, or perform specific tasks based on the acquired visual data. The Revtech Systems team specializes in the integration and development of robotic cells guided by artificial vision. Vision guidance makes robots more flexible and adaptable, since they can adjust their actions to variations in the environment detected by the visual sensors.

Talk to our experts

Whether you're looking to reduce human error, speed up your processes, or improve the efficiency of your operations, our team is ready to work with you. We understand that every business is unique, so we'll work closely with you to understand your specific needs and design customized automation solutions.


FAQs

Frequently asked questions

In traditional robotics without vision, the robot essentially operates blind: it simply executes its programmed sequence of movements without adjusting any positions. In a vision-guided robotic cell there are several possible approaches, but in short, the robot's movements are generated or altered by the results obtained from the vision system. Many applications and software options also exist to facilitate integration and programming.
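
To illustrate the difference, here is a minimal sketch in Python. The pose values, the `vision_offset` fields, and the `send_to_robot` function are all hypothetical placeholders for whatever interface a real controller exposes; the point is only that the vision-guided move corrects the taught position before executing it.

```python
# Hypothetical taught pose (mm / degrees) for a pick operation.
NOMINAL_PICK_POSE = {"x": 450.0, "y": 120.0, "z": 35.0, "w": 0.0, "p": 180.0, "r": 0.0}

def send_to_robot(pose):
    # Placeholder for the real robot interface (socket, fieldbus, vendor API...).
    print(f"Moving to: {pose}")

def blind_pick():
    # Traditional robotics: always the same taught position, no adjustment.
    send_to_robot(NOMINAL_PICK_POSE)

def vision_guided_pick(vision_offset):
    # Vision-guided: the taught position is corrected by the measured offset
    # (here a simple X/Y/R correction, as a 2D camera might report).
    corrected = dict(NOMINAL_PICK_POSE)
    corrected["x"] += vision_offset["dx"]
    corrected["y"] += vision_offset["dy"]
    corrected["r"] += vision_offset["dr"]
    send_to_robot(corrected)

if __name__ == "__main__":
    blind_pick()
    vision_guided_pick({"dx": 3.2, "dy": -1.7, "dr": 4.5})  # example measurement
```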

There is of course the robot and the components related to the process. On the vision side, everything depends on the details of the application. There will normally be optical sensors (1D sensors, 2D and 3D cameras, color cameras, lenses, etc.). There will normally also be components to control the vision system's environment (lights, housings, etc.). Finally, there will be a system for processing the vision data (robot controller, PLC, vision PC, industrial PC, etc.).
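
As a rough sketch only, these components could be described as configuration data. Every class name and value below is invented for the example; a real cell definition depends entirely on the application.

```python
from dataclasses import dataclass, field

@dataclass
class OpticalSensor:
    kind: str          # "1D", "2D", "3D", "linescan", "color", ...
    resolution: str
    lens_mm: float

@dataclass
class VisionCell:
    robot_model: str
    sensors: list = field(default_factory=list)
    lighting: str = "none"               # environment control: lights, housing, ...
    processor: str = "robot controller"  # or "PLC", "vision PC", "industrial PC"

# Example cell description (values are illustrative only).
cell = VisionCell(
    robot_model="6-axis articulated arm",
    sensors=[OpticalSensor(kind="2D", resolution="2448x2048", lens_mm=16.0)],
    lighting="diffuse dome light + enclosure",
    processor="industrial PC",
)
print(cell)
```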

This is a critical point in developing a vision-guided robotic solution. There are many types of optical sensors (cameras), such as 2D, linescan, smart, 3D stereo, 3D structured light, color, etc. Virtually any type of camera can be used with robots. What matters is defining the vision application well and making it as stable as possible. It is the combination of the right sensor, the right optics, and the right vision algorithms that produces reliable results. All that remains is to build the mathematical bridge so the robot can correctly interpret those results.
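
That "mathematical bridge" is typically a calibrated transform between the camera frame and the robot frame. The sketch below, using numpy, transforms a point detected in camera coordinates into robot base coordinates through a fixed homogeneous transform; the matrix values are made up, whereas in practice they come from a hand-eye calibration procedure.

```python
import numpy as np

# Hypothetical camera-to-robot-base transform (rotation + translation, in mm).
T_base_camera = np.array([
    [0.0, -1.0, 0.0, 500.0],
    [1.0,  0.0, 0.0, 200.0],
    [0.0,  0.0, 1.0, 750.0],
    [0.0,  0.0, 0.0,   1.0],
])

def camera_to_base(point_camera_xyz):
    """Transform a 3D point from camera coordinates to robot base coordinates."""
    p = np.append(np.asarray(point_camera_xyz, dtype=float), 1.0)  # homogeneous point
    return (T_base_camera @ p)[:3]

if __name__ == "__main__":
    detection = [12.5, -40.0, 310.0]     # part position as seen by the camera
    print(camera_to_base(detection))     # same point expressed in robot coordinates
```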


Robots can of course perceive and interact with objects in 3D space. It is simply more complex mathematically than 1D or 2D vision, because there are more variables to work with. For example, a 1D system simply returns the distance of a surface from the sensor: there is just one Z value to process, which is relatively simple to apply to robot movements. A 3D image, on the other hand, will often give the X, Y, Z position variables plus the W, P, R orientations to process. The most commonly used 3D vision technologies are laser triangulation, time-of-flight, stereo vision and structured light.
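
The sketch below contrasts the two cases: a single Z correction for a 1D sensor versus a full 6-DOF pose built from X, Y, Z and W, P, R. It assumes W, P, R are rotations about the X, Y, Z axes (a common robot convention), so the actual controller's angle definition should be checked before reusing it; all numeric values are illustrative.

```python
import numpy as np

def one_d_correction(taught_z, measured_distance, expected_distance):
    # 1D case: a single value is applied to the taught Z height.
    return taught_z + (expected_distance - measured_distance)

def wpr_to_matrix(w_deg, p_deg, r_deg):
    # 3D case: the orientation becomes a rotation matrix built from three angles.
    w, p, r = np.radians([w_deg, p_deg, r_deg])
    rx = np.array([[1, 0, 0], [0, np.cos(w), -np.sin(w)], [0, np.sin(w), np.cos(w)]])
    ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    rz = np.array([[np.cos(r), -np.sin(r), 0], [np.sin(r), np.cos(r), 0], [0, 0, 1]])
    return rz @ ry @ rx

def pose_matrix(x, y, z, w, p, r):
    # Full 4x4 pose: position and orientation of the detected part.
    T = np.eye(4)
    T[:3, :3] = wpr_to_matrix(w, p, r)
    T[:3, 3] = [x, y, z]
    return T

if __name__ == "__main__":
    print(one_d_correction(taught_z=35.0, measured_distance=102.0, expected_distance=100.0))
    print(pose_matrix(450.0, 120.0, 35.0, 0.0, 180.0, 15.0))
```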