How can sensors and processors build smarter, more autonomous robots?
Published 2022-11-28

       Autonomous robots are intelligent machines that understand and navigate their environment without human control or intervention. Although autonomous robotics is relatively new, it is already widely used in factories, warehouses, cities and homes. For example, autonomous robots can transport goods around warehouses or perform last-mile deliveries, while other types of robots vacuum floors or mow lawns.

       To achieve autonomy, a robot needs to be able to sense and position itself in a mapped environment, dynamically detect obstacles around it, track those obstacles, plan a route to a given destination, and control the vehicle to follow that route. In addition, the robot must perform these tasks only when it is safe to do so, avoiding risks to people, property or the system itself. As human-robot interactions become more frequent, robots will need to be not only autonomous, mobile and energy-efficient, but also compliant with functional safety requirements. With the right sensors, processors and controls, designers can meet the stringent requirements of functional safety standards such as International Electrotechnical Commission (IEC) 61508.
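The sense-localize-plan-control loop described above can be sketched in a few lines. The sketch below is illustrative only: `detect_obstacles`, the safety threshold, and the straight-line "planner" are all hypothetical simplifications, not part of any TI software.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float

def detect_obstacles(scan, threshold_m=1.0):
    """Return sensor ranges closer than the safety threshold (hypothetical scan format)."""
    return [r for r in scan if r < threshold_m]

def step(pose, goal, scan, speed=0.5):
    """One iteration of the autonomy loop: stop if unsafe, else move toward the goal."""
    if detect_obstacles(scan):
        return pose, "stop"  # safety gate: never move with an obstacle inside the threshold
    heading = math.atan2(goal.y - pose.y, goal.x - pose.x)
    new_pose = Pose(pose.x + speed * math.cos(heading),
                    pose.y + speed * math.sin(heading))
    return new_pose, "move"
```

A real system would replace each placeholder with a full module (SLAM for localization, a path planner, a motion controller), but the safety-first ordering of the loop is the essential point.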

       Sensing considerations for autonomous robots

       Many different types of sensors can be used to solve the challenges that come with autonomous robots. Two of them are described below:

       Vision sensors. Vision sensors approximate human vision and perception. Vision systems can address challenges such as positioning, obstacle detection and collision avoidance because of their high-resolution spatial coverage and their ability to detect and classify objects. Vision sensors are also more cost-effective than sensors such as lidar, but the processing they require is computationally intensive.

       Power-hungry central processing units (CPUs) and graphics processing units (GPUs) can pose challenges for power-constrained robot systems. When designing energy-efficient robot systems, CPU- or GPU-based processing should be kept to a minimum. The system-on-chip (SoC) in an efficient vision system should process the visual signal chain at high speed, low power consumption and low system cost. SoCs for visual processing must be smart, safe and energy-efficient. The TDA4 processor family is highly integrated, with a heterogeneous architecture designed to deliver computer vision performance, deep learning processing, stereoscopic capabilities and video analysis at the lowest possible power consumption.

       TI millimeter-wave radar. The use of TI millimeter-wave radar in robotic applications is a relatively novel concept, but the idea of achieving autonomy with TI millimeter-wave sensing has been around for some time. In automotive applications, TI millimeter-wave radar is a key component of advanced driver assistance systems (ADAS), monitoring the vehicle's surroundings. Similar ADAS concepts, such as surround monitoring or collision avoidance, can be applied to robotics.

       From a sensing-technology perspective, TI millimeter-wave radar is unique because it directly measures an object's distance, velocity and angle of arrival, information that can better guide robot navigation and help avoid collisions. Based on radar sensor data, the robot can decide whether to proceed safely, slow down or even stop, according to the position, speed and trajectory of an approaching person or object, as shown in Figure 2.
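The proceed/slow/stop decision described above can be sketched as a simple time-to-collision rule over a radar track. This is a minimal illustration, assuming a track that reports range and closing speed; the thresholds are arbitrary examples, not values from IEC 61508 or any TI documentation.

```python
def radar_action(distance_m, closing_speed_mps, stop_ttc_s=1.0, slow_ttc_s=3.0):
    """Map one radar track (range + radial velocity) to a motion command.

    A positive closing speed means the object is approaching the robot.
    Thresholds are illustrative only.
    """
    if closing_speed_mps <= 0:
        return "proceed"  # object is static or receding: no collision risk from this track
    ttc = distance_m / closing_speed_mps  # time to collision, in seconds
    if ttc < stop_ttc_s:
        return "stop"
    if ttc < slow_ttc_s:
        return "slow"
    return "proceed"
```

Because radar measures velocity directly (via Doppler), this rule needs no frame-to-frame differencing, which is one reason radar complements cameras well for collision avoidance.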

       Sensor fusion and edge AI are used to solve complex problems of autonomous robots

       For more complex applications, a single sensor of any kind may not be sufficient to achieve autonomy. Ultimately, multiple sensors, such as cameras and radar, should work together in the same system. Combining data from different types of sensors in the processor through sensor fusion helps tackle some of the more complex autonomous robot challenges.
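As a minimal sketch of what fusing camera and radar data can look like, the function below associates camera detections (which classify objects but lack range) with radar tracks (which measure range but not class) by nearest bearing. The data formats and the bearing-gating threshold are assumptions for illustration; production fusion typically uses calibrated extrinsics and probabilistic association instead.

```python
def fuse(camera_dets, radar_tracks, max_bearing_diff_deg=5.0):
    """Pair each camera detection with the nearest radar track by bearing.

    camera_dets:  list of (label, bearing_deg) from a vision classifier.
    radar_tracks: list of (bearing_deg, range_m) from a radar tracker.
    Returns (label, bearing_deg, range_m) tuples; range_m is None when
    no radar track lies within the bearing gate.
    """
    fused = []
    for label, cam_bearing in camera_dets:
        best = min(radar_tracks, key=lambda t: abs(t[0] - cam_bearing), default=None)
        if best is not None and abs(best[0] - cam_bearing) <= max_bearing_diff_deg:
            fused.append((label, cam_bearing, best[1]))  # class from camera, range from radar
        else:
            fused.append((label, cam_bearing, None))
    return fused
```

The result is richer than either sensor alone: the robot knows both what an object is and how far away it is.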

       Sensor fusion helps make robots more accurate, while edge artificial intelligence (AI) makes them smarter. Integrating edge AI into robotic systems can help robots intelligently perceive, make decisions and act. Robots with edge AI can detect objects and their positions, classify those objects and respond accordingly. For example, when a robot is navigating a cluttered warehouse, edge AI can help it infer what kinds of objects (including people, boxes, machines and even other robots) are in its path and decide the appropriate actions to navigate around them.
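The warehouse example above boils down to mapping detected object classes to navigation behaviors and always taking the most conservative one. The class names, actions and priority ordering below are hypothetical, chosen only to illustrate the pattern.

```python
# Illustrative mapping from detected object class to a navigation behavior.
ACTION_BY_CLASS = {
    "person": "stop_and_wait",  # people get the widest safety margin
    "robot": "yield",           # coordinate with other autonomous robots
    "box": "replan_path",       # static obstacle: route around it
}

def decide(detected_labels):
    """Return the most conservative action implied by the detected classes."""
    priority = ["stop_and_wait", "yield", "replan_path", "continue"]
    actions = [ACTION_BY_CLASS.get(label, "continue") for label in detected_labels]
    return min(actions, key=priority.index) if actions else "continue"
```

Keeping the class-to-action policy as data (rather than branching logic) makes it easy to audit and to extend as the classifier learns new object types.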

       When designing robotic systems using AI, there are several design considerations for both hardware and software. The TDA4 processor family features hardware accelerators for edge AI capabilities that help handle computationally intensive tasks in real time. Having access to an easy-to-use edge AI software development environment helps simplify and speed up the application development and hardware deployment process. You can read "The Simplified Guide to Embedded Edge AI Application Development" to learn more about TI's free tools, software, and services designed to give development a boost.
       
       Conclusion: Designing smarter, more autonomous robots is essential to continuing to raise the level of automation. Robots in warehouses and distribution centers can keep pace with and support the growth of e-commerce, and robots can also take on routine chores such as vacuuming and mowing. Autonomous robots can increase productivity and efficiency, improving our daily lives and freeing time for more valuable work.

       Source: CSIA