
How AI helps self-driving cars perceive objects

In the not-too-distant future, autonomous cars will take to our streets and have to find their way between pedestrians, cyclists, buses and trains. The ability of such autonomous vehicles to navigate urban environments using, for example, 2-D or 3-D maps and corresponding sensor technology is already impressive today. But there is still a long way to go before they can move safely and ethically (see Autonomous driving: Algorithms address ethical issues).

Artificial intelligence (AI) methods are nevertheless the key to autonomous driving. So far, however, the algorithms of autonomous vehicles still lack robustness. In addition to ethical considerations, accurate recognition and visual interpretation of the scene play an equally important role, especially when navigating safely between other vehicles and pedestrians in unfamiliar urban environments.

Using Deep Learning to understand the scene

Deep learning is a sub-discipline of machine learning based on artificial neural networks. It can process complex data such as images or text, and it is therefore well suited to the task of scene understanding: recognising and interpreting what is visible in a traffic scene.
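The core output of scene understanding can be illustrated without a trained network. The toy sketch below is purely illustrative (the image values, the two class names and the hand-set scoring rule are invented here; a real system would use a trained convolutional network): it assigns each pixel of a tiny image to one of two classes by taking the per-pixel argmax over class scores, which is the same output format a semantic segmentation network produces.

```python
import numpy as np

# Toy illustration only, no real network: semantic segmentation assigns a
# class label to every pixel. A tiny 3x4 "image" of brightness values is
# scored against two invented classes, "road" (dark) and "vehicle"
# (bright); the per-pixel argmax yields the label map. A deep network
# learns such scoring functions from data instead of hand-coding them.

image = np.array([
    [0.1, 0.1, 0.9, 0.9],
    [0.1, 0.2, 0.8, 0.9],
    [0.1, 0.1, 0.2, 0.1],
])

# One score channel per class: "road" prefers dark pixels, "vehicle" bright ones.
scores = np.stack([1.0 - image, image])  # shape: (2, H, W)
labels = scores.argmax(axis=0)           # per pixel: 0 = road, 1 = vehicle

print(labels)
```

In a real perception stack the hand-set scores are replaced by the output of a trained network, but the final per-pixel decision step works the same way.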

On the way to human-like perception

Another milestone on the way to human-like perception for self-driving cars is the so-called amodal panoptic segmentation task. Until now, robots and autonomous vehicles have been restricted to modal perception, which limits their ability to mimic the visual experience of humans. With advanced AI algorithms, the visual recognition capability of self-driving cars could now be revolutionised: machines will learn to abstract from the partial occlusion of objects and recognise them in their entirety.
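The difference between modal and amodal perception can be sketched with two boolean masks. The sketch below is a hypothetical data layout chosen for illustration, not taken from any specific benchmark or paper: a modal mask covers only the visible pixels of an object, while the amodal mask also covers the part hidden behind an occluder, and it is this hidden region that an amodal segmentation model must infer.

```python
import numpy as np

# Illustrative sketch of modal vs. amodal masks (layout invented here).
H, W = 4, 6

modal = np.zeros((H, W), dtype=bool)
modal[1:3, 0:2] = True             # visible part of a car

amodal = np.zeros((H, W), dtype=bool)
amodal[1:3, 0:4] = True            # full car extent, incl. pixels behind an occluder

occluded = amodal & ~modal         # the region the model must "imagine"

# By definition the amodal mask contains the modal (visible) one.
assert ((amodal | modal) == amodal).all()
print(occluded.sum(), "occluded pixels")  # prints: 4 occluded pixels
```

Amodal panoptic segmentation asks for such a complete amodal mask for every object instance in the scene, on top of the per-pixel class labels of ordinary panoptic segmentation.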

Written by Carmupedia Editorial Office
