

How AI helps self-driving cars perceive objects

In the not-too-distant future, autonomous cars will take to our streets and will have to find their way between pedestrians, cyclists, buses and trains. The ability of such autonomous vehicles to navigate urban environments using, for example, 2D or 3D maps and the corresponding sensor technology is already impressive today. But there is still a long way to go before their locomotion is safe and ethical (see Autonomous driving: Algorithms address ethical issues).

Artificial intelligence (AI) processes and methods are nevertheless the key to autonomous driving. So far, however, the algorithms of autonomous vehicles lack robustness. In addition to ethical considerations, accurate recognition and visual interpretation of the driving situation play an equally important role, especially when navigating safely between other vehicles and pedestrians in unfamiliar urban environments.

Using Deep Learning to understand the scene

Deep learning is a subfield of machine learning based on artificial neural networks. It can process complex data such as images or text, which makes it well suited to the task of scene understanding.
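At its core, scene understanding via deep learning often means semantic segmentation: the network assigns a class label (road, pedestrian, vehicle, and so on) to every pixel of a camera image. The following toy sketch illustrates only that final per-pixel decision step; the class names, array shapes, and random logits are illustrative assumptions, not output of a real model.

```python
import numpy as np

# Semantic segmentation, conceptually: a network produces a logit tensor
# of shape (num_classes, H, W); the predicted segmentation map is the
# per-pixel argmax over the class dimension.
# All names and values below are illustrative stand-ins, not a real model.

CLASSES = ["road", "pedestrian", "vehicle"]

rng = np.random.default_rng(seed=0)
H, W = 4, 6
logits = rng.normal(size=(len(CLASSES), H, W))  # stand-in for network output

seg_map = logits.argmax(axis=0)  # (H, W) array of class indices

# Print the toy segmentation map as abbreviated class names.
for row in seg_map:
    print(" ".join(CLASSES[i][:4] for i in row))
```

In a real pipeline the logits would come from a trained convolutional or transformer network, but the argmax step that turns them into a labeled scene map is the same.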

On the way to human-like perception

Another milestone on the way to human-like perception for self-driving cars is the task of amodal panoptic segmentation. Until now, robots and autonomous vehicles have been limited to modal perception, which restricts their ability...
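The distinction between modal and amodal perception can be made concrete with a small sketch: modal segmentation labels only the pixels of an object that are actually visible, while amodal segmentation also predicts the object's occluded extent. The scene below (a pedestrian partly hidden behind a vehicle) is a hypothetical toy example, not taken from any real dataset or model.

```python
import numpy as np

# Toy scene: a pedestrian partially occluded by a vehicle.
# Modal mask  = pixels the camera actually sees.
# Amodal mask = full object extent, including hidden pixels.

H, W = 5, 8
pedestrian_full = np.zeros((H, W), dtype=bool)
pedestrian_full[1:4, 2:6] = True          # full (amodal) extent of pedestrian

vehicle = np.zeros((H, W), dtype=bool)
vehicle[0:5, 4:8] = True                  # occluder in front of the pedestrian

modal_mask = pedestrian_full & ~vehicle   # only the visible part
amodal_mask = pedestrian_full             # includes the occluded part

print("visible (modal) pixels:", modal_mask.sum())
print("full (amodal) pixels:  ", amodal_mask.sum())
```

An amodal perception system must infer the hidden portion of the mask from context, which is precisely what makes the task harder than classic modal segmentation.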


Written by Carmupedia Editorial Office
