
Towards a Task Dependent Representation Generation for Scene Analysis

Robert Kastner, Thomas Michalke, Jannik Fritsch, Christian Goerick, "Towards a Task Dependent Representation Generation for Scene Analysis", IEEE Intelligent Vehicles Symposium (IV), 2010.

Abstract

State-of-the-art advanced driver assistance systems (ADAS) typically focus on single tasks and therefore have clearly defined functionalities. Although these ADAS functions (e.g., lane departure warning) show good performance, they lack the general ability to extract spatial relations from the environment. These spatial relations are required for scene analysis on a higher layer of abstraction, providing a new quality of scene understanding, e.g., for inner-city crash prevention when trying to detect a Stop sign violation in a complex situation. Without them, it is difficult for an ADAS to deal with complex scenes and situations in a generic way. This contribution presents a novel task-dependent generation of spatial representations, allowing task-specific extraction of knowledge from the environment based on our biologically motivated ADAS. Additionally, the hierarchical structure of the approach provides advantages when dealing with heterogeneous processing modules, a large number of tasks, and additional new input cues. First results show the reliability of the approach.
