
Perception & Knowledge Representation

Perception is a system’s ability to receive and evaluate useful information about its environment. It comprises manifold capabilities such as detection, recognition, tracking and state estimation based on sensory measurements.

Together with a structured Knowledge Representation in which the percepts can be stored, enriched and combined into a world model, perception forms the grounding point for all higher-level cognitive capabilities. These include learning, prediction, situation analysis, behavior planning, and even abstract thinking.

The Knowledge Representation also comprises knowledge about the environment, the situational context and the behavioral goals of involved agents.

HRI-EU researches multi-sensor fusion and spatio-temporal integration to improve the confidence and reliability of perceptions. Incremental Knowledge Representations are developed to connect perception with a system’s broader knowledge, including common-sense human knowledge.

The acquired knowledge, combined with the grounded percepts, forms the basis for real-world behavior planning and reasoning.


Sensor Fusion for Localization

Localization on maps at lane-level accuracy is a key capability for future autonomous driving systems. It enables intelligent vehicles to better understand their environment, to predict future trajectories of other traffic participants, and to better evaluate behavior options.

For localization, data from several sensors has to be integrated to provide a more complete and coherent view of the environment. Information has to be fused from different types of sensors, and multimodal sensor measurements have to be combined with other available information about the environment, e.g., from spatial plausibility considerations or map data.
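As an illustrative sketch (not HRI-EU's actual method), fusing two independent position estimates of differing reliability can be done with a minimum-variance weighted combination, as in a Kalman filter update step. The sensor names and noise values below are hypothetical examples.

```python
def fuse_estimates(mean_a, var_a, mean_b, var_b):
    """Fuse two independent Gaussian position estimates
    (e.g., a GNSS fix and a camera-based lane measurement)
    into a minimum-variance combined estimate."""
    w = var_b / (var_a + var_b)          # weight toward the lower-variance source
    mean = w * mean_a + (1.0 - w) * mean_b
    var = (var_a * var_b) / (var_a + var_b)
    return mean, var

# Hypothetical lateral offsets from the lane center, in meters:
gnss = (0.8, 1.5 ** 2)    # satellite positioning: noisy, ~1.5 m std. dev.
camera = (0.3, 0.2 ** 2)  # lane camera: laterally precise, ~0.2 m std. dev.
mean, var = fuse_estimates(*gnss, *camera)
```

The fused variance is always smaller than that of either input, which is why combining multimodal sensors improves confidence even when one source is much noisier than the other.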

In this work, camera-based road views are combined with satellite-based positioning information and infrastructure geometry data from maps. This leads to a more accurate map-relative localization of traffic participants, supporting the overall driving situation analysis. A beneficial side effect of our approach is the alignment of the sensor view (e.g., the front camera image) with environment data, which allows for accurate 2D and 3D visualization of map-related information. Building on this alignment, augmented reality (AR) possibilities are also explored.
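One building block of map-relative localization is matching a fused vehicle position to the map's lane geometry. A minimal sketch, assuming lane centerlines are available as 2D polylines in map coordinates (the function and data below are illustrative, not the published algorithm):

```python
import math

def snap_to_centerline(px, py, centerline):
    """Project a position (px, py) onto the nearest point of a lane
    centerline given as a polyline of (x, y) map coordinates."""
    best_dist, best_point = None, None
    for (ax, ay), (bx, by) in zip(centerline, centerline[1:]):
        dx, dy = bx - ax, by - ay
        # Parameter of the orthogonal projection, clamped to the segment.
        t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
        t = max(0.0, min(1.0, t))
        qx, qy = ax + t * dx, ay + t * dy
        d = math.hypot(px - qx, py - qy)
        if best_dist is None or d < best_dist:
            best_dist, best_point = d, (qx, qy)
    return best_point

# A straight hypothetical lane centerline along the x-axis:
lane = [(0.0, 0.0), (2.0, 0.0), (4.0, 0.0)]
matched = snap_to_centerline(1.0, 0.5, lane)
```

The matched map point (and the residual offset to it) is also what makes the visualization side effect possible: once the sensor view is registered to the map frame, lane geometry can be re-projected into the camera image for 2D/3D or AR overlays.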

For more information

B. Flade, M. Nieto, G. Isasmendi, J. Eggert, “Lane Detection Based Camera to Map Alignment Using Open-Source Map Data”, IEEE Conference on Intelligent Transportation Systems, 2018