For a cognitive system, the reliable detection and estimation of static and dynamic scene elements is a key capability. Autonomous vehicles have to identify the drivable area as well as road delimiters and infrastructure, in addition to other traffic participants. However, sensor measurements are often uncertain, incomplete and ambiguous.
Therefore, data from several sensors have to be integrated to provide a more complete and coherent view of the environment. Information has to be fused from different types of sensors, and multimodal sensor measurements have to be combined with other available information about the environment, e.g. from spatial plausibility considerations or map data.
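A standard building block for this kind of integration is inverse-variance weighting of independent Gaussian measurements, which yields a fused estimate whose uncertainty is lower than either input's. The sketch below illustrates the idea only; the sensor names and values are made up and not taken from the text.

```python
# Minimal sketch: fusing two noisy range measurements of the same
# obstacle by inverse-variance weighting. This is a generic static
# fusion rule, not the specific method used by the authors.

def fuse(z1: float, var1: float, z2: float, var2: float) -> tuple[float, float]:
    """Combine two independent Gaussian measurements into one estimate.

    Returns the fused mean and its (reduced) variance.
    """
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused_mean = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused_mean, fused_var

# Illustrative example: a low-noise measurement and a high-noise one.
mean, var = fuse(10.2, 0.01, 10.8, 0.09)
```

The fused mean lies between the two measurements, pulled toward the more certain one, and the fused variance is smaller than either input variance, which is what makes combining several uncertain sensors worthwhile.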
Perception is a system’s ability to receive and evaluate useful information about its environment. Perception comprises manifold elementary capabilities such as detection, recognition, tracking and state estimation based on sensory measurements. Together with an environment representation where the percepts can be stored, enriched and combined into an environment model, perception forms the starting point for all higher-level cognitive capabilities including learning, prediction, situation analysis, behavior planning or even abstract thinking.
In situated cognition, knowledge about an environment is tightly coupled with the context and the behavioral goals of intelligent systems. We research multi-sensor fusion and spatiotemporal integration to improve the confidence and reliability of percepts. Graph- and grid-based representations are used to store the system's local surroundings and to support its action planning. Vehicles are sensor-rich platforms for research in perception and representation, whereas robots offer a more diverse repertoire of possible actions.
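One common form of grid-based representation is an occupancy grid held in log-odds form, where Bayesian fusion of repeated observations reduces to addition. The following is an illustrative sketch under generic assumptions (the sensor model probabilities are chosen arbitrarily), not the institute's actual implementation.

```python
import numpy as np

# Illustrative sketch: a 2-D occupancy grid stored as log-odds.
# In log-odds form, fusing an observation into a cell is a simple
# addition; probabilities are recovered only when needed.

L_OCC = np.log(0.7 / 0.3)   # increment for an "occupied" observation (assumed sensor model)
L_FREE = np.log(0.3 / 0.7)  # decrement for a "free" observation

def update(grid: np.ndarray, cell: tuple[int, int], occupied: bool) -> None:
    """Fuse one observation of a cell into the grid."""
    grid[cell] += L_OCC if occupied else L_FREE

def probability(grid: np.ndarray) -> np.ndarray:
    """Recover occupancy probabilities from log-odds."""
    return 1.0 / (1.0 + np.exp(-grid))

grid = np.zeros((100, 100))          # prior: p = 0.5 everywhere
for _ in range(3):                   # three consistent "occupied" hits
    update(grid, (50, 50), occupied=True)
p = probability(grid)[50, 50]        # confidence grows with agreement
```

Because consistent observations accumulate additively, the grid naturally expresses how spatiotemporal integration raises confidence: a cell seen as occupied by several measurements becomes much more certain than one seen only once.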
For more information
J. Fritsch, T. Kuehnl and F. Kummert, "Monocular road terrain detection by combining visual and spatial information," IEEE Trans. Intell. Transp. Syst., vol. 15, no. 4, pp. 1586-1596, 2014.