Benedict Flade, Marcos Nieto, Gorka Isasmendi, Julian Eggert,
"Lane Detection Based Camera to Map Alignment Using Open-Source Map Data",
21st International IEEE Conference on Intelligent Transportation Systems, 2018.
For accurate self-localization of vehicles, many approaches rely on matching high-definition (HD) 3D map data against sensor data obtained from laser scanners or camera images. However, when depending on HD maps, even small changes to the road network have to be detected and the corresponding map section updated, which requires considerable effort. As an alternative, in this paper we propose an approach that provides map-relative, lane-level localization without requiring extensive sensor equipment, neither for generating the maps nor for aligning the map to sensor data. It uses freely available crowdsourced map data, which is enhanced and stored in a graph-based relational local dynamic map (R-LDM).
Based on a rough position estimate provided by Global Navigation Satellite Systems (GNSS) such as GPS or Galileo, we align the road geometry with lanes extracted from the front camera image. For this purpose, we compare 3D virtual views (so-called candidates) created from projected map data with lane geometry extracted from the front camera image. The position correction relative to the initial guess is determined by searching for the virtual view that best matches the actually sensed view. More specifically, the match is obtained by extracting explicit lane marking information with a lane-detection algorithm, extracting general road structure information with edge detection, and considering local environment information in the candidate generation step.
Evaluations performed on data recorded in the Netherlands show that our algorithm is a promising approach to lane-level localization using state-of-the-art equipment and freely available map data.
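The candidate-based best-match search described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the helper names (`project_lane`, `best_match`) and the simple nearest-neighbour cost are assumptions for illustration. Each candidate pose offset relative to the GNSS prior is scored by the distance between the projected map lane and the camera-detected lane points, and the best-scoring offset is returned as the position correction:

```python
import numpy as np

def project_lane(lane_xy, dx, dy, dyaw):
    """Transform map lane points into a candidate vehicle frame offset
    by (dx, dy, dyaw) from the GNSS prior (hypothetical helper)."""
    c, s = np.cos(dyaw), np.sin(dyaw)
    R = np.array([[c, -s], [s, c]])
    return (lane_xy - np.array([dx, dy])) @ R.T

def best_match(lane_xy, detected_xy, offsets):
    """Score each pose candidate by the mean nearest-neighbour distance
    between projected map lane and detected lane points; return the
    best offset and its cost (illustrative cost, not the paper's)."""
    best, best_cost = None, np.inf
    for dx, dy, dyaw in offsets:
        cand = project_lane(lane_xy, dx, dy, dyaw)
        # distance from every detected point to every candidate point
        d = np.linalg.norm(detected_xy[:, None, :] - cand[None, :, :], axis=2)
        cost = d.min(axis=1).mean()
        if cost < best_cost:
            best, best_cost = (dx, dy, dyaw), cost
    return best, best_cost

# Usage: a straight lane and detections shifted 0.5 m laterally;
# the search should recover the 0.5 m lateral correction.
lane = np.stack([np.arange(10.0), np.zeros(10)], axis=1)
detected = lane - np.array([0.0, 0.5])
offsets = [(0.0, dy, 0.0) for dy in np.arange(-1.0, 1.01, 0.25)]
(dx, dy, dyaw), cost = best_match(lane, detected, offsets)
```

In the paper the comparison is richer (virtual views rendered from the R-LDM, lane markings plus edge information), but the core idea of generating pose hypotheses and keeping the one whose projection best fits the sensed view is the same.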