Abstract

Over the past several years, autonomous vehicles have taken a major step towards adoption into our daily lives. Self-driving cars are already on our streets, new algorithms are developed every day, and machine learning has gained considerable popularity, often outperforming traditional techniques across a much more diverse range of settings. What almost all of these algorithms still have in common, however, is that they rely on multiple sensors. For instance, laser-based distance measurements and odometry readings are combined to localize the car and map the environment simultaneously, a technique known as SLAM (Simultaneous Localization and Mapping). Combining measurement readings is an active field of research and can be quite challenging, as every sensor's characteristics need to be taken into consideration for the fusion to be done properly.

This thesis investigates the fusion of Global Positioning System (GPS) measurements and visual SLAM. Chapter one introduces and motivates the main topic broadly, and discusses related research. The second chapter clarifies terminology, reviews state-of-the-art approaches, and explains the fundamental technical background needed to understand the similarities between the sensor data used. It aims to provide a better understanding of how these data sources can complement each other by means of sensor data fusion. Chapter three covers the actual fusion of sensor measurements in detail, presenting a Kalman filter implementation together with its mathematical description. Chapter four summarizes the data generation and acquisition process. Chapters five and six discuss results from simulations and real experiments, respectively; where available, the output is compared against ground truth data. Finally, chapter seven concludes this research with an examination of the implemented system and proposes extensions to improve it further.