Computer Research and Modeling
Computer Research and Modeling, 2022, Volume 14, Issue 6, Pages 1239–1253
DOI: https://doi.org/10.20537/2076-7633-2022-14-6-1239-1253
(Mi crm1030)
 

This article is cited in 1 scientific paper.

MODELS IN PHYSICS AND TECHNOLOGY

Lidar and camera data fusion in self-driving cars

M. Ahmed^a, M. Hegazy^a, A. S. Klimchik^b, R. Boby^c

^a Institute of Robotics and Computer Vision, 420500 Innopolis, Russia
^b School of Computer Science, University of Lincoln, United Kingdom
^c Mechanical Engineering, Indian Institute of Technology Jodhpur, Karwar, Jodhpur, Rajasthan 342037, India
Abstract: Sensor fusion is one of the key solutions to the perception problem in self-driving cars, where the main aim is to enhance the perception of the system without losing real-time performance. It is therefore a trade-off problem, and it is often observed that models with high environment-perception quality cannot run in real time. This article is concerned with camera and Lidar data fusion for better environment perception in self-driving cars, considering three main classes: cars, cyclists, and pedestrians. We fuse the output of a 3D detector that takes its input from the Lidar with the output of a 2D detector that takes its input from the camera, to obtain better perception output than either of them separately while ensuring real-time operation. We address the problem using a 3D detector model (Complex-YOLOv3) and a 2D detector model (YOLOv3), applying an image-based fusion method that combines Lidar and camera information through a fast and efficient late fusion technique, discussed in detail in this article. We use the mean average precision (mAP) metric to evaluate our object detection model and to compare the proposed approach with the individual detectors. Finally, we show results on the KITTI dataset as well as on our real hardware setup, which consists of a Velodyne 16 Lidar and Leopard USB cameras. We used Python to develop the algorithm and validated it on the KITTI dataset, and we used ROS 2 together with C++ to verify the algorithm on the dataset obtained from our hardware configuration, which showed that the proposed approach gives good results and works efficiently in practical situations in real time.
Keywords: autonomous vehicles, self-driving cars, sensor fusion, Lidar, camera, late fusion, point cloud, images, KITTI dataset, hardware verification.
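The abstract does not spell out the fusion step itself; as a rough illustration only, the Python sketch below shows one common way an image-based late fusion of this kind can be implemented: 3D Lidar detections are projected into the camera image with the calibration matrix, matched to 2D camera detections by IoU, and matched pairs are merged. All names and parameters here (project_to_image, late_fusion, the score-averaging rule, the 0.5 IoU threshold) are assumptions for illustration, not the authors' code.

```python
import numpy as np

def project_to_image(corners_3d, P):
    """Project 3D box corners (8, 3) into the image plane using a 3x4
    projection matrix P (e.g., a KITTI-style calibration matrix).
    Returns the axis-aligned 2D envelope [x1, y1, x2, y2]."""
    pts = np.hstack([corners_3d, np.ones((corners_3d.shape[0], 1))])  # homogeneous (8, 4)
    uvw = (P @ pts.T).T                                               # (8, 3)
    uv = uvw[:, :2] / uvw[:, 2:3]                                     # perspective divide
    return np.array([uv[:, 0].min(), uv[:, 1].min(),
                     uv[:, 0].max(), uv[:, 1].max()])

def iou(a, b):
    """Intersection over union of two boxes [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def late_fusion(dets_3d, dets_2d, P, iou_thr=0.5):
    """Late fusion of Lidar 3D detections with camera 2D detections.
    dets_3d: list of {'corners': (8, 3) array, 'score': float, 'label': str}
    dets_2d: list of {'box': [x1, y1, x2, y2], 'score': float, 'label': str}
    Labels are assumed to use the same class names for both detectors."""
    fused, used_2d = [], set()
    for d3 in dets_3d:
        box3_img = project_to_image(d3['corners'], P)
        best_j, best_iou = -1, iou_thr
        for j, d2 in enumerate(dets_2d):
            if j in used_2d or d2['label'] != d3['label']:
                continue
            ov = iou(box3_img, d2['box'])
            if ov > best_iou:
                best_j, best_iou = j, ov
        if best_j >= 0:                      # matched pair: merge the two detections
            used_2d.add(best_j)
            d2 = dets_2d[best_j]
            fused.append({'label': d3['label'],
                          'box': d2['box'],
                          'corners': d3['corners'],
                          'score': 0.5 * (d3['score'] + d2['score'])})  # assumed merge rule
        else:                                # Lidar-only detection is kept
            fused.append({**d3, 'box': box3_img})
    # camera-only detections are also kept
    fused += [d2 for j, d2 in enumerate(dets_2d) if j not in used_2d]
    return fused
```

Keeping unmatched detections from both sensors, as in this sketch, is one way a late fusion scheme can raise recall over either detector alone while the per-frame cost stays low, since only bounding boxes and scores, not raw sensor data, are combined.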
Funding agency: Russian Science Foundation, grant No. 22-41-02006.
The work of A. Klimchik and R. A. Boby was supported by the grant of the Russian Science Foundation No. 22-41-02006, https://rscf.ru/project/22-41-02006/.
Received: 15.09.2022
Accepted: 10.10.2022
Document Type: Article
UDC: 004.896
Language: English
Citation: M. Ahmed, M. Hegazy, A. S. Klimchik, R. Boby, “Lidar and camera data fusion in self-driving cars”, Computer Research and Modeling, 14:6 (2022), 1239–1253
Citation in format AMSBIB
\Bibitem{AhmHegKli22}
\by M.~Ahmed, M.~Hegazy, A.~S.~Klimchik, R.~Boby
\paper Lidar and camera data fusion in self-driving cars
\jour Computer Research and Modeling
\yr 2022
\vol 14
\issue 6
\pages 1239--1253
\mathnet{http://mi.mathnet.ru/crm1030}
\crossref{https://doi.org/10.20537/2076-7633-2022-14-6-1239-1253}
Linking options:
  • https://www.mathnet.ru/eng/crm1030
  • https://www.mathnet.ru/eng/crm/v14/i6/p1239