Car-following theory has received considerable attention as a core component of Intelligent Transportation Systems. However, its application to emerging autonomous vehicles (AVs) remains largely unexplored. AVs are designed to provide convenient and safe driving by avoiding accidents caused by human error, and they require advanced recognition of other drivers' driving styles. With car-following models, AVs can use their built-in sensing technology to understand the surrounding environment and make real-time decisions to follow other vehicles. In this paper, we design an end-to-end car-following framework for AVs that combines automated object detection and navigation decision modules. The objective is to allow an AV to follow another vehicle using Red Green Blue-Depth (RGB-D) frames. We propose a joint solution that employs the You Only Look Once version 3 (YOLOv3) object detector to identify the leader vehicle and other obstacles, and a reinforcement learning (RL) algorithm to navigate the self-driving vehicle. Two RL algorithms, namely Q-learning and Deep Q-learning, are investigated. Simulation results demonstrate the convergence of the developed models and assess their efficiency in following the leader. They show that, with video frames alone, promising results are achieved and AVs can adopt a reasonable car-following behavior.
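As a rough illustration of the tabular Q-learning component mentioned above, the sketch below trains a toy car-following policy. The discretized state space (the gap to the leader binned into "close", "safe", "far"), the action set, and the reward shaping are all hypothetical simplifications for illustration; the paper's actual state, action, and reward design is not reproduced here.

```python
import random

STATES = ["close", "safe", "far"]            # hypothetical gap-to-leader bins
ACTIONS = ["brake", "keep", "accelerate"]    # hypothetical longitudinal actions

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1        # learning rate, discount, exploration

def step(state, action):
    """Toy deterministic environment: reward staying in the 'safe' gap."""
    idx = STATES.index(state)
    if action == "accelerate":
        idx = max(idx - 1, 0)                # closing in on the leader
    elif action == "brake":
        idx = min(idx + 1, len(STATES) - 1)  # falling back from the leader
    next_state = STATES[idx]
    reward = 1.0 if next_state == "safe" else -1.0
    return next_state, reward

def train(episodes=500, horizon=20, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        state = rng.choice(STATES)
        for _ in range(horizon):
            # epsilon-greedy action selection
            if rng.random() < EPSILON:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            next_state, reward = step(state, action)
            # standard Q-learning update
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
            state = next_state
    return Q

Q = train()
# Greedy policy: close the gap when far, back off when too close, hold when safe.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
```

In the full framework, the state would instead be derived from the YOLOv3 detections on RGB-D frames, and Deep Q-learning would replace the table with a neural network when the state space becomes too large to enumerate.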
IEEE Open Journal of Intelligent Transportation Systems
Published: 1 Jan 2021