Obstacle Avoidance for Self-Driving Vehicle with Reinforcement Learning

Paper #:
  • 2017-01-1960

Published:
  • 2017-09-23

Abstract:
Obstacle avoidance is an important function in self-driving vehicle control. When the vehicle moves from an arbitrary start position to a target position in its environment, a proper path must avoid both static and moving obstacles of arbitrary shape. Because there are many possible scenarios, manually handling every case is likely to yield an overly simplistic policy. In this paper we apply reinforcement learning to the problem of forming effective obstacle-avoidance strategies. We note two major challenges that distinguish self-driving vehicles from other robotic tasks. First, to control the vehicle precisely, the action space must be continuous, which traditional Q-learning cannot handle. Second, a self-driving vehicle must satisfy various constraints, including vehicle dynamics constraints and traffic-rule constraints. We make three contributions in our work. First, we use the Deep Deterministic Policy Gradient (DDPG) algorithm to handle the continuous action space, so that the vehicle outputs a continuous steering angle and acceleration. Second, taking both internal (vehicle dynamics) and external (traffic) constraints into account, we design a more reasonable path for obstacle avoidance. Third, we propose a multi-sensor data fusion method that combines the vehicle state and the surrounding environment state to provide the input the algorithm requires. In addition, we test the performance of the algorithm in TORCS (The Open Racing Car Simulator), an open-source racing car simulator. The results demonstrate the effectiveness and robustness of our method.
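
The abstract describes a DDPG actor that consumes a fused vehicle/environment state and emits continuous steering and acceleration commands. The sketch below (PyTorch) illustrates that idea under stated assumptions only; the state layout, network sizes, and output ranges are illustrative placeholders, not the authors' architecture.

    # Minimal sketch of a DDPG-style actor mapping a fused state vector
    # to continuous driving commands. Layer sizes, state layout, and output
    # ranges are assumptions for illustration, not the paper's exact design.
    import torch
    import torch.nn as nn

    class Actor(nn.Module):
        def __init__(self, state_dim: int, hidden: int = 256):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Linear(state_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
            # Separate heads: steering in [-1, 1] (tanh), acceleration in [0, 1] (sigmoid).
            self.steer_head = nn.Linear(hidden, 1)
            self.accel_head = nn.Linear(hidden, 1)

        def forward(self, state: torch.Tensor) -> torch.Tensor:
            h = self.backbone(state)
            steer = torch.tanh(self.steer_head(h))
            accel = torch.sigmoid(self.accel_head(h))
            return torch.cat([steer, accel], dim=-1)

    # Hypothetical fused state: ego-vehicle state (speed, yaw rate, ...) concatenated
    # with environment observations (e.g. TORCS range-finder readings).
    ego_state = torch.randn(1, 4)
    env_state = torch.randn(1, 19)
    state = torch.cat([ego_state, env_state], dim=-1)

    actor = Actor(state_dim=state.shape[-1])
    steering_angle, acceleration = actor(state).squeeze(0).tolist()

In DDPG, such a deterministic actor is trained jointly with a critic Q(s, a), and exploration during data collection is usually obtained by perturbing the actor's output with noise (for example, an Ornstein-Uhlenbeck process).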