Obstacle avoidance is an essential capability in self-driving vehicle control. As the vehicle moves from an arbitrary start position to an arbitrary target position in its environment, a proper path must avoid both static and moving obstacles of arbitrary shape. Because the number of possible scenarios is vast, manually handling every case is likely to yield an overly simplistic policy. In this paper we apply reinforcement learning to the problem of learning effective obstacle-avoidance strategies. We note two major challenges that distinguish self-driving from other robotic tasks. First, controlling the vehicle precisely requires a continuous action space, which traditional Q-learning cannot handle. Second, a self-driving vehicle must satisfy various constraints, including vehicle-dynamics constraints and traffic-rule constraints. We make three contributions in this work. First, we use the Deep Deterministic Policy Gradients (DDPG) algorithm to handle the continuous action space, so that the policy outputs a continuous steering angle and acceleration. Second, we design a more reasonable obstacle-avoidance path that accounts for both the vehicle's internal constraints and the external environment constraints. Third, we propose a multi-sensor data-fusion method that combines the vehicle state with the surrounding-environment state to form the algorithm's input. We evaluate our method on TORCS (The Open Racing Car Simulator), an open-source racing simulator. The results demonstrate the effectiveness and robustness of our method.
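The DDPG actor described above can be sketched as a deterministic policy that maps a fused state vector to a two-dimensional continuous action. The sketch below is a minimal illustration, not the paper's implementation: the network weights are random placeholders (in DDPG they are trained from the critic's gradient), and the 29-dimensional state is an assumption mirroring a common TORCS sensor configuration.

```python
import numpy as np

class DDPGActor:
    """Minimal deterministic-policy sketch: maps a fused state vector
    (vehicle state + surrounding-environment readings) to a continuous
    action [steering, acceleration]."""

    def __init__(self, state_dim, action_dim=2, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        # Random placeholder weights; real DDPG trains these via the
        # deterministic policy gradient from the critic network.
        self.w1 = rng.normal(0.0, 0.1, (state_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, action_dim))
        self.b2 = np.zeros(action_dim)

    def act(self, state):
        h = np.tanh(state @ self.w1 + self.b1)
        # tanh bounds both outputs in [-1, 1]; the simulator then
        # rescales them to physical steering and acceleration ranges.
        return np.tanh(h @ self.w2 + self.b2)

# Hypothetical 29-dim fused state (vehicle + environment sensors).
actor = DDPGActor(state_dim=29)
action = actor.act(np.zeros(29))  # a 2-vector: [steering, acceleration]
```

The tanh squashing on the output layer is what makes the continuous action space tractable: the policy emits bounded real values directly instead of selecting from a discrete action set as Q-learning would.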