Obstacle avoidance is an essential function in self-driving vehicle control. As a vehicle moves from an arbitrary start position to an arbitrary target position in its environment, a proper path must avoid both static and moving obstacles of arbitrary shape. Because the number of possible scenarios is large, manually handling every case is likely to yield an overly simplistic policy. In this paper, reinforcement learning is applied to the problem to learn effective obstacle-avoidance strategies. Two major challenges distinguish self-driving vehicle control from other robotic tasks. First, controlling the vehicle precisely requires a continuous action space, which traditional Q-learning cannot handle. Second, a self-driving vehicle must satisfy various constraints, including vehicle dynamics constraints and traffic rule constraints. This paper makes three contributions. First, an improved Deep Deterministic Policy Gradient (DDPG) algorithm is proposed to handle the continuous action space, so that continuous steering angles and accelerations can be obtained. Second, a more reasonable obstacle-avoidance path is designed that accounts for both internal and external vehicle constraints. Third, data from the vehicle's various sensors are fused to supply the inputs the algorithm requires, namely the vehicle state and the state of the surrounding environment. In addition, the algorithm is tested in The Open Racing Car Simulator (TORCS), an open-source racing simulator. The results demonstrate the effectiveness and robustness of the method.
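To illustrate why a continuous action space matters here, the following is a minimal sketch of the deterministic policy head used in DDPG-style methods: a tanh output bounds the steering angle to [-1, 1] and a sigmoid bounds acceleration to [0, 1], with Ornstein-Uhlenbeck noise (the exploration scheme commonly paired with DDPG) added during training. All names (`STATE_DIM`, `actor_forward`, `OUNoise`) and the random placeholder weights are illustrative assumptions, not the paper's trained network.

```python
import math
import random

# Hypothetical state layout (an assumption, not the paper's sensor fusion):
# e.g. speed, heading error, lateral offset, nearest-obstacle distance.
STATE_DIM = 4

random.seed(0)
# Random placeholder weights standing in for a trained actor network.
W = [[random.uniform(-0.1, 0.1) for _ in range(STATE_DIM)] for _ in range(2)]


def actor_forward(state):
    """Map a state vector to a continuous action (steering, acceleration).

    tanh bounds steering to [-1, 1]; a sigmoid bounds acceleration to
    [0, 1] -- the continuous actions that tabular Q-learning, which
    needs a discrete action set, cannot represent directly.
    """
    z = [sum(w * s for w, s in zip(row, state)) for row in W]
    steering = math.tanh(z[0])
    acceleration = 1.0 / (1.0 + math.exp(-z[1]))
    return steering, acceleration


class OUNoise:
    """Ornstein-Uhlenbeck process: temporally correlated exploration
    noise often used with DDPG in driving/racing environments."""

    def __init__(self, theta=0.15, sigma=0.2):
        self.theta, self.sigma, self.x = theta, sigma, 0.0

    def sample(self):
        self.x += -self.theta * self.x + self.sigma * random.gauss(0, 1)
        return self.x


state = [0.5, -0.2, 0.1, 0.8]
steer, accel = actor_forward(state)
noise = OUNoise()
# Exploration: perturb the deterministic action, then clip to bounds.
steer_explore = max(-1.0, min(1.0, steer + noise.sample()))
print(steer, accel, steer_explore)
```

In a real training loop the weights would be updated by the DDPG critic's gradient; this sketch only shows how bounded continuous actions are produced from a state vector.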