Autonomous driving technologies can provide better safety, comfort, and efficiency for future transportation. Most research in this area focuses on developing sensing and control approaches, such as model-based and neural-network-based methods, to achieve autonomous driving functions. However, even if these functions are ideally achieved, the performance of the system is still subject to sensing exceptions, and little research has studied how to handle such exceptions efficiently. In existing autonomous driving approaches, sensors such as cameras, radars, and lidars usually need to be fully calibrated or trained after being mounted on the vehicle and before being used for autonomous driving. A simple unexpected change to a sensor, e.g., a shift in the mounting position or angle of a camera, can cause the autonomous driving function to fail. The vehicle then has to be sent back to the manufacturer for repair, which is inefficient in terms of both time and cost. This paper introduces an efficient approach that enables human drivers to teach autonomous vehicles online, through demonstrations, to drive under sensing exceptions. A human teaching-by-demonstration framework for autonomous driving is first designed. Human teaching and robot learning processes and algorithms are then developed to handle sensing exceptions. A vision system used for an autonomous lane keeping function is adopted as an example: when its parameters are unexpectedly changed, the vehicle can no longer keep the lane. The human driver can then intervene and manually drive the vehicle as a demonstration. Through the proposed teaching models and learning algorithms, the vehicle automatically learns the key parameters of the vision system and recovers the lane keeping function. Experimental results on a 1/10-scale autonomous driving vehicle demonstrate the effectiveness and advantages of the proposed approach.