Advanced driver assistance systems (ADAS) have been widely introduced in automobiles to enhance driving safety and comfort and to reduce drivers' workload. Incorporating individual driving styles into the design of these systems is important, yet challenging due to the size and diversity of the driving population. Previous research has mainly adopted physics-based approaches to model driving behavior, which are limited in their ability to capture the characteristics of human drivers. This paper proposes an approach that treats driving styles as the product of drivers' learning processes through interaction with the surrounding environment. Based on reinforcement learning theory, the driving action is assumed to be chosen by maximizing a reward function. Instead of calibrating an unknown reward function to match the driver's desired response, we recover it from demonstrations of human driving using maximum likelihood inverse reinforcement learning (MLIRL). On this basis, an IRL-based longitudinal driving assistance system is developed. The design of this system comprises three parts. First, a large amount of real-world driving data is collected using a test vehicle platform and divided into a training set and a testing set. Second, the longitudinal acceleration in human driving is assumed to follow a Boltzmann distribution, the reward function is expressed as a linear combination of kernelized basis functions, and the parameter vector is estimated by MLIRL on the training set. Finally, a learning-based longitudinal driving assistance algorithm is developed and evaluated on the testing set. Results demonstrate that the proposed method reproduces human drivers' behavior to a satisfactory level.
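The estimation step described above can be illustrated with a minimal sketch. The code below is not the paper's implementation; it assumes a toy setting in which the state is a two-dimensional driving context (e.g., relative distance and relative speed), actions are discretized longitudinal accelerations, the reward is linear in Gaussian RBF basis functions, and the driver selects actions via a Boltzmann distribution over the immediate reward (a myopic simplification of the value-based policy). The parameter vector is then fitted by gradient ascent on the demonstration log-likelihood, which is the core of MLIRL. All names, dimensions, and constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed discretization of longitudinal acceleration (m/s^2).
ACTIONS = np.linspace(-2.0, 2.0, 9)
# Assumed centers of the Gaussian RBF basis over (state, action) space.
CENTERS = rng.uniform(-1, 1, size=(10, 3))


def features(state, action):
    """Kernelized (Gaussian RBF) basis functions at a (state, action) pair."""
    x = np.concatenate([state, [action]])
    d2 = np.sum((CENTERS - x) ** 2, axis=1)
    return np.exp(-d2 / 0.5)


def action_probs(theta, state, beta=1.0):
    """Boltzmann distribution over actions for the linear reward theta^T phi."""
    r = np.array([theta @ features(state, a) for a in ACTIONS])
    z = np.exp(beta * (r - r.max()))  # shift for numerical stability
    return z / z.sum()


def log_likelihood(theta, demos):
    """Log-likelihood of demonstrated (state, action-index) pairs."""
    return sum(np.log(action_probs(theta, s)[ai]) for s, ai in demos)


def mlirl(demos, n_features=10, lr=0.1, iters=300):
    """Maximum likelihood IRL: gradient ascent on the demonstration
    log-likelihood; the gradient is observed minus expected features."""
    theta = np.zeros(n_features)
    for _ in range(iters):
        grad = np.zeros(n_features)
        for s, ai in demos:
            p = action_probs(theta, s)
            phi = np.array([features(s, a) for a in ACTIONS])
            grad += phi[ai] - p @ phi
        theta += lr * grad / len(demos)
    return theta


# Synthetic "demonstrations": a hypothetical expert acting greedily
# under a hidden reward parameter (stand-in for recorded driving data).
states = rng.uniform(-1, 1, size=(30, 2))
true_theta = rng.normal(size=10)
demos = [
    (s, int(np.argmax([true_theta @ features(s, a) for a in ACTIONS])))
    for s in states
]

theta_hat = mlirl(demos)
print(log_likelihood(theta_hat, demos) > log_likelihood(np.zeros(10), demos))
```

Because the Boltzmann log-likelihood is concave in the linear reward parameters, plain gradient ascent suffices here; the paper's full setting additionally involves value iteration over longitudinal dynamics, which this sketch deliberately omits.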