Accuracy in obstacle detection has been a major area of autonomous vehicle research in recent years. With the growing popularity of self-driving cars, passenger safety has become of paramount importance. The large volume of data produced by the various sensors in a self-driving car poses an additional processing challenge, which can directly affect detection accuracy. Ensuring data accuracy before feeding the data to detection algorithms is therefore a critical requirement for such vehicles. In this paper, a simple data-verification concept is analyzed in which 3-D point clouds obtained from two LIDAR sensors are superimposed on the 2-D camera image plane of the same static object, where the object is stationary relative to the vehicle. Using data-based sensor fusion algorithms such as DBSCAN (density-based spatial clustering of applications with noise) and RANSAC (random sample consensus), the proposed method investigates whether the LIDAR point clouds can be efficiently mapped onto the camera image of the same object. The accuracy of the mapping indicates the strength of detection, alongside the computation time of the algorithm. The results obtained show good mapping accuracy, which could be extended to the detection of dynamic obstacles.
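The pipeline the abstract describes (RANSAC to separate the ground plane, DBSCAN to cluster the remaining LIDAR returns, and a projection of the resulting cluster onto the camera image plane) can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's implementation: the camera intrinsics (`fx`, `fy`, `cx`, `cy`), the RANSAC and DBSCAN parameters, and the scene itself are all assumptions chosen for the example.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def ransac_ground_plane(points, n_iters=200, dist_thresh=0.05, seed=0):
    """Fit a plane with a basic RANSAC loop; return a boolean inlier mask."""
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample, skip it
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        mask = dist < dist_thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

def project_to_image(points, fx=700.0, fy=700.0, cx=320.0, cy=240.0):
    """Pinhole projection of 3-D points (camera frame, z forward) to pixels.
    Intrinsics here are hypothetical values for a 640x480 image."""
    z = points[:, 2]
    u = fx * points[:, 0] / z + cx
    v = fy * points[:, 1] / z + cy
    return np.stack([u, v], axis=1)

# Synthetic scene: a flat ground plane plus one box-shaped static obstacle
# roughly 6 m in front of the camera (stand-in for fused LIDAR returns).
rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(-5, 5, 400),
                          np.full(400, 1.5) + rng.normal(0, 0.01, 400),
                          rng.uniform(2, 12, 400)])
obstacle = rng.normal([0.0, 0.5, 6.0], [0.2, 0.3, 0.2], size=(150, 3))
cloud = np.vstack([ground, obstacle])

ground_mask = ransac_ground_plane(cloud)
objects = cloud[~ground_mask]                  # non-ground points
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(objects)
cluster = objects[labels == 0]                 # the detected obstacle cluster
pixels = project_to_image(cluster)             # its 2-D footprint on the image
```

In this sketch the mapping accuracy the paper evaluates would correspond to how well `pixels` overlaps the obstacle's region in the camera image; that comparison step is specific to the paper's data and is not reproduced here.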