Danymol, R. and Kutty, K., "A Compressed Sensing and Sparsity Based Approach for Estimating an Equivalent NIR Image from a RGB Image," SAE Technical Paper 2015-01-0310, 2015, doi:10.4271/2015-01-0310.
Camera sensors built from silicon photodiodes, as used in ordinary digital cameras, are sensitive to visible as well as near-infrared (NIR) wavelengths. However, since human vision is sensitive only to the visible region, a hot mirror (infrared-blocking filter) is placed in front of the sensor, and certain complementary attributes of NIR data are therefore lost during image acquisition. Because RGB and NIR images are captured in two entirely different spectral bands, they retain different information. Since NIR and RGB images comprise complementary information, we believe this can be exploited for extracting better features, for localizing objects of interest, and in multi-modal fusion. In this paper, an attempt is made to estimate the NIR image from a given optical image. Using a normal optical camera and the compressed sensing framework, NIR estimation is formulated as an image recovery problem: the NIR data is treated as missing pixel information and is approximated during the image recovery phase. Thus, for a given optical image, with the NIR data considered as missing information, the recovered data yields the corresponding NIR image. The motivation for using compressed sensing in NIR estimation is its dictionary learning step, which is capable of capturing a linear relationship between color image feature values and NIR data. Using the proposed method, we have been able to estimate NIR images directly from optical images, with reconstructed PSNR values ranging from 10 to 20.5 dB. Visual examination of the estimated data also confirms a good match between the estimated and original NIR images. In the automotive domain, the proposed method would benefit a myriad of ADAS applications that use optical cameras, viz. night-time pedestrian detection, collision avoidance, and traffic sign recognition.
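The recovery idea described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: it assumes a toy joint dictionary whose atoms stack an RGB patch over its co-registered NIR patch (a real system would learn such a dictionary, e.g. with K-SVD, from paired RGB/NIR training patches). Given a new RGB patch, we sparse-code it over the RGB sub-dictionary with orthogonal matching pursuit (OMP), then synthesize the "missing" NIR patch from the NIR sub-dictionary with the same code.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: find a k-sparse code x with y ≈ D @ x."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit the coefficients on the selected atoms by least squares.
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x[support] = coeffs
    return x

rng = np.random.default_rng(0)
rgb_dim, nir_dim, n_atoms = 16, 4, 10

# Toy joint dictionary (an assumption, not learned data): the RGB block is
# made orthonormal via QR so the sparse code is recovered exactly here.
D_rgb, _ = np.linalg.qr(rng.standard_normal((rgb_dim, n_atoms)))
D_nir = rng.standard_normal((nir_dim, n_atoms))

# Synthesize a test patch from 3 atoms so the ground-truth NIR is known.
true_code = np.zeros(n_atoms)
true_code[[1, 4, 7]] = [1.0, -0.5, 0.8]
rgb_patch = D_rgb @ true_code
nir_true = D_nir @ true_code

# Sparse-code the RGB patch, then synthesize the missing NIR patch.
code = omp(D_rgb, rgb_patch, k=3)
nir_est = D_nir @ code
```

Because both sub-dictionaries share one set of atoms, the sparse code computed from visible data carries over linearly to the NIR channel, which is the relationship the paper's dictionary learning step is meant to retain.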