Fusion of colour and monochromatic images with edge emphasis

  • Rade M. Pavlović, Ministarstvo odbrane, Vojnotehnički institut
  • Vladimir S. Petrović
Keywords: imaging, fusion, encoding, colour

Abstract


We propose a novel method for fusing true colour images with monochromatic non-visible-range images that seeks to encode important structural information from the monochromatic input efficiently while preserving the natural appearance of the available true chromaticity information. We use the β colour opponency channel of the lαβ colour space as the domain in which information from the monochromatic input is fused into the colour input by way of robust grayscale fusion. This is followed by an effective gradient structure visualisation step that enhances the visibility of monochromatic information in the final colour fused image. Images fused using this method preserve their natural appearance and chromaticity better than those produced by conventional methods, while clearly encoding structural information from the monochromatic input. This is demonstrated on a number of well-known true colour fusion examples and confirmed by the results of subjective trials on data from several colour fusion scenarios.

Introduction

The goal of image fusion can be broadly defined as the representation of the visual information contained in a number of input images in a single fused image without distortion or loss of information. In practice, however, representing all available information from multiple inputs in a single image is almost impossible, and fusion is generally a data reduction task. One of the sensors usually provides a true colour image whose data dimensions are, by definition, already fully populated by spatial and chromatic information. Fusing such images with information from monochromatic inputs in a conventional manner can severely affect the natural appearance of the fused image. This is a difficult problem and partly the reason why colour fusion has received only a fraction of the attention given to the better-behaved grayscale fusion, even long after colour sensors became widespread.

Fusion method

Humans tend to see colours as contrasts between opponent colours, and the visibility of structures from the monochrome input can be improved when they are used to encode a single HVS colour dimension consistently. The lαβ colour system effectively decorrelates the colour opponency and intensity channels, so manipulating one causes no visible changes in the others. Colour fusion can therefore be achieved by fusing one of the colour opponency channels with the monochrome image. We use Laplacian pyramid fusion, known to be one of the most robust monochrome fusion methods available. The Laplacian, also known as the DOLP (difference of low-pass) pyramid, is a reversible multiresolution representation that expresses the image through a series of sub-band images of decreasing resolution and increasing scale, whose coefficients broadly express fine detail contrast at that location and scale. A simple fusion strategy creates a new fused pyramid by copying the input coefficient with the largest absolute value at each location.
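For concreteness, the following sketch shows Laplacian (DOLP) pyramid construction and select-max fusion in Python with OpenCV and NumPy. The function names, the default number of levels, and the averaging of the low-pass residual are our illustrative choices, not specifics given in the text.

```python
# Sketch: Laplacian (DOLP) pyramid fusion with a select-max rule.
# Inputs are assumed to be single-channel float arrays of equal size.
import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    """Decompose img into `levels` detail bands plus a coarse residual."""
    pyramid = []
    current = img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        pyramid.append(current - up)   # difference of low-pass = detail band
        current = down
    pyramid.append(current)            # coarse low-pass residual
    return pyramid

def reconstruct(pyramid):
    """Invert the decomposition by upsampling and adding detail back."""
    current = pyramid[-1]
    for band in reversed(pyramid[:-1]):
        current = cv2.pyrUp(current, dstsize=(band.shape[1], band.shape[0]))
        current = current + band
    return current

def fuse_select_max(a, b, levels=4):
    """Per-location select-max on the detail bands of two inputs."""
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    fused = [np.where(np.abs(ba) >= np.abs(bb), ba, bb)
             for ba, bb in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))  # averaging the residual is a common
                                           # choice; the text does not specify it
    return reconstruct(fused)
```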

The β channel of the lαβ space represents the red-green opponency, and we base our fusion on encoding this channel of the colour input with the monochrome image. This causes warmer objects (lighter in IR) to appear redder in the fused image. The fusion proceeds in several steps, sketched in code below. Initially we transform the colour RGB input into an lαβ image. Monochrome fusion is then performed by decomposing the β image and the normalised monochrome input into their Laplacian pyramid representations. We use the select-max strategy to construct the fused pyramid, but apply it only to a small number of the highest resolution pyramid sub-bands. Larger scale features in the lower resolution sub-band images, which constitute the natural context of the scene, are sourced entirely from the colour input (β). This ensures that well defined smaller objects from the IR image are transferred robustly into the fused image, together with the broad scene context from the colour input. Reconstructing the fused pyramid produces the fused β image, which is combined with the original l and α channels of the colour input to produce the fused RGB colour image.
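The sketch below assembles the described pipeline as a non-authoritative illustration. The RGB to lαβ conversion matrices are quoted from Reinhard et al. (2001), following Ruderman et al. (1998), and should be checked against those papers; the monochrome normalisation (matching the mean and standard deviation of the β channel) and the number of detail sub-bands taken from the monochrome input are our assumptions. `laplacian_pyramid` and `reconstruct` are the helpers from the previous sketch.

```python
# Sketch: beta-channel colour fusion in the l-alpha-beta space.
import numpy as np

# RGB -> LMS matrix and LMS -> l-alpha-beta transform (Reinhard et al., 2001).
RGB2LMS = np.array([[0.3811, 0.5783, 0.0402],
                    [0.1967, 0.7244, 0.0782],
                    [0.0241, 0.1288, 0.8444]])
LMS2LAB = np.diag([1/np.sqrt(3), 1/np.sqrt(6), 1/np.sqrt(2)]) @ \
          np.array([[1, 1, 1], [1, 1, -2], [1, -1, 0]])

def rgb_to_lab(rgb):                      # rgb: HxWx3 floats in (0, 1]
    lms = rgb.reshape(-1, 3) @ RGB2LMS.T
    lab = np.log10(np.clip(lms, 1e-6, None)) @ LMS2LAB.T  # log LMS -> opponency
    return lab.reshape(rgb.shape)

def lab_to_rgb(lab):
    lms = 10.0 ** (lab.reshape(-1, 3) @ np.linalg.inv(LMS2LAB).T)
    rgb = lms @ np.linalg.inv(RGB2LMS).T
    return np.clip(rgb.reshape(lab.shape), 0.0, 1.0)

def beta_fusion(rgb, mono, detail_levels=3, total_levels=6):
    lab = rgb_to_lab(rgb)
    beta = lab[..., 2]
    # Normalise the monochrome input to the beta channel's range
    # (our assumption; the text only says "normalised monochrome").
    mono_n = (mono - mono.mean()) / (mono.std() + 1e-6) * beta.std() + beta.mean()
    p_beta = laplacian_pyramid(beta, total_levels)
    p_mono = laplacian_pyramid(mono_n, total_levels)
    fused = list(p_beta)  # coarse scene context comes entirely from the colour input
    for k in range(detail_levels):  # only the finest sub-bands take IR detail
        fused[k] = np.where(np.abs(p_mono[k]) >= np.abs(p_beta[k]),
                            p_mono[k], p_beta[k])
    lab[..., 2] = reconstruct(fused)
    return lab_to_rgb(lab)
```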

Edge Emphasis

We encode only the β channel, which carries only a fraction of the overall colour signal power (most of it is in the intensity channel), so the contrast of the monochrome image structures is still relatively modest in the fused image. Their visualisation can be improved using a relatively simple gradient outline enhancement. Initially, we extract gradient information from the monochrome image using 3x3 Sobel edge operators. The responses to the horizontal and vertical Sobel templates, sx and sy, are combined to evaluate the gradient magnitude at each location. To enhance the structure visualisation, we add this gradient magnitude image to the monochrome input prior to fusion. The enhanced monochrome image is well behaved, since the gradient filters used are linear, and is used directly as the input to monochrome fusion. The gradient magnitude image effectively captures the primal sketch of the scene, and encoding an opponency channel with this information improves the visualisation of the structural outline of the monochrome input in the colour fused image.
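A minimal sketch of this emphasis step, using OpenCV's standard Sobel operator; the blend weight `k` is our illustrative parameter, since the text simply adds the gradient magnitude to the monochrome input.

```python
# Sketch: gradient-outline emphasis of the monochrome input.
import cv2
import numpy as np

def edge_emphasised(mono, k=1.0):
    """Add the Sobel gradient magnitude to the monochrome input."""
    sx = cv2.Sobel(mono, cv2.CV_32F, 1, 0, ksize=3)  # horizontal template response
    sy = cv2.Sobel(mono, cv2.CV_32F, 0, 1, ksize=3)  # vertical template response
    grad_mag = np.sqrt(sx * sx + sy * sy)            # gradient magnitude
    return mono.astype(np.float32) + k * grad_mag
```

The emphasised image would then replace `mono` in the `beta_fusion` sketch above.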

Results and Conclusion

A new “β fusion” colour image fusion method is presented that both visualises important structural information from the monochrome input and preserves the natural appearance of the true colour input. Colour fusion is performed in the lαβ colour space, known to decorrelate the main colour opponencies seen by the human visual system. We chose the β channel, representing the red-green opponency of the true colour image, to encode structural information from the monochrome input, fusing the two using modified Laplacian pyramid fusion. The visualisation of important structures from the monochrome input can be further improved through a simple structure encoding step using its gradient information. The method extends naturally to video fusion. The proposed fusion methods produce colour fused images with significantly better visualisation of important information from the monochrome input while almost entirely preserving the natural appearance of the true colour input. This was demonstrated on a number of well-known colour fusion examples and measured using subjective trials on data from multiple surveillance scenarios.

References

Aguilar, M., Fay, D. A., Ross, W. D., Waxman, A. M., Ireland, D. B., Racamato, J. P., 1998, Real-time fusion of low-light CCD and uncooled IR imagery for color night vision, pp. 124-135, Enhanced and Synthetic Vision, Orlando, FL, USA, July 30.

Burt, P., Adelson, E., 1983, The Laplacian pyramid as a compact image code, IEEE Transactions on Communications, Volume 31(4), pp. 532-540.

Guangxin, L., Shuyan, X., 2009, An Efficient Color Transfer Method for Fusion of Multiband Nightvision Images, International Conference on Information Engineering and Computer Science, Wuhan, China, December 19-20.

Hogervorst, M. A., Toet, A., 2007, Fast and true-to-life application of daytime colours to night-time imagery, pp. 1-8, 10th International Conference on Information Fusion, Quebec, QC, Canada, July 9-12.

Hogervorst, M. A., Toet, A., 2010, Fast natural color mapping for night-time imagery, Information Fusion, Volume 11(2), pp. 69-77.

Huang, M., Leng, J., Xiang, C., 2008, A Study on IHS+WT and HSV+WT Methods of Image Fusion, pp. 665-668, International Symposium on Information Science and Engineering, Shanghai, China, December 20-22.

Jang, J. H., Ra, J. B., 2008, Pseudo-Color Image Fusion Based on Intensity-Hue-Saturation Color Space, pp. 366-371, IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, Seoul, Korea, August 20-22.

McDaniel, R. V., Scribner, D. A., Krebs, W. K., Warren, P., McCarley J., 1998, Image fusion for tactical applications, Infrared Technology and Applications, Volume 3436, pp. 685-695.

Petrović, V., 2001, Multisensor pixel-level image fusion, PhD Thesis, University of Manchester, UK.

Petrović, V., Xydeas, C., 1999, Computationally efficient pixel-level image fusion, pp. 177-184, Proceedings of Eurofusion99, Stratford-upon-Avon, UK, October.

Petrović, V., Xydeas, C., 2004, Gradient Based Multiresolution Image Fusion, IEEE Transactions on Image Processing, Volume 13(2), pp. 228-237.

Petrović, V., Zrnić, B., 2001, Multisenzorsko sjedinjavanje informacija za otkrivanje, praćenje i identifikaciju ciljeva (Multisensor information fusion for target detection, tracking and identification), TELFOR 2001, Belgrade, November 20-22.

Reinhard, E., Ashikhmin, M., Gooch, B., Shirley, P., 2001, Color transfer between images, IEEE Computer Graphics and Applications, Volume 21(5), pp. 34-41.

Ruderman, D. L., Cronin, T. W., Chiao, C. C., 1998, Statistics of cone responses to natural images: implications for visual coding, Journal of the Optical Society of America A, Volume 15(8), pp. 2036-2045.

Shiming, S., Lingxue, W., Wei-qi, J., Yuanmeng, Z., 2007, Color night vision based on color transfer in YUV color space, International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, Beijing, China, September 09.

Sonka, M., Hlavac, V., Boyle, R., 1998, Image Processing, Analysis and Machine Vision, PWS Publishing, Pacific Grove.

The Online Resource for Research in Image Fusion, [Internet], Available at: <http://www.imagefusion.org>, Accessed: March 8, 2012.

Toet, A., 1989, Image fusion by a ratio of low-pass pyramid, Pattern Recognition Letters, Volume 9, pp. 245-253.

Toet, A., 2003a, Color the night: Applying Daytime Colors to Nighttime Imagery, pp. 168-178, Enhanced and Synthetic Vision, Orlando, FL, September 23.

Toet, A., 2003b, Natural colour mapping for multiband nightvision imagery, Information fusion, Volume 4(3), pp. 155-166.

Toet, A., 2003c, Color Image Fusion for Concealed Weapon Detection, pp. 372-379, Sensors, and Command, Control, Communications, and Intelligence (C3I) Technologies for Homeland Defense and Law Enforcement II, Orlando, FL, September 23.

Toet, A., Hogervorst, M. A., 2009, Towards an Optimal Color Representation for Multiband Nightvision Systems, pp. 1417-1423, 12th International Conference on Information Fusion, Seattle, WA, USA, July 6-9.

Toet, A., Walraven, J., 1996, New false color mapping for image fusion, Optical Engineering, Volume 35(3), pp. 650-658.

Waxman, A. M., Aguilar, M., Baxter, R. A., Fay, D. A., Ireland, D. B., Racamato, J. P., Ross, W. D., 1998a, Opponent-color fusion of multi-sensor imagery: visible, IR and SAR, Proceedings of IRIS Passive Sensors, Volume 1, pp. 43-61.

Waxman, A. M., Aguilar, M., Baxter, R. A., Fay, D. A., Ireland, D. B., Racamato, J. P., Ross, W. D., 1998b, Solid-state color night vision: fusion of low-light visible and thermal infrared imagery, Lincoln Laboratory Journal, Volume 11(1), pp. 41-60.

Zhang, J., Han, Y., Chan, B., Yuan, Y., Qian, Y., Qiu, Y., 2009, Real-time Color Image Fusion for Infrared and Low-light-level Cameras, International Symposium on Photoelectronic Detection and Imaging, Beijing, China, August 4.

Xue, Z., Blum, R. S., 2003, Concealed Weapon Detection Using Color Image Fusion, pp. 622-627, Proceedings of the Sixth International Conference on Information Fusion, Cairns, Queensland, Australia, July 8-11.

Published
2014/02/26
Section
Original Scientific Papers