Reconstruction of multispectral images from spectrally coded light fields of flat scenes

  • Maximilian Schambach
    and Fernando Puente León

Published/Copyright: September 4, 2019

Abstract

We present a novel method to reconstruct multispectral images of flat objects from spectrally coded light fields as taken by an unfocused light field camera with a spectrally coded microlens array. In this sense, the spectrally coded light field camera is used as a multispectral snapshot imager, acquiring a multispectral datacube in a single exposure. The multispectral image, corresponding to the light field’s central view, is reconstructed by shifting the spectrally coded subapertures onto the central view according to their respective disparity. We assume that the disparity of the scene is approximately constant and non-zero. Since the spectral mask is identical for all subapertures, the missing spectral data of the central view are filled in from the shifted, spectrally coded subapertures. We investigate the reconstruction quality for different spectral masks and camera parameter sets optimized for real-life applications such as in-line production monitoring, for which the constant-disparity constraint naturally holds. For synthesized reference scenes with 16 color channels, we achieve a reconstruction PSNR of up to 51 dB.

Zusammenfassung

We present a novel method for reconstructing multispectral images of flat objects from spectrally coded light fields as acquired by a light field camera with a spectrally coded microlens array. In this sense, the spectrally coded light field camera corresponds to a multispectral snapshot camera. The multispectral image corresponding to the light field’s central view is reconstructed by mapping the spectrally coded subaperture images onto the central view according to their respective disparity. We assume that the disparity of the scene is constant and non-zero. Since the spectral coding is identical for all subaperture images, the missing spectral information of each pixel of the central view is filled in by the transformed spectrally coded subaperture images. We investigate the reconstruction quality for different spectral coding masks and camera parameters adapted to real-world applications, such as production monitoring, which naturally satisfy the constant-disparity constraint. Using synthetic reference data with 16 color channels, we achieve a reconstruction quality of up to 51 dB.
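To make the reconstruction principle described above concrete, the following minimal Python sketch illustrates it. It is not the authors' reference implementation; the array layout, the nearest-integer shift, and all names such as reconstruct_central_view are illustrative assumptions. The key property is that every sub-aperture view shares the same per-microlens spectral mask, so shifting the views onto the central view by the (assumed constant) disparity fills in the spectral channels missing at each central-view pixel.

import numpy as np


def reconstruct_central_view(lf, mask, disparity, num_channels):
    """Merge spectrally coded sub-aperture views into a multispectral image.

    lf           -- coded light field of shape (U, V, S, T); lf[u, v] is the
                    sub-aperture view (u, v) with one coded intensity per pixel.
    mask         -- spectral coding mask of shape (S, T); mask[s, t] is the
                    channel index sampled at microlens (s, t), identical for
                    all sub-apertures.
    disparity    -- assumed constant, non-zero disparity in pixels per unit
                    view offset (flat-scene assumption).
    num_channels -- number of spectral channels C.

    Returns an (S, T, C) datacube; entries reached by no shifted sample stay
    NaN and would need interpolation in a full pipeline.
    """
    U, V, S, T = lf.shape
    uc, vc = U // 2, V // 2                       # central view index
    acc = np.zeros((S, T, num_channels))          # accumulated intensities
    cnt = np.zeros((S, T, num_channels))          # number of samples per bin

    s_idx, t_idx = np.meshgrid(np.arange(S), np.arange(T), indexing="ij")
    for u in range(U):
        for v in range(V):
            # Pixel (s, t) of view (u, v) maps onto the central view at
            # (s + d*(u - uc), t + d*(v - vc)); round to the nearest pixel.
            s_dst = s_idx + int(round(disparity * (u - uc)))
            t_dst = t_idx + int(round(disparity * (v - vc)))
            valid = (s_dst >= 0) & (s_dst < S) & (t_dst >= 0) & (t_dst < T)
            ch = mask[valid]                      # channel carried along with the shift
            acc[s_dst[valid], t_dst[valid], ch] += lf[u, v][valid]
            cnt[s_dst[valid], t_dst[valid], ch] += 1.0

    cube = np.full((S, T, num_channels), np.nan)
    filled = cnt > 0
    cube[filled] = acc[filled] / cnt[filled]
    return cube

Sub-pixel disparities would call for interpolation instead of rounding, and the choice of spectral mask governs how quickly the channels at each central-view pixel are completed as views are merged, which is what the paper evaluates for different masks and camera parameter sets.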

About the authors

Maximilian Schambach

Maximilian Schambach received his B.Sc. and M.Sc. degrees in Physics from the Friedrich Schiller University Jena in 2013 and Leipzig University in 2016, respectively. He is currently working as a research associate at the Institute of Industrial Information Technology (IIIT) at the Karlsruhe Institute of Technology, Germany, where he is pursuing a Ph.D. degree. His current research interests include signal and image processing, computational imaging, and compressed sensing.

Fernando Puente León

Fernando Puente León is a Professor with the Department of Electrical Engineering and Information Technology at Karlsruhe Institute of Technology, Germany, where he heads the Institute of Industrial Information Technology (IIIT). From 2001 to 2002, he was with DS2, Valencia, Spain. From 2002 to 2003, he was a Postdoctoral Research Associate with the Institut für Mess- und Regelungstechnik, University of Karlsruhe. From 2003 to 2008, he was a Professor with the Department of Electrical Engineering and Information Technology, Technische Universität München, Germany. His research interests include image processing, automated visual inspection, information fusion, measurement technology, pattern recognition, and communications.

Acknowledgment

The authors acknowledge support by the state of Baden-Württemberg through bwHPC, a massively parallel computer.


Received: 2019-07-12
Accepted: 2019-08-02
Published Online: 2019-09-04
Published in Print: 2019-11-18

© 2019 Walter de Gruyter GmbH, Berlin/Boston
