Volume 13, Issue 6, Dec. 2020
Citation: ZHANG Shi-lei, CUI Yu, XING Mu-zeng, YAN Bin-bin. Light field imaging target ranging technology[J]. Chinese Optics, 2020, 13(6): 1332-1342. doi: 10.37188/CO.2020-0043

Light field imaging target ranging technology

doi: 10.37188/CO.2020-0043
Funds: Supported by the Joint Fund of the National Natural Science Foundation of China and the China Academy of Engineering Physics (No. U1730135)
More Information
  • Corresponding author: yanbinbin@nwpu.edu.cn
  • Received Date: 20 Mar 2020
  • Revised Date: 24 Apr 2020
  • Available Online: 10 Nov 2020
  • Publish Date: 01 Dec 2020
  • Abstract: At present, it is difficult to obtain target distance information in image guidance. In order to apply modern guidance laws to image guidance technology and improve its performance, a target ranging algorithm based on light field imaging is proposed. The algorithm decodes the raw light field data and extracts sub-aperture images from the original image. Bilinear interpolation is then performed on two selected sub-aperture images to improve their spatial resolution, and these two images are used as calibration data to obtain the corresponding intrinsic and extrinsic parameters. The parameters are used to rectify the sub-aperture images so that they are row-aligned and coplanar. A semi-global matching method is then applied to the rectified images to obtain the disparity of the target, and the target distance is recovered from the disparity through a three-dimensional transformation. Experimental results show that the average measurement error of the algorithm is 28.54 mm before the improvement and 14.96 mm after it. The algorithm can effectively extract target distance information in complex scenes and has both theoretical and practical value.
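
The processing chain described in the abstract (sub-aperture extraction, bilinear up-sampling, stereo calibration and rectification, semi-global matching, and disparity-to-distance conversion) maps closely onto standard stereo-vision primitives. The listing below is a minimal sketch of that flow in Python with OpenCV, not the authors' implementation: extract_subaperture is a hypothetical placeholder for the light-field decoding step, and the focal length f_px and baseline baseline_m are assumed to come from a prior stereo calibration of the two selected sub-aperture views.

import cv2
import numpy as np

def upsample_bilinear(img, scale=2):
    # Bilinear interpolation to raise the spatial resolution of a sub-aperture view.
    h, w = img.shape[:2]
    return cv2.resize(img, (w * scale, h * scale), interpolation=cv2.INTER_LINEAR)

def disparity_sgm(left_gray, right_gray, num_disp=64, block=5):
    # Semi-global matching between two rectified views (OpenCV's SGBM variant).
    sgbm = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=num_disp,      # must be a multiple of 16
        blockSize=block,
        P1=8 * block * block,         # smoothness penalties for single-channel input
        P2=32 * block * block,
        uniquenessRatio=10,
        speckleWindowSize=100,
        speckleRange=2,
    )
    # SGBM returns fixed-point disparities scaled by 16.
    return sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0

def disparity_to_distance(disparity, f_px, baseline_m):
    # Pinhole stereo model Z = f * B / d, valid only where disparity > 0.
    # f_px must be the focal length (in pixels) of the images actually matched,
    # i.e. after any up-sampling.
    z = np.full_like(disparity, np.nan, dtype=np.float32)
    valid = disparity > 0
    z[valid] = f_px * baseline_m / disparity[valid]
    return z

# Usage sketch: left/right are two rectified sub-aperture views of the light field.
# left  = upsample_bilinear(extract_subaperture(raw_lf, u=3, v=5))    # hypothetical decoder
# right = upsample_bilinear(extract_subaperture(raw_lf, u=5, v=5))
# disp  = disparity_sgm(left, right)
# dist  = disparity_to_distance(disp, f_px=1500.0, baseline_m=0.002)  # assumed calibration values

OpenCV's StereoSGBM is used here only as a stand-in for the semi-global matching step named in the abstract; the paper's own decoding, calibration, and rectification of the sub-aperture images are not reproduced.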

     

  • [1]
    姚秀娟, 彭晓乐, 张永科. 几种精确制导技术简述[J]. 激光与红外,2006,36(5):338-340. doi: 10.3969/j.issn.1001-5078.2006.05.002

    YAO X J, PENG X L, ZHANG Y K. Brief descriptions of precision guidance technology[J]. Laser &Infrared, 2006, 36(5): 338-340. (in Chinese) doi: 10.3969/j.issn.1001-5078.2006.05.002
    [2]
    胡林亭, 李佩军, 姚志军. 提高外场重频激光光斑测量距离的研究[J]. 液晶与显示,2006,31(12):1137-1142.

    HU L T, LI P J, YAO ZH J. Improvement of the measuring distance of repetitive-frequency laser spot in field[J]. Chinese Journal of Liquid Crystals and Displays, 2006, 31(12): 1137-1142. (in Chinese)
    [3]
    黄继鹏, 王延杰, 孙宏海. 激光光斑位置精确测量系统[J]. 光学 精密工程,2013,21(4):841-848. doi: 10.3788/OPE.20132104.0841

    HUANG J P, WANG Y J, SUN H H. Precise position measuring system for laser spots[J]. Optics and Precision Engineering, 2013, 21(4): 841-848. (in Chinese) doi: 10.3788/OPE.20132104.0841
    [4]
    谢艳新. 基于LatLRR和PCNN的红外与可见光融合算法[J]. 液晶与显示,2019,34(4):423-429. doi: 10.3788/YJYXS20193404.0423

    XIE Y X. Infrared and visible fusion algorithm based on latLRR and PCNN[J]. Chinese Journal of Liquid Crystals and Displays, 2019, 34(4): 423-429. (in Chinese) doi: 10.3788/YJYXS20193404.0423
    [5]
    赵战民, 朱占龙, 王军芬. 改进的基于灰度级的模糊C均值图像分割算法[J]. 液晶与显示,2020,35(5):499-507. doi: 10.3788/YJYXS20203505.0499

    ZHAO ZH M, ZHU ZH L, WANG J F. Improved fuzzy C-means algorithm based on gray-level for image segmentation[J]. Chinese Journal of Liquid Crystals and Displays, 2020, 35(5): 499-507. (in Chinese) doi: 10.3788/YJYXS20203505.0499
    [6]
    冯维, 吴贵铭, 赵大兴, 等. 多图像融合Retinex用于弱光图像增强[J]. 光学 精密工程,2020,28(3):736-744. doi: 10.3788/OPE.20202803.0736

    FENG W, WU G M, ZHAO D X, et al. Multi images fusion Retinex for low light image enhancement[J]. Optics and Precision Engineering, 2020, 28(3): 736-744. (in Chinese) doi: 10.3788/OPE.20202803.0736
    [7]
    YANG J C, EVERETT M, BUEHLER C. A real-time distributed light field camera[C]. Proceedings of the 13th Eurographics Workshop on Rendering, ACM, 2002: 77-86.
    [8]
    NG R. Digital light field photography[D]. California: Stanford University, 2006: 38-50.
    [9]
    计吉焘, 翟雨生, 吴志鹏, 等. 基于周期性光栅结构的表面等离激元探测[J]. 光学 精密工程,2020,28(3):526-534. doi: 10.3788/OPE.20202803.0526

    JI J T, ZHAI Y SH, WU ZH P, et al. Detection of surface plasmons based on periodic grating structure[J]. Optics and Precision Engineering, 2020, 28(3): 526-534. (in Chinese) doi: 10.3788/OPE.20202803.0526
    [10]
    于洁, 李鹏涛, 王春华, 等. RGBW液晶显示中的像素极性排布方式解析[J]. 液晶与显示,2020,35(5):444-448. doi: 10.3788/YJYXS20203505.0444

    YU J, LI P T, WANG CH H, et al. Pixel polarity arrangement analysis of RGBW LCD module[J]. Chinese Journal of Liquid Crystals and Displays, 2020, 35(5): 444-448. (in Chinese) doi: 10.3788/YJYXS20203505.0444
    [11]
    王江南, 丁磊, 倪婷, 等. 基于微结构阵列基板的高效顶发射OLED器件[J]. 液晶与显示,2019,34(8):725-732. doi: 10.3788/YJYXS20193408.0725

    WANG J N, DING L, NI T, et al. High-efficiency top-emitting OLEDs based on microstructure array substrate[J]. Chinese Journal of Liquid Crystals and Displays, 2019, 34(8): 725-732. (in Chinese) doi: 10.3788/YJYXS20193408.0725
    [12]
    解培月, 杨建峰, 薛彬, 等. 基于矩阵变换的光场成像及重聚焦模型仿真[J]. 光子学报,2017,46(5):0510001. doi: 10.3788/gzxb20174605.0510001

    XIE P Y, YANG J F, XUE B, et al. Simulation of light field imaging and refocusing models based on matrix transformation[J]. Acta Photonica Sinica, 2017, 46(5): 0510001. (in Chinese) doi: 10.3788/gzxb20174605.0510001
    [13]
    张春萍, 王庆. 光场相机成像模型及参数标定方法综述[J]. 中国激光,2016,43(6):0609004. doi: 10.3788/CJL201643.0609004

    ZHANG CH P, WANG Q. Survey on imaging model and calibration of light field camera[J]. Chinese Journal of Lasers, 2016, 43(6): 0609004. (in Chinese) doi: 10.3788/CJL201643.0609004
    [14]
    LIN X, RIVENSON Y, YARDIMCI N T, et al. All-optical machine learning using diffractive deep neural networks[J]. Science, 2018, 361(6406): 1004-1008. doi: 10.1126/science.aat8084
    [15]
    YAN T, WU J M, ZHOU T K, et al. Fourier-space diffractive deep neural network[J]. Physical Review Letters, 2019, 123(2): 023901. doi: 10.1103/PhysRevLett.123.023901
    [16]
    SHIN C, JEON H G, YOON Y, et al.. EPINET: a fully-convolutional neural network using epipolar geometry for depth from light field images[C]. Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE, 2018: 4748-4757.
    [17]
    PENG J Y, XIONG ZH W, LIU D, et al.. Unsupervised depth estimation from light field using a convolutional neural network[C]. Proceedings of 2018 International Conference on 3D Vision, IEEE, 2018: 295-303.
    [18]
    ZHOU T H, TUCKER R, FLYNN J, et al. Stereo magnification: learning view synthesis using multiplane images[J]. ACM Transactions on Graphics, 2018, 37(4): 65.
    [19]
    YEUNG H W F, HOU J H, CHEN J, et al.. Fast light field reconstruction with deep coarse-to-fine modeling of spatial-angular clues[C]. Proceedings of the 15th European Conference on Computer Vision, Springer, 2018: 137-152.
    [20]
    ZHANG ZH Y. A flexible new technique for camera calibration[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(11): 1330-1334. doi: 10.1109/34.888718