GU Jia-lin, LV Heng-yi, LI Zhuo-xian, QIAO Shan-tong. Event deblurring via feature enhancement and lightweight attention[J]. Chinese Optics. doi: 10.37188/CO.2026-0011

Event deblurring via feature enhancement and lightweight attention

cstr: 32171.14.CO.2026-0011
Funds: Supported by the Jilin Province Science and Technology Development Plan (No. 20250201053GX)
Abstract: Single-frame image deblurring remains an inherently ill-posed problem. Furthermore, existing diffusion models suffer from high inference latency, while state space models lack sufficient cross-modal interaction capabilities. To overcome these limitations, we propose an end-to-end Event-fusion Multi-head Attention Network (EFMAN) that exploits high-frequency spatiotemporal priors from event cameras for high-quality image restoration. Specifically, a cross-modal adaptive attention mechanism is designed to precisely align asynchronous high-frequency event streams with synchronous RGB features in both the spatial and temporal dimensions, thereby compensating for exposure deficiencies. To mitigate the impact of inherent sensor noise, a Feature Enhancement Attention (FEA) module bolsters feature robustness against noise via global context modeling. Additionally, a Lightweight Channel-Spatial Attention (LCSA) module is integrated to adaptively recalibrate feature responses while substantially reducing computational redundancy. These components are optimized by a multidimensional joint loss function spanning the pixel, feature, and gradient domains, which synergistically enforces multi-scale constraints and ensures consistency between micro-textures and global topologies. Extensive experiments demonstrate that EFMAN significantly enhances deblurring performance while maintaining efficient inference. Compared to state-of-the-art methods, our approach achieves improvements of up to 1.19 dB in PSNR and 0.005 in SSIM on the GoPro dataset, and up to 0.38 dB and 0.003 on the REBlur dataset. By effectively addressing the challenges of multi-modal alignment and noise interference, EFMAN strikes an optimal balance between restoration quality and computational efficiency, making it well suited for clear image reconstruction in high-dynamic-range and rapid-motion scenarios.
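To make the channel-spatial recalibration idea concrete: the internals of the paper's LCSA module are not given in the abstract, so the sketch below follows the generic CBAM-style recipe it builds on (channel attention followed by spatial attention, cf. Woo et al. [29]), with the learned MLP/convolution gates replaced by parameter-free pooling-plus-sigmoid gates. The function name and the gate simplification are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_spatial_attention(feat):
    """Simplified channel-then-spatial attention over a (C, H, W) feature map.

    Hypothetical sketch of the CBAM-style recipe: the learned MLP/conv
    gates of the original module are replaced here by parameter-free
    pooling + sigmoid gates to keep the example self-contained.
    """
    # Channel attention: squeeze the spatial dims by average and max pooling,
    # then gate each channel with a value in (0, 1).
    avg_c = feat.mean(axis=(1, 2))            # (C,)
    max_c = feat.max(axis=(1, 2))             # (C,)
    w_c = sigmoid(avg_c + max_c)              # (C,) channel gate
    feat = feat * w_c[:, None, None]          # recalibrate channels

    # Spatial attention: squeeze the channel dim the same way,
    # then gate each spatial location.
    avg_s = feat.mean(axis=0)                 # (H, W)
    max_s = feat.max(axis=0)                  # (H, W)
    w_s = sigmoid(avg_s + max_s)              # (H, W) spatial gate
    return feat * w_s[None, :, :]
```

Because both gates lie strictly in (0, 1), the module only rescales responses; the output keeps the input shape, which is why such a block can be dropped between existing layers with negligible overhead.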

     

  • [1]
    ZHANG K H, REN W Q, LUO W H, et al. Deep image deblurring: a survey[J]. International Journal of Computer Vision, 2022, 130(9): 2103-2130. doi: 10.1007/s11263-022-01633-5
    [2]
    NAH S, KIM T H, LEE K M. Deep multi-scale convolutional neural network for dynamic scene deblurring[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 2017: 257-265.
    [3]
    LUO Z W, GUSTAFSSON F K, ZHAO ZH, et al. Image restoration with mean-reverting stochastic differential equations[C]. Proceedings of the 40th International Conference on Machine Learning, JMLR, 2023: 957.
    [4]
    LIN X Q, HE J W, CHEN Z Y, et al. DiffBIR: toward blind image restoration with generative diffusion prior[C]. 18th European Conference on Computer Vision – ECCV 2024, Springer, 2024: 430-448.
    [5]
    WU H J, ZHANG M Q, HE L CH, et al. Enhancing diffusion model stability for image restoration via gradient management[C]. Proceedings of the 33rd ACM International Conference on Multimedia, Association for Computing Machinery, 2025: 10768-10777.
    [6]
    GUO H, LI J M, DAI T, et al. MambaIR: a simple baseline for image restoration with state-space model[C]. 18th European Conference on Computer Vision – ECCV 2024, Springer, 2024: 222-241.
    [7]
    LIU Y, TIAN Y J, ZHAO Y ZH, et al. VMamba: visual state space model[C]. Proceedings of the 38th International Conference on Neural Information Processing Systems, Curran Associates Inc., 2024: 3273.
    [8]
    ZHU L H, LIAO B CH, ZHANG Q, et al. Vision mamba: efficient visual representation learning with bidirectional state space model[C]. Proceedings of the 41st International Conference on Machine Learning, JMLR, 2024: 2584.
    [9]
    LI B Y, ZHAO H Y, WANG W X, et al. MaIR: a locality- and continuity-preserving mamba for image restoration[C]. IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE, 2025: 7491-7501.
    [10]
    SHI Y, XIA B, JIN X Y, et al. VmambaIR: visual state space model for image restoration[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2025, 35(6): 5560-5574. doi: 10.1109/TCSVT.2025.3530090
    [11]
    WANG Y F, LIAO K, ZHANG K, et al. Reconfigurable versatile integrated photonic computing chip[J]. eLight, 2025, 5(1): 20. doi: 10.1186/s43593-025-00098-6
    [12]
    FANG X Y, HU X N, LI B L, et al. Orbital angular momentum-mediated machine learning for high-accuracy mode-feature encoding[J]. Light: Science & Applications, 2024, 13(1): 49.
    [13]
    GALLEGO G, DELBRUCK T, ORCHARD G, et al. Event-based vision: a survey[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(1): 154-180. doi: 10.1109/TPAMI.2020.3008413
    [14]
    LICHTSTEINER P, POSCH C, DELBRUCK T. A 128×128 120 dB 15 μs latency asynchronous temporal contrast vision sensor[J]. IEEE Journal of Solid-State Circuits, 2008, 43(2): 566-576. doi: 10.1109/JSSC.2007.914337
    [15]
    方应红, 徐伟, 朴永杰, 等. 事件视觉传感器发展现状与趋势[J]. 液晶与显示, 2021, 36(12): 1664-1673. doi: 10.37188/CJLCD.2021-0149

    FANG Y H, XU W, PIAO Y J, et al. Development status and trend of event-based vision sensor[J]. Chinese Journal of Liquid Crystals and Displays, 2021, 36(12): 1664-1673. doi: 10.37188/CJLCD.2021-0149
    [16]
    柳长源, 曹青, 刘金凤. 遥感图像双模态融合去云方法[J]. 光学 精密工程, 2025, 33(18): 2996-3007. doi: 10.37188/OPE.20253318.2996

    LIU CH Y, CAO Q, LIU J F. A bimodal fusion method for remote sensing images to cloud removal[J]. Optics and Precision Engineering, 2025, 33(18): 2996-3007. doi: 10.37188/OPE.20253318.2996
    [17]
    CHEN C, SHI H, YANG Y, et al. Uncertainty-aware fusion for event-based deblurring[C]. ACM Multimedia, 2023. (查阅网上资料, 未找到本条文献信息, 请确认).
    [18]
    ZHANG X, YU L, YANG W. Unifying motion deblurring and frame interpolation with events[C]. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2022: 17744-17753.
    [19]
    SONG CH, HUANG Q X, BAJAJ C. E-CIR: event-enhanced continuous intensity recovery[C]. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2022: 7793-7802.
    [20]
    LIN X P, HUANG Y L, REN H W, et al. ClearSight: human vision-inspired solutions for event-based motion deblurring[C]. IEEE/CVF International Conference on Computer Vision, IEEE, 2025: 7462-7471.
    [21]
    XIAO Z Y, WANG X CH. Event-based video super-resolution via state space models[C]. 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2025: 12564-12574.
    [22]
    TULYAKOV S, GEHRIG D, GEORGOULIS S, et al. Time lens: event-based video frame interpolation[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2021: 16150-16159.
    [23]
    ZAMIR S W, ARORA A, KHAN S, et al. Restormer: efficient transformer for high-resolution image restoration[C]. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2022: 5718-5729.
    [24]
    YAO M, HU J K, ZHOU ZH K, et al. Spike-driven transformer[C]. Proceedings of the 37th International Conference on Neural Information Processing Systems, Curran Associates Inc. , 2023: 2798.
    [25]
    SUN L, SAKARIDIS C, LIANG J Y, et al. Event-based fusion for motion deblurring with cross-modal attention[C]. 17th European Conference on Computer Vision – ECCV 2022, Springer, 2022: 412-428.
    [26]
    吕建威, 钱锋, 韩昊男, 等. 结合光源分割和线性图像深度估计的夜间图像去雾[J]. 中国光学, 2022, 15(1): 34-44. doi: 10.37188/CO.2021-0114

    LV J W, QIAN F, HAN H N, et al. Nighttime image dehazing with a new light segmentation method and a linear image depth estimation model[J]. Chinese Optics, 2022, 15(1): 34-44. doi: 10.37188/CO.2021-0114
    [27]
    SUN L, ALFARANO A, DUAN P Q, et al. NTIRE 2025 challenge on event-based image deblurring: methods and results[C]. 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), IEEE, 2025: 1315-1332.
    [28]
    LI K CH, LI X H, WANG Y, et al. VideoMamba: state space model for efficient video understanding[C]. 18th European Conference on Computer Vision – ECCV 2024, Springer, 2024: 237-255.
    [29]
    WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module[C]. 15th European Conference on Computer Vision – ECCV 2018, Springer, 2018: 3-19.
    [30]
    王慧, 曹召良, 王军. 改进丰富卷积特征算法的液滴边缘检测模型[J]. 中国光学(中英文), 2024, 17(4): 886-895. doi: 10.37188/CO.2024-0019

    WANG H, CAO ZH L, WANG J. Improved droplet edge detection model based on RCF algorithm[J]. Chinese Optics, 2024, 17(4): 886-895. doi: 10.37188/CO.2024-0019
    [31]
    贺兴, 王磊, 张鹏超, 等. 基于多维注意力网络的图像超分辨率重建[J]. 液晶与显示, 2025, 40(7): 1056-1066. doi: 10.37188/CJLCD.2025-0058

    HE X, WANG L, ZHANG P CH, et al. Image super-resolution reconstruction based on multidimensional attention network[J]. Chinese Journal of Liquid Crystals and Displays, 2025, 40(7): 1056-1066. doi: 10.37188/CJLCD.2025-0058
    [32]
    PAN L Y, SCHEERLINCK C, YU X, et al. Bringing a blurry frame alive at high frame-rate with an event camera[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2019: 6813-6822.
    [33]
    KUPYN O, MARTYNIUK T, WU J R, et al. DeblurGAN-v2: deblurring (orders-of-magnitude) faster and better[C]. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), IEEE, 2019: 8877-8886.
    [34]
    ZAMIR S W, ARORA A, KHAN S, et al. Multi-stage progressive image restoration[C]. IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 2021: 14821-14831.
    [35]
    TAO X, GAO H Y, SHEN X Y, et al. Scale-recurrent network for deep image deblurring[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE, 2018: 8174-8182.
    [36]
    CHEN L Y, LU X, ZHANG J, et al. HINet: half instance normalization network for image restoration[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), IEEE, 2021: 182-192.
    [37]
    CUI Y N, REN W Q, YANG S N, et al. IRNeXt: rethinking convolutional network design for image restoration[C]. Proceedings of the 40th International Conference on Machine Learning, PMLR, 2023: 6545-6564.
    [38]
    CUI Y N, KNOLL A. Exploring the potential of channel interactions for image restoration[J]. Knowledge-Based Systems, 2023, 282: 111156. doi: 10.1016/j.knosys.2023.111156
    [39]
    CHEN L Y, CHU X J, ZHANG X Y, et al. Simple Baselines for Image Restoration[C]. 17th European Conference on Computer Vision – ECCV 2022, Springer, 2022: 17-33.
    [40]
    KONG L, DONG X, ZHANG J, et al. FFTformer: toward efficient image restoration via frequency domain learning[C]. CVPR, 2023. (查阅网上资料, 未找到本条文献信息, 请确认).
    [41]
    WANG P, HE J M, YAN Q S, et al. DiffEvent: event residual diffusion for image deblurring[C]. ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2024: 3450-3454.
    [42]
    YANG W, WU J J, MA J P, et al. Motion deblurring via spatial-temporal collaboration of frames and events[C]. Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, AAAI, 2024: 726.
    [43]
    SUN ZH J, FU X Y, HUANG L ZH, et al. Motion aware event representation-driven image deblurring[C]. 18th European Conference on Computer Vision – ECCV 2024, Springer, 2024: 418-435.
    [44]
    JING SH L, LV H Y, ZHAO Y CH, et al. Hyper lightweight neural networks towards spike-driven deep residual learning[J]. Knowledge-Based Systems, 2025, 327: 114099. doi: 10.1016/j.knosys.2025.114099


    Figures(9)  / Tables(7)
