
Unsupervised masked cycle-adversarial network for cellular virtual staining

LIN Jun-hao, ZHANG Yun-fei, CHEN Shao-wei, ZHANG Guo-xun, XIE Hao

Citation: LIN Jun-hao, ZHANG Yun-fei, CHEN Shao-wei, ZHANG Guo-xun, XIE Hao. Unsupervised masked cycle-adversarial network for cellular virtual staining[J]. Chinese Optics. doi: 10.37188/CO.2026-0021


cstr: 32171.14.CO.2026-0021
Funds: Supported by the National Natural Science Foundation of China, Young Scientists Fund (Category B) (No. 62422515)
    Biographies:

    LIN Jun-hao (2000—), male, born in Tongling, Anhui Province. M.S. candidate; received his B.S. degree from Fujian Agriculture and Forestry University in 2022. His research focuses on computational microscopy imaging. E-mail: 202312490675@nuist.edu.cn

    ZHANG Guo-xun (1997—), male, born in Xinxiang, Henan Province. Ph.D.; received his Ph.D. degree from Tsinghua University in 2024. His research focuses on intelligence-enhanced microscopy imaging. E-mail: zgx31415@gmail.com

    XIE Hao (1988—), male, born in Hangzhou, Zhejiang Province. Ph.D., associate research fellow, doctoral supervisor; received his Ph.D. degree from Peking University in 2017. His research focuses on optical microscopy imaging. E-mail: xiehao@iphy.ac.cn

  • CLC number: TP394.1; TH691.9

Unsupervised masked cycle-adversarial network for cellular virtual staining

  • Abstract:

    Virtual staining uses deep learning to convert label-free images into fluorescence-specific images, greatly reducing the complexity and phototoxicity of live-cell imaging and thereby enabling multi-channel, high-throughput, long-term, high-resolution imaging of significant value to biomedical research. Most existing methods rely on supervised learning with paired data. To reduce this dependence on paired data and further improve the quality of the generated images, this paper proposes MVS-CycleGAN, an unsupervised virtual staining framework that incorporates a masked self-supervision mechanism. The method requires no paired images: a random-mask reconstruction task occludes parts of the input image and forces the network to complete them from semantic cues, so that the model captures both the global morphology and the local texture of the target domain. This imposes an effective semantic constraint and alleviates the semantic drift commonly seen when conventional unsupervised models perform cross-domain translation. Experiments on three cell datasets show that MVS-CycleGAN outperforms conventional methods overall: FSIM reaches 0.784 and 0.565 on BJ-5ta cell membranes and nuclei, 0.854/0.830 on HEK293T, and 0.657/0.740 on Neuromast (improvements of 1.03%, 9.50%, 1.07%, 0.85%, 1.08%, and 5.56%, respectively). Downstream segmentation experiments further confirm that the virtually stained images are effective for quantitative analysis. These results suggest that the proposed method offers a practical route for applying virtual staining in diverse biomedical scenarios.
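The masked self-supervision described in the abstract can be sketched in a few lines. This is a minimal illustration under assumed design choices (patch size, mask ratio, zero-filling, an L1 completion loss), not the authors' implementation:

```python
import numpy as np

def random_patch_mask(image, patch=16, ratio=0.5, seed=None):
    """Occlude a random subset of non-overlapping patches of a 2D image.

    Returns the occluded image and a boolean mask that is True on the
    hidden pixels -- the regions the generator must complete from context.
    Assumes the patch size divides both image dimensions.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    hide = rng.random((h // patch, w // patch)) < ratio   # patches to hide
    mask = np.kron(hide.astype(np.uint8),
                   np.ones((patch, patch), np.uint8)).astype(bool)
    masked = image.copy()
    masked[mask] = 0.0                                    # zero-fill occlusions
    return masked, mask

def masked_l1_loss(reconstruction, target, mask):
    """L1 error restricted to the occluded pixels, so the network is rewarded
    only for inpainting hidden content, not for copying visible pixels."""
    return float(np.abs(reconstruction - target)[mask].mean())

# Toy usage: a perfect completion of the hidden regions scores zero.
img = np.random.default_rng(0).random((64, 64)).astype(np.float32)
masked, mask = random_patch_mask(img, patch=16, ratio=0.5, seed=1)
print(masked_l1_loss(img, img, mask))
```

Restricting the loss to the masked pixels is what forces the model to rely on surrounding semantic context, which is the constraint the abstract credits with reducing semantic drift.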

     

  • Figure 1.  Overall architecture and training framework of the proposed MVS-CycleGAN for unpaired virtual staining. (a) Bright-field image datasets from different cell types, including BJ-5ta human fibroblasts, HEK293T human embryonic kidney cells, and zebrafish neuromasts; (b) schematic of the training framework; (c) cycle-consistency constraint mechanism; (d) masked self-supervised learning module

    Figure 2.  Network architecture. (a) Generator; (b) discriminator

    Figure 3.  Virtual staining of cell membranes and nuclei in BJ-5ta, HEK293T and Neuromast cells produced by different methods. Comparisons of bright-field images, ground-truth fluorescence images, and virtual staining results are shown for BJ-5ta cell membranes (a) and nuclei (b), scale bar: 50 µm; HEK293T cell membranes (c) and nuclei (d), scale bar: 50 µm; and Neuromast cell membranes (e) and nuclei (f), scale bar: 10 µm

    Figure 4.  Comparison of virtual staining metrics for BJ-5ta human fibroblast cell membranes

    Figure 5.  Comparison of virtual staining metrics for zebrafish neuromast cell nuclei

    Figure 6.  Cell segmentation results from bright-field and virtually stained images. Scale bars: 50 µm (BJ-5ta and HEK293T cells); 10 µm (zebrafish neuromast)

    Table 1.  Pixel error and structural similarity metrics for virtual staining of cells and organoids

    Model           Cell type             PSNR    RMSE    SSIM
    MVS-CycleGAN    BJ-5ta membrane       17.27   0.138   0.582
    CycleGAN        BJ-5ta membrane       16.96   0.143   0.550
    MVS-CycleGAN    BJ-5ta nuclei         13.40   0.215   0.724
    CycleGAN        BJ-5ta nuclei         12.96   0.226   0.703
    MVS-CycleGAN    HEK293T membrane      20.72   0.094   0.555
    CycleGAN        HEK293T membrane      20.60   0.095   0.485
    MVS-CycleGAN    HEK293T nuclei        19.80   0.106   0.763
    CycleGAN        HEK293T nuclei        19.48   0.110   0.763
    MVS-CycleGAN    Neuromast membrane    13.91   0.206   0.589
    CycleGAN        Neuromast membrane    13.40   0.218   0.522
    MVS-CycleGAN    Neuromast nuclei      13.07   0.224   0.345
    CycleGAN        Neuromast nuclei      12.48   0.240   0.346
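For images normalized to a unit intensity range, the PSNR and RMSE columns in Table 1 are two views of the same pixel error, related by PSNR = 20·log10(1/RMSE). A minimal sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def rmse(pred, target):
    """Root-mean-square pixel error between two images."""
    return float(np.sqrt(np.mean((pred - target) ** 2)))

def psnr(pred, target, data_range=1.0):
    """Peak signal-to-noise ratio in dB; data_range is the peak intensity."""
    return float(20.0 * np.log10(data_range / rmse(pred, target)))

# A uniform error of 0.1 on unit-range images gives an RMSE of 0.1
# and a PSNR of 20 dB.
target = np.zeros((32, 32))
pred = target + 0.1
print(rmse(pred, target), psnr(pred, target))
```

As a sanity check against the table, an RMSE of 0.138 corresponds to about 17.2 dB, consistent with the reported 17.27 dB for BJ-5ta membranes up to rounding of the RMSE.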

    Table 2.  Structural similarity and perceptual quality metrics for virtual staining of cells and organoids

    Model           Cell type             FSIM    LPIPS   VIF
    MVS-CycleGAN    BJ-5ta membrane       0.784   0.337   0.0820
    CycleGAN        BJ-5ta membrane       0.776   0.418   0.0824
    MVS-CycleGAN    BJ-5ta nuclei         0.565   0.474   0.0364
    CycleGAN        BJ-5ta nuclei         0.516   0.521   0.0434
    MVS-CycleGAN    HEK293T membrane      0.854   0.184   0.0827
    CycleGAN        HEK293T membrane      0.845   0.191   0.0797
    MVS-CycleGAN    HEK293T nuclei        0.830   0.171   0.0433
    CycleGAN        HEK293T nuclei        0.823   0.183   0.0408
    MVS-CycleGAN    Neuromast membrane    0.657   0.401   0.1573
    CycleGAN        Neuromast membrane    0.650   0.412   0.0852
    MVS-CycleGAN    Neuromast nuclei      0.740   0.253   0.0906
    CycleGAN        Neuromast nuclei      0.701   0.296   0.0723
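The six relative improvements quoted in the abstract follow directly from the FSIM column of Table 2; a quick arithmetic check:

```python
# FSIM pairs (MVS-CycleGAN, CycleGAN) taken from Table 2.
fsim = {
    "BJ-5ta membrane":    (0.784, 0.776),
    "BJ-5ta nuclei":      (0.565, 0.516),
    "HEK293T membrane":   (0.854, 0.845),
    "HEK293T nuclei":     (0.830, 0.823),
    "Neuromast membrane": (0.657, 0.650),
    "Neuromast nuclei":   (0.740, 0.701),
}
for name, (ours, baseline) in fsim.items():
    gain = 100.0 * (ours - baseline) / baseline   # relative gain in percent
    print(f"{name}: +{gain:.2f}%")
# Prints gains of 1.03%, 9.50%, 1.07%, 0.85%, 1.08% and 5.56%,
# matching the figures quoted in the abstract.
```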
Figures (6) / Tables (2)
Publication history
  • Received:  2026-02-12
  • Accepted:  2026-03-31
  • Published online:  2026-05-08
