PET-CT Dual-Modality Head and Neck Tumor Segmentation Based on an Improved U-Net
基金项目:

广东省 2019 年省拨高建“冲补强”专项项目(5041700175);教育部第二批新工科研究与实践项目(E-RGZN20201036)


ISA-DUNet: Inception Spatial-Attention Dense U-Net for Head and Neck Tumor Segmentation in PET-CT

Fund Project:

This work was supported by the High-level University Construction Special Project of Guangdong Province in 2019, China (5041700175), and the Second Batch of New Engineering Research and Practice Projects of the Ministry of Education, China (E-RGZN20201036).

Abstract:

In radiation oncology, the head and neck is considered one of the most difficult and time-consuming disease sites to contour. In current clinical practice, head and neck contouring is usually performed manually, which is laborious; an automatic medical image segmentation method is therefore highly desirable, as it saves time and labor and avoids the variation introduced by the subjective judgment of different physicians. In this work, dual-modality positron emission tomography / computed tomography (PET-CT) data are used to segment head and neck tumors, and the complementary information between the two modalities enables more accurate segmentation. The network is based on the standard U-Net architecture and is improved by adding an Inception module to the encoder and dense blocks and spatial attention to the decoder. On a head and neck tumor dataset, the proposed model is compared with several U-Net variants and outperforms them on multiple metrics: it achieves a Dice similarity coefficient of 0.782, a recall of 0.846, and a Jaccard index of 0.675, improvements of 6.8%, 13.4%, and 9.8% over the original U-Net, respectively, and its 95% Hausdorff distance of 5.661 is 1.616 lower than that of the original U-Net. These results show that the proposed Inception Spatial-Attention Dense U-Net (ISA-DUNet) effectively improves segmentation accuracy on head and neck tumor PET-CT images compared with the standard U-Net.
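The abstract only names the building blocks (an Inception module in the encoder, dense blocks and spatial attention in the decoder, and a PET-CT dual-modality input); the exact layer configuration of ISA-DUNet is not given here. The following PyTorch sketch is therefore an illustrative assumption of what such blocks can look like, not the authors' implementation; all channel counts, kernel sizes, the channel-stacked fusion of PET and CT, and the toy input shape are made up for the example.

```python
# Illustrative sketch of the kinds of blocks the abstract describes (Inception
# encoder block, dense decoder block, spatial attention). All hyperparameters
# below are assumptions, not the paper's configuration.
import torch
import torch.nn as nn


class InceptionBlock(nn.Module):
    """Parallel 1x1 / 3x3 / 5x5 convolutions plus a pooled branch, concatenated."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch_ch = out_ch // 4
        self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, branch_ch, kernel_size=5, padding=2)
        self.pool = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, out_ch - 3 * branch_ch, kernel_size=1),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(torch.cat(
            [self.b1(x), self.b3(x), self.b5(x), self.pool(x)], dim=1))


class SpatialAttention(nn.Module):
    """Re-weights each spatial location using a sigmoid map from channel mean/max."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn


class DenseBlock(nn.Module):
    """Each layer sees the concatenation of the input and all previous outputs."""
    def __init__(self, in_ch, growth=16, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True)))
            ch += growth
        self.out_ch = ch

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)


# Example: PET and CT slices stacked as a 2-channel input (one common fusion
# choice; how the paper fuses the modalities is not specified in the abstract).
x = torch.randn(1, 2, 128, 128)                           # (batch, PET+CT, H, W)
enc = InceptionBlock(2, 32)                               # encoder-side block
dec = nn.Sequential(DenseBlock(32), SpatialAttention())   # decoder-side blocks
print(dec(enc(x)).shape)                                  # torch.Size([1, 80, 128, 128])
```

In the full network, blocks like these would be repeated at several resolutions and connected with skip connections, following the U-Net encoder-decoder pattern described in the abstract.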

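For reference, the metrics quoted above follow their standard definitions. The sketch below shows one common way to compute them from binary masks with NumPy/SciPy; the function names and toy masks are illustrative assumptions, the paper's own evaluation code is not reproduced here, and edge cases such as empty masks are ignored.

```python
# Standard definitions of the reported metrics, computed from binary masks.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt


def dice(pred, gt):
    """Dice similarity coefficient: 2|P ∩ G| / (|P| + |G|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())


def recall(pred, gt):
    """Fraction of ground-truth voxels covered by the prediction."""
    return np.logical_and(pred, gt).sum() / gt.sum()


def jaccard(pred, gt):
    """Jaccard index: |P ∩ G| / |P ∪ G|."""
    return np.logical_and(pred, gt).sum() / np.logical_or(pred, gt).sum()


def _surface(mask):
    """Boundary voxels: the mask minus its erosion."""
    return np.logical_and(mask, ~binary_erosion(mask))


def hd95(pred, gt):
    """95th-percentile symmetric surface distance (95% Hausdorff distance)."""
    ps, gs = _surface(pred.astype(bool)), _surface(gt.astype(bool))
    d_pred_to_gt = distance_transform_edt(~gs)[ps]   # pred surface -> gt surface
    d_gt_to_pred = distance_transform_edt(~ps)[gs]   # gt surface -> pred surface
    return np.percentile(np.hstack([d_pred_to_gt, d_gt_to_pred]), 95)


# Toy example with two overlapping cubes.
pred = np.zeros((32, 32, 32), dtype=bool); pred[8:20, 8:20, 8:20] = True
gt = np.zeros((32, 32, 32), dtype=bool); gt[10:22, 10:22, 10:22] = True
print(dice(pred, gt), recall(pred, gt), jaccard(pred, gt), hd95(pred, gt))
```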
Cite this article

朱雅琳, 陈宇骞, 常青玲, 等. 基于改进 U-Net 的 PET-CT 双模态头颈部肿瘤分割[J]. 集成技术, 2023, 12(3): 94-104.

ZHU Yalin, CHEN Yuqian, CHANG Qingling, et al. ISA-DUNet: Inception Spatial-Attention Dense U-Net for Head and Neck Tumor Segmentation in PET-CT[J]. Journal of Integration Technology, 2023, 12(3): 94-104.

History
  • Online publication date: 2023-05-11