Parallel matters: Efficient polyp segmentation with parallel structured feature augmentation modules

Qingqing Guo, Xianyong Fang, Kaibing Wang, Yuqing Shi, Linbo Wang, Enming Zhang, Zhengyi Liu

Research output: Contribution to journal › Article › peer-review

Abstract

The large variation in polyp sizes and shapes, together with the close resemblance of polyps to their surroundings, calls for features with long-range information at rich scales and strong discrimination. This article proposes two parallel structured modules for building such features. One is the Transformer Inception module (TI), which applies Transformers with different receptive fields in parallel to the input features and thus enriches them with long-range information at more scales. The other is the Local-Detail Augmentation module (LDA), which applies spatial and channel attention in parallel to each block and thus locally augments the features along two complementary dimensions for more object details. Integrating TI and LDA, a new Transformer-encoder-based framework, Parallel-Enhanced Network (PENet), is proposed, where LDA is adopted twice in a coarse-to-fine manner for accurate prediction. PENet efficiently segments polyps of different sizes and shapes without interference from the background tissues. Experimental comparisons with state-of-the-art methods show its merits.
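The abstract's parallel-branch idea can be illustrated with a minimal sketch: two attention branches (channel and spatial) computed side by side on the same feature block and fused with the identity path. This is an assumption-laden illustration only; the module name `LocalDetailAugmentation`, the `reduction` parameter, and the fusion by summation are hypothetical choices, not the authors' released implementation.

```python
# Hypothetical sketch of a parallel local-augmentation block: spatial and channel
# attention run in parallel on the same feature map, then fused with the input.
import torch
import torch.nn as nn

class LocalDetailAugmentation(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel branch: squeeze spatial dims, excite per-channel weights.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial branch: pool over channels, predict a per-pixel weight map.
        self.spatial_att = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The two attention branches see the same input in parallel ...
        ca = x * self.channel_att(x)
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        sa = x * self.spatial_att(pooled)
        # ... and their outputs are fused with the identity path.
        return x + ca + sa

if __name__ == "__main__":
    feat = torch.randn(2, 64, 44, 44)               # a feature block from the encoder
    print(LocalDetailAugmentation(64)(feat).shape)  # torch.Size([2, 64, 44, 44])
```

The TI module described in the abstract follows the same parallel pattern at a coarser level, running Transformer branches with different receptive fields side by side; its exact configuration is given in the paper, not here.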

Original language: English
Pages (from-to): 2503-2515
Number of pages: 13
Journal: IET Image Processing
Volume: 17
Issue number: 8
DOIs
Publication status: Published - 2023

Subject classification (UKÄ)

  • Computer Vision and Robotics (Autonomous Systems)
  • Bioinformatics (Computational Biology)

Free keywords

  • biomedical imaging
  • computer vision
  • image segmentation
