
UNet++: A Nested U-Net Architecture for Medical Image Segmentation.

Abstract

In this paper, we present UNet++, a new, more powerful architecture for medical image segmentation. Our architecture is essentially a deeply-supervised encoder-decoder network where the encoder and decoder sub-networks are connected through a series of nested, dense skip pathways. The re-designed skip pathways aim at reducing the semantic gap between the feature maps of the encoder and decoder sub-networks. We argue that the optimizer would deal with an easier learning task when the feature maps from the decoder and encoder networks are semantically similar. We have evaluated UNet++ in comparison with U-Net and wide U-Net architectures across multiple medical image segmentation tasks: nodule segmentation in the low-dose CT scans of chest, nuclei segmentation in the microscopy images, liver segmentation in abdominal CT scans, and polyp segmentation in colonoscopy videos. Our experiments demonstrate that UNet++ with deep supervision achieves an average IoU gain of 3.9 and 3.4 points over U-Net and wide U-Net, respectively.
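The nested skip pathways described above are, at the wiring level, index bookkeeping: in the paper's notation, each node X^{i,j} (depth i, skip-pathway position j) concatenates the outputs of all preceding same-depth nodes X^{i,0}, ..., X^{i,j-1} with the upsampled output of X^{i+1,j-1}. A minimal sketch of that connectivity in plain Python (no deep-learning framework; the function name and the "skip"/"up"/"down" labels are illustrative, not from the paper):

```python
def unetpp_inputs(i, j):
    """Return the inputs feeding node X^{i,j} in the UNet++ grid.

    j == 0 nodes lie on the encoder backbone: each is fed by
    downsampling X^{i-1,0} (the top-left node X^{0,0} takes the image).
    For j > 0, the node concatenates every same-depth predecessor
    X^{i,0..j-1} (dense skip connections) with the upsampled output
    of the node below and to the left, X^{i+1,j-1}.
    """
    if j == 0:
        return [] if i == 0 else [("down", (i - 1, 0))]
    same_depth = [("skip", (i, k)) for k in range(j)]
    return same_depth + [("up", (i + 1, j - 1))]

# Example: node X^{0,2} concatenates X^{0,0}, X^{0,1}, and up(X^{1,1})
print(unetpp_inputs(0, 2))
# → [('skip', (0, 0)), ('skip', (0, 1)), ('up', (1, 1))]
```

In a real implementation each tuple would be a feature map and the concatenation would be followed by a convolution block; deep supervision then attaches a 1x1 convolution loss head to each top-row node X^{0,j}.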

Authors

Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, and Jianming Liang (Arizona State University)

Pub Type(s)

Journal Article

Language

eng

PubMed ID

32613207

Citation

Zhou, Zongwei, et al. "UNet++: a Nested U-Net Architecture for Medical Image Segmentation." Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support : 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, Held in Conjunction With MICCAI 2018, Granada, Spain, S..., vol. 11045, 2018, pp. 3-11.
Zhou Z, Siddiquee MMR, Tajbakhsh N, et al. UNet++: A Nested U-Net Architecture for Medical Image Segmentation. Deep Learn Med Image Anal Multimodal Learn Clin Decis Support (2018). 2018;11045:3-11.
Zhou, Z., Siddiquee, M. M. R., Tajbakhsh, N., & Liang, J. (2018). UNet++: A Nested U-Net Architecture for Medical Image Segmentation. Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support : 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, Held in Conjunction With MICCAI 2018, Granada, Spain, S..., 11045, 3-11. https://doi.org/10.1007/978-3-030-00889-5_1
Zhou Z, et al. UNet++: a Nested U-Net Architecture for Medical Image Segmentation. Deep Learn Med Image Anal Multimodal Learn Clin Decis Support (2018). 2018;11045:3-11. PubMed PMID: 32613207.
* Article titles in AMA citation format should be in sentence-case
TY - JOUR
T1 - UNet++: A Nested U-Net Architecture for Medical Image Segmentation.
AU - Zhou, Zongwei
AU - Siddiquee, Md Mahfuzur Rahman
AU - Tajbakhsh, Nima
AU - Liang, Jianming
Y1 - 2018/09/20/
PY - 2020/7/3/entrez
PY - 2018/9/1/pubmed
PY - 2018/9/1/medline
SP - 3
EP - 11
JF - Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support : 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, held in conjunction with MICCAI 2018, Granada, Spain, S...
JO - Deep Learn Med Image Anal Multimodal Learn Clin Decis Support (2018)
VL - 11045
N2 - In this paper, we present UNet++, a new, more powerful architecture for medical image segmentation. Our architecture is essentially a deeply-supervised encoder-decoder network where the encoder and decoder sub-networks are connected through a series of nested, dense skip pathways. The re-designed skip pathways aim at reducing the semantic gap between the feature maps of the encoder and decoder sub-networks. We argue that the optimizer would deal with an easier learning task when the feature maps from the decoder and encoder networks are semantically similar. We have evaluated UNet++ in comparison with U-Net and wide U-Net architectures across multiple medical image segmentation tasks: nodule segmentation in the low-dose CT scans of chest, nuclei segmentation in the microscopy images, liver segmentation in abdominal CT scans, and polyp segmentation in colonoscopy videos. Our experiments demonstrate that UNet++ with deep supervision achieves an average IoU gain of 3.9 and 3.4 points over U-Net and wide U-Net, respectively.
UR - https://www.unboundmedicine.com/medline/citation/32613207/UNet++:_A_Nested_U-Net_Architecture_for_Medical_Image_Segmentation
L2 - https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/32613207/
DB - PRIME
DP - Unbound Medicine
ER -