TY - CONF
T1 - Segmentation of 4D images via space-time neural networks
AU - Sun, Changjian
AU - Udupa, Jayaram K.
AU - Tong, Yubing
AU - Sin, Sanghun
AU - Wagshul, Mark
AU - Torigian, Drew A.
AU - Arens, Raanan
N1 - Funding Information:
This work is partly supported by NIH grant HL130468.
Publisher Copyright:
© 2020 SPIE
PY - 2020
Y1 - 2020
AB - Medical imaging techniques now produce 4D images that portray the dynamic behavior and phenomena associated with internal structures. Segmenting 4D images poses challenges different from those of segmenting static 3D images because object shape and appearance vary differently along the space and time dimensions. In this paper, separate network models are designed to learn the pattern of slice-to-slice change along the space and time dimensions independently. The two models then enable a gamut of strategies for segmenting the 4D image, such as following only the space dimension or only the time dimension, or following the space dimension for one time instance and then the time dimension across all time instances, or vice versa. This paper investigates these strategies in the context of the obstructive sleep apnea (OSA) application and presents a unified deep learning framework for segmenting 4D images. The sparse tubular nature of the upper airway, the surrounding low-contrast structures, and the limited contrast resolution obtainable in magnetic resonance (MR) images make effective segmentation of the dynamic airway in 4D MR images challenging. Given that these upper airway structures are sparse, a Dice coefficient (DC) of ~0.88 achieved by our preferred strategy is comparable to a DC of >0.95 for large non-sparse objects such as the liver and lungs, constituting excellent accuracy.
UR - http://www.scopus.com/inward/record.url?scp=85120374605&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85120374605&partnerID=8YFLogxK
U2 - 10.1117/12.2549605
DO - 10.1117/12.2549605
M3 - Conference contribution
AN - SCOPUS:85120374605
T3 - Progress in Biomedical Optics and Imaging - Proceedings of SPIE
BT - Medical Imaging 2020: Biomedical Applications in Molecular, Structural, and Functional Imaging
A2 - Krol, Andrzej
A2 - Gimi, Barjor S.
PB - SPIE
T2 - Medical Imaging 2020: Biomedical Applications in Molecular, Structural, and Functional Imaging
Y2 - 18 February 2020 through 20 February 2020
ER -