Visualization and tissue classification of human breast cancer images using ultrahigh-resolution OCT

Xinwen Yao, Yu Gan, Ernest Chang, Hanina Hibshoosh, Sheldon Feldman, Christine Hendon

Research output: Contribution to journal › Article › peer-review

44 Scopus citations


Background and Objective: Breast cancer is one of the most common cancers and is recognized as the third leading cause of mortality in women. Optical coherence tomography (OCT) enables three-dimensional visualization of biological tissue at high speed with micrometer-level resolution, and can play an important role in the early diagnosis and treatment guidance of breast cancer. In particular, ultra-high resolution (UHR) OCT provides images with better histological correlation. This paper compares UHR OCT with standard OCT for breast cancer imaging, both qualitatively and quantitatively. Automated tissue classification algorithms were used to detect invasive ductal carcinoma in ex vivo human breast tissue.

Study Design/Materials and Methods: Human breast tissues, including non-neoplastic/normal tissues from breast reductions and tumor samples from mastectomy specimens, were excised from patients at Columbia University Medical Center. The specimens were imaged by two spectral-domain OCT systems operating at different wavelengths: a home-built UHR OCT system at 800 nm (measured resolution: 2.72 μm axial, 5.52 μm lateral) and a commercial standard-resolution OCT system at 1,300 nm (measured resolution: 6.5 μm axial, 15 μm lateral). Their imaging performance was analyzed qualitatively. Using regional features derived from OCT images produced by the two systems, we developed an automated classification algorithm based on the relevance vector machine (RVM) to differentiate hollow-structured adipose tissue from solid tissue. We further developed B-scan-based features for the RVM to classify invasive ductal carcinoma (IDC) against normal fibrous stroma in OCT datasets from both systems. For adipose classification, 32 UHR OCT B-scans from 9 normal specimens and 28 standard OCT B-scans from 6 normal and 4 IDC specimens were employed. For IDC classification, 152 UHR OCT B-scans from 6 normal and 13 IDC specimens, and 104 standard OCT B-scans from 5 normal and 8 IDC specimens were employed.

Results: We demonstrated that the UHR OCT system produces images with better feature delineation than the 1,300 nm system. UHR OCT images of a variety of tissue types found in human breast tissue are presented. With a limited number of datasets, we showed that both OCT systems can identify adipose tissue with good accuracy. Classification in UHR OCT images achieved higher sensitivity (94%) and specificity (93%) for adipose tissue than the sensitivity (91%) and specificity (76%) achieved in 1,300 nm OCT images. For IDC classification, we similarly obtained better results with UHR OCT images, with an overall accuracy of 84%, sensitivity of 89%, and specificity of 71% in this preliminary study.

Conclusion: In this study, we provided UHR OCT images of different normal and malignant breast tissue types, and qualitatively and quantitatively studied texture and optical features of human breast tissue OCT images at different resolutions. We developed an automated approach to differentiate adipose tissue, fibrous stroma, and IDC within human breast tissue. Our work may open the door toward automatic intraoperative OCT evaluation of early-stage breast cancer. Lasers Surg. Med. 49:258–269, 2017.
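The classification pipeline described above — regional intensity/texture features extracted from B-scan regions, fed to a relevance vector machine, and evaluated by sensitivity and specificity — can be sketched as follows. This is an illustrative sketch only: scikit-learn ships no RVM, so a kernel SVM stands in as the classifier, and the synthetic patch generator and three-feature set are assumptions for demonstration, not the paper's actual data or features. Only the sensitivity/specificity definitions at the end correspond directly to the metrics reported in the Results.

```python
# Hypothetical sketch of adipose-vs-solid B-scan patch classification.
# The paper uses a relevance vector machine (RVM); a kernel SVM stands
# in here because scikit-learn has no RVM implementation.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_patch(kind, size=32):
    """Synthetic stand-in for an OCT B-scan patch (not real data)."""
    if kind == "adipose":
        # honeycomb-like adipose: darker, high local variance
        p = rng.normal(0.2, 0.25, (size, size))
    else:
        # solid fibrous stroma: brighter, more homogeneous speckle
        p = rng.normal(0.6, 0.10, (size, size))
    return np.clip(p, 0.0, 1.0)

def features(patch):
    """Simple regional features: mean, std, mean gradient magnitude."""
    gy, gx = np.gradient(patch)
    return [patch.mean(), patch.std(), np.hypot(gx, gy).mean()]

# Build a labeled dataset (1 = adipose, 0 = solid tissue).
X = np.array([features(make_patch(k)) for k in ["adipose"] * 100 + ["solid"] * 100])
y = np.array([1] * 100 + [0] * 100)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
pred = clf.predict(Xte)

# Sensitivity and specificity, the metrics reported in the Results.
tp = np.sum((pred == 1) & (yte == 1))
tn = np.sum((pred == 0) & (yte == 0))
fp = np.sum((pred == 1) & (yte == 0))
fn = np.sum((pred == 0) & (yte == 1))
sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```

Because the two synthetic tissue classes are well separated in feature space, this toy pipeline classifies nearly perfectly; on real OCT data, class overlap yields figures closer to those reported in the paper.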

Original language: English (US)
Pages (from-to): 258-269
Number of pages: 12
Journal: Lasers in Surgery and Medicine
Issue number: 3
State: Published - Mar 1 2017
Externally published: Yes


Keywords

  • breast cancer
  • image processing
  • optical coherence tomography
  • tissue classification

ASJC Scopus subject areas

  • Surgery
  • Dermatology


