Decoding phonation with artificial intelligence (DeP AI): Proof of concept.
Laryngoscope Investig Otolaryngol. 2019 Jun;4(3):328-334.

Abstract

Objective

Acoustic analysis of voice has the potential to expedite detection and diagnosis of voice disorders. Applying an image-based, neural-network approach to the acoustic signal may be an effective means of detecting and differentially diagnosing voice disorders. The purpose of this study is to provide a proof of concept that data embedded within human phonation can be accurately and efficiently decoded with deep neural network analysis to differentiate between normal and disordered voices.

Methods

Acoustic recordings from 10 vocally healthy speakers, as well as from 70 patients with one of seven voice disorders (n = 10 per diagnosis), were acquired from a clinical database. Acoustic signals were converted into spectrograms and used to train a convolutional neural network developed with the Keras library. The network was trained separately for each of the seven diagnostic categories, and a binary classification task (ie, normal vs. disordered) was performed for each category. All models were validated using 10-fold cross-validation.
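The abstract names the pieces of the pipeline (spectrogram input, a Keras convolutional network, one binary normal-vs.-disordered model per diagnosis, 10-fold cross-validation) but not the implementation details. The following is a minimal hypothetical sketch of such a pipeline; the use of librosa, the spectrogram parameters, the image size, the network architecture, and the training settings are all assumptions, not the authors' configuration.

import numpy as np
import librosa
from sklearn.model_selection import StratifiedKFold
from tensorflow import keras
from tensorflow.keras import layers

IMG_SHAPE = (128, 128, 1)  # assumed spectrogram image size

def audio_to_spectrogram(path, sr=16000):
    # Load a recording and convert it to a fixed-size, log-scaled
    # spectrogram "image". The n_fft/hop_length values are assumptions.
    y, _ = librosa.load(path, sr=sr)
    spec = np.abs(librosa.stft(y, n_fft=512, hop_length=256))
    spec_db = librosa.amplitude_to_db(spec, ref=np.max)
    # Simplification: crop/pad to a fixed shape rather than resampling.
    spec_db = librosa.util.fix_length(spec_db, size=IMG_SHAPE[1], axis=1)
    spec_db = spec_db[:IMG_SHAPE[0], :]
    return spec_db[..., np.newaxis].astype("float32")

def build_cnn():
    # A small binary-classification CNN; the paper's exact architecture
    # is not described in the abstract.
    model = keras.Sequential([
        keras.Input(shape=IMG_SHAPE),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # P(disordered)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

def cross_validated_accuracy(X, y, n_splits=10):
    # 10-fold cross-validation as described in the Methods: a fresh model
    # is trained per fold, and accuracy is averaged across folds.
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    accuracies = []
    for train_idx, test_idx in skf.split(X, y):
        model = build_cnn()
        model.fit(X[train_idx], y[train_idx],
                  epochs=20, batch_size=8, verbose=0)
        _, acc = model.evaluate(X[test_idx], y[test_idx], verbose=0)
        accuracies.append(acc)
    return float(np.mean(accuracies))

# Usage (paths and labels would come from the clinical database, which is
# not public): X = np.stack([audio_to_spectrogram(p) for p in paths]),
# y = np.array(labels), then cross_validated_accuracy(X, y).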

Results

Average binary classification accuracies ranged from 58% to 90%. Models were most accurate in classifying adductor spasmodic dysphonia, unilateral vocal fold paralysis, vocal fold polyp, polypoid corditis, and recurrent respiratory papillomatosis. Despite the small sample size, these findings are consistent with previously published work using deep neural networks to classify voice disorders.

Conclusion

Promising preliminary results support further study of deep neural networks for clinical detection and diagnosis of human voice disorders. Current models should be optimized with a larger sample size.

Levels of Evidence

Level III.

Author Affiliations

Vanderbilt Bill Wilkerson Center for Otolaryngology, Vanderbilt University Medical Center, Nashville, Tennessee, U.S.A.
Department of Information Technology, Vanderbilt University, Nashville, Tennessee, U.S.A.
Vanderbilt Bill Wilkerson Center for Otolaryngology, Vanderbilt University Medical Center, Nashville, Tennessee, U.S.A.
Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee, U.S.A.
Vanderbilt Bill Wilkerson Center for Otolaryngology, Vanderbilt University Medical Center, Nashville, Tennessee, U.S.A.
Vanderbilt Bill Wilkerson Center for Otolaryngology, Vanderbilt University Medical Center, Nashville, Tennessee, U.S.A.
Center of Research in Computational and Numerical Methods in Engineering, Central University Marta Abreu of Las Villas, Santa Clara, Cuba; Infralab, University of Brasília, Brasília, Brazil.
Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee, U.S.A.
Vanderbilt Bill Wilkerson Center for Otolaryngology, Vanderbilt University Medical Center, Nashville, Tennessee, U.S.A.
Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee, U.S.A.
Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee, U.S.A.
Vanderbilt Bill Wilkerson Center for Otolaryngology, Vanderbilt University Medical Center, Nashville, Tennessee, U.S.A.

Pub Type(s)

Journal Article

Language

eng

PubMed ID

31236467

Citation

Powell, Maria E., et al. "Decoding Phonation with Artificial Intelligence (DeP AI): Proof of Concept." Laryngoscope Investigative Otolaryngology, vol. 4, no. 3, 2019, pp. 328-334.
Powell ME, Rodriguez Cancio M, Young D, et al. Decoding phonation with artificial intelligence (DeP AI): Proof of concept. Laryngoscope Investig Otolaryngol. 2019;4(3):328-334.
Powell, M. E., Rodriguez Cancio, M., Young, D., Nock, W., Abdelmessih, B., Zeller, A., Perez Morales, I., Zhang, P., Garrett, C. G., Schmidt, D., White, J., & Gelbard, A. (2019). Decoding phonation with artificial intelligence (DeP AI): Proof of concept. Laryngoscope Investigative Otolaryngology, 4(3), 328-334. https://doi.org/10.1002/lio2.259
Powell ME, et al. Decoding phonation with artificial intelligence (DeP AI): Proof of concept. Laryngoscope Investig Otolaryngol. 2019;4(3):328-334. PubMed PMID: 31236467.
TY - JOUR
T1 - Decoding phonation with artificial intelligence (DeP AI): Proof of concept.
AU - Powell, Maria E
AU - Rodriguez Cancio, Marcelino
AU - Young, David
AU - Nock, William
AU - Abdelmessih, Beshoy
AU - Zeller, Amy
AU - Perez Morales, Irvin
AU - Zhang, Peng
AU - Garrett, C Gaelyn
AU - Schmidt, Douglas
AU - White, Jules
AU - Gelbard, Alexander
Y1 - 2019/03/25/
PY - 2018/11/26/received
PY - 2019/02/12/revised
PY - 2019/03/01/accepted
PY - 2019/6/26/entrez
PY - 2019/6/27/pubmed
PY - 2019/6/27/medline
KW - Voice disorders
KW - acoustic analysis
KW - classification
KW - convolutional neural network
KW - detection
SP - 328
EP - 334
JF - Laryngoscope investigative otolaryngology
JO - Laryngoscope Investig Otolaryngol
VL - 4
IS - 3
N2 - Objective: Acoustic analysis of voice has the potential to expedite detection and diagnosis of voice disorders. Applying an image-based, neural-network approach to analyzing the acoustic signal may be an effective means for detecting and differentially diagnosing voice disorders. The purpose of this study is to provide a proof-of-concept that embedded data within human phonation can be accurately and efficiently decoded with deep learning neural network analysis to differentiate between normal and disordered voices. Methods: Acoustic recordings from 10 vocally-healthy speakers, as well as 70 patients with one of seven voice disorders (n = 10 per diagnosis), were acquired from a clinical database. Acoustic signals were converted into spectrograms and used to train a convolutional neural network developed with the Keras library. The network architecture was trained separately for each of the seven diagnostic categories. Binary classification tasks (ie, to classify normal vs. disordered) were performed for each of the seven diagnostic categories. All models were validated using the 10-fold cross-validation technique. Results: Binary classification averaged accuracies ranged from 58% to 90%. Models were most accurate in their classification of adductor spasmodic dysphonia, unilateral vocal fold paralysis, vocal fold polyp, polypoid corditis, and recurrent respiratory papillomatosis. Despite a small sample size, these findings are consistent with previously published data utilizing deep neural networks for classification of voice disorders. Conclusion: Promising preliminary results support further study of deep neural networks for clinical detection and diagnosis of human voice disorders. Current models should be optimized with a larger sample size. Levels of Evidence: Level III.
SN - 2378-8038
UR - https://www.unboundmedicine.com/medline/citation/31236467/Decoding_phonation_with_artificial_intelligence_(DeP_AI):_Proof_of_concept
L2 - https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/31236467/
DB - PRIME
DP - Unbound Medicine
ER -