IATos: AI-powered pre-screening tool for COVID-19 from cough audio samples

27 Apr 2021  ·  D. Trejo Pizzo, S. Esteban ·

OBJECTIVE: Our objective is to evaluate the possibility of using cough audio recordings (spontaneous or simulated) to detect sound patterns in people diagnosed with COVID-19. The research question that guided our work was: what are the sensitivity and specificity of a machine learning-based COVID-19 cough classifier, using RT-PCR tests as the gold standard?

SETTING: The audio samples collected for this study belong to individuals who were swabbed in the City of Buenos Aires at 20 public and 1 private facilities where RT-PCR tests were carried out on patients with suspected COVID-19, and at 14 out-of-hospital isolation units for patients with confirmed mild COVID-19 cases. The audio recordings were collected through the Buenos Aires city government's WhatsApp chatbot, which was specifically designed to address citizen inquiries related to the coronavirus pandemic (COVID-19).

PARTICIPANTS: The data collected correspond to 2821 individuals who were swabbed in the City of Buenos Aires between August 11 and December 2, 2020. Of these, 1409 tested positive for COVID-19 and 1412 tested negative. In this sample, 52.6% of the individuals were female and 47.4% were male. By age, 2.5% were between 0 and 20 years old, 61.1% between 21 and 40, 30.3% between 41 and 60, and 6.1% were over 61.

RESULTS: Using the dataset of 2821 individuals, our results showed that the neural network classifier was able to discriminate between COVID-19-positive and healthy coughs with an accuracy of 86%. This accuracy, obtained during the training process, was later tested and confirmed on a second dataset corresponding to 492 individuals.
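The research question frames evaluation in terms of sensitivity and specificity against RT-PCR as the gold standard. As a minimal sketch (not the authors' code), the two metrics can be computed from a classifier's binary predictions like this; the example labels are hypothetical:

```python
# Sensitivity/specificity of a binary COVID-19 cough classifier,
# with RT-PCR results as ground truth (1 = positive, 0 = negative).

def sensitivity_specificity(y_true, y_pred):
    """Return (sensitivity, specificity) for binary label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn)  # true-positive rate: positives caught
    specificity = tn / (tn + fp)  # true-negative rate: negatives cleared
    return sensitivity, specificity

# Hypothetical RT-PCR labels and classifier predictions:
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")  # → 0.75 / 0.75
```

For a pre-screening tool, sensitivity is usually the metric to prioritize, since a false negative sends an infected person away untested.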


Datasets

Introduced in the paper: IATOS Dataset

