COVID-19

Donate your cough
Save Lives

Contribute to the world’s biggest database of COVID-19 coughs – to build better algorithms for detecting it.
Donate your cough here

To stop the COVID-19 pandemic, we need to screen a lot of people – possibly tens or hundreds of millions. Most COVID-19 carriers don’t develop symptoms severe enough to prompt them to seek medical help, and this undiagnosed population continues to spread the virus, driving much faster, covert transmission.

Current wet-lab-based tests can’t meet this need. They are expensive, scarce, and slow. They also pose a risk: they require an in-person visit, exposing more members of the public and healthcare personnel – a risk that would be greatly amplified by large-scale testing.

There’s an urgent need for a cheap, scalable, and remote test. And you can help! By taking this survey, you’ll contribute to building the world’s biggest database of coughs from COVID-19 patients and controls, which can be used to build algorithms that detect the disease from a simple voice recording.

Using cough analysis to assess respiratory disease

The respiratory system is key to how humans produce voice: air from the lungs passes through, and is shaped by, the airways, mouth, and nasal cavities. When the respiratory system is affected by disease, it can change the sound of your breathing, your cough, and your vocal quality. Advanced analysis has already revealed specific cough characteristics in respiratory diseases such as asthma, pneumonia, and whooping cough.

COVID-19 also affects the respiratory system – and in ways that differ from these other diseases. Voice could thus offer a cheap, scalable, and remote method for assessing infection accurately, with effectively unlimited testing capacity.

We train advanced algorithms to analyse the fine patterns in voice and cough recordings. To make them better, we need a lot of data: the more data we have, the more accurate and useful the information that cough and voice can provide.

Cough up! Donate your cough

We’re now asking for your help – your ‘cough donation’ – to reach the critical amount of data.
Here’s the survey we ask you to take. Both healthy and unwell people are welcome!
Start the survey now

In this survey, you will be asked to record and submit short voice and cough samples, and some data about your demographics and health status.

Specifically, other than your cough and voice samples, we will ask:

  • If you’ve already been tested for COVID-19.
  • The date and outcome of your test.
  • About your present symptoms.
  • About your location/travel history.
  • About your smoking history, and about any diseases that could also cause or affect your symptoms or voice.
  • A few general demographic questions.

You can find more information in our Information Sheet.

What happens to the data

Donating your cough is completely anonymous; we do not collect personally identifiable data.

We aim to create an application that can reliably differentiate between the cough and voice of COVID-19-infected patients and those with other respiratory conditions. The more data we collect, the smarter we can make our machine learning algorithms.

With your permission, we’d also like to share the data with academic researchers and nonprofit groups working on the same mission.

Who’s Novoic?

Novoic is a research-focussed digital biotechnology company based in London, founded by researchers from the universities of Oxford and Cambridge.

What we do: assess brain and respiratory health from the way you speak. Our research team uses recent breakthroughs in deep audio and natural language processing to accelerate decades of academic research in audio processing, linguistics, and neurological diseases. The outcome: identifying fine patterns in voice recordings that are associated with a multitude of neurological conditions and respiratory illnesses.

We work to democratise health assessment by making it as easy as having a conversation.

Seeing the global crisis and recognising our in-house expertise, we decided to do what we could to help. That’s why we launched the Cough Donation project. With it, we hope to support the academic research community and give every pair of lungs a voice.

We know what needs to be done, and together with the academic research community we have the expertise to carry it out. What we need more of is data to train and validate our AI algorithms on. What we need more of is your help – and your coughs!


Proudly supported by


What’s the science behind it? – Background and Rationale

COVID-19, caused by the SARS-CoV-2 virus, has been declared a pandemic by the WHO and a national health emergency in many countries, with 1,844,863 confirmed cases globally and 117,021 deaths as of 15th April 2020.[R1] This novel coronavirus affects the respiratory system. Common symptoms include coughing (a characteristically dry cough known as the “Wuhan Cough”) and fever; pneumonia can develop, and this is one of the primary causes of mortality.[R2] Studies based on CT scans have shown that early-stage SARS-CoV-2 infection manifests as inflammatory infiltration in the subpleural and/or peribronchovascular regions of one or both lungs, which spreads, with a larger number of pure ground-glass opacities and consolidation of lesions, as the disease progresses.[R3]

SARS-CoV-2-related pneumonia differs in its patterns on chest CT scans from non-SARS-CoV-2-related pneumonia, showing, among other features: a more peripheral distribution (80% vs 57%), more ground-glass opacity (91% vs 68%), more vascular thickening (59% vs 22%), less pleural thickening (15% vs 33%) and pleural effusion (4% vs 39%), and a lower likelihood of a distribution that is both central and peripheral (14% vs 35%).[R4] These patterns have been exploited by researchers using image-based AI techniques for analysing both CT scans[R5] and X-rays[R6] to discriminate between SARS-CoV-2 viral infection, non-SARS-CoV-2 viral infection, and bacterial infection. The studies above provide evidence that SARS-CoV-2 affects the respiratory system in a characteristic way.

Coughing and vocalisation patterns are another modality for reading out symptoms, and one that is synergistic with AI-based approaches. The respiratory system is a key component in how humans both cough and produce voice: air from the lungs passes through, and is shaped by, the airways, mouth, and nasal cavities. Respiratory disease can affect the sound of someone’s breathing, coughing, and vocal quality – as anyone who has had the common cold will know. Indeed, analysis of auditory patterns in coughs and voice can provide an array of information relevant to respiratory illnesses[R8].

Automatic speech recognition and acoustic analysis have been used to identify speakers suffering from a cold[R9], to provide health-relevant information in individuals suffering from asthma[R10] or head-and-neck cancer[R11], and to estimate smoking habits[R12, R13]. Cough analysis specifically has been used to automatically recognise cough[R14, R15], sneeze[R16], and throat-clearing[R14] sounds, and to classify coughs as dry vs wet[R17] and dry vs productive[R18].

A recent study showed that cough-based analysis can detect asthma, pneumonia, bronchiolitis, croup, and lower respiratory tract infections with over 80% sensitivity and specificity.[R19] Cough analysis has also been used to detect influenza[R20], whooping cough[R21], and childhood pneumonia[R22, R23].
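
For readers curious what such a pipeline can look like in practice, here is a deliberately simplified sketch – not the method used in the studies cited above, and not Novoic’s system – of training a binary cough classifier (e.g. dry vs wet) from precomputed acoustic feature vectors and reporting its sensitivity and specificity. The feature matrix and labels below are synthetic placeholders.

```python
# Illustrative sketch: train and evaluate a simple cough classifier
# (e.g. dry vs wet) from precomputed acoustic feature vectors.
# The data is synthetic; in practice X would hold features such as
# those described in the cough-analysis literature cited here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))       # placeholder: 200 coughs, 12 features each
y = rng.integers(0, 2, size=200)     # placeholder labels: 0 = dry, 1 = wet

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = clf.predict(X_test)

# Sensitivity = recall on the positive class; specificity = recall on the negative class.
sensitivity = recall_score(y_test, pred, pos_label=1)
specificity = recall_score(y_test, pred, pos_label=0)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```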

Interestingly, these different breathing and coughing patterns have been shown to be detectable with commonly available devices such as smartphones. An important point is that the human ear, at least when untrained, cannot differentiate the coughs of many of the conditions above; the analysis instead relies on signal processing and acoustic feature extraction. Features extracted include spectral measures (roll-off, skewness, kurtosis, centroid, spread, decrease, flatness, slope, and standard deviation), frequency measures (band power), energy measures (energy, log energy, energy per second), zero-crossing rate, and wavelet packet coefficients[R21, R22, R23].
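
As an illustration of what extracting such features can look like, the sketch below computes a handful of the spectral and energy measures named above from a single recording, using the open-source Python library librosa. It is a hedged example: the file name cough.wav is a placeholder, the feature set is only a subset of those listed, and this is not Novoic’s actual pipeline.

```python
# Minimal, illustrative acoustic feature extraction for one cough recording.
# Computes a subset of the spectral/energy features mentioned above.
import numpy as np
import librosa
from scipy.stats import skew, kurtosis

def extract_features(path: str) -> dict:
    # Load audio as mono at its native sampling rate.
    y, sr = librosa.load(path, sr=None, mono=True)

    # Frame-level spectral descriptors (one value per short-time frame).
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)[0]
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]
    bandwidth = librosa.feature.spectral_bandwidth(y=y, sr=sr)[0]  # "spread"
    flatness = librosa.feature.spectral_flatness(y=y)[0]
    zcr = librosa.feature.zero_crossing_rate(y)[0]
    rms = librosa.feature.rms(y=y)[0]                              # energy per frame

    # Average magnitude spectrum, for shape statistics (skewness/kurtosis).
    spectrum = np.abs(librosa.stft(y)).mean(axis=1)

    # Summarise frame-level features into one fixed-length vector per recording.
    return {
        "rolloff_mean": float(rolloff.mean()),
        "centroid_mean": float(centroid.mean()),
        "bandwidth_mean": float(bandwidth.mean()),
        "flatness_mean": float(flatness.mean()),
        "zcr_mean": float(zcr.mean()),
        "energy_mean": float(rms.mean()),
        "log_energy_mean": float(np.log(rms + 1e-10).mean()),
        "spectral_skewness": float(skew(spectrum)),
        "spectral_kurtosis": float(kurtosis(spectrum)),
    }

if __name__ == "__main__":
    # "cough.wav" is a placeholder file name for this example.
    print(extract_features("cough.wav"))
```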

References

[R1] WHO, Coronavirus disease 2019 (COVID-19) Situation Report – 85. https://www.who.int/docs/default-source/coronaviruse/situation-reports/20200414-sitrep-85-covid-19.pdf?sfvrsn=7b8629bb_4

[R2] Yang X, Yu Y, Xu J, Shu H, Liu H, Wu Y, Zhang L, Yu Z, Fang M, Yu T, Wang Y. Clinical course and outcomes of critically ill patients with SARS-CoV-2 pneumonia in Wuhan, China: a single-centered, retrospective, observational study. The Lancet Respiratory Medicine. 2020 Feb 24.

[R3] Dai, W. C. et al. CT Imaging and Differential Diagnosis of COVID-19. Canadian Association of Radiologists Journal. 2020 May. https://www.ncbi.nlm.nih.gov/pubmed/32129670

[R4] H. X. Bai, B. Hsieh, Z. Xiong, K. Halsey, J. W. Choi, T. M. L. Tran, I. Pan, L.-B. Shi, D.-C. Wang, J. Mei et al., “Performance of radiologists in differentiating COVID-19 from viral pneumonia on chest CT,” Radiology, p. 200823, 2020.

[R5] Xu X, Jiang X, Ma C, Du P, Li X, Lv S, Yu L, Chen Y, Su J, Lang G, Li Y. Deep learning system to screen coronavirus disease 2019 pneumonia. arXiv preprint arXiv:2002.09334. 2020 Feb 21.

[R6] Wang L, Wong A. COVID-Net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest radiography images. arXiv preprint arXiv:2003.09871. 2020 Mar 22.

[R7] Narin A, Kaya C, Pamuk Z. Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks. arXiv preprint arXiv:2003.10849. 2020 Mar 24.

[R8] Schuller BW, Schuller DM, Qian K, Liu J, Zheng H, Li X. COVID-19 and Computer Audition: An Overview on What Speech & Sound Analysis Could Contribute in the SARS-CoV-2 Corona Crisis. arXiv preprint arXiv:2003.11117. 2020 Mar 24.

[R9] Schuller, B., Steidl, S., Batliner, A., Bergelson, E., Krajewski, J., Janott, C., Amatuni, A., Casillas, M., Seidl, A., Soderstrom, M., Warlaumont, A.S., Hidalgo, G., Schnieder, S., Heiser, C., Hohenhorst, W., Herzog, M., Schmitt, M., Qian, K., Zhang, Y., Trigeorgis, G., Tzirakis, P., Zafeiriou, S. (2017) The INTERSPEECH 2017 Computational Paralinguistics Challenge: Addressee, Cold & Snoring. Proc. Interspeech 2017, 3442-3446, DOI: 10.21437/Interspeech.2017-43

[R10] I. Mazić, M. Bonković, and B. Džaja, “Two-level coarse-to-fine classification algorithm for asthma wheezing recognition in children’s respiratory sounds,” Biomedical Signal Processing and Control, vol. 21, pp. 105–118, 2015.

[R11] A. Maier, T. Haderlein, F. Stelzle, E. Nöth, E. Nkenke, F. Rosanowski, A. Schützenberger, and M. Schuster, “Automatic speech recognition systems for the evaluation of voice and speech disorders in head and neck cancer,” EURASIP Journal on Audio, Speech, and Music Processing, vol. 2010, no. 1, p. 926951, 2009.

[R12] A. H. Poorjam, M. H. Bahari et al., “Multitask speaker profiling for estimating age, height, weight and smoking habits from spontaneous telephone speech signals,” in Proceedings 4th International Conference on Computer and Knowledge Engineering (ICCKE). Mashhad, Iran: IEEE, 2014, pp. 7–12.

[R13] H. Satori, O. Zealouk, K. Satori, and F. Elhaoussi, “Voice comparison between smokers and non-smokers using HMM speech recognition system,” International Journal of Speech Technology, vol. 20, no. 4, pp. 771–777, 2017.

[R14] S. Matos, S. S. Birring, I. D. Pavord, and H. Evans, “Detection of cough signals in continuous audio recordings using hidden Markov models,” IEEE Transactions on Biomedical Engineering, vol. 53, no. 6, pp. 1078–1083, 2006.

[R15] T. Olubanjo and M. Ghovanloo, “Tracheal activity recognition based on acoustic signals,” in Proceedings 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). Chicago, USA: IEEE, 2014, pp. 1436–1439.

[R16] S. Amiriparian, S. Pugachevskiy, N. Cummins, S. Hantke, J. Pohjalainen, G. Keren, and B. Schuller, “CAST a database: Rapid targeted large-scale big data acquisition via small-world modelling of social media platforms,” in Proceedings 7th biannual Conference on Affective Computing and Intelligent Interaction (ACII). San Antonio, USA: IEEE, 2017, pp. 340–345.

[R17] P. Moradshahi, H. Chatrzarrin, and R. Goubran, “Improving the performance of cough sound discriminator in reverberant environments using microphone array,” in Proceedings International Instrumentation and Measurement Technology Conference (I2MTC). Graz, Austria: IEEE, 2012, pp. 20–23.

[R18] J. Schröder, J. Anemüller, and S. Goetze, “Classification of human cough signals using spectro-temporal Gabor filterbank features,” in Proceedings International Conference on Acoustics, Speech and Signal Processing (ICASSP). Shanghai, China: IEEE, 2016, pp. 6455–6459.

[R19] P. Porter, U. Abeyratne, V. Swarnkar, J. Tan, T.-w. Ng, J. M. Brisbane, D. Speldewinde, J. Choveaux, R. Sharan, K. Kosasih et al., “A Prospective Multicentre Study Testing the Diagnostic Accuracy of an Automated Cough Sound Centred Analytic System for the Identification of Common Respiratory Disorders in Children,” Respiratory Research, vol. 20, no. 1, p. 81, 2019.

[R20] F. Al Hossain, A. A. Lover, G. A. Corey, N. G. Reich, and T. Rahman, “FluSense: A Contactless Syndromic Surveillance Platform for Influenza- Like Illness in Hospital Waiting Areas,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 4, no. 1, pp. 1–28, 2020.

[R21] Pramono RX, Imtiaz SA, Rodriguez-Villegas E. A cough-based algorithm for automatic diagnosis of pertussis. PloS one. 2016;11(9).

[R22] Abeyratne UR, Swarnkar V, Setyati A, Triasih R. Cough sound analysis can rapidly diagnose childhood pneumonia. Annals of Biomedical Engineering. 2013 Nov 1;41(11):2448-62.

[R23] Kosasih K, Abeyratne UR, Swarnkar V, Triasih R. Wavelet augmented cough analysis for rapid childhood pneumonia diagnosis. IEEE Transactions on Biomedical Engineering. 2014 Dec 18;62(4):1185-94.

[R24] Song I. Diagnosis of pneumonia from sounds collected using low cost cell phones. In 2015 International joint conference on neural networks (IJCNN) 2015 Jul 12 (pp. 1-8). IEEE.

[R25] https://www.statnews.com/2020/03/24/when-might-experimental-drugs-to-treat-covid-19-be-ready-a-forecast/

By now you’ll know how much we care about conversations here at Novoic.

If you want to have one with us, let us know here!
Commercial and academic partnerships welcome.

Shoot an email to contact@novoic.com