Snoring Detection on Embedded Devices

Hello Everyone,

I wanted to share an interesting TinyML project on snoring detection that I have been working on alongside Kelly Zhang and Jiayu Yao.

Some background on privacy and TinyML:
With growing concerns about how private data is used, TinyML is well suited to applications where privacy is a major concern, such as monitoring medical conditions like snoring. Since inference runs in real time on the device itself, TinyML keeps private data local, avoiding the remote-access risks that come with ML models running on large machines, in the cloud, etc.

For this project, we looked at snoring, which is associated with serious health issues including diabetes, stroke, and depression. Given these health impacts, it is important for people to know whether they snore and to understand their snoring patterns and triggers.


What makes the model interesting?
We developed a model that can be deployed on an embedded device - for instance, a bedside microcontroller with a microphone - that automatically identifies snoring sounds. The device extracts spectrograms from real-time audio and applies a Convolutional Neural Network (CNN) to classify whether an audio sample contains snoring. We investigated different pre-processing methods, including Fast Fourier Transforms (FFT) and Mel-frequency cepstral coefficients (MFCC), as well as different neural network architectures. One consideration is emphasizing the range of frequencies most common in snoring sounds, and possibly attributing different detected frequency ranges to the severity of the snoring condition. MFCC is a great way to prioritize desired frequency ranges; in general you want to put more weight on the low ranges, which the human ear differentiates better.
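To make the preprocessing step concrete, here is a minimal NumPy sketch of how a log-mel spectrogram can be computed from raw audio: an FFT per frame, with a triangular mel filterbank that compresses high frequencies and gives more resolution to the low range. The parameter values (16 kHz sample rate, 512-sample frames, 40 mel bands) are illustrative assumptions, not the exact settings we used.

```python
import numpy as np

def log_mel_spectrogram(audio, sr=16000, n_fft=512, hop=256, n_mels=40):
    """Frame the signal, take an FFT per frame, and pool power into mel bands."""
    # Frame the audio into overlapping Hann-windowed frames
    frames = [audio[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(audio) - n_fft + 1, hop)]
    # Power spectrum per frame (rfft keeps only non-negative frequencies)
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2

    # Mel scale: roughly linear below 1 kHz, logarithmic above, so low
    # frequencies (where snoring energy sits) get more filterbank resolution
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = inv_mel(np.linspace(mel(0), mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * mel_pts / sr).astype(int)

    # Triangular filters between consecutive mel points
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)

    return np.log(power @ fbank.T + 1e-10)  # shape: (n_frames, n_mels)

# Example: one second of synthetic audio at 16 kHz
audio = np.random.randn(16000)
spec = log_mel_spectrogram(audio)
```

Taking the discrete cosine transform of each row of this output would yield MFCCs; the 2D array itself is what a CNN consumes as an "image".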

Real-life considerations
For better on-device detection, you can also tweak parameters like the detection window size or inference delay to match real-life conditions. For instance, a snoring person will usually snore about every 4 seconds, which means you want your device to receive inputs at matching clock ticks, intervals, and ranges.
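As a rough sketch of that tuning (the parameter names and values here are illustrative assumptions, not the project's actual settings): with snores arriving roughly every 4 s, a 1 s detection window hopped every 0.5 s guarantees overlapping coverage, so at least one window lands fully on each snore event.

```python
WINDOW_S = 1.0            # detection-window-size: seconds of audio per inference
INFERENCE_DELAY_S = 0.5   # hop between consecutive inferences

def window_starts(recording_s, window=WINDOW_S, hop=INFERENCE_DELAY_S):
    """Start times (seconds) of each inference window over a recording."""
    starts, t = [], 0.0
    while t + window <= recording_s:
        starts.append(round(t, 3))
        t += hop
    return starts

# A 4 s gap between snores is covered by several overlapping windows:
print(window_starts(4.0))  # [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
```

Increasing the hop saves power (fewer inferences) at the cost of possibly clipping a snore across two windows.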

The best model we deployed has an accuracy of 96.86%, which is comparable to that of existing snoring detection models. However, our model is only 18,712 bytes, over 500 times smaller than other models in the literature (e.g. 9.8 MB).

In general, I think snoring detection can be a very promising application. If you want to look into it, there is a great Kaggle dataset of snoring sounds and a very well-written paper on snoring detection:

You can also check out our project video for more information:
Snoring Detection with TinyML deployed on Arduino Nano - YouTube

Also, I would love to hear any of your thoughts on this topic and on the applications of TinyML in disease/disorder detection or prevention.


Congratulations on the progress you have made in this project! I can see this being a very useful application. It was interesting for me to see your thoughtful selection of “non-snoring” sounds. It looks like a very good representation of sounds which may be present in a snoring situation.

I fear I have a tendency to give more consideration to the sound I am looking for rather than the sounds which should be rejected when I think of datasets. Thank you for sharing your experience.


Hey @stephenbaileymagic,

That’s a great point. Selecting the non-snoring sounds depends strongly on the environment the device is placed in. Since most snoring devices are used indoors, we selected non-snoring sounds such as baby cries, wind noise, people talking, and TV sounds. You can also choose the percentage of background noise, as well as silence, to mix into your samples in order to avoid false negatives and false positives. We used Google’s Speech Commands dataset for additional non-snoring sounds and background noise.
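One simple way to mix background noise into a clip at a controlled level looks something like the sketch below (the function name and the relative-RMS scaling scheme are my own illustration, not necessarily what we did in the project):

```python
import numpy as np

def mix_background(clip, background, background_frac=0.1, rng=None):
    """Overlay a random slice of `background` onto `clip`, scaled so the
    background contributes roughly `background_frac` of the clip's RMS level."""
    rng = rng or np.random.default_rng(0)
    start = rng.integers(0, len(background) - len(clip) + 1)
    noise = background[start:start + len(clip)]
    # Scale noise relative to the clip's energy (epsilon avoids divide-by-zero)
    clip_rms = np.sqrt(np.mean(clip ** 2)) + 1e-10
    noise_rms = np.sqrt(np.mean(noise ** 2)) + 1e-10
    return clip + noise * (background_frac * clip_rms / noise_rms)

clip = np.random.default_rng(1).standard_normal(16000)        # 1 s "snore"
background = np.random.default_rng(2).standard_normal(48000)  # 3 s noise track
augmented = mix_background(clip, background, background_frac=0.1)
```

Sweeping `background_frac` during training exposes the model to the range of noise levels it will see in a real bedroom.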

One thing about the tendency to overrepresent target sounds: overrepresentation in the training data usually biases the model towards that particular sound. For example, when we trained the model with many snoring samples and fewer non-snoring ones, there were many false positives, so you definitely want to keep a balance. Thank you for bringing up this interesting point!
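When collecting a perfectly balanced dataset isn't practical, one common workaround is weighting the loss inversely to class frequency. Here is a quick sketch (this is a standard technique, e.g. the format Keras accepts via `class_weight`, not something specific to our project):

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class inversely to its frequency so the loss does not
    favor the overrepresented class (e.g. many snoring, few non-snoring)."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: total / (len(counts) * n) for cls, n in counts.items()}

# 800 snoring vs 200 non-snoring samples:
labels = ["snoring"] * 800 + ["non_snoring"] * 200
print(inverse_frequency_weights(labels))
# {'snoring': 0.625, 'non_snoring': 2.5}
```

The minority class gets a proportionally larger weight, so each false positive on non-snoring data costs the model more during training.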


This is a fantastic use of ML - thanks for all the detail; I think it will be very helpful for anyone who wants to build on this. I’ve shared the project on Twitter as well, since I think others will be interested:


This is a really interesting project! I wonder if this could be expanded into a larger sleep assistant? Maybe also track “restlessness” or night-time habits with an IMU attached to the bed, or even listen for a human voice speaking out during a nightmare. I digress, but this is seriously cool and could lead to some very interesting products!
