TinyMLx Course 3: [Post 5] - Project Submissions


All project summaries must be submitted in the thread below by 12:00 PM EST on August 31st, 2021 to be eligible for the prizes described here. Prize winners will be announced by September 15th, 2021.


If you’d like to indicate support for a particular project you find noteworthy, reply to this thread with the following text, “I’d like to vote for project .”

Requirements for submission

Note: only one member of any team should submit a project summary.

  1. Project name
  2. Team member names, roles
  3. Project summary - a description in a sentence or two (at most)
  4. Longer project description
  5. References (including other code or data), sources of inspiration
  6. A link to the corresponding GitHub repository, under an open source license
  7. A YouTube video showcasing your application

1. Project name

The “IoT Stallion” - providing feedback to boxers on their techniques.

2. Team member names, roles

Anthony Joseph: project leader

3. Project summary

Can TinyML check if a boxer is using a correct technique?

4. Longer project description

Boxing trainers often focus on common fundamental themes during classes to ensure their students avoid injury and maximise their results. However, 1:1 training is expensive, and group training sessions make it challenging for trainers to provide personalised feedback.

Therefore, I propose using TinyML to detect punch types and provide feedback to trainers. I started with a simple goal: to see if we could detect different punch types. Once I could detect punch types, I could use that as a foundation to track:

  • punch combos (a series of punch types),

  • defensive positions such as blocking, slipping, weaving, rolling and ducking,

  • both-hand positions such as blocking while punching.

Initially this project started using Edge Impulse with a single hand. While it was successful, I wanted to see whether the “magic wand” approach of rasterizing accelerometer data would yield better results. Meanwhile, I also investigated ways of communicating from a central Bluetooth device to remote devices so we could evaluate defensive positions combined with punch types.
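The “magic wand” rasterization idea can be sketched as follows: accelerometer readings are turned into a 2D trajectory and plotted onto a small grid, which a tiny CNN can then classify like an image. This is a minimal Python illustration only, not the project's actual implementation; `rasterize_stroke`, the 32x32 grid size, and the normalized input format are all assumptions.

```python
import numpy as np

def rasterize_stroke(points, size=32):
    """Map a normalized 2D trajectory onto a size x size grid.

    points: iterable of (x, y) pairs in [0, 1], e.g. accelerometer
    readings integrated into estimated positions.
    Returns a binary raster suitable as input to a small CNN.
    """
    raster = np.zeros((size, size), dtype=np.uint8)
    for x, y in points:
        # Clamp to the last cell so x == 1.0 stays on the grid.
        col = min(int(x * size), size - 1)
        row = min(int(y * size), size - 1)
        raster[row, col] = 1
    return raster

# Example: a short diagonal stroke lights up cells along the diagonal.
stroke = [(i / 10, i / 10) for i in range(10)]
image = rasterize_stroke(stroke)
```

The appeal of this representation is that the gesture classifier becomes a tiny image classifier, which is exactly the setup used in the TensorFlow Lite Micro magic wand example.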

5. References (including other code or data), sources of inspiration

Competitor/substitute products:

6. A link to the corresponding GitHub repository, under an open source license

7. A YouTube video showcasing your application


A nice example of how to use tinyML in sport.

Personally, I think sport is one of the areas where embedded machine learning combined with digital signal processing can add real value by giving instant feedback to athletes. This project is a good illustration.

One of the challenges is that we need to use multiple sensors (sensor fusion of different sensor modalities) in a wireless body area network, and to do so at very low energy consumption. We need to build very low-cost, hybrid solutions: a combination of small end-point and edge-device solutions, combined with a low-energy wireless communication protocol. Another challenge, of course, is obtaining good datasets.

Nice work Anthony. Keep us posted about the progress of the project.


Thanks so much for the feedback! Indeed you’re correct, I faced all of those issues using multiple sensors. If it’s any encouragement, there’s a good variety of fitness trackers and medical devices in the marketplace right now that show that these problems can be solved. I do wonder whether they use TinyML, or some other signal processing techniques to analyse movement. Thanks again - I hope to share any new lessons I learn with the TinyMLx community.

This post serves as the submission of my project. Thank you very much for your attention and consideration.

1. Project name

The name of this project is vocal_numeric.

2. Team member names, roles

Stephen Bailey, project lead.

I would like to thank Carlos @Chaplin5 for his feedback on this project and Chris Purcell for his moral support, and for sharing his experience with embedded development.

3. Project summary

A model and circuit that replicate the functionality of a 10-digit keypad via keyword spotting.

4. Longer project description

The purpose of the vocal_numeric project was to develop a simple and direct use for a keyword-based model using commonly available parts and techniques. The model recognizes the digits 0-9 and writes them to a 7-segment LED display. There is also a response for unknown words.
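Once the model emits a digit, driving a 7-segment display is a plain lookup from digit to segment pattern. A minimal Python sketch of that mapping follows; the (a-g) segment ordering and the `write_pin` callback are hypothetical stand-ins for the actual Arduino wiring, which is documented in the repository.

```python
# Segment patterns for digits 0-9, ordered (a, b, c, d, e, f, g);
# 1 = segment lit. The ordering is an assumption, not the project's
# actual pin layout.
SEGMENTS = {
    0: (1, 1, 1, 1, 1, 1, 0),
    1: (0, 1, 1, 0, 0, 0, 0),
    2: (1, 1, 0, 1, 1, 0, 1),
    3: (1, 1, 1, 1, 0, 0, 1),
    4: (0, 1, 1, 0, 0, 1, 1),
    5: (1, 0, 1, 1, 0, 1, 1),
    6: (1, 0, 1, 1, 1, 1, 1),
    7: (1, 1, 1, 0, 0, 0, 0),
    8: (1, 1, 1, 1, 1, 1, 1),
    9: (1, 1, 1, 1, 0, 1, 1),
}

def display_digit(digit, write_pin):
    """Set one segment per entry. write_pin(index, state) stands in
    for a digitalWrite-style call on real hardware."""
    for pin, state in enumerate(SEGMENTS[digit]):
        write_pin(pin, state)
```

The point of the lookup-table design is that the keyword-spotting model and the display driver stay decoupled: swapping the display for serial output (as in one of the sample sketches) only changes the callback.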

In my study of TinyML, I have seen many amazing projects which hold a great deal of promise and potential. However, I did not see as many projects which took the ideas and concepts of TinyML and applied them directly to current, practical applications. Taking numeric input and displaying it is a technique used everywhere with well proven methodology. Adding TinyML to the chain provides a new layer of functionality on top of a rock-solid base.

My goal for this project was to create a practical device which could be dropped in to any project using a 10-digit keypad or other number input device and have it perform, adding voice recognition functionality for a minimal addition of money, time or energy usage. While this may not stretch the technological limits of what is possible with TinyML, I believe that effectively addressing simple needs using TinyML will help drive adoption of the techniques and devices for the general public.

5. References (including other code or data), sources of inspiration

The sources of information I used are below. Full citations can be found in the GitHub repository.

The circuit was taken from a project by akarsh98 on the create.arduino.cc project hub which can be found here -

I modified the akarsh98 project because it did not work with my hardware.

My data was taken from Pete Warden’s speech_commands dataset, the Mozilla Common Voice single word dataset, the Microsoft Noisy Speech dataset and my own personal recordings.

My inspirations for this project are the TinyML edX courses, the tutorials and resources provided by Edge Impulse and the enthusiasm of the worldwide TinyML community to bring these devices and techniques to the public.

6. A link to the corresponding GitHub repository, under an open source license

This repository is formatted as an Arduino library so it can be downloaded as a zip file and directly imported into the Arduino IDE. Two sample sketches are included, one which writes to the serial monitor and one which works with the circuit described in the repository.

7. A YouTube video showcasing your application

I am afraid that video is not my strength, but hopefully these will be of some interest.

A video walk through of the code and a look at the circuit in action can be found here:

A video of some potential use case ideas can be found here:

An additional thought about multiple devices:


1. Project name

Smart Alarm using Tiny Machine Learning.

2. Team member names, roles

  • Carlos Gil García: project developer (@Chaplin5)

  • Daniel Moreno París: project developer (@Danimp94)

Community support:
Special thanks to Stephen Bailey @stephenbaileymagic for his help and feedback.

3. Project summary

An alarm system that adjusts the wake-up time to the best moment based on data collected from different sensors.

4. Longer project description

A number of systems already employ Machine Learning (ML) to predict the sleep state of a person, but these depend on performing the calculations on external machines. These systems fall into two categories:

  • Wearable devices equipped with different sensors that are directly attached to the user (i.e., wristband, smartwatch, rings…) which rely upon wireless communication (BLE, Wi-Fi…) to transmit this information to another, more powerful computer, typically a smartphone.
  • A nightstand-style alarm clock that employs other different sensors which measure remotely the sleep cycles of the user.

While the former gives very reasonable results because its sensors are in close contact with the body, it suffers from the drawbacks of considerable battery drain and the need for stable connectivity for data transmission.

Conversely, the latter does not rely upon a battery as a power supply. However, quality data is more difficult to obtain and more sophisticated components are needed. As a result, these devices are priced much too high, making them unappealing to the ‘common’ user.

Therefore, we propose a combination of the two above-mentioned systems into a more feasible solution. In short, we aim to achieve accurate predictions while performing data analysis on a device powered by a battery without the need of a daily/weekly recharge.

5. References (including other code or data), sources of inspiration

Learning resources

edX course: https://www.edx.org/professional-certificate/harvardx-tiny-machine-learning

Supplementary book: TinyML [Book]

Sources of inspiration

LSTMs for Human Activity Recognition:

Papers and research


Dataset used



Arduino Nano BLE Sense 3D case: Arduino Nano case with completely accessible pins by nspohrer - Thingiverse

Tool for automated feature selection: [1901.07329] The autofeat Python Library for Automated Feature Engineering and Selection

Source code for the external sensor used: Pulse Sensor · GitHub

Note: These citations are also available in the GitHub repository’s README file, in the corresponding IEEE style.

6. A link to the corresponding GitHub repository, under an open-source license

7. A YouTube video showcasing your application


Hi all.

Does anyone know if the winners for the competition have been announced?

As it was due by September 15th, I am not sure whether the announcement took place in this thread or somewhere else that I am not aware of.


For anyone following this thread, please be aware that I am continuing to develop this project, and the current version of vocal_numeric may no longer match the version submitted in this thread. I have not yet documented all of the changes on GitHub, but will do so in the coming days.

Thank you for your attention.


I hope you don’t mind an unsolicited suggestion, but may I suggest making a release of your code in your repository at this time and updating your post with a direct link to the release? That way, we can download a copy of your source code as it was at that point in time. Now that I mention it, I might do something similar.

This documentation may help you in the meantime: Managing releases in a repository - GitHub Docs
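For the local side, pinning the submitted version can be as simple as an annotated Git tag, which GitHub can then turn into a release. A rough sketch (the version string `v0.1.0` is only an example, not the project's actual version):

```shell
# Tag the commit that matches the version submitted to the thread
# (v0.1.0 is an example name):
git tag -a v0.1.0 -m "Version submitted to the TinyMLx thread"

# Publish the tag; a release can then be drafted from it on GitHub
# (Releases -> "Draft a new release" -> choose the tag):
git push origin v0.1.0
```

Annotated tags (the `-a` flag) carry a message and a date, so the snapshot stays identifiable even if the default branch moves on.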

Thank you! I appreciate the suggestion. I updated the model in the Edge Impulse project, but still have to update the Arduino sketches themselves. When I do, I will post a link here as suggested.

Also, many thanks for the link to the GitHub docs. I am very new to GitHub, so I am always happy to learn best practices.

Best regards,
