TinyMLx Course 3: [Post 4] - Feedback on Project Progress

Purpose

Once you or your team have made meaningful progress and would like feedback from the community on specific technical elements of your project (prior to project submission), we encourage you to post questions and proof-of-concept demonstrations here for others to see and discuss.

Relevant pictures, videos, data sets, code excerpts, and links to GitHub project repos are all encouraged! It may also be helpful to provide a link to your project proposal in the progress update so that others can reference your original intention.

As was the case for project proposals, we’d also suggest that you review and provide feedback for other projects, as this process may help you to consider things that wouldn’t otherwise occur to you for your own project. In doing so, you’d be helping out a fellow member of the community, something I’m sure you’d appreciate yourself.

Important

Please do not create separate threads with a redundant purpose :slight_smile:

Allow this to serve as the official thread for TinyML project progress feedback.

Here’s a follow-up on my project pitch here.

One of the major lessons I took from the TinyMLx course was that a key challenge in TinyML is collecting clean data for use in TensorFlow. To that end, I’ve spent some time developing a method for collecting tagged sensor data. I use the Arduino Nano 33 BLE Sense with a silicone sleeve, powered by a LiPoly battery and the Adafruit PowerBoost 1000 charger. It sends accelerometer data via Bluetooth to the Seeed Studio Wio Terminal, which has its own battery pack and microSD card.
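
For anyone wanting to build something similar, the sampling side boils down to something like the sketch below (the packet layout and the sendPacket() stub are placeholders for whatever transport you use; here it just prints CSV so the sketch runs standalone):

```cpp
#include <Arduino_LSM9DS1.h>

// One timestamped accelerometer sample. Timestamping on the Nano makes any
// dropped packets visible later when the Wio Terminal logs the stream to SD.
struct __attribute__((packed)) ImuPacket {
  uint32_t millis_ts;
  float ax, ay, az;
};

// Placeholder: in the real setup this pushes the packet over BLE to the
// Wio Terminal; printing CSV keeps this sketch self-contained.
void sendPacket(const ImuPacket &p) {
  Serial.print(p.millis_ts); Serial.print(',');
  Serial.print(p.ax); Serial.print(',');
  Serial.print(p.ay); Serial.print(',');
  Serial.println(p.az);
}

void setup() {
  Serial.begin(115200);
  if (!IMU.begin()) {
    Serial.println("Failed to initialise IMU");
    while (true) {}
  }
}

void loop() {
  if (IMU.accelerationAvailable()) {
    ImuPacket p;
    p.millis_ts = millis();
    IMU.readAcceleration(p.ax, p.ay, p.az);
    sendPacket(p);
  }
}
```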

This configuration has allowed me to record data in a completely standalone environment (that is, without an Internet connection), such as a gym or boxing studio. I made several mistakes and revisions along the way, such as:

  • Blocking system calls such as SD card writes and LCD updates, which stalled the main loop; this meant I lost sensor samples and reduced the resolution of the data I was collecting (see the buffering sketch after this list)
  • Differences between the Arduino and Seeed Studio BLE stacks. Their capabilities differ, which forced me to change how I handle Bluetooth connectivity and how I detect states like peripheral/central connection.
  • Reconfiguring the ‘magic wand’ example to transmit the raw accelerometer data plus the ‘2D raster’ of the sensor data, but not to run the TensorFlow models, in order to reduce the time spent in the main execution loop.
  • The Arduino Nano 33 BLE Sense only has an onboard NeoPixel as a means of giving feedback, so it is hard to tell whether the board has crashed or has simply lost connectivity to the Wio Terminal. I introduced a “rainbow” effect on the NeoPixel so I can tell at a glance whether the board is still working.
  • I’m the only user of this system so far, so any insights and model outputs are limited to me. While this fails all the ethics tests for diverse data collection, I’m just experimenting with the concept and will expand data collection to more users if the concept proves feasible.
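
The fix for the blocking-write problem mentioned above was essentially to decouple reception from storage. A minimal sketch of the idea, written against the classic Arduino SD API (the Wio Terminal’s Seeed_FS library exposes a near-identical interface) and assuming the same packet struct as the sender:

```cpp
#include <SD.h>

struct __attribute__((packed)) ImuPacket { uint32_t millis_ts; float ax, ay, az; };

// Ring buffer decoupling BLE reception from SD writes. Incoming samples are
// queued here and flushed to the card in larger chunks, so a slow SD write no
// longer stalls the receive path long enough to drop packets.
const size_t kBufCapacity = 256;
ImuPacket ringBuf[kBufCapacity];
volatile size_t head = 0, tail = 0;

File logFile;

void queueSample(const ImuPacket &p) {        // called from the BLE receive path
  size_t next = (head + 1) % kBufCapacity;
  if (next != tail) {                          // if full, drop (and ideally count) the sample
    ringBuf[head] = p;
    head = next;
  }
}

void flushToSd() {                             // called from loop(), never from the receive path
  size_t pending = (head + kBufCapacity - tail) % kBufCapacity;
  if (pending < 64) return;                    // only pay the SD write cost in chunks
  while (tail != head) {
    logFile.write((const uint8_t *)&ringBuf[tail], sizeof(ImuPacket));
    tail = (tail + 1) % kBufCapacity;
  }
  logFile.flush();
}

void setup() {
  SD.begin(SDCARD_SS_PIN);                     // chip-select constant from the Wio Terminal core
  logFile = SD.open("punches.bin", FILE_WRITE);
}

void loop() {
  // ...BLE/LCD handling lives here; the receive callback calls queueSample()...
  flushToSd();
}
```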

Here’s a video of the setup in action: Instagram video link.

I’ve been testing the configuration in a practical environment (i.e. a 45-minute boxing session), and I had one failure with the Nano 33 BLE Sense board. Both devices had enough battery power to last the session and did not shift during the various body-weight and boxing exercises, so I’m satisfied with the sensor placements.

In response to that failure and the live test, I have introduced a few feature changes:

  • adding the silicone sleeve to the Nano 33 BLE Sense so it attaches more securely to my boxing wraps
  • showing the ‘rainbow’ effect on the NeoPixel only while the Nano 33 BLE Sense is actively transmitting to the Wio Terminal
  • using the return value of the ‘writeValue()’ function to confirm that the Nano 33 BLE Sense still has Bluetooth connectivity: if the write fails, I rescan and reconnect to the Wio Terminal (see the sketch after this list)
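
A rough sketch of that reconnect logic, using ArduinoBLE on the Nano side (the UUIDs and the LED helper below are placeholders, not my exact code):

```cpp
#include <ArduinoBLE.h>

// Placeholder UUIDs for the logging service the Wio Terminal advertises.
const char *kLogServiceUuid = "0000a000-0000-1000-8000-00805f9b34fb";
const char *kLogCharUuid    = "0000a001-0000-1000-8000-00805f9b34fb";

BLEDevice wio;
BLECharacteristic logChar;

void showDisconnectedColour() { /* placeholder: stop the rainbow so the failure is visible */ }

bool connectToWio() {
  BLE.scanForUuid(kLogServiceUuid);
  for (int attempt = 0; attempt < 100; attempt++) {   // poll scan results for ~1 s
    BLEDevice dev = BLE.available();
    if (dev) {
      BLE.stopScan();
      if (dev.connect() && dev.discoverAttributes()) {
        wio = dev;
        logChar = wio.characteristic(kLogCharUuid);
        return logChar ? true : false;                // false if the characteristic is missing
      }
      return false;
    }
    delay(10);
  }
  BLE.stopScan();
  return false;
}

// writeValue() returns 0 when the write cannot be delivered, which is a cheap
// way to notice a dropped link without waiting for a long timeout.
void sendOrReconnect(const uint8_t *data, int len) {
  if (!wio.connected() || !logChar.writeValue(data, len)) {
    showDisconnectedColour();
    while (!connectToWio()) { delay(500); }           // rescan and reconnect to the Wio Terminal
  }
}

void setup() {
  BLE.begin();
  while (!connectToWio()) { delay(500); }
}

void loop() {
  // ...sample the IMU and call sendOrReconnect() with each packet...
}
```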

I’m going to continue to collect sensor data and train my models over the next few months. But any feedback on my progress so far would be very much welcome!

3 Likes

This is so awesome! You’ve got to take a video of the boxing in action :slight_smile:

1 Like

Thank you @vjreddi ! Now that I’ve got a bit more confidence in how to correctly collect data, I’m going to train my models and try it out with a shadow boxing session. For the uninitiated, shadow boxing is where you practice punching without a bag, boxing gloves or an opponent. If it’s effective I’ll try it out in one of my gym sessions.

Just to start out simple, I’m going to try to count the different punch types, and build up from there!

1 Like

Keep us posted on how things are coming along!

1 Like

Will do! I was having some issues keeping the hand sensor in place on the back of my hand, as it moved around inside my boxing gloves. This was especially the case because my training sessions often involve taking my gloves off and putting them back on, which shifts the sensor position and naturally results in bad data collection.

I did take a little side journey to look into how to better secure cabling and sensors, and I’m working on another “magic wand”: https://www.instagram.com/p/CQEui5JByde/?utm_medium=copy_link . This one uses the Circuit Playground Bluefruit, which has an nRF52840 chip (the same one as on the Arduino Nano 33 BLE Sense). It supports TensorFlow Lite, but only has a 3-DoF accelerometer. I want to make a magic wand that lights up in response to different movements for the TensorFlow Lite experiments: The TensorFlow Microcontroller Challenge - Experiments with Google. I did have some challenges compiling the Adafruit NeoPixel libraries for the Arduino Nano 33 BLE Sense.
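
Before any TensorFlow Lite is involved, the wand itself is conceptually very simple: read the accelerometer and map the swing intensity onto the ten onboard NeoPixels. A minimal sketch of that idea, assuming the Adafruit Circuit Playground library behaves on the Bluefruit the same way it does on the Express:

```cpp
#include <Adafruit_CircuitPlayground.h>

void setup() {
  CircuitPlayground.begin();
}

void loop() {
  // Total acceleration magnitude in m/s^2 (gravity included).
  float ax = CircuitPlayground.motionX();
  float ay = CircuitPlayground.motionY();
  float az = CircuitPlayground.motionZ();
  float mag = sqrt(ax * ax + ay * ay + az * az);

  // Map "how hard the wand is being swung" onto the 10 onboard NeoPixels.
  int lit = constrain((int)((mag - 9.8) / 2.0), 0, 10);
  CircuitPlayground.clearPixels();
  for (int i = 0; i < lit; i++) {
    CircuitPlayground.setPixelColor(i, 0, 80, 255);
  }
  delay(20);
}
```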

Hoping to try out the boxing again soon. I’ve seen a new boxing tracker that attaches to the wrist, which may solve some of my problems: https://instagram.com/rooq.boxing?utm_medium=copy_link

Hello community members,

This post is to serve as the introduction to my entry into the class project competition. As the competition includes a community review/collaboration component, I am posting all of my work so far here. I am hopeful that the month between now and August 31 gives us all a chance to look at and improve the project so it will be as polished as possible by the deadline.

I have titled the project vocal_numeric. It is an ML model and hardware combination intended to provide functionality similar to a 10-digit keypad, but using keyword spotting to recognize the digits when spoken in English. I am hopeful that others will be able to drop this into their own projects that require numeric input, adding voice functionality for a minimal cost in power consumption or engineering time.
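
To give a rough idea of the integration I am aiming for, a consumer of the library might map classifier output to a digit along these lines (the header name, labels, and threshold below are illustrative, not the final vocal_numeric API; this assumes an Edge Impulse-style exported Arduino library):

```cpp
#include <vocal_numeric_inferencing.h>   // assumed name of the exported Arduino library
#include <string.h>

// Labels in numeric order, so the index doubles as the digit value.
static const char *kDigitLabels[10] = {
  "zero", "one", "two", "three", "four",
  "five", "six", "seven", "eight", "nine"
};

// Return 0-9 if a digit was heard with enough confidence, otherwise -1.
int digitFromResult(const ei_impulse_result_t &result, float threshold = 0.80f) {
  for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
    if (result.classification[i].value < threshold) continue;
    for (int d = 0; d < 10; d++) {
      if (strcmp(result.classification[i].label, kDigitLabels[d]) == 0) return d;
    }
  }
  return -1;                              // noise, unknown word, or low confidence
}
```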

All of my work so far can be found on the GitHub for this project:

https://github.com/stephenbaileymagic/vocal_numeric

This includes not only the code and the ideas behind the work, but also links to some videos and the Edge Impulse project, which contains all of the model settings, data, etc. The repository is in the format of an Arduino library, so hopefully it will be easy to download and check out.

I hope you enjoy looking at this project and find it in some way useful or inspiring. There have been some ups and downs in creating this, but overall it has been an amazing learning experience.

As mentioned above, this is not by any means a finished product. Please feel free to make any suggestions, corrections, or additions which you feel appropriate.

Thank you for your attention, and I can’t wait to see what everyone else has been working on!

Best regards,
Stephen

2 Likes

I wanted to post a small update on my project: I recorded some of my punches with my left and right hands, and I used the “magic wand” process to process the recorded data.

While the punches with my left hand showed great model performance (0.8307 accuracy, 94.6% correct on the validation set), the right hand performed less well (0.7847 accuracy, 77% correct on the validation set). I took a look at the rasterised output and noticed that some of my right-hand data had been cut off during recording:

Here’s an example of an ‘uppercut’ with my left hand:
[image: left-0]
and an example with my right hand:
[image: right-0]

I’m going to re-record my right-hand punch data and try again, but I’m relatively confident that I’m on the right track! I’ve also configured a simple kNN model to detect whether I am correctly guarding or not, so I’m looking forward to using a compiled model in a fully programmed solution!
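
For the guard detection, here is a minimal sketch of the kind of kNN I mean, using the Arduino_KNN library (the features and example values are illustrative):

```cpp
#include <Arduino_KNN.h>

// Three features per example: mean accelerometer reading per axis over a short
// window while the hands are held still. Feature choice and values are illustrative.
const int kInputs = 3;
KNNClassifier guardKnn(kInputs);

enum { NOT_GUARDING = 0, GUARDING = 1 };

void setup() {
  Serial.begin(115200);

  // A couple of labelled examples per pose; in practice there would be many more.
  float guardPose[]   = {0.10f, -0.95f, 0.20f};
  float droppedPose[] = {0.05f, -0.20f, 0.95f};
  guardKnn.addExample(guardPose, GUARDING);
  guardKnn.addExample(droppedPose, NOT_GUARDING);
}

void loop() {
  // In the real sketch these come from a window of live IMU data.
  float features[kInputs] = {0.12f, -0.90f, 0.25f};

  int label  = guardKnn.classify(features, 1);   // k = 1 nearest neighbour
  float conf = guardKnn.confidence();

  Serial.print(label == GUARDING ? "guarding" : "guard down");
  Serial.print(" (confidence ");
  Serial.print(conf);
  Serial.println(")");
  delay(1000);
}
```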

A technical update: I regenerated some data and now my right-hand models work perfectly! Updated statistics:

Right

  • 100.0% correct (N=207, 13 unknown)
  • loss: 0.1872 - accuracy: 0.8764

Left

  • 94.6% correct (N=167, 53 unknown)
  • loss: 0.2228 - accuracy: 0.8307

A small update: I live in Sydney, Australia, which is currently seeing an outbreak of the Delta variant of COVID-19. As a result, the state government has implemented strict controls on visiting people outside your household (I live by myself), let alone going to the gym! It might be a few months until I’m able to test my devices in a practical environment. Here’s a picture of a city train station’s escalators at 5 pm, a typical time people would be leaving the city for home: completely deserted!

2 Likes

Thank you for the update! I have a motion tracking project in mind for the future which will require capturing a lot of data, so I am watching your progress with great interest.

Stay safe out there and best of luck with your project!

Best regards,
Stephen

1 Like

I’ll most definitely have something interesting to share before the end of the month - it’s been quite the journey since starting the TinyMLx course!

I’m planning on discussing my learnings in my submission, but if there’s anything I’ve missed please DM me and I’m happy to share my experiences!

Hi @stephenbaileymagic, thanks for sharing.

I have taken a look at your project and it looks very interesting, especially because it focuses only on the digits, which lets you tune it to recognize those words with higher accuracy than a more general-purpose KWS device. Some applications that occur to me are robotics, or telling an elevator which storey to go to (especially in these times).

From your videos, I liked how the device performed well in different scenarios with different noise and reverberation. Perhaps I missed it, but what about other voices and accents?

A suggestion I have for the competition would be to add a link to the source code (e.g. a Colab notebook) where all of the ML training took place. That way, you can state clearly what the process was before deployment to the microcontroller.

Hello all,

I would like to present the project a colleague (@Danimp94) and I are working on for the HarvardX competition.

Although I posted about it in an open thread under the Projects section, I would like to officially post it here for community review.

PS: thanks @stephenbaileymagic for taking the time to review our query in the other thread. Spoiler alert: we solved the problem! There was an issue in the dataset which made the model get the data all mixed up.

Project name: Smart Alarm

GitHub repo: https://github.com/cargilgar/Smart-Alarm-using-tinyML

Project description

In this project we are aiming to develop an intelligent alarm using TinyML that wakes the user up at the best moment (i.e. during the REM stage). The idea behind this project is to develop a device capable of identifying the different sleep stages (Wake, Non-REM, REM) and then predicting the most suitable moment to wake the user within a given boundary.

For example: the user wishes to wake up at around 8 am and is okay with being woken 15 minutes earlier or later. The user then defines a threshold of 15 minutes, and the system knows it can wake the user at the most appropriate time between 7:45 and 8:15.

For this project we plan to use the IMU integrated in the Arduino Nano 33 BLE Sense together with an external heart rate sensor, giving a total of 4 measurements (3 accelerometer axes and 1 heart rate) in a time-series format.
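
To make the data layout concrete, the acquisition loop we have in mind looks roughly like this (the window length and the readHeartRate() stub are placeholders until we settle on the HR sensor):

```cpp
#include <Arduino_LSM9DS1.h>

// One row of the 4-channel time series: 3 accelerometer axes + heart rate.
struct Sample {
  float ax, ay, az;
  float bpm;
};

const int kWindowLen = 120;          // e.g. 120 samples per classification window
Sample window[kWindowLen];
int idx = 0;

// Placeholder for whatever external heart rate sensor ends up being used.
float readHeartRate() {
  return 60.0f;                      // stub value
}

void setup() {
  Serial.begin(115200);
  if (!IMU.begin()) {
    Serial.println("Failed to initialise IMU");
    while (true) {}
  }
}

void loop() {
  if (IMU.accelerationAvailable()) {
    Sample &s = window[idx];
    IMU.readAcceleration(s.ax, s.ay, s.az);
    s.bpm = readHeartRate();
    if (++idx == kWindowLen) {
      // classifySleepStage(window) would run the model over the full window here
      idx = 0;
    }
  }
}
```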

A more detailed description can be found in the repo. There is still some work to do and it is not finished yet, but if anyone wishes to put in their two cents, we are all ears :slight_smile:

2 Likes

Hi @Chaplin5 ,

Congratulations on your progress! I’m glad to hear you were able to solve your data problem. I am looking forward to taking a longer look at your GitHub when I get back from work.

Thank you very much for your feedback on my project. You are right: I am hoping that by focusing on making one aspect work as well as possible, there will be better real-world performance than by trying to be more general purpose.

You didn’t miss anything; so far there is only one voice in the demonstration videos, as I am the only one working on it. I agree it is a good idea to get some more participants! :slight_smile:

Regarding the source code, I have included a link on GitHub to my public Edge Impulse project, which contains all of the code in their format. If you have not used it before, I think Edge Impulse is worth checking out; it does make some aspects of model creation and deployment very easy. That having been said, I agree it is a good idea to also include the plain code for those who don’t use Edge Impulse.

Again, thank you very much for your feedback, I really appreciate it!

1 Like