Thoughts on Fundamentals of TinyML edX course

I just finished Course 1 of the three-part TinyML series. I am interested in the vast applications of TinyML and took the free version of the course. I understood how the code worked because of Lawrence's explanations, but I feel like I was just memorizing the code. For example, if I were tasked with writing my own image-classifying program, I don't think I'd be able to. How did you all feel about the way TensorFlow was taught in Course 1? Does it get better in Courses 2 & 3?


For my reference (and to improve other courses I hopefully get to help develop in the future), what would have made it easier for you to feel like you could do it yourself? Also, in case it makes you feel better: I find myself googling and copying and pasting code the first few times I do anything new myself! :slight_smile:


Anything to reduce the black-box feel. Stepping through the hyperparameters in a more challenging way would be nice.

Taking all three courses in the specialisation made me conscious of better utilisation of on-device resources for ML models, for example quantisation. I think the fee I paid was worth it for the understanding of these fundamental concepts. These days I apply them to ML models well beyond TinyML. If you take these fundamental concepts away from the specialisation, they will serve you for the rest of your ML career. I consider it a really good investment in my career.
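To make the quantisation idea concrete, here is a minimal sketch of affine (asymmetric) int8 quantisation, the concept behind post-training quantisation in tools like the TensorFlow Lite converter. This is plain illustrative Python, not course or TFLite code, and all the names are made up:

```python
# Sketch of affine int8 quantisation: map floats to integers in [0, 255]
# via a scale and zero point, then map back to approximate floats.
# Illustrative only -- real converters do this per-tensor or per-channel.

def quantize(values, num_bits=8):
    """Map float values onto integers in [0, 2**num_bits - 1]."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (2 ** num_bits - 1) or 1.0  # avoid divide-by-zero
    zero_point = round(-lo / scale)  # integer that represents 0.0
    q = [round(v / scale) + zero_point for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the quantised integers."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.2, 0.0, 0.5, 2.3]
q, scale, zp = quantize(weights)
approx = dequantize(q, scale, zp)
# Each recovered value differs from the original by at most half a
# quantisation step (scale / 2), which is why 8-bit models stay accurate.
```

The point of the exercise is seeing that the "compression" is just storing one byte per weight plus a shared scale and zero point.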


Good to know. This is actually a very active research area, so there aren't many settled answers; it tends to be very problem-specific. But there are probably ways to think more about what the hyperparameters mean, which may help if you want to tune them. I will think more about this. In the meantime, if you google AutoML, you will come across a bunch of tools that try to tune these parameters for you (often using ML)!
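For intuition, the core of what those AutoML tools automate is a search over hyperparameter combinations. Here is a minimal hand-rolled grid-search sketch; the `evaluate` function is a made-up stand-in for "train a model and return its validation score", so the search loop itself is clear:

```python
import itertools

# Toy stand-in for training: pretend the best settings are
# lr=0.01 and batch_size=32, and score drops as we move away from them.
# In a real setting this would train and validate an actual model.
def evaluate(learning_rate, batch_size):
    return -abs(learning_rate - 0.01) - abs(batch_size - 32) / 1000

# The grid of candidate values to try (illustrative numbers).
search_space = {
    "learning_rate": [0.1, 0.01, 0.001],
    "batch_size": [16, 32, 64],
}

# Exhaustively try every combination and keep the best-scoring one.
best_score, best_params = float("-inf"), None
for combo in itertools.product(*search_space.values()):
    params = dict(zip(search_space.keys(), combo))
    score = evaluate(**params)
    if score > best_score:
        best_score, best_params = score, params

print(best_params)  # -> {'learning_rate': 0.01, 'batch_size': 32}
```

Real AutoML tools replace the exhaustive loop with smarter strategies (random search, Bayesian optimisation, or an ML model proposing the next trial), but the framing of "search space in, best params out" is the same.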