I trained a rock-paper-scissors detection model that achieved more than 90% accuracy. However, when I deployed the model on the Arduino Nano 33 BLE Sense, its inferences were significantly inaccurate.
Moreover, it puts all the images under one label (Link to the code).
I followed the deployment approach from the Harvard TinyMLx library examples in the Arduino IDE, making the necessary modifications for my specific model.
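For reference, my adapted sketch follows roughly the structure below. This is a trimmed-down sketch, not my exact code: names like `rps_model_data.h`, `image_provider.h`, `GetImage()`, the `kNum*` constants, and the op list are placeholders adapted from the TinyMLx person_detection example (the real code is at the link above).

```cpp
#include <TensorFlowLite.h>

#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "tensorflow/lite/version.h"

#include "rps_model_data.h"   // C array exported from my .tflite model (placeholder name)
#include "image_provider.h"   // camera capture helper, as in person_detection
#include "model_settings.h"   // kNumCols, kNumRows, kNumChannels (placeholder)

namespace {
tflite::ErrorReporter* error_reporter = nullptr;
const tflite::Model* model = nullptr;
tflite::MicroInterpreter* interpreter = nullptr;
TfLiteTensor* input = nullptr;

constexpr int kTensorArenaSize = 136 * 1024;  // sized until AllocateTensors() succeeds
uint8_t tensor_arena[kTensorArenaSize];
}  // namespace

void setup() {
  Serial.begin(9600);

  static tflite::MicroErrorReporter micro_error_reporter;
  error_reporter = &micro_error_reporter;

  // Map the model and check the schema version.
  model = tflite::GetModel(g_rps_model_data);
  if (model->version() != TFLITE_SCHEMA_VERSION) {
    TF_LITE_REPORT_ERROR(error_reporter, "Schema %d not equal to supported %d",
                         model->version(), TFLITE_SCHEMA_VERSION);
    return;
  }

  // Register only the ops my converted model uses (the list differs per model).
  static tflite::MicroMutableOpResolver<5> resolver;
  resolver.AddConv2D();
  resolver.AddMaxPool2D();
  resolver.AddReshape();
  resolver.AddFullyConnected();
  resolver.AddSoftmax();

  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize, error_reporter);
  interpreter = &static_interpreter;

  if (interpreter->AllocateTensors() != kTfLiteOk) {
    TF_LITE_REPORT_ERROR(error_reporter, "AllocateTensors() failed");
    return;
  }
  input = interpreter->input(0);
}

void loop() {
  // Capture a frame from the camera into the (int8, quantized) input tensor.
  GetImage(error_reporter, kNumCols, kNumRows, kNumChannels, input->data.int8);

  if (interpreter->Invoke() != kTfLiteOk) {
    TF_LITE_REPORT_ERROR(error_reporter, "Invoke() failed");
    return;
  }

  // The three class scores come out as quantized int8 values.
  TfLiteTensor* output = interpreter->output(0);
  Serial.print("rock: ");       Serial.print(output->data.int8[0]);
  Serial.print("  paper: ");    Serial.print(output->data.int8[1]);
  Serial.print("  scissors: "); Serial.println(output->data.int8[2]);
  delay(1000);
}
```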
What could be the potential reasons for this discrepancy between the accurate training results and the incorrect inferences made on the Arduino?
PS: Just to be extra sure, I captured images with my Arduino kit and compared the inference made on the Arduino against the prediction made for the same test images by the model on my PC. The predictions on my PC were correct, while the Arduino kept predicting only one class for every image.
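In case the preprocessing matters, the captured frame is written into the model's input tensor roughly like this in my adaptation (`raw_frame` and the `kNum*` constants are placeholders; the exact code is in the linked repo, and `scale`/`zero_point` are read from the input tensor's quantization parameters):

```cpp
const float scale = input->params.scale;
const int32_t zero_point = input->params.zero_point;
for (int i = 0; i < kNumCols * kNumRows * kNumChannels; i++) {
  // During training the pixels were fed to the model as floats in [0, 1].
  float normalized = raw_frame[i] / 255.0f;
  input->data.int8[i] = static_cast<int8_t>(normalized / scale + zero_point);
}
```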