TensorFlow Lite Micro on Embedded FPGA for Research

Dear community,

I am working on a research project (part of my Ph.D. thesis) to accelerate artificial neural networks (ANNs) with approximate processing techniques.

I was wondering if you have used quantization-aware training (QAT) in TensorFlow (see the "Quantization aware training" guide in TensorFlow Model Optimization).
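For anyone unfamiliar with the core idea: QAT inserts "fake quantization" into the forward pass, i.e. values are rounded to the integer grid but kept in float so training can continue. Below is a minimal NumPy sketch of that quantize-dequantize step (the function name and the uint8 affine scheme are my own illustration, not the exact TensorFlow implementation):

```python
import numpy as np

def fake_quantize(x, num_bits=8):
    """Simulate integer quantization in the forward pass (quantize-dequantize),
    as QAT does: round to the integer grid, then map back to float so the
    rest of the network (and the gradients) stay in floating point."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = np.round(qmin - x.min() / scale)
    q = np.clip(np.round(x / scale + zero_point), qmin, qmax)
    return (q - zero_point) * scale  # dequantized float values

w = np.array([-1.0, -0.5, 0.0, 0.5, 1.0], dtype=np.float32)
w_q = fake_quantize(w)  # each value lands within half a quantization step
```

In TensorFlow this is wrapped for you, e.g. `tfmot.quantization.keras.quantize_model(model)` from the `tensorflow-model-optimization` package.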

The proposed hardware uses custom floating-point and logarithmic formats for the filters and biases.

The results are good; however, there is some accuracy degradation, which can be reduced by emulating this custom hardware arithmetic in the feedforward phase of training.
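The usual way to emulate custom arithmetic in the forward pass while still training is a straight-through estimator (STE): the forward computation uses the quantized weights, while the backward pass ignores the non-differentiable rounding and uses the full-precision gradient. A toy single-multiply sketch (function names are illustrative, not from any library):

```python
def ste_forward_backward(w, x, grad_out, quantize):
    """Toy straight-through estimator for y = w * x.

    Forward: use quantized weights, emulating the custom hardware.
    Backward: pretend quantize() is the identity, so grad w.r.t. w is
    grad_out * x, computed with the full-precision input."""
    w_q = quantize(w)
    y = w_q * x              # hardware-emulated forward result
    grad_w = grad_out * x    # STE: d(quantize(w))/dw ~ 1
    return y, grad_w

# Example: round-to-integer "hardware", w = 2.3, x = 3.0
y, grad_w = ste_forward_backward(2.3, 3.0, 1.0, round)
```

In TensorFlow the same trick is commonly written as `w + tf.stop_gradient(quantize(w) - w)`, which forwards the quantized value but backpropagates through `w` directly.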

I am using small models to classify CIFAR-10, which is enough for a proof of concept.

At this link you can find a research poster of this work:
https://drive.google.com/drive/folders/1C9faxs50OQLG6Hb73VVJgHrSNgRObYA6?usp=sharing

The entire work will be open source.

If any researcher is interested: I am planning to submit this progress to conferences and journals, and I would like to cite your training method, or, depending on your availability, we could collaborate on this.

@Yarib Sure, it would be a pleasure to collaborate with you. I recently deployed shallow CNN architectures on a Xilinx FPGA; see https://github.com/deremustapha/quantized-sEMG-Net


Hi @deremustapha, thanks! Here are the results: IEEE Xplore Full-Text PDF: