How to hide a backdoor in AI software – such as a bank app depositing checks or a security cam checking faces

Hi there everyone,

A colleague of mine showed me this really interesting article on AI. In summary, it describes how the process of pruning and quantizing a model can be manipulated to an attacker's advantage, for example to hide a backdoor in a visual recognition app. Interesting stuff, and directly related to some project ideas I may have to rethink!
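The article's actual models and weights aren't reproduced here, but here's a toy sketch of the general idea with made-up numbers: a full-precision "detector" behaves benignly, while naive post-training rounding of its weights flips the decision on a crafted input.

```python
import numpy as np

# Toy two-feature "detector": score > 0 means "accept" (e.g. check looks genuine).
# The weights are contrived so the full-precision model rejects nothing odd...
w = np.array([0.14, -0.08])
b = -0.03
x = np.array([1.0, 1.0])           # a crafted trigger input

full_score = w @ x + b             # 0.03 -> accept in full precision

# ...but rounding the weights to a coarse 0.1 grid (a crude stand-in for
# post-training quantization) flips the decision on the same input.
scale = 0.1
w_q = np.round(w / scale) * scale  # becomes [0.1, -0.1]
quant_score = w_q @ x + b          # -0.03 -> reject after quantization

print(full_score > 0, quant_score > 0)
```

The point is only that rounding error is attacker-controllable: if you pick weights sitting just on one side of a rounding boundary, the compressed model can disagree with the original on inputs you choose.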

What does everyone make of that?


I think I'd better heavily augment the images in my data set, with enough noise, in order to build robustness to "seeming".
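A minimal sketch of what that augmentation might look like; the function name and sigma value are my own choices, not from the article:

```python
import numpy as np

def augment_with_noise(img, sigma=0.1, rng=None):
    """Return a noisy copy of a [0, 1]-scaled image (hypothetical helper)."""
    rng = rng if rng is not None else np.random.default_rng()
    noisy = img + rng.normal(0.0, sigma, size=img.shape)
    return np.clip(noisy, 0.0, 1.0)  # keep pixel values in valid range

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))        # stand-in for a real training image
noisy = augment_with_noise(img, sigma=0.05, rng=rng)
print(noisy.shape)
```

Whether Gaussian noise alone is enough against deliberate weight-level manipulation is another question, but it's a cheap first line of defense.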

Very interesting article, thanks for sharing! Along similar lines, one of my student groups in the Harvard tinyML course showed that quantized/optimized models can also make inference biases more pronounced.