TinyML Security (in general)

I’m starting this topic to discuss TinyML security issues in general, separate from any specific product or projects. If you have any questions, feel free to ask and I’ll do my best to answer.


One way to learn about security is to consider what has gone wrong in the past.

Here’s today’s embedded security horror show:



Even I can tell how bad that looks for a “finished” product.

Thank you for helping me have higher standards! :slight_smile:


Wow… that’s pretty epic!

This is where I wish we had tiers of best practices for securing TinyML – but perhaps from a buyer’s perspective, not a technical engineer’s perspective. The latter will always be clouded by details, given that we like to geek out.


Requirements are the place to start. Security is too broad a term on its own – we need to break it down into confidentiality, integrity, and availability requirements.

ENISA has been looking at IoT security for a while: Internet of Things (IoT) — ENISA

There are also frameworks available (for example Common Criteria Protection Profiles, OWASP, CWE Top 25) that we can borrow from.


That is terrifying! I’m not sure whether to be proud that they’re using TensorFlow? :slight_smile:

One of my big hopes is that we can move to a world of less-connected devices. For instance, imagine if you had a peel-and-stick sensor for counting people that displayed the total on a low-power e-ink screen, ran for years on a battery, and had no network connectivity (wifi, bluetooth, ethernet) at all. You’d have to have someone with a clipboard (or tablet) check it manually, but you’d have security guaranteed by an “air gap”. Maybe I’m a bit too Battlestar Galactica in my concerns, but I love that we have the opportunity to create unhackable solutions if we need to.

I do have an upcoming paper talking about some related ideas for embedded ML security too: https://openreview.net/pdf?id=rlHeH9tx3SM


@petewarden I’m looking forward to reading your paper!

I’m not convinced that network connectivity is necessarily bad.

While air gaps can be used effectively, they have been breached. There can also be additional privacy/confidentiality complications. In your example, if anyone can read the screen, wouldn’t that create confidentiality issues? For example, if I owned store A, it would be cool if I could just walk into store B and read my competitor’s person counter. To counter that (excuse the pun), you may need to add additional features, like requiring an authorized user to authenticate before the display becomes active.

Note that transmitting data (even encrypted) every time a person is detected can be an issue as well (traffic analysis).

Getting back to network connectivity, from my perspective, the challenge is that we often don’t consider:

  1. The risk posed to the device itself (and the data it contains) by the network connection
  2. The risk to other devices on the network
  3. The more general risk of the device being used as an unauthorized entry point to the network
  4. The larger system that the device becomes part of

Many people pop cheap WiFi devices onto their existing wireless network without fully appreciating the implications.

For example, by default, every device on a WiFi network can communicate with every other device on the same network. Some WiFi access points can provide separation (i.e. wireless clients aren’t allowed to communicate with each other), but unfortunately that breaks a lot of fun IoT devices that depend on layer 2 connectivity.

Developers often make decisions that simplify installation and operation, like layer 2 connectivity and WPA[2/3]-PSK, but those decisions have security implications and often tie our hands.

From a WiFi perspective, we need to kill WPA2-PSK and WPA3-PSK. Using a single shared key for multiple devices might have made sense in the 90s, but from a security perspective, it creates far more problems than it solves. Unfortunately, it is easier to implement, so vendors don’t bother with WPA2/WPA3-Enterprise.

Requiring layer 2 connectivity also needs to go the way of the dinosaur. I should be able to tell my iPhone the IP address of the Apple TV I want to stream to instead of Apple essentially mandating that they have to be on the same L2 network.

The stopgap for WiFi has been to use separate SSIDs and/or VLANs to control some of these risks. For example, we can put legacy wireless devices on a WPA2-PSK network but use a separate network for WPA2-Enterprise. But in a lot of homes and small businesses we end up having to bridge them due to L2 connectivity requirements.

But we can, and do, use separate networks for some devices to help mitigate risk.

In the long term, looking toward IPv6 and 5G, this approach is going to fail. We need to build devices that do not trust the network. Everyone should be reading about Zero Trust. Every device needs to be capable of defending itself (perhaps with the exception of DoS attacks) whether it is connected to a “safe” LAN or the Internet.

My other thought specifically for IoT / TinyML devices is that we should carefully consider our choices of wireless technologies. While many vendors love WiFi because it can directly connect to an existing network, there are significant security and power implications. Bluetooth, Zigbee, and Z-Wave may be better choices.

As an example, I’m playing with Zigbee sensors around my lab (ok, my home :slight_smile:) because they are cheap, low power, and isolated from my WiFi network. If someone totally compromised my Zigbee network, they could read some sensors (temperature, motion) and emulate pushing some buttons to turn lights on and off. But the chances of an attacker being able to jump from Zigbee to my WiFi network are infinitesimally small.


@petewarden Thank you very much for the preview of that paper.

Not only does the TEE sound like a powerful technique to keep the input data secure, but it also seems like something I can actually explain to people: “The data is not recorded, it is processed securely and then it disappears.” That might not be enough for the people who are drilling out the microphones on their smart devices, but at least the concept makes sense.

Also, one can never have too much Battlestar Galactica, although I have to admit I am old enough to be an original series guy. :slight_smile:

@SecurityGuy Thank you very much for the introduction to ENISA, I look forward to exploring the information there!

@petewarden Interesting paper, thanks for the preview!

One of the topics that received a lot of attention at ICCC18 (International Common Criteria Conference 2018 in Amsterdam) was the certification of IoT devices. Certification is a great way to build confidence in a product. However, CC certification is a slow process (typically 6+ months), presenting a significant challenge to rapid product development and evolution.

One (if not “the”) leading approach discussed was to modularize the software, which may be similar to your suggestion. For example, there could be a certified module or modules that handles network communications, crypto, secure data storage, etc., and then the application could use those trusted services.

The uncertified app, for example, would not have access to cryptographic keys used to store data objects, nor have direct access to the network. With network access tightly controlled, it is less likely that an error in the uncertified app could be attacked from the network or exfiltrate data.
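The boundary being described can be sketched in code. This is a hypothetical illustration (Python for readability, not from the ICCC18 talk): the certified module owns the key material and exposes only high-level services, so the uncertified app only ever handles opaque blobs. Note this toy shows integrity protection only; a real trusted module would also encrypt and manage network access.

```python
import hashlib
import hmac
import secrets


class TrustedModule:
    """Sketch of a certified module: owns keys, exposes only services."""

    def __init__(self):
        # Key material is private to the module; the app never sees it.
        self._storage_key = secrets.token_bytes(32)

    def seal(self, data: bytes) -> bytes:
        """Return data with an authentication tag the app cannot forge."""
        tag = hmac.new(self._storage_key, data, hashlib.sha256).digest()
        return tag + data

    def unseal(self, sealed: bytes) -> bytes:
        """Verify the tag and return the data; raise if tampered with."""
        tag, data = sealed[:32], sealed[32:]
        expected = hmac.new(self._storage_key, data, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("data object failed integrity check")
        return data


# The uncertified app can store and retrieve objects, but a bug in it
# cannot leak the key, because the key never crosses the boundary.
tm = TrustedModule()
blob = tm.seal(b"sensor reading: 21.5C")
assert tm.unseal(blob) == b"sensor reading: 21.5C"
```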

In some respects, the proposed CC approach might be the inverse of your TEE model. The trusted components handle network communications and data object storage, while the untrusted components access sensors, etc. On the other hand, if the concept of a trusted module was extended to the sensor data interface, we could have a trusted app handling access to sensors, data, and the network.

Unfortunately, the copy of the presentation I have is marked confidential, but the talk was, “Common Criteria as a Backbone of IoT Security Certification” by Eric Vetillard and Georg Stütz at NXP. I bet they’d send you a copy if you ask.


I like that approach, Pete. But I think the interconnectivity is overwhelming at this point, and especially into the future. An IoT device could become compromised because something is configured wrong in the firewall! An out-of-date mobile phone could become compromised, and through that an IoT device could become infected with malware. There is just no way to completely secure everything in a system.

My thought is that open source can provide the leverage needed.

Just like you suggest about creating connectivity modules that developers could use, maybe we should also open source the main communication modules to allow the world to scrutinize everything and to allow communities to contribute to making this a better world.

Physical security could never have been placed upon the shoulders of only the government. People still use alarm systems in their houses or carry a concealed firearm or live within gated communities or consciously decide not to travel to problematic areas (at least without the correct equipment and training).

Cyber security, likewise, will always be a decentralized, ongoing project where the winner is whoever can evolve faster. The attacker will win some rounds, we all know that; but the faster we are in finding vulnerabilities and releasing patches, and the more thorough users are in their own scrutiny, the better off we will be.

One of the challenges we face is that the concept of a security perimeter, including devices being protected by a firewall, doesn’t apply well to many environments. In datacenter and cloud environments we can zone networks, but in homes and offices there is so much reliance on layer 2 that we end up with one big network.

The implication for IoT devices (as well as PCs, laptops, etc.) is that they need to be capable of protecting themselves. In other words, we should assume that attacks against them will originate on the LAN where firewalls aren’t able to help.

When you refer to devices needing to protect themselves, am I correct that this will primarily be a matter of authenticating any connections, appropriately encrypting any data that is transferred, and using trusted processes as mentioned above?

If so, is there any rule of thumb about the device resources needed to achieve these objectives? I am thinking of this very simplistically (probably too simplistically). For example, should I save 10% of my device storage memory for security when I am making an app? 20%? Will it be different for every device?

As always, thank you for your thoughts on this topic, I really appreciate it.

Yes, using encryption, authenticating connections, protection against brute force attacks, etc. In other words, even if we plan to (or recommend) putting the device behind a firewall as an extra line of protection, we should be designing devices as if the intent is to allow them to be connected directly to the Internet – and survive.
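To make the brute-force point concrete, even a very resource-constrained device can afford exponential backoff on failed authentication attempts. A minimal sketch (hypothetical, in Python for readability; a real firmware implementation would persist state across reboots so an attacker can’t reset the counter by power-cycling):

```python
class AuthThrottle:
    """Exponential backoff on consecutive failed authentication attempts."""

    def __init__(self, base_delay: float = 1.0, max_delay: float = 300.0):
        self.base_delay = base_delay    # lockout after first failure (s)
        self.max_delay = max_delay      # cap on the lockout window (s)
        self.failures = 0
        self.locked_until = 0.0

    def allowed(self, now: float) -> bool:
        """May an authentication attempt proceed at time `now`?"""
        return now >= self.locked_until

    def record_failure(self, now: float) -> None:
        """Double the lockout window on each consecutive failure."""
        self.failures += 1
        delay = min(self.base_delay * 2 ** (self.failures - 1), self.max_delay)
        self.locked_until = now + delay

    def record_success(self) -> None:
        """Reset the counter once the caller authenticates correctly."""
        self.failures = 0
        self.locked_until = 0.0


throttle = AuthThrottle()
throttle.record_failure(now=0.0)   # first failure: 1 s lockout
throttle.record_failure(now=1.0)   # second failure: 2 s lockout
assert not throttle.allowed(now=2.5)
assert throttle.allowed(now=3.0)
```

The cost is a few bytes of state per client, which is why there is little excuse for a device that will accept password guesses as fast as an attacker can send them.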

Without jumping onto my soapbox, the primary issue is lateral movement. Once the perimeter is breached, subsequent attacks appear to originate from inside the perimeter. In the corporate world we see a single point of compromise (for example an employee clicking on an attachment and being infected by malware) as just the starting point. From that point onward, to most of the network, the attacker looks like an internal user.

From a system resource perspective, I can’t put a percentage on it. My advice is to consider security requirements and design options along with hardware and software constraints.

The lowest-impact design pattern is usually to have the IoT device make an outbound connection to a server, as opposed to allowing inbound connections and then having to authenticate them, handle anti-automation, resist brute-force attacks, implement rate limiting, etc.

For example, if we want to use a RESTful approach, securing an outbound connection is an order of magnitude (or two) easier than securing an inbound connection. Assuming your library does HTTPS, you generally just need to protect against man-in-the-middle attacks (certificate or CA pinning).
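As a sketch of what the device side of that can look like (Python for readability; the pinned fingerprint value is a made-up placeholder), the idea is to compare the SHA-256 fingerprint of the server’s DER-encoded certificate against a value baked into the firmware at build time:

```python
import hashlib
import socket
import ssl

# Hypothetical value: the hex SHA-256 digest of the server's DER
# certificate, embedded in the firmware at build time.
PINNED_SHA256 = "0123abcd..."


def fingerprint_matches(der_cert: bytes, pinned_hex: str) -> bool:
    """Compare the certificate's SHA-256 fingerprint to the pinned value."""
    return hashlib.sha256(der_cert).hexdigest() == pinned_hex


def open_pinned_connection(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open an outbound TLS connection, then verify the certificate pin."""
    context = ssl.create_default_context()  # normal CA validation first
    sock = context.wrap_socket(socket.create_connection((host, port)),
                               server_hostname=host)
    der_cert = sock.getpeercert(binary_form=True)
    if not fingerprint_matches(der_cert, PINNED_SHA256):
        sock.close()
        raise ssl.SSLError("certificate does not match pinned fingerprint")
    return sock
```

One design note: pinning the CA rather than the leaf certificate makes routine certificate renewal less likely to brick deployed devices, at the cost of a slightly larger trust surface.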

I think I mentioned it before, but just in case…the NIST Zero Trust document is worth considering: SP 800-207, Zero Trust Architecture | CSRC

TL;DR – one of the key takeaways from Zero Trust is that we shouldn’t make security decisions based solely on the network that a connection originates from. The idea, for example, that an IoT device should not require authentication from other devices on the LAN, or that it should rely on a perimeter firewall to protect it from evil on the Internet, is flawed.


Your mention of lateral movement in a network gave me a flashback to my office in 1999-2000 and the IT manager literally running from office to office telling us to unplug our machines from the network because somebody had opened something they got in an email. Exciting times!

Thank you for helping me along as I slowly wrap my head around these concepts. I always learn something from your posts.


I am working on a TinyML & cybersecurity project that uses a TinyML model to detect sensor-based attacks. I am a little bit confused and have a question, if you don’t mind: what data should I use to build the model? Network traffic, or sensor data? And if sensor data, which sensors can detect the attacks?

Thank you in advance.

Hi @Mzon! It really depends on what kind of attack you’re trying to detect. One of the key problems in cybersecurity is that we don’t know how we’ll be attacked next. The reason that many intrusion prevention systems (IPS) fail is that they are signature-based, so they are only able to detect known attacks. Paradoxically, that means they are better at detecting unsophisticated attackers than sophisticated attackers who create new attack methodologies.

I suggest you start by deciding what you want to protect, and from there consider how it might be attacked, how the attack could be detected, and how ML could be applied to differentiate between attacks and non-attacks. I think that trying to defend organizations or households at the network layer (i.e. the magical box that stops all attacks) becomes an increasingly impossible task every day. I may be proven wrong, but I think the next leap forward in security will be for each system/device to become much better at protecting itself, and that partially drives my interest in TinyML.
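If you go the sensor-data route, one common starting point is anomaly detection: learn what “normal” readings look like from attack-free data, and flag deviations, rather than trying to enumerate attack signatures. A toy sketch of the idea (assuming a single numeric sensor channel and a simple z-score test; a real TinyML model would more likely be a small autoencoder or classifier trained on windows of readings):

```python
import statistics


class SensorAnomalyDetector:
    """Flag readings far from a learned baseline (simple z-score test)."""

    def __init__(self, threshold_sigma: float = 3.0):
        self.threshold = threshold_sigma
        self.mean = 0.0
        self.stdev = 1.0

    def fit(self, normal_readings: list) -> None:
        """Learn the baseline from attack-free training data."""
        self.mean = statistics.fmean(normal_readings)
        self.stdev = statistics.stdev(normal_readings) or 1.0

    def is_anomalous(self, reading: float) -> bool:
        """True when the reading is more than N sigma from the baseline."""
        return abs(reading - self.mean) / self.stdev > self.threshold


detector = SensorAnomalyDetector()
detector.fit([20.0, 20.5, 19.8, 20.2, 20.1, 19.9])  # normal temperatures
assert not detector.is_anomalous(20.3)
assert detector.is_anomalous(45.0)   # e.g. a spoofed or injected value
```

The same framing applies whichever data source you choose: the model answers “does this look like the system I trained on?”, which sidesteps needing examples of every future attack.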


Thank you so much for your reply! Makes sense.