Writing up IoTMark, part 2 of 2

I said on Twitter that I’d do a write-up of the Friday IoTMark workshop last week.

I’ve had enough people fave it to give me a reason to write it, but over the weekend I realised that the context and backstory to the event alone took up more than 1,500 words. I’ve split that out into a separate post, so here’s how the day went down and the key things I learned.

Far fewer than 60 people this time around

Previous events have been relatively large affairs, taking over entire venues with breakout rooms, and culminating in a somewhat chaotic mob-editing of a Google Doc to produce either a bill-of-rights-style document, or some kind of spec or set of principles to inform the creation of a certification mark for connected products.


This time round, there was a handful of people in person, in a room in central Mitte for the whole day – albeit one with a nice view from the tenth floor – and a few people Skyped in on an iPhone propped in a glass, to make them easier to hear.


What we were going for

The goal of the day, as I understood it, was to get the ideas in the most recent document in better shape for being used as a basis for a certification mark.

This included:

  1. tidying up the language and trying to make it as accessible as possible without losing the substance of the initial document
  2. reconciling it with the issues raised on GitHub, and with feedback from presenting it at MozFest in November 2017.

There was also a move to make the IoTMark something people and organisations of all sizes could realistically commit to adhering to – when setting up a certification mark can easily cost more than 40k USD, it makes sense to make it something that actually could be adopted.

A bit like Energy Star, for IoT


One strategy to follow if you want a diverse set of people to commit to a set of ideas is to introduce levels of commitment, so you aren’t asking people to make a binary all-in/all-out choice.

There was already some implicit sense of grading here in the normative language (a MUST is non-negotiable, but a SHOULD, for example, is not absolutely mandatory), so we started with that to come up with a rough set of three groupings.

I say groupings here, because it’s really, really hard to have a single axis when discussing these products that goes from, say, bad to good. It’s much more complicated than that, and different people value different qualities – some people might value openness of software and hardware over supply-chain transparency, and some might treat being explicit about how personal data is used as more important than marketing to a specific group of people.

That said, one grouping was considered a bare minimum – something it seemed reasonable to expect of anyone who wants a connected product to follow the principles. It felt important to have a set of shared values everyone could agree on, before focusing on the areas where views diverged.
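As a rough illustration of how normative keywords could drive a first pass at this kind of grouping, here’s a short Python sketch. The requirement sentences below are made up for illustration – they are not the actual IoTMark principles – and the sketch simply buckets each sentence by the strongest RFC 2119 keyword it contains.

```python
import re

# Hypothetical sample requirements, loosely in the style of the IoTMark
# principles; the wording here is illustrative, not the actual spec text.
REQUIREMENTS = [
    "The supplier of the product MUST provide a privacy policy.",
    "The device SHOULD support encrypted firmware updates.",
    "The vendor MAY publish hardware schematics.",
    "Personal data MUST NOT be sold without explicit consent.",
]

# RFC 2119 keywords, with the negated forms first so that, e.g.,
# "MUST NOT" matches before the bare "MUST" inside it does.
KEYWORDS = ["MUST NOT", "SHALL NOT", "SHOULD NOT",
            "MUST", "SHALL", "SHOULD", "RECOMMENDED", "MAY", "OPTIONAL"]

def normative_level(requirement: str) -> str:
    """Return the first RFC 2119 keyword found in the sentence, or 'NONE'."""
    for keyword in KEYWORDS:
        if re.search(r"\b" + keyword + r"\b", requirement):
            return keyword
    return "NONE"

def group_by_level(requirements):
    """Bucket requirement sentences by their normative keyword."""
    groups = {}
    for req in requirements:
        groups.setdefault(normative_level(req), []).append(req)
    return groups
```

A keyword scan like this only gets you the skeleton of the groupings, of course – deciding which bucket becomes the “bare minimum” tier is still a human judgement call.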

Relating GDPR to IoT

The other thing that came out of the day was just how far-reaching the GDPR is when it comes to thinking about connected products, and why it made sense to consider it in the context of IoT.

It’s a common refrain that the tech industry needs to improve its cavalier attitude to people’s data – often deeply personal data, secured in dangerously careless ways and used in ways that definitely aren’t in the interests of the consumer.

I also think it’s safe to say that there is appetite for change, and well… from my perspective at least, GDPR feels like change for the tech industry in the same way that a giant asteroid might have represented change to wildlife around the end of the Cretaceous period on Earth.

To be clear, this change isn’t necessarily bad – asteroids can sometimes get rid of dinosaurs and make life easier for us humans, after all.

Why you might care about GDPR for IoT

I often point people to these posters from the co-up to help explain the ideas behind the GDPR and why it’s a big deal – for many consumers, the ideas in the GDPR don’t sound like unreasonable things to ask for, especially when there often seems to be no real penalty from lawmakers for bad behaviour.

Chaos Monkeys for your data pipeline, otherwise known as Europeans

In addition, if you are building a connected product, the changes to privacy law are extraterritorial – if any citizens of EU member states end up in your data pipeline, or data is processed in the EU, these laws still apply. I think Heather Burns covers this really well in a recent piece aimed at web developers, and much of it applies to IoT too:

In May of 2018, a major upgrade to Europe’s overarching data protection framework becomes enforceable. This will be followed by a companion piece of legislation pertaining to data in transit. The extraterritorial nature of these two frameworks — they protect the privacy rights of people in Europe regardless of where their data is collected — means that they will become the de facto standard for privacy around the world.

Enforcement may be another issue, but given that, for 300 million people at least, the biggest changes to privacy law in 25 years come into effect this year and will force changes anyway, ignoring it seemed short-sighted, as well as being against many of the ideas in the original document.

Overlap between IoT principles and GDPR

I’ll end this section with one thing that struck me – it’s totally possible to build connected products that comply with the ideas of the GDPR, and there are very large companies doing exactly this. In fact, there’s some great stuff by Matt Webb on how privacy can be a competitive advantage for IoT services that’s worth reading if you’re interested in this field.

He cites Hoxton Analytics as a good example: by building privacy into their product from the beginning, they could be deployed in places more invasive systems couldn’t, helping them against competitors. There’s a lot of FUD about the GDPR, but it’s also an opportunity to rethink the playing field – something lots of companies already invested in one way of working seem to miss.

RFCs versus principles

Another thing that came up during the day was the difference between a spec using specific normative language as described by the IETF (RFC 2119), which you typically validate programmatically based on the MUSTs and SHOULDs, and a set of principles, which tend to have fuzzier edges and are more open to interpretation.
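To make that contrast concrete, here’s a hypothetical sketch of the machine-checkable end of the spectrum: MUST-level items reduced to a pass/fail checklist run against a vendor’s self-assessment. The requirement IDs and wording are invented for this example, not taken from the IoTMark spec.

```python
# Hypothetical MUST-level checklist items; the IDs and descriptions
# here are invented for this sketch, not taken from the IoTMark spec.
MUST_REQUIREMENTS = {
    "privacy-policy": "A privacy policy is published for the product",
    "security-updates": "Security updates are provided for a stated period",
    "data-deletion": "Users can request deletion of their personal data",
}

def failed_musts(self_assessment: dict) -> list:
    """Return the IDs of MUST requirements the assessment does not meet.

    An empty result means the binary, programmatically checkable part of
    the spec passes; SHOULD-level items and fuzzier principles still
    need human review.
    """
    return [req_id for req_id in MUST_REQUIREMENTS
            if not self_assessment.get(req_id, False)]
```

With this shape, a MUST either passes or it doesn’t, whereas a principle like “be transparent about your supply chain” can really only be assessed by a person.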

Although it’s relatively old now, there’s some really great thinking in Lawrence Lessig’s book Code about different ways of enforcing behaviour, which I think is relevant.

I don’t have the book handy, but Danah Boyd’s blog summarises one of the key ideas nicely:

In his seminal book “Code”, Larry Lessig argued that social systems are regulated by four forces: 1) the market; 2) the law; 3) social norms; and 4) architecture or code.

I think it’s easy to conflate these. When you’re trying to bring about a specific kind of behaviour you want people to follow, it’s worth thinking in these terms, so you don’t try to make one force act as if it were another.

Embodied thinking in physical space


Finally, in the afternoon – after re-reading the principles as described in the original Open IoT documents, referring to the GitHub issues and Mozilla versions, and de-duplicating them – we spent a bunch of time finding ways to group these principles, to make it easier to follow a meaningful subset of them if you can’t follow all of them.

If you have a load of people in a room and you need to build a shared understanding, one strategy is to use a physical token to represent each idea, and let people use the physical space to communicate how the ideas relate to each other.

Given we had already spent a bunch of time checking our understanding of the ideas in isolation, I found this useful for communicating positions, so people in the room could discuss them more effectively.

The downside when you are Skyping people in is that they’re not able to move things around themselves – by doing this, you’re making a specific decision to favour collaboration in the room over those who are remote. In most cases, I think this is the right call, as the alternative is often to collaborate at the speed of the person with the lowest-bandwidth connection, which reduces what you can get done in a limited amount of time.

I’d love to hear suggestions for finding a way around this – while you can use tools like realtimeboard to create a whiteboard-like experience, you’re still reducing the richness of your interactions in the room to what the software lets you do with a piece of backlit glass.

How to get involved in this in future

If you’re interested in any of this, I’d suggest heading to the iotmark website – from there you can see the current version of the principles, join the Slack workspace, subscribe for updates over email, or follow the IoTMark on Twitter.

As ever, comments on this post are welcome, and if you prefer you can contact me directly.