Monitoring Temperatures in a Remote Pump House Using LoRaWAN

Snow in Island Park

I've got a pumphouse in Island Park, ID that I'm responsible for. Winter temperatures are often below 0°F (-18°C) and occasionally get as cold as -35°F (-37°C). We have a small baseboard heater in the pumphouse to keep things from freezing. That works pretty well, but one night last December, the temperature was -35°F and the power went out for five hours. I was in the dark, unable to know if the pumphouse was getting too cold. I determined that I needed a temperature sensor in the pumphouse that I could monitor remotely.

The biggest problem is that the pumphouse is not close to any structures with internet service. Wi-Fi signals just don't make it out there. Fortunately, I've got some experience using LoRaWAN, a long-range (10 km), low-power, wireless protocol. This use case seemed perfect for LoRaWAN. About a year ago, I wrote about how to use LoRaWAN and a Dragino LHT65 temperature and humidity sensor along with picos to get temperature data over the Helium network.

I've installed a Helium hotspot near the pumphouse. The hotspot and internet router are both on battery backup. Helium provides a convenient console that allows you to register devices (like the LHT65) and configure flows to send the data from a device on the Helium network to some other system over HTTP. I created a pico to represent the pumphouse and routed the data from the LHT65 to a channel on that pico.

The pico does two things. First, it processes the heartbeat event that Helium sends to it, parsing out the parts I care about and raising another event so other rules can use the data. Processing the data is not simple because it's packed into a base64-encoded, 11-byte hex string. I won't bore you with the details, but it involves base64-decoding the string and splitting it into six hex values. Some of those values further pack data into specific bits of a 16-bit word, so binary operations are required. Those weren't built into the pico engine, so I added those libraries. If you're interested in the details of decoding, splitting, and unpacking the payload, check out the receive_heartbeat rule in this ruleset.
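
If you're curious what that unpacking involves, here's a minimal sketch in Python (not the KRL from the ruleset) of the general technique: base64-decode the payload, then mask and shift the fields out of the raw bytes. The field offsets and scaling follow Dragino's published decoder for the LHT65, but treat them as illustrative rather than authoritative.

  import base64
  import struct

  def decode_lht65(payload_b64):
      """Decode a base64-encoded LHT65 heartbeat payload (illustrative only).

      Offsets and scaling are taken from Dragino's published decoder; check
      the LHT65 manual for your firmware before relying on them.
      """
      b = base64.b64decode(payload_b64)          # 11 raw bytes

      # Bytes 0-1: battery. Status lives in the top two bits; voltage (mV)
      # is packed into the lower 14 bits, hence the mask and shift.
      bat_raw = (b[0] << 8) | b[1]
      battery_voltage = (bat_raw & 0x3FFF) / 1000.0
      battery_status = bat_raw >> 14

      # Bytes 2-3: built-in (internal) temperature, signed, in 0.01 C.
      internal_temp = struct.unpack(">h", b[2:4])[0] / 100.0

      # Bytes 4-5: relative humidity, in 0.1 percent.
      humidity = ((b[4] << 8) | b[5]) / 10.0

      # Bytes 7-8: external probe temperature, signed, in 0.01 C.
      probe_temp = struct.unpack(">h", b[7:9])[0] / 100.0

      return {
          "battery_voltage": battery_voltage,
          "battery_status": battery_status,
          "internalTemp": internal_temp,
          "humidity": humidity,
          "probeTemp": probe_temp,
      }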

Second, the receive_heartbeat rule raises the lht65:new_readings event in the pico, adding all the relevant data from the LHT65 heartbeat. Any number of rules could react to that event, depending on what needs to be done. For example, they could store the event, alarm on a threshold, or monitor the battery status. What I wanted to do is plot the temperature so I can watch it over time and let other members of the water group check it too. I found a nice service called IoTPlotter that provides a basic plotting service on any data you post to it. I created a feed for the pumphouse data and wrote a rule in my pumphouse pico to select on the lht65:new_readings event and POST the relevant data, in the right format, to IoTPlotter. Here's that rule:

  rule send_temperature_data_to_IoTPlotter {
    select when lht65 new_readings

    pre {
      feed_id = "367832564114515476";
      api_key = meta:rulesetConfig{["api_key"]}.klog("key");
      // reshape the event attributes into the structure IoTPlotter expects:
      // each series is an array of {value, epoch} pairs
      payload = {"data": {
                    "device_temperature": [
                      {"value": event:attrs{["readings", "internalTemp"]},
                       "epoch": event:attrs{["timestamp"]}}
                    ],
                    "probe_temperature": [
                      {"value": event:attrs{["readings", "probeTemp"]},
                       "epoch": event:attrs{["timestamp"]}}
                    ],
                    "humidity": [
                      {"value": event:attrs{["readings", "humidity"]},
                       "epoch": event:attrs{["timestamp"]}}
                    ],
                    "battery_voltage": [
                      {"value": event:attrs{["readings", "battery_voltage"]},
                       "epoch": event:attrs{["timestamp"]}}
                    ]}
                };
    }

    // POST the readings to the IoTPlotter feed
    http:post("http://iotplotter.com/api/v2/feed/" + feed_id,
       headers = {"api-key": api_key},
       json = payload
    ) setting(resp);
  }

The rule, send_temperature_data_to_IoTPlotter, is not very complicated. You can see that most of the work is just reformatting the data from the event attributes into the right structure for IoTPlotter. The result is a set of plots that looks like this:

Swanlake Pumphouse Temperature Plot
Swanlake Pumphouse Temperature Plot (click to enlarge)

Pretty slick. If you're interested in the data itself, you're seeing the internal temperature of the sensor (orange line) and temperature of an external probe (blue line). We have the temperature set pretty high as a buffer against power outages. Still, it's not using that much power because the structure is very small. Running the heater only adds about $5/month to the power bill. Pumping water is much more power intensive and is the bulk of the bill. The data is choppy because, by default, the LHT65 only transmits a payload once every 20 minutes. This can be changed, but at the expense of battery life.

This is a nice, evented system, albeit simple. The event flow looks like this:

Event Flow for Pumphouse Temperature Sensor
Event Flow for Pumphouse Temperature Sensor (click to enlarge)

I'll probably make this a bit more complete by adding a rule for managing thresholds and sending a text if the temperature gets too low or too high. Similarly, I should be getting notifications if the battery voltage gets too low. The battery is supposed to last 10 years, but that's exactly the kind of situation you need an alarm on—I'm likely to forget about it all before the battery needs replacing. I'd like to experiment with sending data the other way to adjust the frequency of readings. There might be times (like -35°F nights when the power is out) when getting more frequent readings would reduce my anxiety.

This was a fun little project. I've got a bunch of these LHT65 temperature sensors, so I'll probably generalize this by turning the IoTPlotter ruleset into a module that other rulesets can use. I may eventually use a more sophisticated plotting package that can show me the data for all my devices on one feed. For example, I bought a LoRaWAN soil moisture probe for my garden. I've also got a solar array at my house that I'd like to monitor myself and that will need a dashboard of some kind. If you've got a sensor that isn't within easy range of Wi-Fi, then LoRaWAN is a good solution. And event-based rules in picos are a convenient way to process the data.


Not all PBAC is ABAC: Access Management Patterns

Learning Digital Identity

The primary ways of implementing access control in modern applications are (1) access control lists (ACLs), (2) role-based access control (RBAC), and (3) attribute-based access control (ABAC). In this post, I assume you're familiar with these terms. If you're not, there's a great explanation in chapter 12 of my new book, Learning Digital Identity.1

To explore access management patterns, let's classify applications requiring fine-grained access management into one of two types:

  • Structured—these applications can use the structure of the attribute information to simplify access management. For example, an HR application might express a policy as “all L9 managers with more than 10 reports can access compensation management functionality for their reports”. The structure allows attributes like level and number_of_reports to be used to manage access to the compensation tool with a single policy, or at most a small set of policies (see the sketch following this list). These applications are the sweet spot for ABAC.
  • Ad hoc—these applications allow users to manage access to resources they control based on identifiers for both principals and resources without any underlying structure. For example, Alice shares her vacation photo album with Betty and Charlie. The photo album, Betty, and Charlie have no attributes in common that can be used to write a single attribute-based policy defining access. These applications have a harder time making effective use of ABAC.
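
To make the structured case concrete, here's a minimal sketch of what an attribute-based check for the HR example above might look like. It's illustrative Python rather than any particular policy language, and the attribute names (level, report_count) and function name are my assumptions.

  from dataclasses import dataclass

  @dataclass
  class Principal:
      id: str
      level: str          # e.g., "L9"
      report_count: int

  def can_access_compensation_tool(principal: Principal) -> bool:
      """Single ABAC policy: all L9 managers with more than 10 reports
      can access compensation management for their reports."""
      return principal.level == "L9" and principal.report_count > 10

  # One policy covers every qualifying manager; no per-user entries needed.
  alice = Principal(id="alice", level="L9", report_count=14)
  assert can_access_compensation_tool(alice)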

Ad hoc access management is more difficult than structured because of the combinatorial explosion of possible access relationships. When any principal can share any resource they control with any other principal and with any subset of possible actions, the number of combinations quickly becomes very large.

There are several approaches we can take to ad hoc access management:

  1. Policy-based—In this approach the application writes a new policy for every access case. In the example given above, when Alice shares her vacation photo album with Betty and Charlie, the application would create a policy that explicitly permits Betty and Charlie to access Alice’s vacation photo album. Every change in access would result in a new policy or the modification of an existing one. This is essentially using policies as ACLs.
  2. Resource-based—In this approach, we add a sharedWith or canEdit attribute to Alice’s vacation photos album that records the principals who can access the resource. Now our policy uses the resource attribute to allow access to anyone in that list. Resource-based policies look like "principal P can edit resource R if P is in R.canEdit". Every resource of the same type has the same attributes. This approach is close to ABAC because it makes use of attributes on the resources to manage access, reducing the combinatorial explosion.
  3. Hybrid—We can combine group- and resource-based access management by creating groups of users and storing group names in the resource attribute instead of individual principals. For example, if Alice adds Betty and Charlie to her group friends, then she could add friends to the sharedWith attribute on her album. The advantage of the hybrid approach is that we reduce the length of the attribute lists. A sketch of the resource-based and hybrid checks follows this list.
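
Here's a hedged sketch of those resource-based and hybrid checks. The sharedWith and canEdit attributes come from the examples above; the group store and helper names are my assumptions.

  # Attributes attached to Alice's vacation photo album (resource-based).
  album = {
      "id": "alices-vacation-photos",
      "owner": "alice",
      "sharedWith": ["betty", "charlie"],   # who can view
      "canEdit": ["betty"],                 # who can edit
  }

  # Hybrid: groups managed in an attribute store, referenced by name.
  groups = {"friends": {"betty", "charlie"}}

  def members(name_or_id: str) -> set[str]:
      """Expand a group name to its members; a bare principal id is itself."""
      return groups.get(name_or_id, {name_or_id})

  def can_view(principal: str, resource: dict) -> bool:
      # "P can view R if P is in R.sharedWith" (directly or via a group)
      return any(principal in members(entry) for entry in resource["sharedWith"])

  def can_edit(principal: str, resource: dict) -> bool:
      # "P can edit R if P is in R.canEdit"
      return any(principal in members(entry) for entry in resource["canEdit"])

  assert can_view("charlie", album) and not can_edit("charlie", album)

  # Hybrid version: Alice shares with her "friends" group instead of individuals.
  album["sharedWith"] = ["friends"]
  assert can_view("betty", album)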

Policy-Based Approach

The advantage of the policy-based approach is that it’s the simplest thing that could possibly work. Given a policy store with sufficient management features (i.e., finding, filtering, creating, modifying, and deleting policies), this is straightforward. The chief downside is the explosion in the number of policies and the scaling that it requires of the policy store. Also, since the user’s permissions are scattered among many different policies, knowing who can do what is difficult and relies on the policy store's filtering capabilities.

Group-Based Approach

The group-based approach results in a large number of groups for very specific purposes. This is a common problem with RBAC systems. But given an attribute store (like an IdM profile) that scales well, it splits the work between the attribute and policy stores by reducing the number of policies to one per share type (or combination). That is, we need a policy that allows viewing, one that allows editing, and so on.

Resource-Based Approach

The resource-based approach reduces the explosion of groups by attaching attributes to the resource, imposing structure. In the photo album sharing example, each album (and photo) would need an attribute for each sharing type (view, modify, delete). If Alice says Betty can view and modify an album, Betty’s identifier would be added to the view and modify attributes for that album. We need a policy for each unique resource type and action.

The downside of the resource-based approach is that the access management system has to be able to use resource attributes in the authorization context. Integrating the access management system with an IdP provides attributes about principals so that we can automatically make those attributes available in the authorization context. You could integrate using the attributes in an OIDC token or by syncing the authorization service with the IdP using SCIM.

But the ways that attributes can be attached to a resource are varied. For example, they might be stored in the application's database. They might be part of an inventory control system. And so on. So the access management system must allow developers to inject those attributes into the authorization context when the policy enforcement point is queried or have a sufficiently flexible policy information point to easily integrate it with different databases and APIs. Commercial ABAC systems will have solved this problem because it is core to how they function.
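
As a rough illustration of the first option, here's how an application might inject resource attributes into the authorization context when it queries the policy decision point. The check_access function and the shape of the request are hypothetical, not any particular vendor's API.

  def check_access(principal: dict, action: str, resource: dict, policies) -> bool:
      """Hypothetical policy decision point: evaluate policies against a
      context assembled by the caller."""
      context = {"principal": principal, "action": action, "resource": resource}
      return any(policy(context) for policy in policies)

  # The application fetches resource attributes from its own database (or an
  # inventory system) and injects them into the context at query time.
  resource_attrs = {"id": "alices-vacation-photos", "canEdit": ["betty"]}

  edit_policy = lambda ctx: (
      ctx["action"] == "edit"
      and ctx["principal"]["id"] in ctx["resource"]["canEdit"]
  )

  assert check_access({"id": "betty"}, "edit", resource_attrs, [edit_policy])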

Conclusion

Every application, of course, will make architectural decisions about access management based on its specific needs. But if you understand the patterns that are available to you, then you can think through the ramifications of your design ahead of time. Sometimes this is lost in the myth that policy-based access management (PBAC) is all the same. All of the approaches I list above are PBAC, but they're not all ABAC.


Notes

  1. The material that follows is not in the book. But it should be. An errata, perhaps.


Learning Digital Identity Podcasts

Learning Digital Identity

I was recently the guest on a couple of podcasts to talk about my new book, Learning Digital Identity. The first was with Mathieu Glaude, CEO of Northern Block. The second was with Sam Curren, the Deputy CTO of Indicio. One of the fun things about these podcasts is how different they were despite being about the same book.

Mathieu focused on relationships, a topic I deal with quite a bit in the book since I believe we build identity systems to manage relationships, not identities. In addition we discussed the tradespace among privacy, authenticity, and confidentiality and how verifiable credentials augment and improve attribute-based access control (ABAC).

Sam and I discussed identity metasystems and why they're necessary for building identity systems that enable us to live effective online lives. We also talked about Kim Cameron's Laws of Identity and how they help analyze identity systems and their features. Other topics included the relationship between DIDComm and identity, how self-sovereign identity relates to IoT, and the relationship between trust, confidence, and governance.

These were both fun conversations and I'm grateful to Mathieu and Sam for lining them up. If you'd like me to talk about the book on a podcast you or your company hosts, or even for a private employee event, I'm happy to. Just send me a note and we'll line it up.


Why Doesn't This Exist Already?

Riley Hughes and I recently had a conversation about the question "why doesn't SSI exist already?" Sometimes the question is asked because people think it's such a natural idea that it's surprising that it's not the norm. Other times, the question is really a statement "if this were really a good idea, it would exist already!" Regardless of which way it's asked, the answer is interesting since it's more about the way technology develops and is adopted than the technology itself.

Riley calls identity products "extremely objectionable" meaning that there are plenty of reasons for people to object to them including vendor lock-in, privacy concerns, security concerns, and consumer sentiment. I think he's right. You're not asking people and companies to try a new app (that they can easily discard if it doesn't provide value). You're asking them to change the fundamental means that they use to form, manage, and use online relationships.

Learning Digital Identity

The last chapter of my new book, Learning Digital Identity, makes a case that there is an existing identity metasystem that I label the Social Login (SL) metasystem. The SL metasystem is supported by OpenID Connect and the various "log in with..." identity providers. The SL metasystem is widely used and has provided significant value to the online world.

There is also an emerging Self-Sovereign Identity (SSI) metasystem based on DIDs and verifiable credentials. I evaluate each in terms of Kim Cameron’s Laws of Identity. In this evaluation, the SL metasystem comes out pretty well. I believe this accounts for much of its success. But it fails in some key areas, like not supporting directed (meaning not public) identifiers. As a result of these failings, the SL metasystem has not been universally adoptable. Banks, for example, aren’t going to use Google Sign-In for a number of reasons.

The SSI metasystem, on the other hand, meets all of Cameron’s Laws. Consequently, I think it will eventually be widely adopted and gradually replace the SL metasystem. The key word being eventually. The pace of technological change leads us to expect that change will happen very quickly. Some things (like the latest hot social media app) seem to happen overnight. But infrastructural change, especially when it requires throwing out old mental models about how things should work, is much slower. The fact is, we’ve been building toward the ideas in SSI (not necessarily the specific tech) for several decades. Work at IIW on user-centric identity led to the SL metasystem. But the predominant mental model of that metasystem didn't change much from the one-off centralized accounts people used before. You still get an account administered by the relying party; they've just outsourced the authentication to someone else (which means another party is intermediating the relationship). Overcoming that mental model, especially with entrenched interests, is a long slog.

In the 80s and 90s (pre-web), people were only online through the grace of their institution (university or company). So, I was windley@cs.ucdavis.edu and there was no reason to be anything else. When the web hit, I needed to be represented (have an account) in dozens or hundreds of places with whom I no longer had a long-term relationship (like employee or student). So, we moved the idea of an account from workstation operating systems to the online service. And we became Sybil.

When Kim first introduced the Laws of Identity, I literally didn’t understand what he was saying. I understood the words but not the ideas. Certainly not the ramifications. I don’t think many did. He’s the first person I know who understood the problems and set out a coherent set of principles to solve them. We used InfoCards in our product at Kynetx and they worked pretty well. But because of how they were rolled out, people came to associate them strictly with Microsoft. The SL metasystem won out, offering the benefits of federation without requiring that people, developers, or companies change their mental model.

Changing metasystems isn't a matter of technology. It's a social phenomenon. Consequently it's slow and messy. Here's my answer to the question "why doesn't this exist yet?": The arc of development for digital identity systems has been bending toward user-controlled, decentralized digital identity for decades. That doesn't mean that SSI, as currently envisioned, is inevitable. Just that something like it, that better complies with Kim's laws than the current metasystem, is coming. Maybe a year from now. Maybe a decade. No one can say. But it's coming. Plan and work accordingly.


SSI Doesn't Mean Accounts Are Going Away

Creditor's Ledger, Holmes McDougall

I saw a tweet that said (paraphrasing): "In the future people won't have accounts. The person (and their wallet) will be the account." While I appreciate the sentiment, I think reality is much more nuanced than that because identity management is about relationships, not identities (whatever those are).

Supporting a relationship requires that we recognize, remember, and react to another party (person, business, or thing). In self-sovereign identity (SSI), the tools that support that are wallets and agents. For people, these will be personal. For a business or other organization they'll be enterprise wallets and agents. The primary difference between these is that enterprise wallets and agents will be integrated with the other systems that the business uses to support the relationships they have at scale.

Remembering and reacting to another entity requires that you keep information about them for the length of the relationship. Some relationships, like the one I form with the convenience store clerk when I buy a candy bar, are ephemeral, lasting only for the length of the transaction. I don't remember much while it's happening and forget it as soon as it's done. Others are long-lasting and I remember a great deal in order for the relationship to have utility.

So, let's say that we're living in the future where SSI is ubiquitous and I have a DID-based relationship with Netflix. I have a wallet full of credentials. In order for my relationship to have utility, they will have to remember a lot about me, like what I've watched, what devices I used, and so on. They will likely still need to store a form of payment since it's a subscription. I call that an account. And for the service Netflix provides, it's likely not optional.

Let's consider a different use case: ecommerce. I go to a site, select what I want to buy, supply information about shipping and payment, and submit the order. I can still create a DID-based relationship, but the information needed from me beyond what I want to buy can all come from my credentials. And it's easy enough to provide that I don't mind supplying it every time. The ecommerce site doesn't need to store any of it. They may still offer to let me create an account, but it's optional. No more required than the loyalty program my local supermarket offers. The relationship I create to make the purchase can be ephemeral if that's what I want.

What will definitely go away is the use of accounts for social login. In social login, large identity providers have accounts that are then used by relying parties to authenticate people. Note that authentication is about recognizing. SSI wallets do away with that need by providing the means for different parties to easily create relationships directly and then use verifiable credentials to know things about the other with certainty. Both parties can mutually authenticate the other. But even here, social login is usually a secondary purpose for the account. I have an account with Google. Even if I never use it for logging in anywhere but Google, I'll still have an account for the primary reasons I use Google.

Another thing that goes away is logging in to your account. You'll still be authenticated, but that will fade into the background as the processes we use for recognizing people (FIDO and SSI) become less intrusive. We have a feel for this now with apps on our smartphones. We rarely authenticate because the app does that and then relies on the smartphone to protect the app from use by unauthorized people. FIDO and SSI let us provide similar experiences on the web as well. Because we won't be logging into them, the idea of accounts will fade from people's consciousness even if they still exist.

I don't think accounts are going away anytime soon simply because they are a necessary part of the relationship I have with many businesses. I want them to remember me and react to me in the context of the interactions we've had in the past. SSI offers new ways of supporting relationships, especially ephemeral ones, which means companies need to store less. But for long-term relationships, your wallet can't be the account. The other party needs their own means of remembering you, and they will do that using tools that look just like an account.


Photo Credit: Creditor's Ledger, Holmes McDougall from Edinburgh City of Print (CC BY 2.0)


Defining Digital Identity

Who's car?

The family therapist Salvador Minuchin declared, "The human experience of identity has two elements: a sense of belonging and a sense of being separate." This is as good a description of digital identity as it is of our psychological identity. A digital identity contains data that uniquely describes a person or thing but also contains information about the subject's relationships to other entities.

To see an example of this, consider the data record that represents your car, stored somewhere in your state or country's computers. This record, commonly called a title, contains a vehicle identification number (VIN) that uniquely identifies the car to which it belongs. In addition, it contains other attributes of the car such as year, make, model, and color. The title also contains relationships: most notably, the title relates the vehicle to a person who owns it. In many places, the title is also a historical document, because it identifies every owner of the car from the time it was made, as well as whether it's been in a flood or otherwise salvaged.

While fields as diverse as philosophy, commerce, and technology define identity, most are not helpful in building, managing, and using digital identity systems. Instead, we need to define identity functionally, in a way that provides hooks for us to use in making decisions and thinking about problems that arise in digital identity.

Joe Andrieu, principal at Legendary Requirements, writes that "identity is how we recognize, remember, and respond to specific people and things. Identity systems acquire, correlate, apply, reason over, and govern information assets of subjects, identifiers, attributes, raw data, and context." This definition is my favorite because it has proven useful over the years in thinking through thorny identity issues.

The identity record for a car includes attributes that the system uses to recognize it: in this case, the VIN. The title also includes attributes that are useful to people and organizations who care about (that is, need to respond to) the car, including the owner, the state, and potential buyers. The government runs a system for managing titles that is used to create, manage, transfer, and govern vehicles (or, in Andrieu's formulation, remember them). The system is designed to achieve its primary goal (to record valuable property that the state has an interest in taxing and regulating) and secondary goals (protecting potential buyers and creating a way to prove ownership).

Digital identity management consists of processes for creating, managing, using, and eventually destroying digital records, like the one that contains your car title. These records might identify a person, a car, a computer, a piece of land, or almost anything else. Sometimes they are created simply for inventory purposes, but the more interesting ones are created with other purposes in mind: allowing or denying access to a building, the creation of a file, the transfer of funds, and so on. These relationships and the authorized actions associated with them make digital identities useful, valuable, and sometimes difficult to manage.


Photo Credit: Plate - WHOS_CAR from Lone Primate (CC BY-NC-SA 2.0)


Better Onboarding with Verifiable Credentials

Abandoned Road

Last week a friend referred me to a question on Guru.com about devices for connected cars. Since I used to do Fuse, he figured I might be able to help. I was happy to. Unfortunately, Guru wasn't so happy to let me.

You can't answer a question at Guru.com without registering, enrolling, and onboarding. Fair enough. So I started down the path. Here's their process:

  1. Enter name and email on first screen.
  2. Choose whether you're an employer or freelancer and set your password. Be sure to follow their password conventions. Then agree to the terms of service and agree to get emails (or not).
  3. Enter the four-digit code that was sent to the email address you gave in (1).
  4. Solve the captcha.
  5. Choose whether to use 2FA or security questions to secure your account. I chose 2FA.
  6. Verify your phone number using SMS or WhatsApp (they recommend WhatsApp). I chose SMS.
  7. Enter the four-digit code they send.
  8. Continue with 2FA. I'm not sure why this screen shows up twice.
  9. Logout and log back in.
  10. Scan the QR code to set up a TOTP authenticator.
  11. Enter the one-time code from the authenticator app.
  12. Upload a photo and enter a mailing address (yes, they're required).

Congratulations! You've gone through Guru's twelve step program and you're registered! I went through all this just to discover I can't answer questions unless I pay them money. I bailed.

As I was going through this, I couldn't help thinking how much easier it could be using verifiable credentials.

  1. Enter an email.
  2. Scan the QR code they present using my smart wallet to establish a DID connection.
  3. Verify information about myself that they ask for using verifiable credentials.

Credentials asserting my verified email and phone number would be easy enough to get if I don't already have them. And they're not verifying the address and photo anyway, so there's no need for anything but a self-asserted credential for those. Admittedly, if I've never used verifiable credentials before, they need to coach me on getting a wallet and the phone and email address credentials. But they're already doing that for the authenticator app in step 10 above.

Guru's registration process is one of the most arduous I have encountered. If I were them and unwilling to use verifiable credentials, I'd at least split it up and let people add their photo, address, and authenticator app after they're already on board. Guru.com (and lots of other web sites) have to be shedding potential customers at every step in their onboarding process. I wonder if they keep track of abandoned registrations and where it happens? Does anyone? I'd love to know the numbers.

Verifiable credentials could make the onboarding experience a breeze, get more customers in the door, and reduce the cost of customer support calls associated with it.


Photo Credit: Abandoned Road from Tim Emerich (CC0)


Wallets and Agents

Our physical wallets are, historically, for holding currency. But that may be the least interesting use case for wallets. Many of the things people put in their wallets represent relationships they have and authorizations they hold. Most people rarely leave home without their wallet.

But the analogy to a physical wallet can only take us so far because, as physical beings, our natural capabilities are manifold. In the digital world, we need tools to accomplish almost anything useful. The name wallet1 for the software we use to interact digitally doesn't do the tool justice.

A digital identity wallet is a secure, encrypted database that collects and holds keys, identifiers, and verifiable credentials (VCs). The wallet is also a digital address book, collecting and maintaining its controller's many relationships. The wallet is coupled with a software agent that speaks the protocols necessary to engage with others.

Wallets and agents are not the same thing, even though they're often conflated. Agents are tools for taking action. Wallets are where stuff is stored. Still, most people just say "wallet," even when they mean "wallet and agent." For this post, when I say "wallet" I mean wallet and when I say "agent" I mean agent.

Identity agents are software services that manage all the stuff in the wallet. Agents store, update, retrieve, and delete all the artifacts that a wallet holds. Beyond managing the wallet, agents perform many other important tasks:

  • Sending and receiving messages with other agents
  • Requesting that the wallet generate cryptographic key pairs
  • Managing encrypted data interactions with the wallet
  • Performing cryptographic functions like signing and verifying signatures
  • Backing up and retrieving data in the wallet
  • Maintaining relationships by communicating with other agents when DID documents are updated
  • Routing messages to other agents
The relationship between identity wallets and agents
The relationship between identity wallets and agents (click to enlarge)

This figure shows the relationship between an agent, a wallet, and the underlying operating system. While most current implementations pair a single agent with a single wallet, the presence of an API means that it's possible for one agent to use several wallets, or for multiple agents to access one wallet. Some specialized agents might not even need a wallet, such as those that just perform routing, although most will at least need to store their own keys.
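
To make that split concrete, here's a hedged sketch of what the wallet and agent interfaces might look like, drawn from the task list above. The class and method names are mine, not from any particular SSI framework.

  from typing import Protocol

  class Wallet(Protocol):
      """Secure, encrypted storage for keys, identifiers, and credentials."""
      def generate_key_pair(self, key_type: str) -> str: ...
      def store(self, kind: str, item: dict) -> str: ...
      def retrieve(self, kind: str, item_id: str) -> dict: ...
      def delete(self, kind: str, item_id: str) -> None: ...
      def sign(self, key_id: str, message: bytes) -> bytes: ...

  class Agent(Protocol):
      """Takes action on its controller's behalf, using one or more wallets."""
      def send_message(self, to_did: str, message: dict) -> None: ...
      def receive_message(self, message: dict) -> None: ...
      def rotate_keys(self, did: str) -> None: ...
      def route(self, message: dict, next_hop: str) -> None: ...
      def backup_wallet(self, destination: str) -> None: ...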

The key-management functions in the wallet include actions on cryptographic keys like generation, storage, rotation, and deletion. Key management is performed in cooperation with the operating system and underlying hardware. Ideally, the operating system and hardware provide a secure enclave for key storage and a trusted execution environment for performing key-management functions.

The basic functions shown in the diagram might not seem to have much to do with identity. Identity-related activities like authentication and credential exchange are built on top of these basic functions. The agent can issue, request, and accept VCs. The agent also presents and verifies credentials. Specialized messages perform these activities.

Agents and Credential Exchange

Agents speak a protocol called DIDComm (DID-based communication) that provides a secure communications layer for the exchange of identity information via verifiable credentials (VCs). Agents speak DIDComm to each other without a third-party intermediary (i.e., they're peer-to-peer). Because of DIDComm's flexibility and the ability to define protocols on top of DIDComm messaging, it promises to be as important as the identity layer it enables. The DIDComm protocol is governed by the DIDComm specification, hosted at the Decentralized Identity Foundation. The current ratified version is 2.0.

The specification's opening sentence states that "the purpose of DIDComm Messaging is to provide a secure, private communication methodology built atop the decentralized design of DIDs." Note that the specification describes DIDComm as a communications methodology. This means that DIDComm is more than just a way to send a message or chat with someone else. DIDComm messaging allows individual messages to be composed into application-level protocols and workflows. This makes DIDComm messaging a foundational technology for performing different kinds of interactions within the framework of trust that a DID-based relationship implies.
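
To get a feel for what travels over that channel, here's a sketch of a DIDComm v2 plaintext message built as a Python dict. The header names (id, type, from, to, created_time, body) follow the DIDComm v2 spec, but the DIDs and the message type URI here are made up for illustration, and in practice the message would be encrypted before being sent.

  import json
  import time
  import uuid

  # A DIDComm v2 plaintext message, before packing (encryption) for transport.
  # The "type" URI names the higher-level protocol the message belongs to;
  # this one is hypothetical.
  message = {
      "id": str(uuid.uuid4()),                               # unique message id
      "type": "https://example.org/pumphouse/1.0/reading",   # made-up protocol URI
      "from": "did:example:alice",                           # sender's DID (illustrative)
      "to": ["did:example:bob"],                             # recipient DIDs (illustrative)
      "created_time": int(time.time()),
      "body": {"note": "application-specific content goes here"},
  }

  print(json.dumps(message, indent=2))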

To enable the exchange of verifiable credentials, the agent, using the wallet as secure storage, performs three primary activities:

  1. Exchanging DIDs with other agents
  2. Requesting and issuing credentials
  3. Requesting and presenting credential proofs

The agent does these activities using protocols that run on top of DIDComm. DIDComm's job is to create a secure, mutually authenticated channel for exchanging DIDComm messages. The protocols that operate inside of it carry out specific activities.

Exchanging DIDs

Agents take care of the tedious and tricky job of exchanging DIDs between parties who want to communicate so that people don't have to get entangled in the details of how DIDs work: how they're created, stored, and validated, or the work that's necessary when one of the parties needs to rotate keys. The DIDComm v2 spec is capable of exchanging DIDs without a separate protocol, so the process can be automated by smart identity agents working on behalf of the various parties.

Requesting and Issuing Credentials

Requesting and issuing credentials is defined in Aries RFC 0036: Issue Credential Protocol 1.0. The protocol "formalizes messages used to issue a credential." The protocol describes four primary messages: propose-credential, offer-credential, request-credential, and issue-credential. The protocol also defines the state machine that the agent operates in response to these messages. These messages, combined with the state machine, allow the credential issuer and the credential holder to engage in the ceremonies necessary for the issuer to issue a credential to the holder.

Requesting and Presenting Credential Proofs

Requesting and presenting credential proofs is defined in Aries RFC 0037: Present Proof Protocol 1.0. The protocol formalizes and generalizes message formats used for presenting a proof of the attributes in a credential. The protocol describes three primary messages: propose-proof, request-proof, and present-proof. The protocol also defines the state machine that the agent operates in response to these messages. These messages and state machine allow the credential holder and the credential verifier to engage in the ceremonies necessary for the holder to present a credential proof to the verifier.

The Nature of Wallets and Agents

Agents and wallets, working together, perform the work necessary for people, businesses, and devices to create mutually-authenticated, secure connections and use those connections to exchange verifiable credentials. People, businesses, and devices all have different needs and so they'll use different agents and wallets.

  • People will generally use agents and wallets running on smartphones, laptops, or other personal devices. Your Amazon Alexa, for example, could have an agent/wallet pair installed on it to act on your behalf. Most people will have agents on every device. Most of these will have wallets associated with them. Wallets will use device secure enclaves to store sensitive cryptographic information. People will also have agents and wallets in the cloud. All of the agents and wallets under a person's control will interoperate with each other and perform different roles. For example, cloud-based agents are needed to route DIDComm messages to devices that may not have a routable IP address.
  • Businesses will use enterprise agents that are integrated with other enterprise systems like CRM, ERP, and IAM systems. The wallets associated with these will be more sophisticated than personal wallets since they have to manage DIDs and their associated keys that various employees, departments, and processes use. The ability to delegate authority and permission actions will be more rigorous than is needed in a personal wallet. A large business might operate thousands of enterprise agents for various business purposes.
  • Devices will use agents with associated wallets to create relationships and perform credential exchange with the device owner, other devices, their manufacturer, and other people or companies. How they operate and their sophistication depend in great measure on the nature of the device and its expected function. I wrote about the reasons for using agents as part of IoT devices in The Self-Sovereign Internet of Things.

Despite the differences these agents exhibit, they all run the same protocols and use DIDComm messaging. There are no intermediaries—the connections are all peer-to-peer. Every agent works on behalf of the entity who controls it. To get a feel for how they might interoperate, see Operationalizing Digital Relationships and SSI Interaction Patterns.

DIDComm-capable agents can be used to create sophisticated relationship networks that include people, institutions, and things. The relationships in that network are rich and varied—just like relationships in the real world. Smart agents allow people, businesses, and devices to create, manage, and utilize secure, trustworthy communications channels with anyone online without reliance on any third party. The agent serves as a flexible digital tool that people can use to manage their digital life.


Notes

  1. I've heard various people object to the term wallet, but so far, no one has come up with anything else that has stuck, so for now, wallet is the word the industry uses.


A Healthcare Utopia of Rules

Blood Draw

I have a medical condition that requires that I get blood tests every three months. And, having recently changed jobs, my insurance, and thus the set of acceptable labs, changed recently. I know that this specific problem is very US-centric, but bear with me, I think the problems that I'll describe, and the architectures that lead to them, are more general than my specific situation.

My doctor sees me every 6 months, and so gives me two lab orders each time. Last week, I showed up at Revere Health's lab. They were happy to take my insurance, but not the lab order. They needed a date on it. So, I called my doctor and they said they'd fax over an order to the lab. We tried that three times but the lab never got it. So my doctor emailed it to me. The lab wouldn't take the electronic lab order from my phone, wouldn't let me email it to them (citing privacy issues with non-secure email), and couldn't let me print it there. I ended up driving to the UPS Store to print it, then returning to the lab. Ugh.

This story is a perfect illustration of what David Graeber calls the Utopia of Rules. Designers of administrative systems do the imaginative work of defining processes, policies, and rules. But, as I wrote in Authentic Digital Relationships:

Because of the systematic imbalance of power that administrative ... systems create, administrators can afford to be lazy. To the administrator, everyone is structurally the same, being fit into the same schema. This is efficient because they can afford to ignore all the qualities that make people unique and concentrate on just their business. Meanwhile subjects are left to perform the "interpretive labor," as Graeber calls it, of understanding the system, what it allows or doesn't, and how it can be bent to accomplish their goals. Subjects have few tools for managing these relationships because each one is a little different from the others, not only technically, but procedurally as well. There is no common protocol or user experience [from one administrative system to the next].

The lab order format my doctor gave me was accepted just fine at Intermountain Health Care's labs. But Revere Health had different rules. I was forced to adapt to their rules, being subject to their administration.

Bureaucracies are often made functional by the people at the front line making exceptions or cutting corners. In my case no exceptions were made. They were polite, but ultimately uncaring and felt no responsibility to help me solve the problem. This is an example of the "interpretive labor" borne by the subjects of any administrative system.

Centralizing the system—such as having one national healthcare system—could solve my problem because the format for the order and the communication between entities could be streamlined. You can also solve the problem by defining cross-organization schema and protocols. My choice, as you might guess, would be a solution based on verifiable credentials—whether or not the healthcare system is centralized. Verifiable credentials offer a few benefits:

  • Verifiable credentials can solve the communication problem so that everyone in the system gets authentic data.
  • Because the credentials are issued to me, I can be a trustworthy conduit between the doctor and the lab.
  • Verifiable credentials allow an interoperable solution with several vendors.
  • The tools, software, and techniques for verifiable credentials are well understood.

Verifiable credentials don't solve the problem of the lab being able to understand the doctor's order or the order having all the right data. That is a governance problem outside the realm of technology. But because we've narrowed the problem to defining the schema for a given localized set of doctors, labs, pharmacies, and other health-care providers, it might be tractable.

Verifiable credentials are a no-brainer for solving problems in health care. Interestingly, many health care use cases already use the patient as the conduit for transferring data between providers. But they are stuck in a paper world because many of the solutions that have been proposed for solving it lead to central systems that require near-universal buy-in to work. Protocol-based solutions are the antidote to that and, fortunately, they're available now.


Photo Credit: Blood Draw from Linnaea Mallette (CC0 1.0)


Verifying Twitter

Bluebird on branch

This thread from Matthew Yglesias concerning Twitter's decision to charge for the blue verification checkmark got me thinking. Matthew makes some good points:

  1. Pseudonymity has value and offers protection to people who might not otherwise feel free to post if Twitter required real names like Facebook tries to.
  2. Verification tells the reader that the account is run by a person.
  3. There's value to readers in knowing the real name and professional affiliation of some accounts.

Importantly, the primary value accrues to the reader, not the tweeter. So, charging the tweeter $20/month (now $8) is charging the wrong party. In fact, more than the reader, the platform itself realizes the most value from verification because it can make the platform more trustworthy. Twitter will make more money if the verification system can help people understand the provenance of tweets because ads will become more valuable.

Since no one asked me, I thought I'd offer a suggestion on how to do this right. You won't be surprised that my solution uses verifiable credentials.

First, Twitter needs to make being verified worthwhile to the largest number of users possible. Maybe that means that tweets from unverified accounts are delayed or limited in some way. There are lots of options and some A/B testing would probably show what incentives work best.

Second, pick a handful (five springs to mind) of initial credential issuers that Twitter will trust and define the credential schema they'd prefer. Companies like Onfido can already do this. It wouldn't be hard for others like Equifax, ID.me, and GLEIF to issue credentials based on the "real person" or "real company" verifications they're already doing. These credential issuers could charge whatever the market would bear. Twitter might get some of this money.
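
To illustrate, a "real person" credential from one of those issuers might carry only a few claims. The sketch below uses the general shape of the W3C Verifiable Credentials data model, but the credential type, claim names, and identifiers are hypothetical.

  # Hypothetical "real person" verifiable credential, shown as a Python dict
  # in the general shape of the W3C VC data model. Claim names and DIDs are
  # made up for illustration.
  real_person_credential = {
      "@context": ["https://www.w3.org/2018/credentials/v1"],
      "type": ["VerifiableCredential", "RealPersonCredential"],
      "issuer": "did:example:verification-provider",   # one of the trusted issuers
      "issuanceDate": "2022-11-01T00:00:00Z",
      "credentialSubject": {
          "id": "did:example:account-holder",
          "isRealPerson": True,        # the only claim Twitter strictly needs
          "legalName": "optional, disclosed only if the holder chooses",
      },
      # In practice a proof section (signature) would let Twitter verify the
      # credential came from the issuer without contacting them.
  }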

Last, Twitter allows anyone with a "real person" credential from one of these credential issuers to verify their profile. The base verification would be for the holder to use a zero-knowledge proof to prove they are a person or legal entity. If they choose, the credential holder might want to prove their real name and professional affiliation, but that wouldn't be required. Verifying these credentials as part of the Twitter profile would be relatively easy for Twitter to implement.

Twitter would have to decide what to do about accounts that are not real people or legal entities. Some of these bots have value. Maybe there's a separate verification process for these that requires that the holder of the bot account prove who they are to Twitter so they can be held responsible for their bot's behavior.

You might be worried that the verified person would sell their verification or verify multiple accounts. There are a number of ways to mitigate this. I explained some of this in Transferable Accounts Putting Passengers at Risk.

Real person verification using verifiable credentials has a number of advantages.

  1. First, Twitter never knows anyone's real name unless that person chooses to reveal it. This means that Twitter can't be forced to reveal it to someone else. They just know they're a real person. This saves Twitter from being put in that position and building infrastructure and teams to deal with it. Yes, the police, for example, could determine who issued the Twitter Real Person credential and subpoena them, but that's the business these companies are in, so presumably they already have processes for doing this.
  2. Another nice perk from this is that Twitter jump starts an ecosystem for real person credentials that might have uses somewhere else. This has the side benefit of making fraud less likely since the more a person relies on a credential the less likely they are to use it for fraudulent purposes.
  3. A big advantage is that Twitter can now give people peace of mind that the accounts they're following are controlled by real people. Tools might let people adjust their feed accordingly so they see more tweets by real people.
  4. Twitter also can give advertisers comfort that their engagement numbers are closer to reality. Twitter makes more money.

Yglesias says:

Charging power users for features that most people don’t need or want makes perfect sense.

But verification isn’t a power user feature, it’s a terrible implementation of what’s supposed to be a feature for the everyday user. It should help newbies figure out what’s going on.

Verifiable credentials can help make Twitter a more trustworthy place by providing authentic data about people and companies creating accounts—and do it better than Twitter's current system. I'm pretty sure Twitter won't. Elon seems adamant that they are going to charge to get the blue checkmark. But, I can dream.

Bonus Link: John Bull's Twitter thread on Trust Thermoclines


Photo Credit: tree-nature-branch-bird-flower-wildlife-867763-pxhere.com from Unknown (CC0)