Building an SSI Ecosystem: Digital Staff Passports at the NHS

Dr Manny Nijjar is an infectious disease doctor at Whipps Cross Hospital in the UK. He’s also an innovator who quickly saw how verifiable credentials could be applied to health care. I first met Manny at the launch of the Sovrin Foundation in London in September 2016. He’s been working to bring this vision to life with his company, Truu, ever since.

SSI For Healthcare: Lessons from the NHS Frontline

In this video, Manny discusses why he became interested in digital credentials. He also speaks to the influence medical ethics has had on his journey. In 2015, he was training to become an infectious disease specialist. Manny was the most senior clinician on site in the evenings, in charge of about 500 beds.

Manny kept getting called by, and about, a temporary agency doctor every night. Manny and other medical staff had questions about this doctor’s skills, qualifications, and the decisions he was making. But there were shortages, and the hospital needed to fill the gap. Manny was so discouraged by seeing an unqualified physician slip through the cracks that he nearly quit his career; instead, he determined to do something about it.

Serendipitously, Manny came across self-sovereign identity (SSI) at the same time and, as I said, spoke at the launch of the Sovrin Foundation. Over the next several years, Manny and his partners worked to create an SSI solution that the National Health Service in the UK could use to instantly verify the identity and skills of temporary and permanent clinical staff. The solution addresses three primary problems:

  1. Patient Safety - Verifying the identity and skills of temporary and permanent clinical staff.
  2. Burden on Clinical Staff - Admin time for repeated identity and pre-employment checks.
  3. Organizational Risk and Operational Inefficiencies - Failure of manual checks. Time and cost to onboard healthcare staff.

Manny’s first thought had been to use a traditional, administrative scheme using usernames and passwords. But he saw the problems with that. He realized a digital credential was a better answer. And his journey into self-sovereign identity commenced.

Manny's paper credentials
Manny's paper credentials (click to enlarge)

Over the past five years, Manny and his team at Truu have worked with clinicians, various parts of the NHS, employers, HR departments, and locum agencies to understand their needs and build a solution that fits.

In 2019, Truu conducted a pilot with the NHS where the General Medical Council (GMC) issued “license to practice” credentials to SSI wallets controlled by medical staff. Medical staff could present that credential to Blackpool Teaching Hospitals. The hospital, in turn, issued a “sign in” credential to the staff member who could then use it to log into clinical systems at the hospital.

Digital Credentials for People and Organizations
Digital Credentials for People and Organizations (click to enlarge)

The Covid-19 pandemic increased the pressure on the NHS, making the need to easily move staff between facilities acute. Truu worked with the NHS to use this critical moment to shift to digital credentials, and to do it in the right way. Truu’s early work, including the pilot, positioned the idea so that it could be quickly adopted when it was needed most. Digital credentialing in healthcare simplifies onboarding for providers, enables the secure expansion of telehealth services, and enhances information exchange, providing a path to interoperability for healthcare data.

The National Health Service in the UK has a program to issue staff passports to medical personnel, confirming their qualifications and ability to work. NHS staff passports are based on verifiable credentials. Eighty-four NHS organizations are participating to date.

Locations of Participating Organizations in the NHS Staff Passport Program in April 2021
Locations of Participating Organizations in the NHS Staff Passport Program in April 2021 (click to enlarge)

The work that Manny, his team at Truu, and partners like Evernym have done has already had a big impact. The UK Department of Health and Social Care recognized the importance of the program, promising to expand the use of staff passports in their Busting Bureaucracy report. They said:

NHSE/I, NHSX and HEE are working to provide multiple staff groups with access to digital staff passports in line with People Plan commitments to improve workforce agility and to support staff training and development.

  • Junior doctors, who frequently rotate to different healthcare providers, are being prioritized and the ambition is that they will have access to staff passports in 2021/22. The passports will hold digital credentials representing their skills, competencies and occupational health checks.
  • Other target groups include specialists such as maternity and stroke care staff who often need to be rapidly deployed to a neighboring hospital or care home. The use of digital staff passports will save agency fees and release time for care.

Medical staff passports are catching on in the UK where they are solving real problems that ultimately impact patient care, staff fatigue, and patient access and privacy. The journey hasn’t been short, but the NHS Staff Passport program is illustrative of a successful credential ecosystem.

Related Videos

In this 11-minute video, I explain how trust frameworks function in an ecosystem like the one that the NHS has created.

Phil Windley on Trust Frameworks

In this hour-long meetup, Drummond Reed talks with CU Ledger (now Bonifii) about their work to establish a trust framework for credit union credentials. I’ll be writing more about the credit union industry’s MemberPass credential in a future newsletter.

Trust Frameworks and SSI: An Interview with CULedger on the Credit Union MyCUID Trust Framework

A version of this article was previously published in the Technometria Newsletter, Issue #9, May 4, 2021.

Images are from the SSI For Healthcare: Lessons from the NHS Frontline video referenced above.

Decentralized System in a Box

I installed a package of bees in a hive over the weekend. You buy bees in packages that contain 15-20 thousand bees and a queen. The queen is in a cage so she is easy to find. Queens give off a pheromone that attracts the other bees in the hive. The queen is the secret to creating legitimacy for the hive (see Legitimacy and Decentralized Systems for more on legitimacy). If the queen is in the new hive, chances are the other bees will see it as their legitimate home and stick around.

Queen in a cage
Queen in a cage (click to enlarge)

I placed the queen cage in the hive, using a rubber band to fix the cage to one of the frames that the bees build honeycomb on. I replaced the cork in the cage with a candy stopper. The bees eat through the candy over the course of a few days and free the queen. Hopefully, by that time, the hive is established and the bees stick around.

After placing the queen cage in the hive, you just dump the bees out on top of the frames. I love this part because thousands of bees are flying everywhere trying to make sense of what just happened. But over the course of an hour or two, the hive coalesces on the queen and most of the bees are inside, getting adjusted to their new home.

Bees on top of the hive frames
Bees on top of the hive frames (click to enlarge)
About an hour after the bees get their new home, they're out on the porch, fanning and taking orientation flights.
About an hour after the bees get their new home, they're out on the porch, fanning and taking orientation flights. (click to enlarge)

Besides providing a basis for hive legitimacy, the queen is also the sole reproductive individual, responsible for laying every egg that will be raised in the hive. This is a big job. During the summer, she will lay about 2000 eggs per day and the hive will swell to multiple tens of thousands of bees. But beyond this, the queen’s role is limited. She doesn’t direct the actions of the members of the hive. No one does.


So, how does the hive function without central direction? Thermoregulation provides an example. Although individual bees are not homeothermic, the hive is. The bees manage to keep the hive at 93-94°F (about 34°C) regardless of the outside air temperature.

How do the bees do that? The straightforward answer is that some bees go to the entrance of the hive and fan air to increase circulation when the internal temperature gets too high. When it gets too low, bees cluster in the center and generate heat by shivering.

The more interesting question is “how do the bees know to do that?” All the bees share similar genetic programming (algorithmic governance), but the tasks they’re inclined to do depend on their age. The youngest workers clean cells, then move on to nursing functions, mortuary activities, guarding the hive, and finally, in the last weeks of their lives, foraging for water, nectar, and pollen.

Bees have a genetic threshold for carrying out these tasks. The threshold changes as they age. A young bee has a very high threshold for foraging that decreases over her life. Further, these thresholds vary by patriline (even though every bee in the hive has the same mother, there are many fathers), providing diversity.

So as the temperature in the hive climbs, a few bees go down to the hive entrance and fan. As it gets hotter, even more bees take up the task, depending on their internal thresholds. Their genetic programming, combined with the diversity in their thresholds, produces a graded response to temperature swings that could otherwise damage the hive. You can read more about hive thermoregulation in an earlier blog post I wrote on the topic.
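The response-threshold mechanism is simple enough to sketch in code. The following is an illustrative simulation, not a biological model: the threshold values, the number of patrilines, and the temperatures are made-up numbers chosen only to show the shape of the behavior.

```python
import random

def fanning_response(temps, thresholds):
    """Count how many bees fan at each hive temperature.

    Each bee fans when the temperature exceeds her personal
    threshold; diverse thresholds turn an all-or-nothing reflex
    into a graded, hive-level response.
    """
    return [sum(1 for t in thresholds if temp > t) for temp in temps]

random.seed(42)
# Thresholds vary by patriline: cluster them around a few "fathers"
# (values in degrees Fahrenheit, purely illustrative).
patrilines = [93.5, 94.0, 94.5, 95.0]
thresholds = [random.choice(patrilines) + random.uniform(-0.3, 0.3)
              for _ in range(1000)]

# As the hive warms, progressively more bees take up fanning.
counts = fanning_response([93.0, 94.0, 95.0, 96.0], thresholds)
```

Because each bee's threshold differs slightly, the number of fanners ramps up smoothly with temperature instead of the whole hive reacting at once.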

Swarming and Protecting Against Byzantine Failure

An even more interesting phenomenon is how bees decide to swarm. Because the hive is a super organism, the queen’s efforts to reproduce don’t result in a new hive unless there’s a swarm. Swarming is how new hives are created.

Bees swarm in response to stresses like insufficient food supply, too little space, and so on. But no one really knows how a hive decides it’s time to swarm. In preparation for a swarm, the hive starts to raise new queens. Whether a fertilized egg grows into a worker or a queen is determined by how the nurse bees feed the larva (drones develop from unfertilized eggs). At some point the bees collectively determine to swarm and the queen produces a pheromone that broadcasts that decision.

The swarm consists of the current queen (and her powerful pheromones), some of the worker bees, and a portion of the honey stores. The swarm leaves the hive and the remaining bees raise the new queen and carry on. The swarm flies a short distance and settles down on some convenient structure to decide where to make their permanent home. Again the swarm centers on the queen. This is where the fun starts.

Thomas Seeley of Cornell has been studying swarms for his entire career. In the following video he describes how bees use collective decision making to choose their new home.

Cornell professor, biologist and beekeeper Thomas Seeley
Cornell professor, biologist and beekeeper Thomas Seeley (click to view)

There are several interesting features in this process. First, Seeley has determined that bees don’t just make a good decision, but the best possible decision. I think that’s amazing. Several hundred bees leave the swarm to search for a new home and participate in a debate to choose one of the available sites and settle on the best choice.

This is a process that is potentially subject to Byzantine failure. Not that the bees are malicious; in fact, they’re programmed to accurately report their findings. But they can report faulty information based on their judgment of the suitability of a candidate site. The use of reputation signals for sites, along with voting by multiple inspectors, allows the bees to avoid bad decisions even in the face of false signals.

Swarm lodged in a fruit tree in my garden
Swarm lodged in a fruit tree in my garden (click to enlarge)

The process is further protected from error because bees are programmed to only advertise sites they’ve actually visited. Again, they don’t have the ability to be malicious. Each bee advertising a potential site has done the work of flying to the site and inspecting it. As bees signal their excitement for that site in a waggle dance, even more bees will fly out to it, perform an inspection, and return to advertise their findings. I don’t know if I’d characterize this as proof of work, but it does ensure that votes are based on real information. Once a quorum of bees in the swarm reach consensus about a particular site, the swarm departs and takes up residence in their new home.
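Seeley's quorum-sensing process can be caricatured in a few lines of code. This is a toy model under loudly stated assumptions: recruitment proportional to site quality times current support, a fixed quorum size, and invented quality scores. None of these numbers come from Seeley's field data.

```python
import random

def choose_site(site_quality, quorum=30, seed=1):
    """Toy model of swarm site selection.

    Scouts dance only for sites they've actually visited, with
    enthusiasm proportional to site quality. Uncommitted scouts
    are recruited in proportion to the dancing, so better sites
    snowball. The swarm decides when any site accumulates a
    quorum of supporters.
    """
    rng = random.Random(seed)
    sites = list(site_quality)
    support = {s: 1 for s in sites}  # one initial scout per site
    while max(support.values()) < quorum:
        weights = [site_quality[s] * support[s] for s in sites]
        chosen = rng.choices(sites, weights=weights)[0]
        support[chosen] += 1  # a recruit inspects the site, then dances for it
    return max(support, key=support.get)

best = choose_site({"attic": 0.9, "hollow tree": 0.3, "wall void": 0.1})
```

The positive feedback means the best site almost always reaches quorum first, even though no individual scout ever compares all the options.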

Honeybee Democracy by Thomas D. Seeley

Honeybees make decisions collectively--and democratically. Every year, faced with the life-or-death problem of choosing and traveling to a new home, honeybees stake everything on a process that includes collective fact-finding, vigorous debate, and consensus building. In fact, as world-renowned animal behaviorist Thomas Seeley reveals, these incredible insects have much to teach us when it comes to collective wisdom and effective decision making.

You may not be thrilled if a swarm determines the best new home is in your attic, but you can be thrilled with the knowledge that ten thousand decentralized bees with sophisticated algorithmic programming achieved consensus and ranked it #1.

The hive is a super organism with its intelligence spread out among its tens of thousands of members. Life and death decisions are made on a daily basis in a completely decentralized fashion. Besides thermoregulation of the hive and finding a new home, the bees in a hive autonomously make millions of other decentralized decisions every day that result in the hive not only surviving but thriving in hostile conditions. I find that remarkable.

Legitimacy and Decentralized Systems

Major General Andrew Jackson and his Soldiers claim a victory in the Battle of New Orleans during the War of 1812.

As an undergraduate engineering major, I recall being surprised by the so-called three body problem. In Newtonian mechanics, there are nice closed-form solutions to problems involving the motion of two interacting bodies, given their initial position and velocity. This isn’t true of systems with three or more points. How can adding just one more point to the system make it unsolvable?

N-body systems are chaotic for most initial conditions, and solving them involves numerical methods—simulation—rather than nice, undergraduate-level math. In other words, it’s messy. Humans like simple solutions.
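To make the point about simulation concrete, here is a minimal sketch: a few hundred Euler steps of three gravitating bodies in toy units (G = 1, with made-up masses and starting positions). No closed-form expression gives you these trajectories; you have to step through them.

```python
def step(bodies, dt=1e-3):
    """Advance one Euler step of Newtonian gravity (G = 1, toy units).

    bodies: list of [mass, [x, y], [vx, vy]]. With three or more
    bodies there's no closed-form solution, so we integrate
    numerically, tiny step by tiny step.
    """
    forces = []
    for i, (mi, pi, _) in enumerate(bodies):
        fx = fy = 0.0
        for j, (mj, pj, _) in enumerate(bodies):
            if i == j:
                continue
            dx, dy = pj[0] - pi[0], pj[1] - pi[1]
            r3 = (dx * dx + dy * dy) ** 1.5
            fx += mi * mj * dx / r3  # gravitational pull toward body j
            fy += mi * mj * dy / r3
        forces.append((fx, fy))
    for (m, p, v), (fx, fy) in zip(bodies, forces):
        v[0] += fx / m * dt
        v[1] += fy / m * dt
        p[0] += v[0] * dt
        p[1] += v[1] * dt

# Three equal masses released from rest fall toward one another;
# the only way to see what happens is to simulate it.
bodies = [[1.0, [0.0, 0.0], [0.0, 0.0]],
          [1.0, [1.0, 0.0], [0.0, 0.0]],
          [1.0, [0.5, 1.0], [0.0, 0.0]]]
for _ in range(500):
    step(bodies)
```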

Like the n-body problem, decentralized systems are chaotic and messy. Humans aren’t good at reasoning about emergent behavior from the coordinated, yet autonomous, behavior of interacting agents. We build bureaucracies and enact laws to try to make chaotic systems legible. The internet was our first, large-scale technical system where decentralization and governance clashed. I remember people in the 90’s asking “Who’s in charge of the internet?”

In The Most Important Scarce Resource is Legitimacy, Vitalik Buterin, the creator of Ethereum, discusses why legitimacy is crucial for the success of any decentralized endeavor. He says:

[T]he Bitcoin and Ethereum ecosystems are capable of summoning up billions of dollars of capital, but have strange and hard-to-understand restrictions on where that capital can go.
From The Most Important Scarce Resource is Legitimacy
Referenced 2021-04-26T14:46:43-0600

These “strange and hard to understand restrictions” are rooted in legitimacy. Decentralized systems must be considered legitimate in order to thrive. That legitimacy is tied to how well the systems and people enabling them, like programmers and miners, are seen to be following “the rules” both written and unwritten. Legitimacy isn’t a technical issue, but a social one.

Wikipedia defines legitimacy as

the right and acceptance of an authority, usually a governing law or a regime.

While this is most often applied to governments, I think we can rightly pose legitimacy questions for technical systems, especially those that have large impacts on people and society.

With respect to legitimacy, Philip Bobbit says:

The defining characteristic … of a constitutional order is its basis for legitimacy. The constitutional order of the industrial nation state, within which we currently live, promised: give us power and we will improve the material well-being of the nation.

In other words, legitimacy comes from the constitutional order: the structure of the governance and its explicit and implicit promises. People grant legitimacy to constitutional orders that meet their expectations by surrendering part of their sovereignty to them. In the quote from Vitalik above, the "strange and hard to understand restrictions" are promises that members of the Bitcoin or Ethereum ecosystems believe those constitutional orders have made. And if they're broken, the legitimacy of those systems is threatened.

Talking about “legitimacy” and “constitutional orders” for decentralized systems like Bitcoin, Ethereum, or your favorite NFT might feel strange, but I believe these are critical tools for understanding why some thrive and others wither. Or why some hard forks succeed and others don't.

In Bobbitt’s theory of constitutional orders, the transition from one constitutional order to a new one always requires war. While people seeking legitimacy for one decentralized system or another might not use tanks or missiles, a hard fork is essentially just that—a war fought to cause the transition from one constitutional order to another over a question of legitimacy. For example, Vitalik describes how the Steem community executed a hard fork to create Hive, leaving Steem’s founder (and his tokens) behind, because the constitutional order he represented lost its legitimacy when people came to believe it could no longer keep its promises.

So when you hear someone talking about a decentralized system and starting sentences with phrases like “Somebody should…” or “Why do we let them…” or “Who’s in charge of…”, beware. Unlike most of the easy-to-understand systems we’re familiar with, decentralized systems are heterarchical, not hierarchical. Thus the means of their control is political, not authoritarian. No one allows these systems to exist—they're called "permissionless" for a reason. They simply are, by virtue of their legitimacy in the eyes of the people who use and support them.

This doesn’t mean decentralized systems are unassailable, but changing them is slower and less sure than most people would like. When you “know” the right way to do something, you want a boss who can dictate the change. Changing decentralized systems is a political process that sometimes requires war. As Clausewitz said “War is the continuation of politics by other means.”

There are no closed-form solutions to the n-body problems represented by decentralized systems. They are messy and chaotic. I’m not sure people will ever get more comfortable with decentralization or understand it well enough to reason about it carefully. But one thing is for sure: decentralized systems don’t care. They simply are.

A version of this article was previously published in Technometria Newsletter, Issue #6, April 13, 2021.

Photo Credit: Major General Andrew Jackson and his Soldiers claim a victory in the Battle of New Orleans during the War of 1812. from Georgia National Guard (CC BY 2.0)

The Politics of Vaccination Passports

CDC Covid-19 Vaccination Card

On December 2, 1942, Enrico Fermi and his team at the University of Chicago initiated the first human-made, self-sustaining nuclear chain reaction in history beneath the viewing stands of Stagg Field. Once humans knew how nuclear chain reactions work and how to initiate them, an atomic bomb was inevitable. Someone would build one.

What was not inevitable was when, where, and how nuclear weapons would be used. Global geopolitical events of the last half of the 20th century and many of the international questions of our day deal with the when, where, and how of that particular technology.

A similar, and perhaps just as impactful, discussion is happening now around technologies like artificial intelligence, surveillance, and digital identity. I’d like to focus on just one small facet of the digital identity debate: vaccination passports.

In Vaccination Passports, Devon Loffreto has strong words about the effort to create vaccination passports, writing:

The vaccination passport represents the introduction of the CCP social credit system to America, transmuting people into sub-human data points lasting lifetimes.
From Vaccination Passports
Referenced 2021-04-12T11:13:58-0600

Devon’s larger point is that once we get used to having to present a vaccination passport to travel, for example, it could quickly spread. Presenting an ID could become the default with bars, restaurants, churches, stores, and every other public place saying “papers, please!” before allowing entry.

This is a stark contrast to how people have traditionally gone about their lives. Asking for ID is, by social convention and practicality, limited mostly to places where it’s required by law or regulation. We expect to get carded when we buy cigarettes, but not milk. A vaccination passport could change all that and that’s Devon’s point.

Devon specifically calls out the Good Health Pass collaborative as "supporting the administration of people as cattle, as fearful beings 'trusting' their leaders with their compliance."

For their part, participants of the Good Health Pass collaborative argue that they are working to create a “safe path to restore international travel and restart the global economy.” Their principles declare that they are building health-pass systems that are privacy protecting, user-controlled, interoperable, and widely accepted.

I’m sympathetic to Devon’s argument. Once such a passport is in place for travel, there’s nothing stopping it from being used everywhere, moving society from free and open to more regulated and closed. Nothing that is, unless we put something in place.

Like the direct line from Fermi’s atomic pile to an atomic bomb, the path from nearly ubiquitous smartphone use to some kind of digital vaccination passport is likely inevitable. The question for us isn’t whether or not it will exist, but where, how, and when passports will be used.

For example, I’d prefer a vaccination passport built according to the principles of the Good Health Pass collaborative to, say, one built by Facebook, Google, Apple, or Amazon. Social convention, and regulation where necessary, can limit where such a passport is used. It’s an imperfect system, but so are all social systems. More important, decentralized governance processes are necessarily political.

As I said, I’m sympathetic to Devon’s arguments. The sheer ease of presenting digital credentials removes some of the practicality barrier that paper IDs naturally have. Consequently, digital IDs are likely to be used more often than paper. I don’t want to live in a society where I’m carded at every turn—whether for proof of vaccination or anything else. But I’m also persuaded that organizations like the Good Health Pass collaborative aren’t the bad guys. They’re just folks who see the inevitability of a vaccination credential and are determined to at least see that it’s done right, in ways that respect individual choice and personal privacy as much as possible.

The societal questions remain regardless.

Photo Credit: COVID-19 Vaccination record card from Jernej Furman (CC BY 2.0)

Building Decentralized Applications with Pico Networks

Picos are designed to form heterarchical, or peer-to-peer, networks by connecting directly with each other. Because picos use an actor model of distributed computation, parent-child relationships are very important. When a pico creates another pico, we say that it is the parent and the pico that got created is the child. The parent-child connection allows the creating pico to perform life-cycle management tasks on the newly minted pico such as installing rulesets or even deleting it. And the new pico can create children of its own, and so on.

Building a system of picos for a specific application requires programming them to perform the proper life-cycle management tasks to create the picos that model the application. Wrangler, a ruleset automatically installed in every pico, serves as the pico operating system, providing rules and functions for performing these life-cycle management tasks.

Pico applications can rarely rely on just the hierarchical parent-child relationships created as picos are managed. Instead, picos connect to one another by creating what are called subscriptions: bi-directional channels used for raising events to, and making queries of, the other pico.
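To make the distinction concrete, here's a small sketch. This is not KRL and not the pico-engine API; the class and method names are invented for illustration. It models only the relationships described above: parent-child links for life-cycle management, and separate peer-to-peer subscriptions for application communication.

```python
class Pico:
    """Toy model of pico relationships (not the real pico-engine API)."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        self.subscriptions = []       # peer channels, not hierarchy
        self.rulesets = ["wrangler"]  # installed in every pico

    def create_child(self, name):
        # Life-cycle management: parents create (and could delete) children.
        child = Pico(name, parent=self)
        self.children.append(child)
        return child

    def install_ruleset(self, child, ruleset):
        assert child in self.children  # only parents manage their children
        child.rulesets.append(ruleset)

    def subscribe(self, other):
        # Subscriptions are bi-directional peer-to-peer channels.
        self.subscriptions.append(other)
        other.subscriptions.append(self)

root = Pico("root")
community = root.create_child("sensor_community")
s1 = community.create_child("sensor_1")
s2 = community.create_child("sensor_2")
community.install_ruleset(s1, "temperature")
community.install_ruleset(s2, "temperature")
s1.subscribe(s2)  # sensors talk peer-to-peer, bypassing the hierarchy
```

Note that `s1` and `s2` communicate over their subscription without involving their parent, mirroring how the sensor network below operates independently of the community picos.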

This diagram shows a network of temperature sensors built using picos. In the diagram, black lines are parent-child relationships, while pink lines are peer-to-peer relationships between picos.

Temperature Sensor Network
Temperature Sensor Network (click to enlarge)

There are two picos (one salmon, the other green) labeled "Sensor Community". These are used to manage the temperature sensor picos (which are purple). The community picos perform life-cycle management of the sensor picos that are their children: they can create new sensor picos and delete those no longer needed, and their programming determines which rulesets are installed in the sensor picos. Through those rulesets, they control things like whether a sensor pico is active and how often it updates its temperature. These communities might represent different floors of a building or different departments on a large campus.

Despite the fact that there are two different communities of temperature sensors, the pink lines show that a network of connections spans the hierarchical communities to create a single connected graph of sensors. In this case, the sensor picos are programmed to use a gossip protocol to share temperature information and threshold violations with each other. They use a CRDT to keep track of the number of threshold violations currently occurring in the network.
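One way to realize such a CRDT is a grow-only counter. This Python sketch is illustrative, not the actual sensor ruleset: each node keeps a map from sensor id to that sensor's own violation count, and merging takes the per-sensor maximum. Because element-wise max is commutative, associative, and idempotent, the nodes converge on the same totals no matter the order or repetition of gossip.

```python
import random

class SensorNode:
    """Sketch of gossip plus a grow-only counter CRDT (not the real rulesets)."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.counts = {node_id: 0}  # sensor id -> that sensor's violations

    def record_violation(self):
        # A node only ever increments its own entry.
        self.counts[self.node_id] += 1

    def gossip_with(self, peer):
        # Element-wise max is safe to repeat, in any order.
        merged = {k: max(self.counts.get(k, 0), peer.counts.get(k, 0))
                  for k in set(self.counts) | set(peer.counts)}
        self.counts = dict(merged)
        peer.counts = dict(merged)

    def total_violations(self):
        return sum(self.counts.values())

nodes = [SensorNode(i) for i in range(4)]
nodes[0].record_violation()
nodes[2].record_violation()
nodes[2].record_violation()

# A few rounds of random pairwise gossip spread the counts around.
rng = random.Random(0)
for _ in range(20):
    a, b = rng.sample(nodes, 2)
    a.gossip_with(b)
```

After enough gossip rounds, every node reports the same network-wide total without any central coordinator.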

The community picos are not involved in the network interactions of the sensor picos. The sensor network operates independently of the community picos and does not rely on them for communication. Astute readers will note that both communities are children of a "root" pico. That's an artifact of the way I built this, not a requirement. Every pico engine has a root pico that has no parent. These two communities could have been built on different engines and still have created a sensor network spanning multiple communities operating on multiple engines.

Building decentralized networks of picos is relatively easy because picos provide support for many of the difficult tasks. The actor model of picos makes them naturally concurrent without the need for locks. Picos have persistent, independent state so they do not depend on external data stores. Picos have a persistent identity—they exist with a single identity from the time of their creation until they are deleted. Picos are persistently available, always on and ready to receive messages. You can see more about the programming that goes into creating these systems in these lessons: Pico-Based Systems and Pico to Pico Subscriptions.

If you're intrigued and want to get started with picos, there's a Quickstart along with a series of lessons. If you want support, contact me and we'll get you added to the Picolabs Slack.

The pico engine is an open source project licensed under a liberal MIT license. You can see current issues for the pico engine here. Details about contributing are in the repository's README.

Passwords Are Ruining the Web

No Passwords

Compare, for a moment, your online, web experience at your bank with the mobile experience from the same bank. Chances are, if you're like me, that you pick up your phone and use a biometric authentication method (e.g. FaceId) to open it. Then you select the app and the biometrics play again to make sure it's you, and you're in.

On the web, in contrast, you likely end up at a landing page where you have to search for the login button which is hidden in a menu or at the top of the page. Once you do, it probably asks you for your identifier (username). You open up your password manager (a few clicks) and fill the username and only then does it show you the password field1. You click a few more times to fill in the password. Then, if you use multi-factor authentication (and you should), you get to open up your phone, find the 2FA app, get the code, and type it in. To add insult to injury, the ceremony will be just different enough at every site you visit that you really don't develop much muscle memory for it.

As a consequence, when I need something from my bank, I pull out my phone and use the mobile app. And it's not just banking. This experience is replicated on any web site that requires authentication. Passwords and the authentication experience are ruining the web.

I wouldn't be surprised to find businesses abandon functional web sites in the future. There will still be some marketing there (what we used to derisively call "brochure-ware") and a pointer to the mobile app. Businesses love mobile apps not only because they can deliver a better user experience (UX) but because they allow businesses to better engage people. Notifications, for example, get people to look at the app, giving the business opportunities to increase revenue. And some things, like airline boarding passes, just work much better on mobile.

Another factor is that we consider phones to be "personal devices". They aren't designed to be multi-user. Laptops and other devices, on the other hand, can be multi-user, even if in practice they usually are not. Consequently, browsers on laptops are treated as less secure, and session invalidation periods are much shorter, requiring people to log in more frequently than in mobile apps.

Fortunately, web sites can be passwordless, relieving some of the pain. Technologies like FIDO2, WebAuthn, and SSI allow for passwordless user experiences on the web as well as mobile. The kicker is that this isn't a trade-off with security. Passwordless options can be more secure, and even more interoperable, with a better UX than passwords. Everybody wins.


  1. This is known as "identifier-first authentication". By asking for the identifier first, the authentication service can determine how to authenticate you. So, if you're using token authentication instead of a password, it can present that next. Some sites do this well, merely hiding the password field using JavaScript and CSS so that password managers can still fill it even though it's not visible. Others don't.
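The routing decision a service makes after seeing only the identifier can be sketched like this. The user records, method names, and return values here are hypothetical; a real service would consult its identity store and support more authentication methods.

```python
# Hypothetical user directory; a real service would look this up
# in its identity store.
USERS = {
    "alice@example.com": {"method": "webauthn"},
    "bob@example.com": {"method": "password"},
}

def next_step(identifier):
    """Decide which challenge to present, given only the identifier.

    This is why the password field can stay hidden at first: until
    it sees the identifier, the service doesn't know whether this
    account even uses a password.
    """
    user = USERS.get(identifier)
    if user is None:
        # Fall through to a password prompt rather than revealing
        # whether the account exists.
        return "password"
    if user["method"] == "webauthn":
        return "webauthn_challenge"
    return "password"
```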

Photo Credit: Login Window from AchinVerma (Pixabay)

Persistence, Programming, and Picos

Jon Udell introduced me to a fascinating talk by the always interesting r0ml. In it, r0ml argues that Postgres as a programming environment feels like a Smalltalk image (at least that's the part that's germane to this post). Jon has been working this way in Postgres for a while. He says:

For over a year, I’ve been using Postgres as a development framework. In addition to the core Postgres server that stores all the Hypothesis user, group, and annotation data, there’s now also a separate Postgres server that provides an interpretive layer on top of the raw data. It synthesizes and caches product- and business-relevant views, using a combination of PL/pgSQL and PL/Python. Data and business logic share a common environment. Although I didn’t make the connection until I watched r0ml’s talk, this setup hearkens back to the 1980s when Smalltalk (and Lisp, and APL) were programming environments with built-in persistence.
From The Image of Postgres
Referenced 2021-02-05T16:44:56-0700

Here's the point in r0ml's talk where he describes this idea:

As I listened to the talk, I was a little bit nostalgic for my time using Lisp and Smalltalk back in the day, but I was also excited because I realized that the model Jon and r0ml were talking about is very much alive in how one goes about building a pico system.

Picos and Persistence

Picos are persistent compute objects. Persistence is a core feature of how picos work. Picos exhibit persistence in three ways. Picos have1:

  • Persistent identity—Picos exist, with a single identity, continuously from the moment of their creation until they are destroyed.
  • Persistent state—Picos have state that programs running in the pico can see and alter.
  • Persistent availability—Picos are always on and ready to process queries and events.

Together, these properties give pico programming a different feel than what many developers are used to. I often tell my students that programmers write static documents (programs, configuration files, SQL queries, etc.) that create dynamic structures—the processes that those static artifacts create when they're run. Part of being a good programmer is being able to envision those dynamic structures as you program. They come alive in your head as you imagine the program running.

With picos, you don't have to imagine the structure. You can see it. Figure 1 shows the current state of the picos in a test I created for a collection of temperature sensors.

Network of Picos for Temperature Sensors
Figure 1: Network of Picos for Temperature Sensors (click to enlarge)

In this diagram, the black lines show the parent-child hierarchy and the dotted pink lines show the peer-to-peer connections between picos (called "subscriptions" in current pico parlance). Parent-child hierarchies are primarily used to manage the picos themselves, whereas the heterarchical connections between picos are used for programmatic communication and represent the relationships between picos. As new picos are created or existing picos are deleted, the diagram changes to show the dynamic computing structure that exists at any given time.

Clicking on one of the boxes representing a pico opens up a developer interface that enables interaction with the pico according to the rulesets that have been installed. Figure 2 shows the Testing tab of the developer interface for the io.picolabs.wovyn.router ruleset in the pico named sensor_line after the lastTemperature query has been made. Because this is a live view into the running system, the interface can be used to query the state and raise events in the pico.

Interacting with a Pico
Figure 2: Interacting with a Pico (click to enlarge)

A pico's state is updated by rules running in the pico in response to events that the pico sees. Pico state is made available to rules as persistent variables in KRL, the ruleset programming language. When a rule sets a persistent variable, the state is persisted after the rule has finished execution and is available to other rules that execute later2. The Testing tab allows developers to raise events and then see how that impacts the persistent state of the pico.
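The behavior of persistent variables can be sketched outside KRL. The JavaScript below (not KRL, and not the engine's actual storage code) models the idea: each ruleset gets its own entity-variable store within the pico, a rule writes a value in response to an event, and a later query reads it back. The ruleset id and event shape echo the wovyn example above but are assumptions for illustration.

```javascript
// Sketch: how KRL persistent ("entity") variables behave, modeled in
// JavaScript. The Map stands in for the engine's persistence layer.
const entityVars = new Map(); // ruleset id -> { name -> value }
const RID = "io.picolabs.wovyn.router"; // illustrative ruleset id

function ent(rid) {
  if (!entityVars.has(rid)) entityVars.set(rid, {});
  return entityVars.get(rid);
}

// A rule that fires on a temperature event and persists the reading,
// loosely mirroring `ent:lastTemperature := event:attrs{"temperature"}`.
function temperatureRule(event) {
  ent(RID).lastTemperature = event.attrs.temperature;
}

// A query run later, even by a different rule execution, sees the value.
function lastTemperature() {
  return ent(RID).lastTemperature;
}

temperatureRule({ domain: "wovyn", type: "heartbeat", attrs: { temperature: 72.5 } });
console.log(lastTemperature()); // 72.5
```

The key point is that the write outlives the rule execution that made it, which is exactly what the Testing tab lets developers observe interactively.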

Programming Picos

As I said, when I saw r0ml's talk, I was immediately struck by how much programming picos felt like using the Smalltalk or Lisp image. In some ways, it's like working with Docker images in a Fargate-like environment since it's serverless (from the programmer's perspective). But there's far less to configure and set up. Or maybe, more accurately, the setup is linguistically integrated with the application itself and feels less onerous and disconnected.

Building a system of picos to solve some particular problem isn't exactly like using Smalltalk. In particular, in a nod to modern development methodologies, the rulesets are installed from URLs and thus can be developed in the IDE the developer chooses and versioned in git or some other versioning system. Rulesets can be installed and managed programmatically so that the system can be programmed to manage its own configuration. To that point, all of the interactions in the developer interface are communicated to the pico via an API installed in the picos. Consequently, everything the developer interface does can be done programmatically as well.
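For a sense of what "programmatically" means here, this sketch builds the URLs a script would hit to raise an event or run a query against a pico. The /sky/event and /sky/cloud shapes follow the engine's Sky Event and Sky Cloud API patterns, but treat the exact paths, the localhost port, and the wrangler event name as assumptions to verify against your engine version.

```javascript
// Sketch: scripting the same API the developer interface uses.
// Paths, port, and event names below are assumptions for illustration.
const ENGINE = "http://localhost:3000";

function eventUrl(eci, eid, domain, type) {
  // Raise an event on the pico behind the channel `eci`.
  return `${ENGINE}/sky/event/${eci}/${eid}/${domain}/${type}`;
}

function queryUrl(eci, rid, name) {
  // Query a function exposed by a ruleset installed in the pico.
  return `${ENGINE}/sky/cloud/${eci}/${rid}/${name}`;
}

const raise = eventUrl("ECI123", "none", "wrangler", "install_ruleset");
const query = queryUrl("ECI123", "io.picolabs.wovyn.router", "lastTemperature");
// A script would then POST/GET these URLs with fetch() or curl.
```

Because the developer interface goes through this same API, anything done by hand in the UI can be scripted, including a system that manages its own configuration.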

Figure 3 shows the programming workflow that we use to build production pico systems.

Programming Workflow
Figure 3: Programming Workflow (click to enlarge)

The developer may go through multiple iterations of the Develop, Build, Deploy, Test phases before releasing the code for production use. What is not captured in this diagram is the interactive feel that the pico engine provides for the testing phase. While automated tests can check the unit and system functionality of the rules running in the pico, the developer interface provides a visual tool for seeing how the picos interact as events flow through the system. Being able to query the state of the picos and see their reaction to specific events in various configurations is very helpful in debugging problems.

A pico engine can use multiple images (one at a time), and an image can be zipped up and shared with another developer or checked into git. By default, the pico engine stores its state, including the installed rulesets, in the ~/.pico-engine/ directory. This is the image. Setting the PICO_ENGINE_HOME environment variable points the engine at a different directory, so you can keep separate images for different development environments or projects and easily pick up where you left off in a particular pico application.

For example, you could have a different pico engine image for a game project and an IoT project and start up the pico engine in either environment like so:

# work on my game project
PICO_ENGINE_HOME=~/.dnd_game_image pico-engine

# work on IoT project
PICO_ENGINE_HOME=~/.iot_image pico-engine

Images and Modern Development

At first, the idea of developing an application by interacting with an image of the running system may seem odd or out of step with modern development practices. After all, developers have had the idea of layered architectures and separation of concerns hammered into them, and image-based development in picos seems to fly in the face of those conventions. But it's really not all that different.

First, large pico applications are not generally built up by hand and then pushed into production. Rather, the developers in a pico-based programming project create a system that comes into being programmatically. So, the production image is separate from the developer's work image, as one would like. Another way to think about this, if you're familiar with systems like Smalltalk and Lisp, is that programmers don't develop systems using a REPL (read-eval-print loop). Rather, they write code, install it, and raise events to cause the system to take action.

Second, the integration of persistence into the application isn't all that unusual when one considers the recent move to microservices, with local persistence stores. I built a production connected-car service called Fuse using picos some years ago. Fuse had a microservice architecture even though it was built with picos and programmed with rules.

Third, programming in image-based systems requires persistence maintenance and migration work, just like any other architecture does. For example, a service for healing API subscriptions in Fuse was also useful when new features requiring new APIs were introduced, since the healing worked as well for new API subscriptions as it did for existing ones. These kinds of rules allowed the production state to migrate incrementally as bugs were fixed and features added.

Image-based programming in picos can be done with all the same care and concern for persistence management and loose coupling as in any other architecture. The difference is that developers and system operators (these days often one and the same) in a pico-based development activity are saved the effort of architecting, configuring, and operating the persistence layer as a separate system. Linguistically incorporating persistence in the rules provides for more flexible use of persistence with less management overhead.

Stored procedures are not likely to lose their stigma soon. Smalltalk images, as they were used in the 1980s, are unlikely to find a home in modern software development practices. Nevertheless, picos show that image-based development can be done in a manner consistent with the best practices we use today without losing the important benefits it brings.

Future Work

There are some improvements that should be made to the pico-engine to make image-based development better.

  • Moving picos between engines is necessary to support scaling of pico-based systems. It is still too hard to migrate picos from one engine to another, and when you do, the parent-child hierarchy is not maintained across engines. This is a particular problem for systems of picos that have varied ownership.
  • Managing images using environment variables is clunky. The engine could have better support for naming, creating, switching, and deleting images to support multiple projects.
  • Bruce Conrad has created a command-line debugging tool that allows declarations (which don't affect state) to be evaluated in the context of a particular pico. This functionality could be better integrated into the developer interface.

If you're intrigued and want to get started with picos, there's a Quickstart along with a series of lessons. If you want support, contact me and we'll get you added to the Picolabs Slack.

The pico engine is an open source project licensed under a liberal MIT license. You can see current issues for the pico engine here. Details about contributing are in the repository's README.


  1. These properties are dependent on the underlying pico engine and the persistence of picos is subject to availability and correct operation of the underlying infrastructure.
  2. Persistent variables are lexically scoped to a specific ruleset to create a closure over the variables. But this state can be accessed programmatically by other rulesets installed in the same pico by using the KRL module facility.

Announcing Pico Engine 1.0

Flowers Generative Art

The pico engine creates and manages picos.1 Picos (persistent compute objects) are internet-first, persistent actors that are a good choice for building reactive systems—especially in the Internet of Things.

Pico engine is the name we gave to the node.js rewrite of the Kynetx Rules Engine back in 2017. Matthew Wright and Bruce Conrad have been the principal developers of the pico engine.

The 2017 rewrite (Pico Engine 0.X) was a great success. When we started that project, I listed speed, internet-first, small deployment, and attribute-based event authorization as the goals. The 0.X rewrite achieved all of these. The new engine was small enough to be deployed on Raspberry Pis and other small computers and yet was significantly faster. One test we did on a 2015 13" Macbook Pro handled 44,504 events in over 8000 separate picos in 35 minutes and 19 seconds—a throughput of 21 events per second, or 47.6 milliseconds per request.

This past year Matthew and Bruce reimplemented the pico engine with some significant improvements and architectural changes. We've released that as Pico Engine 1.X. This blog post discusses the improvements in Pico Engine 1.X, after a brief introduction of picos so you'll know why you should care.


Picos support an actor model of distributed computation. Picos have the following three properties. In response to a received message,

  1. picos send messages to other picos—Picos respond to events and queries by running rules. Depending on the rules installed, a pico may raise events for itself or other picos.
  2. picos create other picos—Picos can create and delete other picos, resulting in a parent-child hierarchy of picos.
  3. picos change their internal state (which can affect their behavior when the next message is received)—Each pico has a set of persistent variables that can only be affected by rules that run in response to events.
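The three actor behaviors above can be sketched in a few lines of JavaScript. Real picos are persistent and engine-hosted; this toy registry is only meant to show send, create, and state-change happening in response to a message, with illustrative event names.

```javascript
// Sketch: the three actor-model behaviors of picos, modeled minimally.
const picos = new Map();
let nextId = 1;

function createPico(parent) {
  const id = `pico-${nextId++}`;
  picos.set(id, { id, parent, children: [], vars: {} });
  if (parent) picos.get(parent).children.push(id);
  return id;
}

function send(id, event) {
  const pico = picos.get(id);
  if (event.type === "child_needed") {
    const child = createPico(id);      // 2. picos create other picos
    send(child, { type: "init" });     // 1. picos send messages to picos
  } else if (event.type === "init") {
    pico.vars.initialized = true;      // 3. picos change their internal state
  }
}

const root = createPico(null);
send(root, { type: "child_needed" });
console.log(picos.get(root).children.length); // 1
```

Note that state is only ever changed by the pico handling the message, never reached into from outside, which is what gives picos their isolation property.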

I describe picos and their API and programming model in more detail elsewhere. Event-driven systems, like those built from picos, can be used to create systems that meet the Reactive Manifesto.

Despite the parent-child hierarchy, picos can be arranged in a heterarchical network for peer-to-peer communication and computation. As mentioned, picos support direct asynchronous messaging by sending events to other picos. Picos have an internal event bus for distributing those messages to rules installed in the pico. Rules in the pico are selected to run based on declarative event expressions. The pico matches events on the bus with event scenarios declared in the event expressions. Event expressions can specify simple single-event matches or complicated event relationships with temporal ordering. Rules whose event expressions match are scheduled for execution. Executing rules may raise additional events. More detail about the event loop and pico execution model is available elsewhere.
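A temporally ordered event expression like "A then B" can be illustrated with a small hand-rolled matcher. This is a sketch of the idea, not the select-when library's actual API, and the door events are invented for the example.

```javascript
// Sketch: matching the temporal event scenario "A then B".
// A rule is "selected" only when both events arrive in order.
function selectWhenThen(domainA, typeA, domainB, typeB, action) {
  let sawFirst = false;
  return (event) => {
    if (!sawFirst && event.domain === domainA && event.type === typeA) {
      sawFirst = true;        // first event seen; wait for the second
    } else if (sawFirst && event.domain === domainB && event.type === typeB) {
      sawFirst = false;       // scenario complete; rule fires
      action(event);
    }
  };
}

let fired = 0;
const rule = selectWhenThen("door", "opened", "door", "closed", () => fired++);
rule({ domain: "door", type: "closed" }); // out of order: no match
rule({ domain: "door", type: "opened" });
rule({ domain: "door", type: "closed" }); // completes "opened then closed"
console.log(fired); // 1
```

Declaring scenarios this way, rather than writing imperative dispatch code, is what makes rules compose cleanly on the pico's event bus.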

Each pico presents a unique Event-Query API that is dependent on the specific rulesets installed in the pico. Picos share nothing with other picos except through messages exchanged between them. Picos don't know and can't directly access or affect the internal state of another pico.

As a result of their design, picos exhibit the following important properties:

  • Lock-free concurrency—picos respond to messages without locks.
  • Isolation—state changes in one pico cannot affect the state in other picos.
  • Location transparency—picos can live on multiple hosts and so computation can be scaled easily and across network boundaries.
  • Loose coupling—picos are only dependent on one another to the extent of their design.

Pico Engine 1.0

Version 1.0 is a rewrite of pico-engine that introduces major improvements:

  • A more pico-centric architecture that makes picos less dependent on a particular engine.
  • A more modular design that supports future improvements and makes the engine code easier to maintain and understand.
  • Ruleset versioning and sharing to facilitate decentralized code sharing.
  • Better, attribute-based channel policies for more secure system architecture.
  • A new UI written in React that uses the event-query APIs of the picos themselves to render.

One of our goals for future pico ecosystems is to build not just distributed, but decentralized peer-to-peer systems. One of the features we'd very much like picos to have is the ability to move between engines seamlessly and with little friction. Pico Engine 1.X better supports this roadmap.

Figure 1 shows a block diagram of the primary components. The new engine is built on top of two primary modules: pico-framework and select-when.

Pico Engine Modular Architecture
Figure 1: Pico Engine Modular Architecture (click to enlarge)

The pico-framework handles the building blocks of a Pico based system:

  • Pico lifecycle—picos exist from the time they're created until they're deleted.
  • Pico parent/child relationships—Every pico, except for the root pico, has a parent. All picos may have children.
  • Events—picos respond to events based on the rules that are installed in the pico. The pico-framework makes use of the select-when library to create rules that pattern match on event streams.
  • Queries—picos can also respond to queries based on the rulesets that are installed in the pico.
  • Channels—Events and queries arrive on channels that are created and deleted. Access control policies for events and queries on a particular channel are also managed by the pico-framework.
  • Rulesets—the framework manages installing, caching, flushing, and sandboxing rulesets.
  • Persistence—all picos have persistence and can manage persistent data. The pico-framework uses Levelup to define an interface for a LevelDB compatible data store and uses it to handle persistence of picos.
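The persistence building block can be made concrete with a sketch of a LevelDB-style key/value interface. The method names mirror the levelup-style get/put/del, and a Map stands in for LevelDB; treat this as an illustration of the interface's shape, not the framework's exact contract.

```javascript
// Sketch: an in-memory stand-in for a LevelDB-compatible store of the
// kind pico-framework uses for pico persistence.
class MemStore {
  constructor() { this.data = new Map(); }
  async put(key, value) { this.data.set(key, value); }
  async get(key) {
    if (!this.data.has(key)) throw new Error("NotFound");
    return this.data.get(key);
  }
  async del(key) { this.data.delete(key); }
}

// Pico state might be keyed by pico id and variable name:
(async () => {
  const store = new MemStore();
  await store.put("pico-1:lastTemperature", 72.5);
  console.log(await store.get("pico-1:lastTemperature")); // 72.5
})();
```

Defining persistence against a narrow interface like this is what lets the framework swap the backing store (in-memory, on-disk LevelDB, or otherwise) without touching pico code.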

The pico-framework is language agnostic. Pico-engine-core combines pico-framework with facilities for compiling and executing KRL, the rule language used to program rulesets. KRL rulesets are compiled to Javascript for execution by pico-framework, and pico-engine-core contains a registry (transparent to the user) that caches compiled rulesets that have been installed in picos. In addition, pico-engine-core includes a number of standard libraries for KRL. The Javascript produced by the rewrite is much more readable than that rendered by the 0.X engine. Because of the new modular design, rulesets written entirely in Javascript can be added to a pico system.

The pico engine combines the pico-engine-core with a LevelDB-compliant persistent store, an HTTP server, a log writer, and a ruleset loader for full functionality.


Wrangler is the pico operating system. Wrangler presents an event-query API for picos that supports programmatically managing the pico lifecycle, channels and policies, and rulesets. Every pico created by the pico engine has Wrangler installed automatically to aid in programmatically interacting with picos.

One of the goals of the new pico engine was to support picos moving between engines. Picos relied too heavily on direct interaction with the engine APIs in 0.X and thus were more tightly coupled to the engine than is necessary. The 1.0 engine minimizes the coupling to the largest extent possible. Wrangler, written in KRL, builds upon the core functionality provided by the engine to provide developers with an API for building pico systems programmatically. A great example of that is the Pico Engine Developer UI, discussed next.

Pico Engine Developer UI

Another significant change to the pico engine with the 1.0 release was a rewritten Developer UI. In 0.X, the UI was hard coded into the engine. The 1.X UI is a single page web application (SPA) written in React. The SPA uses an API that the engine provides to get the channel identifier (ECI) for the root pico in the engine. The UI SPA uses that ECI to connect to the API implemented by the io.picolabs.pico-engine-ui.krl ruleset (which is installed automatically in every pico).

Figure 2 shows the initial Developer UI screen. The display is the network of picos in the engine. Black lines represent parent-child relationships and form a tree with the root pico at the root. The pink lines are subscriptions between picos—two-way channels formed by exchanging ECIs. Subscriptions are used to form peer-to-peer (heterarchical) relationships between picos and do not necessarily have to be on the same engine.

Pico Engine UI
Figure 2: Pico Engine UI (click to enlarge)

When a box representing a pico in the Developer UI is clicked, the display shows an interface for performing actions on the pico as shown in Figure 3. The interface shows a number of tabs.

  • The About tab shows information about the pico, including its parent and children. The interface allows information about the pico to be changed and new children to be created.
  • The Rulesets tab shows any rulesets installed in the pico, allows them to be flushed from the ruleset cache, and for new rulesets to be installed.
  • The Channels tab is used to manage channels and channel policies.
  • The Logging tab shows execution logs for the pico.
  • The Testing tab provides an interface for exercising the event-query APIs that the rulesets installed in the pico provide.
  • The Subscriptions tab provides an interface for managing the pico's subscriptions and creating new ones.
Pico Developer Interface
Figure 3: Pico Developer Interface (click to enlarge)

Because the Developer UI is just using the APIs provided by the pico, everything it does (and more) can be done programmatically by code running in the picos themselves. Most useful pico systems will be created and managed programmatically using Wrangler. The Developer UI provides a convenient console for exploring and testing during development. The io.picolabs.pico-engine-ui.krl ruleset can be replaced or augmented by another ruleset the developer installs on the pico to provide a different interface to the pico. Interesting pico-based systems will have applications that interact with their APIs to present the user interface. For example, Manifold is a SPA written in React that creates a system of picos for use in IoT applications.

Come Contribute

The pico engine is an open source project licensed under a liberal MIT license. You can see current issues for the pico engine here. Details about contributing are in the repository's README.

In addition to the work on the engine itself, one of the primary workstreams at present is to complete Bruce Conrad's excellent work to use DIDs and DIDComm as the basis for inter-pico communication, called ACA-Pico (Aries Cloud Agent - Pico). We're holding monthly meetings and there's a repository of current work complete with issues. This work is important because it will replace the current subscriptions method of connecting heterarchies of picos with DIDComm. This has the obvious advantages of being more secure and aligned with an important emerging standard. More importantly, because DIDComm is protocological, this will support protocol-based interactions between picos, including credential exchange.

If you're intrigued and want to get started with picos, there's a Quickstart along with a series of lessons. If you want support, contact me and we'll get you added to the Picolabs Slack.


  1. The pico engine is to picos as the docker engine is to docker containers.

Photo Credit: Flowers Generative Art from dp792 (Pixabay)

Generative Identity

Generative Art Ornamental Sunflower

The Generative Self-Sovereign Internet explored the generative properties of the self-sovereign internet, a secure overlay network created by DID connections. The generative nature of the self-sovereign internet is underpinned by the same kind of properties that make the internet what it is, promising a more secure and private, albeit no less useful, internet for tomorrow.

In this article, I explore the generativity of self-sovereign identity—specifically the exchange of verifiable credentials. One of the key features of the self-sovereign internet is that it is protocological—the messaging layer supports the implementation of protocol-mediated interchanges on top of it. This extensibility underpins its generativity. Two of the most important protocols defined on top of the self-sovereign internet support the exchange of verifiable credentials, as we'll see below. Together, these protocols work on top of the self-sovereign internet to give rise to self-sovereign identity through a global identity metasystem.

Verifiable Credentials

While the control of self-certifying identifiers in the form of DIDs is the basis for the autonomy of the self-sovereign internet, that autonomy is made effective through the exchange of verifiable credentials. Using verifiable credentials, an autonomous actor on the self-sovereign internet can prove attributes to others in a way they can trust. Figure 1 shows the SSI stack. The self-sovereign internet is labeled "Layer Two" in this figure. Credential exchange happens on top of that in Layer Three.

SSI Stack
Figure 1: SSI Stack (click to enlarge)

Figure 2 shows how credentials are exchanged. In this diagram, Alice has DID-based relationships with Bob, Carol, and Attestor. Alice has received a credential from Attestor. The credential contains attributes that Attestor is willing to attest belong to Alice. For example, Attestor might be her employer attesting that she is an employee. Attestor likely gave her a credential for their own purposes. Maybe Alice uses it for passwordless login at company web sites and services and to purchase meals at the company cafeteria. She might also use it at partner websites (like the benefits provider) to provide shared authentication without federation (and its associated infrastructure). Attestor is acting as a credential issuer. We call Alice a credential holder in this ceremony. The company and partner websites are credential verifiers. Credential issuance is a protocol that operates on top of the self-sovereign internet.

Credential exchange
Figure 2: Credential Exchange (click to enlarge)

Even though Attestor issued the credential to Alice for its own purposes, she holds it in her wallet and can use it at other places besides Attestor. For example, suppose she is applying for a loan and her bank, Certiphi, wants proof that she's employed and has a certain salary. Alice could use the credential from Attestor to prove to Certiphi that she's employed and that her salary exceeds a given threshold1. Certiphi is also acting as a credential verifier. Credential proof and verification is also a protocol that operates on top of the self-sovereign internet. As shown in Figure 2, individuals can also issue and verify credentials.

We say Alice "proved" attributes to Certiphi from her credentials because the verification protocol uses zero knowledge proof to support the minimal disclosure of data. Thus the credential that Alice holds from Attestor might contain a rich array of information, but Alice need only disclose the information that Certiphi needs for her loan. In addition, the proof process ensures that Alice can't be correlated through the DIDs she has shared with others. Attribute data isn't tied to DIDs or the keys that are currently assigned to the DID. Rather than attributes bound to identifiers and keys, Alice's identifiers and keys empower the attributes.
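The shape of minimal disclosure can be sketched without any cryptography. Real ZKP proofs are cryptographic constructions; this stand-in only shows what the verifier learns: a yes/no predicate answer, never the underlying salary value. The attribute names and threshold are invented for the example.

```javascript
// Sketch: selective disclosure via a predicate proof, simulated.
// The verifier receives only the predicate's answer, not the raw value.
function provePredicate(credential, attr, predicate) {
  return { attr, satisfied: predicate(credential[attr]) };
}

const credential = { employer: "Attestor", salary: 95000 };

// Certiphi asks: does salary meet the loan threshold?
const proof = provePredicate(credential, "salary", (s) => s >= 60000);
console.log(proof); // { attr: 'salary', satisfied: true } — salary itself undisclosed
```

A real ZKP adds the crucial property that the verifier can check the answer is honest without seeing the value; the point here is just what crosses the wire.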

Certiphi can validate important properties of the credential. Certiphi is able to validate the fidelity of the credential by reading the credential definition from the ledger (Layer One in Figure 1), retrieving Attestor's public DID from the credential definition, and resolving it to get Attestor's public key to check the credential's signature. At the same time, the presentation protocol allows Certiphi to verify that the credential is being presented by the person it was issued to and that it hasn't been revoked (using a revocation registry stored in Layer 1). Certiphi does not need to contact Attestor or have any prior business relationship to verify these properties.
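Those verification steps can be sketched with the ledger and crypto simulated. A real verifier resolves the issuer's DID on the ledger and checks a digital signature; here a Map plays the ledger and the "signature" is a simple tag, so only the flow of lookups is meaningful. All identifiers are invented for the example.

```javascript
// Sketch: a verifier's checks, with ledger and signature simulated.
// Note that nothing here contacts the issuer—only the ledger is read.
const ledger = new Map([
  ["did:ex:attestor", { publicKey: "attestor-pub-key" }],
  ["cred-def-42", { issuerDid: "did:ex:attestor", schema: "employment" }],
]);

function verifyCredential(presentation) {
  const credDef = ledger.get(presentation.credDefId); // credential definition
  if (!credDef) return false;
  const issuer = ledger.get(credDef.issuerDid);       // issuer's public DID doc
  // Stand-in for checking the signature with the issuer's public key:
  return presentation.signature === `signed-by:${issuer.publicKey}`;
}

const ok = verifyCredential({
  credDefId: "cred-def-42",
  signature: "signed-by:attestor-pub-key",
});
console.log(ok); // true
```

The design point the sketch preserves is that verification is a ledger lookup plus a local signature check, which is why no prior relationship with the issuer is needed.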

The global identity metasystem, shown as the yellow box in Figure 1, comprises the ledger at Layer 1, the self-sovereign internet at Layer 2, and the credential exchange protocols that operate on top of it. Together, these provide the necessary features and characteristics to support self-sovereign identity.

Properties of Credential Exchange

Verifiable credentials have five important characteristics that mirror how credentials work in the offline world:

  1. Credentials are decentralized and contextual. There is no central authority for all credentials. Every party can be an issuer, a holder, or a verifier. Verifiable credentials can be adapted to any country, any industry, any community, or any set of trust relationships.
  2. Credential issuers decide what data is contained in their credentials. Anyone can write credential schemas to the ledger. Anyone can create a credential definition based on any of these schemas.
  3. Verifiers make their own decisions about which credentials to accept—there's no central authority who determines what credentials are important or which are used for a given purpose.
  4. Verifiers do not need to contact issuers to perform verification—that's what the ledger is for. Credential verifiers don't need to have any technical, contractual, or commercial relationship with credential issuers in order to determine the credentials' fidelity.
  5. Credential holders are free to choose which credentials to carry and what information to disclose. People and organizations are in control of the credentials they hold and determine what to share with whom.

These characteristics underlie several important properties that support the generativity of credential exchange. Here are the most important:

Private—Privacy by Design is baked deep into the architecture of the identity metasystem, as reflected by several fundamental architectural choices:

  1. Peer DIDs are pairwise unique and pseudonymous by default to prevent correlation.
  2. Personal data is never written to the ledgers at Layer 1 in Figure 1—not even in encrypted or hashed form. Instead, all private data is exchanged over peer-to-peer encrypted connections between off-ledger agents at Layer 2. The ledger is used for anchoring rather than publishing encrypted data.
  3. Credential exchange has built-in support for zero-knowledge proofs (ZKP) to avoid unnecessary disclosure of identity attributes.
  4. As we saw earlier, verifiers don’t need to contact the issuer to verify a credential. Consequently, the issuer doesn’t know when or where the credential is used.

Decentralized—decentralization follows directly from the fact that no one owns the infrastructure that supports credential exchange; rather, the infrastructure, like that of the internet, is operated by many organizations and people bound by protocol. That no one owns it is the primary criterion for judging the degree of decentralization in a system.

Heterarchical—a heterarchy is a "system of organization where the elements of the organization are unranked (non-hierarchical) or where they possess the potential to be ranked a number of different ways." Participants in credential exchange relate to each other as peers and are autonomous.

Interoperable—verifiable credentials have a standard format, readily accessible schemas, and standard protocols for issuance, proving (presenting), and verification. Participants can interact with anyone else so long as they use tools that follow the standards and protocols. Credential exchange isn't a single, centralized system from a single vendor with limited pieces and parts. Rather, interoperability relies on interchangeable parts, built and operated by various parties. Interoperability supports substitutability, a key factor in autonomy and flexibility.

Substitutable—the tools for issuing, holding, proving, and verifying are available from multiple vendors and follow well-documented, open standards. Because these tools are interoperable, issuers, holders, and verifiers can choose software, hardware, and services without fear of being locked into a proprietary tool. Moreover, because many of the attributes the holder needs to prove (e.g. email address or even employer) will be available on multiple credentials, the holder can choose between credentials. Usable substitutes provide choice and freedom.

Flexible—closely related to substitutability, flexibility allows people to select appropriate service providers and features. No single system can anticipate all the scenarios that will be required for billions of individuals to live their own effective lives. The characteristics of credential exchange allow for context-specific scenarios.

Reliable and Censorship Resistant—people, businesses, and others must be able to exchange credentials without worrying that the infrastructure will go down, stop working, go up in price, or get taken over by someone who would do them harm. Substitutability of tools and credentials combined with autonomy makes the system resistant to censorship. There is no hidden third party or intermediary in Figure 2. Credentials are exchanged peer-to-peer.

Non-proprietary and Open—no one has the power to change how credentials are exchanged by fiat. Furthermore, the underlying infrastructure is less likely to go out of business and stop operation because its maintenance and operation are decentralized instead of being in the hands of a single organization. The identity metasystem has the same three virtues of the Internet that Doc Searls and Dave Weinberger enumerated as NEA: No one owns it, Everyone can use it, and Anyone can improve it. The protocols and code that enable the metasystem are open source and available for review and improvement.

Agentic—people can act as autonomous agents, under their self-sovereign authority. The most vital value proposition of self-sovereign identity is autonomy—not being inside someone else's administrative system where they make the rules in a one-sided way. Autonomy requires that participants interact as peers in the system, which the architecture of the metasystem supports.

Inclusive—inclusivity is more than being open and permissionless. Inclusivity requires design that ensures people are not left behind. For example, some people cannot act for themselves for legal (e.g. minors) or other (e.g. refugees) reasons. Support for digital guardianship ensures that those who cannot act for themselves can still participate.

Universal—successful protocols eat other protocols until only one survives. Credential exchange, built on the self-sovereign internet and based on protocol, has network effects that drive interoperability leading to universality. This doesn't mean that the metasystem will be mandated. Rather, one protocol will mediate all interaction because everyone in the ecosystem will conform to it out of self-interest.

The Generativity of Credential Exchange

Applying Zittrain's framework for evaluating generativity is instructive for understanding the generative properties of self-sovereign identity.

Capacity for Leverage

In Zittrain's words, leverage is the extent to which an object "enables valuable accomplishments that otherwise would be either impossible or not worth the effort to achieve." Leverage multiplies effort, reducing the time and cost necessary to innovate new capabilities and features.

Traditional identity systems have been anemic, supporting simple relationships focused on authentication and a few basic attributes their administrators need. They can't easily be leveraged by anyone but their owner. Federation through SAML or OpenID Connect has allowed the authentication functionality to be leveraged in a standard way, but authentication is just a small portion of the overall utility of a digital relationship.

One example of credential exchange's capacity for leverage is that it could serve as the foundation for a system that disintermediates platform companies like Uber, AirBnB, and the food delivery platforms. Platform companies build proprietary trust frameworks to intermediate exchanges between parties, charging exorbitant rents for what ought to be a natural interaction among peers. Credential exchange can open these trust frameworks up to create open marketplaces for services.

The next section on Adaptability lists a number of uses for credentials. The identity metasystem supports all these use cases with minimal development work on the part of issuers, verifiers, and holders. And because the underlying system is interoperable, an investment in the tools necessary to solve one identity problem with credentials can be leveraged by many others without new investment. The cost to define a credential is very low (often less than $100) and once the definition is in place, there is no cost to issue credentials against it. A small investment can allow an issuer to issue millions of credentials of different types for different use cases.
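To make the low marginal cost of issuance concrete, here is a minimal sketch of a W3C-style verifiable credential represented as plain data. The issuer and holder DIDs, the credential type, and the field values are illustrative placeholders, and the proof is elided; this is not any particular vendor's API.

```python
# Illustrative W3C-style verifiable credential as plain data.
# The DIDs, type name, and claim values are hypothetical placeholders.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "EmploymentCredential"],
    "issuer": "did:example:hospital-trust",        # hypothetical issuer DID
    "issuanceDate": "2021-01-15T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:alice",                 # hypothetical holder DID
        "role": "Infectious Disease Registrar",
        "employer": "Whipps Cross Hospital",
    },
    "proof": {
        "type": "Ed25519Signature2018",
        "proofValue": "...",                       # signature elided
    },
}

def subject_claims(vc: dict) -> dict:
    """Return the claims a verifier would inspect after checking the proof."""
    return vc["credentialSubject"]

print(subject_claims(credential)["employer"])  # → Whipps Cross Hospital
```

Once a definition like this exists, issuing another credential against it is just filling in a new `credentialSubject` and signing, which is why the marginal cost of issuance is effectively zero.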


Adaptability

Adaptability can refer to a technology's ability to be used for multiple activities without change as well as its capacity for modification in service of new use cases. Adaptability is orthogonal to a technology's capacity for leverage. An airplane, for example, offers incredible leverage, allowing goods and people to be transported over long distances quickly. But airplanes are neither useful in activities outside transportation nor easily modified for different uses. A technology that supports hundreds of use cases is more generative than one that is useful in only a few.

Identity systems based on credential exchange provide people with the means of operationalizing their online relationships by giving them the tools for acting online as peers and managing the relationships they enter into. Credential exchange allows for ad hoc interactions that were not, or could not be, imagined a priori.

The flexibility of credentials ensures they can be used in a variety of situations. Every form or official piece of paper is a potential credential. Here are a few examples of common credentials:

  • Employee badges
  • Driver's license
  • Passport
  • Wire authorizations
  • Credit cards
  • Business registration
  • Business licenses
  • College transcripts
  • Professional licensing (government and private)

But even more important, every bundle of data transmitted in a workflow is a potential credential. Since credentials are just trustworthy containers for data, there are many more use cases that may not be typically thought of as credentials:

  • Invoices and receipts
  • Purchase orders
  • Airline or train ticket
  • Boarding pass
  • Certificate of authenticity (e.g. for art, other valuables)
  • Gym (or any) membership card
  • Movie (or any) tickets
  • Insurance cards
  • Insurance claims
  • Titles (e.g. property, vehicle, etc.)
  • Certificate of provenance (e.g. non-GMO, ethically sourced, etc.)
  • Prescriptions
  • Fractional ownership certificates for high value assets
  • CO2 rights and carbon credit transfers
  • Contracts

Since even a small business might issue receipts or invoices, have customers who use the company website, or use employee credentials, most businesses will define at least one credential and many will define more. There are potentially tens of millions of different credential types. Many will use common schemas, but each credential from a different issuer constitutes a different credential for a different context.

With the ongoing work in Hyperledger Aries, these use cases expand even further. With a “redeemable credentials” feature, holders can prove possession of a credential in a manner that is double-spend proof without a ledger. This works for all kinds of redemption use cases like clocking back in at the end of a shift, voting in an election, posting an online review, or redeeming a coupon.
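The double-spend-proof idea behind redeemable credentials can be illustrated with a simplified nullifier scheme. This is a toy sketch of the concept, not the actual Hyperledger Aries protocol: the holder derives a one-way tag from a credential-bound secret and the redemption context, and the verifier rejects any tag it has seen before, without learning the secret or needing a ledger.

```python
import hashlib

def nullifier(credential_secret: bytes, context: str) -> str:
    # One-way tag: reveals nothing about the secret, but is stable for a
    # given (secret, context) pair, so a second redemption collides.
    return hashlib.sha256(credential_secret + context.encode()).hexdigest()

class Verifier:
    """Tracks redeemed tags so each credential redeems at most once per context."""
    def __init__(self):
        self.seen: set[str] = set()

    def redeem(self, tag: str) -> bool:
        if tag in self.seen:
            return False  # double spend detected
        self.seen.add(tag)
        return True

secret = b"holder-only-credential-secret"  # stand-in for a credential-bound secret
v = Verifier()
tag = nullifier(secret, "election-2021")
print(v.redeem(tag))  # → True: first redemption succeeds
print(v.redeem(tag))  # → False: second attempt is rejected
```

The same secret produces a different nullifier in a different context (a different election, shift, or coupon), so one-time use is enforced per context rather than globally.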

The information we need in any given relationship varies widely with context. Credential exchange protocols must be flexible enough to support many different situations. For example, in You've Had an Automobile Accident, I describe a use case that requires the kinds of ad hoc, messy, and unpredictable interactions that happen all the time in the physical world. Credential exchange readily adapts to these context-dependent, ad hoc situations.

Ease of Mastery

Ease of mastery refers to the capacity of a technology to be easily and broadly adapted and adopted. One of the core features of credential exchange on the identity metasystem is that it supports the myriad use cases described above without requiring new applications or user experiences for each one. The digital wallet that is at the heart of credential exchange activities on the self-sovereign internet supports two primary artifacts and the user experiences to manage them: connections and credentials. Like the web browser, even though multiple vendors provide digital wallets, the underlying protocol informs a common user experience.
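The two artifacts can be sketched as a minimal wallet data model. The class and field names below are illustrative, not any particular wallet vendor's schema:

```python
from dataclasses import dataclass, field

@dataclass
class Connection:
    """A peer DID relationship held in the wallet."""
    my_did: str
    their_did: str
    their_label: str

@dataclass
class Wallet:
    """Toy wallet: just connections and credentials, as in the text."""
    connections: list[Connection] = field(default_factory=list)
    credentials: list[dict] = field(default_factory=list)

    def add_connection(self, conn: Connection) -> None:
        self.connections.append(conn)

    def credentials_of_type(self, cred_type: str) -> list[dict]:
        return [c for c in self.credentials if cred_type in c.get("type", [])]

w = Wallet()
w.add_connection(Connection("did:peer:1abc", "did:peer:1xyz", "NHS Trust"))
w.credentials.append({"type": ["VerifiableCredential", "StaffPassport"]})
print(len(w.connections), len(w.credentials_of_type("StaffPassport")))  # → 1 1
```

Every use case in the previous section reduces to operations on these two collections, which is why one wallet interface can serve them all.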

A consistent user experience doesn't mean a single user interface. Rather, the focus is on the experience. As an example, consider an automobile. My grandfather, who died in 1955, could get in a modern car and, with only a little instruction, successfully drive it. Consistent user experiences let people know what to expect so they can intuitively understand how to interact in any given situation, regardless of context.


Accessibility

Accessible technologies are easy to acquire, inexpensive, and resistant to censorship. Because of its openness, standardization, and support by multiple vendors, credential exchange is easily available to anyone with access to a computer or phone with an internet connection. But we can't limit its use to individuals who have digital access and legal capacity. Ensuring that technical and legal architectures for credential exchange support guardianship and use on borrowed hardware can provide accessibility to almost everyone in the world.

The Sovrin Foundation's Guardianship Working Group has put significant effort into understanding the technical underpinnings (e.g., guardianship and delegation credentials), legal foundations (e.g., guardianship contracts), and business drivers (e.g., economic models for guardianship). They have produced an excellent whitepaper on guardianship that "examines why digital guardianship is a core principle for Sovrin and other SSI architectures, and how it works from inception to termination through looking at real-world use cases and the stories of two fictional dependents, Mya and Jamie."

Self-Sovereign Identity and Generativity

In What is SSI?, I made the claim that SSI requires decentralized identifiers, credential exchange, and autonomy for participants. Dick Hardt pushed back on that a bit and asked me whether decentralized identifiers were really necessary. We had several fun discussions on that topic.

In that article, I unfortunately used decentralized identifiers and verifiable credentials as placeholders for their properties. Once I started looking at properties, I realized that generative identity can't be built on an administrative identity system. Self-sovereign identity is generative not only because of the credential exchange protocols but also because of the properties of the self-sovereign internet upon which those protocols are defined and operate. Without the self-sovereign internet, enabled through DIDComm, you might implement something that works as SSI, but it won't provide the leverage and adaptability necessary to create a generative ecosystem of uses with the network effects needed to propel it to ubiquity.

Our past approach to digital identity has put us in a position where people's privacy and security are threatened by the administrative identity architecture it imposes. Moreover, limiting its scope to authentication and a few unauthenticated attributes, repeated across thousands of websites with little interoperability, has created confusion, frustration, and needless expense. None of the identity systems in common use today offer support for the same kind of ad hoc attribute sharing that happens every day in the physical world. The result has been anything but generative. Entities who rely on attributes from several parties must perform integrations with all of them. This is slow, complex, and costly, so it typically happens only for high-value applications.

An identity metasystem that supports protocol-mediated credential exchange running on top of the self-sovereign internet solves these problems and promises generative identity for everyone. By starting with people and their innate autonomy, generative identity supports online activities that are life-like and natural. Generative identity allows us to live digital lives with dignity and effectiveness, contemplates and addresses the problems of social inclusion, and supports economic access for people around the globe.


  1. For Alice to prove things about her salary, Attestor would have to include that in the credential they issue to Alice.

Photo Credit: Generative Art Ornamental Sunflower from dp792 (Pixabay)

The Generative Self-Sovereign Internet

Seed Germinating

This is part one of a two part series on the generativity of SSI technologies. This article explores the properties of the self-sovereign internet and makes the case that they justify its generativity claims. The second part will explore the generativity of verifiable credential exchange, the essence of self-sovereign identity.

In 2005, Jonathan Zittrain wrote a compelling and prescient examination of the generative capacity of the Internet and its tens of millions of attached PCs. Zittrain defined generativity thus:

Generativity denotes a technology’s overall capacity to produce unprompted change driven by large, varied, and uncoordinated audiences.

Zittrain masterfully describes the extreme generativity of the internet and its attached PCs, explains why openness of both the network and the attached computers is so important, discusses threats to the generative nature of the internet, and proposes ways that the internet can remain generative while addressing some of those threats. While the purpose of this article is not to review Zittrain's paper in detail, I recommend you take some time to explore it.

Generative systems use a few basic rules, structures, or features to yield behaviors that can be extremely varied and unpredictable. Zittrain goes on to lay out the criteria for evaluating the generativity of a technology:

Generativity is a function of a technology’s capacity for leverage across a range of tasks, adaptability to a range of different tasks, ease of mastery, and accessibility.

This sentence sets forth four important criteria for generativity:

  1. Capacity for Leverage—generative technology makes difficult jobs easier—sometimes possible. Leverage is measured by the capacity of a device to reduce effort.
  2. Adaptability—generative technology can be applied to a wide variety of uses with little or no modification. Where leverage speaks to a technology's depth, adaptability speaks to its breadth. Many very useful devices (e.g. airplanes, saws, and pencils) are nevertheless fairly narrow in their scope and application.
  3. Ease of Mastery—generative technology is easy to adopt and adapt to new uses. Many billions of people use a PC (or mobile device) to perform tasks important to them without significant skill. As they become more proficient in its use, they can apply it to even more tasks.
  4. Accessibility—generative technology is easy to come by and access. Access is a function of cost, deployment, regulation, monopoly power, secrecy, and anything else which introduces artificial scarcity.

The identity metasystem I've written about in the past is composed of several layers that provide its unique functionality. This article uses Zittrain's framework, outlined above, to explore the generativity of what I've called the Self-Sovereign Internet, the second layer in the stack shown in Figure 1. A future article will discuss the generativity of credential exchange at layer three.

SSI Stack
Figure 1: SSI Stack (click to enlarge)

The Self-Sovereign Internet

In DIDComm and the Self-Sovereign Internet, I make the case that the network of relationships created by the exchange of decentralized identifiers (layer 2 in Figure 1) forms a new, more secure layer on the internet. Moreover, the protocological properties of DIDComm make that layer especially useful and flexible, mirroring the internet itself.

This kind of "layer" is called an overlay network. An overlay network comprises virtual links that correspond to a path in the underlying network. Secure overlay networks rely on an identity layer based on asymmetric key cryptography to ensure message integrity, non-repudiation, and confidentiality. TLS (HTTPS) is a secure overlay, but it is incomplete because it's not symmetrical. Furthermore, it's relatively inflexible because it overlays a network layer using a client-server protocol1.

In Key Event Receipt Infrastructure (KERI) Design, Sam Smith makes the following important point about secure overlay networks:

The important essential feature of an identity system security overlay is that it binds together controllers, identifiers, and key-pairs. A sender controller is exclusively bound to the public key of a (public, private) key-pair. The public key is exclusively bound to the unique identifier. The sender controller is also exclusively bound to the unique identifier. The strength of such an identity system based security overlay is derived from the security supporting these bindings.
From Key Event Receipt Infrastructure (KERI) Design
Referenced 2020-12-21T11:08:57-0700

Figure 2 shows the bindings between these three components of the secure overlay.

Figure 1: Binding of controller, authentication factors, and identifiers in identity systems.
Figure 2: Binding of controller, authentication factors, and identifiers that provide the basis for a secure overlay network. (click to enlarge)

In The Architecture of Identity Systems, I discuss the strength of these critical bindings in various identity system architectures. The key point for this discussion is that the peer-to-peer network created by peer DID exchanges constitutes an overlay with an autonomic architecture, providing not only the strongest possible bindings between the controller, identifiers, and authentication factors (public key), but also needing no external trust basis (like a ledger), because peer DIDs are self-certifying.
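The self-certifying binding can be illustrated with a toy identifier derived from a digest of the public key: anyone can check the key-to-identifier binding by recomputation, with no ledger or third party. Random bytes stand in for a real verification key here (a production system would use a real key type such as Ed25519), and the `did:example:` method is a placeholder.

```python
import hashlib
import secrets

def make_identifier(public_key: bytes) -> str:
    # Self-certifying: the identifier commits to the public key itself.
    return "did:example:" + hashlib.sha256(public_key).hexdigest()[:32]

def binding_is_valid(identifier: str, public_key: bytes) -> bool:
    # Anyone can verify the key-identifier binding by recomputing it;
    # no external trust basis is needed.
    return identifier == make_identifier(public_key)

public_key = secrets.token_bytes(32)  # stand-in for a real verification key
did = make_identifier(public_key)
print(binding_is_valid(did, public_key))       # → True
print(binding_is_valid(did, b"attacker-key"))  # → False
```

Because the identifier is a function of the key, an attacker cannot claim the identifier without the key, which is the binding strength the quoted KERI passage describes.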

DIDs allow us to create cryptographic relationships, solving significant key management problems that have plagued asymmetric cryptography since its inception. Consequently, regular people can use a general purpose secure overlay network based on DIDs. The DID network that is created when people use these relationships provides a protocol, DIDComm, that is every bit as flexible and useful as TCP/IP.

Consequently, communications over a DIDComm-enabled peer-to-peer network are as generative as the internet itself. Thus, the secure overlay network formed by DIDComm connections represents a self-sovereign internet, emulating the underlying internet's peer-to-peer messaging in a way that is both secure and trustworthy2 without the need for external third parties3.

Properties of the Self-Sovereign Internet

In World of Ends, Doc Searls and Dave Weinberger enumerate the internet's three virtues:

  1. No one owns it.
  2. Everyone can use it.
  3. Anyone can improve it.

These virtues apply to the self-sovereign internet as well. As a result, the self-sovereign internet displays important properties that support its generativity. Here are the most important:

Decentralized—decentralization follows directly from the fact that no one owns it. This is the primary criterion for judging the degree of decentralization in a system.

Heterarchical—a heterarchy is a "system of organization where the elements of the organization are unranked (non-hierarchical) or where they possess the potential to be ranked a number of different ways." Nodes in a DIDComm-based network relate to each other as peers. This is a heterarchy; there is no inherent ranking of nodes in the architecture of the system.

Interoperable—regardless of what providers or systems we use to connect to the self-sovereign internet, we can interact with any other principals who are using it so long as they follow protocol4.

Substitutable—the DIDComm protocol defines how systems that use it must behave to achieve interoperability. That means that anyone who understands the protocol can write software that uses DIDComm. Interoperability ensures that we can operate using a choice of software, hardware, and services without fear of being locked into a proprietary choice. Usable substitutes provide choice and freedom.

Reliable and Censorship Resistant—people, businesses, and others must be able to use the secure overlay network without worrying that it will go down, stop working, go up in price, or get taken over by someone who would harm it or its users. This is larger than mere technical trust that a system will be available and extends to the issue of censorship.

Non-proprietary and Open—no one has the power to change the self-sovereign internet by fiat. Furthermore, it can't go out of business and stop operation because its maintenance and operation are distributed instead of being centralized in the hands of a single organization. Because the self-sovereign internet is an agreement rather than a technology or system, it will continue to work.

The Generativity of the Self-Sovereign Internet

Applying Zittrain's framework for evaluating generativity is instructive for understanding the generative properties of the self-sovereign internet.

Capacity for Leverage

In Zittrain's words, leverage is the extent to which an object "enables valuable accomplishments that otherwise would be either impossible or not worth the effort to achieve." Leverage multiplies effort, reducing the time and cost necessary to innovate new capabilities and features. Like the internet, DIDComm's extensibility through protocols enables the creation of special-purpose networks and data distribution services on top of it. By providing a secure, stable, trustworthy platform for these services, DIDComm-based networks reduce the effort and cost associated with these innovations.

Like a modern operating system's application programming interface (API), DIDComm provides a standardized platform supporting message integrity, non-repudiation, and confidentiality. Programmers get the benefits of a trusted message system without need for expensive and difficult development.


Adaptability

Adaptability can refer to a technology's ability to be used for multiple activities without change as well as its capacity for modification in service of new use cases. Adaptability is orthogonal to capacity for leverage. An airplane, for example, offers incredible leverage, allowing goods and people to be transported over long distances quickly. But airplanes are neither useful in activities outside transportation nor easily modified for different uses. A technology that supports hundreds of use cases is more generative than one that is useful in only a few.

Like TCP/IP, DIDComm makes few assumptions about how the secure messaging layer will be used. Thus the network formed by the nodes in a DIDComm network can be adapted to any number of applications. Moreover, because a DIDComm-based network is decentralized and self-certifying, it is inherently scalable for many uses.
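This few-assumptions design shows up concretely in how DIDComm messages carry a protocol type URI that routes them to a handler, so new protocols can be layered on without changing the messaging substrate. The sketch below is illustrative: the `basicmessage` type URI follows published Aries conventions, but the handler registry and functions are hypothetical, not a real agent framework's API.

```python
# Minimal sketch of protocol dispatch over a generic secure messaging layer.
# The registry and handlers are illustrative; real agents do much more.
handlers = {}

def handler(msg_type: str):
    """Decorator registering a function for a given message type URI."""
    def register(fn):
        handlers[msg_type] = fn
        return fn
    return register

@handler("https://didcomm.org/basicmessage/1.0/message")
def basic_message(msg: dict) -> str:
    return f"received: {msg['content']}"

@handler("https://didcomm.org/issue-credential/1.0/offer-credential")
def credential_offer(msg: dict) -> str:
    return "credential offer received"

def dispatch(msg: dict) -> str:
    fn = handlers.get(msg["@type"])
    return fn(msg) if fn else "unsupported protocol"

print(dispatch({"@type": "https://didcomm.org/basicmessage/1.0/message",
                "content": "hello"}))  # → received: hello
```

Adding a new use case means registering a new handler for a new type URI; nothing about connections, encryption, or routing has to change, which is the adaptability being claimed.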

Ease of Mastery

Ease of mastery refers to the capacity of a technology to be easily and broadly adapted and adopted. The secure, trustworthy platform of the self-sovereign internet allows developers to create applications without worrying about the intricacies of the underlying cryptography or key management.

At the same time, because of its standard interface and protocol, DIDComm-based networks can present users with a consistent user experience that reduces the skill needed to establish and use connections. Just like a browser presents a consistent user experience on the web, a DIDComm agent can present users with a consistent user experience for basic messaging, as well as specialized operations that run over the basic messaging system.

Of special note is key management, which has been the Achilles heel of previous attempts at secure overlay networks for the internet. Because of the nature of decentralized identifiers, identifiers are separated from the public key, allowing the keys to be rotated when needed without also needing to refresh the identifier. This greatly reduces the need for people to manage or even see keys. People focus on the relationships and the underlying software manages the keys.5
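The separation of identifier and keys can be sketched as a rotation log: the identifier stays bound to the inception key's digest, and each rotation event records authority passing to a new key. This is a toy illustration only; real systems such as KERI use cryptographically signed events and pre-rotation commitments, which are elided here.

```python
import hashlib
import secrets

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class KeyState:
    """Toy key-rotation log: the identifier never changes across rotations."""
    def __init__(self):
        self.current_key = secrets.token_bytes(32)  # stand-in public key
        self.identifier = "did:example:" + digest(self.current_key)[:16]
        self.log = [("inception", digest(self.current_key))]

    def rotate(self) -> None:
        new_key = secrets.token_bytes(32)
        # In a real system this event is signed by the *old* key, chaining
        # authority from key to key under a stable identifier.
        self.log.append(("rotation", digest(new_key)))
        self.current_key = new_key

state = KeyState()
before = state.identifier
state.rotate()
print(state.identifier == before)  # → True: identifier survives rotation
print(len(state.log))              # → 2 events
```

Relying parties keep addressing the same identifier while the wallet software walks the log to find the currently authoritative key, which is what lets key management disappear from the user's view.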


Accessibility

Accessible technologies are easy to acquire, inexpensive, and resistant to censorship. DIDComm's accessibility is a product of its decentralized and self-certifying nature. Protocols and implementing software are freely available to anyone without intellectual property encumbrances. Multiple vendors, and even open-source tools, can easily use DIDComm. No central gatekeeper or any other third party is necessary to initiate a DIDComm connection in service of a digital relationship. Moreover, because no specific third parties are necessary, censorship of use is difficult.


Generativity allows decentralized actors to create cooperating, complex structures and behaviors. No one person or group can or will think of all the possible uses, but each is free to adapt the system to their own purposes. The architecture of the self-sovereign internet exhibits a number of important properties, and its generativity depends on those properties. The true value of the self-sovereign internet is that it provides a leverageable, adaptable, usable, accessible, and stable platform upon which others can innovate.


  1. Implementing general-purpose messaging on HTTP is not straightforward, especially when combined with non-routable IP addresses for many clients. On the other hand, simulating client-server interactions on a general-purpose messaging protocol is easy.
  2. I'm using "trust" in the cryptographic sense, not in the reputational sense. Cryptography allows us to trust the fidelity of the communication but not its content.
  3. Admittedly, the secure overlay is running on top of a network with a number of third parties, some benign and others not. Part of the challenge of engineering a functional secure overlay with self-sovereignty is mitigating the effects that these third parties can have within the self-sovereign internet.
  4. Interoperability is, of course, more complicated than merely following the protocols. Daniel Hardman does an excellent job of discussing this for verifiable credentials (a protocol that runs over DIDComm), in Getting to Practical Interop With Verifiable Credentials.
  5. More details about some of the ways software can greatly reduce the burden of key management when things go wrong can be found in What If I Lose My Phone? by Daniel Hardman.

Photo Credit: Seed Germination from USDA (CC0)