Summary

Key signing parties attempted to establish decentralized trust, but they ultimately failed due to poor usability, lack of incentives, and shallow trust models. Verifiable Relationship Credentials (VRCs) provide a modern, peer-to-peer approach that enables actionable, contextual trust built on decentralized identifiers and secure messaging. First-person identity emerges from direct connections: relationships grounded in mutual authentication and portable, verifiable trust.

Alice and Bob exchange VRCs

I've spent the better part of the week thinking about the idea of first-person identity and verifiable relationship credentials after Drummond Reed spoke about them on Monday at VRM Day. I decided to write about it to force myself to understand it better.

One of the hard parts of first-person identity is knowing who to trust online. This isn't a new problem. Back in the day, people trying to use Pretty Good Privacy (PGP) faced the same issue when dealing with public keys. Their solution? Key signing parties.

Never heard of a key signing party? Imagine Alice and Bob are at the O'Reilly Open Source conference in 2007, tucked into a side room labeled "PGP Key Signing Party." About a dozen people mill about, each holding a printed sheet of paper covered in strange-looking hexadecimal strings. Alice approaches Bob, both a little unsure of how to proceed.

"Hi, I'm Alice," she says, holding up her badge and offering her driver's license. Bob does the same. They each squint at the other's ID, then down at the printouts, comparing fingerprints. Neither really knows what they're supposed to be verifying beyond the digits matching. Satisfied enough, they nod awkwardly and move on.

Later, back at her laptop, Alice uses the terminal to sign Bob's key and upload the signature to a public key server. It's a little thrilling, in a nerdy kind of way—but the truth is, she's not sure if she'll ever need Bob's key again.

This ritual—half security theater, half social ceremony—was the heart of early attempts at decentralized identity verification. It was a noble effort to build trust without relying on central authorities. But as creative and community-driven as key signing parties were, they never really worked at scale.

Let's talk about why—and how decentralized identifiers and verifiable credentials might offer a better path to first-person trust in the digital world.

Why They Didn't Work

After the conference, Alice doesn't think much more about Bob's key. Sure, she signed it and uploaded the signature to a key server, but that was more out of politeness than practical necessity. Weeks later, when she sees Bob's name in her inbox, she vaguely remembers meeting him—but she has no idea whether she should trust the key attached to his email.

Bob, meanwhile, has been trying to get more people to sign his key. He's collected half a dozen signatures, but they're from people he met once, briefly. The "web of trust" he's supposed to be building still feels like a pile of disconnected threads.

This is where things fell apart:

  1. It wasn't user-friendly and was far too manual—Nothing about the process felt intuitive. Fingerprints were long strings of hexadecimal gibberish. The tools were cryptic and unforgiving. Every step was an opportunity for confusion, mistakes, or simply giving up. And once the key was signed, there was no easy way to use that trust meaningfully in everyday communication. Even for technical folks like Alice and Bob, the experience was brittle. For most people, it was impossible.
  2. The web of trust never reached critical mass—The key idea behind the web of trust was that if Alice trusted Bob, and Bob trusted Carol, then Alice might come to trust Carol, too. But that only works if:
    • A lot of people are participating
    • They're actively managing their trust relationships
    • The connections form a dense, navigable graph
    Instead, what Alice and Bob ended up with were isolated clusters—tiny pockets of trust with no meaningful way to bridge between them.
  3. No immediate payoff—The effort required didn't translate into practical value. Alice never encrypted an email to Bob. Bob never used his signed key to unlock any kind of access or reputation. Signing a key became a kind of ceremonial gesture—well-meaning, but ultimately inconsequential.
  4. Trust was binary and shallow—In theory, key signing meant "I've verified this person's identity." In practice, it often meant "I met this person at a conference and glanced at their ID." The depth of trust was thin, and the binary nature of key signatures (signed or not) didn't reflect the nuanced reality of human relationships.

The core idea was right: identity verification shouldn't require a central authority. But the implementation relied on people doing too much, too manually, and for too little benefit. The trust infrastructure never got far enough to be usable in real life—and so, even though Alice and Bob meant well, their efforts ended up as little more than cryptographic footnotes.

What Can We Learn from the Experience?

Let's rewind and replay that moment between Alice and Bob—only this time, they're operating in a modern, decentralized identity system. No key servers. No GPG. No fingerprints printed on paper.

At another tech conference, Alice scans a QR code on Bob's badge or uses her device's NFC reader to create a connection with Bob. Her personal agent (not necessarily AI-powered) resolves the self-certifying, autonomic decentralized identifier (DID) that Bob provided, pulling Bob's DID document—not from a central directory, but from a peer-to-peer interaction.

Bob's agent reciprocates, requesting a DID from Alice. This isn't just identity exchange—it's mutual authentication. Each party cryptographically proves control over their identifier. No centralized certificate authority is involved; trust is rooted in the interaction itself, supported by verifiable credentials issued by organizations and communities both recognize.

But here's where it gets really interesting: by exchanging DIDs, Alice and Bob have created an actionable connection. Their exchange creates a secure, private DIDComm messaging channel. This isn't just for encrypted chat—though it could be. It's a foundation for ongoing interaction: credential presentations, access control, consent requests, proofs of presence, or even contract negotiation. The connection is both trusted and usable.
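
To make the mutual authentication step concrete, here is a toy sketch of the challenge-response flow in Python. It uses a Schnorr identification protocol over a Mersenne prime to model "cryptographic proof of control"; real DID methods use production signature suites like Ed25519 and carry keys in DID documents, and the `did:peer:` identifiers below are made up for illustration.

```python
import secrets

# Toy Schnorr identification standing in for DID proof-of-control.
# The flow (challenge, response, verify) mirrors what Alice's and Bob's
# agents do; the parameters are simplified and NOT production-grade.
P = 2**127 - 1  # a Mersenne prime; real systems use standard curves
G = 3

class AgentKey:
    """Holds the private key behind a peer DID (illustration only)."""
    def __init__(self, did):
        self.did = did
        self.x = secrets.randbelow(P - 1)   # private key
        self.y = pow(G, self.x, P)          # public key, published in the DID document

    def respond(self, challenge):
        """Prove control of the DID for a verifier-supplied challenge."""
        r = secrets.randbelow(P - 1)
        t = pow(G, r, P)                        # commitment
        s = (r + challenge * self.x) % (P - 1)  # response
        return t, s

def verify(pub_y, challenge, t, s):
    """Check the proof against the public key from the DID document."""
    return pow(G, s, P) == (t * pow(pub_y, challenge, P)) % P

# Mutual authentication: each side challenges the other.
alice, bob = AgentKey("did:peer:alice"), AgentKey("did:peer:bob")
challenge_for_bob = secrets.randbelow(P - 1)
t, s = bob.respond(challenge_for_bob)
print(verify(bob.y, challenge_for_bob, t, s))  # Alice confirms Bob controls his DID
```

Because each side both issues and answers a challenge, neither party has to trust a directory lookup; the proof is rooted in the interaction itself.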

Later, Alice could send Bob a verifiable credential confirming they met. Bob could follow up by sharing a credential that gives Alice access to a community space. Their agents handle the details behind the scenes, using DIDComm protocols to maintain privacy and ensure integrity.

There are a number of important changes in this new model:

  1. Trust is peer-to-peer—No key servers. No middlemen. Just Alice and Bob exchanging self-certifying identifiers directly and building trust based on verifiable claims and mutual context.
  2. Mutual authentication is built-in—Both parties authenticate each other through cryptographic proof of control and credentials. It's not a one-way lookup; it's a handshake.
  3. DIDs enable ongoing, secure interaction—Unlike traditional key signing, which ended after the ceremony, exchanging DIDs gives Alice and Bob a secure channel for ongoing communication. DIDComm messaging transforms identity exchange into a persistent, actionable relationship.
  4. Trust has become usable—What began as an in-person meeting becomes a functional connection: a secure link over which credentials, messages, and permissions can flow. Trust becomes a bridge, not just a checkmark.
  5. There are no key servers, no command line—Everything happens in the background: the agents manage key material, update DIDs, and maintain the messaging link. Alice and Bob stay focused on their goals—not cryptography.

Key signing parties were built on a noble idea: decentralized, user-driven trust. But they stopped at verification. In the world of DIDs, DIDComm, and Verifiable Credentials, trust becomes a living channel, not a static record. Alice and Bob didn't just verify each other. They connected. And that is a huge difference.

Improving the UX of Trust: Verifiable Relationship Credentials

After Alice and Bob exchange DIDs and establish a secure DIDComm channel, they have the foundation of a relationship. But what if they want to do more than just message each other? What if they want to capture, express, and eventually use the fact that they met—on their own terms? That's where the verifiable relationship credential (VRC) comes in.

Let's say Alice decides to issue a VRC to Bob. She does this through her personal agent, which creates a standard verifiable credential with self-asserted attributes describing her side of the relationship. The credential could include:

  • Her name and other contact information
  • A claim that Alice met Bob in person at "IIW XL"
  • An optional role or label she assigns ("professional contact," "trusted peer," "collaborator")
  • A brief note about context ("Talked about SSI, aligned on agent interoperability")
  • A timestamp and a validity window, if she wants the credential to expire
  • Her DID as the issuer and Bob's DID as the subject
  • Importantly, her identifier within a shared community context (e.g., her IIW working group handle or project-specific DID)
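
The attributes above can be sketched as a W3C-style verifiable credential. This is an illustration, not a published vocabulary: the claim names under `credentialSubject` (such as `relationship` and `issuerCommunityId`), the DIDs, and the community handle are all invented for the example; a real credential would also carry a signed `proof` block.

```python
import json
from datetime import datetime, timedelta, timezone

# Sketch of Alice's VRC for Bob, loosely following the W3C Verifiable
# Credentials data model. Field names inside credentialSubject are
# illustrative, not a standard vocabulary.
issued = datetime(2025, 4, 7, tzinfo=timezone.utc)
vrc = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential", "RelationshipCredential"],
    "issuer": "did:peer:alice-example",                 # Alice's DID
    "validFrom": issued.isoformat(),
    "validUntil": (issued + timedelta(days=365)).isoformat(),
    "credentialSubject": {
        "id": "did:peer:bob-example",                   # Bob's DID
        "relationship": {
            "metInPersonAt": "IIW XL",
            "role": "professional contact",
            "note": "Talked about SSI, aligned on agent interoperability",
        },
        # Alice's identifier within the shared community context
        "issuerCommunityId": "alice@iiw-agents-wg",
    },
    # A real credential would include a "proof" block signed with a key
    # referenced from Alice's DID document; omitted in this sketch.
}
print(json.dumps(vrc, indent=2))
```

Everything here is self-asserted by Alice and scoped to one relationship, which is exactly what makes it safe for her to issue and useful for Bob to present.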

The VRC is signed by Alice as the issuer. Bob can now store that credential in his wallet—not just as a keepsake, but as evidence of his connection to Alice. He can selectively present this credential to others who might trust Alice, using it to bootstrap his reputation or prove participation in a network. Crucially, this credential is voluntary, signed, and contextual. Alice isn't vouching for Bob's entire identity—just the fact that she knows him, in a specific capacity, at a specific time.

Bob, in turn, can issue a VRC to Alice, reflecting his view of the relationship. These credentials don't have to match. They don't have to be symmetrical. But together, they form a mutual web of attestations—a decentralized, trust-enhancing social layer. Over time, as Bob collects similar credentials from others, he builds a mosaic of relationships that's both verifiable and portable. It's like LinkedIn endorsements, but cryptographically signed and under the subject's control—not platform-owned.

This works better than key signing parties for several reasons:

  • Trust becomes tangible—Instead of an abstract handshake, Alice gives Bob something concrete: a verifiable statement of trust. It's not absolute—it's scoped to their interaction—but it's actionable.
  • Portable reputation—Bob can present Alice's credential in other contexts where Alice is known or trusted. It's a decentralized version of "you can use my name."
  • Contextual and subjective—The VRC reflects Alice's view of Bob. It's self-scoped and doesn't pretend to be a universal truth. That makes it both useful and safe—especially when combined with selective disclosure.
  • Built for agents—Bob's agent can surface VRCs when interacting with third parties: "Alice has attested to this relationship." This creates a fabric of lightweight, useful credentials that can augment decision-making.

The verifiable relationship credential is simple, but it captures something that key signing never could: the social, situational texture of trust. It turns a peer-to-peer interaction into a reusable proof of connection—issued by people, not platforms. For Alice and Bob, it's no longer just "we exchanged keys." It's "we created a relationship—and here's what it meant."

From Relationships to Reputation: Trust as a Graph

Alice and Bob meet at Internet Identity Workshop (IIW)—a place where decentralized identity isn't just theory, it's hallway conversations, whiteboard sessions, and rapid prototyping in the lounge. After exchanging DIDs and establishing a DIDComm channel, they each issue the other a verifiable relationship credential (VRC). Alice's credential says she met Bob at IIW, discussed personal agents and DIDComm, and found him a thoughtful collaborator. Bob issues a similar credential to Alice, reflecting his side of the relationship.

Fast forward a few months: Bob keeps showing up in conversations, contributing to working groups, and collaborating on new specs. Each new interaction leads to more VRCs—credentials from others in the community who are attesting, in their own words and context, to their relationship with him. These VRCs, taken individually, are simple statements of relationship. But collectively, they form a decentralized, living trust graph—a network of attestations that agents can navigate.

Now imagine Carol, another participant in the identity community, is deciding whether to bring Bob into a working group on credential portability. She doesn't know Bob personally, but she sees that he has a VRC from Alice—a name she recognizes and trusts from prior collaboration. Her agent reviews the credential and spots something important: the community identifier in the VRC Bob presents from Alice is the same one that appears in the VRC Carol received directly from Alice months earlier.

That shared identifier becomes a verifiable thread—linking two private relationships into a meaningful chain of trust. Carol's agent now has high confidence that the Alice in Bob's credential is the same Alice who endorsed Carol. Bob doesn't need to present Alice's global identity—just the portion she's chosen to make consistent in this context. Carol's agent reviews Bob's broader trust graph and finds:

  • Multiple VRCs from known IIW regulars
  • Overlapping context (working on agents, involved in open standards)
  • A consistent pattern of positive, scoped endorsements
  • Crucially, a link back to someone she already knows and trusts, via Alice's community identifier

Carol doesn't have to "trust Bob" in the abstract. She can trust that Bob is part of her extended network, with specific, verifiable relationships that support the decision she needs to make.
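
The identifier-matching step Carol's agent performs can be sketched in a few lines. The credential shape and the `issuerCommunityId` field are illustrative assumptions, not a standard, but the logic is the point: a match between a presented credential and one Carol already holds links two private relationships into a chain of trust.

```python
# Does the community identifier Alice used in the VRC Bob presents match
# the identifier in a VRC Carol holds directly? Field names here are
# invented for illustration.

def links_to_known_issuer(presented_vrc: dict, held_vrcs: list) -> bool:
    """True if the presented credential's issuer used a community
    identifier that also appears in a credential we hold directly."""
    presented_id = presented_vrc["credentialSubject"].get("issuerCommunityId")
    if presented_id is None:
        return False
    return any(
        held["credentialSubject"].get("issuerCommunityId") == presented_id
        for held in held_vrcs
    )

# Bob presents Alice's VRC; Carol holds her own VRC from Alice.
bob_presents = {"credentialSubject": {"id": "did:peer:bob",
                                      "issuerCommunityId": "alice@iiw-agents-wg"}}
carol_holds = [{"credentialSubject": {"id": "did:peer:carol",
                                      "issuerCommunityId": "alice@iiw-agents-wg"}}]
print(links_to_known_issuer(bob_presents, carol_holds))  # True
```

Note that the check never requires Alice's global identity—only the identifier she chose to keep consistent within this community.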

This is reputation without centralization:

  • Peer-to-peer, not platform-owned
  • Contextual, not generic
  • Verifiable, but privacy-preserving

There's no algorithm deciding who's "influential." There's no reputation score being gamed. Each relationship credential is a piece of a mosaic, curated and held by the people who made them.

Personal agents that are augmented with AI could traverse these graphs on our behalf, weighting relationships based on factors like recency and frequency of interactions, the trustworthiness of issuers (based on our past experience), and relevance to the current task or decision. The agent doesn't just tally up VRCs—it reasons about them. It can say, "Bob is trusted by people you've worked with, in contexts that matter, and here's what they said." That's real, usable trust—not a badge, but a story.
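
One way such an agent might weigh VRCs is sketched below. The scoring factors, the half-life, and the sample data are invented for illustration; a real agent would learn these weights from the holder's own history.

```python
from datetime import date

# Toy scoring: each VRC gets a weight from recency, issuer trust, and
# topical relevance. All constants are illustrative assumptions.

def score_vrc(vrc, today, issuer_trust, task_tags, half_life_days=180):
    age = (today - vrc["issued"]).days
    recency = 0.5 ** (age / half_life_days)        # weight decays over time
    trust = issuer_trust.get(vrc["issuer"], 0.1)   # default: barely known issuer
    overlap = len(task_tags & vrc["tags"]) / max(len(task_tags), 1)
    return recency * trust * (0.5 + 0.5 * overlap) # relevance boosts, never zeroes

today = date(2025, 4, 10)
issuer_trust = {"did:peer:alice": 0.9, "did:peer:dan": 0.4}  # from past experience
task_tags = {"agents", "interoperability"}                    # current decision
bobs_vrcs = [
    {"issuer": "did:peer:alice", "issued": date(2025, 4, 7),
     "tags": {"agents", "ssi"}},
    {"issuer": "did:peer:dan", "issued": date(2024, 6, 1),
     "tags": {"open-standards"}},
]
total = sum(score_vrc(v, today, issuer_trust, task_tags) for v in bobs_vrcs)
print(round(total, 3))
```

The recent credential from a trusted, relevant issuer dominates the total—which is the "reasons about them" behavior described above, rather than a flat tally.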

This system isn't just more private—it's more resilient. There's no single point of failure. No platform to de-platform you. Just people, agents, and credentials, all stitched together into a flexible, interpretable web of trust. It's the old dream of the PGP web of trust—but with context, usability, and actionability baked in. From one simple moment at IIW, Alice and Bob built not just a connection, but a durable credentialed relationship. And from many such connections, a rich, decentralized reputation emerges—one that's earned, not claimed.

Relationships Are the Root of First-Person Identity

When Alice and Bob met at IIW, they didn't rely on a platform to create their connection. They didn't upload keys to a server or wait for some central authority to vouch for them. They exchanged DIDs, authenticated each other directly, and established a secure, private communication channel.

That moment wasn't just a technical handshake—it was a statement of first-person identity. Alice told Bob, "This is who I am, on my terms." Bob responded in kind. And when they each issued a verifiable relationship credential, they gave that relationship form: a mutual, portable, cryptographically signed artifact of trust. This is the essence of first-person identity—not something granted by an institution, but something expressed and constructed in the context of relationships. It's identity as narrative, not authority; as connection, not classification.

And because these credentials are issued peer-to-peer, scoped to real interactions, and managed by personal agents, they resist commodification and exploitation. They are not profile pages or social graphs owned by a company to be monetized. They are artifacts of human connection, held and controlled by the people who made them. In this world, Alice and Bob aren't just users—they're participants. They don't ask permission to establish trust. They build it themselves, one relationship at a time, with tools that respect their agency, privacy, and context.

In the end, relationships are the root of first-person identity, based on the people we meet, the trust we earn, and the stories we're willing to share. If we want identity systems that serve people, not platforms, we should start where trust always begins: with relationships.


Photo Credit: Alice and Bob Exchange VRCs from DALL-E (public domain)



Last modified: Thu Apr 10 12:11:51 2025.