Digital Identity Podcasts

Learning Digital Identity

I've been invited to be on a few more podcasts to talk about my new book Learning Digital Identity from O'Reilly Media. That's one of the perks of writing a book: people like to talk about it. I always enjoy talking about identity, especially how vital it is to the quality of our lives in the digital world, so I'm grateful these groups found time to speak with me.

First, I had a great discussion about identity, IIW, and the book with Rich Sordahl of Anonyome Labs.

I also had a good discussion with Joe Malenfant of Ping Identity on digital identity fundamentals and future trends. We did this in two parts. Part 1 focused on the problems of digital identity and the Laws of Identity. Digital Identity Fundamentals and Future Trends with Phil Windley, Part 1

Part 2 discussed how SSI is changing the way we see online identity and the future of digital identity. We even got into AI and identity a bit at the end. Digital Identity Fundamentals and Future Trends with Phil Windley Part 2

Finally, Harrison Tang spoke with me in the W3C Credentials Community Group. We had a fun discussion about the definition of identity, administrative and autonomic identity systems, and SSI wallets.

I hope you enjoy these. As I said, I'm always excited to talk about identity, so if you'd like to have me on your podcast, let me know.

Zero Trust

Open Gate

Learning Digital Identity

My new book Learning Digital Identity from O'Reilly Media covers many of the topics in this post such as multi-factor authentication, authorization and access control, and identity policy development in depth.

Zero Trust is a security framework that is better attuned to the modern era of sophisticated threats and interconnected systems. Past practices included techniques like virtual private networks (VPNs) that tried to emulate the idea of an intranet where trusted computers and people were protected from hackers by a firewall that "kept the bad stuff out." As more and more work has gone remote and personal devices like phones, tablets, and even laptops are being used for work, a firewall—virtual or physical—offers less and less protection. Often the bad actors are hard to tell apart from your employees, partners, and customers.

Zero Trust operates on a simple yet powerful principle: "assume breach." In a world where network boundaries are increasingly porous and cyber threats are more evasive than ever, the Zero Trust model centers around the notion that no one, whether internal or external, should be inherently trusted. This approach mandates continuous verification, strict access controls, and micro-segmentation, ensuring that every user and device proves their legitimacy before gaining access to sensitive resources. If we assume breach, then the only strategy that can protect the corporate network, infrastructure, applications, and people is to authorize every access.

Defense in Depth

Zero Trust solutions offer a multifaceted defense against the evolving threat landscape, encompassing network security, infrastructure protection, user authentication, and application security. These solutions work together to uphold the "never trust, always verify" principle. Here are some of the different kinds of Zero Trust solutions that safeguard networks, infrastructure, people, and applications from malicious actors:

  1. Network Security:
    • Micro-Segmentation: Dividing the network into smaller segments to limit lateral movement of threats and control access between different segments.
    • Software-Defined Perimeter (SDP): Creating an invisible perimeter around resources, allowing authorized users and devices to connect while remaining invisible to unauthorized entities.
  2. Infrastructure Protection:
    • Identity and Access Management (IAM): Implementing strong identity verification and access controls to ensure that only authenticated users can access critical resources.
    • Endpoint Security: Employing solutions that monitor and secure devices (endpoints) to prevent malware infiltration and unauthorized access.
  3. User Authentication:
    • Multi-Factor Authentication (MFA): Requiring users to provide multiple forms of verification (e.g., password, fingerprint, OTP) before granting access.
    • Risk-Based Authentication: Assessing user behavior and context to dynamically adjust authentication requirements based on potential risks.
  4. Application Security:
    • Application Whitelisting: Allowing only approved applications to run, preventing the execution of unauthorized or malicious software.
    • Application-Centric Authorization and Security: Implementing application-specific authorization policies and embedding security measures directly within applications, such as encryption, code signing, and runtime protection.
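The layers above come together in a single access decision made on every request. Here's a minimal JavaScript sketch (all names, signals, and thresholds are hypothetical) of a check that combines device posture, MFA, and network segmentation before authorizing access:

```javascript
// Minimal sketch of a Zero Trust access decision that combines several
// signals before granting access. All names and thresholds are hypothetical.
function evaluateAccess(request) {
  const checks = [
    // User authentication: require a verified MFA factor
    () => request.user.mfaVerified,
    // Infrastructure protection: require a healthy, managed endpoint
    () => request.device.managed && request.device.postureScore >= 0.8,
    // Network security: only allow traffic within the resource's segment
    () => request.resource.segment === request.network.segment,
  ];
  // "Never trust, always verify": every check must pass on every request
  return checks.every((check) => check());
}

const request = {
  user: { name: "alice", mfaVerified: true },
  device: { managed: true, postureScore: 0.9 },
  network: { segment: "finance" },
  resource: { name: "ledger-db", segment: "finance" },
};

console.log(evaluateAccess(request)); // → true
```

If any single signal fails, for example an unmanaged device, the request is denied, which is the "assume breach" posture in miniature.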

These Zero Trust practices collectively work to defend corporate assets in depth and implement a security architecture that assumes breach and enforces rigorous access controls. This not only reduces the attack surface, but also minimizes potential damage. By safeguarding networks, infrastructure, people, and applications, organizations can better defend against the increasingly sophisticated tactics employed by malicious actors in the digital realm. A comprehensive Zero Trust approach ensures that security is embedded at every layer, providing a robust defense against cyber threats.

Implementing Zero Trust

Some of the components of a Zero Trust strategy can be bought. For example, network equipment vendors offer products that make it easier to implement micro-segmentation or define a perimeter. You can buy IAM solutions that make it easier to up your authentication game, simultaneously reducing phishing and the burden on your employees, partners, and customers (hint: use passkeys). You can buy endpoint security clients that make it easier to manage corporate devices and to know the security posture of both corporate and personal devices. Authorization platforms like Cedar are available to control access to your infrastructure and applications.

While vendors provide ready-made solutions for many aspects of Zero Trust, you'll still need to tailor these solutions to your organization's unique needs and integrate them into a coherent strategy. Here's a breakdown of the things you need to do on your own (i.e., the things you can't buy):

  1. Policy Development:
    • Access Policies: You'll need to design access policies that define who can access what resources and under what circumstances.
    • Authentication Policies: Developing policies for user authentication, device verification, and authorization.
    • Organizational Policies: The organization must define how it governs the various aspects of Zero Trust and the underlying identity infrastructure.
  2. Identity and Access Management (IAM):
    • Identity Management Infrastructure: Building a user identity repository, user directories, and user profiles.
    • Access Control Logic: Developing the logic that enforces access controls based on user roles and permissions.
  3. Custom Integrations:
    • Integration with Existing Systems: If you have legacy systems, you might need to develop custom integrations to ensure they adhere to the Zero Trust model.
  4. Training and Awareness:
    • Security Awareness Programs: Creating training materials and programs to educate employees and stakeholders about Zero Trust principles and best practices.
  5. Continuous Monitoring and Analysis:
    • Threat Detection Logic: Developing mechanisms to continuously monitor network traffic, endpoints, and applications for suspicious activities.

The Zero Trust components in the preceding list require internal expertise and a deep understanding of your organization's structure and workflows. You might need to change how your organization does authentication, develop a process for installing and maintaining device clients, perform numerous integrations, create authorization policies as well as organizational policies, build threat dashboards, and institute a training program. You can get help doing this, but ultimately it's up to you.

Zero Trust represents a big change for many organizations. Implementing a Zero Trust strategy involves not just changing the architecture of your network, infrastructure, and applications, but your organizational culture as well. In a hyper-connected digital landscape marked by relentless cyber threats and evolving attack vectors, the Zero Trust model is the best defense available. By challenging the conventional "trust but verify" approach, Zero Trust asks organizations to embrace an "assume breach" mindset, demanding continuous vigilance to authorize every access.

Photo Credit: Open Gate from aitoff (Pixabay License)

Using Amazon Cognito Tokens for Fine-Grained Access Control

Puzzle pieces

Earlier this month Amazon launched Amazon Verified Permissions, a "scalable permissions management and fine-grained authorization service for the applications that you build." Verified Permissions has been in gated preview since November, but now it's available to everyone.

My team worked on an integration between Verified Permissions and Amazon Cognito, Amazon's OpenID Connect service. The idea is pretty simple: if you're using Cognito to manage the people who use your application, then you will likely want to use the information that is stored in Cognito for your access control policies.

Our bit of code allows developers to just submit the tokens they get back from Cognito as a result of the authentication process to Verified Permissions. Verified Permissions uses the attributes they contain to perform a policy evaluation. I wrote a blog post for the AWS Security blog about how this works called Simplify fine-grained authorization with Amazon Verified Permissions and Amazon Cognito. I'll describe the basic idea here, but the blog post has details and links.

Suppose you've got a policy that allows anyone to view any resource so long as their location is USA:

permit (
    principal,
    action == Action::"view",
    resource
)
when { principal.location == "USA" };

If Alice wants to view VacationPhoto94.jpg, then your call to Verified Permissions might look like this if you're not using the Cognito Token:

//JS pseudocode
const authResult = await avp.isAuthorized({
    principal: 'User::"alice"',
    action: 'Action::"view"',
    resource: 'Photo::"VacationPhoto94.jpg"',
    // whenever our policy references attributes of the entity, isAuthorized
    // needs an entities argument that provides those attributes
    entities: [
        {
            "identifier": {
                "entityType": "User",
                "entityId": "alice"
            },
            "attributes": {
                "location": {
                    "String": "USA"
                }
            }
        }
    ]
});
You have to supply information about the principal and any relevant attributes so the policy has the context it needs to evaluate correctly. Now, suppose your Cognito token contains location information as an attribute:

 "iss": "User",
 "sub": "alice",
 "custom:location": "USA",

Assuming the token is stored in a variable named cognito_id_token you can make a call like this:

const authResult = await avp.isAuthorizedWithToken({
    identity_token: cognito_id_token,
    action: 'Action::"view"',
    resource: 'Photo::"VacationPhoto94.jpg"'
});

Using isAuthorizedWithToken, you don’t have to supply the principal or construct the entities argument. Pretty slick.

Take a look at the blog post I mentioned earlier if you're interested in using this in your application. There's more information about the configuration process and how to add resource attributes to the call. This was a fun project.

Photo Credit: Puzzle Pieces from Erdenebayar (Pixabay Content License)

Rule-Based Programming and the Internet of Things

Temperature Sensor Network Built from Picos

Picos are a rule-based system for programming in the Internet of Things (IoT). Rules are a good fit for IoT programming because so much of what happens with things is event-based. Events say something happened. They are different from commands (telling) or requests (asking). A sensor might indicate the door opened, a light came on, or the temperature passed some threshold. An actuator might indicate its current status.

Rules, triggered by an event, determine what should be done to respond to the event. IoT rule systems use event-condition-action (ECA) rules: when something happens, if this is true, do this. Multiple rules can be triggered by the same event. This allows functionality to be layered on. For example, an event that contains the temperature reported by a sensor might also report the sensor's battery level. One rule that responds to the event can handle the temperature report and another, independent rule can handle the battery level.
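Pico rules are written in KRL, but the ECA pattern itself can be sketched in JavaScript. Here, two independent rules respond to the same heartbeat event: one handles the temperature, the other the battery level (event names and payloads are hypothetical):

```javascript
// Sketch of event-condition-action (ECA) rules: each rule selects on an
// event, tests a condition, and runs an action. Two independent rules
// fire on the same event. Names and payloads are hypothetical.
const log = [];

const rules = [
  {
    event: "heartbeat",
    condition: (e) => e.temperature > 90,              // threshold check
    action: (e) => log.push(`ALERT: temperature ${e.temperature}`),
  },
  {
    event: "heartbeat",
    condition: (e) => e.battery < 20,                  // battery-level check
    action: (e) => log.push(`WARN: battery at ${e.battery}%`),
  },
];

function raise(event) {
  // Every rule selecting on this event is evaluated independently
  for (const rule of rules) {
    if (rule.event === event.name && rule.condition(event)) rule.action(event);
  }
}

raise({ name: "heartbeat", temperature: 95, battery: 15 });
console.log(log); // both rules fired on the one event
```

Adding a new behavior is just adding a rule; existing rules don't change.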

Picos are an actor-model programming system. Actors are a good match for IoT programming using rules. Actors keep state information and respond to events (messages) by changing their state, taking action, and possibly raising other events. Actors provide lock-free concurrency, data isolation, location transparency, and loose coupling. These are great features to have in IoT programming because devices are naturally decentralized, each acting independently.

Building a functional system usually involves multiple picos, each with a number of rules to respond to the events that the pico will handle. For example, I recently built a sensor network using picos for monitoring temperatures in a remote pump house using LoRaWAN. There is one pico for each sensor and a sensor community pico that manages them. The temperature sensor pico acts as a digital twin for the sensor.

As the digital twin, the sensor pico does all the things that the sensor is too low-powered (literally) to do. For example, one rule receives the heartbeat event from the sensor, decodes the payload, stores values, and raises new events with the temperature values. Another rule processes the event with the new values and checks for threshold violations. If one occurs, it raises an event to the sensor community pico. Another rule sees the new value event and takes an action that plots the new value on an external site. And there are others. The point is that rules make it easy to break up the tasks into manageable chunks that can thread execution differently, and concurrently, depending on what's happening.
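The layering just described, where one rule decodes the heartbeat and raises a new event that other rules pick up, can be sketched like this (JavaScript, with a hypothetical payload and decoding step):

```javascript
// Sketch: rules chain by raising new events from their actions.
// The payload format and decoding step are hypothetical.
const handlers = {};                        // event name → list of rule functions
function on(event, rule) { (handlers[event] ??= []).push(rule); }
function raise(name, payload) {
  for (const rule of handlers[name] ?? []) rule(payload);
}

const readings = [];
const alerts = [];

// Rule 1: decode the raw heartbeat and raise a higher-level event
on("heartbeat", (p) => {
  const temperature = p.raw / 10;           // hypothetical decoding step
  raise("new_reading", { temperature });
});

// Rule 2: store the decoded value
on("new_reading", (p) => readings.push(p.temperature));

// Rule 3: independently check the same event for threshold violations
on("new_reading", (p) => {
  if (p.temperature > 30) alerts.push(p.temperature);
});

raise("heartbeat", { raw: 352 });
console.log(readings, alerts);
```

Each rule stays small and single-purpose; the event stream is what ties them together.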

I've been using picos to program IoT systems for over a decade now. Rule-based, actor model systems are a great match for IoT workloads.

Streaming Trust

In a recent discussion we had, Marie Wallace shared a wonderful analogy for verifiable credentials. I think it helps with understanding how credentials will be adopted. She compares traditional approaches to identity and newer, decentralized approaches to the move from music CDs to streaming. I'm old enough to remember the hubbub around file sharing. As this short video on Napster shows, the real winner in the fight against piracy was Apple and, ultimately, other streaming services:

Apple changed the business model for online music from one focused on sales of physical goods to one focused on licensing individual tracks. They launched the iPod and the iTunes music store and used their installed user base to bring the music companies to the table. They changed the business model and that ultimately gave birth to the music streaming services we use today.

So, what's this got to do with identity? Most of the online identity services we use today are based on a centralized "identity provider" model. In the consumer space, this is the Social Login model where you use your account at Google, Facebook, or some other service to access some third-party online service. But the same thing is true on the workforce side, where each company creates great stores of customer, employee, and partner data that it can use to make authorization decisions or federate with others. These are the CDs of identity.

The analog to streaming is decentralized or self-sovereign identity (SSI). In SSI the source of identity information (e.g., a drivers license bureau, bank, university, etc.) is called the issuer. They issue credentials to a person, called the holder, who carries various digital credentials in a wallet on their phone or laptop. When someone, called the verifier, needs to know something about them, the holder can use one or more credentials to prove attributes about themselves. Instead of large, centralized collections of data that companies can tap at will, the data is streamed to the verifier when it's needed. And only the attributes that are germane to that exchange need to be shared. Cryptography ensures we can have confidence in the payload.

The three parties to credential exchange
The three parties to credential exchange (click to enlarge)
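The three-party flow in the figure can be sketched in a few lines of JavaScript. The signature handling is stubbed out (real systems use cryptographic proofs), but the sketch shows the key property: the holder discloses only the attributes the verifier asks for:

```javascript
// Sketch of the three-party credential flow: an issuer issues a credential,
// the holder presents only the requested attributes, the verifier checks it.
// Signatures are stubs; real systems use cryptographic proofs.
function issue(issuer, subject, claims) {
  return { issuer, subject, claims, signature: `signed-by-${issuer}` }; // stub
}

function present(credential, requestedAttributes) {
  // Selective disclosure: share only what's germane to this exchange
  const disclosed = {};
  for (const attr of requestedAttributes) disclosed[attr] = credential.claims[attr];
  return { issuer: credential.issuer, disclosed, signature: credential.signature };
}

function verify(presentation, trustedIssuers) {
  // Stub for the cryptographic signature check and issuer trust decision
  return trustedIssuers.includes(presentation.issuer);
}

const license = issue("dmv", "alice", { over21: true, address: "123 Elm St" });
const presentation = present(license, ["over21"]);   // the address stays private
console.log(verify(presentation, ["dmv"]), presentation.disclosed);
// → true { over21: true }
```

The verifier learns that Alice is over 21 and that a trusted issuer says so; it never sees her address, and no central store was queried.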

Identity streaming has several benefits:

  • Confidence in the integrity of the data is increased because of the underlying cryptographic protocols.
  • Data privacy is increased because only what needs to be shared for a given interaction is transmitted.
  • Data security is increased because there are fewer large, comprehensive troves of data about people online for hackers to exploit.
  • The burden of regulatory compliance is reduced since companies need to keep less data around when they know they can get trustworthy information from the holder just in time.
  • The cost of maintaining, backing up, and managing large troves of identity data goes away.
  • Access to new data is easier because of the flexibility of just-in-time attribute delivery.

And yet, despite these benefits, moving from big stores of identity data to streaming identity when needed will take time. The big stores already exist. Companies have dedicated enormous resources to building and managing them. They have integrated all their systems with them and depend on them to make important business decisions. And it works. So why change it?

The analogy also identifies the primary driver of adoption: demand. Napster clearly showed that there was demand for online music. Apple fixed the business model problem. And thousands of businesses were born or died on the back of this change from CDs to streaming.

Digital credentials don't have the same end user demand pull that music does. Music is emotional, and the music industry was extracting huge margins by making people buy an $18 CD to get the one song they liked. People will likely love the convenience that verifiable credentials offer, and they'll be more secure and private, but that's not driving demand in any appreciable way. I think Riley Hughes, CEO of, is on to something with his ideas about digital trust ecosystems. Ecosystems that need increased trust and better security are likely to be the real drivers of this transition. That's demand too, but of a different sort: not demand for credentials themselves, but for better models of interaction. After all, people don't want a drill; they want a hole.

Verifiable data transfer is a powerful idea. But to make it valuable, you need a trust gap. Here's an example of a trust gap: early on, the veracity and security of a web site was a big problem. As a result many people were scared to put their credit card into a web form. The trust gap was that there was no way to link a domain name to a public key. Transport Layer Security (TLS, also known as SSL) uses digital certificates, which link a domain name to a public key (and perhaps other data) in a trustworthy way to plug the gap.

There are clearly ecosystems with trust gaps right now. For example, fraud is a big problem in online banking and ecommerce. Fraud is the symptom of a trust gap between the scam's target and the legitimate actor that they think they're interacting with. If you can close this gap, then the fraud is eliminated. Once Alice positively knows when she's interacting with her bank and when she's not, she'll be much harder to fool. Passkeys are one solution to this problem. Verifiable credentials are another—one that goes beyond authentication (knowing who you're talking to) to transferring data in a trustworthy way.

In the case of online music, the solution and the demand were both there, but the idea wasn't legitimate in the eyes of the music industry. Apple had the muscle to bring the music industry to the table and help them see the light. They provided much-needed legitimacy to the idea of online music purchases and, ultimately, streaming. They didn't invent online music; rather, they created a viable business model for it and made it valuable. They recognized demand and sought out a new model to fill the demand. Verifiable credentials close trust gaps. And the demand for better ways to prevent fraud and reduce friction is there. What's missing, I think, is that most of the companies looking for solutions don't yet recognize the legitimacy of verifiable credentials.

Internet Identity Workshop 36 Report

IIW 36 Attendee Pin Map

We recently completed the 36th Internet Identity Workshop. Almost 300 people from around the world called 160 sessions. The energy was high and I enjoyed seeing so many people who are working on identity talking with each other and sharing their ideas. The topics were diverse, but I think it's fair to say that verifiable credentials were a hot topic. And while there were plenty of discussions about technical implementations, I think those were overshadowed by sessions discussing credential business models, use cases, and adoption. We should have the Book of Proceedings completed in about a month and you'll be able to get the details of sessions there. You can view past Books of Proceedings here.

As I said, there were attendees from all over the world, as you can see by the pins in the map at the top of this post. Not surprisingly, most of the attendees were from the US (219), followed by Canada (20). Germany, the UK, and Switzerland rounded out the top five with 8, 7, and 6 attendees respectively. The next five, Australia (5), South Korea (3), Japan (3), Indonesia (3), and Colombia (3), showed the diversity of the workshop with attendees from APAC and South America. Sadly, there were no attendees from Africa this time. Please remember we offer scholarships for people from underrepresented areas, so if you'd like to come to IIW 37, please let us know.

In terms of states and provinces, California was, unsurprisingly, first with 101. New York (17), Washington (16), Utah (15), and British Columbia (11) rounded out the top five. Victoria was the source of BC's strong Canadian showing, coming in fifth among cities with 8 attendees after San Jose (15), San Francisco (13), Seattle (12), and New York (10).

The week was fabulous. I can't begin to recount the many important, interesting, and timely conversations I had. I heard from many others that they had a similar experience. IIW 37 will be held Oct 10-12, 2023 at the Computer History Museum. We'll have tickets available soon. I hope you'll be able to join us.

OAuth and Fine-grained Access Control

Learning Digital Identity

Some of the following is excerpted from my new book Learning Digital Identity from O'Reilly Media.

OAuth was invented for a very specific purpose: to allow people to control access to resources associated with their accounts, without requiring that they share authentication factors. A primary use case for OAuth is accessing data in an account using an API. For example, the Receiptify service creates a list of your recent listening history on Spotify or Apple Music that looks like a shopping receipt. Here's a sample receipt of some of my listening history.

Receiptify Listening History
Receiptify Listening History (click to enlarge)

Before OAuth, if Alice (the resource owner) wanted Receiptify (the client) to have access to her history on Spotify (the resource server), she'd give Receiptify her username and password on Spotify. Receiptify would store the username and password and impersonate Alice each time it needs to access Spotify on her behalf. By using Alice's username and password, Receiptify would demonstrate that it had Alice's implicit permission to access her data on Spotify. This is sometimes called the "password antipattern" and it has several serious drawbacks:

  • The resource server can't differentiate between the user and other servers accessing the API.
  • Storing passwords on other servers increases the risk of a security breach.
  • Granular permissions are hard to support since the resource server doesn't know who is logging in.
  • Passwords are difficult to revoke and must be updated in multiple places when changed.

OAuth was designed to fix these problems by making the resource server part of the flow that delegates access. This design lets the resource server ask the resource owner what permissions to grant and records them for the specific client requesting access. Moreover, the client can be given its own credential, apart from those that the resource owner has or those used by other clients the user might have authorized.

OAuth Scopes

The page that allows the owner to grant or deny permission might display what permissions the client is requesting. This isn't freeform text written by a UX designer; rather, it's controlled by scopes. A scope is a bundle of permissions that the client asks for when requesting the token, coded by the developer who wrote the client.
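As an illustration, here's how a client encodes the scopes it wants in an OAuth authorization request. The endpoint, client ID, redirect URI, and scope names below are all hypothetical:

```javascript
// Sketch: the client asks for its scopes in the authorization request.
// The endpoint, client_id, redirect_uri, and scope names are hypothetical.
const authorizeUrl = new URL("https://auth.example.com/oauth2/authorize");
authorizeUrl.search = new URLSearchParams({
  response_type: "code",
  client_id: "receipt-app",
  redirect_uri: "https://receipt.example.com/callback",
  scope: "history:read profile:read",    // the bundle of permissions requested
  state: "xyzzy",                        // CSRF protection
}).toString();

console.log(authorizeUrl.toString());
```

The authorization server reads the scope parameter, renders the consent page from it, and, if the owner approves, binds those scopes to the token it issues to the client.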

The following screenshots show the permissions screens that I, as the owner of my Twitter account, see for two different applications, Revue and Thread Reader.

OAuth Scopes Displayed in the Permission Screen
OAuth Scopes Displayed in the Permission Screen (click to enlarge)

There are several things to note about these authorization screens. First, Twitter is presenting these screens, not the clients (note the URL in the browser window). Second, the two client applications are asking for quite different scopes. Revue wants permission to update my profile as well as post and delete tweets, while Thread Reader is only asking for read-only access to my account. Finally, Twitter is making it clear to me who is requesting access. At the bottom of the page, Twitter warns me to be careful and to check the permissions that the client is asking for.

Fine-Grained Permissions

Scopes were designed so that the service offering an API could define the relatively coarse-grained permissions needed to access it. So, Twitter, for example, has scopes like tweet:read and tweet:write. As shown above, when a service wants to use the API for my Twitter account, it has to ask for specific scopes—if it only wants to read my tweets, it would ask for tweet:read. Once granted, they can be used with the API endpoint to gain access.

But OAuth-style scopes aren't the best tool for fine-grained permissioning in applications. To see why, imagine you are building a photo sharing app that allows a photographer like Alice to grant permission for her friends Betty and Charlie to view her vacation photos. Unlike the Twitter API, where the resources are fairly static (users and tweets, for example) and you're granting permissions to a fairly small number of applications on an infrequent basis, the photo sharing app has an indeterminate number of resources that change frequently. Consequently, the app would need to have many scopes and update them each time a user grants new permissions on a photo album to some other user. A large photo sharing service might have a million users and 50 million photo albums—that's a lot of potential scopes.

Policy-based Access Control (PBAC) systems, on the other hand, were built for this kind of permissioning. As I wrote in Not all PBAC is ABAC, your app design and how you choose to express policies make a big difference in the number of policies you need to write. But regardless, PBAC systems, with good policy stores, can manage policies and the information needed to build the authorization context much more easily than you'll usually find in a system for managing OAuth scopes.
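Here's a minimal JavaScript sketch of policy-based authorization for the photo-sharing example (the policy shape and names are invented for illustration). Instead of minting a scope per album, permissions are policies over data that can be added or removed as users share things:

```javascript
// Sketch: policy-based access control for the photo-sharing example.
// The policy shape and names are hypothetical. Rather than a scope per
// album, each grant is a policy the app can add or remove at runtime.
const policies = [
  // Alice shares her vacation album with Betty and Charlie, view only
  { effect: "permit", principals: ["betty", "charlie"], action: "view",
    resource: "alice/vacation-2023" },
];

function isAuthorized(principal, action, resource) {
  // Permit only if some policy matches the principal, action, and resource
  return policies.some(
    (p) => p.effect === "permit" &&
           p.principals.includes(principal) &&
           p.action === action &&
           p.resource === resource
  );
}

console.log(isAuthorized("betty", "view", "alice/vacation-2023"));   // → true
console.log(isAuthorized("betty", "delete", "alice/vacation-2023")); // → false
```

When Alice shares a new album, the app writes a new policy; nothing about the API's scope definitions has to change.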

So, if you're looking to allow client applications to access your API on behalf of your customers, then OAuth is the right tool. But if you're building an application that allows sharing between its users of whatever resources they build, then you need the fine-grained permissioning that a policy-based access control system provides.

Minimal vs Fully Qualified Access Requests

Information Desk

In access control, one system, generically known as the policy enforcement point (PEP), makes a request to another service, generically known as a policy decision point (PDP), for an authorization decision. The PDP uses a policy store and an authorization context to determine whether access should be granted or not. How much of the authorization context does the PEP have to provide to the PDP?

Consider the following policy:

Managers in marketing can approve transactions if they are not the owner and if the amount is less than their approval limit.

A PEP might make the following fully qualified request to get an authorization decision:

Can user username = Alice with jobtitle = manager and department = marketing do action = approve on resource of type = financial transaction with transaction id = 123 belonging to department = finance and with amount = 3452?

This request contains all of the authorization context needed for the policy to run. A fully qualified request requires that the PEP gather all of the information about the request from various attribute sources.

Alternatively, consider this minimal request:

Can user username = Alice do action = approve on resource with transaction id = 123?

The minimal request reduces the work that the PEP must do, placing the burden on the PDP to build an authorization context with sufficient information to make a decision. This work can be split out into a separate service called a policy information point (PIP). To build an authorization context for this request, the PIP must retrieve the following information:

  • The user’s job title, department, and approval limit.
  • The transaction’s owner and amount.

Building this context requires that the PIP have access to several attribute sources, including the HR system (where information about Alice is stored) and the finance system (where information about transactions is stored).
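Here's a JavaScript sketch of a PIP building the fully qualified context for the minimal request above. The HR and finance systems are stand-in objects, and the decision logic encodes the example policy:

```javascript
// Sketch: a policy information point (PIP) turns a minimal request into a
// fully qualified authorization context. The attribute sources are stubs.
const hrSystem = {        // stand-in for the HR system
  alice: { jobtitle: "manager", department: "marketing", approvalLimit: 5000 },
};
const financeSystem = {   // stand-in for the finance system
  123: { owner: "bob", department: "finance", amount: 3452 },
};

function buildContext(minimalRequest) {
  // Enrich the request with attributes from the relevant sources
  const user = hrSystem[minimalRequest.username];
  const transaction = financeSystem[minimalRequest.transactionId];
  return { ...minimalRequest, user, transaction };
}

function decide(ctx) {
  // "Managers in marketing can approve transactions if they are not the
  // owner and if the amount is less than their approval limit."
  return ctx.user.jobtitle === "manager" &&
         ctx.user.department === "marketing" &&
         ctx.transaction.owner !== ctx.username &&
         ctx.transaction.amount < ctx.user.approvalLimit;
}

const ctx = buildContext({ username: "alice", action: "approve", transactionId: 123 });
console.log(decide(ctx)); // → true
```

The PEP only had to name the user, action, and transaction; everything else the policy needed came from the PIP's attribute sources.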

Attribute Value Providers

PIPs are not usually the source of attributes, but rather a place where attributes are brought together to create the authorization context. As you saw, the PIP might query the HR and finance systems, using attributes in the request to create a fully qualified authorization context. Other examples of attribute value providers (AVPs) include databases, LDAP directories, RESTful APIs, geolocation services, and any other data source that can be correlated with the attributes in the minimal request.

What’s needed depends on the policies you’re evaluating. The fully qualified context should only include needed attributes, not every piece of information the PIP can gather about the request. To that end, the PIP must be configurable to create the context needed without wasteful queries to unneeded AVPs.

PEP Attributes

There is context that the PEP has that the PDP might need. For example, consider this request:

Can user username = Alice do action = view on resource with transaction id = 123 from IP = and browser = firefox?

The PIP typically would not have access to the IP address or browser type. The PEP must pass that information along for use in evaluating the policy.

Derived Attributes

The PIP can enhance information that the PDP passes in to derive new attributes. For example, consider a policy that requires the request come from specific ranges of IP addresses. The first instinct a developer might have is to embed the IP range directly in the policy. However, this is a nightmare to maintain if multiple policies have the same range check—especially when the range changes. The solution is to use derived attributes, where you can define a new attribute, is_ip_address_in_range, and have the PIP calculate it instead.
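Here's one way the PIP might compute that derived attribute, sketched in JavaScript (the CIDR range and attribute name are hypothetical):

```javascript
// Sketch: computing the derived attribute is_ip_address_in_range so that
// policies check a boolean instead of hard-coding IP ranges. The range
// and attribute names are hypothetical.
function ipToInt(ip) {
  // "10.1.5.20" → 32-bit integer
  return ip.split(".").reduce((acc, octet) => acc * 256 + Number(octet), 0);
}

function isIpAddressInRange(ip, cidr) {
  const [base, bits] = cidr.split("/");
  // Build the network mask for the prefix length, e.g. /16 → 0xFFFF0000
  const mask = bits === "0" ? 0 : (-1 << (32 - Number(bits))) >>> 0;
  return ((ipToInt(ip) & mask) >>> 0) === ((ipToInt(base) & mask) >>> 0);
}

// The PIP adds the derived attribute; policies just reference the boolean
const context = {
  ip: "10.1.5.20",
  is_ip_address_in_range: isIpAddressInRange("10.1.5.20", "10.1.0.0/16"),
};
console.log(context.is_ip_address_in_range); // → true
```

When the range changes, only the PIP's configuration changes; every policy that references is_ip_address_in_range keeps working as written.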

This might be generalized with a Session Attribute component in the PIP that can enhance or transform session attributes into others that are better for policies. This is just one example of how raw attributes might be transformed to provide richer, derived attributes. Other examples include deriving an age from a birthday, an overall total for several transactions, or a unique key from other attributes.
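As a sketch of the idea, the derivations mentioned above are just functions over raw attributes. Assuming Python, the IP-range check and the age-from-birthday example might look like:

```python
import ipaddress
from datetime import date

def is_ip_address_in_range(ip: str, cidr: str) -> bool:
    # Policies test this derived boolean; the range itself lives in one
    # place (the PIP's configuration), not embedded in every policy.
    return ipaddress.ip_address(ip) in ipaddress.ip_network(cidr)

def age_from_birthday(birthday: date, today: date) -> int:
    # Derive an age in whole years from a raw birth-date attribute,
    # subtracting one if this year's birthday hasn't happened yet.
    return today.year - birthday.year - (
        (today.month, today.day) < (birthday.month, birthday.day))
```

When the allowed range changes, only the PIP's configuration changes; every policy that tests `is_ip_address_in_range` keeps working unmodified.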

Benefits of a PIP

While the work to gather the attributes to build the authorization context may seem like it’s the same whether the PEP or PIP does it, there are several benefits to using a PIP:

  1. Simplified attribute administration—A PIP provides a place to manage integration with attribute stores that ensures the integration and interpretation are done consistently across multiple systems.
  2. Reduced Complexity—PIPs can reduce the complexity of the access control system by removing the need for individual access control components to have their own policy information. PEPs are typically designed to be lightweight so they can be used close to the point where the access is enforced. For example, you might have PEPs in multiple API gateways co-located with the services they intermediate or in smartphone apps. A PIP offloads work from the PEP to keep it lightweight.
  3. Separation of Concerns—PIPs separate policy information from policy decision-making, making it easier to update and modify policies and AVPs without impacting other parts of the access control system. For example, making fully qualified requests increases the coupling in the system because the PEP has to know more about policies to formulate fully qualified requests.
  4. Improved Scalability— PIPs can be deployed independently of other access control components, which means that they can be scaled independently as needed to accommodate changes in policy volume or access requests.
  5. Enhanced Security— PIPs can be configured to provide policy information only to authorized PDPs, which improves the security of the access control system and helps to prevent unauthorized access. In addition, a PIP builds consistent authorization contexts whereas different PEPs might make mistakes or differ in their output.


There are tradeoffs with using a PIP. For example, suppose that the PEP is embedded in the ERP system and has local access to both the HR and financial data. It might more easily and cheaply use those attributes to create a fully qualified request. Making a minimal request and requiring the PIP to make a call back to the ERP for the data to create the authorization context would be slower. But, as noted above, this solution increases the coupling between the PEP and PDP because the PEP now has to have more knowledge of the policies to make the fully qualified request. Developers need to use their judgment about request formulation to evaluate the tradeoffs.


By creating fully qualified authorization contexts from minimal requests, a PIP reduces the burden on developers building or integrating PEPs, allows for more flexible and reusable policies, and enriches authorization contexts to allow more precise access decisions.


  1. The information in this article was inspired by Should the Policy Enforcement Point Send All Attributes Needed to Evaluate a Request?

Photo Credit: Information Desk Charleston Airport (CC BY 2.0, photo is cropped from original)

Passkeys: Using FIDO for Secure and Easy Authentication

Learning Digital Identity

This article is adapted from Chapter 12 of my new book Learning Digital Identity from O'Reilly Media.

I was at SLC DevOpsDays last week and attended a talk by Sharon Goldberg on MFA in 2023. She's a security expert and focused many of her remarks on the relative security of different multi-factor authentication (MFA) techniques, a topic I cover in my book as well. I liked how she described the security provisions of passkeys (also known as Fast Identity Online, or FIDO).

FIDO is a challenge-response protocol that uses public-key cryptography. Rather than using certificates, it manages keys automatically and beneath the covers, so it’s as user-friendly as possible. I’m going to discuss the latest FIDO specification, FIDO2, here, but the older FIDO U2F and UAF protocols are still in use as well.

FIDO uses an authenticator to create, store, and use authentication keys. Authenticators come in several types. Platform authenticators are devices that a person already owns, like a laptop or smartphone. Roaming authenticators take the form of a security key that connects to the laptop or smartphone using USB, NFC, or Bluetooth.

This is a good time for you to stop reading this, head over to a passkey demo site, and try them for yourself. If you're using a relatively modern OS on your smartphone, tablet, or computer, you shouldn't have to download anything. Sign up using your email (it doesn't have to be a real email address), do whatever your device asks when you click "Save a Passkey" (on my iPhone it does Face ID; on my MacOS laptop, it does Touch ID). Then sign out.

Using Touch ID with Passkey
Using Touch ID with Passkey (click to enlarge)

Now, click on "Sign in with a passkey". Your computer will let you pick an identifier (email address) that you've used on that site and then present you with a way to locally authenticate (i.e., on the device). It's that simple. In fact, my biggest fear with passkeys is that it's so slick people won't think anything has happened.

Here's what's going on behind the scenes: When Alice registers with an online service, her authenticator (software on her phone, for example) creates a new cryptographic key pair, securely storing the private key locally and registering the public key with the service. The online service may accept different authenticators, allowing Alice to select which one to use. Alice unlocks the authenticator using a PIN, fingerprint reader, or face ID.

When Alice authenticates, she uses a client such as a browser or app to access a service like a website (see figure below). The service presents a login challenge, including the chance to select an account identifier, which the client (e.g., browser) passes to the authenticator. The authenticator prompts Alice to unlock it and uses the account identifier in the challenge to select the correct private key and sign the challenge. Alice’s client sends the signed challenge to the service, which uses the public key it stored during registration to verify the signature and authenticate Alice.

Authenticating with Passkey
Authenticating with Passkey (click to enlarge)
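The challenge-response round trip in the figure can be sketched in a few lines. This toy uses textbook RSA with tiny, insecure numbers purely to show the message flow; a real authenticator generates strong keys in secure hardware and signs only after the user unlocks it:

```python
import hashlib

# Toy textbook-RSA key (insecure, illustration only). The service stores
# (N, E) at registration; the authenticator keeps D private.
N, E, D = 3233, 17, 2753

def digest(challenge: bytes) -> int:
    # Reduce the challenge to an integer the toy key can sign.
    return int.from_bytes(hashlib.sha256(challenge).digest(), "big") % N

def sign(challenge: bytes) -> int:
    # Authenticator side: sign the service's challenge with the private key.
    return pow(digest(challenge), D, N)

def verify(challenge: bytes, signature: int) -> bool:
    # Service side: check the signature with the stored public key.
    return pow(signature, E, N) == digest(challenge)

# Authentication round trip: the service sends a fresh challenge, the
# authenticator signs it, the service verifies with the registered key.
challenge = b"random-nonce-from-service"
assert verify(challenge, sign(challenge))
```

The important property is that the service never sees the private key: it stores only the public key at registration and checks signatures against it.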

FIDO2 uses two standards. The Client to Authenticator Protocol (CTAP) describes how a browser or operating system establishes a connection to a FIDO authenticator. The WebAuthN protocol is built into browsers and provides an API that JavaScript from a Web service can use to register a FIDO key, send a challenge to the authenticator, and receive a response to the challenge.

One of the things I liked about Dr. Goldberg's talk is that she emphasized that the security of passkeys rests on three things:

  1. Transport Layer Security (TLS) to securely transport challenges and responses.
  2. The WebAuthN protocol that gives websites a way to invoke the local authentication machinery using a Javascript API.
  3. A secure, local connection between the client and authenticator using CTAP.

One of the weaknesses of how we use TLS today is that people don't usually check the lock icon in the browser and don't understand domain names enough to tell if they're being phished. Passkeys do this for you. The browser unambiguously transfers the domain name to the authenticator which knows if it has an established relationship with that domain or not. Authenticating that you're on the right site is a key reason they're so much more secure than other MFA alternatives. Another is having a secure channel from authenticator to service, making phishing nearly impossible because there's no way to break into the authentication flow.

To see how this helps with phishing, imagine that a service uses QR codes to initiate the Passkey registration process and that an attacker has managed to use cross-site scripting or some other weakness to switch out the QR code the service provided with one of their own. When the user scanned the attacker's QR code, the URL would not match the URL of the page the user is on and the authenticator would not even start the transaction to register a key. Passkey's superpower is that it completes the last mile of TLS using WebAuthN and local authenticators.
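A toy model shows why the domain binding matters. Assume the authenticator keeps credentials keyed by relying-party domain (the real credential store and the CTAP/WebAuthn APIs are more involved than this):

```python
# Credentials are stored per relying-party domain at registration time.
# The entry below is purely illustrative.
credentials = {"example.com": "key-handle-abc"}

def respond_to_challenge(rp_domain: str, challenge: bytes):
    # The browser reports the domain it is actually connected to; the
    # user never has to read the address bar or check the lock icon.
    if rp_domain not in credentials:
        return None  # a look-alike phishing domain gets nothing back
    # Real flow: select the matching private key and sign the challenge.
    return (credentials[rp_domain], challenge)
```

Because the lookup key comes from the browser rather than the user, a phishing page at a look-alike domain can't even start the authentication ceremony.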

Passkeys provide a secure and convenient way to authenticate users without resorting to passwords, SMS codes, or TOTP authenticator applications. Modern computers and smartphones and most mainstream browsers understand FIDO protocols natively. While roaming authenticators (hardware keys) are available, for most use cases, platform authenticators (like the ones built into your smartphone or laptop) are sufficient. This makes FIDO an easy, inexpensive way for people to authenticate. As I said, the biggest impediment to its widespread use may be that people won’t believe something so easy is secure.

Monitoring Temperatures in a Remote Pump House Using LoraWAN

Snow in Island Park

I've got a pumphouse in Island Park, ID that I'm responsible for. Winter temperatures are often below 0°F (-18°C) and occasionally get as cold as -35°F (-37°C). We have a small baseboard heater in the pumphouse to keep things from freezing. That works pretty well, but one night last December, the temperature was -35°F and the power went out for five hours. I was in the dark, unable to know if the pumphouse was getting too cold. I determined that I needed a temperature sensor in the pumphouse that I could monitor remotely.

The biggest problem is that the pumphouse is not close to any structures with internet service. Wifi signals just don't make it out there. Fortunately, I've got some experience using LoraWAN, a long-range (10km), low-power, wireless protocol. This use case seemed perfect for LoraWAN. About a year ago, I wrote about how to use LoraWAN and a Dragino LHT65 temperature and humidity sensor along with picos to get temperature data over the Helium network.

I've installed a Helium hotspot near the pumphouse. The hotspot and internet router are both on battery backup. Helium provides a convenient console that allows you to register devices (like the LHT65) and configure flows to send the data from a device on the Helium network to some other system over HTTP. I created a pico to represent the pumphouse and routed the data from the LHT65 to a channel on that pico.

The pico does two things. First, it processes the heartbeat event that Helium sends to it, parsing out the parts I care about and raising another event so other rules can use the data. Processing the data is not simple because it's packed into a base64-encoded, 11-byte hex string. I won't bore you with the details, but it involves base64-decoding the string and splitting it into 6 hex values. Some of those pack data into specific bits of the 16-bit word, so binary operations are required to separate it out. Those weren't built into the pico engine, so I added those libraries. If you're interested in the details of decoding, splitting, and unpacking the payload, check out the receive_heartbeat rule in this ruleset.
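For a flavor of what the unpacking involves, here is a Python sketch. The byte layout below is my reading of Dragino's published LHT65 payload format (battery status packed into the top two bits of the first 16-bit word); treat it as illustrative and check the receive_heartbeat rule and Dragino's manual for the authoritative details:

```python
import base64
import struct

def decode_lht65(b64_payload: str) -> dict:
    """Decode a base64-encoded LHT65 heartbeat payload (11 bytes).

    Assumed layout: bytes 0-1 battery (status in the top 2 bits,
    millivolts in the low 14), bytes 2-3 internal temp (signed, x100),
    bytes 4-5 humidity (x10), byte 6 external-sensor mode, bytes 7-8
    probe temp (signed, x100).
    """
    raw = base64.b64decode(b64_payload)
    bat, t_int, hum = struct.unpack_from(">HhH", raw, 0)
    (t_probe,) = struct.unpack_from(">h", raw, 7)
    return {
        "battery_status": bat >> 14,            # 2 status bits
        "battery_voltage": (bat & 0x3FFF) / 1000,
        "internalTemp": t_int / 100,
        "humidity": hum / 10,
        "probeTemp": t_probe / 100,
    }
```

The mask on the first word is exactly the kind of binary operation the rule needs: the status bits and the millivolt value share a single 16-bit field.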

Second, the receive_heartbeat rule raises the lht65:new_readings event in the pico adding all the relevant data from the LHT65 heartbeat. Any number of rules could react to that event depending on what needs to be done. For example, they could store the event, alarm on a threshold, or monitor the battery status. What I wanted to do is plot the temperature so I can watch it over time and let other members of the water group check it too. I found a nice service called IoTPlotter that provides a basic plotting service on any data you post to it. I created a feed for the pumphouse data and wrote a rule in my pumphouse pico to select on the lht65:new_readings event and POST the relevant data, in the right format, to IoTPlotter. Here's that rule:

  rule send_temperature_data_to_IoTPlotter {
    select when lht65 new_readings

    pre {
      feed_id = "367832564114515476";
      api_key = meta:rulesetConfig{["api_key"]}.klog("key");
      payload = {"data": {
                    "device_temperature": [
                      {"value": event:attrs{["readings", "internalTemp"]},
                       "epoch": event:attrs{["timestamp"]}}
                    ],
                    "probe_temperature": [
                      {"value": event:attrs{["readings", "probeTemp"]},
                       "epoch": event:attrs{["timestamp"]}}
                    ],
                    "humidity": [
                      {"value": event:attrs{["readings", "humidity"]},
                       "epoch": event:attrs{["timestamp"]}}
                    ],
                    "battery_voltage": [
                      {"value": event:attrs{["readings", "battery_voltage"]},
                       "epoch": event:attrs{["timestamp"]}}
                    ]
                 }};
    }

    http:post("" + feed_id,
       headers = {"api-key": api_key},
       json = payload
    ) setting(resp);
  }
The rule, send_temperature_data_to_IoTPlotter, is not very complicated. You can see that most of the work is just reformatting the data from the event attributes into the right structure for IoTPlotter. The result is a set of plots that looks like this:

Swanlake Pumphouse Temperature Plot
Swanlake Pumphouse Temperature Plot (click to enlarge)

Pretty slick. If you're interested in the data itself, you're seeing the internal temperature of the sensor (orange line) and temperature of an external probe (blue line). We have the temperature set pretty high as a buffer against power outages. Still, it's not using that much power because the structure is very small. Running the heater only adds about $5/month to the power bill. Pumping water is much more power intensive and is the bulk of the bill. The data is choppy because, by default, the LHT65 only transmits a payload once every 20 minutes. This can be changed, but at the expense of battery life.

This is a nice, evented system, albeit simple. The event flow looks like this:

Event Flow for Pumphouse Temperature Sensor
Event Flow for Pumphouse Temperature Sensor (click to enlarge)

I'll probably make this a bit more complete by adding a rule for managing thresholds and sending a text if the temperature gets too low or too high. Similarly, I should be getting notifications if the battery voltage gets too low. The battery is supposed to last 10 years, but that's exactly the kind of situation you need an alarm on—I'm likely to forget about it all before the battery needs replacing. I'd like to experiment with sending data the other way to adjust the frequency of readings. There might be times (like -35°F nights when the power is out) where getting more frequent results would reduce my anxiety.

This was a fun little project. I've got a bunch of these LHT65 temperature sensors, so I'll probably generalize this by turning the IoTPlotter ruleset into a module that other rulesets can use. I may eventually use a more sophisticated plotting package that can show me the data for all my devices on one feed. For example, I bought a LoraWAN soil moisture probe for my garden. I've also got a solar array at my house that I'd like to monitor myself and that will need a dashboard of some kind. If you've got a sensor that isn't within easy range of wifi, then LoraWAN is a good solution. And event-based rules in picos are a convenient way to process the data.