JSON is Robot Barf

Sick Robot

JSON is robot barf. Don't get me wrong. JSON is a fine serialization format for data and I have no problem with it in that context. My beef is with the use of JSON for configuration files, policy specs, and so on. If JSON were all we had, then we'd have to live with it. But we've been building parsers for almost 70 years now. The technology is well understood. There are multiple libraries in every language for parsing. And yet, even very mature, well supported frameworks and platforms persist in using JSON instead of a human-friendly notation.

When a system requires programmers to use JSON, it's effectively asking them to use an "abstract" syntax instead of a "concrete" syntax. Here's what I mean. This is a function definition in concrete syntax:

Concrete Syntax

And here's the same function definition expressed as an abstract syntax tree (AST) serialized as JSON:

Abstract Syntax

I don't know any programmer who'd prefer to write the abstract syntax instead of the concrete. Can you imagine an entire program expressed like that? Virtually unreadable and definitely not maintainable. Parsing can take as much as 20% of the time taken to compile code, so there's a clear performance win in using abstract syntax over concrete, but even so we, correctly, let the machine do the work.
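You don't need the figures to see the gap; Python's standard library can demonstrate it. This sketch (the `double` function is just an invented example) parses a one-line function and dumps the resulting AST as JSON, which is roughly what a JSON-first system asks humans to write by hand:

```python
import ast
import json

# A tiny function in concrete syntax.
source = "def double(x):\n    return 2 * x\n"

# Let the machine parse it into an abstract syntax tree.
tree = ast.parse(source)

def to_dict(node):
    """Convert an AST node into a JSON-serializable structure."""
    if isinstance(node, ast.AST):
        return {"type": type(node).__name__,
                **{f: to_dict(v) for f, v in ast.iter_fields(node)}}
    if isinstance(node, list):
        return [to_dict(n) for n in node]
    return node  # leaves: identifiers, numbers, None

# Two lines of concrete syntax become dozens of lines of abstract syntax.
print(json.dumps(to_dict(tree), indent=2))
```

Running this prints a deeply nested JSON structure for what was a two-line function, which is exactly the trade a JSON configuration format imposes on its users.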

I get that systems often start out simply with simple configuration. Some inputs are just hierarchies of data. But that often gets more complicated over time. And spending time figuring out the parser when you're excited to just get it working can feel like a burden. But taking the shortcut of making developers and others write the configuration in abstract syntax instead of letting the computer do the work is a mistake.
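To illustrate how little work the shortcut actually saves, here's a hedged sketch of a parser for a simple `key = value` configuration notation; the syntax and the field names are invented for illustration:

```python
import re

# A human-friendly configuration a person might actually write.
config_text = """
# retry policy
timeout = 30
retries = 3
name = gateway
"""

def parse_config(text):
    """Parse simple 'key = value' lines, skipping blanks and comments."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        m = re.fullmatch(r"(\w+)\s*=\s*(.+)", line)
        if not m:
            raise ValueError(f"bad line: {line!r}")
        key, value = m.groups()
        # Convert integer values; leave everything else as strings.
        config[key] = int(value) if value.isdigit() else value
    return config

print(parse_config(config_text))  # {'timeout': 30, 'retries': 3, 'name': 'gateway'}
```

Twenty lines buys a notation people can read and write without counting braces; a real system would want a proper grammar, but the effort is of the same modest order.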

I'd like to say that the problem is that not enough programmers have a proper CS education, but I fear that's not true. I suspect that even people who've studied CS aren't comfortable with parsing and developing notations. Maybe it's because we treat the subject too esoterically—seemingly useful for people designing a programming language, but not much else. And students pick up on that and figure this is something, like calculus, they're unlikely to ever use IRL. What if programming language classes helped students learn the joy and benefit of building little languages instead?

I'm a big believer in the power of notation. And I think we too often shy away from designing the right notation for the job. As I wrote about Domain Specific Languages (DSLs) in 2007:

I'm in the middle of reading Walter Isaacson's new biography of Einstein. It's clear that notation played a major role in his ability to come up with the principle of general relativity. He demurred at first, believing that the math was for someone else to come along later and tidy up. But later in his life, after the experience of working on general relativity, Einstein became an ardent convert.

Similarly, there is power in notation for computing tasks. Not merely the advantage of parameterized execution but in its ability to allow us to think about problems, express them so that others can clearly and unambiguously see our thoughts, and collaborate to create joint solutions. What's more, languages can be versioned. GUI configurations are hard to version. Notation has advantages even when it's not executed.

The DSL becomes the focal point for design activities. The other day, I was having a discussion with three friends about a particular feature. Pulling out pencil and paper and writing what the DSL would need to look like to support the feature helped all of us focus and come up with solutions. Without such a tool, I'm not sure how we would have communicated the issues or whether we'd have all had the same conception of them and the ultimate solution we reached.

As this points out, clear notations have advantages beyond being easier to write and understand. They also provide the means to easily share and think about the problem. I think system designers would be better off if we spent more time thinking about the notation developers will use when they configure and use our systems, making it clear and easy to read and write. Good notation is a thinking tool, not just a way to control the system. The result will be increased expressiveness, design leverage, and freedom.


Photo Credit: Sick Android from gfk DSGN (Pixabay)


Toothbrush Identity

Philips Sonicare BrushSync Logo

I have a Philips Sonicare toothbrush. One of the features is a little yellow light that comes on to tell me that the head needs to be changed. The first time the light came on, I wondered how I would reset it once I got a new toothbrush head. I even googled it to find out.

Turns out I needn't have bothered. Once I changed the head the light went off. This didn't happen when I just removed the old head and put it back on. The toothbrush heads have a unique identity that the toothbrush recognizes. This identity is not only used to signal head replacement, but also to put the toothbrush into different modes based on the type of head installed.

Philips calls this BrushSync, but it's just RFID technology underneath the branding. Each head has an RFID chip embedded in it and the toothbrush body reads the data off the head and adjusts its internal state in the appropriate way.

I like this use case for RFID because it's got clear benefits for both Philips and their customers. Philips sells more toothbrush heads—so the internet of things (IoT) use case is clearly aligned with business goals. Customers get reminders to replace their toothbrush head and can reset the reminder by simply doing what they'd do anyway—switching the head.

There aren't many privacy concerns at present. But as more and more products include RFID chips, you could imagine scanners on garbage trucks that correlate what gets used and thrown out with an address. I guess we need garbage cans that can disable RFID chips when they're thrown away.

I was recently talking to a friend of mine, Eric Olafson, who is a founding investor in Riot. Riot is another example of how thoughtfully applied RFID-based identifiers can solve business and customer problems. Riot creates tech that companies can use for RFID-based, in-store inventory management. This solves a big problem for stores that often don't know what inventory they have on hand. With Riot, a quick scan of the store each morning updates the inventory management system, showing where the inventory data is out of sync with the physical inventory. As more and more of us go to the physical store because the app told us they had the product we wanted, it's nice to know the app isn't lying. Riot puts the RFID on the tag, not the clothing, dealing with many of the privacy concerns.

Both BrushSync and Riot use identity to solve business problems, showing that unique identifiers on individual products can be good for business and customers alike. This speaks to the breadth of identity and its importance in areas beyond associating identifiers with people. I've noticed an uptick in discussions at IIW about identity for things and the impact that can have. The next IIW is Oct 12-14—online—join us if you're interested.


Photo Credit: SoniCare G3 from Philips USA (fair use)


Fluid Multi-Pseudonymity

Epupa Falls

In response to my recent post on Ephemeral Relationships, Emil Sotirov tweeted that this was an example of "fluid multi-pseudonymity as the norm." I love that phrase because it succinctly describes something I've been trying to explain for years.

Emil was riffing on this article in Aeon, You Are a Network, which says "Selves are not only 'networked', that is, in social networks, but are themselves networks." I've never been a fan of philosophical introspections in digital identity discussions. I just don't think they often lead to useful insights. Rather, I like what Joe Andrieu calls functional identity: Identity is how we recognize, remember, and ultimately respond to specific people and things. But this insight, that we are multiple selves, changing over time—even in the course of a day—is powerful. And as Emil points out, our real-life ephemeral relationships are an example of this fluid multi-pseudonymity.

The architectures of traditional, administrative identity systems do not reflect the fluid multi-pseudonymity of real life and consequently are mismatched to how people actually live. I frequently see calls for someone, usually a government, to solve the online identity problem by issuing everyone a permanent "identity." I put that in quotes because I hate when we use the word "identity" in that way—as if everyone has just one and once we link every body (literally) to some government issued identifier and a small number of attributes all our problems will disappear.

These calls don't often come from within the identity community. Identity professionals understand how hard this problem is and that there's no single identity for anyone. But even identity professionals use the word "identity" when they mean "account." I frequently make an ass of myself by pointing that out. I get invited to fewer meetings that way. The point is this: there is no "identity." And we don't build identity systems to manage identities (whatever those are), but, rather, relationships.

All of us, in real life and online, have multiple relationships. Many of those are pseudonymous. Many are ephemeral. But even a relationship that starts pseudonymous and ephemeral can develop into something permanent and better defined over time. Any relationship we have, even those with online services, changes over time. In short, our relationships are fluid and each is different.

Self-sovereign identity excites me because, for the first time, we have a model for online identity that can flexibly support fluid multi-pseudonymity. Decentralized identifiers and verifiable credentials form an identity metasystem capable of being the foundation for any kind of relationship: ephemeral, pseudonymous, ad hoc, permanent, personal, commercial, legal, or anything else. For details on how this all works, see my Frontiers article on the identity metasystem.

An identity metasystem that matches the fluid multi-pseudonymity inherent in how people actually live is vital for personal autonomy and ultimately human rights. Computers are coming to intermediate every aspect of our lives. Our autonomy and freedom as humans depend on how we architect this digital world. Unless we put digital systems under the control of the individuals they serve without intervening administrative authorities and make them as flexible as our real-lives demand, the internet will undermine the quality of life it is meant to bolster. The identity metasystem is the foundation for doing that.


Photo Credit: Epupa Falls from Travel Trip Journey (none)


Seeing Like the TSA

TSA Screening at SL Airport

I just flew for the first time in 16 months. In that time, Salt Lake International Airport got a new terminal, including an update to the TSA screening area. The new screening area has been touted as a model of efficiency, featuring bin stations for people to load their bags, electronics, belts, and shoes into bins that they then push onto a conveyor. The bins are handled automatically and everything is sunshine and joy. Except it isn't.

The new system is perfect so long as the people using it are too. The first problem is that unless you're at the last bin station, the conveyor in front of you is constantly full and it's hard to get your bin onto the conveyor. And if you've got more than one bin to load, they are separated from each other because the loading station isn't big enough for two. People just don't conform to the TSA's ideal!

But the real problem is that people forget things in their pockets or don't take off their belt. In the olden days, the TSA had little bowls. You'd throw your stuff in one, put it on the belt, and be on your way. Now, there's no easy way to accommodate forgotten things except to go back to a bin loading station and put them in a big bin, clogging the conveyor even more. Three people in line ahead of me at the scanner forgot something, causing all kinds of delays. The TSA agents were even telling people to hold forgotten items during the scan, or taking the items to hold themselves while the scan completed. Because there's no good way to deal with forgotten items, everyone is forced to improvise, but the system is rigid and doesn't easily accommodate improvisation.

The situation reminded me of the story James C. Scott tells in the opening of Seeing Like a State where forestry officials planted neat, efficient rows of trees instead of letting the forest take its natural path. The end result was less yield from the forest, but happier foresters who could now see every tree. Scott's point is that bureaucracy aims for legibility in order to serve its own purposes—and usually fails in that effort. The primary reason states have wanted legibility of citizens is taxes (and, historically, conscription). But once you have legibility, the temptation to extend it to other uses is too great to resist. In this case, the TSA has ordered the screening process and made it legible to the screeners, but made no provision for outliers. If no one forgets anything and the system is lightly loaded, it should work great. Of course, that's not the real world.

IT people are bureaucrats in their own way. We build and operate the systems that people use to do their jobs and live their lives. We strive for legibility in order to make the software simpler for us, even if it doesn't serve the users quite as well. Universities are decentralized places with lots of innovative people pursuing their own goals. They are more feudal than corporate. I've often heard university IT people complain about this reality because it makes their life harder. If you're a professor, you'd like to use whichever LMS suits your particular needs. But that's not very legible. If you're a university IT person, you'd like to force all faculty to use the standard LMS that the university chose. Neat and orderly, but it squeezes the innovation out of the university one drop at a time.

Life is messy. People are forgetful, disorganized, and, relatedly, innovative. Bureaucracy desperately wants legibility so that the rules are followed, the processes perform, and the bureaucrat's life is made easy. Building systems that support decentralized workflows and individual decisions, without getting in the way, is hard. And letting people be people can be frustrating when it's causing you headaches. We'll never build systems that support an authentic, operationalized digital existence until we stop trying to fit people's decentralized lives into our neat, ordered, legible software.

Seeing like a State: How Certain Schemes to Improve the Human Condition Have Failed by James C. Scott

In this wide-ranging and original book, James C. Scott analyzes failed cases of large-scale authoritarian plans in a variety of fields. Centrally managed social plans misfire, Scott argues, when they impose schematic visions that do violence to complex interdependencies that are not—and cannot be—fully understood. Further, the success of designs for social organization depends upon the recognition that local, practical knowledge is as important as formal, epistemic knowledge. The author builds a persuasive case against "development theory" and imperialistic state planning that disregards the values, desires, and objections of its subjects. He identifies and discusses four conditions common to all planning disasters: administrative ordering of nature and society by the state; a "high-modernist ideology" that places confidence in the ability of science to improve every aspect of human life; a willingness to use authoritarian state power to effect large-scale interventions; and a prostrate civil society that cannot effectively resist such plans.


Photo Credit: Security from Anelise Bergin (none)


Life Will Find a Way

Apricot Tree

Last summer, I decided to kill this apricot tree. We weren't using the apricots, it was making a mess, and I wanted to do something else with the spot it's in. So, I cut off the branches, drilled holes in the stumps, and poured undiluted Round-Up into them. And yet, you'll notice that there's a spot on one branch that has sprouted shoots and leaves this summer.

My first thought was that the tree was struggling to live. But then I realized that I was anthropomorphizing it. In fact, the tree has no higher brain function; it's not struggling to do anything. Instead, what the picture shows is the miracle of decentralization.

Despite my overall success in killing the tree, some cells didn't get the message. They were able to source water and oxygen to grow. They didn't need permission or direction from a central authority. They don't even know or care that they're part of a tree. The cells are programmed to behave in specific ways and conditions were such that the cells on that part of the branch were able to execute their programming. Left alone, they might even produce fruit and manage to reproduce. Amazing resilience.

I've written before about the decentralized behavior of bees. Like the apricot tree, there is no higher brain function in a bee hive. Each bee goes about its work according to its programming. And yet, the result is a marvelously complex living organism (the hive) that is made up of tens of thousands of individual bees, each doing their own thing, but in a way that behaves consistently, reaches consensus about complex tasks, and achieves important goals.

A few years ago, I read The Vital Question by Nick Lane. The book is a fascinating read about the way energy plays a vital role in how life develops and indeed how evolution progresses. I'm still in awe of the mechanisms necessary to provide energy for even a single-celled organism, let alone something as complex as a human being, a bee hive, or even an apricot tree.

Life succeeds because it's terrifically decentralized. I find that miraculous. Humans have a tough time thinking about decentralized systems and designing them in ways that succeed. The current work in blockchains and interest in decentralization gives me hope that we'll design more systems that use decentralized methods to achieve their ends. The result will be an online world that is less fragile, perhaps even anti-fragile, than what we have now.

The Vital Question: Energy, Evolution, and the Origins of Complex Life by Nick Lane

For two and a half billion years, from the very origins of life, single-celled organisms such as bacteria evolved without changing their basic form. Then, on just one occasion in four billion years, they made the jump to complexity. All complex life, from mushrooms to man, shares puzzling features, such as sex, which are unknown in bacteria. How and why did this radical transformation happen? The answer, Lane argues, lies in energy: all life on Earth lives off a voltage with the strength of a lightning bolt. Building on the pillars of evolutionary theory, Lane’s hypothesis draws on cutting-edge research into the link between energy and cell biology, in order to deliver a compelling account of evolution from the very origins of life to the emergence of multicellular organisms, while offering deep insights into our own lives and deaths.


Clean Sheets and Strategy

Sheets

Suppose you're in the hotel business. One of the things you have to do is make sure customers have clean sheets. If you don't change the sheets or launder them properly, you're probably not going to stay in business long. The bad news is that clean sheets are expensive and they don't differentiate you very much—all your competitors have clean sheets. You're stuck.

Consider the following graph plotting the cost of any given business decision against the competitive advantage it brings:

Feature Cost vs Differentiation
Feature Cost vs Differentiation (click to enlarge)

To the right in this diagram are the things we'd call strategic, representing features or practices that differentiate the organization from its competitors. The bottom half of the diagram contains the things that are relatively less expensive. Clearly if you're making a decision on what features to implement, you want to be in the lower right quadrant: low cost and high differentiation. Do those first.

The red quadrant seems like the last place you'd look for features, or is it? Think about clean sheets. As I noted earlier, clean sheets cost a lot of money and everyone has them, so there's not much competitive advantage in having them. But there's a huge competitive disadvantage if you don't. No one can do without clean sheets. Businesses are filled with things like clean sheets. For IT, things like availability, security, networks, and deployment are all clean sheets. Doing these well can differentiate you from those who don't, but they're not strategic. You still need a strategy.

How can you discover the things that really matter? I'm a fan of domain-driven design. Domain-driven design is a tool for looking at the various domains your business is engaged in and then determining which are core (differentiating), supporting, or merely generic. The things you identify as core are strategic—places you can differentiate yourself. This helps because now you know where to build and where to buy. Generic domains aren't unimportant (think HR or finance, for example), they're simply not strategic. And therefore buying those features is likely going to give you high availability and feature fit for far less money than doing it yourself.

On the other hand, domains that are core are the places you differentiate yourself. When you look at your organization's values, mission, and objectives, the core domains ought to directly support them. If you outsource these, then anyone else can do what you're doing. Core, strategic activities are places where it makes sense to build rather than buy. Spend your time and resources there. But don't neglect the sheets.
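The build-versus-buy heuristic above can be sketched in a few lines; the domain names and their classifications here are invented for illustration, not taken from any real analysis:

```python
# Toy classification of business domains, per domain-driven design:
# core domains differentiate you; supporting and generic ones don't.
DOMAINS = {
    "recommendation engine": "core",       # differentiating: invest here
    "order fulfillment": "supporting",
    "payroll": "generic",                  # necessary, like clean sheets
    "accounting": "generic",
}

def build_or_buy(classification):
    """Core domains justify in-house investment; the rest rarely do."""
    return "build" if classification == "core" else "buy"

for domain, kind in DOMAINS.items():
    print(f"{domain}: {kind} -> {build_or_buy(kind)}")
```

The point isn't the code, of course, but the discipline: classify first, then decide where to spend scarce engineering effort.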

Domain-Driven Design Distilled by Vaughn Vernon

Concise, readable, and actionable, Domain-Driven Design Distilled never buries you in detail–it focuses on what you need to know to get results. Vaughn Vernon, author of the best-selling Implementing Domain-Driven Design, draws on his twenty years of experience applying DDD principles to real-world situations. He is uniquely well-qualified to demystify its complexities, illuminate its subtleties, and help you solve the problems you might encounter.


Photo Credit: Sheets from pxfuel (Free for commercial use)


Ephemeral Relationships

Ghost Trees

In real life, we often interact with others—both people and institutions—with relative anonymity. For example, if I go to the store and use cash to buy a coke, there is no exchange of identity information. Even if I use a credit card it's rarely the case that the entire transaction happens under the administrative authority of the identity system inherent in the credit card. Only the financial part of the transaction takes place in that identity system. This is true of most interactions in real life.

I don't have an account at the local grocery store where I store my address, credit card, and other information so that each transaction is linked to a record about me. True, many businesses have loyalty programs and use those to collect information about customers, but those are optional. And going without one doesn't significantly inconvenience me. In fact, the point of the credit card system is that it avoids long-lived relationships between any of the parties except the customer (or merchant) and their bank.

In real life, we do without identity systems for most things. You don't have to identify yourself to the movie theater to watch a movie or log into some administrative system to sit in a restaurant and have a private conversation with friends. In real life, we act as embodied, independent agents. Our physical presence and the laws of physics have a lot to do with our ability to function with workable anonymity across many domains.

One of the surprising things about identity in the physical world is that so many of the relationships are ephemeral rather than long-lived. While the ticket taker at the movies and the server at the restaurant certainly "identify" patrons, they forget them as soon as the transaction is complete. And the identification is likely pseudonymous (e.g. "the couple at table four" rather than "Phillip and Lynne Windley"). These interactions are effectively anonymous.

Of course, in the digital world, very few meaningful transactions are done outside of some administrative identity system. There are several reasons why identity is so important in the digital world. But we've accepted long-lived relationships with full legibility of patrons as the default on the web.

Some of that is driven by convenience. I like storing my credit cards and shipping info at Amazon because it's convenient. I like that they know what books I've bought so I don't buy the same book more than once (yeah, I'm that guy). But what if I could get that convenience without any kind of account at Amazon at all? That's the promise of verifiable credentials and self-sovereign identity.

You can imagine an ecommerce company that keeps no payment or address information on customers, but is still able to process their orders and send the merchandise. If my shipping information and credit card information are stored as verifiable credentials in a digital wallet I control, I can easily provide these to whatever web site I need to as needed. No need to have them stored. And we demonstrated way back in 2009 a way to augment results from a web site with a self-sovereign data store. That could tell me what I already own as I navigate a site.
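As a toy sketch of that idea, the merchant below checks an issuer-signed claim and stores nothing. To keep it self-contained, it uses an HMAC with a shared key as a stand-in for the public-key signatures and decentralized identifiers that real verifiable credentials use; all the names and claims are invented:

```python
import hashlib
import hmac
import json

# Stand-in for the issuer's signing key. Real verifiable credentials
# use asymmetric cryptography, so verifiers never hold a secret key.
ISSUER_KEY = b"issuer-secret"

def issue_credential(claims):
    """Issuer signs the holder's claims once; holder keeps them in a wallet."""
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": sig}

def verify_credential(credential):
    """Merchant verifies the signature, uses the claims, and stores nothing."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

wallet = issue_credential({"ship_to": "123 Main St", "zip": "84604"})
print(verify_credential(wallet))  # True: order can ship, no account needed
```

The structural point survives the simplification: the shipping address lives in the customer's wallet, is presented per transaction, and the merchant keeps no long-lived record.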

There's no technical reason we need long-lived relationships for most of our web interactions. That doesn't mean we won't want some for convenience, but they ought to be optional, like the loyalty program at the supermarket, rather than required for service. Our digital lives can be as private as our physical lives if we choose for them to be. We don't have to allow companies to surveil us. And the excuse that they surveil us to provide better service is just that—an excuse. The real reason they surveil us is because it's profitable.


Photo Credit: Ghost Trees from David Lienhard (CC BY-SA 3.0)


Smart Property

Evolution of Things

I just listened to this excellent podcast by Vinay Gupta about what he calls "smart property." Vinay released a book last year called The Future of Stuff that covers this topic in more detail.

The Future of Stuff by Vinay Gupta

Where and who do we want to be? How might we get there? What might happen if we stay on our current course? The Future of Stuff asks what kind of world will we live in when every item of property has a digital trace, when nothing can be lost and everything has a story. Will property and ownership become as fluid as film is today: summoned on demand, dismissed with a swipe? What will this mean for how we buy, rent, share and dispose of stuff? About what our stuff says about us? And how will this impact on us, on manufacturing and supply, and on the planet?

The idea is similar to what Bruce Sterling has called Spimes and what we've been building on top of picos for over a decade.

Smart property goes well beyond the internet of things and connected devices to imagine a world where every thing is not just online, but has a digital history and can interact with other smart things to accomplish whatever goals their owners desire. Things are members of communities and ecosystems, working through programmable agents.

A world of smart things is decentralized—it has to be. While Vinay talks of blockchains and smart contracts, I work on picos. Likely both, or some version of them, are necessary to achieve the end goal of a sustainable internet of things to replace the anemic, unsustainable CompuServe of Things we're being offered now.

Going Further

We've built a platform on top of picos for managing things called Manifold. This is a successor to SquareTag, if you've been following along. You can use Manifold to create spimes or digital twins for your things. Some of the built-in applications allow you to find lost stuff using QR code stickers or write notes about your things using a journaling app. You could build others since the application is meant to be programmable and extendable. I primarily use it as a personal inventory system.


Photo Credit: Spimes Not Things. Creating A Design Manifesto For A Sustainable Internet of Things from Michael Stead, Paul Coulton & Joseph Lindley (Fair Use)


Ten Reasons to Use Picos for Your Next Decentralized Programming Project

Temperature Sensor Network Built from Picos
Temperature Sensor Network Built from Picos

I didn't set out to write a programming language that naturally supports decentralized programming using the actor model, is cloud-native, serverless, and databaseless. Indeed, if I had, I likely wouldn't have succeeded. Instead picos evolved from a simple rule language for modifying web pages to a powerful, general-purpose programming system for building any decentralized application. This post explains what picos are and why they are a great way to build decentralized systems.

Picos are persistent compute objects. Persistence is a core feature that distinguishes picos from other programming models. Picos exhibit persistence in three ways:

  • Persistent identity—Picos exist, with a single identity, continuously from the moment of their creation until they are destroyed.
  • Persistent state—Picos have persistent state that programs running in the pico can see and alter. The state is isolated and only available inside the pico.
  • Persistent availability—Once a pico is created, it is always on and ready to process queries and events.

Persistent identity, state, and availability make picos ideal for modeling entities of all sorts. Applications are formed from cooperating networks of picos, creating systems that better match programmers' mental models. Picos employ the actor model abstraction for distributed computation. Specifically, in response to a received message, a pico may

  1. send messages to other picos—Picos respond to events and queries by running rules. Depending on the rules installed, a pico may raise events for itself or other picos.
  2. create other picos—Picos can create child picos and delete them.
  3. change its internal state (which can affect their behavior when the next message is received)—Each pico has a set of persistent variables that can only be affected by rules that run in response to events.

In addition to the parent-child hierarchy, picos can be arranged in a heterarchical network for peer-to-peer communication and computation. A cooperating network of picos reacts to messages, changes state, and sends messages. Picos have an internal event bus for distributing those messages to rules installed in the pico. Rules in the pico are selected to run based on declarative event expressions. The pico matches events on the bus with event scenarios declared in each rule's event expressions. Any rule whose event expression matches is scheduled for execution. Executing rules may raise additional events. More detail about the event loop and pico execution model is available elsewhere.
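Picos are actually programmed in KRL rulesets running on the pico engine, but the event loop described above can be sketched in plain Python. Everything here (the `Pico` class, the temperature rules) is invented for illustration and greatly simplified:

```python
from collections import defaultdict

class Pico:
    """Toy pico: persistent state plus rules selected by event
    domain and type. Real picos run KRL rulesets on the pico
    engine; this only illustrates the event-loop idea."""

    def __init__(self, name):
        self.name = name
        self.state = {}                  # persistent variables
        self.rules = defaultdict(list)   # (domain, type) -> rules

    def rule(self, domain, etype):
        """Register a rule whose event expression matches (domain, type)."""
        def register(fn):
            self.rules[(domain, etype)].append(fn)
            return fn
        return register

    def raise_event(self, domain, etype, attrs=None):
        # Every rule whose event expression matches is run; rules
        # may raise further events, driving the event loop.
        for rule in self.rules[(domain, etype)]:
            rule(self, attrs or {})

sensor = Pico("temperature-sensor")

@sensor.rule("sensor", "reading")
def record_reading(pico, attrs):
    pico.state["last_temp"] = attrs["temp"]
    if attrs["temp"] > 90:
        pico.raise_event("sensor", "threshold_violation", attrs)

@sensor.rule("sensor", "threshold_violation")
def count_alert(pico, attrs):
    pico.state["alerts"] = pico.state.get("alerts", 0) + 1

sensor.raise_event("sensor", "reading", {"temp": 95})
print(sensor.state)  # {'last_temp': 95, 'alerts': 1}
```

Note how the first rule raises an event that selects the second rule, and how all state changes happen only inside rules responding to events, which is the isolation property the list below depends on.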

Here are ten reasons picos are a great development environment for building decentralized applications:

  1. Picos can be a computational node that represents or models anything: person, place, organization, smart thing, dumb thing, concept, even a pothole. Because picos encapsulate identity and state, they can be used to easily model entities of all types.
  2. Picos use a substitutable hosting model. Picos are hosted on the open-source pico engine. Picos can be moved from pico engine to pico engine without loss of functionality or a change in their identifying features. More importantly, an application built using picos can employ picos running on multiple engines without programmer effort.
  3. Pico-based applications are scalable. Picos provide a decentralized programming model that ensures an application can use however many picos it needs without locks or multi-threading. Pico-based applications are fully sharded, meaning there is a computational node with isolated state for every entity of interest. Because these nodes can run on different engines without loss of functionality or programmer effort, pico-based applications scale fluidly. Picos provide an architectural model for trillion-node networks where literally everything is online. This provides a better, more scalable model for IoT than the current CompuServe of Things.
  4. Picos can provide high availability in spite of potentially unreliable hosting. Multiple picos, stored in many places, can be used to represent a specific entity. Picos modeling a popular web page, for example, could be replicated countless times to ensure the web page is available when needed with low latency. Copies would be eventually consistent with one another and no backups would be needed.
  5. Picos are naturally concurrent without the need for locks. Each pico is an island of determinism that can be used as a building block for non-deterministic decentralized systems. Each pico is isolated from every other pico, asynchronous processing is the default, and facts about the pico are published through protocols. State changes are determined by rules responding to incoming messages, which are seen as events. This minimizes contention and supports the efficient use of resources.
  6. Picos provide a cloud-native (internet-first) architecture that matches the application architecture to the cloud model. The pico programming model lets developers focus on business logic, not infrastructure. Applications built with picos don't merely reside in the cloud. They are architected to be performant while decentralized. Picos support reactive programming patterns that ensure applications are decoupled in both space and time to the extent possible.
  7. Picos enable stateful, databaseless programming. Picos model domain objects in code, not in database tables. Developers don't waste time setting up, configuring, or maintaining a database. Each ruleset forms a closure over the set of persistent variables used in it. Programmers simply use a variable and it is automatically persisted and available whenever a rule or function in the ruleset is executed.
  8. Picos use an extensible service model where new functionality can be layered on. Functionality within a pico is provided by rules that respond to events. An event-based programming model ensures services inside a pico are loosely coupled. And isolation of state changes between services (implemented as rulesets) inside the pico ensures a new service can be added without interfering with existing services. An event can select new rules while also selecting existing rules without the programmer changing the control flow[2].
  9. Picos naturally support a Reactive programming model. Reactive programming with picos directly addresses the challenges of building decentralized applications through abstractions, programming models, data handling, protocols, interaction schemes, and error handling[3]. In a pico application, distribution is first class and decentralization is natural. You can read more about this in Building Decentralized Applications with Pico Networks and Reactive Programming Patterns: Examples from Fuse.
  10. Picos provide better control over terms, apps, and data. This is a natural result of the pico model, where each thing has a closure over services and data. Picos cleanly separate the data for different entities. Picos, representing a specific entity, and microservices, representing a specific business capability within the pico, provide fine-grained control over data and its processing. For example, if you sell your car, you can transfer the vehicle pico to the new owner after deleting the trip service and its associated data, while leaving untouched the maintenance records, which are stored as part of the maintenance service in the pico.
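
The "databaseless" programming of reason 7—a ruleset forming a closure over its persistent variables—can be imitated with a decorator that transparently loads and stores state between invocations. This is a sketch: the file path, the `persistent` decorator, and `record_trip` are all illustrative inventions standing in for the engine's transparent storage:

```python
import json
from pathlib import Path


def persistent(store_path):
    """Give a rule function a dict that survives between invocations.

    The pico engine does this transparently; here we fake it by loading
    the dict before each call and saving it again afterwards.
    """
    path = Path(store_path)

    def wrap(rule):
        def run(*args, **kwargs):
            ent = json.loads(path.read_text()) if path.exists() else {}
            result = rule(ent, *args, **kwargs)
            path.write_text(json.dumps(ent))
            return result
        return run
    return wrap


@persistent("/tmp/odometer.json")
def record_trip(ent, miles):
    # `ent` behaves like a pico's persistent variable: just use it,
    # and its value is there the next time the rule runs.
    ent["odometer"] = ent.get("odometer", 0) + miles
    return ent["odometer"]
```

From the programmer's point of view there is no setup, schema, or connection string—persistence is a property of the variable, not a separate system to administer.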

And a bonus reason for using picos:

  11. Picos provide a better model for building the Internet of Things. Picos are an antidote to the CompuServe of Things because they provide a scalable, decentralized model for connecting everything. We built a connected car platform called Fuse to prove this model works (read more about Fuse). Picos are a natural building block for the self-sovereign internet of things (SSIoT) and can easily model the necessary IoT relationships. Picos create an IoT architecture that allows interoperable interactions between devices from different manufacturers.

Use Picos

If you're intrigued and want to get started with picos, there's a Quickstart along with a series of lessons. If you need help, contact me and we'll get you added to the Picolabs Slack. We'd love to help you use picos for your next distributed application.

If you're intrigued by the pico engine itself, it's an open-source project licensed under a liberal MIT license. You can see the current issues for the pico engine here. Details about contributing to the engine are in the repository's README.

Notes

  1. The caveat on this statement is that pico engines currently use URLs to identify channels used for inter-pico communication. Moving to a different engine could change the URLs that identify channels because the domain name of the engine changes. These changes can be automated. Future developments on the roadmap will reduce the use of domain names in the pico engine to make moving picos from engine to engine even easier.
  2. Note that while rules within a ruleset are guaranteed to execute in the order they appear, rules selected from different rulesets for the same event offer no ordering guarantee. When ordering is necessary, it can be enforced using rule chaining, guard rules, and idempotence.
  3. I first heard this expressed by Jonas Bonér in his Reactive Summit 2020 Keynote.


Alternatives to the CompuServe of Things

Acoustically Coupled Modem

In Peloton Bricks Its Treadmills, Cory Doctorow discusses Peloton's response to a product recall on its treadmills. Part of the response was a firmware upgrade. Rather than issuing the firmware upgrade to all treadmills, Peloton "bricked" all the treadmills and then only updated the ones where the owner was paying a monthly subscription for Peloton's service.

When I talk about Internet of Things (IoT), I always make the point that the current architecture for IoT ensures that people are merely renting connected things, not owning them, despite paying hundreds, even thousands, of dollars upfront. Terms and conditions on accounts usually allow the manufacturer to close your account for any reason and without recourse. Since many products cannot function without their associated cloud service, this renders the device inoperable.

I wrote about this problem in 2014, describing the current architecture as the CompuServe of Things. I wrote:

If Fitbit decides to revoke my account, I will probably survive. But what if, in some future world, the root certificate authority of the identity documents I use for banking, shopping, travel, and a host of other things decides to revoke my identity for some reason? Or if my car stops running because Ford shuts off my account? People must have autonomy and be in control of the connected things in their life. There will be systems and services provided by others and they will, of necessity, be administered. But those administering authorities need not have control of people and their lives. We know how to solve this problem. Interoperability takes "intervening" out of "administrative authority."

The architecture of the CompuServe of Things looks like this:

CompuServe of Things Architecture

We're all familiar with it. Alice buys a new device, downloads the app for the device to her phone, sets up an account, and begins using the new thing. The app uses the account and the manufacturer-provided API to access data from the device and control it. Everything is inside the administrative control of the device manufacturer (indicated by the gray box).

There is an alternative model:

Internet of Things Architecture

In this model, the device and data about it are controlled by Alice, not the manufacturer. The device and an associated agent (pico) Alice uses to interact with it have a relationship with the manufacturer, but the manufacturer is no longer in control. Alice is in control of her device, the data it generates, and the agent that processes the data. Note that this doesn't mean Alice has to code or even manage all that. She can run her agent in an agency, and the code in her agent is likely from the manufacturer. But it could be other code instead of, or in addition to, what she gets from the manufacturer. The point is that Alice can decide. A true Internet of Things is self-sovereign.

Can this model work? Yes! We proved the model works with a production connected car platform called Fuse in 2013-2014. Fuse had hundreds of customers and over 1000 devices in the field. I wrote many articles about the experience, its architecture, and its advantages on my blog.

Fuse was built with picos. Picos are the actor-model programming system that we've developed over the last 12 years to build IoT products that respect individual autonomy and privacy while still providing all the benefits we've come to expect from our connected devices. I'll write more about picos as a programming model for reactive systems soon. Here's some related reading on my blog:

Our current model for connected devices is in conflict not only with our ability to function as autonomous individuals, but also with our vision for a well-functioning society. We can do better and we must. Alternate architectures can give us all the benefits of connected devices without the specter of Big Tech intermediating every aspect of our lives.


Photo Credit: modem and phone from Bryan Alexander (CC BY 2.0)