The Essay

Surveillance capitalism is showing its cracks, as Apple and Google begin shifting the industry away from models built on data exfiltration and personalized ad targeting. This is the first halting step toward an inevitable tectonic shift of economic models that will remake society. The following is from an early draft of a book I’m writing:

Prosocial Capitalism: How positive sum networks will out-compete communism and antisocial capitalism, reshape the Internet, preserve liberty, and stave off dystopia.

Rivalries between Western-style democracies and China, both economic and ideological, will define 21st century geopolitics. China is a communist nation powered by a form of state-sponsored capitalism. It has pinned its future on Big-Tech oligopolists that ruthlessly exploit oceans of citizen surveillance data.

And yet, that last sentence could just as easily describe America, land of liberty, where antisocial forms of capitalism have taken root. Surveillance capitalism is not the only antisocial form; there are also serial polluters, unethical drug companies, monopolists, oil giants sponsoring disinformation, and others who push their true costs onto society as “externalities”. We’ll start our journey by analyzing surveillance capitalism and then see how its undoing will unleash prosocial forces across all sectors.

Why Now?

In July 2021, Apple rolled out an update that asks users whether they would like to stop being tracked across the Internet while using their iPhones. Not surprisingly, many have taken that option. Google has similar changes planned for users of its Chrome browser starting in 2022. Today’s ad network, where 100+ firms you’ve never heard of bid at auction to serve ads each time a page loads, is already feeling the effects, which will only accelerate. This system, with Facebook at its center, is addicted to data that is deeply personal, builds a composite picture of you from across all your activities, and is easily tied back to your real identity. Now, finally, that system shows signs of crumbling, catalyzed by the actions of Apple and Google.

It is shocking how long such a broken system stayed at equilibrium. It’s not just broken for you, but for the media companies themselves. Advertisers can learn a lot about you without buying ads from the high-end media properties that hand over your personal data via tracking cookies. Listening in on the data flow, advertisers learn enough about you to wait and target you with ads when you later visit less expensive media properties (that’s why click-bait proliferates) (source: James Ball interview with Brian O’Kelley). This costs the reputable media companies many billions! But now even this long-term equilibrium, which has been bad for quality media companies all along, is coming unglued, and advertisers are “flying blind”.

There is a more profitable way for media companies, which need to supplement subscription revenues, to generate ad revenue. Slightly different nuggets of information can be shared by media apps that don’t divulge personal identifiers. The more interesting these data nuggets, the higher their value. For example, “anonymous NY Times subscriber in 10018 zip code reads the Travel section, especially about Barbados” could serve just the right ad without divulging anything else. And that ad can only be served up via the NY Times (resulting in the Times maximizing ad auction prices). No behavior changes are required; the ad networks will simply start learning how to use these smaller but equally powerful (as we’ll see, more powerful) data nuggets. Apple and Google are effectively forcing media companies and apps to start learning how to do so.

Building on the Barbados ad example, there will be demand for more and more interesting (and profitable) data nuggets that are still not based on who you are – that shield your actual identity. For example, it would be extremely valuable to know when you want to travel to Barbados yet clearly dangerous for hundreds or thousands of companies to know whose house will be empty that week!
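
To make the contrast concrete, here is a minimal sketch of the difference between today’s cross-site profile and an anonymous data nugget. All field names are invented for illustration; no existing standard is implied.

```python
# Hypothetical contrast (field names invented for illustration).

surveillance_profile = {           # today: tied back to a real person
    "real_identity": "jane.doe@email.com",
    "browsing_history": ["nytimes.com/travel/barbados", "..."],
    "home_address": "...",         # and much, much more
}

data_nugget = {                    # tomorrow: shared by the media app itself
    "publisher": "NY Times",
    "subscriber_zip": "10018",
    "interest": "travel/barbados",
}
# An ad network can price a Barbados ad off the nugget alone, and that ad
# can only be served via the Times, so the Times captures the auction value.
```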

During this moment of transition, Apple and Google will try to shape the new equilibrium to serve their respective needs, and Facebook will lose power. But this transition needn’t just be a transfer of power from one titan to a couple of other titans. This is the chance the world has been waiting for to recapture power in the digital economy (and, as we’ll see, society more broadly) from oligopolists. The transition period represents an unparalleled opportunity to remake the Internet and society.

Antisocial vs. Prosocial Capitalism

Here in the West, we sense that the business models of our oligopolists are upside-down, yet we seem collectively resigned to compete with China on its own terms. Each country practices slightly different variants of antisocial capitalism. Controlled by the state itself, rather than oligopolists, and with access to vastly more data than any Zuckerberg, Page, or Bezos (who each operate in their respective siloes), the Chinese system is poised to out-compete ours if we play by their rules. Our Western phones do not yet interpret facial micro-expressions to determine whether political speech and manipulative ads have the intended effect. No one company aggregates all our data across domains and life experiences. These dystopian inventions remain in beta in China, famously used to suppress targeted ethnic groups and to enrich a tiny elite. Competing head-to-head from the same playbook would lead us down a dark path.

In fact, the US and its allies can and will move to a model that is the antipode of China’s, which I call prosocial capitalism. Unlike surveillance capitalism and other strains of antisocial capitalism, the prosocial form plays to our strengths and holds true to the principles of James Madison, John Locke, Adam Smith, and other founding fathers of Western-style democratic capitalism.

That said, emotional appeals to what we “should” do are never enough. If they were, the rain forests would remain intact, children wouldn’t starve, and fire tornadoes wouldn’t be part of routine weather reporting. And so, I won’t be setting up yet another morality tale about data privacy. No, I recognize that most of us have grown numb to security breaches and data exfiltration. Rather, privacy-friendly yet economically superior business models are the aglet of a string that, once pulled, stands to unravel the entire antisocial dystopia that might otherwise engulf society. Here’s how.

Positive Sum Networks

When Adam Smith described Capitalism 1.0 in The Wealth of Nations (1776), he pointed out that economic growth occurs even as (and largely because) each actor pursues their self-interest. Incentives are how change actually gets made in the world, however much appeals to our collective hearts and minds ought to change behavior.

Until the mid-1600s, economic growth ran at 0% for millennia. Then suddenly, growth skyrocketed, life expectancies increased, we traveled the cosmos, and more. What changed? Technologies, norms, and laws fostering communication and trust dramatically improved. Examples include the printing press, scientific communities, countries trading rather than pillaging each other (though cruelty to indigenous people continued), establishment of rights to life, liberty, and property (albeit for a time alongside slavery), and more.

Communication and trust are the two pillars of positive sum economic networks. We could have a zero-sum arrangement where you steal my property and next year I mount a retaliatory raid to steal yours. Or better, we realize that I’m good at making nets and you’re good at fishing. If we each specialize and then trade, the pie is bigger because the economic network is positive sum. Trying to make nets and also get good at fishing (while fighting to protect our resources from each other) is an inferior economic model. We should each specialize and trade instead. Notice that we did not have to explicitly cooperate (in the selfless meaning of the word); we can come to our fishing net arrangement based on mutual self-interest.

In this story, we have risen above the animal’s instinct to simply steal its rival’s fish. We had to communicate and develop trust for this to work. These sorts of decisions, about whether and when to cooperate for mutual advantage, are described by game theory. The classic case is the Prisoner’s Dilemma, usually described as a 2-player “game”. If the two players cannot communicate or they do not trust each other, then a zero-sum or negative-sum outcome will occur. The story of civilization is of technologies that foster communication and trust at a societal level in a multi-player game. When we fail to create positive sum networks, Hobbesian spirals of all-against-all take hold; when we succeed, we put men on the moon, defeat the Nazis, and wipe out polio.
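
To make the payoff logic concrete, here is a minimal sketch of the two-player game. The numeric payoffs are the standard textbook values, used purely for illustration.

```python
# Prisoner's Dilemma: each player chooses to "cooperate" or "defect".
# Classic textbook payoffs; higher numbers are better for that player.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # trust and trade: the pie grows
    ("cooperate", "defect"):    (0, 5),  # the trusting player is exploited
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # mutual distrust: everyone loses
}

def play(choice_a: str, choice_b: str) -> tuple[int, int]:
    """Return (payoff_a, payoff_b) for a single round."""
    return PAYOFFS[(choice_a, choice_b)]

# Without communication or trust, defecting looks safest to each player...
print(play("defect", "defect"))        # (1, 1): a total pie of 2
# ...even though mutual cooperation makes the total pie three times larger.
print(play("cooperate", "cooperate"))  # (3, 3): a total pie of 6
```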

The family, tribe, or nation that can harness positive-sum networks will out-compete all others. Trust need not be direct; it can be through intermediary technologies including but not limited to money. “In God We Trust” testifies that my nation stands behind the legal tender value of the coin you give me, even if we are strangers.

Of course, some economic models appear at first to be positive sum but aren’t after accounting for the additional costs borne by society. For an economic model to provide net societal benefit (prosocial), it must grow the pie even after accounting for all true costs: pollution and other “externalities”, the “tragedy of the commons”, undue influence over regulators, exploitation of child labor, deadweight loss to monopoly, and other antisocial impacts.

Tying it Together

So how does this relate to the geopolitical rivalry, both economic and ideological, and to Big Tech?

Human history and game theory both predict that communication and trust will foster positive-sum networks, and that those networks in turn out-compete zero-sum or negative-sum rivals.

So, looking around at today’s landscape: do most existing digital media business models foster communication and trust? Do they bring us together in constructive ways that recognize where one person’s need is another’s opportunity? Do they foster productive conversations based on, or after establishing, a shared set of facts? Do they separate truth from falsehood, real humans from bots? Are they designed with our well-being in mind?

No, too many business models treat people as a source of data to be mined and sold. We are like a frog slowly boiled by toxic business models that exploit us. Each incursion is a small cut, but collectively and over time we are bled of agency. We have given up on privacy. Journalism, advertising, entertainment, media, politics, truth itself – all now bow to the business model that enriches Facebook, Google, and a few others.

Where’s the digital robot butler I was promised?

So forget the word privacy for a moment. Of all the writers who have mourned its loss, I won’t be the one who causes everyone to care about that word. What we lose sight of when we focus on privacy is the alternate universe of business models that become available when we regain agency.

Remember the desktop computer? I installed programs there and they ran for my benefit. When we moved to the cloud, that concept went away. Tim Berners-Lee, creator of the world wide web, is one of the few who point out the obvious problem: I need a piece of digital real estate that is mine. Only there can I run software powered by AI against any or all of my data, benefitting me and my loved ones.

I’m old enough to remember the dawn of the Internet and being promised a digital robot butler. Never mind flying cars (ok, I want those too); give me my digital robot butler. I don’t want an Alexa that only knows my Amazon history and that reports to Bezos. I don’t want a Portal that only knows my Facebook data-self and works for Zuckerberg. I want software that:

  • installs to my personal device or personal cloud,
  • cannot exfiltrate my data (governed by legal terms plus firewalls),
  • is truly secure,
  • brings together my entire digital world: all work email, personal messages, news, entertainment, health records, purchasing history, books I’ve read, places I’ve been, data that has been shared with me by friends and colleagues, everything,
  • keeps unwanted digital interlopers at bay (spammers, phishermen, hate speech, etc.),
  • allows smart, selective sharing with those I care about,
  • applies intelligence to my composite data in ways that serve my interests,
  • helps me filter, understand, and weigh the veracity of the media I take in,
  • identifies opportunities (ranging from retail sales to vacations to investment opportunities) that might genuinely interest me, and
  • lets me choose whether to pay for services via subscription or via a new prosocial advertising model.

Today, I don’t even have a place where the requisite data could safely come together. There are good reasons why my boss won’t let me export Salesforce and work email to my personal Google account, why I don’t want Facebook to access my text messages, why I don’t want any of them to have my health records, etc. And so even if Google built a true digital butler, it (a) would not serve my self-interest (given their business model) and (b) would not have access to the full corpus of data to maximize its value!

Only a composite data set, drawing upon the union of personal, shared (friends, family, colleagues), and public data, can provide the substrate for a digital butler to suggest:

“A recent study indicates that, given your MTHFR gene, folic acid can be toxic to you over time.”

“Your tax return is ready for review and approval. And I noticed that last year’s NJ refund check never arrived; I’ve opened an inquiry.”

“Prices for that Tahiti trip are at an all-time low. Shall I book it for November 7-14 when you and Sue are both free?”

“The shoes you always buy are 50% off this month. Shall I order a couple extra pairs?”

“You’ve been searching a lot on ‘self-publishing’. Here are three successful authors who would take your call.”

“A drug for your liver condition is entering human trials. You may wish to ask your doctor.”

“Here is an angel investing opportunity that seems to match your criteria. Shall I connect you?”

The more sensitive the data, the more value that can be extracted from it. Sensitive data capable of driving insights like those above could never be safely shared with firms whose business model benefits from selling access to me to the highest bidder, misinforming me, or manipulating me.

To be clear, the digital butler is not a single piece of software. It does not imply artificial general intelligence. Each use case above could be a very specialized, very narrow piece of code. Drawing from an app store filled with these unbundled intelligent algorithms, the user perception can be that of a single magical assistant.

Furthermore, you are not sending your data anywhere to achieve these miracles. Each piece of software will be not only governed by license but also monitored by a firewall. It can further be subject to certification and third-party testing to ensure it has no data exfiltration capabilities. Software always runs on a computer; when the data is sensitive, that computer should be owned by you.

As Berners-Lee’s vision for Solid gains traction, the personal cloud will become a service, but let’s start with the 7-ounce supercomputer in my pocket. This personal device is 32,000 times more powerful than the computer that sent us to the moon. If repurposed and optimized as a home for my digital butler, sending out fully anonymized data for external analytics when needed, and leveraging encrypted sync across my other devices for storage, the phone will do quite nicely.

One of those mirrored devices could be an actual mirror, perhaps a place for face-to-face conversations with my butler friend (or fairy godmother if you prefer a different avatar). Endless business models are possible using existing technologies once we regain agency over our data and enlist truly trusted digital assistants.

Media and advertising as the catalyst

The business models and use cases that become possible even just leveraging my own composite data will provide a strong pull for adoption. But Big Tech is a stable, albeit inferior, equilibrium; tipping away from it won’t be easy. It’s difficult to get people to change behavior or switch networks.

The catalyst for tipping into a new model is likely to come from the industries most battered by Big Tech’s disruption over the past two decades: journalism, entertainment, media, and yes, advertising.

Any producer of quality media, such as a reputable news organization, faces a conundrum: subscription revenue alone is rarely enough to satisfy business needs. Free is just too compelling and ad revenue can help fill the gap. Meanwhile, companies have products to sell and want the right product to be shown to the right target.

And so we have all been presented with a choice between untargeted, largely irrelevant, low-ROI ads and targeted, surveillance-based ads. This is a false choice, and an antisocial one at that. Sure, if I’ve googled “home repair” the local handyman might win the bid, serving exactly the ad I really should see. But if my grandmother’s personal data has shown her to be particularly gullible, the highest bidder for her attention might be a scammer. If your data suggests a rocky marriage, perhaps an Ashley Madison ad will win the bid.

That’s just the consumer’s standpoint. The media companies lose tens of billions of dollars! When I read a story at nytimes.com, advertisers bid based on my personal data. Do they then pay a hefty sum to the NY Times to serve those ads? No! According to Brian O’Kelley, creator of the current ad network system, many of those advertisers wait for me to later visit a click-bait site where I can be served ads more cheaply than via the Times (per his interview with James Ball). The Times loses out, hundreds of advertisers learn about me, and click-bait sites proliferate. As mentioned in this book’s introduction, that system is being dismantled right now.

Meanwhile, wouldn’t it make more sense for a personal digital butler to select my ads anonymously?

Selecting: If my wife and I have been texting about our growing family, wouldn’t the best ad of all be for a minivan? Even if I’m reading the sports pages, devoid of automotive context, that ad is perfect for me (and lucrative). My digital butler knows this. In fact, if I click the ad and buy the minivan, maybe I shouldn’t have to see any more ads this month!

Anonymously: I don’t want Big Tech reading my text messages to tell their ad network we have a baby on the way. Yes, I might be interested in that Dodge Caravan. But I don’t want strangers calling or emailing after tying my data together with brokered personal details. What’s more, there is no need for anyone to know all about me in advance; I’m served the ad and buy the vehicle just the same! And they could still track ad efficacy, relying on discount codes or other means.
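
As a sketch of how on-device ad selection could work (the protocol and field names below are hypothetical, not an existing standard): the ad network ships the same pool of candidate ads to everyone, and the butler scores them locally, so the personal signals never leave the device.

```python
# Hypothetical on-device ad selection: personal signals (texts, searches,
# calendar) are derived and used locally; only the winning impression is
# ever reported back. Field names are invented for illustration.

CANDIDATE_ADS = [  # delivered in bulk by the network, identical for everyone
    {"id": "minivan-01", "bid": 4.50, "hints": {"family_growing", "auto_intent"}},
    {"id": "sneaker-77", "bid": 1.20, "hints": {"running", "fitness"}},
]

def local_interest_signals() -> set[str]:
    """Computed privately by the digital butler; never transmitted."""
    return {"family_growing", "auto_intent"}

def select_ad(ads: list[dict]) -> dict | None:
    """Pick the highest-bidding ad whose hints overlap my local signals."""
    signals = local_interest_signals()
    matches = [ad for ad in ads if ad["hints"] & signals]
    return max(matches, key=lambda ad: ad["bid"]) if matches else None

chosen = select_ad(CANDIDATE_ADS)
print(chosen["id"])  # "minivan-01", chosen without anyone reading my texts
```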

The current equilibrium of Facebook-led surveillance capitalism is already poised for a fall, with Apple and Google deprecating personal tracking cookies as the immediate catalyst. The short-term replacement will be based on moderately interesting (but anonymous) personal data. The digital butler scenario, where my entire world of personal data feeds ad selection algorithms (while keeping me anonymous), is the most powerful of all. And in a capitalist system, incentives drive change. Your butler will be here before you know it.

Advertising is the tip of the (economic) iceberg

Ads, ads, ads. Yawn. What really drives the economy is commerce! My need is your opportunity so let’s achieve that positive sum outcome together. My garage is broken, you fix garages. I sell refrigerators, yours just broke. I need a job, you have an opening. I need to move out of town, you want to buy a house here.

Ads are just the shiny media version of information about an economic opportunity. Below the water line lies a chance to inform me (directly or via my digital butler) about your products, services, needs, and more. The product’s reputation and reviews can be part of what my butler analyzes. Details that are hard for humans to track down and compare (e.g. which minivans have side airbags) can be provided by the marketer as fielded data, ready to be digitally discovered. Facts can be checked, including by human fact-checkers, who in turn can be paid for their services.

Over time, my digital butler will become more discriminating, requiring more and more reliable information; it must be verifiable or at least not patently false or manipulative. Whereas Facebook ads notoriously disappear and are unavailable for audit (particularly dangerous, as we’ve seen, with political ads), this new model values transparency, auditability, and reputation. On this new playing field, prosocial values will out-compete subterfuge.

Who am I?

The Internet has a fraught relationship with anonymity. On the one hand, trolls and Russian operatives spread misinformation anonymously or using false identities. On the other hand, a degree of anonymity protects the legitimate interests of those expressing unpopular, controversial, or dangerous views (gay marriage, dissent within a repressive country, etc.).

But anonymity vs. fully transparent identity is another false choice! My bank has a legitimate need to know my full identity (name, address, government ID, etc.). But most interactions require far less.

When a woman goes to a bar, she shows her driver’s license to the bouncer to prove that she’s over 21. A total stranger has learned her name, where she lives, her birthday, and her driver’s license number. Meanwhile, an official document with her photo on it saying, “this individual is over 21” would be sufficient. Suppose the Department of Motor Vehicles issued, along with her driver’s license, a second card (physical or digital) linking her photo with the statement “verified over 21 by the DMV”.  Then she could buy that beer without over-sharing personal information.

For advertisers to adequately value my attention, they will naturally want to know who I am. Or at least, they will need to know why I am valuable to them. It would be interesting to an advertiser of minivans that (a) I have been looking for a minivan to buy and (b) I have sufficient income to buy one. But the advertiser doesn’t have a legitimate need to know my name, address, ages of my children, and my exact income decile. (They can learn all this and more via today’s ad system.) Instead, claim (a) can be assumed true based on my digital butler requesting a minivan ad, and claim (b) could be achieved as in the DMV scenario.

As an account holder with a modern bank, I should be able to secure a set of digital certificates to the effect of, “Verified to have a credit score over 700” or “Verified to have a bank balance over $10k”, etc. As an employee whose firm uses a modern payroll service, I should be able to secure a set of digital certificates to the effect of, “Verified to have an income over $50k”, and another for higher-end needs of “Verified to have an income over $75k”, etc. The exact details of my bank balance, income, where I work, and even my name, constitute over-sharing for most scenarios. Instead, a digital certificate claiming only the required information and issued by my bank or payroll company would be more than enough.

My digital butler and I would want to be careful to provide only one certificate for a given encounter so that my full identity cannot be triangulated. But it would clearly be possible to convince the minivan marketer that serving an ad is worth $x given just a couple of narrow pieces of information.

For low-stakes scenarios and during the early adoption phase, banks, payroll providers, and the government needn’t be the certificate issuers. A trusted third-party analytics firm could programmatically evaluate all the information that surveillance capitalist ad networks have already collected on me. The certified claims based on that data wouldn’t have the same force as if signed by a bank, but they could be readily generated at very low (zero?) cost without any system changes required. And of course, the most obvious credentials I should put into my digital wallet are “verified owner of my.name@email.com” and “verified owner of +1.732.555.1212”.
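
As a minimal sketch, here is the shape such a wallet credential could take. The field names are invented for illustration; the digital-signature machinery that makes the credential trustworthy is explained in the sidebar that follows.

```python
# Hypothetical wallet credential: a narrow claim, signed by the issuer.
credential = {
    "claim": "income_over_50k",                 # the only fact disclosed
    "issuer": "Example Payroll Co.",            # who vouches for the claim
    "subject": "holder of public key c3a9...",  # ties the claim to my keys
    "issued": "2021-09-01",
    "signature": "<issuer's digital signature over the fields above>",
}
# A verifier checks the signature against the issuer's public key and
# learns nothing else: not my name, my employer, or my exact income.
```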

Sidebar: A very brief crypto primer

The digital credentials I’m describing are not a new invention. The driver’s license story is a classic illustration of a “zero-knowledge proof” in cryptography. Here is a very simple primer on the basics of crypto, which deals with (a) encrypting/decrypting messages and (b) digitally signing a message to prove authorship. Together, these make today’s modern Internet possible. Used creatively, they form a foundation for prosocial capitalism.

Start with a plain-text message such as our plan for when to attack: “At dawn.” Let’s first represent this numerically (e.g. 4 for “d”) as “1-20 4-1-23-14”. So far, a child could decipher this. Next, we decide on an algorithm (a rule) for making further substitutions. For example, we could shift each number based on a password. If my password is “abc” then the “1-20” for “at” might become “2-22”. But almost any rule leaves some trace “clues” behind, such as the relative frequency of common words or vowels.
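
Here is that toy scheme in code. It is essentially the centuries-old Vigenère cipher, shown only to motivate what follows; letter-frequency analysis breaks it easily.

```python
# Toy cipher from the example: letters become numbers ("a" = 1 ... "z" = 26),
# each shifted by the repeating password ("abc" -> shifts of 1, 2, 3).
def toy_encrypt(message: str, password: str) -> str:
    shifts = [ord(c) - ord("a") + 1 for c in password.lower()]
    words, i = [], 0
    for word in message.lower().split():
        nums = []
        for ch in word:
            if ch.isalpha():
                n = ord(ch) - ord("a") + 1                      # "t" -> 20
                n = (n - 1 + shifts[i % len(shifts)]) % 26 + 1  # wrap past "z"
                nums.append(str(n))
                i += 1
        words.append("-".join(nums))
    return " ".join(words)

print(toy_encrypt("At dawn", "abc"))  # "2-22 7-2-25-17" ("1-20" became "2-22")
```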

Fortunately, mathematicians developed better algorithms including “AES” that are less vulnerable to statistical inference. But how could Alice tell her friend Bob the AES password that will be used to encrypt their messages?

To this end, mathematicians developed the RSA algorithm in the 1970s. RSA relies on a remarkable property of certain pairs of numbers. Number pairs can bear various inverse relationships, such as 6 with 36 (squaring / square root).  The RSA number pairs are special because their inverse relationships are more interesting: “encrypt / decrypt” and also “digitally sign / verify signature”.

In an instant on a modern device, Alice can generate a pair of these numbers. One number is called her “private key” and Alice will not share it (it will reside in a secure area of her device). The other number is called her “public key” and she can tell it to Bob. In fact, she doesn’t have to safeguard the public key at all. She can post it for all the world to see, for anyone who wants to send her an encrypted message.

Bob can use Alice’s public key to encrypt a message for her using RSA. The public key will not decrypt the message; only Alice’s private key can decrypt it! So only Alice can read Bob’s message (since only she holds the private key).
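
Here is the story so far as a sketch, using Python’s widely used cryptography package. (In practice, RSA encrypts something short, such as the AES password from the earlier question, rather than the message itself.)

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Alice generates her key pair in an instant on a modern device.
alice_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
alice_public = alice_private.public_key()   # safe to post for all the world

# Bob encrypts with Alice's public key...
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = alice_public.encrypt(b"At dawn.", oaep)

# ...and only Alice's private key can decrypt it.
print(alice_private.decrypt(ciphertext, oaep))  # b'At dawn.'
```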

A second question arises. How can Alice know that Bob is the true author of the message? What if Eve forged it, or intercepted Bob’s original message and altered it to read “Attack at sundown”? Here is where the second amazing property of key pairs applies.

Bob makes a pair of keys too, and Bob can digitally sign the message with his private key to prove authorship. A message signed using Bob’s private key can be verified by Alice using Bob’s public key.

If you’re still with me, let’s explore digital signing one level deeper. So far, we’ve discussed encryption, which preserves the full information content of the message, but scrambled. An encrypted message typically won’t get shorter (and might get longer). A related concept is a cryptographic “hash”, which is a short code that can only be generated by one particular message. So, for example, the full text of the Gettysburg Address has a hash code of 73203f72f551bd1326fc42e3acc03d98bfdbad5f5eb460d1796bd93101a0f250. (The full text of War and Peace would generate a different 64-character code.) You can’t unfurl that code back into the full Gettysburg Address, but only the Gettysburg Address can generate that code (via SHA-256, a member of the SHA-2 family of algorithms). If you changed so much as a comma in the original text, the hash code would be a radically different 64 characters.
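
In code, hashing is a one-liner. The snippet below hashes a stand-in string, so it prints a different 64-character code than the one quoted above (which was computed over the full text):

```python
import hashlib

# Any text, however long, hashes down to a fixed 64-hex-character code.
text = "Four score and seven years ago..."   # stand-in for the full Address
print(hashlib.sha256(text.encode()).hexdigest())
# Change so much as a comma in the text and the code changes radically.
```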

Bob doesn’t just send the message.

  • He also sends his public key.
  • And he computes a hash of the message (i.e. the code above, 73203… if he were sending the Gettysburg Address).
  • Then Bob encrypts that hash using his private key and shares the result.
  • Based on the text, Alice computes the same code (in this case 73203…).
  • Then she decrypts Bob’s encrypted hash using Bob’s public key to see if it matches 73203…

Thanks to the magical inverse property of the number pair, Alice knows Bob really is the author. More precisely, she knows that whoever holds the private key corresponding to that public key authored the message and that it has not been altered (so it is important for Bob to safeguard his private key). Of course, Alice and Bob don’t have to run the calculations themselves; they rely on software conveniences, like the lock icon browsers display while shopping (which shows that digital signatures have been verified), to know that communications are secure.
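
Here are Bob’s and Alice’s steps as a sketch in code. Real libraries bundle the hash-then-sign sequence into a single sign() call, but underneath they follow the same recipe described above.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

bob_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_public = bob_private.public_key()     # sent along with the message

message = b"At dawn."
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Bob: hash the message, then sign the hash with his private key.
signature = bob_private.sign(message, pss, hashes.SHA256())

# Alice: recompute the hash and check it against the signature using Bob's
# public key. verify() raises an exception if either the message or the
# signature was altered in transit; otherwise Bob's authorship is confirmed.
bob_public.verify(signature, message, pss, hashes.SHA256())
print("Bob is the author, and the message arrived unaltered")
```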

I am not an island

So far, our story entails (a) me, a guy with a secure device holding private access to my composite data universe, (b) my digital butler, (c) those offering news, media, journalism, entertainment, etc. (via subscription or ads) and (d) those who want to sell me products and services. Of course, that’s not a very social story yet. Facebook would already lose a lot of revenue to these ad networks. But so far we have not described rival social networks.

It is a design feature that our story doesn’t begin by setting up an alternate social network to unseat Facebook. The game theory dynamics would make switching directly away from Facebook very difficult. My friends and I would each be better off if we could communicate and share videos like we do today minus the spying. But if I join a new network and they haven’t joined yet, I’m worse off. It’s only if we all somehow coordinate to switch to new forms of social networking that we can be better off. We are victims of a “coordination problem” aka “collective action problem”.

Switching away from a social network poses this collective action problem. News, media, journalism, entertainment, and advertising do not pose one (I can enjoy them without my friends). If I am worth advertising to, these firms are glad to find me. If I offer a way for them to make more money from their service or monetize their ads more readily, they will respond. Collective action among my friend group is not required. That is fortunate, because until individuals regain agency in their digital social interactions, coordinating for positive sum collective action is hard.

It would absolutely be appropriate for governments to recognize Facebook as a monopolist, to unwind its acquisitions of Instagram and WhatsApp, and to provide tools that make it easy to get one’s data out of Facebook. For example, I should be able to read my Facebook stream using any compatible technology without sitting inside a Facebook application. I would want my digital butler to curate a single-stream feed based on my own interests, to mix it with genuinely interesting (and true) news, to auto-index photos that I enjoy (for viewing outside of Facebook), and more. If Facebook needs to recoup a certain amount of ad revenue to make up for losing my eyeballs in-app, I could either pay a small appropriate fee or provide recompense via the prosocial ads I view using the new prosocial system. Or they can absorb the cost as part of their antitrust consent decree.

Tipping social systems

A consent decree with teeth would speed our progress toward a more prosocial outcome. But fortunately, while we wait, the story we’ve set up so far already represents a tipping point. My device (and its mirror devices) will have a digital wallet with a set of credentials (“verified home-owner”, etc.) that can be used to fuel the new media + ads regime. The free market will ensure this because of the profit motive of each player. This wouldn’t be a niche outcome; my friends will have a similar setup, and so will everyone else. It’s just software after all! None of us is switching to a new social network; we’re just reading our news app (which has updated its software to serve more lucrative ads), watching Netflix (which has updated its software too), etc.

The digital identity certificates that attest to who I am (at various levels of disclosure) not only power the new ad market. They also let me advertise for opportunities – directly or via my digital butler: e.g. “Mayberry, PA homeowner with credit score over 725 seeks to refinance fixed 30-year mortgage at lowest rate available”.  I could have dozens of these reverse-ad campaigns, where I’m seeking an economic outcome rather than just passively receiving ads, going at any one time. And yet no one has my name or contact information to call me directly; I instead screen respondents (with help from my digital butler) and decide whether to engage.

The identity certificates also help solve a hard problem: how to richly communicate in a trusted manner with people in a decentralized system. Phone numbers and email addresses allow me to communicate and share with you in a decentralized but very simple way. For richer communication features like threaded comments, photo tagging, etc., I currently rely on a centralized system like Facebook. I have to choose a silo (Facebook for social, LinkedIn for business networking, Rover for dog-walking, etc.), and there is probably one winner-take-all silo for any given need. But there is no natural reason why any of these (and other) Internet experiences must be centralized to their respective winner-take-all silo!

In fact, decentralized networks already exist. If I want to chat or post anonymously, with threaded comments, there are many services to choose from. A huge challenge has been identity: enabling the rich, trusted communications required for dog-walking, ride-hailing, etc. Typically, this requires an Internet company as the central arbiter of “is this really you” (and of course, most do a terrible job of it). But the wallet of credentials used to power prosocial ads can address identity without the need for a central hub! The stage is set for decentralized networks of all sorts.

Decentralized networking made easy

Let’s envision a super-simple decentralized social network. Perhaps I start with an application like the one we’re developing at DotAlign to privately analyze my communications and networks. It’s a prosocial application, so nothing leaves my trusted devices. The relationship intelligence algorithms privately identify the 200 people who seem to matter most to me. If I wish, I make a post that can hit Facebook, Twitter, etc. (and/or it could email them). And the post nudges them toward using the decentralized option going forward; here’s how.

First, let me emphasize that good software will hide complexity from users. I’d simply decide, for example, “post this message about NJ politics publicly but anonymously using my ‘NJ residency’ credential”. Or “post this privately to my College Friends group”. Behind the scenes…

There are well-established decentralized file storage systems, like the amazingly named Interplanetary File System (IPFS), where I can post messages, files, images, etc. I could digitally sign content using my full identity or via a credential such as “verified NJ resident” if this is a statement about NJ politics. For private posts, the software could place a copy of the encrypted data at the equivalent of ipfs\\your.name@email.com\ instead of (or in addition to) emailing you. Or the address on IPFS could use some other handle I have for you, such as your Facebook ID. Assuming you’ve used a prosocial app and have generated a public key already, the software can encrypt the message using your public key. Your digital butler will know where to find any such messages (and can ignore them if my identity proof is lacking or indicates a spammer).
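
Behind the scenes, a private post could look something like the sketch below. It assumes a local IPFS daemon and the ipfshttpclient Python package; the handle-to-address naming layer is hypothetical, since raw IPFS addresses content by hash, so a real system would need a directory service (such as IPNS) on top.

```python
import ipfshttpclient
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def post_privately(message: bytes, recipient_public_key) -> str:
    # Encrypt with the recipient's public key so only they can read it.
    # (In practice you would encrypt a short symmetric key this way, not
    # the whole message, as noted in the crypto sidebar.)
    ciphertext = recipient_public_key.encrypt(
        message,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    # Store the encrypted blob on IPFS; the returned content ID (CID) is
    # what a naming layer would map to a handle like your.name@email.com.
    with ipfshttpclient.connect() as client:
        return client.add_bytes(ciphertext)

# cid = post_privately(b"New photos of the kids!", friends_public_key)
# The friend's digital butler resolves the handle, fetches, and decrypts.
```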

If you haven’t yet joined, I could still post to that location, though the mechanics of resolving the right key are more complicated (and likely involve a trusted third party). But there is the nice benefit that you’ve now accumulated a bunch of stuff in your inbox from friends, generating FOMO. Especially if it’s easy to dual-post to legacy networks plus the new system, this might become your preferred way to post as well.

Now instead of checking multiple apps, your digital butler will bring together a single stream of all the data to which you’re entitled, drawing from news, media, entertainment, legacy social networks, decentralized social systems, work email, personal email, texts, and more. They’ll be brought together onto one of your devices, mirrored via encryption across your other devices (including your magic mirror), and used for your sole benefit. The data and communications can be analyzed to highlight useful products and services, networking opportunities, relevant job postings, etc. Spammers, phishermen, bots, and hate speech can all be filtered out by your digital butler. You now have agency over your digital information stream and communications.

Lessons from Linux

The Internet was conceived as a decentralized system of communication nodes. The world wide web opened that up to regular people via hypertext and web browsers. And yet, the industry became incredibly concentrated, dominated by a small handful of firms. Whoever is in the lead gets even more funding, buys out even more competitors, charges even less (free, of course, is the norm), attracts more ad revenue and venture capital, etc. Winner-take-all economics are the norm. Quick, name the #2 online auction site, the #2 business networking site, the #2 online retailer… It’s not easy. These days, #2 is usually acquired by #1 while regulators seem asleep at the switch. Instagram and WhatsApp are prime examples.

Of course, there was a time when Microsoft’s operating system, Windows, also seemed unassailable. Then Linus Torvalds created Linux. Linux isn’t a company (though many have made money from it); it is open-source software. Torvalds later created Git, a system for massive-scale collaboration on coding initiatives. Richard Stallman had created the GNU General Public License, which Linux adopted so that any enhancements to Linux would in turn be open source. Thanks to that license, whenever someone addressed a need related to Linux, all of Linux got better.

Today, while Microsoft remains dominant on desktops, Linux is dominant for cloud computing. What’s more, no one can buy Linux away from the world. Linux is therefore a case study of how a movement can leverage community collaboration and a thoughtful licensing model for jiu jitsu against a monopolist.

Prosocial Licensing

Suppose a venture capitalist reads the preceding pages and agrees with this vision. Everything hums along smoothly at the startup they fund until success is evident and then… Google, Facebook, or another Big Tech firm buys them out. That was the VC’s end game after all. But as is so often the case, the acquiror changes the business model, privacy policies are decimated, and the cycle begins again.

What if, instead, the licensing model forbade misuse of personal data and other shenanigans down the road? Furthermore, the license could be made binding on any acquiror. It could apply to all data from any application (including the acquiror’s flagship products) derived from a user of the prosocial-licensed application. It could require 100% consensus from all developers and all users to change the license terms. This piece of software would never, and could never, be acquired by a surveillance capitalist company.

Exfiltration of personal data is just one antisocial behavior that can be prohibited by license. Exploitation of workers (based on relative wages, lack of profit-sharing, or other measures) could be a behavior prohibited by license for any intellectual property underpinning, for example, a prosocial rival to Uber.

Any such licenses would seem at odds with capitalist incentives. Certainly, from the point of view of a Silicon Valley venture firm, monetization via exit to Facebook or Google (or a company that hopes to one day sell to one of them) would be out of the question. But this can very definitely be a profitable commercial license.

First, unlike traditional open source, the software need not be free (it needn’t even be open source). Suppose a group of people decide to create a prosocial alternative to Uber, starting in Philadelphia. Maybe one writes code for GPS, maps, and logistics, another does marketing, another recruits drivers, and so on. The coder is incentivized in several ways: she will earn a “fair” revenue split (allocation is discussed below), she can potentially reuse that code library in other prosocial apps requiring a GPS/logistics component (and earn a fair share there too), and she has the peace of mind of knowing that the entire enterprise is prosocial (no driver exploitation, etc.). Uber can’t buy out this entity, at least not without adhering to the same prosocial model firmwide.

If the GPS/logistics module coder owns her IP free and clear, she can choose to dual-license it (many software projects are dual-licensed today, as with the MIT License) with the other fork’s revenue streams governed by any other license terms. If she needs to earn steady wages while waiting for profitability, she can form an LLC with a capital partner who believes in her GPS/logistics work. The entire enterprise could use a Kickstarter-style approach to solve the inherent collective action problem, i.e. needing a critical mass of drivers and would-be ride-hailers. (More on this later!) Many business models are possible.

There could be open-source and closed-source versions of the prosocial license. There could be free and for-profit versions. There can be license versions for software and others for music and other media. The Electronic Frontier Foundation or other well-known organizations focused on licensing issues can work out variations for different needs.

Suppose an artist creates a new song. Perhaps she dual-licenses it (a) for use in a particular movie and (b) via a prosocial license for all other usages. A prosocial music app, which the artist doesn’t have to create herself, can offer unlimited music in exchange for hearing occasional ads via the prosocial ad network. This artist could be paid her “fair share” based on relative listen times, while the prosocial license itself ensures fair allocation between the tech players and the artists. Apple and Google’s power in the music market relative to artists would be held in check.

Moving from a no-revenue model (traditional open source) to a prosocial capitalism model (earning revenue while subject to license terms) does necessitate answering the question, “how will profits be allocated?” Entities will compete for talent and IP based on track records of fairness and specific license and governance terms. An industry would evolve in which consultants develop a track record for fairly attributing relative value. Ascribing relative contributions, deciding how much to reinvest, and setting prices are all valid functions of an organization. Those needs have to be met in any economic model that does not assume free labor. There will be challenges, but at the end of the day, the chance to earn my fair share (instead of the false choice between traditional employment, where I own nothing, and traditional open source, where I earn nothing) will be a powerful added incentive to participate in the prosocial economy. Incentives, after all, influence behavior.

The wide variety of possibilities, both in terms of licensing and revenue sharing regimes, is a feature, not a bug. Individuals and companies will compete in new ways that lead to much greater economic vibrancy than our oligopolists offer today.

Decentralized and also prosocial

Technology on its own is amoral. The printing press could distribute Shakespeare but also the Malleus Maleficarum, the treatise on witchcraft that fueled the burning of tens of thousands of innocent women. Facebook has given dissidents voice, but it also helped fuel the genocide in Myanmar. A new messaging app can be used to share photos of kittens or to celebrate hate crimes.

Big Tech has an answer: they spy on all your communications and set terms of use to hopefully have more kittens and fewer hate crimes. (Of course, they also cave to rules set by autocratic regimes as the price of growth.) This is just the latest manifestation of the Hobbesian maxim: If you want protection from chaos and nastiness, there is no choice but to submit to the Leviathan (in this case Facebook).

In notable cases, ethical coders have built decentralized networking software with the best of intentions, only to have it hijacked by neo-Nazis. After all, technologies are not usually designed from the ground up with game theory and morality in mind. They lack features to favor positive sum prosocial outcomes over antisocial misuse.

And yet, I’m optimistic about decentralized systems. I believe that armies of coders, artists, media companies, and consumers will dedicate time and energy to ensure that prosocial business models out-compete amoral or antisocial equivalents – while also offering those creators the capitalist incentives they deserve. Depraved and dangerous decentralized networks will starve for oxygen relative to those that serve the interests of normal people, and which advertisers find safer for their brands.

If, in fact, prosocial (not just amoral) decentralized systems are the goal, neo-Nazis and their ilk must stand at an inherent asymmetric disadvantage. Fortunately, in terms of system design, decentralized does not have to imply “no rules at all”. Prosocial networks need not tolerate hate speech and antisocial behavior just because they are decentralized. If engineered properly, they can retain a prosocial environment and out-compete nasty, brutish, antisocial competitors. If members encounter child pornography, terrorist propaganda, or similar material, or if it is recognized by AI software, appealing to a central Leviathan is not the only recourse. Members could be randomly selected and offered due compensation to review content removal appeals, or in egregious cases, requests for user sanctions.

Sanctioning a fully anonymous user is a game of whack-a-mole, as Facebook and Twitter have learned in their never-ending battle against the bots. But in a prosocial network where identity credentials matter, sanctions can have teeth. At a minimum, a particular identity credential can be burned with regard to use on a particular network. In more sophisticated systems, a family of credentials related to a particular individual (at least as issued by a particular certificate provider, e.g. a particular bank) can be burned up, downgraded, or sanctioned.

Unlike technologies, people are imbued with morality. Yes, humans have an ugly tendency to favor their in-groups and show hostility to “others”. Yes, some percent of people are sociopaths. Questions relating to one’s own self-interest or identity will likely trigger biased responses. Even democracies sometimes succumb to the bloodlust of the majority, trampling on the rights of the minority. Usually, the powerful decide what is acceptable, and that is not always compatible with what is moral.

Fortunately, the overwhelming majority of people are good and decent, know right from wrong, and seek to avoid the social shame that comes from trampling norms and values. Only a small portion of people are actual sociopaths; many more are simply misguided about issues that cut close enough to trigger identity or in-group biases. These same people would recognize a war crime if they saw it, assuming the perpetrator is not in an allied uniform.

In 1948, just after history’s deadliest war, almost all nations (the Soviet bloc and a few others abstaining) adopted the Universal Declaration of Human Rights. It is a beautiful document. I’ve taken to thinking of it as a Terms of Use for prosocial capitalism, and I believe the successful networks will be those that embrace similar principles. If you read it (please do), you might be struck by how frequently human societies fail to live up to its ideals. But I believe it will also strike you as a worthy moral North Star. Humans at least generally agree on the basics of human decency.

Some countries come closer to the Universal Declaration’s ideals than others. Even the freest nations on Earth get tripped up by commercial interests and corrupting influences from time to time. In some US states, sharing videos from animal slaughterhouses as evidence of unethical treatment is illegal and branded as a form of terrorism (not the cruel treatment; the videography is the crime). The Declaration of Human Rights would recognize this as backwards. In authoritarian regimes, speaking out against the dictator is illegal, punishable by torture; the Declaration would recognize this as twisted. For any decentralized system to offer consistency and provide safe haven to legitimate speech, its terms would need to be compatible with Universal rights, not tribal or parochial concerns.

Mobilizing forces for good to solve coordination and collective action problems, while robbing evildoers of equivalent advantage, has been impossible until now. In decentralized, anarchic, fully anonymous systems without identity verification or source-checking there is little basis for trust. Meanwhile, in centralized systems requiring real names, individual self-interest can be incompatible with the Leviathan, and free speech and online organizing can be dangerous. With decentralized communications that use identity proofs and pay careful attention to game theory, new avenues become available.

Until now, decentralized generally implied anonymous. It is very hard to establish civility and punish bad behavior in a fully anonymous system. But armed with a trusted device, a digital butler, and identity credentials (e.g. “posted by a verified NJ homeowner”), new models can thrive. After all, who cares what a Russian bot thinks about NJ politics! And why spend time on a NJ political forum that gives those bots equal airtime?

Taking stock of decentralized prosocial systems, key elements would include:

  • Individuals vested with agency via a secure personal device running software that treats that individual as its customer, not a source of data to be exfiltrated
  • Prosocial economic models that out-compete antisocial alternatives by appealing to the self-interest of media firms, advertisers, and end-users
  • Identity credentials that allow engagement at various levels of anonymity, yet always, at minimum, differentiating real humans from bots
  • Licensing models that offer a profit incentive to coders, artists, and content producers, allowing distribution and reuse for profit (inspired by zero-compensation open-source licensing models, but in this case allowing contributors to earn their fair share)
  • A community-led, democratic immune system that responds to pathogenic antisocial attacks in violation of Universal human decency norms

Collective action against autocrats

Suppose I live in a country ruled by an autocrat. Those who speak out against the dictator are routinely tortured and their families are murdered. My co-worker tried to spark an uprising, posting with his real name on Big Tech social media; the authorities easily tracked him down and executed him. My cousin tried the same thing but using a fake name; of course, no one could tell if he was a government agent in disguise, so his post had no effect.  Another friend downloaded specialized software used by dissidents; the suspicious download marked him as a person of interest for questioning.

Recently, I learned of a new app for language translation. When activated, it listens to conversations and provides real-time translations, either spoken-word or as a written archive. Its publisher is a Scandinavian firm and the app participates in the modern ad network. It bears a prosocial compliance audit certificate digitally signed by a reputable human rights NGO, among others. My country isn’t yet as locked down as North Korea, so I was able to download this popular app without generating suspicion. Furthermore, decentralized prosocial apps are like a hydra; any one app serves as a gateway and directory to acquire others. There is no central app store where the regime can focus its censorial energy.

I follow a couple setup steps, much as I would for an Airbnb account (a quick photo along with my driver’s license), and my account is activated. Based on the document I provided, a set of credentials is generated, digitally signed by the Scandinavian publisher after clerical review, and saved to a secure vault on my phone. For example, one credential lists me as a verified adult male resident of my city. The translation application is free to use because merchants offer ads via the prosocial network. Government propaganda never reaches through this network; its claims would not survive the fact-checkers.

Over time, the app improves my fluency in some of the regional dialects, and I gain an additional credential to that effect. I use the ad network in reverse to market my services as a translator to merchants in the border region.

One day, I slide into the passenger seat of a truck next to today’s client, a uniformed member of the regime. In the back cabin he points to an unusual freight: the bodies of children who have been gassed. I swallow hard and hide my fear. Why did he summon me here? I keep up the pretense that he really needs my services, but I know this is pure terror and intimidation. The regime has grown so emboldened that their atrocities have become a tool of control.

I surreptitiously activate the translation app in silent mode. The software dutifully records his confession, or should I say boast, about the atrocity he and the general committed. His inflection and manner could just as easily be describing a successful fishing trip. I am sickened.

That night, I review the encrypted recordings and remind myself of the prosocial political asylum protocol. But I am too fearful for my relatives to release this recording and seek asylum today. Instead, I sync the encrypted data to decentralized storage and activate the dead-man’s switch feature. If anything should happen to me, the recording will be publicized. Now I, an ordinary villager, have leverage over the regime. Those children will be avenged.

Emboldened and angry, I agree to attend my uncle’s weekly resistance meeting for the first time. A dozen brave men and women meet me, none exchanging names. They are trusted by my uncle, and that is all any of us need know. With a tap from my uncle’s phone against mine, I receive an unusual form of identity credential: I can now post anonymously with the credential “trusted member of the Resistance, signed Eldermoor”. A woman seated nearby shows me how to use the certificate, how to trigger its self-destruction, and more tradecraft.

Who is Eldermoor? Few know his true identity; I have always called him Uncle. But Eldermoor was the identity credential used to publish the Resistance Thesis, a provocative document that shook the regime last spring and led to mass riots. Admiral Y***** himself, now in political exile, was famously televised saying that Eldermoor (the bearer of that certificate, digitally signed by Human Rights Watch) is the trusted inside lead for the Resistance.

And so, as a trusted deputy of Eldermoor, I am now empowered to carefully sow the seeds of Resistance among those I trust. Each recruited member moves us closer to 5,000, the threshold that Eldermoor believes could trigger regime change in a bloodless coup – if and only if we all take collective action at once. Silent, hidden little brothers one moment and a crowd pouring into the square to dethrone the tyrant the next, we are the reason Big Brother is now vulnerable.

In the event we succeed, Eldermoor and those who committed early to the cause will be front-runner candidates to lead in a new government. Like a high-stakes Kickstarter campaign, this incentive to commit early, when the risks are greatest, helps turn the flywheel for collective action.

Collective action more generally

The story of Eldermoor is a tale of collective action. A better world is possible, but the participants are stuck in an inferior equilibrium. The story mentioned Kickstarter with good reason; it is built on design principles that help solve coordination problems. Prosocial principles go further, and they will evolve over time. Remember: the more a system favors communication and trust to build positive-sum networks, the faster that network builds value and out-competes its rivals.

Climate change is a collective action problem. Voting out corrupt politicians is a collective action problem. Oppression in all its forms. Nuclear disarmament. Global minimum tax rates. Environmental degradation.

I’ll give an example close to home. I live on the Raritan Bay across from New York City. The Jersey shore is much cleaner than it was in the 1980s. In the winter, seals frolic in the bay. Ospreys and bald eagles flourish. And yet, when it rains a lot, a tide of raw sewage floats in. The NJ Department of Environmental Protection calls this a “rain event” or “floatables washup”. I suspect they choose a euphemism because they feel powerless to stop this environmental assault. There must be a lot of pressure to downplay it, to keep tourism and fishing fleets thriving at the Jersey shore. I understand the good intentions, but there is a better solution.

NY and NJ still rely on antiquated “combined sewer” systems, whose overflows (“Combined Sewer Overflows”) mix raw sewage with excess rainwater and drain into waterways. Ensuring proper sewage treatment every day (not just when it’s sunny!) would be an infrastructure investment, create jobs, improve our fisheries, and increase tourism. But even I feel shy about mentioning this problem here, because drawing attention to 20 billion gallons of feces-polluted water per year is not good for my home value or mental health. And yet, I find it hard to see how actually fixing this problem would be against anyone’s self-interest. I’ve lived in the same county for over 40 years, and no one has fixed it because it is a collective action problem. We need to collaborate on trusted communication networks where we have agency and where it’s not just the loudest voices that win. And then, hopefully, we can all win (ok, maybe not the sociopaths and nihilists).

Community-level collective action, as with my own bay area, is already big and important. But to give a sense of the ultimate stakes of solving coordination problems, avoiding nuclear holocaust is also a collective action problem… 

There are more than 100,000,000,000 planets in our galaxy alone. It seems highly improbable that only one of them would host intelligent life. And so scientists have wondered, “Where are all the aliens?” One leading hypothesis has been that soon after inventing advanced technology, the competitive forces that spurred innovation in the first place become impossible to contain, and the civilizations self-annihilate in planetary apocalypse, nuclear or otherwise.

If there really are aliens, which no longer sounds crazy after the Pentagon’s announcement, then I have hope that collective action problems really can be solved. The aliens didn’t blow themselves up. Maybe we won’t either.