By Jake Brukhman (@jbrukh). I am cross-posting this article from CoinFund's Blockchain Investments Blog.
“But great power carries with it great responsibility, and great responsibility entails a large amount of anxiety.”
— Sir Hercules G. R. Robinson
Last summer Andreessen Horowitz and Union Square Ventures announced a $1M seed round in OpenBazaar. Implicitly everyone understood that these highly respected traditional VC firms were, in effect, funding the same kind of proposition that five months earlier landed the alleged proprietor of Silk Road, Ross Ulbricht, in jail for the rest of his life. Consequently, Brad Burnham of Union Square couldn’t responsibly end his blog post on the announcement without “addressing the potential dark side of decentralized markets.” It was a nod and not an admission, but the unspoken analogy hanging in the air was obvious.
Then, less than a year later, this happened: within hours of launch, illegal contraband appeared on the newly released OpenBazaar decentralized marketplace, because the free market and anonymity of the platform allowed it. Concurrently, the advent of DAO technology inspired Daemon, a straight-up darknet market, to announce a decentralized crowdfunding campaign. It is allegedly built on the DAO work of the Slock.it team, powered by Ethereum, and hosted on Tor.
Up until now, few people have brought up the fact that the blockchain community, and more generally anyone building decentralized applications right now, are taking on a “great responsibility” for unleashing vast and unpredictable systems into the world. In my opinion, we should be much more anxious than we seem to be.
Alex Bulkin rightly calls this the “elephant in the room”. He posits that letting loose uncensored, anonymous, free market activity on a large scale creates a massive (and unaddressed) potential for destructive unethical behavior, and I’d add to that analysis that such behavior may not even be entirely intentional. The idea that technological systems can be put to unethical use is certainly not new. We’ve lived for many years with its many diverse forms: shady IRC chatrooms, highly questionable subreddits, BitTorrent trackers in grey areas of legality, the Deep Web, and of course Silk Road and other darknet markets. But given the rapid trajectory of growth in the blockchain space and its networks, the difference is scale.
Our fund, CoinFund, is very much philosophically invested in the principle of a free, open, and uncensored market. But should we invest in the DAO crowdfunding of a platform such as Daemon, which seems to enable, and even implicitly support, highly questionable activities? Intuitively, the answer seems to be no. Yet, without a framework or set of principles about what kinds of open investments are acceptable, we are at a loss to explain our exact reasoning.
The issue is not so much, “Where are we going?” (humans and economies are great at adapting to new technological circumstances and making them work) but rather, “How will we get there?” I would like to attempt to formulate a framework for thinking about unethical emergent behaviors that arise in decentralized systems, to recognize that they are likely short-term but impactful reverberations, and to join Alex in calling on the community to debate this issue.
The DDoS marketplace
One day, someone creates a decentralized application on a smart contract platform such as Ethereum, and this application implements a DDoS marketplace. What is a DDoS marketplace? It is a marketplace for crowdfunding DDoS attacks aimed at particular websites. The user enters a domain name and the application estimates the number of requests per second required to make the site unreachable or unusable for a period of time. The application also quotes a fixed amount of money that the market is willing to transfer to any network node that provably performs an HTTP request to the site. The entire Internet then proceeds to globally crowdfund the market. When the minimum raise is achieved, the application puts the value proposition to the network: if you can prove that you made a request to this domain, I will pay you. Go!
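To make the mechanics concrete, here is a minimal sketch of such a marketplace’s core logic in Python. It is illustrative pseudocode rather than an actual smart contract, and every name in it (DDoSMarket, pledge, claim_bounty, verify_request_proof) is hypothetical.

```python
# Illustrative pseudocode only; in practice this logic would live in a
# smart contract. All names and parameters here are hypothetical.

def verify_request_proof(proof, domain):
    """Trustlessly verifying that a node really made an HTTP request to
    `domain` is the hard, unsolved part of the scheme; it is left
    unimplemented here."""
    raise NotImplementedError

class DDoSMarket:
    def __init__(self, target_domain, bounty_per_request, minimum_raise):
        self.target_domain = target_domain            # site to be attacked
        self.bounty_per_request = bounty_per_request  # payout per proven request
        self.minimum_raise = minimum_raise            # funds required before launch
        self.pool = 0.0                               # crowdfunded balance

    def pledge(self, amount):
        """Anyone, anywhere, can contribute funds to the pool."""
        self.pool += amount

    def is_live(self):
        """The value proposition goes out only once the minimum raise is met."""
        return self.pool >= self.minimum_raise

    def claim_bounty(self, proof):
        """Pay any node that can prove it made a request to the target."""
        if (self.is_live()
                and self.pool >= self.bounty_per_request
                and verify_request_proof(proof, self.target_domain)):
            self.pool -= self.bounty_per_request
            return self.bounty_per_request   # transferred to the claiming node
        return 0.0
```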
The economics are clear. No one is stopping anyone from creating such a marketplace, nor such a market, nor is such an application very susceptible to regulation or law enforcement. The risk that this application will actually be built is non-trivial, and every day it inches closer to fruition. If you don’t want to wait until then, you can buy small, cheap DDoS attacks from private botnets today.
The decentralized version of this system is full of emergent behaviors. Few individuals would pay a lot of money to single-handedly DDoS a site. But a decentralized DDoS marketplace harnesses the democratic power that the aggregation of global capital provides. And at scale, it has the potential to turn the political sentiments of an entire group into a very physical event with very real consequences for individual freedoms. Mind-bogglingly, it’s not even required that someone makes the conscious decision, “I am going to implement a DDoS market.” A DDoS market can simply emerge from some general configuration of free market incentives. For example, it is trivial to accidentally build a DDoS market on a prediction platform like Augur: one only needs to neglect to think it through and enter the question, “Will Disney’s website be down today?” as the future event. What else are we forgetting to think about?
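A back-of-the-envelope sketch shows why such a question quietly becomes a bounty. All prices and costs below are invented for illustration, and real prediction markets add liquidity limits, fees, and price impact that this toy calculation ignores.

```python
# Invented numbers for illustration only.
yes_price = 0.05        # market's implied probability that the site goes down
payout_if_down = 1.00   # value of a winning "yes" share
shares_bought = 100_000

stake = shares_bought * yes_price          # $5,000 to take the "yes" side
winnings = shares_bought * payout_if_down  # $100,000 if the outcome occurs
attack_cost = 2_000                        # hypothetical cost of forcing the outcome

profit = winnings - stake - attack_cost
print(f"Profit from forcing the outcome: ${profit:,.2f}")  # $93,000.00
```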
Finally, individual nodes on the network may not even be aware that they are participating in an activity that, in aggregate, is unethical. If a well-paid bounty for an HTTP request comes down the pipe, who is to say exactly why it is there? Chances are, the node won’t think much of it: making HTTP requests is neither illegal nor novel. And such is the trouble of emergent behaviors. Just like the event’s funding, responsibility for the event is spread across a large group, no single member of which is directly accountable.
From the standpoint of the victim, a swarm of HTTP requests comes their way and disturbs their enterprise. There is downtime and interruption of service. Sure, there are ways to protect sites from DDoS attacks, but they cost money and not every possible attack vector is preventable. At the end of the day, a market can censor a victim.
At scale, the inefficiencies in these systems collapse to zero; what emerges can only be described as a kind of group telekinesis. DDoS markets are created in milliseconds in response to sentiment. The markets are funded in milliseconds because there are hierarchies of automated investors monitoring for opportunities. DDoS attacks therefore begin in milliseconds as well, with armies of nodes happily performing automated attacks and collecting payments. The emergent behavior of such a system is that the political will of groups to censor targets is carried out instantaneously. This kind of automation could make the case of Justine Sacco, whose life was supposedly ruined by the popular will of Twitter over the course of an airline flight, look like a tricycle next to an F-16 fighter jet.
What does the proliferation of DDoS marketplaces mean at scale? Does it mean that celebrity popularity contests will be incontrovertibly decided by the new and sudden power of groups to shut down their websites, to censor their data, to invade their privacy, and, ultimately, to put them out of business? (Will you be able to resist raising the proverbial digital pitchfork at that spoiled brat, Justin Bieber?) Will the uptime of the media outlets of presidential candidates become proportional to their polls and debate results? Will “state-run media” be eliminated by reverse censorship perpetrated by the populace against their oppressors?
How do the user, the platform, the application, the victim, and the system as a whole provide ethical guarantees in such a Wild West of incentives?
On ethics in decentralized networks
Google can only afford half a trillion dollars’ worth of computer nodes, but a global network owned by 7 billion individuals and an equal number of mobile devices can grow massively, massively larger. That is the point: scale.
When cryptocurrency is introduced into a global, decentralized computer network, it swiftly unlocks an emergent behavior. That is, given the ability to transfer value programmatically, nodes may compel other nodes to do their bidding through various incentivization processes. It is important to note that “nodes” here refers to completely self-operating, globally distributed, possibly anonymous, and untrusted nodes.
Whether the nodes are operated by humans or by software is immaterial — they exhibit, on average, the same behavioral tendencies:
- Nodes are greedy. If value can be acquired by nodes, it will be acquired by nodes: they will generally fulfill economic demand when it is viable to do so.
- Nodes are amoral. The behavior of nodes is based on economic considerations within the network and not on ethical or jurisdictional requirements.
Probably the most advanced real-world example today of the unintended consequences I am describing is the behavior of brain wallets, a Bitcoin address generation scheme that was shown to be insecure. The idea behind brain wallets is that, because Bitcoin addresses and private keys are hard to memorize, generating a key from a mentally stored password would achieve both convenience and security. Shortly after brain wallets came into use, this happened:
According to researchers, many wallets were drained within minutes, while most were emptied within 24 hours. […] Experts identified 884 brain wallets storing 1,806 BTC (worth approximately $100,000), and determined that only 21 of them, representing 2 percent of the total, were not drained by cybercriminals.
Of course, the “cybercriminals” in question were actually pieces of automated software running on anonymous nodes; in fact, one doesn’t even need to be connected to the Internet to brute-force a result. Putting your money into an insecure wallet today is like throwing it away, and that’s obvious, but consider that this system arose simply out of carelessly generated economic incentives and in total contradiction to the intentions of the wallet software.
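For the curious, the classic brain wallet scheme derived the private key directly from a hash of the passphrase, which reduces “draining the wallet” to a dictionary attack. Here is a minimal sketch, with the address-derivation step left as a hypothetical helper standing in for the real secp256k1 and hashing steps.

```python
import hashlib

def brainwallet_private_key(passphrase: str) -> bytes:
    """Classic brain wallet: the private key is simply SHA-256 of the passphrase."""
    return hashlib.sha256(passphrase.encode("utf-8")).digest()

def sweep(wordlist, funded_addresses, key_to_address):
    """Dictionary attack: derive keys from likely passphrases and check
    whether the corresponding address holds funds.

    `key_to_address` is a hypothetical stand-in for the real derivation
    (secp256k1 public key, then SHA-256/RIPEMD-160 hashing and Base58Check).
    """
    for phrase in wordlist:
        key = brainwallet_private_key(phrase)
        if key_to_address(key) in funded_addresses:
            yield phrase, key   # the attacker can now sign those funds away
```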
By example we have demonstrated the basic underlying amoral mechanism of cryptocurrency-enabled decentralized networks, which is worthy of a name: let’s call it the amoral bounty model. In this model, (i) a viable economic pressure exists or is created (as through a smart contract, DAO, bounty, vulnerability, or centralized service), which (ii) incentivizes a large decentralized network of nodes, as above, to (iii) rectify it through purely economic considerations.
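One way to see why the model is amoral is that nothing in a node’s decision procedure ever inspects what the bounty is for. A deliberately simplified sketch, with all names hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Bounty:
    description: str  # "cure cancer" or "censor this website": never inspected
    payout: float     # (i) the viable economic pressure

def node_accepts(bounty: Bounty, estimated_cost: float) -> bool:
    """(ii) and (iii): a node's entire decision is whether fulfilling the
    bounty is profitable; the description never enters the calculation."""
    return bounty.payout > estimated_cost
```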
You can set a bounty on the cure for cancer or the merciless censorship of a political website: it doesn’t matter to anonymous, amoral, economically-driven agents. The potential for abuse is obvious. To throw out just three of probably thousands of examples of how bounty models at scale can erode freedoms, raise costs, and threaten individual safety:
- The freedom of speech and enterprise can be eroded through DDoS marketplaces, which we have covered at length.
- Short-term security is eroded through bounties on password cracking and attacks on systems in general. And while such attacks, like hacking, keep a healthy pressure on the strength of security, this of course comes at an economic price. Unlike in our world of analog hackers, anonymous nodes are not likely to expose vulnerabilities quietly and accept private bounties.
- Personal security can be eroded through economic bounties on lives of people, also known as assassination markets. I can’t think of an obvious solution to this kind of threat at scale, except to hope that most people are fundamentally benevolent. Can you?
There is an argument to be made that these kinds of threats are merely transient states in a self-correcting system. If the economic incentive to steal shiny cryptocoins generally erodes security, then security will improve and the problem will go away. If businesses are being censored by DDoS attacks, at some point it becomes more economically viable to prevent the attacks; centralized websites will tend toward decentralization, since it is significantly harder to DDoS and censor data in a highly redundant system — like IPFS. Will people, by analogy, learn to become so personable as not to end up on the shortlist of an assassination market?
In the long term, it will be prudent to settle on an equilibrium where most users make ethically efficient use of decentralized systems, or at least one whose use we are ethically comfortable with. For now, we should be closely monitoring and reasoning about the short-term damage that will arise as a result of the sudden availability of large decentralized networks.
What measures of protection are available to potential victims, end users, and innocent bystanders? How do we define, predict, and monitor emergent behaviors of globalized systems? What measures in the protocol layer or the application layer are available that, while respecting privacy and abstaining from censorship, protect legitimate and acceptable uses of decentralized networks?
These are questions we need to be asking while we build extremely powerful, scalable, and future-oriented software.