Robotics, Laws & Blockchain

By Dan Conway  |  October 11, 2017

Recently, I was involved in a conversation with some folks who were using blockchain to persist rules of behavior for swarms of drones. Part of the conversation brought back memories. Almost 25 years ago, when the first Jurassic Park movie was made, gatherings of AAAI (at the time, the American Association for Artificial Intelligence) often discussed how flocks of birds could be modeled as agents, and what rules they had to follow to be successful both individually and collectively. Birds in the middle had “keep your distance” rules, and birds on the perimeter had some flexibility in choosing direction and speed. Most simply followed their neighbors. I never heard whether that was the algorithm chosen for Jurassic Park, but watching the digital flying dinosaurs, it felt as though our agents were aligned in behavior with Spielberg’s flocks. Admittedly, his had better graphics.
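
The flocking rules described above can be sketched in a few lines. This is a minimal, illustrative model (the function name, data layout, and `min_dist` threshold are my own assumptions, not from any specific flocking paper): too-close neighbors push an agent away, and otherwise the agent averages its neighbors’ headings.

```python
import math

def step(agent, neighbors, min_dist=1.0):
    """Return a new heading (vx, vy) for one agent."""
    if not neighbors:
        return agent["vel"]  # an agent with no visible neighbors keeps course
    # "Keep your distance": steer away from any neighbor that is too close.
    sep_x = sep_y = 0.0
    for n in neighbors:
        dx = agent["pos"][0] - n["pos"][0]
        dy = agent["pos"][1] - n["pos"][1]
        if math.hypot(dx, dy) < min_dist:
            sep_x += dx
            sep_y += dy
    # "Follow neighbors": average the neighbors' headings.
    avg_vx = sum(n["vel"][0] for n in neighbors) / len(neighbors)
    avg_vy = sum(n["vel"][1] for n in neighbors) / len(neighbors)
    return (avg_vx + sep_x, avg_vy + sep_y)

a = {"pos": (0.0, 0.0), "vel": (1.0, 0.0)}
b = {"pos": (0.5, 0.0), "vel": (0.0, 1.0)}
print(step(a, [b]))  # (-0.5, 1.0): pushed away from b, following b's heading
```

Each agent needs only local information, which is exactly what makes the approach attractive for swarms.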

The conversation then turned to rules, and in particular Isaac Asimov’s “Three Laws of Robotics,” first published in the 1942 short story “Runaround” and attributed there to the fictional “Handbook of Robotics, 56th Edition, 2058 A.D.” These were intended to be both a safety feature and a generator of interesting consequences. The rules are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

One can’t help but create scenarios where the rules require exceptions or produce awkward outcomes, though perhaps these consequences are addressed somewhere in the first 55 editions. Obvious examples involve short-term vs. long-term harm (such as medical procedures or visits with attorneys), incomplete or ambiguous orders, and logical contradictions or violations of assumptions. Recall the Star Trek episode “The Changeling,” in which the robotic probe Nomad believes Captain Kirk is its creator and therefore infallible. When Kirk demonstrates that Nomad has erred, mistaking him for its creator, the probe has no capacity to handle the contradiction and eventually self-destructs.

Fast forward to today and the problem at hand. Blockchain is a disruptive technology with some interesting features, and it’s finding its way into applications well beyond virtual currencies such as Bitcoin. Drones operate in a context where communication is sporadic (remember TCP/IP design principles), where k-of-n majority decision making may be needed, and where adversaries have an interest in changing the operating rules. Those controlling the drones may wish to make their programming immutable, and thus resistant to adversarial tampering. Blockchain’s multi-signature support, consensus network, and immutable record structure seem well suited to this context.
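
The k-of-n decision making mentioned above can be sketched simply. In this hypothetical example (controller names and proposal IDs are invented for illustration), a proposal takes effect only when at least k of the n controllers endorse it; a real system would verify cryptographic signatures rather than trust a dictionary.

```python
def approved(proposal_id, votes, k):
    """votes maps each controller to the proposal it endorsed, or None if unreachable."""
    endorsements = sum(1 for v in votes.values() if v == proposal_id)
    return endorsements >= k

# Five controllers, one currently out of contact (sporadic communication).
votes = {"ctrl-1": "turn-east", "ctrl-2": "turn-east",
         "ctrl-3": None,
         "ctrl-4": "turn-west",
         "ctrl-5": "turn-east"}

print(approved("turn-east", votes, k=3))  # True: 3 of 5 endorsed it
print(approved("turn-west", votes, k=3))  # False: only 1 endorsement
```

Note that the threshold tolerates unreachable controllers, which is why this shape of decision rule suits a swarm with intermittent links.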

The swarm conversation converged on the question: “Does it make sense to use an immutable blockchain to persist the rules of behavior for the drone swarm?” This is a reasonable question: one of the persistent fears about robots is that they might eventually learn to reprogram themselves, and an immutable blockchain would appear to prevent that.

Of course, immutable implies that neither the robot nor the owner can change the content of the blockchain. Smart contract architectures, where source code is embedded in a blockchain, have this feature. The code is executed on all nodes in the network concurrently in what is deemed a “World Computer.” The resulting output might be a stream (a pub-sub message) or a change in possession of an asset, which in our case might be a “direction value” represented as a ledger entry/token owned by a “direction owner.”
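
As a rough sketch of that ledger-entry idea (field names and hashing scheme are illustrative assumptions, not any particular chain’s format), each “direction value” entry commits to its predecessor by hash, so any after-the-fact edit is detectable:

```python
import hashlib, json

def entry(prev_hash, owner, direction):
    """Create a ledger entry whose hash covers its content and its predecessor."""
    body = {"prev": prev_hash, "owner": owner, "direction": direction}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

genesis = entry("0" * 64, "direction-owner-1", "north")
nxt = entry(genesis["hash"], "direction-owner-1", "north-east")

# Immutability check: recomputing the hash of a tampered entry exposes the change.
tampered = dict(genesis, direction="south")
recomputed = hashlib.sha256(json.dumps(
    {"prev": tampered["prev"], "owner": tampered["owner"],
     "direction": tampered["direction"]}, sort_keys=True).encode()).hexdigest()
print(recomputed == genesis["hash"])  # False: the tampering is detectable
```

The hash chaining is what turns a shared log into a tamper-evident record; consensus then decides which chain of entries everyone accepts.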

The problem with this approach is that, while we might be getting better at writing code, we still occasionally write flawed code, or the context in which the code executes changes. Think of the 10-year-old Linksys home routers with security flaws still operating all around the world. As security guru Dan Geer suggests, we should build software that self-terminates after a set period of time if it cannot be remotely controlled or patched. The Star Trek episode would have been significantly less dramatic if Mr. Geer had had his way.

There are options, of course. One could place just links on the blockchain: links to source code or rules that might reside (encrypted) on a global data platform such as IPFS or a private, permissioned equivalent. If the rules were to change, one could put the new rules on IPFS and have IPNS redirect requests to the hash of the updated (and still immutable) content. This would preserve the benefits of immutable information and immutable source code while providing better control over the actual code that gets executed.
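
That indirection pattern can be sketched with two maps standing in for IPFS and IPNS (no real network calls; names and rule text are invented for illustration): content-addressed storage is append-only, while a stable name is the one mutable pointer.

```python
import hashlib

content_store = {}   # content hash -> bytes (immutable, append-only, like IPFS)
name_registry = {}   # stable name  -> current content hash (mutable, like IPNS)

def publish(rules: bytes) -> str:
    """Store content under its own hash and return that hash."""
    h = hashlib.sha256(rules).hexdigest()
    content_store[h] = rules
    return h

def repoint(name: str, content_hash: str):
    """Redirect a stable name to new content; old content stays retrievable."""
    name_registry[name] = content_hash

v1 = publish(b"1. Do not injure a human.")
repoint("swarm-rules", v1)
v2 = publish(b"0. Do not harm humanity.\n1. Do not injure a human.")
repoint("swarm-rules", v2)

print(content_store[name_registry["swarm-rules"]])  # resolves to the v2 rules
print(v1 in content_store)                          # True: old version persists
```

The blockchain would hold only the stable name (or the repointing transactions), so the audit trail of rule changes is preserved even though the “current rules” pointer moves.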

Rules often require appending, modification, or deprecation. Consider the Zeroth Law later added by Isaac Asimov:

  0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

More broadly, how might rule additions impact an AI application over blockchain? If rules expire, do they do so gracefully and in coordination with the distributed rule base on the blockchain? Do they expire everywhere? It is my experience that any advice from Mr. Geer is quite well thought out and should be taken seriously.
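
One way to make expiration graceful, sketched below under my own assumptions about structure and field names: each rule carries an optional expiry, and every node derives the active set as a pure function of the rule base and a timestamp, so rules expire everywhere at once rather than node by node.

```python
import datetime

RULES = [
    {"id": 1, "text": "A robot may not injure a human being.",
     "expires": None},  # no expiry: a permanent rule
    {"id": 99, "text": "Temporary no-fly corridor over the harbor.",
     "expires": datetime.datetime(2017, 10, 1)},
]

def active_rules(rules, now):
    """Deterministic given `now`, so every node agrees on the active set."""
    return [r for r in rules if r["expires"] is None or now < r["expires"]]

now = datetime.datetime(2017, 10, 11)
print([r["id"] for r in active_rules(RULES, now)])  # [1]: rule 99 has expired
```

Deprecation then becomes an append (a new entry setting an expiry) rather than a mutation, which fits an immutable, append-only rule base.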