Sunday, August 5, 2018

Balance of power, time and anticipatory obedience

Balance of power

The USA’s founding fathers were fearful of the federal government abusing its powers. To curb and control the government's might, a system of checks and balances was established.

Similar mechanisms have been established in most modern democracies and, to some extent, in corporations as well.

As AI increasingly makes decisions with significant impact on people, the question arises whether and how such checks and balances are being implemented to curb AI’s power.

Time

Time is an important dimension in human decision making; examples include the common practice of sleeping on an important decision or the review periods built into legislative processes. The purpose is to reduce the emotional aspects of decision making and to provide more time for broader review.

Rushing a decision is often a sign of trouble: politicians pushing an agenda or salespeople trying to close a deal.

AI systems can make decisions at a rate that is orders of magnitude faster than human decision making while processing all relevant data. Since AI decision making is unlikely to be driven by emotions, a computer might not need to sleep on it.

However, is there a need to slow down the process so that humans and/or independent AIs can review and assess the decisions made?
The Speed Conference, hosted by Cornell Tech on September 28-29, 2018, will discuss the role of speed in the context of AI decision making.

Anticipatory obedience

Recently I read the Foreign Affairs article “How Artificial Intelligence Will Reshape the Global Order” by Nicholas Wright, in which he states: “Even the mere existence of this kind of predictive control will help authoritarians. Self-censorship was perhaps the East German Stasi’s most important disciplinary mechanism. AI will make the tactic dramatically more effective.”

Having grown up in East Germany and received a full dose of party and Stasi treatment (I was 24 when the Wall came down), I was particularly interested in this observation. And of course the trouble with self-censorship and anticipatory obedience goes back at least another 50 years into Prussian-German history.

I discussed the article with a colleague and made the point that “I think, however, that self-censorship and anticipatory obedience are widespread in the West, too, and a key ingredient in many corporate cultures.”

In the context of artificial intelligence, I was wondering whether anticipatory obedience is a key characteristic of AI. It certainly seems to be true for reinforcement learning, one important technique in AI. In this approach, a computer program is trained to take actions that maximize a certain reward function, as the sketch below illustrates.
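To make this concrete, here is a minimal, illustrative sketch of reward maximization, a simple epsilon-greedy bandit agent in Python. It is not taken from any particular system; the action names (“comply”, “dissent”), the reward probabilities, and the function names are all hypothetical, chosen only to show how an agent mechanically learns to favor whatever the reward function pays for.

import random

# Hypothetical actions and reward signal, for illustration only.
ACTIONS = ["comply", "dissent"]
REWARD_PROB = {"comply": 0.8, "dissent": 0.2}

def pull(action):
    # Bernoulli reward: 1 with the action's probability, else 0.
    return 1 if random.random() < REWARD_PROB[action] else 0

def train(steps=10_000, epsilon=0.1):
    value = {a: 0.0 for a in ACTIONS}  # running estimate of each action's value
    count = {a: 0 for a in ACTIONS}
    for _ in range(steps):
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=value.get)
        reward = pull(action)
        count[action] += 1
        # Incremental average of the rewards observed for this action.
        value[action] += (reward - value[action]) / count[action]
    return value

if __name__ == "__main__":
    print(train())  # value estimates converge toward the reward probabilities

After enough steps, the agent almost always selects the better-rewarded action: its behavior is shaped entirely by what the reward signal favors, which is why reinforcement learning can be read as a mechanical analogue of obeying in advance.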

Anticipatory obedience has caused catastrophic developments. It comes as no surprise that lesson #1 of Timothy Snyder’s “On Tyranny: Twenty Lessons from the Twentieth Century” is “Do not obey in advance.”
The role of anticipatory obedience in the context of AI needs careful consideration.
