
Sunday, August 5, 2018

Balance of power, time and anticipatory obedience

Balance of power

The USA’s founding fathers were fearful of the federal government abusing its powers. To curb and control the government's might, a system of checks and balances was established.

Similar mechanisms have been put in place in most modern democracies and, to some extent, in corporations as well.

As AI increasingly becomes a maker of decisions with significant impact on people, the question arises whether and how such checks and balances are being implemented to curb AI’s power.

Time

Time is an important dimension of human decision making; examples include the common practice of sleeping on an important decision and the review periods built into legislative processes. The purpose is to reduce the emotional aspects of decision making and to allow more time for broader review.

Rushing a decision is often a sign of trouble: politicians pushing an agenda or salespeople trying to close a deal.

AI systems can make decisions orders of magnitude faster than humans while processing all relevant data. Since AI decision making is unlikely to be driven by emotions, a computer might not need to sleep on it.

However, is there a need to slow down the process so that humans and/or independent AIs can review and assess the decisions made?
The Speed Conference by Cornell Tech on September 28-29, 2018, will discuss the role of speed in the context of AI decision making.

Anticipatory obedience

Recently I read the Foreign Affairs article “How Artificial Intelligence Will Reshape the Global Order” by Nicholas Wright, in which he states: “Even the mere existence of this kind of predictive control will help authoritarians. Self-censorship was perhaps the East German Stasi’s most important disciplinary mechanism. AI will make the tactic dramatically more effective.”

Since I grew up in East Germany and got a full dose of party and Stasi treatment (I was 24 when the wall came down), I was particularly interested in this observation. And of course the trouble with self-censorship and anticipatory obedience goes back at least another 50 years into Prussian-German history.

I discussed the article with a colleague and made the point that “I think, however, that self-censorship and anticipatory obedience are widespread in the West, too, and a key ingredient in many corporate cultures.”

In the context of artificial intelligence, I wondered whether anticipatory obedience is a key characteristic of AI. It certainly seems to be true for reinforcement learning, an important AI technique in which a computer program is trained to take actions that maximize a certain reward function.
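
To make this concrete, here is a minimal, purely illustrative sketch of reward maximization, the mechanism behind reinforcement learning. The two-action environment, the reward values and all parameters are invented for illustration and describe no real system; the point is that an agent which simply learns to prefer whichever action pays the most is, in a narrow sense, obeying in advance.

```python
# Minimal sketch of reward maximization (epsilon-greedy value estimation).
# The environment, rewards and parameters are illustrative assumptions.
import random

TRUE_REWARD = {"action_a": 0.3, "action_b": 0.7}  # hidden from the agent

estimates = {a: 0.0 for a in TRUE_REWARD}  # the agent's value estimates
counts = {a: 0 for a in TRUE_REWARD}
epsilon = 0.1  # small chance of exploring instead of following the estimate

for step in range(10_000):
    # Mostly pick whatever currently promises the most reward.
    if random.random() < epsilon:
        action = random.choice(list(TRUE_REWARD))
    else:
        action = max(estimates, key=estimates.get)

    # Noisy reward signal from the environment.
    reward = TRUE_REWARD[action] + random.gauss(0, 0.1)

    # Incrementally update the estimated value of the chosen action.
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # the agent converges on the action with the higher reward
```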

Anticipatory obedience has caused catastrophic developments. It comes as no surprise that lesson #1 of Timothy Snyder’s “On Tyranny: Twenty Lessons from the Twentieth Century” is “Do not obey in advance.”
The role of anticipatory obedience in the context of AI needs careful consideration.

Monday, July 2, 2018

Social relationships

AI will have a significant impact on people’s lives, in the most private and personal ways. Here are a few emerging scenarios.

Social aspects of work

Besides generating income, work plays an important role in most people’s social lives. Work gives people a sense of purpose through meaningful contributions, and colleagues are often friends. If work is eliminated from people’s lives, those lives become less rich.
While studies have shown a correlation between declining employment prospects and declining mental health, the long-term situation is not clear. This is an area that requires further study. Searching for substitutes for work’s social aspects, e.g. volunteering, appears to be one avenue.

Social Scoring Systems

Social scoring systems are being pioneered in China, initially by private companies but with plans by the government to make them official. These systems take credit scoring, as long practiced in most advanced economies, to the next level by extending the data set beyond direct financial indicators.
The implication is a universal judgement of a person. Instead of answering whether a person is creditworthy, the question now is: Is this a "good" person?
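
As a thought experiment, here is a purely hypothetical toy sketch of how such a score might collapse financial and non-financial signals into a single number. The feature names, weights and thresholds are invented for illustration and describe no real system.

```python
# Hypothetical toy of a "social score" that extends credit scoring with
# non-financial signals. All features and weights are invented assumptions.
from dataclasses import dataclass

@dataclass
class PersonRecord:
    credit_history: float         # traditional financial indicator, 0..1
    bills_paid_on_time: float     # traditional financial indicator, 0..1
    online_posts_flagged: int     # non-financial signal
    friends_average_score: float  # non-financial signal, 0..1

def social_score(p: PersonRecord) -> float:
    """Collapse heterogeneous signals into a single 'good person' number."""
    financial = 0.5 * p.credit_history + 0.5 * p.bills_paid_on_time
    behavioral = max(0.0, 1.0 - 0.1 * p.online_posts_flagged)
    network = p.friends_average_score
    # Arbitrary weights: this is exactly the kind of judgement call that
    # turns a credit check into a universal judgement of a person.
    return 0.4 * financial + 0.4 * behavioral + 0.2 * network

print(social_score(PersonRecord(0.9, 0.95, 2, 0.6)))
```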
Such systems are likely to create conformity and distrust between people and to suppress individuality and dissent. Again, citizens need to debate and decide what kind of a society they would like to live in.

AI companionship

AI-empowered digital assistants on mobile devices and in-home appliances have made great progress in recent years. Their voices, as recently demonstrated by Google’s Digital Assistant, are almost indistinguishable from human ones, and their ability to understand humans keeps improving.
The question of whether digital assistants can become your best friend has been explored, for example, by Digg, the Wall Street Journal and others. Alexa and her friends are always there for you, they are great listeners, and they know everything. Soon they will really understand you and help you, especially if you have paid to have Dr. Phil’s premium knowledge base loaded.
Besides virtual companionship, companies are also working on providing a tactile experience.
There are clear benefits to these scenarios, but they come with some significant side effects.
Individuals must think, debate and decide what they want their future to be.
Corporations bear great responsibility for the fashions, tastes and desires they have their marketing departments create.