Saturday, December 29, 2018

Transparency

We need to distinguish between an overall system which is in part powered by AI algorithms and the algorithms themselves.

We also need to distinguish between transparency to whom, e.g. an end-user, an operator, an auditor or a regulator.

Whether a system is powered by AI or not should make no difference: companies should have the necessary policies and processes in place to provide transparency and auditability at the system level.

The focus here will be on AI algorithms.

Creating transparency by opening the algorithm, e.g. exposing the code, is in most cases not feasible for legal and practical reasons.

There is precedent for the legal consideration. For example, credit rating algorithms are legally considered a trade secret and are hence protected from exposure to the public. Even in Germany, which is generally at the forefront of consumer protection, this is the case for the national credit scoring organization Schufa.

Practically, many machine learning algorithms, specifically neural networks, are so complex that they cannot be understood by looking at the code.

As a result of my research and discussions with others working in the field, I see two levels of transparency which could be implemented.

The first, weaker level is publishing the intentions and policies which are implemented by the algorithm. For the credit scoring case, the intent is to assess one's credit risk, and the policies could concern the amount of credit in relationship to one's net worth, one's payment history, etc.

A second level can be achieved by counterfactual explanations. They explain why a negative decision has been made and how circumstances would have had to differ for a desirable outcome. For example, I would have gotten a million dollar loan if I had $250,000 more in assets as collateral, or if I had not defaulted on a car loan 20 years ago.

Counterfactual explanations are discussed in depth in these two papers by Sandra Wachter, Brent Mittelstadt and Chris Russell: 



The concept of counterfactual explanations has been implemented in Google's What-If Tool for TensorFlow.
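To make the idea concrete, here is a minimal sketch of how a counterfactual could be searched for around a rejected loan application. The greedy search, the toy approval rule, the feature names and the step sizes are all assumptions for illustration; this is not the What-If Tool's method or the approach from the papers above.

```python
import numpy as np

def find_counterfactual(predict, x, feature_steps, max_iters=1000):
    """Greedy search for a change to `x` that flips the model's decision
    from 'reject' (0) to 'accept' (1).

    predict:       callable mapping a feature vector to 0 or 1
    x:             the applicant's original feature vector
    feature_steps: per-feature increments to try (0 = feature is immutable)
    """
    candidate = np.asarray(x, dtype=float).copy()
    steps = np.asarray(feature_steps, dtype=float)
    for _ in range(max_iters):
        if predict(candidate) == 1:
            return candidate                      # counterfactual found
        best = None
        for i, step in enumerate(steps):
            if step == 0:
                continue
            trial = candidate.copy()
            trial[i] += step
            if predict(trial) == 1:               # a single change flips the decision
                best = trial
                break
            if best is None:                      # otherwise nudge the first mutable feature
                best = trial
        candidate = best
    return None                                   # nothing found within the search budget

# Hypothetical toy model: approve if collateral >= $250k and no past default.
predict = lambda v: int(v[0] >= 250_000 and v[1] == 0)
x = [100_000, 1]        # collateral in $, number of past defaults
steps = [50_000, -1]    # allowed change per feature
print(find_counterfactual(predict, x, steps))   # -> collateral 250k, defaults 0
```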

There are questions which remain:
How can one demonstrate that an algorithm implements certain policies?
Testing, e.g. with Monte Carlo simulations, might be able to demonstrate this (to a certain extent), as sketched below.
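As a rough illustration of such a test (an assumption, not an established procedure), one could draw a large number of synthetic applicants and estimate how often the algorithm's decisions violate the published policy. The policy, the feature ranges and the stand-in model below are purely hypothetical.

```python
import random

def violates_policy(applicant, decision):
    """Published (hypothetical) policy: no approval above 5x net worth,
    and no approval for applicants with a recent default."""
    if decision == "approve" and applicant["loan"] > 5 * applicant["net_worth"]:
        return True
    if decision == "approve" and applicant["recent_default"]:
        return True
    return False

def monte_carlo_policy_test(model, n_samples=100_000, seed=42):
    """Estimate the rate at which the model's decisions violate the policy."""
    rng = random.Random(seed)
    violations = 0
    for _ in range(n_samples):
        applicant = {
            "loan": rng.uniform(1_000, 2_000_000),
            "net_worth": rng.uniform(0, 5_000_000),
            "recent_default": rng.random() < 0.1,
        }
        if violates_policy(applicant, model(applicant)):
            violations += 1
    return violations / n_samples

# `model` would be the scoring algorithm under test; here a stand-in rule.
model = lambda a: "approve" if a["loan"] <= 3 * a["net_worth"] and not a["recent_default"] else "reject"
print(f"Estimated policy violation rate: {monte_carlo_policy_test(model):.4%}")
```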

How should an appeal by a user who is not satisfied with a counterfactual explanation be resolved? What would be the basis for deciding the appeal?
One possible approach would be for a person appointed by the operator of the system to make a decision based on company policy. Further appeals of a human-made decision would follow existing practice.

Tuesday, December 25, 2018

Human empowerment and control


A goal often stated for AI systems is the empowerment and augmentation of the users of such systems.

Empowerment implies control. Paul Scharre describes different levels of control in his book "Army of None: Autonomous Weapons and the Future of War", which I reviewed in an earlier post. His three-level categorization can, however, be applied to AI systems in general.

·       in-the-loop
the AI recommends, and the human decides

This is the strongest form of control and empowerment.

There is, however, the potential that the human is just an actuator and not a decider, because the AI makes a suggestion which cannot be investigated and validated by the human decision maker for various reasons: the time for making a decision is too short, the large number of decisions overwhelms the decider, or the recommendation process lacks transparency.

The human in-the-loop could also have developed blind trust in the AI's recommendations.

·       on-the-loop
the system decides, but the human can take over control at any moment

Essentially, the human controls the on/off switch of the AI system, which is a very coarse-grained level of control yet ultimately a very powerful one.

·       out-of-the-loop
this is a fully autonomous system; the human controller has no power.

There can be reasonable use cases for each of these levels of human empowerment.

When developing an AI system, all stakeholders, including product owners, data scientists, legal, and senior executives, need to carefully assess which level of human control and empowerment should be implemented.
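One way to make this assessment operational in software is to treat the level of control as an explicit configuration of the decision path. The sketch below is a hypothetical illustration of the three levels; the callables and the fraud-review example are made up.

```python
from enum import Enum

class ControlLevel(Enum):
    IN_THE_LOOP = "in-the-loop"          # AI recommends, human decides
    ON_THE_LOOP = "on-the-loop"          # AI decides, human may override
    OUT_OF_THE_LOOP = "out-of-the-loop"  # AI decides autonomously

def decide(case, ai_decide, ask_human, level):
    """Route a decision according to the configured level of human control.

    ai_decide: callable returning the AI's decision for `case`
    ask_human: callable shown the case and the AI recommendation, returning
               the human's decision, or None to defer to the AI
    """
    ai_decision = ai_decide(case)
    if level is ControlLevel.IN_THE_LOOP:
        # The human is the decider; the AI output is only a recommendation.
        return ask_human(case, recommendation=ai_decision)
    if level is ControlLevel.ON_THE_LOOP:
        # The AI decides; the human can veto or take over at any moment.
        override = ask_human(case, recommendation=ai_decision)
        return override if override is not None else ai_decision
    # OUT_OF_THE_LOOP: fully autonomous, no human involvement.
    return ai_decision

# Hypothetical usage: a fraud review queue run in on-the-loop mode.
flag_fraud = lambda case: case["amount"] > 10_000
reviewer   = lambda case, recommendation: None        # human does not intervene
print(decide({"amount": 25_000}, flag_fraud, reviewer, ControlLevel.ON_THE_LOOP))
```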

An important consideration is the reversibility of a decision. If the decision is reversible, a possible mitigation is to give the person subjected to the decision the possibility to challenge it, as discussed in a previous post.

Monday, December 24, 2018

Predictability and unreasonable inference


AI can potentially draw non-intuitive and unverifiable inferences and predictions.

To create trust in AI systems, it will be necessary that they behave predictably, i.e. within the expectations set by published intent and policies.

When an AI algorithm produces an unreasonable inference, i.e. a result which is outside the expected outcome, and the result has a significant impact on a person's life, the person subjected to the algorithm should have the right to challenge such an unreasonable inference.

Operators of AI systems need to provide adequate means for a person to challenge such an unreasonable inference. This includes software solutions, internal processes and sufficient human supervisors to handle such cases.

Operators should also conduct rigorous testing of the algorithm to minimize unreasonable inferences. 

Sandra Wachter and Brent Mittelstadt look at this issue in their forthcoming article "A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI" from a legal perspective.

Fairness Revisited


A few weeks ago, I attended a workshop in Cambridge (MA) organized by Harvard, MIT and Princeton, where I had the chance to discuss the concept of fairness with a number of experts. Based on this input, I revised and sharpened my earlier observations on fairness:

There are many definitions of fairness across the ideological spectrum. It cannot be expected that there will be broad agreement on a single definition.

Furthermore, different definitions of fairness might be applicable to different circumstances.

A company will need to pick a definition of fairness which it deems applicable to a certain area of application and publish the chosen definition. The algorithm should also be tested to ensure that it implements the specified fairness definition.
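To illustrate what testing against a chosen fairness definition could look like, here is a small, hypothetical check of one popular definition, demographic parity (other definitions, e.g. equalized odds, would require different checks). The data layout and the tolerance value are assumptions.

```python
def demographic_parity_gap(decisions, groups):
    """Difference in positive-decision rates between groups.

    decisions: list of 0/1 model outcomes
    groups:    list of group labels (e.g. "A"/"B"), same length as decisions
    """
    rates = {}
    for g in set(groups):
        selected = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(selected) / len(selected)
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit run: fail the check if the gap exceeds a published tolerance.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(decisions, groups)
TOLERANCE = 0.2  # company-chosen and published
print(rates, "gap:", gap, "PASS" if gap <= TOLERANCE else "FAIL")
```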

Based on stakeholder feedback, the company might adjust the definition of fairness. The adjustment should be appropriately documented.

The Metadata Manifest


Because data is critical to AI algorithms, it must be handled with the necessary rigor and transparency.

The metadata about the data used to train an algorithm needs to be carefully documented, including the time of data collection, the collection method and device (if applicable), the location of collection, whether and how the data has been cleaned and validated, intended uses, maintenance, life cycle, etc.

The article "Datasheets for Datasets" by Timnit Gebru et al. motivates and introduces this approach.

Such a manifest provides auditability and transparency and lays the foundation for responsibility and accountability.

It is important to incorporate the notion of the metadata manifest into machine learning tools and workbenches and to automate metadata collection and maintenance wherever possible.
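As a minimal sketch of what such a manifest could look like in code, the structure below follows the fields listed above; the field names and example values are assumptions, not the schema from the "Datasheets for Datasets" paper.

```python
from dataclasses import dataclass, field, asdict
from typing import List, Optional
import json

@dataclass
class DatasetManifest:
    name: str
    collection_time: str                 # e.g. an ISO 8601 date range
    collection_method: str
    collection_device: Optional[str]
    collection_location: Optional[str]
    cleaning_and_validation: str         # how the data was cleaned and validated
    intended_uses: List[str]
    maintenance: str                     # who maintains it, how often
    life_cycle: str                      # retention / retirement plan
    notes: List[str] = field(default_factory=list)

manifest = DatasetManifest(
    name="loan-applications-2018",
    collection_time="2017-01-01/2018-06-30",
    collection_method="online application form",
    collection_device=None,
    collection_location="US",
    cleaning_and_validation="deduplicated; incomplete applications removed",
    intended_uses=["credit risk scoring"],
    maintenance="quarterly refresh by the data engineering team",
    life_cycle="retire after 5 years or on regulatory change",
)

# Stored alongside the trained model so audits can trace the data lineage.
print(json.dumps(asdict(manifest), indent=2))
```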

Tuesday, September 18, 2018

Review of "Army of None: Autonomous Weapons and the Future of War” by Paul Scharre


I recently read “Army of None: Autonomous Weapons and the Future of War” by Paul Scharre which was released in April 2018. This book provides a comprehensive view on autonomous weapons and the role of AI in the military.

Paul Scharre is a former U.S. Army Ranger who served in Iraq and Afghanistan. Later he served as a civilian adviser in the Pentagon, where he wrote policy documents on autonomous weapons. For the last two years, Paul has been a Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security (CNAS), a bipartisan think tank.

Even while there is debate about a commonly accepted definition of autonomous weapon systems, it's clear that such weapons with a range of capabilities exist and are being used. Examples include the Patriot air defense system, the Aegis combat system, and loitering drones which can stay over predefined areas and autonomously search for targets and destroy them.

Paul interviewed a wide variety of military and civilian leaders on their views of the applications and limits of autonomous weapons, which vary widely. A new arms race seems likely.

An interesting point is that the advancement of technology for autonomous weapons is, unlike for previous weapons systems, largely driven by the civilian tech sector. An additional layer of complexity comes from the inherently multi-purpose character of AI technology and the practically unrestricted proliferation of digital technology.

The book also left me with questions. These are the two most critical ones to me.

Is the concept of human in/on the loop an illusion?

In 1983, the world was at the brink of nuclear war. The Soviet Army's Lt. Colonel Stanislav Petrov was on duty when the new missile warning system issued alerts of incoming intercontinental nuclear missiles. Petrov assessed the situation and concluded that it was a false alarm. A human in the loop avoided nuclear war.

Petrov, however, had additional information available which the system did not have, e.g. that the attack pattern was unlikely and the understanding that the new warning system was unproven. Assessing this information enabled him to be a meaningful decider.

During the Gulf War, there were a number of friendly fire incidents by a Patriot battery. As a subsequent investigation showed, the Patriot battery commander was not at fault. She did not have additional information beyond what the Patriot's radar was providing. This made her an actuator rather than a decider.

I think the difference between decider and actuator is whether there is information which can be better processed by a human than by an AI system. As AI technology advances, this gap is going to shrink. The human in-the-loop will more and more turn into an actuator. The human on-the-loop will have little additional information to determine if and when to pull the plug.

An additional factor is speed: if both sides in a conflict deploy autonomous weapons, e.g. swarms of drones, things will happen so fast that they are most likely beyond human sensing capabilities.

Will the reduced risk of loss of human life lower the threshold for military conflict?

Governments are, and rightfully so, reluctant to commit "boots on the ground" as a means of resolving political conflicts and confronting bad actors. The threshold is a lot lower when it comes to delivering missile or drone strikes or other military means with a low risk of harm to their armed services personnel.

However, these means have their limits in today’s asymmetric conflicts.
Projecting from publicly available information about drones, self-driving vehicles, the Boston Dynamics dog and similar ground-force robots, and capabilities such as swarming, sensing and assessment (e.g. image processing), (semi-)autonomous ground forces are a likely reality in the not so distant future. These would significantly lower the risk of loss of human life.

While it is a desirable goal to reduce the risk for human life, it might also lower the threshold for governments to engage in military conflict.

Sunday, August 5, 2018

Balance of power, time and anticipatory obedience

Balance of power

The USA’s founding fathers were fearful of the federal government abusing its powers. To curb and control the government's might, a system of checks and balances was established.

Similar mechanisms have been installed in most modern democracies and, to some extent, in corporations as well.

As AI increasingly becomes a maker of decisions with significant impact on people, the question arises whether and how such checks and balances are being implemented to curb AI's power.

Time

Time is an important dimension of human decision making; examples include the common practice of sleeping on an important decision or the review periods built into legislative processes. The purpose is to reduce emotional aspects in decision making and to provide more time for broader review.

Rushing a decision is often a sign of trouble: politicians pushing an agenda or sales people trying to close a deal.

AI systems are capable of making decisions at a rate which is orders of magnitude faster than human decision making while processing all relevant data. As AI decision making is unlikely to be driven by emotions, a computer might not need to sleep on it.

However, is there a need for slowing down the process so that human and/or independent AIs can review and assess the decisions made?
The Speed Conference hosted by Cornell Tech on September 28-29, 2018 will discuss the role of speed in the context of AI decision making.

Anticipatory obedience

Recently I read the Foreign Affairs article "How Artificial Intelligence Will Reshape the Global Order" by Nicholas Wright, in which he states: "Even the mere existence of this kind of predictive control will help authoritarians. Self-censorship was perhaps the East German Stasi's most important disciplinary mechanism. AI will make the tactic dramatically more effective."

As I grew up in East Germany and got a full dose of party and Stasi treatment (I was 24 when the wall came down), I was particularly interested in this observation. And of course the trouble with self-censorship and anticipatory obedience goes back at least another 50 years into Prussian-German history.

I discussed the article with a colleague and made the point that "I think, however, that self-censorship and anticipatory obedience are widespread in the West, too, and a key ingredient in many corporate cultures."

In the context of artificial intelligence, I was wondering whether anticipatory obedience is a key characteristic of AI. It certainly seems to be true for reinforcement learning, an important technique in AI in which a computer program is trained to take actions that maximize a certain reward function.
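For readers unfamiliar with the mechanics, here is a tiny, self-contained illustration of reward-maximizing behavior: a multi-armed bandit with an epsilon-greedy rule (the reward probabilities are made up). Whatever the reward function encodes, the agent learns to comply with it, which is the sense in which obedience to the specified objective is built in.

```python
import random

def epsilon_greedy_bandit(reward_probs, steps=10_000, epsilon=0.1, seed=0):
    """Learn which action maximizes expected reward by trial and error."""
    rng = random.Random(seed)
    n = len(reward_probs)
    counts = [0] * n
    values = [0.0] * n                      # running estimate of each action's reward
    for _ in range(steps):
        if rng.random() < epsilon:          # explore a random action
            a = rng.randrange(n)
        else:                               # exploit the current best estimate
            a = max(range(n), key=lambda i: values[i])
        reward = 1.0 if rng.random() < reward_probs[a] else 0.0
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]
    return values

# Hypothetical reward function: action 2 pays off most often.
print(epsilon_greedy_bandit([0.2, 0.5, 0.8]))   # the agent converges on action 2
```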

Catastrophic developments have been caused by anticipatory obedience. It does not come as a surprise that the #1 lesson of Timothy Snyder’s “On Tyranny: Twenty Lessons from the Twentieth Century” is “Do not obey in advance”.
The role of anticipatory obedience in the context of AI needs careful consideration.

Wednesday, July 18, 2018

Multi-purpose Nature and Proliferation of AI Technology


AI technology is inherently multi-purpose.

Intelligence is multi-purpose and so is AI.

Face recognition software is a good example. Facebook tags your friends in your pictures so that you don't have to. Police, state and private security organizations may use the same technology to identify a person in a public crowd. Their motivations and actions may vary.

Similarly, self-driving cars and trucks share a lot of technology with autonomous military vehicles. And there are plenty more examples.

Non-proliferation of AI technology is almost impossible

The history of the internet has demonstrated that it is very hard, if not impossible, to control the distribution of digital media. For example, the Napster-driven spread of digital music was only reined in through a combination of license restrictions, threats of enforcement, economic measures and convenience.

Furthermore, many AI-related algorithms are available under open source licenses with barely any restrictions on their use.

License restrictions would be unenforceable with bad actors.

Keeping software within an organization is no guarantee that its use stays contained. Sensitive data is stolen from companies and government organizations on a regular basis.

The 2017 Vault 7 case, where hackers gained access to the CIA's cybersecurity and surveillance tools, illustrates this vividly.

Commodity hardware

While in the past special hardware was needed for more advanced algorithms, today’s data centers are largely built on commodity hardware which is widely available. Another historically available proliferation barrier has disappeared.

Education is free and widely available

The knowledge barrier also has disappeared. Anyone with sufficient intellectual capabilities, language skills and motivation can obtain the necessary knowledge through online courses, books, manuals, etc.

Treasure is not a key differentiator

Given that the software and knowledge are free and commodity hardware is cheap, treasure does not provide a barrier to entry.

Conclusions

Given this situation, it must be assumed that bad state and non-state actors will have a level of AI technology available to them close enough to that of the most advanced security, intelligence and military organizations to create severe internal and external threats.

Saturday, July 7, 2018

AI weapon systems are likely to increase the number of military conflicts


The Doomsday Machine

After reading Daniel Ellsberg's book of the same title, I felt lucky that I am still alive and that mankind in general survived the Cold War.

Following the Manhattan Project, nuclear weapons and the triad of delivery systems were mass-produced within a couple of decades on both sides of the Iron Curtain. But the control mechanisms were crude, and post-Cold War analysis of the Cuban missile crisis showed how close we had come to destroying civilization.

Mutually assured destruction has been the doctrine which prevented the use of nuclear weapons since the 1960s.

Continuation of policy by other means

In the sense of Clausewitz’ definition of war as a mere continuation of policy by other means, nuclear weapons are rather useless. For the continuation of policy, the military needs weapon systems which can be scaled as well as contained. This was demonstrated by the military conflicts of past decades. Carrier battle groups, advanced airplanes and cruise missiles have been the weapons of choice.

Today's advanced weapon systems are already highly computerized. GPS- and laser-based guidance systems, for example, are controlled by complex software and could be considered weak AI systems.

There is no question about AI being used for weapon systems; it's only a matter of timing and capabilities. Judging from what's feasible today, AI weapon systems most likely exist already.

Former US Deputy Defense Secretary Robert Work and Shawn Brimley predicted in their 2014 paper "20YY: Preparing for War in the Robotic Age" that "The number of instances where humans must remain in the loop will likely shrink over time" due to advances in AI.

Unmanned aerial vehicles are good examples of the advance towards autonomous weapon systems. Today a human is still legally required to pull the trigger, but it's unclear if the human is a true decision maker or just an actuator. The decision might effectively be predetermined by the software-based analysis of the raw data, e.g. face recognition software positively identifying the target.

A major consequence of this development is that the threshold for resorting to military means is lowered dramatically. Mutually assured destruction was the strongest deterrent. More than 50,000 dead US soldiers in Vietnam led to serious questioning of that war and of the deployment of the US military in general.

About 5,000 dead US military personnel in Iraq, a tenfold reduction over Vietnam due to technology advances in conventional weapon systems, changed the public debate.

The next war, fought predominantly with autonomous weapons, is likely to reduce the body count by another order of magnitude.

What’s another trillion dollars? 

While the cost of war is measured in lives and treasure, the balance is shifting towards treasure. Over the last half century, politicians have demonstrated little restraint when it comes to spending public treasure and have borrowed heavily.

With the advance of AI weapon systems, military conflict seems more likely than ever.

Wednesday, July 4, 2018

Fairness vs. Bias

A requirement for AI systems established by organizations and companies is to ensure fair decisions which are free from bias.

The relationship between fairness and bias is, however, complex.

A suitable and timely example to discuss this complexity is affirmative action, i.e. giving preference to women, black people, or other groups that are often treated unfairly (Cambridge Dictionary). 

A bias is explicitly introduced to rectify previous unfair treatment.

The intrinsic issue of fairness is that while everyone should be treated the same, not everyone is the same.

In the case of affirmative action, origin, race, sex, and the socio-economic status and education of one's parents are beyond the control of a young person but make a huge difference in the opportunities they have.

This question is currently under debate in the context of the US college admissions process, where affirmative action is being challenged by President Trump.

Affirmative action and similar approaches suffer from the complexity of quantifying the level of preference introduced into the systems. The system of quotas which is often applied takes a very simple approach to a complex problem.

Other applicable examples are 

  • Tax code
    What's fair: a flat or a progressive tax system?
     
  • Insurance
    Is risk assessment fair?
    Is it fair that young drivers pay more?
    Is it fair to provide health insurance discounts based on age, lifestyle, or body mass index?
     

When applying fairness and bias to AI systems, a definition of fairness needs to be provided.
The discussion between progressive, libertarian and conservative approaches is ongoing. While some countries have found broad consensus on the definition of fairness in their societies, the US is not among them.

Human values and their implementation

Instilling human values into AI systems appears to be an important goal in the discussion of AI ethics.

There is, however, no clear consensus what universal human values are. Value systems are determined by different historical, cultural and religious experiences.

Furthermore, humans don’t have a good track record of implementing such values.  Despite thousands of years of religious and philosophical scholarship resulting in the creation and interpretation of value systems, human history has been determined by mass killings, rape, plunder, and human exploitation.

Throughout history, we have witnessed two major approaches to the implementation of such values in decision making: Realpolitik, i.e. politics based on practical and material factors rather than on theoretical or ethical objectives; and Ideology, a system of ideas and ideals, especially one which forms the basis of economic or political theory and policy.

Decision making at all levels appears to be a balancing act between the application of fundamental values and the consideration of practical concerns.

It seems that we are a long way from instilling human values into AI systems. But there's hope that the AI context will put the questions

  1. What are the human values?
  2. How should they be applied? 

into the center of the public discourse.

Monday, July 2, 2018

Social relationships

AI will have significant impact on people’s lives, in a most private and personal way. Here are a few emerging scenarios.

Social aspects of work

Besides generating income, work plays an important role in most people's social life. Work gives a certain sense of purpose to people's lives through meaningful contributions, and colleagues are often friends. If work is eliminated from people's lives, those lives become less rich.
While studies have shown a correlation between declining employment prospects and declining mental health, the long-term situation is not clear. These seem to be areas which require further study. Searching for substitutes for work's social aspects, e.g. volunteering, appears to be one avenue.

Social Scoring Systems

Social scoring systems are being pioneered in China, initially by private companies but with plans by the government to make them official. These systems take credit scoring as long practiced in most advanced economies to the next level by extending the data set beyond direct financial indicators.
The implication is a universal judgement of a person. Instead of answering if a person is credit worthy, the question now is: Is this a "good" person?
Such systems are likely to create conformity and distrust between people and to suppress individuality and dissent. Again, citizens need to debate and decide what kind of a society they would like to live in.

AI companionship

AI-empowered digital assistants on mobile devices and in home appliances have made great progress in recent years. Their voices, as recently demonstrated by Google's Digital Assistant, are almost indistinguishable from human ones, and their ability to understand humans keeps improving.
The question of whether digital assistants can become your best friend has been explored, for example, by Digg, the Wall Street Journal and others. Alexa and her friends are always there for you, they are great listeners, and they know everything. Soon they will really understand you and help you, especially if you have paid to have Dr. Phil's premium knowledge base loaded.
Besides the virtual companionship, companies are also working on providing the tactile experience.
There are clear benefits to these scenarios, but they come with some significant side effects.
Individuals must think, debate and decide what they want their future to be.
Corporations bear great responsibility for the fashions, tastes and desires their marketing departments create.

Economic Considerations


Unemployment

Entire classes of jobs are likely to be eliminated by AI systems in the mid-term, including drivers, sales associates, manufacturing line workers, and call center employees. At this point it is not clear how companies can leverage this human potential for growth; it might just lead to the elimination of jobs. Immediate employment opportunities through retraining might be limited. Companies need to rethink their social responsibility.

Macroeconomic Arrogance

Economists have pointed out that, historically, technology which made jobs obsolete ultimately led to the creation of new types of jobs in even greater numbers, as for example in this study by Deloitte.
While this is statistically correct, it is not clear how it affected the individual. What happened to the weaver, the farm hand and the coal miner once their jobs were eliminated? Did they find a place in the new service economy? Did they become knowledge workers? I'm doubtful. So the question remains: what will happen to millions of drivers when self-driving cars and trucks become a reality?
The warning that past performance is not an indicator of future performance might apply. The AI revolution is unprecedented and is very likely to also eliminate knowledge-based and service-industry jobs.

Wealth Distribution

Wealth is distributed between labor and capital. As human labor is likely to be reduced by AI, more wealth might flow towards capital, leading to an even greater concentration of capital and wealth. Countries need to continue public debate and experimentation about the kind of society their people prefer to live in.


Corporate Governance Structure and Framework

You cannot win the game but you can lose it

While AI can certainly be a competitive advantage, its governance falls under the category of risk management. This means it contributes little towards winning, but a big blunder can have an enormous, potentially existential, impact on a company.

Transparency

Transparency is key because it is instrumental for gaining the public's trust that AI is applied in a "good" way. Since AI governance offers little competitive advantage, openness is not going to hurt the bottom line.

Corporations and public debate

Corporations cannot address AI governance in isolation. Many of the questions need further elaboration by society and public debate. Even apparently simple concepts such as fairness and non-bias are complex and need further discussion.

Internal guidance and external oversight

Companies will need experts and executives, sufficiently empowered, to govern the development of AI and provide concrete guidance to product developers.
Since a company's board of directors may not have the necessary expertise and experience in AI and its implications and risks, a specialized external advisory or supervisory board might be necessary. It will bring in additional expertise, will be independent of company politics, and will increase transparency, resulting in increased public trust.

Operational vs societal

The impact of AI on society is very broad, and many topics still need research and debate. Therefore it might be useful to separate operational topics from societal challenges.
Operational topics are generally understood and typically under company control, e.g. the quality of data, the selection of algorithms by their properties (e.g. explainability), or the design of AI-enabled solutions in a way that keeps the user in control.
The governance of operational topics can be structured along the following dimensions: data, algorithms, objective functions, policies and user experience.
Societal challenges are much broader, have a deeper impact on society and are typically not under a company's control. Examples include the future of work from an economic and a social perspective, wealth distribution, and the balance between privacy and safety and security.
I will explore these questions in a separate blog post.