Wednesday, July 18, 2018

Multi-purpose Nature and Proliferation of AI Technology


AI technology is inherently multi-purpose.

Intelligence is multi-purpose and so is AI.

Face recognition software is a good example. Facebook tags your friends in your pictures so that you don’t have to do it. Police, state and private security organizations may use the same technology to identify a person in a public crowd. Their motivations and actions may vary.

Similarly, self-driving cars and trucks share a lot of technology with autonomous military vehicles. And there are plenty more examples.

Non-proliferation of AI technology is almost impossible

The history of the internet has demonstrated that it is very hard, if not impossible, to control the distribution of digital media. For example, the Napster-led spread of digital music was only contained through a combination of license restrictions, the threat of enforcement, economic measures and convenience.

Furthermore, many AI-related algorithms are available under open source licenses with barely any restrictions on their use.

License restrictions would be unenforceable against bad actors.

Keeping software within an organization is no guarantee of keeping its use contained. Sensitive data is stolen from companies and government organizations on a regular basis.

The 2017 Vault 7 case, in which hackers gained access to the CIA’s cybersecurity and surveillance tools, illustrates this vividly.

Commodity hardware

While in the past special hardware was needed for more advanced algorithms, today’s data centers are largely built on commodity hardware which is widely available. Another historical proliferation barrier has disappeared.

Education is free and widely available

The knowledge barrier has also disappeared. Anyone with sufficient intellectual capability, language skills and motivation can obtain the necessary knowledge through online courses, books, manuals, etc.

Treasure is not a key differentiator

Given that the software and knowledge are free and commodity hardware is cheap, treasure does not provide a barrier to entry.

Conclusions

Given this situation, it must be assumed that bad state and non-state actors will have AI technology available to them that is close enough to that of the most advanced security, intelligence and military organizations to create severe internal and external threats.

Saturday, July 7, 2018

AI weapon systems are likely to increase the number of military conflicts


The Doomsday Machine

After reading Daniel Ellsberg’s book of the same title, I felt lucky that I am still alive and that, in general, mankind survived the Cold War.

Following the Manhattan Project, nuclear weapons and the triad of delivery systems were mass-produced within a couple of decades on both sides of the Iron Curtain. But the control mechanisms were crude, and post-Cold War analysis of the Cuban Missile Crisis showed how close we had come to destroying civilization.

Mutually assured destruction has been the doctrine that prevented the use of nuclear weapons since the 1960s.

Continuation of policy by other means

In the sense of Clausewitz’ definition of war as a mere continuation of policy by other means, nuclear weapons are rather useless. For the continuation of policy, the military needs weapon systems which can be scaled as well as contained. This was demonstrated by the military conflicts of past decades. Carrier battle groups, advanced airplanes and cruise missiles have been the weapons of choice.

Today’s advanced weapon systems are already highly computerized. GPS- and laser-based guidance systems, for example, are controlled by complex software systems and could be considered weak AI systems.

There is no question about AI being used for weapon systems; it’s only a matter of timing and capabilities. Judging from what’s feasible today, AI weapon systems most likely exist already.

Former US Deputy Defense Secretary Robert Work and Shawn Brimley predicted in their 2014 paper 20YY: Preparing for War in the Robotic Age that "The number of instances where humans must remain in the loop will likely shrink over time" due to advances in AI.

Unmanned aerial vehicles are good examples of the advances towards autonomous weapon systems. Today there is still a human legally required to pull the trigger, but it’s unclear whether the human is a true decision maker or just an actuator. The decision might be predetermined by the software-based analysis of the raw data, e.g. face recognition software positively identifying the target.

A major consequence of this development is that the threshold for military means is lowered dramatically. Mutually assured destruction was the strongest deterrent. More than 50,000 dead US soldiers in Vietnam called that war, and the deployment of the US military in general, into serious question.

About 5,000 dead US military personnel in Iraq, a tenfold reduction over Vietnam due to technology advances in conventional weapon systems, changed the public debate.

The next war, fought predominately with autonomous weapons, is likely to reduce the body count by another order of magnitude. 

What’s another trillion dollars? 

While the cost of war is measured in lives and treasure, the balance is shifting towards treasure. Over the last half century, politicians have demonstrated little restraint when it comes to spending public treasure and have borrowed heavily.

With the advance of AI weapon systems, military conflict seems more likely than ever.

Wednesday, July 4, 2018

Fairness vs. Bias

A common requirement that organizations and companies establish for AI systems is to ensure fair decisions, free from bias.

The relationship between fairness and bias is, however, complex.

A suitable and timely example to discuss this complexity is affirmative action, i.e. giving preference to women, black people, or other groups that are often treated unfairly (Cambridge Dictionary). 

A bias is explicitly introduced for rectifying previous unfair treatment.

The intrinsic issue of fairness is that while everyone should be treated the same, not everyone is the same.

In the case of affirmative action, origin, race, sex, socio-economic status and the education of one’s parents are beyond the control of a young person but make a huge difference in the opportunities they have.

This question is currently under debate in the context of the US college admission process, where affirmative action is being challenged by President Trump.

Affirmative action and similar approaches suffer from the complexity of quantifying the level of preference introduced into the system. The system of quotas which is often applied takes a very simple approach to a complex problem.

Other applicable examples are:

  • Tax code
    What’s fair: a flat or a progressive tax system?

  • Insurance
    Is risk assessment fair?
    Is it fair that young drivers pay more?
    Is it fair to provide health insurance discounts based on age, lifestyle, or body mass index?

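The tax question can be made concrete with a small sketch. The brackets and rates below are made-up illustrations, not any real tax code: a flat tax applies one rate to everyone, while a progressive tax applies higher marginal rates to higher slices of income, so the two encode different notions of fairness.

```python
def flat_tax(income, rate=0.20):
    """Everyone pays the same rate: equal treatment of unequal incomes."""
    return income * rate

def progressive_tax(income,
                    brackets=((10_000, 0.00),
                              (40_000, 0.15),
                              (float("inf"), 0.30))):
    """Higher marginal rates on higher slices of income:
    unequal treatment intended to reflect unequal circumstances."""
    tax, lower = 0.0, 0
    for upper, rate in brackets:
        if income > lower:
            # Tax only the slice of income falling inside this bracket.
            tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

# A low earner pays less under the progressive scheme,
# a high earner pays more -- which outcome is "fair"?
for income in (20_000, 100_000):
    print(income, flat_tax(income), progressive_tax(income))
```

Both functions treat everyone identically as a matter of procedure; they differ in which outcome they consider equitable, which is exactly the ambiguity the examples above point at.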

When applying fairness and bias requirements to AI systems, a definition of fairness needs to be provided.
The discussion between progressive, libertarian and conservative approaches is ongoing. While some countries have found broad consensus on the definition of fairness in their societies, the US is not among them.
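The need for an explicit definition is not just political; even in a narrow statistical setting, common fairness definitions can disagree about the very same decisions. A toy sketch (the counts and group names are illustrative assumptions, not data from any real system) comparing demographic parity with equal opportunity:

```python
# Toy decision counts for two applicant groups (illustrative numbers only).
# "qualified" = applicants who in fact merited approval.
group_a = {"approved": 50, "total": 100, "approved_qualified": 45, "qualified": 60}
group_b = {"approved": 30, "total": 100, "approved_qualified": 30, "qualified": 40}

def approval_rate(g):
    # Demographic parity compares overall approval rates across groups.
    return g["approved"] / g["total"]

def true_positive_rate(g):
    # Equal opportunity compares approval rates among qualified applicants.
    return g["approved_qualified"] / g["qualified"]

# Demographic parity is violated: overall approval rates differ (0.5 vs 0.3).
print(approval_rate(group_a), approval_rate(group_b))

# Equal opportunity holds: qualified applicants in both groups
# are approved at the same rate (0.75).
print(true_positive_rate(group_a), true_positive_rate(group_b))
```

The same set of decisions is "unfair" under one definition and "fair" under the other, which is why an AI system cannot be audited for fairness until someone chooses the definition.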

Human values and their implementation

Instilling human values into AI systems appears to be an important goal in the discussion of AI ethics.

There is, however, no clear consensus what universal human values are. Value systems are determined by different historical, cultural and religious experiences.

Furthermore, humans don’t have a good track record of implementing such values.  Despite thousands of years of religious and philosophical scholarship resulting in the creation and interpretation of value systems, human history has been determined by mass killings, rape, plunder, and human exploitation.

Throughout history, we have witnessed two major approaches to the implementation of such values in decision making: Realpolitik, i.e. politics based on practical and material factors rather than on theoretical or ethical objectives; and Ideology, a system of ideas and ideals, especially one which forms the basis of economic or political theory and policy.

Decision making on all levels appears to be a balancing act between the application of fundamental values and the consideration of practical concerns.

It seems that we are a long way from instilling human values into AI systems. But there’s hope that the AI context will put the questions

  1. What are the human values?
  2. How should they be applied? 

into the center of the public discourse.

Monday, July 2, 2018

Social relationships

AI will have significant impact on people’s lives, in a most private and personal way. Here are a few emerging scenarios.

Social aspects of work

Besides generating income, work plays an important role in most people’s social life. Work gives a certain meaning to people’s lives through the contributions they make, and colleagues are often friends. If work is eliminated from people’s lives, those lives become less rich.
While studies have shown a correlation between declining employment prospects and declining mental health, the long-term situation is not clear. This is an area that requires further study. The search for substitutes for the social aspects of work, e.g. volunteering, appears to be one avenue.

Social Scoring Systems

Social scoring systems are being pioneered in China, initially by private companies but with plans by the government to make them official. These systems take credit scoring as long practiced in most advanced economies to the next level by extending the data set beyond direct financial indicators.
The implication is a universal judgement of a person. Instead of asking whether a person is creditworthy, the question now is: Is this a "good" person?
Such systems are likely to create conformity and distrust between people and to suppress individuality and dissent. Again, citizens need to debate and decide what kind of a society they would like to live in.

AI companionship

AI-empowered digital assistants on mobile devices and in-home appliances have made great progress in recent years. Their voices, as recently demonstrated by Google’s Digital Assistant, are almost indistinguishable from human ones, and their ability to understand humans keeps improving.
The question of whether digital assistants can become your best friend has been explored, for example, by Digg, the Wall Street Journal and others. Alexa and her friends are always there for you, they are great listeners, and they know everything. Soon they will really understand you and help you, especially if you pay to have Dr. Phil’s premium knowledge base loaded.
Besides virtual companionship, companies are also working on providing the tactile experience.
There are clear benefits to these scenarios, but they come with some significant side effects.
Individuals must think, debate and decide what they want their future to be.
Corporations bear great responsibility for the fashions, tastes and desires they have their marketing departments create.

Economic Considerations


Unemployment

Entire classes of jobs are likely to be eliminated by AI systems in the mid-term, including drivers, sales associates, manufacturing line workers and call center employees. At this point it is not clear how companies can leverage this human potential for growth, and it might just lead to the elimination of jobs. Immediate employment opportunities through retraining might be limited. Companies need to rethink their social responsibility.

Macroeconomic Arrogance

Economists have pointed out that, historically, technology which made jobs obsolete has ultimately led to the creation of new types of jobs in even greater numbers, as for example in this study by Deloitte.
While this is statistically correct, it is not clear how it affected the individual. What happened to the weaver, the farm hand and the coal miner once their jobs were eliminated? Did they find a place in the new service economy? Did they become knowledge workers? I'm doubtful. So the question remains: what will happen to millions of drivers when self-driving cars and trucks become a reality?
The warning that past performance is not an indicator of future performance might apply. The AI revolution is unprecedented and is very likely to also eliminate knowledge-based and service-industry jobs. 

Wealth Distribution

Wealth is distributed between labor and capital. As human labor is likely to be reduced by AI, more wealth might flow towards capital, leading to an even greater concentration of capital and wealth. Countries need to continue public debate and experimentation about the kind of society their people prefer to live in.


Corporate Governance Structure and Framework

You cannot win the game but you can lose it

While AI can certainly be a competitive advantage, its governance falls under the category of risk management. This means it has little to contribute towards winning, but a big blunder can have an enormous, potentially existential, impact on a company.

Transparency

Transparency is key because it is instrumental for gaining the public’s trust that AI is applied in a “good” way. Since AI governance offers little competitive advantage, openness is not going to hurt the bottom line.

Corporations and public debate

Corporations cannot address AI governance in isolation. Many of the questions need further elaboration by society and public debate. Even apparently simple concepts such as fairness and non-bias are complex and need further discussion.

Internal guidance and external oversight

Companies will need experts and executives, sufficiently empowered, to govern the development of AI and provide concrete guidance to product developers.
Since a company’s board of directors may not have the necessary expertise and experience in AI and its implications and risks, a specialized external advisory or supervisory board might be necessary. It would bring in additional expertise, be independent of company politics and increase transparency, resulting in increased public trust.

Operational vs societal

The impact of AI on society is very broad, and many topics still need research and debate. Therefore it might be useful to separate operational topics from societal challenges.
Operational topics are generally understood and typically under company control, e.g. the quality of data, the selection of algorithms by their properties, such as explainability, or the design of AI-enabled solutions in a way that keeps the user in control.
The governance of operational topics can be structured along the following dimensions: data, algorithms, objective functions, policies and user experience.
Societal challenges are much broader, have a deeper impact on society and are typically not under a company’s control. Examples include the future of work from an economic and a social perspective, wealth distribution, and the balance between privacy and safety and security.
I will explore these questions in a separate blog post.