Wednesday, August 7, 2019

Alexa, What Is a Conflict of Interest? - A follow-up from the pharma-medical industrial complex

Back in May, Nicholas Wright and I published an article, "Alexa, What Is a Conflict of Interest?", at slate.com. We discussed the potential conflict of interest which arises from your digital assistant being both friend and sales robot.

A few weeks after the article was published, there was news that Amazon had teamed up with the UK's National Health Service and that Alexa would provide medical advice.

In some sense this announcement leapfrogged the concerns stated in our article. With individuals' health and potentially life-and-death questions on the table, the stakes are higher. And the pharma-medical industrial complex is historically prone to conflicts of interest, as recently illustrated by the role of Purdue Pharma in the US opioid crisis.

I think that there is certainly a positive and productive role for conversational AI, and AI in general, in the medical field. The mental health chatbot Woebot might serve as an example.

However, the ethical guidelines need to be strong and clear, and the conflict between commercial interests and the health interests of the patient must be eliminated.

Human empowerment and control - revisited

In the original post on the topic from six months ago, I wrote about human control of AI systems in terms of the three categories in-the-loop, on-the-loop and out-of-the-loop.

Upon further reflection, I think there's a fourth category: conditional-on-the-loop.

The idea of conditional-on-the-loop is that one (potentially black-box) algorithm produces a decision (the decider), which is checked against another algorithm that provides full transparency (the checker), for example a rule-based system implementing a policy statement.

If the checker agrees with the decider, then the system would go ahead with implementing the decision. If, however, there is a conflict between the decider and the checker, the final decision would be delegated to a human decision maker.
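A minimal sketch of this pattern in Python (the names, types and review hook are illustrative, not a reference implementation):

from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    approved: bool
    rationale: str

def conditional_on_the_loop(case: dict,
                            decider: Callable[[dict], Decision],
                            checker: Callable[[dict], Decision],
                            human_review: Callable[[dict, Decision, Decision], Decision]) -> Decision:
    # decider: a (potentially black-box) model producing the primary decision
    # checker: a fully transparent, rule-based policy check
    # human_review: fallback invoked only when decider and checker disagree
    primary = decider(case)
    check = checker(case)
    if primary.approved == check.approved:
        return primary  # routine case: automate the decision
    return human_review(case, primary, check)  # conflict: escalate to a human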

The advantage of this approach is its scalability. Routine cases can be automated at low risk, and scarce human resources can be focused on the critical cases.

The approach is of limited use if time-to-decision is of the essence, e.g. in automated weapon systems, autonomous transportation, equity trading or other applications where decisions have to be made in extremely short time frames.

Saturday, December 29, 2018

Transparency

We need to distinguish between an overall system which is in part powered by AI algorithms and the algorithms themselves.

We also need to distinguish to whom transparency is provided, e.g. an end-user, an operator, an auditor or a regulator.

There should be no difference between a system that is powered by AI and one that is not, and companies should have the necessary policies and processes in place to provide transparency and auditability at the system level.

The focus here will be on the AI algorithms themselves.

Creating transparency by opening the algorithm, e.g. exposing the code, is in most cases not feasible for legal and practical reasons.

There is precedent for the legal consideration. For example, credit rating algorithms are legally considered a trade secret and hence protected from exposure to the public. Even in Germany, which is generally at the forefront of consumer protection, this is the case for the national credit scoring organization Schufa.

Practically, many machine learning algorithms, specifically neural networks, are so complex that they cannot be understood by looking at the code.

As a result of my research and discussions with others working in the field, I see two levels of transparency which could be implemented.

The first, weaker level is publishing the intentions and policies which are implemented by the algorithm. For the credit scoring case, the intent is to assess a person's credit risk, and the policies could cover the amount of credit in relation to one's net worth, one's payment history, etc.

A second level can be achieved by counterfactual explanations. They explain why a negative decision has been made and how circumstances would have had to differ for a desirable outcome. For example: I would have gotten a million-dollar loan if I had had $250,000 more in assets as collateral, or if I had not defaulted on a car loan 20 years ago.
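A toy illustration of the idea, assuming a stand-in approve_loan function and a brute-force search over a single feature (all names and thresholds are hypothetical):

def approve_loan(assets: float, defaults: int) -> bool:
    # Stand-in for a black-box credit model.
    return assets >= 1_000_000 and defaults == 0

def counterfactual_assets(assets: float, defaults: int,
                          step: float = 50_000, max_steps: int = 100):
    # Search for the smallest increase in assets that flips a rejection.
    if approve_loan(assets, defaults):
        return None  # already approved, no counterfactual needed
    for i in range(1, max_steps + 1):
        candidate = assets + i * step
        if approve_loan(candidate, defaults):
            return (f"The loan would have been approved with "
                    f"${candidate - assets:,.0f} more in assets.")
    return "No counterfactual found by varying assets alone."

print(counterfactual_assets(assets=750_000, defaults=0))
# -> The loan would have been approved with $250,000 more in assets.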

Counterfactual explanations are discussed in depth in these two papers by Sandra Wachter, Brent Mittelstadt and Chris Russell: 



The concept of counterfactual explanations has been implemented in Google's What-If Tool for TensorFlow.

There are questions which remain:
How to demonstrate that an algorithm implements certain policies?
Testing, e.g. with Monte Carlo simulations, might be able to demonstrate this to a certain extent, as sketched below.
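The following Monte Carlo style test samples random inputs and checks that the algorithm's decisions never violate the published policy (the model, policy and input ranges are purely illustrative):

import random

def policy_violated(applicant: dict, approved: bool) -> bool:
    # Published policy (illustrative): credit above 50% of net worth must not be approved.
    return approved and applicant["credit_requested"] > 0.5 * applicant["net_worth"]

def credit_model(applicant: dict) -> bool:
    # Stand-in for the algorithm under test.
    return applicant["credit_requested"] <= 0.4 * applicant["net_worth"]

def monte_carlo_policy_test(n_samples: int = 100_000, seed: int = 42) -> int:
    random.seed(seed)
    violations = 0
    for _ in range(n_samples):
        applicant = {
            "net_worth": random.uniform(10_000, 5_000_000),
            "credit_requested": random.uniform(1_000, 2_000_000),
        }
        if policy_violated(applicant, credit_model(applicant)):
            violations += 1
    return violations

print("Policy violations in sampled cases:", monte_carlo_policy_test())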

How to resolve an appeal by a user who is not satisfied with a counterfactual explanation? What would be the basis for deciding the appeal?
One possible approach would be that a person appointed by the operator of the system makes a decision based on company policy. Further appeals to a human-made decision would follow existing practice.

Tuesday, December 25, 2018

Human empowerment and control


A goal often stated for AI systems is the empowerment and augmentation of the users of such systems.

Empowerment implies control. Paul Scharre discusses different levels of control in his book "Army of None: Autonomous Weapons and the Future of War", which I discussed in an earlier post. This three-level categorization can, however, be applied to AI systems in general.

·       in-the-loop
the AI recommends, and the human decides

This is the strongest form of control and empowerment.

There is, however, the potential that the human is just an actuator and not a decider, because the AI makes a suggestion which cannot be investigated and validated by the human decision maker for various reasons: the time for making a decision is too short, the large number of decisions to be made overwhelms the decider, or there is a lack of transparency in the recommendation process.

The human in-the-loop could also have developed blind trust in the AI's recommendations.

·       on-the-loop
the system decides, but the human can take over control at any moment

Essentially the human has control over the on/off switch of the AI system, which is a very coarse-grained level of control, yet ultimately a very powerful one.

·       out-of-the-loop
this is a fully autonomous system; the human controller has no power.

There can be reasonable use cases for each of these levels of human empowerment.
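As a rough illustration of how the three levels differ in code (a hypothetical Python sketch, not drawn from the book):

from enum import Enum

class ControlLevel(Enum):
    IN_THE_LOOP = "in-the-loop"          # AI recommends, human decides
    ON_THE_LOOP = "on-the-loop"          # system decides, human can intervene
    OUT_OF_THE_LOOP = "out-of-the-loop"  # fully autonomous

def decide(case, ai_recommend, level, human_decides, human_intervenes):
    recommendation = ai_recommend(case)
    if level is ControlLevel.IN_THE_LOOP:
        # The human makes the final call, informed by the AI recommendation.
        return human_decides(case, recommendation)
    if level is ControlLevel.ON_THE_LOOP:
        # The system acts on its own; a human override (if any) takes precedence.
        override = human_intervenes(case, recommendation)  # returns a decision or None
        return override if override is not None else recommendation
    # OUT_OF_THE_LOOP: the recommendation is executed without human involvement.
    return recommendation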

When developing an AI system, all stakeholders, including product owners, data scientists, legal and senior executives, need to carefully assess which level of human control and empowerment should be implemented.

An important consideration is the reversibility of a decision. If the decision is reversible, a possible mitigation is giving the person subjected to the decision the possibility to challenge it, as discussed in a previous post.

Monday, December 24, 2018

Predictability and unreasonable inference


AI can potentially draw non-intuitive and unverifiable inferences and predictions.

To create trust in AI systems, it will be necessary that AI systems behave predictably, i.e. within the expectations set by published intent and policies.

When an AI algorithm produces an unreasonable inference, i.e. a result which is outside the expected outcomes, and the result has a significant impact on a person's life, the person subjected to the AI algorithm should have the right to challenge such an unreasonable inference.

Operators of AI systems need to provide adequate means for a person to challenge such an unreasonable inference. This includes software solutions, internal processes and sufficient human supervisors to handle such cases.
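A purely hypothetical sketch of what the software side of such a challenge process could look like:

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class Challenge:
    case_id: str
    inference: str
    reason: str
    submitted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    status: str = "open"

@dataclass
class ChallengeQueue:
    # Minimal queue routing challenged inferences to human supervisors.
    open_challenges: List[Challenge] = field(default_factory=list)

    def submit(self, case_id: str, inference: str, reason: str) -> Challenge:
        challenge = Challenge(case_id, inference, reason)
        self.open_challenges.append(challenge)
        return challenge

    def resolve(self, challenge: Challenge, decision: str) -> None:
        challenge.status = f"resolved: {decision}"
        self.open_challenges.remove(challenge)

# Example: a person challenges a credit decision they consider unreasonable.
queue = ChallengeQueue()
queue.submit("case-4711", "loan denied", "decision appears based on outdated income data")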

Operators should also conduct rigorous testing of the algorithm to minimize unreasonable inferences. 

Sandra Wachter and Brent Mittelstadt look at this issue in their forthcoming article "A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI" from a legal perspective.

Fairness Revisited


A few weeks ago, I attended a workshop in Cambridge (MA) organized by Harvard, MIT and Princeton, where I had the chance to discuss the concept of fairness with a number of experts. Based on this input, I have revised and sharpened my earlier observations on fairness:

There are many definitions of fairness across the ideological spectrum. It cannot be expected that there will be broad agreement on a single definition.

Furthermore, different definitions of fairness might be applicable to different circumstances.

A company will need to pick a definition of fairness which it deems applicable to a certain area of application and publish the chosen definition. The algorithm should also be tested to ensure that it implements the specified fairness definition.
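For instance, if a company published demographic parity between two groups as its chosen definition, a test could look roughly like this (a sketch with made-up data; other fairness definitions would require different metrics):

from typing import Sequence

def demographic_parity_difference(decisions: Sequence[int], groups: Sequence[str]) -> float:
    # Absolute difference in positive-decision rates between two groups.
    # decisions: 1 for a positive outcome (e.g. loan approved), 0 otherwise.
    # groups: group membership label for each decision.
    rates = {}
    for g in set(groups):
        outcomes = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(outcomes) / len(outcomes)
    a, b = rates.values()
    return abs(a - b)

decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(decisions, groups))  # 0.5 on this toy data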

Based on stakeholder feedback, the company might adjust the definition of fairness. The adjustment should be appropriately documented.

The Meta Data Manifest


Because data is critical to AI algorithms, it must be handled with the necessary rigor and transparency.

The metadata about the data which is used to train an algorithm needs to be carefully documented, including the time of data collection, the collection method and device (if applicable), the location of the collection, if and how the data has been cleaned and validated, intended uses, maintenance, life cycle, etc.
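A minimal sketch of what such a manifest could look like in code (the field names are illustrative and follow the categories listed above rather than any standardized template):

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DatasetManifest:
    # Metadata manifest accompanying a training dataset.
    name: str
    collection_start: str                  # ISO 8601 date
    collection_end: str
    collection_method: str                 # e.g. survey, sensor, web scrape
    collection_device: Optional[str] = None
    collection_location: Optional[str] = None
    cleaning_steps: List[str] = field(default_factory=list)
    validation_steps: List[str] = field(default_factory=list)
    intended_uses: List[str] = field(default_factory=list)
    maintenance_policy: str = ""
    lifecycle_notes: str = ""

manifest = DatasetManifest(
    name="loan-applications-2018",
    collection_start="2018-01-01",
    collection_end="2018-06-30",
    collection_method="online application form",
    cleaning_steps=["removed duplicate applications", "normalized currency fields"],
    intended_uses=["credit risk scoring"],
)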

The article "Datasheets for Datasets" by Timnit Gebru et al. motivates and introduces this approach.

Such a manifest provides auditability and transparency and lays the foundation for responsibility and accountability.

It is important to incorporate the notion of the metadata manifest into machine learning tools and workbenches and to automate metadata collection and maintenance wherever possible.