Wednesday, August 7, 2019

Alexa, What Is a Conflict of Interest? - A follow-up from the pharma-medical industrial complex

Back in May, Nicholas Wright and I published the article Alexa, What Is a Conflict of Interest? at slate.com. We discussed the potential conflict of interest that arises when your digital assistant is both friend and sales robot.

A few weeks after the article was published, there was news that Amazon had teamed up with the UK's National Health Service and that Alexa would provide medical advice.

In some sense this announcement leapfrogged the concerns stated in our article. With individuals' health and potential life-and-death questions on the table, the stakes are higher. And the pharma-medical industrial complex is historically prone to conflicts of interest, as recently illustrated by the role of Purdue Pharma in the US opioid crisis.

I think there is certainly a positive and productive role for conversational AI, and AI in general, in the medical field. The mental health chatbot Woebot might serve as an example.

However, the ethical guidelines need to be strong and clear, and conflicts between commercial interests and the health interests of the patient must be eliminated.

Human empowerment and control - revisited

In the original post on the topic from six months ago, I wrote about human control of AI systems in terms of the three categories in-the-loop, on-the-loop and out-of-the-loop.

Upon further reflection, I think there's a fourth category: conditional-on-the-loop.

The idea of conditional-on-the-loop is that one (potentially black-box) algorithm produces a decision (the decider), which is checked against another algorithm that provides full transparency (the checker), for example a rule-based system implementing a policy statement.

If the checker agrees with the decider, the system goes ahead and implements the decision. If, however, there is a conflict between the decider and the checker, the final decision is delegated to a human decision maker.
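To make the idea concrete, here is a minimal sketch in Python of how such a setup could be wired together. The names conditional_on_the_loop, decider, checker and ask_human are placeholders of my own choosing, not part of any existing system; the decider stands in for a potentially opaque model and the checker for a transparent, rule-based policy.

from typing import Any, Callable

def conditional_on_the_loop(
    case: Any,
    decider: Callable[[Any], Any],         # potentially black-box model proposing a decision
    checker: Callable[[Any, Any], bool],   # transparent check, e.g. a rule-based policy
    ask_human: Callable[[Any, Any], Any],  # hand-off to a human decision maker
) -> Any:
    # The decider proposes a decision for the case.
    proposed = decider(case)
    # The transparent checker validates the proposal against the policy.
    if checker(case, proposed):
        # Agreement: implement the decision automatically (the routine path).
        return proposed
    # Conflict: delegate the final decision to a human.
    return ask_human(case, proposed)

# Illustrative use with trivial stand-ins:
# decision = conditional_on_the_loop(
#     case={"amount": 120},
#     decider=lambda c: "approve",
#     checker=lambda c, d: not (d == "approve" and c["amount"] > 100),
#     ask_human=lambda c, d: input(f"Decider proposed {d!r} for {c}; your call: "),
# )

The point of the sketch is only the control flow: automation on agreement, human escalation on disagreement.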

The advantage of this approach is its scalability. Routine cases can be automated at low risk, and scarce human attention can be focused on the critical cases.

The approach is of limited use if time-to-decision is of the essence, e.g. in automated weapon systems, autonomous transportation, equity trading or other applications where decisions have to be made in extremely short time frames.