Monday, December 24, 2018

Predictability and unreasonable inference


AI systems can draw inferences and make predictions that are non-intuitive and difficult to verify.

To create trust in AI systems, they must behave predictably, i.e. within the expectations set by their published intent and policies.

When an AI algorithm produces an unreasonable inference, i.e. a result outside the expected outcome, and that result has a significant impact on a person's life, the person subjected to the algorithm should have the right to challenge the inference.
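
To make "outside the expected outcome" concrete, here is one minimal, hypothetical way an operator could flag a prediction as potentially unreasonable, by comparing it against the distribution of outputs seen during validation. The function name, threshold, and data are all illustrative assumptions, not a prescribed method:

```python
import numpy as np

def flag_unreasonable(prediction: float,
                      validation_outputs: np.ndarray,
                      z_threshold: float = 3.0) -> bool:
    """Flag a prediction that lies far outside the distribution of
    outputs observed on a held-out validation set (one possible
    operationalization; hypothetical)."""
    mean = validation_outputs.mean()
    std = validation_outputs.std()
    if std == 0:
        return prediction != mean
    return abs(prediction - mean) / std > z_threshold

# Example: a credit score of 950 when validation scores cluster near 600.
validation_scores = np.random.default_rng(0).normal(600, 50, size=10_000)
print(flag_unreasonable(950.0, validation_scores))  # True: ~7 std devs out
```

A flag like this would not decide the challenge itself; it would only route the case toward the human review discussed next.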

Operators of AI systems need to provide adequate means for a person to challenge such unreasonable inferences. This includes software solutions, internal processes, and enough human supervisors to handle such cases.
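
As a rough sketch of what such a software solution might record, a challenge could be captured as a structured case and placed on a queue worked by human supervisors rather than fed back to the model. Every field name and value below is a hypothetical illustration:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from queue import Queue

@dataclass
class InferenceChallenge:
    """A person's challenge to an AI-produced inference (hypothetical schema)."""
    case_id: str
    subject_id: str       # the person affected by the inference
    model_version: str    # which model produced the disputed result
    disputed_output: str  # the inference being challenged
    grounds: str          # why the person considers it unreasonable
    filed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    status: str = "pending_human_review"

# Challenges go to human reviewers, not back into the algorithm.
review_queue: "Queue[InferenceChallenge]" = Queue()
review_queue.put(InferenceChallenge(
    case_id="C-2018-0042",
    subject_id="anon-user-17",
    model_version="loan-scorer-v3",
    disputed_output="loan application rejected",
    grounds="income data used by the model was outdated",
))
```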

Operators should also test their algorithms rigorously before deployment to minimize unreasonable inferences.
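
One simple form such testing could take is a bounds check: sweep a grid of plausible and extreme inputs and assert that every output stays within the published range. The model stub and ranges below are hypothetical stand-ins:

```python
import unittest

def score_applicant(income: float, debt: float) -> float:
    """Stand-in for the real model under test (hypothetical)."""
    return max(0.0, min(850.0, 300.0 + 0.002 * income - 0.004 * debt))

class InferenceBoundsTest(unittest.TestCase):
    def test_scores_stay_in_valid_range(self):
        # Every score must remain within the published 0-850 range,
        # even for extreme inputs.
        for income in (0.0, 30_000.0, 250_000.0, 10_000_000.0):
            for debt in (0.0, 5_000.0, 1_000_000.0):
                score = score_applicant(income, debt)
                self.assertGreaterEqual(score, 0.0)
                self.assertLessEqual(score, 850.0)

if __name__ == "__main__":
    unittest.main()
```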

Sandra Wachter and Brent Mittelstadt examine this issue from a legal perspective in their forthcoming article, "A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI".
