A goal often stated for AI systems is the empowerment and augmentation of the users of such systems.
Empowerment implies control. In his book "Army of None: Autonomous Weapons and the Future of War", which I discussed in an earlier post, Paul Scharre distinguishes three levels of human control over autonomous systems. Although developed for autonomous weapons, this three-level categorization can be applied to AI systems in general.
1. Human in the loop: the AI recommends, and the human decides.
This is the strongest form of control and empowerment.
There is, however, the risk that the human becomes a mere actuator rather than a decider: the AI makes a recommendation that the human cannot investigate and validate, for example because the time available for the decision is too short, the sheer number of decisions overwhelms the decider, or the recommendation process lacks transparency.
The human in the loop may also have developed blind trust in the AI's recommendations.
2. Human on the loop: the system decides, but the human can take over control at any moment.
Essentially, the human controls the on/off switch of the AI system, which is a very coarse-grained level of control, yet ultimately a very powerful one.
3. Human out of the loop: the system is fully autonomous; the human controller has no power to intervene.
There can be reasonable use cases for each of these levels of human empowerment.
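To make the three levels concrete, here is a minimal sketch in Python. The `ControlLevel` enum and the `execute` function, along with its parameter names, are hypothetical illustrations of the categorization, not an established API; the point is simply where the human sits relative to the AI's decision.

```python
from enum import Enum, auto


class ControlLevel(Enum):
    HUMAN_IN_THE_LOOP = auto()      # AI recommends, human decides
    HUMAN_ON_THE_LOOP = auto()      # AI decides, human can take over
    HUMAN_OUT_OF_THE_LOOP = auto()  # fully autonomous


def execute(level, ai_decision, human_approves=False, human_overrides=False):
    """Return the action actually taken under the given control level,
    or None if no action is taken."""
    if level is ControlLevel.HUMAN_IN_THE_LOOP:
        # Nothing happens without explicit human approval.
        return ai_decision if human_approves else None
    if level is ControlLevel.HUMAN_ON_THE_LOOP:
        # The AI acts unless the human intervenes (the "off switch").
        return None if human_overrides else ai_decision
    # HUMAN_OUT_OF_THE_LOOP: the system acts entirely on its own.
    return ai_decision
```

The sketch makes the asymmetry visible: in the first level the default is inaction until a human approves, in the second the default is action until a human intervenes, and in the third the human does not appear in the control flow at all.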
When developing an AI system, all stakeholders, including product owners, data scientists, legal, and senior executives, need to carefully assess which level of human control and empowerment should be implemented.
An important consideration is the reversibility of a decision. If a decision is reversible, one possible mitigation is to allow the person subjected to it to challenge it, as discussed in a previous post.