In the original post on the topic from six months ago, I wrote about human control of AI systems in terms of three categories: in-the-loop, on-the-loop and out-of-the-loop.
Upon further reflection, I think there's a fourth category: conditional-on-the-loop.
The idea of conditional-on-the-loop is that one (potentially black-box) algorithm produces a decision (the decider), which is checked against another algorithm that provides full transparency (the checker), for example a rule-based system implementing a policy statement.
If the checker agrees with the decider, then the system goes ahead and implements the decision. If, however, there's a conflict between the decider and the checker, the final decision is delegated to a human decision maker.
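The decide-check-escalate flow can be sketched in a few lines. In this sketch, `decider` stands in for a black-box model and `checker` for a transparent policy rule; both functions and the loan-application example are hypothetical illustrations, not taken from the original post.

```python
from enum import Enum

class Route(Enum):
    EXECUTE = "execute"            # decider and checker agree: automate
    HUMAN_REVIEW = "human_review"  # conflict: escalate to a person

def decider(application: dict) -> bool:
    # Hypothetical black-box decider, e.g. an ML model's approve/reject output.
    return application["score"] >= 0.7

def checker(application: dict) -> bool:
    # Hypothetical transparent checker: a rule encoding a policy statement,
    # e.g. "approve only if income is at least three times the requested amount".
    return application["income"] >= 3 * application["amount"]

def conditional_on_the_loop(application: dict) -> Route:
    if decider(application) == checker(application):
        return Route.EXECUTE       # routine case: implement automatically
    return Route.HUMAN_REVIEW      # disagreement: delegate to a human

# Agreement is automated; disagreement is escalated.
print(conditional_on_the_loop({"score": 0.9, "income": 90_000, "amount": 10_000}))  # Route.EXECUTE
print(conditional_on_the_loop({"score": 0.9, "income": 20_000, "amount": 10_000}))  # Route.HUMAN_REVIEW
```

The key design point is that the human is consulted only on the disagreement set, which is exactly where the risk of an opaque automated decision is highest.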
The advantage of this approach is its scalability: routine cases can be automated at low risk, while scarce human attention is focused on the critical cases.
The approach is of limited use if time-to-decision is of the essence, e.g. in automated weapons systems, autonomous transportation, equity trading or other applications where decisions have to be made in extremely short time frames.