Saturday, December 29, 2018

Transparency

We need to distinguish between an overall system which is in part powered by AI algorithms and the algorithms themselves.

We also need to distinguish between transparency to whom, e.g. an end-user, an operator, an auditor or a regulator.

There should be no difference in how a system is treated whether or not it is powered by AI, and companies should have the necessary policies and processes in place to provide transparency and auditability at the system level.

The focus here will be on the AI algorithms themselves.

Creating transparency by opening the algorithm, e.g. exposing the code, is in most cases not feasible for legal and practical reasons.

There is precedent for the legal consideration. For example, credit rating algorithms are legally considered trade secrets and are hence protected from public disclosure. This is the case even in Germany, which is generally at the forefront of consumer protection, for the national credit scoring organization Schufa.

On the practical side, many machine learning algorithms, neural networks in particular, are so complex that they cannot be understood by looking at the code.

As a result of my research and discussions with others working in the field, I see two levels of transparency that could be implemented.

The first, weaker level is publishing the intentions and policies that the algorithm implements. For the credit scoring case, the intent is to assess a person's credit risk, and the policies could cover the amount of credit in relation to one's net worth, one's payment history, etc.
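To make this concrete, here is a minimal sketch of what a published policy could look like when written down as an explicit rule. The factor names and thresholds are illustrative assumptions, not the rules of Schufa or any real lender.

```python
# Illustrative only: a published credit policy expressed as an explicit rule.
# The factors and thresholds are assumptions made up for this example.

def credit_decision(requested_credit, net_worth, missed_payments):
    """Published policy: more than two missed payments lead to rejection,
    and the requested credit must not exceed half of the applicant's net worth."""
    if missed_payments > 2:
        return "rejected: payment history"
    if requested_credit > 0.5 * net_worth:
        return "rejected: credit exceeds 50% of net worth"
    return "approved"

print(credit_decision(requested_credit=1_000_000, net_worth=2_500_000, missed_payments=0))
# -> approved
```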

A second level can be achieved with counterfactual explanations. They explain why a negative decision was made and how circumstances would have had to differ for a desirable outcome. For example: I would have gotten a million-dollar loan if I had had $250,000 more in assets as collateral, or if I had not defaulted on a car loan 20 years ago.
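As a rough illustration of the idea, the following sketch trains a toy credit model on synthetic data and then searches for the smallest increase in collateral that would flip a rejection into an approval. The model, features, and data are assumptions made up for this example; the literature describes more principled, optimization-based ways to compute such counterfactuals.

```python
# A toy sketch of a counterfactual explanation for a rejected loan application.
# Model, features, and data are synthetic assumptions made up for this example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Features: [collateral in $1,000s, number of past loan defaults]
X = np.column_stack([rng.uniform(0, 2000, 500), rng.integers(0, 3, 500)])
y = ((X[:, 0] > 800) & (X[:, 1] == 0)).astype(int)  # synthetic approval rule
model = LogisticRegression(max_iter=1000).fit(X, y)

applicant = np.array([[300.0, 0]])  # low collateral, clean payment history
print("decision:", "approved" if model.predict(applicant)[0] else "rejected")

# Brute-force search for the smallest additional collateral that flips the
# decision, holding the payment history fixed.
for extra in range(0, 2001, 10):
    if model.predict(applicant + [[extra, 0]])[0] == 1:
        print(f"Counterfactual: approved with ${extra},000 more in collateral")
        break
else:
    print("No counterfactual found by changing collateral alone")
```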

Counterfactual explanations are discussed in depth in these two papers by Sandra Wachter, Brent Mittelstadt and Chris Russell: 



The concept of counterfactual explanations has been implemented in Google's What-If Tool for TensorFlow.
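For orientation, the sketch below shows roughly how the What-If Tool is typically embedded in a notebook with a custom prediction function. The feature names and the toy model are assumptions made for illustration, and the exact API should be checked against the current witwidget documentation.

```python
# Rough sketch of embedding the What-If Tool in a notebook (pip install witwidget).
# Feature names and the toy predict function are assumptions for illustration;
# consult the witwidget documentation for the authoritative API.
import tensorflow as tf
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

def make_example(collateral, defaults):
    """Pack one applicant into a tf.train.Example, the format WIT expects."""
    return tf.train.Example(features=tf.train.Features(feature={
        'collateral': tf.train.Feature(float_list=tf.train.FloatList(value=[collateral])),
        'defaults': tf.train.Feature(int64_list=tf.train.Int64List(value=[defaults])),
    }))

examples = [make_example(300.0, 0), make_example(1200.0, 0), make_example(1500.0, 2)]

def predict_fn(examples_to_infer):
    """Toy stand-in for a real model: return [P(reject), P(approve)] per example."""
    preds = []
    for ex in examples_to_infer:
        collateral = ex.features.feature['collateral'].float_list.value[0]
        defaults = ex.features.feature['defaults'].int64_list.value[0]
        approve = 1.0 if collateral > 800 and defaults == 0 else 0.0
        preds.append([1.0 - approve, approve])
    return preds

config_builder = WitConfigBuilder(examples).set_custom_predict_fn(predict_fn)
WitWidget(config_builder, height=600)  # renders the interactive tool in the notebook
```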

There are questions which remain:
How can one demonstrate that an algorithm implements a certain policy?
Testing, e.g. with Monte Carlo simulations, might be able to demonstrate this (to a certain extent).
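One way to picture this: sample many random applicants and check that the model's decisions never violate a property the published policy promises. The property below ("adding collateral never turns an approval into a rejection") and the toy model under test are assumptions chosen for illustration.

```python
# Monte Carlo sketch: sample random applicants and count violations of a
# property that the published policy is assumed to promise.
import numpy as np

def monte_carlo_policy_check(predict, n_trials=10_000, seed=0):
    rng = np.random.default_rng(seed)
    violations = 0
    for _ in range(n_trials):
        collateral = rng.uniform(0, 2000)
        defaults = int(rng.integers(0, 3))
        extra = rng.uniform(0, 500)
        # Property: adding collateral must never turn an approval into a rejection.
        if predict(collateral, defaults) == 1 and predict(collateral + extra, defaults) == 0:
            violations += 1
    return violations / n_trials

# Toy stand-in for the algorithm under test.
def toy_predict(collateral, defaults):
    return int(defaults == 0 and collateral > 800)

print("violation rate:", monte_carlo_policy_check(toy_predict))  # -> 0.0
```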

How should an appeal be resolved when a user is not satisfied with a counterfactual explanation? What would be the basis for deciding the appeal?
One possible approach would be for a person appointed by the operator of the system to make a decision based on company policy. Further appeals of a human-made decision would follow existing practice.
