We also need to distinguish transparency for whom: an end-user, an operator, an auditor, or a regulator.
Whether a system is powered by AI or not should make no difference: companies should have the necessary policies and processes in place to provide transparency and auditability at the system level.
The focus here will be on AI algorithms.
Creating transparency by opening the algorithm, e.g.
exposing the code, is in most cases not feasible for legal and practical reasons.
There is precedent for the legal consideration. For example,
credit rating algorithms are legally considered trade secrets and hence
protected from exposure to the public. This is the case even in Germany, which is generally at the forefront of consumer protection, for the national credit scoring organization Schufa.
Practically, many machine learning algorithms, specifically neural networks,
are so complex that they cannot be understood by looking at the code.
As a result of my research and discussions with others working in the field, I see two levels of transparency that could be implemented.
The first, weaker level is publishing the intentions and
policies that are implemented by the algorithm. For the credit scoring case, the intent is to assess one's credit risk, and the policies could be the amount of credit in relation to one's net worth, one's payment history, etc.
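This first level could even be published in a machine-readable form. The following is a minimal sketch of such a policy declaration; the field names and factors are invented for illustration and do not follow any real standard.

```python
# Hypothetical example: a machine-readable policy declaration that an
# operator could publish alongside a credit-scoring system. All field
# names and factors are illustrative assumptions, not a real standard.
credit_policy = {
    "intent": "Assess an applicant's credit risk",
    "factors": [
        {"name": "credit_to_net_worth_ratio", "effect": "lower is better"},
        {"name": "payment_history", "effect": "fewer defaults is better"},
    ],
}

def describe(policy):
    """Render the published policy as human-readable text."""
    lines = [f"Intent: {policy['intent']}"]
    for factor in policy["factors"]:
        lines.append(f"- {factor['name']}: {factor['effect']}")
    return "\n".join(lines)

print(describe(credit_policy))
```

Publishing such a declaration does not prove the algorithm follows it, but it gives auditors and regulators a concrete statement to test against.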
A second level can be achieved by counterfactual
explanations. They explain why a negative decision was made and how
circumstances would have had to differ for a desirable outcome. For example: I would have gotten a million-dollar loan if I had had $250,000 more in assets as collateral, or if I had not defaulted on a car loan 20 years ago.
Counterfactual explanations are discussed in depth in two papers by Sandra Wachter, Brent Mittelstadt and Chris Russell.
The concept of counterfactual explanations has been
implemented in Google’s What-If Tool for TensorFlow.
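The idea of a counterfactual can be sketched with a toy example. The decision rule, threshold, and dollar amounts below are invented for illustration; the What-If Tool works on real TensorFlow models rather than a hand-written formula.

```python
# A minimal sketch of a counterfactual explanation, assuming a toy
# linear credit model. The weights and threshold are invented.

def approve_loan(assets, defaults):
    """Toy decision rule: score applicants on collateral and past defaults."""
    score = 0.001 * assets - 50 * defaults
    return score >= 250  # approve if the score clears the threshold

def counterfactual_assets(assets, defaults, step=1000, limit=10**7):
    """Find the smallest increase in assets that flips a rejection."""
    if approve_loan(assets, defaults):
        return 0  # already approved, no change needed
    extra = step
    while extra <= limit:
        if approve_loan(assets + extra, defaults):
            return extra  # the counterfactual: "you needed this much more"
        extra += step
    return None  # no counterfactual found within the search limit

# An applicant with $100,000 in assets and one past default is rejected;
# the counterfactual tells them how much more collateral would flip it.
needed = counterfactual_assets(100_000, 1)
print(f"Approved with ${needed:,} more in assets")  # → $200,000 more
```

The explanation stays at the level of inputs and outcomes, so it does not require exposing the model's internals, which is what makes it attractive for the trade-secret cases discussed above.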
Some questions remain. How can one demonstrate that an algorithm implements
certain policies?
Testing, e.g. with Monte Carlo simulations, might be able to demonstrate this
(to a certain extent).
How can an appeal by a user who is not
satisfied with a counterfactual explanation be resolved? What would be the basis for deciding
the appeal?
One possible approach would be for a person appointed by the operator of the system to
make a decision based on company policy. Further appeals of a human-made
decision would follow existing practice.