Commit a2b8feb

Model explainers and the press secretary — directly optimizing for trust in machine learning may be harmful

pbiecek authored Sep 18, 2019
1 parent f9fc405
Showing 1 changed file with 1 addition and 1 deletion.
README.md
@@ -334,9 +334,9 @@ We illustrate our notion using a case study of FICO credit scores.

### 2019

* [Model explainers and the press secretary — directly optimizing for trust in machine learning may be harmful](https://medium.com/@stuart.reynolds/model-explainers-and-the-press-secretary-optimizing-for-trust-in-machine-learning-may-be-harmful-84275b27bea6); If black-box model explainers optimize human trust in machine learning models, why shouldn’t we expect them to function like a dishonest government Press Secretary?
* [Decoding the Black Box: An Important Introduction to Interpretable Machine Learning Models in Python](https://www.analyticsvidhya.com/blog/2019/08/decoding-black-box-step-by-step-guide-interpretable-machine-learning-models-python/); Ankit Choudhary; Interpretable machine learning is a critical concept every data scientist should be aware of. How can you build interpretable machine learning models? The article provides a framework and codes these models in Python (a minimal illustrative sketch follows this list).
* [I, Black Box: Explainable Artificial Intelligence and the Limits of Human Deliberative Processes](https://warontherocks.com/2019/07/i-black-box-explainable-artificial-intelligence-and-the-limits-of-human-deliberative-processes/); Much has been made about the importance of understanding the inner workings of machines when it comes to the ethics of using artificial intelligence (AI) on the battlefield. Delegates at the Group of Governmental Experts meetings on lethal autonomous weapons continue to raise the issue. Concerns expressed by legal and scientific scholars abound. One commentator sums it up: “for human decision makers to be able to retain agency over the morally relevant decisions made with AI they would need a clear insight into the AI black box, to understand the data, its provenance and the logic of its algorithms.”
* [Teaching AI, Ethics, Law and Policy](https://arxiv.org/abs/1904.12470); Asher Wilk; Cyberspace and the development of intelligent systems using Artificial Intelligence (AI) have created new challenges for computer professionals, data scientists, regulators and policy makers. For example, self-driving cars raise new technical, ethical, legal and policy issues. The paper proposes a course, Computers, Ethics, Law, and Public Policy, and suggests a curriculum for it. It presents ethical, legal, and public policy issues relevant to building and using software and artificial intelligence, and describes ethical principles and values relevant to AI systems.
* [An introduction to explainable AI, and why we need it](https://www.kdnuggets.com/2019/04/introduction-explainable-ai.html); Patrick Ferris; I was fortunate enough to attend the Knowledge Discovery and Data Mining (KDD) conference this year. Of the talks I went to, there were two main areas of research that seem to be on a lot of people’s minds. Firstly, finding a meaningful representation of graph structures to feed into neural networks: Oriol Vinyals from DeepMind gave a talk about their Message Passing Neural Networks. The second area, and the focus of this article, is explainable AI models. As we generate newer and more innovative applications for neural networks, the question of ‘How do they work?’ becomes more and more important.
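
The Python article above walks through building interpretable models. As a rough, hypothetical illustration of that idea (not code from the article), the sketch below fits a shallow scikit-learn decision tree, an intrinsically interpretable model, and prints its learned rules as text; the dataset, tree depth, and use of scikit-learn are assumptions made for the example.

```python
# Hypothetical sketch (not from the linked article): a shallow decision tree
# as an intrinsically interpretable model, with its rules printed as text.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# Capping the depth keeps the rule set small enough to read end to end.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)

print("test accuracy:", tree.score(X_test, y_test))
print(export_text(tree, feature_names=list(data.feature_names)))
```

The same pattern, train the model and then read it directly instead of attaching a post-hoc explainer, carries over to other glass-box models such as linear or logistic regression and rule lists.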