Rahuketu86/Explainable-AI-Workshop
Explainable Machine Learning Models - Workshop

Making sense of opaque and complex models using Python

This repo is home to the code that accompanies the course of the same name on the O'Reilly Learning Platform.

Workshop

AI models are making predictions that affect people’s lives, so ensuring that they’re fair and unbiased must be an industry imperative. One way to ensure fairness is to discover a model’s mispredictions and analyze and fix the underlying causes. Some machine learning methods, like logistic regression and decision trees, are interpretable, but they aren’t highly accurate in their predictions. Others, like boosted trees and deep neural nets, are more accurate, but the logic behind their predictions can’t be clearly identified or explained, making it more difficult to spot and fix bias.

Join this workshop to get the lowdown on commonly used techniques like SHAP values, LIME, partial dependence plots, and more that can help you explain the inexplicable in these models and ensure responsible machine learning. You'll gain an understanding of the intuition behind the techniques and learn how to implement them in Python. Using case studies, you'll discover how to extract the most important features and values behind a model's predictions in order to understand why a particular person has been denied a bank loan or is more susceptible to a heart attack. Finally, you'll examine the vulnerabilities and shortcomings of these methods and discuss the road ahead.
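To give a flavor of the hands-on material, here is a minimal sketch of computing SHAP values and a partial dependence plot in Python, assuming the shap and scikit-learn packages are installed. The synthetic dataset and gradient-boosted model are hypothetical stand-ins for the workshop's case studies, not code from this repo.

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay

# Synthetic binary-classification data as a placeholder for a real case study
# (e.g. loan approval or heart-attack risk).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# SHAP: TreeExplainer computes exact Shapley-value attributions for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Local explanation: per-feature contributions to the first instance's prediction,
# e.g. which features pushed one applicant toward denial.
print(shap_values[0])

# Global explanation: which features matter most across the whole dataset.
shap.summary_plot(shap_values, X)

# Partial dependence: average effect of features 0 and 1 on the model's output.
PartialDependenceDisplay.from_estimator(model, X, features=[0, 1])
```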

Recommended preparation:

Recommended follow-up:

