Stars
Local explanations with uncertainty 💐!
GLIME is a post-hoc explanation method shown to be substantially more stable and faithful than LIME (a minimal LIME usage sketch appears after this list).
Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP)
SMAC3: A Versatile Bayesian Optimization Package for Hyperparameter Optimization (a minimal usage sketch appears after this list)
datasciencejeff / shapr
Forked from NorskRegnesentral/shapr: Explaining the output of machine learning models with more accurately estimated Shapley values
Bayesian Adaptive Spline Surfaces for flexible and automatic regression
A clean and robust PyTorch implementation of PPO on a discrete action space
A collection of research materials on explainable AI/ML
IIB Master's Project: Deep Learning for Koopman Optimal Predictive Control
Explaining the output of machine learning models with more accurately estimated Shapley values
Learning how to implement GA and NSGA-II for the job shop scheduling problem in Python (a minimal GA sketch appears after this list)
Data Science in Manufacturing 製造資料科學
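For the GLIME entry above, the sketch below shows a minimal local explanation with the standard `lime` package; GLIME's own API may differ, and the scikit-learn model and dataset are illustrative choices only, not part of that repository.

```python
# Minimal local explanation with the standard `lime` package (illustrative only;
# GLIME's own interface may differ from this).
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one prediction: fits a local surrogate and returns (feature, weight) pairs.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())
```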
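For the SMAC3 entry, the sketch below shows a minimal hyperparameter-optimization loop, assuming the SMAC3 2.x facade API; the quadratic objective, the bound on `x`, and the trial budget are placeholder choices standing in for a real training/validation loss.

```python
# Minimal SMAC3 sketch, assuming the 2.x facade API; the objective, bounds,
# and trial budget are placeholders for a real training/validation loss.
from ConfigSpace import Configuration, ConfigurationSpace, Float
from smac import HyperparameterOptimizationFacade, Scenario


def objective(config: Configuration, seed: int = 0) -> float:
    x = config["x"]
    return (x - 0.3) ** 2  # lower is better


cs = ConfigurationSpace(seed=0)
cs.add_hyperparameters([Float("x", (0.0, 1.0))])

scenario = Scenario(cs, deterministic=True, n_trials=50)
smac = HyperparameterOptimizationFacade(scenario, objective)
incumbent = smac.optimize()
print(incumbent)
```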
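For the job-shop GA entry, the sketch below is a minimal, self-contained genetic algorithm with an operation-based encoding; it is not taken from that repository, and the toy instance, operators, and parameters are all illustrative.

```python
# Minimal GA for job shop scheduling with operation-based encoding (illustrative
# sketch only; the instance, operators, and parameters are toy choices).
import random

# Each job is an ordered list of (machine, processing_time) operations.
JOBS = [
    [(0, 3), (1, 2), (2, 2)],
    [(0, 2), (2, 1), (1, 4)],
    [(1, 4), (2, 3), (0, 1)],
]
N_MACHINES = 3


def decode(chrom):
    """Chrom lists job ids, one per operation; schedule greedily and return makespan."""
    next_op = [0] * len(JOBS)
    job_ready = [0] * len(JOBS)
    mach_ready = [0] * N_MACHINES
    for j in chrom:
        m, d = JOBS[j][next_op[j]]
        start = max(job_ready[j], mach_ready[m])
        job_ready[j] = mach_ready[m] = start + d
        next_op[j] += 1
    return max(job_ready)


def random_chrom():
    chrom = [j for j, ops in enumerate(JOBS) for _ in ops]
    random.shuffle(chrom)
    return chrom


def crossover(p1, p2):
    """Job-based crossover: keep a random subset of jobs in p1's positions,
    fill the remaining slots with p2's genes in order (counts are preserved)."""
    keep = {j for j in range(len(JOBS)) if random.random() < 0.5}
    fill = iter(g for g in p2 if g not in keep)
    return [g if g in keep else next(fill) for g in p1]


def mutate(chrom, rate=0.1):
    chrom = chrom[:]
    if random.random() < rate:
        i, k = random.sample(range(len(chrom)), 2)
        chrom[i], chrom[k] = chrom[k], chrom[i]
    return chrom


def ga(pop_size=30, generations=100):
    pop = [random_chrom() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=decode)                      # lower makespan first
        elite = pop[: pop_size // 2]
        children = [mutate(crossover(*random.sample(elite, 2)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    best = min(pop, key=decode)
    return best, decode(best)


if __name__ == "__main__":
    best, makespan = ga()
    print("best sequence:", best, "makespan:", makespan)
```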