Starred repositories
A virtual environment for developing and evaluating automated scientific discovery agents.
GitHub Copilot extension for JupyterLab
Generating figures from research papers, using textual captions from the paper.
LangXAI: Integrating Large Vision Models for Generating Textual Explanations to Enhance Explainability in Visual Perception Tasks
Official Repository for the Endoscapes Dataset for Surgical Scene Segmentation, Object Detection, and Critical View of Safety Assessment
A repository for surgical action triplet dataset. Data are videos of laparoscopic cholecystectomy that have been annotated with <instrument, verb, target> labels for every surgical fine-grained act…
Learning multi-modal representations by watching hundreds of surgical video lectures
Automatic Ontology and Knowledge Graph construction with LLM
Model interpretability and understanding for PyTorch
NeurIPS 2024 (spotlight): A Textbook Remedy for Domain Shifts: Knowledge Priors for Medical Image Analysis
Interpretability for sequence generation models 🐛 🔍
Layer-Wise Relevance Propagation for Large Language Models and Vision Transformers [ICML 2024]
A Survey of Attributions for Large Language Models
A small library for automatic adjustment of text positions in matplotlib plots to minimize overlaps.
Most popular metrics used to evaluate object detection algorithms.
A unified framework of perturbation and gradient-based attribution methods for Deep Neural Networks interpretability. DeepExplain also includes support for Shapley Values sampling. (ICLR 2018)
Word Mover's Distance from Matthew J Kusner's paper "From Word Embeddings to Document Distances"
[EMNLP 2023] Enabling Large Language Models to Generate Text with Citations. Paper: https://arxiv.org/abs/2305.14627
The official Python library for the OpenAI API
(TPAMI 2022) The ImageNet-S benchmark/method for large-scale unsupervised/semi-supervised semantic segmentation.
Implementation of the Integrated Directional Gradients method for Deep Neural Network model explanations.
Code for paper "Search Methods for Sufficient, Socially-Aligned Feature Importance Explanations with In-Distribution Counterfactuals"
Official Code Repo for the Paper: "How does This Interaction Affect Me? Interpretable Attribution for Feature Interactions", In NeurIPS 2020