- Harvard University
- Boston, MA
- satyapriyakrishna.com
- @SatyaScribbles
Stars
The Granite Guardian models are designed to detect risks in prompts and responses.
A bibliography and survey of the papers surrounding o1
What would you do with 1000 H100s...
Welcome to the LLMs Interview Prep Guide! This GitHub repository offers a curated set of interview questions and answers tailored for Data Scientists. Enhance your understanding of Large Language M…
Ongoing research project for continual pre-training of LLMs (dense mode)
OpenXAI : Towards a Transparent Evaluation of Model Explanations
MediaWiki API wrapper in python http://pymediawiki.readthedocs.io/en/latest/
This repository collects all relevant resources about interpretability in LLMs
Interpretability for sequence generation models
Collection of my assignments and work in the class MATH51 at Stanford
A curated list of Large Language Model (LLM) Interpretability resources.
Reasoning in Large Language Models: Papers and Resources, including Chain-of-Thought and OpenAI o1
A guidance language for controlling large language models.
[NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors
Data and code for the Corr2Cause paper (ICLR 2024)
tiktoken is a fast BPE tokeniser for use with OpenAI's models.
A benchmark to evaluate language models on questions I've previously asked them to solve.
Repo with papers related to Multi-lingual LLMs
Run Mixtral-8x7B models in Colab or on consumer desktops
Representation Engineering: A Top-Down Approach to AI Transparency
A tiling window manager for macOS based on binary space partitioning
The hub for EleutherAI's work on interpretability and learning dynamics