- SAIL Lab, University of New Haven
- @upadhayay_bibek
- https://scholar.google.com/citations?user=lo-RWCgAAAAJ&hl=en
Stars
Training Sparse Autoencoders on Language Models
Large Concept Models: Language modeling in a sentence representation space
Unofficial implementation of "Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection"
cognitive-overload-attack
Effortlessly run LLM backends, APIs, frontends, and services with one command.
Transformer Explained Visually: Learn How LLM Transformer Models Work with Interactive Visualization
Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps"
A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide researchers, practitioners, and enthusiasts with insights i…
RevLLM -- Reverse Engineering Tools for Large Language Models
iBibek / ascii_art
Forked from sepandhaghighi/art
🎨 ASCII art library for Python
Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers".
iBibek / attention-visualization
Forked from mattneary/attention
Visualizing attention for LLM users
iBibek / annotated_diffusion_pytorch
Forked from huggingface/blog
Public repo for HF blog posts
iBibek / alpaca-lora
Forked from tloen/alpaca-lora
Instruct-tune LLaMA on consumer hardware
The project is built on Google Colaboratory using Python. The scripts extract the first five feeds from the website and convert them to audio files using gTTS.
LLM experiments done during SERI MATS - focusing on activation steering / interpreting activation spaces
Science Parse parses scientific papers (in PDF form) and returns them in structured form.
LLaMA 2 implemented from scratch in PyTorch
A fast inference library for running LLMs locally on modern consumer-class GPUs
A high-throughput and memory-efficient inference and serving engine for LLMs
An easy-to-use LLMs quantization package with user-friendly apis, based on GPTQ algorithm.