Stars
XQUIC, a library released by Alibaba, is a cross-platform implementation of the QUIC and HTTP/3 protocols.
Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8.
Lock-free implementation of the token bucket algorithm in C++ (see the token bucket sketch after this list).
This project collects the latest "call for reviewers" links from top CS/ML/AI conferences and journals.
Everything we actually know about the Apple Neural Engine (ANE)
A simple pip-installable Python tool to generate your own HTML citation world map from your Google Scholar ID.
EMP: Edge-assisted Multi-vehicle Perception (MobiCom '21)
Kandinsky 2, a multilingual text2image latent diffusion model.
The code for the paper "Pre-trained Vision-Language Models Learn Discoverable Concepts"
Implementation of Encoder-based Domain Tuning for Fast Personalization of Text-to-Image Models
Official Implementation for "ConceptLab: Creative Generation using Diffusion Prior Constraints"
Diffusion attentive attribution maps for interpreting Stable Diffusion.
Democratizing Internet-scale financial data.
FinGPT: Open-Source Financial Large Language Models. We release the trained model on Hugging Face.
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
Diffusers-Interpret: Model explainability for 🤗 Diffusers. Get explanations for your generated images.
[NeurIPS 2023] Mix-of-Show: Decentralized Low-Rank Adaptation for Multi-Concept Customization of Diffusion Models
Cooperative Contextual Bandit Code from the MayAI Project (see the contextual-bandit sketch after this list).
[ICCV'23] CiteTracker: Correlating Image and Text for Visual Tracking
Just playing with getting VQGAN+CLIP running locally, rather than having to use Colab.
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Modular and structured prompt caching for low-latency LLM inference (see the prefix-cache sketch after this list).
The PyTorch implementation of our CVPR 2023 paper "Conditional Image-to-Video Generation with Latent Flow Diffusion Models"
A ChatGPT (GPT-3.5) & GPT-4 Workload Trace to Optimize LLM Serving Systems
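
The token bucket entry above names a classic rate-limiting algorithm: a bucket holds up to a fixed capacity of tokens, refills at a steady rate, and a request is admitted only if it can pay its token cost. Below is a minimal, single-threaded Python sketch of that refill/consume logic; it is illustrative only and is not the repository's lock-free C++ implementation.

    import time

    class TokenBucket:
        """Minimal single-threaded token bucket (illustrative sketch only)."""

        def __init__(self, rate: float, capacity: float):
            self.rate = rate              # tokens added per second
            self.capacity = capacity      # burst limit
            self.tokens = capacity
            self.last_refill = time.monotonic()

        def allow(self, cost: float = 1.0) -> bool:
            now = time.monotonic()
            # Refill in proportion to elapsed time, capped at capacity.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last_refill) * self.rate)
            self.last_refill = now
            if self.tokens >= cost:
                self.tokens -= cost
                return True
            return False

    # Example: roughly 5 requests/second with bursts of up to 10.
    bucket = TokenBucket(rate=5, capacity=10)
    accepted = sum(bucket.allow() for _ in range(20))
    print(f"accepted {accepted} of 20 immediate requests")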
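
The cooperative contextual bandit entry refers to the setting where, on each round, a learner observes a context vector, chooses an arm, and updates its reward estimate from the observed payoff. The sketch below is a plain (non-cooperative) linear epsilon-greedy learner with a made-up reward signal, meant only to illustrate that loop; it does not reproduce the MayAI project's cooperative algorithm, in which agents share information.

    import numpy as np

    def choose_arm(weights, context, epsilon, rng):
        """Epsilon-greedy arm selection for a linear contextual bandit."""
        if rng.random() < epsilon:
            return int(rng.integers(len(weights)))    # explore
        return int(np.argmax(weights @ context))      # exploit

    rng = np.random.default_rng(0)
    n_arms, n_features = 3, 4
    weights = np.zeros((n_arms, n_features))          # per-arm linear reward model
    lr, epsilon = 0.1, 0.1

    for _ in range(1000):
        context = rng.normal(size=n_features)
        arm = choose_arm(weights, context, epsilon, rng)
        reward = float(context[arm] > 0)              # toy reward signal for the demo
        # SGD update of the chosen arm's reward estimate.
        error = reward - weights[arm] @ context
        weights[arm] += lr * error * context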
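
The prompt caching entry describes reusing precomputed state for prompt segments that repeat across requests (system prompt, few-shot examples, retrieved context), so only the novel suffix needs a prefill pass. The sketch below is a hypothetical LRU prefix cache keyed by ordered segments; the class name, methods, and the placeholder standing in for KV state are assumptions made for illustration, not the repository's API.

    from collections import OrderedDict

    class PrefixCache:
        """Hypothetical LRU cache mapping prompt-segment prefixes to reusable state."""

        def __init__(self, max_entries: int = 128):
            self._store = OrderedDict()
            self.max_entries = max_entries

        def get(self, segments):
            """Return (state for the longest cached prefix, remaining suffix)."""
            for cut in range(len(segments), 0, -1):
                key = tuple(segments[:cut])
                if key in self._store:
                    self._store.move_to_end(key)      # LRU bookkeeping
                    return self._store[key], list(segments[cut:])
            return None, list(segments)

        def put(self, segments, state):
            self._store[tuple(segments)] = state
            if len(self._store) > self.max_entries:
                self._store.popitem(last=False)       # evict least recently used

    cache = PrefixCache()
    cache.put(["system", "few_shot"], state="precomputed KV for the shared prefix")
    hit, suffix = cache.get(["system", "few_shot", "user: hello"])
    print(hit, suffix)   # only the user turn still needs a prefill pass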