- Stanford CS PhD
- Stanford, CA 94305
- https://cs.stanford.edu/~jiaxuan/
Stars
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
FastAPI framework, high performance, easy to learn, fast to code, ready for production
A practical interactive interface for LLMs such as GPT/GLM, specially optimized for paper reading/polishing/writing. Modular design with custom shortcut buttons & function plugins; supports analysis & self-translation of Python, C++, and other projects; PDF/LaTeX paper translation & summarization; parallel queries to multiple LLM models; local models such as chatglm3. Integrates Tongyi Qianwen, deepseekcoder, iFlytek Spark, ERNIE Bot, llama2, rwkv, claude2, m…
The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface.
The simplest, fastest repository for training/finetuning medium-sized GPTs.
LlamaIndex is the leading framework for building LLM-powered agents over your data.
Deep Learning papers reading roadmap for anyone who is eager to learn this amazing tech!
TensorFlow code and pre-trained models for BERT
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
A toolkit for developing and comparing reinforcement learning algorithms.
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
Code and documentation to train Stanford's Alpaca models, and generate the data.
Pretrain, finetune ANY AI model of ANY size on multiple GPUs, TPUs with zero code changes.
Graph Neural Network Library for PyTorch
Fully open reproduction of DeepSeek-R1
Fast and memory-efficient exact attention
Python package built to ease deep learning on graphs, on top of existing DL frameworks.
Open source code for AlphaFold 2.
20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.
Ongoing research training transformer models at scale
Clean, minimal, accessible reproduction of DeepSeek R1-Zero
TensorBoard for PyTorch (and Chainer, MXNet, NumPy, ...)
Collection of generative models, e.g. GAN, VAE, in PyTorch and TensorFlow.
🚴 Call stack profiler for Python. Shows you why your code is slow!
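The JAX entry above highlights its composable transformations (differentiate, vectorize, JIT-compile). As a minimal illustration of how those compose, here is a sketch assuming JAX is installed; the toy function `loss` is an invented example, not taken from any repository listed here:

```python
# Minimal sketch of JAX's composable transformations: grad, vmap, jit.
import jax
import jax.numpy as jnp

def loss(x):
    # Toy scalar function standing in for a real model's loss (assumption).
    return x ** 2

# Differentiate: jax.grad returns a new function computing d(loss)/dx.
dloss = jax.grad(loss)

# Vectorize: jax.vmap maps the gradient over a batch without an explicit loop.
batched_dloss = jax.vmap(dloss)

# JIT: jax.jit traces the function once and compiles it with XLA for GPU/TPU.
fast_batched_dloss = jax.jit(batched_dloss)

xs = jnp.array([1.0, 2.0, 3.0])
grads = fast_batched_dloss(xs)  # gradient of x**2 is 2*x for each element
```

The transformations are ordinary higher-order functions, so they nest in any order; this composability is what the one-line description above is referring to.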