- University of Oklahoma
- Norman, Oklahoma
- https://mlciv.com
- in/jiecao-mlciv
Stars
An Open-Ended Embodied Agent with Large Language Models
AppAgent: Multimodal Agents as Smartphone Users, an LLM-based multimodal agent framework designed to operate smartphone apps.
An easy-to-use, scalable, high-performance RLHF framework based on Ray (PPO, GRPO, REINFORCE++, LoRA, vLLM, RFT)
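At the core of PPO-style RLHF training is the clipped-surrogate policy objective. The sketch below is a minimal plain-Python illustration of that objective (scalars instead of tensors, no value or KL terms), not the framework's actual implementation:

```python
import math

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Clipped-surrogate PPO policy loss, averaged over tokens.

    ratio = exp(logp_new - logp_old). Taking the minimum of the
    unclipped and clipped terms means updates that push the ratio
    outside [1 - eps, 1 + eps] in the advantage's favour earn no
    additional objective improvement.
    """
    losses = []
    for ln, lo, adv in zip(logp_new, logp_old, advantages):
        ratio = math.exp(ln - lo)
        unclipped = ratio * adv
        clipped = max(min(ratio, 1 + clip_eps), 1 - clip_eps) * adv
        losses.append(-min(unclipped, clipped))  # negate: we minimize
    return sum(losses) / len(losses)

# When the new and old policies agree (ratio = 1), the loss is just
# the negated mean advantage; a ratio of 2 gets clipped to 1 + eps.
print(ppo_clip_loss([0.0], [0.0], [1.0]))
print(ppo_clip_loss([math.log(2.0)], [0.0], [1.0]))
```

Real trainers compute this per-token over batches of sampled responses and combine it with a KL penalty against the reference model; the clipping logic itself is unchanged.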
A visual tool to interpret and understand PyTorch machine learning models
Official code repo for the O'Reilly Book - "Hands-On Large Language Models"
⌨ Supercharge your Control key: Tap it for Escape. Hold it for Control.
📋 A list of open LLMs available for commercial use.
The dataset contains texts about drug trafficking in Spanish. Note that copyrighted material (for instance, the literature and essay categories) is not provided within the dataset. If you need …
Google Drive Public File Downloader when Curl/Wget Fails
Code, Dataset, and Pretrained Models for Audio and Speech Large Language Model "Listen, Think, and Understand".
DSPy: The framework for programming—not prompting—language models
A lightweight, standalone C++ inference engine for Google's Gemma models.
[ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning
Digital planner for Supernote and ReMarkable // Support Ukraine 🇺🇦 https://savelife.in.ua/en
A curated list of projects, templates or anything interesting related to the Supernote tablet
Awesome-LLM-Prompt-Optimization: a curated list of advanced prompt optimization and tuning methods in Large Language Models
A central, open resource for data and tools related to chain-of-thought reasoning in large language models. Developed @ Samwald research group: https://samwald.info/
[Paper List] Papers integrating knowledge graphs (KGs) and large language models (LLMs)
Transformer-based Multi-Party Conversation Generation using Dialogue Discourse Acts Planning
Datasets and SOTA results for every field of Chinese NLP
Calculate perplexity on a text with pre-trained language models. Supports masked LMs (e.g. DeBERTa), causal LMs (e.g. GPT-3), and encoder-decoder LMs (e.g. Flan-T5).
Classical Chinese Language Understanding Evaluation Benchmark: datasets, baselines, pre-trained models, corpus, and leaderboard
GuwenBERT: A Pre-trained Language Model for Classical Chinese (Literary Chinese)