Stars
【三年面试五年模拟】An interview handbook for AI algorithm engineers, covering interview and written-test experience plus practical knowledge across the AI industry: AIGC, classical deep learning, autonomous driving, machine learning, computer vision, natural language processing, reinforcement learning, embodied intelligence, the metaverse, AGI, and more.
This repository contains demos I made with the Transformers library by HuggingFace.
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
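The LoRA paper's core idea can be shown in a few lines: keep the pretrained weight matrix W frozen and learn only a low-rank update ΔW = B·A (rank r much smaller than the layer dimensions), scaled by alpha/r. The sketch below is a toy pure-Python illustration of that forward pass, not the actual loralib API; all names here are illustrative.

```python
def lora_forward(x, W, A, B, alpha=2, r=2):
    """Toy LoRA forward pass: h = W @ x + (alpha / r) * B @ (A @ x).

    W is the frozen d-by-k base weight; A (r-by-k) and B (d-by-r) are the
    trainable low-rank factors. Matrices are plain lists of rows; real
    implementations (e.g. loralib) wrap torch.nn.Linear instead.
    """
    scale = alpha / r
    # Frozen base path: W @ x
    base = [sum(W[i][j] * x[j] for j in range(len(x))) for i in range(len(W))]
    # Low-rank path: first A @ x (down-projection to rank r) ...
    Ax = [sum(A[t][j] * x[j] for j in range(len(x))) for t in range(r)]
    # ... then B @ (A @ x) (up-projection back to d)
    delta = [sum(B[i][t] * Ax[t] for t in range(r)) for i in range(len(B))]
    return [base[i] + scale * delta[i] for i in range(len(base))]
```

In the paper, B is initialized to zero, so at the start of training the adapted layer computes exactly the same output as the frozen base layer.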
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
Quick Start for Large Language Models (Theoretical Learning and Practical Fine-tuning)
TexTeller converts images to LaTeX formulas (image2latex, LaTeX OCR) with high accuracy and strong generalization, covering most usage scenarios.
Implementation of Nougat: Neural Optical Understanding for Academic Documents
pix2tex: Using a ViT to convert images of equations into LaTeX code.
Codebase for fine-tuning / evaluating nougat-based image2latex generation models
A plugin developed for ChesireCat AI to detect complex math formulas in any image and reproduce them in LaTeX syntax
Fine-tuning Open-Source LLMs for Adaptive Machine Translation
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
From-scratch implementation of a Vision Transformer, with performance compared against standard CNNs (ResNets) and a pre-trained ViT on CIFAR-10 and CIFAR-100.
ChatGLM3 series: Open Bilingual Chat LLMs
Implementation of Conv-based and ViT-based networks designed for CIFAR.
《开源大模型食用指南》An open-source LLM cookbook tailored for Chinese learners: tutorials on quickly fine-tuning (full-parameter / LoRA) and deploying domestic and international open-source large language models (LLMs) and multimodal models (MLLMs) in a Linux environment
LLM (Large Language Model) FineTuning
LLM Finetuning with peft
《深度学习500问》(500 Questions on Deep Learning) presents, in question-and-answer form, common topics in probability, linear algebra, machine learning, deep learning, computer vision, and other hot areas, to help the author and interested readers. The book has 18 chapters and over 500,000 characters. Given the author's limited expertise, corrections from readers are welcome. Work in progress... For collaboration, contact [email protected]. All rights reserved; infringement will be pursued. Tan 2018.06
This is a test repository.