Starred repositories
LLMs interview notes and answers: this repository mainly collects interview questions and reference answers for large language model (LLM) algorithm engineers.
🟣 LLMs interview questions and answers to help you prepare for your next machine learning and data science interview in 2024.
Welcome to the LLMs Interview Prep Guide! This GitHub repository offers a curated set of interview questions and answers tailored for Data Scientists. Enhance your understanding of Large Language M…
⚡ InstaFlow! One-Step Stable Diffusion with Rectified Flow (ICLR 2024)
[ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
C4RepSet: Representative Subset from C4 data for Training Pre-trained LMs
Code implementation of our NeurIPS 2023 paper: Vocabulary-free Image Classification
Code for "Sequence Level Contrastive Learning for Text Summarization"
An (unofficial) implementation of Focal Loss, as described in the RetinaNet paper, generalized to the multi-class case.
PyTorch implementation of MAE https://arxiv.org/abs/2111.06377
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
An open source implementation of CLIP.
Code for our CVPR 2022 paper 'Generalized Category Discovery'. Project page: https://www.robots.ox.ac.uk/~vgg/research/gcd/
SimVLM: Simple Visual Language Model Pretraining with Weak Supervision
Simple image captioning model
Code for "Open Vocabulary Extreme Classification Using Generative Models"
Test-time Prompt Tuning (TPT) for zero-shot generalization in vision-language models (NeurIPS 2022)
Prefix-Tuning: Optimizing Continuous Prompts for Generation
Implementation of the deepmind Flamingo vision-language model, based on Hugging Face language models and ready for training
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
This repository is the official implementation of the AAAI 2022 paper "Zero-Shot Out-of-Distribution Detection Based on the Pre-trained Model CLIP"