Issues: predibase/lorax
- #57 Project Roadmap (enhancement) - opened Nov 22, 2023 by tgaddair (16 of 36 tasks)
- #101 Is there any plan to support dynamic lora for qwen/chatglm models? (enhancement) - opened Dec 5, 2023 by KrisWongz
- #84 Does lorax currently support GPT2 finetuned adapters? (enhancement) - opened Nov 30, 2023 by abhijithnair1 (2 of 4 tasks)
- #266 Error while running the pre-built container using Podman (question) - opened Feb 21, 2024 by chaser06
- #312 Sample command with mistral-7b failed (question) - opened Mar 7, 2024 by hayleyhu (2 of 4 tasks)
- #90 How does this differ from S-LoRA? (question) - opened Nov 30, 2023 by priyankat99
- #151 Support custom tokenizer when loading a local model (bug) - opened Dec 25, 2023 by yinjiaoyuan
- #310 decapoda-research/llama-13b-hf is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models' (question) - opened Mar 7, 2024 by sleepwalker2017
- #442 Llama3-8b-Instruct won't stop generating (bug) - opened Apr 27, 2024 by ekim322 (4 tasks)
- #347 Using Source = Local for Base Model (enhancement) - opened Mar 20, 2024 by silveranalytics
- #150 Second GPU is not found when running --sharded true (question) - opened Dec 24, 2023 by psych0v0yager (2 of 4 tasks)
- #722 Quantization appears to be broken, at least for AWQ and BnB - opened Dec 21, 2024 by codybum (2 of 4 tasks)
- #417 Can't run Mistral quantized on T4 (enhancement) - opened Apr 16, 2024 by emillykkejensen (2 of 4 tasks)
- #329 Want Lorax with newer version of TGI (question) - opened Mar 14, 2024 by yangelaboy
- #283 Issue using adapter with large prompt + sharded (bug) - opened Feb 26, 2024 by tgaddair
- #235 Error deploying GPTQ models to SageMaker - opened Feb 11, 2024 by GlacierPurpleBison (2 of 4 tasks)
- #172 ValueError: Adapter '/data/llama2-lora' is not compatible with model '/data/Llama-2-7b-chat-hf'. Use --model-id '/new-model/llama2-7b/Llama-2-7b-chat-hf' instead. (question) - opened Jan 10, 2024 by Senna1960321 (2 of 4 tasks)
- #116 Latency increase when run on multi-GPU (question) - opened Dec 8, 2023 by prd-tuong-nguyen (2 of 4 tasks)
- #115 Some error records and questions (question) - opened Dec 8, 2023 by KrisWongz (1 of 4 tasks)
- #637 Phi 3.5 vision (4B model) (enhancement) - opened Oct 8, 2024 by CheeseAndMeat (2 tasks done)
- #484 make install insufficient for running llama3-8B-Instruct (documentation) - opened May 22, 2024 by fozziethebeat (2 of 4 tasks)
- #479 When caching adapters, cache the adapter ID + the API token pair (enhancement, good first issue) - opened May 20, 2024 by noyoshi