Issues: predibase/lorax

Issues list

#57 Project Roadmap [enhancement], opened Nov 22, 2023 by tgaddair (16 of 36 tasks)
#84 Does lorax currently support GPT2 finetuned adapters? [enhancement], opened Nov 30, 2023 by abhijithnair1 (2 of 4 tasks)
#266 Error while running the pre-built container using Podman [question], opened Feb 21, 2024 by chaser06
#312 Sample command with mistral-7b failed [question], opened Mar 7, 2024 by hayleyhu (2 of 4 tasks)
#90 how does this differ from s-Lora? [question], opened Nov 30, 2023 by priyankat99
#151 Support custom tokenizer when loading a local model [bug], opened Dec 25, 2023 by yinjiaoyuan
#523 Fails hard on CUDA error, opened Jun 22, 2024 by yunmanger1 (2 of 4 tasks)
#442 Llama3-8b-Instruct won't stop generating [bug], opened Apr 27, 2024 by ekim322 (4 tasks)
#347 Using Source = Local for Base Model [enhancement], opened Mar 20, 2024 by silveranalytics
#150 Second GPU is not found when running --sharded true [question], opened Dec 24, 2023 by psych0v0yager (2 of 4 tasks)
#417 Can't run Mistral quantized on T4 [enhancement], opened Apr 16, 2024 by emillykkejensen (2 of 4 tasks)
#329 Want Lorax with newer version of TGI [question], opened Mar 14, 2024 by yangelaboy
#283 Issue using adapter with large prompt + sharded [bug], opened Feb 26, 2024 by tgaddair
#116 Latency increase when run on multi-GPU [question], opened Dec 8, 2023 by prd-tuong-nguyen (2 of 4 tasks)
#115 Some error records and questions [question], opened Dec 8, 2023 by KrisWongz (1 of 4 tasks)
#637 Phi 3.5 vision (4B model) [enhancement], opened Oct 8, 2024 by CheeseAndMeat (2 tasks done)
#489 Quickstart example not working, opened May 23, 2024 by jmorenobl (2 of 4 tasks)
#484 make install insufficient for running llama3-8B-Instruct [documentation], opened May 22, 2024 by fozziethebeat (2 of 4 tasks)