
add bash script to install packages #245

Open · wants to merge 8 commits into main
Conversation


@tohrnii tohrnii commented Mar 14, 2024

This PR adds a simple bash script to install unsloth and its dependencies.

@tohrnii tohrnii force-pushed the bash branch 3 times, most recently from 6cf86f3 to 7b880f1 Compare March 14, 2024 06:31
@tohrnii tohrnii marked this pull request as draft March 14, 2024 08:48

tohrnii commented Mar 14, 2024

Seems like Colab updated the torch version, so I'm putting this PR in draft mode for now.

@danielhanchen (Contributor) commented:

OOO I like this bash script!! Ye I'm working on fixing Colab for now :(


tohrnii commented Mar 16, 2024

@danielhanchen I checked, and this script works with conda and pip for torch versions before 2.2.1. What changes would be required to support 2.2.1? If the fix is stable, maybe the instructions can be added to the README as well.

@danielhanchen (Contributor) commented:

@tohrnii It's mainly torch 2.2.1 causing issues for multiple packages. I found the following to work:

# RTX 3090, 4090 Ampere GPUs:
pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
pip install --no-deps packaging ninja einops flash-attn xformers trl peft accelerate bitsandbytes

# Pre-Ampere RTX 2080, T4, GTX 1080 GPUs:
pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
pip install --no-deps xformers trl peft accelerate bitsandbytes
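The two install paths above differ only in the extra `--no-deps` packages, so a bash script can branch on the GPU's compute-capability major version (Ampere is 8.x, pre-Ampere is below 8). A minimal sketch; `select_extra_pkgs` is a hypothetical helper name, not part of this PR:

```shell
#!/usr/bin/env bash
# Sketch: choose the --no-deps extras by compute-capability major version.
# Ampere and newer (major >= 8) additionally get packaging/ninja/einops/flash-attn.
select_extra_pkgs() {
    local major="$1"
    if [ "$major" -ge 8 ]; then
        echo "packaging ninja einops flash-attn xformers trl peft accelerate bitsandbytes"
    else
        echo "xformers trl peft accelerate bitsandbytes"
    fi
}

# Usage sketch:
# pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
# pip install --no-deps $(select_extra_pkgs "$major")
```

Keeping the package lists in one function means the two code paths cannot drift apart as more GPU generations are added.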

@tohrnii tohrnii marked this pull request as ready for review March 20, 2024 19:02

tohrnii commented Mar 20, 2024

Also changed the naming slightly: colab-new -> colab-221.


tohrnii commented Mar 22, 2024

@bet0x Thanks, I'm not sure what you mean? Doesn't torch.cuda.get_device_capability()[0] work? Have you tried running this bash script and faced any problems using it?
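For reference, the `torch.cuda.get_device_capability()[0]` check mentioned here can be read from bash by shelling out to Python. A sketch, assuming `python3` is on the PATH; the fallback to 0 when torch or CUDA is unavailable is my addition, not part of this PR:

```shell
# Sketch: read the GPU compute-capability major version via torch.
# Falls back to 0 if python3, torch, or a CUDA device is unavailable.
major="$(python3 -c 'import torch; print(torch.cuda.get_device_capability()[0])' 2>/dev/null || echo 0)"
echo "compute capability major: $major"
```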


bet0x commented Mar 22, 2024

@tohrnii

Never mind, I got confused with another PR. I was also verifying this and working toward replacing the NVCC detection and related logic.

3 participants