I followed the instructions in `CUSTOM_FINETUNE.md` to run LoRA fine-tuning on the Pokémon dataset and encountered an issue when running `bash scripts/train/custom_finetune.sh`. The error message is as follows:
```
Traceback (most recent call last):
  File "/root/llm-project/TinyLLaVA_Factory/tinyllava/train/custom_finetune.py", line 52, in <module>
    train()
  File "/root/llm-project/TinyLLaVA_Factory/tinyllava/train/custom_finetune.py", line 34, in train
    model = training_recipe(model)
  File "/root/llm-project/TinyLLaVA_Factory/tinyllava/training_recipe/base.py", line 14, in __call__
    model = self.training_model_converse(model)
  File "/root/llm-project/TinyLLaVA_Factory/tinyllava/training_recipe/lora_recipe.py", line 46, in training_model_converse
    if model.peft_config is None:
  File "/root/anaconda3/envs/tinyllava_factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'TinyLlavaForConditionalGeneration' object has no attribute 'peft_config'
[2024-12-04 12:09:01,833] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 73410
```
Changing `if model.peft_config is None:` to `if not hasattr(model, 'peft_config') or model.peft_config is None:` resolves the issue, and no anomalies appeared during the subsequent fine-tuning run.
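The root cause is that `torch.nn.Module.__getattr__` raises `AttributeError` for any attribute that was never set, so a freshly loaded model (not yet wrapped by PEFT) cannot be tested with `model.peft_config is None` directly. A minimal sketch of the guarded check, using a plain stand-in class rather than the real `TinyLlavaForConditionalGeneration`:

```python
class TinyLlavaForConditionalGeneration:
    """Stand-in for the real model: a freshly loaded model has no `peft_config`.

    The real class inherits from `torch.nn.Module`, whose `__getattr__`
    raises AttributeError for attributes that were never assigned.
    """
    pass


def needs_peft_wrap(model) -> bool:
    # Guard the attribute access with hasattr() before comparing to None,
    # mirroring the proposed fix in lora_recipe.py.
    return not hasattr(model, "peft_config") or model.peft_config is None


model = TinyLlavaForConditionalGeneration()
assert needs_peft_wrap(model)        # fresh model: attribute missing entirely

model.peft_config = {"default": object()}  # simulate a PEFT-wrapped model
assert not needs_peft_wrap(model)    # config present: skip re-wrapping
```

An equivalent and slightly more compact form is `getattr(model, "peft_config", None) is None`, which also avoids the `AttributeError` from `nn.Module.__getattr__`.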
pspdada added a commit to pspdada/TinyLLaVA_Factory that referenced this issue on Dec 4, 2024.
My bash script is:

My environment details are:

- `transformers` version: 4.40.1