[Fix] handle prompt length for multi-GPU #87

Merged: 10 commits, Dec 9, 2024
👍 changed the import source of models and added comments.
hppRC committed Dec 9, 2024
commit f177e60436fc7edb8739086a3102d837462d7d7b
7 changes: 5 additions & 2 deletions src/jmteb/embedders/data_parallel_sbert_embedder.py
@@ -7,7 +7,8 @@
 import torch
 from accelerate.utils import find_executable_batch_size
 from loguru import logger
-from sentence_transformers import SentenceTransformer, models
+from sentence_transformers import SentenceTransformer
+from sentence_transformers.models import Pooling
 from sentence_transformers.quantization import quantize_embeddings
 from sentence_transformers.util import truncate_embeddings
 from torch import Tensor
@@ -166,9 +167,11 @@ def encode(

         return all_embeddings
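For context on the PR's goal of handling prompt length: when a Pooling module has `include_prompt=False`, the prompt tokens must be masked out before mean pooling so that only the actual input tokens contribute to the sentence embedding. The sketch below illustrates that idea with NumPy only; the function name, shapes, and `prompt_length` parameter are hypothetical and are not the embedder's actual API.

```python
import numpy as np

# Illustrative sketch only: mask out the first `prompt_length` tokens,
# then mean-pool the remaining token embeddings.
def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray,
              prompt_length: int = 0) -> np.ndarray:
    mask = attention_mask.astype(float).copy()
    if prompt_length:
        mask[:, :prompt_length] = 0.0  # exclude prompt tokens from pooling
    denom = np.clip(mask.sum(axis=1, keepdims=True), 1e-9, None)
    return (token_embeddings * mask[:, :, None]).sum(axis=1) / denom

emb = np.arange(12, dtype=float).reshape(1, 4, 3)  # (batch, seq_len, dim)
mask = np.ones((1, 4))
pooled = mean_pool(emb, mask, prompt_length=2)  # mean of the last two tokens
```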

+    # Implemented with reference to Sentence Transformers' `include_prompt` check method
+    # ref: https://github.com/UKPLab/sentence-transformers/blob/679ab5d38e4cf9cd73d4dcf1cda25ba2ef1ad837/sentence_transformers/trainer.py#L931
     def include_prompt_for_pooling(self) -> bool:
         for module in self:
-            if isinstance(module, models.Pooling):
+            if isinstance(module, Pooling):
                 return module.include_prompt
         return True
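The helper above iterates over the model's modules and returns the first Pooling module's `include_prompt` flag, defaulting to `True` when no Pooling module is present. A minimal self-contained sketch of that pattern, using stand-in classes rather than the real `sentence_transformers` ones:

```python
# Stand-in classes for illustration; not the real sentence_transformers modules.
class Pooling:
    def __init__(self, include_prompt: bool = True):
        self.include_prompt = include_prompt

class Dense:
    pass

def include_prompt_for_pooling(modules) -> bool:
    # Return the first Pooling module's flag; default to True if none is found.
    for module in modules:
        if isinstance(module, Pooling):
            return module.include_prompt
    return True
```

A `SentenceTransformer` is itself iterable over its modules, which is why the method can write `for module in self`.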
