Issues: pytorch/xla
[Torch_xla2] torch_xla2.default_env() guard doesn't enforce XLATensor2 (label: torchxla2)
#8546 opened Jan 9, 2025 by zpcore
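
For #8546, a minimal, hedged sketch of the pattern the title describes: entering the torch_xla2.default_env() guard and checking which tensor type ops produce inside it. The context-manager usage and the XLATensor2 class name are taken from the issue title and may not match every torch_xla2 nightly.

    # Hedged sketch for #8546: whether tensors produced under the
    # default_env() guard come back as torch_xla2's XLATensor2.
    # API details follow the issue title and may differ across nightlies.
    import torch
    import torch_xla2

    env = torch_xla2.default_env()

    with env:  # the "guard" referenced in the title
        a = torch.randn(4, 4)
        b = torch.randn(4, 4)
        c = a * b
        # Per the report, these are not guaranteed to be XLATensor2 instances.
        print(type(a), type(c))
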
[Q][GPU][BF16] torch.mul is lowered to HLO as an f32 multiply
#8545 opened Jan 8, 2025 by apivovarov
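
For #8545, a hedged repro sketch of how one might inspect the HLO that a bf16 torch.mul lowers to. torch_xla._XLAC._get_xla_tensors_hlo is an internal debugging helper, so its availability and output format may vary by release.

    # Hedged sketch for #8545: dump the HLO for a bf16 multiply and look
    # for f32 ops. Uses an internal torch_xla debug helper that may change.
    import torch
    import torch_xla
    import torch_xla.core.xla_model as xm

    device = xm.xla_device()
    a = torch.randn(8, 8, dtype=torch.bfloat16, device=device)
    b = torch.randn(8, 8, dtype=torch.bfloat16, device=device)
    c = torch.mul(a, b)

    # Print the pending HLO for c; the question is why the multiply shows
    # up as f32 rather than bf16 when running on GPU.
    print(torch_xla._XLAC._get_xla_tensors_hlo([c]))
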
multi_queries_paged_attention_kernel fails with Llama3 70B on a TPU v4-16 with a sequence length of 256
#8515 opened Dec 21, 2024 by OhadRubin
Program hangs/gets stuck after using F.interpolate in the VAE decode step of the HunyuanVideo model
#8470 opened Dec 9, 2024 by radna0
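
For #8470, a hedged sketch of the kind of call the report points at: F.interpolate on an XLA device followed by a graph materialization step. The shape, mode, and scale factor below are placeholders, not taken from the issue.

    # Hedged sketch for #8470: F.interpolate on an XLA device, as in a
    # VAE decoder upsampling stage. Shapes are illustrative only.
    import torch
    import torch.nn.functional as F
    import torch_xla.core.xla_model as xm

    device = xm.xla_device()
    x = torch.randn(1, 16, 8, 32, 32, device=device)  # hypothetical 5D latent
    y = F.interpolate(x, scale_factor=2.0, mode="nearest")
    xm.mark_step()  # force execution; the report describes a hang around this point
    print(y.shape)
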
TPU memory use increased significantly in torch/xla 2.6.0.dev20241107
#8423 opened Nov 27, 2024 by dudulightricks