Insights: pytorch/xla
Overview
21 Pull requests merged by 11 people
- Introduce HLO graph bindings (#8564, merged Jan 14, 2025)
- pin update (#8559, merged Jan 13, 2025)
- Set gradient_as_bucket_view=True in test_train_mp_imagenet (#8558, merged Jan 13, 2025)
- fix batch_norm amp autocast (#8556, merged Jan 13, 2025)
- [scan] Test we don't recompile under debugging env flags (#8555, merged Jan 13, 2025)
- Metadata agnostic user computation hash (#8557, merged Jan 11, 2025)
- [Computation Hash] Introduce deterministic hash for user computations (#8554, merged Jan 11, 2025)
- Introduce HLO graph bindings (#8551, merged Jan 11, 2025)
- Always run TPU tests on new PRs (#8552, merged Jan 10, 2025)
- Metadata agnostic user computation hash (#8550, merged Jan 10, 2025)
- Lower cummax op (#8491, merged Jan 10, 2025)
- [torch_xla2] Fix reenabled op info tests (#8548, merged Jan 10, 2025)
- [Computation Hash] Introduce deterministic hash for user computations (#8539, merged Jan 10, 2025)
- Revert "fix batch_norm amp autocast" (#8547, merged Jan 9, 2025)
- Fix typos in pytorch on xla docs (#8543, merged Jan 9, 2025)
- fix batch_norm amp autocast (#8498, merged Jan 8, 2025)
- doc change 1 (#8544, merged Jan 8, 2025)
- [torch_xla2] Update test_ops.py (#8403, merged Jan 7, 2025)
- fix ab in flash attention (#8540, merged Jan 7, 2025)
- Fully revamp aten embedding bag for JAX backend (#8535, merged Jan 7, 2025)
- pin update (#8536, merged Jan 7, 2025)
4 Pull requests opened by 3 people
- Improve and refine MLP tests for extensibility and A/B testing (#8561, opened Jan 13, 2025)
- [scan] Reduce memory usage (#8562, opened Jan 14, 2025)
- Cherrypick #8521 into r2.6 (#8563, opened Jan 14, 2025)
- Lower cummin op (#8565, opened Jan 14, 2025)
6 Issues closed by 5 people
- [Computation Hash] User Computation hash is agnostic to debug metadata (#8538, closed Jan 10, 2025)
- Lower cummax (#8230, closed Jan 10, 2025)
- [Computation Hash] User Computation hash disregards protobuf requirements (#8537, closed Jan 10, 2025)
- Export training model to StableHlo (#8366, closed Jan 9, 2025)
- AMP BF16 issue with batch norm layer (#8496, closed Jan 8, 2025)
- Input tensor is not an XLA tensor on AWS Trainium instance (#8510, closed Jan 8, 2025)
5 Issues opened by 5 people
- Multigpu training hangs using single and multiple nodes (#8549, opened Jan 10, 2025)
- [Torch_xla2] `torch_xla2.default_env()` guard doesn't enforce XLATensor2 (#8546, opened Jan 9, 2025)
- [Q][GPU][BF16] torch.mul is lowered to HLO as an f32 multiply (#8545, opened Jan 8, 2025)
- [RFC] Add HuggingFace tests with pinned dependencies to CI (#8542, opened Jan 7, 2025)
9 Unresolved conversations
Sometimes conversations happen on old items that aren't yet closed. Here is a list of all the Issues and Pull Requests with unresolved conversations.
- Lowering Aten op to composite op instead of small ops (#8502, commented on Jan 10, 2025 • 8 new comments)
- Implement torch.linspace and torch.logspace (#8533, commented on Jan 8, 2025 • 6 new comments)
- 2 questions for the composite op feature (#8486, commented on Jan 9, 2025 • 0 new comments)
- Slow XLA training performance. (#8541, commented on Jan 9, 2025 • 0 new comments)
- Build slowdown issue (#3569, commented on Jan 9, 2025 • 0 new comments)
- PyTorch ResNet18 GPU Training Failed on Colab (#3403, commented on Jan 9, 2025 • 0 new comments)
- PyTorch toy model fails to execute on DTensor constructed from "xla" type device mesh (#8534, commented on Jan 10, 2025 • 0 new comments)
- PyTorch DTensor device mesh interface with device type "xla" fails at `get_rank()` (#8528, commented on Jan 12, 2025 • 0 new comments)
- 2.6 backport PR request list (#8455, commented on Jan 14, 2025 • 0 new comments)