
Tags: pytorch/torcheval

0.0.6

Bumping to Version 0.0.6 (#124)

Summary:
Pull Request resolved: #124

# TorchEval Version 0.0.6

## Change Log

 - New metrics:
   - AUC
   - Binary, Multiclass, Multilabel AUPRC (also called Average Precision) #108 #109
   - Multilabel Precision Recall Curve #87
   - Recall at Fixed Precision #88 #91
   - Windowed Mean Square Error #72 #86
   - BLEU Score #93 #95
   - Perplexity #90
   - Word Error Rate #97
   - Word Information Loss #111
   - Word Information Preserved #110
 - Features
   - Added Sync for Dictionaries of Metrics #98
   - Improved FLOPS counter #81
   - Improved Module Summary, added forward elapsed times #100 #103 #104 #105 #114
   - AUROC now supports weighted inputs #94
 - Other
   - Improved Documentation #80 #117 #121
   - Added Module Summary to Quickstart #113
   - Updated several unit tests #77 #96 #101 #73
   - Docs Automatically Add New Metrics #118
   - Several Aggregation Metrics now Support fp64 #116 #123
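Several of the new text metrics (Word Error Rate, Word Information Loss, Word Information Preserved) are built on word-level edit distance. As a rough illustration of what Word Error Rate measures, here is a plain-Python sketch of the standard definition (Levenshtein distance over words divided by reference length); this is for intuition only, not TorchEval's implementation:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level Levenshtein distance / number of reference words.
    Plain-Python sketch of the definition, not TorchEval's code."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution
            )
    return dp[len(ref)][len(hyp)] / len(ref)

# One inserted word against a 3-word reference -> 1/3
print(word_error_rate("the cat sat", "the cat sat down"))
```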

### [BETA] Sync Dictionaries of Metrics

We're looking forward to building tooling for metric collections. The first step in that direction is collective syncing of groups of metrics. The example below shows how to sync all of your metrics at once with `sync_and_compute_collection`.

This isn't just a convenience: on the backend, a single torch distributed collective is issued for the entire group of metrics, so the overhead of repeated network calls is kept to a minimum.

```python
import torch
from torcheval.metrics import BinaryAUPRC, BinaryAUROC, BinaryAccuracy
from torcheval.metrics.toolkit import sync_and_compute_collection, reset_metrics

# Collections should be Dict[str, Metric]
train_metrics = {
    "train_auprc": BinaryAUPRC(),
    "train_auroc": BinaryAUROC(),
    "train_accuracy": BinaryAccuracy(),
}

# Hydrate metrics with some random data
preds = torch.rand(size=(100,))
targets = torch.randint(low=0, high=2, size=(100,))

for name, metric in train_metrics.items():
    metric.update(preds, targets)

# Sync the whole group with a single gather
print(sync_and_compute_collection(train_metrics))
# {'train_auprc': tensor(0.5913), 'train_auroc': tensor(0.5161, dtype=torch.float64), 'train_accuracy': tensor(0.5100)}

# Reset all metrics in the collection
reset_metrics(train_metrics.values())
```
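Conceptually, batching the sync means packing every metric's state into one flat buffer, issuing one collective over it, and unpacking on the other side. The single-process sketch below illustrates that pack/unpack idea; the helper names are hypothetical and not part of TorchEval's API, and the collective itself is simulated:

```python
# Sketch of batching many metric states into one buffer so a single
# collective call can sync them all. These helpers are illustrative,
# not TorchEval APIs.

def pack_states(states):
    """Flatten a dict of per-metric state lists into one buffer plus a layout."""
    buffer, layout = [], {}
    for name, values in states.items():
        layout[name] = (len(buffer), len(values))  # (offset, length)
        buffer.extend(values)
    return buffer, layout

def unpack_states(buffer, layout):
    """Recover the per-metric states from the flat buffer."""
    return {name: buffer[start:start + n] for name, (start, n) in layout.items()}

# One "sync" over the packed buffer instead of one per metric.
states = {"auprc": [0.1, 0.2], "accuracy": [0.51]}
buf, layout = pack_states(states)
synced = buf  # in practice: a single all_gather over this buffer
print(unpack_states(synced, layout))
```

The payoff is that the number of collective operations no longer grows with the number of metrics in the collection.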

Be on the lookout for more metric collection code coming in future releases.

## Contributors

We're grateful for our community, which helps us improve TorchEval by highlighting issues and contributing code. The following people contributed patches for this release: Rohit Alekar, lindawangg, Julia Reinspach, jingchi-wang, Ekta Sardana, williamhufb, andreasfloros, Erika Lal, samiwilf

Reviewed By: ananthsub

Differential Revision: D42737308

fbshipit-source-id: dfd852345e1a9f3069ea33b056f5a60a3adde5aa

0.0.5

add the missing metrics to the doc (#78)

Summary:
As title

Pull Request resolved: #78


Reviewed By: ananthsub

Differential Revision: D40821583

Pulled By: ninginthecloud

fbshipit-source-id: 5847bb61e90e3b69a6f5e7907e5183f8b8103b8b