Updated READMEs for the examples - Batch 1 (rerun-io#5620)
Updated READMEs for the examples:
Detect and Track Objects
Dicom MRI
Face Tracking
Gesture Detection
Human Pose Tracking
LiDAR
Live Camera Edge Detection
Live Depth Sensor


<!--
Open the PR up as a draft until you feel it is ready for a proper
review.

Do not make PRs from your own `main` branch, as that makes it difficult
for reviewers to add their own fixes.

Add any improvements to the branch as new commits to make it easier for
reviewers to follow the progress. All commits will be squashed to a
single commit once the PR is merged into `main`.

Make sure you mention any issues that this PR closes in the description,
as well as any other related issues.

To get an auto-generated PR description you can put "copilot:summary" or
"copilot:walkthrough" anywhere.
-->

### What

### Checklist
* [x] I have read and agree to the [Contributor
Guide](https://github.com/rerun-io/rerun/blob/main/CONTRIBUTING.md) and
the [Code of
Conduct](https://github.com/rerun-io/rerun/blob/main/CODE_OF_CONDUCT.md)
* [x] I've included a screenshot or gif (if applicable)
* [x] I have tested the web demo (if applicable):
* Using newly built examples:
[app.rerun.io](https://app.rerun.io/pr/5620/index.html)
* Using examples from latest `main` build:
[app.rerun.io](https://app.rerun.io/pr/5620/index.html?manifest_url=https://app.rerun.io/version/main/examples_manifest.json)
* Using full set of examples from `nightly` build:
[app.rerun.io](https://app.rerun.io/pr/5620/index.html?manifest_url=https://app.rerun.io/version/nightly/examples_manifest.json)
* [x] The PR title and labels are set such as to maximize their
usefulness for the next release's CHANGELOG
* [x] If applicable, add a new check to the [release
checklist](https://github.com/rerun-io/rerun/blob/main/tests/python/release_checklist)!

- [PR Build Summary](https://build.rerun.io/pr/5620)
- [Docs
preview](https://rerun.io/preview/53229bd36daf2158b782ef62b9d76bd90befbdee/docs)
<!--DOCS-PREVIEW-->
- [Examples
preview](https://rerun.io/preview/53229bd36daf2158b782ef62b9d76bd90befbdee/examples)
<!--EXAMPLES-PREVIEW-->
- [Recent benchmark results](https://build.rerun.io/graphs/crates.html)
- [Wasm size tracking](https://build.rerun.io/graphs/sizes.html)

---------

Co-authored-by: Nikolaus West <[email protected]>
Co-authored-by: Emil Ernerfeldt <[email protected]>
3 people authored Apr 4, 2024
1 parent 211b9f2 commit 6de5e14
Showing 9 changed files with 677 additions and 441 deletions.
`docs/cspell.json` (7 additions, 0 deletions)
```diff
@@ -48,6 +48,8 @@
     "binsearching",
     "binstall",
     "binutils",
+    "blendshape",
+    "blendshapes",
     "Birger",
     "Birkl",
     "booktitle",
@@ -124,6 +126,8 @@
     "ewebsock",
     "extrinsics",
     "farbfeld",
+    "FACEMESH",
+    "facemesh",
     "Farooq",
     "Feichtenhofer",
     "fieldname",
@@ -177,6 +181,7 @@
     "keypointid",
     "keypoints",
     "Kirillov",
+    "klass",
     "kpreid",
     "Landmarker",
     "Larsson",
@@ -329,6 +334,7 @@
     "scipy",
     "scrollwheel",
     "segs",
+    "Segmentations",
     "serde",
     "Shaohui",
     "Shap",
@@ -404,6 +410,7 @@
     "Viktor",
     "virtualenv",
     "visualizability",
+    "voxels",
     "Vizzo",
     "vstack",
     "vsuryamurthy",
```
`examples/python/detect_and_track_objects/README.md` (155 additions, 4 deletions)
<!--[metadata]
title = "Detect and Track Objects"
tags = ["2D", "huggingface", "object-detection", "object-tracking", "opencv"]
description = "Visualize object detection and segmentation using the Huggingface `transformers` library."
description = "Visualize object detection and segmentation using the Huggingface `transformers` library and CSRT from OpenCV."
thumbnail = "https://static.rerun.io/detect-and-track-objects/63d7684ab1504c86a5375cb5db0fc515af433e08/480w.png"
thumbnail_dimensions = [480, 480]
channel = "release"
-->

<picture data-inline-viewer="examples/detect_and_track_objects">
<source media="(max-width: 480px)" srcset="https://static.rerun.io/detect_and_track_objects/59f5b97a8724f9037353409ab3d0b7cb47d1544b/480w.png">
<source media="(max-width: 768px)" srcset="https://static.rerun.io/detect_and_track_objects/59f5b97a8724f9037353409ab3d0b7cb47d1544b/768w.png">
<img src="https://static.rerun.io/detect_and_track_objects/59f5b97a8724f9037353409ab3d0b7cb47d1544b/full.png" alt="">
</picture>

Visualize object detection and segmentation using [Huggingface Transformers](https://huggingface.co/docs/transformers/index) and [CSRT](https://arxiv.org/pdf/1611.08461.pdf) from OpenCV.

# Used Rerun Types
[`Image`](https://www.rerun.io/docs/reference/types/archetypes/image), [`SegmentationImage`](https://www.rerun.io/docs/reference/types/archetypes/segmentation_image), [`AnnotationContext`](https://www.rerun.io/docs/reference/types/archetypes/annotation_context), [`Boxes2D`](https://www.rerun.io/docs/reference/types/archetypes/boxes2d), [`TextLog`](https://www.rerun.io/docs/reference/types/archetypes/text_log)

# Background
In this example, CSRT (Channel and Spatial Reliability Tracker), a tracking API introduced in OpenCV, is employed for object detection and tracking across frames.
Additionally, the example showcases basic object detection and segmentation on a video using the Huggingface transformers library.
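To make the tracking half concrete, below is a minimal sketch of driving OpenCV's CSRT tracker frame by frame. The video path and the initial box are hypothetical placeholders rather than values from the example code:

```python
import cv2

# Hypothetical input video; the example downloads its own test clip.
cap = cv2.VideoCapture("input.mp4")
ok, frame = cap.read()
if not ok:
    raise RuntimeError("could not read the first frame")

# CSRT ships with OpenCV's contrib modules; depending on the build it is
# exposed as cv2.TrackerCSRT_create or cv2.legacy.TrackerCSRT_create.
create_csrt = getattr(cv2, "TrackerCSRT_create", None) or cv2.legacy.TrackerCSRT_create
tracker = create_csrt()
tracker.init(frame, (10, 20, 64, 48))  # initial (x, y, w, h), e.g. from a detector

while True:
    ok, frame = cap.read()
    if not ok:
        break
    success, bbox = tracker.update(frame)  # bbox is the tracked (x, y, w, h)
    if not success:
        break  # track lost; a full pipeline would typically re-run detection
```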


# Logging and Visualizing with Rerun
The visualizations in this example were created with the following Rerun code.


## Timelines
For each processed video frame, all data sent to Rerun is associated with the `frame` [timeline](https://www.rerun.io/docs/concepts/timelines) via the frame index `frame_idx`.

```python
rr.set_time_sequence("frame", frame_idx)
```

## Video
The input video is logged as a sequence of [`Image`](https://www.rerun.io/docs/reference/types/archetypes/image) to the `image` entity.

```python
rr.log(
    "image",
    rr.Image(rgb).compress(jpeg_quality=85)
)
```

Since the detection and segmentation model operates on smaller images, the resized images are logged to the separate `segmentation/rgb_scaled` entity.
This allows us to subsequently visualize the segmentation mask on top of the video.

```python
rr.log(
    "segmentation/rgb_scaled",
    rr.Image(rgb_scaled).compress(jpeg_quality=85)
)
```
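Producing the scaled copy itself is not shown above; a one-line sketch with OpenCV, where the scale factor is an assumed value rather than the one the example computes:

```python
import cv2

scale = 0.5  # assumed factor; the example derives a size suited to the model
rgb_scaled = cv2.resize(rgb, dsize=None, fx=scale, fy=scale)  # "rgb" is the frame logged above
```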

## Segmentations
The segmentation results are logged through a combination of two archetypes.
The segmentation image itself is logged as a
[`SegmentationImage`](https://www.rerun.io/docs/reference/types/archetypes/segmentation_image) and
contains the id for each pixel. It is logged to the `segmentation` entity.


```python
rr.log(
    "segmentation",
    rr.SegmentationImage(mask)
)
```

The color and label for each class are determined by the
[`AnnotationContext`](https://www.rerun.io/docs/reference/types/archetypes/annotation_context), which is
logged to the root entity using `rr.log("/", …, timeless=True)` as it should apply to the whole sequence and all
entities that have a class id.

```python
class_descriptions = [
    rr.AnnotationInfo(id=cat["id"], color=cat["color"], label=cat["name"])
    for cat in coco_categories
]
rr.log(
    "/",
    rr.AnnotationContext(class_descriptions),
    timeless=True
)
```

## Detections
The detections and tracked bounding boxes are visualized by logging the [`Boxes2D`](https://www.rerun.io/docs/reference/types/archetypes/boxes2d) to Rerun.

### Detections
```python
rr.log(
    "segmentation/detections/things",
    rr.Boxes2D(
        array=thing_boxes,
        array_format=rr.Box2DFormat.XYXY,
        class_ids=thing_class_ids,
    ),
)
```

```python
rr.log(
    "segmentation/detections/background",
    rr.Boxes2D(
        array=background_boxes,
        array_format=rr.Box2DFormat.XYXY,
        class_ids=background_class_ids,
    ),
)
```

For more information on the underlying detection model, see the [Huggingface Transformers documentation](https://huggingface.co/docs/transformers/index).

### Tracked bounding boxes
```python
rr.log(
    f"image/tracked/{self.tracking_id}",
    rr.Boxes2D(
        array=self.tracked.bbox_xywh,
        array_format=rr.Box2DFormat.XYWH,
        class_ids=self.tracked.class_id,
    ),
)
```

The color and label of the bounding boxes are determined by their class id, relying on the same
[`AnnotationContext`](https://www.rerun.io/docs/reference/types/archetypes/annotation_context) as the
segmentation images. This ensures that a bounding box and a segmentation image with the same class id will also have the
same color.

Note that it is also possible to log multiple annotation contexts should different colors and/or labels be desired; a sketch follows below.
The annotation context is resolved by seeking up the entity hierarchy.
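For illustration, a hypothetical setup with two annotation contexts, one per subtree, so that entities under different paths resolve to different palettes. The entity paths, ids, labels, and colors below are made up, not taken from the example:

```python
# Boxes logged under "detections/…" resolve to this context …
rr.log(
    "detections",
    rr.AnnotationContext([rr.AnnotationInfo(id=1, label="car", color=(255, 0, 0))]),
    timeless=True,
)
# … while segmentation images under "masks/…" resolve to this one.
rr.log(
    "masks",
    rr.AnnotationContext([rr.AnnotationInfo(id=1, label="car", color=(0, 255, 0))]),
    timeless=True,
)
```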

## Text Log
Rerun integrates with the [Python logging module](https://docs.python.org/3/library/logging.html).
Through the [`TextLog`](https://www.rerun.io/docs/reference/types/archetypes/text_log#textlogintegration) integration, text at different importance levels can be logged. After the initial setup described in the
[`TextLog` documentation](https://www.rerun.io/docs/reference/types/archetypes/text_log#textlogintegration), statements
such as `logging.info("…")`, `logging.debug("…")`, etc. will show up in the Rerun viewer.

```python
def setup_logging() -> None:
    logger = logging.getLogger()
    rerun_handler = rr.LoggingHandler("logs")
    rerun_handler.setLevel(-1)
    logger.addHandler(rerun_handler)


def main() -> None:
    # … existing code …
    setup_logging()  # setup logging
    track_objects(video_path, max_frame_count=args.max_frame)  # start tracking
```
In the viewer you can adjust the filter level and look at the messages time-synchronized with respect to other logged data.
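Once the handler is installed, ordinary `logging` calls need no Rerun-specific changes; the messages below are purely illustrative:

```python
import logging

logging.getLogger().setLevel(logging.DEBUG)  # let debug messages through
logging.info("detected 3 objects")           # appears under the "logs" entity
logging.debug("tracker update succeeded")    # visible when the filter allows DEBUG
```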

# Run the Code
To run this example, make sure you have the Rerun repository checked out and the latest SDK installed:
```bash
# Setup
pip install --upgrade rerun-sdk # install the latest Rerun SDK
git clone [email protected]:rerun-io/rerun.git # Clone the repository
cd rerun
git checkout latest # Check out the commit matching the latest SDK release
```

Install the necessary libraries specified in the requirements file:
```bash
pip install -r examples/python/detect_and_track_objects/requirements.txt
```
To experiment with the provided example, simply execute the main Python script:
```bash
python examples/python/detect_and_track_objects/main.py # run the example
```

If you wish to customize it for various videos, adjust the maximum frames, explore additional features, or save it, use the CLI with the `--help` option for guidance:

```bash
python examples/python/detect_and_track_objects/main.py --help
```
`examples/python/dicom_mri/README.md` (43 additions, 2 deletions)
<img src="https://static.rerun.io/dicom_mri/e39f34a1b1ddd101545007f43a61783e1d2e5f8e/full.png" alt="">

Visualize a [DICOM](https://en.wikipedia.org/wiki/DICOM) MRI scan. This demonstrates the flexible tensor slicing capabilities of the Rerun viewer.

# Used Rerun Types
[`Tensor`](https://www.rerun.io/docs/reference/types/archetypes/tensor), [`TextDocument`](https://www.rerun.io/docs/reference/types/archetypes/text_document)

# Background
Digital Imaging and Communications in Medicine (DICOM) serves as a technical standard for the digital storage and transmission of medical images. In this instance, an MRI scan is visualized using Rerun.

# Logging and Visualizing with Rerun

The visualizations in this example were created with just the following line.
```python
rr.log("tensor", rr.Tensor(voxels_volume_u16, dim_names=["right", "back", "up"]))
```

Here, `voxels_volume_u16` is a `numpy` array of volumetric MRI intensities with shape `(512, 512, 512)`.
To visualize this data effectively in Rerun, we log the array as a [`Tensor`](https://www.rerun.io/docs/reference/types/archetypes/tensor) to the `tensor` entity.

In the Rerun viewer you can also inspect the data in detail. The `dim_names` provided in the above call to `rr.log` help to
give semantic meaning to each axis. After selecting the tensor view, you can adjust various settings in the Blueprint
settings on the right-hand side. For example, you can adjust the color map, the brightness, which dimensions to show as
an image and which to select from, and more.
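The example ships with its own sample data, but as a sketch, a volume like `voxels_volume_u16` could be assembled from a folder of DICOM slices with `pydicom`. The directory name and sorting key are assumptions, not the example's actual loading code:

```python
from pathlib import Path

import numpy as np
import pydicom
import rerun as rr

rr.init("dicom_mri_sketch", spawn=True)

# Hypothetical directory containing one .dcm file per slice.
datasets = [pydicom.dcmread(f) for f in Path("mri_scan").glob("*.dcm")]

# Order the slices along the scan axis before stacking them into a 3D volume.
datasets.sort(key=lambda ds: int(ds.InstanceNumber))
voxels_volume_u16 = np.stack([ds.pixel_array for ds in datasets]).astype(np.uint16)

rr.log("tensor", rr.Tensor(voxels_volume_u16, dim_names=["right", "back", "up"]))
```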

# Run the Code
To run this example, make sure you have the Rerun repository checked out and the latest SDK installed:
```bash
# Setup
pip install --upgrade rerun-sdk # install the latest Rerun SDK
git clone [email protected]:rerun-io/rerun.git # Clone the repository
cd rerun
git checkout latest # Check out the commit matching the latest SDK release
```

Install the necessary libraries specified in the requirements file:
```bash
pip install -r examples/python/dicom_mri/requirements.txt
```
To experiment with the provided example, simply execute the main Python script:
```bash
python examples/python/dicom_mri/main.py # run the example
```

If you wish to customize it, explore additional features, or save it, use the CLI with the `--help` option for guidance:

```bash
python examples/python/dicom_mri/main.py --help
```
