Video hover interaction like on images (rerun-io#7457)
### What

* Fixes rerun-io#7353

This makes hover interaction on videos look the same as on images!


https://github.com/user-attachments/assets/279bf6c3-2d87-4f99-bda5-78ba291e4e5f


There's once again quite a bit of code movement: commit-by-commit review is recommended to make it easier to skip over the pure refactors.

Image & video hover now share almost all code. There were a few places where
we appeared to do repeated image→gputexture creation; these are now
eliminated, since everything re-uses the textures acquired during
visualization (most of that happens in [this
commit](rerun-io@7e24d92)).
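The re-use pattern amounts to a cache lookup keyed by image content, uploading only on a miss. A minimal sketch, assuming hypothetical names (`TextureCache`, `TextureHandle`, `get_or_upload` are illustrative, not the actual rerun API):

```rust
use std::collections::HashMap;

/// Stand-in for the renderer's GPU texture type.
#[derive(Clone, Debug, PartialEq)]
struct TextureHandle(u64);

#[derive(Default)]
struct TextureCache {
    textures: HashMap<u64, TextureHandle>,
    uploads: usize,
}

impl TextureCache {
    /// Return the cached texture for `image_hash`, uploading only on a miss.
    fn get_or_upload(&mut self, image_hash: u64) -> TextureHandle {
        if let Some(existing) = self.textures.get(&image_hash) {
            return existing.clone();
        }
        // A real implementation would do the image→gputexture conversion here.
        self.uploads += 1;
        let handle = TextureHandle(image_hash);
        self.textures.insert(image_hash, handle.clone());
        handle
    }
}

fn main() {
    let mut cache = TextureCache::default();
    let a = cache.get_or_upload(42); // visualizer uploads the texture
    let b = cache.get_or_upload(42); // hover UI re-uses it: no second upload
    assert_eq!(a, b);
    assert_eq!(cache.uploads, 1);
    println!("ok");
}
```

With the hover UI going through the same cache as the visualizer, the second lookup is a no-op rather than a redundant upload.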

At the very end (both in the commit history & in the code flow of the
hover interaction) there's a very special piece of code for reading back
texture values from the video. This is necessary since we never have
the entire video frame accessible in regular RAM.
We don't want to read back the entire picture, since this may be slow for
high-resolution video, and we don't want to read back a single pixel either,
because by the time the result arrives, the mouse might have moved on
(the delay depends on various factors, but we expect 1-3 frames). Our
strategy is similar to our picking read-back: we query a region around
the mouse cursor and, once it arrives, look up where the mouse now is inside
this region. This is a bit more complicated than picking though, because there
we already have a small enough texture, whereas here we have to read
from a bigger one. See the code comments for details.
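The region strategy can be sketched roughly like this (a minimal, self-contained sketch; `ReadbackRect`, `readback_rect`, and `sample` are illustrative names, not the actual rerun API):

```rust
/// A readback region in texel coordinates.
#[derive(Debug, PartialEq)]
struct ReadbackRect {
    min: [u32; 2],
    size: [u32; 2],
}

/// Pick a region of `radius` texels around the cursor, clamped to the image bounds.
fn readback_rect(cursor: [u32; 2], radius: u32, resolution: [u32; 2]) -> ReadbackRect {
    let min_x = cursor[0].saturating_sub(radius);
    let min_y = cursor[1].saturating_sub(radius);
    let max_x = (cursor[0] + radius + 1).min(resolution[0]);
    let max_y = (cursor[1] + radius + 1).min(resolution[1]);
    ReadbackRect {
        min: [min_x, min_y],
        size: [max_x - min_x, max_y - min_y],
    }
}

/// Once the readback arrives (typically 1-3 frames later), look up the pixel
/// at the *current* cursor position, if it still falls inside the region.
fn sample(rect: &ReadbackRect, pixels: &[u32], cursor_now: [u32; 2]) -> Option<u32> {
    let dx = cursor_now[0].checked_sub(rect.min[0])?;
    let dy = cursor_now[1].checked_sub(rect.min[1])?;
    if dx < rect.size[0] && dy < rect.size[1] {
        pixels.get((dy * rect.size[0] + dx) as usize).copied()
    } else {
        None
    }
}

fn main() {
    let rect = readback_rect([5, 5], 2, [100, 100]);
    assert_eq!(rect.size, [5, 5]);
    let pixels: Vec<u32> = (0..25).collect();
    // Mouse moved one texel to the right while waiting; still inside the region:
    assert!(sample(&rect, &pixels, [6, 5]).is_some());
    // Mouse left the region entirely; no stale value is shown:
    assert_eq!(sample(&rect, &pixels, [50, 50]), None);
    println!("ok");
}
```

If the cursor has moved only a little, the answer comes from the already-read-back region; only when it leaves the region entirely is there no value to show until the next readback arrives.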

I validated the displayed RGB values by taking screenshots of a running
video and comparing them with an external color picker. In my experiments
the RGB numbers have always been accurate with no visible delay, although in
theory they may lag behind ever so slightly.

Small style change: image/video hovers and regular hovers now format the
entity path the same way:

![image](https://github.com/user-attachments/assets/b4a30462-952a-41eb-9a63-9fe9d71f8edc)


### Checklist
* [x] I have read and agree to [Contributor
Guide](https://github.com/rerun-io/rerun/blob/main/CONTRIBUTING.md) and
the [Code of
Conduct](https://github.com/rerun-io/rerun/blob/main/CODE_OF_CONDUCT.md)
* [x] I've included a screenshot or gif (if applicable)
* [x] I have tested the web demo (if applicable):
* Using examples from latest `main` build:
[rerun.io/viewer](https://rerun.io/viewer/pr/7457?manifest_url=https://app.rerun.io/version/main/examples_manifest.json)
* Using full set of examples from `nightly` build:
[rerun.io/viewer](https://rerun.io/viewer/pr/7457?manifest_url=https://app.rerun.io/version/nightly/examples_manifest.json)
* [x] The PR title and labels are set such as to maximize their
usefulness for the next release's CHANGELOG
* [x] If applicable, add a new check to the [release
checklist](https://github.com/rerun-io/rerun/blob/main/tests/python/release_checklist)!
* [x] I have noted any breaking changes to the log API in
`CHANGELOG.md` and the migration guide

- [PR Build Summary](https://build.rerun.io/pr/7457)
- [Recent benchmark results](https://build.rerun.io/graphs/crates.html)
- [Wasm size tracking](https://build.rerun.io/graphs/sizes.html)

To run all checks from `main`, comment on the PR with `@rerun-bot
full-check`.
Wumpf authored Sep 23, 2024
1 parent e12e809 commit 47c6e47
Showing 19 changed files with 1,003 additions and 738 deletions.
317 changes: 3 additions & 314 deletions crates/viewer/re_data_ui/src/image.rs
@@ -1,12 +1,9 @@
use egui::{Color32, NumExt as _, Vec2};
use egui::{NumExt as _, Vec2};

use re_renderer::renderer::ColormappedTexture;
use re_types::{
components::ClassId, datatypes::ColorModel, image::ImageKind, tensor_data::TensorElement,
};
use re_viewer_context::{
gpu_bridge::{self, image_to_gpu},
Annotations, ImageInfo, ImageStats, ImageStatsCache, UiLayout, ViewerContext,
ImageInfo, ImageStatsCache, UiLayout, ViewerContext,
};

/// Show a button letting the user copy the image
@@ -143,7 +140,7 @@ fn show_image_preview(
minification: egui::TextureFilter::Linear,
..Default::default()
},
debug_name,
debug_name.into(),
) {
let color = ui.visuals().error_fg_color;
painter.text(
@@ -168,311 +165,3 @@ fn largest_size_that_fits_in(aspect_ratio: f32, max_size: Vec2) -> Vec2 {
egui::vec2(max_size.x, max_size.x / aspect_ratio)
}
}

// Show the surrounding pixels:
const ZOOMED_IMAGE_TEXEL_RADIUS: isize = 10;

pub fn show_zoomed_image_region_area_outline(
egui_ctx: &egui::Context,
ui_clip_rect: egui::Rect,
image_resolution: egui::Vec2,
[center_x, center_y]: [isize; 2],
image_rect: egui::Rect,
) {
use egui::{pos2, remap, Rect};

let width = image_resolution.x;
let height = image_resolution.y;

// Show where on the original image the zoomed-in region is at:
// The area shown is ZOOMED_IMAGE_TEXEL_RADIUS _surrounding_ the center.
// Since the center is the top-left/rounded-down, coordinate, we need to add 1 to right/bottom.
let left = (center_x - ZOOMED_IMAGE_TEXEL_RADIUS) as f32;
let right = (center_x + ZOOMED_IMAGE_TEXEL_RADIUS + 1) as f32;
let top = (center_y - ZOOMED_IMAGE_TEXEL_RADIUS) as f32;
let bottom = (center_y + ZOOMED_IMAGE_TEXEL_RADIUS + 1) as f32;

let left = remap(left, 0.0..=width, image_rect.x_range());
let right = remap(right, 0.0..=width, image_rect.x_range());
let top = remap(top, 0.0..=height, image_rect.y_range());
let bottom = remap(bottom, 0.0..=height, image_rect.y_range());

let sample_rect = Rect::from_min_max(pos2(left, top), pos2(right, bottom));
// TODO(emilk): use `parent_ui.painter()` and put it in a high Z layer, when https://github.com/emilk/egui/issues/1516 is done
let painter = egui_ctx.debug_painter().with_clip_rect(ui_clip_rect);
painter.rect_stroke(sample_rect, 0.0, (2.0, Color32::BLACK));
painter.rect_stroke(sample_rect, 0.0, (1.0, Color32::WHITE));
}

/// `meter`: iff this is a depth map, how long is one meter?
#[allow(clippy::too_many_arguments)]
pub fn show_zoomed_image_region(
render_ctx: &re_renderer::RenderContext,
ui: &mut egui::Ui,
image: &ImageInfo,
image_stats: &ImageStats,
annotations: &Annotations,
meter: Option<f32>,
debug_name: &str,
center_texel: [isize; 2],
) {
let texture =
match gpu_bridge::image_to_gpu(render_ctx, debug_name, image, image_stats, annotations) {
Ok(texture) => texture,
Err(err) => {
ui.label(format!("Error: {err}"));
return;
}
};

if let Err(err) = try_show_zoomed_image_region(
render_ctx,
ui,
image,
texture,
annotations,
meter,
debug_name,
center_texel,
) {
ui.label(format!("Error: {err}"));
}
}

/// `meter`: iff this is a depth map, how long is one meter?
#[allow(clippy::too_many_arguments)]
fn try_show_zoomed_image_region(
render_ctx: &re_renderer::RenderContext,
ui: &mut egui::Ui,
image: &ImageInfo,
texture: ColormappedTexture,
annotations: &Annotations,
meter: Option<f32>,
debug_name: &str,
center_texel: [isize; 2],
) -> anyhow::Result<()> {
let (width, height) = (image.width(), image.height());

const POINTS_PER_TEXEL: f32 = 5.0;
let size = Vec2::splat(((ZOOMED_IMAGE_TEXEL_RADIUS * 2 + 1) as f32) * POINTS_PER_TEXEL);

let (_id, zoom_rect) = ui.allocate_space(size);
let painter = ui.painter();

painter.rect_filled(zoom_rect, 0.0, ui.visuals().extreme_bg_color);

let center_of_center_texel = egui::vec2(
(center_texel[0] as f32) + 0.5,
(center_texel[1] as f32) + 0.5,
);

// Paint the zoomed in region:
{
let image_rect_on_screen = egui::Rect::from_min_size(
zoom_rect.center() - POINTS_PER_TEXEL * center_of_center_texel,
POINTS_PER_TEXEL * egui::vec2(width as f32, height as f32),
);

gpu_bridge::render_image(
render_ctx,
&painter.with_clip_rect(zoom_rect),
image_rect_on_screen,
texture.clone(),
egui::TextureOptions::NEAREST,
debug_name,
)?;
}

// Outline the center texel, to indicate which texel we're printing the values of:
{
let center_texel_rect =
egui::Rect::from_center_size(zoom_rect.center(), Vec2::splat(POINTS_PER_TEXEL));
painter.rect_stroke(center_texel_rect.expand(1.0), 0.0, (1.0, Color32::BLACK));
painter.rect_stroke(center_texel_rect, 0.0, (1.0, Color32::WHITE));
}

let [x, y] = center_texel;
if 0 <= x && (x as u32) < width && 0 <= y && (y as u32) < height {
ui.separator();

ui.vertical(|ui| {
ui.style_mut().wrap_mode = Some(egui::TextWrapMode::Extend);

image_pixel_value_ui(ui, image, annotations, [x as _, y as _], meter);

// Show a big sample of the color of the middle texel:
let (rect, _) =
ui.allocate_exact_size(Vec2::splat(ui.available_height()), egui::Sense::hover());
// Position texture so that the center texel is at the center of the rect:
let zoom = rect.width();
let image_rect_on_screen = egui::Rect::from_min_size(
rect.center() - zoom * center_of_center_texel,
zoom * egui::vec2(width as f32, height as f32),
);
gpu_bridge::render_image(
render_ctx,
&ui.painter().with_clip_rect(rect),
image_rect_on_screen,
texture,
egui::TextureOptions::NEAREST,
debug_name,
)
})
.inner?;
}
Ok(())
}

fn image_pixel_value_ui(
ui: &mut egui::Ui,
image: &ImageInfo,
annotations: &Annotations,
[x, y]: [u32; 2],
meter: Option<f32>,
) {
egui::Grid::new("hovered pixel properties").show(ui, |ui| {
ui.label("Position:");
ui.label(format!("{x}, {y}"));
ui.end_row();

// Check for annotations on any single-channel image
if image.kind == ImageKind::Segmentation {
if let Some(raw_value) = image.get_xyc(x, y, 0) {
if let (ImageKind::Segmentation, Some(u16_val)) =
(image.kind, raw_value.try_as_u16())
{
ui.label("Label:");
ui.label(
annotations
.resolved_class_description(Some(ClassId::from(u16_val)))
.annotation_info()
.label(None)
.unwrap_or_else(|| u16_val.to_string()),
);
ui.end_row();
};
}
}
if let Some(meter) = meter {
// This is a depth map
if let Some(raw_value) = image.get_xyc(x, y, 0) {
let raw_value = raw_value.as_f64();
let meters = raw_value / (meter as f64);
ui.label("Depth:");
if meters < 1.0 {
ui.monospace(format!("{:.1} mm", meters * 1e3));
} else {
ui.monospace(format!("{meters:.3} m"));
}
}
}
});

let text = match image.kind {
ImageKind::Segmentation | ImageKind::Depth => {
image.get_xyc(x, y, 0).map(|v| format!("Val: {v}"))
}

ImageKind::Color => match image.color_model() {
ColorModel::L => image.get_xyc(x, y, 0).map(|v| format!("L: {v}")),

ColorModel::RGB => {
if let Some([r, g, b]) = {
if let [Some(r), Some(g), Some(b)] = [
image.get_xyc(x, y, 0),
image.get_xyc(x, y, 1),
image.get_xyc(x, y, 2),
] {
Some([r, g, b])
} else {
None
}
} {
match (r, g, b) {
(TensorElement::U8(r), TensorElement::U8(g), TensorElement::U8(b)) => {
Some(format!("R: {r}, G: {g}, B: {b}, #{r:02X}{g:02X}{b:02X}"))
}
_ => Some(format!("R: {r}, G: {g}, B: {b}")),
}
} else {
None
}
}

ColorModel::RGBA => {
if let (Some(r), Some(g), Some(b), Some(a)) = (
image.get_xyc(x, y, 0),
image.get_xyc(x, y, 1),
image.get_xyc(x, y, 2),
image.get_xyc(x, y, 3),
) {
match (r, g, b, a) {
(
TensorElement::U8(r),
TensorElement::U8(g),
TensorElement::U8(b),
TensorElement::U8(a),
) => Some(format!(
"R: {r}, G: {g}, B: {b}, A: {a}, #{r:02X}{g:02X}{b:02X}{a:02X}"
)),
_ => Some(format!("R: {r}, G: {g}, B: {b}, A: {a}")),
}
} else {
None
}
}

ColorModel::BGR => {
if let Some([b, g, r]) = {
if let [Some(b), Some(g), Some(r)] = [
image.get_xyc(x, y, 0),
image.get_xyc(x, y, 1),
image.get_xyc(x, y, 2),
] {
Some([r, g, b])
} else {
None
}
} {
match (b, g, r) {
(TensorElement::U8(b), TensorElement::U8(g), TensorElement::U8(r)) => {
Some(format!("B: {b}, G: {g}, R: {r}, #{b:02X}{g:02X}{r:02X}"))
}
_ => Some(format!("B: {b}, G: {g}, R: {r}")),
}
} else {
None
}
}

ColorModel::BGRA => {
if let (Some(b), Some(g), Some(r), Some(a)) = (
image.get_xyc(x, y, 0),
image.get_xyc(x, y, 1),
image.get_xyc(x, y, 2),
image.get_xyc(x, y, 3),
) {
match (b, g, r, a) {
(
TensorElement::U8(b),
TensorElement::U8(g),
TensorElement::U8(r),
TensorElement::U8(a),
) => Some(format!(
"B: {b}, G: {g}, R: {r}, A: {a}, #{r:02X}{g:02X}{b:02X}{a:02X}"
)),
_ => Some(format!("B: {b}, G: {g}, R: {r}, A: {a}")),
}
} else {
None
}
}
},
};

if let Some(text) = text {
ui.label(text);
} else {
ui.label("No Value");
}
}
5 changes: 1 addition & 4 deletions crates/viewer/re_data_ui/src/lib.rs
@@ -23,10 +23,7 @@ mod tensor;

pub mod item_ui;

pub use crate::{
image::{show_zoomed_image_region, show_zoomed_image_region_area_outline},
tensor::tensor_summary_ui_grid_contents,
};
pub use crate::tensor::tensor_summary_ui_grid_contents;
pub use component::ComponentPathLatestAtResults;
pub use component_ui_registry::{add_to_registry, register_component_uis};

13 changes: 10 additions & 3 deletions crates/viewer/re_renderer/src/allocator/gpu_readback_belt.rs
@@ -21,6 +21,9 @@ struct PendingReadbackRange {
pub enum GpuReadbackError {
#[error("Texture format {0:?} is not supported for readback.")]
UnsupportedTextureFormatForReadback(wgpu::TextureFormat),

#[error("Texture or buffer does not have the required copy-source usage flag.")]
MissingSrcCopyUsage,
}

/// A reserved slice for GPU readback.
@@ -63,18 +66,22 @@ impl GpuReadbackBuffer {
sources_and_extents: &[(wgpu::ImageCopyTexture<'_>, wgpu::Extent3d)],
) -> Result<(), GpuReadbackError> {
for (source, copy_extents) in sources_and_extents {
let src_texture = source.texture;
if !src_texture.usage().contains(wgpu::TextureUsages::COPY_SRC) {
return Err(GpuReadbackError::MissingSrcCopyUsage);
}

let start_offset = wgpu::util::align_to(
self.range_in_chunk.start,
source
.texture
src_texture
.format()
.block_copy_size(Some(source.aspect))
.ok_or(GpuReadbackError::UnsupportedTextureFormatForReadback(
source.texture.format(),
))? as u64,
);

let buffer_info = Texture2DBufferInfo::new(source.texture.format(), *copy_extents);
let buffer_info = Texture2DBufferInfo::new(src_texture.format(), *copy_extents);

// Validate that we stay within the slice (wgpu can't fully know our intention here, so we have to check).
debug_assert!(
2 changes: 1 addition & 1 deletion crates/viewer/re_renderer/src/context.rs
@@ -74,7 +74,7 @@ pub struct RenderContext {
pub mesh_manager: RwLock<MeshManager>,
pub texture_manager_2d: TextureManager2D,
pub(crate) cpu_write_gpu_read_belt: Mutex<CpuWriteGpuReadBelt>,
pub(crate) gpu_readback_belt: Mutex<GpuReadbackBelt>,
pub gpu_readback_belt: Mutex<GpuReadbackBelt>,

/// List of unfinished queue submission via this context.
///