Lint: Use `…` instead of `...` (rerun-io#3100)
### What
The ellipsis character `…` should be used in GUIs and docs instead of
the similar-looking `...`. The former looks better overall, and the
latter can have a line break inserted in the middle of it, which doesn't
look great. It is also a matter of consistency.

Ideally we would only lint strings in Rust using something like
https://github.com/trailofbits/dylint, but I'll leave that as an exercise
for later 😬
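
For the curious, here's a rough sketch of what a naive text-based version of this lint could look like (hypothetical Rust, not the actual lint in this change; a real check would also need an allow-list for legitimate uses of three dots, e.g. Rust range patterns):

```rust
/// Minimal sketch of a text-based ellipsis lint (hypothetical; not the
/// actual lint added by this PR). Flags `...` so it can be replaced with `…`.
fn lint_line(path: &str, line_nr: usize, line: &str) -> Option<String> {
    line.contains("...")
        .then(|| format!("{path}:{line_nr}: use '…' instead of '...'"))
}

fn main() {
    let text = "appending event to current session file...";
    if let Some(warning) = lint_line("example.rs", 1, text) {
        eprintln!("{warning}");
    }
}
```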

For our developers: I suggest you figure out how to type `…` quickly on
your keyboard. On my Mac I press `⌥;` (but I can't remember if that is
something special I've configured…)

This one is for you @martenbjork 

### Checklist
* [x] I have read and agree to [Contributor Guide](https://github.com/rerun-io/rerun/blob/main/CONTRIBUTING.md) and the [Code of Conduct](https://github.com/rerun-io/rerun/blob/main/CODE_OF_CONDUCT.md)
* [x] I've included a screenshot or gif (if applicable)
* [x] I have tested [demo.rerun.io](https://demo.rerun.io/pr/3100) (if applicable)

- [PR Build Summary](https://build.rerun.io/pr/3100)
- [Docs preview](https://rerun.io/preview/9de65e45790b192899248abe12d1a7b43b1ca488/docs)
- [Examples preview](https://rerun.io/preview/9de65e45790b192899248abe12d1a7b43b1ca488/examples)
- [Recent benchmark results](https://ref.rerun.io/dev/bench/)
- [Wasm size tracking](https://ref.rerun.io/dev/sizes/)
emilk authored Aug 25, 2023
1 parent 1f6b87b commit 82cbbce
Showing 40 changed files with 136 additions and 131 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -87,5 +87,5 @@ The commercial product targets the needs specific to teams that build and run co
## Installing a pre-release Python SDK

1. Download the correct `.whl` from [GitHub Releases](https://github.com/rerun-io/rerun/releases)
2. Run `pip install rerun_sdk<...>.whl` (replace `<...>` with the actual filename)
2. Run `pip install rerun_sdk<…>.whl` (replace `<…>` with the actual filename)
3. Test it: `rerun --version`
6 changes: 3 additions & 3 deletions crates/re_analytics/src/pipeline_native.rs
@@ -61,7 +61,7 @@ impl Pipeline {
//
// Joining these threads is not a viable strategy for two reasons:
// 1. We _never_ want to delay the shutdown process, analytics must never be in the way.
// 2. We need to deal with unexpected shutdowns anyway (crashes, SIGINT, SIGKILL, ...),
// 2. We need to deal with unexpected shutdowns anyway (crashes, SIGINT, SIGKILL, …),
// and we do indeed.
//
// This is an at-least-once pipeline: in the worst case, unexpected shutdowns will lead to
@@ -213,7 +213,7 @@ fn realtime_pipeline(
return;
}
if let Err(err) = session_file.rewind() {
// We couldn't reset the session file... That one is a bit messy and will likely break
// We couldn't reset the session file… That one is a bit messy and will likely break
// analytics for the entire duration of this session, but that really _really_ should
// never happen.
re_log::debug_once!("couldn't seek into analytics data file: {err}");
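
(Aside: a self-contained sketch of the rewind-and-retry idea the comments above describe — hypothetical, std-only code using an in-memory `Cursor` in place of the real session file:)

```rust
use std::io::{Cursor, Read, Seek, Write};

fn main() -> std::io::Result<()> {
    // Stand-in for the on-disk session file.
    let mut session_file = Cursor::new(Vec::new());
    writeln!(session_file, r#"{{"event":"example"}}"#)?;

    // If draining the file fails, rewinding lets the next attempt re-read
    // every buffered event: duplicates are possible, losses are not —
    // which is what "at-least-once" means here.
    session_file.rewind()?;

    let mut contents = String::new();
    session_file.read_to_string(&mut contents)?;
    assert!(contents.contains("example"));
    Ok(())
}
```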
@@ -223,7 +223,7 @@
let on_event = |session_file: &mut _, event| {
re_log::trace!(
%analytics_id, %session_id,
"appending event to current session file..."
"appending event to current session file"
);
if let Err(event) = append_event(session_file, &analytics_id, &session_id, event) {
// We failed to append the event to the current session, so push it back at the end of
8 changes: 4 additions & 4 deletions crates/re_arrow_store/src/store.rs
@@ -133,7 +133,7 @@ impl<T: Clone> std::ops::DerefMut for MetadataRegistry<T> {
}
}

/// Used to cache auto-generated cluster cells (`[0]`, `[0, 1]`, `[0, 1, 2]`, ...) so that they
/// Used to cache auto-generated cluster cells (`[0]`, `[0, 1]`, `[0, 1, 2]`, …) so that they
/// can be properly deduplicated on insertion.
#[derive(Debug, Default, Clone)]
pub struct ClusterCellCache(pub IntMap<u32, DataCell>);
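
(A rough sketch of the memoization this cache enables — hypothetical, with `HashMap` and `Vec<u32>` standing in for `IntMap` and `DataCell`:)

```rust
use std::collections::HashMap;

/// Hypothetical stand-in: caches the auto-generated `[0, 1, …, n-1]` cell
/// for each length, so equal cells are generated and allocated only once.
#[derive(Default)]
struct ClusterCellCache(HashMap<u32, Vec<u32>>);

impl ClusterCellCache {
    fn generate(&mut self, num_instances: u32) -> &Vec<u32> {
        self.0
            .entry(num_instances)
            .or_insert_with(|| (0..num_instances).collect())
    }
}

fn main() {
    let mut cache = ClusterCellCache::default();
    assert_eq!(cache.generate(3), &vec![0, 1, 2]);
    // A second call for the same length reuses the cached cell.
    assert_eq!(cache.generate(3), &vec![0, 1, 2]);
}
```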
@@ -200,7 +200,7 @@ pub struct DataStore {
/// Only used to map `RowId`s to their original [`TimePoint`]s at the moment.
pub(crate) metadata_registry: MetadataRegistry<TimePoint>,

/// Used to cache auto-generated cluster cells (`[0]`, `[0, 1]`, `[0, 1, 2]`, ...)
/// Used to cache auto-generated cluster cells (`[0]`, `[0, 1]`, `[0, 1, 2]`, …)
/// so that they can be properly deduplicated on insertion.
pub(crate) cluster_cell_cache: ClusterCellCache,

@@ -409,7 +409,7 @@ pub struct IndexedTable {
/// buckets, in bytes.
///
/// This is a best-effort approximation, adequate for most purposes (stats,
/// triggering GCs, ...).
/// triggering GCs, …).
pub buckets_size_bytes: u64,
}

@@ -506,7 +506,7 @@ pub struct IndexedBucketInner {
/// included, in bytes.
///
/// This is a best-effort approximation, adequate for most purposes (stats,
/// triggering GCs, ...).
/// triggering GCs, …).
///
/// We cache this because there can be many, many buckets.
pub size_bytes: u64,
2 changes: 1 addition & 1 deletion crates/re_arrow_store/src/store_polars.rs
@@ -300,7 +300,7 @@ fn column_as_series(
ListArray::<i32>::default_datatype(datatype.clone()),
Offsets::try_from_lengths(comp_lengths).unwrap().into(),
// It's possible that all rows being referenced were already garbage collected (or simply
// never existed to begin with), at which point `comp_rows` will be empty... and you can't
// never existed to begin with), at which point `comp_rows` will be empty… and you can't
// call `concatenate` on an empty list without panicking.
if comp_values.is_empty() {
new_empty_array(datatype)
14 changes: 7 additions & 7 deletions crates/re_arrow_store/src/store_read.rs
@@ -104,7 +104,7 @@ impl DataStore {
id = self.query_id.load(Ordering::Relaxed),
timeline = ?timeline,
entity = %ent_path,
"query started..."
"query started"
);

let timeless: Option<IntSet<_>> = self
@@ -215,7 +215,7 @@ impl DataStore {
entity = %ent_path,
%primary,
?components,
"query started..."
"query started"
);

let cells = self
@@ -405,7 +405,7 @@ impl DataStore {
query = ?query,
entity = %ent_path,
?components,
"query started..."
"query started"
);

let temporal = self
Expand Down Expand Up @@ -694,14 +694,14 @@ impl IndexedBucket {
?components,
timeline = %self.timeline.name(),
time = self.timeline.typ().format(time),
"searching for primary & secondary cells..."
"searching for primary & secondary cells"
);

let time_row_nr = col_time.partition_point(|t| *t <= time.as_i64()) as i64;

// The partition point is always _beyond_ the index that we're looking for.
// A partition point of 0 thus means that we're trying to query for data that lives
// _before_ the beginning of time... there's nothing to be found there.
// _before_ the beginning of time… there's nothing to be found there.
if time_row_nr == 0 {
return None;
}
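
(To illustrate the `partition_point` reasoning in the comment above — a hypothetical standalone sketch, not code from this crate:)

```rust
/// Latest-at semantics via `partition_point`: returns the row number of the
/// newest entry at or before `query_time`, or `None` if the query falls
/// before the beginning of time.
fn latest_at(col_time: &[i64], query_time: i64) -> Option<usize> {
    // `partition_point` returns the count of rows with `t <= query_time`,
    // i.e. one _beyond_ the index we're looking for.
    let partition = col_time.partition_point(|&t| t <= query_time);
    partition.checked_sub(1)
}

fn main() {
    let col_time = [10, 20, 30];
    assert_eq!(latest_at(&col_time, 25), Some(1)); // row with t = 20
    assert_eq!(latest_at(&col_time, 5), None); // before the beginning of time
}
```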
@@ -818,7 +818,7 @@ impl IndexedBucket {
?components,
timeline = %self.timeline.name(),
time_range = self.timeline.typ().format_range(time_range),
"searching for time & component cell numbers..."
"searching for time & component cell numbers"
);

let time_row_nr = col_time.partition_point(|t| *t < time_range.min.as_i64()) as u64;
@@ -1005,7 +1005,7 @@ impl PersistentIndexedTable {
%primary,
?components,
timeless = true,
"searching for primary & secondary cells..."
"searching for primary & secondary cells"
);

// find the primary row number's row.
2 changes: 1 addition & 1 deletion crates/re_arrow_store/src/store_stats.rs
@@ -368,7 +368,7 @@ impl IndexedBucketInner {
/// stack and heap included, in bytes.
///
/// This is a best-effort approximation, adequate for most purposes (stats,
/// triggering GCs, ...).
/// triggering GCs, …).
#[inline]
pub fn compute_size_bytes(&mut self) -> u64 {
re_tracing::profile_function!();
4 changes: 2 additions & 2 deletions crates/re_arrow_store/src/store_write.rs
@@ -129,7 +129,7 @@ impl DataStore {
.collect::<Vec<_>>(),
entity = %ent_path,
components = ?cells.iter().map(|cell| cell.component_name()).collect_vec(),
"insertion started..."
"insertion started"
);

let cluster_cell_pos = cells
@@ -155,7 +155,7 @@
None
} else {
// The caller has not specified any cluster component, and so we'll have to generate
// one... unless we've already generated one of this exact length in the past,
// one… unless we've already generated one of this exact length in the past,
// in which case we can simply re-use that cell.

Some(self.generate_cluster_cell(num_instances))
2 changes: 1 addition & 1 deletion crates/re_arrow_store/tests/internals.rs
@@ -18,7 +18,7 @@ use re_types::{components::InstanceKey, Loggable as _};
// is performance: getting the datastore into a pathological topology should show up in
// integration query benchmarks.
//
// In the current state of things, though, it is much easier to test for it that way... so we
// In the current state of things, though, it is much easier to test for it that way… so we
// make an exception, for now...
#[test]
fn pathological_bucket_topology() {
2 changes: 1 addition & 1 deletion crates/re_build_info/src/build_info.rs
@@ -7,7 +7,7 @@
///
/// There are a few other cases though, like
/// - `git` is not installed
/// - the user downloaded rerun as a tarball and then imported via a `path = ...` import
/// - the user downloaded rerun as a tarball and then imported via a `path = …` import
/// - others?
#[derive(Copy, Clone, Debug)]
pub struct BuildInfo {
4 changes: 2 additions & 2 deletions crates/re_build_tools/src/lib.rs
@@ -36,9 +36,9 @@ pub use self::rebuild_detector::{
//
// When working within the workspace, we can simply try and call `git` and we're done.
//
// # Using an unpublished crate (e.g. `path = "..."` or `git = "..."` or `[patch.crates-io]`)
// # Using an unpublished crate (e.g. `path = "…"` or `git = "…"` or `[patch.crates-io]`)
//
// In these cases we may or may not have access to the workspace (e.g. a `path = ...` import likely
// In these cases we may or may not have access to the workspace (e.g. a `path = …` import likely
// will, while a crate patch won't).
//
// This is not an issue however, as we can simply try and see what we get.
4 changes: 2 additions & 2 deletions crates/re_components/src/tensor.rs
@@ -438,8 +438,8 @@ impl Tensor {

match shape_short.len() {
1 => {
// Special case: Nx1(x1x1x...) tensors are treated as Nx1 gray images.
// Special case: Nx1(x1x1x...) tensors are treated as Nx1 gray images.
// Special case: Nx1(x1x1x…) tensors are treated as Nx1 gray images.
// Special case: Nx1(x1x1x…) tensors are treated as Nx1 gray images.
if self.shape.len() >= 2 {
Some([shape_short[0].size, 1, 1])
} else {
4 changes: 2 additions & 2 deletions crates/re_data_store/src/store_db.rs
@@ -102,7 +102,7 @@ impl EntityDb {
DataCell::from_arrow_empty(cell.component_name(), cell.datatype().clone());

// NOTE(cmc): The fact that this inserts data to multiple entity paths using a
// single `RowId` is... interesting. Keep it in mind.
// single `RowId` is… interesting. Keep it in mind.
let row = DataRow::from_cells1(
row_id,
row.entity_path.clone(),
@@ -134,7 +134,7 @@ impl EntityDb {
DataCell::from_arrow_empty(component_path.component_name, data_type.clone());

// NOTE(cmc): The fact that this inserts data to multiple entity paths using a
// single `RowId` is... interesting. Keep it in mind.
// single `RowId` is… interesting. Keep it in mind.
let row = DataRow::from_cells1(
row_id,
component_path.entity_path.clone(),
2 changes: 1 addition & 1 deletion crates/re_log_encoding/src/stream_rrd_from_http.rs
@@ -112,7 +112,7 @@ mod web_event_listener {
///
/// From javascript you can send an rrd using:
/// ``` ignore
/// var rrd = new Uint8Array(...); // Get an RRD from somewhere
var rrd = new Uint8Array(…); // Get an RRD from somewhere
/// window.postMessage(rrd, "*")
/// ```
pub fn stream_rrd_from_event_listener(on_msg: Arc<HttpMessageCallback>) {
8 changes: 4 additions & 4 deletions crates/re_log_types/src/data_cell.rs
@@ -42,7 +42,7 @@ pub type DataCellResult<T> = ::std::result::Result<T, DataCellError>;
///
/// ## Layout
///
/// A cell is an array of component instances: `[C, C, C, ...]`.
/// A cell is an array of component instances: `[C, C, C, …]`.
///
/// Consider this example:
/// ```ignore
@@ -133,7 +133,7 @@ pub struct DataCellInner {
/// costly operation.
pub(crate) size_bytes: u64,

/// A uniformly typed list of values for the given component type: `[C, C, C, ...]`
/// A uniformly typed list of values for the given component type: `[C, C, C, …]`
///
/// Includes the data, its schema and probably soon the component metadata
/// (e.g. the `ComponentName`).
@@ -332,12 +332,12 @@ impl DataCell {
///
/// Useful when dealing with cells of different lengths in context that don't allow for it.
///
/// * Before: `[C, C, C, ...]`
/// * Before: `[C, C, C, …]`
/// * After: `ListArray[ [C, C, C, C] ]`
//
// TODO(#1696): this shouldn't be public, need to make it private once the store has been
// patched to use datacells directly.
// TODO(cmc): effectively, this returns a `DataColumn`... think about that.
// TODO(cmc): effectively, this returns a `DataColumn`… think about that.
#[doc(hidden)]
#[inline]
pub fn to_arrow_monolist(&self) -> Box<dyn arrow2::array::Array> {
2 changes: 1 addition & 1 deletion crates/re_log_types/src/data_row.rs
@@ -171,7 +171,7 @@ impl std::ops::DerefMut for RowId {
/// ## Layout
///
/// A row is a collection of cells where each cell must either be empty (a clear), unit-lengthed
/// (a splat) or `num_instances` long (standard): `[[C1, C1, C1], [], [C3], [C4, C4, C4], ...]`.
/// (a splat) or `num_instances` long (standard): `[[C1, C1, C1], [], [C3], [C4, C4, C4], …]`.
///
/// Consider this example:
/// ```ignore
14 changes: 7 additions & 7 deletions crates/re_log_types/src/data_table.rs
@@ -207,9 +207,9 @@ impl std::ops::DerefMut for TableId {
/// (standard):
/// ```text
/// [
/// [[C1, C1, C1], [], [C3], [C4, C4, C4], ...],
/// [None, [C2, C2], [], [C4], ...],
/// [None, [C2, C2], [], None, ...],
/// [[C1, C1, C1], [], [C3], [C4, C4, C4], …],
/// [None, [C2, C2], [], [C4], …],
/// [None, [C2, C2], [], None, …],
/// ...
/// ]
/// ```
@@ -698,8 +698,8 @@ impl DataTable {

/// Transforms an array of unit values into a list of unit arrays.
///
/// * Before: `[C, C, C, C, C, ...]`
/// * After: `ListArray[ [C], [C], [C], [C], [C], ... ]`
/// * Before: `[C, C, C, C, C, …]`
/// * After: `ListArray[ [C], [C], [C], [C], [C], … ]`
// NOTE: keeping that one around, just in case.
#[allow(dead_code)]
fn unit_values_to_unit_lists(array: Box<dyn Array>) -> Box<dyn Array> {
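
(The Before/After transformations documented here boil down to Arrow's offsets-based list encoding; a hypothetical plain-`Vec` sketch of the idea, with no arrow2 and invented names:)

```rust
/// Hypothetical sketch of offsets-based list encoding (plain `Vec`s standing
/// in for arrow2 buffers): per-cell lengths become an offsets column, and
/// cell `i` is `values[offsets[i]..offsets[i + 1]]` in the flat values array.
fn lengths_to_offsets(lengths: &[usize]) -> Vec<usize> {
    let mut offsets = Vec::with_capacity(lengths.len() + 1);
    let mut end = 0;
    offsets.push(end);
    for len in lengths {
        end += len;
        offsets.push(end);
    }
    offsets
}

fn main() {
    // Flattened cell values, then cell lengths 2, 3, 0, 1:
    let values = ["C1", "C1", "C2", "C2", "C2", "C3"];
    let offsets = lengths_to_offsets(&[2, 3, 0, 1]);
    assert_eq!(offsets, [0, 2, 5, 5, 6]);
    // The zero-length cell plays the role of `None`/`[]` above.
    assert_eq!(values[offsets[1]..offsets[2]], ["C2", "C2", "C2"]);
}
```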
@@ -795,8 +795,8 @@ impl DataTable {

/// Create a list-array out of a flattened array of cell values.
///
/// * Before: `[C, C, C, C, C, C, C, ...]`
/// * After: `ListArray[ [[C, C], [C, C, C], None, [C], [C], ...] ]`
/// * Before: `[C, C, C, C, C, C, C, …]`
/// * After: `ListArray[ [[C, C], [C, C, C], None, [C], [C], …] ]`
fn data_to_lists(
column: &[Option<DataCell>],
data: Box<dyn Array>,
2 changes: 1 addition & 1 deletion crates/re_log_types/src/data_table_batcher.rs
@@ -418,7 +418,7 @@ fn batching_thread(
// table.sort();

// NOTE: This can only fail if all receivers have been dropped, which simply cannot happen
// as long the batching thread is alive... which is where we currently are.
// as long as the batching thread is alive… which is where we currently are.
tx_table.send(table).ok();

acc.reset();
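
(The `.ok()` above leans on `send` only failing once every receiver is gone; a std-only sketch of that behavior — hypothetical, the real code may use different channel types:)

```rust
use std::sync::mpsc;

fn main() {
    let (tx_table, rx_table) = mpsc::channel::<&str>();

    // Receiver alive: the send succeeds and `.ok()` discards the `Ok`.
    tx_table.send("table").ok();
    assert_eq!(rx_table.recv(), Ok("table"));

    // Once every receiver is dropped, `send` returns `Err` — the case the
    // comment above argues cannot happen while the batching thread runs.
    drop(rx_table);
    assert!(tx_table.send("table").is_err());
}
```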