maint: remove stray spaces (pydata#4504)
* remove stray spaces

* black

* whats new

* Apply suggestions from code review
mathause authored Oct 12, 2020
1 parent 569a4da commit 98e9692
Showing 31 changed files with 57 additions and 69 deletions.
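For context: the "stray spaces" come from Python's implicit concatenation of adjacent string literals. Long messages were once wrapped across two lines, black later joined them onto one line, and the now-pointless split makes it easy to misplace a space at the seam. A minimal sketch (the example message is the one from xarray/backends/api.py in this diff):

    # Before black: a long message wrapped across two lines.
    msg = (
        "cannot use mode='w' when writing multiple "
        "datasets to the same path"
    )
    # After black joins the lines, the implicit concatenation survives:
    msg = "cannot use mode='w' when writing multiple " "datasets to the same path"
    # Both forms evaluate to the same string, but the split hides seam mistakes:
    assert "multiple " "datasets" == "multiple datasets"
    assert "'w'," "'w-'" == "'w','w-'"  # a missing space goes unnoticed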
2 changes: 1 addition & 1 deletion conftest.py
@@ -19,7 +19,7 @@ def pytest_runtest_setup(item):
         pytest.skip("set --run-flaky option to run flaky tests")
     if "network" in item.keywords and not item.config.getoption("--run-network-tests"):
         pytest.skip(
-            "set --run-network-tests to run test requiring an " "internet connection"
+            "set --run-network-tests to run test requiring an internet connection"
         )
2 changes: 1 addition & 1 deletion doc/examples/apply_ufunc_vectorize_1d.ipynb
@@ -333,7 +333,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Now our function currently only works on one vector of data which is not so useful given our 3D dataset.\n",
+    "Now our function currently only works on one vector of data which is not so useful given our 3D dataset.\n",
     "Let's try passing the whole dataset. We add a `print` statement so we can see what our function receives."
    ]
   },
2 changes: 1 addition & 1 deletion doc/plotting.rst
@@ -106,7 +106,7 @@ The simplest way to make a plot is to call the :py:func:`DataArray.plot()` metho
     @savefig plotting_1d_simple.png width=4in
     air1d.plot()
 
-xarray uses the coordinate name along with metadata ``attrs.long_name``, ``attrs.standard_name``, ``DataArray.name`` and ``attrs.units`` (if available) to label the axes. The names ``long_name``, ``standard_name`` and ``units`` are copied from the `CF-conventions spec <http://cfconventions.org/Data/cf-conventions/cf-conventions-1.7/build/ch03s03.html>`_. When choosing names, the order of precedence is ``long_name``, ``standard_name`` and finally ``DataArray.name``. The y-axis label in the above plot was constructed from the ``long_name`` and ``units`` attributes of ``air1d``.
+xarray uses the coordinate name along with metadata ``attrs.long_name``, ``attrs.standard_name``, ``DataArray.name`` and ``attrs.units`` (if available) to label the axes. The names ``long_name``, ``standard_name`` and ``units`` are copied from the `CF-conventions spec <http://cfconventions.org/Data/cf-conventions/cf-conventions-1.7/build/ch03s03.html>`_. When choosing names, the order of precedence is ``long_name``, ``standard_name`` and finally ``DataArray.name``. The y-axis label in the above plot was constructed from the ``long_name`` and ``units`` attributes of ``air1d``.
 
 .. ipython:: python
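A short illustration of the labelling rule described in the hunk above (a sketch, not part of this commit; it assumes the ``air_temperature`` tutorial dataset can be downloaded):

    import xarray as xr

    air = xr.tutorial.open_dataset("air_temperature").air
    air1d = air.isel(lat=10, lon=10)
    print(air1d.attrs["long_name"], air1d.attrs["units"])
    air1d.plot()  # the y-axis label is built from long_name and units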
2 changes: 1 addition & 1 deletion doc/reshaping.rst
@@ -237,7 +237,7 @@ of multi-index levels:
     mda.reorder_levels(x=["wavenumber", "band"])
 
 As of xarray v0.9 coordinate labels for each dimension are optional.
-You can also use ``.set_index`` / ``.reset_index`` to add / remove
+You can also use ``.set_index`` / ``.reset_index`` to add / remove
 labels for one or several dimensions:
 
 .. ipython:: python
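A small sketch of the ``.set_index`` / ``.reset_index`` round trip mentioned above (illustrative data, not part of this commit):

    import numpy as np
    import xarray as xr

    da = xr.DataArray(
        np.arange(4),
        dims="x",
        coords={"band": ("x", ["a", "a", "b", "b"]), "wavenumber": ("x", [1, 2, 1, 2])},
    )
    mda = da.set_index(x=["band", "wavenumber"])  # add a MultiIndex on "x"
    flat = mda.reset_index("x")                   # remove the labels again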
9 changes: 5 additions & 4 deletions doc/whats-new.rst
@@ -52,7 +52,8 @@ Documentation
 
 Internal Changes
 ~~~~~~~~~~~~~~~~
-
+- Removed stray spaces that stem from black removing new lines (:pull:`4504`).
+  By `Mathias Hauser <https://github.com/mathause>`_.
 
 .. _whats-new.0.16.1:
 
@@ -178,7 +179,7 @@ Internal Changes
     older than 2.9)
   - all versions of other packages released in the last 12 months
 
-  All are up from 6 months (:issue:`4295`)
+  All are up from 6 months (:issue:`4295`)
   `Guido Imperiale <https://github.com/crusaderky>`_.
 - Use :py:func:`dask.array.apply_gufunc` instead of :py:func:`dask.array.blockwise` in
   :py:func:`xarray.apply_ufunc` when using ``dask='parallelized'``. (:pull:`4060`, :pull:`4391`, :pull:`4392`)
@@ -2516,7 +2517,7 @@ Breaking changes
 - A new resampling interface to match pandas' groupby-like API was added to
   :py:meth:`Dataset.resample` and :py:meth:`DataArray.resample`
   (:issue:`1272`). :ref:`Timeseries resampling <resampling>` is
-  fully supported for data with arbitrary dimensions as is both downsampling
+  fully supported for data with arbitrary dimensions as is both downsampling
   and upsampling (including linear, quadratic, cubic, and spline interpolation).
 
   Old syntax:
@@ -3647,7 +3648,7 @@ Bug fixes
 - Restore checks for shape consistency between data and coordinates in the
   DataArray constructor (:issue:`758`).
 - Single dimension variables no longer transpose as part of a broader
-  ``.transpose``. This behavior was causing ``pandas.PeriodIndex`` dimensions
+  ``.transpose``. This behavior was causing ``pandas.PeriodIndex`` dimensions
   to lose their type (:issue:`749`)
 - :py:class:`~xarray.Dataset` labels remain as their native type on ``.to_dataset``.
   Previously they were coerced to strings (:issue:`745`)
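For reference, the groupby-like resampling API described in the changelog entry above works like this (a sketch, not part of this commit):

    import numpy as np
    import pandas as pd
    import xarray as xr

    da = xr.DataArray(
        np.arange(365.0),
        dims="time",
        coords={"time": pd.date_range("2000-01-01", periods=365)},
    )
    monthly = da.resample(time="1M").mean()                        # downsampling
    upsampled = monthly.resample(time="1D").interpolate("linear")  # upsampling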
2 changes: 1 addition & 1 deletion doc/why-xarray.rst
@@ -49,7 +49,7 @@ Core data structures
 --------------------
 
 xarray has two core data structures, which build upon and extend the core
-strengths of NumPy_ and pandas_. Both data structures are fundamentally N-dimensional:
+strengths of NumPy_ and pandas_. Both data structures are fundamentally N-dimensional:
 
 - :py:class:`~xarray.DataArray` is our implementation of a labeled, N-dimensional
   array. It is an N-D generalization of a :py:class:`pandas.Series`. The name
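A minimal construction of the two data structures described above (a sketch, not part of this commit):

    import numpy as np
    import xarray as xr

    temperature = xr.DataArray(
        np.random.rand(2, 3),
        dims=("x", "y"),
        coords={"x": [10, 20]},
        name="temperature",
    )
    ds = xr.Dataset({"temperature": temperature})  # dict-like container of DataArrays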
4 changes: 2 additions & 2 deletions xarray/backends/api.py
@@ -87,7 +87,7 @@ def _get_default_engine_grib():
     if msgs:
         raise ValueError(" or\n".join(msgs))
     else:
-        raise ValueError("PyNIO or cfgrib is required for accessing " "GRIB files")
+        raise ValueError("PyNIO or cfgrib is required for accessing GRIB files")
 
 
 def _get_default_engine_gz():
@@ -1228,7 +1228,7 @@ def save_mfdataset(
     """
     if mode == "w" and len(set(paths)) < len(paths):
         raise ValueError(
-            "cannot use mode='w' when writing multiple " "datasets to the same path"
+            "cannot use mode='w' when writing multiple datasets to the same path"
         )
 
     for obj in datasets:
2 changes: 1 addition & 1 deletion xarray/backends/h5netcdf_.py
@@ -280,7 +280,7 @@ def prepare_variable(
             and "compression_opts" in encoding
             and encoding["complevel"] != encoding["compression_opts"]
         ):
-            raise ValueError("'complevel' and 'compression_opts' encodings " "mismatch")
+            raise ValueError("'complevel' and 'compression_opts' encodings mismatch")
         complevel = encoding.pop("complevel", 0)
         if complevel != 0:
             encoding.setdefault("compression_opts", complevel)
3 changes: 2 additions & 1 deletion xarray/backends/netcdf3.py
@@ -20,7 +20,8 @@
     "uint",
     "int64",
     "uint64",
-    "float" "real",
+    "float",
+    "real",
     "double",
     "bool",
     "string",
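The hunk above is one place where the merge is more than cosmetic: without the comma, the two adjacent literals were silently concatenated into a single entry. A sketch of the effect (illustrative; the real set in netcdf3.py has more entries):

    before = {"float" "real", "double"}   # implicit concatenation: {"floatreal", "double"}
    after = {"float", "real", "double"}
    print("real" in before)  # False -- "real" was never actually in the set
    print("real" in after)   # True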
8 changes: 2 additions & 6 deletions xarray/backends/scipy_.py
@@ -70,9 +70,7 @@ def _open_scipy_netcdf(filename, mode, mmap, version):
     except TypeError as e:
         # TODO: gzipped loading only works with NetCDF3 files.
         if "is not a valid NetCDF 3 file" in e.message:
-            raise ValueError(
-                "gzipped file loading only supports " "NetCDF 3 files."
-            )
+            raise ValueError("gzipped file loading only supports NetCDF 3 files.")
         else:
             raise
 
@@ -110,9 +108,7 @@ def __init__(
         self, filename_or_obj, mode="r", format=None, group=None, mmap=None, lock=None
     ):
         if group is not None:
-            raise ValueError(
-                "cannot save to a group with the " "scipy.io.netcdf backend"
-            )
+            raise ValueError("cannot save to a group with the scipy.io.netcdf backend")
 
         if format is None or format == "NETCDF3_64BIT":
             version = 2
2 changes: 1 addition & 1 deletion xarray/backends/zarr.py
@@ -207,7 +207,7 @@ def extract_zarr_variable_encoding(variable, raise_on_invalid=False, name=None):
         invalid = [k for k in encoding if k not in valid_encodings]
         if invalid:
             raise ValueError(
-                "unexpected encoding parameters for zarr " "backend: %r" % invalid
+                "unexpected encoding parameters for zarr backend: %r" % invalid
             )
     else:
         for k in list(encoding):
4 changes: 2 additions & 2 deletions xarray/coding/cftime_offsets.py
@@ -102,7 +102,7 @@ def __sub__(self, other):
         import cftime
 
         if isinstance(other, cftime.datetime):
-            raise TypeError("Cannot subtract a cftime.datetime " "from a time offset.")
+            raise TypeError("Cannot subtract a cftime.datetime from a time offset.")
         elif type(other) == type(self):
             return type(self)(self.n - other.n)
         else:
@@ -122,7 +122,7 @@ def __radd__(self, other):
 
     def __rsub__(self, other):
         if isinstance(other, BaseCFTimeOffset) and type(self) != type(other):
-            raise TypeError("Cannot subtract cftime offsets of differing " "types")
+            raise TypeError("Cannot subtract cftime offsets of differing types")
         return -self + other
 
     def __apply__(self):
8 changes: 3 additions & 5 deletions xarray/core/accessor_str.py
@@ -496,7 +496,7 @@ def zfill(self, width):
         Strings in the array are padded with '0' characters on the
         left of the string to reach a total string length `width`. Strings
-        in the array with length greater or equal to `width` are unchanged.
+        in the array with length greater or equal to `width` are unchanged.
 
         Parameters
         ----------
@@ -879,7 +879,7 @@ def replace(self, pat, repl, n=-1, case=None, flags=0, regex=True):
         if is_compiled_re:
             if (case is not None) or (flags != 0):
                 raise ValueError(
-                    "case and flags cannot be set" " when pat is a compiled regex"
+                    "case and flags cannot be set when pat is a compiled regex"
                 )
         else:
             # not a compiled regex
@@ -903,9 +903,7 @@ def replace(self, pat, repl, n=-1, case=None, flags=0, regex=True):
                     "pattern with regex=False"
                 )
             if callable(repl):
-                raise ValueError(
-                    "Cannot use a callable replacement when " "regex=False"
-                )
+                raise ValueError("Cannot use a callable replacement when regex=False")
             f = lambda x: x.replace(pat, repl, n)
             return self._apply(f)
2 changes: 1 addition & 1 deletion xarray/core/common.py
@@ -626,7 +626,7 @@ def pipe(
             func, target = func
             if target in kwargs:
                 raise ValueError(
-                    "%s is both the pipe target and a keyword " "argument" % target
+                    "%s is both the pipe target and a keyword argument" % target
                 )
             kwargs[target] = self
             return func(*args, **kwargs)
4 changes: 2 additions & 2 deletions xarray/core/dataarray.py
@@ -2565,7 +2565,7 @@ def from_dict(cls, d: dict) -> "DataArray":
             }
 
         where "t" is the name of the dimesion, "a" is the name of the array,
-        and x and t are lists, numpy.arrays, or pandas objects.
+        and x and t are lists, numpy.arrays, or pandas objects.
 
         Parameters
         ----------
@@ -2949,7 +2949,7 @@ def roll(
             Positive offsets roll to the right; negative offsets roll to the
             left.
         roll_coords : bool
-            Indicates whether to roll the coordinates by the offset
+            Indicates whether to roll the coordinates by the offset
             The current default of roll_coords (None, equivalent to True) is
             deprecated and will change to False in a future version.
             Explicitly pass roll_coords to silence the warning.
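A short round trip through the dictionary format shown in the ``from_dict`` docstring above (a sketch, not part of this commit):

    import xarray as xr

    d = {
        "dims": ("t",),
        "data": [10, 20, 30],
        "coords": {"t": {"dims": "t", "data": [0, 1, 2]}},
        "name": "a",
    }
    da = xr.DataArray.from_dict(d)
    assert da.to_dict()["data"] == [10, 20, 30]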
12 changes: 5 additions & 7 deletions xarray/core/dataset.py
@@ -197,7 +197,7 @@ def calculate_dimensions(variables: Mapping[Hashable, Variable]) -> Dict[Hashabl
         for dim, size in zip(var.dims, var.shape):
             if dim in scalar_vars:
                 raise ValueError(
-                    "dimension %r already exists as a scalar " "variable" % dim
+                    "dimension %r already exists as a scalar variable" % dim
                 )
             if dim not in dims:
                 dims[dim] = size
@@ -286,7 +286,7 @@ def merge_indexes(
     new_variables = {k: v for k, v in variables.items() if k not in vars_to_remove}
     new_variables.update(vars_to_replace)
 
-    # update dimensions if necessary GH: 3512
+    # update dimensions if necessary, GH: 3512
     for k, v in new_variables.items():
         if any(d in dims_to_replace for d in v.dims):
             new_dims = [dims_to_replace.get(d, d) for d in v.dims]
@@ -1314,7 +1314,7 @@ def __setitem__(self, key: Hashable, value) -> None:
         """
         if utils.is_dict_like(key):
             raise NotImplementedError(
-                "cannot yet use a dictionary as a key " "to set Dataset values"
+                "cannot yet use a dictionary as a key to set Dataset values"
             )
 
         self.update({key: value})
@@ -1673,7 +1673,7 @@ def to_zarr(
         if mode not in ["w", "w-", "a"]:
             # TODO: figure out how to handle 'r+'
             raise ValueError(
-                "The only supported options for mode are 'w'," "'w-' and 'a'."
+                "The only supported options for mode are 'w', 'w-' and 'a'."
            )
         from ..backends.api import to_zarr
 
@@ -5107,9 +5107,7 @@ def diff(self, dim, n=1, label="upper"):
         elif label == "lower":
             kwargs_new = kwargs_start
         else:
-            raise ValueError(
-                "The 'label' argument has to be either " "'upper' or 'lower'"
-            )
+            raise ValueError("The 'label' argument has to be either 'upper' or 'lower'")
 
         variables = {}
2 changes: 1 addition & 1 deletion xarray/core/indexes.py
@@ -100,7 +100,7 @@ def isel_variable_and_index(
 
     if len(variable.dims) > 1:
         raise NotImplementedError(
-            "indexing multi-dimensional variable with indexes is not " "supported yet"
+            "indexing multi-dimensional variable with indexes is not supported yet"
         )
 
     new_variable = variable.isel(indexers)
2 changes: 1 addition & 1 deletion xarray/core/indexing.py
@@ -121,7 +121,7 @@ def convert_label_indexer(index, label, index_name="", method=None, tolerance=No
     if isinstance(label, slice):
         if method is not None or tolerance is not None:
             raise NotImplementedError(
-                "cannot use ``method`` argument if any indexers are " "slice objects"
+                "cannot use ``method`` argument if any indexers are slice objects"
             )
         indexer = index.slice_indexer(
             _sanitize_slice_element(label.start),
2 changes: 1 addition & 1 deletion xarray/core/options.py
@@ -71,7 +71,7 @@ def _get_keep_attrs(default):
         return global_choice
     else:
         raise ValueError(
-            "The global option keep_attrs must be one of" " True, False or 'default'."
+            "The global option keep_attrs must be one of True, False or 'default'."
         )
 
 
2 changes: 1 addition & 1 deletion xarray/core/resample.py
@@ -270,7 +270,7 @@ def __init__(self, *args, dim=None, resample_dim=None, **kwargs):
 
     def map(self, func, args=(), shortcut=None, **kwargs):
         """Apply a function over each Dataset in the groups generated for
-        resampling and concatenate them together into a new Dataset.
+        resampling and concatenate them together into a new Dataset.
 
         `func` is called like `func(ds, *args, **kwargs)` for each dataset `ds`
         in this group.
2 changes: 1 addition & 1 deletion xarray/core/rolling_exp.py
@@ -31,7 +31,7 @@ def _get_center_of_mass(comass, span, halflife, alpha):
     """
     valid_count = count_not_none(comass, span, halflife, alpha)
     if valid_count > 1:
-        raise ValueError("comass, span, halflife, and alpha " "are mutually exclusive")
+        raise ValueError("comass, span, halflife, and alpha are mutually exclusive")
 
     # Convert to center of mass; domain checks ensure 0 < alpha <= 1
     if comass is not None:
6 changes: 3 additions & 3 deletions xarray/core/variable.py
@@ -1060,7 +1060,7 @@ def _as_sparse(self, sparse_format=_default, fill_value=dtypes.NA):
         """
         import sparse
 
-        # TODO what to do if dask-backended?
+        # TODO: what to do if dask-backended?
         if fill_value is dtypes.NA:
             dtype, fill_value = dtypes.maybe_promote(self.dtype)
         else:
@@ -1286,7 +1286,7 @@ def pad(
         if isinstance(end_values, dict):
             end_values = self._pad_options_dim_to_index(end_values)
 
-        # workaround for bug in Dask's default value of stat_length https://github.com/dask/dask/issues/5303
+        # workaround for bug in Dask's default value of stat_length https://github.com/dask/dask/issues/5303
         if stat_length is None and mode in ["maximum", "mean", "median", "minimum"]:
             stat_length = [(n, n) for n in self.data.shape]  # type: ignore
 
@@ -2135,7 +2135,7 @@ def func(self, other):
                 raise TypeError("cannot add a Dataset to a Variable in-place")
             self_data, other_data, dims = _broadcast_compat_data(self, other)
             if dims != self.dims:
-                raise ValueError("dimensions cannot change for in-place " "operations")
+                raise ValueError("dimensions cannot change for in-place operations")
             with np.errstate(all="ignore"):
                 self.values = f(self_data, other_data)
             return self
8 changes: 3 additions & 5 deletions xarray/plot/dataset_plot.py
@@ -38,7 +38,7 @@ def _infer_meta_data(ds, x, y, hue, hue_style, add_guide):
 
     if not hue_is_numeric and (hue_style == "continuous"):
         raise ValueError(
-            "Cannot create a colorbar for a non numeric" " coordinate: " + hue
+            f"Cannot create a colorbar for a non numeric coordinate: {hue}"
         )
 
     if add_guide is None or add_guide is True:
@@ -54,9 +54,7 @@ def _infer_meta_data(ds, x, y, hue, hue_style, add_guide):
             add_colorbar = False
 
     if hue_style is not None and hue_style not in ["discrete", "continuous"]:
-        raise ValueError(
-            "hue_style must be either None, 'discrete' " "or 'continuous'."
-        )
+        raise ValueError("hue_style must be either None, 'discrete' or 'continuous'.")
 
     if hue:
         hue_label = label_from_attrs(ds[hue])
@@ -131,7 +129,7 @@ def _parse_size(data, norm):
     elif isinstance(norm, tuple):
         norm = mpl.colors.Normalize(*norm)
     elif not isinstance(norm, mpl.colors.Normalize):
-        err = "``size_norm`` must be None, tuple, " "or Normalize object."
+        err = "``size_norm`` must be None, tuple, or Normalize object."
         raise ValueError(err)
 
     norm.clip = True
2 changes: 1 addition & 1 deletion xarray/plot/facetgrid.py
@@ -131,7 +131,7 @@ def __init__(
             ncol = len(data[col])
             nfacet = nrow * ncol
             if col_wrap is not None:
-                warnings.warn("Ignoring col_wrap since both col and row " "were passed")
+                warnings.warn("Ignoring col_wrap since both col and row were passed")
         elif row and not col:
             single_group = row
         elif not row and col:
4 changes: 2 additions & 2 deletions xarray/plot/plot.py
@@ -357,7 +357,7 @@ def step(darray, *args, where="pre", drawstyle=None, ds=None, **kwargs):
         Additional arguments following :py:func:`xarray.plot.line`
     """
     if where not in {"pre", "post", "mid"}:
-        raise ValueError("'where' argument to step must be " "'pre', 'post' or 'mid'")
+        raise ValueError("'where' argument to step must be 'pre', 'post' or 'mid'")
 
     if ds is not None:
         if drawstyle is None:
@@ -876,7 +876,7 @@ def imshow(x, y, z, ax, **kwargs):
 
     if x.ndim != 1 or y.ndim != 1:
         raise ValueError(
-            "imshow requires 1D coordinates, try using " "pcolormesh or contour(f)"
+            "imshow requires 1D coordinates, try using pcolormesh or contour(f)"
         )
 
     # Centering the pixels- Assumes uniform spacing