Datashader is a graphics pipeline system for automating the process of creating meaningful representations of large amounts of data. Datashader breaks the creation of images into 3 main steps:
- Projection: Each record is projected into zero or more bins, based on a specified glyph.
- Aggregation: Reductions are computed for each bin, compressing the potentially large dataset into a much smaller aggregate.
- Transformation: These aggregates are then further processed to create an image.
Using this very general pipeline, many interesting data visualizations can be created in a performant and scalable way. Datashader contains tools for easily creating these pipelines in a composable manner, using only a few lines of code. Datashader can be used on its own, but it is also designed to work as a pre-processing stage in a plotting library, allowing that library to work with much larger datasets than it would otherwise.
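As a minimal sketch of these three steps, assuming a pandas DataFrame with hypothetical columns x and y and using the current Datashader API, a complete pipeline can look like this:

import numpy as np
import pandas as pd
import datashader as ds
import datashader.transfer_functions as tf

# Hypothetical example data: 100,000 normally distributed (x, y) points
df = pd.DataFrame({'x': np.random.normal(size=100000),
                   'y': np.random.normal(size=100000)})

canvas = ds.Canvas(plot_width=400, plot_height=400)  # defines the 400x400 grid of bins
agg = canvas.points(df, 'x', 'y', ds.count())        # projection and aggregation: count of points per bin
img = tf.shade(agg)                                  # transformation: map the per-bin counts to an image

Here tf.shade returns an image object (an xarray-based Image) that displays directly in a Jupyter notebook; the column names and data above are placeholders, not part of Datashader itself.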
Datashader is available on most platforms using the conda package manager, from the bokeh channel:
conda install -c bokeh datashader
Alternatively, you can manually install from the repository:
git clone https://github.com/bokeh/datashader.git
cd datashader
conda install -c bokeh --file requirements.txt
python setup.py install
Datashader is not currently provided on pip/PyPI, to avoid the broken or low-performance installations that result when C/C++ binary dependencies such as LLVM (required by Numba) are not tracked properly.
There are many demonstrations and case studies available in the examples directory of the GitHub repository, which are viewable as rendered notebooks on Anaconda Cloud. See the examples README for instructions on obtaining local copies of the examples, along with the data and libraries they require, so that you can use them as starting points for your own work.
Additional resources are linked from the Datashader documentation, including API documentation as well as papers and talks about the approach.