---
output: dynbenchmark::github_markdown_nested
editor_options:
chunk_output_type: console
---
<!-- README.md is generated from README.Rmd. Please edit that file -->
```{r setup, echo = FALSE}
knitr::opts_chunk$set(warning = FALSE, message = FALSE, error = FALSE, echo = FALSE)
```
```{r, warning=FALSE, message=FALSE}
library(tidyverse)
library(dynbenchmark)
```
[![Build Status](https://api.travis-ci.org/dynverse/dynbenchmark.svg)](https://travis-ci.org/dynverse/dynbenchmark)
![Lifecycle](https://img.shields.io/badge/lifecycle-experimental-orange.svg) [![doi](https://zenodo.org/badge/doi/10.1038/s41587-019-0071-9.svg)](https://doi.org/10.1038/s41587-019-0071-9) [**ℹ️ Tutorials**](https://dynverse.org) <br><img src="package/man/figures/logo.png" align="right" width="125" height="144" />
# Benchmarking trajectory inference methods
This repo contains the scripts to reproduce the manuscript
```{r}
get_altmetric_badge <- function() {
  curl::curl_fetch_memory("https://api.altmetric.com/v1/doi/10.1101/276907")$content %>%
    rawToChar() %>%
    jsonlite::fromJSON() %>%
    purrr::pluck("images") %>%
    purrr::pluck("medium") %>%
    paste0("&style=bar")
}
```
> A comparison of single-cell trajectory inference methods
<strong> Wouter Saelens\* </strong> <a href='https://orcid.org/0000-0002-7114-6248'><img src='https://github.com/dynverse/dynmethods/raw/master/man/figures/orcid_logo.svg?sanitize=true' height='16'></a> <a href='https://github.com/zouter'><img src='https://github.com/dynverse/dynmethods/raw/master/man/figures/github_logo.png' height='16'></a>,
<strong> Robrecht Cannoodt\* </strong> <a href='https://orcid.org/0000-0003-3641-729X'><img src='https://github.com/dynverse/dynmethods/raw/master/man/figures/orcid_logo.svg?sanitize=true' height='16'></a> <a href='https://github.com/rcannood'><img src='https://github.com/dynverse/dynmethods/raw/master/man/figures/github_logo.png' height='16'></a>,
Helena Todorov <a href='https://github.com/Helena-todd'><img src='https://github.com/dynverse/dynmethods/raw/master/man/figures/github_logo.png' height='16'></a>,
<em> Yvan Saeys </em> <a href='https://github.com/saeyslab'><img src='https://github.com/dynverse/dynmethods/raw/master/man/figures/github_logo.png' height='16'></a>
[doi:10.1038/s41587-019-0071-9](https://doi.org/10.1038/s41587-019-0071-9) [![altmetric](`r get_altmetric_badge()`)](https://altmetric.com/details/33972849)
## Dynverse
Under the hood, dynbenchmark makes use of most dynverse packages for running the methods, comparing them to a gold standard, and plotting the output. Check out **[dynverse.org](https://dynverse.org)** for an overview!
## Experiments
From start to finish, the repository is divided into several experiments, each with its own scripts and results. Each experiment is documented in a GitHub README and can thus be easily explored by navigating to the appropriate folder:
```{r results = 'asis'}
extract_scripts_documentation("scripts", recursive = FALSE) %>%
  mutate(
    scripts = location,
    results = map_chr(scripts, dynbenchmark::link_to_results)
  ) %>%
  mutate(
    ix = case_when(is.na(ix) ~ "", TRUE ~ as.character(ix)),
    id = label_long(id),
    scripts = paste0("[\U1F4C4\U27A1](", scripts, ")"),
    results = case_when(is.na(results) ~ "", TRUE ~ paste0("[\U1F4CA\U27A1](", results, ")"))
  ) %>%
  select(`\\#` = ix, id, scripts, results) %>%
  knitr::kable()
```
We also have several additional subfolders:
* [Manuscript](manuscript): Source files for producing the manuscript.
* [Package](package): An R package with several helper functions for organizing the benchmark and rendering the manuscript.
* [Raw](raw): Files generated by hand, such as figures and spreadsheets.
* [Derived](derived): Intermediate data files produced by the scripts. These files are not committed to git.
## Guidelines
Based on the results of the benchmark, we provide context-dependent user guidelines, [available as a shiny app](https://github.com/dynverse/dynguidelines). This app is integrated within the [dyno pipeline](https://github.com/dynverse/dyno), which also includes the wrappers used in the benchmarking and other packages for visualising and interpreting the results.
[![dynguidelines](https://github.com/dynverse/dynguidelines/raw/master/man/figures/demo.gif)](https://github.com/dynverse/dynguidelines)
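The app can also be launched locally from R. A minimal sketch, assuming the dynguidelines package is installed:

``` r
# a minimal sketch, assuming dynguidelines is installed, e.g. via
# devtools::install_github("dynverse/dynguidelines")
library(dynguidelines)

# launch the interactive guidelines app in your browser
guidelines_shiny()
```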
## Datasets
The benchmarking pipeline generates (and uses) the following datasets:
* **Gold standard single-cell datasets**, both real and synthetic, used to evaluate the trajectory inference methods [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.1443566.svg)](https://doi.org/10.5281/zenodo.1443566)
```{r, echo = FALSE, message=FALSE}
set.seed(3)
datasets <- list_datasets() %>% group_by(source) %>% sample_n(1) %>% pull(id) %>% load_datasets()
datasets$plot_dimred <- mapdf(datasets, ~plot_dimred(., dimred = dyndimred::dimred_landmark_mds) + ggtitle(.$source) + theme(plot.title = element_text(size = 10)))
p <- patchwork::wrap_plots(datasets$plot_dimred, ncol = nrow(datasets))
ggplot2::ggsave("package/man/figures/datasets.png", p, width = nrow(datasets) * 3, height = 3)
```
![datasets](package/man/figures/datasets.png)
* **The performance of methods** used for the [results overview figure](`r dynbenchmark::link_to_results("scripts/08-summary/results_suppfig.pdf")`) and the [dynguidelines](http://guidelines.dynverse.org) app.
* **General information about trajectory inference methods**, available as a data frame in `dynmethods::methods`.
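For example, you can get a quick overview of this data frame from R. A minimal sketch, assuming dynmethods and dplyr are installed (the exact columns depend on the dynmethods version):

``` r
# glimpse the metadata of all wrapped trajectory inference methods
dplyr::glimpse(dynmethods::methods)
```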
## Methods
All methods are wrapped as both Docker and Singularity containers, which can easily be run using [*dyn*methods](https://github.com/dynverse/dynmethods).
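As an illustration, a single wrapped method can be run on a toy dataset through dynwrap. This is a minimal sketch, not part of the benchmark itself; it assumes dyntoy, dynmethods and dynplot are installed, a container backend is running, and `ti_slingshot()` stands in for any of the wrappers:

``` r
library(dynwrap)

# simulate a small toy dataset (assumes the dyntoy package is installed)
dataset <- dyntoy::generate_dataset(model = "linear", num_cells = 200)

# run one wrapped method; its container is pulled automatically on first use
model <- infer_trajectory(dataset, dynmethods::ti_slingshot())

# visualise the inferred trajectory (assumes dynplot is installed)
dynplot::plot_graph(model)
```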
## Installation
dynbenchmark has been tested using R version 3.5.1 on Linux. While running the methods also works on Windows and macOS (see [dyno](https://github.com/dynverse/dyno)), running the benchmark itself is currently not supported on these operating systems, as many of the commands are Linux-specific.
In R, you can install the dependencies of dynbenchmark from github using:
``` r
# install.packages("devtools")
devtools::install_github("dynverse/dynbenchmark/package")
```
This will install several other "dynverse" packages. Depending on the number of R packages already installed, this installation should take approximately 5 to 30 minutes.
On Linux, you will need to install udunits and ImageMagick:
* Debian / Ubuntu / Linux Mint: `sudo apt-get install libudunits2-dev imagemagick`
* Fedora / CentOS / RHEL: `sudo dnf install udunits2-devel ImageMagick-c++-devel`
[Docker](https://docs.docker.com/install) or [Singularity](https://www.sylabs.io/guides/3.0/user-guide/) (version ≥ 3.0) has to be installed to run TI methods. We suggest Docker on Windows and macOS, while both Docker and Singularity work well on Linux. Singularity is strongly recommended when running the methods on shared computing clusters.
On Windows 10 you can install [Docker CE](https://store.docker.com/editions/community/docker-ce-desktop-windows); older Windows installations require the [Docker Toolbox](https://docs.docker.com/toolbox/overview/).
You can test whether Docker is correctly installed by running:
```{r, message = TRUE, echo = TRUE}
dynwrap::test_docker_installation(detailed = TRUE)
```
The same goes for Singularity:
```{r, message = TRUE, echo = TRUE}
dynwrap::test_singularity_installation(detailed = TRUE)
```
These commands will give helpful tips if some parts of the installation are missing.
## Dynverse dependencies
<!-- Generated by "update_dependency_graphs.R" in the main dynverse repo -->
![](package/man/figures/dependencies.png)