Commit

DLC 2.0.6. voila...
AlexEMG committed May 3, 2019
1 parent d1bd29c commit 20bb84d
Showing 25 changed files with 716 additions and 326 deletions.
54 changes: 31 additions & 23 deletions README.md
@@ -8,12 +8,12 @@
<img src="https://static1.squarespace.com/static/57f6d51c9f74566f55ecf271/t/5c489e83aa4a992e80059d8c/1548263081887/DLCheader.png?format=1000w" width="100%">
</p>

-DeepLabCut is a toolbox for markerless pose estimation of animals performing various tasks, like [trail tracking](https://vnmurthylab.org/), [reaching in mice](http://www.mousemotorlab.org/) and various Drosophila behaviors during egg-laying (see [Mathis et al.](https://www.nature.com/articles/s41593-018-0209-y) for details). There is, however, nothing specific that makes the toolbox only applicable to these tasks and/or species. The toolbox has also already been successfully applied (by us and others) to [rats](http://www.mousemotorlab.org/deeplabcut), humans, various fish species, bacteria, leeches, various robots, cheetahs, [mouse whiskers](http://www.mousemotorlab.org/deeplabcut) and [race horses](http://www.mousemotorlab.org/deeplabcut). This work utilizes the feature detectors (ResNets + readout layers) of one of the state-of-the-art algorithms for human pose estimation by Insafutdinov et al., called DeeperCut, which inspired the name for our toolbox (see references below).
+DeepLabCut is a toolbox for markerless pose estimation of animals performing various tasks. Originally, we demonstrated its capabilities for [trail tracking](https://vnmurthylab.org/), [reaching in mice](http://www.mousemotorlab.org/) and various Drosophila behaviors during egg-laying (see [Mathis et al.](https://www.nature.com/articles/s41593-018-0209-y) for details). There is, however, nothing specific that makes the toolbox applicable only to these tasks and/or species. The toolbox has already been successfully applied (by us and others) to [rats](http://www.mousemotorlab.org/deeplabcut), humans, various fish species, bacteria, leeches, various robots, cheetahs, [mouse whiskers](http://www.mousemotorlab.org/deeplabcut) and [race horses](http://www.mousemotorlab.org/deeplabcut). This work utilizes the feature detectors (ResNets + readout layers) of DeeperCut, one of the state-of-the-art algorithms for human pose estimation by Insafutdinov et al., which inspired the name for our toolbox (see references below).

-VERSION 2.0: This is the **Python package** of [DeepLabCut](https://www.nature.com/articles/s41593-018-0209-y).
-This package includes graphical user interfaces to label your data, and take you from data set creation to automatic behavioral analysis. It also introduces an active learning framework to efficiently use DeepLabCut on large experimental projects.
+VERSION 2.0: This is the **Python package** of [DeepLabCut](https://www.nature.com/articles/s41593-018-0209-y), released with our Protocols paper (in press; preprint [here](https://www.biorxiv.org/content/10.1101/476531v1)).
+This package includes graphical user interfaces to label your data and takes you from data set creation to automatic behavioral analysis. It also introduces an active learning framework for using DeepLabCut efficiently on large experimental projects, as well as new data augmentation that improves network performance, especially in challenging cases (see [panel b](https://camo.githubusercontent.com/77c92f6b89d44ca758d815bdd7e801247437060b/68747470733a2f2f737461746963312e73717561726573706163652e636f6d2f7374617469632f3537663664353163396637343536366635356563663237312f742f3563336663316336373538643436393530636537656563372f313534373638323338333539352f636865657461682e706e673f666f726d61743d37353077)).

-VERSION 1.0: The initial, Nature Neuroscience version of **DeepLabCut** can be found in the history of git, or here: https://github.com/AlexEMG/DeepLabCut/releases/tag/1.11
+VERSION 1.0: The initial, Nature Neuroscience version of [DeepLabCut](https://www.nature.com/articles/s41593-018-0209-y) can be found in the git history, or here: https://github.com/AlexEMG/DeepLabCut/releases/tag/1.11

<p align="center">
<img src="http://www.people.fas.harvard.edu/~amathis/dlc/MATHIS_2018_odortrail.gif" height="220">
@@ -25,19 +25,25 @@ Please check out [www.mousemotorlab.org/deeplabcut](https://www.mousemotorlab.or


# [Installation](docs/installation.md)
-- How to [install DeeplabCut](docs/installation.md)
+
+How to [install DeepLabCut](docs/installation.md)
# [The DeepLabCut Process](docs/UseOverviewGuide.md)
-- An overview of the pipeline and workflow for project management
+
+An overview of the pipeline and workflow for project management. Please also read the [user-guide preprint!](https://www.biorxiv.org/content/early/2018/11/24/476531)

<p align="center">
-<img src="https://static1.squarespace.com/static/57f6d51c9f74566f55ecf271/t/5c3e47454fa51a420fa8ecdf/1547585367234/flowfig.png?format=750w" width="90%">
+<img src="https://static1.squarespace.com/static/57f6d51c9f74566f55ecf271/t/5cca1d519b747a750d680de5/1556749676166/dlc_overview-01.png?format=1000w" width="95%">
</p>

# [DEMO the code](/examples)
-- We provide several Jupyter Notebooks: one that walks you through a demo dataset to test your installation, and another Notebook to run DeepLabCut from the begining on your own data. We also show you how to use the code in Docker, and on Google Colab. Please also read the [user-guide preprint!](https://www.biorxiv.org/content/early/2018/11/24/476531)
+
+We provide several Jupyter Notebooks: one that walks you through a demo dataset to test your installation, and another to run DeepLabCut from the beginning on your own data. We also show you how to use the code in Docker and on Google Colab. Please also read the [user-guide preprint!](https://www.biorxiv.org/content/early/2018/11/24/476531)
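
For orientation, here is a minimal sketch of the end-to-end workflow those notebooks cover, using the package's top-level API (project name, experimenter, and video paths are illustrative):

```python
import deeplabcut

# Create a project; returns the path to its config.yaml.
config = deeplabcut.create_new_project('reaching-task', 'alex', ['/videos/reach1.avi'])

deeplabcut.extract_frames(config, mode='automatic', algo='kmeans')
deeplabcut.label_frames(config)             # opens the labeling GUI
deeplabcut.create_training_dataset(config)
deeplabcut.train_network(config)
deeplabcut.evaluate_network(config)
deeplabcut.analyze_videos(config, ['/videos/reach1.avi'])
deeplabcut.create_labeled_video(config, ['/videos/reach1.avi'])
```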

# News (and in the news):

- March 2019: DeepLabCut joined [twitter](https://twitter.com/deeplabcut)
- Jan 2019: We joined the Image.sc Forum for user help: [![Image.sc forum](https://img.shields.io/badge/dynamic/json.svg?label=forum&amp;url=https%3A%2F%2Fforum.image.sc%2Ftags%2Fdeeplabcut.json&amp;query=%24.topic_list.tags.0.topic_count&amp;colorB=brightgreen&amp;&amp;suffix=%20topics&amp;logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAYAAAAfSC3RAAABPklEQVR42m3SyyqFURTA8Y2BER0TDyExZ+aSPIKUlPIITFzKeQWXwhBlQrmFgUzMMFLKZeguBu5y+//17dP3nc5vuPdee6299gohUYYaDGOyyACq4JmQVoFujOMR77hNfOAGM+hBOQqB9TjHD36xhAa04RCuuXeKOvwHVWIKL9jCK2bRiV284QgL8MwEjAneeo9VNOEaBhzALGtoRy02cIcWhE34jj5YxgW+E5Z4iTPkMYpPLCNY3hdOYEfNbKYdmNngZ1jyEzw7h7AIb3fRTQ95OAZ6yQpGYHMMtOTgouktYwxuXsHgWLLl+4x++Kx1FJrjLTagA77bTPvYgw1rRqY56e+w7GNYsqX6JfPwi7aR+Y5SA+BXtKIRfkfJAYgj14tpOF6+I46c4/cAM3UhM3JxyKsxiOIhH0IO6SH/A1Kb1WBeUjbkAAAAAElFTkSuQmCC)](https://forum.image.sc/tags/deeplabcut)

- Nov 2018: We posted a detailed guide for DeepLabCut 2.0 on [BioRxiv](https://www.biorxiv.org/content/early/2018/11/24/476531). It also contains a case study for 3D pose estimation in cheetahs.
- Nov 2018: Various (post-hoc) analysis scripts contributed by users (and us) will be gathered at [DLCutils](https://github.com/AlexEMG/DLCutils). Feel free to contribute! In particular, there is a script guiding you through importing a project into the new data format for DLC 2.0.
@@ -75,14 +81,16 @@ importing a project into the new data format for DLC 2.0

This is an actively developed package and we welcome community development and involvement.

-For **help and questions that don't fit a GitHub code issue,** we ask you to post them here: https://forum.image.sc/
+## Support and help:

If you would like to join the [code development community](https://deeplabcut.slack.com), please drop us a note to be invited by emailing: [email protected]

-Please check out the following references for more details:
+For **help and questions that don't fit a GitHub code issue,** we ask you to post them here: https://forum.image.sc/ [![Image.sc forum](https://img.shields.io/badge/dynamic/json.svg?label=forum&amp;url=https%3A%2F%2Fforum.image.sc%2Ftags%2Fdeeplabcut.json&amp;query=%24.topic_list.tags.0.topic_count&amp;colorB=brightgreen&amp;&amp;suffix=%20topics&amp;logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAYAAAAfSC3RAAABPklEQVR42m3SyyqFURTA8Y2BER0TDyExZ+aSPIKUlPIITFzKeQWXwhBlQrmFgUzMMFLKZeguBu5y+//17dP3nc5vuPdee6299gohUYYaDGOyyACq4JmQVoFujOMR77hNfOAGM+hBOQqB9TjHD36xhAa04RCuuXeKOvwHVWIKL9jCK2bRiV284QgL8MwEjAneeo9VNOEaBhzALGtoRy02cIcWhE34jj5YxgW+E5Z4iTPkMYpPLCNY3hdOYEfNbKYdmNngZ1jyEzw7h7AIb3fRTQ95OAZ6yQpGYHMMtOTgouktYwxuXsHgWLLl+4x++Kx1FJrjLTagA77bTPvYgw1rRqY56e+w7GNYsqX6JfPwi7aR+Y5SA+BXtKIRfkfJAYgj14tpOF6+I46c4/cAM3UhM3JxyKsxiOIhH0IO6SH/A1Kb1WBeUjbkAAAAAElFTkSuQmCC)](https://forum.image.sc/tags/deeplabcut)

+## References:
+
+If you use this code or data, please [cite Mathis et al, 2018](https://www.nature.com/articles/s41593-018-0209-y); if you use the Python package (DeepLabCut 2.0), please also cite [Nath, Mathis et al, 2019](https://www.biorxiv.org/content/10.1101/476531v1).
+
+Please check out the following references for more details:

@article{Mathisetal2018,
title={DeepLabCut: markerless pose estimation of user-defined body parts with deep learning},
author = {Alexander Mathis and Pranav Mamidanna and Kevin M. Cury and Taiga Abe and Venkatesh N. Murthy and Mackenzie W. Mathis and Matthias Bethge},
@@ -97,16 +105,7 @@ Please check out the following references:
booktitle = {ECCV'16},
url = {http://arxiv.org/abs/1605.03170}
}

-Our open-access pre-prints:
-
-@article{mathis2018markerless,
-title={Markerless tracking of user-defined features with deep learning},
-author={Mathis, Alexander and Mamidanna, Pranav and Abe, Taiga and Cury, Kevin M and Murthy, Venkatesh N and Mathis, Mackenzie W and Bethge, Matthias},
-journal={arXiv preprint arXiv:1804.03142},
-year={2018}
-}


@article {NathMathis2018,
author = {Nath*, Tanmay and Mathis*, Alexander and Chen, An Chi and Patel, Amir and Bethge, Matthias and Mathis, Mackenzie W},
title = {Using DeepLabCut for 3D markerless pose estimation across species and behaviors},
@@ -118,6 +117,15 @@
journal = {bioRxiv}
}

+Our open-access pre-prints:
+
+@article{mathis2018markerless,
+title={Markerless tracking of user-defined features with deep learning},
+author={Mathis, Alexander and Mamidanna, Pranav and Abe, Taiga and Cury, Kevin M and Murthy, Venkatesh N and Mathis, Mackenzie W and Bethge, Matthias},
+journal={arXiv preprint arXiv:1804.03142},
+year={2018}
+}

@article {MathisWarren2018speed,
author = {Mathis, Alexander and Warren, Richard A.},
title = {On the inference speed and video-compression robustness of DeepLabCut},
@@ -131,4 +139,4 @@

## License:

-This project is licensed under the GNU Lesser General Public License v3.0. Note that the software is provided "as is", without warranty of any kind, express or implied. If you use this code, please [cite us!](https://www.nature.com/articles/s41593-018-0209-y).
+This project is licensed under the GNU Lesser General Public License v3.0. Note that the software is provided "as is", without warranty of any kind, express or implied. If you use this code or data, please [cite us!](https://www.nature.com/articles/s41593-018-0209-y).
17 changes: 14 additions & 3 deletions conda-environments/README.md
@@ -1,5 +1,5 @@
# Quick Anaconda Install for Windows and MacOS!
-### Please use one (or more) of the supplied Anaconda environments for a fast, easy install.
+### Please use one (or more) of the supplied Anaconda environments for a fast and easy installation process.

(0) Be sure you have Anaconda 3 installed! https://www.anaconda.com/distribution/, and get familiar with using "cmd" or terminal!

@@ -20,7 +20,18 @@ or

``conda env create -f dlc-windowsGPU.yaml``

-If you plant to use Jupyter Notebooks, once you are inside the environment you need to run this line one time to link to Jupyter: ``conda install nb_conda``
+If you plan to use Jupyter Notebooks, then once you are inside the environment, you need to run this line one time to link it to Jupyter: ``conda install nb_conda``


-Great, that's it! Now just follow the user guide to acvitate your environment and get DeepLabCut up and running in no time!
+Great, that's it!
+
+Now just follow the user guide to activate your environment and get DeepLabCut up and running in no time!

+Just as a reminder, you can exit the environment at any time and come back later! The environments thus let you manage the multiple packages you might want to install on your computer.

+Once you are in the terminal, type:
+- Windows: ``activate nameoftheenvironment``
+- Linux/MacOS: ``source activate nameoftheenvironment``

+Here are some conda environment management tips: https://kapeli.com/cheat_sheets/Conda.docset/Contents/Resources/Documents/index

4 changes: 2 additions & 2 deletions deeplabcut/__init__.py
@@ -47,12 +47,12 @@
if os.environ.get('Colab', default=False) == 'True':
print("Project loaded in colab-mode. Apparently Colab has trouble loading statsmodels, so the smooting & outlier frame extraction is disabled. Sorry!")
else:
-from deeplabcut.refine_training_dataset import extract_outlier_frames,merge_datasets,filterpredictions
+from deeplabcut.refine_training_dataset import extract_outlier_frames, merge_datasets, filterpredictions

#Direct import for convenience
from deeplabcut.pose_estimation_tensorflow import train_network
from deeplabcut.pose_estimation_tensorflow import evaluate_network
from deeplabcut.pose_estimation_tensorflow import analyze_videos, analyze_time_lapse_frames

-from deeplabcut.utils import create_labeled_video,plot_trajectories, auxiliaryfunctions, convertcsv2h5, analyze_videos_converth5_to_csv
+from deeplabcut.utils import create_labeled_video,plot_trajectories, auxiliaryfunctions, convertcsv2h5, analyze_videos_converth5_to_csv,select_crop_parameters
from deeplabcut.version import __version__, VERSION
4 changes: 3 additions & 1 deletion deeplabcut/create_project/add.py
@@ -83,7 +83,9 @@ def add_new_videos(config,videos,copy_videos=False,coords=None):
# adds the video list to the config.yaml file
for idx,video in enumerate(videos):
try:
-video_path = os.path.realpath(video)
+# On Windows, os.path.realpath does not work and does not link to the real video.
+video_path = str(Path.resolve(Path(video)))
+# video_path = os.path.realpath(video)
except:
video_path = os.readlink(video)
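
The same pattern can be tried in isolation; a minimal sketch with an illustrative path (on Windows, older `os.path.realpath` implementations returned the path without resolving it, which is why `Path.resolve` is used above):

```python
import os
from pathlib import Path

video = 'videos/reach1.avi'  # illustrative relative path
try:
    # Path.resolve canonicalizes the path (and follows symlinks) on all platforms.
    video_path = str(Path(video).resolve())
except OSError:
    # Fall back for links that resolve() cannot handle.
    video_path = os.readlink(video)
print(video_path)
```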

4 changes: 2 additions & 2 deletions deeplabcut/create_project/new.py
@@ -130,8 +130,8 @@ def create_new_project(project, experimenter, videos, working_directory=None, co
for video in videos:
print(video)
try:
-#rel_video_path = os.path.realpath(video)
-rel_video_path=str(Path.resolve(Path(video)))
+# On Windows, os.path.realpath does not work and does not link to the real video. [old: rel_video_path = os.path.realpath(video)]
+rel_video_path = str(Path.resolve(Path(video)))
except:
rel_video_path = os.readlink(str(video))

24 changes: 13 additions & 11 deletions deeplabcut/generate_training_dataset/frame_extraction.py
@@ -33,7 +33,7 @@ def extract_frames(config,mode='automatic',algo='kmeans',crop=False,userfeedback
Note: color information is discarded for kmeans, thus e.g. for camouflaged octopus clustering one might want to change this.
crop : bool, optional
-If this is set to True, the selected frames are cropped based on the ``crop`` parameters in the config.yaml file.
+If this is set to True, a user interface pops up with a frame from which to select the cropping parameters. Left-click and drag to draw a cropping area, then hit the "set cropping parameters" button to save the cropping parameters for the video.
The default is ``False``; if provided it must be either ``True`` or ``False``.
userfeedback: bool, optional
@@ -58,13 +58,13 @@ def extract_frames(config,mode='automatic',algo='kmeans',crop=False,userfeedback
Examples
--------
-for selecting frames automatically with 'kmeans' and want to crop the frames based on the ``crop`` parameters in config.yaml
+for selecting frames automatically with 'kmeans' and cropping the frames.
>>> deeplabcut.extract_frames('/analysis/project/reaching-task/config.yaml','automatic','kmeans',True)
--------
for selecting frames automatically with 'kmeans' and considering the color information.
>>> deeplabcut.extract_frames('/analysis/project/reaching-task/config.yaml','automatic','kmeans',cluster_color=True)
--------
-for selecting frames automatically with 'uniform' and want to crop the frames based on the ``crop`` parameters in config.yaml
+for selecting frames automatically with 'uniform' and cropping the frames.
>>> deeplabcut.extract_frames('/analysis/project/reaching-task/config.yaml','automatic',crop=True)
--------
for selecting frames manually,
@@ -86,6 +86,7 @@ def extract_frames(config,mode='automatic',algo='kmeans',crop=False,userfeedback
import matplotlib.patches as patches
from deeplabcut.utils import frameselectiontools
from deeplabcut.utils import auxiliaryfunctions
+from deeplabcut.utils import select_crop_parameters
from matplotlib.widgets import RectangleSelector

if mode == "manual":
@@ -96,8 +97,7 @@

elif mode == "automatic":
config_file = Path(config).resolve()
-with open(str(config_file), 'r') as ymlfile:
-cfg = yaml.load(ymlfile)
+cfg = auxiliaryfunctions.read_config(config_file)
print("Config file read successfully.")

numframes2pick = cfg['numframes2pick']
@@ -155,9 +155,12 @@
if output_path.exists() :
fig,ax = plt.subplots(1)
# Display the image
-ax.imshow(image)
-cid = RectangleSelector(ax, line_select_callback,drawtype='box', useblit=True,button=[1], minspanx=5, minspany=5,spancoords='pixels',interactive=True)
-plt.show()
+# ax.imshow(image)
+# Call the GUI to select the cropping parameters
+coords = select_crop_parameters.show(config,image)
+# Update the config.yaml file with the current cropping parameters
+cfg['video_sets'][video] = {'crop': ', '.join(map(str, [int(coords[0]), int(coords[1]), int(coords[2]), int(coords[3])]))}
+auxiliaryfunctions.write_config(config_file,cfg)

if len(os.listdir(output_path))==0: #check if empty
#store full frame from random location (good for augmentation)
@@ -199,8 +202,7 @@
numframes2pick=cfg['numframes2pick']+1 # without cropping a full size frame will not be extracted >> thus one more frame should be selected in next stage.

print("Extracting frames based on %s ..." %algo)
-cfg['video_sets'][video] = {'crop': ', '.join(map(str, [int(coords[0]), int(coords[1]), int(coords[2]), int(coords[3])]))}
-auxiliaryfunctions.write_config(config_file,cfg)

if algo =='uniform': #extract n-1 frames (0 was already stored)
if opencv:
frames2pick=frameselectiontools.UniformFramescv2(cap,numframes2pick-1,start,stop)
@@ -256,4 +258,4 @@
global coords
new_x1, new_y1 = eclick.xdata, eclick.ydata
new_x2, new_y2 = erelease.xdata, erelease.ydata
-coords = [str(int(new_x1)),str(int(new_x2)),str(int(new_y1)),str(int(new_y2))]
\ No newline at end of file
+coords = [str(int(new_x1)),str(int(new_x2)),str(int(new_y1)),str(int(new_y2))]
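
For context on the cropping changes above: the selected coordinates are stored in config.yaml as a comma-separated "x1, x2, y1, y2" string per video (the order set by `line_select_callback`). A minimal sketch of reading such an entry back, with illustrative paths and values:

```python
from deeplabcut.utils import auxiliaryfunctions

cfg = auxiliaryfunctions.read_config('/analysis/project/reaching-task/config.yaml')
# Each entry under 'video_sets' stores 'crop' as "x1, x2, y1, y2" (e.g. "0, 640, 0, 480").
crop = cfg['video_sets']['/videos/reach1.avi']['crop']
x1, x2, y1, y2 = (int(v) for v in crop.split(','))
print(x1, x2, y1, y2)
```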
