
OBELISK: one binary extremely large and inflecting sparse kernel

(pytorch v1.0 implementation)

This repository contains code for the Medical Image Analysis (MIDL Special Issue) paper: OBELISK-Net: Fewer Layers to Solve 3D Multi-Organ Segmentation with Sparse Deformable Convolutions by Mattias P. Heinrich, Ozan Oktay, and Nassim Bouteldja (winner of the MIDL 2018 best paper award).

The main idea of OBELISK is to learn a large spatially deformable filter kernel for (3D) image analysis. It replaces a conventional (say 5x5) convolution with

  1. trainable spatial filter offsets (xy(z)-coordinates) and
  2. a linear 1x1 convolution that contains the filter coefficients (values).

During training, OBELISK adapts its receptive field to the given problem in a completely data-driven manner and thus automates many tuning steps that are usually done by 'network engineering'. OBELISK layers have substantially fewer trainable parameters than the conventional CNNs used in 3D U-Nets and often perform better on medical segmentation tasks (see the table below).
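For illustration, here is a minimal, self-contained sketch of such a layer (my own reading of the idea, not the repository's exact implementation; the offset initialisation scale, tensor layout, and class name are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ObeliskLayerSketch(nn.Module):
    """One OBELISK-style layer: K trainable spatial offsets, sampled
    off-grid with grid_sample, followed by a 1x1 convolution that
    holds the filter coefficients."""
    def __init__(self, in_channels, out_channels, num_offsets=128):
        super().__init__()
        self.K = num_offsets
        # trainable xyz offsets in normalised [-1, 1] coordinates
        # (initialisation scale 0.05 is an assumption)
        self.offsets = nn.Parameter(0.05 * torch.randn(1, num_offsets, 1, 1, 3))
        # linear 1x1 convolution holding the filter values
        self.linear = nn.Conv3d(in_channels * num_offsets, out_channels, 1)

    def forward(self, x, grid):
        # x: (B, C, D_in, H_in, W_in); grid: (B, D, H, W, 3) base sampling
        # locations in [-1, 1] (e.g. from F.affine_grid)
        B, C = x.shape[:2]
        D, H, W = grid.shape[1:4]
        # add the K learned offsets to every base location
        locs = grid.view(B, 1, D * H * W, 1, 3) + self.offsets
        # off-grid sampling replaces im2col: output (B, C, K, D*H*W, 1)
        feat = F.grid_sample(x, locs, align_corners=True)
        feat = feat.view(B, C * self.K, D, H, W)
        return self.linear(feat)

# usage: features for a 64^3 single-channel input at half resolution
x = torch.randn(2, 1, 64, 64, 64)
theta = torch.eye(3, 4).unsqueeze(0).repeat(2, 1, 1)   # identity transform
grid = F.affine_grid(theta, (2, 1, 32, 32, 32), align_corners=True)
y = ObeliskLayerSketch(1, 16)(x, grid)                 # (2, 16, 32, 32, 32)
```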

The working principle (and the basis of its implementation) is visualised below. The idea is to replace the im2col operator, which is heavily used for matrix-multiplication-based convolution in many DL frameworks, with a continuous off-grid grid_sample operator (available for 3D since pytorch v0.4). Please also have a look at https://petewarden.com/2015/04/20/why-gemm-is-at-the-heart-of-deep-learning/ if you're not familiar with im2col.
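To make the analogy concrete, here is a toy 2D comparison (illustrative only, not from this repository): F.unfold is PyTorch's im2col, which gathers fixed on-grid neighbourhoods, whereas F.grid_sample reads values at arbitrary, possibly fractional, coordinates.

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 8, 8)

# im2col (F.unfold): fixed 3x3 on-grid patches -> (1, 1*3*3, 36) columns
cols = F.unfold(x, kernel_size=3)

# grid_sample: 5 arbitrary off-grid points in normalised [-1, 1] coords
pts = torch.empty(1, 1, 5, 2).uniform_(-1, 1)
vals = F.grid_sample(x, pts, align_corners=True)   # (1, 1, 1, 5)
```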

[Figure: overview of the OBELISK working principle]

You will find many more details in the upcoming MEDIA paper or, for now, in the original MIDL version: https://openreview.net/forum?id=BkZu9wooz

How to use this code: The easiest use case is to first run inference on the pre-processed TCIA multi-label data:

python inference.py -dataset tcia -model obeliskhybrid -input pancreas_ct1.nii.gz -output mylabel_ct1.nii.gz

Note that the folds are defined as follows: fold 1 has not seen labels/scans #1-#10, fold 2 has not seen labels/scans #11-#21, etc.

  • you can now visualise the outcome in ITK-SNAP or measure the Dice overlap of the pancreas with the manual segmentation:

c3d label_ct1.nii.gz mylabel_ct1.nii.gz -overlap 2

which should return 0.783 and a visual segmentation like the one below

[Figure: ITK-SNAP visualisation of the automatic segmentation]
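If c3d is not installed, the same overlap can be computed with a few lines of Python (this assumes nibabel is available and that label 2 denotes the pancreas, as in the c3d call above):

```python
import nibabel as nib
import numpy as np

gt = nib.load('label_ct1.nii.gz').get_fdata() == 2      # manual pancreas mask
pred = nib.load('mylabel_ct1.nii.gz').get_fdata() == 2  # predicted mask
dice = 2.0 * np.logical_and(gt, pred).sum() / (gt.sum() + pred.sum())
print(f'Dice overlap: {dice:.3f}')                      # expect roughly 0.783
```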

  • you can later train your own models using the train.py script by providing the respective data folders (a hypothetical invocation is sketched below)
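For example (the flags shown here are an assumption by analogy with inference.py; run python train.py -h for the actual arguments):

python train.py -dataset tcia -model obeliskhybrid -folder <path/to/preprocessed/data>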

[Figure: visual overlay and table from the MEDIA preprint, demonstrating results on TCIA]
