
Commit 0d4d689

nicjac and gablanouette authored

Initial mmsegmentation support (airctic#934)
* Fixed PIL size bug in ImageRecordComponent (airctic#889)
* Preliminary work on mmsegmentation integration. This is currently very crude, mostly copy/pasting of the mmdet integration with a few changes.
* Updated mmseg dataloaders based on unet implementation
* Training now kind of working! Fixed issue with number of classes; fixed dataloader issue (mask tensor size)
* Improved predictions handling
* Fixed prediction, general code clean-up
* Simplification of folder structure
* Getting closer to final structure
* Finished first structure update. This implementation was created based on the existing mmdet integration, and some remaining unused folders were removed
* Attempt to fix show_batch for mmseg
* Added binary and multi-class dice coefficient metrics
* Removed test-only code in binary Dice coefficient method
* Initial support for multiple pre-trained variants
* Reformatting using black
* First version of DeepLabV3plus support
* Exceptions to elegantly handle unknown pre-trained variants
* Added all DeepLabV3 pretrained variants
* Ground truth masks saved with predictions if keep_images set to True
* Added all DeepLabV3Plus pre-training variants
* Added proper support for param_groups
* Removed erroneous pre-trained variants for DeepLabV3Plus
* Fixed erroneous DeepLabV3 pre-trained variants
* Re-added default values to DeepLabV3 pretrained variants
* Improved model loading and initialization
* Improved how distributed BNs are handled
* Proper integration of loop_mmseg; first test!
* Updated tests
* __init__.py file
* jaccard index metric
* Update init file to include multilabel dice coef
* Updates to logical statements
* Update to handle denominator equal to 0
* Update to handle denominator equal to 0
* Removed repeated code
* Removed repeated code
* Getting started with seg notebook
* Formatting fix (black)
* Removed temporary file
* Added mmsegmentation to Dockerfile
* Added mmseg installation to workflows
* Updating mmcv to 1.3.7 for mmseg support
* Added artifacts to gitignore (wandb)
* Testing mim-based install in actions
* Fixing mim-based install
* Pinning mmcv version in mim installs
* Bumping mmcv-full to 1.3.13
* Improved CPU support
* Reverted workflow files to match current master
* Improved tests on CPU devices
* Update docs action to match new dependencies versions
* Better handling of the device mess?
* Attempt to remove hacky device code
* Added mmseg to soft dependency test
* Changed to binary jaccard index
* Delete jaccard_index
* Added jaccard index
* Up-to-date metric test files
* Fixed init for binary jaccard index
* Argument to exclude classes from multiclass dice coefficient metric
* Added background exclusion case for multilabel dice coefficient tests
* Adjusted setup, need to verify values
* Added comment for correct values
* Updated cases and setup
* Updated cases and setup
* Added class map to dictionary for each case
* Adapted RLE mask saving code, might need to revisit later
* Added support for wandb logging of semantic seg masks
* Added option not to pad dimensions when creating a MaskArray object
* Fixed typo
* Resampling of masks should use nearest neighbour
* Added TODO to masks.py
* Fixed loss keys for mmseg
* mmseg installation in icevision_install.sh
* Fixed typo in install script
* Fixed albumentation for polygon mask
* Black formatting
* Fix handling of special case of Polygon masks
* Started updating getting started notebook
* More updates to the notebook
* Adding after_pred convert raw pred callback to unet fastai learner
* Fixed unet prediction when GT not available
* More fixes to unet prediction callback
* Fixing predict batch for mmseg models
* Fixed predictions conversion for mmseg
* Improved unet prediction conversion
* Black formatting
* segformer support
* Updated mmseg model creation function
* Misc comment
* Added options to install script to assist with testing (temporary)
* Black formatting
* Remove install colab script added by mistake
* Actual updated install script
* Updated semantic seg notebook
* Removed legacy cell from notebook
* Further updated notebook
* Updated "open in colab" icon URL
* Added cell to restart kernel after installation
* Reverted notebook changes for merge into master
* Started work on README
* Updated README somewhat
* Updated semantic seg notebook to match updated version on master
* Fix draw data for mask files
* Implementation of mask resizing for MaskFile objects
* Support for MaskFile objects in draw_data
* Fixed mask display in show_batch for mask-enabled mmdet models
* Import typo
* Adding warning if re-creating masks from file in draw_data
* Added warning if maskfile loaded from disc for display
* README typo fixed
* Removed temporary mmsegmentation installation option
* Reverted changes to plot_top_losses
* Disabling mmseg PL implementation for now (pending update)
* Re-added unet callback
* Fixed CI
* Actually disabled PL implementation
* Removing test file uploaded by mistake
* mmseg fastai training test
* Fixed BackboneConfig location
* Fixed formatting
* Use latest convert_raw_predictions for Unet
* Fixed Unet PL test with new raw prediction conversion method
* Removed lightning mmseg files pending proper implementation

Co-authored-by: Gabriella Lanouette <[email protected]>
1 parent bc9d8a0 commit 0d4d689


63 files changed (+3429 −471 lines)

.github/workflows/ci-all-testing.yml

+28-29
@@ -2,9 +2,9 @@ name: Run unit tests

 on:
   push:
-    branches: [ master ]
+    branches: [master]
   pull_request:
-    branches: [ master ]
+    branches: [master]

 jobs:
   test:
@@ -16,33 +16,32 @@ jobs:
         python-version: [3.7, 3.8]

     steps:
-    - uses: actions/checkout@v2
-    - name: Set up Python ${{ matrix.python-version }}
-      uses: actions/setup-python@v2
-      with:
-        python-version: ${{ matrix.python-version }}
+      - uses: actions/checkout@v2
+      - name: Set up Python ${{ matrix.python-version }}
+        uses: actions/setup-python@v2
+        with:
+          python-version: ${{ matrix.python-version }}

-    - name: Install package
-      run: |
-        sh ./icevision_install.sh cpu
-        pip install -e ".[all,dev]"
-        pip install fiftyone
+      - name: Install package
+        run: |
+          sh ./icevision_install.sh cpu
+          pip install -e ".[all,dev]"
+          pip install fiftyone

-    - name: Lint with flake8
-      run: |
-        # stop the build if there are Python syntax errors or undefined names
-        flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
-        # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
-        flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
+      - name: Lint with flake8
+        run: |
+          # stop the build if there are Python syntax errors or undefined names
+          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
+          # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
+          flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
+      - name: Unit tests
+        run: pytest tests -m "not cuda" --cov=icevision --cov-report=xml --color=yes

-    - name: Unit tests
-      run: pytest tests -m "not cuda" --cov=icevision --cov-report=xml --color=yes
-
-    - name: Upload coverage to Codecov
-      uses: codecov/codecov-action@v1
-      with:
-        token: ${{ secrets.CODECOV_TOKEN }}
-        file: ./coverage.xml
-        flags: unittests
-        name: codecov-umbrella
-        fail_ci_if_error: false
+      - name: Upload coverage to Codecov
+        uses: codecov/codecov-action@v1
+        with:
+          token: ${{ secrets.CODECOV_TOKEN }}
+          file: ./coverage.xml
+          flags: unittests
+          name: codecov-umbrella
+          fail_ci_if_error: false

.github/workflows/mk-docs-build.yml

+8-14
@@ -2,18 +2,17 @@ name: Build mkdocs

 on:
   pull_request:
-    branches: [ master ]
+    branches: [master]
   push:
-    branches: [ master ]
+    branches: [master]
   release:
-    types: [ created ]
+    types: [created]

 env:
   SITE_BRANCH: ${{ 'gh-pages-ver' }}

 jobs:
   build-docs:
-
     runs-on: ubuntu-18.04
     steps:
       - uses: actions/checkout@v2
@@ -32,35 +31,30 @@ jobs:


       - name: Prepare the docs
-        run:  |
+        run: |
          cd docs
          python autogen.py
-
       - name: Setup git config
        run: |
          git config user.name "GitHub Actions Bot"
          git config user.email "[email protected]"
-
       - name: Build the docs locally only
        if: github.event_name == 'pull_request'
-        run:  |
+        run: |
          cd docs
-        mike deploy dev -b ${{ env.SITE_BRANCH }}
-
+          mike deploy dev -b ${{ env.SITE_BRANCH }}
       - name: Deploy dev docs
        id: deploy_dev
        if: github.repository == 'airctic/icevision' && github.event_name == 'push'
        run: |
          cd docs
          mike deploy dev -b ${{ env.SITE_BRANCH }} -p
-        echo '::set-output name=MIKE_VERSIONS::'$(mike list -b ${{ env.SITE_BRANCH }} | wc -l)
-
+          echo '::set-output name=MIKE_VERSIONS::'$(mike list -b ${{ env.SITE_BRANCH }} | wc -l)
       - name: Set dev as default
        if: steps.deploy_dev.outputs.MIKE_VERSIONS == 1
        run: |
          cd docs
          mike set-default -b ${{ env.SITE_BRANCH }} dev -p
-
       - name: Get latest release tag
        if: github.event_name == 'release' && !github.event.release.prerelease
        id: latest
@@ -75,4 +69,4 @@ jobs:
          cd docs
          echo Deploy as ${{ steps.latest.outputs.release }}
          mike deploy -b ${{ env.SITE_BRANCH }} ${{ steps.latest.outputs.release }} -p
-        mike set-default -b ${{ env.SITE_BRANCH }} ${{ steps.latest.outputs.release }} -p
+          mike set-default -b ${{ env.SITE_BRANCH }} ${{ steps.latest.outputs.release }} -p

.gitignore

+6
@@ -154,4 +154,10 @@ archives/
 *.pth
 notebooks/wandb/latest-run

+artifacts/*
 checkpoints
+
+wandb
+wandb/*
+
+*/artifacts/*

docker/Dockerfile

+1
@@ -6,4 +6,5 @@ RUN pip install git+https://github.com/airctic/icedata.git -U
 RUN pip install yolov5-icevision -U
 RUN pip install mmcv-full==1.3.7 -f https://download.openmmlab.com/mmcv/dist/cu101/torch1.8.0/index.html -U
 RUN pip install mmdet==2.13.0 -U
+RUN pip install mmsegmentation==0.17.0 -U
 RUN pip install ipywidgets

icevision/core/exceptions.py

+15-1
@@ -1,4 +1,10 @@
-__all__ = ["InvalidDataError", "AutofixAbort", "AbortParseRecord"]
+__all__ = [
+    "InvalidDataError",
+    "AutofixAbort",
+    "AbortParseRecord",
+    "InvalidMMSegModelType",
+    "PreTrainedVariantNotFound",
+]


 class InvalidDataError(Exception):
@@ -11,3 +17,11 @@ class AutofixAbort(Exception):

 class AbortParseRecord(Exception):
     pass
+
+
+class InvalidMMSegModelType(Exception):
+    pass
+
+
+class PreTrainedVariantNotFound(Exception):
+    pass

icevision/core/mask.py

+27-6
@@ -64,10 +64,11 @@ class MaskArray(Mask):

     # Arguments
         data: Mask array, with the dimensions: (num_instances, height, width)
+        pad_dim: bool
     """

-    def __init__(self, data: np.uint8):
-        if len(data.shape) == 2:
+    def __init__(self, data: np.uint8, pad_dim: bool = True):
+        if pad_dim and (len(data.shape) == 2):
             data = np.expand_dims(data, 0)
         self.data = data.astype(np.uint8)

@@ -137,10 +138,21 @@ class MaskFile(Mask):
     def __init__(self, filepath: Union[str, Path]):
         self.filepath = Path(filepath)

-    def to_mask(self, h, w):
-        mask = np.array(open_img(self.filepath, gray=True))
+    def to_mask(self, h=None, w=None):
+        mask_img = open_img(self.filepath, gray=True)
+
+        if (h is not None) and (w is not None):
+            # If the dimensions provided in h and w do not match the size of the mask, resize the mask accordingly
+            (w_org, h_org) = mask_img.size
+
+            # TODO: Check NEAREST is always the best option or only for binary?
+            if w_org != w or h_org != h:
+                mask_img = mask_img.resize((w, h), resample=PIL.Image.NEAREST)
+
+        mask = np.array(mask_img)
         obj_ids = np.unique(mask)[1:]
         masks = mask == obj_ids[:, None, None]
+
         return MaskArray(masks)

     def to_coco_rle(self, h, w) -> List[dict]:
@@ -275,17 +287,26 @@ def __init__(self, filepath: Union[str, Path], binary=False):
         self.filepath = Path(filepath)
         self.binary = binary

-    def to_mask(self, h, w):
+    def to_mask(self, h, w, pad_dim=True):
         # TODO: convert the 255 masks
         mask = open_img(self.filepath, gray=True)
+
+        # If the dimensions provided in h and w do not match the size of the mask, resize the mask accordingly
+        (w_org, h_org) = mask.size
+
+        # TODO: Check NEAREST is always the best option or only for binary?
+        if w_org != w or h_org != h:
+            mask = mask.resize((w, h), resample=PIL.Image.NEAREST)
+
         # HACK: because open_img now return PIL
         mask = np.array(mask)

         # convert 255 pixels to 1
         if self.binary:
             mask[mask == 255] = 1

-        return MaskArray(mask[None])
+        # control array padding behaviour
+        return MaskArray(mask, pad_dim=pad_dim)

     def to_coco_rle(self, h, w) -> List[dict]:
         raise NotImplementedError
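The two behaviours introduced in this diff — optionally padding a 2-D mask to shape (1, H, W), and resizing with nearest-neighbour sampling when the stored mask does not match the requested dimensions — can be sketched in plain numpy. This is an illustrative approximation, not icevision's code: the real implementation resizes via PIL's `Image.NEAREST` (which uses a slightly different pixel mapping), and both helper names below are hypothetical.

```python
import numpy as np

def to_mask_array(mask: np.ndarray, pad_dim: bool = True) -> np.ndarray:
    """Mimic MaskArray: optionally pad a 2-D (H, W) mask to (1, H, W)."""
    if pad_dim and mask.ndim == 2:
        mask = np.expand_dims(mask, 0)  # add the instance dimension
    return mask.astype(np.uint8)

def nearest_resize(mask: np.ndarray, h: int, w: int) -> np.ndarray:
    """Nearest-neighbour resize for a 2-D label mask.

    Labels are copied, never interpolated, so no new class ids appear —
    which is why nearest neighbour is the right resampling mode for masks.
    """
    h_org, w_org = mask.shape
    rows = (np.arange(h) * h_org / h).astype(int)  # map target rows to source rows
    cols = (np.arange(w) * w_org / w).astype(int)
    return mask[np.ix_(rows, cols)]

# A 2x2 label mask upscaled to 4x4: each label becomes a 2x2 block
print(nearest_resize(np.array([[0, 1], [2, 3]]), 4, 4))
```

Note that interpolating resamplers (bilinear, bicubic) would blend neighbouring class ids into meaningless intermediate values, which motivates the "Resampling of masks should use nearest neighbour" commit above.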

icevision/metrics/__init__.py

+3
@@ -1,3 +1,6 @@
 from icevision.metrics.metric import *
 from icevision.metrics.coco_metric import *
 from icevision.metrics.confusion_matrix import *
+from icevision.metrics.segmentation_accuracy import *
+from icevision.metrics.dice_coefficient import *
+from icevision.metrics.jaccard_index import *
icevision/metrics/dice_coefficient/__init__.py

+2

@@ -0,0 +1,2 @@
+from icevision.metrics.dice_coefficient.binary_dice_coefficient import *
+from icevision.metrics.dice_coefficient.multiclass_dice_coefficient import *
icevision/metrics/dice_coefficient/binary_dice_coefficient.py

+49

@@ -0,0 +1,49 @@
+__all__ = ["BinaryDiceCoefficient"]
+
+from icevision.imports import *
+from icevision.utils import *
+from icevision.data import *
+from icevision.metrics.metric import *
+
+
+class BinaryDiceCoefficient(Metric):
+    """Binary Dice Coefficient for Semantic Segmentation
+
+    Calculates Dice Coefficient for semantic segmentation (binary images only).
+    """
+
+    def __init__(self):
+        self._reset()
+
+    def _reset(self):
+        self._union = 0
+        self._intersection = 0
+
+    def accumulate(self, preds):
+        pred = (
+            np.stack([x.pred.segmentation.mask_array.data for x in preds])
+            .astype(np.bool)
+            .flatten()
+        )
+
+        target = (
+            np.stack([x.ground_truth.segmentation.mask_array.data for x in preds])
+            .astype(np.bool)
+            .flatten()
+        )
+
+        self._union += pred.sum() + target.sum()
+        self._intersection += np.logical_and(pred, target).sum()
+
+    def finalize(self) -> Dict[str, float]:
+        if self._union == 0:
+            dice = 0
+        else:
+            dice = 2.0 * self._intersection / self._union
+
+        self._reset()
+        return {"dummy_value_for_fastai": dice}
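Stripped of the icevision `Metric` plumbing, `BinaryDiceCoefficient` reduces to the classic Dice formula 2·|P∩T| / (|P|+|T|), with the all-empty case defined as 0 to avoid dividing by zero. A minimal numpy-only sketch (the function name is hypothetical, not part of the library):

```python
import numpy as np

def binary_dice(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice = 2 * |P ∩ T| / (|P| + |T|); defined as 0 when both masks are empty."""
    pred = pred.astype(bool).flatten()
    target = target.astype(bool).flatten()
    union = pred.sum() + target.sum()            # denominator: |P| + |T|
    if union == 0:                               # both masks empty -> 0, not NaN
        return 0.0
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / union

pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(binary_dice(pred, target))  # 2*2 / (3+3) ≈ 0.667
```

Accumulating `_union` and `_intersection` across batches, as the class above does, gives the same result as computing Dice once over the concatenated masks, which is why the state is just two counters.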
icevision/metrics/dice_coefficient/multiclass_dice_coefficient.py

+73

@@ -0,0 +1,73 @@
+__all__ = ["MulticlassDiceCoefficient"]
+
+from icevision.imports import *
+from icevision.utils import *
+from icevision.data import *
+from icevision.metrics.metric import *
+
+
+class MulticlassDiceCoefficient(Metric):
+    """Multi-class Dice Coefficient for Semantic Segmentation
+
+    Calculates Dice Coefficient for semantic segmentation (multi-class tasks).
+
+    Heavily inspired by fastai's implementation; the multi-class version follows
+    https://github.com/fastai/fastai/blob/594e1cc20068b0d99bfc30bfe6dac88ab381a157/fastai/metrics.py#L343
+
+    # Arguments
+        classes_to_exclude: A list of class names to exclude from metric computation
+    """
+
+    def __init__(self, classes_to_exclude: list = []):
+        self.classes_to_exclude = classes_to_exclude
+        self._reset()
+
+    def _reset(self):
+        self._union = {}
+        self._intersection = {}
+
+    def accumulate(self, preds):
+        pred = np.stack([x.pred.segmentation.mask_array.data for x in preds]).flatten()
+
+        target = self._seg_masks_gt = np.stack(
+            [x.ground_truth.segmentation.mask_array.data for x in preds]
+        ).flatten()
+
+        unique_classes_id = list(preds[0].segmentation.class_map._class2id.values())
+
+        for c in self.classes_to_exclude:
+            if c in preds[0].segmentation.class_map._class2id:
+                unique_classes_id.remove(preds[0].segmentation.class_map._class2id[c])
+
+        # We iterate through all unique classes across predictions and targets
+        for c in unique_classes_id:
+            p = np.where(pred == c, 1, 0)
+            t = np.where(target == c, 1, 0)
+
+            c_inter = np.logical_and(p, t).sum()
+            c_union = p.sum() + t.sum()
+
+            if c in self._intersection:
+                self._intersection[c] += c_inter
+                self._union[c] += c_union
+            else:
+                self._intersection[c] = c_inter
+                self._union[c] = c_union
+
+    def finalize(self) -> Dict[str, float]:
+        class_dice_scores = np.array([])
+
+        for c in self._intersection:
+            class_dice_scores = np.append(
+                class_dice_scores,
+                2.0 * self._intersection[c] / self._union[c]
+                if self._union[c] > 0
+                else np.nan,
+            )
+
+        dice = np.nanmean(class_dice_scores)
+
+        self._reset()
+        return {"dummy_value_for_fastai": dice}
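The multi-class metric computes a per-class Dice score over the class map's ids (minus any excluded classes), marks classes whose union is empty as NaN, and averages the rest with `nanmean`. A standalone numpy sketch of that logic (names are hypothetical, not the icevision API):

```python
import numpy as np

def multiclass_dice(pred, target, class_ids, exclude=()):
    """Mean per-class Dice; classes with an empty union become NaN and are skipped."""
    scores = []
    for c in class_ids:
        if c in exclude:              # drop excluded classes (e.g. background)
            continue
        p = (pred == c)               # binary mask for class c in the prediction
        t = (target == c)             # binary mask for class c in the ground truth
        union = p.sum() + t.sum()
        inter = np.logical_and(p, t).sum()
        scores.append(2.0 * inter / union if union > 0 else np.nan)
    return float(np.nanmean(scores))  # NaN entries are ignored in the average

pred = np.array([0, 0, 1, 1, 2, 2])
target = np.array([0, 1, 1, 1, 2, 0])
print(multiclass_dice(pred, target, class_ids=[0, 1, 2]))
```

Excluding a class (say background, id 0) simply removes its score from the average; this mirrors the `classes_to_exclude` argument, which maps class names to ids through the record's class map before filtering.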
icevision/metrics/jaccard_index/__init__.py

+1

@@ -0,0 +1 @@
+from icevision.metrics.jaccard_index.binary_jaccard_index import *
