``docs/source/tutorials/mito.rst`` (13 additions, 18 deletions)

Semantic Segmentation
---------------------
This section provides step-by-step guidance for mitochondria segmentation with the EM benchmark dataset released by `Lucchi et al. (2012) <https://cvlab.epfl.ch/research/page-90578-en-html/research-medical-em-mitochondria-index-php/>`__. We approach the problem as a **semantic segmentation** task and predict the mitochondria pixels with encoder-decoder ConvNets similar to the models used for affinity prediction in `neuron segmentation <neuron.html>`_. The segmentation results are evaluated with the F1 score and Intersection over Union (IoU).
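
As a quick reference for the metrics, the snippet below is a minimal NumPy sketch (not part of the tutorial code; the function name and the ``1e-8`` smoothing term are our own choices) of how foreground F1 and IoU can be computed from a binary prediction and its ground-truth mask.

.. code-block:: python

    import numpy as np

    def foreground_f1_iou(pred, gt):
        """Foreground F1 (Dice) and IoU for binary masks of the same shape."""
        pred, gt = pred.astype(bool), gt.astype(bool)
        tp = np.logical_and(pred, gt).sum()        # true positive pixels
        fp = np.logical_and(pred, ~gt).sum()       # false positive pixels
        fn = np.logical_and(~pred, gt).sum()       # false negative pixels
        f1 = 2 * tp / (2 * tp + fp + fn + 1e-8)    # F1 equals the Dice coefficient
        iou = tp / (tp + fp + fn + 1e-8)           # intersection over union
        return f1, iou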
.. note:: Unlike the other EM connectomics datasets used in these tutorials, the dataset released by Lucchi et al. is isotropic, which means the spatial resolution is the same along all three axes. Therefore a fully 3D U-Net is used, and data augmentation is applied along the x-z and y-z planes in addition to the standard x-y plane.
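
To illustrate why isotropy matters for augmentation, the sketch below (a conceptual NumPy example, not the library's augmentation code) permutes the three spatial axes of a volume and its label; this is only a valid augmentation when all axes share the same resolution.

.. code-block:: python

    import numpy as np

    def random_axis_permutation(volume, label, rng=None):
        """Randomly permute the (z, y, x) axes of an isotropic volume and its label."""
        rng = np.random.default_rng() if rng is None else rng
        perm = tuple(rng.permutation(3))    # e.g. (1, 0, 2) swaps the z and y axes
        return volume.transpose(perm), label.transpose(perm)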
The script needed for this tutorial can be found at ``pytorch_connectomics/scripts/main.py``. The corresponding configuration file is found at ``configs/Lucchi-Mitochondria.yaml``.
A benchmark model's qualitative results on the Lucchi dataset, presented without any post-processing
1 - Get the data
^^^^^^^^^^^^^^^^
@@ -110,15 +110,15 @@
Complex mitochondria in the MitoEM dataset: (**a**) mitochondria-on-a-string (MOAS), and (**b**) a dense tangle of touching instances. These challenging cases are prevalent but not covered in previous datasets.
.. note:: The MitoEM dataset has two sub-datasets, **MitoEM-Rat** and **MitoEM-Human**, based on the source of the tissues. Three training configuration files for **MitoEM-Rat** are provided in ``pytorch_connectomics/configs/MitoEM/``, corresponding to the different learning settings described in this `paper <https://donglaiw.github.io/paper/2020_miccai_mitoEM.pdf>`_.
..
.. note:: Since the dataset is very large and cannot be directly loaded into memory, we designed the :class:`connectomics.data.dataset.TileDataset` class, which only loads part of the whole volume at a time by opening the involved ``PNG`` or ``TIFF`` images.
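
As a rough illustration of the idea (this is not the actual :class:`connectomics.data.dataset.TileDataset` implementation, and the per-slice file layout is an assumption), a region of interest can be assembled by opening only the 2D images that overlap it, so the full volume never needs to be held in memory.

.. code-block:: python

    import numpy as np
    from PIL import Image

    def read_subvolume(slice_paths, z0, z1, y_slice, x_slice):
        """Load a sub-volume from per-slice PNG/TIFF files indexed by z."""
        planes = []
        for z in range(z0, z1):
            page = np.array(Image.open(slice_paths[z]))   # open a single 2D slice
            planes.append(page[y_slice, x_slice])          # keep only the needed crop
        return np.stack(planes, axis=0)                    # (z1 - z0, H, W) array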
..
.. note:: A notebook providing a benchmark evaluation is available in the `Github repository <https://github.com/zudi-lin/pytorch_connectomics/tree/master/notebooks/tutorial_benchmarks/mitoem_benchmark.ipynb>`_. Users can download this notebook, upload it to Google Drive as a standalone file, and use Colaboratory to produce evaluation results with a pretrained model.
1 - Dataset introduction
^^^^^^^^^^^^^^^^^^^^^^^^
@@ -155,9 +155,7 @@
..
.. note:: By default, the paths of the images and labels are not specified. To run the training scripts, please revise the ``DATASET.IMAGE_NAME``, ``DATASET.LABEL_NAME``, ``DATASET.OUTPUT_PATH`` and ``DATASET.INPUT_PATH`` options in ``configs/MitoEM/MitoEM-R-*.yaml``. The options can also be given as command-line arguments without changing the ``yaml`` configuration files.
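
For reference, here is a small sketch of how such key/value overrides behave with a YACS-style configuration (the library's config system follows this pattern as far as we know; the keys below are reduced and the values are placeholders, not real paths):

.. code-block:: python

    from yacs.config import CfgNode

    # overrides are flat key/value pairs merged on top of the YAML defaults
    cfg = CfgNode({"DATASET": {"IMAGE_NAME": "", "LABEL_NAME": "",
                               "INPUT_PATH": "", "OUTPUT_PATH": ""}})
    cfg.merge_from_list(["DATASET.INPUT_PATH", "/path/to/MitoEM-R/",
                         "DATASET.IMAGE_NAME", "im_train.json",
                         "DATASET.OUTPUT_PATH", "outputs/MitoEM_R/"])
    print(cfg.DATASET.INPUT_PATH)   # -> /path/to/MitoEM-R/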
4 (*optional*) - Visualize the training progress
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -178,13 +176,12 @@
..
.. note:: If training on personal data, please change the ``INFERENCE.IMAGE_NAME``, ``INFERENCE.OUTPUT_PATH`` and ``INFERENCE.OUTPUT_NAME`` options in ``configs/MitoEM-R-*.yaml`` based on your own data path.
6 - Post-process
^^^^^^^^^^^^^^^^
The post-processing step requires merging the output volumes and applying watershed segmentation. As mentioned before, the dataset is very large and cannot be directly loaded into memory for processing. Therefore our code runs prediction on smaller chunks sequentially, which produces multiple ``*.h5`` files with the corresponding coordinate information. To merge the chunks into a single volume and apply the segmentation algorithm:

.. code-block:: python

    import glob
    import numpy as np

    output_files = 'outputs/MitoEM_R_BC/test/*.h5'  # output folder with chunks
    chunks = glob.glob(output_files)

    vol_shape = (2, 500, 4096, 4096)  # MitoEM test set
    pred = np.ones(vol_shape, dtype=np.uint8)
    for x in chunks:
        # paste each chunk into ``pred`` at its saved coordinates
        # (the remainder of the loop is not shown in this excerpt)
        ...
@@ -212,16 +209,14 @@
..
.. note:: The decoding parameters for the watershed step are a set of reasonable thresholds, but they are not optimal for every segmentation model. We suggest conducting a hyper-parameter search on the validation set to decide the decoding parameters.
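
As an illustration of such a search (a sketch only: ``decode`` stands for whichever watershed decoding routine is used and ``score`` for the chosen validation metric, neither of which is named in this excerpt):

.. code-block:: python

    import itertools

    def search_decoding_thresholds(prob, gt_seg, decode, score):
        """Grid-search watershed decoding thresholds on a validation volume."""
        best_params, best_score = None, float("-inf")
        grid = itertools.product([0.80, 0.85, 0.90],   # candidate semantic thresholds
                                 [0.70, 0.80],         # candidate contour thresholds
                                 [0.05, 0.10])         # candidate seed thresholds
        for params in grid:
            seg = decode(prob, *params)    # candidate instance segmentation
            s = score(seg, gt_seg)         # e.g. instance AP or IoU on validation
            if s > best_score:
                best_params, best_score = params, s
        return best_params, best_score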
The generated segmentation map should be ready for submission to the `MitoEM <https://mitoem.grand-challenge.org/>`_ challenge website for evaluation. Please note that this tutorial only outlines training on the **MitoEM-Rat** subset. Results on the **MitoEM-Human** subset, which can be generated using a similar pipeline as above, also need to be provided for online evaluation.
7 (*optional*) - Evaluate on the validation set
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Performance on the MitoEM test subset can only be evaluated on the Grand Challenge website. Users are encouraged to experiment with the metric code on the validation subset to optimize performance and understand the challenge's evaluation process. Evaluation is performed with the ``demo.py`` file provided by the `mAP_3Dvolume <https://github.com/ygCoconut/mAP_3Dvolume/tree/master>`__ repository. The ground-truth ``.h5`` file can be generated from the 2D images using the following script:
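
The script itself is not shown in this excerpt; the following is one possible sketch of it (the glob pattern and the ``'main'`` dataset key are assumptions, so adjust them to your local layout and to what the evaluation code expects):

.. code-block:: python

    import glob
    import h5py
    import numpy as np
    from PIL import Image

    # stack the 2D ground-truth label images (sorted by file name) into a 3D volume
    slice_paths = sorted(glob.glob('MitoEM-R/mito_val/*.png'))
    gt_volume = np.stack([np.array(Image.open(p)) for p in slice_paths], axis=0)

    # write the volume to an HDF5 file for the mAP_3Dvolume evaluation script
    with h5py.File('gt_val.h5', 'w') as f:
        f.create_dataset('main', data=gt_volume, compression='gzip')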