=Paper=
{{Paper
|id=Vol-2491/abstract101
|storemode=property
|title=Cost-efficient segmentation of electron microscopy images using active learning
|pdfUrl=https://ceur-ws.org/Vol-2491/abstract101.pdf
|volume=Vol-2491
|dblpUrl=https://dblp.org/rec/conf/bnaic/RoelsS19
}}
==Cost-efficient segmentation of electron microscopy images using active learning==
Cost-efficient segmentation of electron microscopy images using active learning

Joris Roels1,2[0000-0002-2058-8134] and Yvan Saeys1,2[0000-0002-0415-1506]

1 Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Ghent, Belgium — {jorisb.roels, yvan.saeys}@ugent.be
2 Inflammation Research Center, Flanders Institute for Biotechnology, Ghent, Belgium

Copyright © 2019 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

Abstract. Over the last decade, electron microscopy has improved to the point that generating high-quality gigavoxel-sized datasets requires only a few hours. Automated image analysis, particularly image segmentation, however, has not evolved at the same pace. Even though state-of-the-art methods such as U-Net and DeepLab have improved segmentation performance substantially, the required amount of labels remains too expensive. Active learning is the subfield of machine learning that aims to mitigate this burden by selecting the samples that require labeling in a smart way. Many techniques have been proposed, particularly for image classification, to increase the steepness of learning curves. In this work, we extend these techniques to deep CNN-based image segmentation. Our experiments on three different electron microscopy datasets show that active learning can improve segmentation quality by 10 to 15% in terms of Jaccard score compared to standard randomized sampling.

Keywords: Electron microscopy · Image segmentation · Active learning.

1 Introduction

Semantic image segmentation, the task of assigning pixel-level object labels to an image, is a fundamental task in many applications and one of the most challenging problems in generic computer vision. This is particularly true in biomedical imaging such as electron microscopy (EM), where annotated data is very sparsely available and the images combine high resolution (≈ 5 nm³) with rich ultrastructural content. Despite the impressive advances that have been made so far [1,4,3], state-of-the-art techniques still mostly rely on large annotated datasets.

2 Active learning for image segmentation

This work focuses on active learning, a subdomain of machine learning that aims to minimize supervision without sacrificing predictive accuracy. This is achieved by iteratively querying a batch of samples to a label-providing oracle, adding them to the training set and retraining the predictor. The challenge is to come up with a smart selection criterion to query samples and maximize the steepness of the learning curve [6]. We apply state-of-the-art active learning approaches, commonly used for classification, to image segmentation. Specifically, we compare entropy-based, least confidence, k-means, BALD [2] and core set sampling [5] as active learning methods, and compare these to a random sampling baseline.
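To make the querying step concrete, the sketch below (not taken from the paper) shows a minimal entropy-based batch selection for segmentation, assuming the network's per-pixel softmax outputs for the unlabeled pool are available as NumPy arrays; the function names entropy_score and select_batch are illustrative only. After labeling, the queried patches would be added to the training set and the predictor retrained, as described above.

<pre>
import numpy as np

def entropy_score(prob_map, eps=1e-12):
    """Mean pixel-wise entropy of a softmax output with shape (C, H, W)."""
    return float(-(prob_map * np.log(prob_map + eps)).sum(axis=0).mean())

def select_batch(prob_maps, batch_size):
    """Indices of the `batch_size` most uncertain (highest-entropy) samples."""
    scores = np.array([entropy_score(p) for p in prob_maps])
    return np.argsort(scores)[::-1][:batch_size]

# Toy usage: 10 unlabeled patches, 2 classes, 64x64 pixels of random "predictions".
rng = np.random.default_rng(0)
logits = rng.normal(size=(10, 2, 64, 64))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print("Query these patches for labeling:", select_batch(probs, batch_size=3))
</pre>

Least confidence sampling follows the same pattern with a different score (one minus the maximum class probability per pixel), while BALD and core set sampling additionally require Monte Carlo dropout passes and feature-space distances, respectively.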
Fig. 1. Learning curves for the discussed active learning approaches on two datasets: (a) EPFL and (b) VNC. Each panel plots the Jaccard index against the number of labeled samples for random, entropy, least confidence, k-means, BALD and core set sampling, with full supervision as a reference.

We illustrate on three EM datasets that the amount of annotated samples can be reduced to a few hundred while obtaining close to fully supervised performance with entropy, least confidence or BALD sampling (Figure 1 shows two of these use cases).

References

1. Ciresan, D.C., Giusti, A., Gambardella, L.M., Schmidhuber, J.: Deep Neural Networks Segment Neuronal Membranes in Electron Microscopy Images. NIPS, pp. 1–9 (2012)
2. Gal, Y., Islam, R., Ghahramani, Z.: Deep Bayesian Active Learning with Image Data. In: International Conference on Machine Learning (2017)
3. Januszewski, M., Kornfeld, J., Li, P.H., Pope, A., Blakely, T., Lindsey, L., Maitin-Shepard, J., Tyka, M., Denk, W., Jain, V.: High-precision automated reconstruction of neurons with flood-filling networks. Nature Methods (2018). https://doi.org/10.1038/s41592-018-0049-4
4. Ronneberger, O., Fischer, P., Brox, T.: U-Net: Convolutional Networks for Biomedical Image Segmentation. In: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, pp. 234–241 (2015). https://doi.org/10.1007/978-3-319-24574-4_28
5. Sener, O., Savarese, S.: Active Learning for Convolutional Neural Networks: A Core-Set Approach. In: International Conference on Learning Representations (2018)
6. Settles, B.: Active Learning Literature Survey. Tech. rep., University of Wisconsin (2010)