<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CVPR.</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1063-6919</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>BioSegment: Active Learning segmentation for 3D electron microscopy imaging</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Department of Applied Mathematics</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Computer Science</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Statistics</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Data Mining and Modelling for Biomedicine, VIB-UGent Center for Inflammation Research</institution>
          ,
          <addr-line>Ghent</addr-line>
          ,
          <country country="BE">Belgium</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Faculty of Science, Ghent University</institution>
          ,
          <addr-line>Ghent</addr-line>
          ,
          <country country="BE">Belgium</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>VIB Bioimaging Core, VIB-UGent Center for Inflammation Research</institution>
          ,
          <addr-line>Ghent</addr-line>
          ,
          <country country="BE">Belgium</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2009</year>
      </pub-date>
      <volume>5206627</volume>
      <issue>1</issue>
      <fpage>0000</fpage>
      <lpage>0002</lpage>
      <abstract>
        <p>Large 3D electron microscopy images require labor-intensive segmentation for further quantitative analysis. Recent deep learning segmentation methods automate this computer vision task, but require large amounts of labeled training data. We present BioSegment, a turnkey platform for experts to automatically process their imaging data and fine-tune segmentation models. It provides a user-friendly annotation experience, integration with familiar microscopy annotation software and a job queue for remote GPU acceleration. Various active learning sampling strategies are incorporated, with maximum entropy selection being the default. For mitochondrial segmentation, these strategies can improve segmentation quality by 10 to 15% in terms of intersection-over-union score compared to random sampling. Additionally, a segmentation of similar quality can be achieved using 25% of the total annotation budget required for random sampling. By comparing the state-of-the-art in human-in-the-loop annotation frameworks, we show that BioSegment is currently the only framework capable of employing deep learning and active learning for 3D electron microscopy data.</p>
      </abstract>
      <kwd-group>
        <kwd>Active learning</kwd>
        <kwd>Electron microscopy</kwd>
        <kwd>Computer vision</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Volume electron microscopy (vEM or 3D EM) describes a set of high-resolution
imaging techniques used in biomedical research to reveal the 3D structure of
cells, tissues and small model organisms at nanometer resolution. EM techniques
have emerged over the past 20 years, largely in response to the demands of
the connectomics field in neuroscience, and vEM is expected to be adopted
into mainstream biological imaging [23]. Generally, vEM data processing can be
divided into four consecutive steps: preprocessing, segmentation, post-processing
and downstream analysis.</p>
      <p>
        Before the imaging data can be used by deep learning networks, additional
preprocessing transformations such as normalization and data augmentation are applied. An
imaging experiment often includes metadata for the multiple samples, which need
to be compared against each other in downstream analysis. This is documented
using a folder structure or a data table. Some preprocessing steps to improve
imaging data include denoising [
        <xref ref-type="bibr" rid="ref19 ref28">29,38</xref>
        ], histogram equalization [
        <xref ref-type="bibr" rid="ref29">39</xref>
        ] and artifact
removal. Usually, the imaging data is downsampled or binned in order to reduce
data size and to speed up expert and model annotation, while still retaining
enough resolution to allow correct segmentation.
      </p>
      <p>Next is segmentation, the detection and delineation of structures of interest.
Segmentation is required for extraction of quantitative information from rich vEM
data sets. Non-discriminant contrast, diversity of appearance of structures and
large image volumes turn vEM segmentation into a highly non-trivial problem,
where cutting-edge methods relying on state-of-the-art computer vision
techniques are still far from reaching human parity in segmentation accuracy [23].
Here, we only consider segmentation of mitochondria, but other cellular
components or tissue regions can also be of interest. Pretrained models can be applied
to a small sample in order to evaluate segmentation quality. If no model of
sufficient quality is available, a new model is created by using some training data
annotated by an expert (microscopist or biologist). Machine learning methods
can be trained to produce different flavors of segmentation, labelling the pixels
either by semantics (for example, label all mitochondria pixels as 1 and the rest
as 0) or by the objects they belong to (for example, label all pixels of the first
mitochondrion as 1, of the second mitochondrion as 2, of the nth mitochondrion
as n, with non-mitochondrion pixels as 0).</p>
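      <p>As an illustration of the two labelling flavours, the toy arrays below show a hypothetical 5 × 5 patch containing two mitochondria; they are made-up examples, not data from the datasets used later.</p>
      <preformat>
import numpy as np

# Semantic labels: every mitochondrion pixel is 1, background is 0.
semantic = np.array([
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
])

# Instance labels: pixels of the first mitochondrion are 1, of the second 2,
# and so on; background remains 0.
instance = np.array([
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 2, 2],
    [0, 0, 0, 2, 2],
])

# A semantic mask is recovered from an instance mask by thresholding.
assert np.array_equal((instance > 0).astype(int), semantic)
</preformat>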
      <p>There are various post-processing steps to transform a semantic segmentation to
an object instance segmentation, such as connected components and watershed
transform. To further clean up the segmentation, there is usually some filtering
based on instance size.</p>
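      <p>A minimal sketch of such a post-processing chain is given below, assuming SciPy and scikit-image are available; in practice the watershed markers usually come from a tuned seed-detection step rather than directly from the connected components.</p>
      <preformat>
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def semantic_to_instances(mask, min_size=100):
    """Turn a binary (semantic) mask into an instance label map."""
    # Connected components give a first instance labelling.
    markers, _ = ndimage.label(mask)
    # A watershed on the distance transform refines instance boundaries;
    # splitting touching objects would additionally require seed detection.
    distance = ndimage.distance_transform_edt(mask)
    instances = watershed(-distance, markers=markers, mask=mask)
    # Filter out instances smaller than min_size pixels.
    ids, counts = np.unique(instances, return_counts=True)
    for label_id, count in zip(ids, counts):
        if label_id == 0 or count >= min_size:
            continue
        instances[instances == label_id] = 0
    return instances
</preformat>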
      <p>After processing all samples of the experiment, a research question is answered in
a downstream analysis. Statistics of interest are calculated, such as the number of
mitochondria and the mitochondrial surface area and volume. These statistics are summarized
in a data table and combined with the experiment metadata to quantify effects.
Although significant progress has been made in recent years, largely owing to
the introduction of deep learning-based methods, there is not yet a single
reliable and easy-to-use solution for fully automated segmentation of vEM images.
Imaging experts must choose between (or combine) manual, semi-automated and
fully automated solutions based on the difficulty of the segmentation problem,
the data size and the computational expertise and resources of their team or
institution. Furthermore, almost all automated solutions rely on machine
learning and may require large amounts of example segmentations to train a model,
although in some cases models trained for the same task on similar data sets are
available and can be applied directly [23].</p>
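      <p>As a small illustration of this step, the snippet below computes per-instance voxel counts and volumes with scikit-image and pandas on a synthetic instance volume; the label volume and voxel size are placeholders, and surface area would additionally require e.g. a marching-cubes mesh.</p>
      <preformat>
import numpy as np
import pandas as pd
from skimage.measure import regionprops_table

# Placeholder 3D instance label volume with two "mitochondria".
instances = np.zeros((10, 20, 20), dtype=int)
instances[2:6, 3:8, 3:8] = 1
instances[5:9, 12:18, 10:16] = 2

voxel_size = (5.0, 5.0, 5.0)  # z, y, x in nm (placeholder values)

props = regionprops_table(instances, properties=("label", "area"))
stats = pd.DataFrame(props)
# For 3D label images, "area" is the voxel count, so volume follows directly.
stats["volume_nm3"] = stats["area"] * np.prod(voxel_size)
print("number of mitochondria:", len(stats))
</preformat>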
      <p>Machine learning-based segmentation models can be divided into two
categories: feature-based learning and deep learning. Feature-based learning methods
use a set of predefined features (usually linear and non-linear image filters) as
input to a non-linear classifier such as a support vector machine or a random
forest that outputs the (semantic) segmentation. They need few examples and
are available via user-friendly tools. Methods using deep learning do not rely
on pre-computed features but, instead, learn features and segmentation jointly.
They can solve more difficult segmentation problems, but their superior accuracy
requires much larger amounts of examples, and the training must be performed
on graphics processing units (GPUs). Efficient training and post-processing
procedures for deep learning methods in vEM constitute an active area of research
[23].</p>
      <p>
        For successful application, the deep learning model needs to be trained on data
very similar to the data at hand, but annotated vEM training data is
time-consuming to create. Various approaches try to alleviate this problem: increasing
annotator efficiency using professional annotation software (e.g. MIB or Imaris),
sparse labeling [
        <xref ref-type="bibr" rid="ref26">36</xref>
        ] or refining model predictions using only points [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
Additionally, model performance can increase through self-supervised learning on large
unlabeled and heterogeneous data sets [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], or through generalizability-enhancing tricks
such as data augmentation or domain adaptation [
        <xref ref-type="bibr" rid="ref17">27</xref>
        ]. In any case, additional
fine-tuning on some labeled domain-specific data will improve segmentation
performance and may be even required [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. When fine-tuning, model performance
can be further increased by choosing the most interesting samples to annotate
using active learning [24].
      </p>
      <p>
        Active learning (AL) is a subdomain of machine learning that aims to minimize
label effort without sacrificing model performance. This is achieved by iteratively
querying a batch of samples to a label-providing oracle, adding them to the training
set and retraining the predictor. The challenge is to come up with a smart
selection criterion to query samples and maximize the steepness of the training
curve [
        <xref ref-type="bibr" rid="ref23">33</xref>
        ]. In the setting of vEM segmentation, the oracle is a human imaging
expert, such as a microscopist or biologist. This makes our application human
or expert-in-the-loop, as the expert will be queried to provide labels through an
annotation interface. We consider the total volume of EM data as an offline pool
of unlabeled 2D training patches. A general overview of a human-in-the-loop
annotation workflow using AL for semantic segmentation is given in Figure 1.
To our knowledge, segmentation of vEM data in an AL setting is not an
established practice; for example, the recent Empanada napari plugin [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] for vEM only
supports random sampling. In other fields, various tools employ AL to great
effect: Label Studio [
        <xref ref-type="bibr" rid="ref25">35</xref>
        ] is a flexible data annotation tool that supports
semantic segmentation, AL and prediction refinement. MONAI Label [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] is an open
source image labeling and learning tool that helps researchers and clinicians to
collaborate, create annotated datasets, and build AI models. It features 3D
segmentation refinement using 3D Slicer and AL sample selection. Kaibu [21] is a
web application for visualizing and annotating multidimensional images,
featuring deep learning powered interactive segmentation. Ilastik [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] is an easy-to-use
interactive tool that brings machine-learning-based (bio)image analysis to end
users without substantial computational expertise. It contains pre-defined
workflows for image segmentation, object classification, counting and tracking.
      </p>
      <p>
        In this paper, we propose three new contributions:
1. A comparison of five AL strategies for semantic segmentation on three vEM
datasets, on which we previously reported in our preprint [
        <xref ref-type="bibr" rid="ref18">28</xref>
        ].
2. A feature comparison between current state-of-the-art software frameworks
for human-in-the-loop active learning using deep learning segmentation
models.
3. BioSegment, an integrated platform for imaging experts to process vEM
datasets using AL strategies.
      </p>
      <p>First, we describe the software architecture of an AL semantic segmentation
framework in Section 2.1 and the deep learning models in Section 2.2. We continue
with the AL strategies used in Section 2.3 and the validation datasets in Section 2.4. Our
three contributions are presented and discussed in Section 3. Lastly, we envision
future work in Section 4 and conclude in Section 5.</p>
    </sec>
    <sec id="sec-2">
      <title>Methods</title>
      <p>Software Architecture</p>
      <p>
        We give an overview of the BioSegment software architecture in Figure 2.
A central database is managed by a backend, implemented using FastAPI. It
features a documented REST API, database schemas for all modelled objects
and a job queue using Celery and Redis. For long-running tasks like conversion
and fine-tuning, separate workers are used, communicating via the messaging
bus of the job queue. For data conversion and viewing, AICSImageIO [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and
BioFormats [18] are used. The only communication requirement for the workers
is access to the Redis server port and the data storage. They can run on a
different machine with GPU acceleration or a network with access to secure and
confidential imaging data. Segmentation models and tasks are implemented in
PyTorch, and models are serialized to disk. Tensorboard is used to visualize
training progression and predicted segmentation performance on selected image
samples.
      </p>
      <p>The BioSegment software stack is reproducible using conda environments
and Docker containers. Staging and production deployments are managed
using Docker Swarm. Restrictive enterprise firewalls can be overcome through the
Traefik reverse-proxy, which also provides security with automated HTTPS
certificate management. Admin interfaces for network, user, database and job queue
management are also implemented. Clients can communicate with the backend
REST API to add imaging data, manage jobs and visualize results. Using a
code generation tool like OpenAPI Generator, the client code library can be
generated automatically from the backend's documented REST API. This automated
step improves maintainability of multiple client interfaces and annotation
software plugins. A JavaScript frontend implements most of the backend API and
provides management of all data objects like users, datasets, segmentation,
annotations and models. A Dash dashboard provides an interface for sparse semantic
labelling. Datasets are accessed using file system paths in the backend and
workers. These paths resolve to a local mount of the remote disk storage. The mount
point is set up using sshfs.
</p>
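      <p>To make the client-backend interaction concrete, the sketch below shows how a script could register a dataset and enqueue a fine-tuning job over such a REST API; the base URL, endpoint paths and payload fields are illustrative assumptions and do not reproduce the documented BioSegment API.</p>
      <preformat>
import requests

API = "https://biosegment.example.org/api/v1"      # placeholder base URL
headers = {"Authorization": "Bearer YOUR_TOKEN"}   # placeholder credentials

# Register a dataset by its folder path on the shared storage.
dataset = requests.post(f"{API}/datasets",
                        json={"path": "/data/experiment_01"},
                        headers=headers).json()

# Enqueue a long-running fine-tuning job; a Celery worker with GPU access picks it up.
job = requests.post(f"{API}/jobs",
                    json={"type": "finetune",
                          "dataset_id": dataset["id"],
                          "model": "unet-mito"},
                    headers=headers).json()
print("job queued:", job["id"])
</preformat>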
      <p>Deep learning methods</p>
      <p>
        We build on the PyTorch Lightning framework, which allows high-level yet
flexible training loops without boilerplate code. It supports different
accelerator architectures and allows for reproducible and maintainable code. It
also features fine-tuning strategies, automated learning rate and batch size finders,
and support for multiple GPUs and mixed precision training. Various
segmentation models are available: our own advanced U-Net implementations in the
published neuralnets [
        <xref ref-type="bibr" rid="ref16">26</xref>
        ] package and torchvision [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] which features pretrained
model weights.
      </p>
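      <p>The sketch below shows the kind of minimal LightningModule wrapper this setup relies on; the network, loss and hyperparameters are placeholders rather than the exact neuralnets configuration.</p>
      <preformat>
import torch
import pytorch_lightning as pl

class SegmentationModule(pl.LightningModule):
    """Minimal Lightning wrapper around a segmentation network (illustrative)."""

    def __init__(self, net: torch.nn.Module, lr: float = 5e-4):
        super().__init__()
        self.net = net
        self.lr = lr
        self.loss = torch.nn.CrossEntropyLoss()

    def forward(self, x):
        return self.net(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = self.loss(self(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)

# Typical usage (assuming a U-Net net and a DataLoader loader):
# trainer = pl.Trainer(max_epochs=200, accelerator="auto")
# trainer.fit(SegmentationModule(net), train_dataloaders=loader)
</preformat>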
      <p>Active learning strategies</p>
      <p>
        We implemented five AL strategies, which are explained here. We consider
the task of semantic segmentation, i.e. given an image x ∈ X ⊂ R^N with a
total of N pixels, we aim to compute a pixel-level labeling y ∈ Y, where
Y = {0, ..., C−1}^N is the label space and C is the number of classes. In
particular, we focus on the case of binary segmentation, i.e. C = 2. Let p_j(x) = [f_θ(x)]_j
be the class probability distribution of pixel j of a parameterized segmentation
algorithm f_θ (i.e. an encoder-decoder network, such as U-Net [
        <xref ref-type="bibr" rid="ref20">30</xref>
        ]).
Consider a large pool of n i.i.d. sampled data points over the space Z = X × Y,
denoted {x_i, y_i}_{i∈[n]}, where [n] = {1, ..., n}, and an initial pool of m randomly
chosen distinct data points indexed by S_0 = {i_j | i_j ∈ [n]}_{j∈[m]}. An active learning
algorithm initially only has access to {x_i}_{i∈[n]} and {y_i}_{i∈S_0} and iteratively
extends the currently labeled pool S_t by querying k samples from the unlabeled
set {x_i}_{i∈[n]\S_t} to an oracle. After iteration t, the predictor is retrained with
the available samples {x_i}_{i∈[n]} and labels {y_i}_{i∈S_t}, thereby improving the
segmentation quality. Note that, without loss of generality, the active learning
approaches below are described for k = 1. We can also query k &gt; 1 samples
for k iterations, without retraining, to obtain a batch of samples. The complete
active learning workflow is shown in Figure 1.
      </p>
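      <p>The iterative procedure can be summarized in a schematic loop; here train, score and annotate are placeholders for model fine-tuning, the acquisition criterion and the expert oracle, and the criterion is assumed to return higher scores for more informative samples.</p>
      <preformat>
def active_learning_loop(pool, initial_labels, train, score, annotate,
                         k=20, iterations=25):
    """Schematic pool-based active learning loop (placeholders, not the BioSegment code)."""
    labeled = dict(initial_labels)            # S_0: sample index -> label
    model = train(None, pool, labeled)        # initial training on S_0
    for t in range(iterations):
        unlabeled = [i for i in range(len(pool)) if i not in labeled]
        # Query the k highest-scoring samples according to the selection
        # criterion, without retraining within the batch.
        queried = sorted(unlabeled, key=lambda i: score(model, pool[i]),
                         reverse=True)[:k]
        for i in queried:
            labeled[i] = annotate(pool[i])    # the oracle provides the labels
        model = train(model, pool, labeled)   # fine-tune on the extended set S_{t+1}
    return model, labeled
</preformat>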
      <p>Maximum entropy sampling [16,17] Maximum entropy is a straightforward
selection criterion that aims to select samples for which the predictions are
uncertain. Formally speaking, we adjust the selection criterion to a pixel-wise
entropy calculation as follows:
x^{*}_{t+1} = \arg\max_{x \in [n] \setminus S_t} \; -\sum_{j=0}^{N-1} \sum_{c=0}^{C-1} [p_j(x)]_c \log [p_j(x)]_c \qquad (1)</p>
      <p>
In other words, the entropy is calculated for each pixel and summed up. Note
that a high entropy will be obtained when [p_j(x)]_c = 1/C, which is exactly when
there is no real consensus on the predicted class (i.e. high uncertainty).
Least confidence selection [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] Similar to maximum entropy sampling, the
least confidence criterion selects samples for which the predictions are
uncertain:
      </p>
      <p>x^{*}_{t+1} = \arg\min_{x \in [n] \setminus S_t} \sum_{j=0}^{N-1} \max_{c=0,\ldots,C-1} [p_j(x)]_c \qquad (2)</p>
      <p>As the name suggests, the least confidence criterion selects the probability
that corresponds to the predicted class. Whenever this probability is small,
the predictor is not confident about its decision. For image segmentation,
we sum up the maximum probabilities in order to select the least confident
samples.</p>
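      <p>A minimal PyTorch sketch of both criteria is given below, assuming the network outputs per-pixel class probabilities of shape (batch, classes, height, width), e.g. after a softmax.</p>
      <preformat>
import torch

def max_entropy_score(probs):
    """Sum of pixel-wise entropies per image, as in Equation (1)."""
    entropy = -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=1)  # (B, H, W)
    return entropy.sum(dim=(1, 2))  # one score per image; query the arg max

def least_confidence_score(probs):
    """Sum of maximum class probabilities per image, as in Equation (2)."""
    return probs.max(dim=1).values.sum(dim=(1, 2))  # query the arg min
</preformat>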
      <p>
        Bayesian active learning disagreement [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] The Bayesian active learning
disagreement (BALD) approach is specifically designed for convolutional
neural networks (CNNs). It makes use of Bayesian CNNs in order to cope
with the small amounts of training data that are usually available in active
learning workflows. A Bayesian CNN assumes a prior probability
distribution placed over the model parameters θ ∼ p(θ). The uncertainty in the
weights induces prediction uncertainty by marginalizing over the
approximate posterior:
[p_j(x)]_c = \mathbb{E}_{\hat{\theta}_t \sim q(\theta)} \big[ [p_j(x; \hat{\theta}_t)]_c \big], \qquad (3)
where θˆt ∼ q(θ) is the dropout distribution, which approximates the prior
probability distribution p. In other words, a CNN is trained with dropout
and inference is obtained by leaving dropout on. This causes uncertainty in
the outcome that can be used in existing criteria such as maximum entropy
(Equation (1)).
      </p>
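      <p>The sketch below illustrates Monte-Carlo dropout inference of this kind, assuming the model contains dropout layers and outputs per-pixel logits; the averaged probabilities can then be plugged into the entropy criterion of Equation (1).</p>
      <preformat>
import torch

def mc_dropout_probs(model, x, passes=10):
    """Approximate the posterior predictive by averaging stochastic forward passes."""
    model.eval()
    # Keep dropout layers active at inference time.
    for m in model.modules():
        if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d)):
            m.train()
    with torch.no_grad():
        samples = torch.stack([torch.softmax(model(x), dim=1) for _ in range(passes)])
    return samples.mean(dim=0)
</preformat>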
      <p>
        K-means sampling [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] Uncertainty-based approaches typically sample close
to the decision boundary of the classifier. This introduces an implicit bias
that does not allow for data exploration. Most explorative approaches that
aim to solve this problem transform the input x to a more compact and
efficient representation z = g(x) (e.g. the feature representation before the
fully connected stage in a classification CNN). The representation that we
used in our segmentation approach was the middle bottleneck
representation in the U-Net, which is the learned encoded embedding of the model.
The k-means sampling approach in particular then finds k clusters in this
embedding using k-means clustering. The selected samples are then the k
samples in the different clusters that are closest to the k centroids.
Core set active learning [
        <xref ref-type="bibr" rid="ref22">32</xref>
        ] The core set approach is an active learning
approach for CNNs that is not based on uncertainty or exploratory sampling.
Similar to k-means, samples are selected from an embedding z = g(x) in
such a way that a model trained on the selection of samples would be
competitive for the remaining samples. As before, the representation that
we used in our segmentation approach was the bottleneck representation in
the U-Net. In order to obtain such competitive samples, this approach aims
to minimize the so-called core set loss. This is the difference between the
average empirical loss over the set of labeled samples (i.e. St) and the average
empirical loss over the entire dataset that includes the unlabeled points (i.e.
[n]).
      </p>
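      <p>A sketch of the k-means selection step on such an embedding is shown below, using scikit-learn; embeddings is assumed to hold one flattened bottleneck vector per unlabeled sample.</p>
      <preformat>
import numpy as np
from sklearn.cluster import KMeans

def kmeans_sampling(embeddings, k):
    """Select the k pool samples closest to the k cluster centroids."""
    kmeans = KMeans(n_clusters=k, n_init=10).fit(embeddings)
    selected = []
    for c in range(k):
        members = np.where(kmeans.labels_ == c)[0]
        distances = np.linalg.norm(embeddings[members] - kmeans.cluster_centers_[c], axis=1)
        selected.append(int(members[np.argmin(distances)]))
    return selected
</preformat>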
      <p>Validation datasets</p>
      <p>
        Three public EM datasets were used to validate our approach:
– The EPFL dataset (available at https://cvlab.epfl.ch/data/data-em/) represents a 5 × 5 × 5 μm³ section taken from the CA1
hippocampus region of the brain, corresponding to a 2048 × 1536 × 1065
volume. Two 1048 × 786 × 165 subvolumes were manually labelled by experts
for mitochondria. The data was acquired by a focused ion-beam scanning
EM, and the resolution of each voxel is approximately 5 × 5 × 5 nm³.
– The VNC dataset (available at https://github.com/unidesigner/groundtruth-drosophila-vnc/) represents two 4.7 × 4.7 × 1 μm³ sections taken from the
Drosophila melanogaster third instar larva ventral nerve cord, corresponding
to a 1024 × 1024 × 20 volume. One stack was manually labelled by experts
for mitochondria. The data was acquired by a transmission EM and the
resolution of each voxel is approximately 4.6 × 4.6 × 45 nm³.
– The MiRA dataset (available at http://95.163.198.142/MiRA/mitochondria31/) [
        <xref ref-type="bibr" rid="ref27">37</xref>
        ] represents a 17 × 17 × 1.6 μm³ section taken from the
mouse cortex, corresponding to an 8624 × 8416 × 31 volume. The complete
volume was manually labelled by experts for mitochondria. The data was
acquired by an automated tape-collecting ultramicrotome scanning EM, and
the resolution of each voxel is approximately 2 × 2 × 50 nm³.
      </p>
      <p>In order to properly validate the discussed approaches, we split the available
labeled data into a training and a testing set. In the cases of a single labeled volume
(VNC and MiRA), we split these datasets halfway along the y axis. A smaller
U-Net (with 4 times fewer feature maps) was initially trained on m = 20 randomly
selected 128 × 128 samples in the training volume (learning rate of 1e−3 for 500
epochs). Next, we consider a pool of n = 2000 samples in the training data to be
queried. Each iteration, k = 20 samples are selected from this pool based on one
of the discussed selection criteria, and added to the labeled set St, after which
the segmentation network is fine-tuned (learning rate of 5e−4 for 200 epochs).
This procedure is repeated for T = 25 iterations, leading to a maximum training
set size of 500 samples. We validate the segmentation performance using the
intersection-over-union (IoU) metric, also known as the Jaccard score:
J(y, \hat{y}) = \frac{\sum_i [y \cdot \hat{y}]_i}{\sum_i [y]_i + \sum_i [\hat{y}]_i - \sum_i [y \cdot \hat{y}]_i} \qquad (4)</p>
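      <p>For reference, Equation (4) translates directly into a few lines of NumPy; the example masks below are made up to check the implementation.</p>
      <preformat>
import numpy as np

def jaccard(y, y_hat):
    """Intersection-over-union of two binary masks, Equation (4)."""
    intersection = np.sum(y * y_hat)
    union = np.sum(y) + np.sum(y_hat) - intersection
    return float(intersection / union) if union > 0 else 1.0

# Two 2x2 masks overlapping in one pixel out of three labelled pixels: IoU = 1/3.
y = np.array([[1, 1], [0, 0]])
y_hat = np.array([[0, 1], [0, 1]])
assert np.isclose(jaccard(y, y_hat), 1 / 3)
</preformat>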
    </sec>
    <sec id="sec-3">
      <title>Results</title>
      <p>Active Learning validation</p>
      <p>We validated five AL strategies on three public EM datasets. The resulting
learning curves of the discussed approaches on the three datasets are shown in Figure
3. We additionally show the performance obtained by full supervision (i.e. all
labels are available during training), which is the maximum achievable
segmentation performance. There is an indication that maximum entropy sampling, least
confidence selection and BALD outperform the random sampling baseline. These
methods obtain about 10 to 15% performance increase for the same amount of
available labels for all datasets. Additionally, a segmentation of similar quality
can be achieved using 25% of the total annotation budget required for random
sampling. The core set approach performs similarly to, or slightly better than, the
baseline. We expect that this method can be improved by considering
alternative embeddings. Lastly, we see that k-means performs significantly worse than
random sampling. Even though this could also be an embedding problem such
as with the core set approach, we think that exploratory sampling alone will not
allow the predictor to learn from challenging samples, which are usually outliers.
We expect that a hybrid approach based on both exploration and uncertainty
might lead to better results, and consider this future work.</p>
      <p>Figure 4 shows qualitative segmentation results on the EPFL dataset. In
particular, we show results of the random, k-means and maximum entropy sampling
methods using 120 samples, and compare this to the fully supervised approach.
The maximum entropy sampling technique outperforms the others by a
large margin and closes the gap towards fully supervised learning significantly.</p>
      <p>Fig. 4: Segmentation results obtained from an actively learned U-Net with 120
samples of the EPFL dataset based on random, k-means and maximum entropy
sampling, and a comparison to the fully supervised approach. Jaccard scores are
indicated between brackets: (a) Input, (b) Ground truth, (c) Full supervision (0.857),
(d) Random (0.733), (e) k-means (0.710), (f) Maximum entropy (0.813).</p>
      <p>Lastly, we are interested in what type of samples the active learning
approaches select for training. Figure 5 shows 4 samples of the VNC dataset that
correspond to the highest prioritized samples, according to the least confidence
criterion, that were selected in the first 4 iterations. The top row illustrates the
probability predictions of the network at that point in time, whereas the
bottom row shows the pixel-wise uncertainty of the sample (i.e. the maximum in
Equation (2)). Note that the initial predictions at t = 1 are of poor quality, as
the network was only trained on 20 samples. Moreover, the uncertainty is high
in regions where the network is uncertain, but it is low in regions where the
network is wrong. The latter is a common issue in active learning and related
to the exploration vs. uncertainty trade-off. However, over time, we see that the
network performance improves, and more challenging samples are being queried
to the oracle.
</p>
      <p>Fig. 5: Illustration of the selected samples in the VNC dataset over time in the
active learning process. The top row shows the pixel-wise predictions of the
selected samples at iterations 1 through 4. The bottom row shows the pixel-wise
least confidence scores on the corresponding images.</p>
      <p>Feature comparison</p>
      <p>We define five software features of interest for an AL software framework for
vEM data:</p>
      <p>Interactive fine-tuning The expert should be able to fine-tune a segmentation
model with their own newly annotated data. For deep learning models, this
involves optional GPU acceleration and reporting on training status and
accuracy. All considered frameworks have this feature.</p>
      <p>Active learning The framework should support sampling the unlabeled data
using an AL strategy. Some frameworks have only proposed this feature for
future work and only implemented a random sampling strategy.
Large datasets The expert should be able to apply existing and newly trained
models on their whole dataset, no matter the size. This feature is the most
lacking, as it requires support for tiled inference and long-running jobs.
3D support The supported annotation interfaces of the framework should
allow the expert to freely browse consecutive slices or volumes in 3D.
Remote resources In order to process large datasets, large storage and
computational resources such as workstations and GPUs are needed. This usually
requires a flexible software architecture and communication over a network
interface or software worker queue.</p>
      <p>
        BioSegment combines the desirable software features needed for
analyzing vEM data in one framework and is the only AL framework currently used
as such. Ilastik is an established interactive annotation tool with support for
standard ML segmentation. Recently, it has added beta support for a remote
GPU task server (tiktorch [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]) and an active learning ML segmentation
workflow [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] using SLIC features and supervoxels for vEM. All this functionality is
however still in beta, sparsely documented and not yet applied for deep learning
models or mitochondria segmentation. napari-empanada is the most recent
development in vEM segmentation, but has no support for AL. The lack of support
for remote resources could however be solved by running napari remotely using
VirtualGL [22] or using a remote Dask cluster or data store [25]. Lastly, the
recently developed feature set of MONAI Label is exciting. However, it is little
over a year old, has no reported usage by the EM community, and mostly targets
radiology and pathology use cases. Nevertheless, it can be adapted for EM and
integrated in our BioSegment workflow, as shown in Figure 6c. We note that a remote
GPU-accelerated model execution is a hallmark of most frameworks: the worker
queue in BioSegment and MONAI Label, the Label Studio ML backend and
the ilastik tiktorch server.
      </p>
      <p>BioSegment workflow</p>
      <p>After capturing images and storing the raw microscopy data on disk, experts
start the BioSegment workflow. Through a dedicated dashboard (Figure 6a),
the expert can create a new dataset holder and import the imaging data directly
by providing the folder path. This starts a new annotation workflow. The expert
can start preprocessing and segmentation jobs for the whole dataset and
visualize the result (Figure 6b).</p>
      <p>
        If no existing model has the desired quality, experts can choose a model to
fine-tune. A batch of sampled images from the unlabeled dataset is chosen for
annotation. An interface for sparse semantic labelling is provided, and the
subset can be exported to different bioimaging annotation software like 3D Slicer
(Figure 6c), Amira (ThermoFisher Scientific), Imaris (Oxford Instruments), Fiji
[
        <xref ref-type="bibr" rid="ref21">31</xref>
        ] or napari [
        <xref ref-type="bibr" rid="ref24">34</xref>
        ]. The chosen model can be fine-tuned on the created training
data and model performance can again be evaluated.
      </p>
      <p>The annotation workflow can be augmented using active learning loops: the
subset of images to be sampled can be selected by one of the five implemented
active learning strategies, informed by the chosen model. After annotation by the
expert, this model will be fine-tuned and again be used for selecting the following
batch of images, creating an active learning loop and immediately incorporating
the expert feedback in the sampling process. By empowering imaging experts
with a dashboard to run multiple active learning iterations and
segmentation jobs on their datasets by themselves, active learning can be incorporated into
their normal annotation workflow.</p>
      <p>Fig. 6: Three example BioSegment interfaces. (a) The BioSegment dashboard, where users
can manage all settings. (b) The BioSegment model viewer, where models can be viewed and
fine-tuned with training data. (c) An external viewer (3D Slicer), showing how results can be
exported and used in external programs such as 3D Slicer and MONAI Label.</p>
      <p>The expert can stop the iterations when they
are satisfied with the segmentation quality in the preview or their annotation
budget is depleted. The number of iterations is usually three or higher, but this
highly depends on the dataset and on the computer vision task.
When a segmentation model of high enough quality is achieved, it can be applied
to the whole dataset like the other pre-existing models. The labelled data can
be added to a pool of general training data in order to train better performing
models for future fine-tuning tasks. Experts can download the segmented dataset
for further downstream analysis.</p>
      <p>The BioSegment software stack is deployed at biosegment.ugent.be and used
internally at the Flemish Institute for Biotechnology (VIB) for annotating new
vEM datasets. It automates the previous manual active learning loops between
imaging experts at a partnering imaging facility and deep learning scientists in
our computational lab. The code is available on GitHub and features a
documentation site.</p>
    </sec>
    <sec id="sec-4">
      <title>Future work</title>
      <p>
Computer vision is not limited to single-class semantic segmentation problems.
Mitochondria form 3D shapes and networks, requiring 3D post-processing to
achieve accurate instance segmentation. Other cell organelles are of equal
interest, and large amounts of existing data are now available through the
OpenOrganelle data portal [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. Multi-class semantic segmentation is currently
implemented, but the label map is not standardized. Interfacing with the BioImage
Model Zoo [20] would help in this regard. We also plan to further integrate
preprocessing steps like denoising, as these are still done with a separate script.
Besides image enhancement, volume reconstruction and multimodal registration
are two different data processing workflows in EM that would be beneficial to
implement.
      </p>
      <p>
        Recent advances in tooling include napari, an interactive, multidimensional
image viewer for Python and the Java-based Paintera [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] for dense labeling of
large 3D datasets. Together with cloud-based file formats like NGFF [19] these
would facilitate annotating and processing large imaging experiments.
Integration with Dask [25], a flexible open-source Python library for parallel computing,
would allow immediate preview of complex workflows and scaling for the whole
dataset using long-running jobs. These advances allow for new annotation
experiences, for example a region-of-interest-free approach in which the annotator
freely browses the whole dataset while the current model prediction and
uncertainty are lazily updated depending on the viewport. By creating multi-resolution
maps of the model uncertainty, the expert is informed on the model performance
over the whole dataset and is free to choose which regions to annotate.
Complexity of the software stack can be outsourced to existing free software
libraries. Lightning AI further removes boilerplate code in deep learning models
by providing App and Flow interfaces. Data management and worker
communication in BioSegment can be handled by Girder, which also utilizes the Celery
job queue. By creating or integrating with plugins for already established
annotation tools, adoption of the BioSegment workflow can be improved. Active
development in the 3D Slicer and napari communities for chunked and
multidimensional file formats, instance segmentation and collaborative annotation
proofreading tools will also improve the future BioSegment feature set. For AL
research, it would be valuable to add instrumentation to these annotation tools
in order to better capture the burden of the annotation work by the expert.
Currently, the number of samples and the total number of annotated pixels can be measured, but
the actual annotation time and number of clicks would be more accurate metrics. BioSegment
can be adapted to capture these interesting metrics. Greater model performance
can be achieved by including automated hyperparameter optimization such as
Optuna [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. This and other AutoML strategies would further automate model
training.
      </p>
    </sec>
    <sec id="sec-5">
      <title>Conclusions</title>
      <p>We present BioSegment, a turnkey solution for Active Learning segmentation
of vEM imaging. It provides a user-friendly annotation experience, integration
with familiar microscopy annotation software and a job queue for remote GPU
acceleration. Expert annotation is augmented using active learning strategies.
For mitochondrial segmentation, these strategies can improve segmentation
quality by 10 to 15% in terms of intersection-over-union score compared to random
sampling. Additionally, a segmentation of similar quality can be achieved using
25% of the total annotation budget required for random sampling. The
software stack is maintainable through various automated tests, and the code base
is published under an open-source license. By comparing the state-of-the-art in
human-in-the-loop annotation frameworks, we show that BioSegment is
currently the only framework capable of employing deep learning and active learning
for 3D electron microscopy data.</p>
      <p>Acknowledgements The computational resources and services used in this
work were provided by NVIDIA, VIB IRC IT and the VSC (Flemish
Supercomputer Center), funded by the Research Foundation – Flanders (FWO) and
the Flemish Government. Imaging data and feedback was provided by the VIB
BioImaging Core. Funding was provided by the Flanders AI Research Program.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1. ilastik - Voxel
          <source>Segmentation Workflow (beta)</source>
          , https://www.ilastik.org/ documentation/voxelsegmentation/voxelsegmentation
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>tiktorch</surname>
          </string-name>
          (
          <year>Dec 2021</year>
          ), https://github.com/ilastik/tiktorch, original-date:
          <fpage>2017</fpage>
          -
          <lpage>07</lpage>
          -18T10:
          <fpage>25</fpage>
          :
          <fpage>47Z</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>AICSImageIO</surname>
          </string-name>
          (Jun
          <year>2022</year>
          ), https://github.com/AllenCellModeling/ aicsimageio, original-date:
          <fpage>2019</fpage>
          -
          <lpage>06</lpage>
          -27T16:
          <fpage>43</fpage>
          :
          <fpage>22Z</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Paintera</surname>
          </string-name>
          (Jun
          <year>2022</year>
          ), https://github.com/saalfeldlab/paintera, original-date:
          <fpage>2018</fpage>
          -
          <lpage>04</lpage>
          -26T21:
          <fpage>55</fpage>
          :
          <fpage>50Z</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5. pytorch/vision (Jun
          <year>2022</year>
          ), https://github.com/pytorch/vision, original-date:
          <fpage>2016</fpage>
          -
          <lpage>11</lpage>
          -09T23:
          <fpage>11</fpage>
          :
          <fpage>43Z</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Akiba</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sano</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yanase</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ohta</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Koyama</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <string-name>
            <surname>Optuna</surname>
            :
            <given-names>A Nextgeneration</given-names>
          </string-name>
          <string-name>
            <surname>Hyperparameter Optimization Framework</surname>
          </string-name>
          (
          <year>Jul 2019</year>
          ). https://doi. org/10.48550/arXiv.
          <year>1907</year>
          .
          <volume>10902</volume>
          , http://arxiv.org/abs/
          <year>1907</year>
          .10902, number: arXiv:
          <year>1907</year>
          .10902 arXiv:
          <year>1907</year>
          .10902 [cs, stat]
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Berg</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kutra</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kroeger</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Straehle</surname>
            ,
            <given-names>C.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kausler</surname>
            ,
            <given-names>B.X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Haubold</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schiegg</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ales</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Beier</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rudy</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eren</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cervantes</surname>
            ,
            <given-names>J.I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xu</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Beuttenmueller</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wolny</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Koethe</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hamprecht</surname>
            ,
            <given-names>F.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kreshuk</surname>
          </string-name>
          , A.:
          <article-title>ilastik: interactive machine learning for (bio)image analysis</article-title>
          .
          <source>Nature Methods</source>
          <volume>16</volume>
          (
          <issue>12</issue>
          ),
          <fpage>1226</fpage>
          -
          <lpage>1232</lpage>
          (
          <year>Dec 2019</year>
          ). https://doi.org/10.1038/s41592-019-0582-9, https://www.nature.com/articles/s41592-019-0582-9, number: 12 Publisher: Nature Publishing Group
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Bodó</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Minier</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Csató</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Active Learning with Clustering</article-title>
          .
          <source>In: Active Learning and Experimental Design workshop In conjunction with AISTATS</source>
          <year>2010</year>
          . pp.
          <fpage>127</fpage>
          -
          <lpage>139</lpage>
          . JMLR Workshop and Conference Proceedings (
          <year>Apr 2011</year>
          ), https: //proceedings.mlr.press/v16/bodo11a.html, iSSN:
          <fpage>1938</fpage>
          -
          <lpage>7228</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Brinker</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Incorporating diversity in active learning with support vector machines</article-title>
          .
          <source>In: Proceedings of the Twentieth International Conference on International Conference on Machine Learning</source>
          . pp.
          <fpage>59</fpage>
          -
          <lpage>66</lpage>
          . ICML'03, AAAI Press, Washington, DC, USA (Aug
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhao</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          , Zhang,
          <string-name>
            <given-names>Y.</given-names>
            ,
            <surname>Duan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>Qi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            ,
            <surname>Zhao</surname>
          </string-name>
          , H.:
          <article-title>FocalClick: Towards Practical Interactive Image Segmentation</article-title>
          .
          <source>arXiv:2204.02574 [cs] (Apr</source>
          <year>2022</year>
          ), http: //arxiv.org/abs/2204.02574, arXiv:
          <fpage>2204</fpage>
          .02574 version:
          <fpage>1</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Conrad</surname>
            ,
            <given-names>R.W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Narayan</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Instance segmentation of mitochondria in electron microscopy images with a generalist deep learning model</article-title>
          .
          <source>Tech. rep., bioRxiv (May</source>
          <year>2022</year>
          ). https://doi.org/10.1101/
          <year>2022</year>
          .03.17.484806, https:// www.biorxiv.org/content/10.1101/
          <year>2022</year>
          .03.17.484806v2, section: New Results Type: article
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Diaz-Pinto</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alle</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ihsani</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Asad</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nath</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pérez-García</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mehta</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Roth</surname>
            ,
            <given-names>H.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vercauteren</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xu</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dogra</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ourselin</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Feng</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cardoso</surname>
            ,
            <given-names>M.J.: MONAI</given-names>
          </string-name>
          <string-name>
            <surname>Label</surname>
          </string-name>
          :
          <article-title>A framework for AI-assisted Interactive Labeling of 3D Medical Images</article-title>
          . arXiv:
          <volume>2203</volume>
          .12362 [cs, eess] (
          <year>Mar 2022</year>
          ), arXiv:
          <fpage>2203</fpage>
          .
          <fpage>12362</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Gal</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Islam</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ghahramani</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          :
          <article-title>Deep Bayesian Active Learning with Image Data</article-title>
          .
          <source>Tech. Rep. arXiv:1703.02910</source>
          ,
          <string-name>
            <surname>arXiv</surname>
          </string-name>
          (
          <year>Mar 2017</year>
          ). https://doi.org/10. 48550/arXiv.1703.02910, http://arxiv.org/abs/1703.02910, arXiv:
          <fpage>1703</fpage>
          .02910 [cs, stat] type: article
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14. Han,
          <string-name>
            <given-names>H.</given-names>
            ,
            <surname>Dmitrieva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>Sauer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Tam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.H.</given-names>
            ,
            <surname>Rittscher</surname>
          </string-name>
          , J.:
          <string-name>
            <surname>Self-Supervised</surname>
          </string-name>
          Voxel-Level
          <source>Representation Rediscovers Subcellular Structures in Volume Electron Microscopy</source>
          . pp.
          <fpage>1874</fpage>
          -
          <lpage>1883</lpage>
          (
          <year>2022</year>
          ), https://openaccess.thecvf.com/content/ CVPR2022W/CVMI/html/Han_
          <string-name>
            <surname>Self-Supervised_</surname>
          </string-name>
          Voxel-Level_Representation_ Rediscovers_Subcellular_Structures_in_Volume_Electron_Microscopy_ CVPRW_
          <year>2022</year>
          <article-title>_paper</article-title>
          .html
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Heinrich</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bennett</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ackerman</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Park</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bogovic</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eckstein</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Petruncio</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Clements</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xu</surname>
            ,
            <given-names>C.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Funke</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Korff</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hess</surname>
            ,
            <given-names>H.F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>LippincottSchwartz</surname>
          </string-name>
          , J.,
          <string-name>
            <surname>Saalfeld</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Weigel</surname>
            ,
            <given-names>A.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Team</surname>
            ,
            <given-names>C.P.</given-names>
          </string-name>
          :
          <article-title>Automatic whole cell organelle segmentation in volumetric electron microscopy</article-title>
          (
          <year>Nov 2020</year>
          ). https://doi.org/10.1101/2020.11.14.382143, https://www.biorxiv.org/content/10.1101/2020.11.14.382143v1
        </mixed-citation>
      </ref>
      <ref id="ref15b">
        <mixed-citation>
          <string-name>
            <surname>Rocklin</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Dask: Parallel Computation with Blocked Algorithms and Task Scheduling</article-title>
          . In:
          <source>Proceedings of the 14th Python in Science Conference</source>
          . pp.
          <fpage>126</fpage>
          -
          <lpage>132</lpage>
          (
          <year>2015</year>
          ). https://doi.org/10.25080/Majora-7b98e3ed-013, https://conference.scipy.org/proceedings/scipy2015/matthew_rocklin.html
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          26.
          <string-name>
            <surname>Roels</surname>
          </string-name>
          , J.:
          <source>NeuralNets</source>
          (May
          <year>2022</year>
          ), https://github.com/JorisRoels/neuralnets, original-date: 2019-11-29T09:59:01Z
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          27.
          <string-name>
            <surname>Roels</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hennies</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Saeys</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Philips</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kreshuk</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Domain Adaptive Segmentation In Volume Electron Microscopy Imaging</article-title>
          .
          <source>In: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019)</source>
          . pp.
          <fpage>1519</fpage>
          -
          <lpage>1522</lpage>
          . IEEE, Venice, Italy (
          <year>Apr 2019</year>
          ). https://doi.org/10.1109/ISBI.2019.8759383, https://ieeexplore.ieee.org/document/8759383/
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          28.
          <string-name>
            <surname>Roels</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Saeys</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>Cost-efficient segmentation of electron microscopy images using active learning</article-title>
          .
          <source>arXiv:1911.05548 [cs]</source>
          (
          <year>Nov 2019</year>
          ), http://arxiv.org/abs/1911.05548
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          29.
          <string-name>
            <surname>Roels</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vernaillen</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kremer</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gonçalves</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aelterman</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Luong</surname>
            ,
            <given-names>H.Q.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Goossens</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Philips</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lippens</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Saeys</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>An interactive ImageJ plugin for semi-automated image denoising in electron microscopy</article-title>
          .
          <source>Nature Communications</source>
          <volume>11</volume>
          (
          <issue>1</issue>
          ),
          <fpage>771</fpage>
          (
          <year>Feb 2020</year>
          ). https://doi.org/10.1038/s41467-020-14529-0, https://www.nature.com/articles/s41467-020-14529-0, Publisher: Nature Publishing Group
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          30.
          <string-name>
            <surname>Ronneberger</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fischer</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brox</surname>
          </string-name>
          , T.:
          <article-title>U-Net: Convolutional Networks for Biomedical Image Segmentation</article-title>
          (
          <year>May 2015</year>
          ). https://doi.org/10.48550/arXiv.1505.04597, http://arxiv.org/abs/1505.04597, arXiv:1505.04597 [cs]
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          31.
          <string-name>
            <surname>Schindelin</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Arganda-Carreras</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Frise</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kaynig</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Longair</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pietzsch</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Preibisch</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rueden</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Saalfeld</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schmid</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tinevez</surname>
            ,
            <given-names>J.Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>White</surname>
            ,
            <given-names>D.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hartenstein</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eliceiri</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tomancak</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cardona</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Fiji: an open-source platform for biological-image analysis</article-title>
          .
          <source>Nature Methods</source>
          <volume>9</volume>
          (
          <issue>7</issue>
          ),
          <fpage>676</fpage>
          -
          <lpage>682</lpage>
          (
          <year>Jul 2012</year>
          ). https://doi.org/10.1038/nmeth.2019, https://www.nature.com/articles/nmeth.2019, Publisher: Nature Publishing Group
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          32.
          <string-name>
            <surname>Sener</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Savarese</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Active Learning for Convolutional Neural Networks: A Core-Set Approach</article-title>
          .
          <source>arXiv:1708.00489 [cs, stat]</source>
          (
          <year>Jun 2018</year>
          ), http://arxiv.org/abs/1708.00489
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          33.
          <string-name>
            <surname>Settles</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Active Learning Literature Survey</article-title>
          .
          <source>Technical Report</source>
          , University of Wisconsin-Madison Department of Computer Sciences (
          <year>2009</year>
          ), https://minds.wisconsin.edu/handle/1793/60660, accepted: 2012-03-15T17:23:56Z
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          34.
          <string-name>
            <surname>Sofroniew</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lambert</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Evans</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nunez-Iglesias</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bokota</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Winston</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Peña-Castellanos</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yamauchi</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bussonnier</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name><surname>Doncila Pop</surname>, <given-names>D.</given-names></string-name>
          ,
          <string-name><surname>Can Solak</surname>, <given-names>A.</given-names></string-name>
          ,
          <string-name><surname>Liu</surname>, <given-names>Z.</given-names></string-name>
          ,
          <string-name><surname>Wadhwa</surname>, <given-names>P.</given-names></string-name>
          ,
          <string-name><surname>Burt</surname>, <given-names>A.</given-names></string-name>
          ,
          <string-name><surname>Buckley</surname>, <given-names>G.</given-names></string-name>
          ,
          <string-name><surname>Sweet</surname>, <given-names>A.</given-names></string-name>
          ,
          <string-name><surname>Migas</surname>, <given-names>L.</given-names></string-name>
          ,
          <string-name><surname>Hilsenstein</surname>, <given-names>V.</given-names></string-name>
          ,
          <string-name><surname>Gaifas</surname>, <given-names>L.</given-names></string-name>
          ,
          <string-name><surname>Bragantini</surname>, <given-names>J.</given-names></string-name>
          ,
          <string-name><surname>Rodríguez-Guerra</surname>, <given-names>J.</given-names></string-name>
          ,
          <string-name><surname>Muñoz</surname>, <given-names>H.</given-names></string-name>
          ,
          <string-name><surname>Freeman</surname>, <given-names>J.</given-names></string-name>
          ,
          <string-name><surname>Boone</surname>, <given-names>P.</given-names></string-name>
          ,
          <string-name><surname>Lowe</surname>, <given-names>A.</given-names></string-name>
          ,
          <string-name><surname>Gohlke</surname>, <given-names>C.</given-names></string-name>
          ,
          <string-name><surname>Royer</surname>, <given-names>L.</given-names></string-name>
          ,
          <string-name><surname>PIERRÉ</surname>, <given-names>A.</given-names></string-name>
          ,
          <string-name><surname>Har-Gil</surname>, <given-names>H.</given-names></string-name>
          ,
          <string-name><surname>McGovern</surname>, <given-names>A.</given-names></string-name>
          :
          <article-title>napari: a multi-dimensional image viewer for Python</article-title>
          (
          <year>May 2022</year>
          ). https://doi.org/10.5281/zenodo.6598542, https://zenodo.org/record/6598542
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          35.
          <string-name>
            <surname>Tkachenko</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Malyuk</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Holmanyuk</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liubimov</surname>
          </string-name>
          , N.:
          <article-title>Label Studio: Data labeling software</article-title>
          (
          <year>2020</year>
          ), https://github.com/heartexlabs/label-studio, original-date: 2019-06-19T02:00:44Z
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          36.
          <string-name>
            <surname>Wolny</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yu</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pape</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kreshuk</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Sparse Object-level Supervision for Instance Segmentation with Pixel Embeddings</article-title>
          (
          <year>Apr 2022</year>
          ). https://doi.org/10.48550/arXiv.2103.14572, http://arxiv.org/abs/2103.14572, arXiv:2103.14572 [cs]
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          37.
          <string-name>
            <surname>Xiao</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xie</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          , Han, H.:
          <article-title>Automatic Mitochondria Segmentation for EM Data Using a 3D Supervised Convolutional Network</article-title>
          .
          <source>Frontiers in Neuroanatomy</source>
          <volume>12</volume>
          (
          <year>2018</year>
          ). https://doi.org/10.3389/fnana.2018.00092, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6224513/, Publisher: Frontiers Media SA
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          38.
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zuo</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          , Zhang, L.,
          <string-name>
            <surname>Van Gool</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Timofte</surname>
          </string-name>
          , R.:
          <article-title>Plug-and-Play Image Restoration with Deep Denoiser Prior</article-title>
          (
          <year>Jul 2021</year>
          ). https://doi.org/10.48550/arXiv.2008.13751, http://arxiv.org/abs/2008.13751, arXiv:2008.13751 [cs, eess]
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          39.
          <string-name>
            <surname>Zuiderveld</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>VIII.5. Contrast Limited Adaptive Histogram Equalization</article-title>
          . In: Heckbert, P.S. (ed.)
          <source>Graphics Gems</source>
          , pp.
          <fpage>474</fpage>
          -
          <lpage>485</lpage>
          . Academic Press (
          <year>Jan 1994</year>
          ). https://doi.org/10.1016/B978-0-12-336156-1.50061-6, https://www.sciencedirect.com/science/article/pii/B9780123361561500616
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>