<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sabrina Musatian</string-name>
          <email>sabrinamusatian@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alexander Lomakin</string-name>
          <email>alexander.lomakin@protonmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Angelina Chizhova</string-name>
          <email>chilina4@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Saint Petersburg State University</institution>
          ,
          <addr-line>Saint Petersburg</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2018</year>
      </pub-date>
      <abstract>
<p>With a growing interest in medical research problems and the introduction of machine learning methods for solving them, a need has developed for an environment that integrates modern solutions and algorithms into medical applications. The main goal of our research is to create the medical images research framework (MIRF) as a solution to this problem. MIRF is a free, open-source platform for the development of medical tools with image processing. We created it to fill the gap between innovative research with medical images and its integration into real-world patient treatment workflows. Within a short time, a developer can create a rich medical tool using MIRF's modular architecture and its set of included features. MIRF takes responsibility for the common functionality of medical image processing. The only thing required from the developer is to integrate their functionality into a module and choose which other MIRF features the app needs; the MIRF platform handles everything else. In this paper, we review and compare existing applications for handling operations with medical images, and describe the basic ideas and functionality behind our MIRF framework.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Over the past decade, many approaches to
solving problems in the field of medical imaging have been
explored. As a result of this research, the scientific community
can now rapidly take on new and more challenging tasks. However,
these studies should go beyond purely algorithmic decisions
related to diagnosis and treatment using CT (Computed
Tomography) and MRI (Magnetic Resonance Imaging)
images. Doctors require high–performance, real–time software
systems that can assist in determining a patient's
diagnosis and solve various related tasks. Hence, it is necessary
not only to develop highly efficient algorithms for medical
image analysis but also to integrate them into a convenient
environment in which many other instruments essential for
physicians may be seamlessly used. A set of medical tasks
share many of these tools, which means that the tools
can be provided within a single platform. In this paper, we
investigate existing software systems for medical images and
introduce our own framework (MIRF) for medical diagnosis,
simplifying the development of medical instruments. The
objectives of this work are to create an extensible platform
for the development of medical instruments and to show
successful applications of this library in some real medical
cases.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Existing systems for medical image processing</title>
      <p>There are many open–source packages and software
systems for working with medical images. Some of them are
dedicated specifically to these purposes; others have been
adapted for use in medical procedures.</p>
      <p>
        Many of them comprise a set of instruments dedicated
to typical tasks, such as image pre–processing
and analysis of the results – ITK [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], visualization –
VTK [2], real–time pre–processing of images and video –
OpenCV [3].
      </p>
      <p>
        Others solve problems related to image analysis of
specific organs or diseases, for example, brain image analysis
(FreeSurfer [
        <xref ref-type="bibr" rid="ref3">4</xref>
        ], SPM [
        <xref ref-type="bibr" rid="ref4">5</xref>
        ] and others). Extending
such software systems to a wide range of tasks
in medicine is quite complicated or even impossible, since
the architecture of such applications was usually designed
for a specific task, and it may be hard to generalize
these approaches.
      </p>
      <p>
        There are also many general–purpose medical imaging
applications. Such systems provide basic functionality for
working with images but cannot be extended
to address specific tasks (for example, segmentation or
finding features inherent in certain diseases). Examples
are Ginkgo CAD [6] and ClearCanvas [
        <xref ref-type="bibr" rid="ref6">7</xref>
        ].
      </p>
      <p>
        Another class of medical software comprises expandable
medical applications that focus primarily on final usage
by doctors. They already provide all the basic methods
in an integrated user interface, for example, Slicer [
        <xref ref-type="bibr" rid="ref7">8</xref>
        ],
Weasis [9] and OsiriX [
        <xref ref-type="bibr" rid="ref9">10</xref>
        ]. The last is an expensive
commercial product and is not available to a wide audience.
Such applications can be extended with plugins written
specifically for these platforms. However, this approach does
not give developers enough flexibility to create and adjust
their own systems and functionality.
      </p>
      <p>
        The most generalized and flexible product for
working with medical images is MITK [
        <xref ref-type="bibr" rid="ref10">11</xref>
        ] – an open–source
framework for developing interactive medical software
systems. MITK combines the algorithms presented in ITK [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]
with the visualization algorithms from the VTK library [2].
MITK also supplements the functionality of these two
libraries with some unique features, allowing its users to
create a variety of medical programs from a broad range
of functions. While MITK is a cross–platform framework,
some versions have not been supported for years. Because it
is written in C++, it must be built separately
for each platform. Moreover, developers have to use a
custom build procedure provided by MITK to create and
add new modules.
      </p>
      <p>
        In this paper, we introduce our own open–source medical
images research framework (MIRF) as an alternative to
existing software systems for medical application
development. MIRF is written in the Kotlin programming language,
with a focus on smoothly integrating modern medical
imaging research into applications. With Kotlin at its core,
MIRF can be integrated into any project running in a
Java environment. We pay close attention to the
integration of artificial intelligence and various machine
learning approaches for the diagnosis and treatment of various
diseases, because nowadays the most effective
solutions to medical image analysis problems rely on
machine learning or deep learning algorithms [
        <xref ref-type="bibr" rid="ref11">12</xref>
        ].
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. MIRF architecture</title>
      <sec id="sec-3-1">
        <title>3.1. Structure</title>
        <p>The MIRF framework is a collection of
generic modules for various tasks. These modules are
divided into two global packages:</p>
        <p>Core – the minimum set of modules necessary for
the correct operation of the MIRF framework. This
package includes modules used for transferring data
into the internal representation, for communication
between modules, and for creating data-processing
pipelines.</p>
        <p>Features – modules with core user
functionality needed to facilitate development:
mechanisms for accessing data storage, adapters for
various medical data formats, various pre–processing
filters, and image analysis tools. Any custom modules
should extend the capabilities of this package.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Pipeline</title>
        <p>
          Execution of any workflows in MIRF is implemented
with Pipes &amp; Filters [
          <xref ref-type="bibr" rid="ref12">13</xref>
          ] approach. For this purpose,
various data handlers are used to connect
individual blocks.
        </p>
        <p>In the framework, any computational logic must implement
the Algorithm interface. An Algorithm is a handler
that, when invoked, changes only the data submitted to it
at the input. An Algorithm does not invoke any third–
party code associated with data processing; it does not
save data and acts solely as a data handler. This approach
provides opportunities for the flexible creation of algorithms
and the organization of hierarchies.</p>
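        <p>This contract can be illustrated with a short self-contained Kotlin sketch; the interface shape and the names below are our assumptions for illustration, not the exact MIRF API:</p>

```kotlin
// Sketch of the Algorithm contract (assumed names, not the exact MIRF API):
// an algorithm only transforms the data it receives and stores nothing.
interface Algorithm {
    fun process(input: DoubleArray): DoubleArray
}

// Example: clamp pixel intensities into a fixed window, a typical
// pre-processing step for CT images.
class WindowingAlgorithm(
    private val low: Double,
    private val high: Double
) : Algorithm {
    override fun process(input: DoubleArray): DoubleArray =
        DoubleArray(input.size) { i -> input[i].coerceIn(low, high) }
}

fun main() {
    val windowing = WindowingAlgorithm(0.0, 255.0)
    println(windowing.process(doubleArrayOf(-40.0, 128.0, 300.0)).toList())
    // [0.0, 128.0, 255.0]
}
```

        <p>Because the algorithm keeps no state, instances can be freely reused or composed into hierarchies, as described above.</p>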
        <p>Algorithm instances act as filters in our architecture.
The Algorithm class is encapsulated by the PipelineBlock
class, which is the main entity used to transfer data between
algorithms. The communication between the blocks is based
on the Observer pattern – after the block executes the
algorithm, it informs all its listeners about the completion of
the calculations. Some blocks may also be engaged in the
aggregation of data for the following blocks or have another
specific purpose (for example, they indicate the completion
of calculations in the pipeline).</p>
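        <p>The Observer-style wiring described above can be sketched in a few lines of Kotlin; the class and member names follow the paper, but the implementation is our simplification (the real MIRF classes carry generics and richer state):</p>

```kotlin
// Simplified Observer-style block wiring: a block runs its hosted
// algorithm, then notifies all subscribed listeners of completion.
class PipelineBlock(private val transform: (Int) -> Int) {
    // Listeners subscribed to the block's completion event via `+=`.
    val dataReady = mutableListOf<(Int) -> Unit>()

    fun inputReady(data: Int) {
        val output = transform(data)      // run the hosted algorithm
        dataReady.forEach { it(output) }  // inform all listeners
    }
}

fun main() {
    val results = mutableListOf<Int>()
    val denoise = PipelineBlock { it - 1 }
    val segment = PipelineBlock { it * 10 }
    // denoise feeds segment; segment feeds a sink collecting results.
    denoise.dataReady += segment::inputReady
    segment.dataReady += { results.add(it) }
    denoise.inputReady(5)
    println(results)  // [40]
}
```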
        <p>The core architecture of MIRF is shown in figure 1.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Data representation</title>
        <p>Any data in MIRF should be derived from the abstract
class Data. The main task of this class is to take over the
management of the metadata, namely the list of attributes
(the AttributeCollection class). Any class inherited from the
Data class should be used only as a data storage
object. Instances of Data subclasses are passed through the MIRF
pipeline, flowing through the Pipes of our Pipes &amp; Filters
approach. This ensures the clarity of the entity's purpose within
the framework.</p>
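        <p>A minimal sketch of this contract follows; attribute handling is simplified to a plain map, whereas the real AttributeCollection is richer:</p>

```kotlin
// Sketch of the Data hierarchy (simplified): Data owns the metadata,
// subclasses are storage-only payloads passed through the pipeline.
abstract class Data {
    // Stand-in for AttributeCollection: metadata as key/value pairs.
    val attributes = mutableMapOf<String, String>()
}

// A storage-only payload class, as MIRF requires: no behavior, just data.
class VolumeData(val voxels: DoubleArray) : Data()

fun main() {
    val scan = VolumeData(DoubleArray(8) { 0.0 })
    scan.attributes["Modality"] = "MR"  // metadata managed by the base class
    println("${scan.attributes["Modality"]} volume of ${scan.voxels.size} voxels")
    // MR volume of 8 voxels
}
```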
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Pipeline initialization</title>
        <p>MIRF provides common blocks from which custom
pipelines may be assembled, controlling which functions
are executed on the provided data. A pipeline can be
initialized in a few simple steps:
val pipe = Pipeline("Pipeline name")
// Creating the blocks
val firstBlock = PipelineBlock(
    ...block parameters...
)
...initialization of other blocks...
// Creating connections between blocks
firstBlock.dataReady += secondBlock::inputReady
...initialization of other connections...
// Setting the root block and running with the input data
pipe.rootBlock = firstBlock
pipe.run(inputData)</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Medical images representation</title>
      <sec id="sec-4-1">
        <title>4.1. MedImage</title>
        <p>
          There are several common medical image formats, for
example, DICOM [
          <xref ref-type="bibr" rid="ref13">14</xref>
          ] or NIfTI [
          <xref ref-type="bibr" rid="ref14">15</xref>
          ]. To enable a unified workflow
across these formats and the application of common analysis
algorithms, we have implemented a general class for
representing medical images in MIRF. MedImage is a class
that contains the pixel representation of the image and a list
of attributes extracted by rules that depend on the source
format. Thus, all algorithms for medical images operate on
the MedImage class, which allows the library user to reuse
and extend the existing code.
        </p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. DICOM</title>
        <p>
          The DICOM format is represented as a set of key–value
items; the image itself is also stored as a value under a key.
The sets of keys for DICOM images are strictly defined and
used universally by the medical community. To read
DICOM images, we considered several libraries for working
with this format from Kotlin: ImageJ [
          <xref ref-type="bibr" rid="ref15">16</xref>
          ], DCM4CHE [
          <xref ref-type="bibr" rid="ref16">17</xref>
          ],
and PixelMed [
          <xref ref-type="bibr" rid="ref17">18</xref>
          ]. While ImageJ supports reading DICOM, it does not
provide the functionality to write images in this format.
DCM4CHE is a rich toolkit for working with DICOM images
through medical servers, but since we do not want to
overwhelm our library with unnecessary dependencies, we
settled on PixelMed, which supports reading, writing, and
attribute handling for DICOM images without workflows as
complicated as those in DCM4CHE. After reading the list of
attributes of a DICOM image, MIRF converts it to the
MedImage class by creating an internal representation of the
attributes and extracting the pixel data from the original
format.
        </p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. NIfTI</title>
        <p>
Another popular medical image format is NIfTI [
          <xref ref-type="bibr" rid="ref14">15</xref>
          ].
DICOM and NIfTI differ in the data they store and in how
it is represented. For instance, NIfTI metadata does not include
patient– or hospital–related information; it stores only the
image and the MRI settings metadata. Also, NIfTI stores a set
of medical slices (a set of medical images) within one file,
while DICOM usually stores them as separate files. To
enable NIfTI usage in our framework, we used ImageJ [
          <xref ref-type="bibr" rid="ref15">16</xref>
          ].
Then, as with DICOM images, we convert the
information read from a NIfTI file into our internal MedImage
representation, so that the same algorithms
can work with different file formats.
        </p>
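        <p>The payoff of the shared representation can be sketched as follows; the field names and reader helpers are hypothetical, shown only to illustrate that downstream code never branches on the source format:</p>

```kotlin
// Unified image representation (hypothetical field names): readers for
// each format fill the same MedImage, so one algorithm serves both.
class MedImage(val pixels: DoubleArray, val attributes: Map<String, String>)

// Hypothetical format-specific readers extracting attributes by their own rules.
fun fromDicom(raw: DoubleArray) = MedImage(raw, mapOf("Source" to "DICOM"))
fun fromNifti(raw: DoubleArray) = MedImage(raw, mapOf("Source" to "NIfTI"))

// A format-agnostic algorithm: works on MedImage, never on raw files.
fun meanIntensity(image: MedImage): Double = image.pixels.average()

fun main() {
    println(meanIntensity(fromDicom(doubleArrayOf(1.0, 3.0))))  // 2.0
    println(meanIntensity(fromNifti(doubleArrayOf(2.0, 4.0))))  // 3.0
}
```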
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Unique features</title>
      <sec id="sec-5-1">
        <title>5.1. Tensorflow models integration</title>
        <p>
          Because modern researchers are often using deep
learning techniques for solving various problems in medicine,
we paid special attention to the possibility of integration
of those approaches effortlessly within our framework. We
started with the most commonly used deep learning
frameworks such as Tensorflow [
          <xref ref-type="bibr" rid="ref18">19</xref>
          ] and Keras [
          <xref ref-type="bibr" rid="ref19">20</xref>
          ]. As a result,
Tensorflow models can be integrated via the MIRF
Tensorflow block. Since Tensorflow provides a Java API for
working with its models, we were able to create a
block that runs a provided model. To run inference
on a prepared Tensorflow model, the Tensorflow block
is instantiated with the model's parameters: it is
sufficient to pass the path to the saved model and the
names of the input and output nodes.
        </p>
        <p>Also, since the Tensorflow package provides Keras
interfaces, it is possible to integrate not only Tensorflow
models but Keras models as well.</p>
        <p>To the best of our knowledge, no other software for
creating medical applications provides such integration within
its core functionality. We believe that this feature is very
important in modern medical application development
because it completely encapsulates the integration of
complex artificial intelligence models into real medical
applications and enables developers to focus on creating new
algorithms in their preferred languages and environments.</p>
        <p>To use the Tensorflow API in C++ or Java, developers
have to specify the model's Graph and define many fields
before they can run it. MIRF users, however, can set up the
Tensorflow block in just a few lines:
val tensorflowModel = TensorflowModel(
    MODEL_NAME, INPUT_NODE_NAME,
    OUTPUT_NODE_NAME, OUTPUT_DIMS
)
val tensorflowModelRunner =
    AlgorithmHostBlock&lt;Data, Data&gt;(
        { tensorflowModel.runModel(it) }
    )</p>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. PDF reports generation</title>
        <p>There are various types of medical documentation that
doctors generate after a patient's appointment. Those
documents usually include CT and MRI images as additions to
the final diagnosis paper and recommendations. The outline
and contents of medical reports are strictly regulated by
government standards and vary by criteria
such as the organs, diseases, or medical procedures involved.
Doctors have to fill in those reports manually or
semi-automatically and include images in them. MIRF provides
tools for creating these reports automatically, based on the
results of the specified pipelines. MIRF generates the report
in PDF format with all the necessary images already
included.</p>
        <p>
          We use an Algorithm implementation for this
purpose: it generates a report in the form of
PdfElementData from the input data. The final report is then
created by the PdfElementsAccumulator class, which takes a
sequence of PdfElementData as input and draws the elements on
the document. We use iText 7.1.2 [
          <xref ref-type="bibr" rid="ref20">21</xref>
          ] as the main library for working with
PDF format.
        </p>
        <p>MIRF provides a set of primitive modules that may
be included in the final PDF report. We currently support
tables, images, and raw text. If users need other elements
in their reports, they may create their own implementation of
PdfElementData and include it in the final report.</p>
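        <p>The accumulation flow can be sketched as follows; the class names follow the paper, but the implementations are stand-ins, with string rendering in place of the iText drawing that MIRF actually performs:</p>

```kotlin
// Sketch of report assembly: elements implement PdfElementData and the
// accumulator draws them in sequence (string rendering stands in for iText).
abstract class PdfElementData {
    abstract fun render(): String
}

class PdfText(private val text: String) : PdfElementData() {
    override fun render() = text
}

class PdfTable(private val rows: List<String>) : PdfElementData() {
    override fun render() = rows.joinToString("\n")
}

class PdfElementsAccumulator {
    // Draws each element onto the "document" in order.
    fun createReport(elements: List<PdfElementData>): String =
        elements.joinToString("\n") { it.render() }
}

fun main() {
    val report = PdfElementsAccumulator().createReport(
        listOf(
            PdfText("MS lesion report"),
            PdfTable(listOf("lesion;volume", "1;2.3 ml"))
        )
    )
    println(report)
}
```

        <p>A custom element type only needs to subclass PdfElementData, mirroring the extension point described above.</p>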
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. MIRF applications</title>
      <sec id="sec-6-1">
        <title>6.1. Multiple sclerosis analysis</title>
        <p>With our framework, developers may easily create
custom pipelines for specific tasks. This automates many
manual scenarios and can bring new, previously unavailable
features to doctors' workflows. We take multiple sclerosis (MS)
analysis as an example of such a workflow.</p>
        <p>Multiple sclerosis is an immune–mediated disorder
affecting the central nervous system. Patients with this disease
have multiple lesions in the brain and have to undergo
MRI scans twice a year. Doctors compare the scans over
time, check the growth of lesions in the brain,
and generate a report about it.</p>
        <p>We implemented an application that generates MS
reports based on the baseline and follow–up sets of scans. This
saves doctors a lot of time and optimizes
several steps of their work.</p>
        <p>The data flow diagram for this pipeline is shown in
figure 2. First, it reads a set of DICOM images and loads
lesion masks from the baseline set. The follow–up set of images
is pre–processed and the segmentation masks are calculated;
we use the Tensorflow block to perform segmentation on the
images. Then, MIRF compares the baseline and follow–
up images and generates a report based on this data. An
example report for the MS pipeline is shown in figure 3.</p>
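        <p>The comparison step at the core of this pipeline can be sketched as follows; the logic is our simplification, not the exact MIRF implementation. Voxels marked as lesion in the follow-up mask but not in the baseline indicate growth:</p>

```kotlin
// Simplified comparison of baseline and follow-up segmentation masks:
// count voxels that became lesion since the baseline scan.
fun newLesionVoxels(baseline: BooleanArray, followUp: BooleanArray): Int =
    baseline.indices.count { followUp[it] && !baseline[it] }

fun main() {
    val baseline = booleanArrayOf(true, true, false, false)
    val followUp = booleanArrayOf(true, true, true, false)
    println(newLesionVoxels(baseline, followUp))  // 1
}
```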
        <p>With this application, the segmentation, comparison, and
report generation are performed automatically for the doctor.
These steps are usually done manually and require a lot of
time.</p>
      </sec>
      <sec id="sec-6-2">
        <title>6.2. Brain tumor analysis</title>
        <p>Another example that shares common functionality with
MS analysis is brain tumor segmentation and report
generation from the obtained information. With this case,
we show how MIRF core functions may be used in various
scenarios to optimize doctors' workflows.</p>
        <p>
          According to [
          <xref ref-type="bibr" rid="ref11">12</xref>
          ], brain tumor segmentation is
performed either manually or semi–automatically, and
there is no registered case of modern research on
this problem being brought into real clinical trials. The main
information that can be inferred from such a segmentation in
the early stages of treatment is the tumor volume and its
volume relative to the whole patient's brain. Hence, these
findings should be added to the final disease statement.
These actions (analyzing MRI scans, calculating the volume,
and including this information in a report) are performed
manually by specialists. As part of a final tool for
working with various medical images, this pipeline may
be easily included in our framework. For the brain tumor
segmentation, we take an implementation of a state–of–
the–art solution to this problem [
          <xref ref-type="bibr" rid="ref21">22</xref>
          ]. The segmentation algorithm
is implemented using the Tensorflow framework
and may be integrated as a model file with our general–
purpose Tensorflow block, described above. It takes MRI
brain images in NIfTI format [
          <xref ref-type="bibr" rid="ref14">15</xref>
          ] and creates a mask
indicating where the different types of tumor tissue are
present (figure 4). Since MRI images are represented as
a set of slices, where each voxel in a slice corresponds to
a particular volume, the volume can be calculated
from the number of voxels. The information about this
encoding is stored in the medical image metadata and depends
on the MRI machine settings. MIRF calculates the tumor
volume based on the segmented mask. Using the initial
brain images, the whole brain volume may be determined,
and the ratio between the two volumes is deduced. Then,
MIRF creates a complete report with this information, using
the PDF generation tools. The data workflow and the blocks
used in this pipeline are shown in figure 5.
        </p>
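        <p>The volume arithmetic described above can be sketched as follows; the function name, the voxel size, and the voxel counts are illustrative, since the real voxel size comes from the image metadata:</p>

```kotlin
// Voxel-count to volume conversion and the relative tumor volume,
// as used for the report (illustrative numbers).
fun volumeMm3(voxelCount: Int, voxelVolumeMm3: Double): Double =
    voxelCount * voxelVolumeMm3

fun main() {
    val voxel = 2.0                        // mm^3 per voxel, from metadata
    val tumor = volumeMm3(500, voxel)      // from the segmented tumor mask
    val brain = volumeMm3(700_000, voxel)  // from the whole-brain mask
    println("tumor = $tumor mm^3, relative = ${tumor / brain}")
}
```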
        <p>This example shares blocks such as image reading,
segmentation, and report generation with the MS analysis
workflow. It demonstrates how the core MIRF blocks may
facilitate workflows that are, from the medical point of view,
very different.</p>
      </sec>
      <sec id="sec-6-3">
        <title>6.3. Skin cancer detection on Android</title>
        <p>
          Due to the fact that we develop our framework on
Kotlin programming language, it is possible to create not
only cross–platform desktop applications but also mobile.
This enables developers to deploy the same pipelines and
scenarios on a wide range of devices without rewriting any
code. To show the benefits of this approach, we created
an Android application for skin cancer detection. For image
classification, we use an open–source implementation of one
of the deep learning algorithms for this problem [
          <xref ref-type="bibr" rid="ref22">23</xref>
          ]. It
implements the deep learning model using the Keras
framework. Since Keras models can be converted
into pure Tensorflow models, we may use the Tensorflow
integration block from MIRF to deploy this model. As a
result, we may use the same pipeline as for the desktop app
to detect skin cancer from the phone's images.
The developer is still required to write custom Android
layouts for the GUI. We plan to resolve this
issue in the future by creating a pre–defined library of such
graphical interfaces, so that these apps may
be developed with more ease and automation.
        </p>
        <p>To show the simplicity of our approach, we provide the
pipeline code that is used on Android to run this example.
It takes an image path as an input and generates a label
showing whether the mole is benign or malignant.
val pipe = Pipeline("Detect moles")
val assetsBlock =
    AlgorithmHostBlock&lt;Data, AssetsData&gt;(
        ...algorithm parameters...)
val imageReader =
    AlgorithmHostBlock&lt;AssetsData, BitmapRawImage&gt;(
        ...algorithm parameters...)
val tensorflowModelRunner =
    AlgorithmHostBlock&lt;BitmapRawImage, ParametrizedData&lt;Int&gt;&gt;(
        ...algorithm parameters...)
val root = PipeStarter()
// Make connections
root.dataReady += assetsBlock::inputReady
assetsBlock.dataReady += imageReader::inputReady
imageReader.dataReady += tensorflowModelRunner::inputReady
// Run
pipe.rootBlock = root
pipe.run(MirfData.empty)</p>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusion</title>
      <p>In this paper, we introduced the Medical Images
Research Framework for the development of complex medical
applications that work with various types of medical images.
We investigated the existing solutions in this area and argued
why we created such a tool.</p>
      <p>We gave a basic overview of the proposed
architecture and its benefits, as well as the unique
features of our platform. We believe that our framework
will help bridge the gap between innovative research
in medical image analysis and its delivery to
end users. As our research is still early in development,
we have many plans for further work, such as adding the
most commonly used features (scales, segmentation masks,
zooming, working with patient data) and a GUI for them.
We also plan to create a visual programming environment
based on our framework, so that creating medical apps would
be possible for people with little programming experience.</p>
      <p>Out project is publically available and may
be found at https://github.com/MathAndMedLab/
Medical-images-research-framework</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] ITK. [Online]. Available: http://www.itk.org [Accessed: 17.12.2018].</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[3] G. Bradski and A. Kaehler, Learning OpenCV: Computer Vision in C++ with the OpenCV Library, 2nd ed. O'Reilly Media, Inc., 2013.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[4] FreeSurfer. [Online]. [Accessed: 17.12.2018].</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[5] SPM. [Online]. Available: http://www.fil.ion.ucl.ac.uk/spm/ [Accessed: 17.12.2018].</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[6] Ginkgo CADx. [Online]. Available: https://github.com/gerddie/ginkgocadx [Accessed: 17.12.2018].</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[7] ClearCanvas. [Online]. Available: https://www.clearcanvas.ca/ [Accessed: 17.12.2018].</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[8] S. Pieper, M. Halle, and R. Kikinis, "3D Slicer," in 2004 2nd IEEE International Symposium on Biomedical Imaging: Nano to Macro (IEEE Cat No. 04EX821), April 2004, pp. 632-635, Vol. 1.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[9] Weasis. [Online]. Available: http://nroduit.github.io/en/ [Accessed: 17.12.2018].</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[10] A. Rosset, L. Spadola, and O. Ratib, "OsiriX: An open-source software for navigating in multidimensional DICOM images," Journal of Digital Imaging, vol. 17, pp. 205-216, 2004.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[11] M. Nolden, S. Zelzer et al., "The medical imaging interaction toolkit: challenges and advances," International Journal of Computer Assisted Radiology and Surgery, vol. 8, 2013.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[12] M. S.A., L. A.V. et al., "Medical images segmentation operations," Trudy ISP RAN/Proc. ISP RAS, vol. 30, pp. 183-194, 2018.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A. H.</given-names>
            <surname>Vermeulen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Beged-Dov</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P. W.</given-names>
            <surname>Thompson</surname>
          </string-name>
          , “
          <article-title>The pipeline design pattern</article-title>
          ,” in
          <source>OOPSLA 1995</source>
          ,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>W. D.</given-names>
            <surname>Bidgood</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. C.</given-names>
            <surname>Horii</surname>
          </string-name>
          et al.,
          <article-title>Understanding and Using DICOM, The Data Interchange Standard for Biomedical Imaging</article-title>
          . Boston, MA: Springer US,
          <year>1998</year>
          , pp.
          <fpage>25</fpage>
          -
          <lpage>52</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>V.</given-names>
            <surname>Patel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. D</given-names>
            <surname>Dinov</surname>
          </string-name>
          et al., “
          <article-title>LONI MiND: Metadata in NIfTI for DWI</article-title>
          ,”
          <source>NeuroImage</source>
          , vol.
          <volume>51</volume>
          , pp.
          <fpage>665</fpage>
          -
          <lpage>676</lpage>
          ,
          <month>03</month>
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [16]
          ImageJ. [Online]. Available: https://imagej.nih.gov/ij/index.html [Accessed: 17.12.2018
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [17]
          dcm4che. [Online]. Available: https://www.dcm4che.org [Accessed: 17.12.2018].
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [18]
          PixelMed. [Online]. Available: dicomtoolkit.html [Accessed: 17.12.2018
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>M.</given-names>
            <surname>Abadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Agarwal</surname>
          </string-name>
          et al., “
          <article-title>TensorFlow: Large-scale machine learning on heterogeneous distributed systems</article-title>
          ,”
          <source>CoRR</source>
          , vol.
          <volume>abs/1603.04467</volume>
          ,
          <year>2016</year>
          . [Online]. Available: http://arxiv.org/abs/1603.04467
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [20]
          Keras. [Online]. Available: https://github.com/fchollet/keras [Accessed: 17.12.2018
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [21]
          iText. [Online]. [Accessed: 07.02.2019
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>G.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Li</surname>
          </string-name>
          et al., “
          <article-title>Automatic brain tumor segmentation using cascaded anisotropic convolutional neural networks</article-title>
          ,”
          <source>CoRR</source>
          , vol.
          <volume>abs/1709.00382</volume>
          ,
          <year>2017</year>
          . [Online]. Available: http://arxiv.org/abs/1709.00382
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [23]
          <article-title>Skin cancer detection project</article-title>
          . [Online]. Available: https://github.com/dasoto/skincancer [Accessed:
          <fpage>10</fpage>
          .
          <fpage>02</fpage>
          .
          <year>2019</year>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>