<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>Brightness Levels in MRI Should Correspond With Echogenicity Grade in Ultrasound B-MODE Images: A Pilot Study of Reproducibility Using ROI-based Measurement Between Two Blind Observers</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
<string-name>Jiří Blahuta</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tomáš Soukup</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Silesian University in Opava, The Department of Computer Science</institution>
          ,
          <addr-line>Opava</addr-line>
          ,
          <country country="CZ">Czech Republic</country>
        </aff>
      </contrib-group>
      <abstract>
<p>In 2011, we developed a software tool for analyzing the echogenicity level in ultrasound B-MODE images. The software is based on binary thresholding in a predefined Region of Interest (ROI). The goal of this paper is to observe whether the echogenicity grade in B-MODE images corresponds with the brightness level in MR images using the echogenicity index. Results obtained by two observers non-experienced in radiology show that the software can also be used for MR images. The reproducibility of the measurement shows a high level of agreement. We use three ROI areas whose exact position in the MR image is not important at this moment. A total of 52 images were analyzed. The results show that the error between the measurements of the two non-experienced observers does not exceed 5 %, calculated from the range of the measurements and the computed average difference for each image set. Thus, the echogenicity index can be considered a reproducible marker; a small shift of the ROI does not produce a significant change. The average range of the index is from 28.17 up to 67.95; the minimal index value was &lt; 20 and the highest value was 101.2 due to different brightness levels in the examined ROI. The ranges for the same ROI are almost equal; the difference does not exceed 2 %.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
      <p>
        Our software was developed for ultrasound B-imaging [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] in neurology to detect hyperechogenicity of the substantia nigra [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], which is probably one of the most common markers of Parkinson's Disease detectable on transcranial (TCS) ultrasound B-MODE images. The core algorithm, based on binary thresholding, is not limited to loading ultrasound images; thus, MR images can also be analyzed with this software tool. Clinical studies have been published since 2014. The core of the software has been improved, in particular with new ROI areas for different diagnoses.
      </p>
      <p>
        In modern neurology and neurosurgery, MRI is one of the most progressive medical imaging modalities for all perioperative phases [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. MRI and diagnostic ultrasound are commonly considered complementary diagnostic modalities, also for diagnosis confirmation.
      </p>
      <p>Copyright ©2021 for this paper by its authors. Use permitted under
Creative Commons License Attribution 4.0 International (CC BY 4.0).</p>
    </sec>
    <sec id="sec-2">
      <title>1.1 Input MR Images</title>
      <p>
        In this study, we have three sets of T1- and T2-weighted MR images (the two basic types of MR images) [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] with different image resolutions, analyzed using the same approach as for ultrasound images, with the echogenicity index as a feature to distinguish different brightness levels. In comparison with the ultrasound B-MODE images used in our previous studies, there is no native scale for selecting a 50 × 50 mm window, so we use the full width of the image, see Fig. 1.
      </p>
    </sec>
    <sec id="sec-3">
<title>1.2 Methodology of the Analysis</title>
      <p>We analyze three ROI areas with different sizes and shapes, see Fig. 2. For each image, all three ROI areas are placed in the same position in the image. In other words, each image is analyzed three times using three different ROIs in the same position.</p>
      <p>
        The sizes and shapes of the ROIs were defined in the past for B-MODE images. Originally, ROI1 was used for ncl. raphe analysis and ROI2 was defined for the substantia nigra area, both in B-MODE images. The square-shaped ROI3 was used to analyze the medial temporal lobe (MTL) in a different case: measuring the black/white pixel ratio in a 20 × 20 mm ROI to judge the probability of MTL atrophy as a marker of dementia [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
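      <p>The black/white pixel ratio measurement described above can be sketched as follows (a minimal illustration with NumPy; the threshold value, function name, and the synthetic test image are our illustrative assumptions, not details taken from the software):</p>

```python
import numpy as np

def black_white_ratio(image, top, left, size, threshold=128):
    """Ratio of black to white pixels inside a square ROI after binary
    thresholding. Pixels with intensity at or above `threshold` count as
    white; the rest count as black. `threshold` is an assumed value."""
    roi = np.asarray(image)[top:top + size, left:left + size]
    white = int((roi >= threshold).sum())
    black = roi.size - white
    return black / white if white else float("inf")

# Synthetic example: a bright 20x20 structure on a dark background.
img = np.zeros((100, 100), dtype=np.uint8)
img[40:60, 40:60] = 255
ratio = black_white_ratio(img, 30, 30, 40)  # ROI covers the structure plus margin
```

<p>In the sketch, a 40 × 40 pixel ROI around a 20 × 20 bright structure yields three black pixels for every white one; a larger ratio would suggest a smaller bright structure within the same ROI.</p>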
      <sec id="sec-3-1">
        <title>Echogenicity Index Evaluation in MR Images</title>
        <p>In the case of B-MODE images, the echogenicity index should correspond with the echogenicity grade of the tissue. We can use the same index for MR images, in which the index should correspond with the brightness level of the examined part inside the ROI. The index is a single numerical value computed by our software.</p>
        <p>
          For more information about the methodology, see [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], a paper focused on atherosclerotic plaques in B-MODE images, in which we defined the index and its purpose. Simply put, the index is a single number that describes the visual brightness level (the echogenicity grade in US imaging). Our software computes the area of the pixels remaining after binary thresholding in the ROI. Assume 256 intensity levels H = 0, 1, ..., 255; the area A_H is computed for each level. All the computed areas are then summed and the sum is divided by 100 to obtain the index given by
        </p>
        <p>ECHOINDEX = (∑_{H=0}^{255} A_H) / 100   (1)</p>
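        <p>As a minimal sketch of this computation (assuming an 8-bit grayscale ROI held in a NumPy array and that a pixel "remains" at threshold H when its intensity is at least H; the function name is ours, not from the software):</p>

```python
import numpy as np

def echo_index(roi):
    """Echogenicity index of an 8-bit grayscale ROI.

    For every threshold H = 0..255, the ROI is binarized and the area
    A_H (the count of pixels with intensity at or above H) is taken;
    the 256 areas are summed and the sum is divided by 100.
    """
    roi = np.asarray(roi, dtype=np.uint8)
    areas = [(roi >= h).sum() for h in range(256)]  # A_H for each level
    return sum(areas) / 100.0

# A brighter ROI survives more threshold levels, so its index is higher.
dark = np.full((10, 10), 30, dtype=np.uint8)
bright = np.full((10, 10), 200, dtype=np.uint8)
assert echo_index(bright) > echo_index(dark)
```

<p>This directly reflects the assumption stated below: under binary thresholding, lower echogenicity grades produce a lower index and higher grades a higher one.</p>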
        <p>Due to the principle of binary thresholding, the Echo-Index should be lower for a lower echogenicity grade and higher for a higher echogenicity grade. This assumption follows from the principle of binary thresholding. We have used the index in MR images to judge the overall reproducibility between two non-experienced observers.</p>
        <p>An example of the achieved results for a selected set of 14 images is given in Table 1.</p>
        <p>From the achieved results we can conclude that the echogenicity index could be well applicable in general as a feature in MRI analysis.</p>
        <p>Table 2 shows the average differences for each ROI in four image sets. This is closely related to judging the level of agreement, which is almost perfect. The data in Table 2 show that the differences between observers are minimal for the same position of the ROI, including a small shift. It seems that small ROI position changes that are not recognizable visually have no significant influence on the resulting echogenicity index. From the achieved results, the range of the index is from 28.17 up to 38.89, very similar for each image set. According to the range and the computed average differences, the difference between observers is smaller than 5 %. In the case of US imaging, the image can be adjusted dynamically during examination by the ultrasound probe settings; we can increase or decrease the brightness level according to the examined tissue density. The echogenicity grade displayed on the acquired digitized image can therefore differ visually for the same tissue density. Due to this fact, we need to analyze the image sets with the same probe (image) settings to avoid incorrect echogenicity evaluation. See Fig. 3, in which three TCS B-MODE images with different global brightness levels and the corresponding histogram profiles are shown.</p>
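        <p>The agreement figures of this kind can be reproduced from paired measurements: the average absolute difference between the two observers is expressed as a percentage of the observed index range. A short sketch (the sample values below are invented for illustration, not the study data):</p>

```python
import numpy as np

def observer_agreement(obs1, obs2):
    """Average absolute difference between two observers' index values,
    and that difference as a percentage of the overall index range."""
    obs1, obs2 = np.asarray(obs1, float), np.asarray(obs2, float)
    avg_diff = np.mean(np.abs(obs1 - obs2))
    index_range = max(obs1.max(), obs2.max()) - min(obs1.min(), obs2.min())
    return avg_diff, 100.0 * avg_diff / index_range

# Invented echogenicity-index values for one ROI over four images.
a = [30.1, 45.2, 52.0, 67.9]
b = [30.8, 44.9, 53.1, 67.0]
diff, pct = observer_agreement(a, b)  # pct well under the 5 % bound
```
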
        <p>
          In MRI, the settings of the MR machine are determined by the manufacturer. All image enhancements are applied in the post-processing phase on digitized MR images, such as histogram equalization using different algorithms (LHE, GHE) [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. Fig. 5 shows an example of three MR images that differ by visual assessment although their histograms are very similar. Thus, MR images can be considered more stable in terms of brightness settings.
        </p>
        <p>The brightness settings should not be affected by settings during examination, but there is another limitation in MR images, related to ROI selection in MR slices, see the following chapter.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>2.2 A Limitation of the Echogenicity Index Evaluation in MRI</title>
      <p>Although the echogenicity index seems well reproducible, there is one important limitation. Fig. 5 shows an example of using the same ROI size and shape to select a structure (its identity is not important from a medical point of view at this moment). Due to MRI weighting, the examined structure can be smaller, larger, deformed, or not visible at all. In ultrasound B-MODE images, such as the substantia nigra in TCS images, the position and size are determined; only the echogenicity grade differs according to the gain settings, angle, etc.</p>
      <p>In this case, a totally different echogenicity grade can be obtained for the same patient in different MR images. Thus, other ROI types will be defined in the future, better adapted to examining structures in different MR images. This limitation is also a barrier for automatic ROI selection, discussed in the following chapter.</p>
      <sec id="sec-5-1">
        <title>3 Possibility of Automatically Finding a Closed ROI of the Examined Area Using a Convolutional Neural Network</title>
        <p>
          In our previous study dedicated to atherosclerotic plaque analysis in B-MODE images, we also discussed the possibility of learning automatic plaque detection using an ANN [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] and the possibility of creating a decision-making expert system to evaluate echogenicity as a risk marker of the plaque [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. Ultrasound imaging is widely used in atherosclerosis recognition for early diagnosis [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. We have presented a draft of a back-propagation ANN model to find a closed region of the plaque. In this field, ANNs based on the deep learning approach are widely used. In general, an ANN could be used to place the ROI by learning some structure in the MR image, as in Fig. 1. However, the most important barrier is the fact that the examined structure may vary between weighted MR images due to the intensity level [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. See the example in Fig. 6 of how the weighted MR images differ for the same examined patient.
        </p>
        <p>Thus, it could be hard to apply automatic recognition of a ROI described by its shape or size when these change across weighted MR imaging.</p>
        <p>
          The principle could be based on iterative learning using a convolutional neural network (CNN), which uses filtering to extract features to recognize the region. CNNs are designed to work with grid-structured inputs, such as 2D images. There are many advanced techniques using CNNs in medical imaging, e.g. [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ].
        </p>
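        <p>The filtering step can be illustrated with a single 3 × 3 mask applied to a grayscale image (a NumPy sketch of the CNN-style convolution, which is technically cross-correlation; the vertical-edge kernel and the synthetic image are generic examples, not filters from the cited works):</p>

```python
import numpy as np

def conv2d(image, kernel):
    """Valid CNN-style 2D convolution (cross-correlation) of a grayscale
    image with a small kernel, producing one feature map."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel responds strongly at a dark/bright boundary,
# which is the kind of feature a CNN learns to extract automatically.
image = np.zeros((8, 8))
image[:, 4:] = 1.0                       # bright right half
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)
fmap = conv2d(image, kernel)             # peaks along the boundary columns
```
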
      </sec>
    </sec>
    <sec id="sec-6">
      <title>3.1 From a Boundary to Learn a Feature</title>
      <p>
        In 2020, we presented an idea for automatic segmentation based on boundary recognition of atherosclerotic plaques in B-MODE images [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. It could be realized by iterative boundary recognition based on an active contour algorithm boosted with a CNN, trained on corresponding input-output pairs to learn the rules for obtaining the plaque border; see Fig. 7, which shows the contours and the segmented plaque shapes after 25 iterations.
      </p>
      <p>In the case of MRI, the task is different. There are no exact borders for finding the ROI. Fig. 6 shows the weighted MR images; it could be hard to learn what to consider as a feature. Assume a structure in an MR image that is probably located equally, based on the radiologist's experience. It is difficult to learn any shape and size of the ROI due to the changing weighted MR images.</p>
      <p>
        In this field, there is interesting inspiration for developing automatic segmentation using a deep learning approach in T1- and T2-weighted images [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. The desired goal is to train the ANN to extract features of the examined structure in order to place the ROI in the correct position.
      </p>
      <p>For CNN training, the back-propagation algorithm is used, similarly to linear feed-forward ANN architectures. The input image is represented as a single vector w × h × d, where w and h represent the image resolution and d the color depth; in this case d = 1 (for RGB channels, d = 3). Each pixel is represented by an intensity value in the range 0 to 255. A CNN uses the ReLU (Rectified Linear Unit) activation function instead of the sigmoid or hyperbolic tangent used in traditional multi-layer back-propagation networks. In general, a CNN has the following layers and functions:
1. input layer (as a single vector w × h × d)
2. convolutional layer (3 × 3 or 5 × 5 convolutional masks are commonly used) to extract feature maps
3. activation function such as ReLU
4. pooling (sub-sampling) layer (to reduce the dimensionality of feature maps, e.g. using the MaxPooling algorithm)
5. fully-connected layer
6. softmax activation function
7. output layer</p>
      <sec id="sec-6-4">
        <p>The convolutional layer with ReLU and the pooling layer are designed for feature extraction, and the fully-connected layer with the softmax function is used for classification. In our case, we need to recognize a structure in the MR image defined by a radiologist, e.g. in Fig. 5 and/or Fig. 10. The process is illustrated in Fig. 9.</p>
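        <p>This layer sequence can be sketched in Keras (a hypothetical minimal model; the 64 × 64 single-channel input, the filter counts, and the two-class "structure present / absent" output are our illustrative assumptions, not settings from the paper):</p>

```python
# Minimal CNN following the layer list above: convolution + ReLU,
# max-pooling, fully-connected layer, softmax output.
# Requires TensorFlow/Keras.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(64, 64, 1)),              # 1. input: w x h x d, d = 1
    layers.Conv2D(16, (3, 3), activation="relu"), # 2.-3. 3x3 masks + ReLU
    layers.MaxPooling2D((2, 2)),                  # 4. pooling (MaxPooling)
    layers.Flatten(),
    layers.Dense(64, activation="relu"),          # 5. fully-connected layer
    layers.Dense(2, activation="softmax"),        # 6.-7. softmax output layer
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

<p>Training such a model on input-output pairs (image, ROI label) is what the deep learning paradigm described below amounts to in practice.</p>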
        <p>
          The deep learning paradigm is based on learning rules from inputs and desired outputs. This is the main difference from traditional programming, where we have inputs and rules and need to create outputs. Deep learning requires a large amount of data to be efficient. In comparison with traditional neural networks and learning, deep learning should achieve better accuracy as the amount of data increases [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ].
        </p>
        <p>At a critical point, depending on the complexity and structure of the data, conventional paradigms could be inefficient due to overfitting, so learning is slow or stops.</p>
        <p>In MR images, we can use a deep learning approach to learn the rules to recognize the ROI. In general, deep learning is focused on training with input-output pairs from large datasets, e.g. thousands of images. Thus, when we need to learn a specific structure in MR images, the training is based on an input-output training set to learn rules, i.e. features, for finding an appropriate structure to place a predefined ROI. The idea of deep learning using a CNN is illustrated in Fig. 9.</p>
        <p>Consider the following six MR images in Fig. 10 and the task of finding the highlighted anatomic structure (square-shaped ROI). It seems really hard to learn the features of the structure because the area in which the structure is located varies from small to large.</p>
        <p>
          For effective training of the network, a large set of images is needed to learn how to recognize the structure from an input-output training set. For example, we can learn the edge, the brightness difference, or the shape (e.g. roundness, height/width ratio) among other features. In this task, deep learning could be applied to help extract the features to recognize the structure. The background of the image convolution algorithm in CNNs can be found in [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] and also in [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ], which is a comprehensive guide to the deep learning paradigm. In 2021, a paper focused on multi-classification of brain tumors in MRI using a CNN, including a deep performance evaluation, was published [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ]. In the future, automatic finding of the ROI could be one of the main goals of our long-term research.
        </p>
        <p>There are many options for practical implementation. One of the best known is Keras, a high-level modular API for the Python programming language with GPU acceleration. More information and code samples (including use for CT scans) are available on the Keras.io website.</p>
        <sec id="sec-6-4-1">
          <title>4 Conclusions and Using the Results in Clinical Studies</title>
          <p>The goal of the paper is to show how to use the echogenicity index, originally computed for ultrasound B-MODE images, in MR images. For this purpose, we have analyzed sets of T1 and T2 MR images. The principle of the analysis is the same as for B-MODE images. The core of the algorithm is based on binary thresholding of grayscale images. Within this MR image analysis, the main idea is also applicable: a higher index value should correlate with a higher brightness intensity and vice versa.</p>
          <p>The achieved results show that the principle of the echogenicity index can be applied to B-MODE images and MR images independently. The echogenicity index seems well applicable to observing different brightness in MRI, as in the case of B-MODE images. The obtained differences are not significant, but the software is more sensitive than visual assessment in general.</p>
          <p>Finally, we can recommend using this methodology in future clinical studies focused on the analysis of MRI using different ROI shapes and sizes according to the examined structure in the MR image. In the future, we will use new ROI areas, such as a circle-shaped and/or free-hand closed area, defined by an experienced sonographer. This relates to the examined structures in MR images; see Fig. 8 for an example.</p>
          <p>In parallel, we are working on an analysis of the differences in the echogenicity index between a light area and a dark area within the same ROI.</p>
          <p>This work was supported by the European Union under the European Structural and Investment Funds Operational Programme Research, Development and Education, project "Zvýšení kvality vzdělávání na Slezské univerzitě v Opavě ve vazbě na potřeby Moravskoslezského kraje" CZ.02.2.69/0.0/0.0/18-058/0010238, project CZ.02.2.69/0.0/0.0/18-054/0014696 "Rozvoj VaV kapacit Slezské univerzity v Opavě", "Rozvoj metod teoretické a aplikované informatiky" SGS/11/2019, and images used from grant No. 16-28628A.</p>
        </sec>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Blahuta</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , Čermák, P.,
          <string-name>
            <surname>Soukup</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vecerek</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <article-title>A reproducible method to transcranial B-MODE ultrasound images analysis based on echogenicity evaluation in selectable ROI</article-title>
          (
          <year>2014</year>
          ).
          <source>International Journal of Biology and Biomedical Engineering</source>
          , Vol.
          <volume>8</volume>
          , pp.
          <fpage>98</fpage>
          -
          <lpage>106</lpage>
          . ISSN: 1998-4510.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Blahuta</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Soukup</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jelínková</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bártová</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Čermák</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Herzig</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Školoudík</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <article-title>A new program for highly reproducible automatic evaluation of the substantia nigra from transcranial sonographic images</article-title>
          .
          <source>Biomedical Papers</source>
          Vol.
          <volume>158</volume>
          <issue>Issue 4</issue>
          , pp.
          <fpage>621</fpage>
          -
          <lpage>627</lpage>
          (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Školoudík</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jelinkova</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Blahuta</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Čermák</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Soukup</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bártová</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Langová</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Herzig</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <article-title>Transcranial Sonography of the Substantia Nigra: Digital Image Analysis</article-title>
          .
          <source>American Journal of Neuroradiology Dec</source>
          <year>2014</year>
          ,
          <volume>35</volume>
          (
          <issue>12</issue>
          )
          <fpage>2273</fpage>
          -
          <lpage>2278</lpage>
          ; DOI: 10.3174/ajnr.A4049.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Azar</surname>
            ,
            <given-names>R.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Donaldson</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <article-title>Ultrasound Imaging (Radcases) (1st Edition) Kindle Edition</article-title>
          . Thieme,
          <year>2014</year>
          , ASIN:
          <fpage>B00SRLKPOU</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5] Levin, C.M. Magnetic Resonance Imaging (MRI) in Neurologic Disorders. Merck Manual, 2018. Available online: https://www.merckmanuals.com/professional/neurologicdisorders/neurologic-tests-and-procedures/magneticresonance-imaging-mri-in-neurologic-disorders
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Mohan</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Subashini</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <article-title>MRI based medical image analysis: Survey on brain tumor grade classification</article-title>
          .
          <source>Biomed. Signal Process. Control.</source>
          ,
          <year>2018</year>
          , Vol.
          <volume>39</volume>
          , pp.
          <fpage>139</fpage>
          -
          <lpage>161</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Blahuta</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Soukup</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pavlík</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <article-title>The Black-White Pixels Ratio in Medial Temporal Lobe Brain Structure in Transcranial B-Images as a Measurable Marker of Alzheimer's Disease Probability: The Reproducibility Overview</article-title>
          .
          <source>International Conference on Software, Telecommunications and Computer Networks (SoftCOM)</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          , DOI: 10.23919/SoftCOM50211.2020.9238214.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Blahuta</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Soukup</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pavlík</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <article-title>The classification of the progression of atherosclerotic plaques in B-MODE images between computer image analysis using echogenicity index and visual assessment. 20th International Multidisciplinary Scientific GeoConference</article-title>
          ,
          <string-name>
            <surname>Proceedings</surname>
            <given-names>SGEM</given-names>
          </string-name>
          ,
          <year>2020</year>
          pp.
          <fpage>341</fpage>
          -
          <lpage>348</lpage>
          . DOI: 10.5593/sgem2020/2.1/s07.044.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Senthilkumaran</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Thimmiaraja</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <article-title>Histogram Equalization for Image Enhancement Using MRI Brain Images," 2014 World Congress on Computing and Communication Technologies</article-title>
          , IEEE,
          <year>2014</year>
          , pp.
          <fpage>80</fpage>
          -
          <lpage>83</lpage>
          , E-ISBN: 978-1-4799-2877-4, DOI: 10.1109/WCCCT.2014.45.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Blahuta</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Soukup</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <article-title>Čermák, P. An Expert System Based on Using Artificial Neural Network and Region-Based Image Processing to Recognition Substantia Nigra and Atherosclerotic Plaques in B-Images: A Prospective Study</article-title>
          . 14th
          <source>International Work-Conference on Artificial Neural Networks, IWANN</source>
          <year>2017</year>
          , Cadiz, Spain, June 14-16,
          <year>2017</year>
          , Proceedings,
          <source>Part I. Lecture Notes in Computer Science 10305</source>
          , Springer 2017, pp.
          <fpage>236</fpage>
          -
          <lpage>245</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Blahuta</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Soukup</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Skacel</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <article-title>Pilot Design of a RuleBased System and an Artificial Neural Network to Risk Evaluation of Atherosclerotic Plaques in Long-Range Clinical Research</article-title>
          .
          <source>ICANN 2018, Lecture Notes in Computer Science book series (LNCS</source>
          , volume
          <volume>11140</volume>
          ), Springer,
          <year>2018</year>
          , pp.
          <fpage>90</fpage>
          -
          <lpage>100</lpage>
          , ISBN: 978-3-030-01420-9.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Steinl</surname>
            ,
            <given-names>D.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kaufmann</surname>
            ,
            <given-names>B.A.</given-names>
          </string-name>
          <article-title>Ultrasound Imaging for Risk Assessment in Atherosclerosis</article-title>
          .
          <source>Int J Mol Sci</source>
          .
          <year>2015</year>
          May;
          <volume>16</volume>
          (
          <issue>5</issue>
          ):
          <fpage>9749</fpage>
          -
          <lpage>9769</lpage>
          . DOI: 10.3390/ijms16059749.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Ito</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shirai</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hattori</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          <article-title>Putaminal hyperintensity on T1-weighted MR imaging in patients with the Parkinson variant of multiple system atrophy</article-title>
          .
          <source>AJNR. American Journal of Neuroradiology</source>
          ,
          <year>2009</year>
          ,
          <volume>30</volume>
          (
          <issue>4</issue>
          ), pp.
          <fpage>689</fpage>
          -
          <lpage>692</lpage>
          . DOI: 10.3174/ajnr.A1443.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Blahuta</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Soukup</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sosík</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <article-title>Approach to Automatic Segmentation of Atherosclerotic Plaque in B-MODE images Using Active Contour Algorithm Adapted by Convolutional Neural Network to Echogenicity Index Computation</article-title>
          .
          <source>ITAT Conference</source>
          <year>2020</year>
          , CEUR Workshop Proceedings,
          <year>2020</year>
          , pp.
          <fpage>223</fpage>
          -
          <lpage>229</lpage>
          , ISSN: 1613-0073.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Hoogi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Subramaniam</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Veerapaneni</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rubin</surname>
            ,
            <given-names>D.L.</given-names>
          </string-name>
          <article-title>Adaptive Estimation of Active Contour Parameters Using Convolutional Neural Networks and Texture Analysis</article-title>
          .
          <source>IEEE Transactions on Medical Imaging</source>
          , Vol.
          <volume>36</volume>
          , No. 3, March
          <year>2017</year>
          , pp.
          <fpage>781</fpage>
          -
          <lpage>791</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Breger</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cho</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ning</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Westin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>O'Donnell</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pasternak</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          <article-title>Deep Learning Based Segmentation of Brain Tissue from Diffusion MRI</article-title>
          .
          <source>NeuroImage</source>
          ,
          <year>2021</year>
          ,
          <volume>233</volume>
          , 117934. DOI: 10.1016/j.neuroimage.2021.117934.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>O'Shea</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nash</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <article-title>An Introduction to Convolutional Neural Networks</article-title>
          .
          <year>2015</year>
          , arXiv e-prints.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Aggarwal</surname>
            ,
            <given-names>C.C.</given-names>
          </string-name>
          <article-title>Convolutional Neural Networks</article-title>
          .
          <source>In: Neural Networks and Deep Learning</source>
          . Springer, Cham. Online ISBN: 978-3-319-94463-0. DOI: 10.1007/978-3-319-94463-0-3.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Irmak</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <article-title>Multi-Classification of Brain Tumor MRI Images Using Deep Convolutional Neural Network with Fully Optimized Framework</article-title>
          .
          <source>Iran J Sci Technol Trans Electr Eng</source>
          (
          <year>2021</year>
          ). DOI: 10.1007/s40998-021-00426-9.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>