<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Simple Segmentation Technique for Computed Tomography Images</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Kalina Serwata</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institute of Mathematics, Silesian University of Technology, Kaszubska 23</institution>
          ,
          <addr-line>44-100 Gliwice</addr-line>
          ,
          <country country="PL">Poland</country>
        </aff>
      </contrib-group>
      <fpage>44</fpage>
      <lpage>49</lpage>
      <abstract>
        <p>Computed tomography is one of the most accurate studies of every fragment of the human body. Especially compared to the X-ray, tomography allows the detection of much smaller lesions, sometimes even early signs of disease that are invisible in other tests. For a doctor, the analysis of tomography results may be time-consuming due to the number of images; for a computer, it is quite the opposite. Automating this activity, i.e. the detection and classification of diseases in images obtained from a CT scanner, requires segmentation before classification. In this work, I suggest a simple combination of various filters that allows segmentation of these images. Numerous tests were carried out, which allow a discussion of the advantages and disadvantages of this solution.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>Nowadays, going to the doctor means joining a queue and
waiting for a short visit. The next step is to carry out the
prescribed tests, although this means queuing again. In addition, the
doctor ordering a test must make a decision not only about its
selection, but also about the associated costs. A further problem is
the aforementioned queue: the dates are often so distant that the
disease can progress considerably between its first signs and the
examination. Moreover, performing the test does not mean knowing the
diagnosis. Measurements are made, which in the next stage are
analyzed and given back to the patient, who again has to queue for
the doctor who will assess these measurements.</p>
      <p>Performing a test thus becomes a long process that can be very
disadvantageous to the patient. To make it easier, automation of
these tasks is being introduced. An important aspect is allowing
machines to segment and classify objects found in images obtained
from various medical tests. In this type of solution, different types
of algorithms and techniques are used. In particular, artificial
intelligence methods are a basic component of such a decision support
system. An interesting approach is to use heuristics to locate early
signs of various diseases on X-ray images [11]. These types of
methods can be very burdensome for computers, so there are various
possibilities for their parallelization [6], [7]. Another well-known
solution is artificial neural networks [2], [10], i.e. mathematical
models of neuronal activity in the human brain.</p>
      <p>Copyright held by the author(s).</p>
      <p>Not only artificial intelligence, but also mathematical fields
such as statistics can be used [5], [9], [14].</p>
      <p>In this paper, I propose a segmentation technique that can be
used to detect suspicious elements in images obtained from a computed
tomography scanner.</p>
    </sec>
    <sec id="sec-2">
      <title>II. SEGMENTATION TECHNIQUE</title>
      <p>Image segmentation is the process of dividing a picture into
parts, defined as areas that are homogeneous with respect to certain
selected properties: they are consistent, e.g. of the same color and
brightness or with a similar texture, without a clear boundary, and
the criterion is sometimes difficult to determine. The areas are sets
of pixels. Properties often chosen as criteria for the homogeneity of
areas are gray level, hue, and texture.</p>
      <p>The image obtained as a result of segmentation is simplified in
relation to the input image – it does not contain much of the
detailed information appearing in the original. A similar situation
also occurs when detecting edges in an image.</p>
      <p>There is no single segmentation method – only the goal is
defined, and there are many ways to reach it. The methods compete
with or complement each other; they can be universal or specialized,
two-dimensional or three-dimensional, automatic or semi-automatic
(not all automatic methods are accepted), often multi-stage or
hybrid, self-learning, global or local.</p>
      <p>It is possible to distinguish two main groups of segmentation
methods. Based on the similarities inside the areas – the result
is a set of pixels that do not differ from each other. Based on
the boundaries between areas – the result is a set of edges
across which the pixels are very different.</p>
      <p>Splitting is the simplest method of dividing an image into
areas, and its steps can be written down in a fairly consistent way.
The image is treated as a whole and should be a square whose side, in
pixels, is a power of two; if an image that does not meet this
criterion has been loaded, an error should be returned. Next, the
condition of uniformity is checked. An area that does not meet this
criterion is divided into four sub-images. In the next step, the four
areas are considered; if one of them does not meet the criterion of
uniformity, it is divided again, and the process repeats until all
areas are homogeneous.</p>
      <sec id="sec-2-1">
        <title>A. Sobel</title>
        <p>The Sobel edge detection algorithm [4], [12] uses a derivative
approximation to find edges. The method returns edges at those points
where the gradient of the considered image is maximal. The Sobel
operator performs a 2-D spatial gradient measurement on an image and
so emphasizes regions of high spatial frequency that correspond to
edges. Each pixel from the neighborhood brings its own contribution –
a weight – to the calculation.</p>
        <p>These weights are saved in the form of a mask. Typical mask
sizes are 3×3, 5×5, or 7×7. Mask sizes are usually odd because the
pixel in the center represents the pixel for which the filter
transformation operation is performed.</p>
        <p>The operator consists of a pair of 3×3 convolution kernels.
The kernel smooths the input image and so makes the operator less
sensitive to noise, while generally producing considerably higher
output values for similar edges. Edges are significant local changes
of intensity in an image and typically occur on the boundary between
two different regions. The Sobel algorithm uses two filtering masks:
a horizontal Sx and a vertical Sy. The Sx component determines the
gradient value in the direction of the rows, while the Sy component
in the direction of the columns. The value of the edge response and
its direction are determined in accordance with the equations</p>
        <p>Gmag = sqrt((Sx)² + (Sy)²), (1)</p>
        <p>Gdir = arctan(Sy / Sx). (2)</p>
        <p>Normally a 3×3 Sobel mask is understood as the gradient along
the x-axis and along the y-axis, given by the following equations:</p>
        <p>Gx = f(x + 1, y) − f(x, y), (3)</p>
        <p>Gy = f(x, y + 1) − f(x, y). (4)</p>
        <p>Then the total magnitude of the gradient can be found by the
formula</p>
        <p>G = |Gx| + |Gy|. (5)</p>
        <p>Finding the edge direction is a minor task when the gradients
in the x and y directions are known, but it can still produce errors
when Gx is equal to zero, so during implementation a restriction must
be applied for this case.</p>
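        <p>As a rough illustration of the Sobel gradient magnitude, the following Python/NumPy sketch correlates the two 3×3 masks with the image interior; the function name and the valid-interior handling are my own assumptions, not the paper's code:</p>

```python
import numpy as np

# Sobel masks: horizontal component Sx and vertical component Sy
SX = np.array([[-1.0, 0.0, 1.0],
               [-2.0, 0.0, 2.0],
               [-1.0, 0.0, 1.0]])
SY = SX.T

def sobel_magnitude(img):
    """Gradient magnitude sqrt(Sx² + Sy²) over the image interior."""
    img = img.astype(float)
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # accumulate the weighted contribution of each neighborhood pixel
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += SX[i, j] * patch
            gy += SY[i, j] * patch
    return np.sqrt(gx ** 2 + gy ** 2)
```

      <p>On a vertical step edge the response peaks at the boundary columns and vanishes in the flat regions, as expected from the masks.</p>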
        <p>It takes place by means of a two-dimensional discrete
convolution of the image matrix with a 3×3 matrix characteristic for
a given direction, called the kernel of the transformation. These
matrices are antisymmetric in relation to the direction of the
detected edge.</p>
        <p>The set of 8 matrices allows determining the direction from
0° to 315° with a 45° step. Vertical edges are detected for the 0°
direction, and horizontal edges for 90°. The convolution operation
determines in the first case the partial derivative estimate with
respect to the X axis, and in the second with respect to the Y axis.
The obtained partial derivative values define the gradient vector for
each point of the image. Another, simpler way to approach gradient
approximation is the so-called "compass method". In this method, the
mask giving the maximum value of the derivative determines the module
and gradient direction with a resolution of 45°.</p>
        <p>The subsequent masks are obtained by rotating the given masks
by 180°. It is worth noting that it is enough to calculate the
convolutions with the first four masks, because the others differ
only by sign: S(j+4) = −S(j).</p>
      </sec>
      <sec id="sec-2-2">
        <title>B. Otsu thresholding</title>
        <p>Threshold-based segmentation makes no use of spatial
coherence, nor of any other notion of object structure. It assumes
stationary statistics, although it can be modified to be locally
adaptive, and it implicitly assumes uniform illumination, so that the
bimodal brightness behavior arises from differences in object
appearance only.</p>
        <p>The Otsu method is an optimal thresholding technique, in
which the found threshold value is optimal in the sense of optimizing
a given criterion function. In this case, the function is the
intra-class variance or the inter-class variance. The selected
threshold divides the image into two classes: the object class and
the background class. A feature of the Otsu method is the statistical
description of both classes by two different probability functions,
i.e. the distribution of the object class and the distribution of the
background class. An image with separated, independent classes can be
described by the mean values and variances of the individual
classes.</p>
        <p>The global mean and variance of the image are equal to</p>
        <p>μT = Σ_{i=0}^{n−1} i·p_i,</p>
        <p>σT² = Σ_{i=0}^{n−1} (i − μT)²·p_i.</p>
        <p>The global variance can be equivalently written as the sum of
the intra-class variance σW² and the inter-class variance σB². The
global variance within one image is a constant, independent of the
accepted threshold. The threshold influences the values of the intra-
and inter-class variance, whose sum gives the global variance.
Minimizing the intra-class variance is therefore equivalent to
maximizing the inter-class variance.</p>
        <p>As the criterion function, the inter-class variance is usually
assumed, since it requires less computational effort:</p>
        <p>σB² = p_ob · p_b · (μ_ob − μ_b)².</p>
        <p>The maximum value of the inter-class variance corresponds to
the optimal separation of the two classes in the image, and the
threshold that produces this separation is the optimal one. The
weighted within-class variance is</p>
        <p>σw²(t) = q1(t)·σ1²(t) + q2(t)·σ2²(t),</p>
        <p>where the class probabilities are estimated as follows.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <p>The class probabilities are given by</p>
      <p>q1(t) = Σ_{i=1}^{t} P(i),</p>
      <p>q2(t) = Σ_{i=t+1}^{I} P(i),</p>
      <p>and the class means are given by</p>
      <p>μ1(t) = Σ_{i=1}^{t} i·P(i) / q1(t),</p>
      <p>μ2(t) = Σ_{i=t+1}^{I} i·P(i) / q2(t).</p>
      <p>Finally, the individual class variances are</p>
      <p>σ1²(t) = Σ_{i=1}^{t} [i − μ1(t)]² P(i) / q1(t),</p>
      <p>σ2²(t) = Σ_{i=t+1}^{I} [i − μ2(t)]² P(i) / q2(t).</p>
      <p>Otsu's thresholding method involves iterating through all the
possible threshold values and calculating a measure of spread for the
pixel levels on each side of the threshold, i.e. for the pixels that
fall either in the foreground or in the background. The aim is to
find the threshold value where the sum of the foreground and
background spreads is at its minimum.</p>
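      <p>The exhaustive search described above can be sketched as follows. This is an illustrative Python/NumPy version assuming 8-bit gray levels; the function name is my own:</p>

```python
import numpy as np

def otsu_threshold(img):
    """Pick the threshold minimizing the weighted within-class
    variance q1*var1 + q2*var2 over all candidate thresholds."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256, dtype=float)
    best_t, best_var = 0, np.inf
    for t in range(1, 256):
        q1, q2 = prob[:t].sum(), prob[t:].sum()
        if q1 == 0 or q2 == 0:
            continue  # one class is empty: skip this threshold
        mu1 = (levels[:t] * prob[:t]).sum() / q1
        mu2 = (levels[t:] * prob[t:]).sum() / q2
        var1 = (((levels[:t] - mu1) ** 2) * prob[:t]).sum() / q1
        var2 = (((levels[t:] - mu2) ** 2) * prob[t:]).sum() / q2
        within = q1 * var1 + q2 * var2
        if best_var > within:
            best_var, best_t = within, t
    return best_t
```

      <p>On a perfectly bimodal image the within-class variance drops to zero for any threshold between the two modes, so the search returns the first such value.</p>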
      <sec id="sec-3-1">
        <title>C. Modified binarization</title>
        <p>Binarization is the process of converting color or monochrome
images (in shades of gray) into a two-level (binary) image.
Performing binarization on an image significantly reduces the amount
of information in it. It is most often implemented by thresholding,
which consists in establishing a threshold value below which pixels
are classified as object pixels, while the remaining pixels are
classified as background pixels.</p>
        <p>Depending on the image, the object pixel value is either the
minimum or the maximum pixel value (for 8-bit images: 0 or 255,
respectively). Binarization is widely used as a preliminary stage in
document analysis, handwriting recognition, and the digitization of
maps. The effect of binarization affects the final result of the
complex analysis process. The reduction of the amount of information
carried out at the binarization stage reduces the complexity of the
recognition algorithms and the time complexity of the entire document
analysis process. The purpose of binarization is to remove
unnecessary information and leave the information relevant for
further processing.</p>
        <p>Binarization is the simplest method of image segmentation –
the division of the image into separate regions characterized by the
homogeneity of pixel values or of other features taken into
consideration. The basic problem with binarization is finding the
appropriate threshold. In most cases, an image histogram is created
to find the right threshold value, and then the binarization
threshold is set.</p>
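        <p>A minimal fixed-threshold binarization in Python/NumPy, following the convention above that pixels below the threshold are object pixels (mapped to 0) and the rest background (mapped to 255); the function name and threshold are illustrative assumptions:</p>

```python
import numpy as np

def binarize(img, threshold):
    """Fixed-threshold binarization of an 8-bit grayscale image.

    Object pixels (gray level strictly below `threshold`) become 0,
    background pixels become 255.
    """
    obj = threshold > img  # boolean mask of object pixels
    return np.where(obj, 0, 255).astype(np.uint8)
```

        <p>The threshold itself could come from a histogram-based rule such as the Otsu search described earlier.</p>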
        <p>Suppose we have an image after applying these filters and the
binarization process. Binarization allows removing more than a dozen
pixels which are probably unnecessary in the analysis. This type of
simplification can be extended by removing other elements whose
expected properties are not very significant. Suppose that for each
white pixel we analyze the area given by a grid of size k×k, where
the selected pixel is in its center. Let φ(Ω) be a function that
counts the number of white pixels in a given set of points Ω. Then
the decision to remove or leave the pixel is made by the following
condition:</p>
        <p>leave if φ(Ω) &gt; δ, remove otherwise, (17)</p>
        <p>where δ is the limit value – usually half the area of the
grid, so δ = ⌊k²/2⌋.</p>
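        <p>A sketch of this pruning step in Python/NumPy, under the assumption that a white pixel is kept only when its k×k neighborhood contains more than ⌊k²/2⌋ white pixels (the reading of the condition and the names are mine):</p>

```python
import numpy as np

def prune_isolated(binary, k=3):
    """Remove white pixels whose k-by-k neighborhood holds at most
    floor(k*k / 2) white pixels; keep the denser ones."""
    delta = (k * k) // 2   # the limit value
    pad = k // 2
    padded = np.pad(binary.astype(int), pad)  # zero padding at borders
    out = binary.copy()
    h, w = binary.shape
    for r in range(h):
        for c in range(w):
            if binary[r, c]:
                window = padded[r:r + k, c:c + k]
                if delta >= window.sum():
                    out[r, c] = False  # too few white neighbors: remove
    return out
```

        <p>With k = 3 an isolated white pixel (neighborhood count 1) is removed, while pixels inside a solid white block survive.</p>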
      </sec>
    </sec>
    <sec id="sec-4">
      <title>III. EXPERIMENTS</title>
      <p>In order to check the operation of the proposed method, an
available database of medical images was used [3]. In the case of
segmentation, it is difficult to analyze the obtained data
numerically; the best test is to assess the found areas, their
quality, and their usefulness in further classification. Exemplary
images subjected to segmentation are shown in Fig. 2 and 3. All areas
were evaluated intuitively, and the results can be estimated at 75%
efficiency. To obtain a more precise value, a classifier should be
designed that would evaluate these areas.</p>
    </sec>
    <sec id="sec-5">
      <title>IV. CONCLUSIONS</title>
      <p>The described segmentation technique allows the detection of
suspicious areas in computed tomography images. It is the first step
in modeling a decision support system for disease detection; the next
step is to use these areas to classify and identify what was found.
The proposed solution enables fast detection using classical
operations, so it is a kind of hierarchical process that leaves many
areas – by many, it is understood as many as possible, so as not to
accidentally remove an early sign of the disease. Unfortunately, this
is a naive solution, because during the segmentation a certain
classification of the removed pixels could be performed, which might
indicate something worrying. However, this is an approach that has
its advantages and disadvantages.</p>
      <p>The results here are in whole or in part based upon data
generated by the TCGA Research Network:
http://cancergenome.nih.gov/</p>
      <p>In future research, I will consider creating segmentation and
classification methods that should be much more effective, efficient,
and accurate for decision support systems.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>F.</given-names>
            <surname>Bonanno</surname>
          </string-name>
          , G. Capizzi,
          <string-name>
            <given-names>S.</given-names>
            <surname>Coco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Napoli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Laudani</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G. L.</given-names>
            <surname>Sciuto</surname>
          </string-name>
          .
          <article-title>Optimal thicknesses determination in a multilayer structure to improve the spp efficiency for photovoltaic devices by an hybrid fem-cascade neural network based approach</article-title>
          . In Power Electronics, Electrical Drives,
          <source>Automation and Motion (SPEEDAM)</source>
          ,
          <source>2014 International Symposium on</source>
          , pages
          <fpage>355</fpage>
          -
          <lpage>362</lpage>
          . IEEE,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>G.</given-names>
            <surname>Capizzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. L.</given-names>
            <surname>Sciuto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Woźniak</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Damaševičius</surname>
          </string-name>
          .
          <article-title>A clustering based system for automated oil spill detection by satellite remote sensing</article-title>
          .
          <source>In International Conference on Artificial Intelligence and Soft Computing</source>
          , pages
          <fpage>613</fpage>
          -
          <lpage>623</lpage>
          . Springer,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>K.</given-names>
            <surname>Clark</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Vendt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Freymann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kirby</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Koppel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Moore</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Phillips</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Maffitt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pringle</surname>
          </string-name>
          , et al.
          <article-title>The cancer imaging archive (tcia): maintaining and operating a public information repository</article-title>
          .
          <source>Journal of digital imaging</source>
          ,
          <volume>26</volume>
          (
          <issue>6</issue>
          ):
          <fpage>1045</fpage>
          -
          <lpage>1057</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>N.</given-names>
            <surname>Kanopoulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Vasanthavada</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R. L.</given-names>
            <surname>Baker</surname>
          </string-name>
          .
          <article-title>Design of an image edge detection filter using the sobel operator</article-title>
          .
          <source>IEEE Journal of solid-state circuits</source>
          ,
          <volume>23</volume>
          (
          <issue>2</issue>
          ):
          <fpage>358</fpage>
          -
          <lpage>367</lpage>
          ,
          <year>1988</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>K.</given-names>
            <surname>Keerthana</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Rajeshwari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Keerthi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H. P.</given-names>
            <surname>Menon</surname>
          </string-name>
          .
          <article-title>Classification of tooth type from dental x-ray image using projection profile analysis</article-title>
          .
          <source>In Signal Processing and Communication (ICSPC)</source>
          , 2017 International Conference on, pages
          <fpage>394</fpage>
          -
          <lpage>398</lpage>
          . IEEE,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Marszałek</surname>
          </string-name>
          .
          <article-title>Parallelization of modified merge sort algorithm</article-title>
          .
          <source>Symmetry</source>
          ,
          <volume>9</volume>
          (
          <issue>9</issue>
          ):
          <fpage>176</fpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>D.</given-names>
            <surname>Połap</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kęsik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Woźniak</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Damaševičius</surname>
          </string-name>
          .
          <article-title>Parallel technique for the metaheuristic algorithms using devoted local search and manipulating the solutions space</article-title>
          .
          <source>Applied Sciences</source>
          ,
          <volume>8</volume>
          (
          <issue>2</issue>
          ):
          <fpage>293</fpage>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M. H. J.</given-names>
            <surname>Vala</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Baxi</surname>
          </string-name>
          .
          <article-title>A review on otsu image segmentation algorithm</article-title>
          .
          <source>International Journal of Advanced Research in Computer Engineering &amp; Technology (IJARCET)</source>
          ,
          <volume>2</volume>
          (
          <issue>2</issue>
          ):
          <fpage>387</fpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>T.</given-names>
            <surname>Wan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Shang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Li</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Z.</given-names>
            <surname>Qin</surname>
          </string-name>
          .
          <article-title>Automated coronary artery tree segmentation in x-ray angiography using improved hessian based enhancement and statistical region merging</article-title>
          .
          <source>Computer methods and programs in biomedicine</source>
          ,
          <volume>157</volume>
          :
          <fpage>179</fpage>
          -
          <lpage>190</lpage>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M.</given-names>
            <surname>Wozniak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Napoli</surname>
          </string-name>
          , E. Tramontana, G. Capizzi,
          <string-name>
            <given-names>G. L.</given-names>
            <surname>Sciuto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. K.</given-names>
            <surname>Nowicki</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J. T.</given-names>
            <surname>Starczewski</surname>
          </string-name>
          .
          <article-title>A multiscale image compressor with rbfnn and discrete wavelet decomposition</article-title>
          . pages
          <fpage>1</fpage>
          -
          <lpage>7</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Woźniak</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Połap</surname>
          </string-name>
          .
          <article-title>Bio-inspired methods modeled for respiratory disease detection from medical images</article-title>
          .
          <source>Swarm and Evolutionary Computation</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Woźniak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Połap</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gabryel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. K.</given-names>
            <surname>Nowicki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Napoli</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E.</given-names>
            <surname>Tramontana</surname>
          </string-name>
          .
          <article-title>Can we process 2d images using artificial bee colony?</article-title>
          <source>In International Conference on Artificial Intelligence and Soft Computing</source>
          , pages
          <fpage>660</fpage>
          -
          <lpage>671</lpage>
          . Springer,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M.</given-names>
            <surname>Woźniak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Połap</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. K.</given-names>
            <surname>Nowicki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Napoli</surname>
          </string-name>
          , G. Pappalardo, and
          <string-name>
            <given-names>E.</given-names>
            <surname>Tramontana</surname>
          </string-name>
          .
          <article-title>Novel approach toward medical signals classifier</article-title>
          .
          <source>In International Joint Conference on Neural Networks (IJCNN)</source>
          , pages
          <fpage>1</fpage>
          -
          <lpage>7</lpage>
          . IEEE,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>O.</given-names>
            <surname>Ziv</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. N.</given-names>
            <surname>Goldberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Nissenbaum</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sosna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Weiss</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Azhari</surname>
          </string-name>
          .
          <article-title>Optical flow and image segmentation analysis for noninvasive precise mapping of microwave thermal ablation in x-ray ct scans-ex vivo study</article-title>
          .
          <source>International Journal of Hyperthermia</source>
          , pages
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>