<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Brightness Levels in MRI Should Correspond With Echogenicity Grade in Ultrasound B-MODE images: A Pilot Study of Reproducibility Using ROI-based Measurement Between Two Blind Observers</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Jiří</forename><surname>Blahuta</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">Silesian University in Opava</orgName>
								<address>
									<settlement>Opava</settlement>
									<country>Czech Republic</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Tomáš</forename><surname>Soukup</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">Silesian University in Opava</orgName>
								<address>
									<settlement>Opava</settlement>
									<country>Czech Republic</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Brightness Levels in MRI Should Correspond With Echogenicity Grade in Ultrasound B-MODE images: A Pilot Study of Reproducibility Using ROI-based Measurement Between Two Blind Observers</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">0E1851949879C5644FCDA456DF6BDDDD</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T12:29+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In 2011, we developed a software tool for analyzing the echogenicity level in ultrasound B-MODE images. The software is based on binary thresholding within a predefined Region of Interest (ROI).</p><p>The goal of this paper is to determine whether the echogenicity grade in B-MODE images corresponds to the brightness level in MR images when measured with the echogenicity index. The results, obtained by two observers with no experience in radiology, show that the software can also be used for MR images. The reproducibility of the measurement shows a high level of agreement.</p><p>We use three ROI areas; their exact position in the MR image is not important at this stage. In total, 52 images were analyzed.</p><p>The results show that the error between the measurements of the two non-experienced observers does not exceed 5 %, calculated from the range of the measurements and the average difference computed for each image set. The echogenicity index can therefore be considered a reproducible marker; a small shift of the ROI does not cause a significant change. The average range of the index runs from 28.17 up to 67.95; the minimal index value was &lt; 20 and the highest value was 101.2, owing to the different brightness levels in the examined ROI. The ranges for the same ROI are almost equal; the difference does not exceed 2 %.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Motivation and Input Data</head><p>Our software was developed for ultrasound B-MODE imaging <ref type="bibr" target="#b0">[1]</ref> in neurology to detect hyperechogenicity of the substantia nigra <ref type="bibr" target="#b1">[2]</ref>, <ref type="bibr" target="#b2">[3]</ref>, probably one of the most common markers of Parkinson's Disease detectable on transcranial (TCS) ultrasound B-MODE images. The core algorithm, based on binary thresholding, is not limited to loading ultrasound images; MR images can therefore also be analyzed with this software tool. Clinical studies based on the software have been published since 2014, and its core has been improved since then, in particular with new ROI areas for different diagnoses.</p><p>In modern neurology and neurosurgery, MRI is one of the most progressive medical imaging modalities for all perioperative phases <ref type="bibr" target="#b4">[5]</ref>. MRI and diagnostic ultrasound are commonly considered complementary diagnostic modalities, including for diagnosis confirmation.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.1">Input MR Images</head><p>In this study, we have three sets of T1 and T2 MR images (the two basic types of MR images) <ref type="bibr" target="#b5">[6]</ref> with different image resolutions, analyzed with the same approach as ultrasound images: the echogenicity index serves as a feature to distinguish different brightness levels. In comparison with the ultrasound B-MODE images used in our previous studies, there is no native scale from which to select a 50 × 50 mm window, so we use the full width of the image, see Fig. <ref type="figure">1</ref>.</p><p>Figure <ref type="figure">1</ref>: Input MR image with selected ROI</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.2">Methodology of the Analysis</head><p>We analyze three ROI areas with different sizes and shapes, see Fig. <ref type="figure" target="#fig_0">2</ref>. For each image, all three ROI areas are placed in the same position in the image; in other words, each image is analyzed three times using three different ROIs in the same position.</p><p>The size and shape of the ROIs were defined in the past for B-MODE images. Originally, ROI1 was used for ncl. raphe analysis and ROI2 was defined for the substantia nigra area, both in B-MODE images. The square-shaped ROI3 was used to analyze the medial temporal lobe (MTL) in a different case: measuring the black/white pixel ratio in a 20 × 20 mm ROI to judge the probability of MTL atrophy as a marker of dementia <ref type="bibr" target="#b6">[7]</ref>.</p></div>
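The black/white pixel ratio measurement described above for ROI3 can be sketched in a few lines of numpy. This is a minimal illustration, not the study's implementation; the threshold value, ROI coordinates, and test image are assumptions of this sketch.

```python
import numpy as np

def bw_ratio(image, roi, threshold=128):
    """Black/white pixel ratio inside a rectangular ROI after binary
    thresholding.  `roi` is (row, col, height, width); `threshold` is an
    illustrative cut-off, not a value prescribed by the study."""
    r, c, h, w = roi
    patch = image[r:r + h, c:c + w]
    white = np.count_nonzero(patch >= threshold)  # pixels kept by thresholding
    black = patch.size - white                    # pixels suppressed
    return black / white if white else float("inf")

# Synthetic 8-bit image: left half dark, right half bright.
img = np.zeros((40, 40), dtype=np.uint8)
img[:, 20:] = 200
print(bw_ratio(img, (10, 10, 20, 20)))  # ROI straddles the edge -> 1.0
```

A ratio near 1.0 indicates an even split of dark and bright pixels in the ROI; a higher ratio would correspond to a predominantly dark (hypoechogenic) region.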
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Echogenicity Index Evaluation in MR Images</head><p>In B-MODE images, the echogenicity index should correspond with the echogenicity grade of the tissue. We can use the same index for MR images, in which it should correspond with the brightness level of the examined part inside the ROI. The index is a single numerical value computed by our software. For more information about the methodology, see <ref type="bibr" target="#b7">[8]</ref>, a paper focused on atherosclerotic plaques in B-MODE images, in which we defined the index and its purpose. Simply put, the index is one number that describes the visual brightness level (the echogenicity grade in US imaging). Our software computes the area of the pixels remaining after binary thresholding in the ROI. Let there be 256 intensity levels H_i, where i = 0, 1, ..., 255; the area is computed for each level. All computed areas are then summed and the sum is divided by 100 to obtain the index given by</p><formula xml:id="formula_0">ECHOINDEX = (∑_{H=0}^{255} A_H) / 100<label>(1)</label></formula><p>Due to the principle of binary thresholding, the Echo-Index should be lower for a lower echogenicity grade and higher for a higher one; this assumption follows directly from how binary thresholding works. We have used the index in MR images to judge the general reproducibility between two non-experienced observers.</p><p>An example of the results achieved for a selected image set of 14 images is given in Table <ref type="table" target="#tab_0">1</ref>. From these results we can judge that the echogenicity index could be generally applicable as a feature in MRI analysis.</p><p>Table <ref type="table" target="#tab_1">2</ref> shows the average differences for each ROI in four image sets; these are closely related to the level of agreement, which is almost perfect. The data in Table <ref type="table" target="#tab_1">2</ref> show that the differences between observers are minimal when the ROI is kept in the same position, including a small shift: small changes in ROI position that are not recognizable visually appear to have no significant influence on the resulting echogenicity index. In these results, the range of the index is from 28.17 up to 38.89, very similar for each image set. According to the range and the computed average differences between the observers, the difference between observers is smaller than 5 %.</p></div>
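Eq. (1) can be sketched as follows. Note one assumption in this sketch: the paper does not state the unit of the area A_H, so we take it to be the percentage of ROI pixels that survive thresholding at level H; the test values are synthetic.

```python
import numpy as np

def echo_index(roi_pixels):
    """Echogenicity index per Eq. (1): threshold the ROI at every intensity
    level H = 0..255, take the area A_H of the surviving (>= H) pixels, sum
    all areas and divide by 100.  Expressing A_H as a percentage of the ROI
    is an assumption of this sketch."""
    roi = np.asarray(roi_pixels, dtype=np.uint8)
    # A_H as the percentage of ROI pixels remaining after thresholding at H
    areas = [(roi >= h).mean() * 100.0 for h in range(256)]
    return sum(areas) / 100.0

dark = np.full((50, 50), 30, dtype=np.uint8)    # uniformly dark ROI
bright = np.full((50, 50), 90, dtype=np.uint8)  # uniformly brighter ROI
print(echo_index(dark), echo_index(bright))  # 31.0 91.0
```

As the text argues, a brighter ROI survives more threshold levels, so its index is higher; under this reading the index of a uniform ROI is simply its intensity plus one.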
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Echogenicity Grade in US imaging vs MRI</head><p>In US imaging, the image can be adjusted dynamically during the examination via the ultrasound probe settings; the brightness level can be increased or decreased according to the examined tissue density. The echogenicity grade displayed on the acquired digitized image can therefore differ visually for the same tissue density. For this reason, we need to analyze image sets acquired with the same probe (image) settings to avoid incorrect echogenicity evaluation; see Fig. <ref type="figure" target="#fig_2">3</ref>, in which three TCS B-MODE images with different global brightness levels and the corresponding histogram profiles are shown. The brightness of MR images should not be affected by settings during the examination, but there is another limitation in MR images, related to ROI selection in MR slices; see the following chapter.</p></div>
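A simple way to check that two images share comparable brightness settings is to compare their histogram profiles, as the figures above do visually. The tolerance-based criterion below is our illustrative assumption, not the paper's method.

```python
import numpy as np

def hist_profile(image, bins=256):
    """Normalized grayscale histogram (fraction of pixels per intensity)."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    return hist / image.size

def same_settings(img_a, img_b, tol=0.05):
    """Heuristic check (an assumption of this sketch, not the paper's
    criterion): two images were acquired with comparable brightness settings
    if the total absolute difference of their histograms stays under `tol`."""
    return np.abs(hist_profile(img_a) - hist_profile(img_b)).sum() < tol

rng = np.random.default_rng(0)
a = rng.integers(0, 256, (64, 64), dtype=np.uint8)
b = np.clip(a.astype(int) + 60, 0, 255).astype(np.uint8)  # globally brighter copy
print(same_settings(a, a), same_settings(a, b))  # True False
```

The globally brightened copy shifts the whole histogram, so the check flags it; an echogenicity comparison across such a pair would be invalid under the argument above.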
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">A Limitation of the Echogenicity Index Evaluation in MRI</head><p>Although the echogenicity index seems well reproducible, there is one important limitation. Fig. <ref type="figure">5</ref> shows an example of using the same ROI size and shape to select a structure (its medical meaning is not important at this moment). Because of MRI weighting, the examined structure can be smaller, larger, deformed, or not visible at all. In ultrasound B-MODE images, such as the substantia nigra in TCS images, the position and size are determined; only the echogenicity grade differs, corresponding to the gain settings, insonation angle, etc. In MRI, a totally different echogenicity grade can be obtained for the same patient in a different MR image. Thus, other ROI types will be defined in the future, better adapted to examining structures in different MR images. This limitation is also a barrier to the automatic ROI selection discussed in the following chapter.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Possibility of Automatic Finding Closed ROI of Examined Area Using Convolutional Neural Network</head><p>In our previous study dedicated to the analysis of atherosclerotic plaques in B-MODE images, we discussed the possibility of automatically learning plaque detection using an ANN <ref type="bibr" target="#b9">[10]</ref> and of creating a decision-making expert system to evaluate echogenicity as a risk marker of the plaque <ref type="bibr" target="#b10">[11]</ref>. Ultrasound imaging is widely used in atherosclerosis recognition for early diagnosis <ref type="bibr" target="#b11">[12]</ref>. We presented a draft of a back-propagation ANN model to find a closed region of the plaque. In this field, ANNs based on the deep learning approach are widely used. In general, an ANN could be used to place the ROI by learning some structure in the MR image, as in Fig. <ref type="figure">1</ref>. However, the most important barrier is that the examined structure may vary between weighted MR images due to the intensity level <ref type="bibr" target="#b12">[13]</ref>. See the example in Fig. <ref type="figure" target="#fig_4">6</ref> of how weighted MR images differ for the same examined patient. Thus, it could be hard to automatically recognize a ROI described by shape or size when both change across weighted MR imaging.</p><p>The principle could be based on iterative learning using a convolutional neural network (CNN), which uses filtering to extract features that recognize the region. CNNs are designed to work with grid-structured inputs such as 2D images. Many advanced techniques use CNNs in medical imaging, as in <ref type="bibr" target="#b13">[14]</ref>.</p></div>
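Before any learning-based approach, automatic ROI placement can be illustrated with a much simpler non-learning baseline: normalized cross-correlation template matching. This is our own toy sketch, not the CNN approach the text proposes; the template and test image are synthetic assumptions.

```python
import numpy as np

def best_roi(image, template):
    """Slide `template` over `image` and return the top-left corner with the
    highest normalized cross-correlation.  A non-learning baseline for ROI
    placement; it fails exactly where the text predicts, i.e. when the
    structure changes shape or size across weighted MR images."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-9)
    best, best_pos = -np.inf, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            patch = image[i:i + th, j:j + tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-9)
            score = (p * t).mean()  # correlation of standardized patches
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos

# Synthetic image with one bright 5x5 square; the template is the same
# square with a 1-pixel dark border, so the search should land on it.
img = np.zeros((32, 32)); img[12:17, 20:25] = 1.0
print(best_roi(img, np.pad(np.ones((5, 5)), 1)))  # (11, 19)
```

Because the template is rigid, this baseline breaks as soon as the structure deforms between T1 and T2 weightings, which is the motivation for the CNN-based feature learning discussed next.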
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">From a Boundary to Learning a Feature</head><p>In 2020, we presented an idea for automatic segmentation based on boundary recognition of atherosclerotic plaques in B-MODE images <ref type="bibr" target="#b14">[15]</ref>. It could be realized by iterative boundary recognition based on an active contour algorithm boosted with a CNN trained on corresponding input-output pairs to learn the rules for obtaining the plaque border; see Fig. <ref type="figure" target="#fig_5">7</ref>, which shows the contours and the segmented plaque shapes after 25 iterations.</p><p>In the case of MRI, the task is different: there are no exact borders from which to find the ROI. Fig. <ref type="figure" target="#fig_4">6</ref> displays the weighted MR images; it could be hard to learn what to consider a feature. Suppose we have a structure in an MR image which is probably located in the same place, based on the radiologist's experience. In this field, there is interesting inspiration for developing automatic segmentation using a deep learning approach in T1- and T2-weighted images <ref type="bibr" target="#b15">[16]</ref>. The desired goal is to train the ANN to extract features of the examined structure in order to place the ROI in the correct position.</p><p>For CNN training, the back-propagation algorithm is used, similarly to a linear feed-forward ANN architecture. The input image is represented as a single vector w × h × d, where w and h represent the image resolution and d the color depth; in this case d = 1 (for RGB channels, d = 3). Each pixel is represented as an intensity value in the range 0 to 255. CNNs use the ReLU (Rectified Linear Unit) activation function instead of the sigmoid or hyperbolic tangent used in traditional multi-layer back-propagation networks. In general, a CNN has the following layers and functions:</p><p>1. 
input layer (as a single vector w × h × d) The convolutional layer with ReLU and the pooling layer are designed for feature extraction, and the fully-connected layer with the softmax function is used for classification. In our case, we need to recognize a structure in the MR image that is defined by a radiologist, e.g. in Fig. <ref type="figure">5</ref> and/or in Fig. <ref type="figure">10</ref>. The process is illustrated in Fig. <ref type="figure" target="#fig_8">9</ref>.</p><p>The deep learning paradigm is based on learning rules from inputs and desired outputs. This is the main difference from traditional programming, where we have inputs and rules and need to produce outputs. Deep learning requires a large amount of data to be efficient. In comparison with traditional neural networks and learning, deep learning should achieve better accuracy as the amount of data increases <ref type="bibr" target="#b17">[18]</ref>. Past a critical point, depending on the complexity and structure of the data, conventional paradigms can become inefficient due to overfitting, so the learning rate is low or learning stops.</p><p>In MR images, we can use the deep learning approach to learn the rules for recognizing the ROI. In general, deep learning trains on input-output pairs from large datasets, e.g. thousands of images. Thus, when we need to learn a specific structure in MR images, the training is based on an input-output training set that learns the rules, i.e. features, for finding an appropriate structure on which to place a predefined ROI. The idea of deep learning using a CNN is illustrated in Fig. <ref type="figure" target="#fig_8">9</ref>. See Fig. <ref type="figure">10</ref>, and consider the task of finding the highlighted anatomic structure (square-shaped ROI). 
It seems genuinely hard to learn the features of the structure, because the area in which the structure is located varies from small to large.</p><p>For effective training, a large set of images is needed so that the network learns to recognize the structure from the input-output training set. For example, we can learn the edge, the brightness difference, the shape (e.g. roundness, height/width ratio), and other features. In this task, deep learning could be applied to help extract the features that recognize the structure (Figure <ref type="figure">10</ref> shows six MR images with the highlighted square-shaped ROI). The background of the image convolution algorithm in CNNs can be found in <ref type="bibr" target="#b16">[17]</ref> and also in <ref type="bibr" target="#b17">[18]</ref>, a comprehensive guide to the deep learning paradigm. In 2021, a paper focused on multi-classification of brain tumors in MRI using CNNs, including a deep performance evaluation, was published <ref type="bibr" target="#b18">[19]</ref>. In the future, automatic finding of the ROI could be one of the main goals of our long-term research.</p><p>There are many options for practical implementation. One of the best known is Keras, a high-level modular API developed for the Python programming language with GPU acceleration. More information and code samples (including for CT scans) are available on the Keras.io website.</p></div>
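The convolution, ReLU activation, and max-pooling steps listed above can be sketched in plain numpy. This is a toy illustration of the CNN building blocks, not the authors' implementation; the edge-detecting kernel and the half-bright test image are assumptions of this sketch.

```python
import numpy as np

def conv2d(img, kernel):
    """'Valid' 2-D convolution (cross-correlation, as used in CNN layers)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified Linear Unit: zero out negative responses."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Sub-sample the feature map by taking the max over size x size blocks."""
    h, w = x.shape[0] // size * size, x.shape[1] // size * size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A 3x3 vertical-edge kernel applied to a half-bright test image: the
# feature map responds only where the structure's boundary runs.
img = np.zeros((8, 8)); img[:, 4:] = 1.0
edge = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
fmap = max_pool(relu(conv2d(img, edge)))
print(fmap.shape)  # (3, 3)
```

A real CNN would stack several such layers with learned (not hand-picked) kernels and end in a fully-connected softmax layer, as the numbered list above describes.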
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Conclusions and Using Results in Clinical Studies</head><p>The goal of this paper is to show how the echogenicity index, originally computed for ultrasound B-MODE images, can be used in MR images. For this purpose, we analyzed sets of T1 and T2 MR images. The principle of the analysis is the same as for B-MODE images: the core of the algorithm is binary thresholding of grayscale images. Within this MRI analysis, the main idea also holds; a higher index value should correlate with a higher brightness intensity and vice versa. The achieved results show that the principle of the echogenicity index can be applied to B-MODE images and MR images independently. The echogenicity index appears well applicable for observing different brightness in MRI, just as in B-MODE images. The obtained differences are not significant, and the software is in general more sensitive than visual assessment.</p><p>Finally, we can recommend this methodology for future clinical studies focused on MRI analysis using different ROI shapes and sizes according to the examined structure in the MR image. In the future, we will use new ROI areas, such as a circle-shaped and/or free-hand closed area defined by an experienced sonographer, related to the examined structures in MR images; see Fig. <ref type="figure" target="#fig_7">8</ref> for an example.</p><p>In parallel, we are working on an analysis of the echogenicity index differences between a light area and a dark area within the same ROI.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Three different ROIs placed in the same position</figDesc><graphic coords="2,56.69,129.61,231.02,227.94" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Three TCS B-MODE images with different brightness global levels and the histogram profile</figDesc><graphic coords="3,56.69,113.95,231.03,111.59" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :Figure 5 :</head><label>45</label><figDesc>Figure 4: Four MR images in which the histogram is very similar</figDesc><graphic coords="3,307.56,107.15,231.02,414.45" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: Weighted MR images example</figDesc><graphic coords="4,56.69,272.30,231.03,151.31" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 7 :</head><label>7</label><figDesc>Figure 7: The example of active contours for atherosclerotic plaques boundary in B-MODE images</figDesc><graphic coords="4,307.56,80.50,231.03,187.66" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>2 .</head><label>2</label><figDesc>convolutional layer (3 × 3 or 5 × 5 convolutional masks are commonly used) to extract a feature map; 3. activation function such as ReLU; 4. pooling (sub-sampling) layer (to reduce the dimensionality of feature maps using the MaxPooling algorithm)</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 8 :</head><label>8</label><figDesc>Figure 8: Estimated deep learning accuracy vs. conventional paradigms</figDesc><graphic coords="5,104.71,212.10,135.00,108.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head>Figure 9 :</head><label>9</label><figDesc>Figure 9: Deep learning idea with CNN</figDesc><graphic coords="5,56.69,524.87,231.02,82.03" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 :</head><label>1</label><figDesc>Achieved differences of the echogenicity index between 2 observers</figDesc><table><row><cell cols="4">ROI1 ROI1 ROI2 ROI2 ROI3 ROI3 diffROI1 diffROI2 diffROI3</cell></row><row><cell>86.99 85.92 71.45 72.22 57.55 58.36</cell><cell>1.07</cell><cell>-0.76</cell><cell>-0.81</cell></row><row><cell>38.69 33.97 26.12 24.10 22.08 21.14</cell><cell>4.72</cell><cell>2.02</cell><cell>0.94</cell></row><row><cell>55.71 62.77 50.43 53.18 41.61 45.33</cell><cell>-7.06</cell><cell>-2.75</cell><cell>-3.72</cell></row><row><cell>75.47 81.18 46.94 48.24 38.19 36.12</cell><cell>-5.71</cell><cell>-1.30</cell><cell>2.07</cell></row><row><cell>67.19 70.42 55.83 52.06 39.30 41.52</cell><cell>-3.23</cell><cell>3.77</cell><cell>-2.22</cell></row><row><cell>71.32 73.93 58.18 60.54 44.98 44.63</cell><cell>-2.61</cell><cell>-2.36</cell><cell>0.35</cell></row><row><cell>73.87 73.95 46.92 48.63 47.53 48.08</cell><cell>-0.07</cell><cell>-1.71</cell><cell>-0.55</cell></row><row><cell>83.47 84.27 58.22 58.45 52.00 51.06</cell><cell>-0.80</cell><cell>-0.23</cell><cell>0.94</cell></row><row><cell>74.30 74.85 47.44 47.33 53.23 54.15</cell><cell>-0.55</cell><cell>0.11</cell><cell>-0.92</cell></row><row><cell>79.27 82.01 52.18 53.58 51.89 52.36</cell><cell>-2.74</cell><cell>-1.40</cell><cell>-0.47</cell></row><row><cell>74.79 74.67 51.69 49.54 47.16 48.23</cell><cell>0.12</cell><cell>2.15</cell><cell>-1.07</cell></row><row><cell>73.57 70.13 45.88 42.32 40.60 40.22</cell><cell>3.44</cell><cell>3.56</cell><cell>0.38</cell></row><row><cell>69.43 62.74 45.01 45.98 42.20 41.78</cell><cell>6.69</cell><cell>-0.97</cell><cell>0.42</cell></row><row><cell>66.92 58.35 48.78 50.55 45.07 47.17</cell><cell>8.57</cell><cell>-1.77</cell><cell>-2.10</cell></row><row><cell></cell><cell>0.13</cell><cell>-0.12</cell><cell>-0.48</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2 :</head><label>2</label><figDesc>Average computed differences of the echogenicity index between 2 observers</figDesc><table><row><cell>image set / avg difference</cell><cell>ROI1</cell><cell>ROI2</cell><cell>ROI3</cell></row><row><cell>SET 1</cell><cell>1.00</cell><cell>-1.00</cell><cell>-0.40</cell></row><row><cell>SET 2</cell><cell>0.13</cell><cell>-0.12</cell><cell>-0.48</cell></row><row><cell>SET 3</cell><cell>0.89</cell><cell>-0.90</cell><cell>0.21</cell></row><row><cell>SET 4</cell><cell>-0.16</cell><cell>0.64</cell><cell>0.52</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head></head><label></label><figDesc>This work was supported by the European Union under the European Structural and Investment Funds Operational Programme Research, Development and Education, project "Zvýšení kvality vzdělávání na Slezské univerzitě v Opavě ve vazbě na potřeby Moravskoslezského kraje" CZ.02.2.69/0.0/0.0/18-058/0010238, project CZ.02.2.69/0.0/0.0/18-054/0014696 "Rozvoj VaV kapacit Slezské univerzity v Opavě", project "Rozvoj metod teoretické a aplikované informatiky" SGS/11/2019, and the image use from grant No. 16-28628A.</figDesc><table /></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">A reproducible method to transcranial B-MODE ultrasound images analysis based on echogenicity evaluation in selectable ROI</title>
		<author>
			<persName><forename type="first">J</forename><surname>Blahuta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Čermák</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Soukup</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Vecerek</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Biology and Biomedical Engineering</title>
		<idno type="ISSN">1998-4510</idno>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="98" to="106" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">A new program for highly reproducible automatic evaluation of the substantia nigra from transcranial sonographic images</title>
		<author>
			<persName><forename type="first">J</forename><surname>Blahuta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Soukup</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Jelínková</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Bártová</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Čermák</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Herzig</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Školoudík</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Biomedical Papers</title>
		<imprint>
			<biblScope unit="volume">158</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="621" to="627" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Transcranial Sonography of the Substantia Nigra: Digital Image Analysis</title>
		<author>
			<persName><forename type="first">D</forename><surname>Školoudík</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Jelinkova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Blahuta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Čermák</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Soukup</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Bártová</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Langová</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Herzig</surname></persName>
		</author>
		<idno type="DOI">10.3174/ajnr.A4049</idno>
	</analytic>
	<monogr>
		<title level="j">American Journal of Neuroradiology</title>
		<imprint>
			<biblScope unit="volume">35</biblScope>
			<biblScope unit="issue">12</biblScope>
			<biblScope unit="page" from="2273" to="2278" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">N</forename><surname>Azar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Donaldson</surname></persName>
		</author>
		<title level="m">Ultrasound Imaging (Radcases) (1st Edition) Kindle Edition</title>
				<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Magnetic Resonance Imaging (MRI) in Neurologic Disorders</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">M</forename><surname>Levin</surname></persName>
		</author>
		<ptr target="https://www.merckmanuals.com/professional/neurologic-disorders/neurologic-tests-and-procedures/magnetic-resonance-imaging-mri-in-neurologic-disorders" />
	</analytic>
	<monogr>
		<title level="m">Merck Manual</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">MRI based medical image analysis: Survey on brain tumor grade classification</title>
		<author>
			<persName><forename type="first">G</forename><surname>Mohan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Subashini</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Biomed. Signal Process. Control</title>
		<imprint>
			<biblScope unit="volume">39</biblScope>
			<biblScope unit="page" from="139" to="161" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">The Black-White Pixels Ratio in Medial Temporal Lobe Brain Structure in Transcranial B-Images as a Measurable Marker of Alzheimer&apos;s Disease Probability: The Reproducibility Overview</title>
		<author>
			<persName><forename type="first">J</forename><surname>Blahuta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Soukup</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Pavlík</surname></persName>
		</author>
		<idno type="DOI">10.23919/Soft-COM50211.2020.9238214</idno>
	</analytic>
	<monogr>
		<title level="m">International Conference on Software, Telecommunications and Computer Networks (SoftCOM)</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">The classification of the progression of atherosclerotic plaques in B-MODE images between computer image analysis using echogenicity index and visual assessment</title>
		<author>
			<persName><forename type="first">J</forename><surname>Blahuta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Soukup</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Pavlík</surname></persName>
		</author>
		<idno type="DOI">10.5593/sgem2020/2.1/s07.044</idno>
	</analytic>
	<monogr>
		<title level="m">20th International Multidisciplinary Scientific GeoConference</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="341" to="348" />
		</imprint>
	</monogr>
	<note>Proceedings SGEM</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Histogram Equalization for Image Enhancement Using MRI Brain Images</title>
		<author>
			<persName><forename type="first">N</forename><surname>Senthilkumaran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Thimmiaraja</surname></persName>
		</author>
		<idno type="DOI">10.1109/WCCCT.2014.45</idno>
	</analytic>
	<monogr>
		<title level="m">World Congress on Computing and Communication Technologies</title>
				<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="80" to="83" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">An Expert System Based on Using Artificial Neural Network and Region-Based Image Processing to Recognition Substantia Nigra and Atherosclerotic Plaques in B-Images: A Prospective Study</title>
		<author>
			<persName><forename type="first">J</forename><surname>Blahuta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Soukup</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Čermák</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">14th International Work-Conference on Artificial Neural Networks, IWANN 2017</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<meeting><address><addrLine>Cadiz, Spain</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2017">June 14-16, 2017</date>
			<biblScope unit="volume">10305</biblScope>
			<biblScope unit="page" from="236" to="245" />
		</imprint>
	</monogr>
	<note>Proceedings, Part I</note>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Pilot Design of a Rule-Based System and an Artificial Neural Network to Risk Evaluation of Atherosclerotic Plaques in Long-Range Clinical Research</title>
		<author>
			<persName><forename type="first">J</forename><surname>Blahuta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Soukup</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Skacel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ICANN 2018</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="volume">11140</biblScope>
			<biblScope unit="page" from="90" to="100" />
		</imprint>
	</monogr>
	<note>LNCS</note>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Ultrasound Imaging for Risk Assessment in Atherosclerosis</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">C</forename><surname>Steinl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">A</forename><surname>Kaufmann</surname></persName>
		</author>
		<idno type="DOI">10.3390/ijms16059749</idno>
	</analytic>
	<monogr>
		<title level="j">Int J Mol Sci</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="9749" to="9769" />
			<date type="published" when="2015-05">2015 May</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Putaminal hyperintensity on T1-weighted MR imaging in patients with the Parkinson variant of multiple system atrophy</title>
		<author>
			<persName><forename type="first">S</forename><surname>Ito</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Shirai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Hattori</surname></persName>
		</author>
		<idno type="DOI">10.3174/ajnr.A1443</idno>
	</analytic>
	<monogr>
		<title level="j">AJNR American Journal of Neuroradiology</title>
		<imprint>
			<biblScope unit="volume">30</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="689" to="692" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Approach to Automatic Segmentation of Atherosclerotic Plaque in B-MODE images Using Active Contour Algorithm Adapted by Convolutional Neural Network to Echogenicity Index Computation</title>
		<author>
			<persName><forename type="first">J</forename><surname>Blahuta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Soukup</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Sosík</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CEUR Workshop Proceedings</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="223" to="229" />
		</imprint>
	</monogr>
	<note>ITAT Conference 2020</note>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Adaptive Estimation of Active Contour Parameters Using Convolutional Neural Networks and Texture Analysis</title>
		<author>
			<persName><forename type="first">A</forename><surname>Hoogi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Subramaniam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Veerapaneni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">D</forename><surname>Rubin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Medical Imaging</title>
		<imprint>
			<biblScope unit="volume">36</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="781" to="791" />
			<date type="published" when="2017-03">March 2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Deep Learning Based Segmentation of Brain Tissue from Diffusion MRI</title>
		<author>
			<persName><forename type="first">F</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Breger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Cho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Ning</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Westin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>O&apos;Donnell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Pasternak</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.neuroimage.2021.117934</idno>
	</analytic>
	<monogr>
		<title level="j">NeuroImage</title>
		<imprint>
			<biblScope unit="volume">233</biblScope>
			<biblScope unit="page">117934</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">An Introduction to Convolutional Neural Networks</title>
		<author>
			<persName><forename type="first">K</forename><surname>O&apos;Shea</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Nash</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
	<note>ArXiv e-prints</note>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Convolutional Neural Networks</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">C</forename><surname>Aggarwal</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-319-94463-0_3</idno>
	</analytic>
	<monogr>
		<title level="m">Neural Networks and Deep Learning</title>
				<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Multi-Classification of Brain Tumor MRI Images Using Deep Convolutional Neural Network with Fully Optimized Framework</title>
		<author>
			<persName><forename type="first">E</forename><surname>Irmak</surname></persName>
		</author>
		<idno type="DOI">10.1007/s40998-021-00426-9</idno>
	</analytic>
	<monogr>
		<title level="j">Iran J Sci Technol Trans Electr Eng</title>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
