<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Deep Learning based approach for Photographs and Painting Classification using CNN Model</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Hitesh</forename><surname>Kumar</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">School of Computer Science</orgName>
								<orgName type="institution">University of Petroleum &amp; Energy Studies (UPES)</orgName>
								<address>
									<addrLine>Energy Acres</addrLine>
									<postCode>248007</postCode>
									<settlement>Bidholi</settlement>
									<region>Dehradun, Uttarakhand</region>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Tanupriya</forename><surname>Choudhury</surname></persName>
							<email>tanupriya1986@gmail.com</email>
							<affiliation key="aff1">
								<orgName type="department">School of Computer Science</orgName>
								<orgName type="institution">University of Petroleum &amp; Energy Studies (UPES)</orgName>
								<address>
									<addrLine>Energy Acres</addrLine>
									<postCode>248007</postCode>
									<settlement>Bidholi</settlement>
									<region>Dehradun, Uttarakhand</region>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Sachi</forename><forename type="middle">Nandan</forename><surname>Mohanty</surname></persName>
							<email>sachinandan09@gmail.com</email>
							<affiliation key="aff2">
								<orgName type="department" key="dep1">Dept. of Computer Science</orgName>
								<orgName type="department" key="dep2">School of Computer Science &amp; Engineering</orgName>
								<orgName type="institution">Singidunum University</orgName>
								<address>
									<country key="RS">Serbia</country>
								</address>
							</affiliation>
							<affiliation key="aff3">
								<orgName type="institution">VIT-AP University</orgName>
								<address>
									<settlement>Amaravati</settlement>
									<region>Andhra Pradesh</region>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Shrabanee</forename><surname>Swagatika</surname></persName>
							<email>shrabaneeswagatika@soa.ac.in</email>
							<affiliation key="aff4">
								<orgName type="department">Dept. of Computer Science and Engineering</orgName>
								<orgName type="institution">Siksha &apos;O&apos; Anusandhan Deemed to be University</orgName>
								<address>
									<postCode>751030</postCode>
									<settlement>Bhubaneswar</settlement>
									<region>Odisha</region>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Satabdi</forename><surname>Swain</surname></persName>
							<email>satabdiswain1@gmail.com</email>
							<affiliation key="aff5">
								<orgName type="department">Dept. of Information Technology (Mtech IT)</orgName>
								<orgName type="institution">Utkal University</orgName>
								<address>
									<postCode>751004</postCode>
									<settlement>Bhubaneswar</settlement>
									<region>Odisha</region>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Deep Learning based approach for Photographs and Painting Classification using CNN Model</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">3AA473C43949CD1C7213131D000BEF8E</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T00:47+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>CNN</term>
					<term>Deep Learning</term>
					<term>Image Processing</term>
					<term>Classification</term>
					<term>Machine Learning</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Machine Learning (ML) is one of the most powerful technologies for dealing with a wide range of real-world difficulties in the fast-paced world of the twenty-first century. Both regular and differently-abled persons benefit from machine learning. The Convolutional Neural Network (CNN) has been proposed for a variety of applications, such as multimedia processing. In this research paper, we describe the approach and create a binary classification model using a CNN for identifying paintings and photographs. Each painting and photograph is passed through layers such as convolutional, dense, and flatten layers. The model is used for binary classification. Warping has been applied at random across a large dataset for CNN training. We explore how the architecture of the CNN affects identification accuracy. The proposed model aims to increase the efficiency and accuracy of classification.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>With the advancement of machine learning algorithms <ref type="bibr" target="#b1">[2]</ref>, there has been rapid progress on computer vision challenges, and staying up to date with deep learning, which is driving the quick evolution of AI, has become a necessity. Just as a child learns to recognize objects, we need to show an algorithm many thousands of images before it can generalize the information and make predictions for new pictures.</p><p>A CNN is a type of neural network built on convolution operations: it takes an image and assigns learnable weights to it depending on the image's many objects. CNNs <ref type="bibr" target="#b4">[5]</ref>[9] are mainly used for image classification <ref type="bibr" target="#b10">[12]</ref>, such as identifying photographs versus paintings, but they also serve other functions, such as image segmentation and signal processing.</p><p>A CNN can alternatively be constructed as a U-Net design, which consists of two nearly mirrored CNNs; in the U-Net architecture the output image size matches the input image size, and the design is used for image enhancement and image segmentation. In astrophysics, CNNs are used to evaluate radio telescope data and forecast the most plausible visual representation of the data. One of the hottest IT topics is computer vision, which studies machines' ability to interpret images and videos <ref type="bibr" target="#b3">[4]</ref>. Computer vision is utilized in self-driving automobiles, robotics, and facial identification; for these tasks the highly specialized method known as the CNN has made immense headway in computer vision. CNNs are built using deep learning AI technology <ref type="bibr" target="#b11">[13]</ref>.</p></div>
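To make the convolution operation described above concrete, the following minimal NumPy sketch (not from the paper) shows the 2-D "valid" convolution that a CNN layer applies, where each output value is a learnable weighted sum over an image patch:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide a kernel over a 2-D image and return the 'valid' convolution map.

    Each output pixel is the weighted sum of the patch under the kernel --
    this is the learnable weighting a CNN assigns to regions of the image.
    """
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel responds where pixel intensity changes left to right.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1]], dtype=float)
edge_kernel = np.array([[1.0, -1.0],
                        [1.0, -1.0]])
feature_map = conv2d_valid(img, edge_kernel)  # strong response at the edge column
```

In a real CNN the kernel values are not hand-chosen as here; they are the weights learned during training, with many kernels per layer.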
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Literature Review</head><p>A neural network architecture (NNA) was researched as a technique for image classification <ref type="bibr" target="#b0">[1]</ref>. The system comprises two sets of human-eye mimics together with color-classification autoencoding. It involved many complicated photographs, yet as the study progressed the algorithm gradually improved on the MNIST examples. The open-source MNIST database was used for the training set. The study also experimented with the Street View House Numbers dataset, which gave a more significant result because even human eyes cannot reliably differentiate its samples. The ImageNet challenge is used to assess the effectiveness of CNN models for image classification. Since AlexNet was introduced in 2012, incremental enhancements to the design have steadily increased performance. GoogLeNet was introduced in 2015 (Szegedy et al. <ref type="bibr" target="#b12">[14]</ref>, 2015) as an improvement over AlexNet, largely because of a reduction in the number of parameters involved. In 2014, (Simonyan and Zisserman, 2014 <ref type="bibr" target="#b5">[6]</ref>) presented VGGNet, which performed well because of the network's depth. An image classification technique based on the structure of a CNN was also discussed. Training was carried out by removing the extra face images from the face-image data so that a fixed number of face and non-face images was used to prepare the research data. The image classification framework uses a bi-scale CNN with 120 training samples and auto-stage training on the Face Detection Data Set and Benchmark (FDDB) to achieve an 81.6 percent detection rate with only six false positives, whereas the current state of the art achieves around an 80 percent detection rate with 50 false positives.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Proposed CNN model for Paintings and Photograph Identification</head><p>We have investigated, and been motivated by, bottom-up strategies for increasing the classification accuracy of CNN models on the image classification problem. We used the Keras and TensorFlow <ref type="bibr" target="#b9">[11]</ref> deep learning libraries <ref type="bibr" target="#b2">[3]</ref> to implement the model in Python. The model is open to the public to use and improve.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Convolutional Neural Network (CNN)</head><p>CNNs are a type of deep neural network frequently used to analyze visual data. Medical image analysis, image and video recognition, image classification, recommender systems <ref type="bibr" target="#b7">[8]</ref>, natural language processing, and financial time series are among the areas where CNNs can be applied. A facial recognition framework based on a CNN is a deep learning algorithm <ref type="bibr" target="#b6">[7]</ref> that can take an input image, assign learnable weights and biases to different parts of the image, and distinguish different faces. Deep learning is the innovation underlying this research. The deep learning concept is classed as artificial intelligence because it can imitate human thought in an intelligent way. Ordinarily, the framework is pre-loaded with hundreds, if not thousands, of data inputs to make the 'training session' more productive and faster; it begins by performing a form of 'training' with each of the data inputs.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Photographs and Painting Classification Dataset Specifications</head><p>We used Siddesh Sambasivam's Photographs and Painting Classification Dataset on Kaggle for our project <ref type="bibr">[10]</ref> (https://www.kaggle.com/iiplutocrat45ii/painting-vs-photograph-classification-dataset). There are 7041 training images in all, including paintings and photographs, and a further 3010 images of paintings and photographs for validation. Because of limited computing capacity, we checked the accuracy after 10 epochs. Random data samples were taken, and the model was trained on the 7041-image training set before being tested on the 3010 validation images. Training accuracy was observed to be 94.1 percent after 10 epochs, while validation accuracy was 90.73 percent. Table 1 shows the label distribution in 0's and 1's, and the basic CNN model configuration is given in Table <ref type="table" target="#tab_0">2</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 1: Labels distribution in 0's and 1's</head><p>Paintings are labelled 00 and photographs are labelled 01.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">The Architecture of the Proposed CNN Model</head><p>The layered architecture is shown in Figure 1. It consists of three layer types: Dense, Flatten, and Conv2D. The model used is Inception_ResNet_V2.</p></div>
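The paper's source code is not reproduced here, but the layer stack of Table 2 can be sketched in Keras along these lines. The filter counts, dropout rates, and optimizer come from Table 2 (including the unusual 356-filter count, taken at face value); the 128x128 input size is an assumption made to keep the sketch light, since the paper uses a 512 px target size:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Sketch of the Table 2 configuration; 128x128 input is an illustrative
# assumption (the paper resizes images to 512 px).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 3)),
    layers.Conv2D(128, (3, 3), activation="relu"),           # Layer 3
    layers.MaxPooling2D((2, 2)),                             # Layer 4
    layers.Dropout(0.20),                                    # Layer 5
    layers.Conv2D(64, (5, 5), activation="relu"),            # Layer 6
    layers.MaxPooling2D((2, 2)),                             # Layer 7
    layers.Dropout(0.20),                                    # Layer 8
    layers.Conv2D(356, (5, 5), activation="relu"),           # Layer 9 (356 as given in Table 2)
    layers.MaxPooling2D((2, 2)),                             # Layer 10
    layers.Dropout(0.20),                                    # Layer 11
    layers.Flatten(),                                        # Layer 12
    layers.Dense(128),                                       # Layer 13
    layers.BatchNormalization(), layers.Activation("relu"),  # Layer 14
    layers.Dropout(0.25),                                    # Layer 15
    layers.Dense(512),                                       # Layer 16
    layers.BatchNormalization(), layers.Activation("relu"),  # Layer 17
    layers.Dropout(0.20),                                    # Layer 18
    layers.Dense(2, activation="softmax"),                   # Layer 19: painting vs photograph
])
# Layers 1-2 of Table 2: Adam optimizer and categorical cross-entropy loss.
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

The two-unit softmax output matches the binary painting/photograph labelling of Table 1.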
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 1: CNN layers for Photographs and Paintings Identification</head><p>In the Conv2D layers, the activation function used is ReLU (Rectified Linear Unit), a piecewise linear function that outputs positive inputs directly and returns 0 for any negative input. It is a simple mathematical function that takes less time to train and achieves higher accuracy. This activation function also helps overcome the vanishing gradient problem common with other activation functions such as Sigmoid or TanH. SoftMax, also known as soft argmax or the normalized exponential function, is another activation function, used for the output. It is mainly used to convert the network's output into a probability distribution over the predicted output classes. Figure <ref type="figure" target="#fig_0">2</ref> shows the layer configuration of the model.</p></div>
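The two activation functions just described can be written out directly; a small NumPy sketch (not the paper's code) of ReLU and SoftMax:

```python
import numpy as np

def relu(x):
    """Piecewise linear: pass positive inputs through, return 0 for negatives."""
    return np.maximum(0.0, x)

def softmax(x):
    """Normalized exponential: map raw scores to a probability distribution."""
    e = np.exp(x - np.max(x))  # subtract the max for numerical stability
    return e / e.sum()

scores = np.array([2.0, -1.0, 0.5])  # illustrative raw network outputs
probs = softmax(scores)              # probabilities summing to 1
```

The stability trick of subtracting the maximum score before exponentiating does not change the result, since softmax is invariant to a constant shift of its inputs.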
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Proposed CNN Model Implementation</head><p>This section describes the implementation of the proposed model. The dataset of photographs and paintings was obtained from Kaggle. The image target size was set to 512 px, and the images were rescaled using the ImageDataGenerator. Due to a lack of resources, we used 7041 example photos to train our model. Figure <ref type="figure" target="#fig_1">3</ref> shows some images that were chosen at random.</p></div>
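The rescaling step described here can be sketched with Keras's ImageDataGenerator. The demonstration below uses an in-memory batch so it is self-contained; in the actual pipeline one would instead point flow_from_directory at the downloaded Kaggle folders with target_size=(512, 512) (the directory layout is not specified in the paper):

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Rescale raw 0-255 pixel values into [0, 1], as in the preprocessing step.
datagen = ImageDataGenerator(rescale=1.0 / 255)

# Self-contained demo on an in-memory batch of four white 512x512 RGB images.
batch = np.full((4, 512, 512, 3), 255, dtype="uint8")
scaled = next(datagen.flow(batch, batch_size=4, shuffle=False))
```

ImageDataGenerator also supports on-the-fly augmentations (shifts, flips, zooms), which is one way the random warping mentioned earlier could be applied during training.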
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Experimental Outcomes</head><p>The proposed model produced the outcomes displayed in the figures below with high accuracy. The performance of our model is assessed using the following statistical measures.</p><p>Precision tells us the percentage of positive identifications that were actually correct. Recall tells us the percentage of actual positives that were correctly identified. Accuracy is a metric used to assess classification models; it tells us the percentage of correct predictions made by our model. The confusion matrix and classification report are shown in Figure <ref type="figure" target="#fig_2">4</ref> and Figure <ref type="figure" target="#fig_3">5</ref>. The accuracy (Figure <ref type="figure" target="#fig_4">6</ref>) increases and the loss decreases for both the training and validation datasets. After 15 epochs, the accuracy becomes constant at 97.1%.</p></div>
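The three measures defined above follow directly from confusion-matrix counts; a plain-Python sketch (the counts below are illustrative, not the paper's results):

```python
def precision(tp, fp):
    """Share of positive predictions that were actually correct."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Share of actual positives that the model correctly identified."""
    return tp / (tp + fn)

def accuracy(tp, tn, fp, fn):
    """Share of all predictions that were correct."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical counts: 90 true positives, 85 true negatives,
# 10 false positives, 15 false negatives.
p = precision(90, 10)          # 90/100 = 0.90
r = recall(90, 15)             # 90/105
a = accuracy(90, 85, 10, 15)   # 175/200 = 0.875
```

These are the quantities a classification report such as Figure 5 summarizes per class.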
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusion</head><p>In a nutshell, this research work focuses on image classification using deep learning and the TensorFlow framework. It has two main objectives and one sub-objective, each tied closely to the conclusions drawn in this study. Firstly, it can be stated that the findings obtained so far have been extremely impressive. Secondly, this research focuses on the CNN, which is especially useful in image categorization technologies. To address the sub-objective, the CNN technique was examined in depth, beginning with assembling the model, then training it, and finally classifying the images into classes. The number of epochs was controlled to maintain accuracy while avoiding issues such as overfitting. Furthermore, Python was used as the programming language throughout this work, as it is compatible with the TensorFlow framework, which allows the entire system to be designed in Python.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: The layer configuration of the model</figDesc><graphic coords="4,128.75,72.00,337.20,220.20" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Random image from training datasets</figDesc><graphic coords="5,148.70,166.33,297.60,208.80" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Accuracy of the proposed model as a Confusion matrix representation</figDesc><graphic coords="5,183.50,523.69,228.65,201.86" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: The Classification Report for our model, which is proposed</figDesc><graphic coords="6,197.00,124.10,200.49,94.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: Accuracy and Loss Function Graph</figDesc><graphic coords="6,131.00,247.67,332.65,118.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="3,150.50,368.81,293.25,212.20" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 2 Basic</head><label>2</label><figDesc></figDesc><table><row><cell>CNN Model Configuration</cell><cell></cell></row><row><cell>Layers</cell><cell>Details of Layers of Model</cell></row><row><cell>Model_optimization_Layer</cell><cell>Adam_Optimizer Layer(Layer1)</cell></row><row><cell>Model_loss_Layer</cell><cell>model_ crossentropy_categorical(Layer2)</cell></row><row><cell>Model_metrics_Layer</cell><cell>['model_accuracy']</cell></row><row><cell>Model_Con2D_Layer</cell><cell>128 Model_filters,</cell></row><row><cell></cell><cell>3x3 Filter_size,</cell></row><row><cell></cell><cell>ReLU Activation(Layer3)</cell></row><row><cell>Max-Pooling2d_Layer</cell><cell>2x2 size_kernel(Layer4)</cell></row><row><cell>Model_Dropout_Layer</cell><cell>20%(Layer5)</cell></row><row><cell>Model_Conv2D Layer</cell><cell>64 filters,</cell></row><row><cell></cell><cell>5x5 Filter_Size,</cell></row><row><cell></cell><cell>ReLU Activation(Layer6)</cell></row><row><cell cols="2">Model_Max-Pooling2d_Layer 2x2 Size_kernel(Layer7)</cell></row><row><cell>Model_Dropout_Layer</cell><cell>20%(Layer8)</cell></row><row><cell>Model_Con2D Layer</cell><cell>356 model_filters,</cell></row><row><cell></cell><cell>5x5 Filter_size,</cell></row><row><cell></cell><cell>Relu_Activation(Layer9)</cell></row><row><cell cols="2">Model_Max-Pooling2d Layer 2x2 kernel_size(Layer10)</cell></row><row><cell>Model_Dropout</cell><cell>20%(Layer11)</cell></row><row><cell>Model_Flatten layer</cell><cell>2404 Neurons(Layer12)</cell></row><row><cell>Model_Dense Layer</cell><cell>128 Neurons(Layer13)</cell></row><row><cell cols="2">Model_Batch Normalization Relu Activation(Layer14)</cell></row><row><cell>Model_Dropout</cell><cell>25%(Layer15)</cell></row><row><cell>Model_Dense Layer</cell><cell>512 Neurons(Layer16)</cell></row><row><cell cols="2">Model_Batch Normalization 
Relu_Activation(Layer17)</cell></row><row><cell>Model_Dropout</cell><cell>20%(Layer18)</cell></row><row><cell>Model_Output layer</cell><cell>Softmax_Function</cell></row><row><cell></cell><cell>2 classes(Layer19)</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Image identification using neural networks</title>
		<author>
			<persName><forename type="first">M</forename><surname>Krishna</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Neelima</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Harshali</forename><surname>Mane</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Venu</forename><surname>Matcha</surname></persName>
		</author>
		<idno type="DOI">10.14419/ijet.v7i2.7.10892</idno>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="volume">7</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Extreme learning machine: Theory and applications</title>
		<author>
			<persName><forename type="first">G.-B</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q.-Y</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C.-K</forename><surname>Siew</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neurocomputing</title>
		<imprint>
			<biblScope unit="volume">70</biblScope>
			<biblScope unit="page" from="489" to="501" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">ML and DL frameworks and libraries for substantial and ample data mining: A survey</title>
		<author>
			<persName><forename type="first">G</forename><surname>Nguyen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artif. Intell. Rev</title>
		<imprint>
			<biblScope unit="volume">52</biblScope>
			<biblScope unit="page" from="77" to="124" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">Deep Learning Approach for Traffic Signs Detection</title>
		<author>
			<persName><forename type="first">Hitesh</forename><surname>Sharma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Md</forename><surname>Ahmed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Anurag</forename><surname>Mor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Gaurav</forename><surname>Paul</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Prashasti</forename><surname>Gupta</surname></persName>
		</author>
		<idno type="DOI">10.13140/RG.2.2.19147.11048</idno>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Freshness Classification of Hog Plum Fruit Using Deep Learning</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">V R</forename></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">F</forename><surname>Mahdi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Choudhury</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Sarkar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">P</forename><surname>Bhuyan</surname></persName>
		</author>
		<idno type="DOI">10.1109/HORA55278.2022.9799897</idno>
	</analytic>
	<monogr>
		<title level="m">International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA)</title>
				<imprint>
			<date type="published" when="2022">2022. 2022</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<author>
			<persName><surname>Simonyan</surname></persName>
		</author>
		<author>
			<persName><surname>Zisserman</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1409.1556</idno>
		<ptr target="https://arxiv.org/pdf/1409.1556.pdf" />
		<title level="m">Very deep convolutional networks for large-scale image recognition</title>
				<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Performance Analysis of the Classifiers for Optical Character Recognition</title>
		<author>
			<persName><forename type="first">Vivank</forename><surname>Sharma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Valaramathi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Santhi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Shobhit</forename><surname>Srivastava</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sumit</forename><surname>Jahagirdar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Intelligent Computing and Control Systems (ICCS)</title>
				<imprint>
			<date type="published" when="2019">2019. 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">A Framework for Automated Database Tuning Using Dynamic SGA Parameters and Basic Operating System Utilities</title>
		<author>
			<persName><forename type="first">R</forename><surname>Biswas</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Database Systems Journal</title>
		<imprint>
			<biblScope unit="volume">III</biblScope>
			<biblScope unit="issue">4</biblScope>
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">A Systematic Approach for Deep Learning Based Brain Tumor Segmentation</title>
		<author>
			<persName><surname>Sille</surname></persName>
		</author>
		<author>
			<persName><surname>Choudhury</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Chauhan</surname></persName>
		</author>
		<author>
			<persName><surname>Sharma</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Ingénierie des Systèmes d&apos;Information</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Abadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Barham</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Brevdo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Citro</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1603.04467</idno>
		<title level="m">TensorFlow: Large-scale machine learning on heterogeneous distributed systems</title>
				<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
	<note type="report_type">preprint</note>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Image classification based on improved VLAD</title>
		<author>
			<persName><forename type="first">X</forename><surname>Long</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Lu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Peng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Feng</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Multimedia Tools Appl</title>
		<imprint>
			<biblScope unit="volume">75</biblScope>
			<biblScope unit="issue">10</biblScope>
			<biblScope unit="page" from="5533" to="5555" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Allergen30: Detecting Food Items with Possible Allergens Using Deep Learning-Based Computer Vision</title>
		<author>
			<persName><forename type="first">M</forename><surname>Mishra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Sarkar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Choudhury</surname></persName>
		</author>
		<idno type="DOI">10.1007/s12161-022-02353-9</idno>
		<ptr target="https://doi.org/10.1007/s12161-022-02353-9" />
	</analytic>
	<monogr>
		<title level="j">Food Anal. Methods</title>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<author>
			<persName><forename type="first">Christian</forename><surname>Szegedy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Wei</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yangqing</forename><surname>Jia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pierre</forename><surname>Sermanet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Scott</forename><surname>Reed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dragomir</forename><surname>Anguelov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dumitru</forename><surname>Erhan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Vincent</forename><surname>Vanhoucke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andrew</forename><surname>Rabinovich</surname></persName>
		</author>
		<title level="m">Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</title>
				<meeting>the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</meeting>
		<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="1" to="9" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
