<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">LAPI at MediaEval 2017 - Predicting Media Interestingness</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Mihai</forename><forename type="middle">Gabriel</forename><surname>Constantin</surname></persName>
							<email>mgconstantin@imag.pub.ro</email>
							<affiliation key="aff0">
								<orgName type="department">LAPI</orgName>
								<orgName type="institution">University &quot;Politehnica&quot;</orgName>
								<address>
									<settlement>Bucharest</settlement>
									<country key="RO">Romania</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Bogdan</forename><surname>Boteanu</surname></persName>
							<email>bboteanu@imag.pub.ro</email>
							<affiliation key="aff0">
								<orgName type="department">LAPI</orgName>
								<orgName type="institution">University &quot;Politehnica&quot;</orgName>
								<address>
									<settlement>Bucharest</settlement>
									<country key="RO">Romania</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Bogdan</forename><surname>Ionescu</surname></persName>
							<email>bionescu@imag.pub.ro</email>
							<affiliation key="aff0">
								<orgName type="department">LAPI</orgName>
								<orgName type="institution">University &quot;Politehnica&quot;</orgName>
								<address>
									<settlement>Bucharest</settlement>
									<country key="RO">Romania</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">LAPI at MediaEval 2017 - Predicting Media Interestingness</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">054E1708AEEE62D2F93FA79A49193580</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T04:52+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In this paper we present our approach and results for the MediaEval 2017 Predicting Media Interestingness task. We studied several visual descriptors and created several early and late fusion combinations in our machine learning system, optimized for the best results in this benchmarking competition.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">INTRODUCTION</head><p>Multimedia interestingness has been studied more and more extensively in recent years, from several perspectives including psychology and computer vision. From a psychological perspective, user studies have described a correlation between human interest and several other concepts including, but not limited to, aesthetics, enjoyment, complexity and novelty <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b7">8]</ref>, while computer vision approaches have studied various sets of features and machine learning techniques able to predict the interestingness of multimedia shots, based on low-level attributes such as color histograms, SIFT or edge distributions <ref type="bibr" target="#b7">[8]</ref>, or on high-level attributes like composition rules or the presence of certain objects <ref type="bibr" target="#b6">[7]</ref>.</p><p>The MediaEval 2017 Predicting Media Interestingness task <ref type="bibr" target="#b5">[6]</ref> creates a benchmarking competition where participants are tasked with creating a system that can predict the interestingness of images and video segments annotated by a team of viewers, according to a Video on Demand scenario in which a set of the most interesting frames or video shots has to be presented to a certain user. This paper describes our approach to this task.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">APPROACH</head><p>The approach presented in this paper is a continuation of our work described in <ref type="bibr" target="#b2">[3]</ref>, with the addition of a video interestingness prediction system. The first step in our machine learning system is the extraction of the content descriptors, followed by the learning stage for these content descriptors and their early and late fusion combinations, executed on the annotated development dataset. In the final stage we evaluate the best performing combinations on the unlabeled testing dataset. The features used here are presented, along with a detailed description, in <ref type="bibr" target="#b2">[3]</ref> and are based on the works of <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b8">[9]</ref><ref type="bibr" target="#b9">[10]</ref><ref type="bibr" target="#b10">[11]</ref>. These features have been used in several domains connected with interestingness, such as aesthetics, photographic compositional rules and color theory.
For the machine learning algorithm we used a Support Vector Machine (SVM) <ref type="bibr" target="#b3">[4]</ref> with different parameters and kernels.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Features</head><p>The features used in this system are as follows: Hue, Saturation and Value computed from the HSV space (denoted HSV), Hue, Saturation and Lightness extracted from the HSL space (HSL), Colorfulness <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b8">9]</ref>, Hue descriptors (HueDesc) <ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b10">11]</ref>, Hue models (HueModel) <ref type="bibr" target="#b10">[11]</ref>, Brightness <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b10">11]</ref>, Edge <ref type="bibr" target="#b8">[9]</ref><ref type="bibr" target="#b9">[10]</ref><ref type="bibr" target="#b10">[11]</ref>, Texture <ref type="bibr" target="#b8">[9]</ref>, RGB entropy (RGBEntropy) <ref type="bibr" target="#b8">[9]</ref>, HSV wavelet (HSVwavelet) and the average value for the HSV wavelet (aHSVwavelet) <ref type="bibr" target="#b4">[5]</ref>, average HSV values based on the Rule of Thirds (aHSVRot) <ref type="bibr" target="#b4">[5]</ref>, average HSL values for the focus region (aHSLFocus) <ref type="bibr" target="#b10">[11]</ref>, size analysis for the largest five segments (LargSegm) <ref type="bibr" target="#b4">[5]</ref>, centroid placement (Centroids) <ref type="bibr" target="#b4">[5]</ref>, Hue, Saturation, Value and Brightness for the largest segments (HueSegm, SatSegm, ValSegm, BrightSegm) <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b10">11]</ref>, the color model for the largest segments (ColorSegm) <ref type="bibr" target="#b4">[5]</ref>, the coordinates of the segments (CoordSegm) <ref type="bibr" target="#b10">[11]</ref>, the mass variance, skewness and contrast between the segments (MassVarSegm, SkewSegm, ContrastSegm) <ref type="bibr" target="#b10">[11]</ref> and finally a depth of field indicator (DoF) calculated according to the method presented in <ref type="bibr" target="#b4">[5]</ref>.</p><p>While for the image subtask each image generated one set of the presented descriptors, for the video subtask we generated two sets of descriptors for each individual segment. These two sets were obtained by extracting the feature set for each frame and then calculating the average value and the median value over all the frames in a video segment.</p></div>
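The frame-level aggregation described above can be sketched as follows (a minimal NumPy illustration, not the authors' actual extraction code; the toy feature values are invented):

```python
import numpy as np

def video_descriptors(frame_features):
    """Aggregate the per-frame feature vectors of one video segment into
    the two descriptor sets used for the video subtask: the per-dimension
    average (AVG) and the per-dimension median (MED)."""
    frames = np.asarray(frame_features, dtype=float)  # (n_frames, n_dims)
    return frames.mean(axis=0), np.median(frames, axis=0)

# Toy segment: three frames, each described by a 4-dimensional feature
segment = [[0.1, 0.5, 0.2, 0.9],
           [0.3, 0.5, 0.4, 0.7],
           [0.2, 0.5, 0.9, 0.8]]
avg, med = video_descriptors(segment)
```

Both aggregates have the same dimensionality as a single-frame feature, so the image-subtask learning pipeline can be reused unchanged for video.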
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Data fusion</head><p>In both subtasks we used early and late fusion techniques to maximize our final results. Early fusion consisted of concatenating several features and using the newly created feature as input for a new training run, while for late fusion we took the confidence output values of several runs and combined them according to several strategies, thus generating new confidence outputs.</p><p>For the late fusion trials we used four strategies: CombMax and CombMin, where we took the maximum and, respectively, the minimum confidence value for each media sample and used it as the new output; CombSum, where we added up the individual confidence values of the runs; and CombMean, where the confidence values were multiplied by weights distributed according to the rank of the initial system before being added. Each weight was calculated as w = 1/2^r, where the rank r had the value 0 for the best component output classifier, 1 for the second and so on.</p></div>
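As an illustration, the four late fusion rules above can be written down as follows (a sketch only: the score matrix and run ordering are invented, and just the combination rules follow the text):

```python
import numpy as np

def late_fusion(run_scores, strategy):
    """Fuse the confidence outputs of several component runs into one
    score per media sample. Rows of `run_scores` are runs ordered from
    best devset rank (r = 0) to worst; columns are media samples."""
    scores = np.asarray(run_scores, dtype=float)
    if strategy == "CombMax":
        return scores.max(axis=0)
    if strategy == "CombMin":
        return scores.min(axis=0)
    if strategy == "CombSum":
        return scores.sum(axis=0)
    if strategy == "CombMean":
        # rank-based weight w = 1 / 2**r for the run ranked r on the devset
        weights = 1.0 / 2.0 ** np.arange(scores.shape[0])
        return weights @ scores
    raise ValueError("unknown strategy: " + strategy)

runs = [[0.9, 0.2],   # confidence outputs of the best-ranked run
        [0.4, 0.6]]   # confidence outputs of the second-ranked run
fused = late_fusion(runs, "CombMean")  # 1.0 * run0 + 0.5 * run1
```

Note that CombMean, despite its name, is a rank-weighted sum: the best devset run contributes with weight 1, the second with 1/2, the third with 1/4, and so on.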
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Learning system</head><p>The learning system we used was an SVM, implemented with the LibSVM library <ref type="bibr" target="#b1">[2]</ref>, with linear, polynomial and RBF kernels. For the degree, gamma and cost coefficients we used combinations of the values 2^k, where k ∈ {−6, ..., 6}.</p></div>
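A grid of this shape can be reproduced with scikit-learn's LibSVM-based SVC (a sketch under assumptions: the data is synthetic, the coarser k step and the degree values are illustrative, and the paper used LibSVM directly rather than scikit-learn):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# 2**k value grid; the paper swept k in [-6, ..., 6] for cost, gamma and degree
two_pow = [2.0 ** k for k in range(-6, 7, 2)]
param_grid = [
    {"kernel": ["linear"], "C": two_pow},
    {"kernel": ["rbf"], "C": two_pow, "gamma": two_pow},
    {"kernel": ["poly"], "C": two_pow, "gamma": two_pow, "degree": [2, 3]},
]

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))           # stand-in content descriptors
y = (X[:, 0] > 0).astype(int)          # stand-in interestingness labels
search = GridSearchCV(SVC(), param_grid, cv=10).fit(X, y)
margins = search.best_estimator_.decision_function(X)  # distance to hyperplane
```

The margin to the decision hyperplane returned by `decision_function` is what the paper later uses as the per-shot interestingness confidence score.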
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">EXPERIMENTAL RESULTS</head><p>As presented in the task overview paper <ref type="bibr" target="#b5">[6]</ref>, the development dataset consisted of 7396 frames for the image subtask and 7396 video segments for the video subtask, while the test dataset had 2435 frames for the image subtask and 2435 video segments for the video subtask. The official metric was mean average precision at 10 (MAP@10), and the organisers also calculated the mean average precision (MAP) for each submitted run. A large number of experiments with different early and late fusion strategies and different SVM systems were carried out, and in the last phase the best performing combinations were run on the testset.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Experiments on the devset</head><p>Our SVM training system used a 10-fold cross-validation approach for choosing the best SVM-feature set combination. Generally, taking into account the MAP@10 metric, the best performing SVM kernel was the RBF kernel. Another general observation is that the late fusion approaches, especially CombMax and CombMean, outperformed the early fusion combinations, while early fusion outperformed learning systems with single descriptors. On the other hand, the CombMin and CombSum strategies performed worse than their components in many combinations. Regarding the two descriptor sets for the video subtask (average and median), the results were mixed: some early fusion combinations or single descriptors performed better with the median approach, while others performed better when we used the average calculation. The interestingness confidence scores for each shot, used for the MAP@10 calculation, were extracted as the margin to the decision hyperplane.</p><p>Table <ref type="table" target="#tab_0">1</ref> shows the best results registered on both the image and the video subtasks; as mentioned earlier, the best results were achieved with the late fusion approaches. For the video subtask we used the notation AVG for features obtained using the average and MED for features obtained using the median. All the components in Table <ref type="table" target="#tab_0">1</ref> were trained using the best performing SVM RBF kernel.</p><p>For the image subtask the best result on the devset was obtained with a CombMax strategy combining the early fusion outputs of HSV + HSL + aHSLFocus and aHSVRot + aHSLFocus and HSV + MassVarSegm + LargSegm, with a MAP@10 score on the devset of 0.0821. For the video subtask the best result was a CombMax strategy containing the LargSegmMED + ValSegmMED and TextureMED + MassVarSegmMED early fusion outputs, with a MAP@10 score of 0.0753.</p></div>
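The MAP@10 metric can be sketched as follows (one common formulation of AP@k that averages precision at the ranks of retrieved positives; the task's exact normalization may differ, and the scores and labels below are invented):

```python
def average_precision_at_k(scores, labels, k=10):
    """AP@k: rank samples by descending confidence score and average the
    precision values at each of the first k ranks holding a positive."""
    ranked = sorted(zip(scores, labels), key=lambda p: -p[0])[:k]
    hits, precisions = 0, []
    for rank, (_, positive) in enumerate(ranked, start=1):
        if positive:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0

# Four shots of one video: confidence scores and binary interestingness labels
ap = average_precision_at_k([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 0])
# MAP@10 is then the mean of AP@10 over all videos in the set
```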
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Official results on testset</head><p>For the final submission we trained the systems on the entire devset, using the optimal parameters found in the previous experiments, and tested the resulting systems on the testset.</p><p>Table <ref type="table" target="#tab_0">1</ref> also presents the official results on the testset runs for the combinations we submitted, as returned by the task organisers, with the MAP and MAP@10 scores for each of the runs. For the image subtask we have a best MAP@10 score of 0.0555, obtained by using a CombMean strategy with the outputs of aHSVRot + aHSLFocus and HSV + MassVarSegm + LargSegm. The same system also had the best MAP score, 0.1873. For the video subtask it was again a single system that obtained both the best MAP@10 and the best MAP score: a CombMean strategy using the early fusion outputs of LargSegmMED + ValSegmMED and TextureMED + MassVarSegmMED and EdgeAVG + TextureAVG, with a MAP@10 value of 0.0732 and a MAP value of 0.2028.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">CONCLUSIONS</head><p>In this paper we presented several systems that predict media interestingness using content descriptors and early and late fusion approaches. We tested these systems on the MediaEval 2017 Predicting Media Interestingness task; our best testset results were a MAP@10 of 0.0555 for the image subtask and 0.0732 for the video subtask.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 :</head><label>1</label><figDesc>Best results on devset for the image and video subtasks and their final result on testset (best testset results are marked in bold).</figDesc><note>MediaEval'17, 13-15 September 2017, Dublin, Ireland. MG Constantin et al.</note><table>
<row><cell>Subtask</cell><cell>Run</cell><cell>Approach</cell><cell>MAP@10 devset</cell><cell>MAP testset</cell><cell>MAP@10 testset</cell></row>
<row><cell>image</cell><cell>run1</cell><cell>CombMax (HSV + HSL + aHSLFocus and aHSVRot + aHSLFocus and HSV + MassVarSegm + LargSegm)</cell><cell>0.0821</cell><cell>0.1791</cell><cell>0.0463</cell></row>
<row><cell>image</cell><cell>run2</cell><cell>CombMax (HSV + HSL + aHSLFocus and aHSVRot + aHSLFocus)</cell><cell>0.0803</cell><cell>0.1789</cell><cell>0.0442</cell></row>
<row><cell>image</cell><cell>run3</cell><cell>CombMean (aHSVRot + aHSLFocus and HSV + MassVarSegm + LargSegm)</cell><cell>0.0793</cell><cell>0.1873</cell><cell>0.0555</cell></row>
<row><cell>image</cell><cell>run4</cell><cell>CombMean (HSVWavelet + aHSVWavelet + aHSLFocus and HSV + HSL + aHSLFocus and HSV + MassVarSegm)</cell><cell>0.0793</cell><cell>0.1851</cell><cell>0.0529</cell></row>
<row><cell>video</cell><cell>run1</cell><cell>CombMax (LargSegmMED + ValSegmMED and TextureMED + MassVarSegmMED)</cell><cell>0.0753</cell><cell>0.1937</cell><cell>0.0619</cell></row>
<row><cell>video</cell><cell>run2</cell><cell>CombMax (LargSegmMED + ValSegmMED and TextureMED + MassVarSegmMED and EdgeAVG + TextureAVG)</cell><cell>0.0737</cell><cell>0.1819</cell><cell>0.0564</cell></row>
<row><cell>video</cell><cell>run3</cell><cell>CombMax (EdgeAVG + TextureAVG and HSVAVG + MassVarSegmAVG)</cell><cell>0.0732</cell><cell>0.1937</cell><cell>0.0619</cell></row>
<row><cell>video</cell><cell>run4</cell><cell>CombMean (LargSegmMED + ValSegmMED and TextureMED + MassVarSegmMED and EdgeAVG + TextureAVG)</cell><cell>0.0725</cell><cell>0.2028</cell><cell>0.0732</cell></row>
<row><cell>video</cell><cell>run5</cell><cell>CombMax (EdgeAVG + TextureAVG and HSVAVG + MassVarSegmAVG and HSLAVG + ColorfulnessAVG)</cell><cell>0.0723</cell><cell>0.1843</cell><cell>0.0571</cell></row>
</table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>ACKNOWLEDGMENTS</head><p>Part of this work was funded by UEFISCDI under research grant PNIII-P2-2.1-PED-2016-1065, agreement 30PED/2017, project SPOTTER.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><forename type="first">Daniel</forename><forename type="middle">E</forename><surname>Berlyne</surname></persName>
		</author>
		<title level="m">Conflict, arousal, and curiosity</title>
				<imprint>
			<date type="published" when="1960">1960</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">LIBSVM: a library for support vector machines</title>
		<author>
			<persName><forename type="first">Chih-Chung</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Chih-Jen</forename><surname>Lin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Intelligent Systems and Technology (TIST)</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page">27</biblScope>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Content Description for Predicting Image Interestingness</title>
		<author>
			<persName><forename type="first">Mihai</forename><forename type="middle">Gabriel</forename><surname>Constantin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bogdan</forename><surname>Ionescu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Symposium on Signals, Circuits and Systems -ISSCS</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Support vector machine</title>
		<author>
			<persName><forename type="first">Corinna</forename><surname>Cortes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Vladimir</forename><surname>Vapnik</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Machine Learning</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="273" to="297" />
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Studying aesthetics in photographic images using a computational approach</title>
		<author>
			<persName><forename type="first">Ritendra</forename><surname>Datta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dhiraj</forename><surname>Joshi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jia</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">James</forename><forename type="middle">Z</forename><surname>Wang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">European Conference on Computer Vision</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2006">2006</date>
			<biblScope unit="page" from="288" to="301" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Mediaeval 2017 predicting media interestingness task</title>
		<author>
			<persName><forename type="first">Claire-Hélène</forename><surname>Demarty</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mats</forename><surname>Sjöberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bogdan</forename><surname>Ionescu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Thanh-Toan</forename><surname>Do</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michael</forename><surname>Gygli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ngoc</forename><forename type="middle">Qk</forename><surname>Duong</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">MediaEval 2017 Multimedia Benchmark Workshop Working Notes Proceedings of the MediaEval 2017 Workshop</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">High level describable attributes for predicting aesthetics and interestingness</title>
		<author>
			<persName><forename type="first">Sagnik</forename><surname>Dhar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Vicente</forename><surname>Ordonez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tamara</forename><forename type="middle">L</forename><surname>Berg</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Computer Vision and Pattern Recognition (CVPR)</title>
				<imprint>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="1657" to="1664" />
		</imprint>
	</monogr>
	<note>IEEE Conference on. IEEE</note>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">The interestingness of images</title>
		<author>
			<persName><forename type="first">Michael</forename><surname>Gygli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Helmut</forename><surname>Grabner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hayko</forename><surname>Riemenschneider</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Fabian</forename><surname>Nater</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Luc</forename><surname>Van Gool</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE International Conference on Computer Vision</title>
				<meeting>the IEEE International Conference on Computer Vision</meeting>
		<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="1633" to="1640" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Can we measure beauty? Computational evaluation of coral reef aesthetics</title>
		<author>
			<persName><forename type="first">Andreas</forename><forename type="middle">F</forename><surname>Haas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Marine</forename><surname>Guibert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Anja</forename><surname>Foerschner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sandi</forename><surname>Calhoun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Emma</forename><surname>George</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mark</forename><surname>Hatay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Elizabeth</forename><surname>Dinsdale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Stuart</forename><forename type="middle">A</forename><surname>Sandin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jennifer</forename><forename type="middle">E</forename><surname>Smith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mark</forename><forename type="middle">J A</forename><surname>Vermeij</surname></persName>
		</author>
		<author>
			<persName><surname>others</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">PeerJ</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page">e1390</biblScope>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">The design of high-level features for photo quality assessment</title>
		<author>
			<persName><forename type="first">Yan</forename><surname>Ke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Xiaoou</forename><surname>Tang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Feng</forename><surname>Jing</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Computer Vision and Pattern Recognition</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2006">2006</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="419" to="426" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Aesthetic visual quality assessment of paintings</title>
		<author>
			<persName><forename type="first">Congcong</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tsuhan</forename><surname>Chen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Journal of Selected Topics in Signal Processing</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="236" to="252" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
