<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Indexing Camera Motion Integrating Knowledge of the Quality of the Encoded Video</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">DD649169E24218D59E2371FA5618DA88</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T17:59+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>video indexing</term>
					<term>camera motion</term>
					<term>compressed streams</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Fast indexing of video content in the compressed domain has become an important task, as growing quantities of multimedia (MM) digital content are available in this form. In this paper we present a method for fast indexing of camera motion in MPEG-1 and MPEG-2 compressed video. We use P-frame motion vectors and extract from the compressed stream some knowledge of the quality of the compensated motion. This knowledge is then used to decide whether the motion should be refined. Camera motion is then indexed in terms of physical motions. Results obtained on the TREC Video test data set are promising.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>I. INTRODUCTION</head><p>Indexing and annotating large quantities of film and video material has become an increasing problem for the media industry. Today, indexing for large application areas such as broadcast, archives, and home MM devices definitely follows the MPEG-7 compliant way. MPEG-7 is a standard <ref type="bibr" target="#b0">[1]</ref> for describing multimedia content. For visual media, it defines descriptors that characterize the content on a visual basis. For video, whose intrinsic property is motion, it proposes motion descriptors. Nevertheless, MPEG-7 gives no hints on how to produce a standard-compliant description of e.g. camera motion, nor on how to translate this description into features easily interpreted by humans, such as tilt, zoom, or pan. A lot of multimedia content is already available in compressed form. Furthermore, the digitization of existing video content and the digital production of new content are today unthinkable without compression. Thus much work <ref type="bibr">[2 -4]</ref> has been devoted to the estimation of the camera model from motion vectors contained in the compressed stream. This work is another step forward in the general framework which we call the "Rough Indexing Paradigm", developed since <ref type="bibr" target="#b4">[5]</ref>. A whole range of indexing tasks, such as shot boundary detection, scene grouping, video summarization, video object extraction, or motion characterization, can be fulfilled on the degraded, low-resolution data produced by encoding video streams with current encoders (MPEG-1, MPEG-2, H.264 …). We claim that a compressed stream is a rich source of input data for indexing; it is only a matter of interpretation to use it intelligently. In this paper we show how we can use not only MPEG (1 or 2) motion vectors, but also the information on the quality of their estimation, in order to estimate the camera model (Section II) and to qualify motion in a humanly interpretable way (Section III). This is, for instance, the task of camera motion characterization in TREC Video 2005, in which we participated. We show how this knowledge helps us to improve the indexing results, and we give the perspectives of this work (Section IV).</p><p>P. Krämer and J. Benois-Pineau are with the LABRI UMR CNRS/University of Bordeaux 1/Enseirb/INRIA laboratory, 351, crs de la Libération, 33405 Talence Cedex, France; petra.kraemer, jenny.benois@labri.fr; phone 33 5 40 00 84 24, fax 33 5 40 00 66 69. M. Gràcia Pla has been on a master position in LABRI on leave from UPC, Barcelona, Spain.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>II. GLOBAL MOTION ESTIMATION AND CORRECTION FROM MPEG COMPRESSED VIDEO</head><p>In this section we address the problem of estimating the global motion (camera model) in a video sequence. Here we use motion compensation vectors from P-frames. In order to retain the same temporal resolution and obtain a smooth motion trajectory, we interpolate the motion for I-frames. Finally, as MPEG motion vectors are computed not for analysis purposes but for optimal encoding, they can be highly erroneous (e.g. in the case of strong motion). We therefore propose how to detect such encoder failures and how to correct the motion.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>II.1 Global motion estimation from P-frames</head><p>Here we rely on our previous work <ref type="bibr" target="#b4">[5]</ref> and use a 6-parameter affine camera model. We suppose <ref type="bibr" target="#b4">[5]</ref> that an MPEG macro-block displacement vector is expressed as:</p><formula xml:id="formula_0">dx = a1 + a2 (x − x_g) + a3 (y − y_g)
dy = a4 + a5 (x − x_g) + a6 (y − y_g)<label>(1)</label></formula><p>where (x_g, y_g) denotes the image center. The robust estimator that we proposed in <ref type="bibr" target="#b4">[5]</ref> allows classifying macro-blocks (MBs) either as conformant to the model, forming what we call the "dominant estimation support", or as outliers. The latter contain intra-coded MBs, and MBs in moving objects and occluding areas. This approach supposes that in a current P-frame there are motion vectors which express the apparent camera motion. Unfortunately, this is not always the case. In order to recover the real camera motion in such frames, it is necessary to detect encoder failures and to correct the motion.</p><p>Indexing Camera Motion Integrating Knowledge of the Quality of the Encoded Video. P. Krämer, J. Benois-Pineau, member IEEE, M. Gràcia Pla.</p></div>
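The robust estimator of [5] is not spelled out above; as a minimal sketch, the following stands in for it with a simple iterative least-squares fit of the 6-parameter affine model (1), rejecting outlier macro-blocks by their residual norm. The function name, the fixed residual threshold, and the iteration count are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def fit_affine(centers, vectors, image_center, n_iter=3, thresh=2.0):
    """Fit dx = a1 + a2*(x-xg) + a3*(y-yg), dy = a4 + a5*(x-xg) + a6*(y-yg)
    to macro-block motion vectors, iteratively discarding outlier MBs.
    NOTE: illustrative stand-in for the robust estimator of [5]."""
    xg, yg = image_center
    rel = centers - np.array([xg, yg])          # MB coordinates centered on image
    inliers = np.ones(len(centers), dtype=bool)
    a = np.zeros(6)
    for _ in range(n_iter):
        r = rel[inliers]
        # design matrix: one pair of rows (dx, dy equations) per macro-block
        A = np.zeros((2 * len(r), 6))
        A[0::2, 0] = 1; A[0::2, 1] = r[:, 0]; A[0::2, 2] = r[:, 1]
        A[1::2, 3] = 1; A[1::2, 4] = r[:, 0]; A[1::2, 5] = r[:, 1]
        b = vectors[inliers].ravel()
        a, *_ = np.linalg.lstsq(A, b, rcond=None)
        # residuals over ALL blocks; re-select the "dominant estimation support"
        pred = np.column_stack([a[0] + a[1] * rel[:, 0] + a[2] * rel[:, 1],
                                a[3] + a[4] * rel[:, 0] + a[5] * rel[:, 1]])
        res = np.linalg.norm(vectors - pred, axis=1)
        inliers = res < thresh
    return a, inliers
```

The returned inlier mask plays the role of the dominant estimation support D_t; the outliers gather the intra-coded MBs and MBs on moving objects.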
<div xmlns="http://www.tei-c.org/ns/1.0"><head>II.2 Detection of frames with low-quality motion and motion correction</head><p>If the MPEG encoder's motion estimator failed, the motion compensation error encoded in the MPEG stream is strong. Such failures depend heavily on the parameter settings of the encoder and are specifically observed in the case of strong motion (e.g. soccer content).</p><p>We compute the mean low-frequency energy E_t on the dominant estimation support D_t, i.e. excluding the motion outliers:</p><formula xml:id="formula_1">E_t = (1 / |D_t|) Σ_{p ∈ D_t} DC_err²(p, t)<label>(2)</label></formula><p>where DC_err denotes the DC coefficients extracted from the encoded error of P-frames. To decide whether the motion model has to be corrected, we use the temporal mean γ_t of (2). If the instantaneous value of (2) exceeds αγ_t, with α ≥ 1, then the motion is corrected.</p><p>To fulfill this correction, we first interpolate the motion model from neighboring P-frames by linear regression. This interpolation is used as the initialization of the model estimate in a gradient descent scheme.</p><p>Here we minimize the functional of the mean square error of the motion compensation at DC resolution on the dominant estimation support:</p><formula xml:id="formula_2">MSE_t = (1 / |D_t|) Σ_{p ∈ D_t} ( I_t(p) − I_{t−1}(p + d⃗) )²<label>(3)</label></formula><p>The optimization is done in the parameter space by gradient descent:</p><formula xml:id="formula_3">Θ_t^{i+1} = Θ_t^i − (1/2) ε G_t^i</formula><p>with G as the gradient of (3) and ε as the adaptive gain matrix.</p></div>
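The detection step above can be sketched in a few lines. The computation of E_t follows (2) directly; the running temporal mean γ_t is realized here as a simple causal average over the preceding frames, which is an assumption, since the paper does not specify how γ_t is maintained:

```python
import numpy as np

def mean_dc_energy(dc_err, support):
    """E_t of (2): mean squared DC error coefficient over the
    dominant estimation support (a boolean mask over macro-blocks)."""
    return float(np.mean(np.square(dc_err[support])))

def flag_bad_frames(energies, alpha=4.0):
    """Flag P-frames whose energy E_t exceeds alpha * gamma_t, where
    gamma_t is the mean of E over the preceding frames (causal average,
    an assumed realization of the temporal mean)."""
    flags = []
    running_sum = 0.0
    for t, e in enumerate(energies):
        gamma = running_sum / t if t > 0 else e
        flags.append(t > 0 and e > alpha * gamma)
        running_sum += e
    return flags
```

Frames flagged in this way then get their model re-initialized by interpolation from neighboring P-frames and refined by gradient descent on (3).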
<div xmlns="http://www.tei-c.org/ns/1.0"><head>III. CAMERA MOTION INDEXING</head><p>The objective here is to translate the motion model (1) into physical motion interpretable by humans, such as pan, tilt, or zoom. To do this we follow <ref type="bibr" target="#b5">[6]</ref> and reformulate model (1) as:</p><formula xml:id="formula_4">dx = pan + zoom·x − rot·y + hyp1·x + hyp2·y
dy = tilt + rot·x + zoom·y + hyp2·x − hyp1·y<label>(4)</label></formula><p>Then two statistical hypotheses are tested for each parameter of this model. The first one, H_0, consists in supposing that the parameter is significant; the second one, H_1, assumes that the component is not significant, i.e. equals zero. The likelihood function f for each hypothesis is defined with respect to the residuals between the estimated model and the MPEG motion vectors. These residuals are supposed to follow a bivariate Gaussian distribution. The decision on the significance is made by comparing the log-likelihood ratio with a threshold. We used this scheme in our previous work; but when the knowledge of a bad estimation is available from (2), we do not compute residuals between the erroneous MPEG motion vectors and those obtained by the re-estimated model. In this case the interpolated parameters are used as the reference (light correction).</p></div>
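The mapping between the affine parameters a1…a6 of (1) and the physical components of (4) can be sketched as the classical symmetric/antisymmetric split of the linear part, in the spirit of [6]; the function name and dictionary layout are illustrative:

```python
def decompose_affine(a1, a2, a3, a4, a5, a6):
    """Split the 6-parameter affine model into physically interpretable
    components: translation (pan, tilt), divergence (zoom), rotation,
    and the two hyperbolic terms."""
    return {
        "pan":  a1,
        "tilt": a4,
        "zoom": (a2 + a6) / 2.0,   # symmetric diagonal part (divergence)
        "rot":  (a5 - a3) / 2.0,   # antisymmetric part (rotation)
        "hyp1": (a2 - a6) / 2.0,   # hyperbolic deformation terms
        "hyp2": (a3 + a5) / 2.0,
    }
```

Substituting these components back into (4) recovers exactly dx = a1 + a2·x + a3·y and dy = a4 + a5·x + a6·y, so the decomposition is lossless; the significance test of H_0 against H_1 is then run on each component separately.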
<div xmlns="http://www.tei-c.org/ns/1.0"><head>IV. RESULTS AND CONCLUSION</head><p>To assess the improvement due to the proposed integration of the knowledge on erroneous motion and the re-estimation of motion (3), we conducted experiments on the evaluation set of the TREC Video camera motion task (http://www-nlpir.nist.gov/projects/trecvid/), in which we participated in 2005. A subset of 4 videos containing visually observable motion was chosen. Using α = 4.0 in the decision rule, about 4% of the P-frame motion is corrected. Due to this correction we obtain a mean precision of 76% and a mean recall of 86.1%; without the correction, 74.5% and 78.7% are obtained respectively. We have to stress that an increase in recall of almost 8% is already highly significant for this task.</p><p>Hence, in this paper we proposed a new method for motion correction when estimating and indexing camera motion from compressed (MPEG-1 and MPEG-2) video streams.</p><p>We tested it for indexing purposes on the MPEG-1 compressed TREC Video test set. For video summarization by mosaicing from compressed streams, and for other indexing applications (shot boundary detection, object extraction), we work on MPEG-2 compressed streams as well. There is no principal difference, and the method proves promising for the whole Rough Indexing Paradigm, which we continue to develop on compressed streams.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>DC coefficients extracted from the encoded error in P-frames.</figDesc></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">V.7: Coding of Moving Pictures and Audio</title>
	</analytic>
	<monogr>
		<title level="m">MPEG-7 Requirements Document</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Global motion estimation algorithm for video segmentation</title>
		<author>
			<persName><forename type="first">E</forename><surname>Saez</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. SPIE, VCIP&apos;03</title>
				<meeting>SPIE, VCIP&apos;03</meeting>
		<imprint>
			<biblScope unit="page" from="1540" to="1550" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Estimation of arbitrary camera motion in {MPEG} videos</title>
		<author>
			<persName><forename type="first">R</forename><surname>Ewerth</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. ICPR&apos;04</title>
				<meeting>ICPR&apos;04</meeting>
		<imprint>
			<biblScope unit="page" from="512" to="515" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Adaptive Methods for Motion Characterization and Segmentation of MPEG Compressed Frame Sequences</title>
		<author>
			<persName><forename type="first">C</forename><surname>Doulaverakis</surname></persName>
		</author>
	</analytic>
	<monogr>
	<title level="m">Proc. ICIAR&apos;04</title>
				<meeting>ICIAR&apos;04</meeting>
		<imprint>
			<biblScope unit="page" from="310" to="317" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Robust Motion Characterisation for Video Indexing based on Optical Flow</title>
		<author>
			<persName><forename type="first">M</forename><surname>Durik</surname></persName>
		</author>
	</analytic>
	<monogr>
	<title level="m">Proc. CBMI&apos;01</title>
				<meeting>CBMI&apos;01</meeting>
		<imprint>
			<biblScope unit="page" from="57" to="64" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">A unified approach to shot change detection and camera motion characterization</title>
		<author>
			<persName><forename type="first">P</forename><surname>Bouthemy</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Trans. on CSVT</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="issue">7</biblScope>
			<biblScope unit="page" from="1030" to="1044" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
