<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Spatio-Temporal Slices for Frame Cut Detection in Video</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">N</forename><forename type="middle">A</forename><surname>Sorokina</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">Samara National Research University</orgName>
								<address>
									<addrLine>Moskovskoe Shosse 34</addrLine>
									<postCode>443086</postCode>
									<settlement>Samara</settlement>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">V</forename><forename type="middle">A</forename><surname>Fedoseev</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">Samara National Research University</orgName>
								<address>
									<addrLine>Moskovskoe Shosse 34</addrLine>
									<postCode>443086</postCode>
									<settlement>Samara</settlement>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
							<affiliation key="aff2">
								<orgName type="department">Image Processing Systems Institute -Branch of the Federal Scientific Research Centre &quot;Crystallography and Photonics&quot; of Russian Academy of Sciences</orgName>
								<address>
									<addrLine>Molodogvardeyskaya str. 151</addrLine>
									<postCode>443001</postCode>
									<settlement>Samara</settlement>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff0">
								<orgName type="department">IV International Conference on &quot;Information Technology and Nanotechnology&quot; (ITNT</orgName>
								<address>
									<postCode>2018</postCode>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Spatio-Temporal Slices for Frame Cut Detection in Video</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">E84ECE96644A75AF42B760F82BE394C9</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T04:16+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The paper proposes an approach to detecting unauthorized inter-frame video changes using spatio-temporal slices. This approach can significantly reduce the amount of data processed and replace video processing with image processing, which can be performed much faster. To test the efficiency of this approach, we consider a simple algorithm that analyzes adjacent rows of a slice and then classifies the rows based on the result. Experimental studies have revealed that this algorithm shows moderate results in terms of quality, but it has great potential for improvement, which confirms the promise of spatio-temporal slices for the given problem.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.1.">Problem statement</head><p>Today digital video plays an increasingly important role in society. In 2015, according to the Sandvine report <ref type="bibr" target="#b0">[1]</ref>, the share of video and audio in North American traffic exceeded 70%. In addition, according to the Ericsson report <ref type="bibr" target="#b1">[2]</ref>, by 2019 the share of video in mobile traffic should exceed 50% (it is already above 40%). The reasons for this are not only the development of the entertainment industry, but also the growing market for video surveillance systems (up to 20% per annum, according to the MarketsandMarkets analysts' report <ref type="bibr" target="#b2">[3]</ref>) and their widespread adoption by both large business structures and small companies. As a consequence, the data received by surveillance systems are increasingly used in investigative activities or as evidence in legal proceedings. For this reason, such data must be reliably protected from unauthorized alteration by intruders.</p><p>One of the most common kinds of unauthorized video alteration is inter-frame modification, which includes the removal of video fragments or their replacement with copies of other ones. Such changes can remove evidence of a crime or data about movements of persons or vehicles that are important in a particular context. In video signals obtained from a stationary camera, such changes can be practically invisible. In the case of a moving camera, they can also be hard to detect when an intruder cuts or replaces short fragments.</p><p>In this paper, we consider the problem of detecting artificial inter-frame changes in video signals taken from a moving or stationary camera. The detection method should work well with video signals stored in various formats, and should also combine high detection accuracy with high speed.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.2.">Review of related studies</head><p>In practice, the detection of artificial changes in video can be carried out using digital forensics methods developed since the second half of the 2000s <ref type="bibr" target="#b3">[4]</ref><ref type="bibr" target="#b4">[5]</ref><ref type="bibr" target="#b5">[6]</ref>. The main achievements in this direction are associated with H. Farid, A.C. Popescu, S. Prasad, J. Fridrich, A. Piva, and M. Barni. The last two, in the review paper <ref type="bibr" target="#b6">[7]</ref>, classified digital video forensics methods into the following groups:</p><p>1) camera-based methods that analyze various video artifacts to determine the optical system of the camera;</p><p>2) coding-based methods that identify artifacts resulting from encoding video with certain codecs; 3) geometry- or physics-based methods that detect violations in the physical or geometric parameters of the observed objects; 4) pixel-based methods that detect changes at the pixel level of the video.</p><p>Examples of algorithms aimed at detecting inter-frame modifications can be found, in particular, in <ref type="bibr" target="#b7">[8]</ref><ref type="bibr" target="#b8">[9]</ref><ref type="bibr" target="#b9">[10]</ref><ref type="bibr" target="#b10">[11]</ref><ref type="bibr" target="#b11">[12]</ref><ref type="bibr" target="#b12">[13]</ref><ref type="bibr" target="#b13">[14]</ref>. 
Most of them are coding-based methods <ref type="bibr" target="#b7">[8]</ref><ref type="bibr" target="#b8">[9]</ref><ref type="bibr" target="#b9">[10]</ref><ref type="bibr" target="#b10">[11]</ref> or geometry/physics-based ones <ref type="bibr" target="#b11">[12]</ref><ref type="bibr" target="#b12">[13]</ref><ref type="bibr" target="#b13">[14]</ref>.</p><p>The coding-based methods <ref type="bibr" target="#b7">[8]</ref><ref type="bibr" target="#b8">[9]</ref><ref type="bibr" target="#b9">[10]</ref><ref type="bibr" target="#b10">[11]</ref> rely on the properties of certain video formats (usually different versions of MPEG) and assume the separation of frames into different types (P-frames, I-frames, etc.). Therefore, these methods do not satisfy the stated universality requirement regarding the data format. In addition, a significant number of such methods (in particular, <ref type="bibr" target="#b7">[8]</ref><ref type="bibr" target="#b8">[9]</ref><ref type="bibr" target="#b9">[10]</ref>) may be used to establish the fact of a video change, but do not allow one to determine the exact location of the changes (in the time domain).</p><p>As for the geometry/physics-based methods, many of them (in particular, <ref type="bibr" target="#b11">[12]</ref><ref type="bibr" target="#b12">[13]</ref>) rely on optical flow to track changes frame by frame. This technique works well for a stationary camera, but for a moving camera it must take into account camera movements, which may be unknown. Moreover, methods based on optical flow do not provide high computational performance. Another method from this group <ref type="bibr" target="#b13">[14]</ref> takes a different approach. As a vivid example, that paper considers a simple scene with moving balls. 
To detect the removal of a video fragment, the method first tracks the ball positions and then detects physically unjustified deviations of the found trajectories, which is possible evidence of unauthorized changes. The main drawback of this method is the need to solve the complex problem of tracking several moving objects. However, its basic idea is rather attractive: to detect unauthorized alterations, we can try to analyze video data over a long time interval.</p><p>In this paper, we test a method based on the same idea of detecting deviations in time. However, as the analyzed data, we propose to use so-called spatio-temporal slices of video images, which are slices of the video data cube along the time axis and one of the spatial axes (for example, the horizontal one). If we build several such horizontal slices of a video at a certain vertical interval, the resulting set of images gives quite enough information about object movements, although the data have a much smaller volume compared with the original video. Moreover, to process these data, we can use computationally efficient image processing methods, in particular, parallel-recursive FIR filters <ref type="bibr" target="#b14">[15,</ref><ref type="bibr">16]</ref>.</p><p>In papers [17, 18], a similar approach is used for road object detection in the problem of autonomous navigation. The algorithms [17, 18] also process not all the pixels of each frame, but only horizontal lines spaced at equal intervals. This allows the authors to solve the problem with satisfactory accuracy in real time.</p><p>The paper is organized as follows. Section 2 illustrates the traces of natural events in spatio-temporal slices and outlines the principles for detecting inter-frame video changes with their help. Section 3 describes a simple cut detection method based on these principles. Finally, Section 4 describes the experimental studies.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Object movements at spatio-temporal slices</head><p>Figure <ref type="figure" target="#fig_1">1</ref> shows an example of a spatio-temporal slice of a video obtained by a stationary camera. In this figure, the following events are marked with numbers: (1) movements of the hands of a standing person, (2) movements of a person who emerged from one door and entered another, (3) the appearance and stopping of a car, (4) the appearance of a person from the left border of the frame and their movement. As the figure shows, these events produce smooth curves in the slices that characterize object movements. One can also note local background shifts due to camera deviations or fluctuations of observable objects (trees, advertising signs, etc.). Figure <ref type="figure" target="#fig_0">2</ref> illustrates four typical types of object movements observed in spatio-temporal slices.</p><p>In the case of unauthorized video alteration in the time domain, video slices clearly reveal it as horizontal offsets, as shown in Figure <ref type="figure" target="#fig_2">3</ref>. These offsets can appear over the entire frame width. Furthermore, if we consider a set of slices made at different vertical positions, such offsets will be observed in the same lines of the slices, corresponding to the same time shift. Thus, we assume that the considered video slices contain enough information to detect inter-frame changes.</p></div>
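The slice construction described above can be sketched in a few lines of NumPy. This is our illustration, not the paper's code: the function name and the toy video are assumptions, and a real implementation would read frames from a video decoder, but the indexing is the same: one fixed image row per frame, stacked along the time axis.

```python
import numpy as np

def spatio_temporal_slice(frames, row):
    """Build a spatio-temporal slice: the `row`-th line of every frame,
    stacked so that the k-th slice row comes from the k-th frame."""
    return np.stack([np.asarray(f)[row, :] for f in frames], axis=0)

# Toy "video": 5 grayscale frames of size 4x6.
rng = np.random.default_rng(0)
video = [rng.integers(0, 256, size=(4, 6)) for _ in range(5)]

sl = spatio_temporal_slice(video, row=2)
print(sl.shape)  # (5, 6): T rows (time) by W columns (frame width)
```

A handful of such slices at several vertical positions reduces a T x H x W video cube to a few T x W images, which is what makes the subsequent analysis cheap.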
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Cut detection method</head><p>As noted in Section 2, the evidence for unnatural inter-frame changes is sharp horizontal shifts in the slice image (see Figure <ref type="figure" target="#fig_2">3</ref>). To detect them, we can estimate the displacement of each row relative to the previous one. Formally speaking, we aim to find the shift δ_k of the (k+1)-th row relative to the k-th row that provides the smallest error between neighboring samples:</p><formula xml:id="formula_0">δ_k = arg min_δ ε_k(δ),<label>(1)</label></formula><p>where δ is an integer argument characterizing the line shift, and ε_k(δ) is the error function estimated from the equation:</p><formula xml:id="formula_1">ε_k(δ) = Σ_{j = max(0, −δ)}^{min(W−1−δ, W−1)} (I(k, j) − I(k+1, j + δ))²,<label>(2)</label></formula><p>In (<ref type="formula" target="#formula_1">2</ref>), I(k, j) is the grayscale value of the slice image at the k-th row and j-th column, and W is the frame width. We can numerically solve problem (1)-(<ref type="formula" target="#formula_1">2</ref>) using the correlation approach.</p><p>Next, we can use the obtained δ_k and ε_k(δ_k) (for simplicity, we denote the latter as ε_k, i.e. without the argument) to detect artificial changes. Our detection algorithm classified slice image rows into two classes: "Cut" and "Non-cut". 
To train the algorithm, we used the following features of the rows:</p><formula xml:id="formula_2">p_1 = δ_k,<label>(3)</label></formula><formula xml:id="formula_3">p_2 = ε_k,<label>(4)</label></formula><formula xml:id="formula_4">p_3 = min(δ_k − δ_{k−1}, δ_k − δ_{k+1}),<label>(5)</label></formula><formula xml:id="formula_5">p_4 = min(ε_k − ε_{k−1}, ε_k − ε_{k+1}),<label>(6)</label></formula><formula xml:id="formula_6">p_5 = δ_k − med(δ_k),<label>(7)</label></formula><formula xml:id="formula_7">p_6 = ε_k − med(ε_k).<label>(8)</label></formula><p>We should especially note that we calculated the features (<ref type="formula" target="#formula_2">3</ref>)-(<ref type="formula" target="#formula_7">8</ref>) at the local maxima only. The function med(x) in equations (<ref type="formula" target="#formula_6">7</ref>)-(<ref type="formula" target="#formula_7">8</ref>) denotes the median of the 4-point neighborhood of sample x, excluding x itself.</p><p>To speed up the algorithm, we classified only the rows corresponding to the local maxima of ε_k.</p></div>
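A direct way to solve problem (1)-(2) is a brute-force search over integer shifts. The sketch below is ours, not the paper's implementation (the paper solves the problem via the correlation approach; the function name and the per-overlap normalization are our assumptions), but the quantity it minimizes is the summed squared difference of eq. (2) over the overlapping samples.

```python
import numpy as np

def row_shift(row_k, row_k1, max_shift=10):
    """Estimate delta_k: the integer shift of row k+1 relative to row k
    minimizing the error of eq. (2). Returns (delta_k, eps_k).
    The error is divided by the overlap length (our tweak, so that large
    shifts with short overlaps are not unfairly favored)."""
    W = len(row_k)
    best = (0, np.inf)
    for delta in range(-max_shift, max_shift + 1):
        lo, hi = max(0, -delta), min(W, W - delta)  # valid j range of eq. (2)
        diff = row_k[lo:hi].astype(float) - row_k1[lo + delta:hi + delta].astype(float)
        err = np.sum(diff ** 2) / (hi - lo)
        if err < best[1]:
            best = (delta, err)
    return best

# Two windows of one signal, offset by 3 samples: row_k1[j + 3] == row_k[j].
sig = np.random.default_rng(1).normal(size=80)
row_k, row_k1 = sig[10:60], sig[7:57]
delta, eps = row_shift(row_k, row_k1)
print(delta)  # 3 (and eps is zero, since the overlap matches exactly)
```

Running this over every pair of consecutive slice rows yields the sequences δ_k and ε_k from which features (3)-(8) are computed.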
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Experimental investigations</head><p>To test the proposed method, we used two types of video: DVR recordings (made from a moving car) and model records (made mainly by a stationary camera and containing typical pedestrian movements).</p><p>To conduct the experiments, we made spatio-temporal slices of the source videos and then divided them into 100-line fragments. Half of these fragments were obtained from 100 consecutive frames, while the other half were composite and contained a frame cut at the 50th line. The length of the gap in frames was a parameter of the experiment. Then, for each image, we calculated feature sets. 70% of the data obtained was used as a training set, whereas the remaining 30% served as a test sample. During testing, the method did not use information about the position of the gap. The experimental studies were carried out in two stages. The first one was aimed at selecting the most appropriate feature set and classifier model. We considered two models: a linear SVM and a non-linear SVM with a radial basis function kernel. At the second stage, we investigated the algorithm performance for various gap lengths and for different types of video. In addition, at the second stage, we analyzed the efficiency of combining data from different lines of several slices, which was done by summing the corresponding lines.</p><p>The first stage of the experiments was carried out on DVR recordings with a gap length of 30 and without slice summation. The results of this stage in terms of classification accuracy (equal to the fraction of correct classifications) are given in Figure <ref type="figure" target="#fig_5">6</ref>. The diagram in Figure <ref type="figure" target="#fig_5">6</ref> shows that the best accuracy values result from the use of the (p_3, p_4) feature set and the linear SVM. Therefore, these options were further used in the second stage (see results in Table <ref type="table" target="#tab_0">1</ref>). 
The obtained results show that slice summation noticeably improves the classification quality. We may also notice that the algorithm works better on DVR videos, which contain a rapidly changing background. In general, the final results allow us to conclude that the proposed method is able to solve the considered problem, even with the simplified version of the algorithm described in Section 3.</p></div>
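The experimental protocol above (70/30 split, accuracy as the fraction of correct classifications) can be sketched on synthetic data. The feature values below are invented stand-ins, and a minimal centroid-threshold rule replaces the linear SVM (scikit-learn's `SVC` would be the natural real choice) to keep the sketch dependency-free.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins for the (p3, p4) features at local maxima:
# "Non-cut" rows have small shift deviations, "Cut" rows large ones.
non_cut = rng.normal(0.0, 1.0, size=(100, 2))
cut = rng.normal(4.0, 1.0, size=(100, 2))
X = np.vstack([non_cut, cut])
y = np.array([0] * 100 + [1] * 100)

# 70% training / 30% test split, as in the experiments.
idx = rng.permutation(len(X))
n_train = int(0.7 * len(X))
train, test = idx[:n_train], idx[n_train:]

# Minimal linear stand-in for the SVM: threshold the mean feature halfway
# between the class centroids estimated on the training set.
c0 = X[train][y[train] == 0].mean()
c1 = X[train][y[train] == 1].mean()
pred = (X[test].mean(axis=1) > (c0 + c1) / 2).astype(int)

# Classification accuracy: the fraction of correct classifications.
accuracy = (pred == y[test]).mean()
print(accuracy)  # close to 1.0 on this well-separated toy data
```

Slice summation in the paper's setup corresponds to averaging these features across several slices before classification, which suppresses noise in δ_k and ε_k.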
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>In this paper, we have tested an approach based on spatio-temporal video slices for the problem of detecting unauthorized inter-frame changes in video. This method is theoretically capable of solving the problem at high speed, since it processes only part of the video signal and can use fast image processing techniques. To test the efficiency of this approach, we proposed a simple algorithm for detecting inter-frame changes and performed numerical experiments. Our studies showed that the algorithm provides an accuracy of not less than 0.8 and works better for video captured with a moving camera. The results of the studies allow us to conclude that the method of spatio-temporal image slices looks promising, but the algorithm should be significantly improved in terms of accuracy at short gap lengths and for video from a stationary camera.</p><p>[16] Myasnikov V V 2007 Fast algorithm for recursive computation of the convolution of an image with a two-dimensional inseparable polynomial FIR filter Pattern Recognit. Image Anal. 17 </p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 2 .</head><label>2</label><figDesc>Four types of object movement in spatio-temporal slices: (a) an object enters the camera view and goes beyond it, (b) an object appears in the camera view and goes beyond it, (c) an object enters the camera view and disappears, (d) an object appears and disappears within the camera view.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 1 .</head><label>1</label><figDesc>Figure 1. A spatio-temporal slice of a video containing several events.</figDesc><graphic coords="3,82.95,162.50,141.20,351.43" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 .</head><label>3</label><figDesc>Figure 3. Examples of video fragment deletion shown in spatio-temporal slices.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head></head><label></label><figDesc>Low values of δ_k and ε_k indicate unaltered videos, while sharp leaps in either δ_k or ε_k may give evidence of artificial changes (see Fig. 4-5). To detect artificial changes using δ_k and ε_k, we used an algorithm based on supervised learning.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 4. Figure 5.</head><label>45</label><figDesc>Figure 4. Dependence of δ_k (a) and ε_k (b) on k for an unaltered video. Figure 5. Dependence of δ_k (a) and ε_k (b) on k for a video with a cut at frame 130.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 6 .</head><label>6</label><figDesc>Figure 6. Results of the first stage of the experiment: classification model and feature set selection.</figDesc><graphic coords="5,186.50,318.25,236.15,162.70" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head></head><label></label><figDesc>421-427 [17] Kiy K I and Dickmanns E D 2004 A color vision system for real-time analysis of road scenes IEEE Intelligent Vehicles Symposium 54-59 [18] Kiy K I 2015 A New Real-Time Method of Contextual Image Description and Its Application in Robot Navigation and Intelligent Control Computer Vision in Control Systems 2 109-133</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 .</head><label>1</label><figDesc>Results of the second stage of the experiment: accuracy estimation for different video types.</figDesc><table><row><cell>Video Type</cell><cell>Cut Length</cell><cell>Slice Summation</cell><cell>Cut Detection Accuracy</cell></row><row><cell></cell><cell>10</cell><cell>+ -</cell><cell>0.8406 0.8400</cell></row><row><cell>DVR</cell><cell>30</cell><cell>+ -</cell><cell>0.9063 0.8696</cell></row><row><cell></cell><cell>60</cell><cell>+ -</cell><cell>0.9375 0.8732</cell></row><row><cell></cell><cell>10</cell><cell>+ -</cell><cell>0.8732 0.7654</cell></row><row><cell>Model</cell><cell>30</cell><cell>+ -</cell><cell>0.7971 0.7200</cell></row><row><cell></cell><cell>60</cell><cell>+ -</cell><cell>0.8485 0.7308</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>This work was supported by the Russian Foundation for Basic Research (grants 16-29-09494, 16-41-630676), by the Ministry of Education and Science (grant МК-1907.2017.9), and by the Federal Agency for Scientific Organizations (Agreement 007-GZ/43363/26).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><forename type="first">Cnw</forename><surname>Anon</surname></persName>
		</author>
		<author>
			<persName><surname>Sandvine</surname></persName>
		</author>
		<ptr target="http://www.newswire.ca/news-releases/sandvine-over-70-of-north-american-traffic-is-now-streaming-video-and-audio-560769981.html" />
		<title level="m">Over 70% of North American traffic is now streaming video and audio</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<ptr target="http://www.ispreview.co.uk/index.php/2014/06/internet-video-streaming-dominate-mobile-data-traffic-2019.html" />
		<title level="m">Anon Internet Video Streaming to Dominate Mobile Data Traffic by 2019 -ISPreview UK</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<ptr target="http://www.marketsandmarkets.com/Market-Reports/surveillance-277.html" />
		<title level="m">Anon Video Surveillance Market by Applications &amp; Management Services 2015 MarketsandMarkets</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">Forensics Investigations of Multimedia Data: A Review of the State-of-the-Art, Sixth International Conference on IT Security Incident Management and IT Forensics</title>
		<author>
			<persName><forename type="first">R</forename><surname>Poisel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Tjoa</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="48" to="61" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Vision of the Unseen: Current Trends and Challenges in Digital Image and Video Forensics</title>
		<author>
			<persName><forename type="first">A</forename><surname>Rocha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Scheirer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Boult</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Goldenstein</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Comput. Surv</title>
		<imprint>
			<biblScope unit="volume">43</biblScope>
			<biblScope unit="page">42</biblScope>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Hyperspectral remote sensing data compression and protection</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">V</forename><surname>Gashnikov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">I</forename><surname>Glumov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kuznetsov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">A</forename><surname>Mitekin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Myasnikov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V V</forename><surname>Sergeev</surname></persName>
		</author>
		<idno type="DOI">10.18287/2412-6179-2016-40-5-689-712</idno>
	</analytic>
	<monogr>
		<title level="j">Computer Optics</title>
		<imprint>
			<biblScope unit="volume">40</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="689" to="712" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">An overview on video forensics</title>
		<author>
			<persName><forename type="first">P</forename><surname>Bestagini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Fontani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Milani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Barni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Piva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Tagliasacchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Tubaro</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 20th European Signal Processing Conference (EUSIPCO)</title>
				<meeting>the 20th European Signal Processing Conference (EUSIPCO)</meeting>
		<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="1229" to="1233" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Detection of frame deletion for digital video forensics</title>
		<author>
			<persName><forename type="first">T</forename><surname>Shanableh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Digital Investigation</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="350" to="360" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Temporal Forensics and Anti-Forensics for Motion Compensated Video</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">C</forename><surname>Stamm</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">S</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">J R</forename><surname>Liu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Information Forensics and Security</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page" from="1315" to="1329" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">A video forensic technique for detecting frame deletion and insertion</title>
		<author>
			<persName><forename type="first">A</forename><surname>Gironi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Fontani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Bianchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Piva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Barni</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</title>
				<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="6226" to="6230" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Exposing video inter-frame forgery based on velocity field consistency</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Jiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Wang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</title>
				<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="2674" to="2678" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">A Novel Video Inter-frame Forgery Model Detection Scheme Based on Optical Flow Consistency, The International Workshop on Digital Forensics and Watermarking</title>
		<author>
			<persName><forename type="first">J</forename><surname>Chao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Jiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Sun</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="267" to="281" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Exposing Digital Forgeries in Video by Detecting Double MPEG Compression</title>
		<author>
			<persName><forename type="first">W</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Farid</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 8th Workshop on Multimedia and Security</title>
				<meeting>the 8th Workshop on Multimedia and Security<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2006">2006</date>
			<biblScope unit="page" from="37" to="47" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Exposing Digital Video Forgery by Ghost Shadow Artifact</title>
		<author>
			<persName><forename type="first">J</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Su</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Zhang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the First ACM Workshop on Multimedia in Forensics</title>
				<imprint>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="49" to="54" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Parallel-recursive local image processing and polynomial bases</title>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">I</forename><surname>Glumov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Myasnikov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V V</forename><surname>Sergeyev</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of Third International Conference on Electronics, Circuits, and Systems</title>
				<meeting>Third International Conference on Electronics, Circuits, and Systems</meeting>
		<imprint>
			<date type="published" when="1996">1996</date>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="696" to="699" />
		</imprint>
	</monogr>
</biblStruct>


				</listBibl>
			</div>
		</back>
	</text>
</TEI>
