<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Cutting edge video analytics solutions: from the research to the market</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Mattia</forename><surname>Marseglia</surname></persName>
							<email>mattia.marseglia@aitech.vision</email>
							<affiliation key="aff0">
								<orgName type="department">A.I. Tech srl - www.aitech.vision</orgName>
								<address>
									<addrLine>Piazza Vittorio Emanuele 10, Penta (SA)</addrLine>
									<postCode>84123</postCode>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Domenico</forename><surname>Rocco</surname></persName>
							<email>domenico.rocco@aitech.vision</email>
							<affiliation key="aff0">
								<orgName type="department">A.I. Tech srl - www.aitech.vision</orgName>
								<address>
									<addrLine>Piazza Vittorio Emanuele 10, Penta (SA)</addrLine>
									<postCode>84123</postCode>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Stefano</forename><surname>Saldutti</surname></persName>
							<email>stefano.saldutti@aitech.vision</email>
							<affiliation key="aff0">
								<orgName type="department">A.I. Tech srl - www.aitech.vision</orgName>
								<address>
									<addrLine>Piazza Vittorio Emanuele 10, Penta (SA)</addrLine>
									<postCode>84123</postCode>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Bruno</forename><surname>Vento</surname></persName>
							<email>br1.vento@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="department">A.I. Tech srl - www.aitech.vision</orgName>
								<address>
									<addrLine>Piazza Vittorio Emanuele 10, Penta (SA)</addrLine>
									<postCode>84123</postCode>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Cutting edge video analytics solutions: from the research to the market</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">536721E86492F505D7E38875C8B8D71A</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:55+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>A.I. Tech, born as a spinoff company of the University of Salerno, designs and develops cutting edge video analytics solutions based on deep learning, able to run on board smart cameras and/or on devices with limited resources. A.I. Tech solutions are designed to serve various vertical markets: retail, business intelligence, security and safety, smart parking, smart city and smart roads. In this paper we present all these solutions, the products of years of research transferred to the market.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Company presentation</head><p>A.I. Tech designs and develops cutting edge video analytics solutions based on the most advanced artificial intelligence and deep learning algorithms, running also directly on board smart cameras, and therefore optimized for low-performance hardware. A.I. Tech boasts partnerships with world leaders in their reference fields, including (the list is not exhaustive) NVIDIA, Panasonic, Samsung, Hanwha Techwin, Mobotix, Axis, Hikvision and Dahua. In particular, Hanwha Techwin, Panasonic and Mobotix resell the video analytics solutions from A.I. Tech on a global scale. In 2017 A.I. Tech was selected among the Top25 international companies in the field of Artificial Intelligence by CIO Applications Magazine. In 2018 it entered the Top10 Most Innovative AI Solution Providers. Its technology was selected among the finalists of the Benchmark Innovation Award in 2018, 2019, 2020, 2021 and 2022. In 2018 it won the award in the Business Intelligence category with the AI-RETAIL video analytics solution. In 2020 A.I. Tech won the Corporate LiveWire award in the "Most Innovative in Video Analytics" category. In the same year its solutions were finalists in the Security and Fire Excellence Award, with the AI-CROWD-DEEP product (in the Security Software Product Innovation of the Year category) and with the WOW project (in the Security Project of the Year category). The AI-TRAFFIC solution for traffic monitoring is also the winner of the IoMOBILITY AWARD 2020, in the Mobility Analytics category. Corporate LiveWire awarded A.I. Tech the "Innovation &amp; Excellence Awards" for the year 2022, renewing the award also for the year 2023 and recognizing the company as the most innovative in the field of "AI Technology".</p><p>The activities that A.I. Tech carries out, with a highly technological and scientific content, require specialized skills in the fields of Artificial Intelligence, Computer Vision and Embedded Systems. For this reason, the company has a very close collaboration with the Department of Information and Electrical Engineering and Applied Mathematics (DIEM) of the University of Salerno. In particular, there is also an agreement for the activation of company internships, as well as scientific collaborations for the coming years. These activities allow the transfer of the scientific expertise of the DIEM research group in the fields of Computer Vision and Artificial Intelligence, a technology transfer of research products that takes the form of a series of cutting edge artificial intelligence products, commercially available at an international level.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Overview of the solutions</head><p>Most of the deep learning based systems available nowadays in the market are built on top of off-the-shelf detectors. However, designing software solutions engineered to be as accurate as the state of the art without the computational burden typically required by deep neural networks is definitely more challenging. Realizing computationally inexpensive solutions is a mandatory requirement in several real-world applications where the system is expected to process hundreds of video streams simultaneously in real time while keeping costs affordable; smart cities are a noteworthy example. Moreover, in different contexts the processing is required to be performed on the edge due to environmental constraints, so the video analytics application has to run on board smart cameras <ref type="bibr" target="#b0">[1]</ref>, with very limited hardware resources.</p><p>Within this context, a common design choice of all the A.I. Tech applications is to preserve accuracy comparable with state-of-the-art detectors and classifiers based on heavy neural networks, while achieving the lowest hardware requirements together with the highest processing throughput. Thanks to this, A.I. Tech plugins are able to run directly on board a large number of different smart cameras that provide open platforms to selected partners (in particular, on board specific models from the following camera manufacturers: Androvideo, Axis, Bosch, Dahua, Hanwha Techwin, Hikvision, Mobotix, Panasonic, Topview, Vivotek). This confirms A.I. Tech as the video analytics vendor supporting the highest number of camera platforms worldwide.</p></div>
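The real-time constraint described above can be made concrete with a back-of-the-envelope throughput budget. The numbers and the helper below are purely illustrative assumptions, not A.I. Tech benchmarks: they only show why per-frame inference cost, rather than accuracy alone, determines how many streams a single device can serve.

```python
# Illustrative capacity estimate (hypothetical numbers, not vendor benchmarks):
# how many real-time streams one device can sustain given per-frame cost.
def max_streams(infer_ms_per_frame: float, fps: float) -> int:
    """Streams sustainable in real time if inference is the bottleneck."""
    budget_ms = 1000.0 / fps  # time budget per frame, per stream
    return int(budget_ms / infer_ms_per_frame)

# A heavy server-side detector vs. a lightweight edge-optimized one at 10 fps:
heavy = max_streams(infer_ms_per_frame=80.0, fps=10.0)  # 1 stream
light = max_streams(infer_ms_per_frame=8.0, fps=10.0)   # 12 streams
```

Under these assumed figures, a tenfold reduction in per-frame inference time translates directly into a tenfold increase in the number of streams per device, which is the economic argument for edge-optimized models.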
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Video analytics products</head><p>In this section we describe the video analytics solutions currently available on the market.</p><p>AI-BIO<ref type="foot" target="#foot_1">1</ref> performs face analysis with the purpose of extracting soft-biometric features such as age, gender and emotion <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b2">3,</ref><ref type="bibr" target="#b3">4]</ref>. The application has a multitask architecture based on multiple deep neural networks engineered to be executed on board embedded platforms and smart cameras. It can be used both for business intelligence and for digital signage applications <ref type="bibr" target="#b4">[5]</ref>. In the latter case, the aim is to personalize the advertisement contents shown on a monitor by taking into account the soft-biometric features extracted from the face of the person watching the monitor. An example is shown in Figure <ref type="figure" target="#fig_0">1a</ref>.</p><p>AI-CROWDCOUNTING<ref type="foot" target="#foot_2">2</ref> is a video analytics application tailored to estimate, for statistical or alerting purposes, the crowd density within specific, very crowded areas of interest. Powered by a deep learning model and boosted by a distinctive training strategy <ref type="bibr" target="#b5">[6]</ref>, the system is able to detect not only people fully visible in the scene, but also those that are heavily occluded, thanks to a point-based head detection algorithm. This makes the application particularly suited for very crowded environments, such as stadiums, concerts or trade fairs. Figure <ref type="figure" target="#fig_0">1b</ref> shows an example of the solution in action.</p><p>AI-CROWD-DEEP <ref type="foot" target="#foot_3">3</ref> is the video analytics solution for people monitoring. 
Thanks to the combination of a proprietary deep learning based detector, a multi-object tracker <ref type="bibr" target="#b6">[7]</ref> and a calibration mechanism, it is capable of: (i) estimating the number of people inside an area; (ii) generating an alarm in case of overcrowding or gathering; (iii) generating an alarm if two or more persons do not respect the social distance for a given amount of time; (iv) counting people that cross virtual lines; (v) counting the number of pedestrians leaving one area and arriving in another, building the origin-destination matrix. An example of the solution in action is shown in Figure <ref type="figure" target="#fig_0">1c</ref>.</p><p>AI-FIREPLUS <ref type="foot" target="#foot_4">4</ref> is the solution focused on the early detection of fires. It combines the analysis of movement and appearance with a deep neural network to detect the presence of flames or smoke within an area under monitoring <ref type="bibr" target="#b7">[8]</ref>, and it can operate in both indoor and outdoor environments. The main benefit of this application is that it does not require thermal or thermographic sensors, but only traditional optical ones. An example is shown in Figure <ref type="figure" target="#fig_0">1d</ref>.</p><p>AI-INTRUSION <ref type="foot" target="#foot_5">5</ref> is the video analytics solution for the detection of intruders (people or vehicles). It is capable of detecting: (i) intrusions or loitering within an area of interest framed by the camera; (ii) the crossing of a virtual line; (iii) the crossing of multiple lines (not necessarily parallel) in sequence. In addition to the size and the aspect ratio of the objects, it uses a deep neural network to filter them according to their class. 
An example is reported in Figure <ref type="figure" target="#fig_0">1e</ref>.</p><p>AI-LOST <ref type="foot" target="#foot_6">6</ref> is the video analysis application designed to detect removed or abandoned objects in restricted environments where constant surveillance cannot be guaranteed <ref type="bibr" target="#b8">[9]</ref>. The application can use a deep neural network to recognize garbage or, alternatively, baggage. An example is reported in Figure <ref type="figure" target="#fig_0">1f</ref>.</p><p>AI-LPR is the solution for license plate detection and recognition. Unlike other products available in the market, it is fully based on deep learning for both plate detection and character recognition. An example of the product is shown in Figure <ref type="figure" target="#fig_0">1g</ref>.</p><p>AI-PARKING <ref type="foot" target="#foot_7">7</ref> is designed to monitor both indoor and outdoor parking lots, so as to verify whether a parking spot is free or occupied. Unlike other solutions based on vehicle detection, this is a very effective application that requires only a part of the vehicle to be visible in order to monitor a spot. An example of AI-PARKING in action is available in Figure <ref type="figure" target="#fig_0">1h</ref>.</p><p>AI-PEOPLE-DEEP <ref type="foot" target="#foot_8">8</ref> is the solution that exploits a deep neural network to count the people framed by a camera positioned in zenithal view. Inspired by <ref type="bibr" target="#b9">[10]</ref>, the application is designed to work both indoors and outdoors, where it is possible to ensure controlled illumination conditions. An example is reported in Figure <ref type="figure" target="#fig_0">1i</ref>.</p><p>AI-PPE <ref type="foot" target="#foot_9">9</ref> is designed to detect people wearing personal protective equipment (PPE). The application is based on the architecture described in <ref type="bibr" target="#b10">[11]</ref>. 
The PPE combinations that the application is able to detect are: "Helmet", "Vest" and "Helmet and Vest". This solution can be used both in access control systems and for the surveillance of construction sites or places where works are in progress. In the first case, the product is meant to verify that a worker is wearing the specified PPE, in order to authorize him to enter a work area. In the second case, the product can be used for the continuous monitoring of a work area, with the aim of verifying that workers are wearing all the required PPE. An example of the product is reported in Figure <ref type="figure" target="#fig_0">1j</ref>.</p><p>AI-RAIL<ref type="foot" target="#foot_10">10</ref> is a video analysis application designed to enhance railway safety. It combines traditional computer vision techniques with deep neural networks to identify and analyze the behavior of vehicles, pedestrians and obstacles within sensitive areas such as level crossings or along railway lines. The analysis can be activated depending on the barrier status, which can be obtained either from an external signal or through neural networks integrated into the system. An example is shown in Figure <ref type="figure" target="#fig_0">1k</ref>.</p><p>AI-SPILL <ref type="foot" target="#foot_11">11</ref> is designed to monitor a person walking in an unsupervised area and to raise an alarm if the person falls. The analysis is performed using a mathematical model of the behavior of a person moving in the scenario of interest, especially the walking and falling dynamics. An advanced neural network, trained with thousands of samples of fallen people and optimized for running on board the camera, is then used to confirm the initial outcome of that model. 
An example is reported in Figure <ref type="figure" target="#fig_0">1l</ref>.</p><p>AI-TRAFFIC-DEEP <ref type="foot" target="#foot_12">12</ref> is the video analysis solution for road monitoring, for both statistical and alerting purposes. Technically speaking, the application is based on a deep learning based vehicle and people detector <ref type="bibr" target="#b11">[12]</ref>, followed by a multi-object tracking module <ref type="bibr" target="#b6">[7]</ref> and an advanced 3D scene reconstruction stage. It is capable of: (i) counting and classifying vehicles among cars, motorcycles and trucks; (ii) estimating the average speed and the color of each detected vehicle; (iii) evaluating the density of vehicles on a road branch and raising an alarm if congestion is detected; (iv) detecting vehicles travelling in the wrong direction or stopped in forbidden areas; (v) detecting the presence of pedestrians on the road; (vi) counting the number of vehicles and pedestrians leaving one area and arriving in another, building the origin-destination matrix; (vii) detecting lane changes and abnormal maneuvers (such as U-turns in prohibited areas) made by vehicles, based on the crossing of a set of user-configured virtual lines. An example is reported in Figure <ref type="figure" target="#fig_0">1m</ref>.</p><p>AI-VIOLATION 13 is a vertical solution able to detect traffic light violations (see Fig. <ref type="figure" target="#fig_0">1n</ref>), namely the presence of vehicles crossing the stopping line while the traffic light is red. It is based on the above mentioned vehicle detector and on a classifier that allows surveillance cameras (which are commonly installed over the city) to read the traffic light status without the need to install external devices. The state of a traffic light includes the color of the active traffic light circle and whether it is blinking or not. 
In particular, the application can identify vehicles crossing the stop line while the traffic light status is red and send a notification to report the violation. This notification also contains information about the vehicle, such as the type (among motorcycle, bicycle, car and truck), the estimated average speed and all the information necessary to decide whether the legal conditions for a fine are met.</p><p>AI-WEATHER 14 is an innovative application that uses deep neural networks to monitor weather and road conditions. The application can recognize a wide range of weather states, including sunny, cloudy, rainy, snowy and foggy, as well as road surface conditions, which can vary among dry, non-dry and flooded. It is designed to operate effectively in outdoor environments and requires visibility of both the road surface and the sky at the same time (see Fig. <ref type="figure" target="#fig_0">1o</ref>). AI-WEATHER offers a variety of useful alerts to users, including periodic updates on weather and road conditions, as well as instant notifications when the status of one of the sensors changes.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Some examples of A.I. Tech video analytics plugins in action. Fig. 1a AI-BIO: for each person, the rectangle around the face is shown in pink or in blue, depending on the gender of the person; moreover, the figure shows the soft-biometric features extracted by the software: the emotion and the age. Fig. 1b AI-CROWDCOUNTING: for each detected person, the application draws a red point, showing in real time the number of people present in the region of interest. Fig. 1c AI-CROWD-DEEP: the yellow area highlights the region where the analysis is performed. The dotted white-red bounding box emphasizes a cluster of people that are not respecting the social distances. 
Fig. 1d AI-FIREPLUS: in green the area of interest, while the red box calls attention to the detected flame; in the black grid, the detected smoke is highlighted in red. Fig. 1e AI-INTRUSION: the intrusion area is the red polygon and the multiple crossing lines are the numbered red lines below. In the example, a person has been detected in the intrusion area. The P at the top left of the bounding box indicates that the object is a person (rather than V for vehicle). Fig. 1f AI-LOST: the area of interest is the polygon in blue. The red bounding box with the G string indicates that the detected object is garbage (as opposed to B for baggage). Fig. 1g AI-LPR: in green the license plate numbers recognized by the application. Fig. 1h AI-PARKING: the red boxes highlight occupied spots, while the green boxes highlight free ones. Fig. 1i AI-PEOPLE-DEEP: a red bounding box is drawn when a person crosses the virtual line. Fig. 1j AI-PPE: for each detected person, the application draws a bounding box and a string indicating the recognized equipment (W for no PPE, WH for only helmet, WV for only vest and WHV for both helmet and vest). Fig. 1k AI-RAIL: a red bounding box is drawn around detected objects if they are within a restricted area when the barrier blocks the road. Fig. 1l AI-SPILL: a red bounding box is drawn around a person fallen within the area of interest. Fig. 1m AI-TRAFFIC-DEEP: the area of interest where the evaluation is performed is in violet. A three-dimensional bounding box is associated with each vehicle, together with the three dimensions of each object (width, length, height), expressed in meters; the speed (s), expressed in km/h; and the category of the vehicle (Car in the example). Fig. 1n AI-VIOLATION: the status of the traffic light is shown in the box on the side (green in the example), the area where the analysis is performed is in violet, and the application allows drawing the limit of the stopping line (red line). 
Fig. 1o AI-WEATHER: a sensor is placed near the road for monitoring, and another sensor covering the entire image is used to classify the weather conditions. Once the observation time within the sensors has elapsed, the classification outputs are displayed.</figDesc><graphic coords="3,100.22,400.16,124.99,62.20" type="bitmap" /></figure>
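Several of the solutions above (AI-CROWD-DEEP, AI-PEOPLE-DEEP, AI-TRAFFIC-DEEP, AI-VIOLATION) count tracked objects that cross user-configured virtual lines. The paper does not disclose how this is implemented; a minimal geometric sketch of the idea, with all names and conventions being our assumptions rather than A.I. Tech code, is to test whether the segment joining an object's previous and current tracked positions intersects the virtual line, using the sign of a cross product to recover the crossing direction.

```python
# Hypothetical sketch of virtual-line crossing detection (not A.I. Tech's
# implementation): a crossing occurs when the step between two consecutive
# tracked positions intersects the configured line segment.

def side(line, p):
    """Sign of the cross product: which side of the line point p lies on."""
    (ax, ay), (bx, by) = line
    return (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax)

def crossed(line, prev_pos, cur_pos):
    """Return +1/-1 if the step prev_pos to cur_pos crosses the line, else 0.
    The sign encodes the crossing direction."""
    s0, s1 = side(line, prev_pos), side(line, cur_pos)
    if s0 == 0 or s1 == 0 or (s0 > 0) == (s1 > 0):
        return 0  # same side (or merely touching): no full crossing
    # the line's endpoints must lie on opposite sides of the trajectory step,
    # otherwise the object passed beyond the ends of the virtual line
    (ax, ay), (bx, by) = line
    t0 = side((prev_pos, cur_pos), (ax, ay))
    t1 = side((prev_pos, cur_pos), (bx, by))
    if (t0 > 0) == (t1 > 0):
        return 0
    return 1 if s1 > 0 else -1

# Usage: feed per-frame tracked positions and accumulate directional counts.
line = ((0.0, 5.0), (10.0, 5.0))               # horizontal virtual line
track = [(4.0, 2.0), (4.5, 4.0), (5.0, 6.0)]   # one object's trajectory
count = sum(crossed(line, a, b) for a, b in zip(track, track[1:]))
```

The same per-track events, keyed by the zone an object leaves and the zone it enters, are what an origin-destination matrix accumulates.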
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" xml:id="foot_0">ceur-ws.org ISSN 1613-0073</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_1">https://www.youtube.com/watch?v=awze1fHoQEE</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_2">https://youtu.be/h0qDXkZkObU?si=Su6gStufv9NbUrK9</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_3">https://www.youtube.com/watch?v=BiCyon1KZco</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_4">https://www.youtube.com/watch?v=U1SwnESua0g</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_5">https://www.youtube.com/watch?v=3kUUOcofVow</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_6">https://www.youtube.com/watch?v=gq24PrW6UwQ</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="7" xml:id="foot_7">https://www.youtube.com/watch?v=VDQ82Di4fZs</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="8" xml:id="foot_8">https://www.youtube.com/watch?v=x6N5g4Fs6_U</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="9" xml:id="foot_9">https://www.youtube.com/watch?v=-fz25HYcFLo</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="10" xml:id="foot_10">https://youtu.be/cDh1epks3x0?si=TCZlm8QJOG_FJ6bk</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="11" xml:id="foot_11">https://www.youtube.com/watch?v=pCFBnWC8uPQ</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="12" xml:id="foot_12">https://www.youtube.com/watch?v=6yQS6n_nTcI</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">An effective real time gender recognition system for smart cameras</title>
		<author>
			<persName><forename type="first">V</forename><surname>Carletti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Greco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Saggese</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Vento</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. Ambient Intell. Humaniz. Comput</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page" from="2407" to="2419" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">A convolutional neural network for gender recognition optimizing the accuracy/speed tradeoff</title>
		<author>
			<persName><forename type="first">A</forename><surname>Greco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Saggese</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Vento</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Vigilante</surname></persName>
		</author>
		<idno type="DOI">10.1109/ACCESS.2020.3008793</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="130771" to="130781" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Gender recognition in the wild: a robustness evaluation over corrupted images</title>
		<author>
			<persName><forename type="first">A</forename><surname>Greco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Saggese</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Vento</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Vigilante</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="volume">12</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Effective training of convolutional neural networks for age estimation based on knowledge distillation</title>
		<author>
			<persName><forename type="first">A</forename><surname>Greco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Saggese</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Vento</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Vigilante</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neural Comput. Appl</title>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Digital signage by real-time gender recognition from face images</title>
		<author>
			<persName><forename type="first">A</forename><surname>Greco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Saggese</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Vento</surname></persName>
		</author>
		<ptr target="https://www.youtube.com/watch?v=_gn-odtuWJo" />
		<imprint>
			<date type="published" when="2020">IoT, 2020</date>
			<biblScope unit="page" from="309" to="313" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Highly crowd detection and counting based on curriculum learning</title>
		<author>
			<persName><forename type="first">L</forename><surname>Fotia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Percannella</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Saggese</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Vento</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Computer Analysis of Images and Patterns</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="13" to="22" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Real-time tracking of single people and groups simultaneously by contextual graph-based reasoning dealing complex occlusions</title>
		<author>
			<persName><forename type="first">P</forename><surname>Foggia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Percannella</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Saggese</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Vento</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (PETS)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Real-time fire detection for video-surveillance applications using a combination of experts based on color, shape, and motion</title>
		<author>
			<persName><forename type="first">P</forename><surname>Foggia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Saggese</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Vento</surname></persName>
		</author>
		<idno type="DOI">10.1109/TCSVT.2015.2392531</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Circuits and Systems for Video Technology</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="page" from="1545" to="1556" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">A method for detecting long term left baggage based on heat map</title>
		<author>
			<persName><forename type="first">P</forename><surname>Foggia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Greco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Saggese</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Vento</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">VISAPP</title>
		<imprint>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="385" to="391" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">A robust and efficient overhead people counting system for retail applications</title>
		<author>
			<persName><forename type="first">A</forename><surname>Greco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Saggese</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Vento</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Image Analysis and Processing</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="139" to="150" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Fast and effective detection of personal protective equipment on smart cameras</title>
		<author>
			<persName><forename type="first">A</forename><surname>Greco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Saldutti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Vento</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Pattern Recognition</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="95" to="108" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Vehicles detection for smart roads applications on board of smart cameras: A comparative analysis</title>
		<author>
			<persName><forename type="first">A</forename><surname>Greco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Saggese</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Vento</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Vigilante</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Trans. Intell. Transp. Syst</title>
		<imprint>
			<biblScope unit="page" from="1" to="13" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
