<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">YOLOv5-based Object Detection for Construction Site Efficiency: Equipment, Tool, and Vehicle Recognition</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Serhii</forename><surname>Dolhopolov</surname></persName>
							<email>dolhopolov@icloud.com</email>
							<affiliation key="aff0">
								<orgName type="institution">Kyiv National University of Construction and Architecture</orgName>
								<address>
									<addrLine>31, Air Force Avenue</addrLine>
									<postCode>03037</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Tetyana</forename><surname>Honcharenko</surname></persName>
							<email>goncharenko.ta@knuba.edu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Kyiv National University of Construction and Architecture</orgName>
								<address>
									<addrLine>31, Air Force Avenue</addrLine>
									<postCode>03037</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Denys</forename><surname>Chernyshev</surname></persName>
							<email>chernyshev.do@knuba.edu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Kyiv National University of Construction and Architecture</orgName>
								<address>
									<addrLine>31, Air Force Avenue</addrLine>
									<postCode>03037</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Olga</forename><surname>Solovei</surname></persName>
							<email>solovey.ol@knuba.edu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Kyiv National University of Construction and Architecture</orgName>
								<address>
									<addrLine>31, Air Force Avenue</addrLine>
									<postCode>03037</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">YOLOv5-based Object Detection for Construction Site Efficiency: Equipment, Tool, and Vehicle Recognition</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">79696E8916D9C9A09A2B91FFF807BFC7</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:41+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Construction site</term>
					<term>YOLOv5</term>
					<term>recognition systems</term>
					<term>real-time object classification</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The integration of YOLOv5-based object detection into construction site management has emerged as a transformative approach to enhancing efficiency and safety. This study aimed to develop a model capable of real-time identification and tracking of construction resources, equipment, and vehicles using CCTV footage. By leveraging the power of computer vision and deep learning, the model facilitates optimized resource allocation, equipment utilization, and improved safety measures through the precise monitoring of tools, machinery, and vehicle movements. Utilizing a bespoke dataset, the YOLOv5 model underwent rigorous training, validation, and testing phases. The model was trained for 30 epochs with a dataset comprising 1,897 images of construction equipment, tools, and vehicles, achieving a final precision of 0.852, recall of 0.723, and mean Average Precision (mAP_0.5) of 0.792. These results underscore the model's high accuracy in detecting and classifying various construction-related objects, thereby demonstrating its potential to significantly enhance operational efficiency and safety on construction sites.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>In recent years, the construction industry has witnessed a significant transformation, driven by technological advancements aimed at enhancing site efficiency and safety. The integration of artificial intelligence (AI) and machine learning (ML) technologies into construction operations has emerged as a pivotal strategy for addressing the perennial challenges of resource and equipment management. Among these technologies, the YOLOv5-based object detection model stands out for its potential to revolutionize the way construction sites operate, particularly in terms of equipment utilization, tool tracking, and vehicle recognition.</p><p>The global construction sector has long grappled with issues related to safety management, with accidents on construction sites posing serious risks to workers and project timelines. Recent studies have applied YOLO-based detection to challenges such as the detection of workers in complex terrains and the identification of safety hazards. Such research exemplifies the versatility of YOLOv5-based models in adapting to diverse construction environments and safety requirements.</p><p>The deployment of YOLOv5-based object detection systems on construction sites facilitates a proactive approach to safety management and resource allocation. By enabling the real-time detection and classification of construction assets and potential hazards, these systems empower site managers to make informed decisions that enhance safety and operational efficiency. Moreover, the continuous improvement and customization of YOLOv5 models, as demonstrated by ongoing research, ensure their relevance and effectiveness in meeting the evolving needs of the construction industry.</p><p>The advent of YOLOv5-based object detection models has not only promised enhancements in construction site safety and efficiency but also opened new avenues for research and development within the construction industry. 
The ability of these models to accurately detect, classify, and track resources and equipment in real-time presents a significant leap forward in managing the dynamic and often hazardous environment of construction sites.</p><p>The application of YOLOv5 extends to the meticulous tracking of construction equipment and tools, a critical aspect for ensuring project timelines are met and reducing idle times. <ref type="bibr">Yang et al.</ref> showcased the effectiveness of YOLOv5 in monitoring compliance with safety protocols, such as the wearing of helmets and masks, by construction workers <ref type="bibr" target="#b4">[5]</ref>. Their work not only demonstrates the model's high accuracy and efficiency in real-world scenarios but also its potential to significantly reduce the risk of accidents and enhance overall site safety.</p><p>Moreover, the customization and improvement of YOLOv5 models to suit specific construction site conditions have been a focus of recent studies. Zeng et al. introduced an enhanced YOLOv3 model for equipment detection and localization, which, while predating YOLOv5, underscores the continuous evolution and refinement of YOLO architectures for construction site applications <ref type="bibr" target="#b5">[6]</ref>. Their research highlights the importance of adapting object detection models to the unique challenges posed by construction sites, such as the detection of small or occluded objects and the need for real-time processing.</p><p>The integration of YOLOv5-based object detection into construction site management systems represents a significant step towards automating safety and resource management processes. By providing site managers with real-time data on equipment location, usage, and worker safety compliance, these systems enable more informed decision-making, ultimately leading to improved project efficiency and reduced costs. 
Furthermore, the ongoing development and customization of YOLOv5 models ensure that these systems remain adaptable to the ever-changing landscape of construction site management.</p><p>The precision and efficiency of YOLOv5 in object detection have significant implications for the management of construction resources. By automating the tracking of tools and equipment, YOLOv5 models minimize the likelihood of loss and misplacement, thereby ensuring that resources are optimally utilized and readily available when needed. This capability is crucial for maintaining project schedules and reducing downtime. For instance, the work of Wan et al. on utilizing YOLOv5 for object detection in high-resolution optical remote sensing images, though focused on a different application, underscores the model's robustness and adaptability in detecting objects across various scales and conditions <ref type="bibr" target="#b6">[7]</ref>. Such attributes are invaluable in the complex and ever-changing environment of construction sites.</p><p>Moreover, the application of YOLOv5 extends to enhancing safety measures on construction sites. Through real-time monitoring and detection of safety gear compliance, such as helmets and vests, YOLOv5 models play a pivotal role in preventing accidents and ensuring the wellbeing of construction workers. The research by Zhou et al. on the detection of construction waste using an improved YOLOv5 model illustrates the model's versatility and high performance in identifying specific objects within cluttered scenes <ref type="bibr" target="#b1">[2]</ref>. This capability is directly applicable to safety monitoring on construction sites, where the ability to accurately detect personal protective equipment (PPE) amidst the site's activity can significantly impact overall safety outcomes.</p><p>The ongoing development and customization of YOLOv5 models for construction site management underscore the potential for further innovations in this field. 
As researchers and practitioners continue to explore new applications and enhancements of YOLOv5 technology, the construction industry stands on the cusp of a new era of digital transformation. This transformation is characterized by increased automation, improved safety protocols, and enhanced resource management, all of which contribute to the overall efficiency and success of construction projects.</p><p>As the construction industry continues to evolve, the integration of cutting-edge technologies like YOLOv5-based object detection into resource and equipment management practices has become increasingly vital. This technology's capacity to enhance construction site efficiency and safety through advanced equipment utilization, precise tool tracking, and accurate vehicle recognition marks a significant leap forward in the sector's operational capabilities.</p><p>The adaptability and efficiency of YOLOv5 in various construction site scenarios have been demonstrated through numerous studies, each contributing to the model's ongoing refinement and application. For instance, the work by Peng et al. on CORY-Net for intelligent safety monitoring on power grid construction sites exemplifies the potential of YOLOv5-based models to enhance worker safety and operational oversight <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b7">8]</ref>. Similarly, the study by Yang et al. on the application of YOLOv5 for PPE compliance monitoring further underscores the model's utility in promoting construction site safety <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b8">9]</ref>. 
These studies, among others, provide a solid foundation for exploring new avenues for applying YOLOv5 in construction site management.</p><p>The purpose of this research is to develop and evaluate a YOLOv5-based object detection model for the real-time identification, classification, and tracking of construction equipment, tools, and vehicles. By addressing these objectives, this research aims to contribute to the body of knowledge on the application of advanced object detection technologies in construction site management. Through a detailed analysis of current practices and future potentials, we seek to illuminate the path toward a more efficient, safe, and technologically advanced construction industry.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Main research</head><p>The proposed study aims to enhance construction site efficiency and safety through the implementation of a YOLOv5-based object detection model. This section outlines the materials and methods used to develop, train, and deploy the model for resource and equipment management on construction sites.</p><p>Proposed Framework for Resource and Equipment Management System:</p><p>1. Data Collection.</p><p>1.1. Public Datasets. Initially, public datasets containing images of construction equipment, tools, and vehicles were utilized. These datasets offer a broad range of object types and scenarios, providing a solid foundation for the initial training of the YOLOv5 model. Public datasets (ACID <ref type="bibr" target="#b9">[10]</ref>, TTM <ref type="bibr" target="#b10">[11]</ref>) are invaluable for introducing the model to a wide variety of objects and conditions it might encounter in real-world construction environments.</p><p>1.2. Self-captured Images. To tailor the model more closely to the specific needs and conditions of construction sites, a significant portion of the dataset was composed of self-captured images and video footage. This involved on-site data collection at various construction projects, capturing images and videos of resources, equipment in different operational states (e.g., idle, in use), and under diverse environmental conditions. This step was critical for incorporating real-world variability into the dataset, ensuring the model's effectiveness across different construction sites and conditions.</p><p>2. Preprocessing.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>2.1.</head><p>Data Cleaning. The first step involved filtering out irrelevant images and correcting any errors within the dataset. This process ensured that only pertinent and accurate data were included, enhancing the quality of the training material.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>2.2.</head><p>Resize/Adjust Brightness and Contrast. To standardize the dataset, all images were resized to a uniform dimension suitable for the YOLOv5 model. Additionally, adjustments to brightness and contrast were made where necessary to simulate various lighting conditions, further improving the model's robustness and accuracy.</p></div>
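The resize and brightness/contrast adjustments described above can be sketched in pure Python as a minimal illustration (the function names are ours, not the study's; a real pipeline would normally apply these transforms with OpenCV or Pillow):

```python
def adjust_brightness_contrast(pixels, brightness=0, contrast=1.0):
    """Apply a linear brightness/contrast transform to a 2-D grid of
    grayscale values (0-255), clamping results to the valid range."""
    return [[min(255, max(0, round(p * contrast + brightness))) for p in row]
            for row in pixels]

def resize_nearest(pixels, new_w, new_h):
    """Nearest-neighbour resize of a 2-D pixel grid to new_w x new_h,
    the simplest way to bring every image to one uniform dimension."""
    old_h, old_w = len(pixels), len(pixels[0])
    return [[pixels[r * old_h // new_h][c * old_w // new_w] for c in range(new_w)]
            for r in range(new_h)]
```

Clamping after the linear transform matters: without it, simulated over-exposure would wrap pixel values and corrupt the training images.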
<div xmlns="http://www.tei-c.org/ns/1.0"><head>2.3.</head><p>Image Labeling. Using annotation tools like YOLO Label, each image in the dataset was meticulously labeled to identify and classify different types of construction resources and equipment, along with their operational states. This step is crucial for supervised learning, as it provides the model with the necessary information to learn from the visual data.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Model Training and Evaluation.</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>3.1.</head><p>Train the Object Detection Model (YOLOv5). The YOLOv5 model is configured and trained using the prepared dataset. The training process is optimized for accuracy in detecting various resources and equipment specific to construction sites.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>3.2.</head><p>Evaluate the Model. The trained model is evaluated on a separate set of images to assess its performance. Evaluation metrics include precision, recall, and mAP (mean Average Precision), providing insights into the model's effectiveness in real-world scenarios.</p><p>4. Integration and Deployment.</p></div>
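Once detections have been matched to ground truth, these metrics reduce to simple arithmetic. The sketch below (our own simplification, not the study's code) shows the computation; a full mAP_0.5 additionally requires IoU-based matching at a 0.5 threshold and averaging AP across classes:

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from counts of true positives, false
    positives, and false negatives among matched detections."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def average_precision(points):
    """Approximate AP as the area under a precision-recall curve given as
    (recall, precision) points sorted by recall, via the trapezoid rule.
    Assumes the curve starts at recall 0 with precision 1 (a common
    convention; real evaluators use interpolated precision instead)."""
    ap, prev_r, prev_p = 0.0, 0.0, 1.0
    for r, p in points:
        ap += (r - prev_r) * (p + prev_p) / 2
        prev_r, prev_p = r, p
    return ap
```

With the paper's reported precision 0.852 and recall 0.723, for example, roughly 85 of every 100 predicted boxes are correct while about 72% of all ground-truth objects are found.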
<div xmlns="http://www.tei-c.org/ns/1.0"><head>4.1.</head><p>Object Detection Model Weights (YOLOv5). The trained model is deployed into the construction site management system, enabling real-time analysis and detection of resources and equipment.</p><p>4.2. Input Source. The model utilizes input sources such as CCTV footage, static images, or live video feeds for continuous object detection and monitoring.</p><p>5. Real-time Detection and Management.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>5.1.</head><p>Detecting Resources and Equipment. The system identifies and classifies resources and equipment in real-time, distinguishing between different types (e.g., tools, machinery) and states (idle, in use), facilitating immediate action and decision-making.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>5.2.</head><p>Environmental Conditions. The model optionally integrates environmental condition detection to adjust resource management strategies in response to weather changes, enhancing operational adaptability.</p><p>6. Resource and Equipment Status Dashboard.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>6.1.</head><p>Visualization and Alerts. A dashboard presents detected resources and equipment, highlighting their status, location, and usage. Alerts are generated for underutilized resources or when equipment maintenance is due, ensuring optimal resource management.</p><p>6.2. Decision Support. The system provides actionable insights for resource allocation, maintenance scheduling, and equipment usage optimization based on real-time data, supporting informed decision-making.</p><p>7. Feedback Loop for Continuous Improvement.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>7.1.</head><p>Model Retraining. New data and feedback are periodically collected to retrain the model, improving its accuracy and adapting to new types of resources or changes in the construction site environment.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>7.2.</head><p>System Updates. The management dashboard and decision support tools are updated based on insights gained from model performance and user feedback, ensuring the system's continuous improvement and relevance.</p><p>This comprehensive framework leverages YOLOv5 for object detection to manage resources and equipment on construction sites effectively and is represented as a model in Figure <ref type="figure" target="#fig_0">1</ref>. By emphasizing the detection and classification of resources and integrating this information into actionable insights for site managers, the system ensures resources are used efficiently and effectively, enhancing overall site safety and operational efficiency. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Dataset of the Study</head><p>Training an object detector is fundamentally a supervised learning problem that requires a well-curated dataset to inform and refine the model's learning process. The dataset serves as the foundation upon which the object detection model, in this case, YOLOv5, is trained, validated, and tested. The construction of a comprehensive and representative dataset is crucial for the success of the model in accurately identifying and classifying various objects within construction sites <ref type="bibr" target="#b11">[12]</ref>. The following outlines the meticulous process undertaken to build the dataset for this study.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Data Collection</head><p>The data collection process for enhancing construction site efficiency and safety through YOLOv5-based object detection focuses on gathering a diverse array of images representing various states of equipment utilization, tool and machinery tracking, and vehicle recognition. This comprehensive approach ensures the model is well-equipped to accurately identify and classify a wide range of objects under different conditions, crucial for real-world application on construction sites.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.1.">Equipment Utilization</head><p>For the category of equipment utilization, images were collected to represent both idle and active states of essential construction machinery:</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.2.">Tool and Machinery Tracking</head><p>This category involved collecting images of handheld tools and machinery, differentiating between their usage states:</p><p>• Hand Drill. 200 images were collected, distinguishing between drills in use and those stored. • Power Saw. Around 170 images differentiating between power saws in operation and those turned off. • Jackhammer. 160 images identifying jackhammers, noting whether they are in use on site. • Welding Machine. 140 images of welding machines, with a focus on capturing them in active use for metal joining tasks.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.3.">Vehicle Recognition</head><p>For vehicle recognition, the dataset includes images representing both the operational and idle states of key construction vehicles:</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.4.">Annotation Process</head><p>Each image within the dataset was meticulously annotated to provide the YOLOv5 model with clear, diverse examples of each state or type of equipment, tool, and vehicle. This diversity is crucial for helping the model learn the nuances of each category, thereby improving its ability to accurately identify and classify objects in real-world construction site scenarios. The annotation process included labeling images from various angles, lighting conditions, and distances to build a robust and versatile dataset, ensuring the model's effectiveness across a wide range of construction environments.</p><p>Table <ref type="table" target="#tab_6">1</ref> shows the number of cases across different classes. The fundamental idea is to analyze a sequence of images to identify whether an object, such as a concrete mixer, remains in the same state (indicating inactivity) or transitions between states (indicating activity). This determination is made by observing changes in the object's features or position across the image sequence.</p><p>Object Does Not Change Its State -Not Active. When a sequence of images is fed into a detection system where the object does not change its state, the object is classified as not active. For a concrete mixer, this would mean that across multiple frames, there is no visible change in its position, orientation, or any operational components (e.g., the mixing drum remains stationary). The lack of change suggests that the concrete mixer is idle. Detecting inactivity involves analyzing the object's features across the sequence and noting the absence of significant variation.</p><p>Object Changes Its State -Active. Conversely, if the object changes its state across the sequence of images, it is classified as active. 
For the concrete mixer example, this would be indicated by visible changes such as the rotation of the mixing drum, movement of the mixer from one location to another, or other signs of operation. Detecting activity involves identifying variations in the object's features, such as changes in texture (rotation patterns of the drum), position, or other operational indicators that signify the mixer is in use. An example of an active equipment recognition system is shown in Figure <ref type="figure" target="#fig_1">2</ref>. This principle of object activity detection is not limited to concrete mixers but can be applied to a wide range of objects and scenarios where understanding the operational state is crucial. Implementing such a system requires careful consideration of the features to be extracted, the method for temporal analysis, and the criteria for classifying the state of the object.</p></div>
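The state-change rule described for the concrete mixer can be sketched as a small Python function. The names and thresholds here are illustrative assumptions, not the authors' implementation; in practice each per-frame observation would come from the YOLOv5 detector plus a tracker:

```python
def classify_activity(track, move_thresh=5.0, feature_thresh=0.1):
    """Classify a tracked object as 'active' or 'idle' from a sequence of
    per-frame observations. Each observation is (cx, cy, feature), where
    (cx, cy) is the bounding-box centre in pixels and feature is any scalar
    appearance cue (e.g. a texture response on a mixer's drum).

    The object is 'active' if its centre moves more than move_thresh pixels
    or its appearance feature varies more than feature_thresh over the clip;
    otherwise no significant variation is observed and it is 'idle'."""
    xs = [o[0] for o in track]
    ys = [o[1] for o in track]
    feats = [o[2] for o in track]
    moved = (max(xs) - min(xs)) + (max(ys) - min(ys))
    varied = max(feats) - min(feats)
    return "active" if moved > move_thresh or varied > feature_thresh else "idle"
```

The appearance term is what lets the rule flag a mixer whose drum rotates in place: the box barely moves, but the texture cue fluctuates between frames.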
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Data Cleaning</head><p>Data cleaning is a critical step in preparing the dataset for training a YOLOv5-based object detection model, especially when the goal is to enhance construction site efficiency and safety. This process involves meticulously reviewing the dataset to remove any irrelevant, duplicate, or poor-quality images that could potentially hinder the model's learning and performance. The objective is to ensure that the dataset is as accurate and representative of real-world scenarios as possible <ref type="bibr" target="#b12">[13]</ref>.</p><p>The first step in the data cleaning process involved identifying and removing images that do not contribute to the model's learning objectives. For instance, images that do not clearly depict construction equipment, tools, or vehicles in the specified states (idle or active) were considered irrelevant. This step is crucial for maintaining the focus of the model on the target objects and scenarios relevant to construction site management.</p><p>Duplicate images can skew the model's learning process, leading to overfitting on specific examples. Therefore, the dataset was carefully scanned to identify and remove any duplicates. This ensures a diverse range of examples for each class, promoting a more generalized understanding and detection capability within the model.</p><p>Mislabelled images present a significant challenge in supervised learning models. Incorrect labels can confuse the model, leading to inaccuracies in object detection and classification. A thorough review of the dataset annotations was conducted to correct any mislabelled images, ensuring that each image accurately represents the intended class and state of the construction equipment, tools, or vehicles.</p><p>Quality control measures were implemented to remove images that are blurry, poorly lit, or obstructed, which could compromise the model's ability to learn effectively. 
Images were evaluated for clarity, lighting, and visibility of the target objects, with substandard images being removed from the dataset. This step is essential for ensuring that the model is trained on high-quality images that accurately reflect the conditions under which it will operate on construction sites.</p><p>Upon completion of the data cleaning process, the dataset underwent a final review to confirm its readiness for model training. This involved a comprehensive assessment of the dataset's diversity, representativeness, and alignment with the study's objectives of improving construction site efficiency and safety through YOLOv5-based object detection.</p><p>The meticulous data cleaning process undertaken in this study ensures that the dataset is optimized for training a highly effective and accurate YOLOv5 model. By focusing on relevance, diversity, accuracy, and quality, the cleaned dataset lays a solid foundation for developing a robust object detection system capable of enhancing resource and equipment management on construction sites.</p></div>
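Exact duplicates of the kind removed in this step can be found by hashing image bytes, as in the sketch below (names are hypothetical; near-duplicates from consecutive video frames would instead need a perceptual hash such as dHash):

```python
import hashlib

def remove_duplicates(images):
    """Drop byte-identical duplicates from a list of (filename, raw_bytes)
    pairs, keeping the first occurrence of each distinct image."""
    seen, kept = set(), []
    for name, data in images:
        digest = hashlib.md5(data).hexdigest()  # content fingerprint
        if digest not in seen:
            seen.add(digest)
            kept.append(name)
    return kept
```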
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4.">Image Preprocessing</head><p>Image preprocessing is a pivotal phase in preparing the dataset for the training of a YOLOv5-based object detection model, aimed at enhancing construction site efficiency and safety <ref type="bibr" target="#b13">[14]</ref>. This stage involves several key processes designed to improve the quality of the images and their suitability for model training. The goal is to standardize the dataset, enhancing the model's ability to learn from the images and accurately detect and classify various objects under different conditions on construction sites.</p><p>To ensure consistency and optimize processing efficiency, all images in the dataset were resized to a uniform dimension recommended for YOLOv5 training. This standardization is crucial for maintaining computational efficiency and ensuring that the model receives input images of a consistent size, which is vital for the internal architecture of the CNN (Convolutional Neural Network) used in YOLOv5.</p><p>Given the variability of lighting conditions on construction sites, images in the dataset were adjusted for brightness and contrast to simulate a wide range of environmental conditions. This step is essential for training the model to perform reliably in different lighting scenarios, from bright sunlight to overcast or poorly lit conditions. By adjusting the brightness and contrast, the model is better equipped to recognize and classify objects regardless of the lighting environment.</p><p>Image normalization was applied to scale pixel values to a standard range, typically between 0 and 1. This process helps in reducing the variance among images and speeds up the convergence of the model during training. 
Normalization ensures that the model treats each image uniformly, improving the learning efficiency and stability of the YOLOv5 model.</p><p>To further enhance the robustness of the model, data augmentation techniques were employed. These included rotations, translations, flipping, and scaling of images. Data augmentation introduces variability into the training dataset, simulating a broader range of scenarios that the model might encounter in real-world applications. This approach helps in preventing overfitting and improves the model's generalization capabilities.</p><p>Considering the importance of color information in identifying and classifying construction equipment, tools, and vehicles, some images were converted into different color spaces (e.g., HSV or LAB) as part of the augmentation process. This conversion allows the model to learn from a wider variety of color distributions, enhancing its ability to detect objects across different environmental conditions and backgrounds <ref type="bibr" target="#b14">[15]</ref>.</p><p>After completing the preprocessing steps, the dataset was compiled into a format suitable for training the YOLOv5 model. This involved organizing the images and their corresponding annotations (labels) into training, validation, and test sets. The division of the dataset allows for comprehensive training and evaluation of the model's performance, ensuring its effectiveness in enhancing construction site efficiency and safety.</p><p>Through meticulous image preprocessing, the study ensures that the dataset is optimized for training the YOLOv5 model. By focusing on image quality, consistency, and variability, the preprocessing steps lay a solid foundation for developing an object detection system capable of accurately identifying and classifying objects in diverse conditions encountered on construction sites.</p></div>
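The normalization and flip-augmentation steps above can be sketched on a toy grayscale grid (function names are our own; note that a horizontal flip must also mirror the YOLO label's x-coordinate, or boxes and pixels fall out of sync):

```python
def normalize(pixels):
    """Scale 0-255 pixel values to the [0, 1] range."""
    return [[p / 255.0 for p in row] for row in pixels]

def hflip(pixels):
    """Horizontal flip, a common augmentation for detection datasets."""
    return [row[::-1] for row in pixels]

def hflip_label(label):
    """Mirror a YOLO label (class, x_center, y_center, w, h) to match a
    horizontally flipped image: only x_center changes."""
    cls, xc, yc, w, h = label
    return (cls, 1.0 - xc, yc, w, h)
```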
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.5.">Image Labeling</head><p>Image labeling is a critical step in the development of a YOLOv5-based object detection model for improving construction site efficiency and safety. This process involves annotating images with labels that accurately describe the objects present, their categories, and their states (e.g., idle or active). For this study, Label Studio, a versatile tool for annotating images for machine learning applications, was employed to facilitate the labeling process.</p><p>Label Studio was chosen for its user-friendly interface and flexibility in handling various types of annotations, including bounding boxes, which are essential for object detection tasks. Its compatibility with a wide range of data types and export formats makes it an ideal choice for projects requiring detailed and accurate annotations.</p><p>Based on the study's focus on construction site management, specific classes and states were defined for labeling:</p><p>• Equipment Utilization. Classes included bulldozers, concrete mixers, and generators, with states designated as idle or active. • Tool and Machinery Tracking. Classes encompassed hand drills, power saws, jackhammers, and welding machines, with annotations indicating whether they were in use or stored. • Vehicle Recognition. Classes covered cranes, dump trucks, excavators, and cement trucks, with states reflecting loading activities, digging, pouring concrete, or being idle.</p><p>To ensure consistency and accuracy in the labeling process, comprehensive annotation guidelines were developed. These guidelines provided detailed instructions on how to identify and label each class and state, including how to draw bounding boxes around objects and the level of detail required in annotations. 
The guidelines emphasized the importance of precision in bounding box placement to ensure the model learns the exact dimensions and features of each object.</p><p>A team of annotators was trained using the developed guidelines to ensure a uniform understanding of the labeling task. This training included practical exercises in Label Studio, focusing on accurately identifying objects, selecting the correct labels, and drawing bounding boxes. Regular review sessions were held to address any inconsistencies and refine the labeling process.</p><p>To maintain high-quality annotations, a two-step review process was implemented. Initially, each labeled image was reviewed by a senior annotator for accuracy and adherence to the guidelines. Following this, a random sample of the annotations was audited by the project lead to ensure overall quality and consistency across the dataset.</p><p>Upon completion of the labeling process, the annotated data were exported from Label Studio in a format compatible with YOLOv5 training requirements. This included the images and their corresponding labels (bounding box coordinates and class identifiers), organized in a manner that facilitates efficient model training and evaluation.</p><p>Through meticulous image labeling using Label Studio, this study established a comprehensive and accurately annotated dataset for training the YOLOv5 model. The detailed annotations provide the model with the necessary information to learn the characteristics of various construction site objects, enabling effective detection and classification crucial for enhancing site safety and resource management.</p></div>
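YOLOv5 expects one label file per image, with one text line per object: a class index followed by the box's center coordinates and size, all normalized to [0, 1] by the image dimensions. A minimal sketch of the conversion from pixel-space boxes (the helper name `to_yolo_line` and the example values are illustrative, not taken from the study's export scripts):

```python
def to_yolo_line(class_id: int, x_min: float, y_min: float,
                 box_w: float, box_h: float,
                 img_w: float, img_h: float) -> str:
    """Convert a pixel-space bounding box (top-left corner plus size) into a
    YOLOv5 label line: 'class x_center y_center width height', normalized."""
    xc = (x_min + box_w / 2) / img_w
    yc = (y_min + box_h / 2) / img_h
    return f"{class_id} {xc:.6f} {yc:.6f} {box_w / img_w:.6f} {box_h / img_h:.6f}"

# e.g. a 200x100 px box at (100, 50) in a 640x480 image, class index 1
print(to_yolo_line(1, 100, 50, 200, 100, 640, 480))
```

Label Studio can emit this format directly via its YOLO export option, so a hand-rolled converter is only needed for custom annotation sources.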
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.6.">Splitting Data</head><p>In our study, the comprehensive dataset was meticulously divided using a random selection process into three distinct subsets: 70% for training, 20% for validation, and 10% for testing. This division resulted in a training set comprising 1,897 images of construction equipment, tools, and vehicles in various operational states, including 1,610 images of active and idle machinery instances and 287 images highlighting tool and machinery tracking scenarios. The validation set included 542 images, with 460 images dedicated to equipment and vehicle recognition in different states and 82 images focusing on tool and machinery tracking. Lastly, the test set consisted of 271 images, with 230 images showcasing equipment and vehicles in diverse operational conditions and 41 images for the evaluation of tool and machinery tracking performance. This structured approach to dataset allocation ensures a balanced representation of all classes and states, facilitating a comprehensive assessment of the YOLOv5 model's capability to enhance construction site efficiency and safety through advanced object detection.</p></div>
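The 70/20/10 random split described above can be reproduced with a seeded shuffle. `split_dataset` is a hypothetical helper, but applied to the study's 2,710 images it yields the reported subset sizes of 1,897, 542, and 271:

```python
import random

def split_dataset(paths, seed: int = 42):
    """Shuffle image paths with a fixed seed and split them
    70% / 20% / 10% into train, validation, and test subsets."""
    paths = list(paths)
    random.Random(seed).shuffle(paths)
    n = len(paths)
    n_train = round(n * 0.7)
    n_val = round(n * 0.2)
    return (paths[:n_train],
            paths[n_train:n_train + n_val],
            paths[n_train + n_val:])

train, val, test = split_dataset([f"img_{i:04d}.jpg" for i in range(2710)])
print(len(train), len(val), len(test))
```

Fixing the seed keeps the split reproducible, so validation and test images never leak into training between runs.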
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.7.">Testing and Evaluation</head><p>To rigorously test and evaluate the performance of the proposed YOLOv5-based object detection model for enhancing construction site efficiency and safety, imagery data collected from a local construction site using CCTV cameras were utilized. The evaluation process focused on measuring the accuracy and reliability of the model in detecting and classifying various construction-related objects, employing Intersection over Union (IoU) and a confusion matrix as the primary metrics.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.7.1.">Intersection over Union (IoU)</head><p>IoU is a critical metric in object detection that quantifies the accuracy of the predicted bounding box against the ground truth (actual) bounding box. It is calculated as the area of overlap between the predicted and actual bounding boxes divided by the area of their union. The IoU value ranges from 0 to 1, where 0 indicates no overlap and 1 signifies perfect alignment between the predicted and actual bounding boxes. The equation for IoU is given by:</p><formula xml:id="formula_3">IoU = area of overlap / area of union,<label>(1)</label></formula><p>where area of overlap is the area where the predicted bounding box and the actual (ground truth) bounding box overlap; area of union is the total area covered by both the predicted bounding box and the actual bounding box, minus the area of overlap. It represents the combined area of both boxes where either box has coverage.</p></div>
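Equation (1) translates directly into code for axis-aligned boxes given as (x_min, y_min, x_max, y_max); a minimal sketch:

```python
def iou(box_a, box_b) -> float:
    """Intersection over Union of two axis-aligned boxes
    given as (x_min, y_min, x_max, y_max) tuples."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # 0 when boxes are disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter                  # overlap counted once
    return inter / union if union else 0.0

# Two 10x10 boxes offset by 5 px: overlap 50, union 150
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # → 0.3333333333333333
```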
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.7.2.">Confusion Matrix</head><p>The confusion matrix is a tool that helps visualize the performance of the object detection model. It categorizes the predictions into four types: true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). From the confusion matrix, several performance metrics can be derived, including precision, recall, and mean average precision (mAP). Precision measures the model's accuracy in predicting positive observations and is defined as the ratio of TP to the sum of TP and FP. It indicates the reliability of the model's positive detections. The equation for Precision is given by:</p><formula xml:id="formula_4">Precision = TP / (TP + FP) = TP / all detections,<label>(2)</label></formula><p>where TP are the true positive predictions; FP are the false positive predictions.</p><p>Recall assesses the model's sensitivity, or its ability to correctly identify all relevant instances. It is calculated as the ratio of TP to the sum of TP and FN. The equation for Recall is given by:</p><formula xml:id="formula_6">Recall = TP / (TP + FN),<label>(3)</label></formula><p>where FN are the false negative predictions.</p><p>Mean Average Precision (mAP) is used to evaluate the model's accuracy across all classes within the dataset. It is the mean of the average precision (AP) scores for each class, where AP is computed as the weighted sum of precisions at each threshold, with the increase in recall from the previous threshold used as the weight. The equation for mAP is given by:</p><formula xml:id="formula_7">mAP = (1/n) · Σ_{k=1}^{n} AP_k,<label>(4)</label></formula><p>where n is the total number of classes in the dataset; AP is calculated for each class and represents the precision at different recall levels. It takes into account the order of the predictions, rewarding models that return true positives earlier. 
The equation for AP is given by:</p><formula xml:id="formula_9">AP = Σ_{k=0}^{n−1} [Recall(k) − Recall(k+1)] · Precision(k),<label>(5)</label></formula><p>where k is the index used to sum over a sorted list of objects, thresholds, or intervals.</p><p>The proposed model was evaluated using the described metrics on the dataset split into training, validation, and test sets. The IoU threshold was set to 0.5, a common practice in object detection tasks, to determine whether a detection is considered a true positive. The precision, recall, and mAP values were calculated based on the outcomes of the confusion matrix, providing a comprehensive assessment of the model's performance in accurately detecting and classifying objects on construction sites.</p><p>This rigorous testing and evaluation process ensures that the YOLOv5-based model is not only accurate in identifying construction site objects but also reliable and effective in real-world scenarios, contributing significantly to the improvement of construction site safety and efficiency.</p></div>
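Equations (2)-(5) can be implemented in a few lines. `average_precision` below assumes the per-threshold (recall, precision) points are sorted by decreasing recall, with Recall(n) taken as 0; the counts and point values in the example are illustrative only:

```python
def precision(tp: int, fp: int) -> float:
    """Equation (2): fraction of detections that are correct."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Equation (3): fraction of ground-truth objects that are found."""
    return tp / (tp + fn)

def average_precision(points) -> float:
    """Equation (5): sum of [Recall(k) − Recall(k+1)] · Precision(k) over
    (recall, precision) points sorted by decreasing recall; Recall(n) = 0."""
    ap = 0.0
    for k, (r, p) in enumerate(points):
        r_next = points[k + 1][0] if k + 1 < len(points) else 0.0
        ap += (r - r_next) * p
    return ap

def mean_average_precision(aps) -> float:
    """Equation (4): mean of the per-class AP scores."""
    return sum(aps) / len(aps)

# Illustrative values: 8 TP, 2 FP, 2 FN, and two PR points for one class
print(precision(8, 2), recall(8, 2))
print(average_precision([(1.0, 0.5), (0.5, 1.0)]))
```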
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Results</head><p>The model underwent training for 30 epochs on the dataset comprising construction equipment, tools, and vehicles, with a batch size of 16. The training process was completed in approximately 23 minutes using a Google Colab GPU. Figure <ref type="figure" target="#fig_2">3</ref> illustrates the model's performance across the training phase for the construction equipment and tools dataset, showcasing the metrics of precision, recall, and mAP at the 0.5 IoU threshold. The performance of YOLOv5 on the validation dataset, which included images of all classes, is summarized in Table <ref type="table">2</ref>. The model achieved an overall precision of approximately 88%, a recall of 79%, and a mAP at the 0.5 IoU threshold of 85%.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 2</head><p>Validation results on the different classes.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Conclusion</head><p>Thus, the implementation of the YOLOv5-based object detection model for enhancing construction site efficiency and safety has demonstrated significant potential in revolutionizing the management of resources and equipment. Through meticulous training, validation, and testing processes, the model has shown high accuracy in detecting and classifying various construction-related objects, including equipment in idle and active states, tools, and vehicles.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The model's training over 30 epochs, utilizing a dataset meticulously prepared with images of construction equipment, tools, and vehicles, resulted in a final precision of 0.852, a recall of 0.723, and a mAP_0.5 of 0.792. These metrics underscore the model's capability to accurately identify and classify objects, which is crucial for real-time monitoring and management applications. The high performance across different classes, particularly in vehicle recognition and equipment utilization, highlights the model's versatility and effectiveness in addressing the dynamic needs of construction site management.</p><p>The validation and testing phases further affirmed the model's reliability, with precision and recall rates consistently above 85% and 79%, respectively, across various object categories. This level of accuracy ensures that the model can serve as a dependable tool for construction site managers, enabling them to make informed decisions based on real-time data regarding the status and location of tools, machinery, and vehicles.</p><p>In conclusion, the YOLOv5-based object detection model represents a significant advancement in leveraging computer vision and deep learning technologies for construction site management. By providing a robust solution for real-time detection and classification of construction resources and equipment, the model paves the way for smarter, safer, and more efficient construction site operations. 
Future work will focus on further refining the model's accuracy, exploring its integration with other technological solutions, and expanding its application to a broader range of construction site scenarios, ultimately contributing to the ongoing digital transformation of the construction industry.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Image processing workflow for construction site resource management.</figDesc><graphic coords="7,130.85,127.60,333.65,336.85" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Active equipment recognition system. The detection of object activity typically involves the following steps: • Feature Extraction. Identifying and extracting relevant features from each image in the sequence that can indicate the state of the object. For a concrete mixer, features might include the position of the drum, its orientation, and any visible movement. • Temporal Analysis. Comparing these features across the sequence to detect changes over time. This can be achieved through various methods, including frame differencing, optical flow, or more sophisticated temporal modeling techniques. • State Classification. Based on the analysis, classifying the object's state as active or not active. If significant changes in the extracted features are detected, the object is classified as active; otherwise, it is considered not active. • Contextual Information. Incorporating contextual information can enhance accuracy. For instance, understanding the typical operation cycle of a concrete mixer can help differentiate between minor movements (noise) and significant activity (operation).</figDesc><graphic coords="11,85.20,85.00,424.90,216.70" type="bitmap" /></figure>
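The temporal-analysis and state-classification steps in Figure 2 can be illustrated with simple frame differencing over an object's cropped region. This is a sketch only: the helper name `is_active` and the threshold value are assumptions for illustration, not the system's actual implementation, which may use optical flow or richer temporal models.

```python
import numpy as np

def is_active(frames, threshold: float = 10.0) -> bool:
    """Classify an object as active when the mean absolute pixel difference
    between any two consecutive frames of its cropped region exceeds a
    threshold (frame differencing); the threshold is an illustrative value."""
    diffs = [np.abs(b.astype(np.int32) - a.astype(np.int32)).mean()
             for a, b in zip(frames, frames[1:])]
    return bool(max(diffs) > threshold)

# A static region vs. one whose intensity changes between frames
still = [np.full((8, 8), 100, dtype=np.uint8)] * 3
moving = [np.full((8, 8), v, dtype=np.uint8) for v in (100, 160, 100)]
print(is_active(still), is_active(moving))  # → False True
```

Incorporating contextual rules, such as a concrete mixer's typical operation cycle, would then filter out small differences caused by camera noise.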
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Performance of YOLOv5 during the training phase with the Vehicle Recognition dataset: (a) precision, (b) recall, and (c) mAP at the 0.5 IoU threshold.</figDesc><graphic coords="16,113.00,573.90,369.35,91.75" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>To Evaluate the Effectiveness of YOLOv5-Based Object Detection in improving construction site efficiency by automating the tracking and management of resources and equipment.  To Assess the Impact of YOLOv5 on Construction Site Safety through</head><label></label><figDesc></figDesc><table><row><cell>real-time</cell></row><row><cell>detection of safety gear compliance and potential hazards, thereby reducing the risk of</cell></row><row><cell>accidents and enhancing worker safety.</cell></row><row><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>To Explore the Customization and Adaptation of YOLOv5 Models for</head><label></label><figDesc></figDesc><table><row><cell>specific</cell></row><row><cell>construction site environments, considering the unique challenges posed by diverse</cell></row><row><cell>project sites and operational conditions.</cell></row><row><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>To Investigate the Integration of YOLOv5 with Other Technological Solutions such</head><label></label><figDesc>as drones and IoT devices, for comprehensive site monitoring and management.</figDesc><table><row><cell> To Identify Challenges and Limitations associated with the deployment of YOLOv5-</cell></row><row><cell>based object detection systems in construction site management, including technical,</cell></row><row><cell>operational, and regulatory considerations.</cell></row><row><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>To Provide Recommendations for Future Research and Development in</head><label></label><figDesc></figDesc><table><row><cell>the field</cell></row><row><cell>of construction technology, with a focus on enhancing the capabilities and applications</cell></row><row><cell>of YOLOv5-based object detection for improved site management practices.</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_4"><head></head><label></label><figDesc>Idle Bulldozer. Approximately 150 images of bulldozers with no movement or operation, capturing them with engines off or in a state of rest.  Active Bulldozer. Around 200 images of bulldozers engaged in activities like pushing earth or debris, highlighting their operational state.  Idle Concrete Mixer. Collected 120 images of concrete mixers stationary with no mixing activity, emphasizing their idle state.  Active Concrete Mixer. Secured 180 images of concrete mixers in operation, with a focus on capturing the rotating drum.  Idle Generator. Gathered 100 images of generators that are turned off or not providing power, showcasing various models and sizes.  Active Generator. Compiled 150 images of generators in operation, identifiable by noise or operational indicators.</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_5"><head></head><label></label><figDesc>Collected 180 images of dump trucks filled with materials, ready for transport or just arrived.  Dump Truck (Empty). Secured 160 images of empty dump trucks, possibly returning for another load.</figDesc><table><row><cell> Excavator (Digging). Gathered 210 images of excavators in the process of digging or</cell></row><row><cell>moving earth.</cell></row><row><cell> Excavator (Idle). Compiled 170 images of excavators at rest, with the digging arm</cell></row><row><cell>stationary.</cell></row><row><cell> Cement Truck (Pouring). Around 190 images of cement trucks in the process of</cell></row><row><cell>pouring concrete.</cell></row><row><cell> Cement Truck (Idle). Collected 150 images of cement trucks on site but not currently</cell></row><row><cell>pouring concrete.</cell></row><row><cell>Crane (Loading). Approximately 190 images of cranes lifting or moving materials,</cell></row><row><cell>indicating activity.</cell></row><row><cell> Crane (Idle). Around 150 images of cranes with no load and not in operation, showing</cell></row><row><cell>inactivity.</cell></row></table><note> Dump Truck (Loaded).</note></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_6"><head>Table 1</head><label>1</label><figDesc>Number of instances across the different classes</figDesc><table><row><cell>Object number</cell><cell>Class name</cell><cell>Number of instances</cell></row><row><cell>1</cell><cell>IB</cell><cell>150</cell></row><row><cell>2</cell><cell>AB</cell><cell>200</cell></row><row><cell>3</cell><cell>ICM</cell><cell>120</cell></row><row><cell>4</cell><cell>ACM</cell><cell>180</cell></row><row><cell>5</cell><cell>IG</cell><cell>100</cell></row><row><cell>6</cell><cell>AG</cell><cell>150</cell></row><row><cell>7</cell><cell>HD</cell><cell>200</cell></row><row><cell>8</cell><cell>PS</cell><cell>170</cell></row><row><cell>9</cell><cell>J</cell><cell>160</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_7"><head></head><label></label><figDesc>.</figDesc><table><row><cell>10</cell><cell>WM</cell><cell>140</cell></row><row><cell>11</cell><cell>CL</cell><cell>190</cell></row><row><cell>12</cell><cell>CI</cell><cell>150</cell></row><row><cell>13</cell><cell>DTL</cell><cell>180</cell></row><row><cell>14</cell><cell>DTE</cell><cell>160</cell></row><row><cell>15</cell><cell>ED</cell><cell>210</cell></row><row><cell>16</cell><cell>EI</cell><cell>170</cell></row><row><cell>17</cell><cell>CTP</cell><cell>190</cell></row><row><cell>18</cell><cell>CTI</cell><cell>150</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Multiscale Object Detection Method for Track Construction Safety Based on Improved YOLOv5</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Xue</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Zhai</surname></persName>
		</author>
		<idno type="DOI">10.1155/2022/1214644</idno>
		<ptr target="https://doi.org/10.1155/2022/1214644" />
	</analytic>
	<monogr>
		<title level="j">Mathematical Problems in Engineering</title>
		<imprint>
			<biblScope unit="volume">2022</biblScope>
			<biblScope unit="page" from="1" to="10" />
			<date type="published" when="2022-08">August 2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Object Detection for Construction Waste Based on an Improved YOLOv5 Model</title>
		<author>
			<persName><forename type="first">Q</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Qiu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Zheng</surname></persName>
		</author>
		<idno type="DOI">10.3390/su15010681</idno>
		<ptr target="https://doi.org/10.3390/su15010681" />
	</analytic>
	<monogr>
		<title level="j">Sustainability</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="1" to="15" />
			<date type="published" when="2022-12">December 2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">YOLOv4-5D: An Effective and Efficient Object Detector for Autonomous Driving</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Cai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Luan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">Á</forename><surname>Sotelo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Li</surname></persName>
		</author>
		<idno type="DOI">10.1109/TIM.2021.3065438</idno>
		<ptr target="https://doi.org/10.1109/TIM.2021.3065438" />
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Instrumentation and Measurement</title>
		<imprint>
			<biblScope unit="volume">70</biblScope>
			<biblScope unit="page" from="1" to="13" />
			<date type="published" when="2021-03">March 2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">CORY-Net: Contrastive Res-YOLOv5 Network for Intelligent Safety Monitoring on Power Grid Construction Sites</title>
		<author>
			<persName><forename type="first">G</forename><surname>Peng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Lei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Liu</surname></persName>
		</author>
		<idno type="DOI">10.1109/ACCESS.2021.3132301</idno>
		<ptr target="https://doi.org/10.1109/ACCESS.2021.3132301" />
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="160461" to="160470" />
			<date type="published" when="2021-12">December 2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Research on application of object detection based on yolov5 in construction site</title>
		<author>
			<persName><forename type="first">X</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Xie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Liang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Yank</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Peng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>He</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICACI58115.2023.10146151</idno>
		<ptr target="https://doi.org/10.1109/ICACI58115.2023.10146151" />
	</analytic>
	<monogr>
		<title level="m">15th International Conference on Advanced Computational Intelligence (ICACI)</title>
				<imprint>
			<date type="published" when="2023-06">2023. June 2023</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">The equipment detection and localization of large-scale construction jobsite by far-field construction surveillance video based on improving YOLOv3 and grey wolf optimizer improving extreme learning machine</title>
		<author>
			<persName><forename type="first">T</forename><surname>Zeng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Cui</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhang</surname></persName>
		</author>
		<idno type="DOI">10.1016/J.CONBUILDMAT.2021.123268</idno>
		<ptr target="https://doi.org/10.1016/J.CONBUILDMAT.2021.123268" />
	</analytic>
	<monogr>
		<title level="j">Construction and Building Materials</title>
		<imprint>
			<biblScope unit="volume">291</biblScope>
			<biblScope unit="page">123268</biblScope>
			<date type="published" when="2021-07">July 2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">YOLO-HR: Improved YOLOv5 for Object Detection in High-Resolution Optical Remote Sensing Images</title>
		<author>
			<persName><forename type="first">D</forename><surname>Wan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Lu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Shen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Lang</surname></persName>
		</author>
		<idno type="DOI">10.3390/rs15030614</idno>
		<ptr target="https://doi.org/10.3390/rs15030614" />
	</analytic>
	<monogr>
		<title level="j">Remote Sensing</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="1" to="17" />
			<date type="published" when="2023-01">January 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Integration of Building Information Modeling and Artificial Intelligence Systems to Create a Digital Twin of the Construction Site</title>
		<author>
			<persName><forename type="first">D</forename><surname>Chernyshev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Dolhopolov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Honcharenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Haman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Ivanova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Zinchenko</surname></persName>
		</author>
		<idno type="DOI">10.1109/CSIT56902.2022.10000717</idno>
		<ptr target="https://doi.org/10.1109/CSIT56902.2022.10000717" />
	</analytic>
	<monogr>
		<title level="m">International Scientific and Technical Conference on Computer Sciences and Information Technologies</title>
				<imprint>
			<date type="published" when="2022-11">November 2022</date>
			<biblScope unit="page" from="36" to="39" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Information system based on multi-value classification of fully connected neural network for construction management</title>
		<author>
			<persName><forename type="first">T</forename><surname>Honcharenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Akselrod</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Shpakov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Khomenko</surname></persName>
		</author>
		<idno type="DOI">10.11591/ijai.v12.i2.pp593-601</idno>
		<ptr target="http://doi.org/10.11591/ijai.v12.i2.pp593-601" />
	</analytic>
	<monogr>
		<title level="j">IAES International Journal of Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="593" to="601" />
			<date type="published" when="2023-06">2023. June 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<ptr target="https://universe.roboflow.com/imsmile2000-naver-com/acid7000" />
		<title level="m">ACID7000 Dataset. Roboflow Universe</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<ptr target="https://universe.roboflow.com/object-nfasp/ttm" />
		<title level="m">Roboflow Universe</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
	<note>TTM Dataset</note>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Digital Object Detection of Construction Site Based on Building Information Modeling and Artificial Intelligence Systems</title>
		<author>
			<persName><forename type="first">D</forename><surname>Chernyshev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Dolhopolov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Honcharenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Sapaiev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Delembovskyi</surname></persName>
		</author>
		<ptr target="http://ceur-ws.org/Vol-3039/paper16.pdf" />
	</analytic>
	<monogr>
		<title level="m">ITTAP&apos;2022 2 nd International Workshop on Information n Technologies: Theoretical and Applied Problems. CEUR Workshop Proceedings</title>
				<imprint>
			<date type="published" when="2022-11">November 2022</date>
			<biblScope unit="volume">3039</biblScope>
			<biblScope unit="page" from="267" to="279" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Classification and Detections using Yolov5</title>
		<author>
			<persName><forename type="first">N</forename><surname>Yashaswini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dr</forename><surname>Manimala</surname></persName>
		</author>
		<idno type="DOI">10.36948/ijfmr.2023.v05i05.6057</idno>
		<ptr target="https://doi.org/10.36948/ijfmr.2023.v05i05.6057" />
	</analytic>
	<monogr>
		<title level="j">International Journal For Multidisciplinary Research (IJFMR)</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="1" to="3" />
			<date type="published" when="2023-10">September-October 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Real-Time Object Detection Algorithm of Autonomous Vehicles Based on Improved YOLOv5s</title>
		<author>
			<persName><forename type="first">B</forename><surname>Xiao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Guo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>He</surname></persName>
		</author>
		<idno type="DOI">10.1109/CVCI54083.2021.9661149</idno>
		<ptr target="https://doi.org/10.1109/CVCI54083.2021.9661149" />
	</analytic>
	<monogr>
		<title level="m">5th CAA International Conference on Vehicular Control and Intelligence (CVCI)</title>
		<imprint>
			<date type="published" when="2022-01">2021 (published January 2022)</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Construction site safety detection based on object detection with channel-wise attention</title>
		<author>
			<persName><forename type="first">W</forename><surname>Jiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Qiu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Wang</surname></persName>
		</author>
		<idno type="DOI">10.1145/3511176.3511190</idno>
		<ptr target="https://doi.org/10.1145/3511176.3511190" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2021 5th International Conference on Video and Image Processing</title>
				<meeting>the 2021 5th International Conference on Video and Image Processing</meeting>
		<imprint>
			<date type="published" when="2021-12">December 2021</date>
			<biblScope unit="page" from="85" to="91" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Information tools for project management of the building territory at the stage of urban planning</title>
		<author>
			<persName><forename type="first">T</forename><surname>Honcharenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Mihaylenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Borodavka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Dolya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Savenko</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CEUR Workshop Proceedings</title>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="volume">2851</biblScope>
			<biblScope unit="page" from="22" to="33" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Construction Site Hazards Identification Using Deep Learning and Computer Vision</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Alateeq</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">P</forename><surname>Rajeena Fathimathul</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Ali</surname></persName>
		</author>
		<idno type="DOI">10.3390/su15032358</idno>
		<ptr target="https://doi.org/10.3390/su15032358" />
	</analytic>
	<monogr>
		<title level="j">Sustainability</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="1" to="19" />
			<date type="published" when="2023-01">January 2023</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
