<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Estimating Tomato Fruit Masses through Image Processing and Artificial Intelligence</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Elognissè</forename><surname>Erasme</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Guérin</forename><surname>Agossadou</surname></persName>
							<email>agossadourin@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="department">Technologie, Ingénierie et Mathématiques (UNSTIM)</orgName>
								<orgName type="institution">Université nationale des sciences</orgName>
								<address>
									<addrLine>POBox 486</addrLine>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">SOGBO ALIHO</orgName>
								<address>
									<settlement>Abomey</settlement>
									<country key="BJ">Benin</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Mahugnon</forename><surname>Géraud</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Azehoun</forename><surname>Pazou</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Technologie, Ingénierie et Mathématiques (UNSTIM)</orgName>
								<orgName type="institution">Université nationale des sciences</orgName>
								<address>
									<addrLine>POBox 486</addrLine>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">SOGBO ALIHO</orgName>
								<address>
									<settlement>Abomey</settlement>
									<country key="BJ">Benin</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Régis</forename><forename type="middle">Donald</forename><surname>Hontinfinde</surname></persName>
							<email>hontinfinde7@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="department">Technologie, Ingénierie et Mathématiques (UNSTIM)</orgName>
								<orgName type="institution">Université nationale des sciences</orgName>
								<address>
									<addrLine>POBox 486</addrLine>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">SOGBO ALIHO</orgName>
								<address>
									<settlement>Abomey</settlement>
									<country key="BJ">Benin</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Ahmed</forename><surname>Dooguy</surname></persName>
							<affiliation key="aff2">
								<orgName type="department">EDMI</orgName>
								<orgName type="institution">Cheikh Anta Diop University</orgName>
								<address>
									<settlement>Dakar</settlement>
									<country key="SN">Senegal</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff3">
								<orgName type="laboratory">Conférence Internationale des Technologies de l&apos;Information et de la Communication de l&apos;ANSALB</orgName>
								<address>
									<addrLine>June 27-28</addrLine>
									<postCode>2024</postCode>
									<settlement>Cotonou</settlement>
									<country key="BJ">BENIN</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Estimating Tomato Fruit Masses through Image Processing and Artificial Intelligence</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">07D2718497EBF681CF56E03BB03D81DD</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:41+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>tomato fruit mass estimation</term>
					<term>image processing</term>
					<term>prediction models</term>
					<term>Neural network</term>
					<term>deep learning</term>
					<term>pix2pix</term>
					<term>rcnn</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The integration of intelligent and connected production systems has positioned artificial intelligence (AI) as a pivotal component in society's digital transformation, becoming indispensable. Leveraging the vast amounts of data generated, AI can now make critical decisions to mitigate potential disasters. This study focuses on developing a method that combines computer vision and machine learning algorithms to estimate tomato weights. A dataset of tomato images was compiled, and a modified Mask R-CNN algorithm was employed to detect, segment, and extract individual fruit masks. Various regression models were evaluated to predict tomato weight based on visual features. The results on the test dataset indicate that this approach can estimate the number and total weight of tomatoes with approximately 93% accuracy. This research highlights the potential for automated monitoring of market garden crop yields through AI.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Agriculture faces major challenges in sustainably feeding a growing global population, making accurate crop yield estimation essential for informed decision-making by farmers. While traditional methods such as field surveys can be helpful, they are often limited by issues of accuracy, cost, and time efficiency.</p><p>Tomato (Solanum lycopersicum) is a crucial vegetable crop globally, boasting 183 million tonnes in 2018 <ref type="bibr" target="#b0">[1]</ref>. Native to Central and South America, the tomato was introduced to Europe in the 16th century, quickly gaining popularity for its delicious, nutrient-rich fruits loaded with vitamins, minerals, and antioxidants <ref type="bibr" target="#b1">[2]</ref>. Major producers include China, India, the United States, and Turkey, with significant cultivation also occurring in African nations such as Nigeria, Egypt, Morocco, and Algeria, primarily for local consumption <ref type="bibr" target="#b2">[3]</ref>. Tomatoes are generally classified into two main varieties: determinate, which have limited growth, and indeterminate, which continue growing throughout their lifecycle.</p><p>Whether cultivated in open fields or under protective covers like greenhouses, tomato farming requires careful irrigation due to the plant's deep taproot system. Furthermore, challenges such as pest infestations-like downy mildew and Botrytis necessitate the use of appropriate cultivation practices and phytosanitary measures to ensure optimal yields.</p><p>Several approaches have been investigated in the literature to address the challenge of fruit weight estimation. For instance, Yamamoto et al. <ref type="bibr" target="#b3">[4]</ref> developed a method to accurately count individual tomato fruits from images of plants grown in a laboratory setting. This method employed decision trees to analyze pixel color characteristics, achieving precise pixel-level segmentation. Post-processing was then applied to group pixels corresponding to fruits, en-abling the extraction and counting of fruit centroids. The study reported a detection precision of 0.88 and recall of 0.80, demonstrating the method's efficacy in controlled environments for tomato detection and counting.</p><p>In Indonesia, the increasing demand for tomatoes necessitates efficient post-harvest handling. A study by Sari et al. <ref type="bibr" target="#b4">[5]</ref> proposed a sorting system that categorizes tomatoes based on color, size, and weight using image processing with the OpenCV <ref type="bibr" target="#b5">[6]</ref> library. The system sorts tomatoes into red, yellow, and green categories and measures dimensions by identifying the outermost points of the detected fruits. It utilizes a weight sensor for mass measurement. The prototype, which incorporates a webcam, Arduino, and conveyor system, achieved 100% accuracy in color detection and 95% in weight measurement, although dimensional measurement accuracy was only 5%.</p><p>Van Daalen et al. <ref type="bibr" target="#b6">[7]</ref> examined the application of augmented reality (AR) in agriculture, focusing on detecting tomato ripeness using the 3D scanning capabilities of the HoloLens [8]. Their experimental setup, which included various tomato varieties, highlighted both the opportunities and challenges of using AR for hands-free tasks like training and harvesting in greenhouse environments.</p><p>Similarly, Lee et al. 
<ref type="bibr" target="#b7">[9]</ref> proposed an artificial intelligencebased system for tomato detection and mass estimation, utilizing multi-class detection and instance-wise segmentation. By analyzing a tomato image dataset with a calibrated vision system, the study demonstrated a high correlation between fruit dimensions and mass. Their method achieved a mean absolute percentage error of 7.09%, showcasing the effectiveness of computer vision and machine learning for automating tasks such as yield monitoring and fruit sizing.</p><p>In another study, Nyalala et al. <ref type="bibr" target="#b8">[10]</ref> developed seven regression models, including Support Vector Regression (SVR) <ref type="bibr" target="#b9">[11]</ref> and artificial neural networks (ANNs) <ref type="bibr" target="#b10">[12]</ref> with different training algorithms. These models effectively estimated fruit weight and volume, offering significant potential for improvements in fruit sorting and grading processes.</p><p>Basak et al. <ref type="bibr" target="#b11">[13]</ref> introduced a non-destructive method for estimating strawberry fruit weight using machine learning models. By analyzing 900 samples from three different strawberry cultivars, they used image processing to calcu-late pixel numbers. Linear regression (LR) and non-linear SVR models were applied, resulting in training and testing accuracies of 96.3% and 89.6%, respectively.</p><p>This study focuses on applying recent advancements in computer vision, particularly object detection, and machine learning algorithms to estimate tomato weight from realworld images. The subsequent sections describe the equipment used, the structure and composition of the dataset, and the methodology employed to generate accurate quantitative measures such as projected surface area and total weight for detected fruits. Our findings demonstrate the effectiveness of this approach. Additionally, we discuss the challenges faced and propose recommendations for future research.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Material and Methods</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Dataset</head><p>The data used in this study consists of tomato fruit images collected both online and in the field under real-world conditions. The dataset includes a total of 180 images obtained online and 100 images taken in the field, containing a total of 1143 tomato fruit instances. Table <ref type="table" target="#tab_0">1</ref> illustrates the composition of our dataset.</p><p>Images captured in the field helped to collect additional information such as actual fruit area and actual fruit weight, which enriches the dataset by providing accurate and relevant measurements for tomato fruit weight estimation. Table 2 presents additional insights concerning field-captured images. Upon analysis of the table, the average fruit weight is 35.30 g , with a standard deviation of 14.56 g . The average true area is 2673.48 mm 2 , with a standard deviation of 873.68 mm 2 . Quartile values provide insights into the distribution of the data. Thus, 25% of the fruits have a weight of less than 25.21 g, 50% have a weight of less than 37.00 g , and 75% have a weight of less than 43.49 g. For the actual surface area, 25% of fruits have an area less than 2, 024.93 mm 2 , 50% have an area less than 2, 779.53 mm 2 , and 75% have an area less than 3, 219.12 mm 2 .</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Methods</head><p>To estimate tomato fruit weights, we developed a four steps approach (see figure <ref type="figure" target="#fig_0">1</ref>)</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.1.">Detection, segmentation and extraction of tomato fruit masks</head><p>To train our segmentation model, we prepared a dataset of tomato images, labeled in the COCO format. The dataset consisted of 180 images containing 1043 instances of tomatoes, sourced from both the internet and field photography, and annotated using the ROboflow platform. We employed the Mask R-CNN instance segmentation model through the Detectron2 framework, selecting the mask_rcnn_R_50_FPN_3x configuration developed by Facebook AI Research. This model, pre-trained on the COCO dataset, combines the Mask R-CNN architecture with a ResNet-50 backbone and Feature Pyramid Network (FPN) for high-performance, multi-scale object detection. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.2.">Projected Surface Area Estimation of Each Tomato</head><p>To evaluate the projected area of each tomato from images, a dataset was constructed, including individual images of tomatoes, their actual weight in grams, the total number of pixels in the image, the number of pixels corresponding tomato (obtained by semantic segmentation), and the total area of the image in square meters, obtained by camera calibration.</p><p>The estimation of the projected area took place in two steps: first, the segmentation mask allows us to calculate the area in pixels occupied by the tomato in the image. Then, a camera calibration converted this pixel area into an actual metric area, using a coin as a reference object. By photographing the tomatoes under the same conditions as the reference piece, the resulting conversion factor was used to convert the pixel area of each fruit into a measure of its actual projected area in metric units. This method uses a rule of three, where the actual surface area of the tomato (𝐴 𝑡𝑜𝑚𝑎𝑡𝑜 ) is estimated based on the number of pixels corresponding to the tomato in the image (𝑃 𝑡𝑜𝑚𝑎𝑡𝑜 ), using the conversion factor established during calibration:</p><formula xml:id="formula_0">𝐴 𝑟𝑒𝑓 𝑃 𝑟𝑒𝑓 . 𝐴 𝑡𝑜𝑚𝑎𝑡𝑒 = 𝑃 𝑡𝑜𝑚𝑎𝑡𝑒 × 𝐴 𝑟𝑒𝑓 𝑃 𝑟𝑒𝑓<label>(1)</label></formula><p>With this method, we were able to estimate the real surface area of each tomato in physical space from segmentation in image space, thanks to precise calibration using a reference object.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.3.">Tomato Mass Estimation</head><p>To estimate the weight of the tomatoes based on their projected surface area, we tested several regression models, including Simple Linear Regression (SLR), Multiple Linear Regression (MLR), and Partial Least Squares Regression (PLSR). These models aimed to establish a mathematical relationship between the surface area (independent variable) and the weight (dependent variable) of the tomatoes.  We also applied 10 -fold cross-validation to each model to reduce the likelihood of overfitting. Figure <ref type="figure" target="#fig_0">1</ref> depicts the summary of the methodology adopted in this study.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Results and Discussion</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Results</head><p>Figure <ref type="figure">2</ref> illustrates the model's accuracy, while Figure <ref type="figure">3</ref> depicts the evolution of the cost function</p><p>The performance of the model was evaluated on the test set consisting of 19 images containing a total of 149 tomato annotations. The Average Precision (AP) metric was used to quantify the model's ability to correctly detect and segment tomatoes under various conditions.</p><p>Table <ref type="table" target="#tab_1">3</ref> presents the results obtained for the detection and semantic segmentation tasks. We observe an average AP of 55.9% for detection and 54.6% for segmentation on different IoU thresholds between 0.5 and 0.95. The model achieves better performance on large fruits (AP of 66.1% in detection) than on small tomatoes (AP of 30.3%).</p><p>These results confirm the model's effectiveness in detecting and segmenting tomatoes in real-world conditions. Further data annotation and model optimization are expected The projected surface area of each fruit was derived from the segmented mask by calculating the pixel area, then converting it to real-world units using camera calibration information as defined in Equation <ref type="formula" target="#formula_0">1</ref>. This method achieved a precision of approximately 95.</p><p>For tomato weight estimation, a subset of the dataset containing real-world images was used, which included precise data on both the actual weight of each tomato and their projected surface area. A mathematical relationship between the weight and projected area was established through the evaluation of several regression methods. The algorithms tested included Least Squares Regression (LSR), Multiple Linear Regression (MLR), and Support Vector Machines (SVM), and their performance was compared using cross-validation and Mean Square Error (MSE) as the evaluation metric.</p><p>Table <ref type="table" target="#tab_2">4</ref> highlights the performance metrics of the tested models.</p><p>Among the evaluated models, Lasso Regression achieved the best performance, with a MAE of 5,776 and an MSE of 62.99.</p><p>The corresponding model equation is: </p><p>Table <ref type="table" target="#tab_1">3</ref>.1 presents the prediction results on the test dataset, where our model achieved a relative error of 7.09% in estimating the total weight. When applied in an autonomous field system, this method shows great potential to enhance yield estimation efficiency, helping farmers save time and reduce labor costs.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Discussion</head><p>The study employed a multi-step methodology to estimate tomato fruit weights from images. First, a Mask R-CNN model, using the mask_rcnn_R_50_FPN_3x configuration, was trained on a dataset of 180 images containing 1043 tomato instances. After detection and segmentation, the projected surface area of each tomato was estimated using a calibrated conversion from pixel area to metric units, achieving approximately 95% accuracy. For weight estimation, several regression models were evaluated on a subset of real-world images with known weights and projected areas. Among the regression models evaluated, the Lasso Regression algorithm demonstrated superior performance in estimating tomato weights. This model achieved a Mean Absolute Error (MAE) of 5.776 grams and a Mean Squared Error (MSE) of 62.99 grams2. Our model outperformed the approach described by Lee et al. <ref type="bibr" target="#b7">[9]</ref>, which reported an MAE of 7.09 grams for a similar tomato weight estimation task.</p><p>When applied to the test dataset, this model achieved a relative error of 7.09% in estimating the total weight of tomatoes. These results demonstrate the potential of this combined approach for automated tomato yield estimation, although the ideal conditions of the study (fully visible fruits) suggest that further research is needed to address real-world challenges such as occlusion.</p><p>While this study yielded promising results, it's important to acknowledge its primary limitation: the experiments were conducted under idealized conditions that do not fully represent real-world agricultural environments. All tomatoes in the study were fully visible and unobstructed, which rarely occurs in actual fields where fruits are often partially hidden by leaves, branches, or other fruits. This idealization may lead to overly optimistic performance estimates.</p><p>To bridge this gap and enhance the model's practical applicability, future research will focus on developing robust occlusion handling techniques, such as implementing advanced image processing algorithms for reconstructing partially obscured fruits or using ellipse fitting methods to estimate the full shape of partially visible tomatoes.</p><p>Additionally, creating more representative datasets that reflect the challenging conditions found in real agricultural settings, including various levels of occlusion and diverse growth stages, will be crucial. By addressing these limitations and training on more diverse and challenging datasets, future iterations of this system could significantly improve in accuracy and robustness, making it a more reliable tool for automated agricultural yield estimation in real-world scenarios.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Conclusion</head><p>This study successfully introduced an innovative approach for accurately assessing tomato crop yields through the use of advanced image processing, computer vision, and artificial intelligence techniques. The results align closely with the objectives of estimating both the quantity and total weight of fruits, highlighting the practical benefits of this methodology for farmers.</p><p>Looking ahead, future enhancements will focus on refining the approach by integrating multispectral imaging to improve data acquisition. Additionally, algorithmic advancements, including image generation and ellipse fitting techniques, will be employed to tackle challenges related to occlusion. These developments will enhance the model's scalability and robustness, facilitating large-scale deployment in real-world agricultural settings. The anticipated implementation of this approach in automated systems that utilize drones and ground-based robots presents exciting opportunities for digital agriculture, paving the way for precise, efficient, and automated yield estimation.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Summary illustration of the methodology</figDesc><graphic coords="3,72.00,65.61,451.26,181.53" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 : 3 :</head><label>23</label><figDesc>Figure 2: Model accuracy Figure 3: Evolution of the cost function</figDesc><graphic coords="3,81.24,282.11,203.06,114.22" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Dataset Overview</figDesc><table><row><cell>Source</cell><cell cols="2">Number of Number of</cell></row><row><cell></cell><cell>images</cell><cell>fruit instances</cell></row><row><cell>Online</cell><cell>180</cell><cell>1043</cell></row><row><cell cols="2">Field-collected 100</cell><cell>100</cell></row><row><cell>Total</cell><cell>280</cell><cell>1143</cell></row><row><cell>Table 2</cell><cell></cell><cell></cell></row><row><cell cols="3">Additional information on images taken in the field</cell></row><row><cell></cell><cell cols="2">weight real_surface (mm 2 )</cell></row><row><cell cols="2">count 100.000000</cell><cell>100.000000</cell></row><row><cell>mean</cell><cell>33.341900</cell><cell>2565.479377</cell></row><row><cell>std</cell><cell>13.884898</cell><cell>912.439551</cell></row><row><cell>min</cell><cell>9.930000</cell><cell>856.037079</cell></row><row><cell>25%</cell><cell>19.932500</cell><cell>1723.114236</cell></row><row><cell>50%</cell><cell>35.955000</cell><cell>2609.542487</cell></row><row><cell>75%</cell><cell>42.877500</cell><cell>3186.808853</cell></row><row><cell>max</cell><cell>63.760000</cell><cell>4931.281258</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 3</head><label>3</label><figDesc>Model results in terms of Average Precision</figDesc><table><row><cell>Metric</cell><cell>AP</cell><cell>AP50</cell><cell>AP75</cell><cell>APm</cell><cell>APl</cell></row><row><cell>Detection</cell><cell cols="5">55.901 74.083 62.361 30.294 66.144</cell></row><row><cell cols="6">Segmentation 54.591 73.763 61.112 24.978 64.943</cell></row><row><cell cols="2">to enhance performance.</cell><cell></cell><cell></cell><cell></cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 4</head><label>4</label><figDesc>Performance metrics of different models</figDesc><table><row><cell></cell><cell></cell><cell>MSE</cell><cell>MAE</cell><cell>RSE</cell><cell>R 2</cell></row><row><cell cols="2">Linear Regression</cell><cell>67.465310</cell><cell>5.959565</cell><cell>8.110772</cell><cell>0.614756</cell></row><row><cell cols="2">Lasso Regression</cell><cell>62.990660</cell><cell cols="2">5.775707 7.900871</cell><cell>0.659433</cell></row><row><cell cols="2">Ridge Regression</cell><cell>64.222324</cell><cell>5.820851</cell><cell>7.789839</cell><cell>0.662985</cell></row><row><cell cols="2">ElasticNet Regression</cell><cell>65.214001</cell><cell>5.919661</cell><cell>8.063410</cell><cell>0.534604</cell></row><row><cell>SVR</cell><cell></cell><cell>81.623252</cell><cell>6.884133</cell><cell>8.980888</cell><cell>0.564414</cell></row><row><cell cols="2">Random Forest</cell><cell>67.078331</cell><cell>6.002012</cell><cell>8.102465</cell><cell>0.622985</cell></row><row><cell cols="2">AdaBoost Regression</cell><cell>76.441269</cell><cell>6.757964</cell><cell>8.621712</cell><cell>0.578526</cell></row><row><cell cols="3">KNeighbors Regression 68.750815</cell><cell>6.179380</cell><cell>8.225651</cell><cell>0.634068</cell></row><row><cell cols="2">Decision Tree</cell><cell cols="2">126.243306 8.132200</cell><cell cols="2">11.068062 0.322372</cell></row><row><cell>Table 5</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Prediction results on the test set</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell cols="6">Projected area actual weight Estimated weight Absolute error Relative error (%)</cell></row><row><cell>3219.122984</cell><cell>48.370</cell><cell>42.042340</cell><cell cols="2">6.327660</cell><cell>13.081785</cell></row><row><cell>2566.503710</cell><cell>30.760</cell><cell>33.377463</cell><cell cols="2">2.617463</cell><cell>8.509306</cell></row><row><cell>3279.246427</cell><cell>38.690</cell><cell>42.840604</cell><cell cols="2">4.150604</cell><cell>10.727847</cell></row><row><cell>2635.552676</cell><cell>30.600</cell><cell>34.294231</cell><cell cols="2">3.694231</cell><cell>12.072651</cell></row><row><cell>1273.816970</cell><cell>105.590</cell><cell>16.214358</cell><cell cols="2">89.375642</cell><cell>84.644040</cell></row><row><cell>2733.490428</cell><cell>30.360</cell><cell>35.594558</cell><cell cols="2">5.234558</cell><cell>17.241629</cell></row><row><cell>2521.044293</cell><cell>28.530</cell><cell>32.773894</cell><cell cols="2">4.243894</cell><cell>14.875199</cell></row><row><cell>3122.755376</cell><cell>37.570</cell><cell>40.762860</cell><cell cols="2">3.192860</cell><cell>8.498430</cell></row><row><cell>3501.459234</cell><cell>50.850</cell><cell>45.790941</cell><cell cols="2">5.059059</cell><cell>9.948985</cell></row><row><cell>2535.848511</cell><cell>35.070</cell><cell>32.970451</cell><cell cols="2">2.099549</cell><cell>5.986738</cell></row><row><cell>3098.277947</cell><cell>41.740</cell><cell>40.437871</cell><cell cols="2">1.302129</cell><cell>3.119618</cell></row><row><cell>…</cell><cell>…</cell><cell>…</cell><cell>…</cell><cell></cell><cell>…</cell></row><row><cell>2782.959320</cell><cell>26.520</cell><cell>36.251361</cell><cell cols="2">9.731361</cell><cell>36.694423</cell></row><row><cell>2436.892034</cell><cell>33.990</cell><cell>31.656598</cell><cell 
cols="2">2.333402</cell><cell>6.864966</cell></row><row><cell>2810.053656</cell><cell>37.080</cell><cell>36.611095</cell><cell cols="2">0.468905</cell><cell>1.264578</cell></row><row><cell>3040.780757</cell><cell>44.260</cell><cell>39.674477</cell><cell cols="2">4.585523</cell><cell>10.360424</cell></row><row><cell>3229.282192</cell><cell>46.620</cell><cell>42.177225</cell><cell cols="2">4.442775</cell><cell>9.529762</cell></row><row><cell>Total</cell><cell>1361.29</cell><cell>1257.726200</cell><cell cols="2">96.563799</cell><cell>7.09</cell></row><row><cell cols="2">𝑀 = 0.01327708 × 𝑃𝐴 − 0.69821033</cell><cell></cell><cell></cell><cell></cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<ptr target="https://www.fao.org/3/cc3751en/cc3751en.pdf" />
		<title level="m">Agricultural production statistics</title>
				<imprint>
			<publisher>Food and Agriculture Organization</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Tomato (solanum lycopersicum) health components: From the seed to the consumer</title>
		<author>
			<persName><forename type="first">M</forename><surname>Dorais</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Ehret</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Papadopoulos</surname></persName>
		</author>
		<idno type="DOI">10.1007/s11101-007-9085-x</idno>
	</analytic>
	<monogr>
		<title level="j">Phytochemistry Reviews</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page" from="231" to="250" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<author>
			<persName><surname>WorldAtlas</surname></persName>
		</author>
		<ptr target="https://www.worldatlas.com/articles/which-are-the-world-s-leading-tomato-producing-countries.html" />
		<title level="m">The world&apos;s leading tomato producing countries</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">On plant detection of intact tomato fruits using image analysis and machine learning methods</title>
		<author>
			<persName><forename type="first">K</forename><surname>Yamamoto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Guo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Yoshioka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ninomiya</surname></persName>
		</author>
		<idno type="DOI">10.3390/s140712191</idno>
		<ptr target="https://www.mdpi.com/1424-8220/14/7/12191.doi:10.3390/s140712191" />
	</analytic>
	<monogr>
		<title level="j">Sensors</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page" from="12191" to="12206" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">The use of image processing and sensor in tomato sorting machine by color, size, and weight, JOIV</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">I</forename><surname>Sari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Fajar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Gunawan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Handayani</surname></persName>
		</author>
		<ptr target="https://api.semanticscholar.org/CorpusID:250542375" />
	</analytic>
	<monogr>
		<title level="m">International Journal on Informatics Visualization</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<ptr target="https://opencv.org/" />
		<title level="m">Opencv: Open source computer vision library</title>
				<imprint>
			<date type="published" when="2024-10-02">2024-10-02</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Determining fresh tomato weight using depth images from an ar headset</title>
		<author>
			<persName><forename type="first">T</forename><surname>Van Daalen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Peller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Balendonck</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.ifacol.2022.11.125</idno>
		<ptr target="https://www.sciencedirect.com/science/article/pii/S2405896322027586.doi:10.1016/j.ifacol.2022.11.125" />
	</analytic>
	<monogr>
		<title level="j">IFAC-PapersOnLine</title>
		<imprint>
			<biblScope unit="volume">55</biblScope>
			<biblScope unit="page" from="119" to="123" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<author>
			<persName><forename type="first">J.-S</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Nazki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Baek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Hong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lee</surname></persName>
		</author>
		<ptr target="https://api.semanticscholar.org/CorpusID:228852288" />
		<title level="m">Artificial intelligence approach for tomato detection and mass estimation in precision agriculture</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note>Sustainability</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Weight and volume estimation of single and occluded tomatoes using machine vision</title>
		<author>
			<persName><forename type="first">I</forename><surname>Nyalala</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Okinda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Chao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Mecha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Korohou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Yi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Nyalala</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Jiayu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Chao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Kunjie</surname></persName>
		</author>
		<idno type="DOI">10.1080/10942912.2021.1933024</idno>
		<idno>arXiv:</idno>
		<ptr target="https://doi.org/10.1080/10942912.2021.1933024" />
	</analytic>
	<monogr>
		<title level="j">International Journal of Food Properties</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="page" from="818" to="832" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Chapter 7 -support vector regression</title>
		<author>
			<persName><forename type="first">F</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">J</forename><surname>O'donnell</surname></persName>
		</author>
		<idno type="DOI">10.1016/B978-0-12-815739-8.00007-9</idno>
		<ptr target="https://www.sciencedirect.com/science/article/pii/B9780128157398000079.doi:10.1016/B978-0-12-815739-8.00007-9" />
	</analytic>
	<monogr>
		<title level="m">Machine Learning</title>
				<editor>
			<persName><forename type="first">A</forename><surname>Mechelli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Vieira</surname></persName>
		</editor>
		<imprint>
			<publisher>Academic Press</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="123" to="140" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">An introduction to convolutional neural networks</title>
		<author>
			<persName><forename type="first">K</forename><surname>O'shea</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Nash</surname></persName>
		</author>
		<ptr target="https://arxiv.org/abs/1511.08458.arXiv:1511.08458" />
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Non-destructive estimation of fruit weight of strawberry using machine learning models</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">K</forename><surname>Basak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Paudel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">E</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">C</forename><surname>Deb</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">G</forename><surname>Kaushalya Madhavi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">T</forename><surname>Kim</surname></persName>
		</author>
		<idno type="DOI">10.3390/agronomy12102487</idno>
		<ptr target="https://www.mdpi.com/2073-4395/12/10/2487.doi:10.3390/agronomy12102487" />
	</analytic>
	<monogr>
		<title level="j">Agronomy</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
