<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Machine learning modeling exploration for under-bark tree bole volume estimation</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Maria</forename><forename type="middle">J</forename><surname>Diamantopoulou</surname></persName>
							<email>mdiamant@for.auth.gr</email>
							<affiliation key="aff0">
								<orgName type="institution">Aristotle University of Thessaloniki</orgName>
								<address>
									<addrLine>University Campus</addrLine>
									<postCode>54124</postCode>
									<settlement>Thessaloniki</settlement>
									<country key="GR">Greece</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Machine learning modeling exploration for under-bark tree bole volume estimation</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">1382CCA23639907C831477AD9BC92326</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T19:30+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Gaussian Process Regression</term>
					<term>Random Forest regression</term>
					<term>pine trees</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper investigates the potential of utilizing both probabilistic and ensemble supervised machine learning modeling strategies to accurately estimate under-bark tree bole volume. For this purpose, primary measurement data from pine trees (Pinus brutia Ten.) in the Seich-Sou suburban forest of Thessaloniki, Greece, were used. The described analysis can offer a strong foundation for understanding the performance of both non-parametric modeling approaches. Specifically, the study employed the probabilistic Gaussian Process Regression (GPR) modeling methodology with an integrated radial basis function (RBF) kernel. Furthermore, based on its well-known ability to predict values for continuous variables, the ensemble learning technique chosen for investigation was Random Forest regression (RFr), which integrates the bootstrap aggregation methodology. A cross-validation procedure, combined with an exhaustive grid-search methodology, was employed to determine the optimal hyperparameter combination for each constructed model. Despite the challenge of identifying the optimal combination of numerous hyperparameters unique to each modeling approach, the results demonstrated that both methodologies, due to their flexibility, have significant potential to provide reliable under-bark tree bole diameter and volume estimations. This contributes to the sustainable management of forest resources and highlights potential areas for further exploration and improvement.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Accurately predicting the total volume of trees is crucial for anticipating forest growth and productivity. To estimate the bole volume by section, sophisticated formulas derived from the methods developed by Huber, Smalian, and Newton are employed <ref type="bibr" target="#b0">[1]</ref>. These techniques necessitate multiple measurements of bole diameters at specific heights, which can be difficult to obtain from standing trees.</p><p>Directly measuring the under-bark diameters of a tree bole several meters above the ground, which is necessary for calculating the true under-bark bole volume, is unfeasible, as these measurements can only be obtained from a felled tree. To avoid this destructive method, alternative indirect approaches are being explored. Traditionally, regression analysis has been used to estimate various forest attributes. However, the standard regression methodology encounters difficulties due to the need to meet multiple assumptions <ref type="bibr" target="#b1">[2]</ref>.</p><p>Lately, the emerging field of artificial intelligence (AI), including machine learning (ML) techniques, has shown great potential in providing accurate estimations and predictions of biological attributes, even when dealing with noisy data and non-normal distributions, which are common in primary forest measurements. Over the past two decades, there has been increasing interest in utilizing machine learning in forestry <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4]</ref>, driven by its advanced computational capabilities.</p><p>In line with these developments, the goal of this study is to accurately estimate and predict the under-bark tree bole volume of pine trees using field measurements that are easily obtainable. 
To achieve this, two distinct machine learning approaches were employed: the probabilistic Gaussian Process Regression (GPR) method, known for its effectiveness in handling noisy continuous data, and the Random Forest regression (RFr) technique, an ensemble learning algorithm that enhances overall performance by combining the insights of multiple models.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Material and Methods</head><p>The ground-truth data were collected from measurements on pine trees (Pinus brutia) within the Seich-Sou suburban forest of Thessaloniki, Greece. The forest covers an area of 3,085.82 ha, with elevations ranging from 100 to 563 meters <ref type="bibr" target="#b4">[5]</ref>. Systematic sampling was employed to ensure that all different site classes were represented. Tree measurements included over-bark (doh) and under-bark (duh) diameters at one-meter height intervals starting from 0.3 meters above the ground (do0.3, du0.3, do1.3, du1.3, …, do9.3, du9.3), as well as the total height (h) of the sampled trees. Upon completion of the measurements, a sample size of n = 999 measurements was obtained.</p><p>The under-bark bole volume (vubole) was calculated using Smalian's cross-sectional equation <ref type="bibr" target="#b0">[1]</ref>:</p><formula xml:id="formula_0">v_ubole = Σ_{i=1}^{k−1} [ (π/4) · ((d_ui² + d_u(i+1)²)/2) · l ] + (π/12) · d_uk² · l_k ,<label>(1)</label></formula><p>where d_ui, i = 1, …, k are the under-bark diameters of the lower and upper stem sections in m, l is the length of each section in m (equal to one meter in this case), and l_k is the length of the tree top in m, with l_k &lt; l = 1.</p><p>The mean and the standard deviation (std) of the observed over- and under-bark tree diameters, the tree total heights, and the calculated under-bark volumes are given in Table <ref type="table" target="#tab_0">1</ref>.</p></div>
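The sectional computation of equation (1) can be sketched in Python. This is a minimal illustration under the stated assumptions (one-meter sections, a conic tip shorter than one meter); the function name and the sample diameters are hypothetical, not taken from the study data.

```python
import math

def smalian_under_bark_volume(diameters_m, section_len=1.0, tip_len=0.5):
    """Under-bark bole volume per equation (1): Smalian sections plus a conic tip.

    diameters_m: under-bark section diameters d_u1..d_uk in metres,
    section_len: length l of each full section (one metre here),
    tip_len:     length l_k of the tree top, with l_k < l.
    """
    # Smalian: each section's volume is the mean of its two end
    # cross-sectional areas (pi/4 * d^2) times the section length.
    sections = sum(
        math.pi / 4.0 * (diameters_m[i] ** 2 + diameters_m[i + 1] ** 2) / 2.0 * section_len
        for i in range(len(diameters_m) - 1)
    )
    # The tree top is treated as a cone: (pi/12) * d_uk^2 * l_k.
    tip = math.pi / 12.0 * diameters_m[-1] ** 2 * tip_len
    return sections + tip
```

For instance, a two-diameter bole of 14 cm and 12 cm (0.14 m and 0.12 m) with a half-metre tip yields a volume on the order of 0.015 m³, consistent with the vubole magnitudes in Table 1.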
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Machine learning modeling approaches</head><p>Using a probabilistic supervised machine learning method such as Gaussian process regression (GPR) <ref type="bibr" target="#b5">[6]</ref> for estimating the under-bark bole volume (vubole) brings significant benefits. This approach incorporates prior knowledge through kernels and provides uncertainty measures for its predictions. Furthermore, it works well on small datasets and is more efficient in low-dimensional spaces, matching the present case study perfectly. Generally, GPR is characterized by the mean and covariance of the prior Gaussian process, along with the kernel that defines the relationship between two observations. In this context, the radial basis function (RBF) kernel was employed <ref type="bibr" target="#b6">[7]</ref>:</p><formula xml:id="formula_1">k(x_i, x_j) = σ_f² · exp(−‖x_i − x_j‖² / (2 · ls²)) ,<label>(2)</label></formula><p>where σ_f² is the signal variance, which controls the overall variance of functions drawn from the Gaussian process, ls is the length scale, which determines how rapidly the correlation between two points diminishes as the distance between them increases, and ‖x_i − x_j‖² is the squared Euclidean distance between x_i and x_j.</p><p>In equation (<ref type="formula" target="#formula_1">2</ref>), both the hyperparameters ls (length scale) and σ_f² (signal variance) are critical to the quality of the resulting model and must be properly optimized. To achieve this, the tree samples were randomly divided into a fitting data set, comprising 70% of the total data, and a testing data set with the remaining 30%. Additionally, the fitting data sets were subjected to k-fold cross-validation with k=5, ensuring that the constructed model's predictive ability is adequate. 
The same data division approach was also applied to the construction of the Random Forest regression model.</p><p>The second non-parametric approach chosen was RFr, selected in part for its ability to bypass the assumptions inherent in standard regression modeling. This technique is recognized as a robust non-parametric, supervised machine learning algorithm, originally proposed by <ref type="bibr" target="#b7">[8]</ref>. The concept behind this approach is that combining multiple models can capture the true structure of the data better than any single one. RFr employs multiple individual models, called decision trees, which are combined into a single model. The goal is to minimize both the variance and the bias of the base model, the decision tree, as much as possible within the system.</p><p>The successful training of the RFr model depends significantly on fine-tuning its hyperparameters, particularly the number of decision trees (ndt), known as learners, and the maximum depth (dmax) of these learners. These hyperparameters are crucial as they govern the complexity of the RFr model. The RFr training utilized the bootstrap aggregation algorithm, commonly known as bagging <ref type="bibr" target="#b7">[8,</ref><ref type="bibr" target="#b8">9]</ref>.</p><p>Both machine learning methodologies were implemented using the scikit-learn library <ref type="bibr" target="#b9">[10]</ref> and the Python programming language <ref type="bibr" target="#b10">[11]</ref>.</p></div>
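The tuning workflow described above (a 70/30 fitting/testing split, 5-fold cross-validation, and an exhaustive grid search over each model's hyperparameters) can be sketched with scikit-learn. The feature matrix and target below are synthetic stand-ins for the tree measurements, and the grid values are illustrative rather than the study's exact ranges.

```python
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-ins for (do0.3, du0.3, do1.3, du1.3, h) and vubole.
rng = np.random.default_rng(0)
X = rng.uniform([10, 8, 8, 7, 6], [22, 19, 19, 17, 11], size=(200, 5))
y = 0.00004 * X[:, 1] ** 2 * X[:, 4] + rng.normal(0, 0.001, 200)

# 70% fitting / 30% testing split, as in the study.
X_fit, X_test, y_fit, y_test = train_test_split(X, y, train_size=0.7, random_state=0)

# GPR: grid over signal variance (sigma_f^2) and length scale (ls), 5-fold CV.
gpr_grid = GridSearchCV(
    GaussianProcessRegressor(normalize_y=True, alpha=1e-5),
    {"kernel": [ConstantKernel(s) * RBF(ls)
                for s in (0.05, 0.5, 1.0) for ls in (1.0, 2.0, 5.0)]},
    cv=5,
)
gpr_grid.fit(X_fit, y_fit)

# RFr (bagging of decision-tree learners): grid over ndt and dmax, 5-fold CV.
rfr_grid = GridSearchCV(
    RandomForestRegressor(random_state=0),
    {"n_estimators": [10, 100, 300], "max_depth": [3, 7, 10]},
    cv=5,
)
rfr_grid.fit(X_fit, y_fit)
print(gpr_grid.best_params_, rfr_grid.best_params_)
```

Each candidate in `GridSearchCV` is scored by 5-fold cross-validation on the fitting set only; the held-out 30% is touched just once, for the final evaluation.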
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Evaluation criteria</head><p>The evaluation criteria crucial for assessing the suitability of the machine learning models used in this study were as follows: a) root mean square error (RMSE), which calculates the square root of the average squared differences between estimated/predicted and observed values; b) the coefficient of determination (R²), which reflects the proportion of variance in the dependent variable that can be explained by the independent variables; c) bias (BIAS), representing the mean difference between estimated/predicted and observed values; and d) relative sum of square errors (RSSE), which is the (%) ratio of the sum of squared errors (SSE) to the sum of the actual values of the under-bark bole volume values. High model performance is indicated by low RMSE, BIAS, and RSSE values, coupled with high R² values.</p></div>
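The four criteria can be computed as follows; this is a sketch, with RSSE taken as the (%) ratio of the sum of squared errors to the sum of the observed volume values, as described above, and the function name and example arrays are illustrative.

```python
import numpy as np

def evaluation_criteria(observed, estimated):
    """RMSE, R^2, BIAS and RSSE (%) for estimated vs. observed values."""
    observed, estimated = np.asarray(observed, float), np.asarray(estimated, float)
    errors = estimated - observed
    rmse = np.sqrt(np.mean(errors ** 2))                 # a) root mean square error
    ss_res = np.sum(errors ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                           # b) coefficient of determination
    bias = np.mean(errors)                               # c) mean difference
    rsse = 100.0 * ss_res / np.sum(observed)             # d) relative sum of square errors (%)
    return {"RMSE": rmse, "R2": r2, "BIAS": bias, "RSSE%": rsse}
```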
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Results</head><p>Given the difficulty of obtaining tree bole diameters at different heights, the input variables to the machine learning systems, whose output variable was the under-bark bole volume (vubole), were the diameters located near the ground and therefore easy to measure, namely (do0.3), (du0.3), (do1.3) and (du1.3), together with the total height (h) of the trees. Moreover, these variables are highly correlated with the (vubole) values and contribute most to their configuration.</p><p>For both the Gaussian process regression and the Random Forest regression models, the required hyperparameters were tuned using the grid-search methodology <ref type="bibr" target="#b11">[12]</ref>, which resulted in the optimal hyperparameter values presented in Table <ref type="table" target="#tab_1">2</ref>. The evaluation criteria for the constructed models are presented in Table <ref type="table" target="#tab_2">3</ref>. As indicated in the table, both models yield similar outcomes. However, the GPR model provides the most accurate and reliable results for both the fitting and testing datasets. The performance of both constructed models was further assessed through 45-degree line plots.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Discussion</head><p>As a Bayesian regression technique, GPR modeling offers a probabilistic approach to inference, enabling the prediction of not just the expected value of a target variable but also the uncertainty associated with that prediction.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 1: GPR model performance associated with its uncertainty</head><p>Offering a probabilistic prediction with a mean and variance provides a natural measure of uncertainty in the predictions. Indicatively, the uncertainty in the under-bark bole volume predictions against the total tree height and the stump diameter (the tree bole diameter located at 0.3 m from the ground) is shown in Figure <ref type="figure">1</ref>. Similar uncertainty plots could be produced for all predictors. This evaluation is particularly useful in forestry, where risk assessment is essential for the effective implementation of sustainable forest management.</p><p>The flexible structure of the Random Forest algorithm helps prevent the serious issue of overfitting and enables the system to handle real-world data, which often include challenges such as high variance, outliers, and missing values. However, it is important to note that the further a predicted value lies from the range of the fitting data, the less reliable that prediction will be.</p></div>
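The predictive mean and its uncertainty band, as visualized in Figure 1, can be obtained from a fitted GPR as sketched below; the single-predictor training data here are hypothetical stand-ins, not the study measurements.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical one-predictor example: total height h (m) vs. vubole (m^3).
rng = np.random.default_rng(1)
X = rng.uniform(6.0, 11.0, size=(80, 1))
y = 0.006 * X[:, 0] + rng.normal(0, 0.002, 80)

gpr = GaussianProcessRegressor(ConstantKernel(0.05) * RBF(1.1),
                               alpha=1e-4, normalize_y=True).fit(X, y)

# Predict on a grid; return_std yields the per-point predictive uncertainty.
X_new = np.linspace(6.0, 11.0, 50).reshape(-1, 1)
mean, std = gpr.predict(X_new, return_std=True)

# An approximate 95% band around the predictive mean, as plotted in Figure 1.
lower, upper = mean - 1.96 * std, mean + 1.96 * std
```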
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Declaration on Generative AI</head><p>The author(s) have not employed any Generative AI tools.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Summary statistics of the observed tree bole diameters, in centimeters, total height, in meters and under bark calculated volumes, in cubic meters</figDesc><table><row><cell cols="2">diam mean</cell><cell>std</cell><cell cols="2">diam mean</cell><cell>std</cell><cell cols="2">diam mean</cell><cell>std</cell><cell cols="2">diam mean</cell><cell>std</cell></row><row><cell>do0.3</cell><cell>16.57</cell><cell>2.76</cell><cell>do3.3</cell><cell>8.88</cell><cell>2.79</cell><cell>do6.3</cell><cell>3.90</cell><cell>2.03</cell><cell>do9.3</cell><cell>1.99</cell><cell>1.55</cell></row><row><cell>du0.3</cell><cell>14.03</cell><cell>2.42</cell><cell>du4.3</cell><cell>8.41</cell><cell>2.59</cell><cell>du6.3</cell><cell>3.64</cell><cell>1.98</cell><cell>du9.3</cell><cell>1.82</cell><cell>1.47</cell></row><row><cell>do1.3</cell><cell>13.67</cell><cell>2.61</cell><cell>do4.3</cell><cell>7.01</cell><cell>2.45</cell><cell>do7.3</cell><cell>3.20</cell><cell>1.59</cell><cell>h</cell><cell>8.17</cell><cell>1.33</cell></row><row><cell>du1.3</cell><cell>12.16</cell><cell>2.36</cell><cell>du4.3</cell><cell>6.65</cell><cell>2.32</cell><cell>du7.3</cell><cell>2.95</cell><cell>1.56</cell><cell>vubole</cell><cell>0.05</cell><cell>0.02</cell></row><row><cell>do2.3</cell><cell>11.28</cell><cell>2.62</cell><cell>do5.3</cell><cell>5.13</cell><cell>2.23</cell><cell>do8.3</cell><cell>2.67</cell><cell>1.48</cell><cell></cell><cell></cell><cell></cell></row><row><cell>du2.3</cell><cell>10.32</cell><cell>2.40</cell><cell>du5.3</cell><cell>4.83</cell><cell>2.15</cell><cell>du8.3</cell><cell>2.44</cell><cell>1.44</cell><cell></cell><cell></cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>Optimal hyperparameters values for both modeling approaches</figDesc><table><row><cell cols="3">Gaussian process regression (GPR)</cell><cell cols="3">Random Forest for regression (RFr)</cell></row><row><cell>hyperparameters</cell><cell>range</cell><cell cols="2">optimal value hyperparameters</cell><cell>range</cell><cell>optimal value</cell></row><row><cell>𝜎 / '</cell><cell>0 -1</cell><cell>0.05</cell><cell>ndt</cell><cell>1 -300</cell><cell>10</cell></row><row><cell>ls</cell><cell>1 -5</cell><cell>1.1</cell><cell>dmax</cell><cell>1 -10</cell><cell>7</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 3</head><label>3</label><figDesc>Evaluation criteria for both the constructed (GPR) and (RFr) modeling approaches, for both fitting and testing data sets</figDesc><table><row><cell></cell><cell>data</cell><cell></cell><cell>criteria</cell><cell></cell><cell></cell></row><row><cell>models</cell><cell>set</cell><cell>RMSE</cell><cell>R²</cell><cell>BIAS</cell><cell>RSSE%</cell></row><row><cell>GPR</cell><cell>fitting</cell><cell>0.0026</cell><cell>0.988</cell><cell>-0.00002</cell><cell>0.0141</cell></row><row><cell></cell><cell>testing</cell><cell>0.0032</cell><cell>0.977</cell><cell>-0.00009</cell><cell>0.0233</cell></row><row><cell>RFr</cell><cell>fitting</cell><cell>0.0028</cell><cell>0.986</cell><cell>-0.00005</cell><cell>0.0163</cell></row><row><cell></cell><cell>testing</cell><cell>0.0038</cell><cell>0.974</cell><cell>-0.00136</cell><cell>0.0319</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">E</forename><surname>Avery</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">E</forename><surname>Burkhart</surname></persName>
		</author>
		<title level="m">Forest Measurements</title>
				<meeting><address><addrLine>New York, NY</addrLine></address></meeting>
		<imprint>
			<publisher>McGraw-Hill</publisher>
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">Applied regression analysis</title>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">R</forename><surname>Draper</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Smith</surname></persName>
		</author>
		<idno type="DOI">10.1002/9781118625590</idno>
		<imprint>
			<date type="published" when="1998">1998</date>
			<publisher>Wiley</publisher>
			<pubPlace>New York NY</pubPlace>
		</imprint>
	</monogr>
	<note>3rd ed</note>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Tree-bark volume prediction via machine learning: A case study based on black alder&apos;s tree-bark production</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">J</forename><surname>Diamantopoulou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Özçelik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Yavuz</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.compag.2018.06.039</idno>
	</analytic>
	<monogr>
		<title level="j">Comput Electron Agric</title>
		<imprint>
			<biblScope unit="volume">151</biblScope>
			<biblScope unit="page" from="431" to="440" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Gaussian process regression-based forest above ground biomass retrieval from simulated L-band NISAR data</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">S</forename><surname>Ghosh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Khati</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kumar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bhattacharya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lavalle</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.jag.2023.103252</idno>
	</analytic>
	<monogr>
		<title level="j">Int J Appl Earth Obs Geoinf</title>
		<imprint>
			<biblScope unit="volume">118</biblScope>
			<biblScope unit="page">103252</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<ptr target="https://filotis.itia.ntua.gr/biotopes/c/AT4011119/" />
		<title level="m">FILOTIS -Database for the Natural Environment of Greece</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">Gaussian Processes for Machine Learning</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">E</forename><surname>Rasmussen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">K I</forename><surname>Williams</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2006">2006</date>
			<publisher>The MIT Press</publisher>
			<pubPlace>Massachusetts</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Kernel Radial Basis Functions</title>
		<author>
			<persName><forename type="first">W</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><forename type="middle">H</forename><surname>Qin</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-540-75999-7_147</idno>
	</analytic>
	<monogr>
		<title level="m">Computational Mechanics</title>
				<meeting><address><addrLine>Berlin, Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Random Forests</title>
		<author>
			<persName><forename type="first">L</forename><surname>Breiman</surname></persName>
		</author>
		<idno type="DOI">10.1023/A:1010933404324</idno>
	</analytic>
	<monogr>
		<title level="j">Machine Learning</title>
		<imprint>
			<biblScope unit="volume">45</biblScope>
			<biblScope unit="page" from="5" to="32" />
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Newer Classification and Regression Techniques: Bagging and Random Forests for Ecological Prediction</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Prasad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">R</forename><surname>Iverson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Liaw</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10021-005-0054-1</idno>
	</analytic>
	<monogr>
		<title level="j">Ecosystems</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="181" to="199" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Scikit-learn: Machine Learning in Python</title>
		<author>
			<persName><forename type="first">F</forename><surname>Pedregosa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Varoquaux</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Gramfort</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Michel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Thirion</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Grisel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Blondel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Prettenhofer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Weiss</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.1201.0490</idno>
	</analytic>
	<monogr>
		<title level="j">J Mach Learn Res</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page" from="2825" to="2830" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<ptr target="http://www.python.org/" />
		<title level="m">Python Software Foundation: Python Documentation</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">On the relationship between classical grid search and probabilistic roadmaps</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Lavalle</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">S</forename><surname>Branicky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">R</forename><surname>Lindemann</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The International Journal of Robotics Research</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="673" to="692" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
