<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">GPU accelerated Monte Carlo sampling for SPDEs</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Nikolay</forename><surname>Shegunov</surname></persName>
							<email>nshegunov@fmi.uni-sofia.bg</email>
							<affiliation key="aff0">
								<orgName type="department">Faculty of Mathematics and Informatics</orgName>
								<orgName type="institution">Sofia University</orgName>
								<address>
									<country key="BG">Bulgaria</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Peter</forename><surname>Armianov</surname></persName>
							<email>parmianov@fmi.uni-sofia.bg</email>
							<affiliation key="aff0">
								<orgName type="department">Faculty of Mathematics and Informatics</orgName>
								<orgName type="institution">Sofia University</orgName>
								<address>
									<country key="BG">Bulgaria</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Atanas</forename><surname>Semerdjiev</surname></persName>
							<email>asemerdjiev@fmi.uni-sofia.bg</email>
							<affiliation key="aff0">
								<orgName type="department">Faculty of Mathematics and Informatics</orgName>
								<orgName type="institution">Sofia University</orgName>
								<address>
									<country key="BG">Bulgaria</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Oleg</forename><surname>Iliev</surname></persName>
							<email>oleg.iliev@itwm.fraunhofer.de</email>
							<affiliation key="aff1">
								<orgName type="institution">ITWM Fraunhofer</orgName>
								<address>
									<settlement>Kaiserslautern</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">GPU accelerated Monte Carlo sampling for SPDEs</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">4CFD03F1DD2295E824474795938CE012</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T06:41+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Monte Carlo sampling methods are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The idea of such algorithms is to introduce randomness to solve problems, even deterministic ones. They are often used in physical and mathematical problems and are most useful when other approaches are difficult or impossible due to limitations such as the cost of performing an experiment or the inability to take direct measurements. The problems typically require solving stochastic partial differential equations (SPDEs), where uncertainty is incorporated into the model, for example as an input parameter or as an initial or boundary condition. Extensive efforts have been devoted to the development of accurate numerical algorithms, so that simulation predictions are reliable in the sense that the numerical errors are well understood and under control for practical problems. Multilevel Monte Carlo (MLMC) is a novel idea: instead of sampling from the true solution, sampling is done at different levels. Such an approach is beneficial in terms of convergence rate. For practical simulations, however, a large number of problems with a huge number of unknowns has to be solved. These computational restrictions naturally lead to challenging parallel algorithms. To overcome some of the limitations, here we consider a parallel implementation of the MLMC algorithm for a model SPDE that uses GPU acceleration for the permeability generation.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Many mathematical models describing industrial problems are subject to uncertainty due to various limitations. Incorporating the uncertainty typically leads to a more accurate representation of the world; however, it comes at the cost of solving a statistical problem. As examples, one can consider saturated flow in the subsurface, or heat conduction in metal matrix composites. Such stochastic models require enormous computational effort, and thereby new fast algorithms to meet that need. In this paper we consider a scalar elliptic SPDE describing single-phase flow through heterogeneous porous media. Although the approach is not limited to this problem, we aim at computing the mean flux through saturated porous media with a prescribed pressure drop and a known distribution of the random coefficients.</p><p>One of the preferred and powerful methods for solving SPDEs is the Multilevel Monte Carlo (MLMC) algorithm. The algorithm exploits a combination of a few expensive computations with plenty of cheap ones to compute expected values at significantly lower computational cost than standard Monte Carlo. A key component is the selection of the different levels. Many different approaches exist. In <ref type="bibr" target="#b0">[1]</ref> the authors use the number of terms of the Karhunen-Loève expansion to define the coarser levels. In <ref type="bibr" target="#b3">[4]</ref>, similarly to our approach, the coarser levels are defined on coarser grids via averaging of the coefficients in the PDE. Here we construct the levels by renormalization; for details we refer to <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b2">3]</ref>. For the generation of the permeability (random field) we use the circulant embedding algorithm <ref type="bibr" target="#b6">[7]</ref>.</p><p>With rapidly advancing computer hardware, methods based on Monte Carlo sampling are of great interest. Such methods are well suited for parallelization and can compute the expected value in reasonable time. For realistic simulations an HPC implementation is usually needed. Here we investigate the possible benefits of generating the permeability on GPUs and study the reduction of the overall MLMC computation time.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Model problem</head><p>In order to test the overall performance of MLMC in combination with a CUDA-generated permeability field, we consider a simple model problem in a unit cube domain: steady-state single-phase flow in a random porous medium. This problem illustrates well the challenges in solving stochastic PDEs.</p><formula xml:id="formula_0">−∇ • [k(x, ω)∇p(x, ω)] = 0 for x ∈ D = (0, 1)^d, ω ∈ Ω<label>(1)</label></formula><p>Subject to the boundary conditions:</p><formula xml:id="formula_1">p|_{x=0} = 1, p|_{x=1} = 0, ∂_n p = 0 on the other boundaries,<label>(2)</label></formula><p>with dimension d ∈ {2, 3}, pressure p, scalar permeability k, and random vector ω. The quantity of interest is the mean (expected value) of the total flux through the unit cube:</p><formula xml:id="formula_2">Q(ω) := ∫_{x=0} k(x, ω) ∂_n p(x, ω) dx<label>(3)</label></formula><p>Both the coefficient k(x, ω) and the solution p(x, ω) are subject to uncertainty, characterized by the random vector ω in a suitably defined probability space.</p><p>Solving this equation can be broken into three sub-problems: generating the permeability (random field), solving the deterministic problem, and reducing the variance with the MLMC method. We briefly discuss each of them.</p></div>
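On a cell-centered grid, the flux functional (3) reduces to a sum over the boundary faces at x = 0. A minimal Python sketch; the helper name and the one-sided discretization are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def total_flux_x0(k, p, h, p_in=1.0):
    """Approximate Q = int_{x=0} k * dp/dn dx on a cell-centered grid.

    k, p : (nx, ny) arrays of cell permeabilities and pressures.
    h    : cell size; the Dirichlet value p_in is imposed at x = 0.
    Hypothetical helper for illustration only.
    """
    # One-sided difference from the boundary value to the first cell
    # centre (distance h/2), weighted by the boundary-cell permeability.
    dpdn = (p_in - p[0, :]) / (h / 2.0)
    return float(np.sum(k[0, :] * dpdn * h))
```

For a uniform permeability and the exact linear pressure profile, this recovers the analytic flux k · Δp.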
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Generating permeability field</head><p>Generating permeability fields is an essential sub-problem in solving the SPDE. Here we consider a practical covariance proposed in <ref type="bibr" target="#b7">[8]</ref>:</p><formula xml:id="formula_3">C(x, y) = σ² exp(−||x − y||_p / λ), p = 2<label>(4)</label></formula><p>where || • ||_p denotes the l_p norm in R^d. The field satisfies:</p><formula xml:id="formula_4">E[K(x, •)] = 0, E[K(x, •) K(y, •)] = C(x − y) = C(y − x) for x, y ∈ D, where K(x, ω) = log(k(x, ω))</formula><p>Several approaches have been developed for generating random permeability fields applicable to flow simulations. Here we use an algorithm based on forward and inverse Fourier transforms of a circulant covariance matrix. A realization of such a field is governed by two parameters: the standard deviation σ and the correlation length λ. More details can be found in <ref type="bibr" target="#b1">[2]</ref> and <ref type="bibr" target="#b6">[7]</ref>.</p></div>
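A minimal 1D sketch of circulant embedding (in the Dietrich–Newsam style) for the exponential covariance above, written in Python with NumPy. The function name, the unit-interval grid, and the log-normal wrapper are illustrative assumptions; the paper works in 2D/3D using FFTW and cuFFT:

```python
import numpy as np

def sample_lognormal_field_1d(n, sigma=2.0, lam=0.2, rng=None):
    """Draw one realization of k(x) = exp(K(x)) on n grid points in [0, 1],
    where K is Gaussian with covariance C(r) = sigma^2 exp(-|r|/lam),
    via circulant embedding.  1D illustrative sketch only."""
    rng = np.random.default_rng() if rng is None else rng
    h = 1.0 / (n - 1)
    m = 2 * (n - 1)                      # size of the circulant embedding
    # First column of the circulant matrix: wrap the covariance around.
    r = np.minimum(np.arange(m), m - np.arange(m)) * h
    c = sigma**2 * np.exp(-r / lam)
    e = np.fft.fft(c).real               # eigenvalues of the embedding
    assert (e > -1e-10).all(), "embedding not positive semi-definite"
    # One FFT of a complex Gaussian vector yields two independent samples
    # (real and imaginary parts); we keep the real part.
    xi = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    z = np.fft.fft(np.sqrt(np.maximum(e, 0.0)) * xi) / np.sqrt(m)
    K = z.real[:n]
    return np.exp(K)
```

The cost is dominated by FFTs of length m, which is exactly the part the paper offloads to the GPU.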
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Solving the deterministic problem</head><p>The literature provides different numerical schemes for solving PDEs. In <ref type="bibr" target="#b1">[2]</ref>, a multiscale finite element method is used. Here, for solving the elliptic PDE corresponding to each realization of the permeability field, we use a finite volume method on a cell-centered grid. This method is mass conservative. More details can be found in <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b2">3]</ref>.</p></div>
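A minimal 1D analogue of the cell-centered finite volume scheme, assuming harmonic averaging of the permeability at cell faces. This dense sketch is illustrative only; the paper's solver works on 2D/3D grids with AMG-preconditioned conjugate gradients:

```python
import numpy as np

def solve_darcy_1d(k, p_left=1.0, p_right=0.0):
    """Cell-centered finite volume solve of -(k p')' = 0 on (0, 1) with
    Dirichlet pressures at both ends; returns cell pressures and the flux.
    Minimal 1D sketch, not the paper's 2D/3D implementation."""
    n = len(k)
    h = 1.0 / n
    # Face transmissibilities: harmonic average of neighbouring cells,
    # one-sided (half-cell) values at the two Dirichlet boundaries.
    t = np.empty(n + 1)
    t[1:-1] = 2.0 * k[:-1] * k[1:] / (k[:-1] + k[1:]) / h
    t[0] = 2.0 * k[0] / h
    t[-1] = 2.0 * k[-1] / h
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        A[i, i] = t[i] + t[i + 1]
        if i > 0:
            A[i, i - 1] = -t[i]
        if i < n - 1:
            A[i, i + 1] = -t[i + 1]
    b[0] = t[0] * p_left
    b[-1] = t[-1] * p_right
    p = np.linalg.solve(A, b)
    flux = t[0] * (p_left - p[0])        # total flux through x = 0
    return p, flux
```

In 1D this discretization reproduces the exact flux 1/Σ(h/k_i), i.e. the harmonic mean of the layer permeabilities, which is one reason the cell-centered scheme is attractive for strongly heterogeneous coefficients.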
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Variance reduction</head><p>We briefly recall the idea proposed in <ref type="bibr" target="#b0">[1]</ref>. Let {M_l : l = 0 . . . L} ⊂ N be an increasing sequence of numbers called levels, with corresponding quantities {Q_{M_l}}_{l=0}^{L}, and let s ≥ 2 be a coarsening factor, such that</p><formula xml:id="formula_5">M_l = s M_{l−1} for l = 1 . . . L. Defining Y_l = Q_{M_l} − Q_{M_{l−1}} and setting Y_0 = Q_{M_0},</formula><p>we can use a telescopic sum to write the following identity for the expected value</p><formula xml:id="formula_6">E[Q_M] = E[Q_{M_0}] + Σ_{l=1}^{L} E[Q_{M_l} − Q_{M_{l−1}}] = Σ_{l=0}^{L} E[Y_l]<label>(5)</label></formula><p>The expectation on the finest level equals the expectation on the coarsest level plus a sum of corrections, i.e. differences of the expectations on each pair of consecutive levels. The terms in (5) are approximated using standard independent MC estimators, with N_l samples. For the mean square error we have:</p><formula xml:id="formula_7">e(Q^{ML}_{M,N})² = E[(Q^{ML}_{M,N} − E[Q])²] = Σ_{l=0}^{L} N_l^{−1} V[Y_l] + (E[Q_M] − E[Q])²<label>(6)</label></formula><p>Our goal is to have:</p><formula xml:id="formula_8">e(Q^{ML}_{M,N})² = Σ_{l=0}^{L} N_l^{−1} V[Y_l] + (E[Q_M] − E[Q])² ≤ 2ε²<label>(7)</label></formula><p>Denote</p><formula xml:id="formula_9">v_l = V[Y_l],</formula><p>let t_l be the mean time for computing the difference Y_l once, and let T = Σ_{l=0}^{L} n_l t_l be the total computation time. Minimizing T under the above constraint with Lagrange multipliers and rounding to integer values gives:</p><formula xml:id="formula_10">n_l = ⌈α √(v_l / t_l)⌉ with Lagrange multiplier α = ε^{−2} Σ_{l=0}^{L} √(v_l t_l)<label>(8)</label></formula><p>To define the levels, the resolution of the discretization is used, such that the numbers of square cells for d = 2 and of cubic cells for d = 3 are exact powers of 2. Thus on the finer level we have 4 times more cells than on the coarser level for d = 2, and 8 times more for d = 3. 
To approximate the random field on a coarser level, we employ a heuristic technique which combines every 4 cells in 2D into one by a combination of simple arithmetic, geometric, and harmonic averages. For details we refer to <ref type="bibr" target="#b2">[3]</ref>.</p></div>
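The sample allocation (8) can be sketched directly. A small Python helper, assuming the sampling-error budget is ε² (half of the 2ε² target in (7), the other half being left for the bias term); the function name is illustrative:

```python
import math

def mlmc_samples(v, t, eps):
    """Per-level sample counts n_l minimizing the total cost sum(n_l * t_l)
    subject to the sampling-error constraint sum(v_l / n_l) <= eps^2, cf. (8).

    v[l] : estimated variance of the level difference Y_l.
    t[l] : mean time for computing Y_l once.
    """
    # Lagrange multiplier alpha = eps^{-2} * sum_l sqrt(v_l * t_l).
    alpha = sum(math.sqrt(vl * tl) for vl, tl in zip(v, t)) / eps**2
    # n_l = ceil(alpha * sqrt(v_l / t_l)), at least one sample per level.
    return [max(1, math.ceil(alpha * math.sqrt(vl / tl)))
            for vl, tl in zip(v, t)]
```

Note the qualitative behaviour: levels with cheap samples (small t_l) or large variance receive more samples, which is exactly why most of the work lands on the coarse grids.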
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Simulation results</head><p>The computational time for generating 100 permeability fields with σ = 2.0 and λ = 0.2 is shown in Figure 2. The underlying 2D grid has approximately 1.7 · 10^7 cells. As can be observed, the speedup achieved using only CPU cores for all steps of the computation is significant up to around 8 cores; at that point the computation is approximately 6.755 times faster than on a single core. With more than 8 cores the efficiency gradually decreases, and with 24 cores the achieved speedup is approximately 15. The GPU generation is much faster: with a single GPU the generation time is comparable to that of 11 CPU cores, and with 2 GPUs to approximately 22 CPU cores. This is expected, since permeability generation is mainly a forward Fourier transform of a matrix, followed by an element-wise multiplication with random numbers and an inverse Fourier transform. Tables <ref type="table">1</ref> and 2 compare a 3-level MLMC implementation using only CPUs with an implementation that uses the GPU for permeability generation. The presented results are averaged over 10 runs of the algorithm. The fine grid is of size 2^10 × 2^10 and the permeability generation parameters are σ = 1.5, λ = 0.1. Each problem is computed by its own CPU core and all cores share the GPU, so at a given moment of the execution 12 or 24 concurrent calls to the GPU may exist. This is a more taxing situation for the GPU than a distribution in which a group of processes solves a single problem. In Table 1 one can observe that MLMC on a single GPU and 12 CPU cores is faster than the same algorithm using only CPU cores, with the generation step on the different levels approximately 2 times faster, while the same MLMC implementation on a single GPU and 24 CPU cores is overall slower than the execution on the same number of CPU cores alone. 
The performance is significantly improved by the introduction of a second GPU: the generation times are notably lower than in the single-GPU case. The implementation of the algorithm used for these tests is written in C++. The method for solving the PDE, conjugate gradient preconditioned with AMG, is provided by the DUNE library <ref type="bibr" target="#b8">[9]</ref>. For the circulant embedding algorithm, the FFTW library is used for generation on the CPU, and NVIDIA's cuFFT library for generation on the GPU. The Multilevel Monte Carlo algorithm is implemented with pure MPI. All tests are performed on the HybriLIT education and testing cluster, Dubna, Russia, on a GPU node with two NVIDIA Tesla K80 GPUs and Intel Xeon E5-2695 v2 processors.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Conclusions</head><p>Generating random fields using modern GPU-accelerated computing is a promising possibility, which can decrease the computational time of the generation step of the MLMC algorithm if the task is small enough to fit in the GPU memory. When the task is larger, however, the execution time using the GPU can exceed that using only CPU cores. The idea of GPU acceleration is fairly new in the area of scientific computing and there are not many libraries implementing a GPU-accelerated approach. Furthermore, HPC systems with GPU-enabled nodes are rather expensive, and the number of large scientific computing clusters with GPU capabilities is relatively small. On the other hand, in recent years some large commercial clusters for GPU computing have been built, inspired by the blockchain and neural network trends. Solving large scientific problems on such clusters is an interesting subject for further investigation.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Generated permeability and corresponding solution</figDesc><graphic coords="5,41.75,148.57,181.55,149.78" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head></head><label></label><figDesc>Avg. time over 100 solves (log scale)</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Average time for generating a single permeability field, over 100 samples.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 2 :</head><label>2</label><figDesc>MLMC simulation, on 24 cores</figDesc><table><row><cell>#GPU</cell><cell>E[Q]</cell><cell>Time[s]</cell><cell>RMS</cell><cell>t_gen_l0 [s]</cell><cell>t_gen_l1 [s]</cell><cell>t_gen_l2 [s]</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Multilevel Monte Carlo Methods and Applications to Elliptic PDEs with Random Coefficients</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">A</forename><surname>Cliffe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">B</forename><surname>Giles</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Scheichl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">L</forename><surname>Teckentrup</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computing and Visualization in Science</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="3" to="15" />
			<date type="published" when="2011">2011</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Uncertainty Quantification for Porous Media Flow Using Multilevel Monte Carlo</title>
		<author>
			<persName><forename type="first">J</forename><surname>Mohring</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Milk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ngo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Klein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Iliev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ohlberger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Bastian</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Large-Scale Scientific Computing</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="145" to="152" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Renormalization Based MLMC Method for Scalar Elliptic SPDE</title>
		<author>
			<persName><forename type="first">O</forename><surname>Iliev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Mohring</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Shegunov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Large-Scale Scientific Computing</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="145" to="152" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">A study of stochastic FEM method for porous media flow problem</title>
		<author>
			<persName><forename type="first">R</forename><surname>Blaheta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Béreš</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Domesová</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Applied Mathematics in Engineering and Reliability</title>
				<editor>
			<persName><forename type="first">R</forename><surname>Bris</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Dao</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="281" to="289" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A numerical comparison between two upscaling techniques: non-local inverse based scaling and simplified renormalization</title>
		<author>
			<persName><forename type="first">I</forename><surname>Lunati</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Bernard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Giudici</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Parravicini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Ponzini</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in Water Resources</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="page" from="913" to="929" />
			<date type="published" when="2001">2001</date>
			<publisher>Elsevier</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Calculating equivalent permeability: a review</title>
		<author>
			<persName><forename type="first">P</forename><surname>Renard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>De Marsily</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in Water Resources</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="253" to="278" />
			<date type="published" when="1997">1997</date>
			<publisher>Elsevier</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Quasi-Monte Carlo methods for elliptic PDEs with random coefficients and applications</title>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">G</forename><surname>Graham</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">Y</forename><surname>Kuo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Nuyens</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Scheichl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">H</forename><surname>Sloan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Computational Physics</title>
		<imprint>
			<biblScope unit="volume">230</biblScope>
			<biblScope unit="page" from="3668" to="3694" />
			<date type="published" when="2011">2011</date>
			<publisher>Elsevier</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Analysis of the spatial structure of properties of selected aquifers</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">J</forename><surname>Hoeksema</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">K</forename><surname>Kitanidis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Water Resources Research</title>
		<imprint>
			<biblScope unit="volume">21</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="563" to="572" />
			<date type="published" when="1985">1985</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">A Generic Grid Interface for Parallel and Adaptive Scientific Computing. Part I: Abstract Framework</title>
		<author>
			<persName><forename type="first">P</forename><surname>Bastian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Blatt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Dedner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Engwer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Klöfkorn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ohlberger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Sander</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computing</title>
		<imprint>
			<biblScope unit="volume">82</biblScope>
			<biblScope unit="issue">2-3</biblScope>
			<biblScope unit="page" from="103" to="119" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
