<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Active Learning with Physics-Informed Graph Neural Networks on Unstructured Meshes</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Jens</forename><surname>Decke</surname></persName>
							<email>jdecke@uni-kassel.de</email>
							<affiliation key="aff0">
								<orgName type="department">Intelligent Embedded Systems</orgName>
								<orgName type="institution">University of Kassel</orgName>
								<address>
									<postCode>34121</postCode>
									<settlement>Kassel</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Alexander</forename><surname>Heinen</surname></persName>
							<email>alexander.heinen@uni-kassel.de</email>
							<affiliation key="aff0">
								<orgName type="department">Intelligent Embedded Systems</orgName>
								<orgName type="institution">University of Kassel</orgName>
								<address>
									<postCode>34121</postCode>
									<settlement>Kassel</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Bernhard</forename><surname>Sick</surname></persName>
							<email>bsick@uni-kassel.de</email>
							<affiliation key="aff0">
								<orgName type="department">Intelligent Embedded Systems</orgName>
								<orgName type="institution">University of Kassel</orgName>
								<address>
									<postCode>34121</postCode>
									<settlement>Kassel</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Christian</forename><surname>Gruhl</surname></persName>
							<email>cgruhl@uni-kassel.de</email>
							<affiliation key="aff0">
								<orgName type="department">Intelligent Embedded Systems</orgName>
								<orgName type="institution">University of Kassel</orgName>
								<address>
									<postCode>34121</postCode>
									<settlement>Kassel</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Active Learning with Physics-Informed Graph Neural Networks on Unstructured Meshes</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">15660A6C661361AB617E96C7253F66E3</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:23+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Physics-Informed Neural Network</term>
					<term>Graph Neural Network</term>
					<term>Active Learning</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper investigates the use of Physics-Informed Neural Networks (PINNs) in active learning cycles. We define two scenarios: one initially unsupervised and the other initially supervised. PINNs emphasize the integration of physical laws into neural networks to improve the predictive performance of vanilla neural networks and to enhance the efficiency of traditional methods for solving partial differential equations (PDEs). Key contributions include the adaptation of an existing computational framework to enable the use of Graph Neural Networks for solving problems that require the calculation of gradients on unstructured triangle meshes, a query strategy focusing on the physical loss, and a comparative analysis of this strategy against random sampling across both defined scenarios. This work establishes a foundation for future research aimed at expanding the application of Physics-Informed Graph Neural Networks (PIGNNs) using active learning and addressing real-world problems in fluid dynamics and electrodynamics.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Solving partial differential equations (PDEs) is of paramount interest in numerous fields of science and engineering, as they form the foundation for modeling a wide range of physical phenomena. PDEs describe the behavior of physical systems over space and time, governing processes such as heat transfer <ref type="bibr" target="#b0">[1]</ref>, fluid dynamics <ref type="bibr" target="#b1">[2]</ref>, structural mechanics <ref type="bibr" target="#b2">[3]</ref>, and electromagnetics <ref type="bibr" target="#b3">[4]</ref>. Accurate and efficient solutions to PDEs are crucial for advancing research and development in these areas, making them a focal point of computational and analytical studies <ref type="bibr" target="#b4">[5]</ref>. Traditional methods for solving PDEs, such as the finite element (FEM), finite difference, and finite volume methods, can be computationally intensive, especially for high-dimensional problems and complex geometries. In recent years, Physics-Informed Neural Networks (PINNs) have emerged as a powerful alternative computational framework that integrates machine learning with fundamental physical laws to address these challenges <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7]</ref>.</p><p>By embedding physical constraints directly into the neural network's loss function 𝐿 𝑡𝑜𝑡𝑎𝑙 , cf. Eq. ( <ref type="formula" target="#formula_0">1</ref>), PINNs utilize both a data loss 𝐿 𝑑𝑎𝑡𝑎 , cf. Eq. ( <ref type="formula" target="#formula_1">2</ref>), and a physics loss 𝐿 𝑝𝑑𝑒 , cf. Eq. ( <ref type="formula" target="#formula_3">3</ref>). Here, 𝜆 is the weighting factor for the data component of the total loss and 𝑁 is the number of mesh vertices.</p><formula xml:id="formula_0">𝐿 𝑡𝑜𝑡𝑎𝑙 = 𝜆 ⋅ 𝐿 𝑑𝑎𝑡𝑎 + 𝐿 𝑝𝑑𝑒 ,<label>(1)</label></formula><formula xml:id="formula_1">𝐿 𝑑𝑎𝑡𝑎 = (1/𝑁) ∑ 𝑁 𝑖=1 (𝑢 𝑖 − 𝑢 𝑖,𝑡𝑟𝑢𝑒 )² ,<label>(2)</label></formula><formula xml:id="formula_3">𝐿 𝑝𝑑𝑒 = 𝑅(𝑃𝐷𝐸)<label>(3)</label></formula><p>PINNs offer several advantages over traditional methods and ensure that the solutions are not only data-consistent but also physically accurate. Additionally, PINNs can naturally incorporate multiphysics problems and seamlessly handle high-dimensional spaces, providing a flexible and efficient approach to solving complex PDEs. Fig. <ref type="figure" target="#fig_0">1</ref> depicts an active learning (AL) cycle with a PINN as the Model. The queries from the Selector are directed towards an Oracle, which in our case is a FEM simulation. The AL cycle uses Eq. ( <ref type="formula" target="#formula_0">1</ref>), where 𝐿 𝑑𝑎𝑡𝑎 measures the mean squared error between a predicted 𝑢 and a true 𝑢 𝑡𝑟𝑢𝑒 solution variable (for instance, the prediction of the electric potential) and is therefore a supervised term, while the physics loss 𝐿 𝑝𝑑𝑒 , Eq. ( <ref type="formula" target="#formula_3">3</ref>), corresponds to the residual 𝑅(𝑃𝐷𝐸) and ensures adherence to the PDE. This loss term operates solely on the predicted solution variable 𝑢 and is therefore unsupervised. This integration allows PINNs to handle sparse data effectively, making them particularly useful in real-world applications where data is limited <ref type="bibr" target="#b5">[6]</ref>; a minimal code sketch of this composite loss is given below.</p><p>We use this AL cycle to train the PINN starting from two different initial states. 
In Scenario U, the model is initially trained completely unsupervised, using the physics-informed loss, Eq. (3), only; in the following iterations, data provided by the oracle is used with 𝐿 𝑡𝑜𝑡𝑎𝑙 on the additionally acquired samples to support the unsupervised training. In Scenario S, we use ground-truth data for supervised training right from the start and therefore use 𝐿 𝑡𝑜𝑡𝑎𝑙 as the loss function.</p><p>The choice of mesh plays a crucial role in the implementation of PINNs, as it defines the discretization of the problem domain. Structured meshes, with their equidistantly distributed cells, offer computational efficiency and simplicity by enabling the straightforward application of automatic differentiation algorithms to compute spatial gradients <ref type="bibr" target="#b7">[8]</ref>. In contrast, unstructured meshes provide the flexibility to handle complex geometries and allow for adaptivity in regions requiring higher resolution. Graph Neural Networks (GNNs) are ideal for solving PDEs on unstructured meshes because they adeptly handle the complex, irregular topologies of these meshes by learning node relationships directly. However, the computation of spatial gradients on unstructured meshes is more complex due to the irregular neighborhoods of their triangular cells. The design of the mesh significantly affects the distribution of data points, the precision of differential operator evaluations, and the enforcement of boundary conditions. For this reason, we combine a GNN with a physics-informed loss function to develop a Physics-Informed Graph Neural Network (PIGNN). To this end, we adapt an existing TensorFlow library to enable the computation of field gradients on unstructured meshes in our PyTorch model. In summary, PINNs represent a sophisticated method for solving PDEs by integrating neural networks with physical laws. Guiding AL by physical loss residuals, rather than explicit uncertainty quantification, allows for a straightforward yet effective refinement process. Strategic mesh design further augments the model, making PIGNNs a versatile tool for a wide range of applications in science and engineering.</p></div>
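<div xmlns="http://www.tei-c.org/ns/1.0"><p>To make the composition of the loss in Eqs. (1)-(3) concrete, the following minimal PyTorch sketch combines the supervised data term and the unsupervised physics term. It is an illustration under stated assumptions rather than the exact implementation used in this work: the function name and arguments are hypothetical, and the argument pde_residual stands in for the mesh-based residual evaluation described in Section 3.</p><p>
import torch

def total_loss(u_pred, u_true, pde_residual, lam=1.0):
    # Eq. (2): mean squared error over the N mesh vertices (supervised part).
    l_data = torch.mean((u_pred - u_true) ** 2)
    # Eq. (3): physics loss derived from the PDE residual R(PDE)
    # (unsupervised part); here reduced to a mean of squared residuals.
    l_pde = torch.mean(pde_residual ** 2)
    # Eq. (1): weighted combination; in the purely unsupervised setting
    # (initial training of Scenario U) only l_pde is optimized.
    return lam * l_data + l_pde
</p></div>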
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Contributions</head><p>1. We adapt an existing TensorFlow implementation to calculate field gradients on two-dimensional triangle meshes for our PyTorch model. We use this implementation to build a physical loss function representing the Poisson equation with Dirichlet boundary conditions on an unstructured mesh. 2. We propose a simple yet effective query strategy utilizing the physical loss function. 3. We develop and evaluate two distinct PINN-based active learning scenarios, initially unsupervised and initially supervised, comparing our query strategy with random sampling.</p><p>The remainder of this article is structured as follows: in Section 2 we summarize the related work before we introduce our methodology in Section 3. Our preliminary results are presented in Section 4. The article concludes with a summary of our findings and an outlook for future work in Section 5.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Work</head><p>In this section, we review related work on GNNs and PINNs as well as on AL and PINNs.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>GNNs and PINNs:</head><p>Solving mesh-based PDEs with neural networks is an increasingly active topic of research. Typical data-driven solution methods come from the fields of computer vision and graph-based learning <ref type="bibr" target="#b1">[2]</ref>. However, these methods lack information about the underlying physics of the problems at hand. Initial studies have demonstrated that combining GNNs and PINNs yields excellent results in various scientific and engineering applications. GNNs excel at processing data represented as graphs, which is particularly useful for handling complex relationships in unstructured meshes <ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b9">10]</ref>. To leverage PINNs on unstructured meshes, an existing package <ref type="bibr" target="#b10">[11]</ref>, initially developed for TensorFlow, provides the required gradient computations. In this way, the capabilities of GNNs can be effectively utilized to design PINNs that can solve equations containing field gradients.</p><p>AL and PINNs: AL for regression tasks is highly effective in reducing the computational load associated with simulating PDEs. By strategically selecting the most informative samples for extensive simulation, AL can significantly enhance efficiency <ref type="bibr" target="#b11">[12,</ref><ref type="bibr" target="#b12">13]</ref>. However, for specific applications like design optimization, where the goal is to systematically identify the optimal design parameters that satisfy specified performance criteria, it is essential to customize the query strategies. This customization ensures that iterative algorithms effectively find the best design with minimal PDE evaluations, aligning the AL process with the optimization objectives and constraints of the physical system described by PDEs <ref type="bibr" target="#b13">[14]</ref>.</p><p>The idea of combining PINNs with AL is gaining increasing attention. Recent works have taken initial steps in this direction, employing uncertainty sampling via Monte Carlo dropout <ref type="bibr" target="#b14">[15]</ref> to select informative samples. Another study proposed an adaptive sampling strategy based on Christoffel functions <ref type="bibr" target="#b15">[16]</ref>. In contrast to these approaches, our work focuses exclusively on a score-sampling strategy based on the physical loss.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Methodology</head><p>Our methodology is structured as follows: first, we introduce the data derived from the Poisson equation, which is a second-order PDE. Subsequently, we present our model, query strategy, and oracle. Finally, we present our experimental setup.</p><p>Data: As dataset, we use the charge density input array, the FEM-simulated solutions of the Poisson equation, the mesh featuring a circular bounded domain (Ω ⊂ ℝ 𝑑 ), and the associated edge indices. As input scalar field 𝑓 we use a random distribution of circular areas with randomly chosen radii. Although the Poisson equation can be applied to a variety of physics problems, our goal is to calculate the electric potential field 𝑢 of a given constant charge density distribution 𝑓, represented by the circular areas, as expressed in Eq. ( <ref type="formula" target="#formula_4">4</ref>). Here Δ represents the Laplacian operator:</p><formula xml:id="formula_4">−Δ𝑢 = 𝑓 in Ω, 𝑢 = 0 on 𝜕Ω<label>(4)</label></formula><p>In this equation, 𝜕Ω denotes the boundary of the domain Ω. In Fig. <ref type="figure" target="#fig_1">2</ref>, the input features (Fig. <ref type="figure" target="#fig_1">2a</ref>) and the ground-truth solution (Fig. <ref type="figure" target="#fig_1">2b</ref>) of a random sample are illustrated as an example.</p><p>As illustrated in Fig. <ref type="figure" target="#fig_1">2c</ref>, we employ an unstructured triangular mesh to discretize the domain. This type of mesh allows us to accurately capture the geometry and boundary conditions of complex domains. The physical loss 𝐿 𝑃𝐷𝐸 of the Poisson equation (Eq. ( <ref type="formula" target="#formula_4">4</ref>)) is defined in Eq. ( <ref type="formula" target="#formula_5">5</ref>):</p><formula xml:id="formula_5">𝐿 𝑃𝐷𝐸 = 𝑅(𝑃𝐷𝐸) = { ‖Δ𝑢 + 𝑓‖² for 𝑢 in Ω; ‖𝑢‖² for 𝑢 on 𝜕Ω<label>(5)</label></formula><p>To compute this loss, it is necessary to obtain the second spatial derivative, indicated by the Laplace operator. This computation requires considering the spatial dependencies of the mesh cells. While the Automatic Differentiation (AD) algorithm <ref type="bibr" target="#b7">[8,</ref><ref type="bibr" target="#b16">17]</ref> is typically used for uniform and structured meshes, it cannot be applied to the unstructured meshes used in our study, because it struggles to propagate derivatives efficiently through their complex and irregular connections. Therefore, specialized techniques are needed to handle the unstructured nature of the mesh and accurately compute the required gradients for the physical loss.</p><p>Model: We utilize our PIGNN to efficiently handle the intricate geometries of the domain. The GNN's structure is particularly well-suited for capturing the relationships and dependencies within unstructured data. As GNN type, we chose six Chebyshev spectral graph convolutional (ChebConv) layers as the main model and two feed-forward layers as encoder and decoder. The ChebConv layer's 𝑘-hop convolutional operator aggregates information from vertices within a radius of 𝑘 hops of the central vertex, in contrast to the more popular 1-hop graph convolutional layers, which only take directly connected nodes into account. Using a 𝑘-factor of six allows our model to recognize larger structures and helps to minimize the prediction error. 
To enhance the model's capability in dealing with complex mesh geometries, we integrate it with the MeshGradientPy package <ref type="bibr" target="#b10">[11]</ref>, which computes field gradient estimates on every cell based on linear interpolation and then uses an averaging method to obtain gradient values on vertices. This integration is crucial for accurately resolving the Laplacian, as specified in Eq. ( <ref type="formula" target="#formula_5">5</ref>). By doing so, we can effectively calculate the unsupervised physical loss 𝐿 𝑃𝐷𝐸 , ensuring that the model adheres to the underlying physical laws governing the problem domain. Since the package was developed for TensorFlow, we adapted the implementation for integration with PyTorch.</p><p>Query Strategy: During inference, we can compute the physics residuals 𝑅(𝑃𝐷𝐸) without needing ground-truth values. These residuals are derived from the physical loss 𝐿 𝑃𝐷𝐸 , highlighting samples for which the PIGNN's predictions deviate from the expected physical behavior. To improve the performance of our PIGNNs, we employ a strategy that leverages the physical loss 𝐿 𝑃𝐷𝐸 during inference to guide AL and retraining. Our query strategy is given in Eq. ( <ref type="formula" target="#formula_6">6</ref>); a minimal code sketch of this selection step is given at the end of this section. Let 𝑆 be the set of all samples 𝑥 that are inferred, and 𝑇 be a subset of 𝑆 containing the 𝑛 samples with the highest 𝐿 𝑝𝑑𝑒 values. The subset 𝑇 is forwarded to the Oracle for target value acquisition.</p><formula xml:id="formula_6">𝑇 = {𝑥 ∈ 𝑆 | 𝐿 𝑝𝑑𝑒 (𝑥) ∈ Top 𝑛 (𝐿 𝑝𝑑𝑒 )}<label>(6)</label></formula><p>In contrast to uncertainty sampling in AL, this strategy works by evaluating the physical residuals, which quantify how well the predicted solution variable 𝑢 adheres to the governing physical laws; thus, no additional uncertainty estimation method is required. By identifying samples where the model's predictions are less reliable, we can target specific areas for model improvement. The advantage of this approach is that we can quantify the physical loss in an unsupervised manner, thereby eliminating the need for costly epistemic uncertainty quantification methods <ref type="bibr" target="#b17">[18]</ref>.</p><p>This unsupervised quantification of the physical loss simplifies the AL process, allowing the model to autonomously identify and focus on regions with high residuals. These high-residual areas indicate where the model's predictions are most inaccurate, guiding the addition of new data points or retraining efforts to these critical areas. This method not only streamlines the training process but also ensures robust model enhancement by continuously refining the model based on its internal assessments of physical law adherence. This approach is particularly valuable in scenarios where obtaining ground-truth data is expensive or impractical, as it maximizes the use of available information to improve model performance and reliability.</p><p>Oracle: Focusing on samples with high physical residuals, the Oracle generates additional data in these regions, thereby improving the model's performance. The Model uses its internal physics-based evaluations to guide its learning process, leveraging both the supervised and unsupervised capabilities of the PIGNN to ensure its predictions remain physically consistent. The Selector identifies high-residual samples and the Oracle provides the corresponding true values, which are then included in the training of the Model for fine-tuning. 
This active interaction between the Oracle and the Model allows for targeted improvements in areas where the model's predictions are less reliable, enhancing the model's performance cost-effectively.</p><p>Experimental Setup: Our experimental setup is designed to evaluate two distinct scenarios and is depicted in Fig. <ref type="figure" target="#fig_0">1:</ref> In Scenario U, we start with a pool of 1500 samples with ground-truth solution data determined from FEM simulations (oracle). The PIGNN is initially trained on 600 randomly selected samples in an unsupervised manner using the physics-based loss function 𝐿 𝑝𝑑𝑒 (cf. Eq. ( <ref type="formula" target="#formula_3">3</ref>)) only; therefore, no ground-truth data is provided. After this initial training phase, the model is evaluated on the remaining 900 samples, calculating physics residuals to identify the 60 samples with the highest residuals (cf. Eq. ( <ref type="formula" target="#formula_6">6</ref>)). The ground-truth values for these high-residual samples are then obtained from the Oracle and added to the training set, enabling the use of the total loss Eq. ( <ref type="formula" target="#formula_0">1</ref>) on these additionally acquired samples. This iterative process of identifying and adding 60 high-residual samples is repeated for five AL cycles. This scenario is termed unsupervised since the majority of the training is based on the unsupervised physical loss (cf. Eq. ( <ref type="formula" target="#formula_3">3</ref>)) only, except for the samples added by the oracle.</p><p>Scenario S is a supervised scenario; here, the ground-truth data of the initial 600 randomly selected samples is used from the start. The total loss 𝐿 𝑡𝑜𝑡𝑎𝑙 (cf. Eq. ( <ref type="formula" target="#formula_0">1</ref>)) is applied to both the initial samples and the samples acquired over the five iterations, which are provided by the oracle.</p><p>For both scenarios, after each iteration, the PIGNN is tested on a separate test dataset of 1500 samples to evaluate its prediction performance and adherence to physical laws. Additionally, we compare these methods to a random selection strategy, where 60 random samples are acquired for the training set in each iteration. This comparison assesses the efficiency of the proposed selection strategy guided by the physical loss 𝐿 𝑝𝑑𝑒 .</p></div>
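<div xmlns="http://www.tei-c.org/ns/1.0"><p>To illustrate the two central ingredients of this methodology, we provide two short, self-contained sketches in PyTorch. They are illustrations under stated assumptions rather than the exact implementation: variable and function names are our own, the unweighted vertex averaging is a simplification of the MeshGradientPy-style gradient estimation described above, and the helpers in the second sketch (pde_score, oracle, train_fn) are hypothetical placeholders. The first sketch estimates per-cell gradients by linear interpolation, averages them onto vertices, applies the operator twice to obtain the Laplacian, and evaluates the residual of Eq. (5).</p><p>
import torch

def cell_gradients(verts, tris, u):
    # verts: (V, 2) vertex coordinates, tris: (T, 3) vertex indices,
    # u: (V,) scalar field; returns the constant gradient per triangle (T, 2).
    p1, p2, p3 = verts[tris[:, 0]], verts[tris[:, 1]], verts[tris[:, 2]]
    u1, u2, u3 = u[tris[:, 0]], u[tris[:, 1]], u[tris[:, 2]]
    # Linear interpolation per cell: solve [p2-p1; p3-p1] @ g = [u2-u1; u3-u1].
    A = torch.stack([p2 - p1, p3 - p1], dim=1)                 # (T, 2, 2)
    b = torch.stack([u2 - u1, u3 - u1], dim=1).unsqueeze(-1)   # (T, 2, 1)
    return torch.linalg.solve(A, b).squeeze(-1)                # (T, 2)

def vertex_gradients(verts, tris, u):
    # Average the per-cell gradients onto the incident vertices (unweighted).
    g_cell = cell_gradients(verts, tris, u)
    g_sum = torch.zeros(verts.shape[0], 2)
    count = torch.zeros(verts.shape[0], 1)
    for k in range(3):
        g_sum.index_add_(0, tris[:, k], g_cell)
        count.index_add_(0, tris[:, k], torch.ones(tris.shape[0], 1))
    return g_sum / count                                       # (V, 2)

def poisson_physics_loss(verts, tris, u, f, boundary_mask):
    # Apply the gradient estimate twice to obtain the Laplacian per vertex.
    g = vertex_gradients(verts, tris, u)
    lap = (vertex_gradients(verts, tris, g[:, 0])[:, 0]
           + vertex_gradients(verts, tris, g[:, 1])[:, 1])
    # Eq. (5): residual is Δu + f inside Ω and u on the boundary ∂Ω.
    residual = torch.where(boundary_mask, u, lap + f)
    return torch.mean(residual ** 2)
</p><p>The second sketch shows the Top-𝑛 selection of Eq. (6) and one AL cycle of Fig. <ref type="figure" target="#fig_0">1</ref>, again under the assumptions named above.</p><p>
import torch

@torch.no_grad()
def select_queries(model, pool, pde_score, n=60):
    # Eq. (6): pick the n pool samples with the highest physics loss L_pde;
    # pde_score(model, x) is assumed to return a scalar tensor per sample.
    scores = torch.stack([pde_score(model, x) for x in pool])
    return torch.topk(scores, k=n).indices.tolist()

def al_iteration(model, labeled, pool, pde_score, oracle, train_fn, n=60):
    # One cycle of Fig. 1: query, label via the FEM oracle, fine-tune.
    idx = set(select_queries(model, pool, pde_score, n))
    labeled = labeled + [(pool[i], oracle(pool[i])) for i in idx]
    pool = [x for i, x in enumerate(pool) if i not in idx]
    # Fine-tune with the total loss (Eq. (1)) on the enlarged labeled set.
    train_fn(model, labeled)
    return model, labeled, pool
</p></div>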
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Preliminary Results and Discussion</head><p>The results of our experiments are summarized in Fig. <ref type="figure" target="#fig_2">3</ref> and are discussed in the following. First, we observe that our proposed query strategy outperforms the random strategy in both scenarios. It is evident that after the first iteration, Scenario S, which was trained supervised to optimize the total loss 𝐿 𝑡𝑜𝑡𝑎𝑙 , surpasses Scenario U, where the model was trained solely using the unsupervised loss 𝐿 𝑝𝑑𝑒 . However, after four AL cycles, Scenario U demonstrates superior performance compared to Scenario S. This indicates that the initial unsupervised training is a viable approach for our PIGNN. Considering that substantial resources are saved by not having to determine ground-truth values for the initial training pool (600 samples in our example), and given that one FEM simulation in industrial use cases can take days or even weeks of computing time, the advantages become even more apparent. An adaptive approach that only simulates the most valuable samples presents significant benefits.</p><p>However, an in-depth analysis of the consistency of multiple runs using various seeds was beyond the scope of this work. Additionally, we did not conduct any hyperparameter tuning or investigate AL parameters such as the initial pool size, the acquisition size, or the total budget. Consequently, the observed fluctuations in the results may be attributed to these factors, primarily the limited number of experiments performed and the non-optimized hyperparameters, which were not adjusted due to the significant effort required, especially in the context of active learning <ref type="bibr" target="#b18">[19]</ref>.</p><p>Other typical AL query strategies were also not considered. Another critical parameter is 𝜆, which serves as the weighting factor between the two components of the loss function (cf. Eq. ( <ref type="formula" target="#formula_0">1</ref>)). An incorrectly chosen 𝜆 can lead to the optimization being dominated by one part of the loss function, either 𝐿 𝑑𝑎𝑡𝑎 or 𝐿 𝑝𝑑𝑒 , at the expense of the other. These aspects need to be elaborated on in future work. Fig. <ref type="figure" target="#fig_3">4</ref> depicts a randomly chosen test sample from the final iteration of Scenario U. Fig. <ref type="figure" target="#fig_3">4b</ref> shows that our AL strategy in combination with the PIGNN is capable of providing accurate predictions. Fig. <ref type="figure" target="#fig_3">4c</ref> depicts the absolute deviation, i.e., the L1-error, between the prediction and the ground-truth solution.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion, Limitations and Future Work</head><p>Our experiments show that our PIGNN is generally suitable for use in AL scenarios. Our proposed query strategy is built upon the network's physical loss, which can be evaluated unsupervised. In future work, we aim to apply our methodology and model to real-world problems and more complex datasets from the fields of fluid dynamics <ref type="bibr" target="#b19">[20]</ref> and electrodynamics <ref type="bibr" target="#b3">[4]</ref>. Further, we plan to investigate other acquisition sizes, total budgets, and the initial selection of samples, as well as hyperparameter optimization, which is in general not trivial in deep AL.</p><p>Currently, our PIGNN is validated on a circular problem domain solving the Poisson equation on an unstructured triangular mesh. In the future, we plan to employ this model for more complex geometries and physical problems. For the above-mentioned datasets, we intend to solve the Maxwell equations on an unstructured mesh for modeling an electric motor and to address turbulent flow in a U-bend by applying the Navier-Stokes equations on a graded mesh. Another work compares methods from the fields of computer vision and graph learning on these two datasets <ref type="bibr" target="#b1">[2]</ref>. We aim to extend this comparison to include PINNs. These advancements will help validate the robustness and versatility of our PIGNN in solving a wider range of complex real-world problems. Furthermore, with the help of AL, we want to contribute to addressing data scarcity in the realm of computationally expensive PDE solving.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: An active learning loop for training Physics-Informed Neural Networks (PINNs) utilizing a physics-loss-based query strategy, with FEM simulations as oracle. The figure highlights the differences between the two scenarios: Scenario U operates without ground-truth values, and hence unsupervised, in the initial training of the model, in contrast to Scenario S, which follows a traditional supervised active learning approach.</figDesc><graphic coords="2,139.69,383.62,315.91,319.96" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Images of the input features in (a), the ground-truth solution provided by a FEM simulation in (b) and the triangular mesh on the circular domain in (c)</figDesc><graphic coords="4,72.00,571.88,144.41,144.41" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Comparison of our defined scenarios. Scenario U (orange) is an AL experiment starting with a model that was initially trained unsupervised, whereas in Scenario S the initial state of the model was achieved by supervised training.</figDesc><graphic coords="6,83.28,65.60,428.72,225.72" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Random sample from Scenario U. (a) the ground-truth data, (b) the PIGNN's prediction, (c) the absolute difference, i.e., the L1-error, between prediction and ground-truth.</figDesc><graphic coords="7,72.00,229.96,144.41,144.41" type="bitmap" /></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgment</head><p>This research has been funded by the Federal Ministry for Economic Affairs and Climate Action (BMWK) within the project "KI-basierte Topologieoptimierung elektrischer Maschinen (KITE)" (19I21034C).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Physics-Informed Neural Networks for Heat Transfer Problems</title>
		<author>
			<persName><forename type="first">S</forename><surname>Cai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Perdikaris</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">E</forename><surname>Karniadakis</surname></persName>
		</author>
		<idno type="DOI">10.1115/1.4050542</idno>
		<ptr target="https://doi.org/10.1115/1.4050542" />
	</analytic>
	<monogr>
		<title level="j">Journal of Heat Transfer</title>
		<imprint>
			<biblScope unit="volume">143</biblScope>
			<biblScope unit="page">60801</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Decke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Wünsch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Sick</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Gruhl</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2406.00081</idno>
		<title level="m">From structured to unstructured: a comparative analysis of computer vision and graph models in solving mesh-based PDEs</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">A physics-informed deep learning framework for inversion and surrogate modeling in solid mechanics</title>
		<author>
			<persName><forename type="first">E</forename><surname>Haghighat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Raissi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Moure</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Gomez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Juanes</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.cma.2021.113741</idno>
		<ptr target="https://doi.org/10.1016/j.cma.2021.113741" />
	</analytic>
	<monogr>
		<title level="j">Computer Methods in Applied Mechanics and Engineering</title>
		<imprint>
			<biblScope unit="volume">379</biblScope>
			<biblScope unit="page">113741</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><surname>Botache</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Decke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Ripken</surname></persName>
		</author>
		<title level="m">Enhancing multi-objective optimization through machine learning-supported multiphysics simulation</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Data-driven discovery of partial differential equations</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">H</forename><surname>Rudy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">L</forename><surname>Brunton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">L</forename><surname>Proctor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">N</forename><surname>Kutz</surname></persName>
		</author>
		<idno type="DOI">10.1126/sciadv.1602614</idno>
		<ptr target="https://www.science.org/doi/pdf/10.1126/sciadv.1602614" />
	</analytic>
	<monogr>
		<title level="j">Science Advances</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page">e1602614</biblScope>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations</title>
		<author>
			<persName><forename type="first">M</forename><surname>Raissi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Perdikaris</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Karniadakis</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.jcp.2018.10.045</idno>
		<ptr target="https://doi.org/10.1016/j.jcp.2018.10.045" />
	</analytic>
	<monogr>
		<title level="j">Journal of Computational Physics</title>
		<imprint>
			<biblScope unit="volume">378</biblScope>
			<biblScope unit="page" from="686" to="707" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Cuomo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">S</forename><surname>Di Cola</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Giampaolo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Rozza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Raissi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Piccialli</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2201.05624</idno>
		<title level="m">Scientific machine learning through physics-informed neural networks: Where we are and what&apos;s next</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Automatic differentiation in machine learning: a survey</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">G</forename><surname>Baydin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">A</forename><surname>Pearlmutter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Radul</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Siskind</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. Mach. Learn. Res</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="page" from="5595" to="5637" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Physics-informed graph neural galerkin networks: A unified framework for solving pde-governed forward and inverse problems</title>
		<author>
			<persName><forename type="first">H</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">J</forename><surname>Zahr</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-X</forename><surname>Wang</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.cma.2021.114502</idno>
		<ptr target="https://doi.org/10.1016/j.cma.2021.114502" />
	</analytic>
	<monogr>
		<title level="j">Computer Methods in Applied Mechanics and Engineering</title>
		<imprint>
			<biblScope unit="volume">390</biblScope>
			<biblScope unit="page">114502</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Unravelling the performance of physics-informed graph neural networks for dynamical systems</title>
		<author>
			<persName><forename type="first">A</forename><surname>Thangamuthu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Kumar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bishnoi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Bhattoo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">M A</forename><surname>Krishnan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ranu</surname></persName>
		</author>
		<ptr target="https://proceedings.neurips.cc/paper_files/paper/2022/file/17b598fda495256bef6785c2b76c3217-Paper-Datasets_and_Benchmarks.pdf" />
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems</title>
				<imprint>
			<publisher>Curran Associates, Inc</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="volume">35</biblScope>
			<biblScope unit="page" from="3691" to="3702" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">A comparison of methods for gradient field estimation on simplicial meshes</title>
		<author>
			<persName><forename type="first">C</forename><surname>Mancinelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Livesu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Puppo</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.cag.2019.03.005</idno>
		<ptr target="https://doi.org/10.1016/j.cag.2019.03.005" />
	</analytic>
	<monogr>
		<title level="j">Computers &amp; Graphics</title>
		<imprint>
			<biblScope unit="volume">80</biblScope>
			<biblScope unit="page" from="37" to="50" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Active learning query strategies for classification, regression, and clustering: A survey</title>
		<author>
			<persName><forename type="first">P</forename><surname>Kumar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Gupta</surname></persName>
		</author>
		<idno type="DOI">10.1007/s11390-020-9487-4</idno>
		<ptr target="https://doi.org/10.1007/s11390-020-9487-4" />
	</analytic>
	<monogr>
		<title level="j">J. Comput. Sci. Technol</title>
		<imprint>
			<biblScope unit="volume">35</biblScope>
			<biblScope unit="page" from="913" to="945" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">ActiveGLAE: A benchmark for deep active learning with transformers</title>
		<author>
			<persName><forename type="first">L</forename><surname>Rauch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Aßenmacher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Huseljic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Wirth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Bischl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Sick</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-031-43412-9_4</idno>
		<ptr target="https://doi.org/10.1007/978-3-031-43412-9_4" />
	</analytic>
	<monogr>
		<title level="m">Machine Learning and Knowledge Discovery in Databases: Research Track</title>
		<imprint>
			<publisher>Springer Nature Switzerland</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="55" to="74" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">DADO -Low-cost query strategies for deep active design optimization</title>
		<author>
			<persName><forename type="first">J</forename><surname>Decke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Gruhl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Rauch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Sick</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2023 International Conference on Machine Learning and Applications (ICMLA)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="1611" to="1618" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Improving the efficiency of training physics-informed neural networks using active learning</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Aikawa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Ueda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Tanaka</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">New Generation Computing</title>
		<imprint>
			<biblScope unit="page" from="1" to="22" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">CS4ML: A general framework for active learning with arbitrary data based on Christoffel functions</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Cardenas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Adcock</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Dexter</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in Neural Information Processing Systems</title>
		<imprint>
			<biblScope unit="volume">36</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Automatic differentiation in ML: where we are and where we should be going</title>
		<author>
			<persName><forename type="first">B</forename><surname>Van Merrienboer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Breuleux</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bergeron</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Lamblin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in Neural Information Processing Systems</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Separation of aleatoric and epistemic uncertainty in deterministic deep neural networks</title>
		<author>
			<persName><forename type="first">D</forename><surname>Huseljic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Sick</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Herde</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Kottke</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2020 25th International Conference on Pattern Recognition (ICPR)</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="9172" to="9179" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Role of hyperparameters in deep active learning</title>
		<author>
			<persName><forename type="first">D</forename><surname>Huseljic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Herde</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Hahn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Sick</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Workshop on Interactive Adaptive Learning (IAL), ECML PKDD</title>
				<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="19" to="24" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Dataset of a parameterized U-bend flow for deep learning applications</title>
		<author>
			<persName><forename type="first">J</forename><surname>Decke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Wünsch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Sick</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Data in Brief</title>
		<imprint>
			<biblScope unit="volume">50</biblScope>
			<biblScope unit="page">109477</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
