<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Modular Surrogate-based Optimization Framework for Expensive Computational Simulations</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Antonín</forename><surname>Panzo</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Aerospace Structures and Materials</orgName>
								<orgName type="institution">Delft University of Technology</orgName>
								<address>
									<addrLine>Kluyverweg 1</addrLine>
									<postCode>2629 HS</postCode>
									<settlement>Delft</settlement>
									<country key="NL">The Netherlands</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="department">Research &amp; Development Department</orgName>
								<orgName type="institution">Skoda Transportation</orgName>
								<address>
									<addrLine>Bucharova 1314/8</addrLine>
									<postCode>158 00</postCode>
									<settlement>Praha 5</settlement>
									<country key="CZ">Czech Republic</country>
								</address>
							</affiliation>
						</author>
						<author role="corresp">
							<persName><forename type="first">Boyang</forename><surname>Chen</surname></persName>
							<email>b.chen-2@tudelft.nl</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Aerospace Structures and Materials</orgName>
								<orgName type="institution">Delft University of Technology</orgName>
								<address>
									<addrLine>Kluyverweg 1</addrLine>
									<postCode>2629 HS</postCode>
									<settlement>Delft</settlement>
									<country key="NL">The Netherlands</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Modular Surrogate-based Optimization Framework for Expensive Computational Simulations</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">312D3E0CDD282B578FA0E7E73D7706B6</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T08:52+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In practical applications, computational modeling has been adopted industry-wide to speed up product development and to reduce physical testing costs. Models of complex or large systems are, however, often computationally expensive, and solution times of hours or more are not uncommon. Additionally, as these models are typically evaluated using black-box solvers, the direct study of relations between design parameters is demanding in terms of computational time and provides little engineering insight and understanding.</p><p>To address this, a modular framework integrating computation automation with surrogate-based modeling, optimization and visualization techniques is presented. The framework is built in the Python programming language. Its use is illustrated on a study of the side impact response of a car body, using an artificial neural network as a surrogate together with the NSGA-III genetic algorithm for optimization.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Nowadays, solving many engineering problems, such as the development of transport vehicles, involves the use of computationally expensive simulations, such as computational fluid dynamics (CFD), the finite element method (FEM), or others, as well as their combinations, e.g. fluid-structure interaction (FSI). These will hereafter be referred to simply as simulations. In practice, such simulations typically suffer from two major drawbacks. Firstly, even though the computational power available to an engineer for running such simulations has been gradually increasing over the years, the requirements on model accuracy have increased as well <ref type="bibr" target="#b47">[48]</ref>, and as such, there is still a large number of simulations that take hours, days and occasionally even more, just to evaluate a single design <ref type="bibr" target="#b42">[43]</ref>. Secondly, they are often of a black-box nature, meaning that given an input X, an output Y is returned to the user without an accompanying explicit relation between the two <ref type="bibr" target="#b40">[41]</ref>.</p><p>Within a practically infinite space of design choices for the system under study, the task of an engineer is to identify the most influential design parameters and understand their degree of influence. Through the use of simulations, often the next goal is to meet a set of threshold requirements on selected targets, while minimizing or maximizing a certain property of the overall system, such as cost, weight, etc. 
Mathematically, this can be formulated as a multi-objective optimization problem with the goal to: minimize f m ( X), m = 1, ..., P</p><p>subject to g j ( X) ≤ 0, j = 1, ..., Q</p><formula xml:id="formula_1">h k ( X) = 0, k = 1, ..., R<label>(2)</label></formula><formula xml:id="formula_2">X L i ≤ X i ≤ X U i , i = 1, ..., n in<label>(3)</label></formula><p>where f are the objective functions, g and h are the inequality and equality constraints, respectively, and X is the vector of design variables in the design search space bounded by X L i and X U i from below and above, respectively <ref type="bibr" target="#b46">[47]</ref>. In the case of multi-objective optimization, when P ≥ 2, the notion of singular optimality needs to be augmented, as there is typically no single solution X * minimizing all of the objectives of Equation 1 at the same time. In such cases, the concept of so-called Pareto optimality is considered <ref type="bibr" target="#b35">[36]</ref>, which can be loosely described as: "for each solution that is contained in the Pareto set, one can only improve one objective by accepting a tradeoff in at least one other objective" <ref type="bibr" target="#b38">[39]</ref>.</p><p>Due to the aforementioned high computational costs and the black-box nature of simulations, multi-objective parametric studies and optimizations often become too lengthy and impractical for actual application <ref type="bibr" target="#b41">[42,</ref><ref type="bibr" target="#b40">41]</ref>. The lack of derivatives required for derivative-based optimization methods, such as those in Liu and Reynolds <ref type="bibr" target="#b34">[35]</ref> or Peitz and Dellnitz <ref type="bibr" target="#b38">[39]</ref>, can be addressed by the use of derivative-free optimization (DFO) methods, such as genetic, evolutionary or swarm algorithms <ref type="bibr" target="#b26">[27]</ref>. 
However, these methods compensate for the lack of gradients by evaluating larger sets of candidate solutions, which leaves the issue of high computational cost unresolved. One way to resolve this is to use a model of lower fidelity requiring fewer computational resources <ref type="bibr" target="#b32">[33]</ref>. However, the accuracy of such lower-fidelity models can become unacceptably low. As an alternative, a surrogate-based approach can be adopted <ref type="bibr" target="#b19">[20,</ref><ref type="bibr" target="#b4">5,</ref><ref type="bibr" target="#b11">12]</ref>.</p><p>A surrogate model, also referred to as a metamodel, is a mathematical model that, upon training, approximates the response of the original model based on a sample set from the design space <ref type="bibr" target="#b30">[31]</ref>. Such a computationally cheap surrogate can then be called during the optimization defined by Equations 1-4 instead of the original, expensive simulation. In addition, the use of a surrogate model opens up several new opportunities, which include:</p><p>1. using the surrogate as a substructure of a larger model <ref type="bibr" target="#b33">[34]</ref> 2. cheap numerical evaluation of gradients using the surrogate 3. repeating the optimization for various formulations of the problem (within the bounds of validity of the surrogate) -compare various formulations of the same car side impact problem: Youn et al. <ref type="bibr" target="#b53">[54]</ref>, Guo, Wang, and Wang <ref type="bibr" target="#b25">[26]</ref>, and Tanabe and Ishibuchi <ref type="bibr" target="#b45">[46]</ref> 4. design exploration through sensitivity studies and visualization <ref type="bibr" target="#b48">[49,</ref><ref type="bibr" target="#b37">38]</ref> On the other hand, a disadvantage is the added complexity of the overall task due to the introduction of extra tuning parameters related to the surrogate model and its training. 
Due to the often black-box nature of simulations, it is not possible to quantify a priori the sample set size required for accurate training of the surrogate <ref type="bibr" target="#b3">[4]</ref>. Therefore, for a purely optimization-focused approach, it is in general not guaranteed that such a surrogate-based approach is indeed computationally cheaper than optimizing directly with the original model. Nevertheless, it has been established that for budget-limited tasks with expensive simulations, this is indeed the case <ref type="bibr" target="#b43">[44,</ref><ref type="bibr" target="#b23">24,</ref><ref type="bibr" target="#b37">38]</ref>. This holds especially for multi-objective optimization, where exploration of the full Pareto front is required. Confirming empirical comparisons can be found e.g. in Voutchkov and Keane <ref type="bibr" target="#b49">[50]</ref> on benchmark problems and in Bessa and Pellegrino <ref type="bibr" target="#b2">[3]</ref> on the practical problem of designing ultra-thin carbon fiber deployable shells.</p><p>Example uses of surrogate-based optimization (SBO) in the area of structural optimization include the design of 3D weaving composite stiffened panels by Fu, Ricci, and Bisagni <ref type="bibr" target="#b20">[21]</ref>, a hypersonic vehicle's metallic thermal protection by Guo et al. <ref type="bibr" target="#b24">[25]</ref>, or a wellhead connector for a subsea Christmas tree by Zeng et al. <ref type="bibr" target="#b54">[55]</ref>. The SBO methodology has already been integrated into various commercial software packages, such as ANSYS OptiSLang (https://www.dynardo.de/en/software/optislang/ansys-optislang.html [Accessed on 03/08/2020]) or HEEDS <ref type="foot" target="#foot_0">2</ref>. This is often practical, but not always. Firstly, the user must rely solely on the available capabilities of such software packages, which can be insufficient for the application at hand. Secondly, especially for individuals or smaller businesses, the license price of such packages can be prohibitively expensive, making them inaccessible. Last but not least, for research purposes, commercial solutions often offer only limited insight into the methods used and the source code, leading to limited customizability, a feature often required for research.</p><p>In this paper, the core of a developed Python open-source SBO framework is presented. In Section 2, the different elements of the proposed framework are presented. Next, in Section 3, the use of the framework is demonstrated on a benchmark problem. Finally, the capabilities of the framework are summarized in Section 4, together with an outline of suggested further developments.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Proposed method</head><p>This framework integrates elements of surrogate-based modeling and automated model selection together with derivative-free optimization. One of the driving ideas behind it is modularity, such that it can be readily customized for the user's particular needs. Upon completion of the master thesis project, the framework will be made publicly available at github.com/apanzo/optimization.</p><p>A top-level illustration of the framework is shown in Figure <ref type="figure" target="#fig_0">1</ref>. As a starting point, the simulation is prepared, and the framework settings, such as the selection of a surrogate model and optimization algorithm, are set. Both the use of a surrogate and performing optimization are optional. The surrogate loop contains sample selection, model evaluation, results storage and retrieval, optimization and training of the surrogate, and finally assessing its convergence. The optimization loop consists of the actual optimization and, in case a surrogate model is used, a verification of the obtained results against the original model. In the following subsections, the key parts of the framework are discussed in order, from optimization start to end, as indicated in Figure <ref type="figure" target="#fig_0">1</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Sampling strategy</head><p>Considering the expensiveness of the simulations addressed by this framework, one of the keys to success is a smart selection of the sample points within the explored design space, such that the surrogate model obtains a sufficiently representative sample to be trained on. For this purpose, simpler sampling strategies such as grid search or random search are not suitable. The former is expensive because it does not benefit from sampling in multiple dimensions: the projection of the sample onto each input axis is independent of the number of input dimensions. Moreover, when re-sampling with a finer sample, either all of the previous samples are discarded, or the re-sampling must be done with an integer multiple of the current number of samples per input dimension. Random search does not suffer from these issues, but its weakness lies in its poor space-filling property; thus, a large number of samples is required to obtain a sufficiently representative sample.</p><p>As a baseline within this framework, quasi-random sampling strategies are implemented. In particular, the two available sampling methods are cell-centered Latin hypercube sampling (LHS) <ref type="bibr" target="#b13">[14,</ref><ref type="bibr" target="#b17">18]</ref> and the original Halton sequence sampling <ref type="bibr" target="#b12">[13]</ref>. The principal logic behind the LHS method is that it is a sparse grid method in which the projection of the sample onto each input axis contains only one sample per uniform interval. As such, the sample size is significantly reduced compared to the full grid. However, the disadvantage of poor re-sampling remains. This is not the case for the Halton sequence, which is a multi-dimensional version of the van der Corput sequence. 
It is defined as P = {φ q 1 (Z), ..., φ q n (Z)} with bases q 1 , ..., q n that are mutually coprime and in practice taken as the first n primes <ref type="bibr" target="#b12">[13,</ref><ref type="bibr" target="#b21">22]</ref>. φ q (Z), the inverse radix number, is obtained from an integer</p><formula xml:id="formula_4">Z = r ∑ i=0 a i q i</formula><p>where q is the basis, the largest exponent is r = int(ln Z / ln q), and 0 ≤ a i &lt; q are the digit coefficients <ref type="bibr" target="#b12">[13,</ref><ref type="bibr" target="#b10">11]</ref>, as</p><formula xml:id="formula_5">φ q (Z) = r ∑ i=0 a i q −(i+1)</formula><p>The advantage is that each point of the sequence is generated independently of the previous points. An example is shown in Figure <ref type="figure" target="#fig_1">2</ref>. It is noted, however, that the original method is suitable only for up to 8 input dimensions, as in higher dimensions spurious correlations between the inputs occur, considerably deteriorating the sample's quality <ref type="bibr" target="#b12">[13]</ref>. This can be alleviated by using modified versions of the method that address this issue <ref type="bibr" target="#b18">[19,</ref><ref type="bibr" target="#b36">37]</ref>. Going one step further, these so-called static sampling methods only take into account the space-filling criteria, that is, the quality of the inputs, and not the nature of the response, that is, the quality of the outputs. Therefore, an adaptive sampling method that does account for the response is implemented as well. As an example, if S is the full design space and the response of a model is flat everywhere apart from a region D ⊂ S where there is a valley in the response, only a few samples are required outside D, while more samples are required in D to train the surrogate. 
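The radical-inverse construction described above can be sketched in a few lines of plain Python; this is a standard textbook implementation of the original Halton sequence, not the framework's actual code:

```python
def radical_inverse(Z, q):
    # φ_q(Z): reflect the base-q digits of Z about the radix point,
    # i.e. φ_q(Z) = sum_i a_i * q^-(i+1) for digits a_i of Z in base q
    val, denom = 0.0, q
    while Z > 0:
        Z, a = divmod(Z, q)
        val += a / denom
        denom *= q
    return val

def halton(n_points, primes=(2, 3)):
    # One coprime basis (in practice: the first n primes) per input dimension
    return [[radical_inverse(k, q) for q in primes] for k in range(1, n_points + 1)]
```

Note that each point depends only on its own index k, so the sample can be refined incrementally without discarding earlier points.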
The selected method is from Eason and Cremaschi <ref type="bibr" target="#b16">[17]</ref>; it determines new samples based on a balance between the variance in the predictions of the K models from cross-validation (explained further in subsection 2.4), for exploitation, and the distance to the nearest existing sample, for exploration, each normalized by its maximum. The new sample is chosen where the sum of the two criteria is the largest.</p></div>
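This selection rule can be sketched in numpy, under the assumption that the predictions of the K cross-validation models at a set of candidate points are available; the function and array names are illustrative, not those of the framework:

```python
import numpy as np

def next_sample(candidates, existing, cv_predictions):
    """Pick the next sample point balancing exploitation and exploration.

    candidates:     (n_c, d) candidate points in the design space
    existing:       (n_s, d) points already evaluated
    cv_predictions: (K, n_c) predictions of the K cross-validation models
    """
    # Exploitation: disagreement (variance) of the K fold models per candidate
    variance = cv_predictions.var(axis=0)
    # Exploration: distance from each candidate to its nearest existing sample
    dists = np.linalg.norm(candidates[:, None, :] - existing[None, :, :], axis=2)
    nearest = dists.min(axis=1)
    # Normalize each criterion by its maximum and pick the largest sum
    score = (variance / max(variance.max(), 1e-12)
             + nearest / max(nearest.max(), 1e-12))
    return candidates[np.argmax(score)]
```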
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">External evaluator</head><p>For practical use, the framework has to be able to interact with external solvers that evaluate the simulations. This independence is one of the main strong points of the framework, as it allows results from different solvers to be integrated. For example, in an FSI study, different FEM and CFD solvers can be used and their results integrated within this framework.</p><p>The implementation of the different evaluators is application dependent; therefore, the interaction with each new solver has to be customized according to its specific interface. In the course of the development of this framework, a custom ANSYS evaluator has been integrated. To use it, the procedure is to first build a parametric model inside the ANSYS Workbench environment <ref type="foot" target="#foot_1">3</ref>, with the input and output parameters defined. These parameters are passed by name to the framework, together with the path to the prepared model. The rest of the interaction is automated. After determining the sample points, the framework first checks whether free licenses are available for carrying out the computations. If so, a request is sent to ANSYS Workbench to evaluate the selected sample points. Once all the samples have been evaluated, the framework retrieves the defined output parameters and integrates them into its internal database.</p></div>
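One possible shape for such a solver-independent abstraction is a base class with one `evaluate` method per solver; the class and method names below are hypothetical assumptions of this sketch, not the framework's actual interface:

```python
from abc import ABC, abstractmethod

class Evaluator(ABC):
    """Hypothetical evaluator interface; each external solver gets a subclass."""

    @abstractmethod
    def evaluate(self, samples):
        """Run the solver on the sample points and return the outputs."""

class BenchmarkEvaluator(Evaluator):
    """Evaluates an analytical benchmark function instead of an external solver."""

    def __init__(self, func):
        self.func = func

    def evaluate(self, samples):
        return [self.func(x) for x in samples]
```

An ANSYS-style subclass would instead check license availability, submit the sample points to Workbench, and collect the output parameters in its `evaluate` method.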
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Surrogate model</head><p>Among others, the most commonly used surrogate-modeling techniques are kriging, radial basis functions (RBF), artificial neural networks (ANN) and support vector machines (SVM) <ref type="bibr" target="#b16">[17,</ref><ref type="bibr" target="#b11">12]</ref>. The former two are integrated within the framework natively, using the Surrogate Modeling Toolbox (SMT) library <ref type="bibr" target="#b9">[10]</ref>, while the ANN is integrated as a custom class into SMT using the TensorFlow library <ref type="bibr" target="#b0">[1]</ref>. In the course of the reported research, the ANN has been used due to previous experience within the research group; therefore, it is discussed more closely in this section.</p><p>In a general sense, ANNs are a subclass of the broader field of machine learning (ML), which in turn is a subclass of artificial intelligence (AI). Most commonly, ANNs are trained using a form of supervised learning, that is, by providing both the input and output data during the training process. They can be used both for classification, that is, categorization of the input content into different sub-classes with common features, as well as for regression. For the purpose of optimization, regression is typically used, and the task of learning is to obtain the unknown relation between the input and output data.</p><p>The core concept of an ANN is the single neuron. Inspired by the function of a real neuron inside the living brain, the mathematical model from input x to output y consists of performing a weighted summation of all the inputs</p><formula xml:id="formula_6">z = n in ∑ i=1 w i x i<label>(5)</label></formula><p>and applying a non-linear transformation</p><formula xml:id="formula_7">y = g(z)</formula><p>with g the so-called activation function. 
Common activation functions are the logistic sigmoid</p><formula xml:id="formula_8">g(z) = 1 / (1 + e −z ), the rectified linear unit (ReLU) g(z) = max(0, z) [23], and the Swish function g(z) = z / (1 + e −z ) [40].</formula><p>To provide a flexible range of outputs based on the neurons' activations, the neurons are organized into layers. The typical neural network architecture for regression is the multi-layer perceptron (MLP), consisting of a first and a last layer containing a number of neurons equal to the number of input and output dimensions, respectively, and a selected number of intermediate, hidden layers, each of which can contain an arbitrary number of neurons. Building upon Kolmogorov's theorem <ref type="bibr" target="#b31">[32]</ref>, Hecht-Nielsen <ref type="bibr" target="#b27">[28]</ref> proved that, using specific activation functions, even an ANN with a single hidden layer of neurons can approximate any continuous function. However, it is not guaranteed that such an ANN can actually learn such a representation <ref type="bibr" target="#b5">[6]</ref>, and in practice, using multiple hidden layers of neurons can simplify the learning process <ref type="bibr" target="#b28">[29]</ref>.</p><p>Training the network is equivalent to determining appropriate weights w from Equation <ref type="formula" target="#formula_6">5</ref> for each neuron, such that the desired mapping x → y from the input data to the output data is obtained. The common training method is backpropagation, where the prediction error E after the forward pass is propagated back to the contribution to that error from each neuron, and the weights are updated in each training iteration. 
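The single-neuron model of Equation 5 and the activation functions above can be written as a minimal numpy sketch (no bias term, mirroring Equation 5):

```python
import numpy as np

def neuron(x, w, g):
    # Equation (5) followed by the activation: y = g(sum_i w_i * x_i)
    return g(np.dot(w, x))

def sigmoid(z):
    # Logistic sigmoid: 1 / (1 + e^-z)
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    # Rectified linear unit: max(0, z)
    return np.maximum(0.0, z)

def swish(z):
    # Swish: z / (1 + e^-z)
    return z * sigmoid(z)
```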
This is schematically illustrated in Figure <ref type="figure">3</ref>.</p><p>Independent of the selected surrogate model, a step of crucial importance for obtaining an accurate model is the selection of the model's hyperparameters, that is, the parameters that are not directly learned through the training process. These are specific to each surrogate model. In the case of the ANN, for example, they are the number of hidden layers and the number of neurons in each of them, the learning rate, the regularization parameters, and the choice of activation function. For this purpose, model-specific methods, such as neuron pruning <ref type="bibr" target="#b55">[56]</ref>, where neurons of low significance are removed, as well as model-independent methods, such as random search, Bayesian optimization or meta-heuristic optimization approaches, can be used.</p><p>At this stage, random search and Bayesian optimization (BO) <ref type="bibr" target="#b44">[45]</ref> are included for the ANN using Keras Tuner <ref type="foot" target="#foot_2">4</ref>. A hypermodel is defined with default hyperparameters stored in a configuration file, in which these hyperparameters can either be changed manually or marked as subject to automatic tuning. For ANNs, the tunable hyperparameters are: the number of hidden layers, the number of neurons in each layer, the initial learning rate, the activation function and the regularization parameter. The current implementation of the BO only allows setting a fixed optimization budget; therefore, the selected option is to start with an initial random sample of 3 times the number of optimized hyperparameters, and to stop at 10 times that number. The acquisition function is the upper confidence bound <ref type="bibr" target="#b44">[45]</ref>. Hyperparameter optimization is performed at the first iteration, and then every time the sample set size increases by 50%.</p></div>
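As a sketch of the model-independent tuning route, a plain random search over a hypothetical ANN hyperparameter space can look as follows; the search-space ranges and all names are illustrative assumptions, not the framework's Keras Tuner configuration:

```python
import random

# Hypothetical search space mirroring the tunable ANN hyperparameters
SPACE = {
    "n_hidden_layers": range(2, 11),
    "n_neurons": range(6, 201),
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "activation": ["relu", "swish", "sigmoid"],
}

def random_search(build_and_score, n_trials=30, seed=0):
    """Return the best hyperparameter set found by plain random search.

    build_and_score(hp) stands in for training the ANN with the
    hyperparameters hp and returning e.g. its cross-validated MAE.
    """
    rng = random.Random(seed)
    best, best_score = None, float("inf")
    for _ in range(n_trials):
        hp = {k: rng.choice(list(v)) for k, v in SPACE.items()}
        score = build_and_score(hp)
        if score < best_score:
            best, best_score = hp, score
    return best, best_score
```

Bayesian optimization replaces the uniform draws with a surrogate of the score itself plus an acquisition function such as the upper confidence bound.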
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4">Surrogate's convergence</head><p>A chosen metric of the surrogate's accuracy is tracked to determine convergence with respect to the sample set size. In our approach, the tracked metric is the mean absolute error (MAE)</p><formula xml:id="formula_9">E MAE = (1/N) ∑ N i=1 |y pred,i − y train,i |</formula><p>where y pred is the prediction of the surrogate, y train is the training value and N is the total number of samples. Since the amount of data is relatively small, the K-fold cross-validation approach is adopted to validate the trained surrogate model <ref type="bibr" target="#b8">[9]</ref>. In each iteration, the data is split into K folds, and K surrogates are trained, each time leaving out a different fold of the data. This holdout set is used to calculate the generalization error of the model on previously unseen data. With an increasing size of the data set, leaving out the holdout set affects the surrogate's prediction less and less, so the average E MAE over the K surrogates decreases. The resampling loop is terminated when a satisfactory level of E MAE is attained.</p></div>
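The K-fold validation loop can be sketched as follows; `train_fn` stands in for whatever surrogate training routine is used and is an assumption of this sketch:

```python
import numpy as np

def kfold_mae(X, y, train_fn, K=5, seed=0):
    """Average MAE over K surrogates, each trained with one fold held out.

    train_fn(X_train, y_train) must return a predict(X) callable.
    """
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), K)
    maes = []
    for k in range(K):
        hold = folds[k]                                       # holdout fold
        train = np.concatenate([folds[j] for j in range(K) if j != k])
        model = train_fn(X[train], y[train])
        # Generalization error on the previously unseen holdout fold
        maes.append(np.mean(np.abs(model(X[hold]) - y[hold])))
    return float(np.mean(maes))
```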
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.5">Optimization algorithm</head><p>Once the surrogate model has been trained to the desired level of accuracy, the optimization run can be started. Within this framework, various population-based optimization algorithms are provided by the Pymoo library <ref type="bibr" target="#b6">[7]</ref>, including Differential Evolution, the Genetic Algorithm, BRKGA, Nelder-Mead, Pattern Search, CMA-ES, NSGA-II and NSGA-III, R-NSGA-II and R-NSGA-III, U-NSGA-III, and MOEA/D.</p><p>Among these, the most commonly used in industry is the NSGA-II method <ref type="bibr" target="#b14">[15]</ref> and its updated NSGA-III version <ref type="bibr" target="#b15">[16]</ref>, adapted for many-objective cases with m &gt; 2, where an even exploration of the Pareto front is accomplished by the use of reference directions. A generic scheme of a genetic algorithm is shown in Figure <ref type="figure" target="#fig_2">4</ref>. The idea behind it is similar to Darwinian evolution: once the initial population of solutions is generated (Step 1), the fitness of the individuals is evaluated (Step 2) and only the fittest survive in each generation (Step 3). From those, parent selection takes place (Step 4), from which new offspring are generated through crossover (Step 5). Additionally, a random mutation is applied to their genome (Step 6) to maintain diversity within the population and better explore the design space. At the end of each iteration, the convergence criterion is evaluated, and the optimization either stops or continues with a new generation <ref type="bibr" target="#b51">[52]</ref>.</p><p>The available termination criteria are the maximum:</p><p>• number of evaluations</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>• number of generations</p><p>• time</p><p>• design space tolerance</p><p>• objective space tolerance.</p><p>Because the surrogate is computationally cheap, the optimization process is not limited by the computational cost of evaluating the simulation, such as a maximal number of its evaluations. Therefore, the selection of the termination criterion can focus purely on the convergence of the optimization. In this case, the last two criteria are of main importance. A threshold tolerance on the metrics tracking the evolution of the solutions in the design or objective space is defined, and if the solutions in the specified number of last iterations do not improve beyond this threshold, the optimization is terminated. For robustness, the maximal number of evaluations or generations can also be specified, e.g. for cases where the optimal solutions oscillate around some value and would thus otherwise never converge within the defined window.</p><p>It is also possible to run the optimization directly on the original model without constructing a surrogate. In this case, however, for an expensive simulation, the choices of the optimization algorithm and its components, especially the population size and the number of offspring generated in each generation, have to be selected more carefully with respect to the number of required simulations.</p></div>
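The generic scheme of Steps 1-6, together with a sliding-window objective-space tolerance and a generation cap for robustness, can be sketched in plain numpy; the blend crossover and all function names are illustrative assumptions of this single-objective sketch, not the framework's actual Pymoo-based implementation:

```python
import numpy as np

def has_converged(history, tol=1e-3, window=5):
    # Objective-space tolerance: stop when the best objective value has not
    # changed by more than tol over the last `window` generations.
    return len(history) > window and abs(history[-window - 1] - history[-1]) <= tol

def genetic_algorithm(f, lo, hi, pop_size=40, max_gen=150, mut_sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))    # Step 1: initial population
    history = []
    for _ in range(max_gen):                               # robustness cap on generations
        fit = np.array([f(x) for x in pop])                # Step 2: fitness evaluation
        history.append(float(fit.min()))
        if has_converged(history):
            break
        survivors = pop[np.argsort(fit)][: pop_size // 2]  # Step 3: survival of the fittest
        pairs = survivors[rng.integers(0, len(survivors), (pop_size, 2))]  # Step 4: parents
        alpha = rng.random((pop_size, 1))
        children = alpha * pairs[:, 0] + (1 - alpha) * pairs[:, 1]  # Step 5: blend crossover
        children += rng.normal(0.0, mut_sigma, children.shape)      # Step 6: random mutation
        pop = np.clip(children, lo, hi)
    fit = np.array([f(x) for x in pop])
    return pop[np.argmin(fit)], float(fit.min())
```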
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.6">Result verification</head><p>Once an optimum (or optimal set) has been found, given that it has been found based on the surrogate, it is important to verify its accuracy. In particular, the algorithm may have found an optimum that lies in a spurious extremum caused by the surrogate's inaccuracy and not present in the original model. Therefore, by submitting a sample of the Pareto optimal set back for evaluation using the original model, it can be verified whether the obtained solutions indeed pertain to the original model. If the mean or maximal error in this verification exceeds a pre-defined tolerance, the surrogate is retrained on an enlarged sample and the optimization is run again, as seen in Figure <ref type="figure" target="#fig_0">1</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.7">Implementation notes</head><p>As an important implementation note, the data is normalized within the internal data flow. This is done so that common settings can be used across different problems, as well as, within a single problem, to treat inputs and outputs of different magnitudes comparably. Not doing so would mean, for example, that when the prediction error is summed, a weight in kilograms would completely dominate a deformation in millimeters, which is undesirable. Therefore, all inputs and outputs are normalized by dividing each quantity by its maximum absolute value. In this way, all data is guaranteed to lie within the [-1, 1] range while maintaining its sign, conforming to the constraint violation formulation in Equation 2.</p></div>
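This normalization amounts to one scaling factor per column; a numpy sketch (the function name is ours):

```python
import numpy as np

def normalize(data):
    """Scale each column by its maximum absolute value into [-1, 1], keeping signs."""
    scale = np.abs(data).max(axis=0)
    scale[scale == 0] = 1.0  # guard against all-zero columns
    return data / scale, scale
```

The returned `scale` allows mapping predictions back to physical units afterwards.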
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Illustrative example</head><p>As an illustrative example of a practical problem, this section presents results obtained with the proposed framework on a parametrized nonlinear FEM simulation of a car side impact problem <ref type="bibr" target="#b29">[30]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Model description</head><p>The design space is defined by 7 input parameters</p><formula xml:id="formula_12">0.5 ≤ x 1 ≤ 1.5 0.45 ≤ x 2 ≤ 1.35 0.5 ≤ x 3 ≤ 1.5 0.5 ≤ x 4 ≤ 1.5 0.875 ≤ x 5 ≤ 2.625 0.4 ≤ x 6 ≤ 1.2 0.4 ≤ x 7 ≤ 1.2</formula><p>that represent the thicknesses of structural members such as the B-pillars, beams or the floor. A single solution was, at the time of the problem's publication, reported to take about 20 h <ref type="bibr" target="#b53">[54]</ref>. Even though this time has likely been reduced with today's hardware, it is still a good illustrative example for surrogate-based modeling.</p><p>For the purpose of benchmarking, the response has been parametrized by analytical expressions, which are presented hereafter <ref type="bibr" target="#b29">[30]</ref>. The 3 objectives of the design study are to minimize the structural weight, the impact force on the passenger, and the average velocity of the side members that absorb the impact load, represented as f 1 ( x) ≡ W = 1.98 + 4.9x 1 + 6.67x 2 + 6.98x 3 + 4.01x 4 +1.78x 5 + 0.00001x 6 + 2.73x 7</p><formula xml:id="formula_13">f 2 ( x) = F f 3 ( x) = 0.5(V MBP +V FD )</formula><p>respectively. In addition, there are 10 constraints that take into account criteria such as the abdomen load</p><formula xml:id="formula_14">g 1 ( x) = 1.16 − 0.3717x 2 x 4 − 0.0092928x 3 ≤ 1</formula><p>and the viscous criterion</p><formula>g 2 ( x) = 0.261 − 0.0159x 1 x 2 − 0.06486x 1 − 0.</formula></div>
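The analytical weight objective and the abdomen load constraint given above can be evaluated directly; a small sketch with our own function names:

```python
def f1_weight(x):
    # Structural weight objective of the car side impact benchmark;
    # x = (x1, ..., x7) are the member thicknesses
    x1, x2, x3, x4, x5, x6, x7 = x
    return (1.98 + 4.9 * x1 + 6.67 * x2 + 6.98 * x3 + 4.01 * x4
            + 1.78 * x5 + 0.00001 * x6 + 2.73 * x7)

def g1_abdomen_load(x):
    # Abdomen load criterion; feasible designs satisfy g1(x) <= 1
    return 1.16 - 0.3717 * x[1] * x[3] - 0.0092928 * x[2]
```

Such cheap analytical stand-ins make the problem convenient for benchmarking the surrogate loop end to end.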
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Setup and user interaction</head><p>The format of the user input using the JSON formatting 5 is shown below (excerpt):</p><formula xml:id="formula_15">{ "data": { "problem": "carside", "evaluator": "benchmark", "sampling": "halton", "default_sample_coef": 10, "resampling": "geometric", ...</formula><p>To translate it, the carside problem is evaluated from the benchmark problem set. The initial sample is 10x the number of input dimensions, thus 70, and in every additional iteration the sample set is increased by 20%. The convergence metric for the model is the MAE, with a maximal value of 0.03. The ANN is used as the surrogate model, and its performance metric is calculated using 5-fold cross-validation. The hyperparameters are re-optimized whenever the number of samples has increased by 50%. The optimization uses the NSGA-III algorithm, and its termination is based upon the objective space tolerance: it terminates if there is no change of more than 0.001 in the tracked metrics (the changes of the objective functions, the inverted generational distance (IGD) <ref type="bibr" target="#b1">[2]</ref> and the nadir point) over the last 5 generations. This convergence check is performed every 5 generations. Additionally, a maximum of 150 generations is allowed during a single optimization run. The verification mean error limit is set at 5%. Although not present in the input file, the ANN's hyperparameters are defined as in Table <ref type="table" target="#tab_2">1</ref>. The output layer activation is linear. The numbers of hidden layers and of neurons per hidden layer are optimized using BO within [2, 10] and [6, 200], respectively. For optimization, the algorithm is initiated with default settings 7 . The reference directions are obtained using an energy method from Blank et al. <ref type="bibr" target="#b7">[8]</ref> and their number, which also defines the population size, is set at 30x the number of objectives, that is, 90 in this case. The number of generated offspring is equal to the population size.</p></div>
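The sampling settings above imply a concrete growth schedule for the sample set; a minimal sketch follows, where the rounding-up behaviour is an assumption (chosen so that the schedule is consistent with the 101 samples reported in the results):

```python
import math

def sample_schedule(n_dim, coef=10, growth=0.2, iterations=3):
    # Initial sample: coef * n_dim points; each geometric resampling
    # iteration grows the set by `growth` (rounding up is an assumption).
    sizes = [coef * n_dim]
    for _ in range(iterations - 1):
        sizes.append(math.ceil(sizes[-1] * (1 + growth)))
    return sizes

# For the 7-dimensional carside problem: 70, then 84, then 101 samples.
schedule = sample_schedule(7)
```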
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Results</head><p>In Figure <ref type="figure" target="#fig_3">5</ref>, the comparison of the training data with the prediction of the final trained ANN is shown. The sample set converged after 3 surrogate training iterations, that is, with 101 samples evaluated. This is just over the defined population size in one generation of the genetic algorithm. The optimization took 70 generations to converge, which amounts to 9000 evaluations of the model. That is a factor of about 90x more than the required surrogate training sample size, confirming the advantage of using the surrogate. It is noted, however, that thus far the selected optimization parameters have been determined empirically on a trial-and-error basis.</p><p>As a validation of the result, the numerical values of the Pareto optimal solutions are unfortunately not presented in Jain and Deb <ref type="bibr" target="#b29">[30]</ref> nor in any other paper; however, the qualitative comparison with their solutions in the objective space, shown in Figure <ref type="figure" target="#fig_5">6</ref>, indicates that all 3 objectives fall within nearly identical ranges. The final mean verification error between the Pareto optimal set and the original model, as discussed in subsection 2.6, was 2.10%.</p></div>
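The verification step discussed in subsection 2.6 can be sketched as follows; the helper name and the exact relative-error definition are assumptions, since the text only reports the resulting mean percentage:

```python
import numpy as np

def mean_verification_error(pareto_x, surrogate, model):
    # Mean relative deviation (in %) between the surrogate's prediction
    # and the original model, evaluated on the Pareto optimal designs.
    y_hat = np.array([surrogate(x) for x in pareto_x], dtype=float)
    y_ref = np.array([model(x) for x in pareto_x], dtype=float)
    return 100.0 * float(np.mean(np.abs(y_hat - y_ref) / np.abs(y_ref)))

# Toy check: a surrogate that consistently overshoots the model by 5%.
err = mean_verification_error([1.0, 2.0, 3.0],
                              surrogate=lambda x: 2.1 * x,
                              model=lambda x: 2.0 * x)
```

In the actual framework the reference values would come from re-running the expensive simulation on the Pareto optimal designs.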
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Conclusion</head><p>In this paper, a working core example of a modular surrogate-based optimization framework was presented. A more detailed elaboration on the ANN surrogate was given, together with a top-level description of each of the key submodules. An illustrative working example was shown that demonstrated the interaction and use of the framework, together with the efficiency of the surrogate-based approach. Compared to commercial solutions, the modularity makes it suitable for research purposes, while the open-source nature is beneficial for small companies or users with advanced customization needs.</p><p>On top of the correct implementation of the methods, the main challenges related to its use include the selection of a particular surrogate and optimization algorithm, and of the termination, model selection and convergence criteria. On a fully general scale, the no free lunch (NFL) theorems <ref type="bibr" target="#b50">[51]</ref> prove that a universally optimal choice is impossible. However, if the set of problems is limited to an expected class of practical engineering problems, at least a good baseline starting point is possible <ref type="bibr" target="#b52">[53]</ref>. Thus, the main aim of the follow-up work is to perform testing on a wide range of problems with a range of framework setups to thoroughly establish the influence of the tunable parameters of the surrogate model as well as of the optimization algorithm.</p><p>Further suggestions include a quantitative comparative study of the performance of the different proposed adaptive sampling methods, which show the largest potential for improving overall SBO performance, as the expensive simulations remain the main bottleneck. Beyond that, software-wise, additional suggested features include a graphical user interface (GUI), the inclusion of more surrogate models from SMT, such as the RBF, and extending the set of supported evaluators, e.g. for Abaqus. Finally, for practice-oriented research, a composite layup optimization study in terms of the number of plies and their ply angles is an interesting area of possible application of this framework within the field of advanced structures.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Diagram of the proposed framework</figDesc><graphic coords="3,56.69,80.51,240.94,390.94" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Example of a Halton sample on the unit range</figDesc><graphic coords="3,307.56,390.04,240.95,180.71" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: A generic flowchart of a genetic algorithm</figDesc><graphic coords="6,111.97,80.50,120.47,215.98" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Comparison of the ANN's trained prediction against the training output data</figDesc><graphic coords="8,66.19,224.07,212.03,160.44" type="bitmap" /></figure>
<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="7" xml:id="foot_4">pymoo -NSGA-III, http://pymoo.org/algorithms/nsga3.html [Accessed on 07/08/2020]</note>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: Scatter plot of the Pareto optimal set</figDesc><graphic coords="8,307.56,80.50,240.94,185.01" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc></figDesc><table><row><cell>019x 2 x 7 + 0.0144x 3 x 5 + 0.0154464x 6 ≤ 0.32</cell></row><row><cell>g 3 ( x) = 0.214 + 0.00817x 5 − 0.045195x 1 − 0.0135168x 1 + 0.03099x 2 x 6 − 0.018x 2 x 7 + 0.007176x 3 + 0.023232x 3 − 0.00364x 5 x 6 − 0.018x 2 2 ≤ 0.32</cell></row><row><cell>g 4 ( x) = 0.74 − 0.61x 2 − 0.031296x 3 − 0.031872x 7 + 0.227x 2 2 ≤ 0.32</cell></row><row><cell>rib deflections</cell></row><row><cell>g 5 ( x) = 28.98 + 3.818x 3 − 4.2x 1 x 2 + 1.27296x 6 − 2.68065x 7 ≤ 32</cell></row><row><cell>g 6 ( x) = 33.86 + 2.95x 3 − 5.057x 1 x 2 − 3.795x 2 − 3.4431x 7 + 1.45728 ≤ 32</cell></row><row><cell>g 7 ( x) = 46.36 − 9.9x 2 − 4.4505x 1 ≤ 32</cell></row><row><cell>pubic symphysis force g 8 ( x) ≡ F = 4.72 − 0.5x 4 − 0.19x 2 x 3 ≤ 4</cell></row><row><cell>and velocities of structural members at impact g 9 ( x) ≡ V MBP = 10.58 − 0.674x 1 x 2 − 0.67275x 2 ≤ 9.9</cell></row><row><cell>g 10 ( x) ≡ V FD = 16.45 − 0.489x 3 x 7 − 0.843x 5 x 6 ≤ 15.7 <ref type="bibr" target="#b53">[54]</ref></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 1 :</head><label>1</label><figDesc>ANN hyperparameters specification 6</figDesc><table><row><cell>Optimizer</cell><cell>adam</cell></row><row><cell>Activation function</cell><cell>relu</cell></row><row><cell>Learning rate</cell><cell>0.01</cell></row><row><cell>Regularization</cell><cell>None</cell></row><row><cell>Weight initialization</cell><cell>he_normal</cell></row><row><cell>Bias initialization</cell><cell>zeros</cell></row><row><cell>Max epochs</cell><cell>1000</cell></row><row><cell>Loss</cell><cell>mse</cell></row><row><cell>Early stopping delta</cell><cell>0.0001</cell></row><row><cell>Early stopping patience</cell><cell>50</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_0">HEEDS MDO, https://www.redcedartech.com/index. php/solutions/heeds-software [Accessed on 03/08/2020]</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_1">ANSYS Workbench Platform, https://www.ansys.com/ -/media/Ansys/corporate/resourcelibrary/brochure/ workbench-platform-121.pdf [Accessed on 05/08/2020]</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_2">keras-tuner, https://github.com/keras-team/ keras-tuner [Accessed on 06/08/2020]</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_3">Module: tf.keras | TensorFlow Core v2.2.0-rc1, https://www. tensorflow.org/versions/r2.2/api_docs/python/tf/keras [Accessed on 07/08/2020]</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">TensorFlow: A system for large-scale machine learning</title>
		<author>
			<persName><forename type="first">Martin</forename><surname>Abadi</surname></persName>
		</author>
		<ptr target="https://www.usenix.org/system/files/conference/osdi16/osdi16-abadi.pdf" />
	</analytic>
	<monogr>
		<title level="m">12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16)</title>
				<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="265" to="283" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">Performance indicators in multiobjective optimization</title>
		<author>
			<persName><forename type="first">Charles</forename><surname>Audet</surname></persName>
		</author>
		<ptr target="https://hal.archives-ouvertes.fr/hal-02464750" />
		<imprint>
			<date type="published" when="2020-02">Feb. 2020</date>
		</imprint>
	</monogr>
	<note>working paper or preprint</note>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Design of ultrathin shell structures in the stochastic post-buckling range using Bayesian machine learning and optimization</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Bessa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Pellegrino</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.ijsolstr.2018.01.035</idno>
	</analytic>
	<monogr>
		<title level="j">International Journal of Solids and Structures</title>
		<imprint>
			<biblScope unit="volume">139</biblScope>
			<biblScope unit="page" from="174" to="188" />
			<date type="published" when="2018-05">May 2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">A framework for data-driven analysis of materials under uncertainty: Countering the curse of dimensionality</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Bessa</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.cma.2017.03.037</idno>
	</analytic>
	<monogr>
		<title level="j">Computer Methods in Applied Mechanics and Engineering</title>
		<imprint>
			<biblScope unit="volume">320</biblScope>
			<biblScope unit="page" from="633" to="667" />
			<date type="published" when="2017-06">June 2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Advances in surrogate based modeling, feasibility analysis, and optimization: A review</title>
		<author>
			<persName><forename type="first">Atharv</forename><surname>Bhosekar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Marianthi</forename><surname>Ierapetritou</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.compchemeng.2017.09.017</idno>
	</analytic>
	<monogr>
		<title level="j">Computers &amp; Chemical Engineering</title>
		<imprint>
			<biblScope unit="volume">108</biblScope>
			<biblScope unit="page" from="250" to="267" />
			<date type="published" when="2018-01">Jan. 2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">Pattern Recognition and Machine Learning</title>
		<author>
			<persName><forename type="first">Christopher</forename><forename type="middle">M</forename><surname>Bishop</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2006-08-17">Aug. 17, 2006</date>
			<publisher>Springer-Verlag New York Inc</publisher>
			<biblScope unit="page">738</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">pymoo: Multiobjective Optimization in Python</title>
		<author>
			<persName><forename type="first">Julian</forename><surname>Blank</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kalyanmoy</forename><surname>Deb</surname></persName>
		</author>
		<idno type="DOI">10.1109/access.2020.2990567</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="89497" to="89509" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Generating Well-Spaced Points on a Unit Simplex for Evolutionary Many-Objective Optimization</title>
		<author>
			<persName><forename type="first">Julian</forename><surname>Blank</surname></persName>
		</author>
		<idno type="DOI">10.1109/tevc.2020.2992387</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Evolutionary Computation</title>
		<imprint>
			<biblScope unit="page" from="1" to="1" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Beating the Hold-Out: Bounds for K-fold and Progressive Cross-Validation</title>
		<author>
			<persName><forename type="first">Avrim</forename><surname>Blum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Adam</forename><surname>Kalai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">John</forename><surname>Langford</surname></persName>
		</author>
		<idno type="DOI">10.1145/307400.307439</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the twelfth annual conference on Computational learning theory -COLT &apos;99</title>
				<meeting>the twelfth annual conference on Computational learning theory -COLT &apos;99</meeting>
		<imprint>
			<publisher>ACM Press</publisher>
			<date type="published" when="1999">1999</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">A Python surrogate modeling framework with derivatives</title>
		<author>
			<persName><forename type="first">Mohamed</forename><forename type="middle">Amine</forename><surname>Bouhlel</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.advengsoft.2019.03.005</idno>
	</analytic>
	<monogr>
		<title level="j">Advances in Engineering Software</title>
		<imprint>
			<biblScope unit="volume">135</biblScope>
			<biblScope unit="page">102662</biblScope>
			<date type="published" when="2019-09">Sept. 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">An Application of Random and Hammersley Sampling Methods to Iris Recognition</title>
		<author>
			<persName><forename type="first">Luis</forename><forename type="middle">E</forename><surname>Garza Castañón</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Saúl</forename><surname>Montes De Oca</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Rubén</forename><surname>Morales-Menéndez</surname></persName>
		</author>
		<idno type="DOI">10.1007/11779568_56</idno>
	</analytic>
	<monogr>
		<title level="m">Advances in Applied Artificial Intelligence</title>
				<editor>
			<persName><forename type="first">M</forename><surname>Ali</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Dapoigny</surname></persName>
		</editor>
		<meeting><address><addrLine>Berlin Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2006">2006</date>
			<biblScope unit="volume">4031</biblScope>
			<biblScope unit="page" from="520" to="529" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Surrogate-assisted global sensitivity analysis: an overview</title>
		<author>
			<persName><forename type="first">Kai</forename><surname>Cheng</surname></persName>
		</author>
		<idno type="DOI">10.1007/s00158-019-02413-5</idno>
	</analytic>
	<monogr>
		<title level="j">Structural and Multidisciplinary Optimization</title>
		<imprint>
			<biblScope unit="volume">61</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="1187" to="1213" />
			<date type="published" when="2020-01">Jan. 2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">On the optimal Halton sequence</title>
		<author>
			<persName><forename type="first">H</forename><surname>Chi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Mascagni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Warnock</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.matcom.2005.03.004</idno>
	</analytic>
	<monogr>
		<title level="j">Mathematics and Computers in Simulation</title>
		<imprint>
			<biblScope unit="volume">70</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="9" to="21" />
			<date type="published" when="2005-09">Sept. 2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Fast Generation of Space-filling Latin Hypercube Sample Designs</title>
		<author>
			<persName><forename type="first">Keith</forename><surname>Dalbey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">George</forename><surname>Karystinos</surname></persName>
		</author>
		<idno type="DOI">10.2514/6.2010-9085</idno>
	</analytic>
	<monogr>
		<title level="m">13th AIAA/ISSMO Multidisciplinary Analysis Optimization Conference</title>
				<imprint>
			<publisher>American Institute of Aeronautics and Astronautics</publisher>
			<date type="published" when="2010-09">Sept. 2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">A fast and elitist multiobjective genetic algorithm: NSGA-II</title>
		<author>
			<persName><forename type="first">K</forename><surname>Deb</surname></persName>
		</author>
		<idno type="DOI">10.1109/4235.996017</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Evolutionary Computation</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="182" to="197" />
			<date type="published" when="2002-04">Apr. 2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point-Based Nondominated Sorting Approach, Part I: Solving Problems With Box Constraints</title>
		<author>
			<persName><forename type="first">Kalyanmoy</forename><surname>Deb</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Himanshu</forename><surname>Jain</surname></persName>
		</author>
		<idno type="DOI">10.1109/tevc.2013.2281535</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Evolutionary Computation</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="577" to="601" />
			<date type="published" when="2014-08">Aug. 2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Adaptive sequential sampling for surrogate model generation with artificial neural networks</title>
		<author>
			<persName><forename type="first">John</forename><surname>Eason</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Selen</forename><surname>Cremaschi</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.compchemeng.2014.05.021</idno>
	</analytic>
	<monogr>
		<title level="j">Computers &amp; Chemical Engineering</title>
		<imprint>
			<biblScope unit="volume">68</biblScope>
			<biblScope unit="page" from="220" to="232" />
			<date type="published" when="2014-09">Sept. 2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">An adaptive sequential experiment design method for model validation</title>
		<author>
			<persName><forename type="first">Ke</forename><surname>Fang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yuchen</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ping</forename><surname>Ma</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.cja.2019.12.026</idno>
	</analytic>
	<monogr>
		<title level="j">Chinese Journal of Aeronautics</title>
		<imprint>
			<biblScope unit="volume">33</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="1661" to="1672" />
			<date type="published" when="2020-06">June 2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<title level="m" type="main">Generalized Halton sequences in 2008</title>
		<author>
			<persName><forename type="first">Henri</forename><surname>Faure</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Christiane</forename><surname>Lemieux</surname></persName>
		</author>
		<idno type="DOI">10.1145/1596519.1596520</idno>
		<imprint>
			<date type="published" when="2009-10">Oct. 2009</date>
			<publisher>ACM</publisher>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="page" from="1" to="31" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Recent advances in surrogate-based optimization</title>
		<author>
			<persName><forename type="first">Alexander</forename><forename type="middle">I J</forename><surname>Forrester</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andy</forename><forename type="middle">J</forename><surname>Keane</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.paerosci.2008.11.001</idno>
	</analytic>
	<monogr>
		<title level="j">Progress in Aerospace Sciences</title>
		<imprint>
			<biblScope unit="volume">45</biblScope>
			<biblScope unit="page" from="50" to="79" />
			<date type="published" when="2009-01">Jan. 2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Minimum-weight design for three dimensional woven composite stiffened panels using neural networks and genetic algorithms</title>
		<author>
			<persName><forename type="first">Xinwei</forename><surname>Fu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sergio</forename><surname>Ricci</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Chiara</forename><surname>Bisagni</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.compstruct.2015.08.077</idno>
	</analytic>
	<monogr>
		<title level="j">Composite Structures</title>
		<imprint>
			<biblScope unit="volume">134</biblScope>
			<biblScope unit="page" from="708" to="715" />
			<date type="published" when="2015-12">Dec. 2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Design of computer experiments: A review</title>
		<author>
			<persName><forename type="first">Sushant</forename><forename type="middle">S</forename><surname>Garud</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Iftekhar</forename><forename type="middle">A</forename><surname>Karimi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Markus</forename><surname>Kraft</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.compchemeng.2017.05.010</idno>
	</analytic>
	<monogr>
		<title level="j">Computers &amp; Chemical Engineering</title>
		<imprint>
			<biblScope unit="volume">106</biblScope>
			<biblScope unit="page" from="71" to="95" />
			<date type="published" when="2017-11">Nov. 2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Understanding the difficulty of training deep feedforward neural networks</title>
		<author>
			<persName><forename type="first">Xavier</forename><surname>Glorot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yoshua</forename><surname>Bengio</surname></persName>
		</author>
		<ptr target="http://proceedings.mlr.press/v9/glorot10a.html" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics</title>
				<editor>
			<persName><forename type="first">Yee</forename><forename type="middle">Whye</forename><surname>Teh</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Mike</forename><surname>Titterington</surname></persName>
		</editor>
		<meeting>the Thirteenth International Conference on Artificial Intelligence and Statistics<address><addrLine>Chia Laguna Resort, Sardinia, Italy</addrLine></address></meeting>
		<imprint>
			<publisher>PMLR</publisher>
			<date type="published" when="2010-05">May 2010</date>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="249" to="256" />
		</imprint>
	</monogr>
	<note>Proceedings of Machine Learning Research</note>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Adaptive Distributed Metamodeling</title>
		<author>
			<persName><forename type="first">Dirk</forename><surname>Gorissen</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-540-71351-7_45</idno>
	</analytic>
	<monogr>
		<title level="j">LNCS</title>
		<editor>M. Daydé et al.</editor>
		<imprint>
			<biblScope unit="volume">4395</biblScope>
			<biblScope unit="page" from="579" to="588" />
			<date type="published" when="2007">2007</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Thermo-mechanical optimization of metallic thermal protection system under aerodynamic heating</title>
		<author>
			<persName><forename type="first">Qi</forename><surname>Guo</surname></persName>
		</author>
		<idno type="DOI">10.1007/s00158-019-02379-4</idno>
	</analytic>
	<monogr>
		<title level="m">Structural and Multidisciplinary Optimization</title>
				<imprint>
			<date type="published" when="2019-08">Aug. 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">An objective reduction algorithm using representative Pareto solution search for many-objective optimization problems</title>
		<author>
			<persName><forename type="first">Xiaofang</forename><surname>Guo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yuping</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Xiaoli</forename><surname>Wang</surname></persName>
		</author>
		<idno type="DOI">10.1007/s00500-015-1776-4</idno>
	</analytic>
	<monogr>
		<title level="j">Soft Computing</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="issue">12</biblScope>
			<biblScope unit="page" from="4881" to="4895" />
			<date type="published" when="2015-07">July 2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">A survey of non-gradient optimization methods in structural engineering</title>
		<author>
			<persName><forename type="first">Warren</forename><surname>Hare</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Julie</forename><surname>Nutini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Solomon</forename><surname>Tesfamariam</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.advengsoft.2013.03.001</idno>
	</analytic>
	<monogr>
		<title level="j">Advances in Engineering Software</title>
		<imprint>
			<biblScope unit="volume">59</biblScope>
			<biblScope unit="page" from="19" to="28" />
			<date type="published" when="2013-05">May 2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Kolmogorov&apos;s Mapping Neural Network Existence Theorem</title>
		<author>
			<persName><forename type="first">Robert</forename><surname>Hecht-Nielsen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the international conference on Neural Networks</title>
				<meeting>the international conference on Neural Networks<address><addrLine>New York</addrLine></address></meeting>
		<imprint>
			<publisher>IEEE Press</publisher>
			<date type="published" when="1987">1987</date>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="11" to="14" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Learning capability and storage capacity of two-hidden-layer feedforward networks</title>
		<author>
			<persName><forename type="first">Guang-Bin</forename><surname>Huang</surname></persName>
		</author>
		<idno type="DOI">10.1109/tnn.2003.809401</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Neural Networks</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="274" to="281" />
			<date type="published" when="2003-03">Mar. 2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point Based Nondominated Sorting Approach, Part II: Handling Constraints and Extending to an Adaptive Approach</title>
		<author>
			<persName><forename type="first">Himanshu</forename><surname>Jain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kalyanmoy</forename><surname>Deb</surname></persName>
		</author>
		<idno type="DOI">10.1109/tevc.2013.2281534</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Evolutionary Computation</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="602" to="622" />
			<date type="published" when="2014-08">Aug. 2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">On Sequential Sampling for Global Metamodeling in Engineering Design</title>
		<author>
			<persName><forename type="first">Ruichen</forename><surname>Jin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Wei</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Agus</forename><surname>Sudjianto</surname></persName>
		</author>
		<idno type="DOI">10.1115/detc2002/dac-34092</idno>
	</analytic>
	<monogr>
		<title level="m">28th Design Automation Conference</title>
				<imprint>
			<publisher>ASMEDC</publisher>
			<date type="published" when="2002-01">Jan. 2002</date>
			<biblScope unit="volume">2</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">On the representation of continuous functions of many variables by superposition of continuous functions of one variable and addition</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">N</forename><surname>Kolmogorov</surname></persName>
		</author>
		<ptr target="http://mi.mathnet.ru/eng/dan22050" />
	</analytic>
	<monogr>
		<title level="j">Doklady Akademii Nauk SSSR</title>
		<imprint>
			<biblScope unit="volume">114</biblScope>
			<biblScope unit="page" from="953" to="956" />
			<date type="published" when="1957">1957</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">Multi-Objective Design of Antennas Using Variable-Fidelity Simulations and Surrogate Models</title>
		<author>
			<persName><forename type="first">Slawomir</forename><surname>Koziel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Stanislav</forename><surname>Ogurtsov</surname></persName>
		</author>
		<idno type="DOI">10.1109/tap.2013.2283599</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Antennas and Propagation</title>
		<imprint>
			<biblScope unit="volume">61</biblScope>
			<biblScope unit="issue">12</biblScope>
			<biblScope unit="page" from="5931" to="5939" />
			<date type="published" when="2013-12">Dec. 2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">Crashworthiness optimization of helicopter subfloor based on decomposition and global approximation</title>
		<author>
			<persName><forename type="first">L</forename><surname>Lanzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Bisagni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ricci</surname></persName>
		</author>
		<idno type="DOI">10.1007/s00158-004-0394-z</idno>
	</analytic>
	<monogr>
		<title level="j">Structural and Multidisciplinary Optimization</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="issue">5</biblScope>
			<date type="published" when="2004-06">June 2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<analytic>
		<title level="a" type="main">Gradient-based multi-objective optimization with applications to waterflooding optimization</title>
		<author>
			<persName><forename type="first">Xin</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Albert</forename><forename type="middle">C</forename><surname>Reynolds</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10596-015-9523-6</idno>
	</analytic>
	<monogr>
		<title level="j">Computational Geosciences</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="677" to="693" />
			<date type="published" when="2015-09">Sept. 2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<analytic>
		<title level="a" type="main">Survey of multiobjective optimization methods for engineering</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">T</forename><surname>Marler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">S</forename><surname>Arora</surname></persName>
		</author>
		<idno type="DOI">10.1007/s00158-003-0368-6</idno>
	</analytic>
	<monogr>
		<title level="j">Structural and Multidisciplinary Optimization</title>
		<imprint>
			<biblScope unit="volume">26</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="369" to="395" />
			<date type="published" when="2004-04">Apr. 2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b36">
	<monogr>
		<title level="m" type="main">A randomized Halton algorithm in R</title>
		<author>
			<persName><forename type="first">Art</forename><forename type="middle">B</forename><surname>Owen</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1706.02808v2</idno>
		<imprint>
			<date type="published" when="2017-06-09">June 9, 2017</date>
		</imprint>
	</monogr>
	<note>stat.CO</note>
</biblStruct>

<biblStruct xml:id="b37">
	<analytic>
		<title level="a" type="main">On the use of surrogate models in engineering design optimization and exploration</title>
		<author>
			<persName><forename type="first">Pramudita</forename><forename type="middle">Satria</forename><surname>Palar</surname></persName>
		</author>
		<idno type="DOI">10.1145/3319619.3326813</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Genetic and Evolutionary Computation Conference Companion</title>
				<meeting>the Genetic and Evolutionary Computation Conference Companion</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2019-07">July 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b38">
	<analytic>
		<title level="a" type="main">Gradient-Based Multiobjective Optimization with Uncertainties</title>
		<author>
			<persName><forename type="first">Sebastian</forename><surname>Peitz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michael</forename><surname>Dellnitz</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-319-64063-1_7</idno>
	</analytic>
	<monogr>
		<title level="m">NEO 2016</title>
				<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2017-09">Sept. 2017</date>
			<biblScope unit="page" from="159" to="182" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b39">
	<analytic>
		<title level="a" type="main">Searching for Activation Functions</title>
		<author>
			<persName><forename type="first">Prajit</forename><surname>Ramachandran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Barret</forename><surname>Zoph</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Quoc</forename><forename type="middle">V</forename><surname>Le</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1710.05941v2</idno>
		<ptr target="http://arxiv.org/abs/1710.05941v2" />
	</analytic>
	<monogr>
		<title level="j">CoRR</title>
		<imprint>
			<date type="published" when="2017-10-16">Oct. 16, 2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b40">
	<analytic>
		<title level="a" type="main">Survey of modeling and optimization strategies to solve high-dimensional design problems with computationally-expensive black-box functions</title>
		<author>
			<persName><forename type="first">Songqing</forename><surname>Shan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">Gary</forename><surname>Wang</surname></persName>
		</author>
		<idno type="DOI">10.1007/s00158-009-0420-2</idno>
	</analytic>
	<monogr>
		<title level="j">Structural and Multidisciplinary Optimization</title>
		<imprint>
			<biblScope unit="volume">41</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="219" to="241" />
			<date type="published" when="2009-08">Aug. 2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b41">
	<analytic>
		<title level="a" type="main">Metamodels for Computer-based Engineering Design: Survey and recommendations</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">W</forename><surname>Simpson</surname></persName>
		</author>
		<idno type="DOI">10.1007/pl00007198</idno>
	</analytic>
	<monogr>
		<title level="j">Engineering with Computers</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="129" to="150" />
			<date type="published" when="2001-07">July 2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b42">
	<analytic>
		<title level="a" type="main">A balanced sequential design strategy for global surrogate modeling</title>
		<author>
			<persName><forename type="first">Prashant</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dirk</forename><surname>Deschrijver</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tom</forename><surname>Dhaene</surname></persName>
		</author>
		<idno type="DOI">10.1109/wsc.2013.6721594</idno>
	</analytic>
	<monogr>
		<title level="m">2013 Winter Simulations Conference (WSC)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2013-12">Dec. 2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b43">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Sobieszczanski-Sobieski</surname></persName>
		</author>
		<title level="m">Structural Optimization: Challenges and Opportunities</title>
				<imprint>
			<date type="published" when="1984">1984</date>
		</imprint>
		<respStmt>
			<orgName>National Aeronautics and Space Administration</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">NASA</note>
</biblStruct>

<biblStruct xml:id="b44">
	<analytic>
		<title level="a" type="main">Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design</title>
		<author>
			<persName><forename type="first">Niranjan</forename><surname>Srinivas</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 27th International Conference on International Conference on Machine Learning. ICML&apos;10</title>
				<meeting>the 27th International Conference on International Conference on Machine Learning. ICML&apos;10<address><addrLine>Haifa, Israel</addrLine></address></meeting>
		<imprint>
			<publisher>Omnipress</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="1015" to="1022" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b45">
	<analytic>
		<title level="a" type="main">An easy-to-use real-world multi-objective optimization problem suite</title>
		<author>
			<persName><forename type="first">Ryoji</forename><surname>Tanabe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hisao</forename><surname>Ishibuchi</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.asoc.2020.106078</idno>
	</analytic>
	<monogr>
		<title level="j">Applied Soft Computing</title>
		<imprint>
			<date type="published" when="2020-04">Apr. 2020</date>
			<publisher>Elsevier BV</publisher>
			<biblScope unit="volume">89</biblScope>
			<biblScope unit="page">106078</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b46">
	<analytic>
		<title level="a" type="main">Thirty years of modern structural optimization</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">N</forename><surname>Vanderplaats</surname></persName>
		</author>
		<idno type="DOI">10.1016/0965-9978(93)90052-u</idno>
	</analytic>
	<monogr>
		<title level="j">Advances in Engineering Software</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="81" to="88" />
			<date type="published" when="1993-01">Jan. 1993</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b47">
	<analytic>
		<title level="a" type="main">Structural optimization complexity: what has Moore&apos;s law done for us?</title>
		<author>
			<persName><forename type="first">S</forename><surname>Venkataraman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">T</forename><surname>Haftka</surname></persName>
		</author>
		<idno type="DOI">10.1007/s00158-004-0415-y</idno>
	</analytic>
	<monogr>
		<title level="j">Structural and Multidisciplinary Optimization</title>
		<imprint>
			<biblScope unit="volume">28</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="375" to="387" />
			<date type="published" when="2004-09">Sept. 2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b48">
	<analytic>
		<title level="a" type="main">Special Section on Multidisciplinary Design Optimization: Metamodeling in Multidisciplinary Design Optimization: How Far Have We Really Come?</title>
		<author>
			<persName><forename type="first">Felipe</forename><forename type="middle">A C</forename><surname>Viana</surname></persName>
		</author>
		<idno type="DOI">10.2514/1.j052375</idno>
	</analytic>
	<monogr>
		<title level="j">AIAA Journal</title>
		<imprint>
			<biblScope unit="volume">52</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="670" to="690" />
			<date type="published" when="2014-04">Apr. 2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b49">
	<analytic>
		<title level="a" type="main">Multi-Objective Optimization Using Surrogates</title>
		<author>
			<persName><forename type="first">Ivan</forename><surname>Voutchkov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andy</forename><surname>Keane</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-642-12775-5_7</idno>
	</analytic>
	<monogr>
		<title level="m">Computational Intelligence in Optimization</title>
				<editor>
			<persName><forename type="first">Y</forename><surname>Tenne</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">C.-K</forename><surname>Goh</surname></persName>
		</editor>
		<meeting><address><addrLine>Berlin Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="155" to="175" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b50">
	<analytic>
		<title level="a" type="main">No free lunch theorems for optimization</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">H</forename><surname>Wolpert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">G</forename><surname>Macready</surname></persName>
		</author>
		<idno type="DOI">10.1109/4235.585893</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Evolutionary Computation</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="67" to="82" />
			<date type="published" when="1997-04">Apr. 1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b51">
	<monogr>
		<title level="m" type="main">Nature-Inspired Optimization Algorithms</title>
		<author>
			<persName><forename type="first">Xin-She</forename><surname>Yang</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2014">2014</date>
			<publisher>Elsevier</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b52">
	<analytic>
		<title level="a" type="main">Nature-inspired optimization algorithms: Challenges and open problems</title>
		<author>
			<persName><forename type="first">Xin-She</forename><surname>Yang</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.jocs.2020.101104</idno>
		<idno>eprint: 2003.03776v1</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Computational Science</title>
		<imprint>
			<biblScope unit="page">101104</biblScope>
			<date type="published" when="2020-03">Mar. 2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b53">
	<analytic>
		<title level="a" type="main">Reliability-based design optimization for crashworthiness of vehicle side impact</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">D</forename><surname>Youn</surname></persName>
		</author>
		<idno type="DOI">10.1007/s00158-003-0345-0</idno>
	</analytic>
	<monogr>
		<title level="j">Structural and Multidisciplinary Optimization</title>
		<imprint>
			<biblScope unit="volume">26</biblScope>
			<biblScope unit="page" from="272" to="283" />
			<date type="published" when="2004-02">Feb. 2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b54">
	<analytic>
		<title level="a" type="main">Design Optimization of a VX Gasket Structure for a Subsea Connector Based on the Kriging Surrogate Model-NSGA-II Algorithm Considering the Load Randomness</title>
		<author>
			<persName><forename type="first">Wei</forename><surname>Zeng</surname></persName>
		</author>
		<idno type="DOI">10.3390/a12020042</idno>
	</analytic>
	<monogr>
		<title level="j">Algorithms</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page">42</biblScope>
			<date type="published" when="2019-02">Feb. 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b55">
	<analytic>
		<title level="a" type="main">To Prune, or Not to Prune: Exploring the Efficacy of Pruning for Model Compression</title>
		<author>
			<persName><forename type="first">Michael</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Suyog</forename><surname>Gupta</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1710.01878v2</idno>
		<ptr target="https://arxiv.org/abs/1710.01878v2" />
	</analytic>
	<monogr>
		<title level="m">6th International Conference on Learning Representations, ICLR 2018</title>
				<meeting><address><addrLine>Vancouver, BC, Canada</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018-05-03">April 30 - May 3, 2018</date>
		</imprint>
	</monogr>
	<note>Workshop Track Proceedings</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
