<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">White- and Black-Box Computing and Measurements under Limited Resources: Cloud, High-Performance, and Quantum Computing, and Two Case Studies -Robotic Boat and Hierarchical Covid Testing</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Vladik</forename><surname>Kreinovich</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">University of Texas at El Paso</orgName>
								<address>
									<addrLine>500 W. University</addrLine>
									<postCode>79968</postCode>
									<settlement>El Paso</settlement>
									<region>TX</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Martine</forename><surname>Ceberio</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">University of Texas at El Paso</orgName>
								<address>
									<addrLine>500 W. University</addrLine>
									<postCode>79968</postCode>
									<settlement>El Paso</settlement>
									<region>TX</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Olga</forename><surname>Kosheleva</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">University of Texas at El Paso</orgName>
								<address>
									<addrLine>500 W. University</addrLine>
									<postCode>79968</postCode>
									<settlement>El Paso</settlement>
									<region>TX</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">White- and Black-Box Computing and Measurements under Limited Resources: Cloud, High-Performance, and Quantum Computing, and Two Case Studies -Robotic Boat and Hierarchical Covid Testing</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">C4F77233ABB05E55163DD676FA733579</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T23:46+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>limited resources</term>
					<term>computing and measurement under limited resources</term>
					<term>cloud computing</term>
					<term>high performance computing</term>
					<term>quantum computing</term>
					<term>robotic boat</term>
					<term>Covid testing</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In many practical problems, it is important to take into account that our computational and measuring resources are limited. In this paper, we overview the main resource limitations for different types of computers, and we provide two case studies explaining how to best take these resource limitations into account.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Formulation of the Problem</head><p>What are the goals of science and engineering? We want to know what will happen in the future, and we want to decide what to do to make this future better. In a nutshell:</p><p>• predicting the future is what science is about, and</p><p>• deciding what to do is more the area of engineering.</p><p>Need for measurements and computations. To predict the world's future and to decide how to improve this future, we need to understand how the world works. Some situations we can simply observe, but in most cases, we need to measure the values of different quantities.</p><p>Based on the measurement results, we deduce the algorithms describing the dynamics of the world's processes -and we formulate the selection of the best action and/or device as an appropriate optimization problem. To use prediction algorithms and to solve the corresponding optimization problems, we often need to perform a large number of computations. For example, to reasonably accurately predict tomorrow's weather, we need to use high-performance computers.</p><p>Comment. Some problems require so many computational steps that we cannot solve them even on the fastest computers. For example, to predict in which direction a tornado will go, even the existing high-performance computers are not sufficient:</p><p>• to predict where the tornado will go in the next 15 minutes, we need to spend several hours on a high-performance computer;</p><p>• by that time, the tornado has already moved and caused damage.</p><p>Our resources are limited. In principle, for the same computational problem, we may have different algorithms. Which algorithm should we select? The traditional approach of theoretical computer science uses asymptotic analysis to compare different algorithms. For example, we analyze how the computation time grows with the size 𝑛 of the input, and we select an algorithm that leads to the shortest possible time for large 𝑛. This analysis implicitly assumes that we can have an unlimited amount of resources:</p><p>• we can have inputs of arbitrarily large size 𝑛,</p><p>• we can have an arbitrarily large memory to store this data and to store all needed intermediate results,</p><p>• we can perform an arbitrarily large number of computations, etc.</p><p>In practice, our resources are limited. It is therefore important to consider how these limitations affect computations and measurements.</p><p>What we do in this paper. In this paper:</p><p>• we provide a brief overview of the corresponding resource limitations, and</p><p>• we also present two case studies explaining, in some detail, how to take these resource limitations into account.</p><p>Comment. In most cases, we focus on our own related results -although, of course, for each of these problems, there are many other results and techniques. The reason for this selection is that the results we cite are mathematically reasonably simple: for our own results, we know how to present them in an easy-to-follow, not-too-complex form. We believe that, overall, these results provide a good introduction to the great variety of related problems.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>White-box and black-box computing.</head><p>In our analysis, we will distinguish between:</p><p>• traditional (white-box) computing and</p><p>• practically important black-box computing.</p><p>White-box computing corresponds to the usual analysis of theoretical computer science, when we know, step by step, what our algorithms are doing. This corresponds to:</p><p>• situations when we write our own code "from scratch", and</p><p>• situations when we only use open-source software.</p><p>In practice, however, we often use proprietary code, i.e., code whose inner workings its producer does not disclose. We can use it, but we are not allowed to see the algorithm -we only see the results. Similarly, in defense applications, we may need to use classified code -e.g., the code used to control the corresponding devices.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">White-Box Computing: Three Types of Situations and Related Resource Limitations</head><p>Three types of situations. In our analysis, we will distinguish between three different computational situations.</p><p>• One such situation is regular-scale computing, when we perform computations on a usual computer, be it a single laptop or an affordable computer cluster.</p><p>• Regular-scale computing works well in many cases, but, as we have mentioned earlier, there are many practical situations where it is not sufficient, where we need large-scale computing -what is usually called high-performance computing.</p><p>• For some situations, the opposite is true: we do not need even the full computational power of a regular laptop; it is sufficient to use the limited computational power of an easier-to-carry smaller device such as a cell phone or a smart watch. It is natural to call such situations small-scale computing.</p><p>Let us describe the typical resource limitations of these three types of situations -and what we can do to best take these resource limitations into account.</p><p>Resource limitations of small-scale computing: energy. For small-scale computing -such as computing on cell phones -by definition, computational ability is not a problem. The main need for such small-scale devices comes from the fact that regular computers are not very portable: it is easier to carry a cell phone than a computer. This portability comes at a price: to perform computations, we need energy. For a more or less stationary device like a regular computer, this is not a problem: we just plug in the computer and get energy non-stop. For portable devices, energy is a problem: we can only plug them in once in a while, and in a small volume, we can only store a limited amount of energy. So, for such devices, the main resource limitation is energy.</p><p>This requires a serious change in algorithms -since most traditional algorithms are designed to minimize computation time, without taking energy limitations into account. In particular, this means that energy-consuming auxiliary procedures like garbage collection -which periodically frees the no-longer-used part of computer memory -have to be performed only based on need, and not, as on usual computers, periodically or whenever this leads to faster computations; see, e.g., <ref type="bibr" target="#b0">[1]</ref> and references therein.</p><p>For example, a regular laptop usually updates when it is idle, when it is not doing any computations:</p><p>• this preserves its computational speed when it is used, and</p><p>• no matter how many updates we make, it does not affect its computing abilities.</p><p>For portable devices, any such procedure uses some portion of the stored energy, so it needs to be performed as rarely as possible.</p><p>The corresponding optimization of computations is a very challenging task, especially taking into account that some operating systems running on these devices are proprietary. Since the operating system uses a significant amount of energy, we thus face a partly-black-box problem even when the computational algorithms that we run on these devices are white-box ones; see, e.g., <ref type="bibr" target="#b1">[2]</ref>.</p><p>Resource limitations of regular-size computing: cost and cloud computing. For regular-size computing, computational ability is not a problem -otherwise, we would have needed a high-performance computer.
As we have mentioned earlier, for such computers, energy is also not a problem: we just plug them in. So is there any limited resource? Yes, a very mundane one: money.</p><p>The possibility to save money comes from the fact that the amount of needed computations changes with time:</p><p>• we have daily fluctuations, since most computations are performed during the daytime, and</p><p>• we have seasonal fluctuations -e.g., a tax-advising company needs a lot of computations before the tax deadline, stores need to perform more computations before Christmas, when many people are shopping for Christmas gifts, etc.</p><p>In all these cases, we have a choice:</p><p>• we can buy as many computers as we need at peak times, or</p><p>• we can buy fewer computers and, at peak times, rent computation time on other computers.</p><p>Computations involving rented computation time are known as cloud computing; see, e.g., <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4,</ref><ref type="bibr" target="#b4">5,</ref><ref type="bibr" target="#b5">6,</ref><ref type="bibr" target="#b6">7]</ref>. This name comes from the fact that we do not care where exactly these computations are performed -as long as they are performed. So, the possible locations of the processors performing such computations do not form a clear shape; the resulting geographic shape is amorphous and often fast-changing, like a cloud in the sky. So how much computing power should we buy, and how much should we rent? Detailed answers to this question can be found in <ref type="bibr" target="#b7">[8,</ref><ref type="bibr" target="#b8">9,</ref><ref type="bibr" target="#b9">10,</ref><ref type="bibr" target="#b10">11]</ref>. In this paper, we show the optimal money-saving solution for the simplest case, when we have:</p><p>• full information about the costs and</p><p>• full information about our computational needs.</p><p>This means, in particular, that we know:</p><p>• the cost 𝑐₀ per computational step of in-house computations (including the corresponding part of the cost of buying the computers),</p><p>• the cost 𝑐₁ &gt; 𝑐₀ per computational step of rented out-of-house computations, and</p><p>• the probability distribution of the future computing needs 𝑥, which can be described, e.g., by the probability density function (pdf) 𝜌(𝑥).</p><p>Based on this information, we need to select the optimal amount 𝑥₀ of computational power that we should buy.</p><p>For each value 𝑥 &gt; 𝑥₀, we rent the missing amount 𝑥 − 𝑥₀ of computational power. Thus, the overall expected cost of all these computations is equal to</p><formula xml:id="formula_0">𝑐₀ ⋅ 𝑥₀ + 𝑐₁ ⋅ ∫_{𝑥₀}^{∞} (𝑥 − 𝑥₀) ⋅ 𝜌(𝑥) 𝑑𝑥.</formula><p>To find the optimal value 𝑥₀, we can differentiate this expression with respect to 𝑥₀ and equate the derivative to 0. As a result, we get the equation</p><formula xml:id="formula_1">𝐹(𝑥₀) = 1 − 𝑐₀/𝑐₁,</formula><p>where 𝐹(𝑥) is the cumulative distribution function corresponding to the given pdf. This equation describes the optimal amount of computing power that we should buy.</p>
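<p>Comment. The above equation means that the optimal purchase amount is 𝑥₀ = 𝐹⁻¹(1 − 𝑐₀/𝑐₁), where 𝐹⁻¹ is the inverse of the cumulative distribution function. The following is a minimal Python sketch of this computation; the cost values and the lognormal demand model are hypothetical, chosen only for illustration.</p><p><code>
from scipy import stats

def optimal_purchase(c0, c1, demand_dist):
    """Optimal in-house capacity x0 satisfying F(x0) = 1 - c0/c1."""
    return demand_dist.ppf(1.0 - c0 / c1)

# hypothetical numbers: an in-house step costs 1 unit, a rented one 3 units;
# demand is assumed roughly lognormal around 1000 units of compute
demand = stats.lognorm(s=0.5, scale=1000.0)
x0 = optimal_purchase(c0=1.0, c1=3.0, demand_dist=demand)
print(f"buy {x0:.0f} units; rent whatever demand exceeds this")
</code></p>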
<p>Comment. In <ref type="bibr" target="#b7">[8,</ref><ref type="bibr" target="#b8">9,</ref><ref type="bibr" target="#b9">10,</ref><ref type="bibr" target="#b10">11]</ref>, one can also find:</p><p>• algorithms for solving this problem in more realistic settings, when we know the costs and needs only with some uncertainty,</p><p>• algorithms for deciding when a long-term contract is beneficial, and</p><p>• algorithms that help companies providing cloud computing to find the optimal locations of their computers around the world.</p><p>Resource limitations of large-scale computing: energy again. For large-scale computing, as we have mentioned earlier, one of the main problems is that some important practical problems still cannot be solved on such computers -computer engineers are working on this.</p><p>For the existing computers, what is the main limitation? Somewhat surprisingly, the main limitation is again energy, the same as for small-scale computing. Indeed, we can simply plug in a regular-size computer, but a usual company-wide or university-wide power connection can only support a limited number of such plugged-in computers.</p><p>For a high-performance computer -which, crudely speaking, consists of hundreds or thousands of usual computers -we need to build a special power station, and even this may not be sufficient. How can we minimize the power needed for computations?</p><p>To answer this question, we need to go back to computer design:</p><p>• When we design a high-performance computer, we want to maximize the overall number of computations per second. From this viewpoint, a seemingly natural idea is to run all the processors at their maximum speed.</p><p>• It turns out that, from the viewpoint of energy consumption, this is not the best arrangement.</p><p>To be more precise, if the power 𝑃ᵢ used by each processor 𝑖 were exactly proportional to this processor's computation speed 𝑓ᵢ -i.e., to the number of computational operations per second -this would not matter:</p><p>• if 𝑃ᵢ = 𝑘 ⋅ 𝑓ᵢ,</p><p>• then, no matter how we distribute the task between processors, the overall power 𝑃 would be similarly proportional to the overall number 𝑓 of computational steps per second: 𝑃 = 𝑘 ⋅ 𝑓.</p><p>In practice, however, the linear dependence of 𝑃ᵢ on 𝑓ᵢ is only a rough approximation; the actual dependence 𝑃ᵢ = 𝐹(𝑓ᵢ) is non-linear. As a result, instead of having all the processors work at their maximal speed, a better idea is:</p><p>• to have each processor work at the speed 𝑓ᵢ at which the ratio</p><formula xml:id="formula_4">𝑃ᵢ/𝑓ᵢ = 𝐹(𝑓ᵢ)/𝑓ᵢ</formula><p>is the smallest possible, and</p><p>• to use more processors if needed.</p><p>If at some point the task requires a smaller overall computation speed 𝑓, then, to minimize energy consumption:</p><p>• we make some processors idle, while</p><p>• others work at their optimal speed (optimal in terms of energy consumption).</p>
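<p>Comment. A minimal Python sketch of this speed-selection rule; the cubic power model 𝐹(𝑓) and its coefficients are hypothetical, used only to make the ratio 𝐹(𝑓)/𝑓 concrete.</p><p><code>
import numpy as np

def best_speed(power_model, f_min, f_max, n=10_000):
    """Speed minimizing energy per operation, i.e., the ratio P(f)/f."""
    f = np.linspace(f_min, f_max, n)
    return f[np.argmin(power_model(f) / f)]

# hypothetical power model: static leakage plus a cubic dynamic term
power = lambda f: 5.0 + 0.8 * f**3        # watts, f in GHz
f_star = best_speed(power, f_min=0.5, f_max=4.0)
# run each active processor at f_star and vary the number of active
# processors (idling the rest) to match the required overall speed
print(f"energy-optimal speed: {f_star:.2f} GHz")
</code></p>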
<p>Quantum computing and its resource limitations. As we have mentioned several times, there is a practical need to increase computation speed beyond what we can do now. This need faces a fundamental physical limitation: according to relativity theory, the speed of all processes is limited by the speed of light. As a result:</p><p>• for a usual laptop of approximately 30 cm size, it takes 1 nanosecond for light to travel from one side of the computer to the other, and</p><p>• during this time, even the cheapest 4-GHz laptop performs 4 operations.</p><p>To drastically speed up processing, we need to drastically decrease the computer size -and this means making memory cells drastically smaller. Already, each cell consists of only thousands of molecules. With a further size decrease, we will get to the level of individual molecules -and at this level, we have to take into account the physical laws of the micro-world, i.e., the laws of quantum physics.</p><p>Computations that take quantum effects into account are known as quantum computing; see, e.g., <ref type="bibr" target="#b11">[12]</ref>. One of the main features of quantum physics -see, e.g., <ref type="bibr" target="#b12">[13,</ref><ref type="bibr" target="#b13">14]</ref> -is that, in addition to classical states 𝑠, 𝑠′, . . . -which, in quantum physics, are denoted by |𝑠⟩, |𝑠′⟩, etc. -we can also have superpositions of these states, i.e., states of the type</p><formula xml:id="formula_5">𝑐ₛ|𝑠⟩ + 𝑐ₛ′|𝑠′⟩ + … ,</formula><p>where 𝑐ₛ, 𝑐ₛ′, . . . are complex values for which</p><formula xml:id="formula_6">|𝑐ₛ|² + |𝑐ₛ′|² + … = 1.</formula><p>If we measure, in this superposition state, a classical variable 𝑣, then we get the value 𝑣(𝑠) with probability |𝑐ₛ|², the value 𝑣(𝑠′) with probability |𝑐ₛ′|², etc. The sum of all these probabilities should add up to 1 -which explains the above condition on the values 𝑐ₛ, 𝑐ₛ′, . . . In particular, a quantum analogue of the usual bit -i.e., of an element that can be in two possible states 0 and 1 -is a quantum bit (qubit, for short), i.e., a state of the type</p><formula xml:id="formula_7">𝑐₀|0⟩ + 𝑐₁|1⟩.</formula><p>Similarly, a quantum analogue of a 2-bit state is a general 2-qubit state</p><formula xml:id="formula_8">𝑐₀₀|00⟩ + 𝑐₀₁|01⟩ + 𝑐₁₀|10⟩ + 𝑐₁₁|11⟩, etc.</formula><p>There are many efficient quantum algorithms, e.g., Grover's algorithm <ref type="bibr" target="#b14">[15,</ref><ref type="bibr" target="#b15">16,</ref><ref type="bibr" target="#b11">12]</ref>.</p><p>• This algorithm makes it possible to find an element with a given property in an unsorted 𝑛-element array in time proportional to √𝑛.</p><p>• On the other hand, for non-quantum algorithms, not to miss the desired element, we need to look at all 𝑛 elements, so we need at least 𝑛 computational steps, and 𝑛 ≫ √𝑛.</p><p>Grover's algorithm requires 𝑛-qubit states. At this moment, the number of available qubits is the main limitation to quantum computing, the main limited resource. What happens if we only have 𝑠 &lt; 𝑛 qubits, i.e., if we can only implement 𝑠-qubit states? In this case, we can:</p><p>• divide the 𝑛-element array into 𝑛/𝑠 fragments of size 𝑠, and</p><p>• apply Grover's algorithm to each fragment; see, e.g., <ref type="bibr" target="#b16">[17]</ref>.</p><p>For each fragment, Grover's algorithm requires √𝑠 steps, so the overall computation time will be</p><formula xml:id="formula_9">(𝑛/𝑠) ⋅ √𝑠 = 𝑛/√𝑠.</formula><p>This amount:</p><p>• is still smaller than the time 𝑛 needed in the non-quantum case, and</p><p>• decreases as the resource bound 𝑠 increases (and reaches Grover's value 𝑡 ∼ √𝑛 when we reach the case 𝑠 = 𝑛).</p></div>
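<div xmlns="http://www.tei-c.org/ns/1.0"><p>Comment. The following small Python sketch illustrates this query-count trade-off; the array size and the qubit budgets are hypothetical, and the count ignores constant factors.</p><p><code>
import math

def grover_queries(n, s):
    """Query count with only s of the needed n qubits: split the
    n-element array into n/s fragments and run Grover on each,
    for about (n/s) * sqrt(s) = n / sqrt(s) queries overall."""
    return math.ceil(n / s) * math.sqrt(s)

n = 1_000_000
for s in (1, 100, 10_000, n):
    print(f"s = {s:9d}: about {grover_queries(n, s):12,.0f} queries")
# s = 1 recovers the classical cost n; s = n recovers Grover's sqrt(n)
</code></p></div>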
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Black-Box Computing: Related Resource Limitations</head><p>General case. For black-box computing, when computations include calls to a "black-box" (proprietary or classified) program, a natural idea is to minimize the number of such calls:</p><p>• For proprietary programs, when we often need to pay for each call, this minimizes the overall costs.</p><p>• For classified programs, when each call is an extra vulnerability, this minimizes the risk.</p><p>• For programs that take a long time to compute, this also minimizes the overall computation time.</p><p>What can we compute in this manner? Usually, commercial software provides a turn-key solution to the corresponding problem, a solution that does not require additional programming. There exists commercial software for deep learning, for solving partial differential equations, etc. This software produces the result 𝑦 = 𝑓(𝑥₁, … , 𝑥ₙ) of processing the inputs 𝑥₁, … , 𝑥ₙ, but what it usually does not do is estimate the accuracy of this result. To be more precise, most black-box packages compute the result implicitly assuming that the inputs are exactly known. In practice, the inputs come from measurements, and measurements are never absolutely accurate: the result x̃ᵢ of measuring the 𝑖-th input is, in general, different from the actual (unknown) value 𝑥ᵢ of this input.</p><p>Thus, when we apply the black-box algorithm 𝑓 to the measurement results x̃ᵢ, the result ỹ of this computation will be, in general, different from the desired value 𝑦 = 𝑓(𝑥₁, … , 𝑥ₙ):</p><formula xml:id="formula_10">ỹ = 𝑓(x̃₁, … , x̃ₙ) ≠ 𝑦 = 𝑓(𝑥₁, … , 𝑥ₙ).</formula><p>An important question is: how accurate is the computation result? In other words, how big can the difference Δ𝑦 def= ỹ − 𝑦 be? This question is of great practical importance. For example, if we are prospecting for oil and we learned that the given area contains ỹ = 200 million tons, then:</p><p>• if this amount is 200 ± 20, this is good news and we should start drilling, but</p><p>• if it is 200 ± 300, then maybe there is no oil at all, and it is better to perform additional measurements before we spend a large amount of money on possibly useless drilling.</p><p>Let us formulate this problem in precise terms. Usually, the measurement errors Δ𝑥ᵢ def= x̃ᵢ − 𝑥ᵢ are relatively small. So, taking into account that 𝑥ᵢ = x̃ᵢ − Δ𝑥ᵢ, we can expand the expression Δ𝑦 = 𝑓(x̃₁, … , x̃ₙ) − 𝑓(x̃₁ − Δ𝑥₁, … , x̃ₙ − Δ𝑥ₙ) in Taylor series and keep only the linear terms in this expansion. As a result, we get the expression</p><formula>Δ𝑦 = 𝑐₁ ⋅ Δ𝑥₁ + … + 𝑐ₙ ⋅ Δ𝑥ₙ, where 𝑐ᵢ def= 𝜕𝑓/𝜕𝑥ᵢ.</formula></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>What do we know about the measurement errors?</head><p>• In some cases, we know the probability distribution of each measurement error Δ𝑥ᵢ, and we know that these measurement errors are independent. In many cases, this distribution is normal (Gaussian), with mean 0 and known standard deviation 𝜎ᵢ.</p><p>• In other cases, we do not have any information about the probabilities; we only know the upper bound Δᵢ on the absolute value of the corresponding measurement error: |Δ𝑥ᵢ| ≤ Δᵢ; see, e.g., <ref type="bibr" target="#b17">[18]</ref>.</p><p>How can we use this information to find bounds on Δ𝑦?</p><p>Seemingly natural idea. If we know the values 𝑐ᵢ, then, e.g., for the normal distribution, we can conclude that the linear combination Δ𝑦 = 𝑐₁ ⋅ Δ𝑥₁ + … + 𝑐ₙ ⋅ Δ𝑥ₙ is also normally distributed, with mean 0 and standard deviation</p><formula>𝜎 = √(𝑐₁² ⋅ 𝜎₁² + … + 𝑐ₙ² ⋅ 𝜎ₙ²).</formula><p>In the case of upper bounds, when each measurement error Δ𝑥ᵢ can take any value from the interval [−Δᵢ, Δᵢ], the largest possible value of the sum Δ𝑦 = 𝑐₁ ⋅ Δ𝑥₁ + … + 𝑐ₙ ⋅ Δ𝑥ₙ is attained when each term 𝑐ᵢ ⋅ Δ𝑥ᵢ in this sum is the largest possible:</p><p>• For 𝑐ᵢ ≥ 0, the term 𝑐ᵢ ⋅ Δ𝑥ᵢ is increasing in Δ𝑥ᵢ, so its largest value is attained when Δ𝑥ᵢ attains its largest value Δᵢ. The corresponding largest value of the 𝑖-th term is 𝑐ᵢ ⋅ Δᵢ.</p><p>• For 𝑐ᵢ ≤ 0, the term 𝑐ᵢ ⋅ Δ𝑥ᵢ is decreasing in Δ𝑥ᵢ, so its largest value is attained when Δ𝑥ᵢ attains its smallest value −Δᵢ. The corresponding largest value of the 𝑖-th term is −𝑐ᵢ ⋅ Δᵢ.</p><p>In both cases, the largest value of the 𝑖-th term is |𝑐ᵢ| ⋅ Δᵢ, so the largest value Δ of the sum Δ𝑦 is equal to</p><formula xml:id="formula_11">Δ = |𝑐₁| ⋅ Δ₁ + … + |𝑐ₙ| ⋅ Δₙ.</formula><p>Straightforward approach and its limitations. If we knew the values 𝑐ᵢ, then we could use the above formulas and find the desired characteristic 𝜎 or Δ of the approximation error Δ𝑦. However, the algorithm 𝑓(𝑥₁, … , 𝑥ₙ) is given as a black box: we do not know the corresponding expression, and thus, we cannot simply differentiate it. What we can do is use numerical differentiation, according to which, for sufficiently small values ℎᵢ, we have</p><formula xml:id="formula_12">𝑐ᵢ = 𝜕𝑓/𝜕𝑥ᵢ ≈ (𝑓(x̃₁, … , x̃ᵢ + ℎᵢ, x̃ᵢ₊₁, … , x̃ₙ) − ỹ)/ℎᵢ.</formula><p>This way, we get the desired expressions for 𝜎 and Δ -but we need to make 𝑛 + 1 calls to 𝑓: the first call to compute ỹ, and then 𝑛 more calls to compute the 𝑛 partial derivatives 𝑐ᵢ. In many data processing algorithms, we use thousands of inputs, and each call to 𝑓 may require minutes or even hours, so this is not a very feasible idea. What can we do?</p><p>Case of probability distributions: Monte-Carlo simulations. In situations when we know the probability distributions, the answer is well known: use Monte-Carlo simulations. Namely, for some integer 𝑁, we:</p><p>• 𝑁 times simulate the variables 𝛿𝑥ᵢ⁽ᵏ⁾, 𝑘 = 1, … , 𝑁, whose distribution is the same as the distribution of the corresponding measurement errors Δ𝑥ᵢ, and then</p><p>• compute the values</p><formula xml:id="formula_13">𝛿⁽ᵏ⁾ = 𝑓(x̃₁, … , x̃ₙ) − 𝑓(x̃₁ − 𝛿𝑥₁⁽ᵏ⁾, … , x̃ₙ − 𝛿𝑥ₙ⁽ᵏ⁾).</formula><p>By using the same Taylor expansion as before, we conclude that</p><formula xml:id="formula_14">𝛿⁽ᵏ⁾ = 𝑐₁ ⋅ 𝛿𝑥₁⁽ᵏ⁾ + … + 𝑐ₙ ⋅ 𝛿𝑥ₙ⁽ᵏ⁾.</formula><p>Since the values 𝛿𝑥ᵢ⁽ᵏ⁾ have the same distribution as the measurement errors Δ𝑥ᵢ, we can conclude that the distribution of the values 𝛿⁽ᵏ⁾ is exactly the same as the desired distribution of the approximation errors Δ𝑦.</p><p>In particular, for the normal distribution, we can estimate 𝜎 as</p><formula xml:id="formula_15">𝜎 ≈ √((1/𝑁) ⋅ ((𝛿⁽¹⁾)² + … + (𝛿⁽ᴺ⁾)²)).</formula><p>The good news is that the accuracy of this estimation is proportional to 1/√𝑁 (see, e.g., <ref type="bibr" target="#b18">[19]</ref>) and does not depend on the number of inputs 𝑛.</p><p>For example, if we want to find 𝜎 with accuracy 1/√𝑁 = 20%, we need to take 𝑁 = 25 iterations and thus, 25 calls to the program 𝑓 -which is much smaller than the thousands of calls needed for the straightforward estimation.</p></div>
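<div xmlns="http://www.tei-c.org/ns/1.0"><p>Comment. A minimal Python sketch of this Monte-Carlo estimate; the black-box function 𝑓, the measured values, and the standard deviations are hypothetical stand-ins for a real proprietary package.</p><p><code>
import numpy as np

rng = np.random.default_rng(0)

def monte_carlo_sigma(f, x_meas, sigmas, n_runs=25):
    """Estimate the st. dev. of Delta y by simulating normally
    distributed measurement errors; the accuracy is about
    1/sqrt(n_runs), independent of the number of inputs."""
    x_meas = np.asarray(x_meas, dtype=float)
    y_tilde = f(x_meas)
    deltas = [y_tilde - f(x_meas - rng.normal(0.0, sigmas))
              for _ in range(n_runs)]
    return float(np.sqrt(np.mean(np.square(deltas))))

# hypothetical black box: we may only call it, not inspect it
f = lambda x: np.sqrt(np.sum(x ** 2))
print(monte_carlo_sigma(f, x_meas=[3.0, 4.0], sigmas=[0.1, 0.2]))
</code></p></div>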
<div xmlns="http://www.tei-c.org/ns/1.0"><head>What if we only know the upper bounds?</head><p>At first glance, in this case the situation is hopeless: we do not know what distribution to use. However, interestingly, a Monte-Carlo approach is possible in this case as well; see, e.g., <ref type="bibr" target="#b19">[20,</ref><ref type="bibr" target="#b20">21]</ref>. The idea is to use the Cauchy distribution, i.e., the distribution with probability density function</p><formula xml:id="formula_16">𝜌(𝑥) = (1/(𝜋 ⋅ Δ)) ⋅ 1/(1 + (𝑥/Δ)²).</formula><p>The use of the Cauchy distribution is based on the following property of this distribution:</p><p>• if independent random variables Δ𝑥₁, . . . , Δ𝑥ₙ are Cauchy-distributed with parameters Δᵢ,</p><p>• then the linear combination 𝑐₁ ⋅ Δ𝑥₁ + … + 𝑐ₙ ⋅ Δ𝑥ₙ of these random variables is also Cauchy-distributed, with parameter Δ = |𝑐₁| ⋅ Δ₁ + … + |𝑐ₙ| ⋅ Δₙ. This expression is exactly what we want.</p><p>Thus, to find Δ:</p><p>• we simulate 𝛿𝑥ᵢ⁽ᵏ⁾ to be Cauchy-distributed with parameter Δᵢ;</p><p>• then the values</p><formula xml:id="formula_17">𝛿⁽ᵏ⁾ = 𝑓(x̃₁, … , x̃ₙ) − 𝑓(x̃₁ − 𝛿𝑥₁⁽ᵏ⁾, … , x̃ₙ − 𝛿𝑥ₙ⁽ᵏ⁾)</formula><p>are Cauchy-distributed with the desired value of the parameter Δ. We can then find this value Δ by applying, e.g., the Maximum Likelihood method <ref type="bibr" target="#b18">[19]</ref>.</p></div>
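<div xmlns="http://www.tei-c.org/ns/1.0"><p>Comment. A Python sketch of this Cauchy-deviate technique, for a hypothetical black box 𝑓; the Maximum Likelihood step solves the standard equation Σₖ 1/(1 + (𝛿⁽ᵏ⁾/Δ)²) = 𝑁/2 by bisection.</p><p><code>
import numpy as np

rng = np.random.default_rng(0)

def cauchy_delta(f, x_meas, Deltas, n_runs=200):
    """Estimate Delta = |c_1| Delta_1 + ... + |c_n| Delta_n for a
    black-box f: Cauchy noise with parameters Delta_i produces
    deltas that are Cauchy-distributed with parameter Delta."""
    x_meas = np.asarray(x_meas, dtype=float)
    Deltas = np.asarray(Deltas, dtype=float)
    y_tilde = f(x_meas)
    deltas = np.array([
        y_tilde - f(x_meas - Deltas * rng.standard_cauchy(x_meas.size))
        for _ in range(n_runs)
    ])
    # maximum-likelihood estimate of the Cauchy parameter, by bisection
    lo, hi = 0.0, float(np.max(np.abs(deltas))) + 1e-12
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if np.sum(1.0 / (1.0 + (deltas / mid) ** 2)) &gt; n_runs / 2:
            hi = mid          # mid overshoots the ML solution
        else:
            lo = mid
    return 0.5 * (lo + hi)

f = lambda x: x[0] * x[1]     # hypothetical black box
print(cauchy_delta(f, x_meas=[2.0, 3.0], Deltas=[0.1, 0.1]))
</code></p></div>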
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Case study: measurements under limited resources -a robotic boat.</head><p>In many practical situations -e.g., in environmental studies -we need to map the depth and other characteristics of a small river along its flow. To solve this problem, researchers are working on designing robotic boats; see, e.g., <ref type="bibr" target="#b25">[26]</ref> and references therein. The idea is to place this boat at the origin of the small river and let it flow along the river and measure the corresponding parameters. The problem is that measurements and transmission of the resulting data use some energy, and since the boat is autonomous, it can only carry a limited amount of energy. So, the question is: how can we get the desired map of the river by using the smallest possible number of measurements?</p><p>The main idea behind this minimization is straightforward:</p><p>• when we are passing through a part of the trajectory where the depth (or any other quantity of interest) is approximately constant, there is no sense in measuring this quantity too frequently; but</p><p>• when we come to a part of the trajectory where the quantity starts changing, we must measure it with the highest possible frequency.</p><p>Let us describe the corresponding problem in precise terms. We know reasonably well the trajectory followed by a spaceship, by a floating robotic boat, or by another similar device. Let 𝑥 be the parameter describing the device's location on this trajectory -e.g., the length of the path that it has already covered along this trajectory. At some values 𝑥, we perform measurements. We may want to measure some quantity at the device's location -e.g., the depth at this location. We may also want to measure some quantity that depends not only on 𝑥, but also on some other parameters 𝑦₁, …. For example, for the boat, we may want, by sending a signal in several directions, to measure the depths not only right beneath the boat, but also at other locations (𝑥, 𝑦₁) on the line orthogonal to the trajectory. In general, based on the measurements, we want to find the whole dependence 𝑞(𝑥, 𝑦₁, …) of the measured quantity on all these parameters, and we want to know all these values with accuracy 𝜀. How can we reach this goal by using the smallest possible number of measurements?</p><p>A natural idea is to use a simple approximation</p><formula xml:id="formula_18">𝑞(𝑥, 𝑦₁, …) = 𝐶₁ ⋅ 𝑞₁(𝑥, 𝑦₁, …) + … + 𝐶ₖ ⋅ 𝑞ₖ(𝑥, 𝑦₁, …)</formula><p>to describe the local behavior of the corresponding field. In this approximation:</p><p>• the functions 𝑞₁(𝑥, 𝑦₁, …), . . . , 𝑞ₖ(𝑥, 𝑦₁, …) are fixed, and</p><p>• the parameters 𝐶₁, … , 𝐶ₖ change from location to location.</p><p>In many cases, the field is smooth, so it can be locally approximated by linear or quadratic functions. For example, if we are interested in the depth 𝑑(𝑥) at different points along the trajectory, then we can use the approximation 𝑑(𝑥) ≈ 𝐶₁ + 𝐶₂ ⋅ 𝑥 or the approximation 𝑑(𝑥) ≈ 𝐶₁ + 𝐶₂ ⋅ 𝑥 + 𝐶₃ ⋅ 𝑥².</p><p>• In the first case, 𝑘 = 2, 𝑞₁(𝑥) = 1, and 𝑞₂(𝑥) = 𝑥.</p><p>• In the second case, 𝑘 = 3, 𝑞₁(𝑥) = 1, 𝑞₂(𝑥) = 𝑥, and 𝑞₃(𝑥) = 𝑥².</p><p>How we proceed depends on:</p><p>• whether we know the probability distribution of the measurement error -e.g., Gaussian with standard deviation 𝜎 -or</p><p>• whether we only know the bound Δ on the measurement error.</p><p>In both cases, we first perform 𝑘 + 1 (or more) measurements corresponding to the values 𝑥⁽ⁱ⁾, 𝑦₁⁽ⁱ⁾, . . . , where 𝑖 = 1, 2, … , 𝑘 + 1. Let 𝑞⁽ⁱ⁾ be the corresponding measurement results.</p><p>In the probabilistic case, we form the resulting system of approximate equations:</p><formula xml:id="formula_19">𝑞⁽¹⁾ ≈ 𝐶₁ ⋅ 𝑞₁(𝑥⁽¹⁾, 𝑦₁⁽¹⁾, …) + … + 𝐶ₖ ⋅ 𝑞ₖ(𝑥⁽¹⁾, 𝑦₁⁽¹⁾, …); … 𝑞⁽ᵏ⁺¹⁾ ≈ 𝐶₁ ⋅ 𝑞₁(𝑥⁽ᵏ⁺¹⁾, 𝑦₁⁽ᵏ⁺¹⁾, …) + … + 𝐶ₖ ⋅ 𝑞ₖ(𝑥⁽ᵏ⁺¹⁾, 𝑦₁⁽ᵏ⁺¹⁾, …).</formula><p>We can use the usual Least Squares method (see, e.g., <ref type="bibr" target="#b18">[19]</ref>) to find the estimates Ĉ₁, … , Ĉₖ of the corresponding parameters 𝐶₁, … , 𝐶ₖ. This method also provides us with the variances of these estimates, and with the covariances describing the correlation between these estimates.</p><p>Based on these variances and covariances, for all future values 𝑥 and for all corresponding values 𝑦₁, …, we can compute the standard deviation of the predicted value</p><formula xml:id="formula_20">𝑞(𝑥, 𝑦₁, …) = Ĉ₁ ⋅ 𝑞₁(𝑥, 𝑦₁, …) + … + Ĉₖ ⋅ 𝑞ₖ(𝑥, 𝑦₁, …).</formula><p>As long as this value is smaller than or equal to the desired accuracy 𝜀, we do not perform any additional measurements. The next group of measurements is performed at the first future value 𝑥 at which this standard deviation becomes equal to 𝜀.</p><p>In the case of upper bounds, for each future value 𝑥 and for all possible values of the variables 𝑦₁, …, we find the range [𝑞̲, 𝑞̄] of possible values of 𝑞(𝑥, 𝑦₁, …) by solving the following two optimization problems. To find the upper bound 𝑞̄, we maximize the expression 𝐶₁ ⋅ 𝑞₁(𝑥, 𝑦₁, …) + … + 𝐶ₖ ⋅ 𝑞ₖ(𝑥, 𝑦₁, …) under the constraints</p><formula>𝑞⁽¹⁾ − Δ ≤ 𝐶₁ ⋅ 𝑞₁(𝑥⁽¹⁾, 𝑦₁⁽¹⁾, …) + … + 𝐶ₖ ⋅ 𝑞ₖ(𝑥⁽¹⁾, 𝑦₁⁽¹⁾, …) ≤ 𝑞⁽¹⁾ + Δ; … 𝑞⁽ᵏ⁺¹⁾ − Δ ≤ 𝐶₁ ⋅ 𝑞₁(𝑥⁽ᵏ⁺¹⁾, 𝑦₁⁽ᵏ⁺¹⁾, …) + … + 𝐶ₖ ⋅ 𝑞ₖ(𝑥⁽ᵏ⁺¹⁾, 𝑦₁⁽ᵏ⁺¹⁾, …) ≤ 𝑞⁽ᵏ⁺¹⁾ + Δ.</formula><p>To find the lower bound 𝑞̲, we minimize the same expression under the same constraints. In both optimization problems, we optimize a function which is linear in terms of the unknowns 𝐶₁, … , 𝐶ₖ under constraints which are also linear in terms of these unknowns. Such optimization problems are known as linear programming problems. There exist efficient algorithms for solving such problems; see, e.g., <ref type="bibr" target="#b26">[27]</ref>.</p>
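<p>Comment. A Python sketch of these two linear programs, using scipy.optimize.linprog; the basis functions and the numerical values are hypothetical, corresponding to the linear depth model 𝑑(𝑥) ≈ 𝐶₁ + 𝐶₂ ⋅ 𝑥.</p><p><code>
import numpy as np
from scipy.optimize import linprog

def prediction_range(basis, xs_meas, q_meas, Delta, x_new):
    """Range of C_1 q_1(x_new) + ... + C_k q_k(x_new) over all C
    consistent with the measurements to within Delta (two LPs)."""
    A = np.array([[q_j(x) for q_j in basis] for x in xs_meas])
    q = np.asarray(q_meas, dtype=float)
    A_ub = np.vstack([A, -A])        # two-sided constraints, one-sided form
    b_ub = np.concatenate([q + Delta, Delta - q])
    c = np.array([q_j(x_new) for q_j in basis])
    bnd = [(None, None)] * len(basis)   # the C_j are unrestricted in sign
    q_lo = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bnd).fun
    q_hi = -linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=bnd).fun
    return q_lo, q_hi

basis = [lambda x: 1.0, lambda x: x]    # hypothetical linear model
q_lo, q_hi = prediction_range(basis, [0.0, 1.0, 2.0], [5.0, 5.5, 6.1], 0.2, 3.0)
print(q_lo, q_hi)   # measure again once the width q_hi - q_lo reaches 2 eps
</code></p>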
<p>The next measurement is performed at the location for which the prediction accuracy reaches 𝜀, i.e., for which the interval [𝑞̲, 𝑞̄] has the form [q̂ − 𝜀, q̂ + 𝜀] for some q̂ -i.e., equivalently, for which 𝑞̄ − 𝑞̲ = 2𝜀.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Measuring combinations of quantities: general description and the case study of Covid-19 testing.</head><p>When we measure, we are interested in some property. In some situations, only a few objects have the property of interest. In such situations, the possibility to measure combinations of quantities corresponding to individual objects can save on the number of measurements.</p><p>For example, if we are prospecting for oil, we save a lot of effort if, instead of drilling at each possible location, we perform some measurements of the area as a whole and only drill in those areas which, according to these combined measurements, have oil.</p><p>Another example is Covid-19 testing. In the ideal world, we should test everybody, but the number of available test kits is limited. If, instead of testing all 𝑁 people from a given area, we apply the test to a mixture of samples from a group of several (𝑠₁) people, we can decrease the required number of test kits, since:</p><p>• we can dismiss all the groups where no one was Covid-positive, and</p><p>• we need to continue testing only folks from the remaining groups; see, e.g., <ref type="bibr" target="#b27">[28,</ref><ref type="bibr" target="#b28">29,</ref><ref type="bibr" target="#b29">30]</ref>.</p><p>In <ref type="bibr" target="#b29">[30]</ref>, we considered the arrangement in which, after the group testing, we individually test everyone from the still-suspicious groups. However, if there are many such folks, a reasonable idea is to combine these remaining folks into new groups of size 𝑠₂ &lt; 𝑠₁ and test each new group, etc., until, after 𝑛 such stages, on the following stage 𝑛 + 1, we test all remaining possibly-positive folks individually. Let us analyze the optimal approach to such multi-stage group testing.</p><p>Let 𝑝 denote the proportion of people in a given area that test Covid-positive. This number can be (and is) estimated based on preliminary random testing. Thus, overall, we have 𝑁 ⋅ 𝑝 Covid-positive folks.</p><p>The value 𝑝 is small, so, for sufficiently small group sizes 𝑠ₖ, the probability that some group contains two Covid-positive folks is small. Since most of the 𝑁 ⋅ 𝑝 Covid-positive folks belong to different groups, after 𝑘 stages, we will have 𝑁 ⋅ 𝑝 different still-suspicious groups of size 𝑠ₖ -and thus, 𝑁 ⋅ 𝑝 ⋅ 𝑠ₖ possibly-positive folks.</p><p>On the first stage, we divide all 𝑁 people into groups of size 𝑠₁ and thus perform 𝑁/𝑠₁ tests. After 𝑘 stages, we have 𝑁 ⋅ 𝑝 ⋅ 𝑠ₖ possibly-positive folks. We divide them into groups of size 𝑠ₖ₊₁; thus, on the (𝑘 + 1)-st stage, we get 𝑁 ⋅ 𝑝 ⋅ 𝑠ₖ/𝑠ₖ₊₁ testing groups and hence need to perform 𝑁 ⋅ 𝑝 ⋅ 𝑠ₖ/𝑠ₖ₊₁ tests. The overall number of tests is therefore equal to</p><formula xml:id="formula_22">𝑁/𝑠₁ + 𝑁 ⋅ 𝑝 ⋅ 𝑠₁/𝑠₂ + … + 𝑁 ⋅ 𝑝 ⋅ 𝑠ₖ₋₁/𝑠ₖ + 𝑁 ⋅ 𝑝 ⋅ 𝑠ₖ/𝑠ₖ₊₁ + …</formula><p>Towards finding the optimal testing strategy. We want to select the values 𝑠ₖ that minimize this overall number of tests. Let us first select an integer 𝑘 &gt; 1 and minimize the overall number of tests with respect to 𝑠ₖ. Differentiating the above expression with respect to 𝑠ₖ and equating the derivative to 0, we conclude that</p><formula>−𝑁 ⋅ 𝑝 ⋅ 𝑠ₖ₋₁/𝑠ₖ² + 𝑁 ⋅ 𝑝/𝑠ₖ₊₁ = 0.</formula><p>Dividing both sides by 𝑁 ⋅ 𝑝, multiplying both sides by 𝑠ₖ, and moving the negative term to the other side, we conclude that 𝑠ₖ/𝑠ₖ₊₁ = 𝑠ₖ₋₁/𝑠ₖ. This is true for all 𝑘, so the value 𝑠ₖ/𝑠ₖ₊₁ is the same for all 𝑘. Let us denote this ratio by 𝑞. Thus, the values 𝑠ₖ form a geometric progression with common ratio 𝑞.</p><p>At the end, we check individually, so 𝑠ₙ₊₁ = 1. Thus, we get 𝑠ₙ = 𝑞, 𝑠ₙ₋₁ = 𝑞², . . . , until we get 𝑠₁ = 𝑞ⁿ. Thus, 𝑛 = ln(𝑠₁)/ln(𝑞). For these values 𝑠ₖ, each term 𝑁 ⋅ 𝑝 ⋅ 𝑠ₖ/𝑠ₖ₊₁ in the expression for the overall number of tests is equal to 𝑁 ⋅ 𝑝 ⋅ 𝑞, and there are 𝑛 such terms -as many as there are stages in the testing procedure. Thus, the overall number of tests is equal to</p><formula xml:id="formula_23">𝑁/𝑠₁ + 𝑛 ⋅ 𝑁 ⋅ 𝑝 ⋅ 𝑞 = 𝑁/𝑠₁ + (ln(𝑠₁)/ln(𝑞)) ⋅ 𝑁 ⋅ 𝑝 ⋅ 𝑞.</formula><p>Minimizing this expression with respect to 𝑞 means minimizing the ratio 𝑞/ln(𝑞).</p>
<p>Differentiating this ratio with respect to 𝑞 and equating the derivative to 0, we conclude that ln(𝑞) − 𝑞 ⋅ (1/𝑞) = 0, i.e., that ln(𝑞) = 1 and thus, 𝑞 = 𝑒. For 𝑞 = 𝑒, the above expression for the overall number of tests takes the form 𝑁/𝑠₁ + 𝑁 ⋅ 𝑝 ⋅ 𝑒 ⋅ ln(𝑠₁). There is only one parameter left undetermined: the value 𝑠₁. Differentiating this expression with respect to 𝑠₁ and equating the derivative to 0, we get −𝑁/𝑠₁² + 𝑁 ⋅ 𝑝 ⋅ 𝑒/𝑠₁ = 0. Multiplying both sides by 𝑠₁², dividing both sides by 𝑁 ⋅ 𝑝 ⋅ 𝑒, and moving the negative term to the other side, we conclude that 𝑠₁ = 1/(𝑝 ⋅ 𝑒). Thus, we arrive at the following recommendation.</p><p>Optimal testing schedule. We first combine folks into groups of size 𝑠₁ = 1/(𝑝 ⋅ 𝑒) and test the combined samples from each group. All the people from groups whose combined sample tested negative are Covid-negative and thus do not need any additional testing.</p><p>The remaining folks are combined into groups of size 𝑠₂ = 𝑠₁/𝑒. We test the combined samples from each group. All the people from groups whose combined sample tested negative are Covid-negative and thus do not need any additional testing.</p><p>The remaining folks are combined into groups of size 𝑠₃ = 𝑠₁/𝑒², etc. At the end, we reach the stage at which 𝑠ₙ₊₁ ≈ 1, i.e., where the remaining folks need to be individually tested.</p><p>How many tests do we need for the optimal testing schedule? Substituting the expression 𝑠₁ = 1/(𝑝 ⋅ 𝑒) into the formula for the overall number of tests, and taking into account that ln(𝑠₁) = |ln(𝑝)| − 1, we conclude that we need 𝑁/𝑠₁ + 𝑁 ⋅ 𝑝 ⋅ 𝑒 ⋅ ln(𝑠₁) = 𝑁 ⋅ 𝑝 ⋅ 𝑒 + 𝑁 ⋅ 𝑝 ⋅ 𝑒 ⋅ (|ln(𝑝)| − 1) = 𝑁 ⋅ 𝑝 ⋅ 𝑒 ⋅ |ln(𝑝)| tests.</p><p>Is this method optimal? This is indeed the best we can do -at least asymptotically. Indeed, we need to find 𝑁 ⋅ 𝑝 elements out of 𝑁. The number of such possible arrangements is</p><formula>𝑁!/((𝑝 ⋅ 𝑁)! ⋅ ((1 − 𝑝) ⋅ 𝑁)!).</formula><p>Each test has 2 possible results, so after 𝑡 tests, we have 2ᵗ possible combinations of results. To be able to distinguish all possible answers, 2ᵗ must be at least this number of arrangements, i.e., we need 𝑡 ≈ log₂(𝑁!/((𝑝 ⋅ 𝑁)! ⋅ ((1 − 𝑝) ⋅ 𝑁)!)). Asymptotically, for each integer 𝑞, we have 𝑞! ≈ (𝑞/𝑒)^𝑞 (Stirling's formula), so log₂(𝑞!) ≈ 𝑞 ⋅ (log₂(𝑞) − log₂(𝑒)) = 𝑞 ⋅ log₂(𝑞) − 𝑞 ⋅ log₂(𝑒). Thus,</p><formula>𝑡 ≈ 𝑁 ⋅ log₂(𝑁) − 𝑁 ⋅ log₂(𝑒) − 𝑝 ⋅ 𝑁 ⋅ log₂(𝑝 ⋅ 𝑁) + 𝑝 ⋅ 𝑁 ⋅ log₂(𝑒) − (1 − 𝑝) ⋅ 𝑁 ⋅ log₂((1 − 𝑝) ⋅ 𝑁) + (1 − 𝑝) ⋅ 𝑁 ⋅ log₂(𝑒).</formula><p>Terms proportional to log₂(𝑒) cancel each other. So, taking into account that the logarithm of a product is equal to the sum of the logarithms, we get</p><formula>𝑡 ≈ 𝑁 ⋅ log₂(𝑁) − 𝑝 ⋅ 𝑁 ⋅ log₂(𝑁) − 𝑝 ⋅ 𝑁 ⋅ log₂(𝑝) − (1 − 𝑝) ⋅ 𝑁 ⋅ log₂(𝑁) − (1 − 𝑝) ⋅ 𝑁 ⋅ log₂(1 − 𝑝).</formula><p>Terms proportional to 𝑁 ⋅ log₂(𝑁) cancel each other, so 𝑡 ≈ −𝑁 ⋅ (𝑝 ⋅ log₂(𝑝) + (1 − 𝑝) ⋅ log₂(1 − 𝑝)). For small 𝑝, we have |(1 − 𝑝) ⋅ log₂(1 − 𝑝)| ∼ 𝑝 ≪ 𝑝 ⋅ |log₂(𝑝)|, so the smallest possible number of tests is proportional to 𝑁 ⋅ 𝑝 ⋅ |log₂(𝑝)| = 𝑁 ⋅ 𝑝 ⋅ |ln(𝑝)|/ln(2). This expression differs from our number of tests by a factor of 𝑒 ⋅ ln(2) ≈ 1.88; so, modulo a small multiplicative constant, our method is indeed asymptotically optimal.</p></div>
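<div xmlns="http://www.tei-c.org/ns/1.0"><p>Comment. A small Python sketch of the resulting schedule; the infection rate 𝑝 and the population size 𝑁 are hypothetical, and group sizes are rounded to the nearest integer.</p><p><code>
import math

def testing_schedule(p):
    """Group sizes s1 = 1/(p e), s2 = s1/e, ..., down to individual
    tests, following the geometric progression with ratio e."""
    sizes, s = [], 1.0 / (p * math.e)
    while s &gt; 1.5:                 # stop once the group size reaches ~1
        sizes.append(round(s))
        s /= math.e
    return sizes

p, N = 0.001, 1_000_000            # hypothetical rate and population
print("group sizes per stage:", testing_schedule(p))
print(f"about {N * p * math.e * abs(math.log(p)):,.0f} tests"
      f" instead of {N:,} individual tests")
</code></p></div>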
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Acknowledgments</head><p>This work was supported in part by the National Science Foundation grants 1623190 (A Model of Change for Preparing a New Generation for Professional Practice in Computer Science), and HRD-1834620 and HRD-2034030 (CAHSI Includes).</p><p>The authors are greatly thankful to the conference organizers for their encouragement and support, and to Laura Alvarez for helpful discussions.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">How to gauge disruptions caused by garbage collection: towards an efficient algorithm</title>
		<author>
			<persName><forename type="first">G</forename><surname>Arellano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Hudgins</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Pruitt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Veliz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Freudenthal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kreinovich</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Uncertain Systems</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="4" to="9" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">Accurate CPU power modeling for multicore smartphones</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zhuang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Li</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<author>
			<persName><forename type="first">B</forename><surname>Furht</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Escalante</surname></persName>
		</author>
		<title level="m">Handbook of cloud computing</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="volume">3</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">C</forename><surname>Marinescu</surname></persName>
		</author>
		<title level="m">Cloud computing: theory and practice</title>
				<imprint>
			<publisher>Morgan Kaufmann</publisher>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<author>
			<persName><forename type="first">P</forename><surname>Mell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Grance</surname></persName>
		</author>
		<title level="m">The NIST definition of cloud computing</title>
				<imprint>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">Cloud computing explained</title>
		<author>
			<persName><forename type="first">J</forename><surname>Rhoton</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2010">2010</date>
			<publisher>Recursive Press</publisher>
			<pubPlace>London</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<author>
			<persName><forename type="first">T</forename><surname>Velte</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Velte</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Elsenpeter</surname></persName>
		</author>
		<title level="m">Cloud computing, a practical approach</title>
				<imprint>
			<publisher>McGraw-Hill, Inc</publisher>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Towards optimizing cloud computing: an example of optimization under uncertainty</title>
		<author>
			<persName><forename type="first">V</forename><surname>Kreinovich</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Scalable Computing and Communications: Theory and Practice</title>
				<editor>
			<persName><forename type="first">S</forename><forename type="middle">U</forename><surname>Khan</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><forename type="middle">Y</forename><surname>Zomaya</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">L</forename><surname>Wang</surname></persName>
		</editor>
		<imprint>
			<publisher>John Wiley &amp; Sons and IEEE Computer Society Press</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="613" to="627" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Optimizing cloud use under interval uncertainty</title>
		<author>
			<persName><forename type="first">V</forename><surname>Kreinovich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Gallardo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Parallel Processing and Applied Mathematics</title>
				<editor>
			<persName><forename type="first">R</forename><surname>Wyrzykowski</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">E</forename><surname>Deelman</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Dongarra</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">K</forename><surname>Karczewski</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Kitowski</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">K</forename><surname>Wiatr</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="435" to="444" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Towards optimal knowledge processing: from centralization through cyberinfrastructure to cloud computing</title>
		<author>
			<persName><forename type="first">O</forename><surname>Lerma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Gutierrez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Kiekintveld</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kreinovich</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Innovative Management, Information &amp; Production (IJIMIP)</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="67" to="72" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Towards analytical techniques for optimizing knowledge acquisition, processing, propagation, and use in cyberinfrastructure and big data</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">O</forename><surname>Lerma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kreinovich</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2018">2018</date>
			<publisher>Springer Verlag</publisher>
			<biblScope unit="volume">29</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Nielsen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">L</forename><surname>Chuang</surname></persName>
		</author>
		<title level="m">Quantum Computation and Quantum Information</title>
				<imprint>
			<publisher>Cambridge University Press</publisher>
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">P</forename><surname>Feynman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">B</forename><surname>Leighton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sands</surname></persName>
		</author>
		<title level="m">The Feynman Lectures on Physics: Definitive Edition</title>
				<imprint>
			<publisher>Pearson Addison-Wesley</publisher>
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">S</forename><surname>Thorne</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">D</forename><surname>Blandford</surname></persName>
		</author>
		<title level="m">Modern Classical Physics: Optics, Fluids, Plasmas, Elasticity, Relativity, and Statistical Physics</title>
				<imprint>
			<publisher>Princeton University Press</publisher>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">A fast quantum mechanical algorithm for database search</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">K</forename><surname>Grover</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the twenty-eighth annual ACM symposium on Theory of computing</title>
				<meeting>the twenty-eighth annual ACM symposium on Theory of computing</meeting>
		<imprint>
			<date type="published" when="1996">1996</date>
			<biblScope unit="page" from="212" to="219" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Quantum mechanics helps in searching for a needle in a haystack</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">K</forename><surname>Grover</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Physical Review Letters</title>
		<imprint>
			<biblScope unit="volume">79</biblScope>
			<biblScope unit="page">325</biblScope>
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Can quantum computers be useful when there are not yet enough qubits?</title>
		<author>
			<persName><forename type="first">L</forename><surname>Longpre</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kreinovich</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Bulletin of the EATCS</title>
		<imprint>
			<biblScope unit="volume">79</biblScope>
			<biblScope unit="page" from="164" to="169" />
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">G</forename><surname>Rabinovich</surname></persName>
		</author>
		<title level="m">Measurement errors and uncertainties: theory and practice</title>
				<imprint>
			<publisher>Springer Science &amp; Business Media</publisher>
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">J</forename><surname>Sheskin</surname></persName>
		</author>
		<title level="m">Handbook of Parametric and Non-Parametric Statistical Procedures</title>
				<imprint>
			<publisher>Chapman &amp; Hall/CRC</publisher>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">A new cauchy-based black-box technique for uncertainty in risk analysis</title>
		<author>
			<persName><forename type="first">V</forename><surname>Kreinovich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Ferson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Reliability Engineering &amp; System Safety</title>
		<imprint>
			<biblScope unit="volume">85</biblScope>
			<biblScope unit="page" from="267" to="279" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<title level="m" type="main">Computing statistics under interval and fuzzy uncertainty</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">T</forename><surname>Nguyen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kreinovich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Xiang</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2012">2012</date>
			<publisher>Springer</publisher>
			<biblScope unit="volume">130</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">How to introduce technical details of quantum computing in a theory of computation class: using the basic case of the deutsch-jozsa algorithm</title>
		<author>
			<persName><forename type="first">O</forename><surname>Kosheleva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kreinovich</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Computing and Optimization</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="83" to="91" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Trade-off between sample size and accuracy: Case of dynamic measurements under interval uncertainty</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">T</forename><surname>Nguyen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Kosheleva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kreinovich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ferson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Interval/Probabilistic Uncertainty and Non-Classical Logics</title>
				<editor>
			<persName><forename type="first">V.-N</forename><surname>Huynh</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Y</forename><surname>Nakamori</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><surname>Ono</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Lawry</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">V</forename><surname>Kreinovich</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><forename type="middle">T</forename><surname>Nguyen</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="45" to="56" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Trade-off between sample size and accuracy: case of measurements under interval uncertainty</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">T</forename><surname>Nguyen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Kosheleva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kreinovich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ferson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Approximate Reasoning</title>
		<imprint>
			<biblScope unit="volume">50</biblScope>
			<biblScope unit="page" from="1164" to="1176" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Trade-off between sample size and accuracy: Case of static measurements under interval uncertainty</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">T</forename><surname>Nguyen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Kosheleva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kreinovich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ferson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Interval/Probabilistic Uncertainty and Non-Classical Logics</title>
				<editor>
			<persName><forename type="first">V.-N</forename><surname>Huynh</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Y</forename><surname>Nakamori</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><surname>Ono</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Lawry</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">V</forename><surname>Kreinovich</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><forename type="middle">T</forename><surname>Nguyen</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="32" to="44" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Merging unmanned aerial systems (uas) imagery and echo soundings with an adaptive sampling technique for bathymetric surveys</title>
		<author>
			<persName><forename type="first">L</forename><surname>Alvarez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Moreno</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Segales</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Pham</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Pillar-Little</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Chilson</surname></persName>
		</author>
		<idno type="DOI">10.3390/rs10091362</idno>
		<ptr target="http://dx.doi.org/10.3390/rs10091362.doi:10.3390/rs10091362" />
	</analytic>
	<monogr>
		<title level="j">Remote Sensing</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page">1362</biblScope>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<monogr>
		<title level="m" type="main">Introduction to algorithms</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">H</forename><surname>Cormen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">E</forename><surname>Leiserson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">L</forename><surname>Rivest</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Stein</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2009">2009</date>
			<publisher>MIT press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Researchers are using algorithms to tackle the coronavirus test shortage: The scramble to develop new test kits that deliver faster results -[spectral lines</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">S</forename><surname>Perry</surname></persName>
		</author>
		<idno type="DOI">10.1109/MSPEC.2020.9099916</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Spectrum</title>
		<imprint>
			<biblScope unit="volume">57</biblScope>
			<biblScope unit="page" from="4" to="4" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<monogr>
		<title level="m" type="main">Efficient high throughput SARS-CoV-2 testing to detect asymptomatic carriers</title>
		<author>
			<persName><forename type="first">N</forename><surname>Shental</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Levy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Skorniakov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Wuvshet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Shemer-Avni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Porgador</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Hertz</surname></persName>
		</author>
		<idno type="DOI">10.1101/2020.04.14.20064618</idno>
		<ptr target="https://doi.org/10.1101/2020.04.14.20064618" />
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">How mathematics and computing can help fight the pandemic: two pedagogical examples</title>
		<author>
			<persName><forename type="first">J</forename><surname>Urenda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Kosheleva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ceberio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kreinovich</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS&apos;2020</title>
				<meeting>the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS&apos;2020</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
