<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Aspects of the Assessment of the Quality of Loading Hybrid High-Performance Computing Cluster</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
<persName><forename type="first">Konstantin</forename><forename type="middle">I</forename><surname>Volovich</surname></persName>
							<email>kvolovich@frccsc.ru</email>
						</author>
						<author>
<persName><forename type="first">Sergey</forename><forename type="middle">A</forename><surname>Denisov</surname></persName>
							<email>sdenisov@frccsc.ru</email>
						</author>
						<author>
<persName><forename type="first">Alexander</forename><forename type="middle">P</forename><surname>Shabanov</surname></persName>
							<email>apshabanov@mail.ru</email>
						</author>
						<author>
							<persName><forename type="first">Sergey</forename><forename type="middle">I</forename><surname>Malkovsky</surname></persName>
							<email>sergey.malkovsky@gmail.com</email>
						</author>
						<author>
							<affiliation key="aff0">
<orgName type="department">Federal Research Center &apos;Computer Science and Control&apos;</orgName>
								<orgName type="institution">Russian Academy of Sciences</orgName>
								<address>
									<settlement>Moscow</settlement>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff1">
<orgName type="department">Computing Center of the Far Eastern Branch of the Russian Academy of Sciences</orgName>
								<address>
									<settlement>Khabarovsk</settlement>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Aspects of the Assessment of the Quality of Loading Hybrid High-Performance Computing Cluster</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">B1AD999AF60F033092B2566A5D80A17A</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T00:30+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>high-performance computing cluster</term>
					<term>hybrid architecture</term>
					<term>graphics accelerator</term>
					<term>performance efficiency</term>
					<term>profiling</term>
					<term>dynamic priority</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The article proposes a method for estimating workload, based on calculating the peak performance required to execute computational tasks. A system of dynamic priorities for computing tasks is also considered, based on resource-efficiency indicators of the high-performance cluster.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>The most important issue in the operation of a high-performance computing cluster is ensuring the complete utilization of its resources. This is necessary both for solving scientific problems and for ensuring a return on investment (ROI).</p><p>Two main directions can be distinguished in this problem <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>: ensuring the execution of the maximum possible number of applications over a certain period of time, and the most efficient use of cluster resources by user applications. An important operational issue is determining the degree of cluster loading, because it makes it possible to plan the provision of resources, assess the need for modernization, and determine the quality of the services provided.</p><p>As a rule, the workload is defined as the ratio of a workload metric (parameter) to the maximum possible value of this parameter. The metric is determined by measurement or calculation.</p><p>The article proposes a new method for calculating the workload using the peak performance of the cluster.</p><p>A high workload of the HPC cluster does not in itself mean efficient use of its resources. It is possible that the resources requested by an application are not actually used and stand idle. In this case, the workload factor of the cluster can be high while the quality of task execution is low.</p><p>To give an advantage to applications that use cluster resources efficiently, the article discusses a system of dynamic priorities. The system is based on determining a profiling coefficient and using it to change the priorities of applications.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Technique for Estimating the Workload of the Hybrid HPC Cluster</head><p>For traditional supercomputers, the workload parameter may be the number of core-hours provided to applications for performing calculations <ref type="bibr" target="#b2">[3]</ref>. The ratio of allocated core-hours to the maximum possible number is an indicator of the cluster workload. These parameters are calculated for a certain period of time and serve as an integral indicator of the workload over that period.</p><p>For hybrid architectures, this approach is less meaningful, since a hybrid HPC cluster contains cores of various types, and applications reserve accelerator resources not by cores but by entire graphics accelerators.</p><p>The proposed method makes it possible to take this feature into account. The workload estimate is obtained by comparing the number of floating-point operations requested by applications with the maximum possible number over a period of time.</p><p>Note that there is a difference between the theoretically possible performance of a cluster (peak performance) and practically achievable results. The latter are determined by various benchmarks and vary greatly depending on the type of tasks and the configuration of the cluster <ref type="bibr" target="#b3">[4]</ref>.</p><p>To estimate the workload of a hybrid high-performance computing cluster, we use peak performance.
It is defined as the sum of the peak performances of its components, the nodes (1).</p><formula xml:id="formula_0">P_peak = Σ_{i=1}^{K} P_host_i, (<label>1</label>)</formula><p>where P_peak is the peak performance of the computing cluster and P_host_i is the peak performance of the i-th node of the computing cluster. Note that the summation does not take into account the performance losses that occur when the nodes interact over the network connecting them (the interconnect) <ref type="bibr" target="#b3">[4]</ref>.</p><p>The peak performance of a node, P_host, is defined as the sum of the performance of the node's central processors (P_cpu) and of its graphics accelerators (P_gpu). It is assumed that they are fully loaded with floating-point operations, perform no other operations, and that there are no data-transfer losses between the central processors and the graphics accelerators (2).</p><formula xml:id="formula_2">P_host = N_cpu P_cpu + N_gpu P_gpu, (<label>2</label>)</formula><p>where N_cpu is the number of CPUs in the compute node, N_gpu is the number of graphics accelerators in the compute node, P_cpu is the peak CPU performance, and P_gpu is the peak graphics-accelerator performance.</p><p>To calculate the peak performance of the CPU (3), we assume that operations are performed by the cores in parallel, that each core can process a group of threads, and that a thread can perform several operations in parallel if several functional units are available for this.
Such a core-and-thread architecture is characteristic of modern classical processors from various manufacturers.</p><p>P_cpu = n_core n_stream n_unit F_cpu, (3) where n_core is the number of CPU cores, n_stream is the number of threads processed by each CPU core, n_unit is the number of functional units per thread (corresponding to the number of operations performed in one thread per clock cycle), and F_cpu is the CPU frequency.</p><p>To assess the performance of graphics accelerators, we use the modern accelerator architecture of the NVIDIA company and consider the Tesla Volta family of accelerators as one of the most widely used. These accelerators contain CUDA and tensor cores, which perform parallel operations on floating-point numbers and matrices. The performance of a graphics accelerator is defined as the sum of the performance of all its cores, without taking into account performance losses due to scheduling and interaction (4) <ref type="bibr" target="#b4">[5]</ref>.</p><p>P_gpu = P_cuda + P_tensor, (4) where P_cuda is the total performance of the graphics accelerator's CUDA cores and P_tensor is the total performance of its tensor cores.</p><p>Let us determine the performance of the CUDA cores of the graphics accelerator using formula <ref type="formula">(5)</ref>, assuming that a floating-point operation is performed in one clock cycle.</p><p>Tensor cores perform a multiplication of square matrices in one clock cycle. When counting the operations performed, we take into account that computing each element of the resulting matrix requires a number of multiplication operations equal to the order of the matrix, and one fewer addition operation. Thus, the total performance of the tensor cores is calculated as (6).</p><p>Note that the precision of floating-point operations may differ between core types.
So, in the NVIDIA Tesla V100 graphics accelerator, the CUDA cores work with double-precision numbers and the tensor cores with single-precision numbers. This method of performance evaluation does not take this feature into account.</p><p>The total performances of the CUDA and tensor cores are determined by formulas ( <ref type="formula">5</ref>) and (6): P_cuda = n_cuda F_gpu (5) and P_tensor = n_tensor r^2 (2r − 1) F_gpu (6), where n_cuda is the number of CUDA cores of the graphics accelerator, n_tensor is the number of its tensor cores, r is the order of the square matrix, and F_gpu is the graphics accelerator frequency. Thus, the peak performance of the graphics accelerator is calculated by formula (7): P_gpu = (n_cuda + n_tensor r^2 (2r − 1)) F_gpu (7). The peak performance of a computing node of a hybrid high-performance computing cluster is then calculated by formula (8).</p><p>P_host = N_cpu n_core n_stream n_unit F_cpu + N_gpu (n_cuda + n_tensor r^2 (2r − 1)) F_gpu (8)</p><p>The total peak performance of the hybrid high-performance computing cluster is calculated as in (1). As shown above, the performance of the HPC cluster is the sum of the performances of its components and is expressed as the number of floating-point operations performed per second.</p><p>The resource of the hybrid high-performance computing cluster over a time interval is the peak number of floating-point operations available to users during that interval.</p><p>The total number of operations of the hybrid high-performance computing cluster, Op(T), on the time interval T is defined as: Op(T) = P_peak T, (9) where T is the time interval.</p><p>The peak estimate differs from the actual one, which is determined on the basis of various tests.
However, as noted above, in this method we use the peak values.</p><p>To estimate the applications' demands on the resources of a hybrid high-performance computing cluster, we calculate the number of operations required to execute each application (10).</p><p>For each application, a number of CPU cores, a number of graphics accelerators, and a runtime are reserved. We take into account that graphics accelerators are reserved entirely, while central processors are reserved by cores. Therefore, the total number of operations of the hybrid high-performance computing cluster performed for a task over a given time t, Op_app(t), is determined by the number of CPU cores (R_core) and graphics accelerators (R_gpu) reserved by the application.</p><formula xml:id="formula_4">Op_app(t) = ((R_core P_cpu) / n + R_gpu P_gpu) t, (<label>10</label>)</formula><p>where R_core is the number of cores reserved by the application, R_gpu is the number of graphics accelerators reserved by the application, and n is the total number of cores in a CPU.</p><p>Summing over all applications i = 1…N whose execution falls within the period T, we obtain the total number of operations required to execute the applications over T (11):</p><formula xml:id="formula_6">Op_app(T) = Σ_{i=1}^{N} ((R_core_i P_cpu) / n + R_gpu_i P_gpu) t_i for t_i ∈ T<label>(11)</label></formula><p>Figure <ref type="figure" target="#fig_0">1</ref> shows a diagram of tasks executed during the period T. Note that only part of an application's execution time may fall within this period. In that case, when estimating the resources used, only the time interval t_i belonging to T is taken into account.
The Q(T) indicator can be used to set and evaluate performance indicators of high-performance hybrid computing systems, to plan the modernization of the cluster, and to draw up calculation plans for the users of the hybrid cluster.</p><p>Note that the proposed assessment of the quality of loading of processor resources, Q(T), is purely declarative: it does not take into account the degree of use of the allocated resources, the efficiency of the algorithms, or the quality of the program code.</p></div>
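Formulas (1)-(12) can be collected into a short computational sketch. The following Python code is an illustrative reconstruction, not part of the original paper; the hardware figures in the usage comment (core counts, clock rates) are assumed values chosen only for demonstration.

```python
def cpu_peak(n_core, n_stream, n_unit, f_cpu):
    """Formula (3): cores x threads per core x ops per thread per cycle x clock."""
    return n_core * n_stream * n_unit * f_cpu

def gpu_peak(n_cuda, n_tensor, r, f_gpu):
    """Formula (7): CUDA cores plus tensor cores, where one r x r matrix
    multiply per cycle costs r^2 * (2r - 1) floating-point operations."""
    return (n_cuda + n_tensor * r ** 2 * (2 * r - 1)) * f_gpu

def host_peak(n_cpu, p_cpu, n_gpu, p_gpu):
    """Formulas (2)/(8): node peak performance, interconnect losses ignored."""
    return n_cpu * p_cpu + n_gpu * p_gpu

def op_app(r_core, n, p_cpu, r_gpu, p_gpu, t):
    """Formula (10): operations accounted to one application that reserves
    r_core of the n cores of a CPU and r_gpu whole accelerators for t seconds."""
    return (r_core * p_cpu / n + r_gpu * p_gpu) * t

def workload_quality(op_app_total, p_peak, t_interval):
    """Formulas (9) and (12): Q(T) = Op_app(T) / (P_peak * T) * 100 %."""
    return op_app_total / (p_peak * t_interval) * 100.0

# Illustrative (assumed) node: 2 CPUs with 18 cores, 2 threads per core,
# 32 FLOPs per thread per cycle at 2.3 GHz; 4 GPUs with 5120 CUDA cores and
# 640 tensor cores performing 4x4 matrix multiplies at 1.53 GHz.
p_cpu = cpu_peak(18, 2, 32, 2.3e9)
p_gpu = gpu_peak(5120, 640, 4, 1.53e9)
p_host = host_peak(2, p_cpu, 4, p_gpu)
```

Summing `op_app` over all applications active in the interval T and dividing by `p_peak * T` reproduces the Q(T) indicator of formula (12).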
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">System of Dynamic Priorities</head><p>To give an advantage to applications that use cluster resources well, a system of dynamic priorities should be used. The system is based on the cluster utilization indicator.</p><p>To obtain an indicator of the use of computing resources, it is proposed to introduce a task profiling coefficient as an integral indicator of the quality of resource use (13).</p><p>Op_app-prof(t) = K_prof Op_app(t), (13) where K_prof is the profiling coefficient.</p><p>Using such an assessment avoids situations in which an application does not use, or uses irrationally, the resources it requested from the computing cluster. The profiling coefficient is obtained by running the user application under the control of a special debugging tool, a profiler, which makes it possible to determine the degree of resource utilization, the execution time of individual code sections, bottlenecks, and memory-usage problems. Development packages include profilers both for code executed on central processors and for code executed on graphics accelerators.</p><p>Information on program profiling should be available both to the developer of a scientific application and to the division operating the computing cluster. This is necessary for taking measures to improve the efficiency of the program code and of the functioning of the hybrid HPC cluster as a whole.</p><p>Obviously, applications with a high profiling coefficient improve the workload quality indicator of a high-performance cluster. Therefore, such applications should be given a competitive advantage. This encourages users to improve their calculation algorithms and to take into account the capabilities of the computing cluster.
A classic way of encouraging tasks with a high profiling coefficient is to introduce a system of dynamic priorities based on that coefficient.</p><p>The introduction of dynamic priorities makes it possible, within certain limits, to change the priority of an application depending on its quality. This service policy is especially useful under heavy workload of the computing cluster. It improves the quality of resource use, reduces the workload, and gives an advantage in execution to the applications that make the best use of the cluster's resources.</p><p>The decision to change a priority should be made by comparing the measured profiling coefficient with a recommended one determined by experts. Several threshold values of the profiling coefficient can be set, each with its own priority rule. For example, for two quality thresholds (profiling coefficients K1, K2) that divide the set of applications into three quality subsets, "low", "medium", and "high", the dynamic priority can be calculated from a piecewise-linear function <ref type="bibr">(14)</ref>.</p><formula xml:id="formula_7">Pr_dyn = Pr_base (C0 K_prof − C0 K1 + 1), for K_prof &lt; K1; Pr_dyn = Pr_base (C1 K_prof − C1 K1 + 1), for K1 ≤ K_prof &lt; K2; Pr_dyn = Pr_base (C2 K_prof − C2 K2 + C1 K2 − C1 K1 + 1), for K_prof ≥ K2 (14)</formula><p>where Pr_dyn is the dynamic application priority;</p><p>Pr_base is the basic application priority; K_prof is the coefficient derived from profiling the application's execution; K1, K2 are expert profiling thresholds; and C0, C1, C2 are expert change factors. Figure <ref type="figure" target="#fig_1">2</ref> shows an example of the dependence of the dynamic priority on the values of K and C with Pr_base = 1. Thus, for profiling-coefficient values below K1 the priority is linearly decreased relative to the base value; when K1 is exceeded, the priority is linearly increased.
If K2 is exceeded, the priority grows faster. The recommended profiling thresholds K and the coefficients C are determined by an expert method, based on the characteristics of the functioning and loading of the computing cluster. The choice of the recommended parameters K and C should be made by the owner of the HPC cluster, proceeding from the following.</p><p>In the early stages of cluster operation, when technologies and algorithms are being debugged, the priority should change only minimally, and the requirements on the grading factors should not be too high. Therefore, the values of C, which determine the slopes of the straight lines, should be chosen close to zero; this produces only a slight change in priorities.</p><p>As the workload on the computing cluster increases and more precise task management becomes necessary, the values of C can be increased. This leads to a more significant change in priority relative to the baseline when K_prof deviates significantly from K1.</p></div>
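The piecewise-linear rule (14) can be sketched in a few lines of Python. This is an illustrative reconstruction; the thresholds and slopes used as defaults are assumed values, since the paper leaves K1, K2 and C0, C1, C2 to expert choice.

```python
def dynamic_priority(k_prof, pr_base=1.0, k1=0.5, k2=0.8,
                     c0=1.0, c1=1.0, c2=2.0):
    """Formula (14): piecewise-linear dynamic priority.

    Below k1 the priority falls linearly below pr_base; between k1 and k2
    it rises linearly with slope c1; above k2 it rises with the steeper
    slope c2. Thresholds and slopes here are illustrative assumptions.
    """
    if k_prof < k1:
        return pr_base * (c0 * k_prof - c0 * k1 + 1)
    if k_prof < k2:
        return pr_base * (c1 * k_prof - c1 * k1 + 1)
    return pr_base * (c2 * k_prof - c2 * k2 + c1 * k2 - c1 * k1 + 1)
```

The function is continuous at both thresholds (the adjacent branches agree at k1 and at k2), which matches the piecewise-linear curve described for Figure 2.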
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Conclusion</head><p>The proposed methodology for estimating the workload of a hybrid HPC cluster makes it possible to determine how fully and efficiently the resources of the hybrid cluster are used. On the basis of the results obtained, it is possible to determine ROI indicators, plan the work of the cluster, and determine the need for modernization.</p><p>The system of dynamic priorities makes it possible to control the quality of resource utilization of hybrid high-performance computing clusters when they execute different types of applications from various fields of science and technology.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Execution of tasks on the time interval T. Based on the peak number of floating-point operations available on the interval T, the workload quality ratio of the hybrid high-performance computing cluster is calculated (12): Q(T) = Op_app(T) / Op(T) × 100 %. (12)</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Changing the dynamic priority of the computing task</figDesc></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgements</head><p>The research is partially supported by the Russian Foundation for Basic Research (project 18-29-03100).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Analysis of supercomputer cyber infrastructure of the leading countries of the world // Supercomputer technologies</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Abramov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Materials of the 5th All-Russian Scientific and Technical Conference</title>
				<meeting><address><addrLine>Rostov</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018. 2018</date>
			<biblScope unit="page" from="11" to="18" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">The state and prospects of development of ultra-high-performance computing systems // Information technologies and computing systems Moscow</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Abramov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">P</forename><surname>Lilitko</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="6" to="22" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Features of the use of multi-core processors in scientific computing</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">S</forename><surname>Klinov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">Yu</forename><surname>Lapshina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">N</forename><surname>Telegin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">M</forename><surname>Shabanov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Bulletin of Ufa State Aviation Technical University</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="25" to="31" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">True, distorting the truth. How to analyze Top500?</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Abramov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Bulletin of the South Ural State University. Computational Mathematics and Computer Science. Chelyabinsk</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="50" to="81" />
			<date type="published" when="2013">2013</date>
		</imprint>
		<respStmt>
			<orgName>South Ural State University</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">The comparison of large-scale graph processing algorithms implementation methods for Intel KNL and NVIDIA GPU</title>
		<author>
			<persName><forename type="first">I</forename><surname>Afanasyev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Voevodin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Communications in Computer and Information Science</title>
		<imprint>
			<biblScope unit="volume">793</biblScope>
			<biblScope unit="page" from="80" to="94" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
