<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Distributed PIV Technology: Network Storage Usage</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Rodion</forename><surname>Stepanov</surname></persName>
							<email>rodion@icmm.ru</email>
							<affiliation key="aff0">
								<orgName type="department">Institute of Continuous Media Mechanics UrB RAS</orgName>
								<address>
									<settlement>Perm</settlement>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Andrey</forename><surname>Sozykin</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">Krasovskii Institute of Mathematics and Mechanics</orgName>
								<address>
									<settlement>Yekaterinburg</settlement>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
							<affiliation key="aff2">
								<orgName type="institution">Ural Federal University</orgName>
								<address>
									<settlement>Yekaterinburg</settlement>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Distributed PIV Technology: Network Storage Usage</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">DD2DC49E4BCE81834980D3F0631800FF</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T05:39+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>particle image velocimetry</term>
					<term>supercomputer</term>
					<term>network attached storage</term>
					<term>high performance computing</term>
					<term>high-speed network</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>An approach to transferring data from a particle image velocimetry system to a supercomputer through network attached storage is suggested. The advantages of the approach are simple implementation and high communication speed. Connecting the particle image velocimetry system to the supercomputer allows us to carry out real-time controlled experiments with feedback and to apply computationally intensive processing algorithms.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Particle Image Velocimetry (PIV) is a popular method of visualizing fluid and gas flows owing to its ability to estimate the velocity field <ref type="bibr" target="#b0">[1]</ref>. The PIV method is widely used in hydrodynamics <ref type="bibr" target="#b2">[3]</ref>, aerodynamics <ref type="bibr" target="#b11">[12]</ref>, astrophysics <ref type="bibr" target="#b4">[5]</ref>, medical research <ref type="bibr" target="#b5">[6]</ref>, and other areas of science. A specific feature of PIV is the generation of a large amount of image data during the measurement process (tens to thousands of gigabytes), which is then used to compute the velocity field of the flow. Nowadays the images are processed on personal computers, which do not have enough performance to process the data in real time; hence, controlled experiments cannot be carried out. In addition, the relatively low performance of personal computers is not sufficient for advanced, computationally demanding algorithms of velocity field calculation. The widely used standard cross-correlation algorithm meets only the minimal requirements for processing quality. Connecting the PIV system to a supercomputer removes the computational resource restriction and makes it possible to use preprocessing procedures for noise filtering as well as adaptive algorithms. Effective distribution of the computation makes it possible to process images in real time and to run experiments with feedback.</p><p>The main problem in connecting a PIV system to a supercomputer is the lack of high-speed data transfer interfaces in modern supercomputers. Although supercomputers use high-speed network technologies (Gigabit and 10G Ethernet), the popular protocols used to transfer data to a supercomputer (SCP, FTP, and so on) cannot utilize the full network bandwidth. 
In addition, the encryption of transferred data, widely used for security reasons, creates significant overhead and further decreases performance. Encryption is often unnecessary when connecting an experimental facility to a supercomputer and, therefore, should not be used.</p><p>The common approach to data processing on the supercomputer is also problematic. According to the de-facto standard, experimental data are first accumulated on local storage, then all of the data are transmitted to the supercomputer, and only after that can they be processed. In this case, real-time data processing is not possible.</p><p>The speed of data transfer to a supercomputer can be increased by eliminating unnecessary intermediate elements, such as the local storage of the experimental facility and the head node of the supercomputer. This can be done by writing experimental data directly to the storage system of the supercomputer. In this paper we present and test an architecture of the supercomputer input/output system that provides such capabilities, describe the implementation of the proposed architecture in the "URAN" supercomputer, and describe the connection of the PIV system of the Institute of Continuous Media Mechanics UrB RAS (ICMM UrB RAS) to this supercomputer.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">PIV System</head><p>The PIV method is based on fast recording of the motion of a flow seeded with small particles, which can be specially added to the medium or may already be present in it. The velocity field is estimated by comparing two images taken in rapid succession. A scheme of a typical PIV system is shown in Fig. <ref type="figure" target="#fig_0">1</ref>.</p><p>The base components of a PIV system are a pulsed laser illuminating the particles, a camera capturing pairs of images at small time intervals, and a personal computer, which synchronizes the laser and the camera and computes the cross-correlation between images. Modern cameras can generate a data stream of up to 500 Mbit/s. Some PIV systems use two cameras to compute three components of velocity and produce an even larger data stream. The average volume of data generated during one experiment varies from 100 GB to 10 TB depending on the details of the experiment.</p><p>The velocity field is determined by analysing pairs of images. Several algorithms exist for this purpose; the most popular is the cross-correlation algorithm <ref type="bibr" target="#b6">[7]</ref>. One drawback of the standard cross-correlation algorithm is the requirement to use computational areas of rectangular shape with fixed boundaries, which leads to a low dynamic range of velocity field values. Another disadvantage is the sub-pixel interpolation procedure, which causes false peaks near integer-valued displacements in the probability distribution function.</p><p>As an alternative to the cross-correlation algorithm for computing the velocity field, adaptive choice of computational areas and wavelet cross-correlation algorithms <ref type="bibr" target="#b7">[8]</ref> can be used. However, the relatively low performance of a personal computer constrains the development and application of these algorithms due to their high computational requirements. 
</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Related Work</head><p>The authors of <ref type="bibr" target="#b12">[13]</ref> describe real-time controlled experiments with feedback based on PIV measurements. Mineral oil was chosen as the working fluid because it provides a relatively low flow speed of up to 25 cm/s. Such a low speed makes it possible to process the PIV images on a personal computer with one dual-core CPU. The authors note that the performance of their system is limited, which leads to occasional image loss and control command delays. To solve this problem and to control flows with higher speeds, the performance of image processing needs to be increased, for example, with the help of a supercomputer.</p><p>The first effort to connect a PIV system to a supercomputer was made as part of the "Distributed PIV" project <ref type="bibr" target="#b8">[9]</ref>. An attempt was made to transfer data from the PIV system at ICMM UrB RAS, Perm, to the supercomputer "Chebyshev" at Moscow State University through a dedicated 1 Gb/s network channel. As a result, the restrictions of the standard technologies used to transfer data to supercomputers were revealed. In particular, the maximum speed of writing data to the supercomputer using standard network protocols is only approximately 300 Mb/s for CIFS and 500 Mb/s for FTP. These rates were achieved by running several data transfer sessions simultaneously; the speed of each individual session was significantly smaller. Based on the results of these experiments, <ref type="bibr" target="#b9">[10]</ref> suggested an architecture and a special protocol for multisession data transfer from the PIV system directly to the supercomputer nodes, bypassing the head node. 
However, to use the proposed protocol, special software must be developed and installed on both the PIV system and the supercomputer.</p><p>The data processing speed of a PIV system can be increased not only by using supercomputers, but also with the help of Field Programmable Gate Arrays (FPGAs). In <ref type="bibr" target="#b13">[14]</ref>, a hardware implementation of the direct cross-correlation algorithm based on an FPGA is described. The FPGA board is installed in the case of the personal computer of the PIV system, which provides computational resources without creating an infrastructure for data transfer to a supercomputer. However, FPGA programming is much more complicated than developing software for supercomputers, which imposes constraints on wide FPGA adoption.</p></div>
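The displacement estimation at the core of the standard cross-correlation algorithm discussed above can be sketched in a few lines of NumPy. This is a minimal illustration for a single interrogation window, not the ActualFlow implementation; window extraction, peak validation, and sub-pixel interpolation are omitted:

```python
import numpy as np

def window_shift(win_a, win_b):
    """Estimate the integer-pixel displacement between two interrogation
    windows via FFT-based cross-correlation."""
    # Cross-correlation computed in the spectral domain
    corr = np.fft.ifft2(np.fft.fft2(win_a).conj() * np.fft.fft2(win_b)).real
    # The position of the correlation peak gives the displacement
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map the wrap-around peak indices to signed shifts
    n, m = corr.shape
    if dy > n // 2:
        dy -= n
    if dx > m // 2:
        dx -= m
    return int(dy), int(dx)
```

In a full PIV pipeline this integer estimate is refined by sub-pixel interpolation of the correlation peak, which is precisely where the false-peak artefact mentioned in Sect. 2 appears.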
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Architecture</head><p>To overcome the drawbacks of the existing data input interfaces of supercomputers, we suggest changing the architecture of the supercomputer input/output system. In contrast to the traditional approach of transferring data to a supercomputer through a head node, we propose writing data directly to the storage system of the supercomputer. The suggested architecture is presented in Fig. <ref type="figure" target="#fig_1">2</ref>. The storage system of the supercomputer must use Network Attached Storage (NAS) technology and contain at least two network interfaces to provide the ability to connect the PIV system. One interface is used to connect the nodes of the supercomputer to the storage, and the other is intended to connect the PIV system.</p><p>Connection to the storage system can be established by standard network protocols, such as NFS (Linux- and UNIX-based computers) and CIFS (Windows-based computers). One logical volume inside the storage system can be connected to the nodes of the supercomputer and to the PIV system simultaneously. The storage system prevents data losses caused by concurrent access to files using the mechanisms of the NFS and CIFS network file sharing protocols.</p><p>The storage is presented to the PIV system as a simple network drive or directory, but the images from the camera written to this drive are available not only to the PIV system but also to the supercomputer nodes.</p><p>The main advantage of the proposed architecture is the transparent integration of the PIV system and the supercomputer: there is no need to modify the experimental facility. The only required change is to write data to the network drive instead of the local drive of the personal computer of the PIV system.</p></div>
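As an illustration of the dual-protocol access described above, the client-side configuration might look as follows. This is only a sketch: the host name, export path, share name, and user are hypothetical, and the actual NAS configuration is vendor-specific.

```shell
# On each Linux compute node: mount the shared volume over NFS
# ("nas-data" and /vol/piv are hypothetical names)
mount -t nfs nas-data:/vol/piv /mnt/piv

# On the Windows PC of the PIV facility: map the same volume over CIFS
#   net use P: \\nas-data\piv /user:pivlab
# Images written to P: by the PIV software then appear under /mnt/piv
# on the compute nodes without any intermediate copy.
```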
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Implementation</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1">Trial Supercomputer Structure</head><p>The proposed architecture has been implemented in the supercomputer "URAN", which is installed at the Institute of Mathematics and Mechanics UrB RAS (IMM UrB RAS), Yekaterinburg, and used to connect the PIV system of ICMM UrB RAS to this supercomputer.</p><p>The supercomputer "URAN" has a peak performance of approximately 160 TFlops and consists of Linux-based nodes with Intel CPUs and NVIDIA GPUs. The storage subsystem of the supercomputer "URAN" includes the internal drives of the head node and the NAS system EMC Celerra NS-480. This NAS system has 30 TB of usable capacity, two hardware RAID controllers, and 8 Gigabit Ethernet network interfaces. Six of those interfaces are used to connect the supercomputer nodes to the storage subsystem, while the other two are devoted to external experimental facilities. These two interfaces are connected to the Academic Network of the Ural Branch of RAS and to the Internet. Therefore, experimental facilities can transfer data over the network directly to the storage system of the supercomputer "URAN".</p><p>The PIV system at the ICMM UrB RAS includes two high-speed cameras, each of which generates pairs of 4 Megapixel images at a frequency of 15 Hz; therefore, the maximum data stream is 240 Mb/s. The PIV system is managed by a Windows-based personal computer, which also runs the ActualFlow software for processing images using the cross-correlation algorithm. ICMM UrB RAS and IMM UrB RAS are connected by a dedicated channel of the Academic Network UrB RAS utilizing DWDM technology. The speed of the channel's physical medium is 1 Gb/s, the length of the channel is approximately 400 km, and the round-trip time is approximately 5 ms.</p><p>The supercomputer storage system has a separate logical volume devoted to storing experimental data from the PIV system. 
The logical volume consists of ten 300 GB Fibre Channel disks and has a usable capacity of 1.8 TB. The logical volume is exported simultaneously via the NFS and CIFS protocols. The supercomputer nodes, which run Linux, mount this logical volume via NFS at the directory /home3. The personal computer of the PIV system, which runs Windows, uses CIFS to mount the logical volume as a network drive. When the PIV system writes data to this drive, they become available to the supercomputer nodes in the specified directory. Simultaneous work with the same logical volume from different operating systems over different network protocols is provided by the NAS system EMC Celerra NS-480.</p></div>
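The 240 Mb/s figure quoted above for the two-camera stream is consistent with a short back-of-the-envelope calculation, assuming 8-bit monochrome pixels (an assumption on our part, not stated in the text) and reading the figure as megabytes per second:

```python
cameras = 2            # two high-speed cameras
images_per_pair = 2    # PIV records image pairs
pairs_per_second = 15  # recording frequency, Hz
megapixels = 4         # pixels per image, in millions
bytes_per_pixel = 1    # assumed 8-bit monochrome sensor

# Peak aggregate stream in megabytes per second
rate = cameras * images_per_pair * pairs_per_second * megapixels * bytes_per_pixel
print(rate)
```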
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2">Security</head><p>Data from the PIV system to the supercomputer "URAN" are transferred through a dedicated channel of the Academic Network of UrB RAS, which is isolated from the Internet. Inside the Academic Network of UrB RAS, the connection is further isolated by VLAN technology. The dedicated VLAN contains only the computer of the PIV system and those network interfaces of the supercomputer NAS system that are intended for connecting external experimental facilities.</p><p>The experiments run on the PIV system at the ICMM UrB RAS do not demand high security. Therefore, we decided not to use encryption because of its large overhead. As a result, the performance of data transfer is improved, while a sufficient level of security is provided by the isolation of the communication channel from the Internet. Inside the storage system, two levels of access control are implemented. The first level is the restriction of access to the logical volume by the IP address of the experimental facility. The second level is restriction by user name and password. The mapping between user names and file owners in NFS and CIFS is provided by the NAS system. As a result, several users with different names and passwords can work with the PIV system, for example, to carry out different experiments. The files of such users are isolated from each other.</p></div>
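On a generic Linux NFS server, the first access-control level (restriction by IP address) corresponds to an export entry like the one below. The EMC Celerra NS-480 is configured through its own management tools, so this /etc/exports fragment, with a hypothetical path and address, is only an illustration:

```shell
# /etc/exports: export the experimental volume only to the PIV host,
# read-write, with synchronous writes
/vol/piv  192.168.10.21(rw,sync,no_subtree_check)

# Re-export after editing the file
exportfs -ra
```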
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.3">Performance Evaluation</head><p>To estimate the performance of the suggested solution, we ran several experiments to test the speed of data transfer to the supercomputer using different protocols. We used a sequential write speed test because the PIV system creates exactly this type of load by sequentially writing the flow images recorded by the camera.</p><p>The first experiment tests the traditional approach of transferring data to supercomputers through the head node via the SCP protocol. We copied the images generated by the PIV system from a personal computer with Windows 7 to the supercomputer "URAN" using the pscp and WinSCP utilities. The speed of data transfer was measured by the built-in tools of pscp and WinSCP.</p><p>The second experiment writes data directly to the network storage of the supercomputer from the Windows 7 machine via CIFS (using both the SMB1 and SMB2 protocols). To increase the performance of data transfer over the long-distance channel, the Compound TCP <ref type="bibr" target="#b10">[11]</ref> protocol was enabled in Windows.</p><p>The performance was measured with the iozone utility (IOzone Filesystem Benchmark, http://www.iozone.org/). In the third experiment, the data were also written directly to the supercomputer storage, from a computer with Ubuntu Linux 11.04 via NFS. We used version 4 of NFS, and the block size (both wsize and rsize) was 1 MB. The performance was again measured with iozone.</p><p>Results of the experiments are presented in Fig. <ref type="figure" target="#fig_2">3</ref>. Each type of experiment was run 100 times; the average results with confidence intervals are presented. The worst results were obtained using the traditional approach of transferring data through the head node via the SCP protocol. 
Such a low speed is caused by the presence of an intermediate element (the head node of the supercomputer) and by the encryption used by SCP, which create significant unnecessary overhead.</p><p>The best results are achieved by the Linux machine with the NFS protocol. Its performance is 6 times higher than that of SCP. Despite these good results, this approach cannot be used in the PIV system of the ICMM UrB RAS because it includes a personal computer with Windows. In the future, however, Linux-based experimental facilities can be connected to the supercomputer via the NFS protocol.</p><p>The performance of the Windows machine with SMB version 2 is only slightly lower than that of the Linux machine. SMB2 also provides a 6-times speedup compared to traditional SCP. The disadvantage of SMB2 is that it is available only in relatively new versions of Windows, such as Windows Vista, Windows 7, and Windows Server 2008. Earlier versions of Windows support only the SMB1 protocol, which provides only half of the SMB2 performance (see Fig. <ref type="figure" target="#fig_2">3</ref>). However, both SMB2 and SMB1 provide enough performance for data transfer from the PIV system of the ICMM UrB RAS to the supercomputer "URAN".</p><p>The experimental results have confirmed that writing data directly to the network storage of the supercomputer can notably speed up the data transfer.</p><p>It should be emphasized that a significant difference from the results presented in <ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b9">10]</ref> is that the performance in our experiments was reached in a single session. Consequently, it is not necessary to run several sessions simultaneously to achieve a high speed of data transfer.</p></div>
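The sequential write measurements can be reproduced with an iozone invocation roughly like the following (the mount point and file size are hypothetical; -i 0 selects the write/rewrite test, and -r matches the 1 MB NFS block size used in the experiments):

```shell
# Sequential write benchmark against the mounted supercomputer volume:
#   -i 0  run only the write/rewrite test
#   -s 4g write a 4 GB test file
#   -r 1m use a 1 MB record size
#   -f    path of the test file on the network volume
iozone -i 0 -s 4g -r 1m -f /mnt/piv/iozone.tmp
```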
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Conclusions</head><p>An approach to organizing high-speed data transfer from the PIV system to the supercomputer, based on writing directly to the supercomputer network storage, has been presented. The advantage of the approach is that it can be used without modification of the experimental facility. Performance testing shows that direct writing to the network storage via the CIFS or NFS protocols can increase the data transfer speed 6 times in comparison with the traditional data transfer through the supercomputer head node via the SCP or FTP protocols. In contrast to the previous work <ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b9">10]</ref>, the speedup can be achieved in one network session, without the requirement to run multiple data transfer sessions simultaneously to utilize the bandwidth of the network connection.</p><p>The suggested approach has been implemented to connect the PIV system at the ICMM UrB RAS, Perm, Russia to the supercomputer "URAN" at the IMM UrB RAS, Yekaterinburg, Russia. The distance between the PIV system and the supercomputer is approximately 400 km. The connection uses a dedicated Gigabit Ethernet channel of the Academic Network of UrB RAS.</p><p>The high-speed data transfer provides the ability to process experimental data from the PIV system on the supercomputer in real time and to control the experiment based on the results of such processing. Moreover, it is possible to use the supercomputer to implement highly accurate but computationally demanding image processing algorithms; a personal computer cannot run such algorithms due to its low computational resources. Since the results of processing can also be written to the storage system, the user of the PIV system can visualize the experiment using standard existing tools. 
As a result, the user can monitor the course of the experiment and control its conditions.</p><p>Future work includes conducting closed-loop experiments with feedback based on PIV measurements; connecting other experimental facilities to the "URAN" supercomputer, such as the setup for two-phase flow control in the spray of injectors for aircraft engines <ref type="bibr" target="#b3">[4]</ref>; implementing the adaptive and wavelet cross-correlation algorithms of velocity field estimation; evaluating the possibility of running these algorithms on GPUs; and increasing the speed of data transfer by using 10G Ethernet network equipment.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. Typical monoscopic PIV system [2] with a network access</figDesc><graphic coords="3,169.35,116.83,276.67,209.44" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 2 .</head><label>2</label><figDesc>Fig. 2. Architecture of data transfer from PIV system to a supercomputer</figDesc><graphic coords="4,188.37,448.13,238.62,136.24" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 3 .</head><label>3</label><figDesc>Fig. 3. Sequential write speed test</figDesc><graphic coords="7,188.37,229.08,238.62,154.39" type="bitmap" /></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Acknowledgments. The work was supported by the Ural Branch of the Russian Academy of Sciences and the Russian Foundation for Basic Research (grant 17-45-590846) and by the Research Program of the Ural Branch of RAS, project no. 15-7-1-26. Our study was performed using the Uran supercomputer of the Krasovskii Institute of Mathematics and Mechanics and the cluster of the Ural Federal University.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Scattering particle characteristics and their effect on pulsed laser measurements of fluid flow: speckle velocimetry vs particle image velocimetry</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">J</forename><surname>Adrian</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Applied Optics</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="issue">11</biblScope>
			<biblScope unit="page" from="1690" to="1691" />
			<date type="published" when="1984">1984</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Twenty years of particle image velocimetry</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">J</forename><surname>Adrian</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Experiments in Fluids</title>
		<imprint>
			<biblScope unit="volume">39</biblScope>
			<biblScope unit="page" from="159" to="169" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Laboratory study of differential rotation in a convective rotating layer</title>
		<author>
			<persName><forename type="first">V</forename><surname>Batalov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sukhanovsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Frick</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. Geophys. Astrophys. Fluid Dynamics</title>
		<imprint>
			<biblScope unit="volume">104</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="349" to="368" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">The use of field measurement techniques to study two-phase flows</title>
		<author>
			<persName><forename type="first">V</forename><surname>Batalov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Kolesnichenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Stepanov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sukhanovsky</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Vestnik Permskogo Universiteta. Mathematics. Mechanics. Informatics</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="issue">9</biblScope>
			<biblScope unit="page" from="21" to="25" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Wavelet-based Faraday rotation measure synthesis</title>
		<author>
			<persName><forename type="first">P</forename><surname>Frick</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Stepanov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Sokoloff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Beck</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Monthly Notices of the Royal Astronomical Society Letters</title>
		<imprint>
			<biblScope unit="volume">401</biblScope>
			<biblScope unit="page" from="L24" to="L28" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Development of high resolution particle image velocimetry for use in artificial heart research</title>
		<author>
			<persName><forename type="first">P</forename><surname>Hochareon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Manning</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Fontaine</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Deutsch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Tarbell</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Second Joint EMBS-BMES Conference</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2002-10">Oct 2002</date>
			<biblScope unit="page" from="1591" to="1592" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Theory of cross-correlation analysis of PIV images</title>
		<author>
			<persName><forename type="first">R</forename><surname>Keane</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Adrian</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Applied Scientific Research</title>
		<imprint>
			<biblScope unit="volume">49</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="191" to="215" />
			<date type="published" when="1992">JUL 1992</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Wavelet crosscorrelations of two-dimensional signals</title>
		<author>
			<persName><forename type="first">I</forename><surname>Mizeva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Stepanov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Frick</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Numerical methods and programming</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page" from="172" to="179" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Initiative project &quot;Distributed PIV&quot;</title>
		<author>
			<persName><forename type="first">R</forename><surname>Stepanov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Masich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Masich</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of Scientific service in the Internet: scalability, parallelism, efficiency</title>
				<meeting>Scientific service in the Internet: scalability, parallelism, efficiency</meeting>
		<imprint>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="360" to="363" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Processing the stream of experimental data on the supercomputer</title>
		<author>
			<persName><forename type="first">R</forename><surname>Stepanov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Masich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sukhanovsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Schapov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Igumnov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Masich</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of Scientific service in the Internet: exaflops future</title>
				<meeting>Scientific service in the Internet: exaflops future</meeting>
		<imprint>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="168" to="174" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">A Compound TCP approach for high-speed and long distance networks</title>
		<author>
			<persName><forename type="first">K</forename><surname>Tan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Song</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sridharan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. IEEE INFOCOM</title>
				<meeting>IEEE INFOCOM</meeting>
		<imprint>
			<date type="published" when="2006">2006</date>
			<biblScope unit="page" from="1" to="12" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Recent applications of particle image velocimetry in large-scale industrial wind tunnels</title>
		<author>
			<persName><forename type="first">C</forename><surname>Willert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Raffel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kompenhans</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Congress on Instrumentation in Aerospace Simulation Facilities</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="1997-09">Sep 1997</date>
			<biblScope unit="page" from="258" to="266" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Real-time particle image velocimetry for closed-loop flow control applications</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">E</forename><surname>Willert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">J</forename><surname>Munson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gharib</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">15th Int Symp on Applications of Laser Techniques to Fluid Mechanics</title>
				<imprint>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Real-time particle image velocimetry for feedback loops using FPGA implementation</title>
		<author>
			<persName><forename type="first">H</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Leeser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Tadmor</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Aerospace Computing, Information, and Communication</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page" from="52" to="62" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
