<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Distributed PIV Technology: Network Storage Usage</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Rodion Stepanov</string-name>
          <email>rodion@icmm.ru</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrey Sozykin</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institute of Continuous Media Mechanics UrB RAS</institution>
          ,
          <addr-line>Perm</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Krasovskii Institute of Mathematics and Mechanics</institution>
          ,
          <addr-line>Yekaterinburg</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Ural Federal University</institution>
          ,
          <addr-line>Yekaterinburg</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <fpage>121</fpage>
      <lpage>129</lpage>
      <abstract>
        <p>An approach to transferring data from a particle image velocimetry system to a supercomputer through network attached storage is suggested. The advantages of the approach are simple implementation and high communication speed. Connecting the particle image velocimetry system to the supercomputer allows us to carry out real-time controlled experiments with feedback and to apply computationally intensive algorithms for processing.</p>
      </abstract>
      <kwd-group>
        <kwd>particle image velocimetry</kwd>
        <kwd>supercomputer</kwd>
        <kwd>network attached storage</kwd>
        <kwd>high performance computing</kwd>
        <kwd>high-speed network</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Particle Image Velocimetry (PIV) is a popular method of visualizing fluid
and gas flows owing to its ability to estimate the velocity field [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The PIV method is
widely used in hydrodynamics [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], aerodynamics [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], astrophysics [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], medical
research [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], and other areas of science. A specific feature of PIV is the generation of
a large amount of images during the measurement process (tens or thousands of
gigabytes), which is then used to compute the velocity field of the flow. Nowadays
the images are processed on personal computers, which do not have enough
performance to process the data in real time. Hence, controlled experiments cannot
be carried out. In addition, the relatively low performance of personal computers
is not suitable for advanced, computationally demanding algorithms of velocity
field calculation. The widely used standard crosscorrelation algorithm meets only
the minimal requirements for processing quality. Connecting the PIV system to
a supercomputer removes the computational resource restriction and provides
the ability to use preprocessing procedures for noise filtering and adaptive
algorithms. Effective distribution of the computations makes it possible to process
images in real time and run experiments with feedback.
      </p>
      <p>The main problem in connecting a PIV system to a supercomputer is the lack
of high-speed data transfer interfaces in modern supercomputers. Although
supercomputers use high-speed network technologies (Gigabit and 10G Ethernet),
the popular protocols used to transfer data to a supercomputer (SCP,
FTP, and so on) cannot utilize the full network bandwidth. In addition, the
encryption of transferred data, widely used for security reasons, creates
significant overhead and further decreases performance. Encryption is
often unnecessary for connecting an experimental facility to a supercomputer
and, therefore, should not be used.</p>
      <p>The common approach to data processing on a supercomputer is also
problematic. According to de facto standards, experimental data are first accumulated on
local storage, then the whole data set is transmitted to the supercomputer,
and only after that can it be processed. In this case, real-time data processing
is not possible.</p>
      <p>The speed of data transfer to a supercomputer can be increased by eliminating
unnecessary intermediate elements, such as the local storage of the experimental facility
and the head node of the supercomputer. This can be done by writing experimental
data directly to the supercomputer storage system. In this paper we present and
test an architecture of the supercomputer input/output system that provides
such capabilities, describe the implementation of the proposed architecture in
the "URAN" supercomputer, and describe the connection of the PIV system of the Institute
of Continuous Media Mechanics UrB RAS (ICMM UrB RAS) to this
supercomputer.</p>
    </sec>
    <sec id="sec-2">
      <title>PIV System</title>
      <p>The PIV method is based on fast recording of the motion of a flow seeded with small particles,
which can be specially added to the medium or already be present there. The velocity field
is estimated by comparing two images taken in rapid succession. A scheme
of a typical PIV system is shown in Fig. 1.</p>
      <p>The base components of a PIV system are a pulsed laser illuminating the
particles, a camera capturing pairs of images at small time intervals, and a personal
computer, which synchronizes the laser and the camera and computes the crosscorrelation
between images. Modern cameras can generate a data stream of up to 500 Mbit/s.
Some PIV systems use two cameras to compute three components of velocity
and produce an even larger data stream. The average volume of data generated
during one experiment varies from 100 GB to 10 TB depending on the details of the
experiment.</p>
      <p>
        The velocity field is determined by analysing a pair of images. Several
algorithms exist for this purpose; the most popular is the crosscorrelation
algorithm [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. One of the drawbacks of the standard crosscorrelation algorithm
is the requirement of using computational areas with rectangular shape and fixed
boundaries. This leads to a low dynamic range of velocity field values. Another
disadvantage is the sub-pixel interpolation procedure, which causes false peak creation
near integer-valued displacements on the probability distribution function.
      </p>
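      <p>The core of the standard crosscorrelation algorithm can be illustrated with a minimal sketch (this is not the implementation used in the described system; window size, particle placement, and function names are illustrative): each pair of interrogation windows is crosscorrelated directly, and the location of the correlation peak gives the particle displacement in pixels.</p>

```python
def window_displacement(win_a, win_b, max_shift=4):
    """Direct crosscorrelation of two interrogation windows (lists of
    rows): the shift (dy, dx) maximizing the correlation sum is the
    estimated particle displacement in pixels."""
    h, w = len(win_a), len(win_a[0])
    mean = lambda win: sum(map(sum, win)) / (h * w)
    ma, mb = mean(win_a), mean(win_b)
    best, best_shift = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            s = 0.0
            for y in range(h):
                for x in range(w):
                    y2, x2 = y + dy, x + dx
                    if 0 <= y2 < h and 0 <= x2 < w:
                        # Mean-subtracted product, as in the standard method
                        s += (win_a[y][x] - ma) * (win_b[y2][x2] - mb)
            if best is None or s > best:
                best, best_shift = s, (dy, dx)
    return best_shift

# Two synthetic 16x16 interrogation windows: three "particles",
# with the second frame shifted by (dy, dx) = (2, 3) pixels.
N = 16
frame_a = [[0.0] * N for _ in range(N)]
frame_b = [[0.0] * N for _ in range(N)]
for (y, x) in [(3, 4), (9, 2), (12, 11)]:
    frame_a[y][x] = 1.0
    frame_b[y + 2][x + 3] = 1.0
print(window_displacement(frame_a, frame_b))  # -> (2, 3)
```

      <p>A real implementation computes the correlation via the FFT and adds sub-pixel peak interpolation; the sketch only shows why a rectangular window with fixed boundaries limits the measurable displacement to the chosen shift range.</p>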
      <p>
        As an alternative to the crosscorrelation algorithm for computing the
velocity field, adaptive selection of computational areas and wavelet crosscorrelation
algorithms [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] can be used. However, the relatively low performance of a personal
computer constrains the development and application of these algorithms due
to their high computational requirements.
      </p>
      <p>[Fig. 1. Scheme of a typical PIV system: seeder, flow, synchronizer, image recording, digitizer, storage, image processing, displacement interrogation, validation, flow field u(x,t), statistics (mean, rms), display.]</p>
      <p>While the crosscorrelation algorithm requires 150 CPU cores for real-time
processing, the wavelet crosscorrelation needs 1500 CPU cores for full processing
of the data. However, to run experiments with feedback, it is not necessary to
process all images generated by the PIV system: processing one pair of images
per second is enough for control. Therefore, for the purpose of real-time controlled
experiments, the wavelet crosscorrelation algorithm requires only 150 CPU cores.</p>
    </sec>
    <sec id="sec-3">
      <title>Related Work</title>
      <p>
        [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] describes real-time controlled experiments with feedback based on PIV
measurements. Mineral oil was chosen as the working fluid because it
provides a relatively low flow speed, up to 25 cm/s. Such a low speed makes it
possible to process PIV images on a personal computer with one dual-core
CPU. The authors state that the performance of their system is limited, which
leads to occasional image loss and control command delays. To solve this problem
and to control flows at higher speeds, the performance of image processing
needs to be increased, for example, with the help of a supercomputer.
      </p>
      <p>
        The first effort to connect a PIV system to a supercomputer was made as part
of the "Distributed PIV" project [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. An attempt was made to transfer data
from the PIV system at ICMM UrB RAS, Perm, to the supercomputer "Chebyshev"
at Moscow State University through a dedicated 1 Gb/s network channel. As
a result, the restrictions of the standard technologies used to transfer
data to supercomputers were revealed. In particular, the maximum speed of
writing data to the supercomputer using standard network protocols was only
approximately 300 Mb/s for CIFS and 500 Mb/s for FTP. These rates were
achieved by running several data transfer sessions simultaneously; the speed of each
individual session was significantly smaller. Based on the results of the conducted
experiments, [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] suggested an architecture and a special protocol for multisession
data transfer from the PIV system directly to supercomputer nodes, bypassing the
head node. However, to use the proposed protocol, special software must be
developed and installed on both the PIV system and the supercomputer.
      </p>
      <p>
        The data processing speed of a PIV system can be increased not only
by using supercomputers, but also with the help of Field Programmable Gate
Arrays (FPGAs). The paper [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] describes a hardware implementation of the direct
crosscorrelation algorithm based on an FPGA. The FPGA board
is installed into the case of the personal computer in the PIV system, which
provides computational resources without creating an infrastructure for data
transfer to a supercomputer. However, FPGA programming is much more
complicated than developing software for supercomputers, which
constrains wide FPGA adoption.
      </p>
    </sec>
    <sec id="sec-4">
      <title>Architecture</title>
      <p>To overcome the drawbacks of the existing data input interfaces of supercomputers,
we suggest changing the architecture of the supercomputer input/output
system. In contrast to the traditional approach of transferring data to a supercomputer
through a head node, we propose writing data directly to the storage system of
the supercomputer. The suggested architecture is presented in Fig. 2.</p>
      <p>The storage system of the supercomputer must use Network Attached
Storage (NAS) technology and contain at least two network interfaces to provide the
ability to connect the PIV system. One interface is used to connect the nodes of
the supercomputer to the storage, and the other is intended to connect the PIV
system.</p>
      <p>Connection to the storage system can be established by standard
network protocols, such as NFS (for Linux- and UNIX-based computers) and CIFS
(for Windows-based computers). One logical volume inside the storage system can
be connected to the nodes of the supercomputer and to the PIV system
simultaneously. The storage system prevents data losses caused by concurrent access to files
using the mechanisms of the NFS and CIFS network file sharing protocols.</p>
      <p>The storage is presented to the PIV system as a simple network drive or a
directory. The images from the camera that are written to this drive are
available not only to the PIV system, but also to the supercomputer nodes.</p>
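      <p>How the supercomputer side picks up newly written images is not specified in the text; one hypothetical sketch of a node-side consumer is a simple poll of the shared directory (the file pattern, directory layout, and polling approach below are all illustrative assumptions):</p>

```python
import tempfile
from pathlib import Path

def poll_new_images(shared_dir, seen, pattern="*.tif"):
    """Return image files that appeared on the shared volume since the
    previous poll; `seen` accumulates everything already handed out."""
    current = set(Path(shared_dir).glob(pattern))
    fresh = sorted(current - seen)
    seen |= current
    return fresh

# Demo against a temporary directory standing in for the NAS volume
# (on a real node this would be the mount point of the shared storage).
with tempfile.TemporaryDirectory() as nas:
    seen = set()
    (Path(nas) / "pair_0001_a.tif").touch()
    print([p.name for p in poll_new_images(nas, seen)])
    (Path(nas) / "pair_0001_b.tif").touch()
    print([p.name for p in poll_new_images(nas, seen)])
```

      <p>Each poll returns only the files that are new since the last call, so a processing job can dispatch image pairs to worker nodes as they arrive.</p>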
      <p>The main advantage of the proposed architecture is the transparent
integration of the PIV system and the supercomputer: there is no need to modify
the experimental facility. The only required change is to write data to the
network drive instead of the local drive of the personal computer in the PIV system.</p>
    </sec>
    <sec id="sec-5">
      <title>Implementation</title>
      <sec id="sec-5-1">
        <title>Trial Supercomputer Structure</title>
        <p>The proposed architecture has been implemented in the supercomputer "URAN",
installed at the Institute of Mathematics and Mechanics UrB RAS (IMM
UrB RAS), Yekaterinburg, and used to connect the PIV system of ICMM UrB
RAS to this supercomputer.</p>
        <p>The supercomputer "URAN" has a peak performance of approximately 160
TFlops and consists of Linux-based nodes with Intel CPUs and NVIDIA GPUs.
The storage subsystem of the supercomputer "URAN" includes the internal
drives of the head node and the NAS system EMC Celerra NS-480. This NAS
system has 30 TB of usable capacity, two hardware RAID controllers, and
8 Gigabit Ethernet network interfaces. Six of these interfaces are used to connect
the supercomputer nodes to the storage subsystem, while the other two are devoted
to external experimental facilities. These two interfaces are connected to the
Academic Network of the Ural Branch of RAS and to the Internet. Therefore,
experimental facilities can transfer data over the network directly to
the storage system of the supercomputer "URAN".</p>
        <p>The PIV system at ICMM UrB RAS includes two high-speed cameras,
each of which generates pairs of 4 Megapixel images at a frequency of 15 Hz;
therefore, the maximum data stream is 240 Mb/s. The PIV system is managed by a
Windows-based personal computer, which also runs the ActualFlow software
for processing images using the crosscorrelation algorithm. ICMM UrB RAS and
IMM UrB RAS are connected by a dedicated channel of the Academic Network of
UrB RAS utilizing DWDM technology. The speed of the channel's physical
media is 1 Gb/s, the length of the channel is approximately 400 km, and the round-trip
time is approximately 5 ms.</p>
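        <p>The quoted peak stream can be reproduced with a back-of-the-envelope calculation, under two assumptions not stated in the text: 8-bit monochrome pixels, and the 240 figure interpreted as megabytes per second.</p>

```python
# Peak PIV data stream (assumptions: 8-bit monochrome pixels,
# "240" interpreted as megabytes per second).
cameras = 2
pairs_per_second = 15        # camera frequency, Hz
images_per_pair = 2
pixels_per_image = 4_000_000  # 4 Megapixels
bytes_per_pixel = 1           # assumed 8-bit depth

rate_bytes = (cameras * pairs_per_second * images_per_pair
              * pixels_per_image * bytes_per_pixel)
print(rate_bytes / 1e6)  # -> 240.0 megabytes per second
```
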
        <p>The supercomputer storage system has a separate logical volume devoted
to storing experimental data from the PIV system. The logical volume consists
of ten 300 GB Fibre Channel disks and has a usable capacity of 1.8 TB. The
logical volume is exported simultaneously by the NFS and CIFS protocols. The
supercomputer nodes, running Linux, mount this logical volume
by NFS in the special directory /home3. The personal computer in the PIV
system, running Windows, uses CIFS to mount the logical volume as a network drive.
When the PIV system writes data to this drive, the data become available to the
supercomputer nodes in the specified directory. Simultaneous work with the same
logical volume from different operating systems over different network protocols is
provided by the NAS system EMC Celerra NS-480.</p>
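        <p>For illustration, the two mounts could look as follows; the NAS host name, export path, share name, and user name are hypothetical, and the block-size options match the NFS settings reported in the performance evaluation:</p>

```shell
# On a Linux node (NFS v4, 1 MB read/write block size; host and path hypothetical):
mount -t nfs4 -o rsize=1048576,wsize=1048576 nas.example:/piv /home3

# On the Windows PIV computer (CIFS network drive; share and user hypothetical):
net use Z: \\nas.example\piv /user:piv_user
```
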
      </sec>
      <sec id="sec-5-2">
        <title>Security</title>
        <p>Data from the PIV system to the supercomputer "URAN" are transferred through
a dedicated channel of the Academic Network of UrB RAS, which is isolated
from the Internet. Inside the Academic Network of UrB RAS, the network
connection is further isolated by VLAN technology. The dedicated VLAN contains only the
computer of the PIV system and those network interfaces of the supercomputer NAS
system that are intended for connecting external experimental facilities.</p>
        <p>The experiments running on the PIV system at ICMM UrB RAS do not
demand high security. Therefore, we decided not to use encryption due
to its large overhead. As a result, the performance of data transfer is improved,
while a sufficient level of security is provided by the isolation of the communication
channel from the Internet.</p>
        <p>Inside the storage system, two levels of access control are implemented. The
first level is the restriction of access to the logical volume by the IP address of the experimental
facility. The second level is restriction by user name and password. Mapping of
user names and file owners between NFS and CIFS is provided by the NAS system. As a
result, several users with different names and passwords
can work with the PIV system, for example, to carry out different experiments.
The files of such users are isolated from each other.</p>
      </sec>
      <sec id="sec-5-3">
        <title>Performance Evaluation</title>
        <p>To estimate the performance of the suggested solution, we ran several experiments
to test the speed of data transfer to the supercomputer by different protocols.
We used a sequential write speed test because the PIV system creates this type of load
due to the sequential writing of flow images recorded by the camera.</p>
        <p>The first experiment tests the traditional approach of transferring
data to supercomputers through the head node by the SCP protocol. We copied the
images generated by the PIV system from the personal computer with Windows
7 to the supercomputer "URAN" using the pscp and WinSCP utilities. The speed of
data transfer was measured by the built-in tools of pscp and WinSCP.</p>
        <p>
          The second experiment writes data directly to the network storage of the
supercomputer from the Windows 7 machine by CIFS (using both the SMB1 and SMB2
protocols). To increase the performance of data transfer through the long-distance
channel, the Compound TCP [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] protocol was enabled in Windows.
The performance was measured by the iozone utility (IOzone Filesystem
Benchmark, http://www.iozone.org/). In the third experiment, the data were also written
directly to the supercomputer storage, from a computer with Ubuntu Linux
11.04 by NFS. We used NFS version 4, and the block size (both wsize
and rsize) was 1 MB. The performance was also measured by iozone.
        </p>
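        <p>The sequential-write access pattern that these benchmarks measure can be sketched in a few lines (an illustration of the load type, not the iozone benchmark itself; sizes and the file name are arbitrary):</p>

```python
import os
import tempfile
import time

def sequential_write_mbps(path, total_mb=16, block_kb=1024):
    """Time a sequential write of total_mb megabytes in block_kb-sized
    blocks -- the access pattern a PIV camera stream produces."""
    block = b"\0" * (block_kb * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb * 1024 // block_kb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force the data out of the page cache
    elapsed = time.perf_counter() - start
    os.remove(path)
    return total_mb / elapsed  # MB/s

with tempfile.TemporaryDirectory() as d:
    rate = sequential_write_mbps(os.path.join(d, "piv.bin"))
    print(f"sequential write: {rate:.0f} MB/s")
```

        <p>Pointing the path at an NFS or CIFS mount instead of a local directory reproduces the kind of measurement reported below, although iozone controls caching effects far more carefully.</p>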
        <p>The results of the experiments are presented in Fig. 3. Each type of experiment
was run 100 times; the average results with confidence intervals are presented.</p>
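        <p>The per-run throughput numbers are not given in the text, so the sample values below are made up; the snippet only shows how a mean with a 95% confidence interval is computed from repeated measurements (normal approximation):</p>

```python
import statistics

def mean_ci(samples, z=1.96):
    """Mean and 95% confidence half-width of repeated throughput
    measurements (normal approximation, z = 1.96)."""
    m = statistics.mean(samples)
    half = z * statistics.stdev(samples) / len(samples) ** 0.5
    return m, half

# 100 hypothetical per-run write speeds in MB/s (illustrative values).
runs = [112.0, 118.5, 110.2, 115.9, 113.3] * 20
m, h = mean_ci(runs)
print(f"{m:.2f} +/- {h:.2f} MB/s")
```
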
        <p>The worst results are obtained using the traditional approach of transferring
data through the head node by the SCP protocol. Such a low speed is caused by the
presence of an intermediate element (the head node of the supercomputer) and by the
encryption used by SCP, which create significant unnecessary overhead.</p>
        <p>The best results are achieved by the Linux machine with the NFS protocol. Its
performance is 6 times higher than that of SCP. Despite such good results, this approach
cannot be used in the PIV system of ICMM UrB RAS because it includes a
personal computer with Windows. However, in the future, Linux-based experimental
facilities can be connected to the supercomputer by the NFS protocol.</p>
        <p>The performance of the Windows machine with SMB version 2 is only slightly
less than the performance of the Linux machine. SMB2 also provides a 6-times
speedup compared to traditional SCP. The disadvantage of SMB2 is that it is
available only in relatively new versions of Windows, such as Windows Vista,
Windows 7, and Windows Server 2008. Earlier versions of Windows support only the
SMB1 protocol, which provides only half of the SMB2 performance (see Fig. 3).
However, both SMB2 and SMB1 provide enough performance for data transfer
from the PIV system of ICMM UrB RAS to the supercomputer "URAN".</p>
        <p>The experimental results have confirmed that writing data directly to the
network storage of the supercomputer can notably speed up the data transfer.</p>
        <p>
          It should be emphasized that a significant difference from the results
presented in [
          <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
          ] is that the performance in our experiments was reached in one
session. Consequently, it is not necessary to run several sessions simultaneously
to achieve a high speed of data transfer.
        </p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Conclusions</title>
      <p>
        An approach to organizing high-speed data transfer from the PIV system to
the supercomputer, based on writing directly to the supercomputer network storage,
has been presented. The advantage of the approach is that it can be used without
modification of the experimental facility. The performance testing shows that direct
writing to the network storage by the CIFS or NFS protocols can increase the data
transfer speed 6 times in comparison with the traditional data transfer through the
supercomputer head node by the SCP or FTP protocols. In contrast to the previous
work [
        <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
        ], the speedup can be achieved in one network session, without the
requirement to run multiple data transfer sessions simultaneously to utilize
the bandwidth of the network connection.
      </p>
      <p>The suggested approach has been implemented to connect the PIV system at
ICMM UrB RAS, Perm, Russia, to the supercomputer "URAN" at IMM
UrB RAS, Yekaterinburg, Russia. The distance between the PIV system and the
supercomputer is approximately 400 km. The connection uses a dedicated
Gigabit Ethernet channel of the Academic Network of UrB RAS.</p>
      <p>The high-speed data transfer provides the ability to process experimental
data from the PIV system on the supercomputer in real time and to control the
experiment based on the results of such processing. Moreover, it is possible to use
the supercomputer to implement highly accurate but computationally demanding
image processing algorithms, which cannot be run on a personal computer due to
its low computational resources. Since the results of processing
can also be written to the storage system, the user of the PIV system can visualize
the experiment using standard existing tools. As a result, the user
can monitor the course of the experiment and control its conditions.</p>
      <p>
        Future work includes conducting closed-loop experiments with feedback
based on PIV measurements; connecting other experimental facilities to the
"URAN" supercomputer, such as a setup for two-phase flux control in the spray of
injectors for aircraft engines [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]; implementing the adaptive and wavelet
crosscorrelation algorithms of velocity field estimation; evaluating the possibility of
running these algorithms on GPUs; and increasing the speed of data transfer by using
10G Ethernet network equipment.
      </p>
      <p>Acknowledgments. The work was supported by the Ural Branch of the Russian Academy
of Sciences and the Russian Foundation for Basic Research (grant 17-45-590846) and
by the Research Program of the Ural Branch of RAS, project no. 15-7-1-26. Our
study was performed using the Uran supercomputer of the Krasovskii Institute
of Mathematics and Mechanics and the cluster of the Ural Federal University.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Adrian</surname>
          </string-name>
          , R.J.:
          <article-title>Scattering particle characteristics and their effect on pulsed laser measurements of fluid flow: speckle velocimetry vs particle image velocimetry</article-title>
          .
          <source>Applied Optics</source>
          <volume>23</volume>
          (
          <issue>11</issue>
          ),
          <fpage>1690</fpage>
          -
          <lpage>1691</lpage>
          (
          <year>1984</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Adrian</surname>
          </string-name>
          , R.J.:
          <article-title>Twenty years of particle image velocimetry</article-title>
          .
          <source>Experiments in Fluids</source>
          <volume>39</volume>
          ,
          <fpage>159</fpage>
          -
          <lpage>169</lpage>
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Batalov</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sukhanovsky</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Frick</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Laboratory study of differential rotation in a convective rotating layer</article-title>
          .
          <source>J. Geophys. Astrophys. Fluid Dynamics</source>
          <volume>104</volume>
          (
          <issue>4</issue>
          ),
          <fpage>349</fpage>
          -
          <lpage>368</lpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Batalov</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kolesnichenko</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stepanov</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sukhanovsky</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>The use of field measurement techniques to study two-phase flows</article-title>
          .
          <source>Vestnik Permskogo Universiteta. Mathematics. Mechanics. Informatics</source>
          <volume>5</volume>
          (
          <issue>9</issue>
          ),
          <fpage>21</fpage>
          -
          <lpage>25</lpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Frick</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stepanov</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sokoloff</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Beck</surname>
          </string-name>
          , R.:
          <article-title>Wavelet based Faraday rotation measure synthesis</article-title>
          .
          <source>Monthly Notices of the Royal Astronomical Society Letters</source>
          <volume>401</volume>
          ,
          <fpage>L24</fpage>
          -
          <lpage>L28</lpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Hochareon</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Manning</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fontaine</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Deutsch</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tarbell</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Development of high resolution particle image velocimetry for use in artificial heart research</article-title>
          . In: Second Joint EMBS-BMES Conference. pp.
          <fpage>1591</fpage>
          -
          <lpage>1592</lpage>
          .
          IEEE
          (Oct
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Keane</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Adrian</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Theory of cross-correlation analysis of PIV images</article-title>
          .
          <source>Applied Scientific Research</source>
          <volume>49</volume>
          (
          <issue>3</issue>
          ),
          <fpage>191</fpage>
          -
          <lpage>215</lpage>
          (Jul
          <year>1992</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Mizeva</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stepanov</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Frick</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Wavelet crosscorrelations of two-dimensional signals</article-title>
          .
          <source>Numerical Methods and Programming</source>
          <volume>7</volume>
          ,
          <fpage>172</fpage>
          -
          <lpage>179</lpage>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Stepanov</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Masich</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Masich</surname>
          </string-name>
          , G.:
          <article-title>Initiative project "Distributed PIV"</article-title>
          . In:
          <source>Proceedings of Scientific Service on the Internet: Scalability, Parallelism, Efficiency</source>
          . pp.
          <fpage>360</fpage>
          -
          <lpage>363</lpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Stepanov</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Masich</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sukhanovsky</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schapov</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Igumnov</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Masich</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Processing the stream of experimental data on the supercomputer</article-title>
          . In:
          <article-title>Proceedings of Scientific Service in the Internet: Exaflops Future</article-title>
          . pp.
          <fpage>168</fpage>
          –
          <lpage>174</lpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Tan</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Song</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sridharan</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>A compound TCP approach for high-speed and long distance networks</article-title>
          .
          <source>In: Proc. IEEE INFOCOM</source>
          . pp.
          <fpage>1</fpage>
          –
          <lpage>12</lpage>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Willert</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Raffel</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kompenhans</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Recent applications of particle image velocimetry in large-scale industrial wind tunnels</article-title>
          .
          <source>In: International Congress on Instrumentation in Aerospace Simulation Facilities</source>
          . pp.
          <fpage>258</fpage>
          –
          <lpage>266</lpage>
          . IEEE
          (Sep
          <year>1997</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Willert</surname>
            ,
            <given-names>C.E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Munson</surname>
            ,
            <given-names>M.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gharib</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Real-time particle image velocimetry for closed-loop flow control applications</article-title>
          .
          <source>In: 15th Int Symp on Applications of Laser Techniques to Fluid Mechanics</source>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Yu</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leeser</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tadmor</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Real-time particle image velocimetry for feedback loops using FPGA implementation</article-title>
          .
          <source>Journal of Aerospace Computing, Information, and Communication 7</source>
          ,
          <fpage>52</fpage>
          –
          <lpage>62</lpage>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>