     Implementation of Basic Computer Vision Methods for
       Analyzing the Results of Biological Experiments*

    Kseniia Ezhova1[0000-0002-8076-876X], Andrey Veremenko1[0000-0001-5015-4081],
      Ksenia Baranova2[0000-0002-2746-2040], Alexander Belaykov2[0000-0002-4614-3336],
      Vladislav Cherebedov1[0000-0001-6066-7884], Evgeny Lanskov1[0000-0002-1634-7856]

          1 ITMO University, Kronverksky prospekt, 49, St. Petersburg, Russian Federation

                                    ezhovakv@itmo.ru
                                veremenko.andre@gmail.com
                                  jeka94-lans@yandex.ru
                                 vlad.cherebedov@mail.ru

    2 Pavlov Institute of Physiology, Russian Academy of Sciences, Tiflisskaya St., 3,
                              St. Petersburg, Russian Federation

                                   belyakov07@gmail.com
                                     ksentippa@mail.ru



          Abstract. The article discusses the video processing methods needed to au-
          tomate the processing of results from the “Morris Water Maze”, “Open Field”
          and “Elevated Cross-shaped Maze” experiments, which are used to study the
          behavior of laboratory mice under various external factors, as well as an ex-
          periment with daily tracking of Rhesus macaque activity. The received infor-
          mation is processed in the C++ programming language using the OpenCV 3.2
          and Qt 5.2 libraries. The paper then discusses new applications that extend the
          research based on the findings described here; monkey observation is pro-
          posed as a further use, chosen because of the similarity between the discussed
          and proposed methods for video processing and experiment automation. The
          advantages and disadvantages of the reviewed methods are included in the
          work. For each of the “Morris Water Maze”, “Open Field” and “Elevated
          Cross-shaped Maze” experiments, a corresponding image sequence is pro-
          vided for readers unfamiliar with the topic.

          Keywords: Computer Vision, Open Field, Morris Water Maze, Background
          Search, Motion Detection, Daily Activity, OpenCV, Qt, Elevated Cross-shaped
          Maze.




Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License
Attribution 4.0 International (CC BY 4.0).

*     Publication financially supported by RFBR grant №20-01-00358


Introduction

To acquire information about the psychological state of mammals in various unnatural
environments by studying differences in rodent behavior, the Pavlov Institute of Phys-
iology conducts a number of experiments. These experiments include the “Morris Water
Maze”, “Open Field” and “Elevated Cross-shaped Maze”. In addition to the rodent
experiments, the institute conducts experiments on larger mammals such as the Rhesus
macaque, whose daily activity is recorded and analyzed.

    The open field maze is used as a simple test of emotionality. It is generally accepted
that an animal in a suppressed emotional state will be less mobile than a psychologically
normal one.
During the “Open Field” experiment, the rodent is placed in the center of the maze and
its behavior is observed. Each time the rodent enters a new square, the event is registered.
An example of the “Open Field” labyrinth is shown in Figure 1a.




          Fig. 1. Examples of labyrinths: (a) “Open Field”, (b) “Morris Water Maze”

   The Morris Water Maze (MWM) (Figure 1b) is the main test for studying spatial
memory in rodents. The MWM is a cylindrical pool filled with water tinted with powdered
milk or chalk. A platform is submerged just below the water level, so the animal cannot
see it.
   Once in the MWM, the rodent is stressed and begins to look for a way out of the
maze. Over the course of the experiment, the animal learns to find the platform using
spatial memory and to escape faster. During the experiment, the time it takes the rodent
to find the platform is recorded.
   When tracking the daily activity of rhesus macaques, the monkey is observed in its
usual environment. This experiment was conducted to analyze the activity of monkeys,
to identify their behavioral characteristics and to analyze their emotional state outside
the tests.
   Goal and tasks:
   The aim of this work is to automate the processing of the results of the experiments
“Morris Water Maze”, “Open Field”, “Elevated Cross-shaped Maze” and tracking the
daily activity of macaques.


   General algorithm:
   In the case of experiments with rodents, the user selects the type of experiment,
specifies the path to the video file, the region of interest (ROI), and the frame at which
the experiment begins. Then the subroutine that analyzes the experiment is executed.
At the output, the user receives a file with the result of the video stream analysis.
   When analyzing the activity of rhesus monkeys, the user chooses between analyzing
a recording and working with live video, after which the program begins to analyze
the behavior of the monkey. The result of the program is a file with the activity data.
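   As an illustration of this input stage, a minimal C++/OpenCV sketch is given below;
the file name, start frame and ROI rectangle are illustrative assumptions, not the exact
values or interface used by the authors' software.

```cpp
#include <opencv2/opencv.hpp>

int main() {
    // Path to the video file and the frame where the experiment begins,
    // both supplied by the user (illustrative values here).
    cv::VideoCapture cap("open_field_experiment.avi");
    if (!cap.isOpened()) return 1;

    const int startFrame = 150;
    cap.set(cv::CAP_PROP_POS_FRAMES, startFrame);

    // Region of interest selected by the user (illustrative rectangle).
    const cv::Rect roi(100, 50, 640, 480);

    cv::Mat frame;
    while (cap.read(frame)) {
        cv::Mat view = frame(roi);   // analysis runs only on the cropped area
        // ... experiment-specific analysis subroutine processes 'view' here ...
    }
    return 0;
}
```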


1      Motion Search Algorithm - Background Subtraction Method

The background subtraction method was chosen because the background in this task is
stationary. The main steps of the method are presented in Figure 2.
At the pre-processing stage, the frame must be prepared for the detection of moving
objects: the frame size is reduced and the frame is converted from the RGB color space
to the YUV color space. After preprocessing, the background is subtracted pixel by
pixel from each frame [1]. The chosen method of background estimation is discussed
below.
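   A minimal sketch of this pre-processing step, assuming the frames arrive as 8-bit
BGR images (the OpenCV default); the 0.5 scale factor is an illustrative choice.

```cpp
#include <opencv2/opencv.hpp>

// Reduce the frame size and convert it from RGB/BGR to the YUV color space.
cv::Mat preprocess(const cv::Mat& bgrFrame) {
    cv::Mat small, yuv;
    cv::resize(bgrFrame, small, cv::Size(), 0.5, 0.5, cv::INTER_AREA);
    cv::cvtColor(small, yuv, cv::COLOR_BGR2YUV);
    return yuv;
}
```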




                     Fig. 2. Scheme of background subtraction method

   In the “mixture of Gaussian distributions” method, the background is modeled by the
first $B$ distributions of a mixture. A certain number of frames $T$ is taken, over which
the background is estimated, together with a constant $c_f$ that determines the minimum
share of the background. Consider the method at time $t$. $\mathcal{M}$ is a set of Gaussian
distributions, each of which is determined by two parameters: the variance $\sigma^2$ and
the vector of mean values $\bar{\mu}$. A weight $\pi$ is also calculated for each distribution,
indicating how well the distribution describes the background, subject to
$\sum_{i=1}^{M} \pi_i = 1$. It should be noted that $\sigma^2$ and $\pi$ are scalars describing a
distribution, while $\bar{\mu}$ is a vector; each pixel value $\bar{l}$ is compared against the
distributions. A distribution is considered close to a sample if the Mahalanobis distance

$$D(\bar{l}) = \frac{\bar{\delta}^{\mathrm{T}}\bar{\delta}}{\sigma^2}, \qquad \bar{\delta} = \bar{l} - \bar{\mu},$$

does not exceed three, where $\bar{l}$ and $\bar{\mu}$ are vectors.
   Let us sort the elements of $\mathcal{M}$ so that $\pi_k \ge \pi_{k+1}$. Next, $B$ is found using (1):

$$B = \arg\min_b \left( \sum_{i=1}^{b} \pi_i > c_f \right) \tag{1}$$

   The probability that a pixel belongs to the current background is calculated using only
the first $B$ distributions:

$$P(\bar{l}_t) = \sum_{m=1}^{B} \pi_m \, N(\bar{l}_t;\ \bar{\mu}_{m,t},\ \sigma^2_{m,t}) \tag{2}$$

$$N(\bar{l};\ \bar{\mu},\ \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{\bar{\delta}^{\mathrm{T}}\bar{\delta}}{2\sigma^2}} \tag{3}$$

If $P(\bar{l}_t(x)) > c_{thr}$, the point $\bar{l}_t(x)$ is assigned to the background, where $c_{thr}$ is a
predefined threshold.
   The following updates (4), (5) and (6) are then carried out for each distribution in
$\mathcal{M}$:

$$\pi_{t+1} = \pi_t + \alpha(o_t - \pi_t) - \alpha\, c_T \tag{4}$$

$$\bar{\mu}_{t+1} = \bar{\mu}_t + o_t \left(\frac{\alpha}{\pi_{t+1}}\right) \bar{\delta}_t \tag{5}$$

$$\sigma^2_{t+1} = \sigma^2_t + o_t \left(\frac{\alpha}{\pi_{t+1}}\right) \left(\bar{\delta}_t^{\mathrm{T}}\bar{\delta}_t - \sigma^2_t\right) \tag{6}$$

where $\alpha = \frac{1}{T}$. Here $o_t = 1$ only for the close distribution with the highest weight
$\pi_t$; in any other case $o_t = 0$. If there are no close distributions in $\mathcal{M}$, a new
distribution is added with $\pi_{t+1} = \alpha$, $\bar{\mu}_{t+1} = \bar{l}_t$ and $\sigma_{t+1} = \sigma_0$, where $\sigma_0$ is a
previously defined constant. Next, the weights $\pi_t$ are normalized and the calculations
are performed for the new set $\mathcal{M}$ [2].

2      The Search Algorithm for Geometric Objects in the Image

In this paper, to search for a grid in the Open Field labyrinth, a site in the Morris Water
Maze, and a labyrinth in the Elevated Cross-shaped Maze, the Hough transform is used.
   The Hough transform is a linear transform for detecting straight lines in an image. Its
key feature is that a line can be represented as a single point in a parameter space with
coordinates $m$ (the slope coefficient) and $b$ (the intersection point with the ordinate
axis).
   It is known that straight lines parallel to the ordinate axis have an infinite value of the
parameter $m$. Therefore, in the Hough transform a line is represented by the parameters
$r$ and $\theta$, where $r$ is the length of the normal dropped onto the line from the origin
and $\theta$ is the angle between the radius vector of the point on the line closest to the
origin and the abscissa axis. Therefore, the equation of the line in the Hough transform is:

$$y = \left(-\frac{\cos\theta}{\sin\theta}\right)x + \frac{r}{\sin\theta} \;\;\Rightarrow\;\; r = x\cos\theta + y\sin\theta \tag{7}$$

   Thus, each line in the original image in the XY plane can be associated with a point
with coordinates $(r, \theta)$ in the Hough space, which is unique if $\theta \in [0, \pi)$ and
$r \in \mathbb{R}$, or $\theta \in [0, 2\pi)$ and $r \ge 0$.
   An infinite number of straight lines can be drawn through any point in the plane; if
the location of the point is specified by $(x_0, y_0)$, all lines passing through it satisfy the
relation $r(\theta) = x_0\cos\theta + y_0\sin\theta$.
   This equation corresponds to a sinusoidal curve in the Hough space, which is unique
for the given point and uniquely determines it. If the curves corresponding to two points
intersect, their intersection point in the Hough space corresponds to the straight line in
the XOY plane passing through both points.
   A series of points that form a straight line define sinusoids that intersect at the pa-
rameter point for that line. Thus, the problem of detecting points lying on one straight
line can be reduced to the problem of detecting intersecting curves [3].
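   In OpenCV the standard Hough transform is available as cv::HoughLines, which
returns the $(r, \theta)$ pairs of Eq. (7). The sketch below shows how the grid or platform
lines might be detected; the Canny thresholds, accumulator resolution and vote threshold
are illustrative assumptions.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Detect straight lines in a grayscale image; each result is an (r, theta) pair.
std::vector<cv::Vec2f> detectLines(const cv::Mat& gray) {
    cv::Mat edges;
    cv::Canny(gray, edges, 50, 150);                   // edge map fed to the Hough transform
    std::vector<cv::Vec2f> lines;
    cv::HoughLines(edges, lines, 1, CV_PI / 180, 120); // 1 px and 1 degree resolution, 120 votes
    return lines;
}
```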


3       Center of Mass Search Algorithm

The search algorithm for the center of mass of the rodent in the image implements the
method of finding the center of the white spot in the image.

$$x_c = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n} j \, U(i, j)}{\sum_{i=1}^{m}\sum_{j=1}^{n} U(i, j)} \tag{8}$$

$$y_c = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n} i \, U(i, j)}{\sum_{i=1}^{m}\sum_{j=1}^{n} U(i, j)} \tag{9}$$

   where $U(i, j)$ is the brightness of the pixel and $m$, $n$ are the number of rows and
columns of the image.
   As a result, the coordinates of the brightness center of mass of the image are deter-
mined [7].
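   Equations (8) and (9) are the brightness-weighted centroid of the image, which in
OpenCV can be computed directly from image moments; a minimal sketch, assuming a
single-channel (grayscale or binarized) input image, is given below.

```cpp
#include <opencv2/opencv.hpp>

// Centre of mass of image brightness, as in Eqs. (8) and (9).
cv::Point2d brightnessCenter(const cv::Mat& gray) {
    cv::Moments m = cv::moments(gray, /*binaryImage=*/false);
    if (m.m00 == 0.0) return cv::Point2d(-1.0, -1.0);   // empty image: no centre of mass
    return cv::Point2d(m.m10 / m.m00, m.m01 / m.m00);   // (x_c, y_c)
}
```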


4       Software Development

Based on the analysis of the presented algorithms and the developed mathematical
equations, software was created to automate the processing of experimental results.


4.1     Subprogram “Open field”
At the input, the algorithm receives a video stream, starting from the frame at the be-
ginning of the experiment. The frame consists of the cut-out area of interest of the orig-
inal video file. The algorithm is divided into four stages: determining the background,
searching for the labyrinth grid, tracking the movement of the rodent and determining
the number of the square in which it is located.
   At the first stage, the background of the video is determined using the “mixture of
Gaussian distributions” algorithm, processing all frames in turn. The background is the
part of the scene where the rodent is completely absent or completely invisible. An
example of a detected background is shown in Figure 3.




Fig. 3. Background Detection. On the left is the original frame, on the right is the detected back-
ground

In step two, the stripes that form the maze grid are detected in the background image.
The search is performed using the Hough transform. The user determines when the
entire grid has been found. Then, from the detected grid, the quadrangular cells of the
maze are determined using the line-intersection equations. An example of a detected
grid is shown in Figure 4.
    At step three, the background is subtracted from each frame of the video stream.
The frame is then binarized with a threshold equal to the average background brightness,
and the image is filtered with an 11×11 Gaussian mask, which removes the remaining
noise. The result of background subtraction and filtering is shown in Figure 5.
    The next step is to find all the contours in the image. The largest contour and its
center of mass are determined; the detected point is taken as the center of mass of the
rodent. Then the predefined labyrinth cell containing this point is found, and the specific
cell and the center-of-mass point are drawn on the current raw frame. This procedure is
applied to all subsequent frames (a sketch of this processing chain is given below).
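    The sketch below illustrates this per-frame chain: background subtraction, thresh-
olding at the mean background brightness, 11×11 Gaussian filtering, contour search and
centroid extraction. The kernel size follows the text; the function name and the remaining
parameter choices are illustrative assumptions.

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Returns the centre of mass of the largest moving blob (the rodent),
// or (-1, -1) if nothing is found.
cv::Point2f rodentPosition(const cv::Mat& grayFrame, const cv::Mat& grayBackground) {
    cv::Mat diff, bin;
    cv::absdiff(grayFrame, grayBackground, diff);            // background subtraction
    double thr = cv::mean(grayBackground)[0];                // average background brightness
    cv::threshold(diff, bin, thr, 255, cv::THRESH_BINARY);   // binarization
    cv::GaussianBlur(bin, bin, cv::Size(11, 11), 0);         // 11x11 Gaussian filter

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (contours.empty()) return cv::Point2f(-1.f, -1.f);

    // The largest contour is assumed to be the rodent; its centroid is the tracked point.
    auto largest = std::max_element(contours.begin(), contours.end(),
        [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b) {
            return cv::contourArea(a) < cv::contourArea(b);
        });
    cv::Moments m = cv::moments(*largest);
    if (m.m00 == 0.0) return cv::Point2f(-1.f, -1.f);
    return cv::Point2f(static_cast<float>(m.m10 / m.m00),
                       static_cast<float>(m.m01 / m.m00));
}
```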
    At the fourth stage, the number of maze cells crossed by the rodent is determined,
and the trajectory length and speed are calculated. At the last stage, the calculated
parameters are written to a file, an example of which is shown in Figure 6 [6].




                                 Fig. 4. Grid Detection Interface




Fig. 5. Steps of processing a frame, from left to right: (1) after subtracting the background;
(2) after the Gaussian filter; (3) the original image with the center of mass of the object and
the selected location square drawn




                                     Fig. 6. File with results.


4.2     Subprogram "Morris Water Maze"
At the input, this algorithm receives the same prepared frames as the algorithm for the
“Open Field” maze. The two algorithms are very similar and their steps are almost
identical. The background is determined as in the “Open Field” experiment; an example
of background detection is shown in Figure 7.
    At the second stage, four intersecting lines are searched for; together they are con-
sidered to form the platform site. Site detection does not require user participation.
    At stage three, after determining the center of mass of the rodent, it is checked
whether the point falls inside the rectangle found earlier. If it does, the loop exits (a
sketch of this check is given below).
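    A minimal sketch of this check, assuming the four corners of the detected site are
available from the line-intersection step; the helper name is hypothetical.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// True if the rodent's centre of mass lies inside (or on the edge of) the detected site.
bool onPlatform(const std::vector<cv::Point2f>& siteCorners, const cv::Point2f& center) {
    return cv::pointPolygonTest(siteCorners, center, /*measureDist=*/false) >= 0;
}
```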




Fig. 7. Background detection in the Morris Water Maze. On the left is the original frame, on the
right is the detected background


   Stage four is performed similarly to the “Open Field” experiment: the distance trav-
elled, the speed of the rodent, and the time spent searching for the site are calculated.
The data are written to a file, after which the algorithm terminates (a sketch of these
calculations follows).
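   A minimal sketch of how the path length and average speed can be derived from the
tracked centre-of-mass points; the pixel-to-centimetre scale and frame rate are illustra-
tive parameters, and the structure name is hypothetical.

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

struct TrackStats {
    double pathLengthCm;   // total distance travelled
    double meanSpeedCmS;   // average speed over the track
};

TrackStats computeStats(const std::vector<cv::Point2f>& track,
                        double cmPerPixel, double fps) {
    double lengthPx = 0.0;
    for (size_t i = 1; i < track.size(); ++i)
        lengthPx += std::hypot(track[i].x - track[i - 1].x,
                               track[i].y - track[i - 1].y);   // sum of step lengths

    double lengthCm = lengthPx * cmPerPixel;
    double seconds  = track.size() > 1 ? (track.size() - 1) / fps : 0.0;
    double speed    = seconds > 0.0 ? lengthCm / seconds : 0.0;
    return {lengthCm, speed};
}
```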


5       Results Analysis

To analyze the results, the algorithms were tested on five experiment recordings: three
of the “Open Field” maze and two of the “Morris Water Maze”.
   Testing revealed the following disadvantages: displacement of the camera during the
experiment makes it impossible to analyze that experiment (Figure 8a); objects moving
between the camera and the maze lead to inaccuracies in determining the position of the
mouse (Figure 8b); errors in the detection of the labyrinth grid or the platform site occur
due to distortion in the original image (Figure 8c, d) [4].




Fig. 8. Examples of algorithm deficiencies: background image when the camera moves during
the experiment (a); an error in determining the position of the rodent (b); errors in determining
the grid of the “Open Field” labyrinth (c, d): the image of the maze with distortion (c) and the
error in determining the grid caused by the presence of distortion (d)


6       Future Developments

Further application of the obtained algorithms can be extended to other similar experi-
ments.
   One of the experiments conducted by the Pavlov Institute is an analysis of the cogni-
tive abilities and daily activity of rhesus monkeys in a cage. At the input, the program
receives a video recording or an online video stream that must be processed. The object
tracked in the studied videos is the rhesus monkey in the cage. Its daily activity, charac-
teristic movements and habits must be entered into a text file produced at the output of
the program. Figure 9a shows a freeze frame from the video, and Figure 9b shows the
process of processing the frame with the position of the monkey highlighted in the
frame [5].




Fig. 9. Freeze frame from the analyzed video (a), and a frame with the monkey highlighted
relative to the stationary background (b)

   The result of the program is a text file with the highlighted time periods of macaque
activity in the cage. The position of the monkey in the cage is also recorded in the file
to allow a more complete analysis of its behavior.
   As can be seen in Figure 9b, when the monkey is segmented, traces of the cage bars
remain on the contour. Since the entire contour is important during processing, the pro-
gram should be able to isolate the macaque completely. This problem is solved by ap-
plying the algorithms described earlier in this work.


7      Conclusion

As part of the work on the master's thesis, the subject area was analyzed, the object
tracking algorithm was adapted to detect the movement of a mammal in different exper-
imental environments, and software was developed based on the developed algorithms.
   In conclusion, it should be noted that the work was carried out in the framework of
cooperation between the Faculty of Applied Optics and the Pavlov Institute of Physiol-
ogy of the Russian Academy of Sciences.


References
 1. Ezhova K., Veremenko A., Baranova K.: Analysis of filtering algorithms and searching for
    objects on the video image during the "Morris water maze" and "open field" experiments.
    Proceedings of SPIE 11061, p. 110610F (2019).
 2. Lei C.: Research of methods and algorithms for detecting moving objects in a video stream.
    Youth Scientific and Technical Bulletin (2013).


 3. Sergeev I.A.: Investigation of background detection methods in thermal imaging footage.
    Vestnik TSU, pp. 97-104 (2010).
 4. Gonzalez R., Woods R.: Digital image processing. Technosphere, pp. 848-854 (2012).
 5. Yilmaz A.: Object tracking: a survey. ACM Computing Surveys (CSUR) 38(4), pp. 1-45
    (2006).
 6. OpenCV Homepage, https://opencv.org, last accessed 2020/06/10.
 7. Kalinichenko Yu.V.: On the issue of boundary detection by the Canny method. Collection
    of scientific works SWorld. Proceedings of the international scientific-practical conference
    "Modern directions of theoretical and applied research '2012" 3(1), pp. 11-17 (2012).