Proceedings of the 9th International Conference "Distributed Computing and Grid Technologies in Science and
                           Education" (GRID'2021), Dubna, Russia, July 5-9, 2021



          ALGORITHMS FOR BEHAVIORAL ANALYSIS OF
          LABORATORY ANIMALS IN RADIOBIOLOGICAL
                        RESEARCH
         A.V. Stadnik1,a, O.I. Streltsova1, D.V. Podgayny1, Yu.A. Butenko1,
         K.N. Golikova2, Yu.S. Severiukhin2, D.M. Utina2, A. G. Nartikov3
 1
     Meshcheryakov Laboratory of Information Technologies, JINR, 6 Joliot-Curie St., Dubna, 141980,
                                               Russia
            2
                Laboratory of Radiation Biology, JINR, 6 Joliot-Curie St., Dubna, 141980, Russia
 3
     Regional Scientific and Educational Mathematical Center, North Ossetian State University named after K.
           L. Khetagurov, 44-46 Vatutina, Vladikavkaz, 362025, North Ossetia - Alania, Russia

                                            E-mail: a stadnik@jinr.ru

As part of the development of an information system for radiobiological research, an algorithmic unit
has been developed for analyzing video recordings of the behavior of laboratory animals in order to
study its dependence on pathomorphological changes in the central nervous system after exposure to
ionizing radiation and other factors. The analysis of data characterizing the behavioral responses of a
laboratory animal is based on machine and deep learning algorithms and computer vision methods. To
fully automate the processing of data from behavioral experiments, it is necessary to develop several
groups of algorithms: algorithms for automated marking of the field of an experimental setup,
algorithms for tracking the position of a mouse in experimental setups of various types, and
algorithms for assessing the characteristic behavioral patterns of an animal that characterize its
emotional state. The paper proposes approaches and specific algorithms developed for use within the
information system for processing data from radiobiological research.

Keywords: behavioral analysis, computer vision, tracking, automated data processing.


                                     Alexey Stadnik, Oksana Streltsova, Dmitry Podgainy, Yuriy Butenko,
                                       Kristina Lyakhova, Yuri Severiukhin, Dina Utina, Aleksandr Nartikov



                                                                 Copyright © 2021 for this paper by its authors.
                        Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).







1. Introduction
        Correct and high-quality automation of part of the work in the experiments carried out at the
Laboratory of Radiation Biology (LRB) of JINR is extremely important in the context of the increasing
amount of experimental information received per unit of time, as well as for eliminating the human
factor and identifying patterns that appear on large data sets.

        Computer vision as a field offers a large number of tools with which one can successfully
solve some of the problems that arise. The data processing algorithms described here are part of the
information system for radiobiological research [1] being developed at the Meshcheryakov Laboratory
of Information Technologies of JINR.


2. Challenges ahead
        The tasks of the algorithmic block of the information system are:
    ●   Analysis of the layout of the experimental field
    ●   Tracking the position of the animal in the experiment
    ●   Classification and determination of the type of activity of the animal (grooming, freezing)
    ●   Segmentation of neurons on images of brain slices
    ●   Classification of neurons by type and belonging to the layer
        The heterogeneity of the data and of the algorithmic block is a problem that is being solved by
the creation of an information system. The information system will simplify the statistical analysis of
behavioral patterns and the search for correlations with the results of pathomorphological analysis.
        This work considers the development of algorithms for automated marking of the experimental
field and for tracking the behavior of the animal for subsequent analysis.
        Currently, three experimental installations are used:
    ●  Open field
    ●  T-maze
    ●  Morris water maze
       Each of them requires a separate algorithmic study to automate the determination of its
parameters based on the characteristics of a particular experimental installation (fig. 1).




                                          Figure 1. Experimental setups

        For automated marking of all three installations, a combination of classical computer vision
methods is used (a sketch of the corresponding OpenCV calls is given after the list), including:
    ● the Hough transform for finding lines and circles, including its probabilistic implementation,
      which is well suited for the problem of finding short segments
    ● filtering using a specially selected two-dimensional filter and a Gaussian filter
    ● threshold transformation and its adaptive version with a small sliding window
    ● background estimate represented as a multi-Gaussian model
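        As an illustration, a minimal sketch (in Python) of how these building blocks map onto
OpenCV calls is given below; the file name and all parameter values are illustrative assumptions, not
the settings used in the information system.

import cv2
import numpy as np

cap = cv2.VideoCapture("experiment.avi")          # hypothetical recording of an experiment
ok, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Gaussian smoothing before edge and line detection
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# threshold transformation: adaptive version with a small sliding window
binary = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 15, 2)

# probabilistic Hough transform, well suited for short segments
edges = cv2.Canny(blurred, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                        minLineLength=30, maxLineGap=5)

# Hough transform for circles
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=100,
                           param1=120, param2=40, minRadius=50, maxRadius=400)

# background estimate as a mixture of Gaussians (MOG2)
bg_model = cv2.createBackgroundSubtractorMOG2(history=300, detectShadows=False)
fg_mask = bg_model.apply(frame)
cap.release()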
        The input data in the case of behavioral analysis of a laboratory animal is a video file with the
recording of an experiment with the animal placed in one of the corresponding installations. Video
data shot with cameras that are not specialized machine vision cameras shows strong variability of
images from frame to frame, caused by the image-correcting algorithms built into the cameras to
improve the perception of the image by the human eye. This specificity of working with video data
also manifested itself in the experiments carried out at LRB JINR.
        As a result, such classical algorithms as line and circle detection and tracking of a moving
animal are rather unstable in operation and give slightly different results from frame to frame (fig. 2).
For the problem of automated marking this circumstance is quite critical; therefore, when constructing
a solution, two possibilities to overcome this problem were investigated.




                                     Figure 2. Single frame detection result

        Both possibilities lie in the plane of statistical processing of the results and differ only in the
object that is subjected to statistical analysis. For marking, a set of the first 30 frames of the original
video clip with the experiment recording is used. This makes it possible to eliminate the influence of
such random factors as the presence of the experimenter in the first few frames of the video, small
fluctuations in lighting, and noise introduced by the corrective algorithms of the camera itself.


3. Marking Algorithms
3.1. Open Field
         In the first approach, the lines and circles corresponding to the layout of the experiment
detected on each of the analyzed frames are processed according to their frequency of occurrence, and
clusters of close lines and circles with the maximum frequency of appearance across the considered
frames correspond to the marking lines. When forming clusters, the proximity criterion for lines is a
combined measure of closeness in the angle of inclination and in the point of intersection with the
abscissa axis; for circles, it is a combination of the Euclidean distance between the centers of the
circles and the modulus of the difference in radii. This approach made it possible to fairly consistently
determine the maximum circle of the open-field setup and the main marking lines.
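         A minimal sketch of this frequency-based clustering is given below; the greedy clustering
scheme and the proximity thresholds are illustrative assumptions rather than the exact procedure used
in the system.

import numpy as np

def line_params(x1, y1, x2, y2):
    """Describe a detected segment by its inclination angle and x-axis intercept."""
    angle = np.arctan2(y2 - y1, x2 - x1)
    # intersection of the supporting line with the abscissa axis
    # (near-horizontal segments would need separate handling)
    x0 = x1 - y1 * (x2 - x1) / (y2 - y1) if y2 != y1 else np.inf
    return angle, x0

def cluster_lines(segments, max_dangle=0.05, max_dx0=15.0):
    """Greedily cluster segments that are close in angle and x-intercept; clusters
    with the most members over the 30 analyzed frames give the marking lines."""
    clusters = []                                  # each cluster is a list of (angle, x0)
    for seg in segments:
        a, x0 = line_params(*seg)
        for c in clusters:
            ca = np.mean([p[0] for p in c])
            cx0 = np.mean([p[1] for p in c])
            if abs(a - ca) < max_dangle and abs(x0 - cx0) < max_dx0:
                c.append((a, x0))
                break
        else:
            clusters.append([(a, x0)])
    return sorted(clusters, key=len, reverse=True)

def circle_distance(c1, c2):
    """Proximity of two circles: Euclidean distance of centers plus |r1 - r2|."""
    (x1, y1, r1), (x2, y2, r2) = c1, c2
    return np.hypot(x1 - x2, y1 - y2) + abs(r1 - r2)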
         The second approach uses entire frames as the data for statistical processing. Using either
frame averaging or the background constructed with a multi-Gaussian model, after 30 frames we
obtain an image that contains only a small amount of noise. The detection of markings in such an
image is more stable.
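         A minimal sketch of obtaining such an image is given below; a per-pixel median over the first
30 frames is used here as a simple stand-in for the averaging / multi-Gaussian background estimate,
and the file name is illustrative.

import cv2
import numpy as np

cap = cv2.VideoCapture("open_field.avi")           # hypothetical recording
frames = []
for _ in range(30):
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
cap.release()

# the per-pixel median suppresses the experimenter appearing in a few frames,
# small lighting fluctuations and camera-side correction noise
mean_image = np.median(np.stack(frames), axis=0).astype(np.uint8)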
         The final solution combines both approaches: determining the maximum radius of the open
field, creating a mask for the mean image that leaves only the field itself, threshold filtering, then
detecting the main lines using the Hough transform and detecting the circles by constructing a
histogram of the distances of pixels from the known center. As a control check, the intersection of the
main marking lines should be close to the center of the maximum circle.
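         A sketch of the circle-detection step of this combined procedure is given below, assuming that
the mean image, the field center (cx, cy) and the maximum radius r_max have already been
determined; Otsu thresholding and the number of peaks are illustrative choices.

import cv2
import numpy as np

def detect_marking_circles(mean_image, cx, cy, r_max, n_circles=3):
    """Find concentric marking circles as peaks of the histogram of distances of
    marking pixels from the known open-field center."""
    # mask that leaves only the field itself
    mask = np.zeros_like(mean_image)
    cv2.circle(mask, (int(cx), int(cy)), int(r_max), 255, -1)
    field = cv2.bitwise_and(mean_image, mask)

    # threshold filtering: marking lines are darker than the arena floor
    _, marks = cv2.threshold(field, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    marks = cv2.bitwise_and(marks, mask)

    # histogram of pixel distances from the center; marking radii appear as peaks
    ys, xs = np.nonzero(marks)
    dist = np.hypot(xs - cx, ys - cy)
    hist, edges = np.histogram(dist, bins=int(r_max))
    peak_bins = np.argsort(hist)[-n_circles:]
    return sorted((edges[b] + edges[b + 1]) / 2 for b in peak_bins)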
3.2. T-maze
         When building the T-maze markup, no circles are present in the frame, so the final averaged
background image is segmented into 2 classes: the background of the labyrinth and the background of
the laboratory room. The quality of this segmentation does not have to be high, since it is only used to
filter out false lines among those detected by the Hough transform. The Hough transform gives the
boundaries of the maze, which makes it possible to find the corresponding segments of the installation
(fig. 3).
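         A minimal sketch of this filtering step is given below; Otsu thresholding is used as a simple
stand-in for the rough two-class segmentation, and the Hough parameters are illustrative.

import cv2
import numpy as np

def tmaze_boundary_lines(mean_image):
    """Keep only the Hough lines whose endpoints lie near the boundary of the
    maze / room segmentation of the averaged background image."""
    # rough 2-class segmentation of the background (maze vs. laboratory room)
    _, maze_mask = cv2.threshold(mean_image, 0, 255,
                                 cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # thin band along the boundary of the maze region
    boundary = cv2.morphologyEx(maze_mask, cv2.MORPH_GRADIENT,
                                np.ones((5, 5), np.uint8))

    edges = cv2.Canny(mean_image, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=10)
    kept = []
    for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
        if boundary[y1, x1] > 0 and boundary[y2, x2] > 0:
            kept.append((x1, y1, x2, y2))       # line follows the maze boundary
    return kept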
3.3. Morris Water Maze
        Due to the special features of the device, this setup gives a relatively low-contrast image;
therefore, the first stage of the marking algorithm is the application of a two-dimensional filter whose
kernel is a radially symmetric Mexican hat. Such a filter increases the contrast of the image so that the
result becomes suitable for adaptive threshold filtering, which finds both the boundary of the pool
circle and a small area in the middle of the pool, whose position must be known to mark the
experiment (fig. 3).
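        A minimal sketch of this contrast-enhancement step is given below; the kernel size, sigma and
threshold parameters are illustrative assumptions.

import cv2
import numpy as np

def mexican_hat_kernel(size=21, sigma=3.0):
    """Radially symmetric Mexican-hat-shaped kernel (central peak, negative ring)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = (xx ** 2 + yy ** 2) / (2.0 * sigma ** 2)
    kernel = (1.0 - r2) * np.exp(-r2)
    return (kernel - kernel.mean()).astype(np.float32)   # zero mean emphasizes edges

def mark_water_maze(mean_image):
    # contrast enhancement of the low-contrast pool image
    enhanced = cv2.filter2D(mean_image, -1, mexican_hat_kernel())
    # adaptive threshold reveals the pool boundary and the small central area
    return cv2.adaptiveThreshold(enhanced, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY_INV, 31, 5)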




                        Figure 3. T-maze, Morris Water Maze markup procedure results


4. Tracking a laboratory animal
        To fully automate the processing of the experimental results, coupled with high-quality
marking, it is necessary to know the position of the animal in the installation at each moment of time.
Knowing the position of the animal in the frame and the marking of the installation relative to the
frame of the video fragment, one can determine the position of the animal within the experimental
installation. The tracking algorithm allows one to follow the position of a moving object in the frame
starting from a certain initial position.
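        A minimal sketch of this frame-to-setup conversion for the open field is given below; the
physical arena radius is a hypothetical value used only for illustration.

import numpy as np

def to_setup_coords(px, py, cx, cy, r_max, field_radius_cm=30.0):
    """Convert the animal position (px, py) in frame pixels into coordinates relative
    to the open-field center, using the marking (center and maximum radius) found
    earlier; field_radius_cm is a hypothetical physical radius of the arena."""
    scale = field_radius_cm / r_max
    x_cm = (px - cx) * scale
    y_cm = (py - cy) * scale
    return x_cm, y_cm, np.hypot(x_cm, y_cm)     # position and distance from the center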
        The difficulties of this approach are the sensitivity of the tracking algorithms, the varying
video quality, and the dependence on the initial position of the object in the frame, which must be
determined separately. With a low-contrast object, tracking algorithms often lose the object, and the
Morris water maze is exactly such a case: both the laboratory animal itself and the land area hardly
contrast with the pool water. Tracking disruptions in this situation mean inaccuracies introduced into
the measurements of the animal's behavior and should be avoided if possible.
        In this work, the object tracking algorithms implemented in the OpenCV [2] library were
tested. The tracking algorithm based on a correlation filter, the CSRT tracker, turned out to be the
most successful in the task of tracking the trajectory of a laboratory mouse. GOTURN [3], based on
the neural network approach, and MOSSE [4], also based on a correlation filter, performed somewhat
worse in this task. However, in the course of the study, a combination of two algorithms turned out to
be the most stable: CSRT [5] and object tracking based on object-background segmentation. In this
case, the background is estimated using a multi-Gaussian model over several hundred frames, from
300 to 800.
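        A minimal sketch of such a combination is given below; the initial bounding box, the file name
and the re-initialization rule (falling back to the largest foreground blob when the tracker fails) are
illustrative assumptions.

import cv2

cap = cv2.VideoCapture("water_maze.avi")            # hypothetical recording
ok, frame = cap.read()

bbox = (320, 240, 40, 40)                           # initial position, determined separately
tracker = cv2.TrackerCSRT_create()                  # cv2.legacy.TrackerCSRT_create() in some builds
tracker.init(frame, bbox)

# background estimated as a mixture of Gaussians over several hundred frames
bg_model = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

track = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)
    fg_mask = bg_model.apply(frame)
    if found:
        x, y, w, h = map(int, bbox)
        track.append((x + w // 2, y + h // 2))
    else:
        # tracker lost the animal: re-initialize from the largest foreground blob
        contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
            tracker = cv2.TrackerCSRT_create()
            tracker.init(frame, (x, y, w, h))
            track.append((x + w // 2, y + h // 2))
cap.release()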
        The behavioral aspects of the animal under study can be divided into two large parts:
information about the activity associated with movement, the active study of the territory and the
speed of movement of the animal within the experimental setup, and the characteristic actions of the
animal. The construction of a high-quality track of the animal provides an essential part of this
information and already allows one to draw a conclusion about the level of the animal's behavioral
activity. Figure 4 shows the visual representation of the tracking result in the form of the track itself
and a heat map.
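        A minimal sketch of computing a heat map and basic movement statistics from the track is
given below; the frame rate, bin count and pixel units are illustrative assumptions.

import numpy as np

def track_statistics(track, frame_w, frame_h, fps=25.0, bins=32):
    """Occupancy heat map and total path length / mean speed (in pixels and
    pixels per second) from a list of (x, y) positions of the animal."""
    xs = np.array([p[0] for p in track], dtype=float)
    ys = np.array([p[1] for p in track], dtype=float)
    heat_map, _, _ = np.histogram2d(xs, ys, bins=bins,
                                    range=[[0, frame_w], [0, frame_h]])
    path_length = np.hypot(np.diff(xs), np.diff(ys)).sum()
    mean_speed = path_length / (len(track) / fps) if len(track) > 1 else 0.0
    return heat_map, path_length, mean_speed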








                          Figure 4. Track heat map and track itself in open field setup


5. Conclusion
        The part of the information system related to the automation of behavioral analysis, including
automated marking of the experimental field, tracking of the laboratory animal, building heat maps
and obtaining statistical information about the characteristics of animal behavior, has been developed
and is at the stage of integration into the information system [6]. It is planned to use the resources of
the heterogeneous platform "HybriLIT" [7], which is part of the Multifunctional Information and
Computing Complex (MICC) of the Meshcheryakov Laboratory of Information Technologies of JINR,
for the functioning of the information and computing system for processing experimental data.


References
[1] Butenko Yu.A., Marov D.M., Nechaevskiy A.V., Podgainy D.V. Development of a service for
conducting radiobiological studies on the HybriLIT platform // CEUR Workshop Proceedings, 2020,
2743, pp. 26–33
[2] Open Source Computer Vision Library, URL: https://opencv.org/ (accessed on: 01.04.2021).
[3] Held D., Thrun S., Savarese S. Learning to Track at 100 FPS with Deep Regression Networks.
Available at: https://arxiv.org/pdf/1604.01802.pdf
[4] Bolme D.S., Beveridge J.R., Draper B.A., Lui Y.M. Visual Object Tracking using Adaptive
Correlation Filters. Available at: https://www.cs.colostate.edu/~draper/papers/bolme_cvpr10.pdf
[5] Farkhodov K., Lee S.-H., Kwon K.-R. Object Tracking using CSRT Tracker and RCNN.
Available at: https://www.scitepress.org/Papers/2020/91838/91838.pdf
[6] Kolesnikova I., Nechaevskiy A., Podgainy D., Stadnik A., Streltsov A., Streltsova O. Information
System for Radiobiological Studies // Proceedings of the Workshop on Information Systems for the
Radiation Biology Tasks, Dubna, Russia, June 18, 2020, pp. 1-6. http://ceur-ws.org/Vol-2743/1-6-
paper-1.pdf
[7] Heterogeneous platform “HybriLIT”, URL: http://hlit.jinr.ru/ (accessed on: 01.10.2021).



