=Paper=
{{Paper
|id=Vol-2604/paper76
|storemode=property
|title=The Development of an Application for Microparticle Counting Using a Neural Network
|pdfUrl=https://ceur-ws.org/Vol-2604/paper76.pdf
|volume=Vol-2604
|authors=Ganna Khoroshun,Ruslan Luniakin,Andrii Riazantsev,Oleksandr Ryazantsev,Tetiana Skurydina,Halyna Tatarchenko
|dblpUrl=https://dblp.org/rec/conf/colins/KhoroshunLRRST20
}}
==The Development of an Application for Microparticle Counting Using a Neural Network==
Ganna Khoroshun[0000-0002-1272-1222], Ruslan Luniakin, Andrii Riazantsev[0000-0002-1431-5682], Oleksandr Ryazantsev, Tetiana Skurydina, Halyna Tatarchenko[0000-0003-4685-0337]
Volodymyr Dahl East Ukrainian National University, Severodonetsk, Ukraine
an_khor@i.ua, lun1998lun@gmail.com, drew.ryazancev@gmail.com, a_ryazantsev@ukr.net, tg.skurydina@gmail.com, tatarchenkogalina@gmail.com
Abstract. A program application for the automatic, intelligent registration of microparticle movement is an important task in the development of a machine vision system. The video file with the particle movement is cut into frames. Every frame is passed to a preliminary analysis based on its signal-to-noise value, contrast measurement and statistical analysis. This technique allows us to accept the frame as valid and to proceed to the microparticle counting process. A program product is developed which provides selection of the area with particles, modelling of the background and counting of the particles in the field. Three processes occur with particles in the experiment: guiding, fixing and flicking. Recognizing which of these processes the observed microparticle is involved in affects the accuracy of the counting. It is suggested to solve this problem with a neural network.
Keywords. Program Application, Machine Vision System, Microparticle
Counting, Neural Network.
1 Introduction
A great scope of problems devoted to image registration can be addressed with the help of an automated measuring-information optical system, or Machine Vision System (MVS) [1]. In our consideration the MVS contains: an optical source, which is a laser, an incoherent light source or a combination of several sources; optical elements, i.e. lenses, filters and other devices for phase, amplitude and polarization modulation; a CCD or CMOS camera, which receives optical radiation and converts it into electrical signals; a computer for the data visualization; software for data processing and analysis; and a protocol suite with algorithms that tell the software and hardware what to do, together with a final decision on the system's working ability (Figure 1). The system can be called an adaptive system with feedback, due to which the improvement of the system by an iterative method can be realized. The quality of the image is influenced by fluctuations, instabilities and aberrations of the system, whose total, according to the standard, should not exceed 10 %.
There are many practical tasks [2-6], varying from the automation of production control to the construction of robotic cars, that are directly related to the task of registering object movement in a video file. Different strategies for processing the frames of the
video can be used to solve it. The software RADiCAL [7] provides 3D modelling of human movement by processing video with the help of artificial intelligence and motion science.
Let us consider the registration of moving objects in the micro world. Some distributions of light allow each atom, or a group of atoms, to be controlled individually. Developing the data-object design of the intensity pattern for controlling micro- and nanoparticles was the aim of some previous research [8, 9]. A model of service provision for the use of an optical laboratory under individual customer needs is represented in [10].
Fig. 1. Common view of the machine vision system for optical research realization
Here, we want to focus on processing the video file with the goal of counting the number of moving micro-objects. Several important stages are needed, including statistical analysis of a single frame, choosing the criteria of optical image quality and making a decision about the finalizing step. The simplest final decision is to stop the recording if the image does not satisfy the criteria. Another important problem is to provide high accuracy of the microparticle counting. Three processes occur with particles in the experiment: guiding, fixing and flicking. Incorrect recognition of the process leads to an incorrect count of the number of particles. So, the process-recognition task should be solved in a more efficient way, for which intelligent systems are known.
2 The Image Quality and Processing
The video file of the microparticle guiding is recorded by the camera. A single shot (Fig. 2) requires some image processing, which is shown in the information model with two parts. The first part (Fig. 3a) contains the preliminary and statistical analyses, with the task of drawing a conclusion about the image quality and deciding whether to use this frame or to take another snapshot. Cutting of the image in Fig. 2b is realized with some requirements, which will be discussed later. The next step (Fig. 3b) of processing the image of particle guiding is segmentation of the picture and observation of the particle movement in 2D space.
Fig. 2. a, b. One frame from the video file, recorded by a CCD camera, showing a part of the laser setup with the chamber at different initial conditions ready for microparticle guiding
Let us consider the components of the preliminary analysis parameters. The quality of a typical image, as in Fig. 2a, b, can be described by the two most common parameters: the signal-to-noise ratio (SNR) and the contrast ratio (CR). SNR is the measured ratio of the level of the real signal to the level of the background noise. For the estimation of the image quality, the Rose criterion [11] is used for the SNR and the Rayleigh rule [12] for the resolution of two points by the CR.
These criteria are suitable and available for our physical task, and their attributes possess the features of relevance, objectivity, measurability and completeness according to the standards [13].
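As an illustration only, the following NumPy sketch shows one way these two quality measures could be computed for a single frame; the paper does not state its exact formulas, so a common mean-signal over background-noise-deviation convention for SNR and the normalized (Michelson) form of the contrast are assumed, and the region names are hypothetical.

```python
import numpy as np

def estimate_snr(signal_region: np.ndarray, background_region: np.ndarray) -> float:
    """SNR estimated as the mean signal level divided by the standard
    deviation of the background noise (an assumed, common convention)."""
    return float(signal_region.mean() / background_region.std())

def contrast_ratio(image: np.ndarray) -> float:
    """Contrast computed from the brightest and darkest pixel values.
    The normalized (Michelson) form is assumed, since it stays in [0, 1]
    like the value of about 0.937 reported later in the paper."""
    i_max, i_min = float(image.max()), float(image.min())
    return (i_max - i_min) / (i_max + i_min)
```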
The first step of optical image processing is obtaining statistical parameters for the experimental and theoretical data: the arithmetic mean x̄, the mode and the standard deviation σ. In our previous paper [13] we have shown a detailed statistical analysis of the optical image.
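A minimal sketch of this step, assuming 8-bit frames stored as NumPy arrays; the function and key names are illustrative, not taken from the paper.

```python
import numpy as np

def frame_statistics(image: np.ndarray) -> dict:
    """Arithmetic mean, mode, standard deviation and median of the pixel
    intensities of one 8-bit frame (the quantities used in Table 1)."""
    pixels = image.ravel().astype(np.uint8)
    histogram = np.bincount(pixels, minlength=256)  # intensity levels 0..255
    return {
        "mean": float(pixels.mean()),
        "mode": int(histogram.argmax()),            # most frequent intensity
        "std": float(pixels.std()),
        "median": float(np.median(pixels)),
    }
```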
The main stage includes segmentation of the picture and observation of the particle movement. Attention is paid to the questions of noise modelling and background deviation. Background modeling [14] is often used in various systems, such as video surveillance, optical motion capture and multimedia, to model the background and then detect moving objects in a scene.
Fig. 3. The information model of the image processing. The first part (a) is devoted to the preliminary analysis. The second part (b) describes the analysis of the background deviation and the revealing of the moving microparticles.
We have used the background-modeling approach for the case when the camera is stationary; that is, we have a background that changes little, so we can build its model. We consider all points of the image that deviate significantly from the background model to be foreground objects. Thus, we can solve the problem of detection and maintenance of the object.
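The idea can be illustrated by the following minimal running-average sketch for a stationary camera; it is not the implementation used in the application (AForge.NET), and the update rate and deviation threshold are hypothetical parameters.

```python
import numpy as np

class BackgroundModel:
    """Toy background model for a stationary camera: pixels that deviate
    from the slowly updated background by more than `threshold` intensity
    levels are treated as foreground (moving particles)."""

    def __init__(self, first_frame: np.ndarray, alpha: float = 0.05,
                 threshold: float = 25.0):
        self.background = first_frame.astype(np.float32)
        self.alpha = alpha          # update rate of the background model
        self.threshold = threshold  # deviation counted as a moving object

    def apply(self, frame: np.ndarray) -> np.ndarray:
        frame = frame.astype(np.float32)
        foreground = np.abs(frame - self.background) > self.threshold
        # Update the model only where the scene is considered background
        self.background[~foreground] = (
            (1 - self.alpha) * self.background[~foreground]
            + self.alpha * frame[~foreground]
        )
        return foreground
```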
3 Image Analysis
We quantized the received intensity data into the range from 0 to 255 with step 1, which corresponds to the features of the CCD camera. More details about this operation for the optical image are given in [13]. It is worth mentioning that, due to quantization, the obtained results have an error of ±0.5.
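For illustration, a one-function sketch of this quantization step, assuming rounding to the nearest integer level:

```python
import numpy as np

def quantize_intensity(values: np.ndarray) -> np.ndarray:
    """Quantize intensity data into the range 0-255 with step 1, matching
    the CCD camera; rounding introduces the ±0.5 error mentioned above."""
    return np.clip(np.round(values), 0, 255).astype(np.uint8)
```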
We consider the situation with two different initial conditions for recording the microparticle movement, which are shown in Fig. 2a and 2b, respectively. The preliminary stage of the image processing reveals the following important information: the SNR values are equal to 35.79 and 14.64, which means good quality according to the Rose criterion. The contrast is defined as the ratio of the brightest color to the darkest color. The contrast value for both experimental optical images is about 0.937, which means sufficiently high quality of the images.
The initial conditions for recording the microparticle movement can be studied by statistical analysis. The picture in Fig. 2a, in comparison with Fig. 2b, has more light in the cell and more pixels with high intensity. So, the value of the median is much bigger for the more lighted case, which can be seen clearly from the data in Table 1. The variation curves for the images in Fig. 2a and Fig. 2b are essentially different from each other (Fig. 4). They allow us to easily recognize which recording case we are dealing with, because the curves lie in different parts of the graph. So, we can provide the statistical analysis of both cases simultaneously.
Table 1. Statistical parameters for the experimental images represented in Fig. 2a and Fig. 2b.

Statistical parameter | Initial frame in Fig. 2a | Initial frame in Fig. 2b
Mean                  | 121.38                   | 36.14
Standard deviation    | 17.81                    | 12.00
Median                | 118                      | 34
Fig. 4. Variation curves of the experimental images for Fig. 2a and Fig. 2b are marked by dotted and solid black curves, respectively
4 Program Application for Registration of a Particle Movement
Video is a sequence of frames, each of which is displayed at a frequency high enough for the human eye to perceive the sequence as continuous. Thus, the contents of two sequential frames are closely related. In this case, adjacent frames can be used to track the position and condition of an object. In addition, it is obvious that all image-processing methods can be applied to individual frames. There are three basic steps in video analytics:
1. identification of the required objects;
2. tracking the change in position and status of these objects between frames;
3. analysis of object behavior.
It should be noted that object detection and tracking are two very closely related processes, as tracking often begins with the detection of the required objects, and detecting the objects again in the next sequence of frames is necessary to verify the accuracy of tracking.
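The relation between detection and tracking can be sketched as a simple nearest-neighbour association of detections between frames; this illustration is not the tracking algorithm of the application, and the distance threshold is a hypothetical parameter.

```python
import numpy as np

def link_detections(previous: np.ndarray, current: np.ndarray,
                    max_distance: float = 5.0):
    """Associate particle detections of the current frame with those of the
    previous frame (tracking step 2 above). `previous` and `current` hold
    (x, y) centroids; returns matched index pairs and indices of detections
    that appear to be new objects."""
    matches, new_objects, used = [], [], set()
    for j, point in enumerate(current):
        if len(previous) == 0:
            new_objects.append(j)
            continue
        distances = np.linalg.norm(previous - point, axis=1)
        i = int(distances.argmin())
        if distances[i] <= max_distance and i not in used:
            matches.append((i, j))   # same object seen again
            used.add(i)
        else:
            new_objects.append(j)    # not present in the previous frame
    return matches, new_objects
```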
By the information process we mean the processes of obtaining, storing, transforming, presenting and transmitting information on the registration of a number of particles of a certain size. The registration of particles occurs in three ways: particle flicking, when it falls down and is illuminated by the laser radiation; guiding of the particles along the laser beam; and fixing of the particles on the walls of the cell. A light pulse is detected by a photodetector, in our case a camera, which records the passage of a sample through the beam. Therefore, there is a need to create software for processing the data obtained from the photodetector. So, the aim of this paper is to develop a program application for the automatic registration of microparticle movement under the influence of laser radiation.
The developed program application implements the main stage of the frame processing and counting of the moving microparticles. The AForge.NET library, which has ready-made implementations of motion-detection algorithms, was selected for the implementation of the task. A convenient interface was developed for particle-movement registration (Fig. 5). It includes the following controls:
- File list
- Download file
- Particle size
- Algorithm
- Comparison with the previous frame
- Background modeling
- Comparison with the first frame
- Noise reduction
- Clear capture area
- The total number of particles
5 Program Operation
To get started, we need to select the desired video file using the "Download File" button. After selecting the video file, the first video frame is displayed in the interface of the program (Fig. 5). Then we select the areas of interest with rectangles. There is an option to choose the particle size in the range of 1-10 pixels. In our research we have chosen 2-3 pixels.
In order to filter out unnecessary sections of the frame, we can select separate areas of motion search with the mouse; they are displayed in the frame only when video processing is not running. We can then clear the selection list and set new areas.
Fig. 5. Interface view of the developed application program after uploading the video file.
There are three search algorithms which can be chosen for video processing:
1. Comparison with the previous frame.
2. Background modeling.
3. Comparison with the first frame.
The algorithm of comparison with the previous frame is selected by default. To start motion registration and particle counting, it is necessary to press "Start". The program also provides the ability to reduce the noise in the video.
The areas where movement was observed are highlighted by red rectangles, as shown in Figure 6. The program calculates the number of particles, which is equal to 30 for the considered case.
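To make the default algorithm concrete, here is a minimal Python/SciPy sketch of counting in the "comparison with the previous frame" style: threshold the frame difference and count connected regions whose size falls into the selected particle-size range. It is only an illustration of the idea, not the AForge.NET code used by the application, and the difference threshold is an assumed value.

```python
import numpy as np
from scipy import ndimage

def count_moving_particles(prev_frame: np.ndarray, frame: np.ndarray,
                           threshold: int = 25, min_size: int = 2,
                           max_size: int = 10) -> int:
    """Count moving particles by comparison with the previous frame:
    pixels whose intensity changed by more than `threshold` form the
    motion mask, and connected regions of min_size..max_size pixels
    (the chosen particle-size range) are counted as particles."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    motion_mask = diff > threshold
    labels, n_regions = ndimage.label(motion_mask)
    if n_regions == 0:
        return 0
    sizes = ndimage.sum(motion_mask, labels, index=range(1, n_regions + 1))
    return int(np.sum((sizes >= min_size) & (sizes <= max_size)))
```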
After finishing the video processing or pressing the "Stop" button, we can resume searching or download another video. The particle counter will then be reset.
Fig. 6. Video frame with areas where the microparticle movement is registered.
Let us consider the situation which can be observed when the quality of the image is low and the criteria are not satisfied. According to our choice, two characteristics are defined for every image. If the signal-to-noise ratio or the contrast ratio has a value outside the criterion level for satisfactory image quality, the program stops the recording of the video file and displays the proper text on the computer screen. The standard if-then construction provides this result. The question is whether the lost data are important for the research. An alternative decision scheme for the system can be realized by the employment of intelligent systems. The structure of an intelligent system includes a knowledge base, which can be renewed online and reflects preferences for deciding whether to continue or stop the video recording at a given moment.
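The if-then quality gate described above can be written, in sketch form, as follows; the criterion values are illustrative assumptions, not those used by the actual program.

```python
# Minimal sketch of the quality gate; threshold values are assumed.
SNR_CRITERION = 5.0        # Rose criterion for a reliably detectable signal
CONTRAST_CRITERION = 0.5   # assumed minimum acceptable contrast ratio

def check_frame_quality(snr_value: float, contrast_value: float) -> bool:
    """Return False (and report it) when the frame does not satisfy the
    quality criteria, in which case the recording would be stopped."""
    if snr_value < SNR_CRITERION or contrast_value < CONTRAST_CRITERION:
        print("Image quality is too low: recording stopped.")
        return False
    return True
```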
A very important problem in microparticle counting is correct counting of the objects. Three processes occur with particles in the experiment: guiding, fixing and flicking. A particle guided by the laser light changes its position in the horizontal direction, as shown in Figure 7a. The object can be fixed to the cell's wall (Figure 7b) and be visible for a long time of the registration. The last variant of microparticle behavior is to reveal itself at the moment of crossing the laser beam, which can be observed in just one frame (Figure 7c). A guided particle can be counted twice, as if it were two particles of the flicking process. A fixed particle can be counted several times if the illumination from the laser light is interrupted. Because of this, the accuracy of the object counting decreases. So, we have the problem of automatic recognition of the process in which a particle participates.
Fig. 7. Frames at the previous, current and next moments from one video file are shown in columns. The three processes with particles in the experiment, guiding (a), fixing (b) and flicking (c), are shown in rows.
The task is close to the problem of an automated human-behavior recognition system, which was solved by the method of a convolutional neural network [15]. The logistic regression of the process can be described by the vectors X and Y. The video is divided into frames, their number being A. Every frame consists of N = n×n pixels. The matrix called "previous" is a stack of N·(A-2) pixels from the first frame to frame (A-2). The matrix called "current" is made from the stack of N·(A-2) pixels from the second frame to frame (A-1), and the matrix "next" possesses the data of the stack of N·(A-2) pixels from the third frame to frame A. So, an input feature vector X has a length of 3N(A-2) pixels. For the binary classification, the value of the vector Y, 1 or 0, is sought separately for every microparticle process: guiding, fixing and flicking. For the case of m training examples, the matrix shapes X.shape = (3N(A-2), m) and Y.shape = (1, m) are represented for every process. The input x^(i) is the value of the i-th set of three image stacks, y^(i) is the output, and ŷ^(i) is the estimation of the y^(i) value. The prediction of the logistic regression model is ŷ^(i) = σ(w^T·x^(i) + b), and the function which measures how well the model performs on the entire training set with parameters w and b can be written in the form J(w, b) = (1/m)·Σ_{i=1..m} L(ŷ^(i), y^(i)).
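A short NumPy sketch of this formalization, assuming the frames of one video are available as an array of shape (A, n, n); the helper names are illustrative, and the prediction follows the standard logistic-regression form given above.

```python
import numpy as np

def video_feature_vector(frames: np.ndarray) -> np.ndarray:
    """Build the feature vector x of length 3N(A-2) for one video: the
    stacked 'previous', 'current' and 'next' frame sequences.
    `frames` has shape (A, n, n) with N = n*n pixels per frame."""
    A = frames.shape[0]
    flat = frames.reshape(A, -1)                      # shape (A, N)
    previous = flat[:A - 2].ravel()                   # frames 1 .. A-2
    current = flat[1:A - 1].ravel()                   # frames 2 .. A-1
    nxt = flat[2:].ravel()                            # frames 3 .. A
    return np.concatenate([previous, current, nxt])   # length 3N(A-2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(X: np.ndarray, w: np.ndarray, b: float) -> np.ndarray:
    """Binary logistic-regression prediction y_hat = sigmoid(w^T x + b)
    for the m training examples stored as the columns of X,
    where X.shape = (3N(A-2), m)."""
    return sigmoid(w.T @ X + b)
```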
6 Conclusions
In this paper the work of a Machine Vision System is presented. We focus on developing a program application for the automatic registration of microparticle movement. For convenience, image processing is based on the information model, which consists of two stages. The preliminary stage uses the basic image characteristics, such as the signal-to-noise ratio, the contrast ratio, the resolution of the system and statistical analysis, to decide whether the image is good enough for further consideration. The main stage includes the segmentation of the picture, background modelling, noise reduction and observation of the particle movement.
A very important problem in microparticle counting is correct counting of the objects. Three processes occur with particles in the experiment: guiding, fixing and flicking. Incorrect recognition of the process leads to an incorrect count of the number of particles: they can be skipped or counted twice. So, we have stated the problem of automatic recognition, by the methods of artificial intelligence, of the process in which a microparticle is involved. The task of process recognition is formalized and a deep-learning method is discussed.
In the future, it is planned to continue developing the neural network whose decisions provide recognition of the process and correct counting of the microparticles in the machine vision system.
References
1. Sinha, P. K.: Image Acquisition and Preprocessing for Machine Vision Systems. SPIE
Press, USA (2012)
2. Gustafsson, L., and Lanshammar, H.: Enoch - An integrated system for measurement
and analysis of human gait. Ph.D. thesis, UPTEC 7723 R, Uppsala (1977)
3. Jarret, M.O., Andrews, B.J., and Paul, J.P.: Quantitative analysis of locomotion using
television. Proceedings of lSPO World Congress, Montreux. (1974)
4. Lanshammar, H.: Measurement and analysis of displacement. In Gait Analysis in
Theory and Practice Proceedings of the 1985 Uppsala Gait Analysis Meeting, 29 – 45 .
(1985)
5. Lindholm, L.E.: An optoelectronic instrument for remote on-line movement monitor-
ing. In R. C. Nelson and C. A. Morrehouse, (eds), Biomechanics IV. University Park
Press, Baltimore 510 – 512 . (1974)
6. Mitchelson, D.: Recording of movement without photography. In D. V. Grieve, D.
Miller, D. Mitchelson, J. P. Paul, and A. J. Smith (eds) , Techniques for the Analysis of
Human Movement, London Lepus Books, London 59 – 65 (1975)
7. NVIDIA Homepage, https://www.nvidia.com.ua/object/the-startup-in-the-field-of-
artificial-intelligence-has-made-ru.html, last accessed 2020/04/20.
8. Khoroshun, A., Ryazantsev, A., Ryazantsev, O., Sato, S., Kozawa, Y., Masajada, J.,
Popiołek-Masajada, A., Szatkowski, M., Chernykh, A., Bekshaev, A.: Formation of an op-
tical field with regular singular-skeleton structure by the double-phase-ramp converter. J.
Opt., 22 (2), 025603 (2020)
9. Bekshaev, A., Chernykh, A., Khoroshun, A., Mikhaylovskaya, L.: Singular skeleton
evolution and topological reactions in edge-diffracted circular optical-vortex beams. Op-
tics Communications, 397, 72-83 (2017)
10. Khoroshun, G.: Model of service providing on the use of optical laboratory in condi-
tions of individual customer needs. Visnik of the Volodymyr Dahl East Ukrainian National
University, № 8 (256) 118-122 (2019)
11. Rose, A.: Vision: Human and Electronic. Plenum Press, New York (1973)
12. Born, M., Wolf, E.: Principles of Optics. Cambridge University Press, Great Britain (1999)
13. Ryazantsev, O., Khoroshun, G., Riazantsev, A., Ivanov, V., Baturin, A.: Statistical
Optical Image Analysis for Information System. Proceedings of 2019 7th International
Conference (FiCloudW), Istanbul, Turkey, IEEE, pp. 130-134 (2019)
14. Bouwmans, T., El Baf, F., Vachon, B.: Background Modeling using Mixture of
Gaussians for Foreground Detection - A Survey. Recent Patents on Computer Science.
Bentham Science Publishers, 1 (3), pp. 219-237. (2008)
15. Bo, Yu.: Design and Implementation of Behavior Recognition System Based on Con-
volutional Neural Network. ITM Web of Conferences, 12, 01025 (2017)