<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Cattle Breed Identification and Live Weight Evaluation on the Basis of Machine Learning and Computer Vision</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Oleg</forename><surname>Rudenko</surname></persName>
							<email>oleh.rudenko@nure.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Kharkiv National University of Radio Electronics</orgName>
								<address>
									<addrLine>Nauky Ave. 14</addrLine>
									<postCode>61166</postCode>
									<settlement>Kharkiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="department">P. Vasilenko</orgName>
								<orgName type="institution">Kharkov National Technical University of Agriculture named after</orgName>
								<address>
									<addrLine>Alchevskih str., 44</addrLine>
									<postCode>61002</postCode>
									<settlement>Kharkiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Cattle Breed Identification and Live Weight Evaluation on the Basis of Machine Learning and Computer Vision</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">465FAFAF8589DD402B237351388C3B15</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T04:20+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Convolutional neural network</term>
					<term>epipolar geometry</term>
					<term>image processing</term>
					<term>computer vision</term>
					<term>cow</term>
					<term>live weight</term>
					<term>mask-rcnn</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The problem of the cow's live weight estimation is considered. A convolutional neural network based method for animal recognition and its breed identification in combination with epipolar geometry approach for object's size measurement is proposed. Information regarding animal's size and its breed is further used for LW estimation by multilayer perceptron based predictive model. This approach can be used to replace traditional methods of direct observation and measurement. The proposed system can be widely used in the management of a modern farm. Accuracy and performance of the proposed method has been tested with the participation of the experts.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Introduction</head><p>Computer-based image analysis and various predictive applications are commonly used in different fields of human activity, one of which is agriculture. The number of farms in Ukraine has increased significantly and their productivity is growing, so the importance of computer technology in the automation of agricultural processes is gradually increasing. When raising cows, the relationship between live weight (LW), milk yield and feed intake can be taken as criteria for organizing the care and nutrition of animals in modern keeping conditions. These parameters are quite important and must be strictly controlled. When they go beyond permissible limits, this significantly affects the immune system of cows, and, accordingly, the economic efficiency of the farm. Negative changes in live weight may indicate animal's health problems, inappropriate environmental conditions, and nutritional errors. Therefore, a parameter such as live weight (LW) is certainly important for dairy cows <ref type="bibr" target="#b0">[1]</ref>. It should be noted that at present the process of measuring and servicing cattle is still carried out manually and is very expensive.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Formal problem statement</head><p>The aim of this study is the estimation of cows LW using several neural network models: the convolution artificial neural network for recognition of a cow at the pictures and its breed identification with subsequent determination of the size of its body by the stereopsis method with subsequent utilization of multilayer perceptron for LW estimation of a cow on the basis of information regarding its breed and size.</p><p>For more accurate estimation of the animal's physical parameters 3D camera (Intel RealSense D435i) was additionally used. It should be noted that the use of a 3D camera alone does not yield to good results due to its low resolution. Thus, cows images taken at different angles are used to determine parameters of bodies cows using photogrammetric method. Parameters such as the withers height (WH), hip height (HH), the body length (BL) and the hip width (HW) of cows were obtained via photogrammetry.</p><p>Model estimation based on the ANN has been developed using those parameters (input parameters WH, HH, BL, HW, and the output parameter -LW).</p><p>Cow's body dimensions are determined from the analysis of animal images taken synchronized cameras from different perspectives. Initially, a cow is identified at the image and its breed is determined by using Mask-rcnn convolution neural network. Then withers height, hip height, length and width of a cow determined via stereopsis method which allows to obtain geometric parameters of the objects at digital images and perform their measurements. Digital imaging and photogrammetric processing include several completely certain steps that can generate three-dimensional or twodimensional digital model of the animal's body. Then obtained data about the species and its size are fed to the predictive model, which determines the estimated weight of the animal.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>3</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Literature review</head><p>In the literature different estimation of animals LW for various purposes is typically performed by using regression equations. However, in recent years the methods and means of computational intelligence based on ANN are increasingly used.</p><p>Prediction of LW of bulls' slaughter value from growth data by using ANN carried out in <ref type="bibr" target="#b1">[2]</ref>, an artificial intelligence technology in dairy industry <ref type="bibr" target="#b2">[3]</ref>, a comparison of an artificial neural network and a method of linear regression for prediction of the LW of hair goats <ref type="bibr" target="#b3">[4]</ref>, comparison of ANN and decision tree algorithms used for prediction of LW at post weaning period <ref type="bibr" target="#b4">[5]</ref>, weight prediction of broiler chickens with the help of 3D-computer vision is performed in <ref type="bibr" target="#b5">[6]</ref>, prediction of the goats masses <ref type="bibr" target="#b6">[7]</ref>, artificial neural network to predict rabbits body weight <ref type="bibr" target="#b7">[8]</ref>, prediction of carcass meat percentage in young pigs using linear regression models and artificial neural networks <ref type="bibr" target="#b8">[9]</ref>, weighing pigs using machine vision and artificial neural networks <ref type="bibr" target="#b9">[10]</ref>.</p><p>The purpose of this study is to estimate cows LW by using artificial neural networks. Body dimensions (BD) were determined by the photogrammetric method on images in which the cow is identified and classified by the convolution neural network.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>4</head><p>The traditional methods of measuring the mass of cows</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">The method of Trukhanovskii</head><p>This method is used to determine LW of the adult cattle by the following formula , 100</p><formula xml:id="formula_0">K B A LW   <label>(1)</label></formula><p>where A -chest girth behind the shoulders, cm; B -direct length of the trunk, measured with a stick, cm; K -a correction factor (2 -for dairy cattle and 2.5 -for milk-meat and beef breeds).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Fig. 1. The measurement method of Trukhanovskii</head><p>To determine the approximate body weight special tables are used. For these tables the initial data is measurements of animals taken at the correct animal's position (feet must stand upright, a head at the level of the back).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>4.2</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Other methods</head><p>There are some other methods of LW and cow's body size estimation that could be applied, in particular Pin Bone method <ref type="bibr" target="#b10">[11]</ref> , 300</p><formula xml:id="formula_1">) ) (( 2 BL HG BW  <label>( 2 )</label></formula><p>where BW -Body Weight estimation (pounds); HG -heart girth (inch); BL -Body Length (inch).</p><p>Thus, in estimating of cattle LW the important task of determining its size measure arises. This problem is complicated especially when measures are made in real conditions. The traditional animal measuring process is shown in Fig. <ref type="figure" target="#fig_0">2</ref>. The first step in the proposed algorithm is detection of a cow. This step is based on the use of convolution neural network for the cows detection at the picture and stereopsis method, which allows the system to obtain measurements of the real world objects, located at different distances from the cameras.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>5.1</head><p>The stereopsis principle</p><p>This principle can be explained as follows: Suppose there are two cameras, defined by their matrices P and P' in some coordinate system. In this case we say that there is a pair of calibrated cameras. If the cameras centers do not match, the pair of cameras can be used to determine the threedimensional coordinates of the observed points.</p><p>Often, the coordinate system is chosen in such a way that the cameras matrix are of the form ] 0</p><formula xml:id="formula_2">| [I K P  , ] | [ t R K P     </formula><p>(this is always possible, if we choose the origin coinciding with the first camera's center, and direct a Z-axis along the optical axis).</p><p>Consider a point P in the real three-dimensional space projected simultaneously in two image points p and p' through the two camera projection center (C and C'). The points P, p, p', C and C' lie in a plane, called the epipolar plane. Epipolar plane inter-sects each image forming the intersection lines. These lines (l and l') correspond to the projection ray through the p and P, and p' and P, and are called epipolar lines. This projection epipolar geometry is described in <ref type="bibr" target="#b11">[12]</ref> (Fig. <ref type="figure">3</ref>).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Fig. 3. Epipolar geometry</head><p>Epipolar geometry is used to search for stereo pairs and for verifying that a pair of points can be a stereo pair (i.e. the projection of a point in space).</p><p>Epipolar geometry has a very simple description in the coordinates. Suppose there is a pair of calibrated cameras, and let p is homogeneous coordinates of the point at the image from the first camera, and p' -from the second camera. A 3 × 3 matrix F exists, such that the pair of points p, p' is a stereo pair if and only if:</p><formula xml:id="formula_3">. 0   Fp p T ( 3 )</formula><p>The matrix F is called a fundamental matrix. Its rank is 2 and it is determined up to a nonzero factor that only depends on the original matrix cameras P and P'.</p><p>In the case where the matrix cameras are of the following form ] 0</p><formula xml:id="formula_4">| [I K P  , ] | [ t R K P   </formula><p>the fundamental matrix may be calculated with the formula:</p><formula xml:id="formula_5">x T T T t KR RK K F ] [ ) ( 1    . (<label>4</label></formula><formula xml:id="formula_6">)</formula><p>where the e vector notation </p><p>In <ref type="bibr" target="#b12">[13]</ref> the various calculation algorithms of F with using a set of points are considered. In particular, gradient descent algorithm, Newton's method and the Levenberg-Marquardt algorithm are described.</p><p>Epipolar line equations are calculated with the help of the fundamental matrix. For the point x, the vector that defines the epipolar line is of the form Fp l   , and the equation of the epipolar line itself is:</p><formula xml:id="formula_8">0    p l T</formula><p>. Similarly, for a point p', the vector defining the epipolar line is of the form p F l T   .</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2">Construction of the depth map</head><p>Depth map -is an image in which each pixel, instead of color, keeps a distance from the camera. To this end, for each point in one image its pair is searched on the other image. A pair of corresponding points can be used for triangulation and determination of their prototype coordinates in three dimensions. Knowing the three-dimensional coordinates of the prototype image, the depth is calculated as the distance to the camera's plane <ref type="bibr" target="#b12">[13]</ref>.</p><p>Thus it is possible to determine the size of the object.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Convolutional Neural Network (CNN)</head><p>CNN was first proposed in <ref type="bibr" target="#b13">[14]</ref> as the development of the neocognitron model <ref type="bibr" target="#b14">[15]</ref>, intended for effective image recognition. This network uses pattern recognition technology based not on hardcoded by developers algorithms, but on system training, which involves a consistent identification of a huge number of images. Subsequently, the R-CNN (Regions With CNNs) networks were built based on CNN. R-CNN were used for detecting all objects of the given classes and determining the bounding box for each of them (object detection). R-CNN creates bounding boxes for each object in the image or suggestions regions using selective search process. Fast R-CNN increases productivity of R-CNN and classifies objects of each region together with tighter bounding boxes. Next network, Faster R-CNN, improved generation mechanism of used therein candidate regions by computing the regions not on the original image but on the features map derived from CNN. For this purpose the module called Region Proposal Network (RPN) was added.</p><p>Finally, the Mask R-CNN network <ref type="bibr" target="#b15">[16]</ref> improves Faster R-CNN architecture by adding one more sub-module, which predicts the position of the detected object covering mask, and thus solves the task of instance segmentation. After image processing, the network outputs objects bounding boxes (bbox), their classes (class) and masks (mask). It worth to note, that Mask R-CNN is one of the fastest network at the moment. The structure of this network is shown in Fig. <ref type="figure" target="#fig_2">4</ref>. In accordance with these arrangements, three main layers are used to build a convolutional neural network: convolution, pooling (otherwise subsampling or downsampling), fully connected layer.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.2">Convolution layer</head><p>Convolution equation for the l -th ) ,..., 1 ( L l  network layer has the following form <ref type="bibr" target="#b16">[17]</ref> </p><formula xml:id="formula_9">             a l b l b js a is l ab l ij b y w x . 1 ) )( (<label>( 6 )</label></formula><p>It reflects movement of the l w core along the image or map of input attributes for the layer</p><formula xml:id="formula_10">1  l y . Here ) ( 1 1    l ij l x f y -image after ) 1 (  l -th layer; ) ( f -used acti- vation function; l b -offset. Indices b a j i , , ,</formula><p>-indices of the elements in the matrices, S -the value of the convolution step.</p><p>As seen from ( <ref type="formula">3</ref>) convolution are performed for each element j i, of image matrix l x .</p><p>Convolution preserves spatial relationships between the pixels. Each convolutional layer is followed by subsampling or computational layer that is serving to reduce the dimension of the image by averaging the values of the local output neurons.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.3">Subsampling Layer (pooling, MAX-pooling)</head><p>Subsampling layer zooms planes by local averaging of the output neurons values . Thus, the hierarchical organization is achieved. Subsequent layers are extracted more common characteristics that less depend on the image distortion.</p><p>The difference between a subsampling layer and a convolution layer is that in the convolution layer regions of the neighboring neurons overlap, which does not occur in the subsampling layer.</p><p>Pooling layer operates independently of the input data depth and scales the spatial volume by using a maximum function.</p><p>The architecture of the convolution network assumes that the presence of a sign is more important than information about its location. Therefore, the maximum one is selected from several neighboring neurons in the feature map and its value is considered as a single neuron in the feature map of lower dimension.</p><p>In addition to maximum subsampling, pooling layers can perform other functions, such as averaging subsampling or even L2-normalized subsampling.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.4">Non-linear activation layer</head><p>On these layers within the network, a nonlinear activation function ) ( f is applied to all input values and the result is sent to the output. Thus, the activation layer does not change the input resolution.</p><p>Usually due to significant positive properties for hidden layers, ReLu(ReLU (x) = max (0; x)) and its various modifications (Leaky ReLU, Parametric ReLU, Randomized ReLU) <ref type="bibr" target="#b17">[18]</ref> are used. SoftMax function</p><formula xml:id="formula_11">1 1               L N i L i x L j x L j e e f</formula><p>(for solving classification problems) or linear function (for regression tasks) are used for fully connected layer.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.5">Dropout layer</head><p>Different regularization techniques are used to avoid network retraining. Dropout is a simple and effective regularization method and consists in the fact that in the process of training a network, a subnet is randomly allocated from its aggregate topology, i.e. part of the neurons is turned off from the process, and the next update of the scales occurs only within the allocated subnet. Thus, only weights of remaining neurons are changed. Each neuron is excluded from the total network with a certain probability, which is called the dropout rate.</p><p>This layer reduces the time of one training epoch due to the smaller number of optimized parameters, and also allows to better deal with retraining of the network compared to standard regularization methods.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>6.6</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Normalizing layer</head><p>The standard normalization of inputs occurs on this layer (the sample average of their values is subtracted, and the result is divided by the root of the sample variance). Sampled values are calculated taking into account the values at the inputs of this layer at previous training iterations. This approach allows to increase the speed of learning the network and improve the final result.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.7">Fully connected layer</head><p>This layer is an ordinary multilayer perceptron, the purpose of which is classification. It models a complex nonlinear function, the optimization of which improves the quality of recognition. The neurons of each map of the previous subsample layer are associated with one neuron of the hidden layer. Thus, the number of neurons in the hidden layer is equal to the number of maps in the subsample layer.</p><p>As in ordinary neural networks, in a fully connected layer, neurons are connected to all activations in the previous layer. Their activations can be calculated by multiplying matrices and applying bias.</p><p>The differences between fully connected and convolutional layers are that the neurons of the convolutional layer 1) connected only with the local area of the input; 2) can share parameters. In practice, a quadratic function, cross entropy, or some combined functional are used as a criterion for the error function.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7">Backpropagation Neural Network (BPNN)</head><p>Backpropagation neural network (BPNN) is a multilayer feedforward neural network that uses a supervised learning algorithm known as error back-propagation algorithm. Errors accumulated at the output layer are propagated back into the network for the adjustment of weights. Multilayer perceptron (MLP) is a neural network with several layers, each of which consists of computing nodes (neurons). The topology of the MLP is shown in Fig. 5 <ref type="bibr" target="#b18">[19]</ref>. Network inputs are connected to each neuron in the first layer. The outputs of the neurons of the first layer then become inputs to the neurons of the second layer and so on. The last layer is the output layer, all other layers between the input and output layers are called hidden layers. The architecture of the multi-layer perceptron can be conveniently written as</p><formula xml:id="formula_12">l n n n    ... 1 0</formula><p>where 0 n is the network's input vector dimension, and i n ,</p><formula xml:id="formula_13">l i   1</formula><p>denotes the number of nodes in the respective layers. </p><formula xml:id="formula_14">      , ... ) ( ... ) ( ) ( ˆ1 1 1 2 1 1 q T q T q q T q q b b k x W f f W f W f k f k y                                 <label>(7)</label></formula><p>where i W -vector of weight parameters of neurons in the i-th layer of the network;</p><formula xml:id="formula_15">f i [•] -activation function (AF) of i-layer, b i -bias of the i-th neuron.</formula><p>Since in practical applications it is necessary to perform various operations not only with the activation function itself, but also with its first derivative, it is necessary to use a monotonous, differentiable and limited function as an activation function. 
A particularly important role is played by such functions in modeling nonlinear relationships between input and output variables. These are the so-called logistic or sigmoidal (S-shaped) functions.</p><p>In general, the input-output representation can be expressed as</p><formula xml:id="formula_16">f̂ : R^(n_0) → R^m</formula><p>The approximation capacity of the MLP has been studied and described by many authors <ref type="bibr" target="#b18">[19]</ref><ref type="bibr" target="#b19">[20]</ref><ref type="bibr" target="#b20">[21]</ref><ref type="bibr" target="#b21">[22]</ref><ref type="bibr" target="#b22">[23]</ref>. The basic idea is that every continuous function</p><formula xml:id="formula_17">f : D_f ⊂ R^(n_0) → R^m</formula><p>can be uniformly approximated with arbitrary precision by a function f̂ on D_f, where D_f is a compact subset of R^(n_0), provided that there is a sufficient number of hidden neurons in the network. This is true even for networks with only one hidden layer. A typical assumption for the activation function in the hidden layer is the following: f(·) is continuous, non-constant and bounded.</p><p>The MLP is thus a general function approximator, and a network with one hidden layer will always be enough to represent an arbitrary continuous function. But this statement says nothing about the number of neurons in the hidden layer that provides a given accuracy of approximation. It is also important that the proof of the possibility of approximation by means of an MLP assumes that the weights are set correctly; the question of the choice of a learning algorithm remains open.</p><p>The pseudocode of the BPNN algorithm is given below <ref type="bibr" target="#b23">[24]</ref>. (VI) Periodically evaluate the network performance. Repeat the forward and backward computations until the network converges to the target output.</p></div>
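The forward pass of equation (7) can be sketched directly as nested layer applications. The weights in the usage example below are purely illustrative, not a trained LW model:

```python
import math

def dense(x, weights, bias, activation):
    """One MLP layer: activation(W x + b); `weights` is a list of row vectors."""
    return [activation(sum(w * v for w, v in zip(row, x)) + b)
            for row, b in zip(weights, bias)]

def mlp_forward(x, layers):
    """Forward pass of equation (7): nested f_i[W_i (...) + b_i].
    `layers` is a list of (weights, bias, activation) triples."""
    for weights, bias, activation in layers:
        x = dense(x, weights, bias, activation)
    return x

sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))  # bounded, monotonic, differentiable
linear = lambda v: v                            # output activation for regression
```

A 4-2-1 network for LW regression would take the four body measurements (WH, HH, BL, HW) as inputs, pass them through a sigmoid hidden layer, and emit the weight estimate through a linear output neuron.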
<div xmlns="http://www.tei-c.org/ns/1.0"><head>8</head><p>The structure of the developed system Fig. <ref type="figure" target="#fig_6">6</ref> shows the structure of the developed intellectual breed recognition and evaluation of DA cattle. The problem of recognizing various breeds of cows and assessing their linear sizes with the subsequent determination of mass using a neural network predictive model based on a multilayer perceptron was solved. To recognize the breed of a cow, the Mask-RCNN network trained on the COCO sample was used, the source code of which is freely available on the Internet <ref type="bibr" target="#b24">[25]</ref>. The network weights were fine-tuned on 250 images of representatives of each of the considered breeds of cows (Ayrshire, Holstein, Jersey, Red Steppe). To increase the size of the training and test samples, the augmentation method was used <ref type="bibr" target="#b25">[26]</ref>, which made it possible to obtain additional images from the original ones. The network was retrained over 5000 epoches using the SGD algorithm. To train the network, we used the Nvidia GeForce 1060 graphics card, which allowed us to speed up the learning process many times as compared to learning on the CPU. The results of recognition and evaluation of the size and weight of cows are given in Table <ref type="table" target="#tab_0">1</ref>. Examples of correct recognition are presented in fig. <ref type="figure">7</ref>.</p><p>It should be noted that, despite the rather extensive training sample and the achieved high recognition accuracy (92%), as a result of the network, false recognition occurs.</p><p>An example of such recognition is shown in Fig. <ref type="figure">8</ref>. The obtained masks of recognized objects were used to determine their linear dimensions using the triangulation method (epipolar geometry). 
As additional information allowing us to adjust the obtained sizes, we used images from a 3D camera, which make it possible to estimate the distance to the recognized object. Examples of such images are presented in Fig. <ref type="figure">9</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgement</head><p>This project has been funded with support from the European Commission. This publication reflects the views only of the author, and the Commission cannot be held responsible for any use which may be made of the information contained therein.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="10">Conclusions</head><p>The proposed method for measuring cattle using neural network image processing algorithms allows modern farmers to quickly and accurately measure the weight of the animal, as well as recognize its breed, saving time and reducing effort without compromising the habitat and disturbing livestock growth. This is possible because the method allows studying cows both in corrals and pastures, without interfering with their normal behavior. The proposed approach can be used to replace traditional methods that use direct observation and measurement, which adversely affect the behavior of animals. The system can be widely used in the management of modern farming. The accuracy and performance of the proposed methodology were tested with the participation of the experts. The same measurements were carried out by the farm staff who confirmed the effectiveness of the proposed system.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 2 .</head><label>2</label><figDesc>Fig. 2. The traditional measuring process</figDesc><graphic coords="4,125.52,308.64,169.42,78.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 4 .</head><label>4</label><figDesc>Fig. 4. Mask R-CNN structure</figDesc><graphic coords="7,156.24,147.24,283.00,127.56" type="vector_box" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>6 . 8</head><label>68</label><figDesc>The choice of training criteriaANN training is an iterative process. At each iteration, the network outputs for one (or more) samples in the training set are calculated, and the network weights are adjusted to reduce the error between the actual network output ) Therefore, training is reduced to minimizing some error function.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Fig. 5 .</head><label>5</label><figDesc>Fig. 5. Backpropagation neural network</figDesc><graphic coords="10,160.68,213.24,225.83,147.72" type="vector_box" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head></head><label></label><figDesc>(I) Network initialization: randomly choose the initial weights (II) Select the first training pair (III) Forward computation that includes the following steps: (A) Apply the inputs to the network (B) Calculate the output for every neuron from the input layer, through the hidden layer (s), to the output layer (C) Calculate the error at the outputs (IV) Backward computation (A) Use the output error to compute error signals for preoutput layers (B) Use the error signals to compute weight adjustments (C) Apply the weight adjustments (V) Repeat Forward and Backward computations for other training pairs.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Fig. 6 .</head><label>6</label><figDesc>Fig. 6. The structure of the developed intellectual system for breed recognition and cattle LW evaluation</figDesc><graphic coords="12,161.76,147.24,271.94,157.20" type="vector_box" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Fig. 7 .Fig. 8 .Fig. 9 .</head><label>789</label><figDesc>Fig. 7. Examples of correct recognitions</figDesc><graphic coords="13,124.68,590.52,170.94,78.72" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1.</head><label>1</label><figDesc>The obtained results</figDesc><table><row><cell>Breed</cell><cell>Ayrshire</cell><cell>Holstein</cell><cell>Jersey</cell><cell>Red Steppe</cell></row><row><cell>Average calculated dimensions (length of the body / chest girth) (cm)</cell><cell>149/171</cell><cell>155/180</cell><cell>132/160</cell><cell>157/189</cell></row><row><cell>Qualification weight (kg)</cell><cell>610</cell><cell>745</cell><cell>422</cell><cell>490</cell></row><row><cell>The standard deviation of obtained sizes</cell><cell>1.3</cell><cell>1.6</cell><cell>2.4</cell><cell>1.8</cell></row><row><cell>The mean absolute error (cm)</cell><cell>0.9</cell><cell>1.2</cell><cell>2.5</cell><cell>1.5</cell></row><row><cell>The standard deviation of absolute error (cm)</cell><cell>0.5</cell><cell>1.7</cell><cell>4.5</cell><cell>3.8</cell></row><row><cell>The average relative error</cell><cell>0.1</cell><cell>0.12</cell><cell>0.17</cell><cell>0.15</cell></row><row><cell>The standard deviation of the relative error</cell><cell>0.11</cell><cell>0.15</cell><cell>0.09</cell><cell>0.09</cell></row><row><cell>Number of correct recognitions</cell><cell>461</cell><cell>418</cell><cell>424</cell><cell>439</cell></row><row><cell>Number of false positives</cell><cell>39</cell><cell>82</cell><cell>76</cell><cell>61</cell></row><row><cell>Accuracy</cell><cell>0.92</cell><cell>0.84</cell><cell>0.85</cell><cell>0.88</cell></row><row><cell>The ratio of false positives to the correct ones</cell><cell>0.09</cell><cell>0.2</cell><cell>0.18</cell><cell>0.14</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Determination of Body Measurements On the Holstein Cows by Digital Image Analysis Method and Estimation of Their Live Weight</title>
		<author>
			<persName><forename type="first">S</forename><surname>Tasdemir</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2010">2010</date>
			<pubPlace>Konya, Turkey</pubPlace>
		</imprint>
		<respStmt>
			<orgName>Selcuk University</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Ph.D. thesis</note>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Prediction of Bulls&apos; Slaughter Value From Growth Data Using Artificial Neural Network</title>
		<author>
			<persName><forename type="first">K</forename><surname>Adamczyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Molenda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Szarek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Skrzyński</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Central European Agriculture</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="133" to="142" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Artificial Intelligence Technologies in Dairy Science: Fuzzy Logic and Artificial Neural Network</title>
		<author>
			<persName><forename type="first">A</forename><surname>Akilli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Atil</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Hayvansal Uretim</title>
		<imprint>
			<biblScope unit="volume">55</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="39" to="45" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Comparison of Artificial Neural Network and Multiple Linear Regression for Prediction of Live Weight in Hair Goats</title>
		<author>
			<persName><forename type="first">S</forename><surname>Akkol</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Akilli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Cema</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Yuzuncu Yıl University Journal of Agricultural Sciences</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="21" to="29" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Comparison of artificial neural network and decision tree algorithms used for predicting live weight at post weaning period from some biometrical characteristics in Harnai sheep</title>
		<author>
			<persName><forename type="first">M</forename><surname>Ali</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Eyduran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Tariq</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Tirink</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Abbas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Bajwa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">H</forename><surname>Shah</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Pakistan J. Zool</title>
		<imprint>
			<biblScope unit="volume">47</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="1579" to="1585" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Weight prediction of broiler chickens using 3D computer vision</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">K</forename><surname>Mortensen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Lisouski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Ahrendt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computers and Electronics in Agriculture</title>
		<imprint>
			<biblScope unit="volume">123</biblScope>
			<biblScope unit="page" from="319" to="326" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Comparison of connectionist and multiple regression approaches for prediction of body weight of goats</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">V</forename><surname>Raja</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">P</forename><surname>Ruhil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">S</forename><surname>Gandhi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neural Computing and Applications</title>
		<imprint>
			<biblScope unit="volume">21</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="119" to="124" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Using artificial neural network to predict body weights of rabbits</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">O</forename><surname>Salawu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Abdulraheem</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Shoyombo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Adepeju</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Davies</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Akinsola</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Nwagu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Open Journal of Animal Sciences</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="issue">04</biblScope>
			<biblScope unit="page">182</biblScope>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Prediction of carcass meat percentage in young pigs using linear regression models and artificial neural networks</title>
		<author>
			<persName><forename type="first">M</forename><surname>Szyndler-Nędza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Eckert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Blicharski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Tyra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Prokowski</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Annals of Animal Science</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="275" to="286" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Walk-through weighing of pigs using machine vision and an artificial neural network</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Winter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Walker</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Biosystems Engineering</title>
		<imprint>
			<biblScope unit="volume">100</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="117" to="125" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">I</forename><surname>McNitt</surname></persName>
		</author>
		<title level="m">Livestock Husbandry Techniques, Low priced edition</title>
				<imprint>
			<publisher>Granada publishing company limited</publisher>
			<date type="published" when="1983">1983</date>
			<biblScope unit="page">280</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">I</forename><surname>Hartley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Zisserman</surname></persName>
		</author>
		<title level="m">Multiple View Geometry in Computer Vision</title>
				<imprint>
			<publisher>Cambridge University Press</publisher>
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Learning OpenCV: Computer Vision with the OpenCV Library</title>
		<author>
			<persName><forename type="first">G</forename><surname>Bradski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kaehler</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2008">2008</date>
			<publisher>O&apos;Reilly Media</publisher>
			<biblScope unit="page">580</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Backpropagation Applied to Handwritten Zip Code Recognition</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Lecun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Boser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">S</forename><surname>Denker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Henderson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">E</forename><surname>Howard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Hubbard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">D</forename><surname>Jackel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neural Computation</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="541" to="551" />
			<date type="published" when="1989">1989</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Neocognitron: A Self-organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position</title>
		<author>
			<persName><forename type="first">K</forename><surname>Fukushima</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Biological Cybernetics</title>
		<imprint>
			<biblScope unit="volume">36</biblScope>
			<biblScope unit="issue">4</biblScope>
			<date type="published" when="1980">1980</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<author>
			<persName><forename type="first">K</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Gkioxari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Dollar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Girshick</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1703.06870</idno>
		<title level="m">Mask R-CNN</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Convolutional networks for images, speech, and time series</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Lecun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Bengio</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The Handbook of Brain Theory and Neural Networks</title>
				<imprint>
			<date type="published" when="1995">1995</date>
			<biblScope unit="page" from="255" to="258" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Rectified linear units improve restricted Boltzmann machines</title>
		<author>
			<persName><forename type="first">V</forename><surname>Nair</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">E</forename><surname>Hinton</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 27th International Conference on Machine Learning (ICML)</title>
		<imprint>
			<biblScope unit="page" from="807" to="814" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<author>
			<persName><forename type="first">Ye</forename><forename type="middle">V</forename><surname>Bodyanskiy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><forename type="middle">G</forename><surname>Rudenko</surname></persName>
		</author>
		<title level="m">Iskusstvennyye neyronnyye seti: arkhitektura, obucheniye, primeneniye [Artificial Neural Networks: Architecture, Learning, Applications]</title>
				<imprint>
			<publisher>TELETEKH</publisher>
			<date type="published" when="2004">2004</date>
			<biblScope unit="page">372</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<title level="m" type="main">Neyrokomp&apos;yuternaya tekhnika [Neural Computing]</title>
		<author>
			<persName><forename type="first">F</forename><surname>Uossermen</surname></persName>
		</author>
		<imprint>
			<publisher>Mir</publisher>
			<pubPlace>Moscow</pubPlace>
			<date type="published" when="1992">1992</date>
			<biblScope unit="page">184</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">M</forename><surname>Ham</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Kostanic</surname></persName>
		</author>
		<title level="m">Principles of Neurocomputing for Science and Engineering</title>
				<meeting><address><addrLine>NY</addrLine></address></meeting>
		<imprint>
			<publisher>McGraw-Hill Inc</publisher>
			<date type="published" when="2001">2001</date>
			<biblScope unit="page">468</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<title level="m" type="main">Artificial Neural Networks, Theory and Application</title>
		<author>
			<persName><forename type="first">D</forename><surname>Patterson</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1996">1996</date>
			<publisher>Prentice Hall Inc</publisher>
			<biblScope unit="page">497</biblScope>
			<pubPlace>Singapore</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Mnogokriterial&apos;naya optimizatsiya evolyutsioniruyushchikh setey pryamogo rasprostraneniya [Multi-criteria Optimization of Evolving Feedforward Networks]</title>
		<author>
			<persName><forename type="first">O</forename><forename type="middle">G</forename><surname>Rudenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Bessonov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Problemy upravleniya i informatiki</title>
		<imprint>
			<biblScope unit="page" from="29" to="41" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<monogr>
		<title level="m" type="main">Neural Networks and Back Propagation Algorithm</title>
		<author>
			<persName><forename type="first">M</forename><surname>Cilimkovic</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2015">2015</date>
			<pubPlace>Dublin, Ireland</pubPlace>
		</imprint>
		<respStmt>
			<orgName>Institute of Technology Blanchardstown</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<monogr>
		<ptr target="https://github.com/matterport/Mask_RCNN" />
		<title level="m">Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<monogr>
		<ptr target="https://github.com/aleju/imgaug" />
		<title level="m">Image augmentation for machine learning experiments</title>
				<imprint/>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
