<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Adam Grelewicz</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mateusz Lis</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dawid Michalak</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Faculty of Applied Mathematics, Silesian University of Technology</institution>
          ,
          <addr-line>Kaszubska 23, 44-100 Gliwice</addr-line>
          ,
          <country country="PL">POLAND</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>IVUS2024: Information Society and University Studies 2024</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>Sound analysis, in combination with artificial intelligence models, plays a crucial role in identifying various types of defects. To design a well-functioning model, a thorough data analysis is essential. Therefore, this article presents an implementation of the MFCC algorithm for different music genres. The algorithm is supported by a high-pass filter and triangular filters. The recording is transformed using the discrete Fourier transform (DFT). The correctness of the algorithm is then verified with the KNN and Naive Bayes classifiers by checking how well they identify the music genre. The project was conducted on a publicly available dataset. The results for the KNN classifier are very satisfactory. Additionally, this article demonstrates the superiority of the KNN classifier over Naive Bayes for sound analysis.</p>
      </abstract>
      <kwd-group>
        <kwd>MFCC algorithm</kwd>
        <kwd>Genre Recognition</kwd>
        <kwd>Naive Bayes</kwd>
        <kwd>KNN</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Sound is a wave that arises from changes in atmospheric pressure caused by vibration [1].
Combined with artificial intelligence systems, it can have broad applications in various fields. In
medicine, image recognition using deep learning is utilized. In [2], models are used to help
specialists diagnose diseases more quickly. It is worth noting that sound contains a lot of
information. Based on sound, certain abnormalities can be detected. In [3], the use of heart
sounds for early disease detection is excellently demonstrated, allowing for earlier treatment. In article
[12], there is another medical application, namely recognizing people with Parkinson’s disease
from recorded voice samples. The average accuracy of this method is around 90%.</p>
      <p>A common approach is to convert the sound into a spectrogram and then apply an image-recognition model. A spectrogram is a visual representation of the intensity of a signal over time, with respect to the different frequencies present in a given waveform. Evaluating spectrograms involves transforming the signal from the time domain to the frequency domain using the Fourier transform [4]. In [4], it is shown that sound can also be used in the food industry to identify various food products. In articles [8][9][10][11], various techniques utilizing sound recognition are described, such as Environmental Sound Recognition (ESR) and Automatic Sound Recognition (ASR), which can be used in a smart home. A smart home, along with artificial intelligence methods, can provide support for people, reduce operating costs, and improve energy efficiency [13][14]. This field therefore also utilizes sound recognition, for example as one of the biometric security measures for homes [15]. In all these applications, the sound processing scheme is the same.</p>
      <p>All these articles demonstrate that data analysis is very important for the application of neural networks. In particular, sound must be properly processed. Sound, especially human speech or music, has certain features that can be used for its characterization, such as a unique human voice, communication-specific noise, or the use of similar instruments in musical pieces of the same genre [1]. Therefore, to extract the most important features of a sound signal in the form of a coefficient matrix, the MFCC algorithm is used, which will be described in detail in this article. Later in the article, two classifiers are compared: KNN and Naive Bayes.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Methodology</title>
      <sec id="sec-2-1">
        <title>2.1. The MFCC Algorithm (Mel-frequency cepstral coefficients)</title>
        <p>Before describing the MFCC algorithm itself, certain concepts need to be defined:
1. Mel scale - a scale of pitches that measures the perceived frequency of a sound, in contrast to the objective frequency scale measured in hertz. The function converting a frequency f in hertz to the Mel scale is
M(f) = 1125 · ln(1 + f / 700)    (1)
and its inverse is
M⁻¹(m) = 700 · (exp(m / 1125) − 1)    (2)
2. Window function - a function that takes non-zero values only within a specified interval. Such functions are used to filter signals.</p>
        <p>For the purposes of the mathematical description, let us introduce the following notation:
1. {...}* denotes an array, i.e., an ordered collection whose elements can repeat. An array of arrays is called a matrix.
2. If A is an array, the notation A[k] means the k-th element of A.
3. If A is an array, the notation A[:k] means the first k elements of A.
4. All other operations on arrays work similarly to operations on sets.</p>
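        <p>The two conversions in Eqs. (1) and (2) can be sketched in Python (a minimal illustration with our own function names, not code from the article):</p>

```python
import numpy as np

def hz_to_mel(f):
    """Frequency in hertz to the Mel scale, Eq. (1)."""
    return 1125.0 * np.log(1.0 + f / 700.0)

def mel_to_hz(m):
    """Inverse conversion, Mel back to hertz, Eq. (2)."""
    return 700.0 * (np.exp(m / 1125.0) - 1.0)
```

        <p>The two functions are mutual inverses, so mel_to_hz(hz_to_mel(440.0)) returns 440.0 up to floating-point error.</p>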
        <p>Description of the MFCC algorithm:
1. Let:
• n be the number of samples in the input signal,
• s = {s_0, s_1, ..., s_{n−1}}* be the input signal,
• f_hz be the sampling frequency of the input signal in hertz,
• F be the number of triangular filters,
• N be the number of values transformed by the discrete Fourier transform,
• w be the length of the window in samples,
• h be the number of samples by which the window is shifted, with 0 &lt; h ≤ w,
• C be the number of cepstral coefficients.
2. Filters are applied to remove noise. This step is optional, but noise removal improves accuracy, so a high-pass filter is used in the form
y_k = s_k − 0.97 · s_{k+1},  k ∈ {0, 1, 2, ..., n − 2}*    (3)
3. Triangular filters are created to extract the desired features from the input signal while omitting unnecessary ones. These filters are distributed on a frequency scale between 0 and f_hz / 2. Initially, the boundaries of this scale are converted from hertz to the Mel scale:
m_low = M(0) = 0,  m_high = M(f_hz / 2) = 1125 · ln(1 + f_hz / 1400)    (4)
Then F + 2 evenly spaced points are taken on the Mel scale:
m = {m_low + k · Δ : k ∈ {0, 1, 2, ..., F + 1}*}*    (5)
Δ = (m_high − m_low) / (F + 1)    (6)
These points are converted back to hertz and mapped to DFT bin indices:
b = {⌊N · M⁻¹(m[k]) / f_hz⌋ : k ∈ {0, 1, 2, ..., F + 1}*}*    (7)
The array b contains non-linearly distributed numbers from 0 to ⌊N/2⌋. The j-th triangular filter, for j ∈ {0, 1, ..., F − 1}* and k ∈ {0, 1, ..., ⌊N/2⌋}*, is
H(j, k) = 0 for k &lt; b[j];  (k − b[j]) / (b[j+1] − b[j]) for b[j] ≤ k ≤ b[j+1];  (b[j+2] − k) / (b[j+2] − b[j+1]) for b[j+1] ≤ k ≤ b[j+2];  0 for k &gt; b[j+2]    (8)
4. The input signal is divided into windows, where the i-th window is defined as
w(i) = {y_j : i · h ≤ j &lt; i · h + w}*,  i ∈ {0, 1, 2, ..., ⌈n / h⌉ − 1}*    (9)
with zeros appended where the index j goes beyond the input signal.
5. The power spectrum is calculated, i.e., the discrete Fourier transform (DFT) of the first N elements of the array w(i); each number in the resulting array is squared in absolute value, and the elements are scaled by 1/N:
P(i) = A(DFT(w(i), N)) / N    (10)
where P(i) denotes the power spectrum of the i-th window w(i) and A({z_0, z_1, ..., z_k}*) = {|z_0|², |z_1|², ..., |z_k|²}*. Absolute values are required in the function A, as the DFT output consists of complex numbers.
6. The previously calculated filters are then used to filter the power spectrum via the matrix product of P(i) and the transpose of the matrix H:
E(i) = P(i) · Hᵀ    (11)
7. The final step is to compute the natural logarithm of each element of E(i) and transform these logarithms using the discrete cosine transform (DCT) of type II, keeping the first C coefficients:
c(i) = DCT(L(E(i)))[:C]    (12)
where L({x_0, x_1, ..., x_k}*) = {ln x_0, ln x_1, ..., ln x_k}*.</p>
        <p>The result of the algorithm is the matrix M = {c(0), c(1), ..., c(⌈n/h⌉ − 1)}*.</p>
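        <p>The steps above can be sketched compactly in NumPy (a minimal illustration with our own variable names, not the authors' code; the tiny constant added before the logarithm guards against the logarithm of zero and is our own safeguard, not part of the original description):</p>

```python
import numpy as np

def dct2(x, c):
    """First c coefficients of the type-II DCT of array x."""
    k = np.arange(len(x))
    return np.array([np.sum(x * np.cos(np.pi * j * (2 * k + 1) / (2 * len(x))))
                     for j in range(c)])

def mfcc(signal, f_hz, F=26, N=512, w=551, h=220, C=13):
    """Matrix of cepstral coefficients, one row per window (steps 1-7)."""
    hz2mel = lambda f: 1125.0 * np.log(1.0 + f / 700.0)
    mel2hz = lambda m: 700.0 * (np.exp(m / 1125.0) - 1.0)
    # Step 2: high-pass (pre-emphasis) filter, Eq. (3).
    y = signal[:-1] - 0.97 * signal[1:]
    # Step 3: triangular filters on Mel-spaced DFT bin boundaries, Eqs. (4)-(8).
    mel_pts = np.linspace(hz2mel(0.0), hz2mel(f_hz / 2.0), F + 2)
    b = np.floor(N * mel2hz(mel_pts) / f_hz).astype(int)
    H = np.zeros((F, N // 2 + 1))
    for j in range(F):
        for k in range(b[j], b[j + 1]):
            H[j, k] = (k - b[j]) / max(b[j + 1] - b[j], 1)
        for k in range(b[j + 1], b[j + 2] + 1):
            H[j, k] = (b[j + 2] - k) / max(b[j + 2] - b[j + 1], 1)
    # Steps 4-7: windowing, power spectrum, filtering, log, DCT-II.
    rows = []
    for start in range(0, len(y), h):
        win = y[start:start + w]
        win = np.pad(win, (0, w - len(win)))             # zero-pad last windows
        P = np.abs(np.fft.rfft(win[:N], n=N)) ** 2 / N   # Eq. (10)
        E = H @ P                                        # Eq. (11)
        rows.append(dct2(np.log(E + 1e-12), C))          # Eq. (12)
    return np.array(rows)
```

        <p>With the default parameters this sketch produces one row of C = 13 coefficients per window, matching the shape of the matrix described above.</p>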
      </sec>
      <sec id="sec-2-2">
        <title>2.2. KNN (k-Nearest-Neighbours)</title>
        <p>The K-Nearest Neighbours (KNN) algorithm is a classification and regression method that utilizes the similarity between data points. It operates by finding the nearest neighbours (data points) to a new point and uses their information to predict the class or value of that point [5]. Before describing the KNN algorithm itself, certain concepts need to be defined:</p>
        <p>Value of k - the number of neighbours considered during classification or regression.
Mahalanobis distance - a distance that takes into account the correlations between two vectors x and y through the covariance matrix S, scaling distances depending on the distribution of the data. It is given by the formula
d(x, y) = √((x − y)ᵀ S⁻¹ (x − y))    (13)</p>
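        <p>A minimal NumPy sketch of KNN classification with the Mahalanobis distance of Eq. (13) (the function names are ours; the covariance matrix is estimated from the training set, and the pseudo-inverse is our safeguard for a singular estimate):</p>

```python
import numpy as np
from collections import Counter

def mahalanobis(x, y, S_inv):
    """Mahalanobis distance of Eq. (13), given the inverted covariance matrix."""
    d = x - y
    return float(np.sqrt(d @ S_inv @ d))

def knn_predict(X_train, y_train, x, k=5):
    """Majority vote among the k training points nearest to x."""
    # Covariance estimated from the training data; the pseudo-inverse keeps
    # the sketch usable even when the estimate is singular.
    S_inv = np.linalg.pinv(np.atleast_2d(np.cov(X_train, rowvar=False)))
    dists = [mahalanobis(x, row, S_inv) for row in X_train]
    nearest = np.argsort(dists)[:k]
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]
```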
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Naive Bayes classifier</title>
        <p>The Naive Bayes classifier is a machine learning method used for classifying data into decision classes. Despite its simplicity, it has a wide range of applications in text classification, medical diagnosis, and system performance management. The task of the Bayes classifier is to assign a new case to one of the classes [6][7]. Each training example is described by a set of conditional attributes {a_i} and one decision attribute v. According to Bayes' theorem, the most probable class for a new object described by the values of n conditional attributes ⟨a_1, a_2, ..., a_n⟩ is the class v_MAP ∈ V that maximizes the conditional probability P(v | a_1, a_2, ..., a_n):
v_MAP = arg max_{v ∈ V} P(v) · P(a_1, a_2, ..., a_n | v)    (14)</p>
        <p>The probability P(v) can be estimated as the ratio of the number of training examples belonging to class v to the total number of training examples. To estimate P(a_1, a_2, ..., a_n | v), the Naive Bayes classifier assumes the conditional independence of the attributes:
P(a_1, a_2, ..., a_n | v) = ∏_{i=1}^{n} P(a_i | v)    (15)
The probability P(a_i | v) can be estimated as the ratio of the number of training examples in class v for which the attribute a_i has the given value to the total number of training examples in class v. Under this assumption, the class v_NB (Naive Bayes) chosen for a new example is:
v_NB = arg max_{v ∈ V} P(v) · ∏_{i=1}^{n} P(a_i | v)    (16)</p>
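        <p>Estimating the probabilities by counting, as described above, Eq. (16) can be sketched as follows (a toy illustration with our own function name; no smoothing is applied, so an unseen attribute value zeroes out a class):</p>

```python
from collections import Counter

def naive_bayes_predict(examples, labels, new_example):
    """Class maximizing P(v) * prod_i P(a_i | v), Eq. (16), with probabilities
    estimated by counting over the training examples."""
    n = len(examples)
    best_class, best_score = None, -1.0
    for v, cv in Counter(labels).items():
        score = cv / n                                   # P(v)
        for i, a in enumerate(new_example):
            # training examples of class v whose i-th attribute equals a
            match = sum(1 for ex, lab in zip(examples, labels)
                        if lab == v and ex[i] == a)
            score *= match / cv                          # P(a_i | v)
        if score > best_score:
            best_class, best_score = v, score
    return best_class
```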
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Experiments</title>
      <p>The example run of the MFCC algorithm is conducted on the file classical.00000.wav.
1. The sampling frequency for this file is f_hz = 22050 Hz. The length of the file is 30.013 s. The number of samples is n = 661794. The sound samples are s = {s_0, s_1, ..., s_{n−1}}*; a plot of the samples of this file is shown in Figure 2. The number of triangular filters is F = 26, the number of values transformed by the discrete Fourier transform is N = 512, the length of the window in samples is w = 551, the number of samples by which the window is shifted is h = 220, and the number of cepstral coefficients is C = 13.
2. After applying the high-pass filter, the samples look as shown in Figure 3.</p>
      <p>The difference between the samples before and after the filter is not visible at first glance, but applying this filter to each file improved accuracy by about 4%.
3. Triangular filters are created. According to the formulas in the algorithm description, the boundary frequencies are obtained.</p>
      <p>The array m is created according to the formula in the description, with Δ = (m_high − m_low) / 27. The resulting bin array b contains numbers ranging from 0 to ⌊N/2⌋ = 256.</p>
      <p>As can be seen, these numbers are not evenly distributed: the differences between consecutive numbers increase toward the end of the array. This is because the boundary frequencies were converted from the frequency scale in hertz to the Mel scale, which is nonlinear. The reason why a change to a nonlinear scale is required will be explained later in the example.</p>
      <p>Triangular filters are created, again according to the formula from the description. For j = 0, the filter is H_0 = {H(0, k) : k ∈ {0, 1, 2, ..., 256}*}*.</p>
      <sec id="sec-3-1">
        <title>Representing this filter on a graph (Figure 4)</title>
        <p>As can be seen, this filter is very narrow. It passes only what is at the beginning and zeroes out
the rest.</p>
        <p>After calculating all the values H(j, k), all F = 26 filters can be represented on a graph, shown in Figure 5. Due to the application of the Mel scale to distribute these filters, the highest density is at the beginning and the lowest at the end.</p>
        <p>The reason for needing the Mel scale is that it accurately represents how humans perceive sound. It turns out that most useful information is in the lower frequencies, not the higher ones. Therefore, it makes sense to place more filters at the beginning, which was achieved by converting the frequency scale in hertz to the Mel scale. Without this, all filters would be evenly distributed across the entire scale.</p>
        <p>It was tested what would happen if evenly distributed filters were used, and it degraded the accuracy by about 5%. The input signal is then divided into windows.</p>
        <p>The window index i ranges over {0, 1, ..., 3008}*.
For the window i = 0:
w(0) = {y_j : 0 ≤ j &lt; 551}* ⇒ w(0) = {−102.19, −705.89, −136.54, ..., 123.9}*
For the window i = 1:
w(1) = {y_j : 220 ≤ j &lt; 771}*, so the index j runs over {220, 221, 222, ..., 770}*.
For the window i = 3008 (the last window):
w(3008) = {y_j : 661760 ≤ j &lt; 662311}*, so the index j runs over {661760, 661761, 661762, ..., 661793, 0, 0, ..., 0}*.</p>
        <p>It is worth noting that for i = 3008 the index j goes beyond the input signal, so zeros are appended at the end.</p>
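        <p>The windowing with zero padding illustrated above can be sketched as follows (a minimal illustration; the parameter names follow the notation of Section 2.1):</p>

```python
import numpy as np

def windows(y, w=551, h=220):
    """Overlapping windows of length w with hop h; the final windows that run
    past the signal are padded with zeros, as for i = 3008 in the example."""
    out = []
    for start in range(0, len(y), h):
        win = y[start:start + w]
        out.append(np.pad(win, (0, w - len(win))))
    return np.array(out)
```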
      </sec>
      <sec id="sec-3-2">
        <title>Now the power spectral density is calculated:</title>
        <p>Having the power spectral density P(i) = A(DFT(w(i), 512)) / 512, the matrix product of P(i) and Hᵀ is calculated, which filters the frequencies according to the previously established triangular filters:
E(0) = P(0) · Hᵀ ⇒ E(0) = {10695.2993, 31658.4727, 20555.0554, ..., 18245.6147}*
Finally, the discrete cosine transform of the logarithms of E(i) is calculated, taking only the first C = 13 elements:
c(0) = DCT(L(E(0)))[:13] = DCT(L({10695.2993, 31658.4727, 20555.0554, ..., 18245.6147}*))[:13]
⇒ c(0) = {62.5650537, −2.03586229, ...}*</p>
      </sec>
      <sec id="sec-3-3">
        <title>Finally, the array M is obtained</title>
        <p>M = {c(0), c(1), c(2), ..., c(3008)}*
Thus, the input signal is represented in the form of a matrix of cepstral coefficients.</p>
        <p>Analysis of the results is conducted for 6 classes of abstraction, with the following music
genres:
• Classical music,
• Disco,
• Hip-hop,
• Metal,
• Blues,
• Country.</p>
        <p>For each genre, there are 100 assigned tracks, each lasting 30 seconds. The split between
training and test data is 70:30.</p>
        <p>Before conducting a detailed analysis, it is important to determine the most effective value of k for the KNN classifier. According to Table 2, the most effective value is k = 5; therefore, this value is adopted for the analysis.</p>
        <p>The next step is to evaluate the obtained matrices with the KNN and Naive Bayes classifiers. Performance evaluation metrics such as accuracy, loss, precision, recall, and F1 score are used to assess the effectiveness of these methods [4]. These metrics are essential for evaluating the performance of machine learning models and are described by the following equations:
Accuracy = (TP + TN) / (TP + TN + FP + FN)    (18)
Loss = (FP + FN) / (TP + TN + FP + FN)    (19)
Precision = TP / (TP + FP)    (20)
Recall = TP / (TP + FN)    (21)</p>
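        <p>Eqs. (18)-(21), together with the F1 score of Eq. (22), can be computed directly from the four counts (a minimal sketch; no guard against division by zero is included):</p>

```python
def metrics(tp, tn, fp, fn):
    """Accuracy, loss, precision, recall and F1 score from the four counts."""
    total = tp + tn + fp + fn
    accuracy = (tp + tn) / total
    loss = (fp + fn) / total
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, loss, precision, recall, f1
```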
      </sec>
      <sec id="sec-3-4">
        <title>Evaluation results</title>
        <p>F1 Score = 2 · (Precision × Recall) / (Precision + Recall)    (22)
where:
• TP (True Positive) is the number of cases where the model correctly classified positive instances,
• TN (True Negative) is the number of cases where the model correctly classified negative instances,
• FP (False Positive) is the number of cases where the model incorrectly classified negative instances as positive,
• FN (False Negative) is the number of cases where the model incorrectly classified positive instances as negative.</p>
        <p>As can be seen from Tables 2 and 3, the metric values are very good for KNN with 6 classes of abstraction, whereas Naive Bayes performs significantly worse. In terms of accuracy for the entire test set, KNN achieved 80.56%, while the Naive Bayes classifier achieved 36.67%. Figures 6 and 7 show the confusion matrices. An ideal confusion matrix has 100% on the diagonal, and the rest should be 0%. For KNN, the matrix is nearly ideal; the classifier performed worst for the disco music genre. For Naive Bayes, the confusion matrix does not resemble the ideal one; nevertheless, that algorithm performed best for the metal genre.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>The MFCC algorithm allows for highly efficient classification of music genres with the KNN classifier. The coefficient matrix effectively extracted features from the audio signal and could be used for commercial purposes, for example in medicine. Among the classifiers, KNN uses distance metrics that can be very effective in measuring similarities between musical pieces. Additionally, it does not assume any specific form of the classification function, relying instead on local similarities, which is why it worked so well here. On the other hand, the advantages of the Naive Bayes classifier are its simplicity and speed compared to KNN. Naive Bayes assumes that the features are independent, which is rarely true for audio data, where different features can be strongly correlated. In this project, only one feature was used, namely the mean of the sum of all elements of the matrix, which may have influenced the low accuracy compared to KNN. To achieve higher accuracy, more advanced methods such as neural networks (CNNs and RNNs) should be used. In the future, a spectrogram can be created from the matrix obtained with the MFCC algorithm, and the approach can be tested on more complex models to achieve better results.</p>
      <p>[5] Bartosz A. Nowak, Robert K. Nowicki, Marcin Woźniak, Christian Napoli, "Multi-class Nearest Neighbour Classifier for Incomplete Data Handling."
[6] I. Rish, "An empirical study of the naive Bayes classifier," 2001.
[7] Harry Zhang, "The Optimality of Naive Bayes," 2004.
[8] Sachin Chachada, C.-C. Jay Kuo, "Environmental sound recognition: a survey," 2014.
[9] Michael Cowling, Renate Sitte, "Comparison of techniques for environmental sound recognition," 2003.
[10] Roneel V. Sharan, Tom J. Moir, "An overview of applications and advancements in automatic sound recognition," 2016.
[11] Jia-Ching Wang, Hsiao-Ping Lee, Jhing-Fa Wang, Cai-Bei Lin, "Robust Environmental Sound Recognition for Home Automation," 2008.
[12] Junxin Chen, Wei Wang, Bo Fang, Yu Liu, Keping Yu, Victor C. M. Leung, Xiping Hu, "Exploiting Smartphone Voice Recording as a Digital Biomarker for Parkinson's Disease Diagnosis," 2023.
[13] Marcin Woźniak, Dawid Połap, "Intelligent Home Systems for Ubiquitous User Support by Using Neural Networks and Rule-Based Approach," 2020.
[14] Richard Hauxwell-Baldwin, Charlie Wilson, Tom Hargreaves, "Learning to live in a smart home," 2017.
[15] Jessamyn Dahmen, Brian L. Thomas, Diane J. Cook, Xiaobo Wang, "Activity Learning as a Foundation for Security Monitoring in Smart Homes," 2017.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Małgorzata</given-names>
            <surname>Przedpełska-Bieniek</surname>
          </string-name>
          , "
          <article-title>Dźwięk i akustyka. Nauka o dźwięku</article-title>
          ,"
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Marcin</given-names>
            <surname>Woźniak</surname>
          </string-name>
          , Jakub Siłka, Michał Wieczorek, “
          <article-title>Deep neural network correlation learning mechanism for CT brain tumor detection</article-title>
          ,”
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Junxin</given-names>
            <surname>Chen</surname>
          </string-name>
          , Zhihuan Guo, Xu Xu,
          <string-name>
            <given-names>Li-bo</given-names>
            <surname>Zhang</surname>
          </string-name>
          , Yue Teng, Yongyong Chen, Marcin Woźniak, Wei Wang, “
          <article-title>A Robust Deep Learning Framework Based on Spectrograms for Heart Sound Classification</article-title>
          ,”
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Yogesh</given-names>
            <surname>Kumar</surname>
          </string-name>
          , Apeksha Koul, Kamini, Marcin Woźniak, Jana Shafi, Muhammad Fazal Ijaz, "
          <article-title>Automated detection and recognition system for chewable food items using advanced deep learning models</article-title>
          ,"
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>