<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Comparative Analysis of ARIMA, Deep Learning, and Lasso Regression Models for Time Series Forecasting: Assessing Accuracy, Robustness, and Computational Efficiency</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sanjay Kumar</string-name>
          <email>k.sanjay123@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Meenakshi Srivastava</string-name>
          <email>msrivastava@lko.amity.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vijay Prakash</string-name>
          <email>vijaylko@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Amity University Lucknow</institution>
          ,
          <country country="IN">India</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Babu Banarsi Das University Lucknow</institution>
          ,
          <country country="IN">India</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This paper provides a comprehensive review of time-series forecasting models for predicting the performance of Indian mutual funds. Specifically, we evaluate the effectiveness of three popular approaches: ARIMA, deep learning, and Lasso regression. Using a dataset of historical mutual fund data from the Indian market, we compare the predictive accuracy of these models using various evaluation metrics. Our findings indicate that Lasso regression outperforms both ARIMA and deep learning (LSTM) models in capturing the complex patterns and dynamics of mutual fund data. These findings offer valuable insights for investors and financial practitioners, shedding light on the most effective modeling approaches for predicting Indian mutual fund performance. This study contributes to the field of time-series forecasting by providing a comprehensive comparison of ARIMA, deep learning, and Lasso regression models, and the findings can guide researchers and practitioners in selecting the most suitable model for specific forecasting tasks based on the desired balance between accuracy, robustness, and computational efficiency. The proposed research also focuses on promoting sustainability in the investment domain. Lasso regression models exhibit superior accuracy and competitive performance at a lower computational cost. The popular measures MSE, RMSE, MAE, R2 Score, MAPE, and MPE are used to assess the accuracy of the models.</p>
      </abstract>
      <kwd-group>
        <kwd>Time-series forecasting</kwd>
        <kwd>performance analysis</kwd>
        <kwd>ARIMA</kwd>
        <kwd>deep learning</kwd>
        <kwd>Lasso</kwd>
        <kwd>regression</kwd>
        <kwd>predictive</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Machine learning and statistical models have shown encouraging outcomes when applied to
financial time-series data, showcasing their potential in forecasting stock prices, identifying
market trends, and predicting various financial indicators. Sustainability in investment has
gained significant traction in recent years as more investors recognize the importance of
long-term sustainability for both financial returns and broader societal well-being. Various
investment products, such as sustainable mutual funds, exchange-traded funds (ETFs), and green
bonds, cater to investors looking to align their financial goals with their values. Ensuring
sustainability in investment involves a combination of research, analysis, due diligence, and
ongoing monitoring.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Review of Literature</title>
      <p>In recent years, various time-series forecasting models have gained prominence in the financial
domain for their potential to capture the complex dynamics of financial data.</p>
      <p>Zeroual, A. et al. [1] study five deep learning models for forecasting new and recovered
COVID-19 cases; the Variational Autoencoder (VAE) algorithm shows superior performance among
them. Benevento, E. et al. [2] evaluate the predictive performance of lasso regression, random
forest, support vector regression, artificial neural networks, and ensemble methods using a range
of error metrics and computation-time measurements; the results reveal that the ensemble method
surpasses the other approaches in accurately predicting latency. Zhang, L. et al. [3] introduce
a method in which the stochastic trend is removed from the SSE Composite Index to obtain
de-noised training data for a Support Vector Machine (SVM); the SVM is then trained on this
de-noised data to make predictions on the test data, whereas it achieves only a 25% hit rate
when predicting with the noisy training data. Guo, K. et al. [4] apply the ARIMA model to both
the original data series and the logarithmic series of the S&amp;P 500 exponential weekly data
series and find that the model predicts stock prices accurately. Pandey, A. et al. [5] develop a
model that helps investors forecast prices accurately, regardless of the strategy employed; the
primary objective of their research is to analyze and predict changes in the stock market,
examining past historical trends to identify and forecast patterns that will emerge in the
coming days. Xu, Y. et al. [6] present a predictive analysis conducted across various economic
cycles, uncovering that the social media sentiment index demonstrates the strongest predictive
ability during periods of economic expansion. Dai, Z. et al. [7] predict stock return
volatility using the partial least squares technique, which identifies crucial predictors from
a data-rich context; their findings illustrate that the partial least squares approach improves
the accuracy of stock return volatility predictions in data-rich environments, surpassing
alternative models and marking a significant advance over benchmark models. Ma, F. et al. [8]
propose the use of dimensionality-reduction and shrinkage techniques to forecast stock market
returns, providing fresh insights into stock market return projections by taking macroeconomic
fundamentals as the basis for analysis. Li, X. et al. [9] propose an MS-MIDAS-LASSO model that
shows superior predictive accuracy compared with both the conventional LASSO strategy and its
regime-switching extension; notably, its outstanding predictive performance remains unchanged
even in the face of the onset of the COVID-19 pandemic. Ren, X. et al. [10] find that a Fourier
transform-based LSTM method enhances the prediction accuracy of stock price fluctuation
dynamics, an improvement observed from both statistical and economic standpoints when the role
of oil shocks is exploited in the analysis. Zhu, Z. and He, K. [11] note that finding the best
models to predict stock price trends has always been a topic of great interest and is closely
related to investor behavior; however, LSTM models still need performance improvements to
reduce distortion, and more models for predicting stock prices are expected in the future.
Lee, H. Y. et al. [12] aim to extract valuable outlier information from the residuals of ARIMA
modeling using the Continuous Wavelet Transform (CWT); the obtained CWT information is then
incorporated into the ARIMA forecasts, producing long-term heterogeneous forecasts. Liu, T. et
al. [13] suggest a new stock price forecasting model named VML that aims to enhance forecast
accuracy; the approach splits the decomposed subseries into multiple tasks using the MAML
algorithm, which facilitates training the LSTM model with initial parameters that generalize
well, and experimental outcomes on Chinese and American stock market datasets demonstrate that
the proposed method significantly enhances prediction accuracy. Nair, A. V. and Narayanan, J.
[14] suggest a stock market forecasting model to anticipate the future performance of a
company's stock; the incorporation of machine learning techniques represents the latest
advancement in market analysis technology, enabling current stock index values to be determined
from past values. Zeng, L. et al. [15] propose an optimal combinatorial framework for
agricultural commodity price forecasting that integrates a decomposition-reconstruction
ensemble technique with an enhanced, nature-inspired global optimization algorithm. Wu, D. et
al. [16] introduce a hybrid stock market forecasting model that merges a multilayer perceptron
artificial neural network (MLP-ANN) with the conventional Altman Z-score model; empirical
analysis demonstrates that the hybrid neural network model achieves a notable average correct
classification rate. Isabona, J. et al. [17] indicate that the prediction errors of their
suggested MLP model, compared with measured data, are highly favorable and surpass those
obtained with the conventional logarithmic distance-based path loss model. Li, G. et al. [18]
propose the PCC-BLS framework for selecting multi-indicator features for stock price
prediction; it combines the Pearson correlation coefficient (PCC) with the broad learning
system (BLS), using PCC to select input features from a pool of 35 options encompassing
original stock prices, technical indicators, and financial indicators. Banerjee, S. and
Mukherjee, D. [19] focus their study on nonparametric approaches such as stacked multilayer
perceptrons (MLP), long short-term memory (LSTM), and gated recurrent units (GRU);
specifically, bidirectional LSTM (BLSTM) and bidirectional GRU (BGRU) networks are employed to
forecast short-term stock prices for three NSE-listed banks, and the performance of these
models is compared against a flat neural network benchmark. Ji, X. et al. [20] introduce a
novel forecasting approach that combines conventional financial indicators with social media
text features as inputs for predictive models, and propose a stock price prediction model
incorporating both traditional financial variables and social media text features extracted
through deep learning. Kumar, D. [21] argues that stock market prediction is a cohesive
process, implying the need for a closer examination of the specific parameters relevant to
stock market forecasting. Tanwar, R. et al. [22] propose a hybrid deep learning approach, a
model combining a Convolutional Neural Network and Long Short-Term Memory (CNN-LSTM), designed
for the identification of stress. Tanwar, R. et al. [23] introduce a hybrid deep learning model
incorporating an attention mechanism, which allows thorough feature extraction and dynamic
prioritization of information. Makwana, Y. et al. [24] conduct a comparative analysis of
different methods and technologies, with a particular focus on the effectiveness of
Convolutional Neural Networks (CNN) in food recognition, revealing insights into various CNN
models and showcasing their accuracy and outcomes in that context.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Problem Statement</title>
      <p>The problem at hand is the lack of a comprehensive assessment of time-series forecasting
models for predicting the performance of Indian mutual funds. Although various approaches,
such as ARIMA, deep learning (LSTM), and Lasso regression, have shown promise in other
domains, their effectiveness and comparative performance in the context of Indian mutual
funds remain unclear. The evaluation seeks to address this research gap by conducting a
comprehensive assessment of the ARIMA, deep learning, and Lasso regression approaches.</p>
      <p>i. This analysis will provide insights into the models' ability to accurately predict mutual
fund performance.</p>
      <p>ii. This evaluation will help determine the models' ability to adapt and provide reliable
forecasts under different circumstances.</p>
      <p>iii. This analysis will provide insights into how well the models can generalize their
predictions beyond the training data and make accurate forecasts for unseen mutual fund
performance.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Data for proposed model</title>
      <p>
        This paper focuses on analyzing historical mutual fund data for TATAPOWER. The data,
obtained from the Yahoo Finance site, covers the period from January 1, 2011, to April 28,
2023. To facilitate analysis, the data is divided into training and testing segments, with 80%
allocated for training and 20% for testing. Prediction tasks are then carried out on this
dataset using ARIMA (0, 1, 0), Deep Learning (LSTM), and Lasso Regression models (Table 1).
      </p>
      <table-wrap id="tab1">
        <label>Table 1</label>
        <caption>
          <p>Sample Dataset (TATAPOWER)</p>
        </caption>
        <table>
          <thead>
            <tr><th>Date</th><th>Open</th><th>High</th><th>Low</th><th>Close</th><th>Adj Close</th><th>Volume</th></tr>
          </thead>
          <tbody>
            <tr><td>2011-01-03</td><td>133.558380</td><td>133.558380</td><td>132.014343</td><td>132.665741</td><td>102.871704</td><td>1747585</td></tr>
            <tr><td>2011-01-04</td><td>132.506500</td><td>133.558380</td><td>131.584915</td><td>133.235092</td><td>103.313179</td><td>2267182</td></tr>
            <tr><td>2011-01-05</td><td>132.979370</td><td>135.777908</td><td>132.120499</td><td>135.189255</td><td>104.828468</td><td>3228574</td></tr>
            <tr><td>2011-01-06</td><td>134.619888</td><td>136.163925</td><td>133.321945</td><td>135.034851</td><td>104.708755</td><td>2761494</td></tr>
            <tr><td>2011-01-07</td><td>133.881653</td><td>135.763443</td><td>132.796005</td><td>134.065002</td><td>103.956696</td><td>3027490</td></tr>
            <tr><td>...</td><td>...</td><td>...</td><td>...</td><td>...</td><td>...</td><td>...</td></tr>
            <tr><td>2023-04-24</td><td>196.500000</td><td>196.699997</td><td>194.800003</td><td>195.850006</td><td>194.042862</td><td>5017631</td></tr>
            <tr><td>2023-04-25</td><td>195.850006</td><td>198.800003</td><td>195.350006</td><td>197.649994</td><td>195.826233</td><td>5957551</td></tr>
            <tr><td>2023-04-26</td><td>197.649994</td><td>198.949997</td><td>196.149994</td><td>198.199997</td><td>196.371170</td><td>4910837</td></tr>
            <tr><td>2023-04-27</td><td>198.449997</td><td>199.949997</td><td>197.649994</td><td>198.500000</td><td>196.668396</td><td>5215692</td></tr>
            <tr><td>2023-04-28</td><td>199.500000</td><td>201.550003</td><td>199.000000</td><td>201.100006</td><td>199.244415</td><td>7951645</td></tr>
          </tbody>
        </table>
      </table-wrap>
      <p>The dataset contains 3038 rows × 6 columns of TATAPOWER mutual fund data, from 2011-01-03
to 2023-04-28.</p>
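      <p>The chronological 80/20 split described above can be sketched in a few lines of Python. This is an illustrative stand-in, not the authors' code: the real study pulls TATAPOWER prices from Yahoo Finance (e.g. via a package such as yfinance), whereas a synthetic series is used here so the sketch is self-contained.</p>

```python
# Sketch of the chronological 80/20 train/test split used in this study.
# A synthetic series stands in for the 3038 daily TATAPOWER closing prices.

def train_test_split_series(series, train_frac=0.8):
    """Split a time series chronologically (no shuffling)."""
    cut = int(len(series) * train_frac)
    return series[:cut], series[cut:]

closes = [100.0 + 0.05 * t for t in range(3038)]  # synthetic closing prices
train, test = train_test_split_series(closes)
print(len(train), len(test))  # 2430 608
```

      <p>Because the data is a time series, the split preserves ordering: the test segment is the most recent 20% of observations, never a random sample.</p>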
    </sec>
    <sec id="sec-5">
      <title>5. Research methodologies</title>
      <sec id="sec-5-1">
        <title>5.1. ARIMA (Autoregressive Integrated Moving Average)</title>
        <p>The Autoregressive Integrated Moving Average (ARIMA) model is a commonly employed
technique for time-series forecasting. It incorporates three essential components:
autoregression (AR), differencing (I), and moving average (MA). The ARIMA model is defined by
the order assigned to each component, denoted as ARIMA (p, d, q). In this notation, 'p' represents
the autoregressive order, 'd' represents the differencing order, and 'q' represents the moving
average order.</p>
      </sec>
      <sec id="sec-5-2">
        <title>5.1.1. Autoregressive Component (AR)</title>
        <p>The autoregressive component of the model captures the linear association between the
present observation and its previous values. The AR component of order p is represented by the
equation:</p>
        <p>AR(p): Xt = c + Σ(ϕi ∗ Xt−i) + εt (1)</p>
        <p>Here, Xt represents the current observation, c is a constant term, ϕi denotes the
autoregressive coefficients for the lagged values Xt−i, and εt is the error term at time t. The
differencing component (I) of order d replaces the series with its d-th difference (for
example, Xt′ = Xt − Xt−1 when d = 1) in order to render it stationary.</p>
      </sec>
      <sec id="sec-5-3">
        <title>5.1.2. Moving Average Component (MA)</title>
        <p>The moving average component addresses the interdependence between the current
observation and the error terms within the model. It acknowledges the relationship between
them. The MA component of order q is represented by the equation:</p>
        <p>MA(q): Xt = c + εt + Σ(θi ∗ εt−i) (2)</p>
        <p>Here, θi represents the moving average coefficients for the lagged error terms εt−i.
Combining the three components, the ARIMA (p, d, q) model is given by:</p>
        <p>ARIMA (p, d, q): Xt = c + Σ(ϕi ∗ Xt−i) + εt + Σ(θi ∗ εt−i) (3)</p>
        <p>The ARIMA model aims to estimate the values of the parameters (p, d, q) that minimize the
disparity between the observed and the predicted values. This estimation is commonly
accomplished through techniques like maximum likelihood estimation.</p>
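        <p>Because this study uses the order (0, 1, 0), the fitted model reduces to a random walk with drift: with p = q = 0 and d = 1, equation (3) becomes Xt = Xt−1 + c + εt, so the forecast is simply the last observation plus the estimated drift. The following Python sketch illustrates this special case; it is an illustrative toy, not the authors' implementation (a production model would typically use a library such as statsmodels).</p>

```python
# Minimal sketch of an ARIMA(0, 1, 0) forecast -- the order used in this
# study. The first differences are modelled as a constant c plus noise, so
# the h-step-ahead forecast is the last observation plus h * c.

def arima_010_forecast(series, steps):
    diffs = [series[i] - series[i - 1] for i in range(1, len(series))]
    c = sum(diffs) / len(diffs)          # estimated drift (constant term)
    last = series[-1]
    return [last + c * h for h in range(1, steps + 1)]

history = [10.0, 11.0, 12.0, 13.0]       # perfectly linear toy series
print(arima_010_forecast(history, 3))    # -> [14.0, 15.0, 16.0]
```

        <p>On the toy series the estimated drift is exactly 1.0, so the forecasts continue the trend; on real price data the drift is the average daily change over the training window.</p>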
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Deep learning</title>
      <p>Deep learning, a branch of machine learning, concentrates on training artificial neural
networks with multiple layers to acquire knowledge and make predictions based on intricate
data. At the heart of deep learning lies artificial neural networks, consisting of interconnected
layers of artificial neurons (also referred to as nodes or units). Each neuron conducts a weighted
summation of its inputs, applies an activation function, and generates an output.</p>
      <p>
        The mathematical representation of the output of a neuron can be expressed as:
z = w₁x₁ + w₂x₂ + . . . + wₙxₙ + b (4)
      </p>
      <p>In this context, x₁, x₂, ..., xₙ denote the input values or activations from the preceding layer,
w₁, w₂, ..., wₙ refer to the respective weights, b represents the bias term, and z denotes the
weighted sum of inputs.</p>
      <p>
        To train a deep learning model, a loss or cost function is necessary, which measures the
disparity between the predicted output and the true output. The objective is to minimize this
difference using an optimization algorithm called backpropagation. Backpropagation calculates
the gradient of the loss function with respect to the weights and biases in the network,
enabling their adjustment in a manner that reduces the error. The gradient descent algorithm is
commonly employed for this purpose. The weights and biases are updated according to the
following equations:
wᵢ(new) = wᵢ(old) − learning rate ∗ ∂loss/∂wᵢ (5)
b(new) = b(old) − learning rate ∗ ∂loss/∂b (6)
      </p>
      <p>Here, wᵢ(new) and b(new) represent the updated weights and biases, wᵢ(old) and b(old) are
the current weights and biases, the learning rate is a hyperparameter that determines the step
size of the update, and ∂loss/∂wᵢ and ∂loss/∂b are the derivatives of the loss function with
respect to the weights and biases.</p>
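      <p>Equations (4) to (6) can be illustrated with a single linear neuron trained by gradient descent on a squared-error loss. This is a minimal sketch; all function names, inputs, and hyperparameter values below are hypothetical and chosen only for illustration.</p>

```python
# Sketch of equations (4)-(6): a single linear neuron trained by gradient
# descent on the squared-error loss L = (z - y)^2.

def forward(w, b, x):
    # Equation (4): weighted sum of inputs plus bias.
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def sgd_step(w, b, x, y_true, lr):
    # For L = (z - y)^2: dL/dwi = 2*(z - y)*xi and dL/db = 2*(z - y).
    z = forward(w, b, x)
    err = 2.0 * (z - y_true)
    w_new = [wi - lr * err * xi for wi, xi in zip(w, x)]  # equation (5)
    b_new = b - lr * err                                   # equation (6)
    return w_new, b_new

w, b = [0.0, 0.0], 0.0
for _ in range(100):
    w, b = sgd_step(w, b, x=[1.0, 2.0], y_true=5.0, lr=0.05)
print(round(forward(w, b, [1.0, 2.0]), 3))  # converges toward 5.0
```

      <p>Each update moves the output a fixed fraction of the way toward the target, so the error shrinks geometrically; a learning rate that is too large would instead make the updates diverge.</p>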
    </sec>
    <sec id="sec-7">
      <title>7. Lasso Regression</title>
      <p>Lasso Regression, which stands for Least Absolute Shrinkage and Selection Operator, is a
linear regression technique that integrates regularization to enhance model performance and
select relevant features. Given a dataset with n observations and p features, let X be an n × p
matrix representing the predictor variables, y be an n-dimensional vector representing the
response variable, and β be a p-dimensional vector representing the coefficients to be
estimated.</p>
      <p>
        The formulation of the Lasso Regression model can be expressed as follows:
y = β₀ + β₁x₁ + β₂x₂ + . . . + βₚxₚ + ɛ (7)
where ɛ is the error term.
      </p>
      <p>The primary goal of Lasso Regression is to minimize the sum of squared residuals while
adhering to a constraint on the sum of the absolute values of the coefficients:</p>
      <p>
        minimize: (1/2n) ∗ Σ(yᵢ − (β₀ + β₁x₁ᵢ + β₂x₂ᵢ + . . . + βₚxₚᵢ))² (8)
subject to: Σ|βⱼ| ≤ t,
where i ranges from 1 to n, j ranges from 1 to p, and t is a tuning parameter that controls the
level of regularization.
      </p>
      <p>The constraint Σ|βⱼ| ≤ t encourages sparsity in the model, meaning it promotes the selection
of a subset of relevant features by driving some coefficients to zero. This characteristic makes
Lasso Regression valuable for feature selection, since it automatically conducts variable
selection by shrinking the coefficients of irrelevant features towards zero.</p>
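      <p>The sparsity-inducing behavior described above can be illustrated with a small coordinate-descent solver built around the soft-thresholding operator. This is a teaching sketch under simplifying assumptions (unpenalized intercept omitted, fixed iteration count, unscaled penalty), not the study's implementation; in practice a library routine such as scikit-learn's Lasso would be used.</p>

```python
# Illustrative coordinate-descent Lasso solver. The soft-thresholding step
# is what drives small coefficients to exactly zero (the sparsity property
# discussed in the text).

def soft_threshold(rho, lam):
    if rho > lam:
        return rho - lam
    if -rho > lam:
        return rho + lam
    return 0.0

def lasso_cd(X, y, lam, iters=200):
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # Partial residual, excluding feature j's own contribution.
            resid = [y[i] - sum(beta[k] * X[i][k] for k in range(p) if k != j)
                     for i in range(n)]
            rho = sum(X[i][j] * resid[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / z
    return beta

# y depends only on the first feature; the second is pure noise.
X = [[1.0, 0.1], [2.0, -0.2], [3.0, 0.15], [4.0, -0.05]]
y = [2.0, 4.0, 6.0, 8.0]
beta = lasso_cd(X, y, lam=0.5)
print(beta)  # second coefficient is driven to exactly zero
```

      <p>On this toy problem the coefficient of the noise feature lands exactly at zero, while ordinary least squares would assign it a small nonzero value; this is the automatic variable selection the section describes.</p>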
    </sec>
    <sec id="sec-8">
      <title>8. Findings and Discussions</title>
      <sec id="sec-8-1">
        <title>8.1. ARIMA Model (Result analysis)</title>
        <p>The figure compares the actual closing price of the TATAPOWER mutual fund with the
closing price predicted by the ARIMA model. The forecasted and actual closing prices are very
close to each other, so the performance of the model is adequate. The MAE of 3.020% and RMSE of
4.764% also reflect the accuracy of the model.</p>
      </sec>
      <sec id="sec-8-2">
        <title>8.2. Deep Learning Model (Result analysis)</title>
        <p>The figure shows the real closing price of the TATAPOWER mutual fund alongside the
closing price predicted by the proposed deep learning (LSTM) model. The actual and predicted
values of this mutual fund are very close to each other. The forecasting analysis also confirms
the accuracy of the model, with an MAE of 3.3140% and an RMSE of 4.7740%; these values differ
only slightly from those of the ARIMA model.</p>
      </sec>
      <sec id="sec-8-3">
        <title>8.3. Lasso Regression Model (Result analysis)</title>
        <p>The figure shows the real closing price of the TATAPOWER mutual fund alongside the
closing price predicted by the proposed Lasso Regression model. The actual and predicted
closing prices of this mutual fund are very close to each other. The forecasting analysis also
confirms the accuracy of the model, with an MAE of 0.0274% and an RMSE of 0.0333%. This model
performs more accurately than both of the models above.</p>
      </sec>
    </sec>
    <sec id="sec-9">
      <title>9. Model Evaluation Criteria</title>
      <sec id="sec-9-1">
        <title>9.1. Mean Squared Error (MSE)</title>
        <p>MSE is another way to calculate the accuracy and error of the forecast model used:</p>
        <p>
          MSE = (1/n) ∗ Σ(Yᵢ − Ŷᵢ)² (9)
Ŷᵢ is the predicted ith value and Yᵢ is the actual/observed value.
        </p>
      </sec>
      <sec id="sec-9-2">
        <title>9.2. Root-mean-square deviation (RMSE)</title>
        <p>RMSE is another way to calculate the accuracy of the proposed model, but it expresses the
error in terms of a standard deviation. The final output is one standard deviation of the
magnitude of the error, and the individual calculations are reported as residuals:</p>
        <p>
          RMSE = √((1/n) ∗ Σ(Yᵢ − Ŷᵢ)²) (10)
Ŷᵢ is the predicted ith value and Yᵢ is the actual/observed value.
        </p>
      </sec>
      <sec id="sec-9-3">
        <title>9.3. Mean absolute percentage error (MAPE)</title>
        <p>MAPE is a formula for calculating the accuracy of estimates. It is computed by taking the
difference between the actual value and the predicted value and dividing that difference by the
actual value.</p>
        <p>
          MAPE = (100/n) ∗ Σ|(Aₜ − Fₜ)/Aₜ| (11)
Fₜ is the predicted value and Aₜ is the actual/observed value.
        </p>
        <p>[Table: accuracy results of the models in terms of Mean Squared Error (MSE), Root Mean
Squared Error (RMSE), Mean Absolute Error (MAE), R2 Score, Explained Variance Score, Mean
Absolute Percentage Error (MAPE), and Mean Percentage Error (MPE); the tabulated values are not
recoverable from the source.]</p>
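        <p>The error measures in equations (9) to (11) translate directly into short Python helpers. The function names below are our own, illustrative choices; in practice the equivalent routines from a metrics library could be used instead.</p>

```python
# Sketch implementations of the evaluation metrics in equations (9)-(11).
import math

def mse(actual, predicted):
    # Equation (9): mean of squared residuals.
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    # Equation (10): square root of the MSE.
    return math.sqrt(mse(actual, predicted))

def mape(actual, predicted):
    # Equation (11): mean absolute percentage error.
    # Assumes no actual value is zero.
    return 100.0 / len(actual) * sum(abs((a - p) / a)
                                     for a, p in zip(actual, predicted))

actual, predicted = [100.0, 200.0, 400.0], [110.0, 190.0, 400.0]
print(mse(actual, predicted))   # (100 + 100 + 0) / 3, approx. 66.667
print(rmse(actual, predicted))  # approx. 8.165
print(mape(actual, predicted))  # (10% + 5% + 0%) / 3 = 5.0
```

        <p>Note that MAPE is scale-free (a percentage), which is why the Lasso results above can be compared with the ARIMA and LSTM results despite differences in price level.</p>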
      </sec>
    </sec>
    <sec id="sec-10">
      <title>Conclusion</title>
      <p>In conclusion, this study aimed to perform a Comparative Analysis of ARIMA, Deep Learning,
and Lasso Regression Models for Time Series Forecasting on an Indian mutual fund dataset.
Through a comprehensive evaluation and comparison of these models, several significant
findings have emerged. Firstly, the ARIMA model exhibited robust performance in capturing the
temporal patterns and trends in the mutual fund data. Secondly, the deep learning models,
particularly the long short-term memory (LSTM) networks, demonstrated comparable
predictive capabilities to ARIMA. Lastly, the Lasso regression approach, which leverages
regularization techniques, offered a unique perspective by incorporating variable selection and
regularization into the forecasting process. It proved to be effective in handling multicollinearity
and identifying significant predictors for mutual fund performance. Table-5 shows the accuracy
results of the different models; the Lasso Regression model outperforms both the Deep Learning
and ARIMA models. Sustainability in investment refers to the practice of considering
environmental, social,
and governance (ESG) factors when making investment decisions. It goes beyond traditional
financial analysis by evaluating how a company's operations and practices impact the planet,
society, and its long-term performance. The goal of sustainable investing is to generate positive
financial returns while also promoting positive outcomes for the environment and society. It is
crucial to acknowledge that the choice of an appropriate forecasting model should consider
multiple factors, such as the specific objectives, characteristics of the data, and the desired
balance between accuracy and interpretability. Researchers and practitioners can leverage the
insights gained from this study to make informed decisions when selecting a time-series
forecasting model for Indian mutual fund performance analysis. Additionally, further research
could explore ensemble techniques that combine the strengths of different models to enhance
forecasting accuracy and robustness.</p>
      <p>[16] Wu, D., Ma, X., &amp; Olson, D. L. (2022). Financial distress prediction using
integrated Z-score and multilayer perceptron neural networks. Decision Support Systems, 159,
113814.
[17] Isabona, J., Imoize, A. L., Ojo, S., Karunwi, O., Kim, Y., Lee, C. C., &amp; Li, C. T. (2022).</p>
      <p>Development of a multilayer perceptron neural network for optimal predictive modeling in
urban microcellular radio environments. Applied Sciences, 12(11), 5713.
[18] Li, G., Zhang, A., Zhang, Q., Wu, D., &amp; Zhan, C. (2022). Pearson correlation
coefficient-based performance enhancement of Broad Learning System for stock price prediction.
IEEE Transactions on Circuits and Systems II: Express Briefs, 69(5), 2413-2417.
[19] Banerjee, S., &amp; Mukherjee, D. (2022). Short Term Stock Price Prediction in Indian Market: A</p>
      <p>Neural Network Perspective. Studies in Microeconomics, 10(1), 23-49.
[20] Ji, X., Wang, J., &amp; Yan, Z. (2021). A stock price prediction method based on deep
learning technology. International Journal of Crowd Science, 5(1), 55-72.
[21] Kumar, D., Sarangi, P. K., &amp; Verma, R. (2022). A systematic review of stock market
prediction using machine learning and statistical techniques. Materials Today: Proceedings, 49,
3187-3191.
[22] Tanwar, R., Phukan, O. C., Singh, G., &amp; Tiwari, S. (2022). CNN-LSTM Based Stress</p>
      <p>Recognition Using Wearables.
[23] Tanwar, R., Phukan, O. C., Singh, G., Pal, P. K., &amp; Tiwari, S. (2024). Attention based
hybrid deep learning model for wearable based stress recognition. Engineering Applications of
Artificial Intelligence, 127, 107391.
[24] Makwana, Y., Iyer, S. S., &amp; Tiwari, S. (2022). The food recognition and nutrition
assessment from images using artificial intelligence: a survey. ECS Transactions, 107(1), 3547.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Zeroual</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Harrou</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dairi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Sun</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          (
          <year>2020</year>
          ).
          <article-title>Deep learning methods for forecasting COVID-19 time-Series data: A Comparative study</article-title>
          .
          <source>Chaos, Solitons &amp; Fractals</source>
          ,
          <volume>140</volume>
          ,
          <fpage>110121</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Benevento</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aloini</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Squicciarini</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          (
          <year>2023</year>
          ).
          <article-title>Towards a real-time prediction of waiting times in emergency departments: A comparative analysis of machine learning techniques</article-title>
          .
          <source>International Journal of Forecasting</source>
          ,
          <volume>39</volume>
          (
          <issue>1</issue>
          ),
          <fpage>192</fpage>
          -
          <lpage>208</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xiang</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Pan</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          (
          <year>2023</year>
          ).
          <article-title>A Hybrid Forecasting Method for Anticipating Stock Market Trends via a Soft-Thresholding De-noise Model and Support Vector Machine (SVM)</article-title>
          .
          <source>World Basic and Applied Sciences Journal</source>
          ,
          <volume>13</volume>
          ,
          <fpage>597</fpage>
          -
          <lpage>602</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Guo</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jiang</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          (
          <year>2023</year>
          ).
          <article-title>Prediction of S&amp;P500 Stock Index Using ARIMA and Linear Regression</article-title>
          .
          <source>Highlights in Science, Engineering and Technology</source>
          ,
          <volume>38</volume>
          ,
          <fpage>399</fpage>
          -
          <lpage>407</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Pandey</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Singh</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hadiyuono</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mourya</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Rasool</surname>
            ,
            <given-names>M. J.</given-names>
          </string-name>
          (
          <year>2023</year>
          , January).
          <article-title>Using ARIMA and LSTM to Implement Stock Market Analysis</article-title>
          .
          <source>In 2023 International Conference on Artificial Intelligence and Smart Communication (AISC)</source>
          (pp.
          <fpage>935</fpage>
          -
          <lpage>940</lpage>
          ). IEEE.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Xu</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Liang</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>2023</year>
          ).
          <article-title>Sentiment indices and stock returns: Evidence from China</article-title>
          .
          <source>International Journal of Finance &amp; Economics</source>
          ,
          <volume>28</volume>
          (
          <issue>1</issue>
          ),
          <fpage>1063</fpage>
          -
          <lpage>1080</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Dai</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          (
          <year>2023</year>
          ).
          <article-title>Forecasting stock return volatility in data-rich environment: A new powerful predictor</article-title>
          .
          <source>The North American Journal of Economics and Finance</source>
          ,
          <volume>64</volume>
          ,
          <fpage>101845</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Ma</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lu</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Huang</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          (
          <year>2022</year>
          ).
          <article-title>Macroeconomic attention and stock market return predictability</article-title>
          .
          <source>Journal of International Financial Markets, Institutions and Money</source>
          ,
          <volume>79</volume>
          ,
          <fpage>101603</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liang</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Ma</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          (
          <year>2022</year>
          ).
          <article-title>Forecasting stock market volatility with a large number of predictors: New evidence from the MS-MIDAS-LASSO model</article-title>
          .
          <source>Annals of Operations Research</source>
          ,
          <fpage>1</fpage>
          -
          <lpage>40</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Ren</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xu</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Duan</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          (
          <year>2022</year>
          ).
          <article-title>Fourier transform based LSTM stock prediction model under oil shocks</article-title>
          .
          <source>Quantitative Finance and Economics</source>
          ,
          <volume>6</volume>
          (
          <issue>2</issue>
          ),
          <fpage>342</fpage>
          -
          <lpage>358</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Zhu</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>He</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          (
          <year>2022</year>
          ).
          <article-title>Prediction of Amazon's Stock Price Based on ARIMA, XGBoost, and LSTM Models</article-title>
          .
          <source>Proceedings of Business and Economic Studies</source>
          ,
          <volume>5</volume>
          (
          <issue>5</issue>
          ),
          <fpage>127</fpage>
          -
          <lpage>136</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>H. Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Beh</surname>
            ,
            <given-names>W. L.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Lem</surname>
            ,
            <given-names>K. H.</given-names>
          </string-name>
          (
          <year>2023</year>
          ).
          <article-title>Forecasting with information extracted from the residuals of ARIMA in financial time series using continuous wavelet transform</article-title>
          .
          <source>International Journal of Business Intelligence and Data Mining</source>
          ,
          <volume>22</volume>
          (
          <issue>1-2</issue>
          ),
          <fpage>70</fpage>
          -
          <lpage>99</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ma</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>2022</year>
          ).
          <article-title>A stock price prediction method based on meta-learning and variational mode decomposition</article-title>
          .
          <source>Knowledge-Based Systems</source>
          ,
          <volume>252</volume>
          ,
          <fpage>109324</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Nair</surname>
            ,
            <given-names>A. V.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Narayanan</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2022</year>
          , August).
          <article-title>Indian Stock Market Forecasting using Prophet Model</article-title>
          .
          <source>In 2022 International Conference on Connected Systems &amp; Intelligence (CSI)</source>
          (pp.
          <fpage>1</fpage>
          -
          <lpage>7</lpage>
          ). IEEE.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Zeng</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ling</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Jiang</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          (
          <year>2023</year>
          ).
          <article-title>Optimal forecast combination based on PSOCS approach for daily agricultural future prices forecasting</article-title>
          .
          <source>Applied Soft Computing</source>
          ,
          <volume>132</volume>
          ,
          <fpage>109833</fpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>