                  Stochastic Optimal Growth Model
                     with S-Shaped Utility Function


                   S. V. Mironov, A. Faizliev, S. P. Sidorov, A. Gudkov⋆

               Saratov State University, Russian Federation http://www.sgu.ru



        Abstract.    In this paper we examine the problem of finding the optimal solu-
        tion of the stochastic dynamic growth problem with infinite time horizon for
        a decision maker described by prospect theory (a PT decision maker), who
        has an asymmetric attitude with respect to a reference level of consumption.
        We compare the optimal solution of the PT decision maker with the optimal
        solution of the decision maker with the classic power utility. Numerical results
        show that the optimal behavior of the PT decision maker is quite different from
        the behavior of the power utility investor: the value functions of the PT decision
        maker are still monotone increasing, but non-concave; the optimal investment
        strategies of the PT decision maker are non-monotone.

        Keywords:     stochastic dynamic programming; prospect theory; optimal growth
        model.


1     Introduction
Let us consider the following stochastic optimal growth model [1]
               V(k_0, \theta_0) = \max_{\{c_t\}} \mathbb{E}\Big\{ \sum_{t=0}^{\infty} \beta^t u(c_t) \Big\}, \qquad c_t = F(k_t, \theta_t) - k_{t+1},                    (1)

where c_t is consumption, k_t is the capital invested at time t, (θ_t)_{t≥0} is a stochastic
technology process, F(·, ·) is a net production function, E(·) is the expectation operator,
V is the value function, k_0, θ_0 are the initial states of the capital and technology
processes respectively, and 0 < β < 1.
    The process (k_t)_{t≥0} is a discrete-time, continuous-state process. We will suppose
that the process (θ_t)_{t≥0} is a discrete-time Markov chain, i.e. a stochastic process
satisfying the Markov property, with state space S = {s_1, . . . , s_d} and transition
matrix Π = (π_{jk}), π_{jk} = P(θ_{t+1} = s_k | θ_t = s_j).
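    To make the technology process concrete, the following minimal sketch simulates
such a discrete-time Markov chain. It is written in Python (the paper's own code,
see Section 3.3, is in MatLab), and the two state values in the example are hypothetical.

    import numpy as np

    def simulate_chain(states, Pi, theta0_index, T, rng=None):
        """Simulate T steps of a discrete-time Markov chain with the given state
        values and transition matrix Pi[j, k] = P(theta_{t+1} = s_k | theta_t = s_j)."""
        rng = np.random.default_rng() if rng is None else rng
        path = np.empty(T + 1, dtype=int)
        path[0] = theta0_index
        for t in range(T):
            # draw the next state index from the row of Pi for the current state
            path[t + 1] = rng.choice(len(states), p=Pi[path[t]])
        return states[path]

    # hypothetical two-state technology process
    S = np.array([-0.2, 0.2])
    Pi = np.array([[0.9, 0.1],
                   [0.1, 0.9]])
    theta_path = simulate_chain(S, Pi, theta0_index=0, T=20)
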
    The classical theory of investment considers a decision maker with a concave utility
function u. However, the works [2], [3], [4] provide examples and demonstrations show-
ing that under the conditions of laboratory experiments the predictions of expected
utility theory are regularly violated. Consequently, a new theory was proposed [2],
the prospect theory (PT), which accounts for people's behavior in decision-making
under risk in the experiments where the traditional theory of expected utility failed.
⋆
    This work was supported by the Russian Fund for Basic Research under Grant 14-01-00140.
Copyright © by the paper's authors. Copying permitted for private and academic purposes.

In: A. Kononov et al. (eds.): DOOR 2016, Vladivostok, Russia, published at http://ceur-ws.org
    We will assume that the PT decision maker has a reference level of consumption X.
The deviation of consumption from the reference point X at date t is then equal to
the difference between the real consumption c_t and the reference level X: the decision
maker compares the real consumption c_t with X at moment t, and if c_t > X, he/she
considers the deviation c_t − X a gain; in the case c_t < X, the decision maker regards
X − c_t as a loss. In this paper we consider the stochastic dynamic problem for a
PT decision maker assuming that the utility function is S-shaped, i.e. convex over
losses and strictly concave over gains. The problem of finding the optimal path for the
PT decision maker is to maximize the unconditional expectation of the infinite sum
of discounted values of the S-shaped utility function at discrete time moments.
    We first briefly present a summary of the main ideas of prospect theory, and then
proceed to the problem of finding the optimal solution of the stochastic dynamic
problem with infinite time horizon for the PT decision maker. We compare the optimal
solution of the PT decision maker with the optimal solution of the decision maker with
the classic power utility.
    While such a problem has not been considered by other authors, there are research
papers examining similar ones. For example, M. Dempster [5] considers the problem
of asset-liability management for individual households with S-shaped utility func-
tions as a multistage stochastic programming problem rather than a stochastic control
problem. The thesis [6] studies stochastic control problems using performance measures
derived from the cumulative prospect theory and solves the problem of evaluating
Markov decision processes (MDPs) using CPT-based performance measures. The
paper [7] introduces the concept of a Markov risk measure and uses it to formulate
risk-averse control problems for a finite horizon model and a discounted infinite
horizon model. For both Markov models the paper [7] derives risk-averse dynamic
programming equations and a value iteration method.


2   PT decision maker
Prospect theory (PT) has three essential features:

– the decision maker makes investment decisions based on the deviation of his/her final
   consumption from a reference point X, and not according to his/her final consumption;
   i.e., the PT decision maker is concerned with the deviation of his/her final consumption
   from a reference level, whereas the Expected Utility decision maker takes into account
   only the final value of his/her consumption;
– the utility function is S-shaped with a turning point at the origin, i.e. the decision
   maker reacts asymmetrically towards gains and losses;
– the decision maker evaluates gains and losses not according to the real probability
   distribution per se but on the basis of a transformation of this real probability
   distribution, so that the decision maker's estimates of probability are transformed in
   such a way that small probabilities (close to 0) are overvalued and high probabilities
   (close to 1) are undervalued.

       Fig. 1. The plot of the value function v(· − X) for different a, b, λ and X = 0
       (the three curves correspond to λ = 2.25, a = b = 0.88; λ = 2.25, a = b = 0.5;
       and λ = 2, a = 0.25, b = 0.75)




    Let X = {x_1, . . . , x_n} be the set of all possible outcomes, X be the reference point,
and x an element of X. The deviation x − X can be negative (a loss) or positive
(a gain). PT includes three important parts: a PT value function over outcomes, v(· − X);
a weighting function over probabilities, w(·); and the PT-utility as the unconditional
expectation of the PT value function v under the probability distortion w.
Definition 1. The PT value function derives utility from gains and losses and is
defined as follows [3]:

                       v(x - X) = \begin{cases} (x - X)^a, & \text{if } x \ge X, \\ -\lambda (X - x)^b, & \text{if } x < X. \end{cases}            (2)

Fig. 1 plots the value function for different values of a, b, λ. Note that the value function
is convex over losses if 0 ≤ b ≤ 1 (strictly convex if 0 < b < 1) and concave over gains
if 0 ≤ a ≤ 1. The PT decision maker dislikes losses by a factor of λ > 1 as compared
to his/her liking of gains. This reflects the fact that individual decision makers are more
sensitive to losses than to gains; in other words, the value function reflects loss aversion
when λ > 1. Kahneman and Tversky estimated the parameters of the value function as
a = b = 0.88, λ = 2.25 based on experiments with gamblers [2].
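    For concreteness, here is a minimal Python sketch of the value function (2), with
the Kahneman–Tversky estimates as defaults (Python and the default argument values
are illustrative choices, not part of the paper's MatLab code):

    import numpy as np

    def pt_value(x, X=0.0, a=0.88, b=0.88, lam=2.25):
        """PT value function (2): power utility over gains above the reference
        point X and loss-averse power disutility over losses below it."""
        x = np.asarray(x, dtype=float)
        gains = np.where(x >= X, np.abs(x - X) ** a, 0.0)
        losses = np.where(x < X, -lam * np.abs(X - x) ** b, 0.0)
        return gains + losses

For example, pt_value(-1.0) equals −2.25 while pt_value(1.0) equals 1, so a loss of one
unit weighs more than an equal gain.
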
Definition 2. The PT probability weighting function w : [0, 1] → [0, 1] is defined by

                       w(p) = \frac{p^d}{\left( p^d + (1 - p)^d \right)^{1/d}}, \qquad d \le 1.            (3)

    The function w(·) is well-defined when a, b are less than 2d. In the following we will
assume that 0.28 < d ≤ 1 and a < 2d, b < 2d.
    Fig. 2 presents the plots of the probability weighting function for different values
of d.
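    A matching sketch of the weighting function (3), reusing the numpy import from
the sketch above (the default d = 0.65 is an illustrative value in the commonly quoted
range, not an estimate taken from this paper):

    def pt_weight(p, d=0.65):
        """PT probability weighting function (3): overweights small probabilities,
        underweights large ones, and reduces to the identity for d = 1."""
        p = np.asarray(p, dtype=float)
        return p ** d / (p ** d + (1.0 - p) ** d) ** (1.0 / d)
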
    Let P = {p_1, . . . , p_n} denote the probabilities of the outcomes {x_1, . . . , x_n}, respectively.


         Fig. 2. The plot of the probability weighting function w(p) for different d
         (d = 1, d = 0.75, d = 0.5)




Definition 3. The PT-utility of a prospect G = (X, P) with the reference point X is
defined as [2]

                       U_{PT}(G) = \sum_{j=1}^{n} w(p_j) \, v(x_j - X).            (4)
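    Combining the two sketches above (pt_value and pt_weight, together with the
numpy import from the first), the PT-utility (4) of a finite prospect can be evaluated
as below; the example prospect is hypothetical:

    def pt_utility(outcomes, probs, X=0.0, a=0.88, b=0.88, lam=2.25, d=0.65):
        """PT-utility (4): probability-weighted sum of PT values of deviations from X."""
        outcomes = np.asarray(outcomes, dtype=float)
        probs = np.asarray(probs, dtype=float)
        return np.sum(pt_weight(probs, d) * pt_value(outcomes, X, a, b, lam))

    # hypothetical prospect: lose 1 or gain 1 with equal probability
    print(pt_utility([-1.0, 1.0], [0.5, 0.5]))  # negative, since losses loom larger
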



3     Stochastic Optimal Growth Model
3.1   Model with Power Utility Function
The Bellman equation for the stochastic optimal growth model (1) has the following form:

                     V(k, \theta) = \max_{k' \in K(\theta)} \big( u(F(k, \theta) - k') + \beta \, \mathbb{E}\{ V(k', \theta') \mid k, \theta \} \big),

where k and θ are the states of the (discrete-time, continuous-state) capital process and
the (stochastic, discrete-state) technology process at the present time moment, respec-
tively; k′, θ′ are the states of the same processes at the next time moment; F(·, θ) is the
net production function at technology state θ; u(·) is a utility function; K(θ) is the
feasible set of k′ at technology state θ; and c = F(k, θ) − k′ is consumption at technology
state θ.
    Let us suppose that the utility function is the power utility

                       u(c) = \frac{c^{1-\sigma} - 1}{1 - \sigma},

where c = F(k, θ) − k′ is consumption at the present time and at technology state θ. The
control variable k′ = F(k, θ) − c is the amount of capital invested in production.
    We will assume that F(k, θ) = exp(θ)k^α + (1 − δ)k, where 0 < α ≤ 1 is a parameter
of the model and 0 ≤ δ ≤ 1 is the depreciation rate. Then the Bellman equation is

      V(k, s_i) = \max_{(1-\delta)k \,\le\, k' \,\le\, \exp(s_i) k^{\alpha} + (1-\delta)k} \Big( u(F(k, s_i) - k') + \beta \sum_{j=1}^{d} \pi_{ij} V(k', s_j) \Big).
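    A minimal Python sketch of one application of this Bellman operator on a capital
grid follows (an illustrative reimplementation reusing the numpy import from Section 2;
the paper's code is in MatLab, and the function and variable names here are our own):

    def bellman_update(V, kgrid, S, Pi, alpha, beta, delta, u):
        """One Bellman update for the growth model. V has shape (len(kgrid), len(S));
        returns the updated value array and the argmax policy (indices into kgrid)."""
        TV = np.empty_like(V)
        policy = np.empty(V.shape, dtype=int)
        for i, s in enumerate(S):
            cont = V @ Pi[i]  # continuation value E[V(k', s') | s_i] at each k' node
            for m, k in enumerate(kgrid):
                c = np.exp(s) * k ** alpha + (1 - delta) * k - kgrid  # consumption per k'
                feasible = (c > 0) & (kgrid >= (1 - delta) * k)
                vals = np.full(kgrid.shape, -np.inf)
                vals[feasible] = u(c[feasible]) + beta * cont[feasible]
                policy[m, i] = np.argmax(vals)
                TV[m, i] = vals[policy[m, i]]
        return TV, policy

For the S-shaped model of Section 3.2, the same update applies with u replaced by
c ↦ v(c − X) and the row Pi[i] replaced by its distorted weights w(π_ij), as in (5) below.
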

3.2    Model with S-Shaped Utility Function
For the PT decision maker we will assume that the reference level of consumption at any
time moment is X. Then the deviation of consumption from the reference point X is equal
to F(k, θ) − k′ − X, and the Bellman equation is

           V(k, s_i) = \max_{k' \in K(s_i)} \Big( v(F(k, s_i) - k' - X) + \beta \sum_{j=1}^{d} w(\pi_{ij}) V(k', s_j) \Big),            (5)


where v and w are defined in (2) and (3), respectively. Note that the existence of
dynamic programming equations for non-convex performance criteria was proved in
the work [9].

Lemma 1. Blackwell's sufficiency conditions are fulfilled for the operator defined in
(5).


3.3    Numerical Results
We use the value function iteration algorithm with high-precision discretization [8] to
solve both the stochastic optimal growth problem with the power utility function and the
problem with the S-shaped utility function; the algorithm iterates until it converges
under the stopping criterion. The convergence of the value function iteration proce-
dure for the model with the power utility function is ensured by the contraction mapping
theorem. It follows from Lemma 1 that the operator defined in (5) is also a contraction
mapping.
    We assume that θ has two states s_1, s_2 with transition matrix Π = (π_ij), π_11 =
π_22 = π, π_12 = π_21 = 1 − π. We chose the values s_1, s_2, π so that the process reproduces
the conditional first and second order moments ρ, σ_ε² of the AR(1) process. The code
is written in MatLab. To obtain the numerical solution we use the values α = 0.3, β = 0.95,
δ = 0.1, σ = 1.5, ρ = 0.8, σ_ε = 0.12. We use 1000 equally spaced interpolation nodes
on the range of the continuous state, [0.2, 6], for each discrete state θ. The algorithm
then converges in fewer than 250 iterations when the stopping criterion is e = 10^{−6}.
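    Putting the pieces together, an end-to-end Python sketch of this experiment is given
below, reusing bellman_update from Section 3.1. The symmetric two-state calibration,
π = (1 + ρ)/2 and s_{1,2} = ∓σ_ε/√(1 − ρ²), is a standard choice that matches the AR(1)
persistence and unconditional variance; it is assumed here rather than quoted from the
paper.

    alpha, beta, delta, sigma = 0.3, 0.95, 0.1, 1.5
    rho, sigma_eps, tol = 0.8, 0.12, 1e-6

    # two-state technology chain calibrated to the AR(1) moments
    pi_stay = (1 + rho) / 2
    Pi = np.array([[pi_stay, 1 - pi_stay],
                   [1 - pi_stay, pi_stay]])
    s_bar = sigma_eps / np.sqrt(1 - rho ** 2)
    S = np.array([-s_bar, s_bar])

    kgrid = np.linspace(0.2, 6.0, 1000)                 # 1000 equally spaced nodes
    u = lambda c: (c ** (1 - sigma) - 1) / (1 - sigma)  # power utility

    # value function iteration with a sup-norm stopping criterion
    V = np.zeros((len(kgrid), len(S)))
    for it in range(1000):
        TV, policy = bellman_update(V, kgrid, S, Pi, alpha, beta, delta, u)
        if np.max(np.abs(TV - V)) < tol:
            break
        V = TV

The S-shaped variant is obtained by passing c ↦ pt_value(c, X) in place of u and the
distorted matrix pt_weight(Pi) in place of Pi.
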
    Optimal value functions, consumptions, and policy functions, both for the PT-investor
and the investor with the power utility function, obtained for the stochastic optimal
growth model, are shown in Figures 3, 4 and 5, respectively.
    Numerical results show that the optimal behavior of the PT decision maker is quite
different from the behavior of the power utility investor. For example, while the value
function of the PT-investor is increasing, it has an inflection point, where convexity
changes to concavity.
    Optimal consumption for the PT-investor (with reference point less than 0.5) is
increasing and is close to the optimal consumption for the power utility investor. For
small values of k the optimal consumption of the PT-investor is greater than the optimal
consumption of the power utility investor. If the reference point is bigger than 0.5, then
the optimal consumption of the PT-investor is not monotonic and is much greater than
the optimal consumption of the power utility investor for small values of k.

  Fig. 3. Value functions for EU-investor and PT-investor with the reference point X = 1
  (curves for states s0 and s1 of each investor, plotted against k)




Starting with the inflection point of optimal consumption for the PT-investor, the values
of consumption for both types of investors are nearly identical.
    While the policy function of the power utility investor is linear, the policy function
of the PT-investor (with reference point less than 0.5) is monotonic and convex. For small
values of k the policy function of the PT-investor is greater than the policy function of
the power utility investor. The policy functions for PT-investors with reference points
bigger than 0.5 are non-monotonic and, for small values of k, are much bigger than the
policy function of the power utility investor.
    Table 1 shows that the number of iterations needed to solve the problem for the
power utility investor by means of the value function iteration algorithm is essentially
smaller than the number of iterations needed to solve the problem for the PT-investor.
This is not surprising, since the PT-utility function is S-shaped and non-convex. It
should be noted that if the reference point of the PT-investor is close to one, the numbers
of iterations needed to achieve a desired accuracy are quite close for the power utility
investor and for PT-investors.


                   e, accuracy    EU   PT, X = 0.4   PT, X = 0.7   PT, X = 1
                      10^{-3}     62       126           113          69
                      10^{-4}    106       171           158         113
                      10^{-5}    151       216           203         158
                      10^{-6}    196       261           248         203
                             Table 1. The number of iterations

Fig. 4. Optimal consumptions for EU-investor and PT-investor with the reference point X = 1
(curves for states s0 and s1 of each investor, plotted against k)




Fig. 5. Optimal policy functions for EU-investor and PT-investor with the reference point
X = 1 (curves for states s0 and s1 of each investor, plotted against k)


References
1. L. Guerrieri, M. Iacoviello, A toolkit for solving dynamic models with occasionally binding
   constraints easily, Journal of Monetary Economics 70 (2015) 22–38.
2. D. Kahneman, A. Tversky, Prospect theory: An analysis of decision under risk, Econo-
   metrica 47 (1979) 263–291.
3. A. Tversky, D. Kahneman, Advances in prospect theory: Cumulative representation of
   uncertainty, Journal of Risk and Uncertainty 5 (1992) 297–323.
4. N. C. Barberis, Thirty years of prospect theory in economics: A review and assessment,
   Journal of Economic Perspectives 27 (1) (2013) 173–196.
5. M. Dempster, Asset liability management for individual households – Abstract of the
   London Discussion, British Actuarial Journal 16 (1) (2011) 441–467.
   doi:10.1017/S1357321711000171.
6. K. Lin, Stochastic systems with cumulative prospect theory, Ph.D. Thesis, University of
   Maryland, 2013.
7. A. Ruszczynski, Risk-averse dynamic programming for Markov decision processes, Mathe-
   matical Programming 125 (2) (2010) 235–261.
8. L. Maliar, S. Maliar, Chapter 7 – Numerical methods for large-scale dynamic economic
   models, in: K. Schmedders, K. L. Judd (eds.), Handbook of Computational Economics,
   Vol. 3, Elsevier, 2014, pp. 325–477.
9. K. Lin, S. I. Marcus, Dynamic programming with non-convex risk-sensitive measures,
   Proceedings of the 2013 American Control Conference, 2013, pp. 6778–6783.