    Multivariate Distribution Model for Financial Risks
                       Management

           Jozef Korbicz1[0000-0001-8688-3497] , Petro Bidyuk2[0000-0002-7421-3565],
      Nataliia Kuznietsova2[0000-0002-1662-1974], Arsenii Kroptya2[0000-0003-1740-3837],
 Oleksandr Terentiev3[0000-0001-6995-1419], Tetyana Prosiankina-Zharova3[0000-0002-9623-8771]
1 Institute of Control and Computation Engineering, University of Zielona Gora, Zielona Gora, Poland
j.korbicz@issi.uz.zgora.pl
2 Institute for Applied System Analysis of the National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute", Kyiv, Ukraine
pbidyuke_00@ukr.net, natalia-kpi@ukr.net, feodorit@ukr.net
3 Institute of Telecommunications and Global Information Space of the National Academy of Sciences of Ukraine, Kyiv, Ukraine
o.terentiev@gmail.com, t.pruman@gmail.com



         Abstract. A method for constructing copula-based joint distribution models is
         proposed. The copula model parameters are estimated by the method of
         maximum likelihood, which turned out to be effective according to the mean
         squared error criterion. A model for estimating risks of various types is
         proposed and constructed using the copula function approach. For the data
         samples selected, higher quality was achieved for the tail risk measures, which
         points to the need to improve the formal description of the central part of the
         observations. This holds for the model based on combined marginal
         distributions that employ normal and generalized Pareto distributions. Results
         of computational experiments are also provided.

            Keywords: Financial Risks, Copula, Marginal Distribution, Risks Analysis,
            Risks Measures, Value-at-Risk


1           Introduction

Risk estimation and management can be performed by changing the structure of a
portfolio of financial instruments, which requires constructing adequate mathematical
models for multivariate processes. One possible approach is based on separately
modeling the marginal distributions and the dependency structure between the
corresponding variables by means of special copula functions [1-4]. The adequacy
criterion for such models is based on the quality of estimation of the risk measures [5-9]
that provide a quantitative characteristic of the degree of risk and make it convenient
to apply such traditional risk management procedures as scenario analysis [10, 11].
The complexity of constructing a joint distribution for several variables that belong to
different types of risks restricts the development of analytical expressions for
estimating the appropriate risk measures. It should be stressed that using Monte Carlo
procedures to obtain quality estimates of risk measures [12, 13] requires effective
methods for generating appropriate pseudorandom numbers (PRN) by making use of
such special functions as copulas [13-16].


2      Problem statement

Within the framework of this study the following problems are to be solved: (1) to
formulate a systemic approach to risk analysis for a multidimensional portfolio of
financial instruments (PFI); (2) to determine possible applications of widely used risk
measures when a portfolio under risk is composed; (3) to develop theoretically
substantiated risk measures for the PFI on the basis of probabilistic and statistical
modeling using extreme value theory and copulas; (4) to determine the possibility and
effectiveness of practical application of the proposed methods using actual statistical
data.


3      Mathematical model of risks distribution

To perform statistical analysis of financial risk levels, models based on combined
marginal distributions and copulas are used. The joint distribution of the risks under
consideration has the following form:
$$H(x_1, \dots, x_n) = P[X_1 \le x_1, \dots, X_n \le x_n] = C(F_1(x_1), \dots, F_n(x_n)).$$
   Here $F_1, \dots, F_n$ are the marginal distribution functions of the separate risks,
and $C$ is an $n$-copula that characterizes the dependency structure between the
analyzed risks.
Application of the model is oriented to elliptical copulas, Archimedean copulas, and
extreme value copulas. The right tail of a marginal distribution is formally described
by the distribution of the values exceeding a threshold, in the form of the generalized
Pareto distribution (GPD):
$$GPD_{\xi,\beta}(x) = \begin{cases} 1 - (1 + \xi x / \beta)^{-1/\xi}, & \xi \ne 0, \\ 1 - \exp(-x/\beta), & \xi = 0, \end{cases}$$
where $\beta > 0$; $x \ge 0$ when $\xi \ge 0$, and $0 \le x \le -\beta/\xi$ when
$\xi < 0$; $\xi$ is the shape parameter of the distribution, and $\beta$ is the scale
parameter. The central part of the data is modeled with the normal distribution.
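   As an illustration of such a combined marginal, the minimal sketch below (Python with NumPy/SciPy; the sample, the 95% threshold choice, and all names are illustrative assumptions, not taken from the paper) fits a normal distribution to the central part by maximum likelihood and a GPD to the threshold exceedances of the right tail.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
returns = rng.standard_t(df=4, size=2000)            # hypothetical heavy-tailed sample

# Central part: fit a normal distribution by maximum likelihood.
mu, sigma = stats.norm.fit(returns)

# Right tail: fit a GPD to the exceedances over a high threshold u
# (here the 95% empirical quantile; the threshold choice is a modeling decision).
u = np.quantile(returns, 0.95)
excess = returns[returns > u] - u
xi, _, beta = stats.genpareto.fit(excess, floc=0.0)  # ML estimates of shape and scale

def combined_cdf(x):
    """Piecewise marginal CDF: normal body below u, GPD tail above u."""
    p_u = stats.norm.cdf(u, mu, sigma)               # probability mass of the body
    if x <= u:
        return stats.norm.cdf(x, mu, sigma)
    return p_u + (1.0 - p_u) * stats.genpareto.cdf(x - u, xi, 0.0, beta)
```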
4      Scenario analysis

One of the well-known approaches to formal risk description is based upon
computing the PFI cost of the risk positions under various market conditions. This
approach is called scenario analysis. It has two basic types: shock testing and
sensitivity analysis.
   Shock testing is dedicated to the analysis of consequences provoked by sudden
and substantial variations of market processes. It is especially useful when market
movements result in qualitative changes of some market conditions. Shock testing is
based on creating and analyzing situations that help identify extreme portfolio risks,
and on implementing the scenarios in a simulation model for estimating the
qualitative and quantitative risk measures.
   Scenario development is performed on the basis of observations received during
former crisis events, observations coming from other spheres of activity, or realistic,
potentially extreme scenarios. Systemic approaches to creating scenarios of this type
do not exist at the moment. To analyze the results of implementing scenarios that are
characterized by substantial quantitative and qualitative deviations of the processes
under study from their normal mode of functioning, a viable risk model is necessary.
A model based on link functions (copulas) and combined marginal distributions
makes it possible to vary, within the scenarios, the parameters of the separate risk
distributions, and to utilize types of marginal distributions that differ from those of
the accepted basic model. It is also possible to incorporate extreme modifications into
the process under study through the values of the tail distribution parameters, and to
vary the dependency characteristics between the risks. However, shock testing does
not provide a possibility for estimating the probabilities of the scenarios that result in
high losses. Therefore, its application in risk management systems is restricted to
taking into account the risks of each potential risky event separately.
   Sensitivity analysis provides a possibility for estimating the risk characteristics of
a portfolio in non-critical market conditions, when actual modifications are not
substantial. This type of analysis is used when a large number of realistic scenarios
that do not suppose extreme events is available. Using models based upon link
functions and combined marginal distributions makes it possible to change not only
the distribution parameters but also the non-tail parts of the marginal distributions,
separately from the tails. In this way the model for nominal market conditions can be
changed while leaving unchanged the model for extreme values, which is constructed
on data generated over longer periods of time and very often depends only weakly
upon the distribution of the central values.
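   As a hedged illustration of a shock scenario of this kind, the sketch below (Python/SciPy; all parameter values and names are assumed for illustration) perturbs the GPD shape parameter of a combined marginal and compares an extreme quantile before and after the shock, while the central part of the model is left untouched.

```python
import numpy as np
from scipy import stats

# Hypothetical baseline tail model: GPD above threshold u, with body mass p_u.
u, xi, beta, p_u = 1.6, 0.15, 0.55, 0.95          # assumed illustrative values

def upper_quantile(q, xi, beta):
    """Quantile of the combined distribution for levels q above the body mass."""
    # Invert F(x) = p_u + (1 - p_u) * GPD_{xi,beta}(x - u) for q > p_u.
    return u + stats.genpareto.ppf((q - p_u) / (1.0 - p_u), xi, 0.0, beta)

baseline = upper_quantile(0.99, xi, beta)
stressed = upper_quantile(0.99, 1.5 * xi, beta)   # shock: heavier-tailed scenario
print(f"99% quantile: baseline {baseline:.3f}, stressed {stressed:.3f}")
```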


5      The risk measures

The quantitative estimation of risk using specific measures is one of the basic tasks
of risk management, due to the necessity of comparing the risks provoked by
different decision alternatives. On the one hand, risk can be considered as the losses
or reduced income determined by the future value of a certain position. On the other
hand, risk is provoked by the high volatility of the value of some positions.
    Examples of dynamic risk models for credit and outflow risks are given in
[17, 18], with the risk degree and level estimated through the probability and size of
the losses. Risk measures that are based upon understanding risk as a future cost of
an instrument are called coherent. The necessary requirements on a coherent risk
measure, $\rho$, were formulated in the form of axioms in [19]. These requirements are
of financial nature and are derived from the rules of accepting risks proposed by
investment managers as well as from practically induced regulations. Non-subadditive
coherent risk measures should satisfy axioms 1-3 given below.
    Let $X$ and $Y$ be random variables that reflect the future values of portfolios
composed of market positions. A portfolio whose components exhibit higher future
values than the positions of another portfolio is considered the less risky one.
    Axiom 1. Monotonicity: if $X \le Y$, then $\rho(X) \ge \rho(Y)$. Investing into a
stock instrument with a known return level, $r$, for which the future value is
deterministic and known, decreases the risk by the invested value.
    Axiom 2. Translation invariance: for a constant $a$, $\rho(X + a(1+r)) = \rho(X) - a$.
The risk grows with the growth of the risky position value. For large positions the
simple growth of risk (according to the growth of invested capital) is augmented by
the growth of liquidity risk: a large position cannot be realized at the market as easily
as a small one. However, the risk measure for small positions should be proportional
to the value of the position.
    Axiom 3. Positive homogeneity: $\rho(bX) = b\rho(X)$ for any positive number $b$.
    The measures of risk deviation, $D$, are defined by other axioms [20]. Non-subadditive
measures of risk deviation satisfy axioms 4-6. The characteristic of deviations for a
non-deterministic risk $X$ should always be positive and equal to zero when $X$ is a
constant.
    Axiom 4. $D(X) > 0$ for any non-constant $X$, and $D(X) = 0$ for constant $X$.
    Investing into a stock instrument whose future value is known thanks to a
non-volatile level of return, $r$, should not influence the deviation characteristic of the
portfolio.
    Axiom 5. $D(X + a(1+r)) = D(X)$.
    For a market of complete liquidity, an increase of the investment volume results in
proportional growth of the cost deviation.
    Axiom 6. $D(\lambda X) = \lambda D(X)$ for any $X$ and any $\lambda \ge 0$.
The subadditivity axiom is based upon the suggestion that diversification of stock
instruments results in decreasing risks. Besides, subadditive risk measures are
convenient for use at the enterprise level, as they guarantee that the integrated risk
will not exceed the sum of the risks characteristic of separate divisions, portfolios or
positions.
    Axiom 7. The subadditivity of coherent risk measures is determined as follows:
$\rho(X + Y) \le \rho(X) + \rho(Y)$.
    Axiom 8. The subadditivity of measures of risk deviation is determined as
follows: $D(X + Y) \le D(X) + D(Y)$.
    Axiom 7 is a necessary condition for the coherent risk measures (subadditive
coherent risk measures), and axiom 8 for the deviation measures of risk (subadditive
measures of risk deviation).
    Axioms 1-3 together with the subadditivity axiom can be used to derive an
expression for coherent risk measures of the following type [19]:
$$\rho(X) = \sup_{P \in \mathcal{P}} E_P\!\left[\frac{-X}{1+r}\right], \qquad (1)$$
where $\mathcal{P}$ is a family of probability measures. Thus, any coherent measure of risk
is defined as the mathematical expectation of the maximum loss over a certain set of
scenarios. As can be seen from (1), extending the scenario set increases the risk
measure as well. The fact that risk measures depend on a set of scenarios confirms
the practical validity of the scenario analysis approach in risk management systems.
Coherent risk measures for which the inequality $\rho(X) > E[-X]$ holds when $X$ is
not a constant, and $\rho(X) = E[-X]$ when $X$ is a constant, are called expectation
bounded. Between coherent risk measures that are expectation bounded and risk
deviation measures there exists a one-to-one correspondence of the form:
$$D(X) = \rho(X - EX), \qquad \rho(X) = E[-X] + D(X).$$
    If these conditions hold, then $D$ is the measure of risk deviation linked to the
coherent measure of risk, $\rho$, and $\rho$ is the measure of risk linked to $D$. The
two measures together create a risk profile.


6      Estimation of basic risk measures

The measure of Value-at-Risk (VaR). VaR, or value at risk, emerged as a response
to extreme financial events and catastrophes. The initial purpose was to create a
quantitative measure of risk based on available statistical techniques. VaR provides a
probabilistic measure of potential loss by pointing out the loss threshold that could be
exceeded, in normal market conditions, on a definite time horizon and at a given
confidence level.
   Definition 1. For a given value $\alpha \in (0, 1]$ and random variable $X$, the
$\alpha$-quantile is defined by the expression:
$$q_\alpha = \inf\{x \in \mathbb{R} : P[X \le x] \ge \alpha\}.$$
   The values of the quantiles over a sufficiently rich set of values of $\alpha$
characterize the shape and spread of the probability distribution.
   Definition 2. The risk measure VaR at confidence level $\alpha$ for a random
variable (process) $X$ (returns and losses, where losses are negatively defined values)
is formally defined as follows:
$$VaR_\alpha(X) = q_\alpha(-X).$$
   If the losses have distribution function $F$, then $VaR_\alpha = F^{-1}(\alpha)$,
where $F^{-1}$ is the inverse of $F$.
   As a quantitative characteristic of short-term market risk, VaR is also often used in
management systems for the analysis of credit and operational risks. This measure is
one of the most important components of the general methodology of quantitative
risk estimation used in practice. VaR-based instruments are used for solving
investment problems and for estimating the compromise between returns and risk.
The measure satisfies axioms 1-3 for coherent risk measures but does not satisfy the
axiom of subadditivity.
   However, analytical estimation of VaR is not always convenient due to the relative
complexity of finding an appropriate solution. This is true for the case considered in
this study, where a joint distribution of risks with combined marginal distributions is
used. That is why an empirical estimate based on the Monte Carlo technique will be
used. If a sample of random values $\{X_{i,n}\}$ of size $n$ is arranged so that
$X_{1,n} \le \dots \le X_{n,n}$, then the empirical estimate of the risk measure VaR is
determined as follows:
$$\widehat{VaR}_\alpha(X) = -X_{k^*,n}, \qquad k^* = \max\{i \in \mathbb{N} : i \le n(1-\alpha)\}.$$
   The computations necessary for estimating this value include the following steps
(a sketch of the estimator itself is given after this list):
    1. Select the confidence level, $\alpha$.
    2. Generate a sufficiently large number of pseudorandom vectors according to
           the previously estimated marginal distributions for the central and tail parts,
           as well as the link function (copula) of the constructed joint distribution of
           losses.
    3. Using the price models, compute simulated samples of losses and returns for
           all instruments.
    4. Compute VaR as the lowest loss among the $1-\alpha$ percentage of the
           worst cases.
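   A minimal sketch of this Monte Carlo estimator (Python/NumPy; the normal sample and the function name are hypothetical and merely stand in for the copula-driven simulation of steps 2-3):

```python
import numpy as np

def empirical_var(x_sample, alpha):
    """Empirical VaR at confidence level alpha from a sample of portfolio
    values X (losses are the negative values, as in the text)."""
    x = np.sort(x_sample)                      # X_{1,n} <= ... <= X_{n,n}
    n = len(x)
    k = max(int(n * (1.0 - alpha)), 1)         # k* = max{i : i <= n(1 - alpha)}
    return -x[k - 1]                           # -X_{k*,n}

# Hypothetical usage with a pseudorandom sample standing in for steps 2-3 output:
rng = np.random.default_rng(0)
simulated = rng.normal(0.0, 1.0, size=10_000)
print(empirical_var(simulated, alpha=0.97))
```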
   The risk measure ES (Expected Shortfall). A coherent alternative to VaR is the
Expected Shortfall.
   Definition 3. The risk measure ES at confidence level $\alpha$ for a random
variable $X$ is determined as follows:
$$ES_\alpha(X) = -\frac{1}{1-\alpha}\Big(E\big[X \cdot \mathbf{1}\{-X \ge q_\alpha(-X)\}\big] + q_\alpha(-X)\big(\alpha - P[-X \le q_\alpha(-X)]\big)\Big).$$
   ES is the mathematical expectation of the losses on a given time horizon and
confidence level, given that the loss exceeds the VaR threshold; VaR, in turn,
represents the minimum of these worst losses:
$$ES_\alpha = -E[X \mid X \le -VaR_\alpha].$$
   The measure ES is also called conditional VaR. An empirical estimate of ES is
determined by the expression (see the sketch below):
$$\widehat{ES}_\alpha(X) = -\frac{1}{k^*}\sum_{k=1}^{k^*} X_{k,n}, \qquad k^* = \max\{i \in \mathbb{N} : i \le n(1-\alpha)\}.$$
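   A sketch of this estimator under the same conventions as the VaR sketch above (Python/NumPy, hypothetical naming):

```python
import numpy as np

def empirical_es(x_sample, alpha):
    """Empirical Expected Shortfall: the negated mean of the k* worst
    outcomes, matching the order-statistic formula above."""
    x = np.sort(x_sample)
    n = len(x)
    k = max(int(n * (1.0 - alpha)), 1)         # k* = max{i : i <= n(1 - alpha)}
    return -x[:k].mean()

# By construction, the ES estimate is never smaller than the VaR estimate
# at the same level, since the mean of the k* worst values is at most X_{k*,n}.
```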
    Markowitz's measures of risk. Markowitz first proposed to use as a measure of
risk the standard deviation, $\sigma(X) = \|X - EX\|$, and then the semi-deviation
$\sigma^-(X) = \|[X - EX]^-\|$ [5]. Both risk measures satisfy all four axioms for risk
deviation measures. In risk management systems it is more convenient to use the
measure in the form of the semi-deviation, $\sigma^-(X)$. For a sample $X_i$ of size
$n$, the empirical estimates of the risk deviation measures are the following:
    - for the standard deviation:
$$\hat\sigma(X) = \left(\frac{1}{n}\sum_{i=1}^{n}\Big(X_i - \frac{1}{n}\sum_{j=1}^{n} X_j\Big)^2\right)^{1/2};$$
    - and for the semi-deviation:
$$\hat\sigma^-(X) = \left(\frac{1}{n}\sum_{i=1}^{n}\Big(\Big[X_i - \frac{1}{n}\sum_{j=1}^{n} X_j\Big]^-\Big)^2\right)^{1/2},$$
where $[z]^- = \min(z, 0)$.

    The empirical estimates of the risk measure VaR or of the coherent risk measure
ES, together with the risk deviation measures, provide an estimate of the portfolio
risks. A sketch of both deviation estimators is given below.
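```python
import numpy as np

def std_deviation(x):
    """Empirical standard deviation with the 1/n normalization used above."""
    return float(np.std(x))

def semi_deviation(x):
    """Markowitz semi-deviation: root mean square of the negative deviations."""
    d = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.sqrt(np.mean(np.minimum(d, 0.0) ** 2)))
```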
    Generating risk measures from copulas. When copulas are used in risk
management systems, Monte Carlo techniques should be used for estimating risk
measures and risk deviation measures and for scenario analysis. This means
generating pseudorandom dependent values with given marginal distributions and
copula. There exists a general scheme for generating a sample from a joint
distribution whose dependence structure is determined by a copula. Using Sklar's
theorem, the procedure for generating a random sample, $X_1, \dots, X_n$, with
marginal distribution functions $F_1, \dots, F_n$ and copula $C$ is described as given
below [21].
    1. Generate random numbers $u_1, \dots, u_n$ with scalar uniform distributions
on $[0, 1]$ and joint distribution given by the copula $C$.
    2. Compute the values $x_i$ using the transformation
$$x_i = F_i^{-1}(u_i),$$
where $F_i^{-1}$ is the inverse of $F_i$.
   Depending on the features of the copula family, implementation of step 1 can be
simplified. The following sketch illustrates the two-step scheme.
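```python
import numpy as np
from scipy import stats

# A minimal sketch of the two-step scheme for a hypothetical 2-D example:
# the dependence comes from a Gaussian copula, the marginals are Student t
# and exponential (both choices are illustrative, not from the paper).
rng = np.random.default_rng(1)
corr = np.array([[1.0, 0.7],
                 [0.7, 1.0]])

# Step 1: uniforms u_1, u_2 with the dependence structure of the copula.
z = rng.multivariate_normal(np.zeros(2), corr, size=5000)
u = stats.norm.cdf(z)

# Step 2: x_i = F_i^{-1}(u_i) with the desired marginal distributions.
x1 = stats.t.ppf(u[:, 0], df=4)
x2 = stats.expon.ppf(u[:, 1], scale=2.0)
```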
   Elliptical copulas.
   Definition 4. The characteristic function $\varphi : \mathbb{R}^n \to \mathbb{C}$ of a
random $n$-vector $X$ is defined as follows:
$$\varphi_X(t) = E[e^{i t^\top X}],$$
where $t^\top X$ is the scalar product of the vectors $t$ and $X$. The characteristic
function determines the probability distribution uniquely.
   Definition 5 [22]. The probability distribution of an $n$-vector $X$ is called
elliptical if for a certain vector $\mu \in \mathbb{R}^n$, positive definite matrix
$\Sigma \in \mathbb{R}^{n \times n}$, and some function $\psi : [0, \infty) \to \mathbb{R}$,
the characteristic function of the vector $X - \mu$ is defined as follows:
$\varphi_{X-\mu}(t) = \psi(t^\top \Sigma t)$.
   Elliptical distributions owe their name to the elliptical shape of the level curves of
their densities. Any linear combination of jointly elliptically distributed variables also
has an elliptical distribution, and the marginal distributions of a joint elliptical
distribution also belong to the elliptical class. Elliptical copulas are defined through
joint elliptical distributions, $H$, and inverse functions, $F_i^{(-1)}(u_i)$, as follows:
$$C(u_1, \dots, u_n) = H(F_1^{(-1)}(u_1), \dots, F_n^{(-1)}(u_n)).$$
   Generating random numbers with the dependency structure defined by an elliptical
copula is actually sampling from the respective elliptical distribution. The algorithm
given below generates PRN from the Gaussian copula, corresponding to the normal
distribution, in the form of an $n$-vector $(X_1, \dots, X_n)$ with correlation matrix
$\Sigma$ (a sketch follows the list):
     1. Generate $n$ independent random numbers, $u_1, \dots, u_n$, according to
          the standard normal distribution.
     2. Represent $\Sigma$ in the form of the decomposition $\Sigma = A \cdot A^\top$,
          where $A$ is a lower triangular matrix.
     3. Compute $y = A \cdot u$.
     4. Using the scalar normal distribution function, $\Phi$, compute $x_i = \Phi(y_i)$.
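   The algorithm maps directly onto a few lines of code; a sketch (Python with NumPy/SciPy; the function name and usage values are hypothetical):

```python
import numpy as np
from scipy import stats

def gaussian_copula_sample(corr, size, rng=None):
    """PRN from the Gaussian copula, following the four steps in the text."""
    rng = rng or np.random.default_rng()
    A = np.linalg.cholesky(corr)                     # step 2: Sigma = A A^T, A lower triangular
    u = rng.standard_normal((size, corr.shape[0]))   # step 1: independent N(0,1) numbers
    y = u @ A.T                                      # step 3: y = A u, applied row-wise
    return stats.norm.cdf(y)                         # step 4: x_i = Phi(y_i)

# Hypothetical usage:
corr = np.array([[1.0, 0.5], [0.5, 1.0]])
x = gaussian_copula_sample(corr, size=1000)
```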
   To generate PRN from more complex elliptical copulas, any zero-mean elliptically
distributed vector, $X$, is represented via a centered normally distributed vector, $N$,
with the same correlation matrix and a random value $r$ independent of $N$:
$X = rN$. For example, for the Student $t$-distribution with $\nu$ degrees of freedom,
$r = \sqrt{\nu / S}$, where $S$ has a $\chi^2$-distribution with $\nu$ degrees of
freedom.
   The Archimedean copulas. Copulas that can be represented in the form
$$C(u_1, \dots, u_n) = \varphi^{[-1]}(\varphi(u_1) + \dots + \varphi(u_n))$$
are called Archimedean, where $\varphi$ is the generator and $\varphi^{[-1]}$ its
pseudo-inverse. The unified sampling algorithm is constructed on the following
decomposition:
$$C(u_1, \dots, u_n) = C_n(u_n \mid u_1, \dots, u_{n-1}) \cdots C_2(u_2 \mid u_1) \cdot C_1(u_1),$$
where $C_k(u_k \mid u_1, \dots, u_{k-1})$ is the conditional distribution of the $k$-th
component given the first $k-1$ components, and $C_1(u_1) = u_1$. To accomplish
sampling from this joint distribution it is necessary to perform the following steps
(a sketch for a particular family is given after the formula below):
   - generate $n$ independent PRN, $v_1, \dots, v_n$;
   - then sequentially compute $u_1 = v_1$, $u_2 = C_2^{-1}(v_2 \mid u_1)$, ...,
$u_n = C_n^{-1}(v_n \mid u_1, \dots, u_{n-1})$.
   For bivariate Archimedean copulas the computation can be reduced to the closed
form
$$u_2 = \varphi^{[-1]}\!\left(\varphi\!\left((\varphi')^{[-1]}\!\left(\frac{\varphi'(u_1)}{v_2}\right)\right) - \varphi(u_1)\right).$$
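   As a sketch of this scheme for a concrete family (the Clayton copula with $\theta > 0$, chosen here for illustration; the paper does not single it out), the conditional inverse has a closed form; Python/NumPy, hypothetical naming:

```python
import numpy as np

def clayton_sample(theta, size, rng=None):
    """Conditional-inversion sampling from the bivariate Clayton copula,
    an Archimedean family whose C_2^{-1} has a closed form."""
    rng = rng or np.random.default_rng()
    v1, v2 = rng.uniform(size=(2, size))
    u1 = v1                                    # u_1 = v_1
    # u_2 = C_2^{-1}(v_2 | u_1) for the generator phi(t) = (t^{-theta} - 1) / theta
    u2 = ((v2 ** (-theta / (theta + 1.0)) - 1.0) * u1 ** (-theta) + 1.0) ** (-1.0 / theta)
    return u1, u2
```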
   The extreme value copulas. Let $(X_{i1}, \dots, X_{im})$, $i = 1, 2, \dots$, be
independent identically distributed $m$-vectors with distribution function $F$, and let
$$M_{jn} = \max_{1 \le i \le n} X_{ij}, \quad j = 1, \dots, m,$$
represent the maximum of each component. The multidimensional extreme value
distribution is the limit of the distributions of the normalized random vectors
$$\left(\frac{M_{1n} - a_{1n}}{b_{1n}}, \dots, \frac{M_{mn} - a_{mn}}{b_{mn}}\right).$$
If the limit distribution exists, then each of its components represents a
one-dimensional extreme value distribution, and the limit can be represented in the
form $C(H(z_1, \theta_1), \dots, H(z_m, \theta_m))$, where $H(z_j, \theta_j)$ is the
generalized extreme value distribution and $C$ is a copula. It is natural that the
marginal distributions are related to the extreme value distributions, but more
interesting is the following representation of the copula:
$$G(z) = \lim_{n \to \infty} F^n(a_{1n} + b_{1n} z_1, \dots, a_{mn} + b_{mn} z_m) = C(H(z_1, \theta_1), \dots, H(z_m, \theta_m)). \quad (2)$$
   Let $r_j$ be a strictly increasing transform of $X_{ij}$; denote the variables and
maxima after the transform by $X^*_{ij}$ and $M^*_{jn}$, respectively. Suppose also
that the expression $(M^*_{jn} - a^*_{jn})/b^*_{jn}$ tends to an extreme value
distribution as $n \to \infty$, for all $j$, and let
$$G^*(z) = \lim_{n \to \infty} P(M^*_{1n} \le a^*_{1n} + b^*_{1n} z_1, \dots, M^*_{mn} \le a^*_{mn} + b^*_{mn} z_m) = C^*(H(z_1, \theta^*_1), \dots, H(z_m, \theta^*_m)).$$
   It can be shown that $C^* = C$, and that
$$C(u_1^t, \dots, u_m^t) = C^t(u_1, \dots, u_m) \quad (3)$$
for all $t > 0$. Copulas that satisfy this condition are called extreme value copulas.
   This set of copulas includes, for example, the family of two-dimensional Gumbel
copulas of the form
$$C(u, v) = \exp\!\left(-\left((-\ln u)^\theta + (-\ln v)^\theta\right)^{1/\theta}\right),$$
where $\theta \ge 1$. Sampling PRN from extreme value copulas is based on universal
procedures for generating from known probability distributions. To solve the problem
it is proposed to apply a multidimensional generalization of the one-dimensional
method of so-called slice sampling [14].
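   Property (3) can be checked numerically for this family; a quick sketch (Python/NumPy, the test points are arbitrary):

```python
import numpy as np

def gumbel_copula(u, v, theta):
    """Two-dimensional Gumbel copula from the formula above, theta >= 1."""
    s = (-np.log(u)) ** theta + (-np.log(v)) ** theta
    return np.exp(-s ** (1.0 / theta))

# Numerical check of the extreme value property C(u^t, v^t) = C(u, v)^t.
u, v, theta = 0.3, 0.8, 1.7
for t in (0.5, 2.0, 10.0):
    assert np.isclose(gumbel_copula(u ** t, v ** t, theta),
                      gumbel_copula(u, v, theta) ** t)
```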
   The method of slice sampling. Suppose it is necessary to sample from a
probability distribution on some subset of $\mathbb{R}^n$ determined by a probability
density function (PDF) proportional to some function $f(x)$. This task can be
accomplished by uniform sampling from the $(n+1)$-dimensional area under the
graph of the function $f(x)$. Formally this idea is implemented by introducing an
extra variable, $y$, and defining a joint distribution for $x$ and $y$ that is uniform
over the area under the boundary described by the function $f(x)$:
$$U = \{(x, y) : 0 \le y \le f(x)\}.$$
   Thus, the joint distribution of $(x, y)$ is determined as follows:
$$p(x, y) = \begin{cases} 1 \big/ \int f(x)\,dx, & 0 \le y \le f(x), \\ 0, & \text{otherwise}. \end{cases}$$
   Generating independent values uniformly distributed over the set $U$ is not a
simple task. That is why a Markov chain converging to this distribution is generated
instead. One possible solution is based upon Gibbs sampling, i.e. sampling from the
conditional distribution $P(y \mid x)$, which is uniform over the interval $(0, f(x))$,
and from the conditional distribution $P(x \mid y)$, which is uniform over the set
$S = \{x : y < f(x)\}$. The resulting set is called a "slice" (or sector) defined by the
variable $y$. Generating an independent, uniformly distributed value from $S$ can be
a rather complicated task; it can be replaced by some update of $x$ that does not
violate the uniform distribution over $S$.
    Introduce the following notation: let $f(x)$ be the function proportional to the
density of the distribution we are sampling from, $x_0$ the current (initial) state, and
$x_1$ the new state. The method for sampling in the one-dimensional case can be
formulated as given below [14].
    1.       Generate a real value, $y$, uniformly from $(0, f(x_0))$, thus obtaining the
horizontal "slice" $S = \{x : y < f(x)\}$. Note that $x_0$ always belongs to $S$.
    2.       Define an interval $I = (L, R)$ around $x_0$ that contains at least a large
part of the slice.
    3.       Generate a new point, $x_1$, that belongs to the interval defined, i.e. to
the intersection $S \cap I$.
    On the first step a value for the extra variable is selected that is characteristic of
the particular case of slice sampling; there is no need to keep this value for further
iterations. To avoid problems with the precision of floating point representation it is
recommended to use the function $g(x) = \log(f(x))$ instead of $f(x)$. In this case the
extra variable is defined as $z = \log(y) = g(x_0) - e$, where $e$ is an exponentially
distributed value with unit mathematical expectation, and the slice is defined as
$S = \{x : z < g(x)\}$. The second and third steps of one-dimensional sampling can be
implemented in various ways, but the result should be a Markov chain that leaves
invariant the distribution defined by the function $f(x)$.
    On the second step, the respective numerical interval is determined. It is desirable
that the interval include as large a part of the slice as possible, so that the new point
can differ from the previous one as much as possible. At the same time it is necessary
to avoid intervals that exceed the slice substantially, since this makes generating the
new point less efficient. The interval can be found in several possible ways (a sketch
of the resulting sampler follows the list).
    1.    Ideally, $L = \inf(S)$ and $R = \sup(S)$, i.e. the interval $I$ is the least
possible interval that contains the whole set $S$. However, this is difficult to
implement in practice.
    2.    If the values of $x$ are restricted, the interval $I$ may be defined over the
whole admissible range. However, this approach is not effective when the slice size is
substantially smaller than the range.
    3.    Choose a typical slice size, $w$, and randomly position an initial interval,
$I_0$, of this size so that it includes the point $x_0$, with the possibility of expanding
the interval; for example, it can be doubled to one side, or expanded stepwise on
either side until both ends of the interval fall outside of the slice.
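   Putting the log-scale trick, the stepping-out construction of option 3, and shrinkage of the sampling interval together gives a one-dimensional sketch (Python/NumPy; the function name and the normal-density usage example are hypothetical, after Neal [14]):

```python
import numpy as np

def slice_sample_1d(g, x0, w, n_samples, rng=None):
    """One-dimensional slice sampling in log scale (g = log f), using the
    interval expansion of option 3 above; w is the assumed typical slice size."""
    rng = rng or np.random.default_rng()
    x, out = float(x0), []
    for _ in range(n_samples):
        z = g(x) - rng.exponential(1.0)       # z = g(x0) - e, e ~ Exp(1)
        L = x - w * rng.uniform()             # random placement of I_0 around x
        R = L + w
        while g(L) > z:                       # expand until both ends leave the slice
            L -= w
        while g(R) > z:
            R += w
        while True:                           # sample from (L, R), shrink on rejection
            x1 = rng.uniform(L, R)
            if g(x1) > z:
                break
            if x1 < x:
                L = x1
            else:
                R = x1
        x = x1
        out.append(x)
    return np.array(out)

# Hypothetical usage: sample from a standard normal via its log-density.
samples = slice_sample_1d(lambda x: -0.5 * x * x, x0=0.0, w=1.0, n_samples=1000)
```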
    In the multidimensional case there are two different ways of sample generating.
    The first supposes application of the one-dimensional approach to each variable of
the multidimensional distribution in turn. The second approach applies slice sampling
directly to the multidimensional distribution by forming a uniform sample from the
area under the graph of its PDF.
    The second approach was used in performing the computational experiments
because it is more natural and can be substantiated analogously to the
one-dimensional case. Here, generation is performed from the uniform distribution on
the interval from zero to the value of the density function at the current point; the
uniform value defined by the slice is determined by the vertical dimension. Uniform
sample generation in the case of a multidimensional slice is obviously more difficult.
In this case the interval $I = (L, R)$ is replaced by the hyper-rectangle
$$H = \{x : L_i \le x_i \le R_i, \; i = 1, \dots, n\}.$$
Here $L_i$ and $R_i$ determine the extent of the hyper-rectangle along the axis $x_i$.
    The simplest way of determining $H$ is to place it randomly, in such a way that it
is uniformly distributed over all possible positions of $H$ that contain the initial point
$x_0$. Other procedures for determining the interval do not admit such a simple
generalization. For example, the procedure of expanding the interval until all
boundaries exceed the slice limits would not be effective, because an $n$-dimensional
interval has $2^n$ vertices. That is why in the computational experiments the
approach was used based on repeatedly drawing a point uniformly from the interval,
shrinking it toward the current point, until the point taken belongs to the slice; a
sketch is given below.
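   A multidimensional sketch under these conventions (Python/NumPy; random placement plus coordinate-wise shrinkage is one standard variant from [14], and the exact variant used in the experiments may differ; names and the usage example are hypothetical):

```python
import numpy as np

def slice_sample_nd(g, x0, w, n_samples, rng=None):
    """Multidimensional slice sampling in log scale (g = log f): the
    hyper-rectangle H is placed randomly around the current point and
    shrunk coordinate-wise on rejection."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    out = []
    for _ in range(n_samples):
        z = g(x) - rng.exponential(1.0)       # vertical level defining the slice
        L = x - w * rng.uniform(size=x.size)  # random placement of H around x
        R = L + w
        while True:
            x1 = rng.uniform(L, R)            # uniform point in H
            if g(x1) > z:
                break
            L = np.where(x1 < x, x1, L)       # shrink H toward the current point
            R = np.where(x1 < x, R, x1)
        x = x1
        out.append(x.copy())
    return np.array(out)

# Hypothetical usage: sample from a 2-D standard normal log-density.
samples = slice_sample_nd(lambda v: -0.5 * float(v @ v), np.zeros(2), 1.0, 500)
```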


7      Computational experiments

A model has been constructed for the analysis of the exchange rates of the following
currencies against the euro: Swiss franc, British pound, Japanese yen, and US dollar.
To estimate the model parameters, daily exchange rate data from 03.1998 to 01.2006
were taken. After preliminary data processing, 1643 observations were selected for
model building. Using the maximum likelihood technique, the parameters of the
one-dimensional marginal distributions were estimated for the selected exchange
rates, as well as the respective copula parameters (Table 1).

                         Table 1. Estimates of copula parameters

             Copula                Parameter        Value            MSE
             Gumbel                θ                1.6720          0.0158
             Normal                ρ1               0.5637          0.0118
                                   ρ2               0.3318          0.0136
                                   ρ3               0.5943          0.0120
                                   ρ4               0.8241          0.0054
                                   ρ5               0.8593          0.0050
                                   ρ6               0.8037          0.0061
             Frank                 θ                4.5874          0.0911


   The empirical estimate of VaR at quantile level 0.03 (i.e., based on the 50
observations that exceed the selected threshold) amounted to 3.4967. For quantile
level 0.01, 16 observations exceed the corresponding threshold, and the estimate is
3.5345. For the quantile level 0.03 there are enough observations to use the empirical
estimate in practice, but for quantile level 0.01 the data sample is too short. That is
why it is necessary to construct a risk distribution model and estimate the VaR value
using this model (Table 2).
   Table 2 shows that the VaR measures computed using the models based on
combined marginal distributions, joined into a joint distribution with the Gumbel and
Frank copulas, exhibit relative errors with respect to the empirical value at quantile
level 0.03 of 0.203% and 0.022%, respectively.

                         Table 2. VaR estimates using the models

                       Copula    Quantile            Sample size
                                            100       1000 10000
                       Gumbel        0.03   3.414     3.477 3.489
                                     0.01   3.437     3.566 3.601
                       Normal        0.03   3.500     3.538 3.534
                                     0.01   3.617     3.688 3.660
                       Frank         0.03   3.498     3.479 3.495
                                     0.01   3.513     3.563 3.589

The model with the normal copula showed an error of about 1%. On the basis of the
results obtained it can be concluded that all three constructed models are adequate,
but the model based on the Frank copula showed the best result. Thus, the VaR
measure at quantile level 0.01 is taken as 3.5892. The same three models were used to
compute the coherent risk measure ES. The empirical value of the measure ES at
quantile level 0.03 is 3.6074. Table 3 shows that the most adequate model for
estimating the measure ES is also the one based on the Frank copula: the estimate of
ES at quantile level 0.01 is 3.6822. All three models showed worse results for the risk
deviation measure proposed by Markowitz than for the two tail measures considered
above (Table 4).

                          Table 3. Estimates of risk measure ES

                        Copula      Quantile             Sample size
                                                100      1000 10000
                       Gumbel         0.03      3.512    3.575 3.583
                                      0.01      3.534    3.647 3.673
                       Normal         0.03      3.610    3.647 3.643
                                      0.01      3.712    3.773 3.752
                       Frank          0.03      3.585    3.566 3.583
                                      0.01      3.602    3.655 3.682

                        Table 4. Estimates of Markowitz measure

                            Estimation method         $\hat{\sigma}^-(X)$
                           Gumbel copula                  0.1330
                           Normal copula                  0.1462
                           Frank copula                   0.1370
                           Empirical estimate             0.1733

   Thus, the quality of the constructed models shows that they can be used for active
risk management by changing the portfolio structure so as to optimize the selected
risk measure.


8      Conclusion

A method for constructing copula-based joint distribution models is proposed. The
copula model parameters are estimated with the method of maximum likelihood,
which turned out to be effective according to the mean squared error criterion. Three
types of copulas were studied and three models constructed. All three models turned
out to be adequate and practically useful. The approach used to evaluate the risk
measures, based on sampling, provided high precision of the estimates when
non-extreme quantiles were estimated. At the same time, the quality of the risk
deviation measures that form part of a risk profile requires model refinement in the
future.
   The high quality of the tail measure estimates supports the idea that the model
based upon combined marginal distributions, with the use of the normal and
generalized Pareto distributions, requires an improved description of the central part
of the observations. Future studies will consider applying other types of distributions
than the normal one to modeling the central observations. The purpose is to further
improve the quality of the model in the form of a combined marginal distribution,
and to refine the final results of risk estimation.
References
 1. Clemente, Di A., Romano, C.: Measuring and Optimizing Portfolio Credit Risk: A Copula-
    based Approach. Economic Notes, vol. 33, n.3, pp. 325–357 (2004).
 2. Hürlimann, W.: Multivariate Fréchet Copulas and Conditional Value-at-Risk. International
    Journal of Mathematics and Mathematical Sciences, vol. 7, pp. 345–364 (2004).
 3. Li, D. X.: On Default Correlation: A Copula Function Approach. Journal of Fixed Income,
    vol. 9, pp. 43–54 (2000).
 4. Breymann, W., Dias, A., Embrechts, P.: Dependence Structures for Multivariate High-
    Frequency Data in Finance. Quantitative Finance, vol. 3, n.1, pp. 1–14 (2003).
 5. Markowitz, H.: Portfolio Selection. Journal of Finance, n.7, pp. 77–91 (1952).
 6. Christoffersen, P., Hahn, J., Inoue, A.: Testing and Comparing Value at Risk Measures.
    Journal of Empirical Finance, vol. 8, n. 3, pp. 325–342 (2001).
 7. Guojun, W., Zhijie, X.: An Analysis of Risk Measures. Journal of Risk, n. 4, pp. 53–75
    (2002).
 8. Acerbi, C., Tasche, D.: Expected Shortfall: a Natural Coherent Alternative to Value at
    Risk. Economic Notes, vol. 31, n. 2, pp. 379–388 (2002).
 9. Acerbi, C., Tasche, D.: On the Coherence of Expected Shortfall. Journal of Banking and
    Finance, vol. 26, n. 7, pp. 1487–1503 (2002).
10. Plackov, D., Sadus, R.J., Dimson, E.: Stress Tests of Capital Requirements. Journal of
    Banking and Finance, vol. 21, n. 11, pp. 1515–1546 (1997).
11. Longin, F.M.: From Value at Risk to Stress Testing: The Extreme Value Approach.
    Journal of Banking and Finance, vol. 24, pp. 1097–1130 (2000).
12. Bidyuk, P. I., Kuznietsova, N. V.: Forecasting the Volatility of Financial Processes with
    Conditional Variance Models. Journal of Automation and Information Sciences, vol. 46,
    n. 10, pp. 11–19 (2014).
13. Glasserman, P., Heidelberger, P., Shahabuddin, P.: Efficient Monte Carlo Methods for
    Value-at-Risk. Mastering Risk, vol. 2: Applications, Ed: C. Alexander, Prentice Hall, pp.
    7–20 (2001).
14. Neal, R.: Slice Sampling. Ann. Statist., vol. 31, n 3, pp 705–767 (2003).
15. Evans, M., Swartz, T.: Random Variable Generation Using Concavity Properties of
    Transformed Densities. Journal of Computational and Graphical Statistics, vol. 7, n. 4, pp.
    514–528 (1998).
16. Chernozhukov, V., Hong, H.: An MCMC Approach to Classical Estimation. Journal of
    Econometrics, vol. 115, n. 2, pp. 293–346 (2003).
17. Kuznietsova, N. V., Bidyuk, P. I.: Modeling of Credit Risks on the Basis of the Theory of
    Survival. Journal of Automation and Information Sciences, vol. 49(11), pp. 11–24 (2017).
18. Kuznietsova, N., Bidyuk, P.: Forecasting of Financial Risk Users’ Outflow. In: IEEE First
    International Conference on System Analysis & Intelligent Computing (SAIC), Kyiv,
    pp. 250–255 (2018). https://ieeexplore.ieee.org/abstract/document/8516782, last
    accessed 2020/05/12.
19. Artzner, P., Delbaen, F., Eber, J.-M., Heath, D.: Coherent Measures of Risk. Math.
    Finance, n. 3, pp. 203–228 (1999).
20. Rockafellar, R. T., Uryasev, S., Zabarankin, M.: Generalized Deviations in Risk Analysis.
    Finance and Stochastics, vol. 10, n. 1, pp. 51–74 (2006).
21. Nelsen, R.: An Introduction to Copulas, 2nd ed. Springer-Verlag, Berlin, 269 p. (2006).
22. Hult, H., Lindskog, F.: Multivariate Extremes, Aggregation and Dependence in Elliptical
    Distributions. Advances in Applied Probability, vol. 34, n. 3, pp. 587–608 (2002).