High dimensional risk aggregation: a hierarchical approach with copulas

Philipp Arbenz (ETH Zurich, SCOR), www.math.ethz.ch/~arbenz/
Joint work with Christoph Hummel and Georg Mainik

Bahnhofskolloquium, Zürich, 9.1.2012

Outline: Motivation · Hierarchical risk aggregation · Reordering algorithm · Conclusion


Risk aggregation: why?

Swiss Solvency Test (SST): one part of the solvency capital requirement (SCR) is

    ES_{99%}[ (Assets(1) − Liabilities(1)) / (1 + r) − (Assets(0) − Liabilities(0)) ],

where Assets(t) − Liabilities(t) is the market-consistent valuation of the available capital (= all assets minus all liabilities) at time t. Note that Assets(1) − Liabilities(1) is random at time 0!
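The talk gives no code here, but for illustration, one standard empirical estimator of ES_{99%} applied to simulated one-year losses; the estimator shape, names and sign convention (losses positive) are my assumptions, not from the slides:

```python
import numpy as np

def expected_shortfall(losses, alpha=0.99):
    """Empirical ES_alpha: mean of the worst (1 - alpha) fraction of outcomes.
    `losses` are Monte Carlo samples; larger values mean worse outcomes."""
    losses = np.sort(np.asarray(losses))
    k = int(np.ceil(alpha * len(losses)))  # index of the empirical alpha-quantile
    return losses[k:].mean()

# hypothetical usage: samples of the discounted one-year loss
# -[ (Assets(1) - Liabilities(1)) / (1 + r) - (Assets(0) - Liabilities(0)) ]
rng = np.random.default_rng(0)
loss_samples = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)
print(expected_shortfall(loss_samples, alpha=0.99))
```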


Risk aggregation: why?

One component for solvency capital requirements and risk management: calculate the distribution of Liabilities(1), which is the sum of all liabilities Xi:

    Liabilities(1) = Σ_{i=1}^{d} Xi .

• Xi : value of liability i at time 1 (random at time 0)
• d : number of liabilities (usually huge!)



Risk aggregation: dependence matters

• Many risks are correlated
• Risks which are uncorrelated in "normal times" become dependent in the extremes. Examples:
  - 9/11 terrorist attacks
  - 2011 Tōhoku earthquake (tsunami, Fukushima, etc.)

Dependence cannot be ignored!



Popular risk aggregation methodologies

• Variance-covariance approaches
  - In high dimensions, the number of correlation parameters (= d(d − 1)/2) becomes overwhelming
  - Conclusions are limited to statements on mean and (co)variance
• Risk factor models
  - Explicitly modelling risk factors can be difficult
  - Estimating risk factor sensitivities for all risks is challenging
• Copula models
  - Can theoretically capture all aspects of dependence
  - Finding an adequate copula model is difficult (more details later)



Copulas: definition

The joint cumulative distribution function (cdf) of (X1, ..., Xd) can be written as

    P[X1 ≤ x1, ..., Xd ≤ xd] = C(F1(x1), ..., Fd(xd)),

where
• copula function C : [0, 1]^d → [0, 1]
• marginal cdfs Fi(x) = P[Xi ≤ x]   (Fi : R → [0, 1])

• C captures all aspects of dependence
• The Fi capture all aspects of the marginal distributions



Copula models

Setting up a copula model for the distribution of (X1, ..., Xd) is easy:
1. set a model for the Fi
2. set a model for C

There are many models for copulas:
• parametric
  - elliptical (Gaussian, t, ...)
  - Archimedean (Clayton, Gumbel, Frank, ...)
  - vines
  - etc.
• nonparametric
  - Bernstein copulas
  - box copulas
  - Fourier copulas
  - etc.


Copula model simulation

C is the cdf of a random vector (U1, ..., Ud) with uniform margins.

To simulate from (X1, ..., Xd):
1. Draw a sample (U1, ..., Ud) ∼ C
2. Set (X1, ..., Xd) = (F1^{−1}(U1), ..., Fd^{−1}(Ud))   (see the sketch below)
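A minimal sketch of these two steps; the Gaussian copula and lognormal margins are my own illustrative choices, not prescribed by the talk:

```python
import numpy as np
from scipy import stats

def sample_gaussian_copula(corr, n, rng):
    """Step 1: draw n samples (U1, ..., Ud) from a Gaussian copula with
    correlation matrix `corr`, via the normal cdf of correlated normals."""
    z = rng.multivariate_normal(np.zeros(len(corr)), corr, size=n)
    return stats.norm.cdf(z)

rng = np.random.default_rng(42)
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])
u = sample_gaussian_copula(corr, 10_000, rng)

# Step 2: apply the inverse marginal cdfs (illustrative lognormal margins)
x1 = stats.lognorm.ppf(u[:, 0], s=1.0)  # X1 = F1^{-1}(U1)
x2 = stats.lognorm.ppf(u[:, 1], s=0.5)  # X2 = F2^{-1}(U2)
```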


Problems with copulas in high dimensions

In high dimensions, most popular (parametric) copula classes are difficult to justify. Possible issues are
• too symmetric dependence structure
• difficult to calibrate
  - not enough information/data
  - too many/too few parameters
• numerically slow simulation
• hard to justify (in front of management, regulators, rating agencies)

Hierarchical risk aggregation circumvents these problems.



Hierarchical aggregation: Explanation through an example

Suppose we have risks from three categories: Xi, Yi and Zi. Total risk:

    T = (X1 + X2) + (Y1 + Y2 + Y3) + (Z1 + Z2 + Z3 + Z4)

Classical approach: model the joint distribution of (X1, X2, Y1, Y2, Y3, Z1, Z2, Z3, Z4) and directly calculate (simulate) the distribution of T.



Hierarchical aggregation: Example (cont')

Hierarchical approach: first aggregate towards subaggregates

    X = X1 + X2        Y = Y1 + Y2 + Y3        Z = Z1 + Z2 + Z3 + Z4

then to the total T = X + Y + Z.

Aggregation tree:

    T = X + Y + Z
     ├─ X = X1 + X2:             X1, X2
     ├─ Y = Y1 + Y2 + Y3:        Y1, Y2, Y3
     └─ Z = Z1 + Z2 + Z3 + Z4:   Z1, Z2, Z3, Z4



Modelling point of view

Classical approach: determine one 9-variate copula describing the dependence structure of (X1, X2, Y1, Y2, Y3, Z1, Z2, Z3, Z4).

Hierarchical approach: determine 4 copulas CX, CY, CZ and CT such that

    (X1, X2) ∼ CX(FX1, FX2)
    (Y1, Y2, Y3) ∼ CY(FY1, FY2, FY3)
    (Z1, Z2, Z3, Z4) ∼ CZ(FZ1, FZ2, FZ3, FZ4)
    (X, Y, Z) ∼ CT(FX, FY, FZ)

These copulas are of lower dimension - a "divide & conquer" strategy.


Sampling the tree

Generating i.i.d. samples from the aggregation tree is NOT possible.

    T = X + Y + Z
     ├─ X = X1 + X2:             X1, X2
     ├─ Y = Y1 + Y2 + Y3:        Y1, Y2, Y3
     └─ Z = Z1 + Z2 + Z3 + Z4:   Z1, Z2, Z3, Z4

Instead: reordering algorithm for approximation. Inspired by the Iman-Conover method.


Reordering algorithm

Illustration based on a simple problem: (X, Y) ∼ C(FX, FY).

1. Fix n ∈ N.
2. Simulate independently, for i = 1, ..., n:
   - Xi ∼ FX
   - Yi ∼ FY
   - Ui = (Ui1, Ui2) ∼ C
3. Construct "samples" of (X, Y) by merging the order statistics X(i) and Y(j) according to the observed joint ranks in the copula sample (sketch below).
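A minimal numpy sketch of these steps; the function name and array layout are mine (the authors' own code examples are on the homepage cited at the end):

```python
import numpy as np

def reorder(margin_samples, copula_sample):
    """Reordering step: rearrange independent marginal samples so that their
    joint ranks match those of a copula sample.

    margin_samples: list of d arrays, each with n i.i.d. draws from one margin
    copula_sample:  (n, d) array of n i.i.d. draws from the copula C
    returns:        (n, d) array of approximate samples from C(F1, ..., Fd)
    """
    n, d = copula_sample.shape
    out = np.empty((n, d))
    for j in range(d):
        # 0-based rank of each copula observation within column j
        ranks = copula_sample[:, j].argsort().argsort()
        # place the j-th margin's order statistics at those ranks
        out[:, j] = np.sort(np.asarray(margin_samples[j]))[ranks]
    return out
```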



Reordering algorithm: Sampling margins and Copula

Let n = 4. Sample i.i.d. Xi ∼ FX, i = 1, 2, 3, 4. Sample Yi ∼ FY i.i.d., independent of the Xi. Sample Ui ∼ C i.i.d., Ui ∈ [0, 1]^2, independent of the Xi and Yi.

    Xi ∼ FX           Yi ∼ FY           Ui ∼ C
    sample  rank      sample  rank      sample      rank
    3.1     2         67.9    4         (0.4,0.7)   (2,3)
    6.3     4         22.8    2         (0.5,0.9)   (3,4)
    1.4     1         12.2    1         (0.1,0.3)   (1,1)
    5.9     3         43.7    3         (0.7,0.4)   (4,2)


Reordering algorithm: Reordering

    Xi ∼ FX           Yi ∼ FY           Ui ∼ C
    sample  rank      sample  rank      sample      rank
    3.1     2         67.9    4         (0.4,0.7)   (2,3)
    6.3     4         12.2    1         (0.5,0.9)   (3,4)
    1.4     1         22.8    2         (0.1,0.3)   (1,1)
    5.9     3         43.7    3         (0.7,0.4)   (4,2)

Samples of (X, Y), merging order statistics by the copula ranks:

    (2,3) → (3.1, 43.7)
    (3,4) → (5.9, 67.9)
    (1,1) → (1.4, 12.2)
    (4,2) → (6.3, 22.8)

Samples of X + Y:

    3.1 + 43.7 = 46.8
    5.9 + 67.9 = 73.8
    1.4 + 12.2 = 13.6
    6.3 + 22.8 = 29.1
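Plugging the toy numbers above into the reorder sketch from earlier reproduces this slide:

```python
import numpy as np

x = np.array([3.1, 6.3, 1.4, 5.9])
y = np.array([67.9, 12.2, 22.8, 43.7])
u = np.array([[0.4, 0.7],
              [0.5, 0.9],
              [0.1, 0.3],
              [0.7, 0.4]])

xy = reorder([x, y], u)  # rows: (3.1, 43.7), (5.9, 67.9), (1.4, 12.2), (6.3, 22.8)
print(xy.sum(axis=1))    # [46.8 73.8 13.6 29.1]
```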



Sampling the aggregation tree - recall structure

    T = X + Y + Z
     ├─ X = X1 + X2:             X1, X2
     ├─ Y = Y1 + Y2 + Y3:        Y1, Y2, Y3
     └─ Z = Z1 + Z2 + Z3 + Z4:   Z1, Z2, Z3, Z4

Dependence is described through 4 copulas:

    (X1, X2) ∼ CX(FX1, FX2)
    (Y1, Y2, Y3) ∼ CY(FY1, FY2, FY3)
    (Z1, Z2, Z3, Z4) ∼ CZ(FZ1, FZ2, FZ3, FZ4)
    (X, Y, Z) ∼ CT(FX, FY, FZ)


Aggregation example, sampling through reordering

Walk the tree bottom-up (see the sketch after this list):
1. Reorder samples of X1 and X2 according to the copula CX. Calculate samples of X.
2. Reorder samples of Y1, Y2 and Y3 according to the copula CY. Calculate samples of Y.
3. Reorder samples of Z1, Z2, Z3 and Z4 according to the copula CZ. Calculate samples of Z.
4. Through the previous reorderings, we have samples of X, Y and Z! Reorder those according to the copula CT, in order to get samples of T.
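A runnable sketch of this pass, reusing the reorder function from before; the leaf distributions are placeholders, and the uniform i.i.d. draws stand in for copula samples (they correspond to independence copulas):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

def aggregate(margin_samples, copula_sample):
    """One aggregation step: reorder the children's samples to follow the
    node's copula, then sum row-wise to get samples of the subaggregate."""
    return reorder(margin_samples, copula_sample).sum(axis=1)

# illustrative leaf samples and (independence) copula samples
x1, x2 = rng.lognormal(size=n), rng.lognormal(size=n)
y1, y2, y3 = (rng.gamma(2.0, size=n) for _ in range(3))
z1, z2, z3, z4 = (rng.exponential(size=n) for _ in range(4))
u_x, u_y, u_z, u_t = (rng.uniform(size=(n, d)) for d in (2, 3, 4, 3))

x = aggregate([x1, x2], u_x)          # samples of X
y = aggregate([y1, y2, y3], u_y)      # samples of Y
z = aggregate([z1, z2, z3, z4], u_z)  # samples of Z
t = aggregate([x, y, z], u_t)         # samples of T
```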



Aggregation example, convergence

The reordering algorithm is not classical Monte Carlo: the sample is not i.i.d.

Theorem: Suppose the copulas are absolutely continuous with bounded densities. Then the empirical cdf of T converges uniformly:

    (1/n) Σ_{i=1}^{n} 1{Ti ≤ x} → P[T ≤ x]   as n → ∞,

with convergence rate O(1/√n).

• Why only bounded densities? The underlying set classes do not satisfy the Vapnik-Chervonenkis (VC) property!
• For unbounded densities:
  - works for a few examples (e.g. bivariate Clayton)
  - in general: an open problem


How to set the aggregation tree?

For 9 risks, there are 12'818'912 possible aggregation trees!
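This count appears to be the number of hierarchies on 9 labeled leaves in which every internal node merges at least two children (OEIS A000311); a small sketch of that recurrence over set partitions, as a plausibility check rather than the talk's own derivation:

```python
from functools import lru_cache
from math import factorial

def partitions(n, max_part=None):
    """Yield the integer partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

@lru_cache(maxsize=None)
def num_trees(n):
    """Hierarchies on n labeled risks where each inner node has >= 2 children:
    split the root into a set partition with >= 2 blocks, recurse on blocks."""
    if n == 1:
        return 1
    total = 0
    for lam in partitions(n):
        if len(lam) < 2:
            continue
        ways = factorial(n)  # number of set partitions with block sizes lam
        for part in lam:
            ways //= factorial(part)
        for size in set(lam):
            ways //= factorial(lam.count(size))
        prod = 1
        for part in lam:
            prod *= num_trees(part)
        total += ways * prod
    return total

print(num_trees(9))  # 12818912
```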



Estimation of the tree

Estimating the tree from data: not feasible. Model identification problems!

Heuristics: Aggregate by risk types. Groupings are inherent due to
• line of business
• location
• maturity

Dependence between risks gets weaker the farther they are apart.
• Keep the number of aggregation levels low
• Strongest dependencies at the bottom
• Subaggregates with similar roles should be on the same level in the tree



Capital allocation

Capital allocation is easy: allocate hierarchically.
1. Risk capital: KT
2. One has a sample of (X, Y, Z)! Allocate to X, Y, Z by splitting KT.
3. One has a sample of (X1, X2)! Allocate to X1 and X2 by splitting KX. Analogous for Y and Z. (A sketch of one such split follows.)

    KT
     ├─ KX: KX1, KX2
     ├─ KY: KY1, KY2, KY3
     └─ KZ: KZ1, KZ2, KZ3, KZ4
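One common way to perform such a split from joint samples is an ES-contribution (Euler-style) allocation; this sketch is my illustration, the talk itself does not fix an allocation rule:

```python
import numpy as np

def allocate_es(component_samples, alpha=0.99):
    """Split ES-based capital across components: average each component over
    the scenarios where the total lies in its upper (1 - alpha) tail.
    The contributions sum to the empirical ES of the total."""
    components = np.column_stack(component_samples)
    total = components.sum(axis=1)
    tail = total >= np.quantile(total, alpha)
    return components[tail].mean(axis=0)

# hypothetical usage with joint samples of (X, Y, Z) from the reordering step
rng = np.random.default_rng(3)
x, y, z = rng.lognormal(size=(3, 100_000))
k_x, k_y, k_z = allocate_es([x, y, z])
```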


Conclusion

• Very high dimensions are feasible
• Flexible dependence structure
  - Any type of copulas can be combined
• Simulation is easy
• Selection of the aggregation tree: tricky
• Calibration is easier than with common copula models (divide & conquer). Statistical complexity can be adjusted through choice of tree and copula families
• Capital allocation is possible
• The reordering method can also be used with other aggregation functionals


References

• P. Arbenz, C. Hummel, G. Mainik (2011): Copula based hierarchical risk aggregation through sample reordering. Submitted.
• S. Mildenhall (2005): Correlation and aggregate loss distributions with an emphasis on the Iman-Conover method. Casualty Actuarial Society Forum, Winter 2005.

Preprint, presentation and code examples are available on my homepage: www.math.ethz.ch/~arbenz/ (find it by Googling my name)

Thank you!


Appendix: Sampling the whole tree

Up to now: sampling was described only for a single aggregation step. How to sample from the whole tree? Idea: pull back permutations from the top to the bottom of the tree!

    X∅ = X1 + X2 (copula C∅)
     ├─ X1 = X1,1 + X1,2 (copula C1):  X1,1, X1,2
     └─ X2

Marginal samples (n = 3):

    X1,1: 0, 0.1, 0.2      X1,2: 0, 1, 2      X2: 0, 10, 20

Reordering of X1,1 and X1,2 (copula C1), then summing to X1:

    X1,1   X1,2   X1
    0.1    0      0.1
    0      2      2
    0.2    1      1.2

Reordering of X1 and X2 (copula C∅), then summing to X∅:

    X1    X2    X∅
    2     10    12
    1.2   20    21.2
    0.1   0     0.1

Apply to X1,1 and X1,2 the permutations which were applied to X1, i.e. pull back the permutations to the leaf nodes to construct a joint sample:

    X1,1   X1,2   X2    X∅
    0      2      10    12
    0.2    1      20    21.2
    0.1    0      0     0.1
