Understanding Volatility and Copula Correlation in a Risk Management Context

Volatility and copula correlation are among the most important parameters in finance, especially in risk management, because they appear in almost every contentious issue today: risk measurement, wrong-way risk, risk interdependence, long-memory patterns, and so on. However, a large majority of people misunderstand them, which has contributed to seriously damaging failures in trading and risk management practice. It is therefore worth restating the nature of volatility and correlation.

Volatility

Financial asset prices are observable at any time from the past up to the present, but how they will behave in the future is a mystery. We cannot be sure what the exact price of Google stock, for example, will be tomorrow. We can, however, forecast it using probability theory. An analyst will typically say that next week's Google price will be XXX dollars; they are not certain about the statement, of course, but it reflects their expectation.

Volatility is a measure of dispersion around an expected value. Price is observable, but volatility is not. We can find a price anywhere in the market because there is only one way to produce ‘price’: the number of dollar units at which market participants agree to trade. We cannot find a single true volatility value, however. There is no central marketplace for trading volatility, and there is a wide variety of volatility measures: implied volatility, statistical volatility, realized volatility, historical volatility, etc. The most common statistical measure of volatility is the standard deviation (\(\sigma\)) of a random variable.
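As a minimal sketch of the historical-volatility variant, the standard-deviation measure can be computed in R; the daily return series below is simulated and purely illustrative:

```r
# Historical volatility as the standard deviation of daily log returns.
# The return series is simulated, purely for illustration.
set.seed(42)
log_returns <- rnorm(250, mean = 0, sd = 0.01)  # one "year" of daily returns
daily_vol   <- sd(log_returns)                  # daily historical volatility
annual_vol  <- daily_vol * sqrt(250)            # square-root-of-time annualisation
c(daily = daily_vol, annual = annual_vol)
```

The square-root-of-time annualisation assumes independent, identically distributed returns, an assumption real return series often violate.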

curve(dnorm(x, 0, 1), xlim = c(-6, 6), ylim = c(0, 0.4), xlab = "", ylab = "")
curve(dnorm(x, 0, 2), add = TRUE, lty = 3, lwd = 3)
legend(2, 0.4, c("N(0,1)", "N(0,2)"), lty = c(1, 3), lwd = c(1, 3))
title(main = "Normal Distributions")

In risk management, volatility is used mainly in the context of risk measurement.

Correlation and Covariance

I first show what joint distributions are. Below are bivariate normal distributions with two different covariance parameters, as typical examples of joint distributions. A bivariate normal density can be visualized as a mountain: for highly correlated variables (and hence a high covariance parameter), the joint density has more of a ridge in a direction between the axes of the two variables, as the figure below shows.
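A figure along these lines can be reproduced with base R by evaluating the bivariate standard-normal density from its closed form; the correlation values 0.1 and 0.9 are illustrative choices:

```r
# Bivariate standard-normal density with correlation rho, from its closed form.
bvn_density <- function(x, y, rho) {
  (1 / (2 * pi * sqrt(1 - rho^2))) *
    exp(-(x^2 - 2 * rho * x * y + y^2) / (2 * (1 - rho^2)))
}

x <- y <- seq(-3, 3, length.out = 50)
z_low  <- outer(x, y, bvn_density, rho = 0.1)   # nearly round hill
z_high <- outer(x, y, bvn_density, rho = 0.9)   # pronounced diagonal ridge

op <- par(mfrow = c(1, 2))
persp(x, y, z_low,  theta = 30, phi = 25, main = "rho = 0.1")
persp(x, y, z_high, theta = 30, phi = 25, main = "rho = 0.9")
par(op)
```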

A measure of co-movement between two random variables is the covariance. It is also known as the first product moment about the mean of the joint density function. That is

\[cov(X,Y) = E[(X-E(X))(Y-E(Y))]\]

However, covariance reflects not only the co-movement between the two random variables X and Y but also their scales. In other words, covariance depends on the units of measurement. This makes it difficult to compare several different pairs of variables, so a standardized measure is needed. That is where correlation comes from:

\[corr(X,Y) = \frac{cov(X,Y)}{sd(X)\,sd(Y)}\]

or equivalently

\[\rho_{X,Y} = \frac{\sigma_{X,Y}}{\sigma_X \sigma_Y}\]

This is the formula for the Pearson correlation. A strong positive correlation indicates that upward movements of one variable from its expectation are accompanied by upward movements of the other from its expected value; a strong negative correlation indicates the opposite relationship. In the case of zero correlation, the term ‘orthogonal’ is used.
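A quick numerical check of the identity above, on two simulated series (the coefficient 0.6 is an arbitrary choice):

```r
# Verify that Pearson correlation is covariance scaled by standard deviations.
set.seed(1)
x <- rnorm(1000)
y <- 0.6 * x + rnorm(1000)               # y co-moves positively with x
manual_corr <- cov(x, y) / (sd(x) * sd(y))
all.equal(manual_corr, cor(x, y))        # TRUE: the two definitions agree
```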

A major drawback of correlation is that financial markets very often exhibit non-linear dependence between variables, which correlation struggles to capture. This motivates a different approach to capturing co-dependence, the so-called copula.

Copulas

Definition of copulas

A copula is a joint cumulative distribution function of several uniform variables. In fact, if \(u_1, u_2, ..., u_n\) are values of \(n\) univariate distribution functions, each \(u_i \in [0,1]\), then a copula \(C\) is a function

\[C(u_1,...,u_n) : [0,1]^n \rightarrow [0,1]\]

The univariate distributions are called marginal distributions, and \(C\) is the unique function that combines those marginal distributions into a multivariate distribution.

Copula intuitions

Before we go into details, recall some relevant concepts:

\(CDF_X (x) = P(X \leq x)\): the CDF of a variable \(X\) at the value \(x\) is the probability that \(X\) takes a value no greater than \(x\). If \(X \sim U(0,1)\) then \(CDF_X(x) = x\), because the PDF of \(X\sim U(a,b)\) is \(\frac{1}{b-a}\), so the PDF of \(X\sim U(0,1)\) is 1 and hence its CDF is the integral \(\int^{x}_{0}1\,dt = x\).

So why are they uniform variables, and not following some other distribution? Because for any variable \(X\) following any continuous distribution, its CDF transform \(F_X(X)\) follows the uniform distribution \(U(0,1)\). In fact, if we employ the inverse function \(F_X^{-1}\), we have

\[P[F_X (X) \leq x] = P[X \leq F_X^{-1}(x)] = F_X[F_X^{-1}(x)] = x\]
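This ‘probability integral transform’ can be illustrated numerically in R; the exponential distribution and its rate are arbitrary choices of a continuous distribution:

```r
# For any continuous X, F_X(X) is Uniform(0,1): the probability integral transform.
set.seed(2)
x <- rexp(10000, rate = 2)   # draws from an arbitrary continuous distribution
u <- pexp(x, rate = 2)       # apply that distribution's own CDF
# u should now look uniform on [0,1]: mean near 1/2, variance near 1/12
c(mean = mean(u), var = var(u))
```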

Back to our problem, suppose the vector \(Y=(Y_1, Y_2,...,Y_n)\) follows a multivariate distribution with CDF \(F_Y\) and continuous marginal univariate CDFs \(F_{Y_1}, F_{Y_2},..., F_{Y_n}\). Above we proved that each \(F_{Y_i}(Y_i)\) is Uniform(0,1); thus, by the definition of copulas, the CDF of \([F_{Y_1}(Y_1),F_{Y_2}(Y_2),...,F_{Y_n}(Y_n)]\) is a copula, denoted by \(C_Y\).

Since the copula \(C_Y\) is a cumulative distribution function, by the definition of CDFs in probability theory we have

\[\begin{aligned}C_Y(u_1,u_2,...,u_n) &= P[F_{Y_1} (Y_1) \leq u_1, F_{Y_2} (Y_2) \leq u_2, ..., F_{Y_n} (Y_n) \leq u_n] \\ &=P[Y_1 \le F^{-1}_{Y_1}(u_1),..., Y_n \le F^{-1}_{Y_n}(u_n)] \\ &=F_Y[F^{-1}_{Y_1}(u_1),..., F^{-1}_{Y_n}(u_n)] \end{aligned}\]

We know that each \(F_{Y_i}(y_i)\) is a value of a uniform variable, where the \(y_i\) are values of \(Y_i\); we let \(u_i\) be the value of \(F_{Y_i}(y_i)\), so that \(F_{Y_i}(y_i) = u_i\). Then our final version is

\[C_Y[F_{Y_1}(y_1),...,F_{Y_n}(y_n)] = F_Y[F^{-1}_{Y_1}(u_1),..., F^{-1}_{Y_n}(u_n)] \stackrel{F^{-1}_{Y_i}(u_i)=y_i}{=} F_Y[y_1,...,y_n]\]

Based on the equation above, we can state that a multivariate distribution function (the right-hand side) can be written in the form of a copula function. This is part of a famous theorem due to Sklar (1973) [^Sklar, A., 1973. Random Variables, Joint Distribution Functions and Copulas. Kybernetika, 9, pp.449-460.].

The act of aligning \(F_{Y_i}(y_i)\) to \(u_i\) is called ‘percentile-to-percentile mapping’. The term percentile derives from the fact that \(u_i \in [0,1]\).

One important property of copulas is that they are invariant to strictly increasing transformations of the variables. This partly overcomes the limitation of Pearson correlation mentioned above. The proof is provided at the end of this post.

Okay, so a copula is a CDF. But where is the correlation?

Multivariate normal and t-distributions offer a convenient way to generate families of copulas.

Let the vector \(Y=(Y_1, Y_2)\) have a bivariate normal distribution. The copula function \(C_Y\) depends only on the correlation matrix of \(Y\), denoted by \(\Omega\), i.e. on the dependence within \(Y\), not on the univariate marginal distributions. We have

\[C_Y[N_{Y_1}(y_1),N_{Y_2}(y_2)] = N_Y(y_1,y_2, \Omega)\]

The correlation matrix does not necessarily equal the usual correlation coefficient defined by Pearson, nor Spearman’s \(\rho_S\), nor Kendall’s \(\tau\). The latter two can be defined using a copula function alone, as follows

\[\rho_S = 12 \int \int C_Y(u_1,u_2)\,du_1\, du_2 -3\]

\[\tau = 4 \int \int C_Y(u_1,u_2)\,dC_Y(u_1,u_2) -1\]

where \(u_i = N_{Y_i}(y_i)\).

Pearson’s correlation depends not only on the copula function but also on the marginal distributions themselves. Its complications are beyond the scope of this post, so I will not cover them here.

If a random vector \(Y\) has a Gaussian copula, then \(Y\) is said to have a ‘meta-Gaussian distribution’. This does not, of course, mean that \(Y\) has a multivariate Gaussian distribution, since the univariate marginal distributions of \(Y\) could be any distributions at all. In this case, the Spearman and Kendall correlation coefficients are

\[\rho_S = \frac{6}{\pi} \arcsin(\Omega/2)\]

\[\tau = \frac{2}{\pi} \arcsin(\Omega)\]
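These relations can be sanity-checked by simulating a Gaussian copula in base R and pushing it through non-normal margins; the correlation 0.7, the exponential margin, and the uniform margin are all illustrative choices:

```r
# Draw from a bivariate Gaussian copula and attach arbitrary margins.
set.seed(3)
rho <- 0.7
n   <- 2000
z1  <- rnorm(n)
z2  <- rho * z1 + sqrt(1 - rho^2) * rnorm(n)  # correlated standard normals
u1  <- pnorm(z1); u2 <- pnorm(z2)             # Gaussian-copula sample in [0,1]^2
y1  <- qexp(u1, rate = 1)                     # exponential margin
y2  <- qunif(u2, -1, 1)                       # uniform margin
# Rank correlations depend only on the copula, so the sample values should sit
# near the theoretical (6/pi)*asin(rho/2) and (2/pi)*asin(rho).
c(spearman     = cor(y1, y2, method = "spearman"),
  kendall      = cor(y1, y2, method = "kendall"),
  rho_S_theory = (6 / pi) * asin(rho / 2),
  tau_theory   = (2 / pi) * asin(rho))
```

Note that replacing the margins (`qexp`, `qunif`) by any other strictly increasing quantile functions leaves both rank correlations unchanged, which is the invariance property mentioned earlier.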

We know that \(\Omega \in [-1,1]\), over which range \(\arcsin(\Omega/2) \approx \Omega/2\), so

\[\Omega \approx \frac{6}{\pi} \arcsin(\Omega/2) = \rho_S\]

Therefore, the correlation matrix of a Gaussian copula can be obtained from Spearman’s rank correlation, approximately as \(\Omega \approx \rho_S\) or exactly as \(\Omega = 2\sin(\pi\rho_S/6)\).

In a risk management context, Gaussian copulas are used very frequently, especially in modelling credit risk (see CreditMetrics).

Summary

This post stops at introducing some fundamental concepts of volatility and correlation. There are numerous ways to expand on these measures of risk. One of the most practical is to conduct a volatility or correlation analysis for a particular stock index, stock price series, etc. From a theoretical viewpoint, volatility and correlation are not perfect measures of risk. True risk measurement remains contested, with fierce debate between distinct schools of professionals: those who come from traditional statistics and those who have developed applied deep learning.