Computing an optimum portfolio for a CARA utility function

TaPaKaH
Homework Statement


##u(x)=1-e^{-ax}##.
Random vector ##Y\in\mathbb{R}^d## follows a multivariate normal distribution with mean vector ##m## and invertible covariance matrix ##\Sigma##.
Task: Find ##\xi^*\in\mathbb{R}^d## that maximises ##\mathbb{E}u(\xi\cdot Y)## over ##\xi##.

Homework Equations

For ##Y## above, its density function is $$f_Y(x)=\frac{1}{(2\pi)^{d/2}|\Sigma|^{1/2}}e^{-\frac{(x-m)^T\Sigma^{-1}(x-m)}{2}}$$

The Attempt at a Solution


Do I get it right that in order to find ##\xi^*## I need to solve ##\frac{\partial}{\partial\xi_i}\mathbb{E}u(\xi\cdot Y)=0## for ##i=1,\ldots,d## and then check whether all ##\frac{\partial^2}{\partial\xi_i^2}\mathbb{E}u(\xi\cdot Y)## are negative?
 
TaPaKaH said:

Do I get it right that in order to find ##\xi^*## I need to solve ##\frac{\partial}{\partial\xi_i}\mathbb{E}u(\xi\cdot Y)=0## for ##i=1,\ldots,d## and then check whether all ##\frac{\partial^2}{\partial\xi_i^2}\mathbb{E}u(\xi\cdot Y)## are negative?

Yes, for ##F(\vec{\xi}) = \mathbb{E}\,u(\vec{\xi} \cdot \vec{Y})## you need to set
$$\frac{\partial F}{\partial \xi_i} = 0, \quad i = 1, \ldots, d,$$
but the second-order test is more complicated than what you indicated: you need to check that the ##d \times d## Hessian matrix
$$H(\vec{\xi}) = \left[\begin{array}{cccc}
\partial^2 F/\partial \xi_1^2 & \partial^2 F/\partial \xi_1 \partial \xi_2 & \cdots & \partial^2 F/\partial \xi_1 \partial \xi_d \\
\partial^2 F/\partial \xi_2 \partial \xi_1 & \partial^2 F/\partial \xi_2^2 & \cdots & \partial^2 F/\partial \xi_2 \partial \xi_d \\
\vdots & \vdots & \ddots & \vdots \\
\partial^2 F/\partial \xi_d \partial \xi_1 & \partial^2 F/\partial \xi_d \partial \xi_2 & \cdots & \partial^2 F/\partial \xi_d^2
\end{array}\right]$$
is negative-definite at the optimal value ##\vec{\xi} = \vec{\xi}^*##.

However, before embarking on any of that I urge you to first evaluate ##F(\vec{\xi})## as an explicit formula; believe it or not it is not too bad!
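A minimal numerical sanity check of that last claim (a sketch, assuming NumPy; the values of ##a##, ##m##, ##\Sigma## and ##\xi## below are made-up examples, and the closed form used for comparison is the standard multivariate-normal moment generating function):

```python
import numpy as np

rng = np.random.default_rng(0)
a = 1.0                                   # example risk-aversion parameter
m = np.array([0.5, 1.0])                  # example mean vector
Sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])            # example covariance matrix
xi = np.array([0.2, 0.1])                 # example portfolio weights

# Monte Carlo estimate of F(xi) = E[1 - exp(-a * xi . Y)]
Y = rng.multivariate_normal(m, Sigma, size=200_000)
F_mc = np.mean(1.0 - np.exp(-a * (Y @ xi)))

# Closed form via the multivariate-normal MGF:
# E exp(t . Y) = exp(t . m + 0.5 * t^T Sigma t), here with t = -a * xi
t = -a * xi
F_cf = 1.0 - np.exp(t @ m + 0.5 * t @ Sigma @ t)
```

With 200,000 samples the two values should agree to two or three decimal places, which supports the point that ##F(\vec{\xi})## has a simple explicit formula.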
 
This is what I got so far:
for ##g(\xi)=\mathbb{E}e^{-\xi\cdot Y}## (absorbing the constant ##a## into ##\xi##), which we are now looking to minimise,
$$g(\xi)=\int_{\mathbb{R}^d}e^{-\xi\cdot x}f_Y(x)dx=\frac{1}{(2\pi)^{d/2}|\Sigma|^{1/2}}\int_{\mathbb{R}^d}e^{-\xi\cdot x}e^{-(x-m)^T\Sigma^{-1}(x-m)/2}dx$$
$$\frac{\partial g}{\partial\xi_i}(\xi)=\frac{-1}{(2\pi)^{d/2}|\Sigma|^{1/2}}\int_{\mathbb{R}^d}x_ie^{-\xi\cdot x}e^{-(x-m)^T\Sigma^{-1}(x-m)/2}dx=-\mathbb{E}(Y_ie^{-\xi\cdot Y})$$
$$\frac{\partial^2g}{\partial\xi_i\partial\xi_j}(\xi)=\mathbb{E}(Y_iY_je^{-\xi\cdot Y})$$
Indeed, it doesn't seem to look too bad, but now I am at a slight loss as to how to compute the derivatives.
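The two derivative formulas above can be cross-checked numerically (a sketch assuming NumPy; ##m##, ##\Sigma## and ##\xi## are made-up example values, and the explicit expression for ##g## used for comparison is the standard multivariate-normal moment generating function evaluated at ##-\xi##):

```python
import numpy as np

rng = np.random.default_rng(1)
m = np.array([0.5, 1.0])                  # example mean vector
Sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])            # example covariance matrix
xi = np.array([0.2, 0.1])                 # example point of evaluation

def g(x):
    # explicit formula for E exp(-x . Y), via the multivariate-normal MGF
    return np.exp(-x @ m + 0.5 * x @ Sigma @ x)

# Monte Carlo estimate of the formula dg/dxi_i = -E(Y_i exp(-xi . Y))
Y = rng.multivariate_normal(m, Sigma, size=300_000)
grad_mc = -np.mean(Y * np.exp(-(Y @ xi))[:, None], axis=0)

# central finite differences of the explicit g, for comparison
eps = 1e-6
grad_fd = np.array([(g(xi + eps * e) - g(xi - eps * e)) / (2 * eps)
                    for e in np.eye(2)])
```

The two gradients should agree to Monte Carlo accuracy, consistent with the formula for ##\partial g/\partial\xi_i##.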
 
TaPaKaH said:
Indeed, it doesn't seem to look too bad, but now I am at a slight loss as to how to compute the derivatives.

This is not what I meant: I suggested you derive an explicit formula for ##F(\xi)##, and that means calculating the expectation first, by actually doing the d-dimensional integrations! YES: believe it or not, it is quite easy. You just need to use a number of fundamental facts about multivariate normal and univariate normal random variables, and these are available via a Google search on the keywords 'multivariate normal distribution'.
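For reference, the "fundamental fact" being pointed to here is the moment generating function of a multivariate normal vector (a textbook identity, obtainable by completing the square in the Gaussian integral):
$$\mathbb{E}\,e^{t\cdot Y}=e^{t\cdot m+\frac{1}{2}t^T\Sigma t}\qquad\text{for all }t\in\mathbb{R}^d,$$
so substituting ##t=-\xi## turns ##g(\xi)=\mathbb{E}e^{-\xi\cdot Y}## into an explicit function of ##\xi## that can be differentiated directly.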
 
It might be instructive to first solve this in the case where ##\Sigma## is a diagonal matrix.
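To see why the diagonal case is instructive (a standard computation, not spelled out in the thread): if ##\Sigma=\mathrm{diag}(\sigma_1^2,\ldots,\sigma_d^2)##, the density factorises and the ##d##-dimensional integral splits into a product of one-dimensional Gaussian integrals,
$$g(\xi)=\prod_{i=1}^d\frac{1}{\sqrt{2\pi}\,\sigma_i}\int_{\mathbb{R}}e^{-\xi_i x}e^{-(x-m_i)^2/2\sigma_i^2}\,dx=\prod_{i=1}^d e^{-\xi_i m_i+\frac{1}{2}\xi_i^2\sigma_i^2},$$
each factor being evaluated by completing the square in the exponent.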
 