Maximizing an evolutionary biology equation (vector calculus)

mliuzzolino

Homework Statement



For a Gaussian landscape, the log-fitness change caused by a mutation of size r in chemotype element i is

$$Q_i(r) = -\vec{k} \cdot S \cdot \hat{r}_i\, r - \dfrac{1}{2}\, \hat{r}_i \cdot S \cdot \hat{r}_i\, r^2.$$

To find the largest possible gain in log-fitness achievable by mutating chemotype element i, maximize Q_i(r) with respect to r.


Homework Equations



The solution is:

$$\Theta_i = \dfrac{|\vec{k} \cdot S \cdot \hat{r}_i|^2}{2\, \hat{r}_i \cdot S \cdot \hat{r}_i}$$

The Attempt at a Solution



$$Q_i'(r) = -\vec{k} \cdot S \cdot \hat{r}_i - \hat{r}_i \cdot S \cdot \hat{r}_i\, r = 0$$

$$\hat{r}_i \cdot S \cdot \hat{r}_i\, r = -\vec{k} \cdot S \cdot \hat{r}_i$$

$$r = \dfrac{-\vec{k} \cdot S \cdot \hat{r}_i}{\hat{r}_i \cdot S \cdot \hat{r}_i}$$

It's been forever since I've dealt with vector calculus, so I know that I'm approaching this entirely the wrong way. Any pointers in the right direction will be greatly appreciated!
 
mliuzzolino said:

Homework Statement



For a Gaussian landscape, the log-fitness change caused by a mutation of size r in chemotype element i is

$$Q_i(r) = -\vec{k} \cdot S \cdot \hat{r}_i\, r - \dfrac{1}{2}\, \hat{r}_i \cdot S \cdot \hat{r}_i\, r^2.$$
It looks like you put a great deal of effort into formatting the equation above, but I'm having a hard time understanding what it says. If you "dot" two vectors, you get a scalar, but you can't dot that scalar with another vector. In other words, an expression such as ##\vec{u} \cdot \vec{v} \cdot \vec{w}## doesn't make sense.

Also, is S a scalar? How you wrote it suggests that it is.
mliuzzolino said:
To find the largest possible gain in log-fitness achievable by mutating chemotype element i, maximize Q_i(r) with respect to r.

Homework Equations



The solution is:

$$\Theta_i = \dfrac{|\vec{k} \cdot S \cdot \hat{r}_i|^2}{2\, \hat{r}_i \cdot S \cdot \hat{r}_i}$$

The Attempt at a Solution



$$Q_i'(r) = -\vec{k} \cdot S \cdot \hat{r}_i - \hat{r}_i \cdot S \cdot \hat{r}_i\, r = 0$$

$$\hat{r}_i \cdot S \cdot \hat{r}_i\, r = -\vec{k} \cdot S \cdot \hat{r}_i$$

$$r = \dfrac{-\vec{k} \cdot S \cdot \hat{r}_i}{\hat{r}_i \cdot S \cdot \hat{r}_i}$$

It's been forever since I've dealt with vector calculus, so I know that I'm approaching this entirely the wrong way. Any pointers in the right direction will be greatly appreciated!
 
Sorry! I forgot to state that ##S## is a symmetric positive definite matrix. I believe that the operation will just be taking the dot product of ##\vec{k}## and ##S##, and then using that as the scalar weight on ##\vec{k}##.

This is for a research project and I'm just going through old literature trying to rederive the equations so that I can better understand what's going on, and I kind of mindlessly transcribed it exactly as it was in the paper (with two dots). I'm not sure of the rationale behind putting the two dots in the paper, but it's there nonetheless.

Hope this helps explain it better...
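
A useful consequence of ##S## being symmetric positive definite (worth making explicit): the coefficient of the quadratic term is strictly positive,

$$\hat{r}_i \cdot S \cdot \hat{r}_i > 0,$$

so ##Q_i(r)## is a downward-opening parabola in ##r## and has a unique, finite maximum.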
 
mliuzzolino said:

Homework Statement



For a Gaussian landscape, the log-fitness change caused by a mutation of size r in chemotype element i is

$$Q_i(r) = -\vec{k} \cdot S \cdot \hat{r}_i\, r - \dfrac{1}{2}\, \hat{r}_i \cdot S \cdot \hat{r}_i\, r^2.$$

To find the largest possible gain in log-fitness achievable by mutating chemotype element i, maximize Q_i(r) with respect to r.


Homework Equations



The solution is:

$$\Theta_i = \dfrac{|\vec{k} \cdot S \cdot \hat{r}_i|^2}{2\, \hat{r}_i \cdot S \cdot \hat{r}_i}$$

The Attempt at a Solution



$$Q_i'(r) = -\vec{k} \cdot S \cdot \hat{r}_i - \hat{r}_i \cdot S \cdot \hat{r}_i\, r = 0$$

$$\hat{r}_i \cdot S \cdot \hat{r}_i\, r = -\vec{k} \cdot S \cdot \hat{r}_i$$

$$r = \dfrac{-\vec{k} \cdot S \cdot \hat{r}_i}{\hat{r}_i \cdot S \cdot \hat{r}_i}$$

It's been forever since I've dealt with vector calculus, so I know that I'm approaching this entirely the wrong way. Any pointers in the right direction will be greatly appreciated!

If I understand correctly, you have an expression of the form
$$Q(r) = -a r - \frac{1}{2} b r^2, \quad \text{where } a = \vec{k} \cdot S \hat{r}_i \text{ and } b = \hat{r}_i \cdot S \hat{r}_i,$$

with ##a, b## being constants, independent of ##r##. Maximizing ##Q(r)## is a simple exercise in univariate calculus, and you did it correctly. Why do you think you have made an error?
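
Spelling that reduction out in the same shorthand:

$$Q'(r) = -a - b\, r = 0 \;\Longrightarrow\; r^{*} = -\frac{a}{b} = \frac{-\vec{k} \cdot S \hat{r}_i}{\hat{r}_i \cdot S \hat{r}_i},$$

which is exactly the expression you arrived at.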
 
Unfortunately, my result does not seem to match the solution arrived at in the paper, which is given under the Homework Equations above.
 
Mark44 said:
It looks like you put a great deal of effort into formatting the equation above, but I'm having a hard time understanding what it says. If you "dot" two vectors, you get a scalar, but you can't dot that scalar with another vector. In other words, an expression such as ##\vec{u} \cdot \vec{v} \cdot \vec{w}## doesn't make sense.

Also, is S a scalar? How you wrote it suggests that it is.

My mistake in the previous reply to you. The expression should be read as ##\vec{k} \cdot S \cdot \vec{k}^T##. Let ##\vec{k}## be a ##1 \times N## matrix and ##S## an ##N \times N## matrix. The product of ##S## and ##\vec{k}^T## is an ##N \times 1## matrix, which is then dotted with the ##1 \times N## matrix ##\vec{k}##, resulting in a scalar.
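
If it helps to see the bookkeeping concretely, here is a minimal NumPy sketch with made-up numbers (not taken from the paper), just to show that both products collapse to ordinary scalars, so ##Q_i(r)## really is a scalar function of ##r##:

import numpy as np

# Toy example (illustrative numbers only): N = 3 chemotype elements.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
S = A @ A.T + 3.0 * np.eye(3)    # symmetric positive definite N x N matrix
k = rng.normal(size=3)           # the vector k, treated as 1 x N
e_i = np.array([1.0, 0.0, 0.0])  # unit vector r_hat_i along chemotype element i

a = k @ S @ e_i      # (1 x N)(N x N)(N x 1) collapses to a scalar: k . S . r_hat_i
b = e_i @ S @ e_i    # likewise a scalar, and b > 0 because S is positive definite

def Q(r):
    # Log-fitness change for a mutation of size r along element i.
    return -a * r - 0.5 * b * r**2

print(a, b)          # both are plain numbers, so Q(r) is just a quadratic in r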
 
mliuzzolino said:
Unfortunately, my result does not seem to match the solution arrived at in the paper, which is given under the Homework Equations above.

I think the reason for the mismatch is that you are not computing the same thing the paper is computing. You calculated the best value of ##r##; the paper calculated the maximum value of ##Q##. Can you see now what you need to do?

BTW: when replying, always use the "quote" button; otherwise nobody can figure out which message you are responding to.
 
Ray Vickson said:
I think the reason for the mismatch is that you are not computing the same thing the paper is computing. You calculated the best value of ##r##; the paper calculated the maximum value of ##Q##. Can you see now what you need to do?

BTW: when replying, always use the "quote" button; otherwise nobody can figure out which message you are responding to.

Ah. I have it figured out now. I can't believe I overlooked such an elementary concept...

Thank you Ray!
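
For completeness, the step being pointed to is presumably just substituting the maximizer back into ##Q_i##, with the same shorthand ##a = \vec{k} \cdot S \hat{r}_i## and ##b = \hat{r}_i \cdot S \hat{r}_i##:

$$Q_i(r^{*}) = -a\left(-\frac{a}{b}\right) - \frac{1}{2}\, b \left(\frac{a}{b}\right)^{2} = \frac{a^{2}}{b} - \frac{a^{2}}{2b} = \frac{a^{2}}{2b} = \frac{|\vec{k} \cdot S \cdot \hat{r}_i|^{2}}{2\, \hat{r}_i \cdot S \cdot \hat{r}_i} = \Theta_i,$$

which matches the result quoted from the paper.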
 