Maximizing Summation of F_t*R: Partial Derivative w.r.t R

Patrick94
I have the summation
$$\sum_{i=0}^{n}\frac{F_t\,R}{(1+d)^i},$$
where ##F_t = F_{t-1}\bigl[R\,I_p + (1-R)\,I_r\bigr]##.

(##F_{t-1}## refers to the value of ##F## in the previous period.)

I want to find the value of R that maximizes the summation. So must I take the partial derivative wrt R? How do I do this?

Thanks
 
So wait, ##F_t## and ##R## don't depend on ##i##? What are ##I_p## and ##I_r##?

If those things don't depend on ##i##, you can pull them out of the sum.
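For instance, if ##F_{t-1}##, ##I_p##, ##I_r## and ##d## are all constants that don't depend on ##i## (an assumption, since the thread doesn't say), everything except the ##(1+d)^{-i}## factor comes out of the sum and what's left is a finite geometric series:

$$\sum_{i=0}^{n}\frac{F_t R}{(1+d)^i} = F_t R\sum_{i=0}^{n}\frac{1}{(1+d)^i} = F_t R\,\frac{1-(1+d)^{-(n+1)}}{1-(1+d)^{-1}} \equiv F_t R\,S,$$

where ##S## is just a fixed positive number once ##n## and ##d## are given.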

Regardless, you can treat the entire thing as a function of ##R## (write ##f(R) = \dots##, with the summation on the right-hand side).

You can imagine that as you change the value of ##R##, the value of ##f## changes as well. It's hard to get an idea of what this function would look like if you plotted it, but the point is to find ##\frac{\partial f}{\partial R}## and set it equal to zero.

##F_t## is a constant that depends on your choice of ##R## and on the already-defined constant ##F_{t-1}##. Since you are taking the derivative w.r.t. ##R##, you first want to replace ##F_t## in the sum with its definition. You may have to do something similar with ##I_p## and ##I_r##, depending on how they are defined. Once you have the entire thing expressed explicitly in terms of ##R## (i.e. all values that depend on ##R## are written out in full), you can take ##\frac{\partial}{\partial R}##. It's important to remember that the differentiation operator ##\frac{\partial}{\partial R}## can be moved inside the sum, so you can take the derivative of each term of the sum independently.
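As a sketch under the same assumption (constant ##F_{t-1}##, ##I_p##, ##I_r##, ##d##), substituting the definition of ##F_t## and writing ##S## for the geometric factor above gives a function of ##R## alone:

$$f(R) = F_{t-1}\bigl[R\,I_p + (1-R)\,I_r\bigr]R\,S = F_{t-1}S\bigl[(I_p - I_r)R^2 + I_r R\bigr],$$

so

$$\frac{\partial f}{\partial R} = F_{t-1}S\bigl[2(I_p - I_r)R + I_r\bigr].$$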

Once you have taken the derivative w.r.t. ##R## (i.e. you've found ##\frac{\partial f}{\partial R}##), set it equal to zero and solve, as in a normal optimization problem. Hopefully there is only one solution to the resulting equation, giving you the desired value of ##R##. It is possible, however, that there are multiple places where ##\frac{\partial f}{\partial R}## equals zero; in that case you have to do a little investigating to determine which value of ##R## really maximizes the sum.
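Continuing the sketch above, setting that derivative to zero gives a single candidate,

$$R^{*} = \frac{I_r}{2(I_r - I_p)},$$

and the second derivative ##\frac{\partial^2 f}{\partial R^2} = 2F_{t-1}S(I_p - I_r)## is negative (so ##R^{*}## is a maximum) only when ##I_p < I_r## and ##F_{t-1}S > 0##. Otherwise ##f## is monotone or convex in ##R##, and the maximum sits at an endpoint of whatever range ##R## is allowed to take (e.g. ##0 \le R \le 1## if ##R## is a fraction).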

I hope I am understanding your question right and I hope this helps in some way. Good luck
 