Projectile motion with ##N## bounces on the ground

AI Thread Summary
The discussion focuses on the physics of a ball's projectile motion with multiple bounces, detailing how its horizontal and vertical velocities change upon impact with the wall and the ground. The ball follows a sequence of parabolic arcs: after each ground bounce its vertical speed is reduced by the factor k, while its horizontal speed stays constant in magnitude. The total time the ball is in the air is calculated from the initial vertical speed, k and the number of bounces N. For infinitely many bounces, the total time converges to a finite limit and k tends to √2 - 1. The conversation also touches on the implications of k being equal to 1, a perfect rebound.
Meden Agan
Homework Statement
During the school holidays, we kick a football against the wall. The place of the kick is the point marked ##\text{A}## in the diagram. The ball hits the wall exactly perpendicularly at point ##\text{B}##, and after the bounce it lands at point ##\text{C}##. After bouncing off the ground, the ball bounces back to its original starting point. A collision with the wall or the ground can be modelled as follows: the velocity component parallel to the surface does not change, while the one perpendicular to it changes by a factor of ##(-k)##. Generalize the above case: the ball bounces exactly ##N## times on the ground before returning to its starting position.

a) What is the value of ##k## for a given ##N##?
b) How long is the ball in the air?

Examine in detail the case ##N \to \infty##.
Relevant Equations
Projectile motion equations, some Calculus.
[Figure: trajectory diagram showing points A, B and C]


Imagine we kick the ball from point ##\text{A}## with horizontal speed ##u_x^{\text{initial, A}} = v \cos \alpha## and vertical speed ##u_y^{\text{initial, A}} = v \sin \alpha##.
The gravitational acceleration is ##\vec g##, the x-axis points towards the wall, the y-axis points upwards.
The first flight ##\text A \to \text B## is a parabolic arc ending when the ball touches the wall at point ##\text B##, at the very highest point of the arc. At that instant, the vertical speed is ##u_y^{\text{initial, B}}=0##; the horizontal speed is still ##u_x^{\text{initial, B}}= u_x^{\text{initial, A}}##. The height of the top point ##\text B## is ##H = \dfrac{\left(u_y^{\text{initial, A}}\right)^2}{2g}##; the ascent time is ##t_{\text{ascent}}=\dfrac{u_y^{\text{initial, A}}}{g}##. Then, the horizontal distance between wall and starting point is
$$\begin{aligned}
D_{\text{AB}} &= u_x^{\text{initial, A}} \, t_{\text{ascent}} \\
&= u_x^{\text{initial, A}} \frac{u_y^{\text{initial, A}}}{g}.
\end{aligned}$$

After the ball hits the wall, what happens is described by the problem: the velocity component parallel to the wall doesn't change, while the one perpendicular to the wall changes by a factor of ##(-k)##. The component parallel to the wall is the vertical one, so the vertical velocity is unchanged; its magnitude is ##u_y^{\text{final, B}} = 0##. The component perpendicular to the wall is the horizontal one, so the horizontal velocity is reversed and reduced: from ##u_x^{\text{initial, B}}## it becomes ##u_x^{\text{final, B}} = -k \, u_x^{\text{initial, B}}##, with ##0 < k < 1##. The ball falls from height ##H = \dfrac{\left(u_y^{\text{initial, A}}\right)^2}{2g}## and again takes time ##t_{\text{descent}}= t_{\text{ascent}}=\dfrac{u_y^{\text{initial, A}}}{g}## to drop. During the fall from ##\text{B}## to ##\text{C}##, the ball moves horizontally by a distance
$$\begin{aligned}
D_{\text{BC}} &= k \, u_x^{\text{initial, A}} \, t_{\text{descent}} \\
&= k \, \underbrace{u_x^{\text{initial, A}} \frac{u_y^{\text{initial, A}}}{g}}_{= \, D_{\text{AB}}}\\
&= k \, D_{\text{AB}}.
\end{aligned}$$
The ball lands at point ##\text C##, which is at a horizontal distance from ##\text A## equal to
$$\begin{aligned}
D_{\text{AC}} &= D_{\text{AB}} - D_{\text{BC}} \\
&= D_{\text{AB}} - k \, D_{\text{AB}} \\
&= D_{\text{AB}} \, (1-k).
\end{aligned}$$
Then, $$D_{\text{AC}} = D_{\text{AB}} \, (1-k).$$

After the ball hits the ground, the same rule applies: the velocity component parallel to the ground doesn't change, while the one perpendicular to it changes by a factor of ##(-k)##. The component parallel to the ground is the horizontal one, so the horizontal velocity is unchanged: ##u_x^{\text{initial, C}} = u_x^{\text{final, B}} = - k \, u_x^{\text{initial, A}}##. The component perpendicular to the ground is the vertical one; just before landing at ##\text C## it is ##-u_y^{\text{initial, A}}## (the ball has fallen from the apex height ##H##), so after the bounce it is reversed and reduced to ##u_y^{\text{initial, C}} = k \, u_y^{\text{initial, A}}##, with ##0 < k < 1##.
I'm not sure about that.


a) From here, repeated parabolic trajectories start. After each bounce on the ground, the vertical component of the velocity is multiplied by ##k##, while the horizontal component remains constant in magnitude.
If we number such paths with ##j=0, 1, 2, \ldots, N-1## starting from point ##\text C##:

1) time of flight of the ball is
$$\begin{cases}
t_0 = 2 \dfrac{k \, u_y^{\text{initial, A}}}{g} \quad &\text{for the first parabola} \,\, j=0 \,\, \text{from point C to point D}\\
t_1 = 2 \dfrac{k \cdot (k \, u_y^{\text{initial, A}})}{g} = 2 \dfrac{k^2 \, u_y^{\text{initial, A}}}{g} \quad &\text{for the second parabola} \,\, j=1 \,\, \text{from point D to point E} \\
t_2 = 2 \dfrac{k \cdot (k \cdot k \, u_y^{\text{initial, A}})}{g} = 2 \dfrac{k^3 \, u_y^{\text{initial, A}}}{g} \quad &\text{for the third parabola} \,\, j=2 \,\, \text{from point E to point F} \\
\qquad \qquad \vdots \\
t_j = 2 \dfrac{k \cdot (\underbrace{k \cdot k \cdot \, \cdots \, \cdot k}_{j \, \text{times}} \, \, u_y^{\text{initial, A}})}{g} = 2 \dfrac{k^{j+1} \, u_y^{\text{initial, A}}}{g} \quad &\text{for the last parabola} \,\, j \,\, \text{from last point of impact to point A}.
\end{cases}$$
Then $$t_j = 2 \frac{k^{j+1} \, u_y^{\text{initial, A}}}{g} \qquad \text{for} \quad j=0, 1, 2, \ldots, N-1.$$

2) the horizontal displacement of the ball is
$$\begin{aligned}
x_j &= k \, u_x^{\text{initial, A}} \cdot t_j \\
&= k \, u_x^{\text{initial, A}} \cdot 2 \frac{k^{j+1} \, u_y^{\text{initial, A}}}{g} \\
&= 2 \, k^{j+2} \, \underbrace{u_x^{\text{initial, A}} \frac{u_y^{\text{initial, A}}}{g}}_{= \, D_{\text{AB}}} \\
&= 2 \, k^{j+2} \, D_{\text{AB}}.
\end{aligned}$$
Then $$x_j = 2 \, k^{j+2} \, D_{\text{AB}} \qquad \text{for} \quad j=0, 1, 2, \ldots, N-1.$$
The sum of horizontal displacements ##x_j## from ##j=0## to ##j=N-1## must then be equal to ##D_{\text{AC}}##. Thus:
$$\begin{aligned}
\sum_{j=0}^{N-1}x_j = D_{\text{AC}} &\implies \sum_{j=0}^{N-1} 2 \,k^{j+2} \, D_{\text{AB}} = D_{\text{AB}} \, (1-k) \\
&\iff D_{\text{AB}}\sum_{j=0}^{N-1} 2 \,k^{j+2} \, = D_{\text{AB}} \, (1-k) \\
&\iff \cancel{D_{\text{AB}}}\sum_{j=0}^{N-1} 2 \,k^{j+2} \, = \cancel{D_{\text{AB}}} \, (1-k) \\
&\iff \sum_{j=0}^{N-1} 2 \,k^{j+2} = 1 - k \\
&\iff \sum_{j=0}^{N-1} 2 \,k^{2} \, k^j = 1- k \\
&\iff 2 \, k^2 \sum_{j=0}^{N-1} k^j = 1 - k \\
&\implies 2 \, k^2 \left(\frac{1-k^N}{1-k}\right) = 1 - k \\
&\iff 2 \, k^2 \frac{1-k^N}{1-k} = 1-k \\
&\iff 2 \, k^2 (1-k^N) = (1-k)^2 \qquad \qquad \text{for} \quad k \ne 1, \, N \ne 0 \\
&\iff 2 \, k^2 - 2 \, k^{N+2} = k^2 - 2 \, k + 1 \\
&\iff k^2 + 2 \, k - 1 = 2 \, k^{N+2}.
\end{aligned}$$
Then: $$k^2 + 2 \, k - 1 = 2 \, k^{N+2} \qquad \text{for} \quad k \ne 1, \, N \ne 0. \tag{*}$$

For ##N = 1##, we have:
$$\begin{aligned}
k^2 + 2 \, k - 1 = 2 \, k^{1+2} &\iff k^2 + 2 \, k - 1 = 2 \, k^{3} \\
&\iff 2 \, k^3 - k^2 - 2k + 1 = 0 \\
&\iff 2 \, k \, (k^2-1) - 1 (k^2-1) =0 \\
&\iff (2 \, k -1)(k^2-1) = 0 \\
&\iff (2 \, k -1)\ \underbrace{\cancel{(k-1)}}_{\text{since} \, k \, \ne \, 1} \, \,\underbrace{\cancel{(k+1)}}_{\text{since} \, 0 \, < \, k \, < \, 1}=0 \\
&\iff 2 \, k - 1 = 0 \\
&\iff k= 1/2.
\end{aligned}$$
So, ##k = 1/2## for ##N=1##.

For ##N \geqslant 2##, I really don't know how to solve Equation ##(*)##.
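Equation ##(*)## has no obvious closed form for general ##N##, but it can be solved numerically. Here is a minimal sketch of mine (the helper name `k_for_N` is my own, not from the thread): since ##k = 1## always satisfies ##(*)##, the bisection bracket stops just below 1, where ##f(k) = k^2 + 2k - 1 - 2k^{N+2}## is positive, while ##f(0^+) = -1 < 0##.

```python
import math

def k_for_N(N, tol=1e-12):
    """Bisect f(k) = k^2 + 2k - 1 - 2k^(N+2) on (0, 1).

    f(0+) = -1 < 0, while f is positive just below the spurious
    root k = 1, so the sign change brackets the physical root.
    """
    f = lambda k: k**2 + 2*k - 1 - 2*k**(N + 2)
    lo, hi = 1e-6, 0.9999
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(k_for_N(1))                 # ~0.5, matching the factorised N = 1 case
for N in (2, 5, 20, 100):
    print(N, k_for_N(N))          # the roots decrease with N
print(k_for_N(100) - (math.sqrt(2) - 1))   # already very close to the limit
```

The printed values confirm ##k = 1/2## for ##N = 1## and suggest monotone convergence towards ##\sqrt 2 - 1 \approx 0.4142##.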

For ##N \to \infty##, we have
$$\begin{aligned}
\lim_{N \to \infty} \left(k^2 + 2 \, k - 1 \right)= \lim_{N \to \infty}2 \, k^{N+2} &\iff k^2 + 2 \, k -1 = 2 \, k^2 \underbrace{\lim_{N \to \infty} k^{N}}_{= \, 0 \, \text{since} \, 0 \, < \, k \, < \, 1} \\
&\iff k^2 + 2 \, k -1 = 2 \, k^2 \cdot 0 \\
&\iff k^2 + 2 \, k -1 = 0 \\
&\iff \underbrace{\cancel{k = - \sqrt 2 - 1}}_{\text{because} \, 0 \, < \, k \, < \, 1} \quad \vee \quad k = \sqrt 2 - 1 \\
&\iff k = \sqrt 2 - 1.
\end{aligned}$$
So, $$\boxed{k= \sqrt 2 - 1 \qquad \text{for} \quad N \to \infty}.$$

b) The total time the ball is in the air equals the time for the first two flight segments, ##\text A \to \text B## and ##\text B \to \text C## (each taking ##t_{\text{ascent}} = t_{\text{descent}} = \dfrac{u_y^{\text{initial, A}}}{g}##), plus the sum of the flight times of the next ##N## parabolas. Then:
$$\begin{aligned}
T(N) &= t_{\text{ascent}} + t_{\text{descent}} + \sum_{j=0}^{N-1} t_j\\
&= \frac{u_y^{\text{initial, A}}}{g} + \frac{u_y^{\text{initial, A}}}{g} + \sum_{j=0}^{N-1} 2 \frac{k^{j+1} \, u_y^{\text{initial, A}}}{g} \\
&= 2 \frac{u_y^{\text{initial, A}}}{g} + 2 \frac{u_y^{\text{initial, A}}}{g} \sum_{j=0}^{N-1} k^{j+1} \\
&= 2 \frac{u_y^{\text{initial, A}}}{g} + 2 \frac{u_y^{\text{initial, A}}}{g} \sum_{j=0}^{N-1} k^{j} \cdot k \\
&= 2 \frac{u_y^{\text{initial, A}}}{g} + 2 \frac{u_y^{\text{initial, A}}}{g} k\sum_{j=0}^{N-1} k^{j} \\
&= 2 \frac{u_y^{\text{initial, A}}}{g} + 2 \frac{u_y^{\text{initial, A}}}{g} k \frac{1-k^N}{1-k} \\
&= 2 \frac{u_y^{\text{initial, A}}}{g} \left(1+ k \frac{1-k^N}{1-k}\right) \\
&= 2 \frac{u_y^{\text{initial, A}}}{g} \left[\frac{1-k+k \left(1-k^N \right)}{1-k}\right] \\
&= 2 \frac{u_y^{\text{initial, A}}}{g} \left[\frac{1-k+k-k^{N+1}}{1-k}\right] \\
&= 2 \frac{u_y^{\text{initial, A}}}{g} \left[\frac{1 \cancel{-k}\cancel{+k}-k^{N+1}}{1-k}\right] \\
&= 2 \frac{u_y^{\text{initial, A}}}{g} \left(\frac{1-k^{N+1}}{1-k}\right). \\
\end{aligned}$$
Finally, $$\boxed{T(N) = 2 \frac{u_y^{\text{initial, A}}}{g} \left(\frac{1-k^{N+1}}{1-k}\right)}.$$

For ##N \to \infty##, we have
\begin{aligned}
\lim_{N \to \infty} T(N) &= \lim_{N \to \infty}2 \frac{u_y^{\text{initial, A}}}{g} \left(\frac{1-k^{N+1}}{1-k}\right)\\
&= 2 \frac{u_y^{\text{initial, A}}}{g} \lim_{N \to \infty}\left(\frac{1-k^{N+1}}{1-k}\right) \\
&= 2 \frac{u_y^{\text{initial, A}}}{g} \frac{1}{1-k}\lim_{N \to \infty} \left(1- k^{N+1}\right) \\
&= 2 \frac{u_y^{\text{initial, A}}}{g} \frac{1}{1-k} \left(\lim_{N \to \infty} 1 - \lim_{N \to \infty} k^{N+1}\right) \\
&= 2 \frac{u_y^{\text{initial, A}}}{g} \frac{1}{1-k} \left(1 - \lim_{N \to \infty} k^{N+1}\right) \\
&= 2 \frac{u_y^{\text{initial, A}}}{g} \frac{1}{1-k} \left(1 - \lim_{N \to \infty} k \cdot k^{N} \right) \\
&= 2 \frac{u_y^{\text{initial, A}}}{g} \frac{1}{1-k} \left(1 - k \underbrace{\lim_{N \to \infty} k^{N}}_{= \, 0 \, \text{since} \, 0 \, < \, k \, < \, 1} \right) \\
&= 2 \frac{u_y^{\text{initial, A}}}{g} \frac{1}{1-k} \cancelto{1}{\left(1 - k \cdot 0 \right)} \\
&= 2 \frac{u_y^{\text{initial, A}}}{g} \frac{1}{1-k}.
\end{aligned}
So, $$\lim_{N \to \infty} T(N) = 2 \frac{u_y^{\text{initial, A}}}{g} \frac{1}{1-k}. \tag{**}$$
Since ##k = \sqrt 2 - 1## for ##N \to \infty##, plugging that value into ##(**)## yields:
$$\begin{aligned}
\lim_{N \to \infty} T(N) &= 2 \frac{u_y^{\text{initial, A}}}{g} \frac{1}{1- \left(\sqrt 2 -1 \right)} \\
&= 2 \frac{u_y^{\text{initial, A}}}{g} \frac{1}{1- \sqrt 2 + 1} \\
&= 2 \frac{u_y^{\text{initial, A}}}{g} \frac{1}{2- \sqrt 2 } \\
&= 2 \frac{2+\sqrt 2}{\left(2- \sqrt 2 \right) \left(2+ \sqrt 2 \right)} \frac{u_y^{\text{initial, A}}}{g} \\
&= 2 \frac{2+\sqrt 2}{4-2} \frac{u_y^{\text{initial, A}}}{g} \\
&= 2 \frac{2+\sqrt 2}{2} \frac{u_y^{\text{initial, A}}}{g} \\
&= \cancel{2} \frac{2+\sqrt 2}{\bcancel{2}} \frac{u_y^{\text{initial, A}}}{g} \\
&= \left(2+\sqrt 2\right) \frac{u_y^{\text{initial, A}}}{g}.
\end{aligned}$$
Finally, $$\boxed{T(N) = \left(2+\sqrt 2\right) \frac{u_y^{\text{initial, A}}}{g} \qquad \text{for} \quad N \to \infty}.$$
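As a sanity check on the closed form for ##T(N)## (a sketch of mine, not part of the original solution): simulate the motion event by event with the ##(-k)## reflection rule and compare. The function name `simulate` and the values ##u_x = 3##, ##u_y = 4##, ##g = 9.8## are arbitrary choices for illustration.

```python
import math

def simulate(k, N, ux=3.0, uy=4.0, g=9.8):
    """Event-by-event simulation of A -> B -> C -> ... with N ground bounces.

    Returns (total airtime, final horizontal position). The reflection
    rule keeps the surface-parallel velocity component and multiplies
    the perpendicular one by -k.
    """
    x, vx, vy, T = 0.0, ux, uy, 0.0
    t = vy / g                  # A -> B: rise to the apex at the wall
    x += vx * t; T += t
    vx = -k * vx                # wall bounce (vertical speed is 0 there)
    t = uy / g                  # B -> C: fall back to the ground
    x += vx * t; T += t
    vy = -uy                    # vertical velocity on reaching the ground
    for _ in range(N):          # N ground bounces, each a full parabola
        vy = -k * vy            # bounce: reverse and scale vertical speed
        t = 2 * vy / g
        x += vx * t; T += t
        vy = -vy                # landing velocity before the next bounce
    return T, x

# k = 1/2 solves (*) for N = 1: the ball should return to x = 0,
# and T should match 2 (u_y / g) (1 - k^(N+1)) / (1 - k).
T, x = simulate(0.5, 1)
print(T, x, 2 * (4.0 / 9.8) * (1 - 0.5**2) / (1 - 0.5))
# Large N with k = sqrt(2) - 1: T approaches (2 + sqrt(2)) u_y / g.
print(simulate(math.sqrt(2) - 1, 500)[0], (2 + math.sqrt(2)) * 4.0 / 9.8)
```

For ##N = 1##, ##k = 1/2## the simulated ball lands back at ##x = 0## with total time agreeing with the boxed formula, and for large ##N## with ##k = \sqrt 2 - 1## the airtime matches ##\left(2+\sqrt 2\right) u_y^{\text{initial, A}}/g##.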
 
What is it about the system that you wish to know? The x component is not interesting unless the ball is allowed to spin; otherwise the problem separates.
Also, k is given by the materials. What if k = 1 (perfect rebound)? Then your solution makes no sense, yes? The real (idealized) ball should just bounce forever. Incidentally, k is usually defined in terms of the energy and is called the collision "coefficient of restitution", as I recall. So your solution is suspect.
 
@Meden Agan, Your work looks excellent, and I think your results are correct.

Like you, I don't see how to solve your equation (*) analytically for ##k## for general values of ##N##. But this equation does give an implicit equation for ##k## that can be solved numerically.
 
A couple of thoughts…

You don’t need to exclude N=0, k=1. That only seemed to be invalid because the closed form expression for the geometric sum has a divisor k-1. But that later cancelled and the resulting expression is correct for N=0, k=1.

Your method for finding the limit as ##N\rightarrow \infty## is invalid, however. You overlook that k is a function of N. You cannot take the limit of ##f(k)g(k, N)## as being ##f(k)\lim g(k,N)## treating k as a constant.
See if you can do it rigorously.
hutchphd said:
What if k = 1 (perfect rebound)? Then your solution makes no sense, yes? The real (idealized) ball should just bounce forever. Incidentally, k is usually defined in terms of the energy and is called the collision "coefficient of restitution", as I recall. So your solution is suspect.
@Meden Agan specifically excluded that case, so the solution does make sense. Indeed, it is not hard to tweak it (mostly in the wording) to make it valid even in that case. It doesn’t bounce forever because the process is defined to end when it reaches its starting position, and anyway k=1 is not special in that regard. Take away that termination definition and it will bounce forever for any k>0.

Also, the coefficient of restitution is defined as a ratio of velocity scalars, not of energies.
 
haruspex said:
You don’t need to exclude N=0, k=1. That only seemed to be invalid because the closed form expression for the geometric sum has a divisor k-1. But that later cancelled and the resulting expression is correct for N=0, k=1.

Your method for finding the limit as ##N\rightarrow \infty## is invalid, however. You overlook that k is a function of N. You cannot take the limit of ##f(k)g(k, N)## as being ##f(k)\lim g(k,N)## treating k as a constant.
See if you can do it rigorously.

@Meden Agan specifically excluded that case, so the solution does make sense. Indeed, it is not hard to tweak it (mostly in the wording) to make it valid even in that case. It doesn’t bounce forever because the process is defined to end when it reaches its starting position, and anyway k=1 is not special in that regard. Take away that termination definition and it will bounce forever for any k>0.

Also, the coefficient of restitution is defined as a ratio of velocity scalars, not of energies.
Mhm, let's see if I have it right.
Consider the equation ##2 \, k^2 \, \dfrac{1-k^N}{1-k}=1-k##. We have ##1-k## as denominator. Hence, it is quite logical to exclude ##k=1## because it makes the denominator zero; and with ##N=0## the numerator ##1-k^N## also vanishes, so the pair ##k=1, \, N=0## gives the LHS the indeterminate form ##\dfrac{0}{0}##.
The final expression, however, is $$k^2+2 \, k-1=2 \, k^{N+2}. \tag{*}$$ Here there is no need to exclude ##k=1, \, N=0##. By plugging ##k=1, \, N=0## into ##(*)##, we obtain equality
$$\begin{aligned}
1^2 + 2 \cdot 1 - 1= 2 \cdot 1^{0+2} &\iff 1 + 2 -1 = 2 \cdot 1 \\
&\iff 2 = 2,
\end{aligned}$$
which is obviously true. So, ##(*)## is also valid for ##k=1, \, N=0##.
Correct?
haruspex said:
Your method for finding the limit as ##N\rightarrow \infty## is invalid, however. You overlook that k is a function of N. You cannot take the limit of ##f(k)g(k, N)## as being ##f(k)\lim g(k,N)## treating k as a constant.
See if you can do it rigorously.
Mhm, you're quite right. Any hints on how to do it rigorously?
I have no idea how to approach the limit of ##f(k) \, g(k,N)## for ##N \to \infty## treating ##k## as a function of ##N##.
 
Meden Agan said:
##N=0## results in the LHS giving ##\dfrac{0}{0}##, which is an indeterminate form.
The final expression, however, is $$k^2+2 \, k-1=2 \, k^{N+2}. \tag{*}$$ Here there is no need to exclude ##k=1, \, N=0##. By plugging ##k=1, \, N=0## into ##(*)##, we obtain equality
$$\begin{aligned}
1^2 + 2 \cdot 1 - 1= 2 \cdot 1^{0+2} &\iff 1 + 2 -1 = 2 \cdot 1 \\
&\iff 2 = 2,
\end{aligned}$$
which is obviously true. So, ##(*)## is also valid for ##k=1, \, N=0##.
Correct?
Yes.
Meden Agan said:
Mhm, you're quite right. Any hints on how to do it rigorously?
I have no idea how to approach the limit of ##f(k) \, g(k,N)## for ##N \to \infty## treating ##k## as a function of ##N##.
Write ##k=1+\epsilon##. What can you do with ##(1+\epsilon)^N## for small ##\epsilon##? You will discover how many terms you need to keep.
 
haruspex said:
Write ##k=1+\epsilon##. What can you do with ##(1+\epsilon)^N## for small ##\epsilon##? You will discover how many terms you need to keep.
Binomial expansion of ##\left(1+ \varepsilon\right)^N##? We would have $$\left(1+ \varepsilon \right)^N = 1+ N \, \varepsilon + \frac{N \, (N-1)}{2!} \varepsilon^2 + \ldots \, .$$ Yes?
 
Meden Agan said:
Binomial expansion of ##\left(1+ \varepsilon\right)^N##? We would have $$\left(1+ \varepsilon \right)^N = 1+ N \, \varepsilon + \frac{N \, (N-1)}{2!} \varepsilon^2 + \ldots \, .$$ Yes?
Yes, or via the ##e^{\epsilon N}## approximation. Either works.
 
haruspex said:
Yes, or via the ##e^{\epsilon N}## approximation. Either works.
OK.

We know that $$k^2 + 2 \, k - 1 = 2 \, k^{N+2}. \tag{*}$$
If we write ##k = 1 + \varepsilon##, under the first-order approximation ##(1+ \varepsilon)^N \approx 1 + N \, \varepsilon## (binomial expansion), ##(*)## becomes:
$$\begin{aligned}
(1+ \varepsilon)^2 + 2 \, (1+\varepsilon) - 1 = 2 \, (1+\varepsilon)^{N+2} &\iff 1 + 2 \, \varepsilon + \varepsilon^2 + 2 + 2 \, \varepsilon - 1 = 2 \, (1+\varepsilon)^2 \, (1+\varepsilon)^N \\
&\iff \cancel{1} + 2 \, \varepsilon + \varepsilon^2 + 2 + 2 \, \varepsilon \cancel{- 1} = 2 \, (1+\varepsilon)^2 \, (1+\varepsilon)^N \\
&\iff \varepsilon^2 + 4 \, \varepsilon + 2 = 2 \, \left(1+ 2 \, \varepsilon +\varepsilon^2 \right)\, (1+\varepsilon)^N \\
&\iff \varepsilon^2 + 4 \, \varepsilon + 2 = (2 + 4 \, \varepsilon + 2 \, \varepsilon^2)(1+ N \, \varepsilon) \\
&\iff \varepsilon^2 + 4 \, \varepsilon + 2 = 2 + 4 \, \varepsilon + 2 \, \varepsilon^2 + 2 \, N \, \varepsilon + 4 \, N \, \varepsilon^2 + 2 \, N \, \varepsilon^3 \\
&\iff \varepsilon^2 \cancel{+ 4 \, \varepsilon} \cancel{+ 2} = \cancel{2} \cancel{+ 4 \, \varepsilon} + 2 \, \varepsilon^2 + 2 \, N \, \varepsilon + 4 \, N \, \varepsilon^2 + 2 \, N \, \varepsilon^3 \\
&\iff 2 \, N \, \varepsilon^3 + \varepsilon^2 + 4 \, N \, \varepsilon^2 + 2 \, N \, \varepsilon = 0 \\
&\iff \varepsilon \, [2 \, N \, \varepsilon^2 + (1+4 \, N) \, \varepsilon + 2 \, N] = 0 \\
&\iff \varepsilon = 0 \quad \vee \quad \varepsilon = \frac{-(1+4 \, N) \pm \sqrt{(1+4 \, N)^2 - 16 \, N^2}}{4 \, N} \\
&\iff \varepsilon = 0 \quad \vee \quad \varepsilon = \frac{-(1+4 \, N) \pm \sqrt{1 + 8 \, N + 16 \, N^2 - 16 \, N^2}}{4 \, N} \\
&\iff \varepsilon = 0 \quad \vee \quad \varepsilon = \frac{-(1+4 \, N) \pm \sqrt{1 + 8 \, N \cancel{+ 16 \, N^2} \cancel{- 16 \, N^2}}}{4 \, N} \\
&\iff \varepsilon = 0 \quad \vee \quad \varepsilon = \frac{-(1+4 \, N) \pm \sqrt{1 + 8 \, N}}{4 \, N}.
\end{aligned}$$

For ##\varepsilon = 0##, then ##k = 1## for all ##N##.
Then, ##k = 1## as ##N \to \infty## would be a solution, but I'm very suspicious of that.

For ##\varepsilon = \dfrac{-(1+4 \, N) \pm \sqrt{1 + 8 \, N}}{4 \, N}##, then ##k = 1 + \dfrac{-(1+4 \, N) \pm \sqrt{1 + 8 \, N}}{4 \, N}##.
Evaluating the limit as ##N## approaches ##\infty##, we have:
$$\begin{aligned}
\lim_{N \to \infty} k(N) &= \lim_{N \to \infty}\left[1 +\frac{-(1+4 \, N) \pm \sqrt{1 + 8 \, N}}{4 \, N}\right]\\
&= \lim_{N \to \infty} 1 + \lim_{N \to \infty}\left[\frac{-(1+4 \, N) \pm \sqrt{1 + 8 \, N}}{4 \, N}\right] \\
&= 1 + \lim_{N \to \infty}\left[\frac{-(1+4 \, N) \pm \sqrt{1 + 8 \, N}}{4 \, N}\right] \\
&= 1 + \lim_{N \to \infty}\left(\frac{-1 -4 \, N \pm \sqrt{1 + 8 \, N}}{4 \, N}\right) \\
&= 1 + \lim_{N \to \infty}\left(\frac{-1}{4 \, N} - \frac{4 \, N}{4 \, N} \pm \frac{\sqrt{1 + 8 \, N}}{4 \, N}\right) \\
&= 1 - \underbrace{\lim_{N \to \infty} \frac{1}{4 \, N}}_{= \, 0} - \underbrace{\lim_{N \to \infty} 1}_{= \, 1} \pm \underbrace{\lim_{N \to \infty} \frac{\sqrt{1 + 8 \, N}}{4 \, N}}_{= \, 0} \\
&= 1 - 0 - 1 \pm 0 \\
&= 0.
\end{aligned}$$
Then, ##k \to 0## as ##N \to \infty##, but I'm not sure about that.

Does that make sense?
 
  • #10
When making approximations you need to be consistent about what order terms you keep. You kept ##\epsilon^2## in some places but not all.
 
  • #11
haruspex said:
When making approximations you need to be consistent about what order terms you keep. You kept ##\epsilon^2## in some places but not all.
Mhm, OK.
Then, should I consider $$\left(1+ \varepsilon \right)^N \approx 1+ N \, \varepsilon + \frac{N \, (N-1)}{2} \varepsilon^2$$ rather than $$\left(1+ \varepsilon \right)^N \approx 1+ N \, \varepsilon \, ?$$
 
  • #12
Meden Agan said:
Mhm, OK.
Then, should I consider $$\left(1+ \varepsilon \right)^N \approx 1+ N \, \varepsilon + \frac{N \, (N-1)}{2} \varepsilon^2$$ rather than $$\left(1+ \varepsilon \right)^N \approx 1+ N \, \varepsilon \, ?$$
Yes. But since N will become very large you can use ##N^2## rather than ##N(N-1)##.
 
  • #13
haruspex said:
Yes. But since N will become very large you can use ##N^2## rather than ##N(N-1)##.
Mhm. I get really tedious and lengthy algebra. You too?

I obtain
$$\varepsilon(N) = -\frac{2 \, N^2 + 2 \, N}{3 \, N^2} + \frac{\sqrt[3]{N^6 - 6 \, N^5 + 21 \, N^4 + \, N^3 + 3\sqrt{3}\sqrt{N^{10} - 6 \, N^9 + 14 \, N^8 + 2 \, N^7}}}{3 \, N^2} - \frac{-N^4 + 4 \, N^3 - N^2}{3 \, N^2\sqrt[3]{N^6 - 6 \, N^5 + 21 \, N^4 + N^3 + 3\sqrt{3}\sqrt{N^{10} - 6 \, N^9 + 14 \, N^8 + 2 \, N^7}}}.$$
Then
$$k(N) = 1 -\frac{2 \, N^2 + 2 \, N}{3 \, N^2} + \frac{\sqrt[3]{N^6 - 6 \, N^5 + 21 \, N^4 + \, N^3 + 3\sqrt{3}\sqrt{N^{10} - 6 \, N^9 + 14 \, N^8 + 2 \, N^7}}}{3 \, N^2} - \frac{-N^4 + 4 \, N^3 - N^2}{3 \, N^2\sqrt[3]{N^6 - 6 \, N^5 + 21 \, N^4 + N^3 + 3\sqrt{3}\sqrt{N^{10} - 6 \, N^9 + 14 \, N^8 + 2 \, N^7}}}$$
and $$\lim_{N \to \infty} k(N) = 1.$$

If that were true, we would have ##k \to 1## as ##N \to \infty##.
 
  • #14
No, nothing like that.
As mentioned, might as well use N rather than N+2. They’re both going to infinity:
##(1+\epsilon)^2+2(1+\epsilon)-1=2(1+\epsilon)^N##
Expand both sides up to ##\epsilon^2##. The constant term cancels, allowing you to cancel a factor of ##\epsilon## and yielding an expression for ##\epsilon##.
 
  • #15
There's no need for the approximations. Note that ##k^N \to 0## as ##N \to \infty##. In which case, the equation for ##k## reduces to:
$$k^2 +2k -1 =0$$
 
  • #16
Here's how I did it. Let ##t_1/2## be the time to hit the wall, and also the time after hitting the wall to the first bounce. Then ##t_j = k^{j-1}t_1## is the time for the jth bounce. Let ##u## be the initial horizontal velocity and ##T## be the total time after hitting the wall. And ##d## be the distance to the wall and ##h## the height of impact on the wall.

We have:$$\frac {t_1}{2} = \frac d u$$We can calculate ##T## two ways:
$$T = \frac d {ku} = \frac{t_1}{2k}$$And
$$T = \frac {t_1}{2} + \sum_{j =2}^N k^{j-1}t_1 = t_1\big (\frac 1 2 + \frac {k -k^N}{1-k}\big )$$This leads to:
$$k^2+2k-1 = 2k^{N+1}$$
 
  • #17
PeroK said:
There's no need for the approximations. Note that ##k^N \to 0## as ##N \to \infty##. In which case, the equation for ##k## reduces to:
$$k^2 +2k -1 =0$$
That is what I did in post #1. See post #4.
 
  • #18
haruspex said:
No, nothing like that.
As mentioned, might as well use N rather than N+2. They’re both going to infinity:
##(1+\epsilon)^2+2(1+\epsilon)-1=2(1+\epsilon)^N##
Expand both sides up to ##\epsilon^2##. The constant term cancels, allowing you to cancel a factor of ##\epsilon## and yielding an expression for ##\epsilon##.
We know that $$k^2 + 2 \, k - 1 = 2 \, k^{N+2}. \tag{*}$$
If we write ##k = 1 + \varepsilon##, ##(*)## becomes:
$$(1+ \varepsilon)^2 + 2 \, (1+\varepsilon) - 1 = 2 \, (1+\varepsilon)^{N+2}. \tag{1}$$
As ##N \to \infty##, ##N+2 \sim N##. Then ##(1)## becomes:
$$(1+ \varepsilon)^2 + 2 \, (1+\varepsilon) - 1 = 2 \, (1+\varepsilon)^{N}. \tag{2}$$
If we expand ##(1+ \varepsilon)^N## up to second-order, we have ##(1+ \varepsilon)^N \approx 1 + N \, \varepsilon + \dfrac{N \, (N-1)}{2} \varepsilon^2##.
Then ##(2)## becomes:
$$(1+ \varepsilon)^2 + 2 \, (1+\varepsilon) - 1 = 2 \left(1 + N \, \varepsilon + \frac{N \, (N-1)}{2} \varepsilon^2\right). \tag{3}$$
As ##N \to \infty##, ##N-1 \sim N \implies N \, (N-1) \sim N^2##. Then ##(3)## becomes:
$$\begin{aligned}
(1+ \varepsilon)^2 + 2 \, (1+\varepsilon) - 1 = 2 \left(1 + N \, \varepsilon + \frac{N^2}{2} \varepsilon^2\right) &\iff 1 + 2 \, \varepsilon + \varepsilon^2 + 2 + 2 \, \varepsilon - 1 = 2 + 2 \, N \, \varepsilon + 2 \, \frac{N^2}{2} \, \varepsilon^2 \\
&\iff \cancel{1} + 2 \, \varepsilon + \varepsilon^2 + 2 + 2 \, \varepsilon \cancel{- 1} = 2 + 2 \, N \, \varepsilon + \cancel{2} \, \frac{N^2}{\cancel{2}} \, \varepsilon^2 \\
&\iff \varepsilon^2 + 4 \, \varepsilon + 2 = 2 + 2 \, N \, \varepsilon + N^2 \, \varepsilon^2 \\
&\iff \varepsilon^2 + 4 \, \varepsilon \cancel{+ 2} = \cancel{2} + 2 \, N \, \varepsilon + N^2 \, \varepsilon^2 \\
&\iff \varepsilon^2 - N^2 \, \varepsilon^2+ 4 \, \varepsilon - 2 \, N \, \varepsilon = 0 \\
&\iff \varepsilon^2 \, \left(1- N^2 \right)+ 2 \, \varepsilon \left(2- N \right)= 0 \\
&\iff \varepsilon \left[\varepsilon \, \left(1- N^2 \right) + 2 \left(2-N \right)\right] = 0 \\
&\iff \varepsilon = 0 \quad \vee \quad\varepsilon \, \left(1- N^2 \right) + 2 \left(2-N \right) = 0 \\
&\iff \varepsilon = 0 \quad \vee \quad \varepsilon \, \left(1- N^2 \right) - 2 \left(N-2 \right) = 0 \\
&\iff \varepsilon = 0 \quad \vee \quad \varepsilon \, \left(1- N^2 \right) = 2 \left(N-2 \right) \\
&\iff \varepsilon = 0 \quad \vee \quad \varepsilon = \frac{2 \, \left(N-2 \right)}{\left(1- N^2 \right)}.
\end{aligned}$$

For ##\varepsilon = 0##, then ##k = 1## for all ##N##.
Then, ##k = 1## as ##N \to \infty## would be a solution.

For ##\varepsilon = \dfrac{2 \, \left(N-2 \right)}{\left(1- N^2 \right)}##, then ##k(N) = 1 + \dfrac{2 \, \left(N-2 \right)}{\left(1- N^2 \right)}##.
Evaluating the limit of ##k(N)## as ##N## approaches ##\infty##, we have:
\begin{aligned}
\lim_{N \to \infty} k(N) &= \lim_{N \to \infty} \left[1+\frac{2 \, \left(N-2 \right)}{\left(1- N^2 \right)}\right] \\
&= \lim_{N \to \infty} 1 + \lim_{N \to \infty} \frac{2 \, \left(N-2 \right)}{\left(1- N^2 \right)} \\
&= 1 + \lim_{N \to \infty} \frac{2 \, \left[N\left(1- \cancelto{0}{\dfrac{2}{N}} \right)\right]}{\left[-N^2 \left(1 \cancelto{0}{- \, \dfrac{1}{N^2}} \right)\right]} \\
&= 1 +\lim_{N \to \infty} \frac{2 \, N}{- N^2} \\
&= 1 +\lim_{N \to \infty} \frac{2 \, \cancel{N}}{- N^{\cancel{2}}} \\
&= 1 - \underbrace{\lim_{N \to \infty} \frac{2}{N}}_{= \, 0} \\
&= 1.
\end{aligned}
Then ##k \to 1## as ##N \to \infty##.

In both cases, we would have ##k \to 1## as ##N \to \infty##.
 
  • #19
I do apologise, I have led you the wrong way.

Just realised there should have been a caveat where you cancelled ##D_{AB}## in post #1. If k=1 then ##D_{AB}=0##, so N=0 is the only solution for k=1.
To take the limit, the substitution for k should have been ##k=\sqrt 2-1+\epsilon## (Doh!).
This leads to ##-2\sqrt 2\epsilon=(\sqrt 2-1)^N(1+\frac{N\epsilon}{\sqrt 2-1})##, which gives the desired result.
 
  • #20
haruspex said:
I do apologise, I have taken you the wrong way.

Just realised there should have been a caveat where you cancelled ##D_{AB}## in post #1. If k=1 then ##D_{AB}=0##, so N=0 is the only solution for k=1.
To take the limit, the substitution for k should have been ##k=\sqrt 2-1+\epsilon## (Doh!).
This leads to ##-2\sqrt 2\epsilon=(\sqrt 2-1)^N(1+\frac{N\epsilon}{\sqrt 2-1})##, which gives the desired result.
Mhm. How can we guess ##k = \sqrt 2 - 1 + \varepsilon##? Seems we need to know in advance the result is ##k= \sqrt 2 - 1##...
 
  • #21
Meden Agan said:
Mhm. How can we guess ##k = \sqrt 2 - 1 + \varepsilon##? Seems we need to know in advance the result is ##k= \sqrt 2 - 1##...
Your original approach strongly suggested that limiting value, it just wasn't a valid proof.
 
  • #22
Meden Agan said:
Mhm. How can we guess ##k = \sqrt 2 - 1 + \varepsilon##? Seems we need to know in advance the result is ##k= \sqrt 2 - 1##...
For ##N > 1## the two functions intersect at a point greater than ##\sqrt 2 -1##. You do know that is the positive zero of the quadratic. They also intersect at ##k=1##.

For large ##N## the power function converges pointwise to zero. The sequence of intersection points as ##N## increases must have arbitrarily small function values, hence must converge to the zero of the quadratic.

You could do an explicit epsilon-based argument if you wanted, but that's the outline.
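The outline above can also be checked numerically. Linearizing ##(*)## about the limit (my own first-order estimate, keeping the exponent ##N+2##): writing ##k = \left(\sqrt 2 - 1\right) + \epsilon## and dropping terms of order ##\epsilon^2## and ##\epsilon \, k^{N+1}## gives ##2\sqrt 2 \, \epsilon \approx 2\left(\sqrt 2 - 1\right)^{N+2}##, i.e. ##\epsilon \approx \left(\sqrt 2 - 1\right)^{N+2}/\sqrt 2##. A quick comparison against the exact roots (the helper `k_exact` is mine):

```python
import math

A = math.sqrt(2) - 1          # limiting value of k

def k_exact(N, tol=1e-14):
    # Bisection for the physical root of k^2 + 2k - 1 = 2k^(N+2) in (0, 1).
    f = lambda k: k*k + 2*k - 1 - 2*k**(N + 2)
    lo, hi = 1e-6, 0.9999
    while hi - lo > tol:
        m = 0.5 * (lo + hi)
        if f(m) < 0:
            lo = m
        else:
            hi = m
    return 0.5 * (lo + hi)

for N in (5, 10, 20):
    eps_true = k_exact(N) - A
    eps_est = A**(N + 2) / math.sqrt(2)   # first-order estimate
    print(N, eps_true, eps_est)           # agreement improves rapidly with N
```

So ##k_N - \left(\sqrt 2 - 1\right)## decays geometrically, roughly like ##\left(\sqrt 2 - 1\right)^{N+2}/\sqrt 2##, consistent with the convergence argument above.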
 