Solve the problem that involves iteration

In summary, the method of checking for sign changes between two points to determine the existence of roots may not always be accurate. It can result in a false negative if there are an even number of simple roots in the interval or an odd number of simple roots and a vertical asymptote at which the function changes sign, or a double root. It can also result in a false positive if the interval contains an odd number of vertical asymptotes at which the function changes sign. Therefore, this method should be used with caution and other techniques, such as graphing or using derivatives, should also be considered.
  • #1
chwala
Gold Member
2,728
382
Homework Statement
see attached.
Relevant Equations
iterative techniques

part (a)

Asymptote at ##x=0.5##

part (b)

##\dfrac{e^x}{4x^2-1}= -2##

##e^x=2-8x^2##

##2e^x= 4-16x^2##

##16x^2=4-2e^x##

##x^2= \dfrac{4-2e^x}{16}##

##x=\dfrac{\sqrt{4-2e^x}}{4}##

part (c)

...

##x_{2}=0.2851##

##x_{3}=0.2894##

##x_{4}=0.2881##

##x_{5}=0.2885##

##x_{6}=0.2884##

##x_{7}=0.2884##

##α=0.2884##
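For reference, the part (c) iteration can be reproduced with a short script. This is a sketch; the starting value ##x_1 = 0.3## is an assumption (it is consistent with ##x_2 = 0.2851## above and with the ##F'(0.3)## considered in part (d)).

```python
import math

# Part (c) sketch: iterate x_{n+1} = sqrt(4 - 2 e^{x_n}) / 4 from part (b).
# The starting value x1 = 0.3 is an assumption; it reproduces x2 = 0.28507
# and the iterates settle near alpha ~ 0.2884.
def g(x):
    return math.sqrt(4 - 2 * math.exp(x)) / 4

x = 0.3
for n in range(2, 9):
    x = g(x)
    print(f"x_{n} = {x:.5f}")
```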

For part (d)

Not sure here, but I checked directly with

##F(x_n) = \ln (2-8x_n^2)## ...and noted that after a few iterations we were ending up taking the natural log of negative numbers, thus ##α## cannot be found.
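Numerically, that failure can be checked with a short script (a sketch; ##x_1 = 0.3## is assumed as the starting value):

```python
import math

# Part (d) sketch: iterate F(x) = ln(2 - 8 x^2) and watch the log argument.
# Once 2 - 8 x^2 <= 0 the next iterate is undefined, so the scheme fails.
def F(x):
    return math.log(2 - 8 * x * x)

x = 0.3   # assumed starting value
for n in range(2, 10):
    arg = 2 - 8 * x * x
    if arg <= 0:
        print(f"x_{n}: log argument {arg:.3f} <= 0, F is undefined")
        break
    x = F(x)
    print(f"x_{n} = {x:.5f}")
```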

on a different approach, using second derivative,

##F^{'}(x) = \left[\dfrac{1}{2-8x^2} × -16x\right]##

##F^{'}(x) = \dfrac{8x}{4x^2-1}##

##F^{'}(0.3) = -3.75##

##F^{''}(x)=- \left[\dfrac{8+32x^2}{(4x^2-1)^2}\right]##

This will always be negative irrespective of ##x## value thus ##α## cannot be found.
 
  • #2
chwala said:
Homework Statement: see attached.
Relevant Equations: iterative techniques

part (a)

Asymptote at ##x=0.5##
This doesn't explain why the sign change method doesn't work for x = 0 and x = 1.
chwala said:
part (b)

##\dfrac{e^x}{4x^2-1}= -2##

##e^x=2-8x^2##

##2e^x= 4-16x^2##

##16x^2=4-2e^x##

##x^2= \dfrac{4-2e^x}{16}##

##x=\dfrac{\sqrt{4-2e^x}}{4}##
This looks to be correct, but with quite a few unnecessary steps. All you need to do is to get the ##x^2## term on one side and the term with ##e^x## on the other.
Also, there is a positive and a negative solution for x.
chwala said:
<snip>
Part c looks fine.
For part (d)

Not sure here, but I checked directly with

##F(x_n) = \ln (2-8x^2_n)## ...and noted that after a few iterations we were ending up with ##\ln ## of negative numbers thus ##α## cannot be found.
...
##F^{'}(x) = \left[\dfrac{1}{2-8x^2} × -16x\right]##

##F^{'}(x) = \dfrac{8x}{4x^2-1}##

##F^{'}(0.3) = -3.75##

##F^{''}(x)=- \dfrac{8+32x^2}{(4x^2-1)^2}## will always be negative irrespective of ##x## value thus ##α## cannot be found.
 
  • #3
For part d, I think there might be a typo in the problem statement.
"By considering F'(0.3)..." seems irrelevant to me, but if ##x_0 = 0.3## is ##x_1 = F(x_0)## in the domain of F?
 
  • #4
chwala said:
on a different approach, using second derivative,
<snip>

This will always be negative irrespective of x value thus α cannot be found.
I don't see how this answers the question asked in part d. In my previous post I mentioned that the problem author might have made a typo, which in my view sent you off on a wild goose chase.
 
  • #5
@Mark44 I am informed the question is okay (correct)... one needs to check the modulus of the derivative and establish that there is no convergence to a particular value... unless that's not correct, boss.
 
  • #6
chwala said:
@Mark44 I am informed the question is okay (correct)
Who informed you of this? I don't see how F'(0.3) is relevant to the problem at all.
chwala said:
one needs to check the modulus of the derivative and establish the fact that there is no convergence to a particular value...
Based on what is given in the problem, the above seems to me to be just handwaving.
 
  • #7
The iteration in (d) is the (local) inverse of the iteration in (c), so if one moves you towards the root, the other must move you away from it.

[itex]\alpha[/itex] is an unstable fixed point of the iteration [itex]x_{n+1} = F(x_n)[/itex] if [itex]|F'(\alpha)| > 1[/itex]. Here we are asked to look at [itex]F'(0.3)[/itex], which is close to, but not at, the fixed point. However by continuity of [itex]F'[/itex] we might be able to conclude that [itex]|F'(\alpha)| > 1[/itex].
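A quick numerical check of this (a sketch; G denotes the part (c) map ##\sqrt{4-2e^x}/4## and F the part (d) map ##\ln(2-8x^2)##):

```python
import math

# Derivatives of the two iteration maps near x = 0.3. F and G are local
# inverses, so their derivatives are reciprocal at the fixed point alpha.
def Gprime(x):
    # d/dx [sqrt(4 - 2 e^x)/4] = -e^x / (4 sqrt(4 - 2 e^x))
    return -math.exp(x) / (4 * math.sqrt(4 - 2 * math.exp(x)))

def Fprime(x):
    # d/dx [ln(2 - 8 x^2)] = -16x / (2 - 8 x^2) = 8x / (4 x^2 - 1)
    return 8 * x / (4 * x * x - 1)

print(Fprime(0.3))   # -3.75 exactly: |F'| > 1 near alpha, iterates repelled
print(Gprime(0.3))   # about -0.296: |G'| < 1, the part (c) iteration converges
```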
 
  • #8
More rigorously, if [itex]x_n = 0.3 + \xi_n[/itex] then [itex]\xi_n[/itex] satisfies [tex]
\xi_{n+1} + 0.3 = F(0.3 + \xi_n) = F(0.3) + \xi_nF'(0.3) + \dots[/tex] which to leading order is of the form [tex]
\xi_{n+1} = \alpha + \beta\xi_n[/tex] which has solution [tex]
\xi_n = \left(\xi_0 + \frac{\alpha}{\beta - 1}\right)\beta^n - \frac{\alpha}{\beta - 1}.[/tex] We can see that if [itex]|\beta| > 1[/itex] then [itex]\xi_n \to \infty[/itex] and if [itex]|\beta| < 1[/itex] then [itex]\xi_n \to -\alpha/(\beta - 1)[/itex].
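The closed form above can be verified by direct iteration (a sketch; the values of ##\alpha##, ##\beta##, ##\xi_0## below are arbitrary test numbers, written `a`, `b`, `xi0` in the code, with `b` playing the role of ##F'(0.3)##):

```python
# Check xi_n = (xi_0 + a/(b - 1)) * b**n - a/(b - 1) against direct
# iteration of xi_{n+1} = a + b * xi_n (a, b stand for alpha, beta).
a, b, xi0 = 0.5, -3.75, 0.01   # arbitrary test values; |b| > 1 here

xi = xi0
n = 6
for _ in range(n):
    xi = a + b * xi

closed = (xi0 + a / (b - 1)) * b**n - a / (b - 1)
print(xi, closed)   # the two values agree
```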
 
  • #9
Mark44 said:
This doesn't explain why the sign change method doesn't work for x = 0 and x = 1.
This looks to be correct, but with quite a few unnecessary steps. All you need to do is to get the ##x^2## term on one side and the term with ##e^x## on the other.
Also, there is a positive and a negative solution for x.
For part (a),
...a sign change between ##f(0)## and ##f(1)## would imply that a root exists between ##x=0## and ##x=1##. In our case,

##f(0)=1##

and

##f(1)=2.91##

there is no sign change thus no root exists between the two points.
 
  • #10
chwala said:
For part (a),
...a sign change between ##f(0)## and ##f(1)## would imply that a root exists between ##x=0## and ##x=1##. In our case,

##f(0)=1## and ##f(1)=2.91##

there is no sign change thus no root exists between the two points.
Look at the graph you posted as part of the problem statement! Clearly there is a root between x = 0 and x = 1. What you were supposed to do was to explain why the sign-change rule isn't working here, and not conclude that there is no root, which is completely at odds with the graph.
 
  • #11
The sign method will fail with a false negative if there are an even number of simple roots in the interval, or an odd number of simple roots and a vertical asymptote at which the function changes sign, or a double root. It will fail with a false positive if the interval contains an odd number of vertical asymptotes at which the function changes sign.
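Both failure modes can be demonstrated numerically, using this thread's function for the false negative and a simple counterexample for the false positive (a sketch):

```python
import math

def f(x):
    # The function from this problem: f(x) = e^x / (4 x^2 - 1) + 2.
    return math.exp(x) / (4 * x * x - 1) + 2

# False negative: no sign change over [0, 1], yet a root (~0.2884) lies
# inside, because the asymptote at x = 1/2 flips the sign back.
print(f(0), f(1))      # 1.0 and about 2.906: same sign
print(f(0.2884))       # approximately 0: a root is there anyway

# False positive: h(x) = 1/(x - 0.5) changes sign over [0, 1] with no root.
def h(x):
    return 1 / (x - 0.5)

print(h(0), h(1))      # -2.0 and 2.0: sign change, but h has no root
```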
 
  • #12
pasmith said:
The sign method will fail with a false negative if there are an even number of simple roots in the interval, or an odd number of simple roots and a vertical asymptote at which the function changes sign, or a double root. It will fail with a false positive if the interval contains an odd number of vertical asymptotes at which the function changes sign.
@pasmith
thanks, ...is this the correct way to reason about this question? The terms false negative and false positive are relatively new to me. Thanks mate.
 
  • #13
chwala said:
For part (a),
...a sign change between ##f(0)## and ##f(1)## would imply that a root exists between ##x=0## and ##x=1##. In our case,

##f(0)=1##

and

##f(1)=2.91##

there is no sign change thus no root exists between the two points.

Ok, would it be correct if I handle part (a) as follows:

...there is no sign change at ##f(0)## and ##f(1)##, but a root does exist between ##x=0## and ##x=1##, implying that the sign-change rule does not work for this problem.
 
  • #14
chwala said:
Ok, would it be correct if I handle part (a) as follows:

...there is no sign change at ##f(0)## and ##f(1)##, but a root does exist between ##x=0## and ##x=1##, implying that the sign-change rule does not work for this problem.
No, not correct. Again, they are asking why the sign-change rule doesn't work, not just to state that it doesn't work based on looking at the graph.

The textbook you're working from should state exactly what the sign-change rule is, as well as what limitations it has. Does it require that the function in question be a polynomial (in which case it is continuous everywhere) or does it merely require that the function be continuous?
 
  • #15
Mark44 said:
No, not correct. Again, they are asking why the sign-change rule doesn't work, not just to state that it doesn't work based on looking at the graph.

The textbook you're working from should state exactly what the sign-change rule is, as well as what limitations it has. Does it require that the function in question be a polynomial (in which case it is continuous everywhere) or does it merely require that the function be continuous?
The sign-change rule does not work because the function is not continuous between ##x=0## and ##x=1##.

The rule is appropriate for continuous functions on small intervals.
 
  • #16
chwala said:
The sign-change rule does not work because the function is not continuous between ##x=0## and ##x=1##.

The rule is appropriate for continuous functions on small intervals.
Now you're on the right track. Can you state the sign-change rule and any restrictions that are placed on its use?
 
  • #18
What restriction is given in the page at the link you included? Does the function in this thread satisfy this restriction?
 
  • #19
Mark44 said:
What restriction is given in the page at the link you included? Does the function in this thread satisfy this restriction?
Our function is discontinuous... there is an asymptote, and that is why the sign-change rule fails. I hope I am getting your question correctly.

The other scenario where the sign-change rule fails is when the interval is too large, allowing several roots to occur within it; in particular, an even number of roots may mean the roots are missed entirely, while an odd number of roots may mean that not all of them are identified.
 
  • #21
For part (c) my solution was not correct; I should have taken more significant figures...

part (c)

...

##x_{2}=0.28507##

##x_{3}=0.28943##

##x_{4}=0.28817##

##x_{5}=0.28853##

##x_{6}=0.28843##

##x_{7}=0.28846##

##x_{8}=0.28845##

##α=0.2885##
 

FAQ: Solve the problem that involves iteration

What is iteration in problem-solving?

Iteration in problem-solving refers to the process of repeating a set of instructions or calculations until a certain condition is met. It is commonly used in algorithms and computer programming to solve complex problems by breaking them down into simpler, repeatable steps.

How do you determine the stopping condition for an iterative process?

The stopping condition for an iterative process is determined by the specific problem you are trying to solve. Common stopping conditions include reaching a predetermined number of iterations, achieving a desired level of accuracy, or meeting a specific convergence criterion where the difference between successive iterations is below a certain threshold.
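A minimal sketch of such a stopping rule, applied to the part (c) iteration from this thread (the tolerance and iteration cap are illustrative choices):

```python
import math

# Stop when successive iterates differ by less than tol, or give up
# after max_iter steps.
def iterate(g, x0, tol=1e-6, max_iter=100):
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence within max_iter iterations")

root = iterate(lambda x: math.sqrt(4 - 2 * math.exp(x)) / 4, 0.3)
print(round(root, 4))   # 0.2884
```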

What are some common types of iterative algorithms?

Some common types of iterative algorithms include the Newton-Raphson method for finding roots of equations, the Gradient Descent method for optimization problems, the Jacobi and Gauss-Seidel methods for solving linear systems, and various iterative methods used in machine learning and data analysis like k-means clustering and iterative deepening search.
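As one concrete instance, Newton-Raphson can be applied to the equation from this thread, ##e^x = 2 - 8x^2##, rewritten as ##r(x) = e^x + 8x^2 - 2 = 0## (a sketch):

```python
import math

# Newton-Raphson: x_{n+1} = x_n - r(x_n) / r'(x_n).
def newton(r, rprime, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = r(x) / rprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("no convergence")

# r(x) = e^x + 8 x^2 - 2 is zero exactly when e^x = 2 - 8 x^2.
root = newton(lambda x: math.exp(x) + 8 * x * x - 2,
              lambda x: math.exp(x) + 16 * x,
              0.3)
print(round(root, 4))   # 0.2884, matching alpha from part (c)
```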

What are the advantages of using iteration over recursion?

Iteration often has advantages over recursion in terms of memory usage and performance. Iterative solutions typically use a constant amount of memory, whereas recursive solutions can consume more memory due to the call stack. Iteration can also be more straightforward to implement and debug, especially for problems where the depth of recursion can be very large.

Can all problems be solved using iteration?

Not all problems can be easily solved using iteration. Some problems are inherently recursive in nature and are more naturally expressed and solved using recursive techniques. However, many problems that can be solved recursively can also be translated into iterative solutions, often with the help of data structures like stacks or queues to mimic the recursive behavior.
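To illustrate the last point, a recursive traversal can be translated into an iterative one with an explicit stack (a sketch using a hypothetical nested-list "tree"):

```python
# Recursive vs iterative depth-first traversal of a nested-list "tree":
# the explicit stack in the iterative version replaces the call stack.
def flatten_recursive(node):
    if not isinstance(node, list):
        return [node]
    out = []
    for child in node:
        out.extend(flatten_recursive(child))
    return out

def flatten_iterative(root):
    out, stack = [], [root]
    while stack:
        node = stack.pop()
        if isinstance(node, list):
            stack.extend(reversed(node))  # reversed keeps left-to-right order
        else:
            out.append(node)
    return out

tree = [1, [2, [3, 4]], 5]
print(flatten_recursive(tree))   # [1, 2, 3, 4, 5]
print(flatten_iterative(tree))   # [1, 2, 3, 4, 5]
```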
