Solving an Equation Numerically

  • Thread starter Manchot
In summary, the conversation discusses various methods for finding the zero of a function, specifically when the function is in the form of a step function. The methods mentioned include using the floor function, binary search algorithm, and Newton's method. However, it is noted that without any functional information, finding the root can become a guessing game. The conversation also mentions the possibility of using arc-length methods and optimization routines in certain cases.
  • #1
Manchot
Hey everyone. I've been looking for a method to find a zero of a certain equation numerically, but I've been having a problem doing so. I guess that I could do some form of transformation on it, but I haven't been successful thus far. Basically, the equation's in the form

[tex]f(x) = \left\{ \begin{array}{rcl}
1 & \mbox{for} & x \neq a \\
0 & \mbox{for} & x = a
\end{array}\right.[/tex]

I'd like to be able to solve for a. Now, I do know the interval that a is in: it's between c and d. We also know that a is a positive integer. Now, one idea I had was to use the floor function to make an "indentation" in the graph:
[tex]g(x) = f(\lfloor{x}\rfloor) + (f(\lfloor{x}\rfloor+1)-f(\lfloor{x}\rfloor))(x-\lfloor{x}\rfloor)[/tex]

Another idea that I had was to try to "flip" half of the graph around by negating it. I could then use the binary search algorithm. Unfortunately, to know which half to flip, I'd have to use a itself, which I don't know.

Is there any sort of transformation that I can do on the function to get it to "behave" smoothly, and allow me to use Newton's method on it (or any other quick approximation) in a computer? Keep in mind that I can only evaluate the function or its derivative, since I'm using numerical methods. Thanks.
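
A minimal sketch of the floor-based interpolation above, using a toy indicator function with a hypothetical root at a = 7 (the value is hard-coded only to make the example runnable; the solver itself would not know it):

[code]
import math

a = 7  # hypothetical, only used to define the toy f below

def f(x):
    return 0 if x == a else 1

def g(x):
    # g(x) = f(floor(x)) + (f(floor(x)+1) - f(floor(x))) * (x - floor(x))
    n = math.floor(x)
    return f(n) + (f(n + 1) - f(n)) * (x - n)

print(g(6.75), g(7.25))  # 0.25 0.25: g dips toward 0 only inside (a-1, a+1)
[/code]

As the replies below point out, the dip is only visible inside (a - 1, a + 1), so a root finder evaluating g anywhere else gets no information to steer by.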
 
  • #2
I'm not sure if I'm getting this right, but is it possible to modify the variable(s) causing the step-like behavior by replacing them with something like a step function smoothed over an interval (with continuous derivatives to the required extent), or a similar construct? Then you could solve that problem numerically and, by tightening the smoothing, check that it converges to the correct a. Or have you already thought about something like this, or is something preventing it?
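
One way to read this suggestion, sketched under the assumption that the hard step comes from taking the sign of some underlying smooth quantity h(x) (the h below is made up purely for illustration): replace the sign with tanh(·/ε), which is differentiable and approaches the hard step as ε → 0.

[code]
import math

def h(x):
    return x - 3.0  # hypothetical smooth quantity whose sign creates the step

def smoothed_step(x, eps=0.1):
    # Differentiable surrogate for sign(h(x)); recovers the hard step as eps -> 0
    return math.tanh(h(x) / eps)

# A standard root finder can then be run on smoothed_step for a sequence of
# decreasing eps values, checking that the computed root settles down.
[/code]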
 
  • #3
Since a is given in the problem definition, for this to make any sense at all, a must be known by someone or something else (a computer), and you are doing a search to find it. A binary search needs directional information (too high or too low), which is not available here.
What you are given is essentially a mathematical statement of "What number am I thinking of?", with the only possible responses being yes or no. The only possible systematic search is a sequential one.

Generally speaking, the more functional information you use in your root-finding routine, the quicker the method will converge. Here there is no functional information as to the location of the root. It is a guessing game.
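
A minimal sketch of that sequential scan, assuming the only available operation is evaluating f and checking whether it returned 0 (the bounds c and d are taken to be integers here):

[code]
def find_root_by_scan(f, c, d):
    """Return the integer a in [c, d] with f(a) == 0, or None if there is none."""
    for x in range(c, d + 1):
        if f(x) == 0:
            return x
    return None

# Example with a hypothetical f whose root is 42:
print(find_root_by_scan(lambda x: 0 if x == 42 else 1, 0, 100))  # 42
[/code]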
 
  • #4
Whoops, you're right. I had basically neutered my function to be information-less. (f wasn't the original function; it was the signum of it.) Thanks.
 
  • #5
I have another question. In general, then, is it impossible to find an algorithm which can find the zero of a function which "bounces" off the x-axis? That is, if [itex]f(x) \geq 0[/itex], but the function has all sorts of crazy shapes on it, is it impossible for you to find an algorithm which will always work?
 
  • #6
For an arbitrary function? No.
For a differentiable one there are numerical ways of finding roots; see, e.g., Newton-Raphson. Of course, if the function really is never negative and it is differentiable, then its roots are repeated roots, and hence local minima too.
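
A minimal Newton-Raphson sketch, assuming both f and its derivative can be evaluated (as stated in the original post). The repeated-root case mentioned above is exactly where plain Newton slows from quadratic to linear convergence, as the example shows:

[code]
def newton(f, df, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfx = df(x)
        if dfx == 0:
            raise ZeroDivisionError("derivative vanished; Newton step undefined")
        x -= fx / dfx
    return x

# f(x) = (x - 2)^2 has a double root at x = 2, so f'(2) = 0 and each Newton
# step only halves the error (linear convergence) instead of squaring it.
print(newton(lambda x: (x - 2)**2, lambda x: 2*(x - 2), x0=5.0))  # ~2.0
[/code]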
 
  • #7
I have another question. In general, then, is it impossible to find an algorithm which can find the zero of a function which "bounces" off the x-axis? That is, if [itex]f(x) \geq 0[/itex], but the function has all sorts of crazy shapes on it, is it impossible for you to find an algorithm which will always work?

As above. In addition, for some really pathological cases it is possible to try something like arc-length methods combined with optimization routines to creep through a section and work with the local minima/maxima.
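
One hedged sketch of the optimization angle (not necessarily what was meant by arc-length methods): for a nonnegative function, any zero is a global minimum with value 0, so a bounded scalar minimizer such as SciPy's minimize_scalar can be run over an interval and its result checked against a tolerance.

[code]
import math
from scipy.optimize import minimize_scalar  # assumes SciPy is available

def zero_of_nonnegative(f, lo, hi, tol=1e-8):
    """Look for a zero of a nonnegative f on [lo, hi] by minimizing it."""
    res = minimize_scalar(f, bounds=(lo, hi), method="bounded")
    return res.x if res.fun < tol else None  # None: no zero found in [lo, hi]

# f(x) = sin(x)^2 "bounces" off the axis at multiples of pi:
print(zero_of_nonnegative(lambda x: math.sin(x)**2, 2.0, 4.0))  # ~= 3.14159
[/code]

For a function with many bumps, the interval would have to be subdivided and each piece minimized separately, which is consistent with the point above that no single routine is guaranteed to work for an arbitrary shape.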
 

1. What does it mean to solve an equation numerically?

Solving an equation numerically means finding the values of the variables in the equation using numerical methods, without solving for the exact algebraic expression of the solution.

2. What are some common numerical methods used to solve equations?

Some common numerical methods used to solve equations include the bisection method, Newton's method, and the secant method.
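
As a concrete illustration of the first of these, a minimal bisection sketch (note that it requires a sign change on the bracket, which is exactly what the "bouncing" nonnegative functions discussed above do not provide):

[code]
def bisect(f, lo, hi, tol=1e-12):
    flo, fhi = f(lo), f(hi)
    assert flo * fhi <= 0, "f must change sign on [lo, hi]"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if flo * fmid <= 0:
            hi, fhi = mid, fmid  # root lies in the left half
        else:
            lo, flo = mid, fmid  # root lies in the right half
    return 0.5 * (lo + hi)

print(bisect(lambda x: x**2 - 2, 0.0, 2.0))  # ~= 1.41421 (sqrt 2)
[/code]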

3. When is it necessary to solve an equation numerically instead of algebraically?

It is necessary to solve an equation numerically when the equation cannot be solved algebraically, or when the solution is too complex to find using algebraic methods.

4. What is the difference between a numerical solution and an analytical solution?

A numerical solution is an approximation of the exact solution, while an analytical solution is an exact expression for the solution of an equation.

5. Can numerical solutions be more accurate than analytical solutions?

In principle, no: an analytical solution is exact, so a numerical approximation cannot be more accurate than it. In practice, however, evaluating a complicated closed-form expression in floating-point arithmetic introduces its own rounding errors, and a carefully designed numerical method can sometimes deliver a more accurate computed value than naive evaluation of the analytical formula.
