Why do we care so much about the roots of equations?

Hi,

I know what roots are and how to find them, but I don’t know why they are so important.

What is it that makes the points where a function becomes zero so important? I saw a similar post on this topic, but it discusses roots from an optimization point of view. However, finding roots fascinated even ancient mathematicians, long before optimization problems appeared.

I am wondering if anyone has a better explanation.

Thanks.
 
There is nothing particularly special about 0. If you have an expression in x equal to any number, you can always subtract that number from both sides and make it "equal to 0". And solving equations is just what we do in all forms of mathematics: looking for numbers (or other mathematical entities) that have given properties.
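For instance, solving $$x^2 + 1 = 5x$$ is exactly the same problem as solving $$x^2 - 5x + 1 = 0.$$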
 
In more advanced mathematics, the rate at which a function grows is intimately related to the distribution of its zeros, so if you know something about where the zeros are located, you can estimate how fast the function grows. To see this, consider the fact that the more roots a polynomial has (counted with multiplicity), the faster it grows. For example, the polynomial function $$f(x)=x^2-1$$ has two roots and the polynomial function $$g(x)=x$$ has one root, so f grows faster than g. You can extend this idea to functions on the complex plane, although this is easier said than done; much of the theory of entire functions deals with exactly this relationship.
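To see why more roots force faster growth in the polynomial case: a monic polynomial with roots $$r_1, \dots, r_n$$ (counted with multiplicity) factors as
$$p(x) = \prod_{k=1}^{n}(x - r_k),$$
so $$|p(x)| \sim |x|^n$$ as $$|x| \to \infty$$ — every extra root contributes an extra factor of roughly size $$|x|$$.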

A good example of a result from this theory says something like this: if the zeros of an entire function are arranged densely enough in the complex plane and the function grows too slowly, then the function is identically zero.
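To make the connection quantitative, recall Jensen's formula, the standard bridge between zero counts and growth: if $$f$$ is analytic on $$|z| \le R$$ with $$f(0) \neq 0$$ and zeros $$a_1, \dots, a_n$$ in $$|z| < R$$, then
$$\log|f(0)| = \sum_{k=1}^{n} \log\frac{|a_k|}{R} + \frac{1}{2\pi}\int_{0}^{2\pi} \log\left|f\!\left(Re^{i\theta}\right)\right|\, d\theta.$$
Each zero makes the sum more negative, so a function with many zeros inside the disk must be correspondingly large somewhere on the circle $$|z| = R$$.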
 
Hey musicgold.

Finding roots is a means to an end in solving sets of equalities (and is useful for understanding inequalities as well).

For example, if you need to find where two lines meet, then you set up equalities and solve for the unknowns. If you need to find the turning points of a function, then you solve f'(x) = 0 for a differentiable function f.
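Here is a minimal Python sketch of both examples; the specific lines and the function f below are illustrative choices, not anything fixed by the problem:

```python
# Both problems reduce to finding where something equals zero.

def root_of_linear(a, b):
    """Solve a*x + b = 0, assuming a != 0."""
    return -b / a

# Where do the lines y = 2x + 1 and y = -x + 7 meet?
# Setting 2x + 1 = -x + 7 and moving everything to one side
# gives 3x - 6 = 0, a root-finding problem.
x_meet = root_of_linear(3, -6)
print(x_meet)  # 2.0, so the lines meet at (2, 5)

# Turning point of f(x) = x^2 - 4x + 3: solve f'(x) = 2x - 4 = 0.
x_turn = root_of_linear(2, -4)
print(x_turn)  # 2.0, the minimum of f
```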

If you want to find the fixed points of a differential equation, then you solve for roots as well: for dx/dt = f(x), the fixed points are exactly the roots of f.
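A minimal sketch of that, using the logistic equation as an illustrative example:

```python
import numpy as np

# For an autonomous ODE dx/dt = f(x), the fixed points are the
# roots of f. Example: the logistic equation
#   dx/dt = x * (1 - x) = -x^2 + x
coeffs = [-1.0, 1.0, 0.0]        # -x^2 + x + 0, highest degree first
fixed_points = np.roots(coeffs)  # numerical roots of f
print(fixed_points)              # equilibria at x = 1 and x = 0
```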

Basically, what happens is that you are given a problem, and if you want to find solutions, you can generally reduce it to finding the roots of some equation.
 
Solving P=Q is the same as solving P-Q=0.

I doubt the ancient mathematicians were particularly interested in finding the roots of a function; solving equations was the interesting thing, and they didn't have an adequately developed notion of subtraction and negative numbers to turn equations into root-finding problems.

e.g., IIRC, they had four versions of (something equivalent to) the quadratic formula, one for solving each type of equation below:
  • ax^2 + bx + c = 0
  • ax^2 + bx = c
  • ax^2 + c = bx
  • ax^2 = bx + c
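To see why allowing negative numbers collapses the four cases into one, here is a minimal Python sketch (the example coefficients are illustrative):

```python
import math

# Once negative coefficients are allowed, every ancient form
# rearranges to a*x^2 + b*x + c = 0 and one formula covers all cases.

def quadratic_roots(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0 (assumes a != 0, disc >= 0)."""
    disc = b * b - 4 * a * c
    s = math.sqrt(disc)
    return ((-b + s) / (2 * a), (-b - s) / (2 * a))

# "ax^2 + c = bx" with a=1, b=5, c=6 rearranges to x^2 - 5x + 6 = 0:
print(quadratic_roots(1, -5, 6))   # (3.0, 2.0)

# "ax^2 = bx + c" with a=1, b=1, c=6 rearranges to x^2 - x - 6 = 0:
print(quadratic_roots(1, -1, -6))  # (3.0, -2.0)
```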
 
Thanks, folks.
 