Justification for cancellation in rational functions

Mr Davis 97
For example, say we have ##\frac{x^4(x - 1)}{x^2}##. This function is undefined at 0, but if we cancel the x's, we get a new function, ##x^2(x - 1)##, that is defined at 0. The cancellation can also be run in reverse: starting from ##x^2(x - 1)##, write it as ##x^2(x - 1)(1)##, and since ##\frac{x^2}{x^2} = 1## (for ##x \neq 0##), we recover ##\frac{x^4(x - 1)}{x^2}##. Either way, this is a new function, since the domain has changed to exclude x = 0. How is this justified? Why can we change the function in that way? Specifically, when we evaluate the limit of ##\frac{x^4(x - 1)}{x^2}##, how do we know that cancelling the x's will lead to the correct limit, since that is in effect the limit of the function ##x^2(x - 1)## and not of ##\frac{x^4(x - 1)}{x^2}##?
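As a quick numerical sanity check (a sketch; ##f## and ##g## are just my own labels for the two expressions above), the two forms agree at every sampled point away from 0, while only the cancelled form is defined at 0:

```python
# Compare f(x) = x^4 (x - 1) / x^2 with the cancelled form g(x) = x^2 (x - 1)
# at points approaching 0. The names f and g are labels I chose for
# the two expressions in the question.

def f(x):
    return x**4 * (x - 1) / x**2  # undefined at x = 0 (0/0)

def g(x):
    return x**2 * (x - 1)         # defined everywhere

# Sample points approaching 0 (never hitting 0 itself).
for n in range(1, 6):
    x = 10**(-n)
    print(x, f(x), g(x), abs(f(x) - g(x)))
```

At x = 0 itself, calling f raises a division-by-zero error, while g(0) = 0, which matches the limit found below.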
 
We can't in general do that and say they're the same function. However, there is a property of limits: given a function ##f## and a function ##g## such that ##g(x) = f(x)## except possibly at a point ##x = c##, then $$\lim_{x \to c} f(x) = \lim_{x \to c} g(x).$$ What we care about is the limit. We're not worried about what actually happens at the point; we're only worried about what happens as we approach the point. What's important here is that we know the two functions are equal everywhere except that one point.

I may be able to find a specific statement of this.
 
The following problem comes from Chapter 5, problem 10 of Spivak's Calculus 1st edition:

Suppose there is a ##\delta > 0## such that ##f(x) = g(x)## when ##0 < |x - a| < \delta##. Prove that ##\lim_{x \to a} f(x) = \lim_{x \to a} g(x)##. In other words, ##\lim_{x \to a} f(x)## depends only on the values of ##f(x)## for ##x## near ##a##.

If you're familiar with epsilon-delta proofs (##\lim_{x \to a} f(x) = L## means that for every ##\epsilon > 0## there is a ##\delta > 0## such that ##0 < |x - a| < \delta## implies ##|f(x) - L| < \epsilon##), then this is easy to see. We know that ##f(x) = g(x)## on some punctured neighborhood of ##a##, say when ##0 < |x - a| < \delta'##. Furthermore, if ##\lim_{x \to a} f(x) = L##, then given ##\epsilon > 0## there exists some ##\delta > 0## such that ##0 < |x - a| < \delta \implies |f(x) - L| < \epsilon##. Now take ##\delta'' = \min(\delta, \delta')##: whenever ##0 < |x - a| < \delta''##, we have both ##g(x) = f(x)## and ##|f(x) - L| < \epsilon##, so we can replace ##f## by ##g## in the statement: ##0 < |x - a| < \delta'' \implies |g(x) - L| < \epsilon##. Thus ##f## and ##g## both approach ##L## as ##x \to a##.

Intuitively, this just means that we only care about the neighborhoods near the point we're approaching. Since the two functions are equal everywhere except that one point, the two limits are still the same.
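Applied to the opening example (a sketch, with ##f## and ##g## as my own labels): let ##f(x) = \frac{x^4(x - 1)}{x^2}## and ##g(x) = x^2(x - 1)##. Then ##f(x) = g(x)## for every ##x \neq 0##, so the hypothesis of the Spivak problem holds at ##a = 0## with any ##\delta > 0##, and therefore $$\lim_{x \to 0} \frac{x^4(x - 1)}{x^2} = \lim_{x \to 0} x^2(x - 1) = 0^2(0 - 1) = 0.$$ This is why cancelling the x's gives the correct limit even though it changes the function's domain at the single point ##x = 0##.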