andyrk said:
No. I am not asking that. What I am asking is that when we write the limit ##\lim_{x\to a} h(g(x))##, the general intuition is (in order to get to your explanation) that as x tends towards a, g(x) tends towards L (that does not mean that g(x) ever actually reaches L). That's because that is how a limit is defined, i.e. as x approaches a number, f(x) approaches some other number, and that number is called the limit of f(x) when we apply the limit to f(x). What I am asking is: while writing the limit ##\lim_{x\to a} h(g(x))##, why don't we apply the limit to g(x), so as to say clearly that as x approaches a, g(x) approaches L (and then we can replace the limit ##\lim_{x\to a} h(g(x))## by h(L) directly)?
We don't, because my counterexample and others show that this leads to nonsense results like 0=1. As I said, the value of the expression ##\lim_{x\to a}h(g(x))## (i.e. the number this string of text represents) isn't determined by the value of ##h(g(a))##, which may not even be defined, but by the behavior of ##h\circ g## (the function that most people refer to as ##h(g(x))##) on a set that doesn't include ##a##. I'm not sure what else I can tell you.
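To make the "0=1" issue concrete, here is one counterexample of the kind referred to above (the specific choice of ##g## and ##h## is mine, for illustration). Take ##a=0##, ##g(x)=x##, and
$$h(y)=\begin{cases}1 & \text{if } y=0,\\ 0 & \text{if } y\neq 0.\end{cases}$$
Then ##L=\lim_{x\to 0}g(x)=0##, so the proposed substitution would give ##h(L)=h(0)=1##. But ##h(g(x))=h(x)=0## for every ##x\neq 0##, and the limit only depends on what happens at ##x\neq 0##, so
$$\lim_{x\to 0}h(g(x))=0\neq 1=h(L).$$
Replacing the limit by ##h(L)## would therefore "prove" that 0=1.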
andyrk said:
Is it because of the way a limit is defined? The definition of a limit is "In mathematics, a limit is the value that a function or sequence 'approaches' as the input or index approaches some value".
That's not a definition. It's a suggestion about how to think about limits. It can also be viewed as the motivation for the actual definition, which goes like this:
A real number ##L## is said to be a limit of the function ##f## at ##a## if for every ##\varepsilon>0## there's a ##\delta>0## such that the following implication holds for all ##x## in the domain of ##f##:
$$0<|x-a|<\delta\ \Rightarrow\ |f(x)-L|<\varepsilon.$$ This means the following. Suppose we plot the graph, and I draw two horizontal lines at the same distance (we call this distance ##\varepsilon##) from ##L## on the ##y## axis. Then no matter how close to ##L## I drew them, you can draw two vertical lines at the same distance (we call this distance ##\delta##) from ##a## on the ##x## axis, such that, except possibly for the single point of the graph at ##x=a##, the part of the graph that's between your two vertical lines is also between my two horizontal lines.
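As a quick worked example of how the definition is used (my own illustration, not from the discussion above): to verify that ##7## is the limit of ##f(x)=2x+1## at ##3##, let ##\varepsilon>0## be given and choose ##\delta=\varepsilon/2##. Then for all ##x##,
$$0<|x-3|<\delta\ \Rightarrow\ |f(x)-7|=|2x+1-7|=2|x-3|<2\delta=\varepsilon,$$
so the implication in the definition holds, no matter how small an ##\varepsilon## was handed to us.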
As DarthMatter told you earlier, you can think of this as a game that we're playing. To say that ##L## is a limit of ##f## at ##a## is to say that you can always win the game by drawing your vertical lines close enough to ##a##.
People often simplify the explanation to "you can make ##f(x)## arbitrarily close to ##L## by choosing ##x## close enough to ##a##". This captures part of the idea, but is inadequate as a definition.