Varying Variables and Differentials in Calculus Made Easy

cra18
I am currently reading Calculus Made Easy by S. P. Thompson, and the author's idea of what it means for a variable to "vary" seems fundamentally different from my own, so I was hoping someone could help me correct my understanding. Here is the excerpt I'm having trouble with:


Those [quantities] which we consider as capable of growing, or (as mathematicians say) of "varying," we denote by letters from the end of the alphabet, such as ##x, y, z, u, v, w##, or sometimes ##t## . . . Suppose we have two variables that depend on each other. An alteration in one will bring about an alteration in the other, because of this dependence. Let us call one of the variables ##x##, and the other that depends on it ##y## . . . Suppose we make ##x## to vary, that is to say, we either alter it or imagine it to be altered, by adding to it a bit which we call ##\mathrm{d}x##. We are thus causing ##x## to become ##x + \mathrm{d}x##. Then, because ##x## has been altered, ##y## will have altered also, and will have become ##y + \mathrm{d}y##.


My previous understanding is: a variable is an unspecified element of some set. So if I say that ##x## is a real number, this means that ##x## is an unspecified element of ##\mathbb{R}##. The fact that there is more than one element in ##\mathbb{R}## is the reason why ##x## is capable of "varying" --- i.e., it can potentially take any one of the values in ##\mathbb{R}##.

But ##x## varying to become ##x + \mathrm{d}x## doesn't make sense to me, ##x## being just a placeholder for an element of some set. I agree that the quantity ##x + \mathrm{d}x## is probably also an element of the same set that ##x## ranges over, but it seems more like a new variable than the result of simple variation. That is to say, it seems more appropriate to call ##x + \mathrm{d}x## the output of some underlying iterating function like ##g(x) = x + \mathrm{d}x##, instead of a new value of the original ##x##, in which case, if I define ##y = f(x)##, then
$$
y + \mathrm{d}y = f(g(x)) = f(x + \mathrm{d}x)
$$
would be the corresponding definition. Could someone explain if the above is the correct way to think about "varying" as the author describes it, and whether my concept of a variable is correct?
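For concreteness, here is one worked instance of what I mean, with the specific choice ##f(x) = x^2## (my own example, not from the book):
$$
y + \mathrm{d}y = f(g(x)) = f(x + \mathrm{d}x) = (x + \mathrm{d}x)^2 = x^2 + 2x\,\mathrm{d}x + (\mathrm{d}x)^2 ,
$$
so that, since ##y = x^2##, the increment would be ##\mathrm{d}y = 2x\,\mathrm{d}x + (\mathrm{d}x)^2##.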
 
Yes, that is quite good. Often your function ##g## is denoted by ##E##, so we have
$$E y = f(E x).$$
That book is a bit informal. You might like to be a bit careful about using ##\mathrm{d}y## like that. Where all this is headed is that we have two functions:
$$\Delta y = f(x + \mathrm{d}x) - f(x),$$
$$\mathrm{d}y = f'(x)\,\mathrm{d}x.$$
The function ##\Delta y## need not be linear, but ##\mathrm{d}y## is not only linear, it is the best linear approximation of ##\Delta y## near ##x##.
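For instance, with the same ##f(x) = x^2## used above (my example, not from the book), the two functions are
$$
\Delta y = (x + \mathrm{d}x)^2 - x^2 = 2x\,\mathrm{d}x + (\mathrm{d}x)^2 ,
\qquad
\mathrm{d}y = f'(x)\,\mathrm{d}x = 2x\,\mathrm{d}x ,
$$
and they differ only by the second-order term ##(\mathrm{d}x)^2##, which vanishes faster than ##\mathrm{d}x## as ##\mathrm{d}x \to 0##.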
 
Think of x as being controlled by a dial that one can adjust, like tuning a radio. Turning the dial makes x vary. Perhaps you are looking at the graph of the function on an oscilloscope screen and as you turn the dial, the dot traces out the curve.

If x is a continuous variable, it can vary continuously. This means it is like a dial without notches. We aren't going click, click, click to different positions; the adjustment is smooth. A discrete variable, by contrast, would be like a dial with notches, somewhat like the dial on a washing machine.

Can this be made rigorous? That is the job of set theory and real analysis, to provide continuity and smoothness using numbers.
 
Thanks for the replies; they were very helpful.
 
Actually, I am still confused about a fundamental point: I am used to plotting values of the independent variable on one axis and the dependent variable on a different axis. But when looking at a functional relationship like ##y = f(x)##, what enables us to plot both ##x## and ##x + h##, for some constant ##h##, on the same axis (like when thinking about the limit definition of the derivative)? What is the guarantee that ##x + h##, being the output of a function of ##x## and itself a dependent variable, doesn't fall outside the domain of ##f##?
 
Maybe it's time that you abandoned your previous conceptions about variables being discrete elements in some set.

In my view, if x is a particular element in a set, then there is the next element after x and the element immediately preceding x. You don't have half elements, quarter elements, etc.

The calculus is based on the notion, however un-setlike, of continuous change. If we have a quantity ##x##, then we assume that there also exists a quantity ##x + \mathrm{d}x##, and that the change, ##\mathrm{d}x##, can be made arbitrarily small. Calculus is also established on the notion that, however small the change ##\mathrm{d}x## is made, ##f(x + \mathrm{d}x)## will tend, in the limit, to ##f(x)##.
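In limit notation, that last sentence reads
$$
\lim_{\mathrm{d}x \to 0} f(x + \mathrm{d}x) = f(x),
$$
which is precisely the statement that ##f## is continuous at ##x##.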
 
cra18 said:
Actually, I am still confused about a fundamental point: I am used to plotting values of the independent variable on one axis and the dependent variable on a different axis. But when looking at a functional relationship like ##y = f(x)##, what enables us to plot both ##x## and ##x + h##, for some constant ##h##, on the same axis (like when thinking about the limit definition of the derivative)? What is the guarantee that ##x + h##, being the output of a function of ##x## and itself a dependent variable, doesn't fall outside the domain of ##f##?

I don't see how you think of ##x+h## as being the "output of a function" of ##x##. Perhaps it is the idea of calling a point on the x-axis ##x## that is bothering you. Would it help you to think of picking two numbers ##a## and ##h## and marking both ##a## and ##a+h## on the x axis? Assuming they are both in the domain of ##f##, you can talk about ##f(a+h)-f(a)## and ##\frac{f(a+h)-f(a)}{h}##. Taking the limit as ##h\to 0## gives ##f'(a)##. Since you could do this for any ##a## in the domain of ##f##, it seems simpler and less confusing to just use ##x## instead of ##a## in the first place.
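As a concrete check (my own example, not from the posts above), take ##f(x) = x^2## and any ##a## in its domain:
$$
\frac{f(a+h) - f(a)}{h} = \frac{(a+h)^2 - a^2}{h} = \frac{2ah + h^2}{h} = 2a + h \;\longrightarrow\; 2a = f'(a)
\quad \text{as } h \to 0.
$$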
 
LCKurtz said:
Perhaps it is the idea of calling a point on the x-axis ##x## that is bothering you.

This is EXACTLY my source of confusion. It seems so silly when you put it like that, though. I find it confusing to read about a variable ##x## taking the value ##x## or the value ##x + h##. When I think of a variable ##x## taking a value ##a##, I interpret that as ##x = a##, so the variable ##x## taking the value ##x + h## would then mean ##x = x + h##, which would make sense only if the equals sign were used in the sense of assignment from computer science, but that is not the sense in which it is used in mathematics.

I guess I'm having a lot of trouble distinguishing the cases where the author is talking about ##x## as a constant that can assume only a single unknown value, and where ##x## is a variable that can assume any of a set of values. In the passage from the book that I typed in the first post, I cannot tell whether ##x## is a constant or a variable.

In either case, thinking of ##x## as a variable "varying" by taking an original value ##a## and then a slightly larger value ##a + \mathrm{d}x## makes perfect sense to me, and maybe this is the author's intent.
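Written out with a base point (my own notation, following LCKurtz's suggestion), the picture I now have is
$$
x:\; a \;\longmapsto\; a + \mathrm{d}x,
\qquad
y:\; f(a) \;\longmapsto\; f(a + \mathrm{d}x),
$$
so the single variable ##x## passes through the two particular values ##a## and ##a + \mathrm{d}x##, and ##y## passes through the corresponding outputs.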
 
