Can dp/dt Be Found When p(x) Is Inverse?

rabbed
If p is a function of x which is a function of t and you evaluate delta_p/delta_t as
delta_t goes to zero, it should be possible that delta_p/delta_t equals delta_p/dx
(or dp/dx) before reaching dp/dt.
Is it possible to find an expression for t where this happens?

Hm... maybe when t = x^-1(dx)?
Is it possible to find dp/dt for that t?
 
$$\frac{dp}{dt} = \frac{dp}{dx}\frac{dx}{dt}=
\lim_{\Delta t \to 0} \frac{p(t+\Delta t) - p(t)}{\Delta t}$$
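As a concrete sanity check (with the illustrative choice ##p(x) = x^2## and ##x(t) = 2t##, so ##p(x(t)) = 4t^2##), both routes give the same derivative:

$$\frac{dp}{dx}\frac{dx}{dt} = (2x)(2) = 8t, \qquad \frac{d}{dt}\,p(x(t)) = \frac{d}{dt}\left(4t^2\right) = 8t.$$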
 
I know, but that doesn't get me an expression of dp(x^-1(dx))/dt that can be evaluated at any t, does it?

On second thought, it should be dp(t+x^-1(dx))/dt

I'm trying to follow what happens to a 2D vector derivative when it starts to grow orthogonal to the tangent...
 
Is it possible to turn this limit expression into derivatives?

$$\lim_{\Delta x \to 0} \frac{p(x^{-1}(x+\Delta x)) - p(x^{-1}(x))}{x^{-1}(\Delta x)}$$

where ##x^{-1}(x) = t##.
 
rabbed said:
If p is a function of x which is a function of t and you evaluate delta_p/delta_t as
delta_t goes to zero, it should be possible that delta_p/delta_t equals delta_p/dx
(or dp/dx) before reaching dp/dt.
You appear to be using the intuitive idea that a "limit" involves the notion of something "approaching" something else over an interval of time or in a step-by-step fashion. If you look at the formal definition of ##\lim_{t \rightarrow a} f(t)##, you will find that the definition does not define any process taking place in time or in a sequence of steps. Since the definition of a derivative is based on the definition of limit, the definition of derivative also does not involve a process of something "approaching" something else as time passes or as a number of steps are executed. So your question doesn't have any defined meaning in mathematics, because there is no process described in the definition of derivative that would involve a "before" or "after".

If you are talking about algorithms to approximate derivatives, these often do involve a specific sequence of steps. But in order to determine if a variable in such an algorithm "reaches" a certain value before another value, you would have to say which particular algorithm you are asking about.
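For reference, the formal ##\varepsilon##–##\delta## definition in question says nothing about motion, time, or steps:

$$\lim_{t \to a} f(t) = L \quad \text{means} \quad \forall \varepsilon > 0 \ \exists \delta > 0 : 0 < |t - a| < \delta \implies |f(t) - L| < \varepsilon.$$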
 
Hi Stephen

I'm trying to increase my understanding of what happens to a parametric 2D vector when you take its derivative. Letting two points of a curve approach each other by letting the parameter difference go to zero, there should be a point where the derivative has a direction normal to the curve but has length 0, and then the length should start to grow, still having the same direction.

I'm thinking that maybe the zero-length point occurs when delta_t = x^-1(dx), and as you decrease delta_t down to dt the length starts to grow. It should make some sense, since the zero-length point should exist?
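To put the "two points approaching each other" part in symbols (writing the curve as ##F(t) = (x(t), p(x(t)))##, which is just my assumption for this thread), the limit I have in mind is

$$F'(t) = \lim_{\Delta t \to 0} \frac{F(t+\Delta t) - F(t)}{\Delta t},$$

where the difference vector ##F(t+\Delta t) - F(t)## itself shrinks to the zero vector as ##\Delta t \to 0##.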
 
Wouldn't this take you closer to that idea?

$$\lim_{\Delta x \to 0} \frac{p(x^{-1}(x+\Delta x)) - p(x^{-1}(x))}{x^{-1}(\Delta x)}$$

It seems that's called calculus of variations, if you differentiate with respect to a function?
 
rabbed said:
Hi Stephen

I'm trying to increase my understanding of what happens to a parametric 2D vector when you take its derivative.
You apparently are thinking of some algorithm or process to approximate the derivative because, as I mentioned, "taking" a derivative is not defined in terms of a process that takes place in time or in a series of steps.

Your original post didn't mention a vector. Apparently you mean a function ##F(x) = (f_1(x), f_2(x))## whose domain is a set of real numbers and whose codomain is a set of two dimensional vectors?

Letting two points of a curve approach each other by letting the parameter difference go to zero, there should be a point where the derivative
Which derivative? ##(f_1'(x), f_2'(x))##?

has a direction normal to the curve but has length 0

Why do you think that? Suppose the curve is ##F(x) = (f_1(x), f_2(x)) = (x, x+1)## with ##x(t) = t##. Where is there a point on the curve where ##(f_1'(x),f_2'(x))## is normal to the curve?
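Working that example out, for reference:

$$F'(x) = (f_1'(x), f_2'(x)) = (1, 1) \quad \text{for every } x,$$

which is parallel to the line ##y = x + 1##, so it is never normal to the curve and it never has length 0.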
 
Stephen Tashi said:
Your original post didn't mention a vector. Apparently you mean a function ##F(x) = (f_1(x), f_2(x))## whose domain is a set of real numbers and whose codomain is a set of two dimensional vectors?
I know, sorry. But this would apply to each partial derivative so I thought it would be simpler to discuss for just one variable.

Stephen Tashi said:
Why do you think that? Suppose the curve is ##F(x) = (f_1(x), f_2(x)) = (x, x+1)## with ##x(t) = t##. Where is there a point on the curve where ##(f_1'(x), f_2'(x))## is normal to the curve?
I'm thinking of a picture like the one in the answer here: http://math.stackexchange.com/quest...e-of-a-vector-orthogonal-to-the-vector-itself
delta_v would at some point become a zero vector, before starting to grow? And since the vector derivative is created by differentiating each component, it should apply to a function of a function of a single variable also?
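To be concrete about the picture I mean (assuming the linked answer uses the usual constant-length example, e.g. the unit vector ##v(t) = (\cos t, \sin t)##):

$$\Delta v = v(t+\Delta t) - v(t), \qquad |\Delta v| = 2\left|\sin\tfrac{\Delta t}{2}\right| \to 0, \qquad \frac{\Delta v}{\Delta t} \to v'(t) = (-\sin t, \cos t),$$

so ##\Delta v## shrinks to the zero vector while its direction approaches the direction orthogonal to ##v(t)##.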
 
Stephen Tashi said:
Then why don't you ask the question that the answerer answered?
Since I want to study this per component
 
rabbed said:
Since I want to study this per component

Then it isn't clear what you are asking. If you can't find the words to express your general question, try asking about a specific example.
 