Does the function f(x)=infinity(x) have a derivative?

If we simply differentiate the equation, we get infinity as the derivative. However, if we were to graph the equation, any x < 0 gives us y = infinity. How can the slope of the graph be infinity when any x < 0 gives a constant y = infinity? (Is it wrong to consider infinity as a single very large number in this case?)
 
What does "infinity(x)" even mean? Unless you're talking about the extended real numbers, infinity isn't even considered a number.
 
I meant a linear function y=ax+b with the slope a as infinity, but I guess you already answered the question.
 
x = a is a straight line with undefined slope, but it isn't a function.
 
This is probably not what you mean. But there is a way to make your "function" mathematically rigorous. It's called the Dirac delta function. Sadly, it doesn't have a derivative (to my knowledge). But you can integrate this beast.
 
micromass said:
This is probably not what you mean. But there is a way to make your "function" mathematically rigorous. It's called the Dirac delta function. Sadly, it doesn't have a derivative (to my knowledge). But you can integrate this beast.

What the OP is looking for isn't quite the Dirac delta function. The delta function has properties which a line y = ax, with a tending to infinity, doesn't have.

Also, you may know that the Dirac delta function isn't really a function, it's a distribution. In the distributional sense, it does have a derivative. The derivative, \delta'(x-x_0), has the property that

\int_S dx\, \delta'(x-x_0)f(x) = -f'(x_0)

where the set S contains the point x_0. Note that this result looks as though it were obtained by integration by parts, assuming the boundary term vanishes. That isn't quite how it is justified, but as a formal calculation it works.
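As a quick numerical sanity check (just a sketch, not a rigorous argument), you can approximate the delta by a narrow Gaussian and watch the pairing with its derivative approach -f'(x_0). The Gaussian width eps and the test function sin(x) below are purely illustrative choices:

Code:
import numpy as np

def delta_eps(x, eps):
    # Gaussian "nascent delta": integrates to 1 and concentrates at 0 as eps -> 0
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

def ddelta_eps(x, eps):
    # analytic derivative of the Gaussian approximation
    return -x / eps**2 * delta_eps(x, eps)

f, fprime = np.sin, np.cos        # test function and its derivative
x0 = 0.7
x = np.linspace(-10.0, 10.0, 400001)
dx = x[1] - x[0]

for eps in (0.5, 0.1, 0.02):
    approx = np.sum(ddelta_eps(x - x0, eps) * f(x)) * dx   # Riemann sum for the pairing
    print(eps, approx, -fprime(x0))   # approx tends to -f'(x0) = -cos(0.7)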
 
Mute said:
Also, you may know that the Dirac delta function isn't really a function, it's a distribution. In the distributional sense, it does have a derivative. The derivative, \delta'(x-x_0), has the property that

Could you explain this a bit further? Is a distribution the same as a measure, and how is differentiation of measures/distributions defined?
 
Hmm, good question. A distribution is certainly not the same as a measure. A distribution is more like a generalized function, while a measure is a generalized length/area.

That said, the Dirac delta function is intuitively a 'function' such that \delta(0)=+\infty and \delta(x)=0 for all other x, with the additional property that

\int_{-\infty}^{+\infty}{\delta(x)dx}=1

This is of course not a function at all. But physicists had many uses for this "function". Thus mathematicians set out to define the Dirac delta function rigorously. And it turns out that there are two methods for defining it. One method is as a distribution. Another is as the following measure on R:

\delta(A)=1~\text{if}~0\in A~\text{and}~\delta(A)=0~\text{otherwise}

So it is interesting that you talk about measures, since the Dirac delta function can be formalized as a certain measure...

That said, I also want to note that you can take the derivative of a measure w.r.t. another measure. This is called the Radon-Nikodym derivative, see http://en.wikipedia.org/wiki/Radon–Nikodym_theorem
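If it helps intuition, here is a toy illustration of that measure (the helper names are mine, not standard library routines): the Dirac measure assigns mass 1 to any set containing 0, and integrating a function against it just evaluates the function at 0.

Code:
def dirac_measure(A):
    # A is a set given as a membership predicate, e.g. lambda x: -1 < x < 1
    return 1 if A(0) else 0

def integrate_against_dirac(f):
    # \int f d\delta = f(0) for any measurable f
    return f(0)

print(dirac_measure(lambda x: -1 < x < 1))          # 1, since 0 lies in (-1, 1)
print(dirac_measure(lambda x: x > 2))               # 0
print(integrate_against_dirac(lambda x: x**2 + 3))  # 3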
 
micromass said:
\delta(A)=1~\text{if}~0\in A~\text{and}~\delta(A)=0~\text{otherwise}

So it is interesting that you talk about measures, since the Dirac delta function can be formalized as a certain measure...

That said, I also want to note that you can take the derivative of a measure w.r.t. another measure. This is called the Radon-Nikodym derivative, see http://en.wikipedia.org/wiki/Radon–Nikodym_theorem

Yes, this is how I know the Dirac delta function/measure. The Radon-Nikodym derivative does not exist when you differentiate with respect to the Lebesgue measure, if I'm not mistaken, so what would "the derivative of the delta function" mean in that case?
 
Well, if you see the Dirac delta function as a measure, then I suppose that it has no derivative with respect to the Lebesgue measure. So this is clearly not a very positive result. This is probably why they define the Dirac delta function as a distribution instead of as a measure, since it does have a derivative as a distribution...
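Concretely, the non-existence is an absolute continuity issue: the Radon-Nikodym theorem would need \delta to be absolutely continuous with respect to the Lebesgue measure \lambda, and it isn't, since

\lambda(\{0\})=0~\text{but}~\delta(\{0\})=1

so no density d\delta/d\lambda can exist.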
 
I read about distributions, but I don't understand some of the notation. What does, e.g., D^{(n,m)} \phi mean, given that D\phi is the derivative of \phi? The sentence I don't understand is: "For each multi-index a, the sequence of partial derivatives D^a\phi_k converges uniformly to D^a\phi."
 
I don't know what text you're reading, but I suppose it means

\frac{\partial^{n+m}\phi}{\partial x_1^n\partial x_2^m}
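For instance, here is a small SymPy sketch of that notation (the function \phi below is just a placeholder I made up):

Code:
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
phi = x1**3 * x2**2                  # an arbitrary smooth placeholder function
D_2_1 = sp.diff(phi, x1, 2, x2, 1)   # D^{(2,1)} phi = d^3 phi / dx1^2 dx2
print(D_2_1)                         # 12*x1*x2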
 
Ok, thanks. So treating delta as a distribution severely restricts the set of integrable functions. You require them to be infinitely differentiable.
 
The way distributions work is that you have a function space corresponding to a set of test functions, \phi(x), and a dual which consists of the generalized functions. The nicer your test functions, the meaner the kinds of dual functions you can deal with. For example, if your test functions are only once differentiable, then you can have the delta function in the dual space, but not the derivative of the delta function. If twice differentiable, you can have the derivative, but not the second, and so on.

With the space of test functions chosen, one defines a pairing between a test function and a dual function, using the Dirac delta as an example:

(\phi,\delta) = \phi(0)

This replaces the formal integral notation \int dx~\phi(x) \delta(x). Although for distributions the integral notation is formal, things seem to be more or less set up such that the notation suggests correct results. If the element of the dual space is a genuine function rather than a generalized one, then the integral notation reduces to an actual integral. Anyway, as long as the test functions are smooth enough, decay rapidly at infinity and have enough derivatives, we can define derivatives of the delta function as

(\phi,\delta^{(n)}) = (-1)^n \phi^{(n)}(0)

In general, for a distribution v(x) one can define a "weak derivative" u(x) by

(u,\phi) \equiv -(v,\phi').

Obviously the derivative of phi has to exist for the weak derivative to exist, etc.
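To make the bookkeeping concrete, here is a small SymPy sketch of that pairing for the delta and its derivatives (the test function and the helper name are my own choices, not anything standard):

Code:
import sympy as sp

x = sp.symbols('x')
phi = sp.exp(-x**2) * sp.sin(x)   # a placeholder rapidly decaying test function

def pair_with_delta_derivative(phi, n):
    # (phi, delta^{(n)}) = (-1)^n * phi^{(n)}(0)
    return (-1)**n * sp.diff(phi, x, n).subs(x, 0)

print(pair_with_delta_derivative(phi, 1))   # -phi'(0) = -1
print(pair_with_delta_derivative(phi, 2))   #  phi''(0) = 0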

This description comes from this book. It also references F.G. Friedlander's Introduction to the Theory of Distributions, if you're looking for more information on the topic.
 
