
What is [itex]{\delta ^n}(f(x))[/itex] in n dimensions?

  1. Oct 26, 2012 #1
    I'm wondering what the Dirac delta of a function would be in n dimensions. What is [itex]{\delta ^n}(f(x))[/itex]?

    I understand that in 3 dimensional flat space, the Dirac delta function is
    [tex]{\delta ^3}(x,y,z) = \delta (x)\delta (y)\delta (z)[/tex]
    and
    [tex]{\delta ^3}({{\vec x}_1} - {{\vec x}_0}) = \delta ({x_1} - {x_0})\delta ({y_1} - {y_0})\delta ({z_1} - {z_0})[/tex]
    But I'm not sure what the expression would be when the 3D Dirac delta is composed with a scalar function. What is
    [tex]{\delta ^3}(f(\vec x))[/tex]

    In general [itex]f(\vec x)[/itex] is a scalar, not a vector, and I cannot break down [tex]f(\vec x) = ({f_1}(\vec x),{f_2}(\vec x),{f_3}(\vec x))[/tex]
    Even if I could, would
    [tex]{\delta ^3}(f(\vec x)) = \delta ({f_1}(\vec x)) \cdot \delta ({f_2}(\vec x)) \cdot \delta ({f_3}(\vec x))[/tex]
    Somehow, it doesn't seem to make sense to say,
    [tex]{\delta ^3}(f(\vec x)) = \delta (f(\vec x)) \cdot \delta (f(\vec x)) \cdot \delta (f(\vec x))[/tex]
    So maybe that means we have to cook up a 3D version of the Dirac delta from first principles and not try to decompose it into a product of 3 separate deltas, even in flat space.
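    For concreteness, here is a small numerical sketch (Python) of the product-of-deltas picture above: each 1D delta is replaced by a narrow Gaussian of width eps (an arbitrary regularisation, nothing canonical), and the product is integrated against a smooth test function. The grid, eps, and the test function are all my own choices for illustration.
[code]
# Hedged numerical sketch: approximate delta^3(x - x0) by a product of three
# narrow Gaussians and pair it with a smooth test function phi. The grid,
# the width eps, and phi itself are arbitrary choices for illustration only.
import numpy as np

def delta_eps(u, eps=0.08):
    return np.exp(-u**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

x = np.linspace(-2, 2, 161)
dx = x[1] - x[0]
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')

x0, y0, z0 = 0.3, -0.5, 0.7
phi = np.cos(X) * np.exp(-(Y**2 + Z**2))          # smooth test function

d3 = delta_eps(X - x0) * delta_eps(Y - y0) * delta_eps(Z - z0)
print(np.sum(d3 * phi) * dx**3)                   # ~ phi(x0, y0, z0)
print(np.cos(x0) * np.exp(-(y0**2 + z0**2)))      # exact value, for comparison
[/code]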
     
  3. Oct 26, 2012 #2
    It looks to me like you need stricter rules on what [itex]f[/itex] can be. If its range is [itex]\mathbb{R}^{3}[/itex] then I'm guessing it would be as in your first equation. Otherwise you have to look at how [itex]\delta[/itex] is defined to behave on the type of objects that [itex]f[/itex] outputs. However, I could be missing something; I am not familiar with this function.
     
  4. Oct 26, 2012 #3

    pwsnafu

    Science Advisor

    I personally have never seen that cubed form. The Dirac delta for n-space is simply defined by
    ##\int_{\mathbb{R}^n} \delta(\mathbf{x}) f(\mathbf{x}) \, d\mathbf{x} = f(\mathbf{0})##,
    which is the same as for 1-space. The notation
    ##\delta(\mathbf{x}) = \delta(x_1)\delta(x_2)\delta(x_3)##
    is very dangerous. Multiplication is not well-defined for generalised functions: ##(\delta \cdot x) \cdot \frac{1}{x} \neq \delta \cdot (x \cdot \frac{1}{x})##.

    You can't just compose any smooth function with Dirac: roughly, f needs a non-vanishing derivative (gradient) on its zero set. You can find the exact requirement on Wikipedia.
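    To make that requirement concrete in one dimension, a hedged sketch (Python; the function f, the test function, the nascent delta, and the grid are all my own toy choices, not anything from the thread): for a smooth f with simple zeros, ##\delta(f(x)) = \sum_i \delta(x - x_i)/|f'(x_i)|##, which is exactly why a vanishing derivative at a zero breaks things.
[code]
# Hedged 1D illustration of the composition rule delta(f(x)) =
# sum_i delta(x - x_i) / |f'(x_i)| over the simple zeros x_i of f.
# Everything here (f, phi, eps, the grid) is an arbitrary example.
import numpy as np

def delta_eps(u, eps=1e-3):
    return np.exp(-u**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

x = np.linspace(-3, 3, 2_000_001)
dx = x[1] - x[0]

f   = x**2 - 1          # simple zeros at x = -1, +1 with |f'(+-1)| = 2
phi = np.exp(-x**2)     # smooth, rapidly decreasing test function

lhs = np.sum(delta_eps(f) * phi) * dx             # int delta_eps(f(x)) phi(x) dx
rhs = np.exp(-1.0) / 2 + np.exp(-1.0) / 2         # sum_i phi(x_i) / |f'(x_i)|
print(lhs, rhs)                                   # agree for small eps
[/code]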
     
  5. Oct 26, 2012 #4
    But my question is, what is:
    [tex]\int_{\mathbf{R}^n} {{\delta ^n}(f(\mathbf{x}))}f'(\mathbf{x}) \,d\mathbf{x}[/tex]
     
  6. Oct 27, 2012 #5

    pwsnafu

    Science Advisor

    What you have written is not allowed. You need a test function, and the constant function ##\phi(x)=1## is not a valid test function when integrating over ##\mathbb{R}^n## (it is fine if we are only interested in a bounded subset).

    Suppose ##\phi : \mathbb{R}^3 \rightarrow \mathbb{R}## is a valid test function, and suppose f is a smooth Lipschitz function. Then
    ##\int_{\mathbb{R}^3} \delta\circ f(\mathbf{x}) \, \phi \circ f(\mathbf{x}) \, | \det J_f(\mathbf{x})| \, d\mathbf{x} = \int_{f(\mathbb{R}^3)} \delta(\mathbf{u}) \, \phi(\mathbf{u}) \, d\mathbf{u}##.
    Note: ##J_f## is the Jacobian matrix.

    Note: the only way to make what you wrote possible is if f' is the test function, that is ##f' : \mathbb{R}^3 \rightarrow \mathbb{R}##. But that is not consistent with what you want f to be.

    Edit: There is one last possibility, and that is to take ##{\delta}\circ f \cdot f'## and integrate it in the generalised-function sense. That means solving ##g' = {\delta}\circ f \cdot f'## as distributions. But I highly doubt that is what you mean.
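    As a sanity check of the change-of-variables identity above, a hedged numerical sketch (Python) under a big simplifying assumption of my own: f is an invertible *linear* map A, so ##J_f = A## everywhere and ##f(\mathbb{R}^3) = \mathbb{R}^3##. The nascent delta and the test function are arbitrary choices.
[code]
# Hedged check of: int delta(f(x)) phi(f(x)) |det J_f(x)| dx = int delta(u) phi(u) du,
# specialised (my own assumption) to f(x) = A x with A invertible, so the
# right hand side is just phi(0). Delta is approximated by a narrow 3D Gaussian.
import numpy as np

A = np.array([[2.0, 0.3, 0.0],
              [0.1, 1.5, 0.2],
              [0.0, 0.4, 1.0]])
detA = abs(np.linalg.det(A))

def delta3_eps(U, eps=0.05):
    r2 = np.sum(U**2, axis=-1)
    return np.exp(-r2 / (2 * eps**2)) / ((2 * np.pi) ** 1.5 * eps**3)

def phi(U):
    return np.cos(U[..., 0]) * np.exp(-np.sum(U**2, axis=-1))

x = np.linspace(-2, 2, 161)
dx = x[1] - x[0]
X = np.stack(np.meshgrid(x, x, x, indexing='ij'), axis=-1)   # shape (161,161,161,3)

U = X @ A.T                                                   # f(x) = A x at every grid point
lhs = np.sum(delta3_eps(U) * phi(U)) * detA * dx**3
print(lhs, phi(np.zeros(3)))                                  # both ~ phi(0) = 1
[/code]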
     
    Last edited: Oct 27, 2012
  7. Oct 27, 2012 #6
    Actually, that's where I'm going with all this. I probably did not make myself clear, so let me explain why I'm interested in this.

    I'm trying to generalize the integral of the Dirac delta function to arbitrary spaces of arbitrary dimension that also have curvature. So I start with what I know. In 1D we have
    [tex]\int_{ - \infty }^{ + \infty } {\delta (x - {x_0})dx} = 1[/tex]
    In 3D flat space this generalizes to:
    [tex]\int_R {{\delta ^3}(\vec x - {{\vec x}_0})d\vec x} = 1[/tex]
    with [itex]\vec x = ({x^1},{x^2},{x^3})[/itex]. And I assume that R is a region that spans the entire space out to infinity. In flat space this is usually written:
    [tex]\int_{ - \infty }^{ + \infty } {\int_{ - \infty }^{ + \infty } {\int_{ - \infty }^{ + \infty } {\delta ({x^1} - x_0^1)} } } \cdot \delta ({x^2} - x_0^2) \cdot \delta ({x^3} - x_0^3)\,d{x^1}d{x^2}d{x^3} = 1[/tex]
    with [itex]{\delta ^3}(\vec x - {{\vec x}_0}) = \delta ({x^1} - x_0^1) \cdot \delta ({x^2} - x_0^2) \cdot \delta ({x^3} - x_0^3)[/itex], which works fine when the input of the [itex]{\delta ^3}(...)[/itex] is vector valued, [itex]\vec x - {{\vec x}_0}[/itex]. But I'm not so sure if you can separate the Dirac delta this way when the input is not a vector, such as [itex]{\delta ^3}(f(\vec x))[/itex], with [itex]f(\vec x)[/itex] a scalar. And this may be important because more generally we have
    [tex]\vec x - {{\vec x}_0} = \int_{{{\vec x}_0}}^{\vec x} {d\vec x} [/tex]
    And when we replace the metric in the line integral for general spaces with curvature we get:
    [tex]\vec x - {{\vec x}_0} = \int_{{{\vec x}_0}}^{\vec x} {\sqrt {{g_{ij}}(\vec x)d{x^i}d{x^j}} } [/tex]
    And when we parameterize the curve with the variable, t, we get:
    [tex]f(t) = \int_{{t_0}}^t {\sqrt {{g_{ij}}(\vec x(t))\frac{{d{x^i}}}{{dt}}\frac{{d{x^j}}}{{dt}}} } \,\,\,dt[/tex]
    where the function [itex]f(t)[/itex] is used as a scalar in [itex]{\delta ^n}(f(t))[/itex] to give the distance from [itex]{{\vec x}_0}[/itex] to [itex]{\vec x}[/itex].

    Then finally, we can express the Dirac delta in curved n-dimensional space as:
    [tex]\int_{{R^n}} {{\delta ^n}(\int_{{t_0}}^t {\sqrt {{g_{ij}}(\vec x(t))\frac{{d{x^i}}}{{dt}}\frac{{d{x^j}}}{{dt}}} } \,\,\,dt) \cdot \sqrt {g(x)} d\vec x} = 1[/tex]
    where [itex]{\sqrt {g(x)} d\vec x}[/itex] is the volume form for integrating over [itex]{{R^n}}[/itex]. Notice that the [itex]{\sqrt {g(x)} }[/itex] outside the delta seems to be a derivative of the integral inside the delta. Maybe this helps. But the above equation seems to be a functional of the metric [itex]{{g_{ij}}(\vec x)}[/itex]. And I wonder if the properties of the Dirac delta constrain the metric in some way that looks familiar. Any ideas on how to proceed would be appreciated.
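    To make the scalar f(t) above concrete, here is a hedged numerical sketch (Python) that just evaluates the arc-length integral for one arbitrary choice of metric and curve. It is not a claim about how the [itex]{\delta ^n}[/itex] construction itself should be defined, only a check that f(t) is an ordinary scalar function one could feed to a (suitably defined) delta.
[code]
# Hedged sketch: numerically evaluate
#   f(t) = int_{t0}^{t} sqrt( g_ij(x(s)) dx^i/ds dx^j/ds ) ds
# for a toy 2D metric and a toy curve. Both are arbitrary choices.
import numpy as np

def g(p):
    # a simple diagonal, position-dependent metric (toy example)
    return np.array([[1.0 + p[0]**2, 0.0],
                     [0.0,           1.0 + p[1]**2]])

def curve(t):
    return np.array([np.cos(t), np.sin(t)])       # x(t): a circle

def dcurve(t):
    return np.array([-np.sin(t), np.cos(t)])      # dx/dt

def arclength(t, t0=0.0, n=2000):
    s = np.linspace(t0, t, n)
    speed = np.array([np.sqrt(dcurve(si) @ g(curve(si)) @ dcurve(si)) for si in s])
    return float(np.sum(0.5 * (speed[1:] + speed[:-1]) * np.diff(s)))   # trapezoid rule

print(arclength(np.pi / 2))    # the scalar distance f(pi/2) along this curve
[/code]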
     
  8. Oct 28, 2012 #7

    pwsnafu

    Science Advisor

    I think I have told you this before: you really need more theory under your belt. You are doing this in a very ad-hoc and (as far as I can tell) mathematically incorrect fashion. I mean ##\int_{-\infty}^\infty \delta(t) \, dt## is not equal to 1; it is simply undefined. And ##\delta(x-a)## is defined as ##\tau_{a}\delta##, where τ is the translation operator. It does not mean the metric of ℝ, so you can't replace x-a with a general metric. What you are supposed to do is work out what "translation by a" means on your manifold and use
    ##\int \delta(x-a) \, \phi(x) \, dx = \int \delta (x) \, \tau_{-a}\phi(x) \, dx##
    for some test function ##\phi##. I mean, the metric on ℝ is |x-a|, not x-a. You don't see people writing ##\delta(|x-a|)##, do you?

    As to the question of whether it is a functional of ##\sqrt{g}##, the answer is yes. But you need rapid decrease, that is, as ##|x| \rightarrow \infty## show ##x^n \sqrt{g(x)} \rightarrow 0## for all n>0. That places huge constraints on g. So I wouldn't use it as a test function.

    The only paper I know of that concerns generalised functions on manifolds is this one, but it concerns Colombeau algebras. You'll want an actual book on how it works for ℝ before tackling the paper.

    There is also this paper which directly deals with Dirac on Riemann manifolds, but I don't have access so I can't vouch for it.
     
    Last edited by a moderator: May 6, 2017
  9. Oct 28, 2012 #8
    You are absolutely correct, of course. If I thought I was a fully trained and competent mathematician, I wouldn't be asking for advice. The Dirac delta is a strange function indeed, if you can even call it a function. And I thought it best to ask around before committing too many resources.

    It's my understanding that this is exactly how one defines the Dirac delta:
    [tex]\int_{ - \infty }^{ + \infty } {\delta (x)dx} = 1[/tex]
    and that it is [itex]{\delta (x)}[/itex] that is undefined at x=0.

    Maybe there is some confusion about how various people implement the competing limiting processes involved with the Dirac delta. I think the problem is which limiting process is done first: taking the limit of the parameter, let's call it "a", that causes [itex]{\delta (x)}[/itex] to approach infinity at x=0, or taking the limiting process involved with the integration of [itex]\int_{ - \infty }^{ + \infty } {\delta (x)dx}[/itex].

    If you try to implement [itex]a \to 0[/itex] first to make [itex]\delta (x) \to \infty [/itex] at x=0, then of course the integral [itex]\int_{ - \infty }^{ + \infty } {\delta (x)dx}[/itex] is undefined because you can't evaluate [itex]\infty \cdot dx[/itex] at x=0, the integral being zero everywhere else. However, if you let "a" be large in comparison with the limiting process of the integral, then you can evaluate the integral with ease, and then take the limit as [itex]a \to 0[/itex]. At least that's the way I've always seen the Dirac delta developed. Isn't the Dirac delta supposed to be a "distribution", and aren't distributions defined so that their integral equals 1? I'm thinking in terms of a probability distribution.

    But it seems to me I remember something in analysis that addresses competing limiting processes. Maybe something like the answer should not depend on which limit you do first. Though I'm not sure about that. Do you remember what that's called?

    Metric, shmetric. If I can compose the Dirac delta function with another function, [itex]\delta (f(x))[/itex], then I can make that function f(x) a function of the metric.

    Thank you.
     
  10. Oct 28, 2012 #9

    pwsnafu

    Science Advisor

    Even trained mathematicians don't get this right, because we don't teach students how it works! It doesn't help that we teach an abuse of notation and don't explain why. We seem to have an awful track record in analysis: remember the "##\frac{dy}{dx}## is not a fraction" type discussions? Yeah, our notation is dreadful. I mean ##\int \delta(x) f(x) dx## is not even an integral; we just write it like that. If you read technical books/papers on the topic you see the notation ##(\delta, f)## to prevent people from thinking it's an integral.

    Oh, and Dirac is indeed a function, but it doesn't have a domain of ℝ. Its domain is the test function space. This is why I keep banging my head when I see people ignore test functions: it's the same as defining ##f:\mathbb{R}\rightarrow\mathbb{R}## and then writing ##f(\infty)##. It drives me up the wall! :rofl:
    Never, and even Wikipedia explicitly mentions this.
    Yeah great, but there's Schwartz distributions and probability distributions, and we don't want the latter. The measure approach requires you to actually know how Lebesgue integration works while the distributional approach requires functional analysis. Which leads us to...

    The sequence definition! To understand this you need to realise that your test functions are dense in the space of distributions. So you can use this as your definition, and this usually is what gets taught for engineering (a book by Jones popularised this approach). Here's how it works.

    Let ##S(\mathbb{R}^3)## be all smooth functions ##\phi : \mathbb{R}^3 \rightarrow \mathbb{R}## satisfying the rapid decrease property I wrote in my previous post. Take a sequence of test functions ##(f_1, f_2, \ldots)##. Suppose that, for any test function ##\phi##, the limit
    ##\lim_{n \rightarrow \infty} \int_{\mathbb{R}^3} f_n(\mathbf{x}) \, \phi(\mathbf{x}) \, d\mathbf{x}##
    exists as a (finite) real number. Then ##f = (f_1, f_2, \ldots)## is a generalised function for ##\mathbb{R}^3##. We also need a notion of equality, so we say ##f = g## if
    ##\lim_{n \rightarrow \infty} \int_{\mathbb{R}^3} f_n(\mathbf{x}) \, \phi(\mathbf{x}) \, d\mathbf{x} = \lim_{n \rightarrow \infty} \int_{\mathbb{R}^3} g_n(\mathbf{x}) \, \phi(\mathbf{x}) \, d\mathbf{x}##
    for every test function ##\phi##.

    And that's it. The point is that even in this definition you can't ignore your test functions!
    And as I said a million times, the constant is not a valid test function over ##\mathbb{R}^3##. It's fine for certain other spaces, but not this one.

    PS the limit is actually called a weak limit, which brings us back to functional analysis :wink:
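    For what it's worth, a hedged numerical illustration (Python) of the sequence definition just described: a concrete delta sequence (box functions, my own choice) paired against a rapidly decreasing test function, with the pairings converging to ##\phi(0)##.
[code]
# Hedged sketch of the sequence definition: f_n is n/2 on [-1/n, 1/n] and 0
# elsewhere (one arbitrary choice of delta sequence). For each test function
# phi the pairings converge, and the weak limit acts like Dirac.
import numpy as np

x = np.linspace(-5, 5, 2_000_001)
dx = x[1] - x[0]
phi = np.exp(-x**2) * np.cos(3 * x)            # a smooth, rapidly decreasing test function

for n in (1, 10, 100, 1000):
    f_n = np.where(np.abs(x) <= 1.0 / n, n / 2.0, 0.0)
    print(n, np.sum(f_n * phi) * dx)           # tends to phi(0) = 1
[/code]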

    Well, it depends. In order for ##\delta (f(x))## to make sense there will be constraints on f. So the question becomes: can you find an h(x) such that ##f = h \circ m## satisfies the requirements to be composable with Dirac? Here m is, of course, the Riemannian metric. I think that's what you'll need to think about. In other words, I don't think ##\delta\left(\int \cdots\right)## stuff will work. But maybe filter the integral through h(x)?
    ##\delta\left(h\left(\int \cdots\right)\right)##

    No probs. Ideally you want to find someone who specialises in generalised functions (as in actually writes journal papers on the topic). What you are attempting is worthy of at least a chapter in a PhD thesis. So expect a lot of bumps.
     
  11. Oct 28, 2012 #10
    You know, [itex]\lim_{n \rightarrow \infty} \int_{\mathbb{R}^3} f_n(\mathbf{x}) \, \phi(\mathbf{x}) \, d\mathbf{x} [/itex] looks to me a lot like the heuristic form I'm accustomed to. The [itex]{\lim _{n \to \infty }}{f_n}(x)[/itex] seems to function the same as [itex]{\lim _{a \to 0}}\frac{{{e^{ - {\textstyle{{{x^2}} \over {4a}}}}}}}{{2\sqrt {\pi a} }}[/itex]. Letting [itex]{a \to 0}[/itex] gives us a sequence of functions, [itex]f_n(x)[/itex]. This makes me wonder whether we're just talking about different notation when it comes to practical applications. I mean, in practical applications such as differential equations, does it really matter? Every place I've seen the Dirac delta being used, it's always the form that integrates to 1. It seems to me that the only ones who care are analysts. But I don't know any examples of practical use where it matters. Maybe you can help with that. If all practical applications are invariant under this change of notation, I think I'll continue to use the form I'm comfortable with, thank you very much.
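    A hedged numerical check (Python) of the two facts the heuristic in the paragraph above leans on: for every fixed a > 0 the Gaussian quoted there integrates to 1, while its value at x = 0 blows up as a → 0, which is exactly the order-of-limits issue being discussed. The grid is an arbitrary choice.
[code]
# Hedged check of the heat-kernel family quoted above: the area stays ~1 for
# each fixed a, while the pointwise value at x = 0 grows like 1/(2 sqrt(pi a)).
import numpy as np

x = np.linspace(-10, 10, 2_000_001)
dx = x[1] - x[0]

for a in (1.0, 0.1, 0.01, 0.001):
    fa = np.exp(-x**2 / (4 * a)) / (2 * np.sqrt(np.pi * a))
    print(a, np.sum(fa) * dx, fa[len(x) // 2])   # area stays ~1, peak grows without bound
[/code]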

    Yes, I figured it would be something like that.


    I'm not affiliated with any university. And this research is probably a bit too fringe to interest most professionals. But I will keep asking around. Maybe I'll eventually get what I need from people like you. Thanks.
     
  12. Oct 28, 2012 #11

    pwsnafu

    Science Advisor

    It depends. Let's take your example: how do you take the limit ##a\rightarrow0##? Well, we could have ##a_n = \frac{1}{n} \rightarrow 0##, or you could take the continuum limit. In the former you get Schwartz distributions as normal. In the latter case you get the Colombeau algebra. The former doesn't have multiplication but the latter does. If you are studying non-linear DEs then this difference becomes important. The initial change is small, but it changes whether the technique is applicable or not.

    Keep in mind that in physical applications you always have test functions: it's the error from your apparatus. Usually that'll be a Gaussian, and lo and behold, Gaussians are the prototypical test function introduced in books. Physicists and modellers can get away with dropping the test function because it models the instrument itself, and when you model with DEs you're interested in the readings from your instrument.

    But your application is geometry. There is no instrument to accurately measure the curvature of some Riemann manifold. So you need to be aware of all this.

    And moving on. Yes, it is notation. But it's dangerous notation because it lies. The monkey-patch is to write
    ##\int_{\mathbb{R}^3} \delta(\mathbf x- \mathbf a) \, d\phi,##
    turning it into a Stieltjes integral. That'll evaluate to ##\phi(\mathbf a)## and won't be equal to 1 for all ##\mathbf a\in\mathbb{R}^3##, but I'm pretty sure you could find a test function ##\phi## which makes the expression arbitrarily close to 1. (In math terms that would be: for all ##\epsilon > 0## there exists a test function ##\phi## so that the above expression evaluates to ##1-\epsilon##.) So all of your answers are correct to some error ##\epsilon>0##.
    Edit: just realised this wouldn't work. Oh well.

    Alternate approach: for any open and bounded ##U \subset \mathbb{R}^3##, it's always possible to find a ##\phi## such that
    ##\int_{\mathbb{R}^3} \delta(\mathbf x- \mathbf a) \, d\phi = 1## for all ##a\in U##.
    Then you glue/stitch the different parts together. This leads to sheaf theory. It probably results in more bookkeeping, but it is a "better" approach...for a topologist :tongue:

    Another Edit:
    I just realised how to combine the two approaches. I'll explain on ##\mathbb{R}^1##. Let ##U_1 = (-1,1)##, and let ##\phi## be a test function which is a constant 1 over ##U_1## and has rapid decrease. Then if ##a \in U_1##
    ##\int_{\mathbb{R}} \delta(x-a) \, d\phi = \phi(a) = 1##.
    Now if a is outside ##U_1##, we simply choose ##U_n = (-n,n) \ni a## and, repeating the process, we again get 1.
    On your manifold, you need to find an origin point, and choose successively larger open balls around that point.
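    Here is a hedged sketch (Python) of a cutoff of the kind described: a standard smooth bump construction (my own choice, not necessarily the one intended) that is exactly 1 on (-1, 1) and vanishes outside (-2, 2); pairing a nascent delta centred at a point a inside U_1 against it returns approximately 1.
[code]
# Hedged construction of a test function that is identically 1 on (-1, 1),
# smooth, and compactly supported (hence rapidly decreasing). Pairing a
# nascent delta centred at a in (-1, 1) against it gives ~ phi(a) = 1.
import numpy as np

def h(t):
    # smooth: 0 for t <= 0, exp(-1/t) for t > 0
    return np.where(t > 0, np.exp(-1.0 / np.maximum(t, 1e-12)), 0.0)

def cutoff(x):
    # equals 1 on |x| <= 1, equals 0 on |x| >= 2, smooth in between
    return h(2.0 - np.abs(x)) / (h(2.0 - np.abs(x)) + h(np.abs(x) - 1.0))

def delta_eps(u, eps=1e-3):
    return np.exp(-u**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

x = np.linspace(-3, 3, 1_000_001)
dx = x[1] - x[0]

a = 0.5                                               # a point inside U_1 = (-1, 1)
print(np.sum(delta_eps(x - a) * cutoff(x)) * dx)      # ~ 1
[/code]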
     
    Last edited: Oct 28, 2012
  13. Oct 31, 2012 #12
    All this leaves me wondering if the generalized function approach to the Dirac delta may be too much of a generalization. Or maybe we're not talking about the same Dirac delta. I mean, as first developed by Cauchy and Dirac, the Dirac delta function seems to simply be a generalization of the Kronecker delta, so that instead of writing
    [tex]\sum\nolimits_{i = 1}^n {{\delta _{ij}}} = 1\quad {\rm{for }}\,1 \le j \le n[/tex]
    we generalize to the continuous case by writing
    [tex]\int_{ - \infty }^{ + \infty } {\delta (x)dx} = 1[/tex]
    Then since the integral is still defined for small intervals,
    [tex]\int_{ - \varepsilon }^{ + \varepsilon } {\delta (x)dx} = 1[/tex]
    no matter how small [itex]\varepsilon [/itex] is, we have the usual
    [tex]\int_{ - \infty }^{ + \infty } {\delta (x)f(x)dx} = f(0)[/tex]
    for any f(x) sufficiently well behaved at x=0.
    Here it doesn't seem to matter whether f(x) is a distribution or not. All that matters is that the integral is doable.

    I think what's going on in distribution theory is this: since they have the delta as a distribution with compact support, they want any function integrated against it to also be a distribution with compact support, so that it makes sense to write [itex](\delta ,f)[/itex] as a generalized inner product of the same kinds of things (like vectors, or functions with compact support). This prevents them from forming [itex](\delta ,1)[/itex], since 1 is not the same kind of thing as [itex]\delta [/itex]. But all that seems to really matter in practice is that the integral is doable. So they treat the parameter [itex]\varepsilon [/itex] as large in comparison with the differentials while doing the calculus, and then make [itex]\varepsilon [/itex] approach zero if needed. Is it true that this avoids any problems with precise definitions in terms of generalized distributions?
     
  14. Nov 1, 2012 #13

    pwsnafu

    Science Advisor

    The theory of generalised functions was developed because the Dirac delta is poorly behaved: it must, inevitably, break calculus. For example, if we go with
    ##1 = \int_{-\infty}^\infty \delta(x) \, dx##
    then from the properties of the Riemann (or Lebesgue) integral
    ##\int_{-\infty}^\infty \delta(x) \, dx = \int_{-\infty}^0 \delta(x) dx + \int_{0}^0 \delta(x) dx + \int_{0}^\infty \delta(x) dx = 0+0+0##
    because the integral at a point is zero. And yes, even if ##\delta(0)=\infty##, from the definition of the integral it still evaluates to zero. And someone had a proof of Dirac breaking the product rule; IIRC they showed that the Heaviside step function is equal to ##x\delta(x)+1##, or something equally meaningless. (Hint: think about what happens when x<0.)
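    To restate the first point numerically (a hedged sketch in Python, nothing more): any spike of *finite* height supported at a single grid point contributes height × dx to a Riemann sum, which shrinks with the mesh, so no ordinary function that vanishes off a point can integrate to 1.
[code]
# Hedged illustration: a single-point spike of finite height contributes
# height * dx to a Riemann sum, which vanishes as the mesh is refined.
import numpy as np

height = 1.0e6                                # large but finite
for n_points in (10**3, 10**5, 10**7):
    x = np.linspace(-1, 1, n_points)
    dx = x[1] - x[0]
    spike = np.zeros_like(x)
    spike[n_points // 2] = height             # "infinite" at one point, zero elsewhere
    print(n_points, np.sum(spike) * dx)       # shrinks like dx, never pinned to 1
[/code]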

    The thing is, if you are willing to justify every single step you do (such as the substitution of x-x0, the change of variables inside and so on), sure go ahead. But you can't be casual because something will fail. You are doing a completely different calculus.

    PS. I don't know how you got ##\int_{-\epsilon}^\epsilon \delta(x)dx=1 \implies \int_{-\infty}^\infty \delta(x)f(x) \, dx = f(0)##.


    There are three parts to this, so I'll break it up.

    First the notation ##(\delta, f)## traces its origin to Dirac's bra-ket notation. There we write all vectors (i.e. functions) as ##|f\rangle## and all functionals as ##\langle \phi |##. The evaluation is written as ##\langle \phi | |f\rangle##, but usually we only write one pipe in the centre. Now quantum mechanics is usually on ##L^2## so every function ##|g \rangle## defines a functional ##\langle g|## through the inner product
    ##\langle g| f \rangle = IP(|g \rangle,|f\rangle)##.
    In parallel to this, people wanted to keep using the notation for non-inner-product spaces, so they wrote ##\langle \phi |f\rangle## even though there is no inner product. It's notation overload. I don't know why this changed to ##(\phi, f)## in generalised functions. Maybe so as not to be confused with bra-ket?

    Secondly, distributions with compact support are a completely different beast. Most textbooks just say this is what they are and move on. You can't do a countable sum and stay in the space; you can't take the anti-derivative and stay in the space. And so on. So for the majority of applications where you want generalised functions, they are a poor fit. I personally haven't seen any applications for this space.

    Thirdly, as to your last question (if I am understanding you correctly), my gut says yes. Take the set of all generalised functions with compact support; IIRC that set is denoted ##\mathcal{E}'(\mathbb{R})##. Now because we define it as a subset of ##\mathcal{D}'(\mathbb{R})## it'll have the same test function space. But we can enlarge the test functions to find the largest set that works for ##\mathcal{E}'##. I think that space is ##C^\infty(\mathbb{R})##, and the constant ##f(x)=1## is in that. I don't know how to prove that though. Oh, and if you go down this route, you'll need to be careful in how you manipulate Dirac: you need to stay "inside" ##\mathcal{E}'##. And I have absolutely no idea how compositions work in this space.

    What worries me is that you probably won't be able to obtain global information about g. If you could do ##\sum_{n=-\infty}^\infty \delta(x- n x_0)## you could "ping" g everywhere on your manifold. But you'll be stuck with finite pings.
    Similarly, x and ##x_0## will always need to stay "close" to each other.
    Am I making sense?
     
    Last edited: Nov 1, 2012
  15. Nov 1, 2012 #14
    I do appreciate your insights into all this.

    If I'm going to derive physics from the Dirac delta function, I'd better get all the information about it that I can. I ordered a book on the Dirac delta function from Amazon.com. I should be getting it in a few days. It'll probably give me enough information to be dangerous.

    Also, I read a 1995 paper on arxiv.org titled ON DIRAC'S DELTA CALCULUS. It tries to justify the Dirac delta in terms of virtual functions and virtual numbers, but the author did not do a good job of explaining what those virtual things are. Are you familiar with virtual functions?
     
  16. Nov 1, 2012 #15

    pwsnafu

    Science Advisor

    Haven't read that book. Do write an Amazon review for it.

    Never heard of it! :tongue:
    I'll have a read, and give my thoughts on it in a few days.

    One option you may want to look at is hyperfunction theory. The idea is that you take the complex plane and split it into the upper half plane, the real line, and the lower half plane. Then a function on the real line is the "difference" between the upper and lower halves. A hyperfunction is basically a pair of holomorphic functions, the first defined on the upper half and the second on the lower half. It has the advantage that your knowledge from complex analysis will apply, so compositions and other operations are easier. It also has the really nice property that the Dirac delta is nothing more than the basic version of Cauchy's integral formula, which (I assume) is something you know a lot about.
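    A hedged numerical sketch (Python) of that last remark: represent the delta as the difference of the boundary values of ##\frac{1}{2\pi i z}## across the real axis. The difference at height ε is the Lorentzian ##\frac{\varepsilon}{\pi(x^2+\varepsilon^2)}##, and pairing it with a test function tends to ##\phi(0)## as ε → 0. The test function and the grid are arbitrary choices.
[code]
# Hedged sketch: delta as a hyperfunction-style boundary-value difference,
# (1/(2 pi i)) * ( 1/(x - i*eps) - 1/(x + i*eps) ) = eps / (pi (x^2 + eps^2)),
# paired with an arbitrary smooth, decaying test function.
import numpy as np

x = np.linspace(-50, 50, 2_000_001)
dx = x[1] - x[0]
phi = np.exp(-x**2)                                   # test function, phi(0) = 1

for eps in (1.0, 0.1, 0.01, 0.001):
    kernel = (1.0 / (x - 1j * eps) - 1.0 / (x + 1j * eps)) / (2j * np.pi)
    print(eps, np.sum(kernel.real * phi) * dx)        # -> phi(0) = 1 as eps -> 0
[/code]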
     
  17. Nov 5, 2012 #16
    I put up a review on Amazon.com here. It seems there's a problem with the publishing process: the formulas did not print out. Kinda crazy, eh?
     