How can I construct a C^{\infty} approximation to a tent function?

In summary, the thread discusses how to construct a C^{\infty} approximation to a tent function, starting from a well-known theorem guaranteeing that bounded, uniformly continuous functions admit such approximations. The discussion also touches on Taylor approximations and, ultimately, on smooth approximations to a delta function; convolving the tent function with such a bump is shown to produce the desired C-infinity approximation.
  • #1
Hurkyl
Today, I had the desire to construct a [itex]C^{\infty}[/itex] approximation to a tent function. Specifically, for any positive real number e I want a [itex]C^{\infty}[/itex] function f such that:

f(x) = 0 if |x| > 1 + e
|f(x) - g(x)| < e for all x

where g(x) is the tent function given by:

[tex]
g(x) =
\begin{cases}
0 & |x| \geq 1 \\
1 - |x| & |x| \leq 1
\end{cases}
[/tex]

I'm willing to accept on faith that such things exist, but it struck me today that I don't know how to go about constructing such a thing, or at least proving its existence.

Given time I could probably figure it out, but I'm interested in a different problem (for which I want to use this), and I imagine this is a well-known thing.

So I guess what I'm looking for is at least a "yes" or "no" answer to the existence of such a function, but a hint as to the proof would be nice too.
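For concreteness, here is a minimal Python sketch (my own addition, not part of the original post) of the tent function g together with a hypothetical helper, satisfies_requirements, that checks the two conditions above numerically on a grid; any candidate smooth f that accepts numpy arrays could be tested with it.

[code]
import numpy as np

def tent(x):
    """The tent function g: 1 - |x| on [-1, 1], zero elsewhere."""
    return np.where(np.abs(x) <= 1.0, 1.0 - np.abs(x), 0.0)

def satisfies_requirements(f, e, grid=np.linspace(-3.0, 3.0, 6001)):
    """Check (on a grid) that f vanishes for |x| > 1 + e and stays
    within e of the tent function everywhere; f must accept arrays."""
    outside = np.abs(grid) > 1.0 + e
    vanishes = np.all(f(grid[outside]) == 0.0)
    close = np.max(np.abs(f(grid) - tent(grid))) < e
    return bool(vanishes and close)

# sanity check: the tent itself meets the two numeric conditions
# (it just isn't C-infinity, which a grid test cannot detect)
print(satisfies_requirements(tent, 0.1))    # True
[/code]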
 
  • #2
I have a theorem which states:

Let [itex]g\,:\,\mathbb{R}^m\to\mathbb{R}[/itex] be a bounded, uniformly continuous function. Then

[tex](\forall\epsilon > 0)(\exists f\in C^{\infty}(\mathbb{R}^m,\,\mathbb{R}))(\forall x \in \mathbb{R}^m)(|g(x) - f(x)| < \epsilon)[/tex]


Define a function [itex]\sigma _0\,:\,x\mapsto\exp\left(\frac{-1}{x+1}\right)[/itex] for x > -1 and [itex]x\mapsto 0[/itex] otherwise. Define a function [itex]\sigma\,:\,x\mapsto\sigma_0(x)\sigma_0(-x)[/itex]. Finally, define [itex]\beta\,:\,\mathbb{R}^m\to\mathbb{R}[/itex] by:

[tex]\beta (x) = \frac{\sigma (|x|)}{\int _{|y|<1}\sigma (|y|)dy}[/tex]

By uniform continuity of g, given [itex]\epsilon > 0[/itex], you can pick [itex]\delta > 0[/itex] such that:

[tex](\forall x,z \in \mathbb{R}^m)(|x-z|<\delta \Rightarrow |g(x)-g(z)|<\epsilon)[/tex]

The function f that you want is:

[tex]f(x) = \int _{\mathbb{R}^m}g(x+\delta y)\beta (y)dy[/tex]
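Here is a minimal one-dimensional Python sketch of that recipe (my own illustration, not part of the original post; the helper names and the choice delta = epsilon, which works because the tent function is 1-Lipschitz, are assumptions I'm adding):

[code]
import numpy as np
from scipy.integrate import quad

def sigma0(x):
    # exp(-1/(x+1)) for x > -1, zero otherwise
    return np.exp(-1.0 / (x + 1.0)) if x > -1.0 else 0.0

def sigma(x):
    # smooth, even bump supported on (-1, 1)
    return sigma0(x) * sigma0(-x)

# normalising constant so that beta has integral 1 over |y| < 1
Z, _ = quad(sigma, -1.0, 1.0)

def beta(y):
    return sigma(y) / Z

def tent(x):
    return max(0.0, 1.0 - abs(x))

def smooth_approx(x, delta):
    # f(x) = integral of g(x + delta*y) * beta(y) over the support |y| < 1
    val, _ = quad(lambda y: tent(x + delta * y) * beta(y), -1.0, 1.0)
    return val

eps = 0.1                        # the tent is 1-Lipschitz, so delta = eps suffices
xs = np.linspace(-2.0, 2.0, 801)
err = max(abs(smooth_approx(x, eps) - tent(x)) for x in xs)
print(err < eps)                 # True; smooth_approx also vanishes for |x| >= 1 + eps
[/code]

The last line just spot-checks the theorem's conclusion on a grid; the support claim follows because beta vanishes for |y| >= 1.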
 
  • #3
I'm not sure if that's good enough for the problem of interest. And there has to be a typo in that, because g is essentially never used! (was it supposed to be in the final integral?)


The problem I want to solve (which I'm also sure is well-known by those who know it well -- but I actually want to enjoy this one!) is the following:

Suppose I have a "nice" space of functions R²->R, with some topology. I want to prove that the subspace spanned by the functions of the form f(x)g(y) for "nice" f and g is dense in the original space.

So, I take a "nice" function on two variables, and I want to find a sequence of functions in that subspace that converge to it.

My plan of attack was to form a polyhedral approximation to the function, where each polyhedron is the product of two line segments. I can then decompose the polyhedra into (a product of) tent functions.

Or, equivalently, I pick a (sufficiently fine) lattice of points in R², and use (sums of) (products of) tent functions to interpolate between points.
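To make the lattice idea concrete, here's a short Python sketch (my own illustration; the function names, the Gaussian test function, and the lattice spacing are all assumptions): a sum of products of one-variable tent functions centred on lattice points is exactly piecewise-bilinear interpolation.

[code]
import numpy as np

def tent(t):
    # unit tent centred at 0, supported on (-1, 1)
    return np.maximum(0.0, 1.0 - np.abs(t))

def lattice_interpolant(F, h, box=3.0):
    """Approximate F : R^2 -> R by sums of products of tent functions
    centred at the lattice points (i*h, j*h); this is piecewise
    bilinear interpolation written in 'product of tents' form."""
    nodes = np.arange(-box, box + h / 2, h)
    def approx(x, y):
        total = 0.0
        for xi in nodes:
            for yj in nodes:
                total += F(xi, yj) * tent((x - xi) / h) * tent((y - yj) / h)
        return total
    return approx

# usage sketch with a hypothetical "nice" function
F = lambda x, y: np.exp(-(x**2 + y**2))
G = lattice_interpolant(F, h=0.25)
print(abs(G(0.3, -0.7) - F(0.3, -0.7)))   # shrinks as h -> 0
[/code]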

I then want to smooth my tent functions into "nice" functions, so that I get actual elements of the subspace of interest, and then try to produce a sequence that converges to the target function in the "nice" way specified by the topology.


One example of the class of "nice" functions of interest are C-infinity functions with compact support. Another would be the "rapidly decreasing" functions: if you take any derivative of your function, and multiply by any polynomial, the result is bounded.

The approximation in your post would seem to be inadequate for these tasks.
 
  • #4
Hurkyl said:
"I'm not sure if that's good enough for the problem of interest. And there has to be a typo in that, because g is essentially never used! (was it supposed to be in the final integral?)"
Yes, there was a typo; I've fixed it. The reason was that my theorem had f given, and g was the desired function, so in rewriting it to match your notation I missed a g.

Anyways, would using a Taylor approximation help with your problem?
 
  • #5
I had thought about it, but had initially dismissed it because C-infinity functions are generally not equal to their Taylor series. The partial series are polynomials, which are distinctly not nice, due to their behavior "at infinity". I think, maybe, they'd be useful if I was only working over a subset of R².


Hrm, that does give me an idea. (Yes, the chain of reasoning did start with Taylor series. :tongue:)

Maybe I can construct a sequence of "nice" functions that approximate a delta function. I could make a slight variation to everyone's favorite non-analytic smooth function, to produce:

[tex]
f(x) :=
\begin{cases}
0 & x \leq 0 \\
e^{-1/x^2} e^{-1 / (x - 1)^2} & 0 < x < 1 \\
0 & 1 \leq x
\end{cases}
[/tex]

this would give me a C-infinity function that is zero outside of (0, 1). By playing with constants, I ought to be able to produce a sequence that would approximate a delta function, and if I convolve one with a tent function, maybe I'll get what I seek?
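A quick Python sketch of that idea (my own illustration; bump and delta_approx are names I'm introducing, not from the thread): rescale the bump onto (0, d) and normalise it, so that as d goes to 0 the unit mass concentrates near 0, i.e. the family approximates a delta function.

[code]
import numpy as np
from scipy.integrate import quad

def bump(x):
    # C-infinity, zero outside (0, 1)
    if 0.0 < x < 1.0:
        return np.exp(-1.0 / x**2) * np.exp(-1.0 / (x - 1.0)**2)
    return 0.0

def delta_approx(d):
    # squeeze the bump onto (0, d) and scale it to have integral 1
    mass, _ = quad(lambda x: bump(x / d), 0.0, d, epsabs=0.0)
    return lambda x: bump(x / d) / mass

for d in (1.0, 0.1, 0.01):
    k = delta_approx(d)
    total, _ = quad(k, 0.0, d)
    print(round(total, 6))       # each prints 1.0: unit mass on an ever smaller interval
[/code]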
 
  • #6
Never start an analysis problem after bedtime. :redface:

Yes, this idea works. If I define:

[tex]
h_d(x) :=
\begin{cases}
K_d e^{-1/(x-d)^2} e^{-1/(x+d)^2} & |x| < d \\
0 & |x| \geq d
\end{cases}
[/tex]

then I have a C-infinity function that is zero outside the compact interval [-d, d]. Furthermore, it is strictly positive in (-d, d), and the constant [itex]K_d[/itex] is chosen so that the integral over [-d, d] is equal to 1.

So as d goes to zero, this approaches a delta function, so I think it can be used for smoothing. (And yes, I think this is a similar idea to what you posted)
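A short Python sketch of the normalisation step (my own illustration; normalising_constant is a name I'm introducing), computing K_d numerically so that h_d integrates to 1:

[code]
import numpy as np
from scipy.integrate import quad

def h(x, d, K=1.0):
    # h_d(x) = K * exp(-1/(x-d)^2) * exp(-1/(x+d)^2) on (-d, d), zero outside
    if abs(x) < d:
        return K * np.exp(-1.0 / (x - d)**2) * np.exp(-1.0 / (x + d)**2)
    return 0.0

def normalising_constant(d):
    # choose K_d so that the integral of h_d over [-d, d] equals 1
    mass, _ = quad(lambda x: h(x, d), -d, d, epsabs=0.0)
    return 1.0 / mass

d = 0.5
K_d = normalising_constant(d)
total, _ = quad(lambda x: h(x, d, K_d), -d, d)
print(round(total, 6))           # 1.0
[/code]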

If I want to smooth the uniformly continuous function f(x), I can define:

[tex]
g_d(x) := \int_{-d}^{d} f(x - y) h_d(y) \, dy
[/tex]

Then, if I pick [itex]\delta[/itex] such that [itex]|x - y| < \delta \Rightarrow |f(x) - f(y)| < \epsilon[/itex], we can upper and lower bound f(x - y) for [itex]|y| < \delta[/itex], and taking [itex]d = \delta[/itex] gives:

[tex]
f(x) - \epsilon < g_{\delta}(x) < f(x) + \epsilon
[/tex]
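Here is a Python sketch of this bound for the tent function (my own illustration, not part of the original post; since the tent is 1-Lipschitz I take delta = epsilon, and the helper names are assumptions):

[code]
import numpy as np
from scipy.integrate import quad

def tent(x):
    return max(0.0, 1.0 - abs(x))

def h(y, d, K=1.0):
    # the bump h_d from above: supported on (-d, d)
    if abs(y) < d:
        return K * np.exp(-1.0 / (y - d)**2 - 1.0 / (y + d)**2)
    return 0.0

def K_of(d):
    # normalisation so that h_d has integral 1
    mass, _ = quad(lambda y: h(y, d), -d, d, epsabs=0.0)
    return 1.0 / mass

def g(x, d, K):
    # g_d(x) = integral over [-d, d] of f(x - y) h_d(y) dy, with f the tent function
    val, _ = quad(lambda y: tent(x - y) * h(y, d, K), -d, d)
    return val

eps = 0.5
delta = eps                      # valid choice because the tent is 1-Lipschitz
K = K_of(delta)
xs = np.linspace(-2.0, 2.0, 401)
err = max(abs(g(x, delta, K) - tent(x)) for x in xs)
print(err < eps)                 # True: the smoothed tent stays within eps of the tent
[/code]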

So is this C-infinity? Well, we can apply a change of variable:

[tex]
g_d(x) = \int_{x-d}^{x+d} f(z) h_d(x - z) \, dz
[/tex]

Bleh, it's been a while... I had the derivative slightly wrong at first (I was just missing factors of 1 from differentiating the limits of integration). Found the Leibniz rule at Wikipedia:

[tex]
\begin{aligned}
g_d'(x) &= f(x+d) h_d(-d) - f(x-d) h_d(d) + \int_{x-d}^{x+d} f(z) \frac{\partial}{\partial x}h_d(x - z) \, dz \\
&= \int_{x-d}^{x+d} f(z) \frac{\partial}{\partial x}h_d(x - z) \, dz
\end{aligned}
[/tex]

(the boundary terms vanish because [itex]h_d(\pm d) = 0[/itex])

and repeating,

[tex]
\left( \frac{d}{dx} \right)^n g_d(x)
= \int_{x-d}^{x+d} f(z) \left( \frac{\partial}{\partial x} \right)^n h_d(x - z) \, dz
[/tex]
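As a numerical sanity check of differentiating under the integral sign, here's a Python sketch (my own illustration; hprime is the closed-form derivative of the bump, and the normalisation constant is omitted because it cancels from both sides of the identity) comparing a central finite difference of g_d with the integral on the right-hand side:

[code]
import numpy as np
from scipy.integrate import quad

def tent(z):
    return max(0.0, 1.0 - abs(z))

def h(u, d):
    # unnormalised h_d; K_d is irrelevant for this check
    if abs(u) < d:
        return np.exp(-1.0 / (u - d)**2 - 1.0 / (u + d)**2)
    return 0.0

def hprime(u, d):
    # derivative of h on (-d, d): h(u) * (2/(u-d)^3 + 2/(u+d)^3), zero where h vanishes
    val = h(u, d)
    if val == 0.0:
        return 0.0
    return val * (2.0 / (u - d)**3 + 2.0 / (u + d)**3)

def g(x, d):
    val, _ = quad(lambda z: tent(z) * h(x - z, d), x - d, x + d, epsabs=1e-12)
    return val

d, x = 0.5, 0.3
step = 1e-4
finite_diff = (g(x + step, d) - g(x - step, d)) / (2.0 * step)
# the boundary terms drop out because h_d(-d) = h_d(d) = 0
rhs, _ = quad(lambda z: tent(z) * hprime(x - z, d), x - d, x + d, epsabs=1e-12)
print(abs(finite_diff - rhs))    # tiny (roughly 1e-7 or smaller)
[/code]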


and therefore, I can produce a smooth approximation of a tent function!

That was actually more fun than I thought it would be! :smile:
 
