# The Orthonormality of Real-Valued Functions

• ReubenThorpe
In summary, the conversation revolves around the definition of orthonormality with real valued functions and its relation to the dot product and orthogonality. The dot product is defined as the integral of the product of two functions over a prescribed interval, and this definition also applies to functions over the whole real line. This definition is analogous to the dot product of ordinary n-dimensional vectors, where the function values correspond to components and the sum becomes an integral. The inner product satisfies the Cauchy-Bunyakovsky-Schwarz inequality, which makes it possible to define the angle between two vectors in an inner product space. The concept of projection can also be helpful in understanding this topic.
ReubenThorpe
My main question to the forum surrounds the definition of orthonormality for real-valued functions (refer to Images 1 & 2). I understand the implications of the dot product in a vector space, and that a vector is orthonormal if it is of unit length, but I don't see how this relates to functions, which are not vectors in the obvious sense, or how it can be said that, for example, a Fourier series can represent any function using an orthonormal set of trig functions under the definition in Images 1 & 2.

I understand that the orthonormality of one function with respect to another represents orthogonality in some way, and that an infinite series of mutually independent functions should be able to represent any function, but I find it hard to dissect the definition below and see how this all relates to vector spaces.

P.S. This is my first post, so sorry if it's in the wrong section or not explicit enough; tell me and I will change it =).

#### Attachments

• (2).png
• (1).png
In the context you are looking at, the dot product is defined as the integral of the product of the two functions involved. For example, when talking about Fourier series (where sin(nx) and cos(mx) are the functions involved), the integral would be over the interval (0, 2π).

A similar definition can be made for (square-integrable) functions over the whole real line.
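The orthogonality relations described here can be checked numerically. A minimal sketch (the midpoint-rule integrator, the choice of sin(2x) and cos(3x), and the tolerances are all illustrative, not part of the original discussion):

```python
import math

def inner(f, g, a, b, n=100_000):
    """Approximate <f, g> = integral of f(x) * g(x) over (a, b), midpoint rule."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * g(a + (i + 0.5) * h) for i in range(n)) * h

# sin(2x) and cos(3x) are orthogonal over (0, 2*pi):
val = inner(lambda x: math.sin(2 * x), lambda x: math.cos(3 * x), 0.0, 2 * math.pi)

# <sin(2x), sin(2x)> = pi over the same interval, so sin(2x)/sqrt(pi) has unit norm:
norm_sq = inner(lambda x: math.sin(2 * x), lambda x: math.sin(2 * x), 0.0, 2 * math.pi)

print(abs(val) < 1e-6, abs(norm_sq - math.pi) < 1e-6)  # True True
```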

This should have been posted in one of the math forums.

Sorry about not posting in the right place; as I said, I am new and will do so in the future, thanks. But back to the question at hand: I understand the definition, I just don't see how it relates to the dot product and the orthonormality of a real-valued function. Or is it simply a defined operator, not derived from anywhere? If so, how can it have any relation to the dot product and orthogonality?

The dot product is defined as the integral of the product of two functions, where the integral is defined over a prescribed interval (a,b). The functions involved are those with the property that the square of the function is integrable over the interval.

To understand it better: for ordinary n-dimensional vectors, the dot product is the sum of the products of the components. The analogy carries over to functions, where the function values over the domain correspond to components and the sum becomes an integral.
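That analogy can be made concrete: replace the components of a vector with samples of a function, and weight each product by the sample spacing so the sum becomes a Riemann sum for the integral. A rough sketch (the function names and sample counts are my own choices):

```python
import math

# Dot product of ordinary n-dimensional vectors: sum of products of components.
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# For functions, sampled values play the role of components; weighting each
# product by the spacing dx turns the sum into a Riemann sum for the integral.
def function_dot(f, g, a, b, n=100_000):
    dx = (b - a) / n
    return sum(f(a + i * dx) * g(a + i * dx) for i in range(n)) * dx

print(dot([1, 2], [3, 4]))  # 11

approx = function_dot(math.sin, math.sin, 0.0, 2 * math.pi)
print(abs(approx - math.pi) < 1e-6)  # True: <sin, sin> over (0, 2*pi) is pi
```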

http://en.wikipedia.org/wiki/Hilbert_space

The above gives a fuller description.

Excellent, thanks, you have sent me in the direction I needed. I will start work on Hilbert spaces at once (I haven't covered them yet as I'm a first-year undergrad), although I think it's rather odd to introduce Fourier analysis before them, as they are obviously involved in its proof.

I will post any conclusion I come to, and thanks again =)

P.S. Is there any way for an admin to move this to the mathematics section?

Welcome to Physics Forums, by the way! You might want to spend a few minutes browsing around the forum list to see what we have. Check out the sub-forums, too. Some of them are "buried" two or three "layers" down.

ReubenThorpe said:
Sorry about not posting in the right place; as I said, I am new and will do so in the future, thanks. But back to the question at hand: I understand the definition, I just don't see how it relates to the dot product and the orthonormality of a real-valued function. Or is it simply a defined operator, not derived from anywhere? If so, how can it have any relation to the dot product and orthogonality?
Look at the definition of inner product. A vector space with an inner product is called an inner product space. A Hilbert space is an inner product space that satisfies an additional requirement called "completeness". You do not need to understand completeness at this point, so you don't have to worry about Hilbert spaces (yet). The answer to the question of "how it relates to the dot product" is that the dot product and the product you asked about are both examples of inner products. x and y are said to be orthogonal if their inner product <x,y> is 0. This is just the definition of "orthogonal".

It's also useful to know this: The dot product on ##\mathbb R^3## satisfies ##x\cdot y=|x|\,|y|\cos\theta##, where θ is the angle between x and y. This implies that ##|x\cdot y|\leq|x|\,|y|##. Every inner product satisfies the Cauchy-Bunyakovsky-Schwarz inequality, ##|\langle x,y\rangle|\leq\|x\|\|y\|##. If we're dealing with an inner product space over ℝ, then this means that
$$-1\leq\frac{\langle x,y\rangle}{\|x\|\,\|y\|}\leq 1.$$ So it makes sense to define the angle between x and y by
$$\cos\theta=\frac{\langle x,y\rangle}{\|x\|\,\|y\|}.$$
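This angle formula applies to functions just as well as to arrows in space. A small numerical sketch (the integrator and the example functions f(x) = x, g(x) = x² on (0, 1) are my own choices):

```python
import math

def inner(f, g, a, b, n=100_000):
    """Approximate <f, g> over (a, b) with the midpoint rule."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * g(a + (i + 0.5) * h) for i in range(n)) * h

def angle(f, g, a, b):
    """Angle theta with cos(theta) = <f,g> / (||f|| ||g||); CBS keeps the ratio in [-1, 1]."""
    c = inner(f, g, a, b) / math.sqrt(inner(f, f, a, b) * inner(g, g, a, b))
    return math.acos(max(-1.0, min(1.0, c)))  # clamp against rounding error

# f(x) = x and g(x) = x^2 on (0, 1): <f,g> = 1/4, ||f||^2 = 1/3, ||g||^2 = 1/5,
# so cos(theta) = (1/4) / sqrt(1/15), an angle of roughly 14.5 degrees.
theta = angle(lambda x: x, lambda x: x * x, 0.0, 1.0)
```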

ReubenThorpe said:
My main question to the forum surrounds the definition of orthonormality for real-valued functions (refer to Images 1 & 2). I understand the implications of the dot product in a vector space, and that a vector is orthonormal if it is of unit length, but I don't see how this relates to functions, which are not vectors in the obvious sense, or how it can be said that, for example, a Fourier series can represent any function using an orthonormal set of trig functions under the definition in Images 1 & 2.

I understand that the orthonormality of one function with respect to another represents orthogonality in some way, and that an infinite series of mutually independent functions should be able to represent any function, but I find it hard to dissect the definition below and see how this all relates to vector spaces.

P.S. This is my first post, so sorry if it's in the wrong section or not explicit enough; tell me and I will change it =).

Hey ReubenThorpe and welcome to the forums.

It might be easier for you to think in terms of a projection.

In other words, if we have a vector X = <x, y, z>, then we can write X = a e_0 + b e_1 + c e_2, where a = <X, e_0>, b = <X, e_1>, c = <X, e_2>, and <e_i, e_j> = δ_ij (the Kronecker delta).

The inner product defined by an integral can be thought of in exactly the same way. Instead of a finite sum we are dealing with something a little more complicated, but the coefficients it produces still define a projection that obeys the inner product laws as well as the projection laws. This matters because applying a projection twice (or more) leaves the result unchanged: P² = P, where P is a projection operator.

So instead of our basis vectors being finite vectors, they look more like "infinite vectors", but the idea is exactly the same: each basis vector must be orthogonal to the others, so the projection onto one basis vector is completely separate from the projection onto another. If we take the projections onto all the basis vectors and write our input as a linear combination of the orthogonal basis vectors, we get an expression for our original "infinite-dimensional vector" in terms of that particular basis.
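The projection idea, including the idempotence P² = P mentioned above, can be sketched in the finite-dimensional case (the basis and vector below are just an example):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project(x, basis):
    """Sum of <x, e_i> e_i over an orthonormal list of basis vectors."""
    out = [0.0] * len(x)
    for e in basis:
        c = dot(x, e)
        out = [o + c * ei for o, ei in zip(out, e)]
    return out

# Orthonormal basis of the xy-plane inside R^3:
basis = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
x = (3.0, -2.0, 5.0)

p = project(x, basis)    # the z-component is dropped: [3.0, -2.0, 0.0]
pp = project(p, basis)   # projecting again changes nothing: P^2 = P
print(p == pp)  # True
```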

## 1. What does it mean for functions to be "orthonormal"?

Orthonormality of functions refers to a set of functions that are both orthogonal and normalized. Orthogonality means that the inner product (or dot product) of any two distinct functions in the set is equal to zero, while normalization means that the length, or magnitude, of each function in the set is equal to one.

## 2. How is orthonormality useful in mathematics and science?

Orthonormal functions are very useful in many areas of mathematics and science, particularly in linear algebra, Fourier analysis, and quantum mechanics. They allow for simplification of mathematical calculations and are often used as a basis for representing more complex functions.

## 3. Can real valued functions be orthonormal?

Yes, real-valued functions can be orthonormal. In fact, many commonly used orthonormal functions, such as suitably normalized sine and cosine functions, are real-valued. However, it is important to note that not every set of real-valued functions is orthonormal; the set must meet the criteria above.

## 4. How do you determine if a set of real valued functions is orthonormal?

To determine whether a set of real-valued functions is orthonormal, first check that they are orthogonal by computing the inner product of each pair of functions: the inner product of every pair of distinct functions must be zero. Next, check that they are normalized by computing the magnitude of each function: each must equal one. If both conditions hold, the set of functions is orthonormal.
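The procedure above amounts to checking that the Gram matrix of the set (the matrix of all pairwise inner products) is the identity. A minimal numerical sketch (the interval (0, 2π), the midpoint-rule integrator, and the particular trig functions are illustrative):

```python
import math

def inner(f, g, n=50_000):
    """Approximate <f, g> over (0, 2*pi) with the midpoint rule."""
    a, b = 0.0, 2.0 * math.pi
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * g(a + (i + 0.5) * h) for i in range(n)) * h

# Candidate set, each normalized by 1/sqrt(pi) so its norm is one:
fs = [
    lambda x: math.sin(x) / math.sqrt(math.pi),
    lambda x: math.cos(x) / math.sqrt(math.pi),
    lambda x: math.sin(2 * x) / math.sqrt(math.pi),
]

# Gram matrix: entry (i, j) should equal the Kronecker delta.
gram = [[inner(fi, fj) for fj in fs] for fi in fs]
ok = all(abs(gram[i][j] - (1.0 if i == j else 0.0)) < 1e-6
         for i in range(3) for j in range(3))
print(ok)  # True
```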

## 5. Are orthonormal functions unique?

No, orthonormal sets of functions are not unique. There can be multiple sets of functions that are orthonormal, and different sets may be more suitable for different applications. For example, applying a rotation (an orthogonal transformation) to an orthonormal set yields another orthonormal set; note, however, that rescaling a function by anything other than ±1 would destroy its normalization. Therefore, there are many possible orthonormal sets that can be used in mathematical and scientific contexts.
