What is the relationship between dot products and orthogonality of functions?

In summary, the concept of orthogonality for functions can be seen as a generalization of the concept of orthogonality for vectors. While the inner product of two vectors is the familiar dot product, the inner product of two functions is an integral of their product over an interval. The main similarity is that two vectors, or two functions, are orthogonal exactly when their inner product is zero. However, it is important not to take this analogy too far and to understand the specific properties and applications of orthogonality for functions. Further resources such as books or references can help in fully understanding the concept.
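To make the summary concrete, here is a minimal numerical sketch (not from the thread) that treats ##\int_a^b f(x)\,g(x)\,dx## as the inner product and checks that ##\sin x## and ##\cos x## are orthogonal on ##[-\pi, \pi]##; the helper name inner_product and the choice of interval are illustrative only.

```python
import numpy as np

def inner_product(f, g, a, b, n=100_000):
    """Approximate <f, g> = integral of f(x) * g(x) dx over [a, b] with a midpoint Riemann sum."""
    dx = (b - a) / n
    x = np.linspace(a, b, n, endpoint=False) + dx / 2  # midpoints of n equal subintervals
    return np.sum(f(x) * g(x)) * dx

# sin and cos are orthogonal on [-pi, pi]: their inner product is (numerically) zero.
print(inner_product(np.sin, np.cos, -np.pi, np.pi))  # ~ 0
# sin is not orthogonal to itself: <sin, sin> = pi on this interval.
print(inner_product(np.sin, np.sin, -np.pi, np.pi))  # ~ 3.14159
```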
  • #1
LLT71
First of all, assume that I don't have proper math knowledge. I came across this idea while I was studying last night, so I need to verify whether it's valid, true, makes sense, etc.

Orthogonality of functions is defined like this:
https://en.wikipedia.org/wiki/Orthogonal_functions

I wanted to understand the concept a bit further, and I came across an explanation saying that the dot product of two functions is almost the same (or similar?) thing as the dot product of two vectors, but I didn't know how to visualize that multiplication of two functions "inside the integral" (I assume it's the dot product of the two functions) to understand it. I assumed that inside the integral there should be something like cos(theta), where theta is the angle between the two functions, but then I realized that when you plot graphs of functions, for example in a 2D coordinate system, the functions are usually "parallel" to one another, so cos(theta) = cos(0) = 1. I remember from physics class that any point in a coordinate system can be represented by a position vector, so I got an idea: can we just "transform" that integral like this:

Let y(x) = any function, e.g. sin(x), and g(x) = any function, e.g. cos(x); let ##\vec{a}## be the position vector of every point of y(x), and ##\vec{b}## the position vector of every point of g(x). Can we define orthogonality like this:
if the sum of all dot products of the position vectors ##\vec{a}## and ##\vec{b}##, taken at every point on some interval, is zero, then the functions that those two vectors represent are orthogonal. (sorry for the poor vector notation)

thank you!
 
  • #2
There is no angle between functions, at least not with any relevant geometric meaning.
LLT71 said:
if the sum of all dot products of the position vectors ##\vec{a}## and ##\vec{b}##, taken at every point on some interval, is zero, then the functions that those two vectors represent are orthogonal.
No.
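A quick numerical check (an editorial sketch, not part of the post above) shows why the proposed position-vector definition does not agree with the standard one: ##\sin x## and ##\cos x## are orthogonal on ##[-\pi, \pi]## under the integral inner product, yet the sum of the dot products of the graph position vectors ##(x, \sin x)## and ##(x, \cos x)## over the same interval is far from zero.

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 100_000)
dx = x[1] - x[0]

# Standard inner product of the functions: integral of sin(x) * cos(x) dx over [-pi, pi].
function_inner = np.sum(np.sin(x) * np.cos(x)) * dx

# Proposed alternative: accumulate the dot products of the position vectors
# (x, sin x) . (x, cos x) = x^2 + sin(x) cos(x) over the same interval.
position_vector_sum = np.sum(x * x + np.sin(x) * np.cos(x)) * dx

print(function_inner)       # ~ 0, so sin and cos are orthogonal in the standard sense
print(position_vector_sum)  # ~ 2*pi**3/3 ≈ 20.7, so the proposed criterion says something different
```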
 
  • #3
LLT71 said:
First of all, assume that I don't have proper math knowledge. I came across this idea while I was studying last night, so I need to verify whether it's valid, true, makes sense, etc.

You shouldn't take the definition of the inner product for functions too literally. The ordinary vectors you are familiar with satisfy a set of properties that defines them; any collection of objects satisfying those properties is known as a vector space.

It turns out that sets of functions share these properties and, in fact, many of the most important vector spaces are "function spaces". So, functions themselves can be seen as a type of generalised vector.

It's also possible to define an inner product on function spaces, and this inner product has the same properties as the normal inner product between vectors. In fact, the specific concept of "orthogonal" functions is vital to the study of function spaces and plays a very similar role to that of orthogonal vectors in ordinary vector spaces.

But, you can't take this equivalence too far. In particular, the inner product of two functions in no way equates to the inner product of a set of 2D or 3D vectors. Look at it as a generalisation of the concept of inner product/orthogonality.
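As an illustration of the analogy described above (an editorial sketch, not part of the post), the following code checks numerically that the integral inner product has the same algebraic properties as the ordinary dot product: symmetry, linearity in each argument, and positivity.

```python
import numpy as np

def inner(f, g, a=-np.pi, b=np.pi, n=200_000):
    """<f, g> = integral of f(x) g(x) dx over [a, b], approximated by a Riemann sum."""
    x = np.linspace(a, b, n, endpoint=False)
    dx = (b - a) / n
    return np.sum(f(x) * g(x)) * dx

f, g = np.sin, np.cos
h = lambda x: x**2

print(np.isclose(inner(f, g), inner(g, f)))          # symmetry: <f, g> = <g, f>
print(np.isclose(inner(f, lambda x: 2 * g(x) + h(x)),
                 2 * inner(f, g) + inner(f, h)))     # linearity: <f, 2g + h> = 2<f, g> + <f, h>
print(inner(f, f) > 0)                               # positivity: <f, f> > 0 for a nonzero f
```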
 
  • #4
LLT71 said:
First of all, assume that I don't have proper math knowledge. I came across this idea while I was studying last night, so I need to verify whether it's valid, true, makes sense, etc.

orthogonality of function is defined like this:
https://en.wikipedia.org/wiki/Orthogonal_functions

I wanted to understand the concept a bit further, and I came across an explanation saying that the dot product of two functions is almost the same (or similar?) thing as the dot product of two vectors
The wiki page in your link doesn't mention "dot product" at all, so I assume you saw this explanation of the "dot product of two functions" somewhere else.
The dot product for vectors is one of several kinds of inner product, often represented like so: ##\langle \vec{u}, \vec{v} \rangle## (for vectors in a vector space) or ##\langle f, g \rangle## (for functions in a function space).

The main similarity between the dot product and the inner product for function spaces is that if two vectors u and v are orthogonal (or perpendicular), then ##\langle \vec{u}, \vec{v} \rangle = 0##. In alternate notation, ##\vec{u} \cdot \vec{v} = 0##. If two functions f and g are orthogonal, then ##\langle f, g \rangle = 0##, with the inner product taken to mean the integral of the product of the two functions over some interval.
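For a concrete instance of the second case (an editorial worked example, with the functions and interval chosen for illustration): take ##f(x) = \sin x## and ##g(x) = \cos x## on ##[-\pi, \pi]##. Then
$$\langle f, g \rangle = \int_{-\pi}^{\pi} \sin x \cos x \, dx = \frac{1}{2}\int_{-\pi}^{\pi} \sin 2x \, dx = \left[ -\frac{\cos 2x}{4} \right]_{-\pi}^{\pi} = 0,$$
so ##\sin x## and ##\cos x## are orthogonal on this interval, just as two perpendicular vectors satisfy ##\vec{u} \cdot \vec{v} = 0##.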
 
  • #5
mfb said:
There is no angle between functions, at least not with any relevant geometric meaning. No.
This is my idea behind the "angle between functions" (denoted by theta in my picture):
http://imgur.com/vQknypW

PeroK said:
But, you can't take this equivalence too far. In particular, the inner product of two functions in no way equates to the inner product of a set of 2D or 3D vectors. Look at it as a generalisation of the concept of inner product/orthogonality.

Mark44 said:
The main similarity between the dot product and the inner product for function spaces is that if two vectors u and v are orthogonal (or perpendicular), then ##\langle \vec{u}, \vec{v} \rangle = 0##. In alternate notation, ##\vec{u} \cdot \vec{v} = 0##. If two functions f and g are orthogonal, then ##\langle f, g \rangle = 0##, with the inner product taken to mean the integral of the product of the two functions over some interval.

Thank you! Well, I thought it could be seen as some kind of "vector-function relationship", so using one approach or the other would give the same result (again, poor math knowledge led me to mixing concepts I already knew). Is there any way someone can explain that integral, or provide me with links, references, or books so I can fully understand what it says? I mean, I understand the importance of defining something like orthogonality (e.g. for signals), but why does that integral work? Why do we use it in that way? What is the idea behind it? Why do we sum (integrate) all f(x)*g(x) on some interval, and why does that surely tell me whether they are orthogonal or not? Hope I am not getting too deep into the rabbit hole...
 
  • #6
LLT71 said:
Is there any way someone can explain that integral, or provide me with links, references, or books so I can fully understand what it says? I mean, I understand the importance of defining something like orthogonality (e.g. for signals), but why does that integral work? Why do we use it in that way? What is the idea behind it? Why do we sum (integrate) all f(x)*g(x) on some interval, and why does that surely tell me whether they are orthogonal or not? Hope I am not getting too deep into the rabbit hole...

The idea was worked out by Hilbert and Schmidt:

https://en.wikipedia.org/wiki/Hilbert_space#History
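One way to see where the integral comes from (an editorial sketch, not taken from the linked history page): sample both functions at n evenly spaced points. The ordinary dot product of the two sample vectors, scaled by the spacing ##\Delta x##, is a Riemann sum for ##\int f(x) g(x)\,dx##, so the integral is the natural limit of the familiar dot product as the "vectors" acquire more and more components.

```python
import numpy as np

def sampled_dot(f, g, a, b, n):
    """Dot product of the sample vectors (f(x_1), ..., f(x_n)) and (g(x_1), ..., g(x_n)),
    scaled by the spacing dx -- i.e. a Riemann sum for the integral of f(x) * g(x)."""
    x = np.linspace(a, b, n, endpoint=False)
    dx = (b - a) / n
    return np.dot(f(x), g(x)) * dx

# As n grows, the scaled dot product settles down to the value of the integral:
# <sin, cos> -> 0 and <sin, sin> -> pi on [-pi, pi].
for n in (10, 100, 1_000, 100_000):
    print(n,
          sampled_dot(np.sin, np.cos, -np.pi, np.pi, n),
          sampled_dot(np.sin, np.sin, -np.pi, np.pi, n))
```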
 
  • #7
thanks! would you recommend some good books to get me into this topic? what should I know before I start that journey?
 
  • #8
LLT71 said:
thanks! would you recommend some good books to get me into this topic? what should I know before I start that journey?

There are some recommendations on here for "Linear Algebra". Try searching for that.
 
  • #9
PeroK said:
There are some recommendations on here for "Linear Algebra". Try searching for that.
thank you!
 
  • #10
LLT71 said:
This is my idea behind the "angle between functions" (denoted by theta in my picture):
http://imgur.com/vQknypW
This has nothing to do with the scalar product of the functions.
 
  • #11
mfb said:
This has nothing to do with the scalar product of the functions.
Now I know. I thought it was something similar to the vector dot product, i.e. f(x)*g(x)*cos[f(x), g(x)].
 
  • #12
It seems that you are about to work on physics and QM.

Perhaps this page made for this purpose will fit your needs: Functions as Vectors
 
  • #13
Igael said:
It seems that you are about to work on physics and QM.

Perhaps this page made for this purpose will fit your needs: Functions as Vectors
looks interesting, thanks for sharing!
 
  • #14
LLT71 said:
I wanted to understand the concept a bit further, and I came across an explanation saying that the dot product of two functions is almost the same (or similar?) thing as the dot product of two vectors
There are several definitions of this. Check out "Hilbert space".
 
  • #15
Svein said:
There are several definitions of this. Check out "Hilbert space".
thanks!
What is the point of giving a function "such power", i.e. saying that "a function can be described as a vector with infinitely many components"?
 
  • #16
LLT71 said:
What is the point of giving a function "such power", i.e. saying that "a function can be described as a vector with infinitely many components"?
You just described Fourier series...
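To make the Fourier-series remark concrete (an editorial sketch, with the example function ##f(x) = x## on ##[-\pi, \pi]## chosen for illustration): the "infinitely many components" of a function with respect to the orthogonal family ##\sin x, \sin 2x, \sin 3x, \dots## are its Fourier coefficients, and each one is read off with the same inner-product integral discussed above.

```python
import numpy as np

def inner(f, g, a=-np.pi, b=np.pi, n=200_000):
    """<f, g> = integral of f(x) g(x) dx over [a, b], approximated by a Riemann sum."""
    x = np.linspace(a, b, n, endpoint=False)
    dx = (b - a) / n
    return np.sum(f(x) * g(x)) * dx

f = lambda x: x  # expand f(x) = x on [-pi, pi]; being odd, it only needs the sine terms

# "Component" of f along each orthogonal basis function sin(kx):
#     b_k = <f, sin(kx)> / <sin(kx), sin(kx)>
for k in range(1, 5):
    s_k = lambda x, k=k: np.sin(k * x)
    b_k = inner(f, s_k) / inner(s_k, s_k)
    print(k, b_k)  # 2, -1, 2/3, -1/2, ... = 2 * (-1)**(k + 1) / k, the usual Fourier coefficients
```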
 
  • #17
Svein said:
You just described Fourier series...
ahhhh cool I see what you did there ;D
 

What is the concept of orthogonality in functions?

Orthogonality of functions is the analogue, for functions, of perpendicularity for vectors: two functions are orthogonal on an interval when the integral of their product over that interval is equal to zero. It does not mean that their graphs look perpendicular when plotted.

How is orthogonality of functions useful in mathematics?

Orthogonality of functions is useful in many areas of mathematics, including signal processing, differential equations, and linear algebra. Expanding a function over an orthogonal family, as in a Fourier series, lets each coefficient be computed independently, which makes many otherwise complex problems more manageable.

What are some examples of orthogonal functions?

Some examples of families of orthogonal functions include the sines and cosines (on ##[-\pi, \pi]##), the Legendre polynomials (on ##[-1, 1]##), and the Bessel functions (with respect to a suitable weight). Within each family, the integral of the product of two distinct members is zero.

Can non-orthogonal functions be converted into orthogonal functions?

Yes, a set of linearly independent but non-orthogonal functions can be converted into an orthogonal set through a process called Gram-Schmidt orthogonalization. Each function has its components along the previously constructed functions subtracted off, leaving a new set that spans the same space and is mutually orthogonal, as sketched below.
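Here is a rough illustration of Gram-Schmidt for functions (an editorial sketch; the interval ##[-1, 1]## and the starting monomials 1, x, x² are chosen for illustration). Orthogonalizing the monomials against the integral inner product yields polynomials proportional to the Legendre polynomials mentioned above.

```python
import numpy as np

x = np.linspace(-1, 1, 200_001)
dx = x[1] - x[0]

def inner(f, g):
    """<f, g> = integral of f(x) g(x) dx on [-1, 1], approximated on the sample grid."""
    return np.sum(f * g) * dx

# Start from the non-orthogonal monomials 1, x, x^2, sampled on the grid.
monomials = [np.ones_like(x), x, x**2]

# Gram-Schmidt: subtract from each function its components along the ones already built.
orthogonal = []
for f in monomials:
    for q in orthogonal:
        f = f - (inner(f, q) / inner(q, q)) * q
    orthogonal.append(f)

# The results are proportional to P0 = 1, P1 = x, P2 = (3x^2 - 1)/2 (the Legendre polynomials),
# and every pair of distinct results has (numerically) zero inner product.
for i in range(len(orthogonal)):
    for j in range(i):
        print(i, j, round(inner(orthogonal[i], orthogonal[j]), 6))
```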

How is orthogonality of functions related to linear independence?

Orthogonality and linear independence are closely related concepts. Any set of mutually orthogonal nonzero functions is automatically linearly independent: no function in the set can be written as a linear combination of the other functions in the set.
