# I Orthogonality of functions

1. Nov 1, 2016

### LLT71

First of all, please assume that I don't have proper math knowledge. I came across this idea while I was studying last night, so I need to verify whether it's valid, true, makes sense, etc.

Orthogonality of functions is defined like this:
https://en.wikipedia.org/wiki/Orthogonal_functions

I wanted to understand the concept a bit further, and I came across an explanation that says the dot product of two functions is almost the same (or a similar?) thing as the dot product of two vectors. But I didn't know how to visualize the multiplication of the two functions "inside the integral" (I assume that's the dot product of the two functions). I assumed there should be something like cos(theta) inside the integral, where theta is the angle between the two functions, but then I reached a conclusion: when you plot graphs of functions, for example in a 2D coordinate system, the functions are always/usually "parallel" to one another, so cos(theta) = cos(0) = 1. I remember from physics class that any point in a coordinate system can be represented by a position vector, so I got an idea: can we just "transform" that integral like this:

Let y(x) be any function, e.g. sin(x), and g(x) be any function, e.g. cos(x). Let a be the position vector of each point of y(x), and let b be the position vector of each point of g(x). Can we define orthogonality like this:
if the sum of all dot products of the position vectors a and b, taken at every instant over some interval, is zero, then the two functions those vectors represent are orthogonal? (Sorry for the poor vector notation.)

thank you!

2. Nov 1, 2016

### Staff: Mentor

There is no angle between functions, at least not with any relevant geometric meaning.
No.

3. Nov 1, 2016

### PeroK

You shouldn't take the definition of the inner product for functions too literally. The ordinary vectors you are familiar with have a set of properties that defines them; any set with these properties is known as a vector space.

It turns out that sets of functions share these properties and, in fact, many of the most important vector spaces are "function" spaces. So functions themselves can be seen as a type of generalised vector.

It's also possible to define an inner product on function spaces, and this inner product has the same properties as the normal inner product between vectors. And, in fact, the specific concept of "orthogonal" functions is vital to the study of function spaces and plays a very similar role to that of orthogonal vectors in normal vector spaces.

But you can't take this equivalence too far. In particular, the inner product of two functions in no way equates to the inner product of a pair of 2D or 3D vectors. Look at it as a generalisation of the concepts of inner product and orthogonality.
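To make the "functions as generalised vectors" analogy concrete, here is a minimal numerical sketch (not from the original posts): sampling each function at n points turns it into an ordinary n-component vector, and the integral $\int_a^b f(x)g(x)\,dx$ is approximated by a scaled dot product of those two sample vectors.

```python
import math

def inner_product(f, g, a, b, n=100_000):
    """Approximate <f, g> = integral of f(x)*g(x) from a to b
    with a midpoint Riemann sum.

    Sampling each function at n points turns it into an ordinary
    n-component vector; the sum below is then just the dot product
    of those two vectors, scaled by the step size dx.
    """
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) * g(a + (i + 0.5) * dx)
               for i in range(n)) * dx

# sin and cos are orthogonal on [0, 2*pi]: their inner product is ~0.
print(inner_product(math.sin, math.cos, 0, 2 * math.pi))
# sin is not orthogonal to itself: <sin, sin> = pi on [0, 2*pi].
print(inner_product(math.sin, math.sin, 0, 2 * math.pi))
```

As n grows, the scaled dot product of the sample vectors converges to the integral, which is one way to see why the integral deserves to be called an inner product at all.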

4. Nov 1, 2016

### Staff: Mentor

The wiki page in your link doesn't mention "dot product" at all, so I assume you saw this explanation of the "dot product of two functions" somewhere else.
The dot product for vectors is one of several kinds of inner product, often represented like so: $\langle \vec{u}, \vec{v} \rangle$ (for vectors in a vector space) or $\langle f, g \rangle$ (for functions in a function space).

The main similarity between the dot product and the inner product for function spaces is that if two vectors u and v are orthogonal (or perpendicular), then $\langle \vec{u}, \vec{v} \rangle = 0$; in alternate notation, $\vec{u} \cdot \vec{v} = 0$. If two functions f and g are orthogonal, then $\langle f, g \rangle = 0$, with the inner product taken to mean the integral of the product of the two functions over some interval: $\langle f, g \rangle = \int_a^b f(x)\, g(x)\, dx$.
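A quick sketch (my own illustration, using the interval $[-\pi, \pi]$ as an assumed example) of this definition in action: the family sin(kx) is pairwise orthogonal under this inner product, which is exactly the fact Fourier series are built on.

```python
import math

def ip(f, g, a=-math.pi, b=math.pi, n=200_000):
    """Midpoint-rule approximation of <f, g> = integral of f*g on [a, b]."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) * g(a + (i + 0.5) * dx)
               for i in range(n)) * dx

# sin(m x) and sin(k x) are orthogonal on [-pi, pi] whenever m != k;
# when m == k the inner product is pi (the squared "length" of sin(m x)).
for m in (1, 2, 3):
    for k in (1, 2, 3):
        val = ip(lambda x: math.sin(m * x), lambda x: math.sin(k * x))
        print(m, k, round(val, 6))
```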

5. Nov 1, 2016

### LLT71

This is my idea behind the "angle between functions" (denoted by theta in my picture):
http://imgur.com/vQknypW

Thank you! Well, I thought it could be seen as some kind of "vector-function relationship", so that using one approach or the other would give the same result (again, poor math knowledge led me to mixing concepts I already knew). Is there any way someone can explain that integral, or provide me with links, references, or books so I can fully understand what it says? I mean, I understand the importance of defining something like orthogonality (e.g. for signals), but *why* does that integral work? Why do we use it in that way? What is the idea behind it? Why does summing (integrating) f(x)*g(x) over some interval reliably tell me whether the functions are orthogonal or not? I hope I'm not getting too deep into the rabbit hole...

6. Nov 1, 2016

### PeroK

The idea was worked out by Hilbert and Schmidt:

https://en.wikipedia.org/wiki/Hilbert_space#History

7. Nov 1, 2016

8. Nov 1, 2016

### PeroK

There are some recommendations on here for "Linear Algebra". Try searching for that.

9. Nov 1, 2016

### LLT71

thank you!

10. Nov 1, 2016

### Staff: Mentor

This has nothing to do with the scalar product of the functions.

11. Nov 1, 2016

### LLT71

Now I know. I thought it was something similar to the vector dot product, i.e. f(x)*g(x)*cos[f(x), g(x)].

12. Nov 2, 2016

### Igael

Last edited by a moderator: May 8, 2017

13. Nov 2, 2016

### LLT71

looks interesting, thanks for sharing!

14. Nov 3, 2016

### Svein

There are several definitions of this. Check out "Hilbert space".

15. Nov 4, 2016

### LLT71

thanks!
What is the point of giving a function "such power", i.e. saying that "a function can be described as a vector with infinitely many components"?

16. Nov 4, 2016

### Svein

You just described Fourier series...
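To spell out the connection (my own hedged sketch, not part of the original reply): the Fourier sine coefficients of a function are literally its "components" along the orthogonal basis directions sin(kx), each one extracted with the inner-product integral, just as $v_x = \vec{v} \cdot \hat{x}$ extracts a component of an ordinary vector.

```python
import math

def sine_coeff(f, k, n=100_000):
    """k-th Fourier sine coefficient of f on [-pi, pi]:
    b_k = (1/pi) * integral of f(x) * sin(k x) dx,
    i.e. the 'component' of f along the basis 'direction' sin(k x)."""
    a, b = -math.pi, math.pi
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) * math.sin(k * (a + (i + 0.5) * dx))
               for i in range(n)) * dx / math.pi

# f(x) = sin(x) + 0.5*sin(3x) has components ~[1, 0, 0.5, 0, ...]
# in this basis, recovered purely from the inner-product integral.
f = lambda x: math.sin(x) + 0.5 * math.sin(3 * x)
print([round(sine_coeff(f, k), 6) for k in range(1, 5)])
```

This is why it is useful to treat a function as a vector with infinitely many components: the components are its coefficients with respect to an infinite orthogonal basis.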

17. Nov 4, 2016

### LLT71

ahhhh cool I see what you did there ;D