
## Summary:

- Clarifying the concept of RKHS

## Main Question or Discussion Point

I have been reading a lot about Reproducing Kernel Hilbert Spaces (RKHS), mainly because of their applications in machine learning. I do not have a formal background in topology; I took linear algebra as an undergrad, but I have mainly encountered ideas such as inner products, norms, vector spaces, orthogonality, and independence through their use in physics. I think I am making headway, but I want to clarify a few points before I go further.

- While in physics we generally think of an inner product as a dot product giving the angle between two vectors (with the exception of quantum mechanics, where inner products play a more abstract role in the sense of operators), it is really a more general mathematical concept defined on a vector space ##\mathbf V##: a map ##\mathbf V \times \mathbf V \rightarrow \mathbb R## (or ##\mathbb C##) satisfying conjugate symmetry, linearity in one argument, and positive definiteness.
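To make those three properties concrete, here is a small numpy sketch that checks them numerically for the ordinary dot product on ##\mathbb R^4## (this is just the standard real inner product; the random vectors are mine, not from any particular application):

```python
import numpy as np

rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 4))  # three random vectors in R^4
a, b = 2.0, -3.0                       # arbitrary scalars

# Symmetry: <u, v> = <v, u> (conjugation is trivial over the reals)
assert np.isclose(u @ v, v @ u)

# Linearity in the first argument: <a*u + b*w, v> = a<u, v> + b<w, v>
assert np.isclose((a * u + b * w) @ v, a * (u @ v) + b * (w @ v))

# Positive definiteness: <u, u> > 0 for u != 0, and <0, 0> = 0
assert u @ u > 0
assert np.isclose(np.zeros(4) @ np.zeros(4), 0.0)
```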

Now let's say I have a topological space ##\mathbb R^\infty##, and additionally a vector space ##\mathbf V \subset \mathbb R^\infty## endowed with an inner product that is defined on ##\mathbf V## but not on all of ##\mathbb R^\infty##.

Now here is where I have a little uncertainty. Since the inner product is not defined on the whole topological space, I cannot use a basis for the topological space to decompose elements of ##\mathbf V## into independent combinations of basis vectors in ##\mathbb R^\infty##. Here is where I think the "magic" happens with an RKHS. Given the inner product defined on ##\mathbf V##, I can define a unique set of spanning vectors ##\mathbf K## such that any vector ##v \in \mathbf V## can be expressed as a linear combination of the vectors in ##\mathbf K##, i.e. ##v = \alpha \mathbf K##. Furthermore, there is an orthonormal basis ##\mathbf U## for ##\mathbf V## such that ##\mathbf K = \mathbf U \mathbf U^t##, allowing me to express any vector in ##\mathbf V## as ##v = \beta \mathbf U##.
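A finite-dimensional analogue of this decomposition that I can actually compute: the Gram matrix ##K## built from a kernel on a finite set of points is symmetric positive semidefinite, so it eigendecomposes as ##K = U \Lambda U^t## with orthonormal ##U## (note the eigenvalues ##\Lambda## appear as well, not just ##U U^t##). A sketch, assuming a Gaussian (RBF) kernel with unit bandwidth as the example kernel:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 2))  # 5 sample points in R^2

# Gaussian (RBF) kernel: k(x, y) = exp(-||x - y||^2 / 2)
sq_dists = np.sum((X[:, None] - X[None, :]) ** 2, axis=-1)
K = np.exp(-sq_dists / 2)

# K is symmetric PSD, so it eigendecomposes as K = U diag(lam) U^T
# with orthonormal columns in U.
lam, U = np.linalg.eigh(K)
assert np.all(lam > -1e-10)                    # positive semidefinite
assert np.allclose(U.T @ U, np.eye(5))         # U is orthonormal
assert np.allclose(U @ np.diag(lam) @ U.T, K)  # reconstruction of K
```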

I just want to correct any misinterpretations as I go forward and figure out how this allows the "kernel trick" to work in SVMs and other ML algorithms. Any help would be great!
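For what it's worth, here is the one numerical check of the kernel trick I do understand: for the homogeneous degree-2 polynomial kernel on ##\mathbb R^2##, evaluating ##k(x,y) = (x \cdot y)^2## in input space gives the same number as the inner product of explicit feature maps ##\phi(x) = (x_1^2, \sqrt 2\, x_1 x_2, x_2^2)## (this particular kernel and feature map are just a textbook example, not specific to any SVM library):

```python
import numpy as np

def phi(x):
    # Explicit feature map for the degree-2 homogeneous polynomial
    # kernel on R^2: phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2)
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

rng = np.random.default_rng(2)
x, y = rng.standard_normal((2, 2))

k_explicit = phi(x) @ phi(y)  # inner product in feature space
k_trick = (x @ y) ** 2        # same value, computed in input space
assert np.isclose(k_explicit, k_trick)
```

The point of the trick is that the right-hand side never constructs ##\phi## at all, which matters when the feature space is high- or infinite-dimensional.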
