# Reproducing Kernel Hilbert Spaces

## Summary:

Clarifying the concept of RKHS

## Main Question or Discussion Point

I have been reading a lot about Reproducing Kernel Hilbert Spaces, mainly because of their applications in machine learning. I do not have a formal background in topology; I took linear algebra as an undergrad, but have mainly encountered things such as inner products, norms, vector spaces, orthogonality, and independence through the way these ideas are used in physics. I think I am making headway, but wanted to clarify before I go further.

While in physics we generally think of an inner product as a dot product giving the angle between two vectors (with the exception of quantum mechanics, where inner products play a more abstract role in the sense of operators), it is really a more general mathematical concept defined on a vector space ##\mathbf V##: a map ##\mathbf V \times \mathbf V \rightarrow \mathbb R## (or ##\mathbb C##) with the properties of conjugate symmetry, linearity, and positive definiteness.
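Just to make sure I have the axioms straight, here is a quick numerical sanity check I wrote (my own toy vectors, nothing from the book), using the standard dot product on ##\mathbb C^2## with the convention that the inner product is conjugate-linear in the first argument:

```python
import numpy as np

def inner(u, v):
    # standard inner product on C^n; np.vdot conjugates the first argument
    return np.vdot(u, v)

u = np.array([1 + 2j, 3 - 1j])
v = np.array([0 + 1j, 2 + 0j])
w = np.array([1 - 1j, 1 + 1j])
a = 2 - 3j

# conjugate symmetry: <u, v> = conj(<v, u>)
assert np.isclose(inner(u, v), np.conj(inner(v, u)))
# linearity (here in the second argument, matching the conjugation convention)
assert np.isclose(inner(u, a * v + w), a * inner(u, v) + inner(u, w))
# positive definiteness: <u, u> is real and > 0 for u != 0
assert inner(u, u).real > 0 and np.isclose(inner(u, u).imag, 0)
```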

Now let's say I have a topological space ##\mathbb R^\infty##; additionally, I have a vector space ##\mathbf V \subset \mathbb R^\infty## endowed with an inner product that is defined on ##\mathbf V## but not on all of ##\mathbb R^\infty##.

Now here is where I have a little bit of uncertainty. Since the inner product is not defined over the whole topological space, I cannot use a basis of the topological space to decompose ##\mathbf V## into an independent combination of basis vectors in ##\mathbb R^\infty##. Here is where I think the "magic" happens with an RKHS. Given the inner product defined on ##\mathbf V##, I can define a unique set of spanning vectors ##\mathbf K## in which I can express any vector ##v \in \mathbf V## as a linear combination of the vectors in ##\mathbf K##, i.e. ##v = \alpha\mathbf K##. Furthermore, there is an orthonormal basis ##\mathbf U## of ##\mathbf V## such that ##\mathbf K = \mathbf U\mathbf U^t##, allowing me to express any vector in ##\mathbf V## as ##v = \beta\mathbf U##.
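To check my own understanding, I tried a small finite-dimensional version of this (my own toy setup, not from the book): a 2-dimensional subspace ##\mathbf V \subset \mathbb R^4## with an orthonormal basis given by the columns of ##\mathbf U##, and ##\mathbf K = \mathbf U\mathbf U^t##. The columns of ##\mathbf K## span ##\mathbf V##, and ##\mathbf K## acts as the identity on vectors in ##\mathbf V##:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2))   # columns span a 2-dim subspace V of R^4
U, _ = np.linalg.qr(A)            # orthonormal basis of V (columns of U)
K = U @ U.T                       # K = U U^T, projection onto V

v = A @ np.array([1.5, -2.0])     # an arbitrary vector in V
assert np.allclose(K @ v, v)      # K "reproduces" vectors of V
beta = U.T @ v                    # coordinates of v in the orthonormal basis
assert np.allclose(U @ beta, v)   # v = U beta
```

For a vector outside ##\mathbf V##, ##\mathbf K## instead returns its orthogonal projection onto ##\mathbf V##, which is how I currently picture the finite-dimensional analogue of the reproducing property.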

I just want to correct any misinterpretations as I go forward in figuring out how this allows the "kernel trick" to work in SVMs and other ML algorithms. Any help would be great!
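For what it's worth, here is the small example of the kernel trick I have been playing with (my own choice of kernel, not from the book): for the homogeneous quadratic kernel ##k(x, y) = (x \cdot y)^2## on ##\mathbb R^2##, the kernel value equals an ordinary inner product after an explicit feature map ##\phi## into ##\mathbb R^3##, so the kernel lets you work with ##\phi## implicitly:

```python
import numpy as np

def phi(x):
    # explicit feature map for k(x, y) = (x . y)^2 in 2 dimensions
    return np.array([x[0]**2, x[1]**2, np.sqrt(2) * x[0] * x[1]])

def k(x, y):
    return np.dot(x, y) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

# the kernel computes <phi(x), phi(y)> without ever forming phi explicitly
assert np.isclose(k(x, y), np.dot(phi(x), phi(y)))
```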

## Answers and Replies

fresh_42
Mentor
I don't quite understand you. ##\mathbb{R}^\infty## is not a good notation, since it doesn't say a lot about the kind of infinity. Then you have an inner product on ##\mathbf{V}## (which one?), but not on ##\mathbb{R}^\infty## (why not?).

Yes, you can construct an orthonormal basis for ##\mathbf{V}## (Gram-Schmidt), and usually write ##\mathbb{R}^\infty = \mathbf{V}\oplus \mathbf{V}^\perp##, but this uses orthogonality in ##\mathbb{R}^\infty##, where you say you have none.

So the true magic is a Hilbert space embedded in what is usually a Hilbert space as well, but you have stripped the inner product off the outer space, which sounds weird.

So the book I am reading is "A Primer on Reproducing Kernel Hilbert Spaces". They initially talk about extrinsic vs. intrinsic topology on a finite-dimensional space ##\mathbb R^n##, and consider ##\mathbf V \subset \mathbb R^n## endowed with an inner product. They say:
"the configuration involves three aspects, the vector space ##\mathbf V##, the orientation of ##\mathbf V## in ##\mathbb R^n## and the inner product on ##\mathbf V##. Importantly the inner product is not defined on the whole of ##\mathbb R^n## otherwise ##\mathbf V=\mathbb R^n##."

So somehow you can define an inner product on the intrinsic topology that is not defined extrinsically.

fresh_42
Mentor
I can only imagine some artificial constructions, e.g. ##\mathbb{R}^2 \subseteq \mathbb{R}^3## with the usual inner product in the plane; as long as it isn't extended to space, it will remain an inner product of only the plane. One can also extend it to space, but destroy the inner product property in the third component.
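In code, that artificial construction might look like this (a sketch with names of my own choosing): extend the plane's inner product to all of ##\mathbb{R}^3## by a bilinear form that ignores the third component. It is an inner product on the plane, but fails positive definiteness on the whole space:

```python
import numpy as np

def plane_form(u, v):
    # bilinear form on R^3 that only "sees" the first two components
    return u[0] * v[0] + u[1] * v[1]

in_plane = np.array([1.0, 2.0, 0.0])
assert plane_form(in_plane, in_plane) > 0    # positive definite on the plane

off_plane = np.array([0.0, 0.0, 1.0])        # nonzero vector in R^3 ...
assert plane_form(off_plane, off_plane) == 0 # ... on which the form vanishes
```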

I guess the author wanted to stress the fact that an embedding doesn't necessarily imply that the structures are compatible. Another example would be the square integrable functions inside the space of all functions: the usual inner product for square integrable functions doesn't work for arbitrary functions anymore. Or we could embed the square integrable functions in the space of continuous functions, where the two inner products are different.
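A numerical sketch of that first point (my own toy example): ##f(x) = 1/x## on ##(0, 1]## is a perfectly good function, but ##\int f^2## diverges, so the usual square-integrable inner product ##\langle f, f\rangle## is simply not defined for it. A crude Riemann sum over ##[a, 1]## grows without bound as ##a \to 0##:

```python
import numpy as np

def l2_norm_sq(f, a, n=200_000):
    # crude left-ish Riemann sum of f(x)^2 over [a, 1]
    x = np.linspace(a, 1.0, n)
    dx = (1.0 - a) / (n - 1)
    return (f(x) ** 2).sum() * dx

f = lambda x: 1.0 / x
for a in (1e-2, 1e-4, 1e-6):
    print(a, l2_norm_sq(f, a))   # grows without bound as a -> 0
```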