Computer Vision, Corners and Eigenvalues

LouArnold
This question is about the use of eigenvalues in a specific application. The subject is Computer Vision and the topic is the Harris corner detection method. The attached file is a PDF document of slides that shows the math in a bit more detail.

In the slides, a corner is located by looking at the image brightness gradient in a region (say, a 5x5 neighborhood of pixels) about each pixel in the image. The gradient value is given by the formula

##D(u,v) = [u\ v]\,C\,[u\ v]^T = \text{constant}##, where ##C## is the covariance matrix for the neighborhood about a given pixel.

C is then diagonalized to find its eigenvalues, and these, together with their eigenvectors, indicate the strength and direction of the brightness gradient.
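For concreteness, here is a minimal NumPy sketch of that construction (the toy image, window size, and function name are my own illustration, not from the slides): it builds C for one pixel from finite-difference gradients over a 5x5 window and reads off its eigenvalues.

```python
import numpy as np

# Toy grayscale image: a bright 10x10 square on a dark background.
I = np.zeros((20, 20))
I[5:15, 5:15] = 1.0

# Finite-difference brightness gradients (np.gradient returns the
# derivative along axis 0 (rows) first, then along axis 1 (columns)).
Iy, Ix = np.gradient(I)

def C_at(r, c, half=2):
    """Covariance matrix C of the (2*half+1) x (2*half+1) window about pixel (r, c)."""
    wx = Ix[r - half:r + half + 1, c - half:c + half + 1]
    wy = Iy[r - half:r + half + 1, c - half:c + half + 1]
    return np.array([[np.sum(wx * wx), np.sum(wx * wy)],
                     [np.sum(wx * wy), np.sum(wy * wy)]])

# Eigenvalues (ascending) at the square's top-left corner vs. a flat interior pixel.
corner_evals = np.linalg.eigvalsh(C_at(5, 5))
flat_evals = np.linalg.eigvalsh(C_at(10, 10))
```

On this toy image both eigenvalues come out clearly positive at the square's corner, while in the flat interior both are zero, since every gradient in that window vanishes.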

But I’m puzzled why eigenvalues are calculated at all. Isn’t calculating D(u,v) for each pixel enough?

I am aware that eigenvalues are the values of ##\lambda## for which the homogeneous equation ##(A - \lambda I)x = 0## has nonzero solutions, but why try to find roots at all in this situation? And what is the equation we are trying to solve in this case?
 

The directions in which the gradient variation is maximal/minimal are given by the eigenvectors of ##C##, and the magnitudes of these maxima and minima are given by the eigenvalues. Supposedly a corner is characterized by the condition that both eigenvalues are large, which is why they are computed.
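That condition can be sketched as a tiny classifier on the two eigenvalues (the threshold value and labels here are my own illustration, not part of the method's definition):

```python
import numpy as np

def classify(C, thresh=0.5):
    """Label a neighborhood from the eigenvalues of its matrix C.

    Both eigenvalues small -> flat (little brightness change in any direction).
    One large, one small   -> edge (change across one direction only).
    Both large             -> corner (strong change in two directions).
    """
    lo, hi = np.linalg.eigvalsh(C)  # returned in ascending order
    if hi < thresh:
        return "flat"
    if lo < thresh:
        return "edge"
    return "corner"

print(classify(np.array([[0.0, 0.0], [0.0, 0.0]])))    # flat
print(classify(np.array([[2.0, 0.0], [0.0, 0.1]])))    # edge
print(classify(np.array([[1.5, 0.25], [0.25, 1.5]])))  # corner
```

Note that a single value of D(u,v) cannot make this distinction: an edge and a corner can give the same D for some particular (u,v), but they differ in how D behaves over all directions, which is exactly what the eigenvalue pair summarizes.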
 
I don't think this has to do with principal component analysis, but it was a thoughtful suggestion.

I think you are right, yyat. D(u,v) calculates the change in intensity about a point, but its essential term is I(i+u, j+v) - I(i,j). Is this a difference equation, effectively a discrete derivative? Or is the derivative of this equation what we want to set to zero and solve?

In the PDF the Taylor expansion is ##I(i+u,\,j+v) \approx I(i,j) + I_x u + I_y v + \cdots## (where ##I_x = \partial I/\partial x##, ##I_y = \partial I/\partial y##, and x and y run in the i and j directions). But is this applied to the difference equation or to its derivative?
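As I read the slides, the expansion is substituted into the difference itself: summing the squared difference over the window ##W## then recovers the quadratic form, with nothing set to zero at this stage. My reconstruction, using the same symbols:

$$D(u,v)=\sum_{(i,j)\in W}\big(I(i+u,\,j+v)-I(i,j)\big)^2\approx\sum_{(i,j)\in W}\big(I_x u+I_y v\big)^2=[u\ v]\begin{bmatrix}\sum I_x^2 & \sum I_x I_y\\ \sum I_x I_y & \sum I_y^2\end{bmatrix}\begin{bmatrix}u\\ v\end{bmatrix}$$

So the entries of C are just sums of gradient products over the window, and the eigen-analysis afterwards only describes the shape of this quadratic.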

By the way, i and j represent pixel indices and have nothing to do with unit directional vectors.
 