Understanding Linear Perceptrons and their Decision Boundary in Neural Networks

Thread starter: 3.141592654
This may be a silly question, but say I have an 8x3 matrix X. Its rows are 8 vectors in R3, so they must be linearly dependent (and they span R3 whenever X has rank 3). When we find the solution to:

Xw=t

where t is an 8x1 vector whose entries are all the same scalar t. Each row of the system then gives the equation

w_{1}x_{i1}+w_{2}x_{i2}+w_{3}x_{i3} = t.
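For concreteness, here is a small sketch of the kind of system I mean (NumPy, all numbers made up; since there are 8 equations in 3 unknowns, an exact solution generally won't exist, so I use a least-squares solve):

```python
import numpy as np

# Hypothetical data: 8 points in R^3 (numbers made up for illustration).
X = np.array([
    [1.0, 2.0, 0.5],
    [0.3, 1.1, 2.2],
    [2.1, 0.4, 1.3],
    [1.5, 1.5, 0.2],
    [0.8, 2.3, 1.9],
    [2.7, 0.1, 0.6],
    [1.2, 1.8, 2.4],
    [0.5, 0.9, 1.1],
])
t = 1.0              # the common right-hand side
rhs = np.full(8, t)  # the 8x1 vector of t's

# 8 equations, 3 unknowns: overdetermined, so solve in the
# least-squares sense. lstsq also reports the rank of X.
w, residuals, rank, _ = np.linalg.lstsq(X, rhs, rcond=None)
print(w, rank)
```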

Each row, read as an equation in the unknowns w, then defines a plane in R3, correct? Does the product Xw itself form a plane? I'm learning about perceptrons, a form of artificial neural network, in which each row of data is classified as

y' = +1 or y' = -1, depending on whether w_{1}x_{i1}+w_{2}x_{i2}+w_{3}x_{i3} > t or w_{1}x_{i1}+w_{2}x_{i2}+w_{3}x_{i3} < t.

The book states that in the above situation, "The perceptron model [in the example above] is linear in its parameters w and x. Because of this, the decision boundary of a perceptron, which is obtained by setting y' = 0, is a linear hyperplane that separates the data into two classes, -1 and +1."
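To make sure I'm reading the rule correctly, here is how I'd implement it (a minimal NumPy sketch; the weights, threshold, and points are made up):

```python
import numpy as np

def perceptron_predict(X, w, t):
    """Classify each row x_i of X as +1 if w . x_i > t, else -1."""
    return np.where(X @ w > t, 1, -1)

# The decision boundary is the set {x : w . x = t}: a plane in R^3
# with normal vector w, at distance t / ||w|| from the origin.
w = np.array([1.0, -2.0, 0.5])   # illustrative weights
t = 0.3                          # illustrative threshold
X = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])
print(perceptron_predict(X, w, t))   # [ 1 -1 -1]
```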

I'm having a really hard time understanding what this quote is trying to say, because I don't see how Xw=0 "forms a hyperplane" in x-space.
 
Hey 3.141592654.

The answer will depend on the rank of the matrix.

If you row reduce the matrix and get a consistent system with n non-zero rows, then the solutions w of Xw = t form an affine subspace of R3 of dimension 3 - n.

Depending on that rank, the solution set may be a single point (n = 3), a line (n = 2), or a plane (n = 1).
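As a small illustration (NumPy, with a matrix deliberately constructed to have rank 2, numbers made up):

```python
import numpy as np

# Build an 8x3 matrix whose rows are all combinations of just two
# vectors, so the rows lie in a plane and do NOT span R^3.
rng = np.random.default_rng(0)
coeffs = rng.standard_normal((8, 2))
basis = np.array([[1.0, 0.0,  1.0],
                  [0.0, 1.0, -1.0]])
X = coeffs @ basis

print(np.linalg.matrix_rank(X))  # 2, not 3

# For a consistent system Xw = t*ones, the solution set in R^3 is an
# affine subspace of dimension 3 - rank(X): a point for rank 3,
# a line for rank 2, a plane for rank 1.
```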
 