MHB Linear System-Coset: Solve Exercise & Learn

mathmari
Hi! I am stuck on an exercise.
I am asked to describe the set of solutions of a linear system as a coset of an appropriate subspace.
Could you explain to me what I have to do?
 
mathmari said:
Hi! I am stuck on an exercise.
I am asked to describe the set of solutions of a linear system as a coset of an appropriate subspace.
Could you explain to me what I have to do?
You must already know that any system of linear equations can be expressed in the matrix form $Ax=B$, where $A$ is an $m\times n$ matrix, $B$ is an $m\times 1$ column vector, and $x$ is the unknown $n\times 1$ vector. We need to find all such $x$ which satisfy $Ax=B$. Assume that the entries of $A$ and $B$ come from a field $F$ (most likely $\mathbb R$ or $\mathbb C$).
Let $x_0$ be a solution to $Ax=B$. Let $S$ be the set of all the vectors $y$ which satisfy $Ay=0$, that is, $S=\{y:Ay=0\}$. Show that $S$ is a subspace of the vector space $F^n$. Note that $x_0+S$ is a coset of $S$ all of whose elements are solutions of $Ax=B$. Can there be any other solutions?
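As a concrete sanity check (a toy system I am adding purely for illustration, not part of the exercise), one can verify numerically that a homogeneous solution added to a particular solution still solves $Ax=B$:

```python
# Hypothetical 2x3 system Ax = B, chosen only for illustration.
A = [[1, 2, 1],
     [0, 1, 1]]
B = [4, 3]

def matvec(A, x):
    """Multiply matrix A by column vector x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

x0 = [-2, 3, 0]    # a particular solution: A x0 = B
z  = [1, -1, 1]    # a homogeneous solution: A z = 0, i.e. z is in S

print(matvec(A, x0))    # equals B
print(matvec(A, z))     # equals the zero vector
# Every element of the coset x0 + S is again a solution:
y = [a + b for a, b in zip(x0, z)]
print(matvec(A, y))     # equals B again
```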
 
So, knowing a solution $x_0$ of the system, I have to say that the set of solutions, as a coset of a subspace, is $x_0+S$?
 
mathmari said:
So, knowing a solution $x_0$ of the system, I have to say that the set of solutions, as a coset of a subspace, is $x_0+S$?

Note that $x_0+S=\{x_0+z:z\in S\}$. Yes, if $x_0$ is a solution then $x_0+S$ is the set of all the solutions. Can you prove this?
 
We know that $Ax_0=B$ and $Az=0$ for every $z \in S$.
We want to prove that $x_0+S$ is the set of all the solutions, so
$A(x_0+z)=Ax_0+Az=B+0=B$.
Is that right?
 
mathmari said:
We know that $Ax_0=B$ and $Az=0$ for every $z \in S$.
We want to prove that $x_0+S$ is the set of all the solutions, so
$A(x_0+z)=Ax_0+Az=B+0=B$.
Is that right?
Not entirely.

What you have proved is that each member of $x_0+S$ is a solution of $Ax=B$. But you have not shown that every solution of $Ax=B$ lies in $x_0+S$. Try it.
 
I'm stuck right now... I don't know how to prove this. Could you give me a hint? :o
 
Is it maybe like that:
Let $y_0$ be another solution of the system, $Ay_0=B$. Then $A(y_0-x_0)=0$, so $z=y_0-x_0$ is a solution of $Ax=0$, i.e. $z \in S$. So $y_0=x_0+z$. That means that every solution of $Ax=B$ lies in $x_0+S$?
 
mathmari said:
Is it maybe like that:
Let $y_0$ be another solution of the system, $Ay_0=B$. Then $A(y_0-x_0)=0$, so $z=y_0-x_0$ is a solution of $Ax=0$, i.e. $z \in S$. So $y_0=x_0+z$. That means that every solution of $Ax=B$ lies in $x_0+S$?
Perfect! See why I wasn't giving away the solution?
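Both directions of the argument can also be checked numerically on a toy example (a single equation I am adding for illustration; it is the same equation used in the worked example further down):

```python
# Illustrative check: any two solutions of Ax = B differ by an
# element of the null space S, so every solution lies in x0 + S.
A = [[2, 3]]
B = [4]

def matvec(A, x):
    """Multiply matrix A by column vector x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

x0 = [2, 0]     # one solution:     2*2  + 3*0 = 4
y0 = [-1, 2]    # another solution: 2*(-1) + 3*2 = 4
z = [y - x for y, x in zip(y0, x0)]   # z = y0 - x0
print(matvec(A, z))   # the zero vector: z lies in S, hence y0 = x0 + z is in x0 + S
```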
 
Great! Thank you very much! :D
 
mathmari said:
Great! Thank you very much! :D
(Yes)
 
To clarify something:
When I have to describe the set of the solutions as a coset of the solutions, it's $x_0+S$?
 
mathmari said:
To clarify something:
When I have to describe the set of the solutions as a coset of the solutions, it's $x_0+S$?
Umm.. I don't know what you mean by that. The set of all the solutions as a coset of a subspace is $x_0+S$ (symbols have meanings borrowed from previous posts.) 'Coset of solutions' is not making sense to me.

P.S. Please be more descriptive with your doubts.
 
caffeinemachine said:
Umm.. I don't know what you mean by that. The set of all the solutions as a coset of a subspace is $x_0+S$ (symbols have meanings borrowed from previous posts.) 'Coset of solutions' is not making sense to me.

P.S. Please be more descriptive with your doubts.

The set of all the solutions is described as a coset of the subspace S?
 
mathmari said:
The set of all the solutions is described as a coset of the subspace S?
Yes. $x_0+S$ is a coset of $S$.

Replace 'as' with 'is'. Was a typo. Sorry.
 
Ok! Thank you! :D
 
There is something very deep going on here.

We are used to thinking of a system of linear equations such as:

[math]Ax = b[/math]

as "something we solve for [math]x[/math]" given the matrix [math]A[/math] of coefficients, and the constant vector [math]b[/math].

In this vein, what you have just shown is sometimes expressed as:

"general solution = particular solution + homogeneous solution".

Here, the solution set of the homogeneous system [math]Ax = 0[/math] is the space [math]S[/math], and [math]x_0[/math] is some "particular" vector for which [math]Ax_0 = b[/math].

But we can look at this another way: given m equations in n unknowns, we can think of the associated matrix of coefficients [math]A[/math] as something that takes an n-vector as input, and gives an m-vector as output. In other words, a linear transformation (since matrices are linear transformations...in some sense the linear transformations).

The set of n-vectors [math]x[/math] that [math]A[/math] "kills" (maps to the 0 m-vector), the space [math]S[/math], is the null space or kernel of [math]A[/math].

If the system [math]Ax = b[/math] HAS a solution, this means that [math]b[/math] lies in the image (or range) of [math]A[/math]. In fact, we can say something stronger:

There is a 1-1 correspondence between the elements [math]b[/math] in the image of [math]A[/math], and the distinct cosets [math]x_0 + S[/math]. We can use the vector space structure of the image to induce a vector space structure on the cosets. The dimensional form of this statement is known as the Rank-Nullity theorem:

For a linear transformation [math]A[/math]:

[math]\text{dim}(\text{dom}(A)) = \text{dim}(\text{im}(A)) + \text{dim}(\text{ker}(A))[/math]

In other words (for finite-dimensional spaces), the range of A is isomorphic, as a vector space, to the quotient space [math]\text{dom}(A)/\text{ker}(A)[/math].

If we call the domain of [math]A,\ V[/math], and the kernel of [math]A,\ S[/math], we can express this more succinctly as:

[math]A(V) \cong V/S[/math].

This says that the space of POSSIBLE solutions to [math]Ax = b[/math] acts very much like [math]V[/math] (our space of n-vectors), except "shrunk down" by [math]\text{dim}(S)[/math] dimensions (since [math]A[/math] kills every n-vector in [math]S[/math]).

It turns out that the set [math]V/S[/math] of cosets of the form [math]x + S[/math] can be made into a vector space in a pretty "obvious" way:

[math](x_1 + S) + (x_2 + S) = (x_1 + x_2) + S[/math]
[math]a(x_1 + S) = ax_1 + S[/math] (for a scalar [math]a[/math], and vectors [math]x_1,x_2 \in V[/math]).

And this space is "smaller" than what we started with, so can be easier to work with.
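These quotient-space operations can be sketched in code (my own illustration, with [math]V = \Bbb R^2[/math] and [math]S[/math] the line [math]2x+3y=0[/math]): a coset is represented by any of its representatives, and two representatives name the same coset exactly when their difference lies in [math]S[/math].

```python
# Sketch of the vector-space operations on V/S, for V = R^2 and
# S = {(x, y) : 2x + 3y = 0}. Cosets are handled via representatives.

def in_S(v):
    """Membership test for the subspace S (up to floating-point tolerance)."""
    return abs(2 * v[0] + 3 * v[1]) < 1e-9

def coset_eq(u, v):
    """u + S and v + S are the same coset iff u - v lies in S."""
    return in_S((u[0] - v[0], u[1] - v[1]))

def coset_add(u, v):
    """(u + S) + (v + S) = (u + v) + S, computed on representatives."""
    return (u[0] + v[0], u[1] + v[1])

def coset_scale(a, u):
    """a(u + S) = au + S."""
    return (a * u[0], a * u[1])

# Well-definedness: changing the representative within a coset
# does not change the result of the operations.
u, u_alt = (1.0, 0.0), (-2.0, 2.0)   # same coset: difference (3, -2) is in S
v = (0.0, 1.0)
print(coset_eq(u, u_alt))                              # True
print(coset_eq(coset_add(u, v), coset_add(u_alt, v)))  # True
```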

In more concrete terms, when one solves a system of linear equations by row-reduction (to find the rank, i.e. the dimension of the range, of the system), the dimension "left over" (the number of "free variables", or parameters) is precisely the size of a basis of the null space of the system (the null space being the solution set of the associated homogeneous system).
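This rank/free-variable bookkeeping can be seen in a short row-reduction routine (toy code of my own, on a sample matrix I chose for illustration):

```python
# Exact row reduction showing rank + number of free variables = number
# of unknowns, for a sample 2x3 coefficient matrix.
from fractions import Fraction

def row_echelon(M):
    """Reduce M to reduced row-echelon form; return (form, pivot columns)."""
    M = [[Fraction(x) for x in row] for row in M]
    pivots, r = [], 0
    for c in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if pivot is None:
            continue                      # no pivot in this column: free variable
        M[r], M[pivot] = M[pivot], M[r]   # move pivot row into place
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

A = [[1, 2, 1],
     [2, 4, 3]]                # 2 equations in 3 unknowns
_, pivots = row_echelon(A)
rank = len(pivots)
free_vars = len(A[0]) - rank
print(rank, free_vars)         # rank 2, 1 free variable: the null space is a line
```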

A baby example:

Suppose we have the equation:

[math]2x + 3y = 4[/math].

The rank of this system is clearly 1. Since our domain is the Euclidean plane, our null space (of the matrix:

[math]A = \begin{bmatrix}2&3 \end{bmatrix}[/math])

is the subspace of the Euclidean plane:

[math]L = \{(x,y) \in \Bbb R^2: 2x+3y = 0\}[/math]

perhaps more clearly recognizable as the line through the origin:

[math]y = -\frac{2}{3}x[/math]

Thus the solution set of our system is the line in [math]\Bbb R^2[/math] parallel to [math]L[/math] passing through the point (2,0), that is, the line:

[math]y = -\frac{2}{3}x + \frac{4}{3}[/math].

As noted above, there is a 1-1 correspondence between the lines parallel to [math]L[/math] in the plane and the real numbers: we can just send each coset (parallel line) to twice its x-intercept:

the line [math](x_0,0) + L[/math] is the solution set to:

[math]2x + 3y = 2x_0[/math], or in perhaps more familiar form, the solution space to:

[math]2x + 3y = b[/math] is:

[math]\{(x,y) \in \Bbb R^2: (x,y) = \left(\frac{b-3t}{2},t \right) = \left(\frac{b}{2},0 \right) + t\left(-\frac{3}{2},1 \right), t \in \Bbb R\} = \left(\frac{b}{2},0\right) + L[/math]

(as one can see here, the vector (-3/2,1) forms a basis for the null space [math]L[/math]).

"Chop the plane into parallel lines, and what you get 'acts like a line' (you can use a line crossing all the parallel lines to determine WHICH parallel line you're at)".
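The parametrization above is easy to sanity-check in a few lines (my own quick check, not part of the original post):

```python
# Check that every point (b/2, 0) + t*(-3/2, 1) satisfies 2x + 3y = b,
# for several values of the parameter t and the right-hand side b.
def solution_point(b, t):
    """A point of the coset (b/2, 0) + L, with L spanned by (-3/2, 1)."""
    return (b / 2 - 1.5 * t, t)

for b in (4.0, -1.0, 0.0):
    for t in (-2.0, 0.0, 3.5):
        x, y = solution_point(b, t)
        assert abs(2 * x + 3 * y - b) < 1e-9
print("all parametrized points satisfy 2x + 3y = b")
```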
 
  • #18
Nice! Thank you very much! :D
 