Why Does a Subset of a Vector Space Need the Zero Vector to Be a Subspace?

torquerotates
I am curious as to why a subset of a vector space V must have the vector space V's zero vector as the subset's zero vector in order to be a subspace. It's just not intuitive.
 
Why would it not be intuitive? A subspace U of a vector space V must be closed under addition and scalar multiplication. If v is in the subspace U, then (-1)v = -v is also. Then v + (-v) = 0 is in U. That is, the zero vector of V, 0, is in U. But it is easy to show that the zero vector of a vector space is unique. Since 0 is in U and, of course, v + 0 = v for all v in U, there cannot be another zero vector in U.
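Written out step by step, the argument in the post above is: for any ##v## in the subspace ##U##,

$$\begin{aligned}
v \in U &\implies (-1)v \in U && \text{(closure under scalar multiplication)}\\
v \in U,\ (-1)v \in U &\implies v + (-1)v \in U && \text{(closure under addition)}\\
v + (-1)v &= (1 + (-1))v = 0v = 0 && \text{(so the zero vector of } V \text{ lies in } U\text{)}
\end{aligned}$$

And for uniqueness: if ##0'## were another zero vector of ##U##, then ##0' = 0' + 0 = 0##.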
 
If each subspace had its own zero vector, then combining these subspaces to get a bigger subspace, or even the whole space, would give us a bunch of different zeros, and the whole space would be very entertaining, with suddenly disappearing elements and discontinuities...

Also note that these are the rules of the game: the axioms are imposed by definition, rather than something whose existence we anticipate based on intuition.
 
torquerotates said:
I am curious as to why a subset of a vector space V must have the vector space V's zero vector as the subset's zero vector in order to be a subspace. It's just not intuitive.

What would you suggest as an alternative?
 
ejungkurth said:
What would you suggest as an alternative?

A subspace which isn't a vector space?
 
By definition, a subspace has to be a vector space, and then all the other people's arguments hold. What you are suggesting is just a plain subset, but that is not so interesting in linear algebra because it doesn't have the vector space properties.

By the way: the "space" in "subspace" means vector space, so it should really say "sub-vector-space". But a subset isn't a space, so there is no ambiguity.
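To make the subset-versus-subspace distinction concrete, here is a small Python sketch (not from the thread; the affine line ##y = x + 1## is a hypothetical example chosen because it misses the zero vector, while the line ##y = x## through the origin passes the same checks):

```python
# Spot-check the subspace axioms for a subset of R^2 given by a membership test.
# This only demonstrates failures on sample vectors; a real proof must cover
# all vectors and scalars.

def in_affine_line(v):
    """Membership test for S = {(x, y) : y = x + 1}, a subset but not a subspace."""
    x, y = v
    return y == x + 1

def in_line(v):
    """Membership test for {(x, y) : y = x}, a line through the origin."""
    x, y = v
    return y == x

def looks_like_subspace(member, samples, scalars=(-1, 0, 2)):
    """Return False as soon as any subspace axiom fails on the samples."""
    # A subspace must contain the zero vector of the ambient space.
    if not member((0, 0)):
        return False
    # Closure under addition (checked on sample pairs only).
    for u in samples:
        for v in samples:
            if not member((u[0] + v[0], u[1] + v[1])):
                return False
    # Closure under scalar multiplication (checked on a few scalars only).
    for c in scalars:
        for v in samples:
            if not member((c * v[0], c * v[1])):
                return False
    return True

print(looks_like_subspace(in_affine_line, [(0, 1), (1, 2)]))     # False: (0, 0) is not on y = x + 1
print(looks_like_subspace(in_line, [(0, 0), (1, 1), (-2, -2)]))  # True: passes these spot checks
```

The first check fails immediately at the zero-vector test, which is exactly the point of the thread: a subset that excludes 0 cannot be closed under the operations, so it cannot be a subspace.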
 