MHB Basic Exercise in Vector Spaces - Cooperstein Exercise 2, page 14

Math Amateur
I am reading Bruce Cooperstein's book: Advanced Linear Algebra ... ...

I am focused on Section 1.3 Vector Spaces over an Arbitrary Field ...

I need help with Exercise 2 of Section 1.3 ...

Exercise 2 reads as follows:

View attachment 5109

Hope someone can help with this exercise ...

Peter

*** EDIT ***

To give MHB readers an idea of Cooperstein's notation and approach I am providing Cooperstein's definition of a vector space ... as follows:

View attachment 5110
 
The first thing you have to ask yourself is: given $v$, how do I show *any* element (say $x$) of $V$ is $-v$?

The answer lies in A4: if $x = -v$, then $x + v = 0$ (and also $v + x = 0$ by A1).

So to show something is $-(-v)$, what do you suppose you have to add it to, and show the sum is $0$?
 
Deveno said:
The first thing you have to ask yourself is: given $v$, how do I show *any* element (say $x$) of $V$ is $-v$?

The answer lies in A4: if $x = -v$, then $x + v= 0$ (and also $v + x= 0$ by A1).

So to show something is $-(-v)$, what do you suppose you have to add it to, and show the sum is $0$?
Thanks for the help, Deveno ...

I guess that based on what you have said, we can proceed as follows:

We know that for $$x \in V$$ we have $$x + (-x) = 0$$ ... ... (1) ... ... ... by A4

Now put $$x = -v$$ in (1) ... then we have

$$(-v) + [-(-v)] = 0$$

so ... adding $$v$$ on the left of both sides we have ...

$$v + \{ (-v) + [-(-v)] \} = v + 0 = v$$ ... by A3

$$\Longrightarrow \{ v + (-v) \} + [-(-v)] = v$$ ... ... by associativity of addition (A2)

$$\Longrightarrow 0 + [-(-v)] = v$$ ... by A4

$$\Longrightarrow -(-v) = v$$ ... by A3
Is that correct? Can you confirm that the above proof is correct?

Peter
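As a quick numerical sanity check (not a substitute for the axiomatic proof, which must hold over any field), the identity $-(-v) = v$ can be tested in, say, $\Bbb R^3$. The sketch below is illustrative only; the names are not from Cooperstein:

```python
# Numerical sanity check of -(-v) = v in the vector space R^3.
import numpy as np

v = np.array([1.0, -2.0, 3.0])

neg_v = -v            # the additive inverse guaranteed by A4
neg_neg_v = -neg_v    # the inverse of that inverse

# A4: v + (-v) = 0, and likewise (-v) + [-(-v)] = 0
assert np.allclose(v + neg_v, 0)
assert np.allclose(neg_v + neg_neg_v, 0)

# The conclusion of the proof: -(-v) = v
assert np.allclose(neg_neg_v, v)
```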
 
Peter said:
Thanks for the help, Deveno ... [proof as in the post above] ... Is that correct? Can you confirm that the above proof is correct?

Peter

Yep, that's the ticket. Note you never used anything but A2-A4, so this proof is valid in any group, and is usually written like so:

In a group $(G,\ast)$, we have:

$(a^{-1})^{-1} = a$.

All A1-A4 say is that in a vector space $V$, we have an abelian group under vector addition. Thus many of the basic theorems in linear algebra are simply consequences of this.

The "other" main ingredient in a vector space is the scalar-multiplication, or $F$-action. Usually this is written as a left-action. We can view this in two main ways:

1. A "mixed map" $\mu: F \times V \to F$. This is a more "intuitive" view.

(EDIT: As Peter points out below, this should read: "$\mu: F\times V \to V$").

2. An "induced map" $a \mapsto \phi_a$ from $F \to \text{End}_{\Bbb Z}(V)$, so that for every $a \in F$ we get a map $V \to V$ defined by:

$\phi_a(v) = av$.

This is the more "advanced" view.

In the second view what M1 says is:

$\phi_a$ is an abelian group homomorphism, for every $a$. This is an endomorphism, since the domain and co-domain are the same ($V$).

What M2 says is: the map $a \mapsto \phi_a$ (let's call this map $\Phi$) is an abelian group homomorphism from the additive group of the field $F$, to the additive group of the ring of abelian group endomorphisms (of the abelian group $V$).

Recall that an endomorphism is a map $V \to V$, and that we add such maps by:

$(\phi_a + \phi_b)(v) = \phi_a(v) + \phi_b(v)$ (the addition on the LHS is the "addition of maps", and the addition on the RHS is the "addition of vectors").

So $(a + b)v = av + bv$ simply states that $\Phi(a+b) = \Phi(a) + \Phi(b)$.

M3 is a bit more subtle: it says that $\Phi$ is a semigroup homomorphism from $F$ to $\text{End}_{\Bbb Z}(V)$, with the operation being the field multiplication in $F$, and *composition* in the ring of endomorphisms:

$\Phi(ab) = \Phi(a) \circ \Phi(b)$, that is:

$(ab)v = a(b(v))$.

Together, M2 and M3 say we have a ring-homomorphism from $F \to \text{End}_{\Bbb Z}(V)$.

M4 then says this ring-homomorphism is a *unity-preserving* ring homomorphism, that is, $1_F$ induces the identity endomorphism of $V$. This ensures that, for a fixed $v \in V$ the map:

$a \mapsto av$ is an *embedding* of the field $F$ into the one-dimensional subspace ("line") $\{av: a \in F\}$, the subspace generated by $v$ (provided $v \neq 0$). This is where the "linear" comes from in "linear algebra".
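To make this concrete, here is a small Python sketch (illustrative only, with $V = \Bbb R^2$ and $F = \Bbb R$; the names `phi`, `u`, `w` are mine, not Cooperstein's) checking M1-M4 on sample scalars and vectors via the induced maps $\phi_a$:

```python
# Each scalar a induces an endomorphism phi_a of V; a -> phi_a respects
# the field operations (M2, M3) and sends 1 to the identity (M4).
import numpy as np

def phi(a):
    """The endomorphism of V = R^2 induced by the scalar a."""
    return lambda v: a * v

u = np.array([1.0, 2.0])
w = np.array([-3.0, 0.5])
a, b = 2.0, -1.5

# M1: phi_a is additive (an abelian group endomorphism of V)
assert np.allclose(phi(a)(u + w), phi(a)(u) + phi(a)(w))

# M2: Phi(a + b) = Phi(a) + Phi(b)  (pointwise addition of maps)
assert np.allclose(phi(a + b)(u), phi(a)(u) + phi(b)(u))

# M3: Phi(ab) = Phi(a) o Phi(b)  (composition of maps)
assert np.allclose(phi(a * b)(u), phi(a)(phi(b)(u)))

# M4: Phi(1) is the identity endomorphism of V
assert np.allclose(phi(1.0)(u), u)
```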

Most of the "meat" of linear algebra (at least in the finite-dimensional case) can be understood by a thorough grasp of Euclidean 2-space and 3-space. For example, in Euclidean 3-space, we have 3 copies of $\Bbb R$ (one for each spatial dimension). These are commonly referred to in physical situations as "axes". Although it is most convenient for these axes to be "orthogonal", this need not be the case.

One axis determines a line; two (provided the second isn't on the "same line" as the first) determine a PLANE; three (provided the third isn't on the plane determined by the first two) determine a "space". Calculations in a 3-space can thus be reduced to calculations with 3 field elements (called the "coordinates in the respective axes"), giving us an ARITHMETIC to go along with the ALGEBRA (just as using rational approximations to real numbers gives us "numbers" we can use to solve "equations" in ordinary "high-school algebra").

There is one "catch": the arithmetic (the numerical calculations) isn't uniquely determined by the vector space itself; we have to impose "units" upon it. For example, in the plane, distance might be measured in miles east-west and miles north-south, or perhaps in feet east-west and kilometers northwest-by-southeast. So the exact same point on a map might have different "numbers" (coordinates) attached to it, even when using a common origin as a reference.
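A tiny numerical illustration of the "units" point, simplifying the axes to east-west/north-south (the figures 5280 ft/mile and 1.609344 km/mile are standard conversion factors; the variable names are mine):

```python
# The same geometric point gets different coordinate numbers under
# different choices of units on the axes.
import numpy as np

# A point 3 miles east and 2 miles north of the origin,
# in (miles east-west, miles north-south) coordinates:
p_miles = np.array([3.0, 2.0])

# The same point in (feet east-west, kilometers north-south) coordinates:
p_mixed = np.array([3.0 * 5280.0, 2.0 * 1.609344])

# Both coordinate vectors describe one point; converting back to
# miles recovers the original numbers.
assert np.allclose([p_mixed[0] / 5280.0, p_mixed[1] / 1.609344], p_miles)
```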
 
Deveno said:
Yep, that's the ticket. Note you never used anything but A2-A4, so this proof is valid in any group ... [rest of the reply as above]
Hi Deveno,

Thanks for the significant help!

Will work through your post in detail shortly ...

... but ... just a quick clarifying question ...

... writing about the scalar-multiplication or F-action in V, you write:

"... ... ... 2. An "induced map" $a \mapsto \phi_a$ from $F \to \text{End}_{\Bbb Z}(V)$ that for every $a \in F$, we get a map $V \to V$ defined by:

$\phi_a(v) = av$. ... ... "

My question is as follows:

What is the significance of the subscript $$\mathbb{Z}$$ in $$\text{End}_{\Bbb Z}(V)$$? Can you please explain?

Hope you can help ...

Peter

*** EDIT ***

Just noticed something else I need to ask you about ... in the above post, you write:

"... ... The "other" main ingredient in a vector space is the scalar-multiplication, or $F$-action. Usually this is written as a left-action. We can view this in two main ways:

1. A "mixed map" $\mu: F \times V \to F$. This is a more "intuitive" view. ..."
Shouldn't the last sentence of the above quote from you actually read:

1. A "mixed map" $$\mu: F \times V \to V$$. This is a more "intuitive" view.

Peter
 
Peter said:
Thanks for the significant help! ... Just a couple of clarifying questions:

What is the significance of the subscript $$\mathbb{Z}$$ in $$\text{End}_{\Bbb Z}(V)$$?

... and shouldn't the "mixed map" actually read $$\mu: F \times V \to V$$?

Peter

Yes, good catch on that typo.

The subscript $\Bbb Z$ in $\text{End}_{\Bbb Z}(V)$ indicates that these are merely abelian group homomorphisms (every abelian group is a $\Bbb Z$-module, so $\Bbb Z$-linear maps are the same thing as additive maps). By contrast, $\text{End}_F(V)$ (also written as $\text{Hom}_F(V,V)$) is the set of all $F$-linear maps, which is a "smaller" set.
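A standard example of the gap between the two: take $V = \Bbb C$ as a vector space over $F = \Bbb C$. Complex conjugation is additive, so it lies in $\text{End}_{\Bbb Z}(V)$, but it is not $\Bbb C$-linear, so it is not in $\text{End}_{\Bbb C}(V)$. A quick Python check (illustrative values):

```python
# Complex conjugation on C: additive, but not C-linear.
def conj(z: complex) -> complex:
    return z.conjugate()

z, w = 1 + 2j, 3 - 1j

# Additive: conj(z + w) = conj(z) + conj(w), so conj is an abelian
# group endomorphism of (C, +).
assert conj(z + w) == conj(z) + conj(w)

# Not C-linear: conj(a*z) != a*conj(z) for a = i.
a = 1j
assert conj(a * z) != a * conj(z)
```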

As you may or may not recall, a (unital, associative) *algebra* over a field $F$, is something that is both a ring, *and* a vector space over $F$ such that the scalar multiplication is "compatible" with the ring multiplication. Another way to say this, is we have a ring-homomorphism:

$\eta: F \to Z(A)$.

(this makes $A$ into an extension ring of a field, which is *automatically* a vector space, with the scalar multiplication given by the ring-multiplication in $A$).

In this case, it turns out that if we take $A = \text{Hom}_F(V,V)$, then the maps $\phi_a$ form the entire center $Z(A)$. This is the "algebra" part of linear algebra. A basic theorem of linear algebra is that given a basis for $V$ (where $V$ has dimension $n$), we have an algebra isomorphism between:

$\text{Hom}_F(V,V)$ and $\text{Mat}_n(F)$

that is, "coordinatizing" vectors turns our linear algebra into the arithmetic of a *particular* algebra, the algebra of $n \times n$ matrices with entries in $F$.
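The correspondence can be seen numerically: after fixing the standard basis of $\Bbb R^2$, composing linear maps matches multiplying their matrices, and the scalar maps $\phi_a$ become $aI$, which commutes with everything. A minimal sketch (names and matrices are illustrative):

```python
# Coordinatizing linear maps on R^2: composition <-> matrix product,
# and the scalar maps phi_a <-> a*I, which is central.
import numpy as np

A = np.array([[1.0, 2.0], [0.0, 1.0]])   # matrix of a linear map S
B = np.array([[0.0, -1.0], [1.0, 0.0]])  # matrix of a linear map T

S = lambda v: A @ v
T = lambda v: B @ v

v = np.array([3.0, -1.0])

# (S o T)(v) agrees with the product matrix AB applied to v:
assert np.allclose(S(T(v)), (A @ B) @ v)

# The scalar map phi_a corresponds to a*I, which commutes with A:
a = 2.5
assert np.allclose((a * np.eye(2)) @ A, A @ (a * np.eye(2)))
```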

Thus, almost as soon as we learn about vectors, our attention shifts from the vectors themselves to linear transformations, which we "turn into numbers" by studying *matrices*. This is often students' first exposure to an algebraic object that behaves "differently" from the fields they are used to, the most obvious "different" properties being:

Matrix multiplication is not commutative,
Not all matrices have an inverse.

These "defects" lead to some of the more interesting parts of linear algebra, such as quantifying just how far any given matrix is from being a "good" (invertible) matrix, information which we can "lift" (via our basis) back to the abstract structure.
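Both "defects" are easy to exhibit with $2 \times 2$ real matrices (the particular matrices below are just convenient examples):

```python
# Two ways matrices differ from field elements.
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [1.0, 1.0]])

# 1. Matrix multiplication is not commutative:
assert not np.allclose(A @ B, B @ A)

# 2. Not every nonzero matrix is invertible; a zero determinant
#    detects the failure (here the rows are proportional):
C = np.array([[1.0, 2.0], [2.0, 4.0]])
assert np.isclose(np.linalg.det(C), 0.0)
```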
 