# Quivers and representations

1. Jun 25, 2006

### matt grime

Lecture 1. Introduction and first examples.

I thought that this forum was looking a little sparse recently and decided to try to write something interesting for people to read, think about, and possibly do some work on. One aspect of algebra that is not taught anywhere at undergraduate level that I'm aware of, but that is incredibly important (and reasonably simple), is quivers. In some sense absolutely everything in algebra is to do with quivers. So here goes.

Definition. A quiver is a directed graph (with a finite number of vertices and arrows).

Examples. (Let's see how much we can typeset in here.) Well, it's kinda obvious what they are but here are some of the more important ones we'll talk about:

$$\bullet \rightrightarrows \bullet$$

is the so-called Kronecker quiver.

$$\bullet \to \bullet$$

doesn't have a name but is important.

So, how on earth is a purely combinatorial thing like a graph actually algebraic?

Here's the simplest way.

Definition. Let Q be a quiver. A representation of Q is the following data: a finite dimensional complex vector space for each vertex (we may talk about other fields later, and in general any algebraically closed field will do for this part), and for each arrow between vertices a linear map between the corresponding vector spaces. Two such sets of data are defined to be equivalent/isomorphic if they differ by a change of bases in the vector spaces. The collection of vector spaces is denoted $\{W_v\}$, where v ranges over the vertices, and the linear maps are $\{f_a\}$, labelled by the arrows a. It is useful to have a way of talking about where the arrows go from and to. An arrow starts at its source and ends at its target. We write s(a) and t(a) for these vertices in the graph. Thus in a representation we assign to each arrow a an element of ${\rm Hom}(W_{s(a)},W_{t(a)})$.
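As a concrete aside, the data in this definition is easy to encode on a computer. Here is a minimal numpy sketch (the variable names and the example quiver are my own choices, not notation from the lectures):

```python
import numpy as np

# A quiver: vertices, plus arrows recorded as (s(a), t(a)) pairs.
vertices = [1, 2]
arrows = {"a": (1, 2)}            # one arrow a with s(a)=1, t(a)=2

# A representation: a dimension for each vertex and a matrix for each
# arrow.  Here W_1 = C^2, W_2 = C^3, and f_a is an inclusion-like map.
dims = {1: 2, 2: 3}
maps = {"a": np.array([[1., 0.],
                       [0., 1.],
                       [0., 0.]])}

# Sanity check: the matrix for a must be an element of
# Hom(W_{s(a)}, W_{t(a)}), i.e. have shape (dim W_{t(a)}, dim W_{s(a)}).
for name, (s, t) in arrows.items():
    assert maps[name].shape == (dims[t], dims[s])
```
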

Example. Consider

$$\stackrel{\bullet}{\circlearrowleft}$$

A representation of this is just a vector space and a linear map, i.e. a pair (W,f). A pair (W',g) is equivalent to (W,f) if W' is isomorphic to W (i.e. the dimensions are the same) and, identifying End(W') with End(W), f is conjugate to g. Hence equivalence classes of representations of this quiver are parametrized by Jordan normal forms. This parametrization is relatively simple, especially if we make a further definition.
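A quick numerical illustration of this equivalence (the particular matrices are my own example): conjugate matrices give isomorphic representations of the loop quiver, and invariants like the trace and determinant (indeed the whole characteristic polynomial, hence the Jordan form) do not change.

```python
import numpy as np

# (W, f) and (W', g) are isomorphic exactly when g = P f P^{-1}
# for some invertible change of basis P.
f = np.array([[2., 1.],
              [0., 2.]])          # one 2x2 Jordan block, eigenvalue 2
P = np.array([[1., 0.],
              [1., 1.]])          # an invertible change of basis
g = P @ f @ np.linalg.inv(P)

# g looks different from f but lies in the same isomorphism class:
# conjugation preserves the characteristic polynomial.
assert not np.allclose(f, g)
assert np.isclose(np.trace(f), np.trace(g))
assert np.isclose(np.linalg.det(f), np.linalg.det(g))
```
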

Definition. Given a representation $(\{W_v\},\{f_a\})$ of Q, a subrepresentation is a choice of subspace of each W_v such that the subspaces are preserved by the linear maps f_a (each f_a maps the chosen subspace of W_{s(a)} into the chosen subspace of W_{t(a)}).

A representation is simple if it has no subrepresentations other than itself and the zero subrepresentation.

Example. For

$$\stackrel{\bullet}{\circlearrowleft}$$

and a representation (W,f), a subrepresentation is just a subspace that f maps into itself. Since we are working over the complex numbers, every linear map on a space of dimension at least 2 has an eigenvector, and the line that eigenvector spans is a proper nonzero invariant subspace. Thus the only simple representations of this quiver are those where W is 1-dimensional (f is then multiplication by some scalar).
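The eigenvector argument can be checked numerically; here is a sketch with a map that has no real eigenvector, so the complex numbers really are doing work (the example matrix is my own):

```python
import numpy as np

# Over C every linear map has an eigenvector, and the line it spans is
# an f-invariant subspace, i.e. a subrepresentation of (W, f).
f = np.array([[0., -1.],
              [1.,  0.]])        # rotation by 90 degrees: no real eigenvector
vals, vecs = np.linalg.eig(f)    # over C the eigenvalues are +i and -i
v = vecs[:, 0]

# f maps the line spanned by v back into itself, so any (W, f) with
# dim W >= 2 has a proper nonzero subrepresentation and is not simple.
assert np.allclose(f @ v, vals[0] * v)
```
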

If we just think about this quiver alone for a while, there are two different kinds of non-simple representation. Let W be of dimension 2, and compare the case when f has just one Jordan block with the case when f has two Jordan blocks. In either case the representation is not simple, but the former is a very different beast from the latter: in the second case f has two linearly independent eigenvectors, and we can write W as W'+W'' where both W' and W'' are subrepresentations. But in the former case there is no complementary space we can pick that is preserved by f.
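The one-block versus two-block dichotomy is easy to see in coordinates; a small sketch (nilpotent matrices chosen by me for concreteness):

```python
import numpy as np

# Dim-2 representations of the loop quiver: one Jordan block vs two.
f1 = np.array([[0., 1.],
               [0., 0.]])        # a single nilpotent Jordan block
f2 = np.zeros((2, 2))            # two 1x1 Jordan blocks

e1, e2 = np.eye(2)               # e1 = (1,0), e2 = (0,1)

# f2 preserves both coordinate lines, so W = span(e1) + span(e2)
# decomposes the representation into two subrepresentations.
assert np.allclose(f2 @ e1, 0 * e1) and np.allclose(f2 @ e2, 0 * e2)

# f1 sends (x, y) to (y, 0), so a line is f1-invariant only if its
# vectors have second coordinate zero: span(e1) is the unique
# invariant line, and it has no invariant complement.
assert np.allclose(f1 @ e1, 0 * e1)   # span(e1) is invariant
assert (f1 @ e2)[0] != 0              # f1 moves e2 off the line span(e2)
```
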

Thus we are led to the following definition.

Definition: a representation $(\{W_v\},\{f_a\})$ is decomposable if we can find a proper nonzero subrepresentation (i.e. one where the subspaces are not all zero and not all of W_v) and a choice of complementary subspaces that is also a subrepresentation. If a representation is not decomposable we call it indecomposable. Thus we can talk about direct sums of representations and summands of representations.

Indecomposable is *not* the same as simple. A two-dimensional W with a linear map consisting of one Jordan block, as above, is indecomposable but not simple.

Example.
$$\bullet \to \bullet$$

A representation is then a pair of vector spaces X and Y and a linear map $f: X \to Y$.

Let us classify all indecomposable representations where X and Y are nonzero vector spaces and f is nonzero. We can write X as ker(f)+X' for some complementary subspace X' of the kernel, and we can write Y as Im(f)+Y' for some complementary subspace of the image of f (Y' is a copy of the cokernel). Both splittings give decompositions of the representation, so if the representation is indecomposable we must have ker(f) = 0 and Y' = 0, and f is an isomorphism (an invertible linear map). Further, since we are completely free to choose our bases, we may assume that if {x_i} is a basis of X and {y_i} a basis of Y then f sends x_i to y_i; the representation then splits into one-dimensional pieces. So the only indecomposable of this kind is where X and Y are 1-dimensional, spanned by x and y respectively, with f mapping x to y; call this P_1.

Now, let us look at the simple representations. If ({X,Y},{f}) is any representation with Y nonzero, there is the subrepresentation ({0,Y},{0}), and hence there is a simple nonzero subrepresentation ({0,Y'},{0}) with Y' one-dimensional; call the representation with Y one-dimensional S_2. Similarly, if f is the zero map from X to Y, the representation ({X,0},{0}) is a subrep; hence we find the only other simple is ({X,0},{0}) with X one-dimensional, and we call this S_1.

Summary: given $\bullet \to \bullet$ there are exactly two nonzero simple representations and one more indecomposable representation, and every representation is a direct sum of these three. Explicitly, if f is a map from X to Y we can choose bases of X and Y so that the representation of the quiver is $(S_1)^p \oplus (P_1)^q \oplus (S_2)^r$, where p is the dimension of the kernel of f, q is the rank of f, and r is the dimension of the cokernel.
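The multiplicities p, q, r can be read off any matrix; a small sketch of the bookkeeping (the example matrix is mine):

```python
import numpy as np

# Decompose a representation f: X -> Y of * -> * as
# (S_1)^p + (P_1)^q + (S_2)^r, with p = dim ker f, q = rank f,
# r = dim coker f.
f = np.array([[1., 2., 3.],
              [2., 4., 6.]])     # a rank-1 map from C^3 to C^2

q = np.linalg.matrix_rank(f)     # copies of P_1
p = f.shape[1] - q               # dim ker f   = copies of S_1
r = f.shape[0] - q               # dim coker f = copies of S_2

assert (p, q, r) == (2, 1, 1)
assert p + q == f.shape[1] and q + r == f.shape[0]   # dimensions add up
```
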

This is an example of so-called finite representation type.

Exercise. Repeat this for $\bullet \to \bullet \to \bullet$

Last edited: Jun 25, 2006
2. Jun 25, 2006

### matt grime

Lecture 2. A brief diversion in history.

Ok, so that's what they are. This post is going to be purely expository and will explain some of the amazing things about quivers. Post 3 will get back to doing more maths.

1. The n Subspace Problem.

This is a very famous, and hard, piece of mathematics, and in some sense it asks: given a vector space W and a collection of subspaces of W, how complicated is the family of all possible choices?

The one subspace problem we have already met: it is just the classification of the representations of the quiver $\bullet \to \bullet$. It is perhaps clearer what is going on if I say that the 2-subspace problem is the classification of the representations of this quiver

$$\bullet \rightarrow \bullet \leftarrow \bullet$$

Now, in the last post we classified the representations of $\bullet \to \bullet$, and said that there were two nonzero simples

$$S_1 = (\{\mathbb{C},0\},\{0\})$$ and $$S_2 = (\{0,\mathbb{C}\},\{0\})$$

and an indecomposable non-simple representation

$$P_1=(\{\mathbb{C},\mathbb{C}\},\{1\})$$

and every representation is a sum of these three.

What that means in this context is the following: classifying subspaces of W is the same as classifying representations of the form ({X,W},{f}) where f is injective, i.e. inclusions of subspaces into W. Of course we all know from Sylvester's law of replacement that if we have a basis of X, a subspace of W, then we can complete it to a basis of W. Thus we have our first statement of a well-known result of linear algebra as a quiver result:

Translation: {Sylvester's law of replacement} <--> {The 1 subspace problem}

Now, what about the 2 subspace problem? It turns out again that up to choice of basis there are only finitely many ways to choose two subspaces of W: choose a basis of the intersection of the two subspaces, then extend to bases of each individually, and thence to the whole of W. In quiver terms this states that there are only finitely many indecomposable representations of $\bullet \rightarrow \bullet \leftarrow \bullet$.

It turns out that the 3 subspace problem, which is classifying the indecomposable representations of

$$\begin{array}{ccccc} &&\bullet && \\ && \downarrow&&\\ \bullet&\rightarrow&\bullet&\leftarrow&\bullet \end{array}$$

also has only finitely many possibilities, but the 4 subspace problem has infinitely many possibilities.

For the cognoscenti: notice that if we forget the arrows are arrows, and just think of them as edges, the resulting graphs for the 1, 2, and 3 subspace problems are Dynkin diagrams, and for the 4 subspace problem the graph is not. This is no coincidence.

2. The 3 subspace problem in some detail.
It is probably useful to illustrate why there are indeed only finitely many possibilities. Suppose I want to choose three subspaces of the plane R^2 (I know we're over C, but the argument is the same: it is just easier to visualize over R). Let's look at the option where I pick 3 lines. Now, abusing mathematics horribly, I can assume that two of them are the line L generated by l=(1,0) and the line K generated by k=(0,1) (the x and y axes), and the third is a line through the origin. We'll assume the third line is not the x or y axis. It is then y=ax for some nonzero a, or in vector terms it is the space spanned by (1,a)=l+ak. But I only care about subspaces, so I can replace the element generating the line L by m=(1/a,0) without changing L, and then the subspaces are <m>, <k> and <m+k>; thus all choices are essentially the same.

3. The 4 subspace problem.

This simultaneous argument fails for 4 subspaces: I can't change my bases to satisfy two different constraints simultaneously. The above argument states that in picking three lines I can assume they are the lines <u>, <v> and <u+v>. Now, if I pick any <u+av> and <u+bv> with a and b distinct, I can't transform one line into the other. However, the possibilities in this case (for subspaces of a 2-dimensional space) fall into a nice family of choices. Actually, it can be shown that this is true for all 4 subspace problems. Such things are called tame: there are infinitely many possibilities, but they are classifiable in a reasonable way.

4. The 5 subspace problem.

This is different in a fundamental way again. The possibilities are wild: they cannot be classified in a reasonable way.

Last edited: Jun 25, 2006
3. Jun 25, 2006

### matt grime

Lecture 3, back to maths.

We will explain the statement about how everything in algebra is to do with quivers, or representations of them. This might get quite abstract, so it isn't necessary to read this first time round. Instead, if you leap to lecture 4 we'll go into more detail about quivers that have no oriented cycles.
If you want a reason for why we examine that case in preference to others, it is because the algebraic gadgets that correspond to things with oriented cycles are non-artinian, and these things are less well understood, even now.

1. Quivers with relations.

Given some quiver Q, a representation assigns vector spaces to the vertices and linear maps to the arrows and makes no more assumptions. We can add in extra assumptions, called relations. These are formal relations (say in the form of equations) that we require the linear maps to satisfy.

One of our first examples was

$$\stackrel{\bullet}{\circlearrowleft}$$

and a representation is a pair (W,f) of a vector space and an endomorphism of W. A relation for it would be that we require f^2=0, or that f satisfy some more complicated polynomial p(f)=0; f would then be a linear map whose minimal polynomial divides p.

Here's another example:

$$\mathop{\bullet}_{\circlearrowleft}^{\circlearrowright}$$

(that's supposed to be one vertex with two loops out of it). A representation is a vector space and 2 linear maps (W,{f,g}). (Classifying representations of this system is impossible: it is a wild problem like the 5 subspace problem; there are too many of them.) We can impose a relation that makes our life easier: we can require that fg=gf, i.e. that they commute.

Let G be a finite group, and suppose it has a presentation with n generators and relations. Consider the quiver with one vertex and 2n+1 loops: let g_1,...,g_n of them satisfy the relations of the n generators, let one, e, satisfy ey=ye=y for all y (so it is the identity), and let the others h_1,...,h_n satisfy the relations (g_i)(h_i)=e, so we are adding in inverses. A representation of this quiver is then the same thing as a representation of G. Compare this to the observation that a group is the same as a category with one object in which all morphisms are isomorphisms.

In order to take this idea further we need to introduce a different way of thinking of representations of quivers.
Path Algebras

Given a quiver Q, we can form a complex vector space CQ from the paths of Q. There are two types of path in Q:

1. a sequence of arrows a_i, 1 <= i <= n, for some n, satisfying s(a_i)=t(a_{i-1}); less abstractly, it is a set of arrows that lie tip to tail.
2. the trivial paths, one for each vertex, that contain no arrows; we'll label these e_v, one for each vertex v.

Let Path(Q) be the set of all paths in Q. We form the abstract vector space CQ with basis elements indexed by the elements of Path(Q).

Remark: CQ is a finite dimensional vector space if and only if Q contains no oriented cycles.

Example: if Q is the quiver $$\bullet \rightrightarrows \bullet$$ there are exactly 4 paths, the two trivial paths and the two paths of length 1.

Example: if Q is the quiver $$\bullet \leftrightarrows \bullet$$ then there are infinitely many paths.

Given two paths x and y, if y ends where x begins we can join the two paths together as xy. If we also define xy=0 whenever we cannot join up x and y, this makes CQ into an algebra.

Observations: a trivial path satisfies (e_v)^2=e_v, i.e. the trivial paths are idempotents. If x is a path ending at v, then (e_v)x=x, and if x begins at v then x(e_v)=x.

Example. If Q is the quiver $$\stackrel{\circlearrowleft}{\bullet}$$ then CQ is exactly the same as C[x], the polynomial ring in one variable. There is a unique trivial path, e, and it acts as the identity, and every other path simply loops round n times for some n, which corresponds to x^n.

Example/Exercise. If Q is $$\mathop{\bullet}_{\circlearrowleft}^{\circlearrowright}$$ with the relation that the arrows commute, justify why CQ is the same as C[x,y], the polynomial ring in two variables. The representation theory of C[x,y] as an algebra is also a wild problem. This again is no coincidence, as we shall see.

2. CQ modules.

To avoid using the word representation in two possibly confusing ways, we are going to talk about CQ modules. A CQ module is a complex vector space V and an algebra homomorphism from CQ to End(V).
Later we shall see that representations of Q and CQ modules are the same thing, but first some examples. Consider the quiver $\bullet \to \bullet$. Label the vertices 1 and 2, and the arrow by a. The paths are then

1) the nontrivial path a
2) the trivial paths e_1 and e_2.

so the path algebra is a 3-dimensional complex vector space. A typical element of CQ looks like (ue_1+ve_2+wa). You can work out the product of two such elements, and it becomes clear that CQ is the same thing as the algebra of upper triangular 2x2 matrices. This has two simple modules, both one-dimensional: if m is a 2x2 upper triangular matrix then the two maps m --> m_{11} and m --> m_{22} define algebra homomorphisms to End(C). There is one more indecomposable module, the natural interpretation of upper triangular 2x2 matrices as elements of End(C^2).

We also notice that e_1+e_2 is the identity element of the algebra.
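A minimal numerical check of this identification. Which matrix unit plays which trivial path depends on the convention for composing paths; the assignment below is one choice that fits "xy means first y, then x", and is mine rather than anything fixed by the lectures:

```python
import numpy as np

# CQ for the quiver 1 --a--> 2, realised inside 2x2 matrices.
e1 = np.array([[0., 0.], [0., 1.]])   # trivial path at vertex 1
e2 = np.array([[1., 0.], [0., 0.]])   # trivial path at vertex 2
a  = np.array([[0., 1.], [0., 0.]])   # the arrow from 1 to 2

assert np.allclose(e1 @ e1, e1) and np.allclose(e2 @ e2, e2)  # idempotents
assert np.allclose(e1 + e2, np.eye(2))   # identity of the algebra
assert np.allclose(a @ e1, a)            # a starts at vertex 1
assert np.allclose(e2 @ a, a)            # a ends at vertex 2
assert np.allclose(e1 @ a, 0 * a)        # mismatched endpoints give 0
assert np.allclose(a @ a, 0 * a)         # no path of length 2 in Q
```
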

Remark: Our restriction of Q to have a finite number of vertices means that the sum of the e_v as v ranges over the vertices is a well defined identity element.

Remark: Finitely many arrows means that the algebra, whilst potentially infinite dimensional, has only a finite number of generators.

Last edited: Jun 26, 2006
4. Jun 26, 2006

### AKG

Example/Exercise. If Q is
$$\mathop{\bullet}_{\circlearrowleft}^{\circlearrowright}$$
justify why CQ is the same as C[x,y] the polynomial ring in two variables. The representation theory of C[x,y] as an algebra is also a wild problem. This again is no coincidence as we shall see.

Is it? C[x,y] is generated by {1,x,y} subject to the relations that 1 is the multiplicative identity and xy = yx. CQ is generated by {1,x,y} subject to the relation that 1 is the identity, but NOT subject to xy = yx.

EDIT: Now that "with the relation that the arrows commute," has been added, it's clear that C[x,y] is CQ.

Last edited: Jun 26, 2006
5. Jun 26, 2006

### matt grime

Lecture 3 cont.

We'll end this part with a statement of a result of Gabriel's that concretizes the (as yet unfounded) assertion that algebra is essentially the study of quivers (with relations). The words in the full statement will not mean anything to most people so we shall slightly paraphrase it.

Theorem (Gabriel).

Let A be a finite dimensional algebra satisfying some mild condition (basicness, for those who know about such things). Then there is a quiver Q(A) associated with A (its Ext quiver), and there is a surjective map from CQ(A) to A (i.e. A is a quotient of a path algebra). If in addition CQ(A) is a finite dimensional algebra then the kernel is of a particular form (it is contained in the square of the radical), and there are correspondences between the simple modules and the two-sided ideals of the two algebras.

We have also seen that path algebras (with relations) give us polynomial rings, and group algebras too.

6. Jun 26, 2006

### matt grime

Lecture 4. Intro

This post will only contain statements of things we aim to prove in the next.

There are two results we want to explain.

1. There is a natural correspondence between representations of the quiver Q, and the modules of CQ.

2. If Q is a quiver without oriented cycles (which holds if and only if CQ is a finite dimensional algebra) then there is exactly one simple representation/module for each vertex of Q, and we will describe all projective and injective CQ modules too.

Remark: if there are oriented cycles, then 2. fails. The ring $\mathbb{C}\langle x,y\rangle$, the free algebra on two generators, is, as we know, a path algebra in disguise (the quiver with one vertex and two loops, with no relations), and it has simple modules of all dimensions.

I should get round to proving these tomorrow.

7. Jun 28, 2006

### matt grime

Lect. 4 cont.

1. CQ modules are the same as representations of Q.

The ways to convert one to the other are as follows:

if W is a CQ module we define a representation of Q as follows:

the vector space corresponding to vertex v is the space e_vW (recall, e_v is an idempotent acting on W, so it corresponds to splitting W into a direct sum), and if a is an arrow in Q, it gives a linear map from e_{s(a)}W to e_{t(a)}W, the spaces at the source and target of the arrow a.

Conversely, given a rep of Q define W to be the direct sum of the W_v and define the action of the paths in the obvious way (a path is just a sequence of arrows).

This sounds more complicated than it is... honest.
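To make the dictionary concrete, here is a small numpy sketch of the representation-to-module direction for $\bullet \to \bullet$. The block layout, dimensions and names are all my own choices:

```python
import numpy as np

# A representation of 1 --a--> 2: W_1 = C^2, W_2 = C^3, and a map f.
W1_dim, W2_dim = 2, 3
f = np.arange(6.).reshape(3, 2)           # the linear map W_1 -> W_2

# The module W = W_1 (+) W_2, with e_1, e_2 acting as the projections
# onto the two summands and the arrow acting via f in the off-diagonal
# block.
n = W1_dim + W2_dim
E1 = np.zeros((n, n)); E1[:2, :2] = np.eye(2)   # projection onto W_1
E2 = np.zeros((n, n)); E2[2:, 2:] = np.eye(3)   # projection onto W_2
A  = np.zeros((n, n)); A[2:, :2] = f            # the arrow acts via f

# Recovering the representation: e_v W are the summands, and A maps
# e_1 W into e_2 W, where it agrees with f.
w = np.array([1., 1., 0., 0., 0.])        # a vector supported on W_1
assert np.allclose(E1 @ w, w)             # w lies in e_1 W
assert np.allclose(E2 @ (A @ w), A @ w)   # A w lands in e_2 W
assert np.allclose((A @ w)[2:], f @ w[:2])  # and agrees with f
```
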

It is now left to the reader (this, apparently, is standard) to verify that these identifications are inverse equivalences of categories; or at least it would be, if I'd described what maps between representations are. But these are clear: collections of maps between the vector spaces W_v that commute with the linear maps associated to the arrows.

2. Simple modules for certain quivers.

Suppose that Q is a quiver without oriented cycles, then it is easy to describe the simple CQ modules because it is simple to describe simple representations of Q.

If Q is such a quiver, there is a representation S_v, one for each vertex v, with C at the vertex v and the zero vector space everywhere else.

Exercise: show that these are the only simple representations of Q. Hint: if S is a simple representation, we may pass to the connected subquiver Q' of Q obtained by ignoring the vertices where S has only the zero vector space; Q' is also free of oriented cycles, so there must be some vertex with only arrows in or only arrows out. Now use the arguments we made with kernels and cokernels to show that Q' has exactly one vertex.

Next time we'll talk about injectives, projectives and hereditary algebras and more complicated algebra.
