# Lie-algebra representation powers - plethysms

I've also been trying to calculate something related to polygonal anomalies, generalizations of 4D triangle anomalies. These arise when trying to calculate fermion loops with gauge fields exiting from vertices in those loops.

They exist only in even dimensions, and in D dimensions, they have (D/2+1) vertices. Thus, in 4D, they are triangular, and in 10D, they are hexagonal.

For gauge field i operating on the fermion rep, the gauge operator is L_i. For each chirality c, one has to calculate
L_{c,ijk} = Tr(L_i.L_j.L_k)_symmetric

for gauge fields i, j, k in 4D, and likewise for other numbers of dimensions. The overall result is
L_{ijk} = L_{L,ijk} - L_{R,ijk}

and it must vanish for the anomaly to disappear. That's a constraint on the Standard Model and extensions of it like Grand Unified Theories.
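As a concrete check of the 4D condition, here's a small script (a sketch; the field content and hypercharge assignments are the standard one-generation list, written as all-left-handed Weyl fermions) verifying that for the U(1)_Y factor, where each L_i is just the hypercharge Y, the symmetric trace reduces to Tr(Y^3) and cancels:

```python
from fractions import Fraction as F

# One SM generation as left-handed Weyl fermions: (multiplicity, hypercharge Y).
# Multiplicity counts color and weak-isospin components.
generation = [
    (6, F(1, 6)),    # quark doublet Q: 3 colors x 2 isospin
    (3, F(-2, 3)),   # antiparticle of right-handed up quark
    (3, F(1, 3)),    # antiparticle of right-handed down quark
    (2, F(-1, 2)),   # lepton doublet L
    (1, F(1)),       # antiparticle of right-handed electron
    (1, F(0)),       # antiparticle of right-handed neutrino (if present)
]

# U(1)_Y^3 triangle anomaly: Tr(Y^3) over all left-handed fermions.
anomaly = sum(n * y**3 for n, y in generation)
print(anomaly)  # 0 -> the anomaly cancels within one generation
```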

I have been valiantly searching for general formulas for the likes of Tr(L_i.L_j.L_k)_symmetric, without success. However, it's easy to calculate for an algebra's Cartan subalgebra, and I'm able to do that with my Lie-algebra code.

One can construct scalar invariants from a Lie algebra, starting with the commutator formula, [L_i, L_j] = f_{ijk} L_k

One first gets a metric, g_{ij} = f_{iab} f_{jba}, and if the metric can be inverted, then the algebra is semisimple. So one can find "Casimir invariants", with the quadratic one given by
C = g^{ab} L_a L_b

To extend into higher powers, one takes F_{ij} = f_{iaj} g^{ab} L_b and finds
C(p) = Tr(F^p)
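The metric construction is easy to verify numerically for a small algebra. A minimal sketch for su(2), where the structure constants are f_{ijk} = epsilon_{ijk} (the array and variable names are mine, not from any particular package):

```python
import numpy as np

# Structure constants of su(2): f_{ijk} = epsilon_{ijk}, so [L_i, L_j] = f_{ijk} L_k.
f = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    f[i, j, k] = 1.0
    f[j, i, k] = -1.0

# Metric g_ij = f_{iab} f_{jba}; invertibility <=> semisimplicity.
g = np.einsum('iab,jba->ij', f, f)
ginv = np.linalg.inv(g)          # raises LinAlgError if not semisimple

# Quadratic Casimir in the adjoint rep: (ad L_i) has matrix elements f_{ijk}.
ad = np.einsum('ijk->ikj', f)    # ad[i] is the matrix of ad(L_i)
C = np.einsum('ab,aij,bjk->ik', ginv, ad, ad)
print(g)   # -2 * identity for su(2)
print(C)   # proportional to the identity, as a Casimir invariant must be
```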

I've found it hard to find general formulas for those, though I've found some in Francesco Iachello's book Lie Algebras and Applications and A. M. Perelomov, V. S. Popov, “Casimir operators for semisimple Lie groups”, Izv. Akad. Nauk SSSR Ser. Mat., 32:6 (1968), 1368–1390. They are for A(n), B(n), C(n), D(n), and G2, with possible extension to E6 and E7, leaving F4 and E8 remaining. I've verified them in the quadratic case, though I've had difficulty doing so for higher powers.

C(1) = 0, C(2) is proportional to the earlier C, and C(p) is a degree-p polynomial in the highest weights. For rank n, there are n independent ones. Here are the lowest ones that form independent sets:
A(n): 2, 3, ..., n+1
B(n), C(n), D(n): 2, 4, ..., 2n
G2: 2, 6
F4: 2, 6, 8, 12
E6: 2, 5, 6, 8, 9, 12
E7: 2, 6, 8, 10, 12, 14, 18
E8: 2, 8, 12, 14, 18, 20, 24, 30
D(n) has the complication that one can form a degree-n polynomial in the highest weights from the independent C(p)'s, a sort of C'(n) that can substitute for C(2n).

You probably already know about this, but the symbolic manipulation program FORM has been used to calculate a huge number of invariants, Casimirs etc...
http://arxiv.org/abs/hep-ph/9802376, http://www.nikhef.nl/~form/maindir/packages/color/color.html
Of course, they don't use Cartan-Weyl basis like I assume your code does, so it's not directly comparable, but maybe of interest. The algorithms that they use can be nicely visualized using "bird tracks" (http://www.birdtracks.dk/). I've always wondered whether some similar graphical method would give any insight when using a Cartan-Weyl basis which is neither hermitian nor trace-orthogonal...

Anyway, good luck with it and I look forward to hearing more results!

Thanx for those links. However, I couldn't figure out how to get algorithms from Predrag Cvitanovic's "bird track" diagrammatic methods.

Yes indeed I use the Cartan-Weyl basis. I've most recently decided to try to fill in a missing piece in some discussions of it. Here's what the algebra looks like in that basis:

[H_i, E_a] = a_i E_a
[E_a, E_{-a}] = a.g.H = <a,H>
[E_a, E_b] = N_{a,b} E_{a+b}
for roots a, b != -a, indices i, and metric g
H_i = Cartan subalgebra, generalization of L_z in the ladder-operator development of angular momentum (SO(3) / SU(2))
E_a = raising / lowering operators, again like angular momentum

The big problem is how to compute N_{a,b}. From the Jacobi identities and some symmetries, we have
N_{a,b} = 0 if a+b is not a root of the algebra
N_{b,a} = -N_{a,b}, N_{-a,-b} = -N_{a,b}, N_{b,-a-b} = N_{-a-b,a} = N_{a,b}
N_{a,b}^2 - N_{a,-b}^2 + <a,b> = 0, N_{a,b} N_{a+b,c} + N_{b,c} N_{b+c,a} + N_{c,a} N_{c+a,b} = 0
avoiding N_{a,-a} values.

The first two sets of constraints give all the N's in terms of the nonzero N's for a, b being positive roots of the algebra. Of the third set, the first identity gives all their values to within signs, and the second one gives their signs. However, many of the signs are undetermined, and for n of them, I get 2^n solutions, which easily becomes a strain on Mathematica.

So I decided to calculate their squares instead, and I found unique solutions in every case that I've tried: all the exceptional algebras and some members of the infinite families.

All roots the same length: all N^2 values = 1
A(n), D(n), E6, E7, E8

(long root length)^2 / (short root length)^2 = 2
B(n), n >= 3: all N^2 values = 2
C(n), n >= 2, and F4: N^2 values = 1 or 2

(long root length)^2 / (short root length)^2 = 3
G2: N^2 values = 3 or 4

Could it be possible to prove these results in general for the infinite families? A help in that could be rewriting one of the identities as
N_{a+b,b}^2 - N_{a,b}^2 + <a+b,b> = 0
where a and b are both positive roots.

I now have some results for the squares of the normalizations, even if not about the signs.

I'll be using as a root basis a set of ei for some range of i, and an identity-matrix metric. The basis size is the rank except where noted.

For A(n), the basis size is (n+1) and the roots are
e_i - e_j
with
N(e_i - e_j, e_j - e_k)^2 = 1
because e_i - e_k is a root and e_i - 2e_j + e_k is not a root.
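That argument can be checked by brute force. A sketch for A(3): for the simply-laced algebras, whenever a+b is a root, a-b is not one, so the identity above forces N^2 = -<a,b> = 1 (the names and normalization below are my choices):

```python
import itertools
import numpy as np

n = 4  # basis size for A(3)
basis = np.eye(n)
roots = [basis[i] - basis[j] for i in range(n) for j in range(n) if i != j]

def is_root(v):
    return any(np.allclose(v, r) for r in roots)

# Identity: N_{a,b}^2 - N_{a,-b}^2 + <a,b> = 0, terms vanishing when the
# subscripted sum is not a root.  Whenever a+b is a root here, a-b is not,
# so N_{a,b}^2 = -<a,b> = 1.
for a, b in itertools.product(roots, repeat=2):
    if is_root(a + b):
        assert not is_root(a - b)
        assert np.dot(a, b) == -1   # hence N_{a,b}^2 = 1
print("all N^2 = 1 for A(3)")
```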

For D(n), the roots are
+- e_i +- e_j
By similar arguments, all the nonzero N^2's are 1.

For B(n), the roots are
+- e_i +- e_j and +- e_i
By similar arguments, all the nonzero N^2's are 1. My code gives 2 because of a different overall normalization of the metric.

One can do similar general arguments for E6, E7, and E8, though their roots have rather complicated expressions in this basis space. For E8, they can be
+- e_i +- e_j and (1/2)*(sum of +- e's with an even or odd number of - signs)

For C(n), the roots are
+- e_i +- e_j and +- 2e_i
The N(a,b)'s have different squares for different lengths of a, b, and a+b. By length:
N(s,s,s)^2 = 1
N(s,s,l)^2 = N(s,l,s)^2 = N(l,s,s)^2 = 2
s = short, l = long

For F4, the roots are
+- e_i +- e_j and +- 2e_i and (sum of +- e's)
N(s,s,s)^2 = 1
N(s,s,l)^2 = N(s,l,s)^2 = N(l,s,s)^2 = 2
N(l,l,l)^2 = 2

For G2, the basis size is 3 and the roots are
e_i - e_j and +- (2e_i - e_j - e_k)
N(s,s,s)^2 = 4
N(s,s,l)^2 = N(s,l,s)^2 = N(l,s,s)^2 = 3
N(l,l,l)^2 = 3

I've returned to implementing the branching in C++. I've implemented the shared setup code and all of the branching operation itself. This uses my earlier code for decomposing the resulting product-algebra rep into irreps. All that remains is to specify the subalgebras and the projection matrices, and that's been most of the work.

I've done all the root demoters and extension splitters, and I'm taking a break before continuing. But I find it satisfying to see great speed here also.

Whew! I finished my C++ versions of all the other branchers that I had implemented in Mma and Python, including the exceptional-algebra special-case ones. Like the rest of my C++ version, it also has great speed. I've also put all its source code into my archive and uploaded it.

I've thought about improvements, like using reference-counted smart pointers for the reps. My code makes copies of them in some cases, something which can eat up RAM.

For further porting, I've found SWIG: Simplified Wrapper and Interface Generator. It's for calling C/C++ from a variety of programming languages. But I don't think I'll bother with that any time soon.

I've now implemented multiple root demotion. I think I'll next implement concatenating branchers.

SO(10) 16 spinor rep 00010 -> Standard Model left-handed elementary fermions

Mathematica:
MakeMultiRootDemoter[ld,{4,5},{3,5}];MatrixForm[{#[[1]],#[[2,1]],#[[2,2]],(1/6)*#[[2,3]]-#[[2,4]],-(2/3)*#[[2,3]]}& /@ DoBranching[ld,{0,0,0,1,0}]]

Output:
1	{0,0}	{1}	-(1/2)	-1
1	{0,1}	{0}	-(2/3)	-(1/3)
1	{0,1}	{0}	1/3	-(1/3)
1	{1,0}	{1}	1/6	1/3
1	{0,0}	{0}	0	1
1	{0,0}	{0}	1	1
Multiplets: left-handed lepton, ap of right-handed up quark, ap of right-handed down quark, left-handed quark, ap of right-handed neutrino, ap of right-handed electron
ap = antiparticle

Columns:
Degeneracy / multiplicity
QCD multiplet: 10: 3, 01: 3*
Weak-isospin multiplet: 0: 0, 1: 1/2
Weak hypercharge
Baryon number - lepton number

I've done the brancher concatenation and brancher relabeling, for B2 <-> C2, A3 <-> D3, A1^2 <-> D2, etc. I've made the C++ version nearly feature-complete with the Python one, which has nearly all the non-graphical parts of the Mathematica code.

With my multiroot demoter, I think that I can retire my single-root demoter, with all its special-casing. For my extension splitter, I've found a way of calculating part of it that does not rely on special casing.

I've also added Giulio Racah's use of G2 in SU(7) -> SO(7) -> G2 -> SU(2), and showed that my code can do it in one step.

The link to my archive again: http://homepage.mac.com/lpetrich/Science/SemisimpleLieAlgebras.zip

I've updated my archive with my latest work.

I've worked out a general expression for the projection matrices for root demotion and extension splitting, and I've updated my branching code appropriately. I've succeeded in retiring a lot of special cases that I'd laboriously derived some months back; wasted effort, perhaps, but at least my code is conceptually better justified.

In weight space, the projection matrices for root demotion are:
P(original index, subalgebra index) = 1 for an original root with a subalgebra root assigned to it, and 0 otherwise

The demoted root makes a U(1) that gets its root-space value, not its weight-space value.

For extension splitting: P(original index, subalgebra index) = (extension vector)(original index) if the extension root was assigned to the subalgebra root

The extension vector is given as follows:
Take the main diagonal of the metric, make it MetDiag.
Divide MetDiag by the maximum value in it, giving NormMetDiag.

Of the original algebra's positive root vectors, find the one with the greatest sum or height: MaxPosRoot. Its corresponding weight vector is the highest-weight one for the algebra's adjoint representation.

The extension vector = - (component-by-component MaxPosRoot*NormMetDiag)
It should be a vector of integers.
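Here's a sketch of that recipe worked for B(2) = SO(5), where I'm assuming the relevant metric is the Gram matrix of the simple roots in an orthonormal-basis embedding (my reading of the recipe; the names are mine):

```python
import numpy as np

# Simple roots of B(2) in an orthonormal-basis embedding:
simple = np.array([[1.0, -1.0],   # a1 = e1 - e2 (long)
                   [0.0,  1.0]])  # a2 = e2      (short)
# Positive roots, as coefficient vectors over the simple roots:
pos_roots = np.array([[1, 0], [0, 1], [1, 1], [1, 2]])

gram = simple @ simple.T            # root-space metric
met_diag = np.diag(gram)            # MetDiag = (2, 1)
norm_diag = met_diag / met_diag.max()   # NormMetDiag = (1, 1/2)

# MaxPosRoot = the positive root of greatest height (sum of coefficients).
highest = pos_roots[np.argmax(pos_roots.sum(axis=1))]

ext = -(highest * norm_diag)        # componentwise product
print(ext)  # [-1. -1.] -- a vector of integers, as claimed
```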

I've succeeded in implementing extension splitting for A(n), and it works for A1 ~ B1 ~ C1 also. As expected, it does A(n) -> A(n). For A1, it reverses the root/weight values, multiplying them by -1.

I first wish to announce that my archive now has a new home: SemisimpleLieAlgebras.zip Its old home will be going away in a month.

I've also found that simple Lie algebras have these possible subalgebras, which contain all the maximal ones:
• Root demotion: roots to U(1) factors, splitting the algebra.
• Extension splitting: adding an extension root, then removing a root, which may split the algebra.
• SO(even) -> SO(odd)*SO(odd) where the odd numbers add up to the even number.
• From the above, SO(sum of n's) -> product of SO(each n).
• From the above, Sp(sum of n's) -> product of Sp(each n).
• SU(product of n's) -> product of SU(each n).
• SO(product of n's) -> product of SO(each n) -- each n and the product can be negative, with SO(-n) -> Sp(n).
• Any semisimple algebra -> SU(2) -- height of each root vector (sum of its components) -> root of SU(2) rep.
• SU(n) -> SO(n) or Sp(n)
• More generally, SU(n), SO(n), or Sp(n) -> simple subalgebra with an n-D irrep. The subalgebra can be one of the exceptional ones. An n-D fundamental irrep of the original gets mapped onto that subalgebra irrep. That irrep's reality:
• SU(n): real, pseudoreal, complex
• SO(n): real
• Sp(n): pseudoreal
The original's adjoint must get mapped onto a rep that contains the adjoint.
• Some exceptional-algebra subalgebras, like E8 -> F4*G2 -- 248 -> (52,1) + (1,14) + (26,7).
Note: SU(1) and SO(1) are singlet null algebras, and SO(2) ~ U(1).

Rep reality:
• Real: self-conjugate, height even.
• Symmetric square of rep: contains singlet -- invariant 2-form.
• Antisymmetric square of rep: contains adjoint (true in some cases; true in general?).
• Pseudoreal: self-conjugate, height odd.
• Symmetric square of rep: contains adjoint (true in some cases; true in general?).
• Antisymmetric square of rep: contains singlet -- invariant 2-form.
• Complex: conjugate is distinct. Product of rep and conjugate contains the adjoint and a singlet (both true in some cases; true in general?).
Height of rep: height of max root vector - height of min root vector

As I've posted earlier, I've succeeded in finding expressions for the projection matrices for subalgebras in all but the most general case: SU/SO/Sp(n) -> simple algebra with an n-D rep.

To illustrate what the problem in that case is, let's work out some simple examples:

First, SU(6) -> SU(3)
Antisymmetric rep powers (plethysms):
1: 10000 -> 20
2: 01000 -> 21
3: 00100 -> 30 + 03
4: 00010 -> 12
5: 00001 -> 02
6: singlets, 7+: none
Rep products:
10000 * 00001 -> 10001 + 00000
20 * 02 -> 22 + 11 + 00
Adjoint + singlet in both cases

Powers 1,2,4,5 suggest unambiguous mappings from the original weight space to the subalgebra one. Power 3 has an ambiguity. How to resolve it?

Trying SO(7) -> G2 gives plethysms
1: 100 -> 01
2: 010 -> 10 + 01
3: 002 -> 02 + 01 + 00
4 ~ 3, 5 ~ 2, 6 ~ 1, 7: singlets, 8+: none
When one gets to the original algebra's spinor rep, one gets a 2 instead of 1. Fortunately, there's a rep with 2 for the subalgebra.

Trying SO(8) -> SO(7) gives plethysms
1: 1000 -> 001
2: 0100 -> 100 + 010
3: 0011 -> 101 + 001
4: 0020 + 0002 -> 200 + 100 + 002 + 000
5 ~ 3, 6 ~ 2, 7 ~ 1, 8: singlets, 9+: none
Even worse. In addition to the subalgebra ambiguities, there's a spinor-rep ambiguity on the original-algebra side.

I think that I solved the reality problem for subalgebra reps. To do that, I have to step back a bit and be more general. Algebra generators L are related by

L(subalgebra)_i = sum_j c_{ij} L(original)_j

where we can choose L to be Hermitian. This makes the c's real. Take the conjugate. This makes:
conjugate of original rep -> conjugate of subalgebra rep

A rep is self-conjugate if L* = Z.L.Z^{-1} for all L defined in the rep. The rep is real if
Z.Z* = I
and pseudoreal if
Z.Z* = -I
Must be one or the other if irreducible; can also be mixed if reducible.

Thus,
real -> real
pseudoreal -> pseudoreal
complex -> any reality: real, pseudoreal, or complex

The adjoint rep can be constructed with a basis space being the algebra generators themselves: |L>. Original -> subalgebra:
|L(subalgebra)_i> = sum_j c_{ij} |L(original)_j>

Thus, the original algebra's adjoint rep gets mapped onto a subalgebra rep that contains the subalgebra's adjoint rep.

-

For finite groups, it's possible to prove that the product of every irrep with its conjugate contains exactly one copy of the identity rep. Here's the proof. Let
n(r1,r2,r3) = (1/N) * sum_a char(r1,a) * char(r2,a) * char(r3,a)
over all N elements a, for irreps r1, r2, r3.

The product of r1 and r2 contains n(r1,r2,r3*) copies of r3.

Let r2 = r1*, its conjugate, and let r3 = the identity rep. Then the sum for n reduces to
(1/N) * sum_a |char(r1,a)|^2 = 1 by orthonormality.

For compact Lie groups, one replaces the sum over elements a by an integral over the Haar measure of the element parameters, and N by that integral over 1. Orthonormality carries over, and that result appears here also. Since it is true for the group, it must also be true for the algebra.
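The finite-group statement is easy to verify on a small example, say S3, using its standard character table (hard-coded below; all S3 irreps are real, so conjugation changes nothing):

```python
from fractions import Fraction as F

# Character table of S3.  Conjugacy classes: identity (size 1),
# transpositions (size 3), 3-cycles (size 2).
class_sizes = [1, 3, 2]
chars = {
    "trivial":  [1,  1,  1],
    "sign":     [1, -1,  1],
    "standard": [2,  0, -1],
}
N = sum(class_sizes)  # group order, 6

# n(r, r*, identity) = (1/N) sum_a |char(r, a)|^2 = 1 by orthonormality.
for name, ch in chars.items():
    n = sum(F(s) * c * c for s, c in zip(class_sizes, ch)) / N
    print(name, n)  # each irrep gives exactly 1
```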

Reality: a rep D(a) is self-conjugate if there's some matrix T such that
T_{ij} D_{ik}(a) D_{jl}(a) = T_{kl}
It's a real rep if T is symmetric, pseudoreal if T is antisymmetric. It carries over into Lie algebras as
T_{kj} L_{ki} + T_{ik} L_{kj} = 0
The T is essentially a singlet rep of the algebra.

So all these products contain exactly one singlet rep:
• Real: symmetric square
• Pseudoreal: antisymmetric square
• Complex: product with conjugate

I've been stumped on getting the adjoint reps from powers of reps and products of reps with conjugates, but it's possible to get them in some special cases:
• SU(n): product of n-D rep with its conjugate minus a singlet
• SO(n): antisymmetric square of n-D rep: antisymmetric 2-tensor
• Sp(n): symmetric square of n-D rep: symmetric 2-tensor
• Exceptional fundamental reps: G2, F4, E8: in antisymmetric square, E7: in symmetric square, E6: in product with conjugate
• Adjoint rep of every algebra: in antisymmetric square
For SO(n) spinors, one can construct an antisymmetric p-tensor with some spinors:
(spinor).(antisymmetric product of p Dirac matrices).(spinor)

There are n SO(n) Dirac matrices, and one gets the adjoint for p = 2.
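One can check this construction numerically for SO(4), building the Dirac matrices from Pauli-matrix Kronecker products (one standard construction, not necessarily the one in my code):

```python
import numpy as np

# SO(4) Dirac matrices from Pauli Kronecker products; check the Clifford
# algebra and that the p = 2 antisymmetric products close into so(4).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

gammas = [np.kron(s1, s1), np.kron(s1, s2), np.kron(s1, s3), np.kron(s2, I2)]

# Clifford algebra: {gamma_i, gamma_j} = 2 delta_ij
for i, gi in enumerate(gammas):
    for j, gj in enumerate(gammas):
        assert np.allclose(gi @ gj + gj @ gi, 2 * (i == j) * np.eye(4))

# p = 2: sigma_ij = (1/4)[gamma_i, gamma_j] are the adjoint (so(4)) generators.
def sigma(i, j):
    return (gammas[i] @ gammas[j] - gammas[j] @ gammas[i]) / 4

# Spot check of one so(4) commutator: [sigma_01, sigma_12] = sigma_02.
lhs = sigma(0, 1) @ sigma(1, 2) - sigma(1, 2) @ sigma(0, 1)
print(np.allclose(lhs, sigma(0, 2)))  # True
```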

The plethysm method of calculating subalgebra projection matrices not only has some ambiguities, it's also computationally expensive with my brute-force code. For a rep with size n, going up to power p requires O(n^p/p!) calculations. The maximum is at p ~ n, giving O(e^n/sqrt(2*pi*n)) calculations.

I've worked out how to do plethysms on SU(n) reps using Young-diagram techniques, but it's rather gruesomely complicated, and I don't know how to generalize it to SO(n), Sp(n), or the exceptional algebras. Any Young-diagram techniques for doing SO(n) products? One could work with their SU(n) supergroups, but I don't know how well that would be justified.

For instance, a symmetric traceless 2-tensor in SO(n) is (2,0,0,...), and it's SU(n) (2,0,0,...) - (0,0,0,...), or in Young diagrams, (2) - (). One would use Young-diagram techniques, then go back from SU(n) to SO(n).

In general, rep addition and multiplication form a commutative semiring, as the nonnegative integers do.

Its elements are every possible rep of a group or algebra. Each one can be decomposed into a set of nonnegative-integer number of copies of each irrep, and that can indeed be interpreted as a vector of nonnegative integers with each irrep being a basis vector.

Its addition and multiplication are rep sums and products. It has an additive identity, the empty rep, and a multiplicative identity, the singlet or scalar rep.
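A minimal sketch of that semiring for SU(2), whose product rule (the Clebsch-Gordan series) is simple enough to hard-code; here irreps are labeled by twice the spin, and the names are my own:

```python
from collections import Counter

# An element of the rep semiring: a Counter mapping irrep label -> multiplicity.
def rep_add(r1, r2):
    return r1 + r2   # Counter addition = direct sum

def rep_mul(r1, r2):
    # Clebsch-Gordan series: (2j1) x (2j2) = |2j1-2j2| + ... + (2j1+2j2), step 2.
    out = Counter()
    for a, na in r1.items():
        for b, nb in r2.items():
            for c in range(abs(a - b), a + b + 1, 2):
                out[c] += na * nb
    return out

empty = Counter()           # additive identity: the empty rep
singlet = Counter({0: 1})   # multiplicative identity: the scalar rep

doublet = Counter({1: 1})   # spin 1/2
print(rep_mul(doublet, doublet))   # Counter({0: 1, 2: 1}): singlet + triplet
```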

I think that I now understand Weyl orbits, and I've moved them into the main code of the Mathematica version. I'm now working on Weyl orbits for the Python and C++ versions.

To understand them, let's first consider the Weyl group. This is the group of symmetries of an algebra's root system, generated by reflections T(a) for roots a, acting on a vector x:
T(a).x = x - 2*(x.a)*a/(a.a)
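As a sketch, one can generate a whole Weyl group from these reflections by closure; for A(2) this should give Sym(3), of order 6 (the names below are mine):

```python
import numpy as np
from itertools import product

# Reflection T(a).x = x - 2 (x.a) a / (a.a) as a matrix, for A(2) simple roots.
simple = [np.array([1.0, -1.0, 0.0]), np.array([0.0, 1.0, -1.0])]

def reflection_matrix(a):
    return np.eye(len(a)) - 2.0 * np.outer(a, a) / np.dot(a, a)

gens = [reflection_matrix(a) for a in simple]

# Close under multiplication, starting from the identity.
group = [np.eye(3)]
frontier = [np.eye(3)]
while frontier:
    new = []
    for m, g in product(frontier, gens):
        cand = g @ m
        if not any(np.allclose(cand, h) for h in group):
            group.append(cand)
            new.append(cand)
    frontier = new
print(len(group))  # 6 = 3!, the Weyl group Sym(3) of A(2)
```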

In the Cartan-Weyl basis, what my code primarily uses, the structures of the Weyl groups are not very clear. But one can use alternate bases, ones where the algebra metric becomes the identity matrix. Here, the a(i)'s will be the simple roots and the e(i)'s the basis vectors.

Cartan-Weyl: a(i) = e(i). The metric is only the identity matrix in the case of rank 1.

Identity-matrix versions:
A(n): a(i) = e(i) - e(i+1) -- (n+1) basis vectors
B(n): for i = 1 to n-1: a(i) = e(i) - e(i+1); a(n) = e(n)
C(n): for i = 1 to n-1: a(i) = e(i) - e(i+1); a(n) = 2*e(n)
D(n): for i = 1 to n-1: a(i) = e(i) - e(i+1); a(n) = e(n-1) + e(n)

The exceptional algebras are more complicated.
G2: a(long) = 2*e1 - e2 - e3; a(short) = -e1 + e2
or a(long) = -3*e1 + sqrt(3)*e2; a(short) = 2*e1
F4: a1 = e1 - e2; a2 = e2 - e3; a3 = e3; a4 = - (1/2)*(e1 + e2 + e3 + e4)
E6, E7, and E8 are even worse.

Now the Weyl groups in those bases.

A(n): for a(i): interchange coordinates i and i+1
The group: permutations of identity matrix with size (n+1), thus Sym(n+1)
Order: (n+1)!
The symmetry group of the n-D regular simplex (generalized triangle and tetrahedron).

B(n) and C(n): the A(n-1) generators with a(n) reversing the sign of coordinate n.
The group: the A(n-1) group multiplied by the group of diagonal matrices with n 1's and -1's.
Order: 2^n * n!
The symmetry group of the n-D hypercube (generalized square and cube) and cross polytope (generalized square and octahedron).

D(n): the A(n-1) generators with a(n) interchanging coordinates n-1 and n and also reversing their sign.
Like for B(n) and C(n), but with an even number of -1's.
Order: 2^(n-1) * n!
Consider a hypercube's vertices in a coordinate system where their positions are vectors of +-1's. The D(n) symmetry group is the symmetry group of those with even numbers of -1's, and also those with odd numbers of -1's.

G2: Dih(6), the symmetry group of the regular hexagon.
Order: 12

F4: The group for B(4) with 4*4 nonsingular matrices of +- 1/2 added.
Order: 1152
The symmetry group of the 4D 24-cell.

E6, E7, and E8 have more complicated groups, with orders 51840, 2903040, and 696729600, respectively.

Now for what a Weyl orbit is. It's a group-theory kind of orbit. Consider a group action: for some g in group G, find x' = g(x) for some x and x' in some set X. Thus: X = G(X).

If the group is a matrix group, then X can be a set of vectors, with action x' = g.x -- the dot or inner product.

If one starts with some x, then for all group elements g, the g(x)'s form an "orbit". The order or size of an orbit is related to that of the group by the orbit-stabilizer theorem:

(order of group) = (order of orbit) * (order of the "stabilizer" subgroup: the elements g with g(x) = x).

So orbits can be much smaller than groups, and that's what one often finds here.

For roots in algebras and their reps, the orbits generated by the Weyl groups are Weyl orbits. The roots in the orbits can be related to weights, and the weights have the property that only one of them has all-nonnegative components: the dominant weight. So as one designates irreps by their highest weights, one can do likewise with Weyl orbits.

One can find the Weyl orbits in irreps using the same procedure as for rep roots. All one has to do is keep those with nonnegative weight vectors, and stop when the solution process fails to find any more. Multiplicities / degeneracies one can find with Freudenthal's formula, working from the dominant weights' roots. However, some of the roots that one uses in it will be non-dominant, and one has to find which orbits those roots are in. But that does not present much difficulty, and finding an irrep's orbits is usually much faster than finding its basis set of roots/weights.

This speedup is not original with me -- I read about it in [1206.6379] LieART -- A Mathematica Application for Lie Algebras and Representation Theory. Though that package has much fancier output options than mine, it does not seem to do plethysms.

LieART also mentioned another speedup for doing rep products and subalgebra reps. One still needs to use complete expansions of the input reps, but as one calculates the products and subalgebra reps, one can keep only the roots with nonnegative weights, because those designate the Weyl orbits. One can then find which irreps are present by using their Weyl-orbit content.

Now for expanding a Weyl orbit. One does not need an entire Weyl group for that, just its generators, and my code uses only the generators for the algebra's simple roots. One can start with the dominant root/weight and work one's way downward until one can proceed no further. Likewise, to see what orbit a root is in, one can work one's way upward until one gets to the dominant root/weight.

Using those alternative basis vectors for the roots of A(n), B(n), C(n), and D(n), one can use Weyl-group elements implicitly to get further speedups when expanding Weyl orbits.
• Take the dominant root from the Cartan-Weyl basis into an appropriate alternative basis.
• Apply the Weyl-group elements implicitly by doing sign changes and permutations, generating all the roots in the orbit.
• Take these roots back into the Cartan-Weyl basis.
However, that does not work so well for the exceptional algebras. One can easily special-case G2, but F4 is more difficult. I've been unable to find anything even halfway simple for E6, E7, or E8.
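The implicit permutation-and-sign-change step can be sketched for B(n); here the orbit sizes for two B(3) dominant vectors come out as expected (the function name is mine):

```python
from itertools import permutations, product

# Implicit Weyl action for B(n): all coordinate permutations combined with
# all sign changes.  A set dedupes the repeats from zero coordinates.
def b_orbit(dominant):
    orbit = set()
    for perm in permutations(dominant):
        for signs in product((1, -1), repeat=len(perm)):
            orbit.add(tuple(s * x for s, x in zip(signs, perm)))
    return orbit

print(len(b_orbit((1, 0, 0))))   # 6: the short roots +-e_i of B(3)
print(len(b_orbit((1, 1, 0))))   # 12: the long roots +-e_i +- e_j
```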

At least in my Mathematica version, I've found speedups for A(n), B(n), C(n), and D(n), but not for G2 or F4.

Back to the question of real vs. pseudoreal vs. complex and what reps are present, I've come across this interesting result.

(rep) * (conjugate rep) contains both the scalar and the adjoint.
If the rep is real, then the symmetric part contains the scalar and the antisymmetric part the adjoint.
If the rep is pseudoreal, then the symmetric part contains the adjoint and the antisymmetric part the scalar.
There is one each in every case.

Here's a proof:
Start out with a Lie-algebra transformation: V -> (1 + i*ε*L).V = V + i*ε*D(V). Thus,
D(V) = L.V
D(V*) = -L*.V* -- the conjugate rep
L is Hermitian: L* = L^T

Now form a rep from the outer product of V and V*: T, with indices (V index, V* index).

D(T) = L.T - T.L
For T = the scalar I, D(I) = 0
For T = another algebra generator L', D(L') = [L,L'] -- one gets the adjoint rep.

-

Now the self-conjugate case.

V* = Z.V where Z is some constant Hermitian matrix.
L* = L^T = -Z.L.Z^{-1}

For the square rep for V, T has indices (V index, V index), and
D(T) = L.T + T.L^T = L.T - T.Z.L.Z^{-1}
D(Z^{-1}) = 0 -- scalar, since Z is a constant.
D(L'.Z^{-1}) = [L,L'].Z^{-1} -- adjoint, since L' is an algebra generator.

So we now need to consider the symmetry properties of Z^{-1} and L.Z^{-1} -- the scalar and adjoint reps.

To do that, we will need the symmetry properties of Z. We start with
L* = -Z.L.Z^{-1}

Complex conjugate again:
L = (Z*.Z).L.(Z*.Z)^{-1}
giving Z*.Z = Z^T.Z = s*I, where s > 0 for real and s < 0 for pseudoreal.

Hermitian conjugate this time:
L* = -Z^{-1}.L.Z
L = (Z*.Z^{-1}).L.(Z*.Z^{-1})^{-1}
Z*.Z^{-1} = t*I
Z* = Z^T = t*Z
Z = t*Z^T
Thus, t = +-1 and Z^2 = (s*t)*I
Since Z is Hermitian and nonsingular, the eigenvalues of Z^2 are real and positive; thus s has the same sign as t.

-

Let's consider the symmetry of L.Z^{-1}. Taking the transpose gives (Z^T)^{-1}.L^T = -((Z^T)^{-1}.Z).(L.Z^{-1}) = -t*(L.Z^{-1})

Likewise, the transpose of Z^{-1} is t*Z^{-1}.

For real irreps, t = 1 and s > 0, and the symmetric product gives the scalar and the antisymmetric one the adjoint.

For pseudoreal irreps, t = -1 and s < 0, and the symmetric product gives the adjoint and the antisymmetric one the scalar.