# Banach sub-algebra

1. Sep 11, 2014

### Fredrik

Staff Emeritus
Edit: I originally wrote that $\mathcal A$ is a Banach algebra. The assumption that goes into the theorem is stronger. It's a C*-algebra. I am however still mainly interested in the claim that $\mathcal A_1$, as defined below, is a Banach sub-algebra of $\mathcal B(\mathcal A)$.

Let $\mathcal A$ be a C*-algebra without identity. For each $x\in\mathcal A$, define $L_x:\mathcal A\to\mathcal A$ by $L_xy=xy$ for all $y\in\mathcal A$. It's trivial to show that $L_x\in\mathcal B(\mathcal A)$ for all $x\in\mathcal A$. The map $x\mapsto L_x$ with domain $\mathcal A$ will be denoted by L. It's easy to show that L is an isometric algebra homomorphism into $\mathcal B(\mathcal A)$. Define $\mathcal A_1=\{L_x+\lambda\,|\,x\in\mathcal A,\, \lambda\in\mathbb C\}$. The notation $\mathcal A_1=L(\mathcal A)+\mathbb C$ makes this definition easier to remember. $\mathcal A_1$ is closed under addition, scalar multiplication and multiplication. Supposedly* (this is what I want to prove), it's also complete. Since it's a subset of the complete space $\mathcal B(\mathcal A)$, every Cauchy sequence in $\mathcal A_1$ converges in $\mathcal B(\mathcal A)$. What's not obvious is that the limit of such a sequence is itself in $\mathcal A_1$. So that is what I want to prove.
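To keep a concrete example in mind (my own toy model, not from Conway): take $\mathcal A\approx C_0((0,1))$, sampled on an interior grid, with pointwise multiplication and the sup norm. Then $L_f$ is just pointwise multiplication by $f$, and the isometry and homomorphism properties of L can be sanity-checked numerically:

```python
import numpy as np

# Model A = C_0((0,1)) by sampling functions on an interior grid,
# with pointwise multiplication and the sup norm.
t = np.linspace(0.01, 0.99, 500)
f = np.sin(2 * np.pi * t) * t * (1 - t)   # an element of A (small near the ends)

sup = lambda h: np.max(np.abs(h))

# Isometry: ||L_f|| = sup|f|.  The operator norm is attained at a unit
# "bump" concentrated where |f| is largest.
k = np.argmax(np.abs(f))
g = np.zeros_like(f)
g[k] = 1.0                                # sup norm of g is 1
assert np.isclose(sup(f * g), sup(f))     # ||L_f g|| = ||f|| with ||g|| = 1

# Homomorphism: L_{fh} = L_f L_h, i.e. multiplying by f*h equals
# multiplying by h and then by f.
h = np.cos(3 * np.pi * t) * t * (1 - t)
assert np.allclose((f * h) * g, f * (h * g))
```

This doesn't prove anything about the general case, of course, but it makes the definitions easy to experiment with.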

*) The claim is made by Conway in "A course in operator theory", in the proof of theorem 1.5, on page 3. http://books.google.com/books?id=gt...way operator&hl=sv&pg=PA3#v=onepage&q&f=false. His notation is slightly different from mine. (In particular, he writes λ where I write L)

Some of my thoughts: Let $(T_n)_{n=1}^\infty$ be an arbitrary convergent sequence in $\mathcal A_1$. Let $(x_n)$ and $(\lambda_n)$ be sequences in $\mathcal A$ and $\mathbb C$ respectively, such that $T_n=L_{x_n}+\lambda_n$. It would be nice if we could use that $(T_n)$ is Cauchy to show that these two sequences are Cauchy, and therefore convergent. Then we could define $x=\lim_nx_n$ and $\lambda=\lim_n\lambda_n$, and perhaps show that $L_{x_n}+\lambda_n\to L_x+\lambda$. But I don't see a way to proceed from
$$\varepsilon>\|T_n-T_m\|=\|(L_{x_n}+\lambda_n)-(L_{x_m}+\lambda_m)\| =\|L_{x_n-x_m}+(\lambda_n-\lambda_m)\|.$$ When we're dealing with Hilbert spaces, the usual way to continue a calculation like this would be to square both sides of the inequality and then use the Pythagorean theorem (assuming that the terms are orthogonal), but I don't see what to do here.

It looks like (I haven't worked through that part of the proof yet) we can prove that the C*-identity holds, without proving completeness first, when the involution is defined by $(L_x+\lambda)^* =L_{x^*}+\bar\lambda$. I was thinking that maybe we could use that somehow, but when I tried, it just made the calculation longer, and I ran into the same issue as above.
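As a sanity check of that involution formula (again in my commutative toy model $\mathcal A\approx C_0((0,1))$, not from Conway): $L_f+\lambda$ acts as pointwise multiplication by $f+\lambda$, its adjoint as multiplication by $\bar f+\bar\lambda$, and the C*-identity $\|T^*T\|=\|T\|^2$ reduces to $\sup|f+\lambda|^2=(\sup|f+\lambda|)^2$:

```python
import numpy as np

# Check ||T*T|| = ||T||^2 for T = L_f + lambda in the commutative model,
# where everything acts by pointwise multiplication and the norm is sup.
t = np.linspace(0.01, 0.99, 400)
f = (t * (1 - t)) * np.exp(2j * np.pi * t)   # a complex-valued element of A
lam = 0.5 - 1.2j

sup = lambda h: np.max(np.abs(h))
T = f + lam                               # symbol of L_f + lambda
Tstar = np.conj(f) + np.conj(lam)         # symbol of (L_f + lambda)* = L_{f*} + conj(lambda)

# T*T has symbol |f + lam|^2, so its sup norm is (sup|f + lam|)^2.
assert np.isclose(sup(Tstar * T), sup(T) ** 2)
```

So at least in this model, the C*-identity for $\mathcal A_1$ holds independently of any completeness argument.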

Last edited: Sep 11, 2014
2. Sep 11, 2014

### jambaugh

Can't you use the triangle inequality on the norms, and pick n and m such that the component terms are within epsilon/2 of each other, to show that their difference (or sum) is within epsilon?

I vaguely recall that being the trick to show the sum of two Cauchy sequences is Cauchy in the most generic setting.

(Or am I misunderstanding your issue?)

3. Sep 11, 2014

### Fredrik

Staff Emeritus
You mean like this?
$$\|L_{x_n-x_m}+(\lambda_n-\lambda_m)\|\leq\|L_{x_n-x_m}\|+\|\lambda_n-\lambda_m\| =\|x_n-x_m\|+\|\lambda_n-\lambda_m\|<\frac\varepsilon 2+\frac\varepsilon 2=\varepsilon.$$ In that case, the answer is no, because we don't know that $(x_n)$ and $(\lambda_n)$ are Cauchy sequences. In my idea for a proof, the first step is to prove that they are.

What we know is that $(L_{x_n}+\lambda_n)$ is a Cauchy sequence. That's our starting point. But since we have $\varepsilon>$ at the start of the calculation, it only makes sense to use the reverse triangle inequality $\|x+y\|\geq\|x\|-\|y\|$, to get a ≥ instead of a ≤.

$$\varepsilon>\|T_n-T_m\|=\|(L_{x_n}+\lambda_n)-(L_{x_m}+\lambda_m)\| =\|L_{x_n-x_m}+(\lambda_n-\lambda_m)\|\geq\|x_n-x_m\|-\|\lambda_n-\lambda_m\|.$$ This doesn't imply that the norms on the right are small. They could both be huge.

4. Sep 12, 2014

### jambaugh

Oh... I thought there must be something more to it. But then, of course, you won't get there from here, because...
Clearly, the fact that a sum or difference of sequences is itself Cauchy doesn't imply that each sequence is Cauchy. One could easily construct counterexamples.
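For instance (a minimal numeric illustration of such a counterexample): take $a_n=n$ and $b_n=-n$; the sum is constantly zero, hence Cauchy, while neither part is.

```python
import numpy as np

# (a_n + b_n) is Cauchy (it's constantly 0) while neither (a_n) nor (b_n) is.
n = np.arange(1, 101, dtype=float)
a = n
b = -n
s = a + b
assert np.all(s == 0)          # the sum is trivially Cauchy
assert abs(a[-1] - a[0]) > 1   # (a_n) is not Cauchy: gaps don't shrink
```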

Hmmm.... I have to run to class. I'll think on this some more.

5. Sep 12, 2014

### Fredrik

Staff Emeritus
Right, but this isn't an arbitrary Cauchy sequence that's written as a sum of two sequences in an arbitrary way. Most, if not all, of the terms in the first sequence belong to a different subspace than the terms in the other*. Consider e.g. a convergent sequence in $\mathbb R^2$ that's written as a sum of a sequence $(x_ne_1)$ with all terms on the x axis and a sequence $(y_ne_2)$ with all terms on the y axis. Both of those sequences will be Cauchy. This is easy to prove using the Pythagorean theorem. The key steps are:
\begin{align}
&\varepsilon^2>\|(x_n e_1+y_ne_2)-(x_me_1+y_me_2)\|^2 =\|(x_n-x_m)e_1+(y_n-y_m)e_2\|^2 =\|(x_n-x_m)e_1\|^2+\|(y_n-y_m)e_2\|^2\\
&\geq\begin{cases}\|x_ne_1-x_me_1\|^2\\ \|y_ne_2-y_me_2\|^2.\end{cases}
\end{align} The last equality in the first line is the Pythagorean theorem, so this proof relies on orthogonality, but the orthogonality isn't essential. It just makes the proof easier. If the subspaces aren't orthogonal, we can use the law of cosines instead of the Pythagorean theorem.
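Numerically, the same point looks like this: for a convergent sequence in $\mathbb R^2$ split into its axis components, Pythagoras bounds each coordinate difference by the full distance, so each component sequence is automatically Cauchy.

```python
import numpy as np

# A convergent sequence in R^2, split into an x-axis part and a y-axis part.
n = np.arange(1, 2001, dtype=float)
x = 3.0 + 1.0 / n                  # x-coordinates (converge to 3)
y = -2.0 + (-1.0) ** n / n         # y-coordinates (converge to -2)

# ||p_i - p_j||^2 = |x_i - x_j|^2 + |y_i - y_j|^2, so each coordinate
# difference is at most the full distance between the points.
for i, j in [(0, 1999), (10, 500), (100, 1998)]:
    d = np.hypot(x[i] - x[j], y[i] - y[j])   # ||p_i - p_j||
    assert abs(x[i] - x[j]) <= d + 1e-12
    assert abs(y[i] - y[j]) <= d + 1e-12
```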

*) Can $L_x$ be a multiple of the identity in $\mathcal B(\mathcal A)$? I'm not sure. If $L_x=\mu$ for some $\mu\in\mathbb C$, then $L_x-\mu$ is an additive identity in $\mathcal A_1$. The additive identity is unique, so this would imply that $L_x-\mu=L_0+0$. But this doesn't seem to imply that $\mu=0$ and $x=0$. If it did, L would be injective, and I'm not sure it can be. So my guess is that there's no $x\in\mathcal A$ such that $L_x$ is a multiple of the identity.

Edit: I figured out how to prove that $L_x$ can't be a nonzero multiple of the identity. Suppose that it is. Let $\mu\in\mathbb C$ be such that $L_x=\mu$ and $\mu\neq 0$. Since L is linear, this equality implies that $L_{\frac x \mu}$ is the identity. So for all $y\in\mathcal A$, we have $y=L_{\frac x \mu}y=\frac x \mu y$. This means that $\frac x \mu$ is a left identity in $\mathcal A$. But that implies that $\big(\frac x\mu\big)^*$ is a right identity, because for all $y\in\mathcal A$, we have
$$y\left(\frac x \mu\right)^* =\left(\frac x \mu y^*\right)^* =(y^*)^*=y.$$ Since $\frac x \mu$ is a left identity and its adjoint is a right identity, we have
$$\frac x\mu =\frac x \mu \left(\frac x \mu\right)^* =\left(\frac x \mu\right)^*.$$ This means among other things that $\frac x \mu$ is also a right identity, and therefore an identity. This contradicts the assumption that $\mathcal A$ doesn't have an identity.

Last edited: Sep 12, 2014
6. Sep 13, 2014

### Fredrik

Staff Emeritus
I found a proof strategy in Sunder. Apparently I'm supposed to look at the quotient vector space $\mathcal B(\mathcal A)/L(\mathcal A)$ and use that the associated projection map is continuous to show that $L(\mathcal A)+\mathbb C$ is a closed set. I will have to think about how to do this.
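For reference, here is one way that strategy could play out, sketched using only facts already established in this thread (that L is isometric and that no $L_x$ is a nonzero multiple of the identity); I haven't checked it against Sunder's actual argument:

```latex
% L is isometric and \mathcal A is complete, so L(\mathcal A) is a closed
% subspace of \mathcal B(\mathcal A); and by the argument above,
% I \notin L(\mathcal A).  Hence
d = \operatorname{dist}\bigl(I,\,L(\mathcal A)\bigr) > 0.
% For T = L_x + \lambda I with \lambda \neq 0,
\|L_x + \lambda I\| = |\lambda|\,\bigl\|I + L_{x/\lambda}\bigr\| \geq |\lambda|\,d,
% so if (T_n) is Cauchy, then (\lambda_n) is Cauchy:
|\lambda_n - \lambda_m| \leq \tfrac{1}{d}\,\|T_n - T_m\|,
% and then, since L is isometric, (x_n) is Cauchy too:
\|x_n - x_m\| = \|L_{x_n - x_m}\| \leq \|T_n - T_m\| + |\lambda_n - \lambda_m|.
```

From there, $x=\lim_n x_n$ and $\lambda=\lim_n\lambda_n$ exist, and $T_n\to L_x+\lambda\in\mathcal A_1$, which would close the gap in my original attempt.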

It's really frustrating to read Conway's books sometimes. I don't know how he can think that this isn't even worthy of a comment.

Last edited: Sep 13, 2014