Formulating Linear Constraints on a Matrix

JeSuisConf
Hi everyone. I have this problem which I am trying to formulate. Basically, I have the following linear constraints:

$$p_{11} = 2$$
$$p_{22} = 5$$
$$p_{33} + 2p_{12} = -1$$
$$2p_{13} = 2$$
$$2p_{23} = 0$$

And these are for the symmetric matrix

$$\mathbf{P} = \begin{pmatrix} p_{11} & p_{12} & p_{13} \\ p_{12} & p_{22} & p_{23} \\ p_{13} & p_{23} & p_{33} \end{pmatrix}$$

I would like to express the linear constraints while keeping ##\mathbf{P}## in matrix form.

I can work with ##\mathbf{P}## itself or with a vector of its entries. The linear constraints are easy if I use a vector (##\mathbf{Ap} = \mathbf{b}##), but then I don't know how to recover ##\mathbf{P}## as a matrix from the vector! And if I keep ##\mathbf{P}## as a matrix, all the constraints are easy to formulate except ##p_{33} + 2p_{12} = -1##. Can anyone help me figure this out?

If anyone's curious, I'm trying to solve for ##\mathbf{P}## over the cone of PSD matrices using SDP. But I am entirely new to SDP and I'm scratching my head over formulating this problem. I feel stupid right now :'(
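For readers landing on the same question: the standard way to keep ##\mathbf{P}## in matrix form is to write each linear constraint as a trace inner product ##\operatorname{tr}(\mathbf{A}_k \mathbf{P}) = b_k## with a symmetric matrix ##\mathbf{A}_k##. A sketch for the awkward constraint ##p_{33} + 2p_{12} = -1##:

$$\mathbf{A} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad \operatorname{tr}(\mathbf{A}\mathbf{P}) = 2p_{12} + p_{33} = -1,$$

because a symmetric ##\mathbf{A}## picks up each off-diagonal entry of ##\mathbf{P}## twice, which may also be why the off-diagonal constraints above appear with doubled coefficients.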
 
I answered my own question... and the answer has to do with algebraic dependence in the structure of the problem, and with how SDP problems are formulated. I just thought I was missing something obvious, hence my convoluted search.
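For context, a common statement of the primal SDP form (which the trace trick above feeds into) is

$$\min_{\mathbf{X} \succeq 0} \ \operatorname{tr}(\mathbf{C}\mathbf{X}) \quad \text{subject to} \quad \operatorname{tr}(\mathbf{A}_k \mathbf{X}) = b_k, \quad k = 1, \dots, m,$$

so a set of linear constraints on a symmetric matrix variable is exactly the data ##(\mathbf{A}_k, b_k)##. The problem here is pure feasibility, so ##\mathbf{C}## can be taken to be ##\mathbf{0}##.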
 
Well, I'm glad you answered it. It looks simple to me: if you let ##p_{12} = p##, then ##p_{33} = -1 - 2p## and all the other entries are given directly:
$$\begin{pmatrix} 2 & p & 1 \\ p & 5 & 0 \\ 1 & 0 & -1-2p \end{pmatrix}$$
 
HallsofIvy said:
Well, I'm glad you answered it. It looks simple to me: if you let ##p_{12} = p##, then ##p_{33} = -1 - 2p## and all the other entries are given directly:
$$\begin{pmatrix} 2 & p & 1 \\ p & 5 & 0 \\ 1 & 0 & -1-2p \end{pmatrix}$$

That's exactly what came out in the end. I'm not sure why I didn't see it at first... I always seem to manage to do things in the most roundabout way.
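For anyone who wants to check this numerically, here is a minimal sketch of the feasibility SDP in Python using cvxpy (my choice of tool, not the original poster's; any SDP-capable modeling layer works, and cvxpy's bundled conic solvers handle the PSD cone):

```python
import cvxpy as cp

# Symmetric 3x3 decision variable P
P = cp.Variable((3, 3), symmetric=True)

# The linear constraints from the original post, written directly
# against the matrix entries, plus the PSD cone constraint.
constraints = [
    P >> 0,                       # P is positive semidefinite
    P[0, 0] == 2,                 # p11 = 2
    P[1, 1] == 5,                 # p22 = 5
    P[2, 2] + 2 * P[0, 1] == -1,  # p33 + 2 p12 = -1
    2 * P[0, 2] == 2,             # 2 p13 = 2
    2 * P[1, 2] == 0,             # 2 p23 = 0
]

# Pure feasibility problem: any constant objective will do.
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

print(prob.status)  # "optimal" if a feasible P exists
print(P.value)
```

Note that the solver effectively searches over the single remaining degree of freedom ##p = p_{12}##, consistent with HallsofIvy's parametrization.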
 