Vector that satisfies systems of equations simultaneously

In summary, the conversation discusses the task of normalizing a discretized function, denoted ##p_{k,k'}##. This function needs to satisfy two conditions simultaneously: 1) ##\sum_{k=1}^{M} p^*_{k,k'} w_k=1## for all ##k'=1,2,...,M## and 2) ##\sum_{k=1}^{M} p^*_{k,k'} w_k \hat \Omega_k \cdot \hat \Omega_{k'}=g## for all ##k'=1,2,...,M##. The ##w_k## are quadrature weights and the ##\hat\Omega_k## are unit direction vectors (versors). The conversation works out how to pose and solve the resulting system of equations.
  • #1
Telemachus
Hi. I want to normalize a discretized function ##p_{k,k'}## so that it satisfies two conditions simultaneously. The normalized function ##p^*_{k,k'}## has to satisfy:

1) ##\sum_{k=1}^{M} p^*_{k,k'} w_k=1##, for all ##k'=1,2,...,M##;

2) ##\sum_{k=1}^{M} p^*_{k,k'} w_k \hat \Omega_k \cdot \hat \Omega_{k'}=g##, for all ##k'=1,2,...,M##

The ##w_k## are weights, the ##\hat\Omega_k## are versors (unit direction vectors), and I have to modify ##p_{k,k'}##, which is a given function of the ##\hat\Omega_k##, so that it satisfies both conditions.

For example, if I only had to satisfy condition 1), I would have the trivial solution:

##p^*_{k,k'}=p_{k,k'}/\sum_{j=1}^{M} p_{j,k'} w_j##.
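For concreteness, here is a minimal numerical sketch of this single-condition normalization (illustration only; the array names are placeholders), assuming ##p## is stored as an ##M\times M## NumPy array p[k, kp] and the weights as a length-##M## vector w:

[code]
import numpy as np

def normalize_condition_1(p, w):
    """Rescale each column k' of p so that sum_k p*[k, k'] * w[k] = 1.

    p : (M, M) array with p[k, kp] = p_{k,k'}
    w : (M,)   array of quadrature weights
    """
    col_sums = w @ p          # col_sums[kp] = sum_k w[k] * p[k, kp]
    return p / col_sums       # one rescaling factor per column k'

# quick check on symmetric random data
rng = np.random.default_rng(0)
M = 8
p = rng.random((M, M))
p = 0.5 * (p + p.T)           # impose the symmetry p_{k,k'} = p_{k',k}
w = rng.random(M)

p_star = normalize_condition_1(p, w)
print(np.allclose(w @ p_star, 1.0))    # True: condition 1) holds for every k'
print(np.allclose(p_star, p_star.T))   # generally False: the symmetry is lost
[/code]

Note that there is one rescaling factor per column ##k'##, so a symmetric input generally loses its symmetry after this step, a point that comes up later in the thread.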

However, I'm not sure how to proceed now that both conditions have to be satisfied simultaneously.

I was thinking of posing the problem this way: I could try to find the vector of unknowns ##p^*_{k,k'}## by writing the system of equations in matrix form. So, for example, for the first condition I would look for the ##p^*_{k,k'}## that satisfy:

##\left(\sum_{j=1}^{M} p_{j,k'} w_j\right) p^*_{k,k'}=p_{k,k'}## for each ##k'##.

I am trying to write these conditions in matrix form. Something like (this is clearly wrong):
##\begin{bmatrix}
p_{1,1}w_1 & p_{2,1}w_2 & p_{3,1}w_3 & \dots & p_{M,1}w_M \\
p_{1,2}w_1 & p_{2,2}w_2 & p_{3,2}w_3 & \dots & p_{M,2}w_M \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
p_{1,M}w_1 & p_{2,M}w_2 & p_{3,M}w_3 & \dots & p_{M,M}w_M \\
\vdots & \vdots & \vdots & \ddots & \vdots
\end{bmatrix}
\begin{bmatrix}
p^*_{1,1} \\
p^*_{1,2} \\
\vdots \\
p^*_{1,M} \\
\vdots
\end{bmatrix}=\begin{bmatrix}
p_{1,1} \\
p_{1,2} \\
\vdots \\
p_{1,M} \\
\vdots
\end{bmatrix}
##

This matrix should extend to also include the system for condition 2). However, I am having some trouble figuring out the matrix coefficients and how to properly set up the systems of equations.
 
  • #2
When you say "normalize" I presume that you are only considering re-scaling the values simultaneously. So your first normalization condition fixes your discrete function and you have no more flexibility there. The additional condition could only provide a constraint to solve for the ##\Omega##'s, or, if those are fixed, it would at best be compatible with the first condition but more likely would overdetermine your system and give you no solutions.

hmmm... you say the ##\Omega##'s are versors, which upon lookup means unit quaternions. This gives you some extra phase degree of freedom if you are also allowing the discrete function values ##p_{k,k'}## to be likewise. That's all if you mean "normalize" in this way. It would be helpful to understand your context by giving the application of these systems: for example, whether you are trying to normalize the whole double-indexed array of values or each "row" separately, or whether you are simply trying to find any solutions to the constraints from among all possible arrays of values.

As for matrix form. Your first condition looks like this:
[tex] \left[\begin{array}{cccc}w_1&w_2& \cdots & w_M\end{array}\right] \left[ \begin{array}{cccc} p^*_{1,1} & p^*_{2,1} & \cdots & p^*_{M,1}\\ p^*_{1,2} & p^*_{2,2} & \cdots & p^*_{M,2}\\ \vdots & \vdots & \ddots & \vdots\\ p^*_{1,M} & p^*_{2,M} & \cdots & p^*_{M,M} \end{array}\right] = \left[\begin{array}{cccc}1&1& \cdots & 1\end{array}\right][/tex]

But given the way you are using the weights it might be better to express this as a matrix product:
Let
[tex]P^* = \left[ \begin{array}{cccc} p^*_{1,1} & p^*_{2,1} & \cdots & p^*_{M,1}\\ p^*_{1,2} & p^*_{2,2} & \cdots & p^*_{M,2}\\ \vdots & \vdots & \ddots & \vdots\\ p^*_{1,M} & p^*_{2,M} & \cdots & p^*_{M,M} \end{array}\right][/tex]
and let
[tex]W =diag(w_1,w_2, \ldots, w_M) = \left[\begin{array}{cccc}w_1& 0 & \cdots & 0\\ 0 & w_2& \cdots &0 \\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & w_M \end{array}\right] [/tex]
And similarly: ##\boldsymbol{\Omega} = diag(\hat{\Omega}_1,\hat{\Omega}_2,\ldots)##
Finally define the row of 1's and column of 1's. ##1_R = \left[\begin{array}{cccc} 1 & 1 &\cdots & 1\end{array}\right]##, ##1_C = 1_R^T##.

Then condition 1 reads: ## 1_R WP^* = 1_R## and condition 2 reads ## 1_R \boldsymbol{\Omega}WP^* \boldsymbol{\Omega} = g1_R##.
That's a bit awkward and makes me want to review your application to see if you've represented things correctly.
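As a numerical cross-check of these two conditions, here is a small sketch (illustration only, under assumed conventions: the ##\hat\Omega_k## are taken as unit vectors in ##\mathbb{R}^3## stacked into an ##M\times 3## array, ##p^*_{k,k'}## is stored directly as p_star[k, kp], and condition 2 is evaluated as the weighted sum rather than through the diagonal ##\boldsymbol{\Omega}## matrices):

[code]
import numpy as np

def check_conditions(p_star, w, Omega, g, tol=1e-10):
    """Check conditions 1) and 2) for a candidate p_star.

    p_star : (M, M) array with p_star[k, kp] = p*_{k,k'}
    w      : (M,)   quadrature weights
    Omega  : (M, 3) unit direction vectors Omega_k
    g      : target value for the second condition
    """
    ones = np.ones(len(w))
    W = np.diag(w)

    cond1 = ones @ W @ p_star                       # entry k' = sum_k w_k p*_{k,k'}
    mu = Omega @ Omega.T                            # mu[k, kp] = Omega_k . Omega_k'
    cond2 = (w[:, None] * p_star * mu).sum(axis=0)  # entry k' = sum_k w_k p*_{k,k'} mu_{k,k'}

    return np.max(np.abs(cond1 - 1.0)) < tol, np.max(np.abs(cond2 - g)) < tol
[/code]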
 
  • #3
jambaugh said:
When you say "normalize" I presume that you are only considering re-scaling the values simultaneously.
Yes, that's it.

So your first normalization condition fixes your discrete function and you have no more flexibility there. The additional condition could only provide a constraint to solve for the ##\Omega##'s, or, if those are fixed, it would at best be compatible with the first condition but more likely would overdetermine your system and give you no solutions.
There is also a symmetry condition that I haven't mentioned: ##p_{k,k'}=p_{k',k}##. This is valid in a wide range of situations, in particular for the function ##p## I am interested in at this point.

The ##\hat \Omega## are known, the ##p_{k,k'}## are known, the ##w_k## are known; the unknowns are the ##p^*_{k,k'}##. If only condition 1) had to be satisfied, it would be trivial to find these ##p^*_{k,k'}##. Now they must satisfy both conditions.

The system has ##2M## equations and ##M^2## unknowns ##p^*##; with the symmetry constraint the number of unknowns is reduced. However, it is still underdetermined, so the idea is to find the ##p^*## that minimizes the error norm.
hmmm... you say the ##\Omega##'s are versors, which upon lookup means unit quaternions. This gives you some extra phase degree of freedom if you are also allowing the discrete function values ##p_{k,k'}## to be likewise. That's all if you mean "normalize" in this way. It would be helpful to understand your context by giving the application of these systems: for example, whether you are trying to normalize the whole double-indexed array of values or each "row" separately, or whether you are simply trying to find any solutions to the constraints from among all possible arrays of values.

Ok. I need to find any solution to the constraints, but the important fact is that ##p_{k,k'}## is known, and the solution ##p^*_{k,k'}## has to be a normalization of ##p_{k,k'}## that satisfies conditions 1) and 2).
As for matrix form. Your first condition looks like this:
[tex] \left[\begin{array}{cccc}w_1&w_2& \cdots & w_M\end{array}\right] \left[ \begin{array}{cccc} p^*_{1,1} & p^*_{2,1} & \cdots & p^*_{M,1}\\ p^*_{1,2} & p^*_{2,2} & \cdots & p^*_{M,2}\\ \vdots & \vdots & \ddots & \vdots\\ p^*_{1,M} & p^*_{2,M} & \cdots & p^*_{M,M} \end{array}\right] = \left[\begin{array}{cccc}1&1& \cdots & 1\end{array}\right][/tex]

But given the way you are using the weights it might be better to express this as a matrix product:
Let
[tex]P^* = \left[ \begin{array}{cccc} p^*_{1,1} & p^*_{2,1} & \cdots & p^*_{M,1}\\ p^*_{1,2} & p^*_{2,2} & \cdots & p^*_{M,2}\\ \vdots & \vdots & \ddots & \vdots\\ p^*_{1,M} & p^*_{2,M} & \cdots & p^*_{M,M} \end{array}\right][/tex]
and let
[tex]W =diag(w_1,w_2, \ldots, w_M) = \left[\begin{array}{cccc}w_1& 0 & \cdots & 0\\ 0 & w_2& \cdots &0 \\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & w_M \end{array}\right] [/tex]
And similarly: ##\boldsymbol{\Omega} = diag(\hat{\Omega}_1,\hat{\Omega}_2,\ldots)##
Finally define the row of 1's and column of 1's. ##1_R = \left[\begin{array}{cccc} 1 & 1 &\cdots & 1\end{array}\right]##, ##1_C = 1_R^T##.

Then condition 1 reads: ## 1_R WP^* = 1_R## and condition 2 reads ## 1_R \boldsymbol{\Omega}WP^* \boldsymbol{\Omega} = g1_R##.
That's a bit awkward and makes me want to review your application to see if you've represented things correctly.

Yes, I know; I made a few mistakes before. I was trying to do what follows, it was late, and I edited the post many times. I think the conditions you have written are the correct matrix representation of conditions 1) and 2). However, in those equations the unknowns are the ##p^*_{k,k'}##, and I was trying to somehow introduce the ##p_{k,k'}## into the constraints, because the ##p_{k,k'}## are known and it is they that have to be normalized. So, thinking of the situation where one only has condition 1), which can be trivially solved as in the first post, I was trying to write that situation in matrix form (unsuccessfully), and then perhaps extend the array for the unknown ##p^*_{k,k'}## (which has two indices and looks like another matrix, but should perhaps be written in column-major form) in order to get a system of the type ##A\mathbf{p}=\mathbf{x}## which I can solve.
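A sketch of that column-major assembly (illustration only; the array names and the use of 3-component direction vectors are assumptions): the two conditions become a ##2M \times M^2## matrix acting on the column-major vectorization of ##p^*##, which makes the underdetermination explicit but does not yet pick out a particular solution.

[code]
import numpy as np

def build_constraint_matrix(w, Omega, g):
    """Assemble C and rhs so that C @ vec(p_star) = rhs encodes conditions 1) and 2).

    vec() is taken in column-major (Fortran) order, i.e. vec(p_star)[k + M*kp] = p*_{k,k'}.
    Returns C with shape (2M, M^2) and rhs with shape (2M,).
    """
    M = len(w)
    mu = Omega @ Omega.T                     # mu[k, kp] = Omega_k . Omega_k'
    C = np.zeros((2 * M, M * M))
    rhs = np.concatenate([np.ones(M), g * np.ones(M)])
    for kp in range(M):
        cols = slice(M * kp, M * (kp + 1))   # column block holding p*_{:, k'}
        C[kp, cols] = w                      # condition 1), row k'
        C[M + kp, cols] = w * mu[:, kp]      # condition 2), row k'
    return C, rhs

# sanity check: the condition-1 rows reproduce the direct weighted column sums
M = 5
rng = np.random.default_rng(2)
Omega = rng.normal(size=(M, 3))
Omega /= np.linalg.norm(Omega, axis=1, keepdims=True)
w = rng.random(M)
p = rng.random((M, M))
C, rhs = build_constraint_matrix(w, Omega, g=0.8)
print(np.allclose(C[:M] @ p.flatten(order="F"), w @ p))   # True
[/code]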
 
  • #4
Pay very close attention to which of your actual variables are independent.

If you are normalizing known ##p_{k,k'}## by multiplying by a constant to yield the ##p^*##'s then you have only one unknown, the multiplicative factor, or at best the ##M## multiplicative factors of each "column" if you are normalizing only on a column-by-column basis. However, doing that would destroy your symmetry condition for the ##p^*##. If you are trying to normalize the whole then your 1st condition isn't right.

If you like, the ##M^2## unknowns come with an additional ##M^2## equations plus an additional set of unknowns, something like:
[tex] p^*_{k,k'} = A_{k'}\, p_{k,k'}[/tex]
where the A's are your rescaling parameters. Or maybe they're all the same, ##A##?

Are you trying to normalize or are trying to solve for the p's in general so that they are normalized among other conditions?

For example, I can solve your system by letting all the ##p^*##'s be zero except the first two in each column. The two variables per column let you solve the two conditions per column, and you throw the ##p##'s away. Clearly there's an implicit relationship between the ##p##'s and ##p^*##'s that you should make very explicit before you continue. The implication of "normalize" isn't quite consistent with what else you are trying to do.

In short... if you can solve a system completely and uniquely and you then add an additional constraint you either don't change the solution or you have no solution satisfying all conditions.
 
  • #5
I would like to add that if you are trying to set up a single linear system to solve your problem then you must make note of the fact that it is not actually a linear problem. The ## a p^* = p## condition is quadratic in the unknowns ##a## and ##p^*##. But again I think it's really only the scaling factor(s) that are your actual unknowns.

And yet again, it would be very very helpful to know the context of your question. What do the p's represent and most especially from where do the constraints come.
 
  • #6
jambaugh said:
Pay very close attention to which of your actual variables are independent.

If you are normalizing known ##p_{k,k'}## by multiplying by a constant to yield the ##p^*##'s then you have only one unknown, the multiplicative factor, or at best the ##M## multiplicative factors of each "column" if you are normalizing only on a column-by-column basis. However, doing that would destroy your symmetry condition for the ##p^*##. If you are trying to normalize the whole then your 1st condition isn't right.

If you like, the ##M^2## unknowns have an additional ##M^2## equations plus an additional set of unknowns something like:
[tex] p^*_{k,k'} = A_{k'}\, p_{k,k'}[/tex]
where the A's are your rescaling parameters. Or maybe they're all the same, ##A##?

That's right, but the A's should read ##A_{k,k'}##.
Are you trying to normalize or are trying to solve for the p's in general so that they are normalized among other conditions?

For example, I can solve your system by letting all the ##p^*##'s be zero except the first two in each column. The two variables per column let you solve the two conditions per column, and you throw the ##p##'s away. Clearly there's an implicit relationship between the ##p##'s and ##p^*##'s that you should make very explicit before you continue. The implication of "normalize" isn't quite consistent with what else you are trying to do.

In short... if you can solve a system completely and uniquely and you then add an additional constraint you either don't change the solution or you have no solution satisfying all conditions.

It is a kind of normalization in which, instead of having only the constraint that the discrete integral equal 1, the function also has to fulfill the second condition, where the moment integral in condition 2) must equal ##g##.
 
  • #7
jambaugh said:
I would like to add that if you are trying to set up a single linear system to solve your problem then you must make note of the fact that it is not actually a linear problem. The ## a p^* = p## condition is quadratic in the unknowns ##a## and ##p^*##. But again I think it's really only the scaling factor(s) that are your actual unknowns.

And yet again, it would be very very helpful to know the context of your question. What do the p's represent and most especially from where do the constraints come.

Ok. The thing is that this integral appears in a differential equation. In order for this equation to preserve energy conservation, condition 1) must be fulfilled.

The integral in continuum form is ##\int_{4\pi} p(\hat \Omega \cdot \hat \Omega')\, d\hat \Omega = 1##, where the integration is over the entire unit sphere. So condition 1) is for energy conservation.

Now, ##p(\hat \Omega \cdot \hat \Omega')## also satisfies ##\int_{4\pi} p(\hat \Omega \cdot \hat \Omega')\, \hat \Omega \cdot \hat \Omega'\, d\hat \Omega=g##; this is because ##p(\hat \Omega \cdot \hat \Omega')## is the Henyey-Greenstein phase function. This condition is satisfied by the phase function in continuum form, and it should be fulfilled in the discretized form too. That is why the normalization is needed.
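For reference, with this normalization convention the Henyey-Greenstein phase function and the two continuum conditions read:
[tex] p_{HG}(\hat \Omega \cdot \hat \Omega') = \frac{1}{4\pi}\,\frac{1-g^2}{\left(1+g^2-2g\,\hat \Omega \cdot \hat \Omega'\right)^{3/2}}, \qquad \int_{4\pi} p_{HG}\, d\hat\Omega = 1, \qquad \int_{4\pi} p_{HG}\,(\hat \Omega \cdot \hat \Omega')\, d\hat\Omega = g.[/tex]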
 
  • #8
Telemachus said:
That's right, but the A's should read ##A_{k,k'}##.
...

But that is problematic. If you are independently rescaling every single individual term then you might just as well throw them out and solve directly for the ##p^*##'s. Pick any non-zero number and I can "rescale" it to the value of 42, with A = 42/your number.

I don't think this is what you mean to do.

(... reading your latest post...)


Now to clarify one more point, could you explain how the single-variable (or two-variable, if you include the scattering parameter ##g##) phase function ##p(\theta)## or ##p(\cos(\theta))## relates to the double-indexed discrete series with which you are working?

If your two unit "versors" ##\hat{\Omega}## and ##\hat{\Omega}'## represent something like incident and scattering angles then you are, by the implicit relations to the single variable function imposing far more than the simple symmetry conditions.

It is one thing for a matrix to be symmetric, and another for a matrix to be the product of a row vector and its transpose. Not every symmetric matrix can be so written.

I would suggest you specifically name the single variable function ##p## and the two variable function something else, say ##P##.
[tex] P(\Omega, \Omega') = p(\Omega\bullet \Omega')[/tex]
(I'm dropping the hats as we can assume each Omega to be normalized.)

So now. Are you trying to solve for ##p##? Are you trying to find the best discrete approximation? Are you using the same one given in a reference (found this on a quick search) but trying to solve for something else?

[... hmmmm...]

I think I see what's up here... You appear to be approximating integration over the sphere with weighted sums so:
[tex]P_{k,k'} = P(\Omega_k,\Omega_{k'}) = p(\Omega_k \bullet \Omega_{k'})[/tex]
and in general
[tex]\int_{\mathbf{S}^2} f(\Omega) d\Omega \approx \sum_{k=1}^M f(\Omega_k) \cdot w_k[/tex]

As a function of two variables ##P## should have normalization with respect to both arguments independently but symmetry gives you one from the other. Likewise with your secondary scale condition.

Now here's the problem. The two conditions were built into the original function. It was solved for subject to those conditions plus whatever other considerations the authors ... considered. The discrete case being an approximation will only approximately satisfy either condition. You can scale it to satisfy one of them exactly but you'll not be able to satisfy both unless you want to do more than rescale them. I think you might however find it efficacious to first normalize ##P\to P^*## and then tweak the ##g## parameter, say by using a different ##g_d## value for the function used to define the discrete values ##P_{k,k'}## but chosen so the second condition gives you the desired ##g##.

That's my advice if I now understand your problem correctly.
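A sketch of that suggestion, for illustration only (the thread does not specify a quadrature, so a crude midpoint product rule and placeholder names are used here): discretize the Henyey-Greenstein function with a trial parameter ##g_d##, enforce condition 1) by column rescaling, and adjust ##g_d## by bisection until the discrete moment matches the desired ##g##.

[code]
import numpy as np

def hg(mu, g):
    """Henyey-Greenstein phase function, normalized so its integral over 4*pi is 1."""
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * mu) ** 1.5)

def product_quadrature(n_theta, n_phi):
    """Crude midpoint latitude-longitude product rule on the unit sphere (illustration only)."""
    theta, dtheta = np.linspace(0, np.pi, n_theta, endpoint=False, retstep=True)
    theta += 0.5 * dtheta
    phi, dphi = np.linspace(0, 2 * np.pi, n_phi, endpoint=False, retstep=True)
    T, P = np.meshgrid(theta, phi, indexing="ij")
    Omega = np.stack([np.sin(T) * np.cos(P),
                      np.sin(T) * np.sin(P),
                      np.cos(T)], axis=-1).reshape(-1, 3)
    w = (np.sin(T) * dtheta * dphi).reshape(-1)      # solid-angle weights, sum ~ 4*pi
    return Omega, w

def discrete_moment(g_d, Omega, w):
    """Column-normalize the discretized HG function, then return its discrete first moment."""
    mu = Omega @ Omega.T
    P = hg(mu, g_d)
    P_star = P / (w @ P)                             # enforce condition 1) column by column
    return (w[:, None] * P_star * mu).sum(axis=0).mean()   # average over k'

# choose g_d by bisection so that the discrete moment matches the desired g
Omega, w = product_quadrature(16, 32)
g_target = 0.8
lo, hi = 0.0, 0.99
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if discrete_moment(mid, Omega, w) < g_target:
        lo = mid
    else:
        hi = mid
g_d = 0.5 * (lo + hi)
print(g_d, discrete_moment(g_d, Omega, w))
[/code]

Condition 1) then holds exactly by construction, while condition 2) is met only through the adjusted ##g_d##, i.e. by slightly changing the underlying phase function.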
 
  • #9
jambaugh said:
But that is problematic. If you are independently rescaling every single individual term then you might just as well throw them out and solve directly for the ##p^*##'s. Pick any non-zero number and I can "rescale" it to the value of 42, with A = 42/your number.

I don't think this is what you mean to do.

(... reading your latest post...)


Now to clarify one more point, could you explain how the single-variable (or two-variable, if you include the scattering parameter ##g##) phase function ##p(\theta)## or ##p(\cos(\theta))## relates to the double-indexed discrete series with which you are working?

It's exactly what you did. I am integrating over the unit sphere, and the ##p## I am concerned with is actually the Henyey-Greenstein phase function.

If your two unit "versors" ##\hat{\Omega}## and ##\hat{\Omega}'## represent something like incident and scattering angles then you are, by the implicit relations to the single variable function imposing far more than the simple symmetry conditions.
They are incident and scattered angles.

It is one thing for a matrix to be symmetric, and another for a matrix to be the product of a row vector and its transpose. Not every symmetric matrix can be so written.

I would suggest you specifically name the single variable function ##p## and the two variable function something else, say ##P##.
[tex] P(\Omega, \Omega') = p(\Omega\bullet \Omega')[/tex]
(I'm dropping the hats as we can assume each Omega to be normalized.)

So now. Are you trying to solve for ##p##? Are you trying to find the best discrete approximation? Are you using the same one given in a reference (found this on a quick search) but trying to solve for something else?

[... hmmmm...]

I think I see what's up here... You appear to be approximating integration over the sphere with weighted sums so:
[tex]P_{k,k'} = P(\Omega_k,\Omega_{k'}) = p(\Omega_k \bullet \Omega_{k'})[/tex]
and in general
[tex]\int_{\mathbf{S}^2} f(\Omega) d\Omega \approx \sum_{k=1}^M f(\Omega_k) \cdot w_k[/tex]

As a function of two variables ##P## should have normalization with respect to both arguments independently but symmetry gives you one from the other. Likewise with your secondary scale condition.

Right.
Now here's the problem. The two conditions were built into the original function. It was solved for subject to those conditions plus whatever other considerations the authors ... considered. The discrete case being an approximation will only approximately satisfy either condition. You can scale it to satisfy one of them exactly but you'll not be able to satisfy both unless you want to do more than rescale them. I think you might however find it efficacious to first normalize ##P\to P^*## and then tweak the ##g## parameter, say by using a different ##g_d## value for the function used to define the discrete values ##P_{k,k'}## but chosen so the second condition gives you the desired ##g##.

That's my advice if I now understand your problem correctly.

Yes. Your understanding is correct. However, I was reading this paper (it is not an original idea of mine, but something I have found, and I would like to understand how to implement it): https://www.sciencedirect.com/science/article/pii/S0017931011006466

If you have access to the paper (I have access where I work), you can see these two conditions in equations (18a) and (18b).

The author says that he performs a normalization of the type ##p^*_{k,k'}=(1+A_{k,k'})p_{k,k'}##. The task is then to find the coefficients ##A_{k,k'}## by solving the system given by conditions 1) and 2), and the symmetry condition helps in reducing the number of unknowns (equations 18). The author says that, as the system is underdetermined, he performs a QR decomposition in order to find the solution which gives the minimum error. However, I don't know how to solve this system of equations, and I am not seeing how to write the matrix in order to perform the QR decomposition.

I think that if I do as you said, first normalizing the phase function to fulfill condition 1) and then modifying ##g## in order to fulfill condition 2), then after I modify ##g##, condition 1) will no longer be fulfilled. And at the same time I would be modifying the problem, as ##g## represents the asymmetry factor, which measures how forward- or backward-peaked the phase function is (and, through the phase function, the probability of forward or backward scattering).

EDIT: After reading the paper more carefully, I have noted that the normalization relation ##p^*_{k,k'}=(1+A_{k,k'}) p_{k,k'}## is part of the system of equations to be solved, which I wasn't considering. The unknowns are the ##A_{k,k'}##.
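Not having the paper's exact implementation, here is one plausible minimal sketch of that idea (all names and the equal-weight placeholder quadrature are assumptions): write both conditions as linear equations in the unknowns ##A_{k,k'}## and take the minimum-norm solution of the underdetermined system. np.linalg.lstsq returns that solution directly; the paper's QR route and the symmetry reduction are not reproduced here.

[code]
import numpy as np

def normalize_two_conditions(p, w, Omega, g):
    """Find A_{k,k'} so that p* = (1 + A) * p satisfies conditions 1) and 2).

    The 2M linear equations in the M^2 unknowns A_{k,k'} are underdetermined;
    np.linalg.lstsq returns the minimum-norm solution, i.e. the smallest
    correction A in the least-squares sense.
    """
    M = len(w)
    mu = Omega @ Omega.T
    C = np.zeros((2 * M, M * M))
    rhs = np.zeros(2 * M)
    for kp in range(M):
        cols = slice(M * kp, M * (kp + 1))          # column-major block for A[:, k']
        C[kp, cols] = w * p[:, kp]                  # from condition 1)
        C[M + kp, cols] = w * mu[:, kp] * p[:, kp]  # from condition 2)
        rhs[kp] = 1.0 - w @ p[:, kp]
        rhs[M + kp] = g - (w * mu[:, kp]) @ p[:, kp]
    A_vec, *_ = np.linalg.lstsq(C, rhs, rcond=None)
    A = A_vec.reshape(M, M, order="F")
    return (1.0 + A) * p

# verification with a discretized HG function on random directions
M = 12
rng = np.random.default_rng(3)
Omega = rng.normal(size=(M, 3))
Omega /= np.linalg.norm(Omega, axis=1, keepdims=True)
w = np.full(M, 4 * np.pi / M)          # placeholder equal weights
mu = Omega @ Omega.T
g = 0.3
p = (1 - g**2) / (4 * np.pi * (1 + g**2 - 2 * g * mu) ** 1.5)
p_star = normalize_two_conditions(p, w, Omega, g)
print(np.max(np.abs(w @ p_star - 1)))                              # ~ 1e-15
print(np.max(np.abs((w[:, None] * p_star * mu).sum(axis=0) - g)))  # ~ 1e-15
[/code]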
 
  • #10
Ok, so the author is solving a constrained optimization problem. He is minimizing error subject to the normalizing constraints. He could use this to solve for the ##p##'s or ##p^*##'s directly. The method is straightforward, though not simple. The most straightforward method of solution is to apply the method of Lagrange multipliers.

However, the devil is in the details, specifically in how the author expresses the error, as a function of the ##A##'s, that is to be minimized.

To set it up you'll need your error function ##E(A_{k,k'})##, your constraint functions ##C_1(A_{k,k'}) =1## and ##C_2(A_{k,k'}) = g##, and possibly your symmetry condition ##S(A_{k,k'}) = A_{k,k'} - A_{k',k}##, which is constrained to be zero.

You'll then want to solve the system of equations for the ##M^2+3## unknowns:
[tex] \frac{\partial}{\partial A_{k,k'}} E = \lambda_1\frac{\partial}{\partial A_{k,k'}}C_1 + \lambda_2 \frac{\partial}{\partial A_{k,k'}}C_2 + \lambda_3 \frac{\partial}{\partial A_{k,k'}}S[/tex]
[tex] C_1 = 1,\quad C_2 = g,\quad S = 0[/tex]
This gives you a system of ##M^2+3## equations in ##M^2+3## unknowns, counting the Lagrange multipliers ##\lambda_k,\ k=1,2,3##.

We do something similar to find the probability distribution, for systems with given average energy and particle number, that maximizes entropy subject to the expectation-value and probability-normalization constraints. You should find no end of examples online.
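As a minimal sketch of that Lagrangian route for the simplest quadratic error ##E = \tfrac{1}{2}\sum_{k,k'} A_{k,k'}^2## (an assumed choice; the paper's actual error function may differ), assuming the constraints have already been assembled as a linear system ##C\,\mathrm{vec}(A)=b## as in the earlier sketch, with the symmetry condition appended as extra rows of ##C## if desired:

[code]
import numpy as np

def lagrange_min_norm(C, b):
    """Minimize E(A) = 0.5 * ||vec(A)||^2 subject to C @ vec(A) = b
    using the method of Lagrange multipliers.

    Stationarity of L = E - lam^T (C a - b) gives a = C^T lam, and together
    with the constraints this is one symmetric linear (KKT) system.
    Assumes C has full row rank so the KKT matrix is nonsingular.
    """
    n_con, n_var = C.shape
    KKT = np.block([[np.eye(n_var), -C.T],
                    [C, np.zeros((n_con, n_con))]])
    rhs = np.concatenate([np.zeros(n_var), b])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:n_var], sol[n_var:]      # vec(A) and the Lagrange multipliers
[/code]

With ##C## and ##b## from the earlier sketch this reproduces the same minimum-norm solution that lstsq finds, and the returned multipliers play the role of the ##\lambda##'s above.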
 

What is a vector that satisfies systems of equations simultaneously?

A vector that satisfies systems of equations simultaneously is a set of values that can be substituted into all of the equations in a system, resulting in a true statement for each equation. This means that the vector is a solution to all of the equations in the system.

How can I find a vector that satisfies systems of equations simultaneously?

To find a vector that satisfies systems of equations simultaneously, you can use methods such as substitution, elimination, or graphing. These methods involve manipulating the equations in the system to isolate a variable and then solving for it.

Can there be more than one vector that satisfies systems of equations simultaneously?

Yes, there can be more than one vector that satisfies systems of equations simultaneously. This means that there can be multiple solutions to a system of equations. In some cases, there may be an infinite number of solutions.

What does it mean if a vector does not satisfy systems of equations simultaneously?

If a vector does not satisfy systems of equations simultaneously, it means that when the values in the vector are substituted into the equations in the system, at least one of the equations results in a false statement. This means that the vector is not a solution to the system and does not satisfy all of the equations.

Can a vector that satisfies systems of equations simultaneously be used to solve real-world problems?

Yes, a vector that satisfies systems of equations simultaneously can be used to solve real-world problems. Systems of equations can be used to model real-world situations such as the cost of a shopping trip or the growth of a population. Finding a vector that satisfies the system can provide the solution to the problem being modeled.
