Undergrad Reducing NxN Matrix to 2x2 w/ Physical Constraints

SUMMARY

The discussion focuses on reducing an arbitrary symmetric NxN matrix to a 2x2 matrix using auxiliary constraints in linear algebra. Participants detail the transformation of voltage and current relationships, specifically using a 3x3 matrix as a case study. Key equations include the relationships between voltages V_p, V_s and currents i_p, i_s, with emphasis on eliminating variables to simplify the system. The conversation highlights the need for programmatic approaches to handle larger matrices efficiently.

PREREQUISITES
  • Understanding of linear algebra concepts, particularly matrix transformations.
  • Familiarity with voltage and current relationships in electrical systems.
  • Knowledge of matrix inversion and its implications in solving systems of equations.
  • Experience with programming for mathematical computations, ideally in Python or MATLAB.
NEXT STEPS
  • Study matrix reduction techniques in linear algebra, focusing on symmetric matrices.
  • Learn about matrix inversion and its applications in electrical engineering.
  • Explore programming libraries for matrix operations, such as NumPy for Python.
  • Investigate advanced linear algebra topics, including eigenvalues and eigenvectors, for larger systems.
USEFUL FOR

Electrical engineers, mathematicians, and data scientists involved in systems modeling and matrix computations, and anyone looking to optimize complex electrical systems using linear algebra.

waynewec
TL;DR
Reducing an NxN matrix to a 2x2 by application of physical constraints
Gonna preface by saying I never thought linear algebra would be a class I would regret not taking so much... but in short the goal is to reduce an arbitrary symmetric NxN system using a set of auxiliary constraint relationships, e.g. for a 3x3

<br /> \begin{bmatrix}<br /> V_1\\<br /> V_2\\<br /> V_3\\<br /> \end{bmatrix}<br /> =<br /> \begin{bmatrix}<br /> L_{e11}&amp;L_{e12}&amp;L_{e12}\\<br /> L_{e21}&amp;L_{e22}&amp;L_{e23}\\<br /> L_{e31}&amp;L_{e32}&amp;L_{e33}\\<br /> \end{bmatrix}<br /> \cdot<br /> \begin{bmatrix}<br /> i_1\\<br /> i_2\\<br /> i_3\\<br /> \end{bmatrix}\\<br />
using the following constraints
##V_1=V_2=V_p##
##V_3=V_s##
##i_p=i_1+i_2##
##i_s=i_3##
to end up with an equivalent system with ##L_s##, ##L_p##, and ##M## in terms of the starting ##L_{eij}## matrix:
<br /> \begin{bmatrix}<br /> V_p\\<br /> V_s\\<br /> \end{bmatrix}<br /> =<br /> \begin{bmatrix}<br /> L_p&amp;M\\<br /> M&amp;L_s\\<br /> \end{bmatrix}<br /> \cdot<br /> \begin{bmatrix}<br /> i_p\\<br /> i_s\\<br /> \end{bmatrix}<br />
For those interested in the context, this is an application-specific usage of the method covered in https://onlinelibrary.wiley.com/doi/full/10.1002/eej.23240, but they glossed over some of the key linear algebra that I don't understand. Eventually I'll be extending this concept to quite large matrices with more complex auxiliary constraints, but for now I'd appreciate some guidance, and some good resources, to get me going.
 
Have you tried just eliminating variables? ##V_1 = V_2## relates the ##i_n##, making it possible to express ##i_1## in terms of ##i_p## and ##i_s##. Clearly, ##i_2 = i_p - i_1## and ##i_3 = i_s## eliminate ##i_2## and ##i_3##.

I get something like,

##V_p = (L_{11}-L_{12})i_1 + L_{12}i_p + L_{13}i_s##
## 0 = (L_{11}-L_{12})i_1 + (L_{12}-L_{22})(i_p-i_1) + (L_{13}-L_{23})i_s##
## V_s = (L_{31}-L_{32})i_1 + L_{32}i_p + L_{33}i_s##

Okay, just use the second equation to eliminate ##i_1##.
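For reference, carrying that elimination through (a sketch, assuming the matrix is symmetric so ##L_{ij}=L_{ji}##): the second equation gives

##i_1 = \dfrac{(L_{22}-L_{12})\,i_p + (L_{23}-L_{13})\,i_s}{L_{11}-2L_{12}+L_{22}}##

and substituting back into the other two, with ##\Delta = L_{11}-2L_{12}+L_{22}##,

##L_p = L_{12} + \dfrac{(L_{11}-L_{12})(L_{22}-L_{12})}{\Delta}##
##M = L_{13} + \dfrac{(L_{11}-L_{12})(L_{23}-L_{13})}{\Delta}##
##L_s = L_{33} - \dfrac{(L_{13}-L_{23})^2}{\Delta}##

The ##V_s## row gives the same ##M##, as it should for a symmetric reduced matrix.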
 
Paul Colby said:
Have you tried just eliminating variables? ##V_1 = V_2## relates the ##i_n##, making it possible to express ##i_1## in terms of ##i_p## and ##i_s##. Clearly, ##i_2 = i_p - i_1## and ##i_3 = i_s## eliminate ##i_2## and ##i_3##.

I get something like,

##V_p = (L_{11}-L_{12})i_1 + L_{12}i_p + L_{13}i_s##
## 0 = (L_{11}-L_{12})i_1 + (L_{12}-L_{22})(i_p-i_1) + (L_{13}-L_{23})i_s##
## V_s = (L_{31}-L_{32})i_1 + L_{32}i_p + L_{33}i_s##

Okay, just use the second equation to eliminate ##i_1##.
Absolutely valid, and an approach I've used, but any changes to the constraints or to the order of the input matrix require extremely tedious manual calculations. I was hoping for a direction that relies on matrix mathematics and can be implemented programmatically. 3x3, not so bad; 9x9 will make me want to kill myself.
 
Well, okay. The voltage constraints in matrix form,

##\left(\begin{array}{c} V_1 \\ V_2 \\ V_3\end{array}\right) = \left(\begin{array}{cc} 1 & 0 \\ 1 & 0 \\ 0 & 1\end{array}\right)\left(\begin{array}{c} V_p \\ V_s\end{array}\right)##

The current constraints in matrix form,

##\left(\begin{array}{c} i_p \\ i_s \end{array}\right) = \left(\begin{array}{ccc} 1 & 1 & 0 \\ 0 & 0 & 1\end{array}\right)\left(\begin{array}{c} i_1 \\ i_2 \\ i_3 \end{array}\right)##

More generally,

##V = C V_c##

and

##I_c = D I##

Clearly,

##V_c = C^{-1} L D^{-1} I_c##

is the solution. All you need to do is figure out what ##C^{-1}## and ##D^{-1}## really mean.
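Since ##C## and ##D## are not square, they have no ordinary inverses; making sense of ##C^{-1}## and ##D^{-1}## is exactly the step the paper glosses over. One standard way to make the reduction programmatic (a sketch, not taken from this thread): assume ##L## is invertible, note that ##D = C^T## for these constraints, and work in the admittance domain, which gives ##L_{reduced} = (C^T L^{-1} C)^{-1}##. A minimal NumPy sketch along those lines, where the function name and the numeric test values are made up for illustration:

```python
import numpy as np

def reduce_matrix(L, C):
    """Reduce an NxN impedance-style matrix L to an MxM matrix using the
    N x M voltage-constraint matrix C, assuming the current constraint is
    C^T and L is invertible: L_reduced = (C^T L^{-1} C)^{-1}."""
    Y = np.linalg.inv(L)      # admittance form of the full NxN system
    Yc = C.T @ Y @ C          # apply the constraints in the admittance domain
    return np.linalg.inv(Yc)  # back to impedance form: the reduced MxM matrix

# The 3x3 example from this thread: V1 = V2 = Vp, V3 = Vs, ip = i1 + i2, is = i3
C = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])

# Any symmetric, invertible test matrix (values chosen arbitrarily)
L = np.array([[5.0, 1.0, 0.5],
              [1.0, 4.0, 0.7],
              [0.5, 0.7, 3.0]])

L_red = reduce_matrix(L, C)   # 2x2 matrix [[Lp, M], [M, Ls]]

# Cross-check against the hand-elimination formulas worked out earlier
d  = L[0, 0] - 2 * L[0, 1] + L[1, 1]
Lp = L[0, 1] + (L[0, 0] - L[0, 1]) * (L[1, 1] - L[0, 1]) / d
M  = L[0, 2] + (L[0, 0] - L[0, 1]) * (L[1, 2] - L[0, 2]) / d
Ls = L[2, 2] - (L[0, 2] - L[1, 2]) ** 2 / d
assert np.allclose(L_red, [[Lp, M], [M, Ls]])
print(L_red)
```

For a larger system with the same kind of tie-these-terminals-together constraints, only ##C## changes: one column per reduced terminal, with a 1 in each row whose original terminal shares that voltage. More complicated auxiliary constraints would need a correspondingly more careful choice of what replaces ##C## and ##C^T##.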
 