How Can I Vectorize a Computation in a Python Device Simulation Algorithm?

  • Thread starter: maverick280857
  • Tags: Python
In summary, the algorithm accumulates a product of the form [tex]\tilde{G}_{n} \leftarrow \tilde{G}_{n} + G \Sigma_{c}^{in} G^{\dagger}[/tex], and because [itex]\Sigma_{c}^{in}[/itex] is diagonal, the computation can be vectorized by premultiplying the columns of [itex]G[/itex] component-wise with the diagonal entries of [itex]\Sigma_{c}^{in}[/itex] and then performing a single matrix product.
  • #1
maverick280857
Hi,

I am working on a device simulation algorithm, and am implementing it in Python on my laptop that runs Arch Linux. A particular step requires me to perform a computation of the form

[tex]\tilde{G} = G \Sigma_{c}^{in} G^{\dagger}[/tex]

where [itex]G[/itex] and [itex]\Sigma_{c}^{in}[/itex] are both [itex]N \times N[/itex] matrices, and [itex]\Sigma_{c}^{in}[/itex] is in fact a diagonal matrix.

N is about 5040.

Now, if I do this naively, the command to do it in Python is

Code:
Gtilde_n = Gtilde_n + dot( dot(G, sigcIn), conjugate(transpose(G)) )

(The addition is for purposes of sub-band summation, and isn't relevant to the question I'm posing.)

but this takes about 2 hours when it works (sometimes it just crashes my laptop).

Now, if I use the fact that [itex]\Sigma_{c}^{in}[/itex] is diagonal, I can write an explicit (if naive-looking) for loop of the form

Code:
for row in range(N):
    for col in range(N):
        acc = 0
        for alpha in range(N):
            acc += G[row, alpha] * sigcIn[alpha, alpha] * conjugate(G[col, alpha])
        Gtilde_n[row, col] += acc

But this isn't a nice piece of code: it has three nested for loops, and it is a rather old-fashioned way of doing the following calculation

[tex]\tilde{G}_{ij} = \sum_{\alpha}\sum_{\beta} G_{i\alpha}(\Sigma_{c}^{in})_{\alpha\beta}(G^{\dagger})_{\beta j} = \sum_{\alpha,\beta} G_{i\alpha}(\Sigma_{c}^{in})_{\alpha\alpha}\delta_{\alpha\beta} (G^{\dagger})_{\beta j} =\sum_{\alpha} G_{i\alpha}(\Sigma_{c}^{in})_{\alpha\alpha}G^{*}_{j\alpha}[/tex]

Is there some way to vectorize this?

In general, is there a better way to perform this computation?
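For reference, the elementwise sum above can be collapsed in NumPy by scaling the columns of G with the diagonal of the Σ matrix via broadcasting, followed by a single matrix product. A minimal sketch (the names N, G, sigcIn mirror the post; the values here are small random stand-ins, not the actual simulation data):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                      # tiny stand-in for N ~ 5040
G = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
sigcIn = np.diag(rng.standard_normal(N))   # diagonal, as in the post

# Naive form: two full N x N matrix products.
naive = G @ sigcIn @ G.conj().T

# Vectorized form: broadcasting G * diag scales column alpha of G by
# (Sigma_c^in)_{alpha,alpha}; one matrix product then finishes the sum
# over alpha.  This replaces one of the two O(N^3) products with an
# O(N^2) scaling.
fast = (G * np.diag(sigcIn)) @ G.conj().T

print(np.allclose(naive, fast))  # True
```

Storing the diagonal as a 1-D array from the start (rather than a full N×N matrix) also saves memory at N ~ 5040.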
 
  • #2
maverick280857 said:
[tex]\tilde{G}_{ij} = \cdots = \sum_{\alpha} G_{i\alpha}(\Sigma_{c}^{in})_{\alpha\alpha}G^{*}_{j\alpha}[/tex]

maverick280857 said:
Is there some way to vectorize this?

Trivially, by partitioning α's range for each ij, but I would rather try to exploit this
[tex]G_{i\alpha} G^{*}_{j\alpha} = \left(G_{j\alpha} G^{*}_{i\alpha}\right)^{*} [/tex] and/or (given that Σ is real, which I suspect) premultiply all [itex](G_{i})_{\alpha}[/itex] vectors component-wise by a vector made of the diagonal components of Σ. That could well be done in parallel.
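Both ideas can be sketched in NumPy. Assuming the diagonal of Σ is real (as suspected above), the result [itex]\tilde{G}[/itex] is Hermitian, so in principle only one triangle needs computing; the premultiplication itself is a single broadcast. A small illustrative example with hypothetical values:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6
G = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
sig_diag = rng.standard_normal(N)      # real diagonal of Sigma_c^in

# Premultiply the columns of G component-wise by the diagonal entries,
# then a single matrix product completes the sum over alpha.
Gt = (G * sig_diag) @ G.conj().T

# With a real diagonal, Gt_{ij} = conj(Gt_{ji}): the result is Hermitian,
# so the lower triangle is the conjugate of the upper triangle.
print(np.allclose(Gt, Gt.conj().T))  # True
```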

maverick280857 said:
In general, is there a better way to perform this computation?
Are there any known algebraic constraints on [itex]G[/itex]?
 
  • #3
Thanks for your reply.

Solkar said:
Are there any known algebraic constraints on [itex]G[/itex]?

No, it's just the matrix representation of a Green's function.
 
  • #4
What does this notation mean - [itex]\Sigma_{c}^{in}[/itex]?

You said that it was a diagonal matrix, but I'm curious about the significance of c and in.
 
  • #5
maverick280857 said:
No, its just the matrix representation of a Green's function.

You're sure there's no constraint related to its eigenspaces?
 
  • #6
Mark44 said:
What does this notation mean - [itex]\Sigma_{c}^{in}[/itex]?

You said that it was a diagonal matrix, but I'm curious about the significance of c and in.

So [itex]\Sigma_{c}^{in} = \Sigma_{1}^{in} + \Sigma_{2}^{in}[/itex] is simply the sum of two so-called inscattering matrices, computed from the self-energy in a many-body problem. There are two terms because there are two contacts. For each contact,

[tex]\Sigma_{k}^{in} = f(E)\Gamma_k[/tex]

k = 1, 2, and

[tex]\Gamma_k = i[\Sigma_k - \Sigma_k^\dagger][/tex]

It turns out that [itex]\Gamma_k[/itex] is a real matrix, i.e. all its elements are real.
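The construction above is easy to sketch numerically. A minimal example (the Σ_k here is a random complex stand-in, and f_E is a placeholder for the Fermi factor f(E), not actual simulation data):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4
Sigma_k = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

# Broadening matrix: Gamma_k = i (Sigma_k - Sigma_k^dagger).
# This is Hermitian by construction.
Gamma_k = 1j * (Sigma_k - Sigma_k.conj().T)

f_E = 0.5                       # placeholder for the Fermi factor f(E)
Sigma_in_k = f_E * Gamma_k      # inscattering matrix for contact k

print(np.allclose(Gamma_k, Gamma_k.conj().T))  # True
```

Note that i(Σ − Σ†) is always Hermitian; it is real only when Σ_k is additionally symmetric, which is a property of the specific self-energies in this problem rather than of the formula itself.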
 

