Transforming Matrix Operations: Row-Major to Column-Major

  • Thread starter: swartzism
SUMMARY

This discussion focuses on transforming matrix operations from row-major to column-major format, specifically in the context of Fortran programming. It notes that treating a row-major array as a column-major array is equivalent to transposing it, which is costly because a transpose requires data movement and is difficult to do in place for non-square matrices. The conversation emphasizes using libraries such as BLAS, which can interpret a matrix as transposed without explicit transposition. Participants also discuss how to reorder the nested loops for column-major access, particularly in relation to the dimensions of the betatP and betatM arrays.

PREREQUISITES
  • Understanding of matrix operations and their storage formats (row-major vs. column-major).
  • Familiarity with Fortran programming, specifically Fortran 90 syntax.
  • Knowledge of linear algebra libraries, particularly BLAS.
  • Experience with nested loops and array manipulation in programming.
NEXT STEPS
  • Research how to implement matrix transposition in Fortran without data movement.
  • Learn about the BLAS library and its options for matrix transposition.
  • Explore techniques for optimizing nested loops for column-major matrix operations.
  • Investigate the implications of memory layout on computational performance in numerical algorithms.
USEFUL FOR

This discussion is beneficial for software developers, particularly those working with numerical computing in Fortran, data scientists optimizing matrix operations, and anyone interested in improving performance through efficient memory management in linear algebra applications.

swartzism
Is there a straightforward way to switch matrix operations from row-major to column-major? From Wikipedia:
Treating a row-major array as a column-major array is the same as transposing it. Because performing a transpose requires data movement, and is quite difficult to do in-place for non-square matrices, such transpositions are rarely performed explicitly. For example, software libraries for linear algebra, such as the BLAS, typically provide options to specify that certain matrices are to be interpreted in transposed order to avoid the necessity of data movement.
But applying this to the nested do-loop I am working with isn't so straightforward.

Code:
! ncol = 666
! nmu = 18
! ncomp = 4

! For each (i,j), accumulate the component-weighted sums over k.
! Note: k is the last subscript, so (assuming betatP and betatM are
! declared with dimensions (nmu,ncol,ncomp)) the inner loop strides
! nmu*ncol elements through memory on each iteration, since Fortran
! storage is column-major.
      do 120 j=1,ncol
         do 130 i=1,nmu
            temppp = 0.
            temppm = 0.
            do 140 k=1,ncomp
               temppp = temppp + (bcomp(k)/btotal)*betatP(i,j,k)
               temppm = temppm + (bcomp(k)/btotal)*betatM(i,j,k)
 140        continue
            totlpp(i,j) = temppp
            totlpm(i,j) = temppm
 130     continue
 120  continue

betatP and betatM should have dimensions (k,i,j) for this to be column-major, but I'm not sure how to go about changing the computations to do so.

Any ideas?
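
For context on the BLAS remark in the quoted passage: the k-reduction above is really a matrix-vector product, so one option is to hand it to BLAS and let the TRANS argument pick the interpretation instead of moving data. The following is only a sketch, assuming a single-precision external BLAS (sgemv) is linked; the weight vector w is a hypothetical helper introduced here for illustration, not part of the original code.
Code:
real :: w(ncomp)   ! hypothetical helper; assumes ncomp is known here
integer :: k

do k = 1, ncomp
   w(k) = bcomp(k)/btotal
end do

! With the original layout betatP(nmu,ncol,ncomp), view betatP as an
! (nmu*ncol) x ncomp matrix; totlpp is then that matrix times w:
call sgemv('N', nmu*ncol, ncomp, 1.0, betatP, nmu*ncol, w, 1, 0.0, totlpp, 1)
call sgemv('N', nmu*ncol, ncomp, 1.0, betatM, nmu*ncol, w, 1, 0.0, totlpm, 1)

! If the arrays were instead stored k-first, betatP(ncomp,nmu,ncol), the
! same result comes from asking BLAS to read the matrix transposed,
! with no copy needed:
! call sgemv('T', ncomp, nmu*ncol, 1.0, betatP, ncomp, w, 1, 0.0, totlpp, 1)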
 
Hhhmmm, I have never concerned myself with these kinds of things; my data is not that large, and computers are fast enough by now.

Is your data that large? Are you really concerned about speed and performance? Making sure that the fastest-varying index takes you from one memory address to the adjacent one is all that will be achieved if you manage to line up your memory layout with your calculations.

How did betatP and betatM come about? Can you make the switch back there in the first place? Then you can come back to these nested loops and do the right thing.
 
Is this the kind of thing you are looking for?
Code:
totlpp=0.0  ! array operation, I presume you are using Fortran 90
totlpm=0.0  !
do k = 1, ncomp
   coeff = bcomp(k)/btotal      ! hoist the weight out of the inner loops
   do j = 1, ncol
      do i = 1, nmu             ! first subscript innermost: unit stride
         totlpp(i,j) = totlpp(i,j) + coeff*betatP(i,j,k)
         totlpm(i,j) = totlpm(i,j) + coeff*betatM(i,j,k)
      end do
   end do
end do
I don't have a Fortran compiler at the moment, so I haven't tested whether this produces the same results or whether it is what you are looking for... I'm still not quite sure what you want to achieve...
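
For comparison, here is a sketch of what the original loop nest might look like if betatP and betatM really were redeclared with the component index first, i.e. with dimensions (ncomp,nmu,ncol), as the first post suggests. This is hypothetical, untested code in the same spirit as the snippet above:
Code:
! Hypothetical declarations: betatP(ncomp,nmu,ncol), betatM(ncomp,nmu,ncol)
do j = 1, ncol
   do i = 1, nmu
      temppp = 0.
      temppm = 0.
      do k = 1, ncomp   ! k is now the first subscript: unit-stride access
         temppp = temppp + (bcomp(k)/btotal)*betatP(k,i,j)
         temppm = temppm + (bcomp(k)/btotal)*betatM(k,i,j)
      end do
      totlpp(i,j) = temppp
      totlpm(i,j) = temppm
   end do
end do
Either way the arithmetic is identical; only the order in which memory is touched changes.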
 