Programming style in R - to array-process or not?

SUMMARY

The discussion centers on the programming style in R, particularly the preference for array processing over traditional looping constructs. R's ability to handle arrays with single statements, such as componentwise addition (e.g., c <- a + b), is highlighted as a powerful feature that can lead to elegant and concise code. However, users express concerns about the potential performance trade-offs, as array processing may be more memory-intensive and sometimes slower than explicit loops. The conversation also touches on comparisons with Python and APL, noting that while R encourages avoiding loops, Python does not share this philosophy, which some users find liberating.

PREREQUISITES
  • Understanding of R programming and its array processing capabilities
  • Familiarity with array operations and their performance implications
  • Knowledge of Python programming and its approach to loops
  • Basic concepts of sparse matrices and their applications in programming
NEXT STEPS
  • Research R's array processing functions and their performance benchmarks
  • Explore Python's array manipulation libraries, such as NumPy
  • Investigate the use of sparse matrix libraries in R and their efficiency
  • Learn about parallel processing techniques for optimizing array operations
USEFUL FOR

Data scientists, R programmers, and anyone interested in optimizing array processing techniques in programming languages, particularly R and Python.

andrewkirk
TL;DR
Can array-processing capability sometimes be more of a burden than a boon? Do you feel moral pressure to avoid loops when using an array-processing-capable language? Do you give in to the pressure?
R enables processing arrays in single statements rather than via loops, because almost all of its commands can apply to arrays as well as to scalars. I seem to recall, when I first learned it (a long while ago now), feeling that the intro texts and user comments online encouraged avoiding loops wherever possible.

I find the ability to process an array without looping very powerful, and it is dead easy for something like adding two arrays componentwise: we just write c <- a + b rather than a nest of loops with one nesting level per dimension. But it becomes difficult when the operations are more abstract and have several stages. I have become good at this over the years, and I enjoy the elegance of a piece of code that does something really complex and multi-dimensional in a handful of lines. But I am often nagged by the thought that I could write the code much faster if I just wrote out the loops and performed the operations componentwise. Some purist (puritanical?) streak usually stops me from doing that, even when I feel sure I could write the looping version in two minutes, whereas it takes me half an hour, and much searching for specialist helper functions, to do it without loops.
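The contrast is easy to see in a minimal R sketch (the small example matrices a and b are just illustrative):

```r
# Componentwise addition: one vectorized statement vs explicit loops.
a <- matrix(1:6, nrow = 2)
b <- matrix(7:12, nrow = 2)

# Vectorized: a single statement, regardless of the arrays' dimensions.
c_vec <- a + b

# Loop version: one nesting level per dimension of the arrays.
c_loop <- matrix(0L, nrow = nrow(a), ncol = ncol(a))
for (i in seq_len(nrow(a))) {
  for (j in seq_len(ncol(a))) {
    c_loop[i, j] <- a[i, j] + b[i, j]
  }
}

identical(c_vec, c_loop)  # TRUE
```

For addition the vectorized form is obviously shorter; the trade-off the post describes only bites once the operation needs several vectorized stages chained together.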

I have been learning Python recently and get the feeling that, while Python can do some array processing, there is no philosophy of avoiding loops. I find that rather liberating.

I'd love to hear the opinions of other R users as to whether they have ever felt a similar pressure to avoid loops, the extent to which their code avoids loops as a result, and whether it takes them a long time to find a non-looping solution where a looping solution would be quicker to write, despite taking up more lines.

There may be other languages for which similar issues arise. I used APL thirty years ago and seem to recall that it had array-processing capabilities and a philosophy of avoiding loops where possible. I'd be interested in comments on this issue for other languages too.

Another reason I try to avoid loops is a belief that code that uses array processing capabilities will generally run faster than code with explicit loops, since R is an interpreted language. I don't know if there's anything in that.
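That belief is easy to test directly. A rough sketch using base R's system.time (the exact numbers vary by machine and R version, so treat them as indicative; vectorized arithmetic runs the loop in compiled C, which is usually much faster than an interpreted for loop):

```r
# Timing comparison: vectorized addition vs an explicit R-level loop.
n <- 1e6
x <- runif(n)
y <- runif(n)

t_vec <- system.time({
  z1 <- x + y                      # loop happens in compiled C code
})["elapsed"]

t_loop <- system.time({
  z2 <- numeric(n)
  for (i in seq_len(n)) z2[i] <- x[i] + y[i]  # interpreted, one iteration at a time
})["elapsed"]

all.equal(z1, z2)                  # TRUE: both compute the same result
c(vectorized = t_vec, loop = t_loop)
```

The memory caveat raised later in the thread still applies: each vectorized stage may allocate a full-size intermediate result, which a fused hand-written loop would avoid.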
 
Sometimes implicit array processing can be parallelized in ways hand-written loops cannot, and that's the primary reason for using these constructs.

However, array processing can be more memory-intensive, because intermediate results must be stored.

One example I saw was in APL, where a very compact program computed the prime numbers from 1 to 1,000,000. It did so by making a 1,000,000 × 1,000,000 array, with each row representing a number and each entry containing a 0 or 1 to indicate whether the column number was a factor of the row number. It then summed across the columns to give a single count per row.

A filter was applied to those counts to pick out the row numbers whose sum was 2, meaning the number had exactly two factors, 1 and itself. That list was the prime numbers from 1 to 1,000,000.

Anyway, you can see in this example the heavy initial use of memory.
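The same technique can be sketched in R with outer(), using a small n so the n × n intermediate actually fits in memory (the cutoff n = 100 is just illustrative):

```r
# APL-style prime computation: build the full divisibility matrix at once.
n <- 100

# 0/1 matrix: entry [i, j] is 1 when column number j divides row number i.
# This n x n intermediate is the memory-hungry step described above.
divides <- outer(1:n, 1:n, function(i, j) as.integer(i %% j == 0))

# Count the factors of each number by summing across each row.
factor_counts <- rowSums(divides)

# Primes have exactly two factors: 1 and themselves.
primes <- which(factor_counts == 2)
primes[1:5]  # 2 3 5 7 11
```

Not a single explicit loop, but the whole divisibility matrix is materialized up front, which is exactly the memory/elegance trade-off under discussion.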
 
In power systems analysis, the matrices are very sparse, with 98% of off-diagonal elements zero. We use special sparse matrix libraries and methods, and of course we really don't allocate or store those 98% zero elements.

I've been retired for a long time, so my info is dated. Can modern compilers and parallel processing features deal efficiently with sparse matrices?
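I can't speak for compilers, but on the R side the Matrix package (shipped with R as a recommended package) provides sparse storage and sparse-aware algebra. A minimal sketch, with a fill rate chosen to mirror the ~2% nonzero figure above (the sizes are illustrative):

```r
library(Matrix)  # recommended package bundled with R

# A 1000 x 1000 matrix with roughly 2% of entries nonzero.
set.seed(1)
n   <- 1000
nnz <- 20000
dense <- matrix(0, n, n)
idx <- cbind(sample(n, nnz, replace = TRUE), sample(n, nnz, replace = TRUE))
dense[idx] <- runif(nnz)

# Compressed sparse column storage: the zero elements are simply not stored.
sparse <- Matrix(dense, sparse = TRUE)

object.size(dense)   # ~8 MB for the dense form
object.size(sparse)  # a small fraction of that

# Arithmetic uses the same operators; %*%, solve(), etc. dispatch to
# sparse algorithms that skip the stored zeros.
v <- runif(n)
all.equal(as.vector(sparse %*% v), as.vector(dense %*% v))  # TRUE
```

Whether the sparse routines also parallelize well is a separate question, but at least the "don't allocate the zeros" part is standard in R today.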
 
