Programming style in R - to array-process or not?

  • Thread starter: andrewkirk
  • Tags: Programming
Summary
R's array-processing capabilities allow for operations without loops, promoting concise and elegant code, although this can lead to challenges with more complex tasks. Users often feel pressured to avoid loops, believing that array processing is faster and more efficient, despite the potential for increased memory usage due to intermediate results. Comparisons are drawn with Python, where there is no strong emphasis on avoiding loops, which some find liberating. The discussion also highlights the use of specialized libraries for handling sparse matrices in power systems analysis, raising questions about modern compilers' efficiency with such data structures. Overall, the conversation reflects a tension between the elegance of array processing and the practicality of looping in programming.
andrewkirk
TL;DR
Can array-processing capability sometimes be more of a burden than a boon? Do you feel moral pressure to avoid loops when using an array-processing-capable language? Do you give in to the pressure?
R enables processing arrays with single statements rather than via loops, because almost all of its functions and operators can apply to arrays as well as to scalars. I seem to recall, when I first learned it (a long while ago now), feeling that the intro texts, and user comments online, encouraged the practice of avoiding loops where possible.

I find the ability to process an array without looping very powerful, and it is dead easy when one wants to do something like add two arrays componentwise: one just writes c <- a + b rather than a nest of loops with one nesting level for each dimension of the arrays. But it becomes difficult when one has to perform operations that are more abstract and may have several stages. I have become good at this over the years, and enjoy the elegance of a piece of code that does something really complex and multi-dimensional in a handful of lines. But I am often nagged by the thought that I could still write it much faster if I just wrote out the loops and performed the operations componentwise. Some purist (puritanical?) streak in me usually stops me from doing that, even when I feel sure I could write the looping code in two minutes, whereas it takes me half an hour, and much searching for specialist helper functions, to do it without loops.
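For concreteness, here is a minimal sketch of the contrast (the arrays are just toy examples):

```r
a <- matrix(1:6, nrow = 2)
b <- matrix(7:12, nrow = 2)

# Array-processing style: one statement, no loops.
c1 <- a + b

# Loop style: one nesting level per dimension of the arrays.
c2 <- matrix(0, nrow = nrow(a), ncol = ncol(a))
for (i in seq_len(nrow(a))) {
  for (j in seq_len(ncol(a))) {
    c2[i, j] <- a[i, j] + b[i, j]
  }
}

all(c1 == c2)  # TRUE
```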

I have been learning Python recently and get the feeling that, while Python can do some array processing, there is no philosophy of avoiding loops. I find that rather liberating.

I'd love to hear the opinions of other R users as to whether they have ever felt a similar pressure to avoid loops, the extent to which their code avoids loops as a result, and whether it takes them a long time to find a non-looping solution where a looping solution would be quicker to write, despite taking up more lines.

There may be other languages for which similar issues arise. I used APL thirty years ago and seem to recall it having array-processing capabilities and a philosophy of avoiding loops where possible. I'd be interested in comments on this issue for other languages too.

Another reason I try to avoid loops is a belief that code that uses array processing capabilities will generally run faster than code with explicit loops, since R is an interpreted language. I don't know if there's anything in that.
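For what it's worth, the belief is easy to test with a rough timing sketch like the one below; the usual explanation is that vectorized functions such as sum() do their looping in compiled C rather than in the R interpreter:

```r
x <- runif(1e7)

# Explicit loop, executed step by step by the R interpreter.
system.time({
  s <- 0
  for (v in x) s <- s + v
})

# Vectorized equivalent; the loop happens in compiled code.
system.time(total <- sum(x))
```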
 
Sometimes implicit array processing can be parallelized in ways that hand-written loops cannot, and that's the primary reason for using these constructs.

However, array processing can be more memory-intensive, because intermediate results have to be stored.

One example I saw was in APL, where a very compact program computed the prime numbers from 1 to 1,000,000. It did so by building a 1,000,000 × 1,000,000 array, with each row representing a number and each entry in that row holding a 0 or 1 to indicate whether the column number was a factor of the row number. It then summed each row over all of its columns to produce a single vector of factor counts.

A filter was then applied to that vector to pick out the numbers whose count was exactly 2, meaning a number with exactly two factors, 1 and itself. That list was the prime numbers from 1 to 1,000,000.
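In R the same idea can be sketched as below, scaled way down, since at the full size the boolean matrix alone would need on the order of 10^12 entries:

```r
n <- 100  # tiny stand-in for 1,000,000

# divides[i, j] is TRUE when column number j is a factor of row number i.
divides <- outer(1:n, 1:n, function(i, j) i %% j == 0)

# Count the factors of each number by summing each row.
counts <- rowSums(divides)

# Exactly two factors (1 and itself) means prime.
primes <- which(counts == 2)
head(primes)  # 2 3 5 7 11 13
```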

Anyway, you can see in this example the heavy initial use of memory.
 
In power systems analysis, the matrices are very sparse, with 98% of off-diagonal elements zero. We use special sparse matrix libraries and methods, and of course we really don't allocate or store those 98% zero elements.

I've been retired for a long time, so my info is dated. Can modern compilers and parallel processing features deal efficiently with sparse matrices?
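On the library side at least, R's Matrix package (one of the recommended packages shipped with R) stores only the non-zero entries. A minimal sketch of the storage saving, with made-up dimensions and random fill positions purely for illustration:

```r
library(Matrix)

n <- 5000
k <- 15000  # a tiny fraction of the n^2 = 25,000,000 possible entries
set.seed(1)
i <- sample(n, k, replace = TRUE)
j <- sample(n, k, replace = TRUE)

# Dense representation: every element allocated, mostly zeros.
dense <- matrix(0, n, n)
dense[cbind(i, j)] <- 1

# Sparse representation: only the non-zero positions and values stored.
sparse <- sparseMatrix(i = i, j = j, x = 1, dims = c(n, n))

print(object.size(dense))   # all 25,000,000 doubles: about 200 MB
print(object.size(sparse))  # only the stored non-zeros: well under 1 MB
```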
 