Understanding the Time Complexity of Nested Loops in Algorithms


Discussion Overview

The discussion revolves around understanding the time complexity of nested loops in algorithms, specifically focusing on a given code snippet involving two nested loops. Participants explore the implications of the loop structure on time complexity, addressing concepts such as logarithmic growth and the impact of initialization on loop behavior.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants express confusion about how to analyze the time complexity of the nested loops, particularly regarding the initialization and behavior of the variable j.
  • One participant suggests that the body of the inner j loop executes 1 + ⌊log₂(n / i)⌋ times for each iteration of the outer i loop.
  • Another participant proposes that the overall time complexity is O(n log n) but seeks clarification on how to sum the contributions from the loops.
  • Some participants argue that the inner loop's execution count varies with respect to i, complicating a straightforward multiplication of the complexities of the outer and inner loops.
  • A participant mentions that if j were initialized to 1 instead of i, the inner loop would indeed run log(n) times, raising questions about the impact of initialization on complexity analysis.
  • Another participant provides a rigorous argument for the time complexity, defining functions to represent the time taken by the algorithm and the inner loop, leading to the conclusion that A(N) = O(N log N).
  • Some participants discuss the importance of understanding how changes in N affect the number of executions of the inner loop, emphasizing the scalability of the algorithm rather than exact counts.

Areas of Agreement / Disagreement

Participants do not reach a consensus on the exact nature of the time complexity due to differing interpretations of the loop behavior and initialization. Multiple competing views remain regarding the impact of the variable i and the structure of the loops.

Contextual Notes

There are limitations in the discussion regarding assumptions about the initialization of variables and the specific conditions under which the loops operate. Participants express uncertainty about the implications of these factors on the overall time complexity.

Who May Find This Useful

This discussion may be useful for individuals studying algorithm analysis, particularly those interested in time complexity related to nested loops and logarithmic growth in computational contexts.

needhelp83
Ok, I am brand new at this so I am kind of confused how to figure this out.

Code:
 i ← n
 while i >= 1
  j ← i
  while j <= n
   <body of the j loop>, needs Θ(1)
   j ← j*2
  end j while
  i ← i-1
 end i while

I know that with nested loops, n goes to a power of 2, but it states that in my problem. What exactly am I missing, and how do I show each step?
 
needhelp83 said:
Ok, I am brand new at this so I am kind of confused how to figure this out.

Code:
 i ← n
 while i >= 1
  j ← i
  while j <= n
   <body of the j loop>, needs Θ(1)
   j ← j*2
  end j while
  i ← i-1
 end i while

I know that with nested loops, n goes to a power of 2, but it states that in my problem. What exactly am I missing, and how do I show each step?

n is an initializer, it would appear. So it does not change in value, does it? And the middle loop doesn't seem right. j gets initialized to n the first time, is left as 2n after the first pass, and would appear to fail the comparison the 2nd time through? I must be missing something as well...


EDIT -- Oh, I see that j gets re-initialized before the 2nd comparison, so it will not fail. In fact, i just keeps getting decremented until the outer loop comparison fails, I guess.
 
The body of the inner j loop is executed 1 + ⌊log₂(n / i)⌋ times for a given pass of the outer i loop. Now sum this over the different values of i from the outer loop.
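As a quick sanity check on that formula, here is a minimal sketch in C (with n = 100 chosen arbitrarily) that runs one pass of the j loop for each i and compares the actual count against 1 + ⌊log₂(n / i)⌋, up to floating-point rounding:

Code:
#include <math.h>
#include <stdio.h>

/* For each i, run one pass of the j loop and count how many times its body
   executes, then compare with the predicted 1 + floor(log2(n / i)). */
int main(void) {
    int n = 100;  /* arbitrary test size */
    for (int i = n; i >= 1; i--) {
        int count = 0;
        for (int j = i; j <= n; j *= 2)
            count++;                      /* one execution of the loop body */
        int predicted = 1 + (int)floor(log2((double)n / i));
        printf("i=%3d  actual=%d  predicted=%d\n", i, count, predicted);
    }
    return 0;
}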
 
Alright, I didn't get this one quite right on the homework, but I am trying to figure out exactly how to solve this thing. The answer was n log n, but how do you total this up when going through the loop?


Code:
i ← n                                     1
while i >= 1                              n
   j ← i                                  1
   while j <= n                           n?
      <body of the j loop>, needs Θ(1)    Θ(1)
      j ← j*2                             1
   end j while
   i ← i-1                                1
end i while

Now when I total up, I disregard the constants, so I am left with n. Now I assume that the inner loop is log n, but I don't fully understand. Thanks for the replies, JustSam and berkeman, but further direction would be greatly appreciated.
 
While I get what JustSam is saying, that's really not what O(N) is all about (keep the math out of it.).
"Summing up" the parts of an algorithm is, as far as I'm concerned, not how I solve these.

I rewrote what you put down so I could visualize it:

Code:
for (i = n; i >= 1; i--) {        // LOOP B
   for (j = i; j <= n; j *= 2) {  // LOOP A
      <constant time steps>
   }
}

Alright, so we have linear-over-n many logarithmic-over-n loops. Multiply to get n*log(n).

Don't overanalyze O(N). These problems will introduce things (like the variable i above) just to waste your time. The goal is not to know exactly how many times a line will be executed given a certain N; the goal is to know how changing N will change the number of times the inner step is executed (O(N) tells you how scalable your algorithm is).

So, think of it this way: if I increase N, there will be linearly more outer loops, and logarithmically more inner runs of the middle statement.

Consider these cases. If I gave you,

Code:
for(i = 0; i < n; i++){
   for(j = 0; j < n; j++){
      <statement>
   }
}

the runtime is obviously O(N^2)

Now try
Code:
for(i = 0; i < n; i++){
   for(j = i; j < n; j++){
      <statement>
   }
}

It's STILL O(N^2), man. We have linearly-over-n many linear-over-n inner loops. Multiply to get n*n.
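As a rough check, a minimal counting sketch in C (the n values here are arbitrary) makes the point concrete: doubling n roughly quadruples both counts, and starting j at i only halves the constant factor rather than changing the quadratic growth:

Code:
#include <stdio.h>

/* Count the inner-statement executions for the two loop shapes above. */
int main(void) {
    for (int n = 1000; n <= 4000; n *= 2) {
        long full = 0, tri = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                full++;                    /* first example: j starts at 0 */
        for (int i = 0; i < n; i++)
            for (int j = i; j < n; j++)
                tri++;                     /* second example: j starts at i */
        printf("n=%5d  full=%9ld  triangular=%9ld\n", n, full, tri);
    }
    return 0;
}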

Another example:
Code:
for(i = 1; i < n; i*=2){
  for(j = i; j <= n; j*=2){
    <statement>
  }
}
Go through it with me: we have logarithmic-in-n many logarithmic-in-n loops. Multiply to get O((log2(n))^2).
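To spell that out a bit (a rough sum, ignoring floors): writing i = 2^k, the inner loop runs about log2(n) - k + 1 times, so the total is roughly (log2(n) + 1) + log2(n) + ... + 1, which is about (log2(n))^2 / 2, i.e. on the order of (log2(n))^2.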
 
taifunbrowser said:
Alright, so we have linear-over-n many logarithmic-over-n loops. Multiply to get n*log(n).
In all fairness, that doesn't quite work: the inner loop varies with respect to i, and it is easy to construct examples where a simplistic multiplication gives the wrong value, such as:
Code:
for(int i = 1; i < N; i *= 2) {
  for(int j = 0; j < i; ++j) {
    // Something
  }
}
which executes "Something" O(N) times, not O(N^2) times.
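(To spell out the count: the inner loop runs i times, and i takes the values 1, 2, 4, ..., up to the largest power of 2 below N, so the body executes 1 + 2 + 4 + ... < 2N times in total.)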


(keep the math out of it.).
How can you keep the math out of it? :confused: The question we're trying to answer is already math!

In fact (assuming your intuition isn't simply flat out wrong), the way you are approaching this problem is most likely just applying the basic ideas of calculus! You intuitively recognize that most of the N times through the outer loop take nearly log(N)/log(2) steps and everything else is negligible, which heuristically justifies just multiplying the two.

But, for the record, it is very easy to turn it into a fully rigorous argument. For simplicity, let me define two functions:
A(N) = the time to run the whole algorithm
B(i,N) = the time to run the inner loop

First, we just replace the inner loop time with something that's always bigger, but still logarithmic:
We know B(i,N) < 1 + log(N) / log(2), and therefore
A(N) < N * (1 + log(N) / log(2)) = O(N log N)
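(Spelled out: A(N) is essentially B(N,N) + B(N-1,N) + ... + B(1,N), ignoring the constant per-pass overhead of the outer loop itself, and replacing every term by the bound 1 + log(N) / log(2) gives at most N * (1 + log(N) / log(2)).)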

Now, we isolate "most" of the iterations and find a logarithmic lower bound for those:
If i < N/2, then B(i,N) = log(N) / log(2), and therefore
A(N) > (N/2) log(N) / log(2) = O(N log N)

And we put it together to get A(N) = O(N log N)
 
needhelp83 said:
Now I assume that the inner loop is log n, but I don't fully understand.

It would be log(n), base 2, if j were initialized to 1 instead of i each time. Are you sure that is not the case?
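For comparison, here is a minimal C sketch of that variant (n = 64 chosen arbitrarily), where j restarts at 1 so the inner count no longer depends on i:

Code:
#include <stdio.h>

/* Variant where j restarts at 1 on every pass of the outer loop: the inner
   body then runs floor(log2(n)) + 1 times regardless of i, giving roughly
   n * log2(n) body executions in total. */
int main(void) {
    int n = 64;  /* arbitrary test size */
    long body = 0;
    for (int i = n; i >= 1; i--)
        for (int j = 1; j <= n; j *= 2)
            body++;
    printf("n=%d  body executions=%ld\n", n, body);  /* prints 64 * 7 = 448 */
    return 0;
}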

EDIT: just read Hurkyl's last post more carefully ... looks like a good argument to me.
 
@hurkyl: Ah, that makes sense.

Minor correction, do you mean

Code:
for(int i = 1; i < N; i *= 2) {
  for(int j = 0; j < i; ++j) {
    // Something
  }
}
which executes "Something" O(N) times, not O(N*Log(N)) times.

Well, I was curious why flipping the inner with the outer loop mattered so much, and I was really surprised. Here are some graphs:

http://taif09.exofire.net/report/ONdata/

So yes, it looks like every situation with nested loops is a little different. I still don't think the inclusion of i / a "middle" variable is really the important thing, but I guess I don't have any basis for this assumption. Neat, though.
 
Thanks for all the help.

Wow, this seems to be a very interesting problem. So, looking at the arguments, I gather that the inner loop is log n while the outer loop remains n, and you multiply them together.
 
