Runtime of the line where a recursive call happens = ?


Discussion Overview

The discussion revolves around determining the runtime complexity of a recursive algorithm presented in pseudo-code, which appears to perform sorting operations on an array. Participants explore the implications of the recursive call within the algorithm and how it affects the overall time complexity, considering various input scenarios and comparisons to known sorting algorithms.

Discussion Character

  • Homework-related
  • Technical explanation
  • Debate/contested

Main Points Raised

  • One participant seeks clarification on how to determine the runtime of the recursive call in the algorithm, noting that each line except the recursive call has constant runtime.
  • Another participant suggests that the worst-case performance depends on how often the recursion is called, implying that the input characteristics are crucial.
  • Some participants propose that the algorithm resembles bubble sort, questioning whether the longest input would require a factorial number of while-loop passes.
  • There is a suggestion that the recursive call might have a linear runtime, while the overall complexity could be quadratic, although this remains uncertain.
  • One participant discusses how to quantify the improvement in sortedness after each pass, indicating that the number of comparisons decreases with each pass.
  • Another participant elaborates on the number of comparisons made in bubble sort and suggests that the algorithm's complexity could be O(n^2) due to the nature of the sorting process.
  • Some participants consider modifications to the algorithm that could potentially improve its efficiency, discussing changes to the recursive call and the structure of the algorithm.

Areas of Agreement / Disagreement

Participants express differing views on the runtime complexity of the algorithm, with some suggesting O(n) for the recursive call and O(n^2) for the overall algorithm, while others question the validity of these claims. The discussion remains unresolved regarding the exact nature of the runtime complexity.

Contextual Notes

Participants note that the complexity analysis may depend on specific input characteristics and the behavior of the algorithm during execution. There is an acknowledgment that different approaches to quantifying sortedness may yield varying insights into the algorithm's performance.

s3a

Homework Statement


What is the runtime complexity for the following (pseudo-code) algorithm?:
Code:
Algorithm MyAlgorithm(A, n)
    Input: Array of integers containing n elements
    Output: Possibly modified Array A
        done ← true
        j ← 0
        while j ≤ n - 2 do
            if A[j] > A[j + 1] then
                swap(A[j], A[j + 1])
                done ← false
            j ← j + 1
        end while
        j ← n - 1
        while j ≥ 1 do
            if A[j] < A[j - 1] then
                swap(A[j - 1], A[j])
                done ← false
            j ← j - 1
        end while
        if ¬ done
            MyAlgorithm(A, n)
        else
            return A
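For experimentation, the pseudo-code can be translated directly into Python (a sketch; the function name and 0-based indexing are my choices, not part of the problem statement):

```python
def my_algorithm(a):
    """Direct translation of the pseudo-code: a forward bubble pass,
    then a backward bubble pass, recursing until no swap occurs."""
    n = len(a)
    done = True
    # forward pass: j = 0 .. n-2
    for j in range(n - 1):
        if a[j] > a[j + 1]:
            a[j], a[j + 1] = a[j + 1], a[j]
            done = False
    # backward pass: j = n-1 .. 1
    for j in range(n - 1, 0, -1):
        if a[j] < a[j - 1]:
            a[j - 1], a[j] = a[j], a[j - 1]
            done = False
    if not done:
        return my_algorithm(a)
    return a

print(my_algorithm([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```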

Homework Equations


Each primitive instruction is assumed to have the same constant amount of running time.

The Attempt at a Solution


Each line of the algorithm that's not "MyAlgorithm(A, n)" (without the quotes) has a constant run time, and each while loop is run (n-1) times.

I'm not sure about the run time of the line "MyAlgorithm(A, n)": the (recursive) call itself has a constant runtime, but on top of that comes the runtime of the method being called.

If the aforementioned line weren't a recursive call, I think I would understand the problem 100%.

Could someone please help me understand how to go about determining the runtime of that line, following which I should be able to determine the runtime of the algorithm itself?

Any input would be GREATLY appreciated!
 
You'll have to figure out how often the recursion might be called. That depends on the input - you are probably asked for the worst case performance.
 
What that algorithm seems to do is sort an array. So, I guess what you're saying is that I need to figure out which input will take the longest for the algorithm to sort. I tried applying the logic of bubble sort, since this algorithm seems to make bubble-sort-like passes, but from both directions.

Is the input that will take the longest an input that will take ##(n - 1)!## while-loop passes or ##(n - 1)! / 2## pairs of while-loop passes, or am I completely off the mark?
 
s3a said:
Is the input that will take the longest an input that will take ##(n - 1)!## while-loop passes or ##(n - 1)! / 2## pairs of while-loop passes, or am I completely off the mark?
Bubble sort isn't that bad. Try to find out exactly how the sortedness of the array has improved after one pass.
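One standard way to quantify "sortedness" (an aside; the helper names here are mine) is to count inversions, i.e. out-of-order pairs. Each adjacent swap removes exactly one inversion, so a pass over an unsorted array strictly lowers the count:

```python
def inversions(a):
    # number of pairs (i, j) with i < j but a[i] > a[j];
    # 0 means the array is fully sorted
    return sum(1 for i in range(len(a))
                 for j in range(i + 1, len(a)) if a[i] > a[j])

def forward_pass(a):
    # one forward bubble pass, like the first while loop in the pseudo-code
    for j in range(len(a) - 1):
        if a[j] > a[j + 1]:
            a[j], a[j + 1] = a[j + 1], a[j]
    return a

a = [5, 4, 3, 2, 1]
print(inversions(a))   # 10 inversions before any pass
forward_pass(a)        # a is now [4, 3, 2, 1, 5]
print(inversions(a))   # 6 -- strictly fewer after one pass
```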
 
Try to find out exactly how the sortedness of the array has improved after one pass.
How does one quantify that, in general?

Also, I suspect that the answer is that the line with the recursive call is O(n) and that the entire algorithm is O(n^2), but I'd like to be aware of a proper systematic way of thinking about it.
 
s3a said:
How does one quantify that, in general?
There is no approach that works with every single problem. Figuring out how to quantify it here is the key part of the homework problem.
s3a said:
Also, I suspect that the answer is that the line with the recursive call is O(n) and that the entire algorithm is O(n^2), but I'd like to be aware of a proper systematic way of thinking about it.
What happens to the largest element in a single pass, for example?
 
There is no approach that works with every single problem. Figuring out how to quantify it here is the key part of the homework problem.
I see.

What happens to the largest element in a single pass, for example?
It goes to the end. So, each pass has one less pair of elements to compare. It seems like the first pass has n - 1 pairs of elements to compare, the next pass n - 2 pairs, the next pass n - 3, and so on, until there is only one pair left to compare. That is, there are (n - 1) + (n - 2) + (n - 3) + ... + 1 = (n - 1)n/2 = (n^2 - n) / 2 pair comparisons, so the complexity of the bubble sort algorithm is O(n^2).
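The sum can be checked numerically against the closed form (a quick sketch; the function name is mine):

```python
def bubble_comparisons(n):
    # total pair comparisons in classic bubble sort:
    # (n - 1) + (n - 2) + ... + 1
    return sum(n - k for k in range(1, n))

# the closed form (n - 1)n/2 matches for a range of sizes
for n in (2, 5, 10, 100):
    assert bubble_comparisons(n) == (n - 1) * n // 2

print(bubble_comparisons(10))  # 45
```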

It seems to me that the algorithm I posted is similar to a cocktail / bidirectional bubble sort algorithm, except that the loops start from scratch every time.

So, it now seems to me that the two while loops, together, always run 2(n - 1) times, and the recursive call happens, at worst, about n/2 times: each recursive call, in the worst case, sorts two more elements into their correct locations, so the pair of while loops needs to be run about n/2 times, plus one more recursive call in order to return the array. In the end, roughly [2(n - 1) + k] * (n/2 + 1) instructions are run, which means that the algorithm's worst-case complexity is O(n^2). Is my reasoning and answer correct?
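The "about n/2 calls" estimate can be checked by instrumenting a Python translation of the pseudo-code (a sketch; names are mine) and feeding it a reversed array:

```python
call_count = 0

def my_algorithm_counted(a):
    """Instrumented translation of the pseudo-code, counting recursive calls."""
    global call_count
    call_count += 1
    done = True
    for j in range(len(a) - 1):           # forward pass
        if a[j] > a[j + 1]:
            a[j], a[j + 1] = a[j + 1], a[j]
            done = False
    for j in range(len(a) - 1, 0, -1):    # backward pass
        if a[j] < a[j - 1]:
            a[j - 1], a[j] = a[j], a[j - 1]
            done = False
    if not done:
        return my_algorithm_counted(a)
    return a

n = 20
result = my_algorithm_counted(list(range(n, 0, -1)))  # reversed array
print(call_count)  # 11, i.e. n/2 + 1: each call places both extremes
```

Each call costs about 2(n - 1) comparisons, so roughly n/2 calls gives O(n^2) overall.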
 
Right.
s3a said:
It seems to me that the algorithm I posted is similar to a cocktail / bidirectional bubble sort algorithm, except that the loops start from scratch every time.
You could speed it up by ignoring the elements that are known to be sorted - useful in an application, but it won't change the time complexity.
 
Here is the algorithm, as stated in the problem (for your convenience).:
Code:
Algorithm MyAlgorithm(A, n)
    Input: Array of integers containing n elements
    Output: Possibly modified Array A
        done ← true
        j ← 0
        while j ≤ n - 2 do
            if A[j] > A[j + 1] then
                swap(A[j], A[j + 1])
                done ← false
            j ← j + 1
        end while
        j ← n - 1
        while j ≥ 1 do
            if A[j] < A[j - 1] then
                swap(A[j - 1], A[j])
                done ← false
            j ← j - 1
        end while
        if ¬ done
            MyAlgorithm(A, n)
        else
            return A

Would changing the algorithm, as stated in the problem, to the following do what you said?:
Code:
Algorithm MyAlgorithm(A, n)
    Input: Array of integers containing n elements
    Output: Possibly modified Array A
        done ← true
        j ← 0
        while j ≤ n - 2 do
            if A[j] > A[j + 1] then
                swap(A[j], A[j + 1])
                done ← false
            j ← j + 1
        end while
        j ← n - 1
        while j ≥ 1 do
            if A[j] < A[j - 1] then
                swap(A[j - 1], A[j])
                done ← false
            j ← j - 1
        end while
        if ¬ done
            MyAlgorithm(A, n-1) <-- This is all I changed.
        else
            return A

Also, would changing the algorithm, as stated in the problem, to the following be another kind of improvement that I can do?:
Code:
Algorithm MyAlgorithm(A, n)
    Input: Array of integers containing n elements
    Output: Possibly modified Array A
        done ← true
        j ← 0
        while j ≤ n - 2 do
            if A[j] > A[j + 1] then
                swap(A[j], A[j + 1])
                done ← false
            j ← j + 1
        end while
        if done <-- I added this (and removed the else that used to be below).
            return A <-- I added this (instead of the return A that used to be below).
        j ← n - 1
        while j ≥ 1 do
            if A[j] < A[j - 1] then
                swap(A[j - 1], A[j])
                done ← false
            j ← j - 1
        end while
        if ¬ done
            MyAlgorithm(A, n)

So, the following would have two kinds of improvements, right?:
Code:
Algorithm MyAlgorithm(A, n)
    Input: Array of integers containing n elements
    Output: Possibly modified Array A
        done ← true
        j ← 0
        while j ≤ n - 2 do
            if A[j] > A[j + 1] then
                swap(A[j], A[j + 1])
                done ← false
            j ← j + 1
        end while
        if done <-- I added this (and removed the else that used to be below).
            return A <-- I added this (instead of the return A that used to be below).
        j ← n - 1
        while j ≥ 1 do
            if A[j] < A[j - 1] then
                swap(A[j - 1], A[j])
                done ← false
            j ← j - 1
        end while
        if ¬ done
            MyAlgorithm(A, n-1) <-- I changed this.
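Putting the combined pseudo-code above into runnable form (a Python sketch; the name and 0-based indexing are mine):

```python
def my_algorithm_improved(a, n=None):
    """Sketch of the combined changes: return early when the forward pass
    makes no swaps, and shrink n on each recursion (the largest element is
    already in place). Still O(n^2) in the worst case."""
    if n is None:
        n = len(a)
    done = True
    for j in range(n - 1):                # forward pass over a[0..n-1]
        if a[j] > a[j + 1]:
            a[j], a[j + 1] = a[j + 1], a[j]
            done = False
    if done:                              # early exit added here
        return a
    for j in range(n - 1, 0, -1):         # backward pass
        if a[j] < a[j - 1]:
            a[j - 1], a[j] = a[j], a[j - 1]
            done = False
    if not done:
        return my_algorithm_improved(a, n - 1)
    return a

print(my_algorithm_improved([4, 3, 2, 1]))  # [1, 2, 3, 4]
```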
 
Actually, looking at it again, it seems to me that the modification which consists of an additional if statement, along with the removal of an else clause, is detrimental to the runtime of the program, if included together with the change which keeps decrementing n.
 
These would be things to consider if the algorithm is supposed to be used frequently.
The second loop could get its own "done" check. In practice (on an actual computer), I would expect that to speed up the algorithm: checks of a boolean are very fast, and good branch prediction will expect the check to fail (= continue sorting), so you only lose time once, after the sorting is done - and you lose that no matter where the check is. The iterations in the loop depend on each other, so they take more time.
You could also stop considering initial elements (similar to ignoring the last element as you suggested), but that would need a modified function call.

All these things don't change the O(n^2) runtime.
 