Runtime of the line where a recursive call happens = ?

s3a

Homework Statement


What is the runtime complexity for the following (pseudo-code) algorithm?:
Code:
Algorithm MyAlgorithm(A, n)
    Input: Array of integer containing n elements
    Output: Possibly modified Array A
        done ← true
        j ← 0
        while j ≤ n - 2 do
            if A[j] > A[j + 1] then
                swap(A[j], A[j + 1])
                done ← false
            j ← j + 1
        end while
        j ← n - 1
        while j ≥ 1 do
            if A[j] < A[j - 1] then
                swap(A[j - 1], A[j])
                done ← false
            j ← j - 1
        end while
        if ¬ done
            MyAlgorithm(A, n)
        else
            return A
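In case it helps to experiment, here is a direct Python transcription of the pseudocode (my own sketch, not part of the problem; I'm assuming 0-based indexing, which the j ← 0 start suggests):

```python
def my_algorithm(A, n):
    """Python sketch of the pseudocode above (0-based indexing assumed)."""
    done = True
    # First while loop: j = 0 .. n-2, bubbling larger elements rightward.
    for j in range(n - 1):
        if A[j] > A[j + 1]:
            A[j], A[j + 1] = A[j + 1], A[j]
            done = False
    # Second while loop: j = n-1 .. 1, bubbling smaller elements leftward.
    for j in range(n - 1, 0, -1):
        if A[j] < A[j - 1]:
            A[j - 1], A[j] = A[j], A[j - 1]
            done = False
    if not done:
        return my_algorithm(A, n)   # recursive call, same n
    return A
```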

Homework Equations


Each primitive instruction is assumed to have the same constant amount of running time.

The Attempt at a Solution


Every line of the algorithm other than "MyAlgorithm(A, n)" (without the quotes) has a constant run time, and each while loop runs (n - 1) times.

I'm not sure about the run time of the line "MyAlgorithm(A, n)": the (recursive) call itself takes constant time, but on top of that comes the runtime of the method being called.

If the aforementioned line weren't a recursive call, I think I would understand the problem 100%.

Could someone please help me understand how to go about determining the runtime of that line, following which I should be able to determine the runtime of the algorithm itself?

Any input would be GREATLY appreciated!
 
Physics news on Phys.org
You'll have to figure out how often the recursion might be called. That depends on the input - you are probably asked for the worst case performance.
 
What that algorithm seems to do is sort an array, so I guess you're saying I need to figure out which input takes the longest for the algorithm to sort. I tried applying the logic of bubble sort, since this algorithm seems to make bubble-sort-like passes, but from both directions.

Is the input that will take the longest an input that will take ##(n - 1)!## while-loop passes or ##(n - 1)! / 2## pairs of while-loop passes, or am I completely off the mark?
 
s3a said:
Is the input that will take the longest an input that will take ##(n - 1)!## while-loop passes or ##(n - 1)! / 2## pairs of while-loop passes, or am I completely off the mark?
Bubble sort isn't that bad. Try to find out exactly how the sortedness of the array has improved after one pass.
 
Try to find out exactly how the sortedness of the array has improved after one pass.
How does one quantify that, in general?

Also, I suspect that the answer is that the line with the recursive call is O(n) and that the entire algorithm is O(n^2), but I'd like to be aware of a proper systematic way of thinking about it.
 
s3a said:
How does one quantify that, in general?
There is no approach that works with every single problem. Figuring out how to quantify it here is the key part of the homework problem.
s3a said:
Also, I suspect that the answer is that the line with the recursive call is O(n) and that the entire algorithm is O(n^2), but I'd like to be aware of a proper systematic way of thinking about it.
What happens to the largest element in a single pass, for example?
 
There is no approach that works with every single problem. Figuring out how to quantify it here is the key part of the homework problem.
I see.

What happens to the largest element in a single pass, for example?
It goes to the end. So each pass has one less pair of elements to compare: the first pass compares n - 1 pairs, the next pass n - 2 pairs, the next n - 3, and so on, until there is only one pair left. That is, there are (n - 1) + (n - 2) + (n - 3) + ... + 1 = n(n - 1)/2 = (n^2 - n)/2 pair comparisons, so the complexity of the bubble sort algorithm is O(n^2).
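To sanity-check that sum, here is a quick sketch of textbook bubble sort with a shrinking pass (not the posted algorithm, just the classic version being described):

```python
def bubble_sort_comparisons(A):
    """Textbook bubble sort with a shrinking pass; returns the number of
    pair comparisons.  Pass k compares n - 1 - k pairs, so the total is
    (n-1) + (n-2) + ... + 1 = n(n-1)/2 regardless of the input order."""
    n = len(A)
    comparisons = 0
    for k in range(n - 1):
        for j in range(n - 1 - k):
            comparisons += 1
            if A[j] > A[j + 1]:
                A[j], A[j + 1] = A[j + 1], A[j]
    return comparisons
```

For n = 8 this gives 8·7/2 = 28 comparisons, matching the formula.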

It seems to me that the algorithm I posted is similar to a cocktail / bidirectional bubble sort algorithm, except that the loops start from scratch every time.

So it now seems to me that the two while loops, together, always run 2(n - 1) times, and the recursive call happens at most n/2 times, because each recursive call, in the worst case, sorts two more elements into their correct locations (the largest to the back, the smallest to the front). The pair of while loops therefore needs to be run n/2 times in the worst case, plus one more call whose passes make no swaps, so that the array can be returned. In the end, [2(n - 1) + k] * (n/2 + 1) instructions are run, where k is the constant per-call overhead, which means the algorithm's worst-case complexity is O(n^2). Is my reasoning and answer correct?
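To test the "at most n/2 calls" guess empirically, here is a sketch that runs the posted algorithm with the tail recursion unrolled into a loop (to sidestep Python's recursion limit) and counts the invocations; the bound n/2 + 1 is my assumption being checked, not something given in the problem:

```python
def count_calls(A):
    """Run the posted algorithm on A in place and count its invocations.
    Each iteration of the while loop corresponds to one MyAlgorithm(A, n) call."""
    n = len(A)
    calls = 0
    done = False
    while not done:
        calls += 1
        done = True
        for j in range(n - 1):                 # first while loop of the algorithm
            if A[j] > A[j + 1]:
                A[j], A[j + 1] = A[j + 1], A[j]
                done = False
        for j in range(n - 1, 0, -1):          # second while loop of the algorithm
            if A[j] < A[j - 1]:
                A[j - 1], A[j] = A[j], A[j - 1]
                done = False
    return calls
```

On the reversed array [3, 2, 1, 0] this takes 3 calls, which matches n/2 + 1.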
 
Right.
s3a said:
It seems to me that the algorithm I posted is similar to a cocktail / bidirectional bubble sort algorithm, except that the loops start from scratch every time.
You could speed it up by ignoring the elements that are known to be sorted - useful in an application, but it won't change the time complexity.
 
Here is the algorithm, as stated in the problem (for your convenience).:
Code:
Algorithm MyAlgorithm(A, n)
    Input: Array of integer containing n elements
    Output: Possibly modified Array A
        done ← true
        j ← 0
        while j ≤ n - 2 do
            if A[j] > A[j + 1] then
                swap(A[j], A[j + 1])
                done ← false
            j ← j + 1
        end while
        j ← n - 1
        while j ≥ 1 do
            if A[j] < A[j - 1] then
                swap(A[j - 1], A[j])
                done ← false
            j ← j - 1
        end while
        if ¬ done
            MyAlgorithm(A, n)
        else
            return A

Would changing the algorithm, as stated in the problem, to the following do what you said?:
Code:
Algorithm MyAlgorithm(A, n)
    Input: Array of integer containing n elements
    Output: Possibly modified Array A
        done ← true
        j ← 0
        while j ≤ n - 2 do
            if A[j] > A[j + 1] then
                swap(A[j], A[j + 1])
                done ← false
            j ← j + 1
        end while
        j ← n - 1
        while j ≥ 1 do
            if A[j] < A[j - 1] then
                swap(A[j - 1], A[j])
                done ← false
            j ← j - 1
        end while
        if ¬ done
            MyAlgorithm(A, n-1) <-- This is all I changed.
        else
            return A
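If it helps to compare, here is a sketch (my own, not from the problem) that counts pair comparisons for the original call MyAlgorithm(A, n) versus this MyAlgorithm(A, n-1) variant, with the recursion unrolled into a loop:

```python
def count_comparisons(A, shrink):
    """Pair-comparison count for the posted algorithm, tail recursion unrolled.
    If shrink is True, each 'recursive call' uses n - 1 in place of n
    (the modification under discussion)."""
    n = len(A)
    total = 0
    done = False
    while not done and n > 1:
        done = True
        for j in range(n - 1):                 # first while loop
            total += 1
            if A[j] > A[j + 1]:
                A[j], A[j + 1] = A[j + 1], A[j]
                done = False
        for j in range(n - 1, 0, -1):          # second while loop
            total += 1
            if A[j] < A[j - 1]:
                A[j - 1], A[j] = A[j], A[j - 1]
                done = False
        if shrink:
            n -= 1                             # ignore the element known to be sorted
    return total
```

On a reversed array the shrinking variant does fewer comparisons, but the total still grows quadratically, so the O(n^2) classification is unchanged.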

Also, would changing the algorithm, as stated in the problem, to the following be another kind of improvement?:
Code:
Algorithm MyAlgorithm(A, n)
    Input: Array of integer containing n elements
    Output: Possibly modified Array A
        done ← true
        j ← 0
        while j ≤ n - 2 do
            if A[j] > A[j + 1] then
                swap(A[j], A[j + 1])
                done ← false
            j ← j + 1
        end while
        if done <-- I added this (and removed the else that used to be below).
            return A <-- I added this (instead of the return A that used to be below).
        j ← n - 1
        while j ≥ 1 do
            if A[j] < A[j - 1] then
                swap(A[j - 1], A[j])
                done ← false
            j ← j - 1
        end while
        if ¬ done
            MyAlgorithm(A, n)

So, the following would have two kinds of improvements, right?:
Code:
Algorithm MyAlgorithm(A, n)
    Input: Array of integer containing n elements
    Output: Possibly modified Array A
        done ← true
        j ← 0
        while j ≤ n - 2 do
            if A[j] > A[j + 1] then
                swap(A[j], A[j + 1])
                done ← false
            j ← j + 1
        end while
        if done <-- I added this (and removed the else that used to be below).
            return A <-- I added this (instead of the return A that used to be below).
        j ← n - 1
        while j ≥ 1 do
            if A[j] < A[j - 1] then
                swap(A[j - 1], A[j])
                done ← false
            j ← j - 1
        end while
        if ¬ done
            MyAlgorithm(A, n-1) <-- I changed this.
 
Actually, looking at it again, it seems to me that the modification consisting of the additional if statement (along with the removal of the else clause) is detrimental to the runtime of the program if it is combined with the change that keeps decrementing n.
 
These would be things to consider if the algorithm is supposed to be used frequently.
The second loop could get its own "done" check. In practice (on an actual computer) I would expect that to speed up the algorithm, since checking a boolean is very fast, and good branch prediction will expect the check to fail (= continue sorting), so you only lose time once, after the sorting is done - and that you lose no matter where the check is. The iterations in the loops depend on each other, so they will take more time.
You could also stop considering initial elements (similar to ignoring the last element as you suggested), but that would need a modified function call.
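One way to sketch that idea (shrinking the active window from both ends, as in a standard cocktail sort; this is my own sketch of the suggestion, not the posted algorithm) could be:

```python
def cocktail_sort(A):
    """Bidirectional bubble sort that shrinks both ends of the active window,
    so elements already known to be in place are never revisited.
    Still O(n^2) in the worst case."""
    lo, hi = 0, len(A) - 1
    while lo < hi:
        swapped = False
        for j in range(lo, hi):            # forward pass: max moves to position hi
            if A[j] > A[j + 1]:
                A[j], A[j + 1] = A[j + 1], A[j]
                swapped = True
        hi -= 1                            # last element is now in place
        if not swapped:                    # early exit: window already sorted
            break
        swapped = False
        for j in range(hi, lo, -1):        # backward pass: min moves to position lo
            if A[j] < A[j - 1]:
                A[j - 1], A[j] = A[j], A[j - 1]
                swapped = True
        lo += 1                            # first element is now in place
        if not swapped:
            break
    return A
```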

All these things don't change the O(n^2) runtime.
 
