Runtime of the line where a recursive call happens = ?

In summary: the largest element goes to the end on each pass, so each pass has one fewer pair of elements to compare: the first pass compares n - 1 pairs, the next n - 2, then n - 3, and so on.
  • #1
s3a

Homework Statement


What is the runtime complexity for the following (pseudo-code) algorithm?:
Code:
Algorithm MyAlgorithm(A, n)
    Input: Array of integer containing n elements
    Output: Possibly modified Array A
        done ← true
        j ← 0
        while j ≤ n - 2 do
            if A[j] > A[j + 1] then
                swap(A[j], A[j + 1])
                done ← false
            j ← j + 1
        end while
        j ← n - 1
        while j ≥ 1 do
            if A[j] < A[j - 1] then
                swap(A[j - 1], A[j])
                done ← false
            j ← j - 1
        end while
        if ¬ done
            MyAlgorithm(A, n)
        else
            return A

Homework Equations


Each primitive instruction is assumed to have the same constant amount of running time.

The Attempt at a Solution


Each line of the algorithm other than "MyAlgorithm(A, n)" (without the quotes) runs in constant time, and each while loop's body is executed (n - 1) times.

I'm not sure about the run time of the line "MyAlgorithm(A, n)," other than the (recursive) call itself having a constant runtime, in addition to the runtime of the method called.

If the aforementioned line weren't a recursive call, I think I would understand the problem 100%.

Could someone please help me understand how to go about determining the runtime of that line, following which I should be able to determine the runtime of the algorithm itself?

Any input would be GREATLY appreciated!
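For concreteness, here is a direct Python transliteration of the pseudocode (an illustrative sketch of my own, not part of the problem; `swap` becomes tuple assignment, and the recursion is kept exactly as in the original):

```python
def my_algorithm(a, n):
    """Transliteration of the pseudocode: one left-to-right pass,
    one right-to-left pass, then recurse if anything moved."""
    done = True
    j = 0
    while j <= n - 2:                     # forward pass over adjacent pairs
        if a[j] > a[j + 1]:
            a[j], a[j + 1] = a[j + 1], a[j]
            done = False
        j += 1
    j = n - 1
    while j >= 1:                         # backward pass over adjacent pairs
        if a[j] < a[j - 1]:
            a[j - 1], a[j] = a[j], a[j - 1]
            done = False
        j -= 1
    if not done:
        return my_algorithm(a, n)         # the recursive call in question
    return a
```

Running `my_algorithm([3, 1, 2], 3)` performs one swapping double pass, then one confirming pass, and returns the sorted list.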
 
  • #2
You'll have to figure out how often the recursion might be called. That depends on the input - you are probably asked for the worst case performance.
 
  • #3
What that algorithm seems to do is sort an array, so I guess what you're saying is that I need to figure out which input will take the longest for the algorithm to sort. I tried applying the logic of bubble sort, since this algorithm seems to make bubble-sort-like passes, but from both directions.

Is the input that will take the longest an input that will take ##(n - 1)!## while-loop passes or ##(n - 1)! / 2## pairs of while-loop passes, or am I completely off the mark?
 
  • #4
s3a said:
Is the input that will take the longest an input that will take ##(n - 1)!## while-loop passes or ##(n - 1)! / 2## pairs of while-loop passes, or am I completely off the mark?
Bubble sort isn't that bad. Try to find out exactly how the sortedness of the array has improved after one pass.
 
  • #5
Try to find out exactly how the sortedness of the array has improved after one pass.
How does one quantify that, in general?

Also, I suspect that the answer is that the line with the recursive call is O(n) and that the entire algorithm is O(n^2), but I'd like to be aware of a proper systematic way of thinking about it.
 
  • #6
s3a said:
How does one quantify that, in general?
There is no approach that works with every single problem. Figuring out how to quantify it here is the key part of the homework problem.
s3a said:
Also, I suspect that the answer is that the line with the recursive call is O(n) and that the entire algorithm is O(n^2), but I'd like to be aware of a proper systematic way of thinking about it.
What happens to the largest element in a single pass, for example?
 
  • #7
There is no approach that works with every single problem. Figuring out how to quantify it here is the key part of the homework problem.
I see.

What happens to the largest element in a single pass, for example?
It goes to the end. So, each pass has one less pair of elements to compare. It seems like the first pass has n - 1 pairs of elements to compare, then the next pass n - 2 pairs, then n - 3, and so on, until there is only one pair of elements left to compare. That is, there are (n - 1) + (n - 2) + (n - 3) + ... + 1 = n(n - 1)/2 = (n^2 - n)/2 pair comparisons, so the complexity of the bubble sort algorithm is O(n^2).
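That sum can be checked numerically. A quick sketch (my own, for plain bubble sort rather than the posted algorithm) that counts pair comparisons:

```python
def bubble_sort_comparisons(a):
    """Classic bubble sort; returns the number of pair comparisons.
    Pass k compares n - 1 - k pairs, so the total is
    (n - 1) + (n - 2) + ... + 1 = n(n - 1)/2 regardless of input order."""
    a = list(a)
    n = len(a)
    comparisons = 0
    for k in range(n - 1):            # pass k leaves the last k items fixed
        for j in range(n - 1 - k):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return comparisons
```

For n = 10 this returns 45 = 10 * 9 / 2, for any input order, since this version never exits early.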

It seems to me that the algorithm I posted is similar to a cocktail / bidirectional bubble sort algorithm, except that the loops start from scratch every time.

So, it now seems to me that the two while loops, together, always run 2(n - 1) times, and the recursive call happens, at worst, about n/2 times: each recursive call, in the worst case, sorts two more elements into their correct locations, so the pair of while loops needs to run n/2 times in the worst case, plus one more recursive call to confirm the array is sorted and return it. In the end, roughly [2(n - 1) + k] * (n/2 + 1) instructions run (for some constant k), which means the algorithm's worst-case complexity is O(n^2). Is my reasoning and answer correct?
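The n/2 estimate can also be sanity-checked empirically. A sketch (my reimplementation of the posted algorithm, not code from the problem) that counts how many times the function is entered on a reversed array:

```python
def count_calls(a, n):
    """Run the posted double-pass algorithm in place, returning the
    total number of calls (initial call plus recursive calls)."""
    calls = 1
    done = True
    j = 0
    while j <= n - 2:                     # forward pass: largest element sinks to the end
        if a[j] > a[j + 1]:
            a[j], a[j + 1] = a[j + 1], a[j]
            done = False
        j += 1
    j = n - 1
    while j >= 1:                         # backward pass: smallest element rises to the front
        if a[j] < a[j - 1]:
            a[j - 1], a[j] = a[j], a[j - 1]
            done = False
        j -= 1
    if not done:
        calls += count_calls(a, n)        # recurse until a pass makes no swap
    return calls
```

For a reversed array of length n, each swapping double pass fixes at least the largest and smallest remaining elements, and small cases (n = 4, 5, 6, 7) come out to floor(n/2) swapping passes plus one confirming pass, consistent with the n/2 + 1 estimate above.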
 
  • #8
Right.
s3a said:
It seems to me that the algorithm I posted is similar to a cocktail / bidirectional bubble sort algorithm, except that the loops start from scratch every time.
You could speed it up by ignoring the elements that are known to be sorted - useful in an application, but it won't change the time complexity.
 
  • #9
Here is the algorithm, as stated in the problem (for your convenience).:
Code:
Algorithm MyAlgorithm(A, n)
    Input: Array of integer containing n elements
    Output: Possibly modified Array A
        done ← true
        j ← 0
        while j ≤ n - 2 do
            if A[j] > A[j + 1] then
                swap(A[j], A[j + 1])
                done ← false
            j ← j + 1
        end while
        j ← n - 1
        while j ≥ 1 do
            if A[j] < A[j - 1] then
                swap(A[j - 1], A[j])
                done ← false
            j ← j - 1
        end while
        if ¬ done
            MyAlgorithm(A, n)
        else
            return A

Would changing the algorithm, as stated in the problem, to the following do what you said?:
Code:
Algorithm MyAlgorithm(A, n)
    Input: Array of integer containing n elements
    Output: Possibly modified Array A
        done ← true
        j ← 0
        while j ≤ n - 2 do
            if A[j] > A[j + 1] then
                swap(A[j], A[j + 1])
                done ← false
            j ← j + 1
        end while
        j ← n - 1
        while j ≥ 1 do
            if A[j] < A[j - 1] then
                swap(A[j - 1], A[j])
                done ← false
            j ← j - 1
        end while
        if ¬ done
            MyAlgorithm(A, n-1) <-- This is all I changed.
        else
            return A

Also would changing the algorithm, as stated in the problem, to the following be another kind of improvement that I can do?:
Code:
Algorithm MyAlgorithm(A, n)
    Input: Array of integer containing n elements
    Output: Possibly modified Array A
        done ← true
        j ← 0
        while j ≤ n - 2 do
            if A[j] > A[j + 1] then
                swap(A[j], A[j + 1])
                done ← false
            j ← j + 1
        end while
        if done <-- I added this (and removed the else that used to be below).
            return A <-- I added this (instead of the return A that used to be below).
        j ← n - 1
        while j ≥ 1 do
            if A[j] < A[j - 1] then
                swap(A[j - 1], A[j])
                done ← false
            j ← j - 1
        end while
        if ¬ done
            MyAlgorithm(A, n)

So, the following would have two kinds of improvements, right?:
Code:
Algorithm MyAlgorithm(A, n)
    Input: Array of integer containing n elements
    Output: Possibly modified Array A
        done ← true
        j ← 0
        while j ≤ n - 2 do
            if A[j] > A[j + 1] then
                swap(A[j], A[j + 1])
                done ← false
            j ← j + 1
        end while
        if done <-- I added this (and removed the else that used to be below).
            return A <-- I added this (instead of the return A that used to be below).
        j ← n - 1
        while j ≥ 1 do
            if A[j] < A[j - 1] then
                swap(A[j - 1], A[j])
                done ← false
            j ← j - 1
        end while
        if ¬ done
            MyAlgorithm(A, n-1) <-- I changed this.
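A runnable Python sketch of that combined version (my reimplementation, for illustration only): the early return fires when the forward pass makes no swap, and each recursion shrinks n by one, which is safe because the forward pass leaves the largest of the first n elements at position n - 1.

```python
def my_algorithm_improved(a, n):
    """Double-pass sort with both tweaks from the post: early return
    when the forward pass makes no swap, and a shrinking n. As in the
    posted version, the early check only looks at the forward pass."""
    done = True
    for j in range(n - 1):            # forward pass
        if a[j] > a[j + 1]:
            a[j], a[j + 1] = a[j + 1], a[j]
            done = False
    if done:
        return a                      # no swap: first n elements already sorted
    for j in range(n - 1, 0, -1):     # backward pass
        if a[j] < a[j - 1]:
            a[j - 1], a[j] = a[j], a[j - 1]
    return my_algorithm_improved(a, n - 1)
```

Since a swapping forward pass always sets done to false, this version always recurses after the backward pass; n strictly decreases, so it terminates at the latest when n = 1.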
 
  • #10
Actually, looking at it again, it seems to me that the modification consisting of the additional if statement (along with the removal of the else clause) is detrimental to the runtime of the program if it is included together with the change that keeps decrementing n.
 
  • #11
These would be things to consider if the algorithm is supposed to be used frequently.
The second loop could get its own "done" check. In practice (in an actual computer), I would expect that to speed up the algorithm: checks of a boolean are very fast, and good branch prediction will expect the check to fail (= continue sorting), so you only lose time once, after the sorting is done - and you lose that no matter where the check is. The iterations in the loop depend on each other, so they will take more time.
You could also stop considering initial elements (similar to ignoring the last element as you suggested), but that would need a modified function call.

All these things don't change the ##O(n^2)## runtime.
 

What is a recursive call?

A recursive call is a function or method that calls itself during its execution. This allows for repeated execution of the same code with different input values.

What is the runtime of a recursive call?

The runtime of a recursive call refers to the amount of time it takes for the computer to execute the code within the recursive function. It depends on the complexity of the algorithm and the size of the input.

How is the runtime of a recursive call calculated?

The runtime of a recursive call is typically analyzed using Big O notation, which describes the time complexity of an algorithm in terms of its input size. For recursive functions, it is found by solving a recurrence: the number of recursive calls multiplied by the work done per call. For example, a function that recurses n times and does constant work per call is O(n).

What factors can affect the runtime of a recursive call?

The runtime of a recursive call can be influenced by several factors, including the number of times the function is called, the complexity of the code within the function, and the input size. The choice of programming language and the efficiency of the computer's hardware can also play a role.

How can the runtime of a recursive call be optimized?

To optimize the runtime of a recursive call, the algorithm can be designed in a way that reduces the number of function calls or simplifies the code within the function. Additionally, using data structures like memoization or dynamic programming can help improve the efficiency of recursive functions.
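As a generic illustration of that last point (unrelated to the sorting algorithm in this thread), memoization turns an exponential-time recursion into a linear-time one:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Naive recursive Fibonacci does O(phi^n) calls; caching each
    result means every value is computed once, giving O(n) calls."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

Without the cache, `fib(50)` is infeasible; with it, the call returns immediately.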
