Analyzing the time complexity of an algorithm

In summary, the worst-case time complexity of the given algorithm for finding the first term of a sequence of integers equal to some previous term is O(n²). This can be improved by first sorting the list in O(n log n) time and then scanning adjacent elements in a single O(n) pass, reducing the overall complexity to O(n log n).
  • #1
pc2-brazil

Homework Statement


Analyze the worst-case time complexity of the following algorithm, which finds the first term of a sequence of integers equal to some previous term.

Code:
procedure find(a₁, a₂, a₃, ..., aₙ: integers)
location := 0
i := 2
while i ≤ n and location = 0
begin
	j := 1
	while j < i and location = 0
		if aᵢ = aⱼ then location := i
		else j := j + 1
	i := i + 1
end
(This pseudocode notation is the one used in "Discrete Mathematics and Its Applications" by Kenneth Rosen. The indentation indicates that a portion of code is the body of a loop structure. ":=" denotes assignment, while "=" denotes comparison. Tell me if any further clarification is needed.)
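For reference, a direct Python translation of the pseudocode above (a sketch; Python lists are 0-indexed, so the 1-indexed pseudocode subscripts are shifted by one):

```python
def find(a):
    """Return the 1-based index of the first term equal to some
    previous term, or 0 if all terms are distinct."""
    n = len(a)
    location = 0
    i = 2
    while i <= n and location == 0:
        j = 1
        while j < i and location == 0:
            if a[i - 1] == a[j - 1]:  # pseudocode's a_i = a_j
                location = i
            else:
                j += 1
        i += 1
    return location
```

As in the pseudocode, `location` stays 0 when no repeat exists, which is exactly the worst case analyzed below.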

2. The attempt at a solution
I will try to calculate the total number of comparisons needed. For a list of n integers, the worst case of the algorithm above occurs when no integer equals any previous term (that is, when all integers are distinct), since that is when the most comparisons are needed.
I'm not sure what the clearest way to write this is, but I will work from the inside out (first the inner loop, then the outer loop).

Inside the body of the outer loop, there is an inner while loop that compares the ith number with all the elements preceding it (the elements from j = 1 to j = i - 1). That is, the inner loop body is executed (i - 1) times, each execution comprising 3 comparisons: 2 for entering the loop body (j < i and location = 0) and 1 inside the loop body (to verify whether aᵢ = aⱼ). There are still 2 more comparisons needed to verify that the loop has terminated (when j = i and location = 0). So, the total number of comparisons needed for this inner loop is 3(i - 1) + 2.

The outer while loop (which checks i ≤ n and location = 0) will be executed (n - 1) times, plus two comparisons for terminating the loop.
Each iteration of the outer loop requires 2 comparisons for entering the loop body (i ≤ n and location = 0), plus the comparisons done by the inner loop: 2 + [3(i - 1) + 2] = 3(i - 1) + 4.

Thus, 3(i - 1) + 4 comparisons will be made n - 1 times with i = 2, ..., n, plus 2 more for terminating the outer loop. Then, the total number of comparisons done in the whole execution of the algorithm will be:
[tex]T(n)=2+\sum_{i=2}^{n}\left[3(i-1)+4 \right][/tex]
[tex]T(n)=2+\frac{1}{2}(3n^2+5n-8)[/tex]
Because T(n) is a polynomial of degree 2, its complexity is O(n²).
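The closed form can be sanity-checked by instrumenting the algorithm to count comparisons on an all-distinct input, tallying exactly the comparisons counted in the derivation above (a quick sketch):

```python
def count_comparisons(n):
    """Count the comparisons find() makes on n distinct integers (worst case)."""
    a = list(range(n))          # all distinct, so location is never set
    count = 0
    location = 0
    i = 2
    while True:
        count += 2              # outer guard: i <= n and location = 0
        if not (i <= n and location == 0):
            break
        j = 1
        while True:
            count += 2          # inner guard: j < i and location = 0
            if not (j < i and location == 0):
                break
            count += 1          # equality test: a_i = a_j
            if a[i - 1] == a[j - 1]:
                location = i
            else:
                j += 1
        i += 1
    return count

# The count should match T(n) = 2 + (3n^2 + 5n - 8)/2 for every n:
for n in range(2, 12):
    assert count_comparisons(n) == 2 + (3 * n * n + 5 * n - 8) // 2
```

Both failing guard tests are charged 2 comparisons, matching the accounting above, so the simulated count agrees with T(n) exactly.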

Tell me if I didn't make myself clear (I tried to be as clear as possible).

Thank you in advance.
 
  • #2
nickalh
Yes, that all appears correct to me. To simplify things: since we ultimately only care about O( ), I was taught to ignore everything except the largest term, and likewise to drop constant coefficients. It's been a decade since I learned this, though, so check with your prof.

Interesting, unrelated note, beyond the original question:
The same task is more efficient with the following algorithm. It relies on the fact that in a sorted list, identical elements will be next to each other.
A. Sort them (this can be done in O(n log n) time with quicksort, mergesort, etc.)
B. Go through the sorted array once, comparing each number with its previous number. (Takes O(n) time.)
O(n log n + n) = O(n log n)
Plus you get a bonus of a sorted list.
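A sketch of this sort-then-scan approach (one caveat the posts don't mention: sorting discards the original positions, so this version reports whether some term repeats rather than the original index `location`):

```python
def has_duplicate(a):
    """Detect a repeated value: sort (O(n log n)), then one adjacent-pair pass (O(n))."""
    b = sorted(a)                  # step A: O(n log n)
    for k in range(1, len(b)):     # step B: O(n)
        if b[k] == b[k - 1]:       # after sorting, duplicates are adjacent
            return True
    return False
```

Recovering the index of the first repeat in the original order would require carrying the indices through the sort (or a different approach), at the same asymptotic cost.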
 
  • #3
nickalh said:
B. Go through the array once comparing each number with its previous number. (Takes O( n ) time.)

For a long list, note that your step B is already only O(n): once the list is sorted, a single pass over adjacent pairs finds any duplicate, so a binary search wouldn't speed it up. The total remains dominated by the O(n log n) sort.
 

What is time complexity?

Time complexity refers to the amount of time it takes for an algorithm to run as the input size increases. It is a measure of the efficiency of an algorithm and is usually expressed in terms of the input size.

Why is analyzing time complexity important?

Analyzing time complexity allows us to understand how an algorithm will perform as the input size increases. This is important because it helps us make informed decisions about which algorithm to use for a particular problem, considering factors such as the size of the input and the resources available.

How is time complexity calculated?

Time complexity is calculated by counting the number of operations an algorithm performs as the input size increases. Generally, we look at the worst-case scenario, which gives us an upper bound on the time complexity.
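For instance, counting the comparisons a plain linear search performs shows why its worst case is O(n) (a small illustrative sketch):

```python
def linear_search(a, target):
    """Scan a for target, counting comparisons.
    Worst case: target is absent, so all n elements are compared -> O(n)."""
    comparisons = 0
    for x in a:
        comparisons += 1
        if x == target:
            return True, comparisons
    return False, comparisons
```

The worst case (target absent) costs exactly n comparisons, while the best case (target first) costs 1; big-O analysis uses the former.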

What is the Big O notation?

The Big O notation is a mathematical notation used to describe the time complexity of an algorithm. It represents the worst-case scenario and gives us a way to compare the efficiency of different algorithms and their growth rates as the input size increases.

What are some common time complexity classes?

Some common time complexity classes include constant time (O(1)), logarithmic time (O(log n)), linear time (O(n)), quadratic time (O(n^2)), and exponential time (O(2^n)). These classes represent different levels of efficiency, with constant time being the most efficient and exponential time being the least efficient.
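To get a feel for how these classes separate, one can tabulate the rough operation counts each predicts at a single input size (an illustrative sketch):

```python
import math

# Rough operation-count estimates for each complexity class at n = 1000:
n = 1000
counts = {
    "O(1)": 1,
    "O(log n)": math.log2(n),   # about 10
    "O(n)": n,                  # 1,000
    "O(n^2)": n * n,            # 1,000,000
}
```

At n = 1000 the quadratic algorithm already does about a thousand times more work than the linear one, which is why the O(n log n) rewrite above pays off for long lists.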
