Time Complexity Algorithm Proof

In summary: by the definition of Big-Oh, if f(n) and g(n) are non-negative functions such that f(n) = O(g(n)), then there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0. Using this, it can be shown that f(n) + g(n) = Ω(g(n)) by choosing a constant c with 0 < c ≤ 1 and showing that 0 ≤ c·g(n) ≤ f(n) + g(n).
  • #1
enigmatic01
Homework Statement
Use the formal definition of Big-Oh to prove that if f(n) and g(n) are non-negative functions such that f(n) = O(g(n)), then f(n) + g(n) = Ω(g(n)).
Relevant Equations
...

By the definition of Big-Oh:

If f(n) and g(n) are non-negative functions such that f(n) = O(g(n)), there must be positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0.

Would this be correct:

If 0 ≤ f(n) ≤ c·g(n) and f(n) and g(n) are non-negative functions, we know g(n) ≤ f(n) + g(n).

Since g(n) ≤ f(n) + g(n), there must be a constant c, namely 0 < c ≤ 1, such that c·g(n) ≤ g(n). We could then say 0 ≤ c·g(n) ≤ f(n) + g(n), which is the definition of f(n) + g(n) = Ω(g(n)).
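As a numeric sanity check of that inequality chain (not a substitute for the proof), here is a small sketch with sample functions of my own choosing, f(n) = n and g(n) = n², and c = 1:

```python
# Sanity check of the argument: with 0 < c <= 1 and non-negative f, g,
# we expect 0 <= c*g(n) <= g(n) <= f(n) + g(n).
# f and g below are illustrative choices, not from the problem.
f = lambda n: n
g = lambda n: n ** 2
c = 1  # any constant with 0 < c <= 1 works

for n in range(1, 1000):
    assert 0 <= c * g(n) <= f(n) + g(n)
print("inequality holds for n = 1..999")
```

Any smaller positive c (say 0.5) satisfies the same assertions, matching the "choose 0 < c ≤ 1" step above.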
 
  • #2
mfb
That doesn't even use f(n) = O(g(n)), but it looks correct (assuming you use Knuth's definition of Ω).

A more interesting result to show (one that needs f(n) = O(g(n))) would be f(n) + g(n) = O(g(n)).
 
  • #3
mfb said:
That doesn't even use f(n) = O(g(n)), but it looks correct (assuming you use Knuth's definition of Ω).

A more interesting result to show (one that needs f(n) = O(g(n))) would be f(n) + g(n) = O(g(n)).

That was actually the other homework problem!
 

FAQ: Time Complexity Algorithm Proof

What is time complexity in algorithm analysis?

Time complexity is a measure of the amount of time required for an algorithm to run, based on the input size. It is used to analyze the efficiency of an algorithm and predict how long it will take to run as the input size increases.

How do you calculate time complexity?

Time complexity is typically expressed using big O notation, a mathematical way of describing the worst-case growth of an algorithm's running time as the input size increases. It is calculated by counting the number of basic operations, such as comparisons or assignments, that the algorithm performs in relation to the input size.
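Counting basic operations can also be done empirically. This sketch (the function and names are my own, not from the thread) counts comparisons in a linear search; in the worst case the count equals the input size, which is the O(n) behavior described above:

```python
def linear_search_count(items, target):
    """Linear search that also returns how many comparisons it made."""
    comparisons = 0
    for i, x in enumerate(items):
        comparisons += 1
        if x == target:
            return i, comparisons
    return -1, comparisons

# Worst case: the target is absent, so every element is compared once.
_, c100 = linear_search_count(list(range(100)), -1)
_, c200 = linear_search_count(list(range(200)), -1)
print(c100, c200)  # comparison counts equal the input sizes: 100 200
```

Doubling the input doubles the comparison count, which is exactly what "linear in n" means operationally.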

What is the purpose of proving time complexity for an algorithm?

Proving the time complexity of an algorithm allows us to determine its efficiency and compare it to other algorithms. It also helps us understand how the algorithm will perform as the input size increases and allows us to identify areas for improvement.

What are some common techniques used in proving time complexity?

Some common techniques for proving time complexity include counting the number of operations in the algorithm, using recurrence relations to express the time complexity, and using mathematical induction to prove the complexity of recursive algorithms.
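To illustrate the recurrence-relation technique with an example of my own choosing (not from the thread), the merge-sort-style recurrence T(n) = 2T(n/2) + n with T(1) = 1 can be checked against its closed form n·log2(n) + n for powers of two, which is Θ(n log n):

```python
import math

def T(n):
    """Merge-sort-style recurrence: T(1) = 1, T(n) = 2*T(n//2) + n."""
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

# For n a power of two, the recurrence unrolls to n*log2(n) + n.
for k in range(11):
    n = 2 ** k
    assert T(n) == n * int(math.log2(n)) + n
print("recurrence matches n*log2(n) + n for n = 1..1024")
```

The same closed form could be proven for all powers of two by induction on k, the third technique mentioned above.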

How do you determine the best time complexity for an algorithm?

The best time complexity for an algorithm depends on the specific problem and the resources available. Generally, the lower the time complexity, the more efficient the algorithm. However, it is important to consider other factors such as space complexity and practical limitations when determining the best time complexity for an algorithm.
