Questions about Big Oh notation

  • Thread starter: AxiomOfChoice
  • Tags: Notation
In summary, the conversation discusses the verification of two assertions: [itex](\mathcal O(\epsilon))^2 = \mathcal O(\epsilon^2)[/itex] and [itex]\sqrt{1 + \mathcal O(\epsilon^2)} = 1 + \mathcal O(\epsilon^2)[/itex]. The first assertion is shown to be true directly from the definition of [itex]\mathcal O[/itex] notation. The second assertion can be proved using the binomial expansion.
  • #1
AxiomOfChoice
I'm sure I could think my way through these, but I'm sick and on a tight schedule, so I was hoping someone here could help me out. I would appreciate a verification, with or without proof, of the following assertions:

[tex]
(\mathcal O(\epsilon))^2 = \mathcal O(\epsilon^2)
[/tex]

and

[tex]
\sqrt{1 + \mathcal O(\epsilon^2)} = 1 + \mathcal O(\epsilon^2)
[/tex]

Thanks so much.
 
  • #2
I think I've managed to show the first one. Suppose [itex]f(\epsilon) = \mathcal O(\epsilon)[/itex] (as [itex]\epsilon \searrow 0[/itex]). Then there exist constants [itex]C > 0[/itex] and [itex]\delta > 0[/itex] such that [itex]0 < \epsilon < \delta[/itex] implies

[tex]
\left| \frac{f(\epsilon)}{\epsilon} \right| \leq C.
[/tex]

To show that [itex](\mathcal O(\epsilon))^2 = \mathcal O(\epsilon^2)[/itex], one simply squares both sides of this inequality: for the same [itex]C[/itex] and [itex]\delta[/itex], [itex]0 < \epsilon < \delta[/itex] implies

[tex]
\left| \frac{f^2(\epsilon)}{\epsilon^2} \right| \leq C^2.
[/tex]
 
  • #3
The second identity can be obtained by applying the binomial expansion to the left-hand side, as sketched below.
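One way to spell that out (a sketch, using the standard series [itex]\sqrt{1 + x} = 1 + \tfrac{x}{2} - \tfrac{x^2}{8} + \cdots[/itex], valid for [itex]|x| < 1[/itex]): write [itex]g(\epsilon) = \mathcal O(\epsilon^2)[/itex], so that for small [itex]\epsilon[/itex]

[tex]
\sqrt{1 + g(\epsilon)} = 1 + \tfrac{1}{2} g(\epsilon) + \mathcal O\big(g(\epsilon)^2\big) = 1 + \mathcal O(\epsilon^2) + \mathcal O(\epsilon^4) = 1 + \mathcal O(\epsilon^2),
[/tex]

since [itex]\mathcal O(\epsilon^4) \subseteq \mathcal O(\epsilon^2)[/itex] as [itex]\epsilon \searrow 0[/itex].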
 

What is Big Oh notation and why is it important in computer science?

Big Oh notation is a mathematical notation used to describe how an algorithm's running time grows with the size of its input. It is important in computer science because it allows us to analyze and compare different algorithms and determine which one is the most efficient for a given problem.

How is Big Oh notation calculated?

Big Oh notation is calculated by looking at the worst-case scenario for an algorithm's time complexity. The bound is governed by the dominant (fastest-growing) term in the algorithm's runtime, and constant factors and lower-order terms are dropped. For example, if an algorithm has a runtime proportional to n^2 + 5n + 10, its complexity is simplified to O(n^2).
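That simplification can be checked directly from the definition (a quick sketch with explicit constants):

[tex]
n^2 + 5n + 10 \le n^2 + 5n^2 + 10n^2 = 16\, n^2 \qquad \text{for all } n \ge 1,
[/tex]

so [itex]n^2 + 5n + 10 = \mathcal O(n^2)[/itex] with, for instance, [itex]C = 16[/itex] and [itex]n_0 = 1[/itex].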

What is the difference between Big Oh, Big Omega, and Big Theta notation?

Big Oh notation gives an upper bound on an algorithm's time complexity, while Big Omega notation gives a lower bound. Big Theta notation gives both an upper and a lower bound, and is used when the two bounds match up to constant factors.
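In symbols, the standard definitions (stated here for non-negative functions of an integer [itex]n[/itex]) are:

[tex]
\begin{align*}
f(n) = \mathcal O(g(n)) &\iff \exists\, C > 0,\ n_0 : \ f(n) \le C\, g(n) \ \text{for all } n \ge n_0, \\
f(n) = \Omega(g(n)) &\iff \exists\, c > 0,\ n_0 : \ f(n) \ge c\, g(n) \ \text{for all } n \ge n_0, \\
f(n) = \Theta(g(n)) &\iff f(n) = \mathcal O(g(n)) \ \text{and} \ f(n) = \Omega(g(n)).
\end{align*}
[/tex]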

Why is it important to understand Big Oh notation when designing algorithms?

Understanding Big Oh notation is important when designing algorithms because it allows us to create more efficient and scalable solutions. By analyzing the time complexity of an algorithm, we can make informed decisions on how to improve its performance.

Are there any limitations to using Big Oh notation?

While Big Oh notation is a useful tool for analyzing and comparing algorithms, it does have limitations. It describes only asymptotic growth, so constant factors and lower-order terms are ignored, and a worst-case time bound by itself says nothing about memory usage or the details of a practical implementation. Additionally, an algorithm's asymptotic time complexity may not accurately reflect its performance on real-world inputs.
