How is computational complexity determined?

SUMMARY

Computational complexity is determined by analyzing the number of bit operations an algorithm needs to terminate, usually focusing on the worst case or average case. For example, checking all subsets of n integers for a subset that sums to zero takes O(2^n) steps. Some complexities involve non-integer exponents, such as O((5/2)n^2.2342); constant factors like the 5/2 are dropped in Big O notation, leaving O(n^2.2342). Graphing input size against execution time and fitting a curve can also help estimate an algorithm's complexity empirically.
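As a sketch of the curve-fitting idea mentioned above: if an algorithm's running cost follows roughly c·n^k, the exponent k can be estimated as the slope of a least-squares line fit to log(cost) versus log(n). The function name and the simulated data below are illustrative, not from any particular library; with real measurements you would substitute observed times.

```python
import math

def estimate_exponent(ns, costs):
    # Least-squares slope of log(cost) vs log(n): for cost ≈ c * n^k,
    # the log-log plot is a line with slope k and intercept log(c).
    xs = [math.log(n) for n in ns]
    ys = [math.log(c) for c in costs]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

ns = [100, 200, 400, 800, 1600]
costs = [3.0 * n ** 2 for n in ns]            # simulated O(n^2) cost data
print(round(estimate_exponent(ns, costs), 3))  # exactly quadratic data → 2.0
```

Note that this only estimates the exponent over the measured range; it cannot prove an asymptotic bound the way an operation-count analysis can.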

PREREQUISITES
  • Understanding of Big O notation
  • Familiarity with algorithm analysis techniques
  • Knowledge of subset problems in computational theory
  • Basic graphing skills for data visualization
NEXT STEPS
  • Study the principles of algorithm analysis in depth
  • Learn about different cases in computational complexity (worst-case, average-case)
  • Explore graphing tools for visualizing algorithm performance
  • Investigate specific algorithms and their complexities, such as the subset-sum problem
USEFUL FOR

Computer scientists, algorithm developers, and students studying computational theory will benefit from this discussion on determining computational complexity.

~Death~
Like for any algorithm?

On Wikipedia it lists multiplication as O(n^2),

but some of them have non-integer exponents, like O((5/2)n^2.2342) or something, so how would you determine that?

Would you just use a computer, make a graph of input size in bytes vs. time in seconds, and fit a curve to it?

I understand it for, e.g., the problem of determining whether there exists a subset of a set of integers that adds to 0:

If the algorithm is to check each subset, and there are n integers, then there are 2^n - 1 non-empty subsets to check, but then you'd have to add up each one, so would that count as a step?

So there are 2^n + n - 1 steps for an input of n. Does that mean it's on the order of O(2^n)?
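The brute-force algorithm described in the question can be sketched as follows; the function name is illustrative. For n integers it examines up to 2^n - 1 non-empty subsets, which is where the O(2^n) behavior comes from.

```python
from itertools import combinations

def has_zero_subset(nums):
    # Brute-force subset-sum: check every non-empty subset.
    # For n integers there are 2^n - 1 non-empty subsets, so in the
    # worst case the loop examines O(2^n) subsets, summing each one.
    for size in range(1, len(nums) + 1):
        for subset in combinations(nums, size):
            if sum(subset) == 0:
                return True
    return False

print(has_zero_subset([3, -1, 5, -2]))  # True: 3 + (-1) + (-2) = 0
print(has_zero_subset([1, 2, 3]))       # False: no subset sums to 0
```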
 
You might want to check out http://en.wikipedia.org/wiki/Big-o_notation first.

So yes, [itex]2^n+n-1\in\mathcal{O}(2^n)[/itex]. To determine this, you don't run the algorithm; you analyze (usually) the number of bit operations it takes for your algorithm to terminate, given an input of n bits. This number typically varies with the particular input, so we usually look at the WORST case or the AVERAGE case.
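A quick numerical check of why the lower-order terms in 2^n + n - 1 disappear: the ratio of the full step count to 2^n tends to 1 as n grows, so 2^n dominates and the bound is O(2^n).

```python
# Step count from the question: 2^n subsets-and-sums plus n - 1
# extra operations. The ratio to 2^n approaches 1 as n grows,
# which is exactly why 2^n + n - 1 is in O(2^n).
def steps(n):
    return 2 ** n + n - 1

for n in (10, 20, 30):
    print(n, steps(n) / 2 ** n)
```

For n = 30 the ratio already differs from 1 by less than one part in ten million.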
 
