What is the Time Complexity of These Binary Search Tree Functions?

  • #1
evinda
Hello! (Wave)

The following two functions are given and I want to find their time complexity.
Code:
function BinarySearchTreeLookUp(key K,pointer R): Type
   if (R==NULL) return nill;
   else if (K==R->key) return R->data;
   else if (K<R->key)
         return(BinarySearchTreeLookUp(K,R->LC));
   else 
         return(BinarySearchTreeLookUp(K,R->RC));
Code:
function BinarySearchTreeLookUp(key K,pointer R): Type
   P=R;
   while (P!=NULL && K!=P->key){
            if (K<P->key) P=P->LC;
            else P=P->RC;
   }
   if (P!=NULL) return(P->data);
   else return nill;
}

I think that the time complexity of the first function is $O(h)$ where $\log n \leq h \leq n-1$ and that the time complexity of the second function is $O(n)$. Am I right? (Thinking)
 
  • #2
Bacterius
Hello, evinda. Let me quickly preempt all of your follow-up questions by asking you to produce a proof (or a sketch of a proof) that the time complexity of the first algorithm is $O(h)$ where $h$ is presumably the height of the input binary search tree. Then we can check and discuss your proof and hence answer not only your current question but also your real questions, which are "how could we prove it" (because you will already have a proof) and "why does it work" (because you will hopefully understand your own proof). We can discuss the second algorithm once you have finished working on the first one.

If you are somehow only interested in knowing whether your answers are correct, you are right for the first algorithm (assuming $h$ is the height of the tree, which I think is what you meant - please define any variables you use) and completely wrong for the second algorithm.

PS: "nill" is not a word. it's either "null" or "nil" (probably "null" since you are already using it).​
 
  • #3
evinda
Bacterius said:
Hello, evinda. Let me quickly preempt all of your follow-up questions by asking you to produce a proof (or a sketch of a proof) that the time complexity of the first algorithm is $O(h)$ where $h$ is presumably the height of the input binary search tree. Then we can check and discuss your proof and hence answer not only your current question but also your real questions, which are "how could we prove it" (because you will already have a proof) and "why does it work" (because you will hopefully understand your own proof). We can discuss the second algorithm once you have finished working on the first one.

We have the worst case if either all the keys other than that of the root are greater than that of the root, or all the other keys are smaller than that of the root, because in such cases the tree is not balanced.
Then the height of the tree (which is just the height of the root) is, in the worst case, $n-1$.
In the best case, we have a balanced tree, whose height we know is $O(\lg n)$.
No matter in which case we are, we cannot have a greater time complexity than $O(h)$ since the path that we will follow cannot be greater than the greatest path from the root to a leaf, that is $O(h)$.
Right? (Thinking)

Bacterius said:
If you are somehow only interested in knowing whether your answers are correct, you are right for the first algorithm (assuming $h$ is the height of the tree, which I think is what you meant - please define any variables you use) and completely wrong for the second algorithm.

Yes, with $h$ I mean the height of the tree... (Nod)

Is the time complexity of the second algorithm also $O(h)$ ? (Thinking)
 
  • #4
Bacterius
evinda said:
We have the worst case if either all the keys other than that of the root are greater than that of the root, or all the other keys are smaller than that of the root, because in such cases the tree is not balanced.

This formulation is kind of awkward. A tree has only one root. Perhaps a better way to put it is "if every node's key is greater than its parent node's key or if every node's key is smaller than its parent node's key". That way you make sure that the statement holds for every node, not just the root, which identifies the worst case correctly (basically, a linked list). Otherwise your definition above would apply to trees such as these, which are clearly not the worst case:
[Image: a binary search tree in which every key other than the root's is smaller than the root's key, yet the tree is reasonably balanced.]

As you can see, every node other than the root has a smaller key than the root's key, yet the algorithm still performs well. You could also show a picture of an actual worst case tree for illustration if you wanted.
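If it helps, the rough C sketch below shows what such a worst-case tree looks like in practice (the Node struct and the insert helper are just illustrative, not part of the pseudocode above): inserting the keys in sorted order produces a chain in which every node has only a right child, so the height is $n-1$.
Code:
#include <stdio.h>
#include <stdlib.h>

typedef struct Node {
    int key;
    struct Node *LC, *RC;   /* left/right children, named as in the pseudocode */
} Node;

/* Plain (unbalanced) BST insertion. */
Node *insert(Node *root, int key) {
    if (root == NULL) {
        Node *n = malloc(sizeof(Node));
        n->key = key;
        n->LC = n->RC = NULL;
        return n;
    }
    if (key < root->key) root->LC = insert(root->LC, key);
    else                 root->RC = insert(root->RC, key);
    return root;
}

/* Height = number of edges on the longest root-to-leaf path. */
int height(Node *root) {
    if (root == NULL) return -1;
    int l = height(root->LC), r = height(root->RC);
    return 1 + (l > r ? l : r);
}

int main(void) {
    Node *root = NULL;
    int n = 10;
    for (int k = 1; k <= n; k++)      /* sorted insertions 1, 2, ..., n */
        root = insert(root, k);
    printf("n = %d, height = %d\n", n, height(root));   /* height = n - 1 = 9 */
    return 0;
}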

Also, you haven't yet given the proof that the algorithm runs in $O(h)$, so this doesn't explain why it is a worst case. This should be moved after the proof that the algorithm runs in $O(h)$.

evinda said:
In the best case, we have a balanced tree, whose height we know is $O(\lg n)$.

Okay, sure. You might give the exact height of a balanced tree, which is $\lceil \log_2 n \rceil$ (or something similar depending on your exact definition of "height"). That way your O-notation is justified. You could show a picture of the best case if you wanted.
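For example, if "height" counts the edges on the longest root-to-leaf path, a perfect binary tree with $n = 2^{h+1} - 1$ nodes has height $h = \log_2(n+1) - 1$, so $n = 15$ gives $h = 3$.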

Same remark as before, why is this a best case? Should be moved after the proof that the algorithm runs in $O(h)$.

evinda said:
No matter in which case we are

Wait, are you giving a proof only for the best and worst case, or does it work for all $n$? You can't just give a proof for the best and worst case and then interpolate it for every $n$ in between; it doesn't work like that. So get rid of this, replace it with "for any $n$-node binary search tree of some height $h$", and move your best/worst case remarks after this paragraph.
evinda said:
we cannot have a greater time complexity than $O(h)$ since the path that we will follow cannot be greater than the greatest path from the root to a leaf, that is $O(h)$.

Why is that? I mean, yes, okay, but you should explain why the path can't be longer than the longest path from the root to any leaf. For instance, a convincing argument for this step (that doesn't use paths) could be "at each iteration, the algorithm either stops, or recurses down to a child of the current node, which amounts to going down one level of the tree, therefore it must reach a leaf in at most $h$ iterations, at which point it will stop". Try and come up with an argument using paths. It doesn't have to be as rigorous as a formal logic proof, but it should be enough that anyone reading it can convince himself that the argument is solid.
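To make that argument concrete, here is a rough C sketch (the depth counter and the small example tree are mine, purely for illustration) of the first function instrumented with the level of the node it is currently visiting; the deepest level it ever touches is at most the height $h$ of the tree.
Code:
#include <stdio.h>
#include <stdlib.h>

typedef struct Node { int key; int data; struct Node *LC, *RC; } Node;

/* The recursive lookup, instrumented with the level of the current node. */
int lookup(Node *R, int K, int depth, int *max_depth) {
    if (R == NULL) return -1;                 /* "nil": key not in the tree */
    if (depth > *max_depth) *max_depth = depth;
    if (K == R->key) return R->data;
    if (K < R->key) return lookup(R->LC, K, depth + 1, max_depth);
    return lookup(R->RC, K, depth + 1, max_depth);
}

/* Small helper to build the example tree by hand. */
Node *mk(int key, Node *lc, Node *rc) {
    Node *n = malloc(sizeof(Node));
    n->key = key; n->data = 10 * key; n->LC = lc; n->RC = rc;
    return n;
}

int main(void) {
    /* A balanced tree with keys 1..7; its height is h = 2. */
    Node *root = mk(4,
        mk(2, mk(1, NULL, NULL), mk(3, NULL, NULL)),
        mk(6, mk(5, NULL, NULL), mk(7, NULL, NULL)));

    for (int K = 0; K <= 8; K++) {            /* both present and absent keys */
        int max_depth = 0;
        lookup(root, K, 0, &max_depth);
        printf("key %d: deepest level visited = %d (h = 2)\n", K, max_depth);
    }
    return 0;
}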

evinda said:
Is the time complexity of the second algorithm also $O(h)$ ? (Thinking)

Maybe. Maybe not. Try running the algorithm on the worst case and best case inputs to the previous algorithm, to see how they compare. Then, look at the code for the second algorithm. What is that while loop doing? Run through both algorithms step by step on the same input. Aren't they basically doing the same thing? What does that mean? Often just walking through each step of the algorithm is the best way to understand it, it's not just for computers!
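If you want to mechanise that walkthrough, here is a rough C harness (mine, purely illustrative; it repeats the Node type so it compiles on its own) that runs a recursive and an iterative lookup on the same worst-case chain and counts the nodes each one inspects. The two counts always agree, because the while loop makes exactly the same left/right decisions as the recursion.
Code:
#include <stdio.h>
#include <stdlib.h>

typedef struct Node { int key; struct Node *LC, *RC; } Node;

/* Plain BST insertion, used only to build the test tree. */
Node *insert(Node *root, int key) {
    if (root == NULL) {
        Node *n = malloc(sizeof(Node));
        n->key = key; n->LC = n->RC = NULL;
        return n;
    }
    if (key < root->key) root->LC = insert(root->LC, key);
    else                 root->RC = insert(root->RC, key);
    return root;
}

/* Recursive lookup (first function), counting every node it inspects. */
int lookup_rec(Node *R, int K, int *visited) {
    if (R == NULL) return 0;
    (*visited)++;
    if (K == R->key) return 1;
    if (K < R->key) return lookup_rec(R->LC, K, visited);
    return lookup_rec(R->RC, K, visited);
}

/* Iterative lookup (second function), counting every node it inspects. */
int lookup_iter(Node *R, int K, int *visited) {
    Node *P = R;
    while (P != NULL && K != P->key) {
        (*visited)++;
        P = (K < P->key) ? P->LC : P->RC;
    }
    if (P != NULL) (*visited)++;     /* count the matching node as well */
    return P != NULL;
}

int main(void) {
    /* Worst case: sorted insertions give a chain of height n-1 = 7. */
    Node *chain = NULL;
    for (int k = 1; k <= 8; k++) chain = insert(chain, k);

    for (int K = 1; K <= 9; K++) {   /* key 9 is a miss */
        int a = 0, b = 0;
        lookup_rec(chain, K, &a);
        lookup_iter(chain, K, &b);
        printf("key %d: recursive visits %d nodes, iterative visits %d nodes\n",
               K, a, b);
    }
    return 0;
}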
 
  • #5


Yes and no. The first function does have a time complexity of $O(h)$, where $h$ is the height of the binary search tree: each recursive call moves to a child of the current node, so in the worst case the function walks a single root-to-leaf path of length $h$ before stopping, while in the best case the key sits at the root and the lookup takes $O(1)$.

The second function, however, is also $O(h)$, not $O(n)$. Its while loop does not visit all the nodes of the tree; each iteration moves P one level down (to P->LC or P->RC), exactly like the recursion, so it too follows a single root-to-leaf path of at most $h$ steps. It degrades to $O(n)$ only when the tree itself is degenerate, i.e. a chain with $h = n-1$; on a balanced tree it runs in $O(\lg n)$.
 

1. What is time complexity in relation to functions?

Time complexity is a measure of how much time it takes for a function to run as the input size increases. It is used to analyze the efficiency of an algorithm or function, with lower time complexity indicating faster performance.

2. How is time complexity calculated?

Time complexity is typically calculated by counting the number of operations a function performs as the input size increases. The number of operations, or "steps", is then expressed in terms of the input size, usually denoted as "n".
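As a toy illustration (a standalone C example, not tied to the BST code in this thread), the snippet below counts the comparisons made by a worst-case linear search; doubling n doubles the count, which is exactly the "number of operations expressed in terms of n" idea.
Code:
#include <stdio.h>

/* Linear search that also reports how many comparisons it made. */
int linear_search(const int *a, int n, int target, int *comparisons) {
    for (int i = 0; i < n; i++) {
        (*comparisons)++;
        if (a[i] == target) return i;
    }
    return -1;
}

int main(void) {
    int a[1000];
    for (int i = 0; i < 1000; i++) a[i] = i;

    for (int n = 125; n <= 1000; n *= 2) {
        int comparisons = 0;
        linear_search(a, n, -1, &comparisons);   /* worst case: target absent */
        printf("n = %4d -> %4d comparisons\n", n, comparisons);
    }
    return 0;
}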

3. What is the Big O notation and how is it related to time complexity?

The Big O notation is a mathematical notation used to describe the upper bound on the growth rate of a function. In terms of time complexity, it is used to represent the worst-case scenario of how the time taken for a function to run increases as the input size increases. It helps categorize functions into different levels of efficiency, with lower time complexity functions having a better performance.

4. What are some common time complexities and their corresponding notations?

Some common time complexities and their Big O notations include O(1) (constant time), O(log n) (logarithmic time), O(n) (linear time), O(n^2) (quadratic time), O(2^n) (exponential time). There are also more complex notations such as O(n log n) and O(n^3).
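As a minimal sketch of the gap between two of these classes (again a toy C example, not from the thread), an O(n) loop runs n times while an O(n^2) nested loop runs n*n times; at n = 100 that is 100 versus 10,000 iterations.
Code:
#include <stdio.h>

int main(void) {
    int n = 100;
    long linear = 0, quadratic = 0;

    for (int i = 0; i < n; i++)            /* O(n): one pass over the input */
        linear++;

    for (int i = 0; i < n; i++)            /* O(n^2): a pass inside a pass */
        for (int j = 0; j < n; j++)
            quadratic++;

    printf("n = %d: the O(n) loop ran %ld times, the O(n^2) loop ran %ld times\n",
           n, linear, quadratic);
    return 0;
}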

5. How can knowing the time complexity of a function be useful?

Knowing the time complexity of a function can help in understanding and predicting the efficiency of an algorithm or program. It can also aid in identifying and optimizing inefficient code, leading to faster and more efficient programs. Time complexity analysis is an important tool in computer science and is used in various applications such as software development, data analysis, and machine learning.
