# Is $\Theta(m^2)$ the best time complexity we can achieve for this problem?

In summary, the conversation discusses an algorithm for finding three numbers from three vectors that have a sum equal to 0. The proposed algorithm has a time complexity of $O(m^2)$ and involves sorting the vectors and synchronously iterating over two of the vectors while tracking the third in reverse. The conversation also explores potential modifications to the algorithm and addresses concerns about its effectiveness.
evinda
Hello! (Wave)

There are given three vectors $V_1, V_2, V_3$ of dimension $m$, whose elements are real numbers. I want to write an algorithm that determines if there are three numbers, one from each of the vectors $V_1, V_2$ and $V_3$, that have sum equal to $0$.

• Can you write an $O(m^3)$ algorithm?
• Can you write an $O(m^2 \log m)$ algorithm?
• Can you write an $O(m^2)$ algorithm?
• Can you find a quicker algorithm than $O(m^2)$?
That's what I have tried:

• Code:
Algorithm(V1,V2,V3){
    // K[i,j,k]==1 iff V1[i]+V2[j]+V3[k]==0
    l=0;
    for (i=0; i<m; i++){
        for (j=0; j<m; j++){
            for (k=0; k<m; k++){
                if (K[i,j,k]==1) l=1;
            }
        }
    }
    if (l==0) printf("No.");
    else printf("Yes.");
}

• Code:
Algorithm(V1,V2,V3){
    // K[i,j]=V1[i]+V2[j]; V3 is sorted, so we can binary search it
    s=0;
    for (i=0; i<m; i++){
        for (j=0; j<m; j++){
            l=Binarysearch(V3,-K[i,j],0,m-1);
            if (l==1) s=1;
        }
    }
    if (s==0) printf("No.");
    else printf("Yes.");
}

Is it right so far?

What could I do in order to achieve time complexity $O(m^2)$? (Thinking)
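For reference, here are runnable sketches of the two ideas above (in Python; the function names are mine): the first is the $\Theta(m^3)$ triple loop over all combinations, the second sorts $V_3$ and binary-searches it for $-(V_1[i]+V_2[j])$, giving $O(m^2 \log m)$.

```python
import bisect

def has_zero_triple_cubic(v1, v2, v3):
    """Theta(m^3): try every combination of one element per vector."""
    for a in v1:
        for b in v2:
            for c in v3:
                if a + b + c == 0:
                    return True
    return False

def has_zero_triple_bsearch(v1, v2, v3):
    """O(m^2 log m): for each of the m^2 pair sums, binary-search V3 for its negation."""
    v3_sorted = sorted(v3)                               # O(m log m)
    for a in v1:
        for b in v2:
            target = -(a + b)
            pos = bisect.bisect_left(v3_sorted, target)  # O(log m)
            if pos < len(v3_sorted) and v3_sorted[pos] == target:
                return True
    return False

# 6 + (-5) + (-1) == 0, so both should answer "Yes"
print(has_zero_triple_cubic([6, 2], [-5, 3], [-1, 7]))    # True
print(has_zero_triple_bsearch([6, 2], [-5, 3], [-1, 7]))  # True
```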

Hi! (Blush)

evinda said:
Is it right so far?

Seems right to me. (Nod)

What could I do in order to achieve time complexity $O(m^2)$? (Thinking)

Sort all vectors in $3m\log m$ steps.
Iterate over the first,
Iterate over the second,​
Track the third in reverse. (Thinking)​

I like Serena said:
Seems right to me. (Nod)

(Happy)
I like Serena said:
Sort all vectors in $3m\log m$ steps.
Iterate over the first,
Iterate over the second,​
Track the third in reverse. (Thinking)​
What do you mean with the last step? (Thinking)

evinda said:
What do you mean with the last step? (Thinking)

Assume that $v_1, v_2, v_3$ are sorted.

Suppose at some point in time we have indices $i,j,k$ such that $v_1[i]+v_2[j]+v_3[k]$ is greater than 0, but only just.
If we decrease $k$, we'll be below 0. That also means we have no solution yet. (Thinking)

Now increment $j$ and keep decrementing $k$ until we have the same situation - or a solution in which case we're done. (Wasntme)

I like Serena said:
Assume that $v_1, v_2, v_3$ are sorted.

Suppose at some point in time we have indices $i,j,k$ such that $v_1[i]+v_2[j]+v_3[k]$ is greater than 0, but only just.
If we decrease $k$, we'll be below 0. That also means we have no solution yet. (Thinking)

Now increment $j$ and keep decrementing $k$ (at least once) until we have the same situation - or a solution in which case we're done. (Wasntme)

In order to find the $k$ such that $v_1[i]+v_2[j]+v_3[k]$ is greater than 0, don't we have to iterate also over the third vector? Or am I wrong? (Thinking)

evinda said:
In order to find the $k$ such that $v_1[i]+v_2[j]+v_3[k]$ is greater than 0, don't we have to iterate also over the third vector? Or am I wrong? (Thinking)

Yes.
But we synchronously iterate over the 2nd and 3rd vector.
That is, we don't consider each and every combination of the 2 vectors.
As a result the complexity is $O(m)$ instead of $O(m^2)$. (Wasntme)
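A sketch of that synchronous scan in Python (the function name and overall shape are my own; the outer loop over $v_1$ is included):

```python
def find_zero_triple(v1, v2, v3):
    """Scan sorted v2 upward and sorted v3 downward; j and k together move at most 2m times."""
    v1, v2, v3 = sorted(v1), sorted(v2), sorted(v3)
    for a in v1:
        j, k = 0, len(v3) - 1
        while j < len(v2) and k >= 0:
            s = a + v2[j] + v3[k]
            if s == 0:
                return (a, v2[j], v3[k])
            if s > 0:
                k -= 1  # sum too big: drop the largest remaining v3 element
            else:
                j += 1  # sum too small: take the next larger v2 element
    return None

print(find_zero_triple([6], [-1, -5], [54, 53, -1, -49]))  # (6, -5, -1)
```

Each step of the inner while loop moves either $j$ up or $k$ down, which is why the two vectors cost $O(m)$ together rather than $O(m^2)$.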

I like Serena said:
Yes.
But we synchronously iterate over the 2nd and 3rd vector.
That is, we don't consider each and every combination of the 2 vectors.
As a result the complexity is $O(m)$ instead of $O(m^2)$. (Wasntme)

So do you mean that we increment j and decrement k synchronously?

Or should there be a condition that we should consider, and only if it is satisfied we change one of the two indices?

Because I tried to apply this algorithm: http://pastebin.com/9H4mL76P

at these three vectors:

[attached image vect.png: the three vectors, with the candidate numbers marked]

but we don't find that the marked numbers satisfy the desired property, although they do... (Worried)

evinda said:
So do you mean that we increment j and decrement k synchronously?

Or should there be a condition that we should consider, and only if it is satisfied we change one of the two indices?

Because I tried to apply this algorithm: http://pastebin.com/9H4mL76P

at these three vectors:

but we don't find that the marked numbers satisfy the desired property, although they do... (Worried)

Let's run through the algorithm as it should work. (Thinking)

Suppose we have already iterated through $i$ and now have $v_1[i]=6$.
Then:
\begin{array}{|c|c|c|c|l|}
\hline
v_1[i] & v_2[j] & v_3[k] & \text{sum} & \\
\hline
6 & -1 & 54 & 59 & \\
&& 53 & 58 & \\
&& -1 & 4 & \\
&& (-49) & (-44) & \text{too far}\\
& -5 & -1 & 0 & \text{we continue where we were and we've found it!}\\
\hline
\end{array}
(Emo)

I like Serena said:
Let's run through the algorithm as it should work. (Thinking)

Suppose we have already iterated through $i$ and now have $v_1[i]=6$.
Then:
\begin{array}{|c|c|c|c|l|}
\hline
v_1[i] & v_2[j] & v_3[k] & \text{sum} & \\
\hline
6 & -1 & 54 & 59 & \\
&& 53 & 58 & \\
&& -1 & 4 & \\
&& (-49) & (-44) & \text{too far}\\
& -5 & -1 & 0 & \text{we continue where we were and we've found it!}\\
\hline
\end{array}
(Emo)

Why do we omit from $v_1$ all the numbers except $6$, and from $v_2$ all the numbers except $-1,-5$? (Thinking)

evinda said:
Why do we omit from $v_1$ all the numbers except $6$, and from $v_2$ all the numbers except $-1,-5$? (Thinking)

I have skipped over the earlier iteration of $i$.
Perhaps you can say what it should look like? (Wondering)

In $v_2$ we've stopped at $-5$ since there is no need to continue - we have found a solution! (Party)
Ideally the algorithm breaks off at this point unless we want to find all possible solutions. (Nerd)

I like Serena said:
I have skipped over the earlier iteration of $i$.
Perhaps you can say what it should look like? (Wondering)

In $v_2$ we've stopped at $-5$ since there is no need to continue - we have found a solution! (Party)
Ideally the algorithm breaks off at this point unless we want to find all possible solutions. (Nerd)

So should the algorithm work as follows?

Calculate the sum $s=v_1[1]+v_2[1]$ and check if $v_3[1]$ is equal to $-s$. If not, and $v_3[1]<-s$, should we continue by checking if $v_3[2]$ is equal to $-s$, and if it isn't but $v_3[2]<-s$, increment $k$ again?
And if no $k$ exists that satisfies the desired property, should we increment $j$ and then check whether there is a $k$ such that $v_3[k]=-(v_1[1]+v_2[2])$, as above?

Or should the algorithm work in another way? (Thinking)

evinda said:
So should the algorithm work as follows?

Calculate the sum $s=v_1[1]+v_2[1]$ and check if $v_3[1]$ is equal to $-s$. If not, and $v_3[1]<-s$, should we continue by checking if $v_3[2]$ is equal to $-s$, and if it isn't but $v_3[2]<-s$, increment $k$ again?
And if no $k$ exists that satisfies the desired property, should we increment $j$ and then check whether there is a $k$ such that $v_3[k]=-(v_1[1]+v_2[2])$, as above?

Or should the algorithm work in another way? (Thinking)

Assuming each vector runs from $0$ to $m - 1$, I'd do something like:
Code:
for each i
    k = m
    for each j
        while k >= 1 and v1[i] + v2[j] + v3[k-1] >= -epsilon
            k = k - 1
        if abs(v1[i] + v2[j] + v3[k-1]) < epsilon
            Solution! (Party)
(Thinking)

I like Serena said:
Assuming each vector runs from $0$ to $m - 1$, I'd do something like:
Code:
for each i
    k = m
    for each j
        while k >= 1 and v1[i] + v2[j] + v3[k-1] >= -epsilon
            k = k - 1
        if abs(v1[i] + v2[j] + v3[k-1]) < epsilon
            Solution! (Party)
(Thinking)

Isn't the time complexity of the algorithm $O(m^3)$, given that $k$ could take at most $m$ different values? (Thinking) Or am I wrong?

evinda said:
Isn't the time complexity of the algorithm $O(m^3)$, given that $k$ could take at most $m$ different values? (Thinking) Or am I wrong?

Together, $j$ and $k$ will take at most $2m$ values. (Wasntme)

We can compare it with merging 2 sorted arrays.
Even though we iterate through 2 vectors, the result is still $O(m)$.

Or perhaps we should write out the iterations on your example.
It should show the difference with an algorithm that uses 3 nested for loops.

Code:
for each i
    k = m
    for each j
        while k >= 1 and v1[i] + v2[j] + v3[k-1] >= -epsilon
            k = k - 1
        if abs(v1[i] + v2[j] + v3[k-1]) < epsilon
            Solution! (Party)

I like Serena said:
Together, $j$ and $k$ will take at most $2m$ values. (Wasntme)

We can compare it with merging 2 sorted arrays.
Even though we iterate through 2 vectors, the result is still $O(m)$.

Or perhaps we should write out the iterations on your example.
It should show the difference with an algorithm that uses 3 nested for loops.
Couldn't we pick epsilon=0 and > instead of >= in the condition of the while loop ([m]while k >= 1 and v1[i] + v2[j] + v3[k-1] > 0[/m])? Or am I wrong? (Thinking)

If [m]v1[i] + v2[j] + v3[k-1] > 0[/m] were true for all $k,j$, wouldn't we go for each $j$ through all $v_3[k-1]$?

I haven't understood why, even though we iterate through 2 vectors, the result is still $O(m)$. Could you explain it further to me? (Thinking)

In our example we go through all $v_3[k-1]$ for $j=0$, but we find three numbers with the desired property for $j=1$.
If $v_1[i]+v_2[j]+v_3[k-1]>0, \forall i=0, \dots, m-2, \ \ j=0, \dots, m-1,\ \ k=1, \dots, m$, why would the time complexity be linear?

evinda said:
Couldn't we pick epsilon=0 and > instead of >= in the condition of the while loop ([m]while k >= 1 and v1[i] + v2[j] + v3[k-1] > 0[/m])? Or am I wrong? (Thinking)

Yes, we can. (Nod)

The epsilon is needed only in an actual computer program, where we need to take representation and calculation errors into account.
That's because, in floating-point arithmetic, a sum like $0.1 + 0.2 + (-0.3)$ is not exactly $0$.
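Representation error of this sort is easy to demonstrate (Python; $0.1 + 0.2 + (-0.3)$ is exactly $0$ on paper, but $0.1$, $0.2$ and $0.3$ have no exact binary representation):

```python
s = 0.1 + 0.2 + (-0.3)
print(s == 0)         # False: a tiny rounding residue remains
print(abs(s) < 1e-9)  # True: the epsilon comparison absorbs it
```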
If [m]v1[i] + v2[j] + v3[k-1] > 0[/m] were true for all $k,j$, wouldn't we go for each $j$ through all $v_3[k-1]$?

I haven't understood why, even though we iterate through 2 vectors, the result is still $O(m)$. Could you explain it further to me? (Thinking)

In our example we go through all $v_3[k-1]$ for $j=0$, but we find three numbers with the desired property for $j=1$.
If $v_1[i]+v_2[j]+v_3[k-1]>0, \forall i=0, \dots, m-2, \ \ j=0, \dots, m-1,\ \ k=1, \dots, m$, why would the time complexity be linear?

Starting with $j=0$, first we will go through all the $k$, ending at $k=1$, after which $k$ will not change any more.
Then we will go through the remaining $j$, leaving $k$ at $1$.
In other words: $2m$ iterations. (Wasntme)

Perhaps a table with all the iterations would help.
If it were really $\Theta(m^3)$, you should get about $m^3 =4^3=64$ iterations.
However, I predict an upper limit of $m \cdot 2m = 4 \cdot (2 \cdot 4) = 32$ iterations. (Nerd)
It will be less if we add some optimizations.
That's because once even the smallest element of $v_3$ leaves the sum positive (i.e. $k$ has reached $0$), every larger $j$ will too, so we can skip all remaining iterations of $j$ for this $i$. (Emo)
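One way to convince ourselves is to count the inner-loop steps empirically (a Python sketch; `count_inner_steps` is my own name). Each step advances $j$ or decrements $k$, and each can move at most $m$ times, so there are at most $2m$ steps per $i$:

```python
def count_inner_steps(v1, v2, v3):
    """Count how often the two-pointer scan advances j or decrements k."""
    v1, v2, v3 = sorted(v1), sorted(v2), sorted(v3)
    m = len(v1)
    steps = 0
    for a in v1:
        j, k = 0, m - 1
        while j < m and k >= 0:
            steps += 1
            if a + v2[j] + v3[k] > 0:
                k -= 1  # sum too big: move k down
            else:
                j += 1  # sum too small (or zero): move j up
    return steps

# m = 4, as in the example: at most m * 2m = 32 steps, far below m^3 = 64
print(count_inner_steps([6, 10, 20, 30], [-1, -5, 40, 50], [54, 53, -1, -49]) <= 32)  # True
```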

I like Serena said:
Starting with $j=0$, first we will go through all the $k$, ending at $k=1$, after which $k$ will not change any more.
Then we will go through the remaining $j$, leaving $k$ at $1$.
In other words: $2m$ iterations. (Wasntme)

Perhaps a table with all the iterations would help.
If it were really $\Theta(m^3)$, you should get about $m^3 =4^3=64$ iterations.
However, I predict an upper limit of $m \cdot 2m = 4 \cdot (2 \cdot 4) = 32$ iterations. (Nerd)
It will be less if we add some optimizations.
That's because once even the smallest element of $v_3$ leaves the sum positive (i.e. $k$ has reached $0$), every larger $j$ will too, so we can skip all remaining iterations of $j$ for this $i$. (Emo)

So do you mean that we could optimize the algorithm as follows?

Code:
Algorithm(V1,V2,V3){
    Heapsort(V1);
    Heapsort(V2);
    Heapsort(V3);
    for (i=0; i<m; i++){
        k=m;
        j=0;
        while (j<m and k>=1){
            while (k>=1 and V1[i]+V2[j]+V3[k-1]>0)
                k=k-1;
            if (V1[i]+V2[j]+V3[k-1]=0)
                return (V1[i], V2[j], V3[k-1]);
            j=j+1;
        }
    }
}

Do we have the worst case if $v_1[i]+v_2[j]+v_3[0]<0$ and $v_1[i]+v_2[j]+v_3[k]<0, \forall k\geq 1$, for all $i,j$?
Or do we have the worst case under other conditions? (Thinking)

Or do we have the worst case when $v_1[m-1]+v_2[m-1]+v_3[0]=0$?
Also, even if we execute the iterations for $j$ only if $k \geq 0$, won't the time complexity be again $O(2m)=O(m)$ ? (Thinking)

evinda said:
So do you mean that we could optimize the algorithm as follows?

Looks good. (Nod)
I'd add something like [m]return NO_SOLUTION[/m] at the end though. (Wasntme)

Do we have the worst case if $v_1[i]+v_2[j]+v_3[0]<0$ and $v_1[i]+v_2[j]+v_3[k]<0, \forall k\geq 1$, for all $i,j$?
Or do we have the worst case under other conditions? (Thinking)

I think we need more conditions. (Worried)

evinda said:
Or do we have the worst case when $v_1[m-1]+v_2[m-1]+v_3[0]=0$?
Also, even if we execute the iterations for $j$ only if $k \geq 0$, won't the time complexity be again $O(2m)=O(m)$ ? (Thinking)

This would be right if we add the extra condition that there is no other sum that is zero. (Nod)

Note that the optimization doesn't have any effect for the worst case scenario.
So to keep the algorithm clean and neat, we could also leave out all optimizations. (Thinking)

I like Serena said:
Looks good. (Nod)
I'd add something like [m]return NO_SOLUTION[/m] at the end though. (Wasntme)

Ah, ok... (Nod)

I like Serena said:
This would be right if we add the extra condition that there is no other sum that is zero. (Nod)

I see.. (Smile)

I like Serena said:
Note that the optimization doesn't have any effect for the worst case scenario.
So to keep the algorithm clean and neat, we could also leave out all optimizations. (Thinking)

Code:
Algorithm(V1,V2,V3){
1.   sort V1
2.   sort V2
3.   sort V3 ;
4.   for (i=0; i<m; i++){
5.        k=m;
6.        for (j=0; j<m; j++){
7.                while (k>=1 and V1[i]+V2[j]+V3[k-1]>0)
8.                         k=k-1
9.                if  (V1[i]+V2[j]+V3[k-1]=0)
10.                    return (V1[i], V2[j], V3[k-1]);
11.
12.        }
13.   }
14.   return NO_SOLUTION
15.}

So lines 1,2,3 require $O(m \log m)$ time; for a fixed $i$, lines 6-12 execute $O(2m)=O(m)$ times, and thus lines 4-13 take $O(m^2)$ time in total.

Thus, the time complexity of the algorithm is $O(m \log m)+O(m^2)=O(m^2)$, right? (Thinking)

Furthermore, I think that we cannot find a quicker algorithm than $O(m^2)$, or can we? (Thinking)

evinda said:
Thus, the time complexity of the algorithm is $O(m \log m)+O(m^2)=O(m^2)$, right? (Thinking)

Right! (Nod)
Furthermore, I think that we cannot find a quicker algorithm than $O(m^2)$, or can we? (Thinking)

We're not using that V1 is sorted, so currently there is no need to sort it.

If there is a way to get it quicker, I think we should use somehow that V1 is sorted as well.
But as yet, I don't see how.
The best we seem to be able to do, is improve the best-case and average-case times. (Thinking)

evinda said:
Code:
Algorithm(V1,V2,V3){
1.   Heapsort(V1);
2.   Heapsort(V2);
3.   Heapsort(V3);
4.   for (i=0; i<m; i++){
5.        k=m;
6.        for (j=0; j<m; j++){
7.                while (k>=1 and V1[i]+V2[j]+V3[k-1]>0)
8.                         k=k-1
9.                if  (V1[i]+V2[j]+V3[k-1]=0)
10.                    return (V1[i], V2[j], V3[k-1]);
11.
12.        }
13.   }
14.   return NO_SOLUTION
15.}

One more thing.
In line 9 we should have
Code:
9.                if  (k>=1 and V1[i]+V2[j]+V3[k-1]=0)
Otherwise we access the vector outside of its range, which is bad. (Doh)
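With that guard in place, the whole algorithm can be written as a runnable sketch (Python; I use the built-in sort where the pseudocode uses Heapsort, and `NO_SOLUTION` becomes `None`):

```python
def algorithm(v1, v2, v3):
    """Return (a, b, c), one element per vector, with a + b + c == 0, else None."""
    v1, v2, v3 = sorted(v1), sorted(v2), sorted(v3)  # O(m log m) each
    m = len(v1)
    for i in range(m):
        k = m
        for j in range(m):
            # k only moves down: a larger v2[j] needs a smaller v3 partner
            while k >= 1 and v1[i] + v2[j] + v3[k-1] > 0:
                k -= 1
            # the k >= 1 guard keeps v3[k-1] inside the vector's range
            if k >= 1 and v1[i] + v2[j] + v3[k-1] == 0:
                return (v1[i], v2[j], v3[k-1])
    return None  # NO_SOLUTION

print(algorithm([6, 10, 20, 30], [-1, -5, 40, 50], [54, 53, -1, -49]))  # (6, -5, -1)
```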

I like Serena said:
One more thing.
In line 9 we should have
Code:
9.                if  (k>=1 and V1[i]+V2[j]+V3[k-1]=0)
Otherwise we access the vector outside of its range, which is bad. (Doh)

Ah I see... (Nod)

In order to achieve a better time complexity we could sort the vectors using the algorithm [m]Countingsort[/m], right? (Thinking)

Don't we achieve linear time with the following algorithm? Or am I wrong? (Thinking)

Code:
Alg(v1, v2, v3, low, high, m){
    Countingsort(v1);
    Countingsort(v2);
    Countingsort(v3);
    mid=floor((high-low)/2)
    i=j=k=mid
    while k>=0 && k<=m
        s=v1[i]+v2[j]+v3[k]
        if s>0
            k=floor((mid-low)/2)
        else if s<0
            k=floor((high-low)/2)
        else
            return i, j, k
    while j>=0 && j<=m
        s=v1[i]+v2[j]+v3[k]
        if s>0
            j=floor((mid-low)/2)
        else if s<0
            j=floor((high-low)/2)
        else
            return i, j, k
    while i>=0 && i<=m
        s=v1[i]+v2[j]+v3[k]
        if s>0
            i=floor((mid-low)/2)
        else if s<0
            i=floor((high-low)/2)
        else
            return i, j, k
}

evinda said:
Ah I see... (Nod)

In order to achieve a better time complexity we could sort the vectors using the algorithm [m]Countingsort[/m], right? (Thinking)

From wikipedia:
In computer science, counting sort is an algorithm for sorting a collection of objects according to keys that are small integers; that is, it is an integer sorting algorithm.
However, it says in your problem statement that we have real numbers.

So no, we can't use counting sort. (Doh)
evinda said:
Don't we achieve with the following algorithm linear time? Or am I wrong? (Thinking)

That algorithm makes the assumption that 2 of the indices will be at their mid value.
That won't work.
For a worst case $i$ and $j$ (or rather any 2 of the indices) can really be any value, leaving only $k$ to be found.
So I'm actually sure that we can't do better than $\Theta(m^2)$. (Shake)

I like Serena said:
From wikipedia:
In computer science, counting sort is an algorithm for sorting a collection of objects according to keys that are small integers; that is, it is an integer sorting algorithm.
However, it says in your problem statement that we have real numbers.

So no, we can't use counting sort. (Doh)

Ah, I see... (Nod)

I like Serena said:
That algorithm makes the assumption that 2 of the indices will be at their mid value.
That won't work.
For a worst case $i$ and $j$ (or rather any 2 of the indices) can really be any value, leaving only $k$ to be found.
So I'm actually sure that we can't do better than $\Theta(m^2)$. (Shake)

How could we justify formally why we can't do better than $\Theta(m^2)$ ? (Thinking)

