Complexity of divide and conquer algorithm?

AI Thread Summary
The discussion centers on the application of a divide-and-conquer approach to an algorithm with complexity O(n^k). The initial premise suggests that partitioning the problem into halves would reduce the time complexity, leading to a summation of T / 2^(i(k-1)) over infinitely many recursion levels. However, several critical counterpoints are raised. It is emphasized that not all problems can be effectively partitioned, and even when they can, the costs associated with partitioning and reconstructing the solution may outweigh the benefits. Examples such as matrix multiplication illustrate that partitioning does not necessarily lead to a reduction in computational time. In fact, conventional matrix multiplication remains O(n^3) even when split into submatrices, and the additional overhead from partitioning can increase overall complexity. The discussion concludes that while divide-and-conquer strategies can be beneficial in some cases, they are not universally applicable and can sometimes result in higher costs than solving the original problem directly.
homomorphism
Let's say I have some algorithm with complexity O(n^k) for some constant k, and let's say it runs in some time T. Now I want to implement a divide-and-conquer approach for this algorithm, dividing the problem in half at each recursion. So the first time I run my algorithm it will run in time T. At the next level it will be O(\frac{n^k}{2^k}) + O(\frac{n^k}{2^k}) [since I'm running my algorithm on both halves], so the time will be \frac{T}{2^k} + \frac{T}{2^k} = \frac{T}{2^{k-1}}. At the level after that it will be 4 \cdot O(\frac{n^k}{4^k}), so the time will be 4 \cdot \frac{T}{4^k} = \frac{T}{2^{2(k-1)}}.

So basically, for a worst case scenario, with infinite recursions, I should have some sum,
\sum_{i=0}^{\infty} \frac{T}{2^{i(k-1)}}

where T is a constant. Does this look about right?
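As a rough sanity check on that sum (assuming the problem really does split cleanly into two independent halves at every level, with no partitioning or merging cost), here is a small Python sketch that compares the partial sums against the closed form of the geometric series; the names are just for illustration:

```python
# Numerical sketch of the per-level cost, assuming a clean split into two
# independent halves with no partition/merge overhead (illustration only).

def level_cost(n, k, i):
    """Cost at recursion depth i: 2^i subproblems, each of size n / 2^i."""
    return (2 ** i) * (n / 2 ** i) ** k

def total_cost(n, k, depth):
    """Partial sum of the per-level costs down to the given depth."""
    return sum(level_cost(n, k, i) for i in range(depth + 1))

n, k = 1024, 3
T = n ** k                          # time for one run on the whole input
print(total_cost(n, k, 10))         # partial sum of T / 2^(i(k-1))
print(T / (1 - 2 ** (1 - k)))       # geometric-series limit, valid for k > 1
```

For k > 1 the series converges to T / (1 - 2^(1-k)), so even with infinitely many levels the total stays within a constant factor of T.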
 
bump...
 
homomorphism said:
Does this look about right?

You are doing several things wrong here. You are assuming that the problem at hand can be partitioned. This is not always the case. You are assuming that partitioning buys something, and this also is not always the case. Even if some problem can be partitioned and the partitioned problem is easier to solve, there is typically a cost involved in reconstructing the solution in terms of the original problem. Finally, you are assuming infinite recursion.

Consider a couple of examples: Multiplying a pair of square matrices and sorting a list.

Using the conventional matrix multiplication algorithm to compute the product of a pair of NxN matrices requires N^3 multiplications and N^3 - N^2 additions; it is an O(n^3) operation. Suppose the matrices are of size 2Nx2N. Each matrix can be split into four NxN submatrices, and the product of the original matrices can be computed using these submatrices. Since 2x2 matrix multiplication requires 8 multiplications and 4 additions, the partitioned problem involves 8 NxN matrix multiplications and 4 NxN matrix additions, for a total of 8*N^3 = (2N)^3 scalar multiplications and 8*(N^3 - N^2) + 4*N^2 = (2N)^3 - (2N)^2 additions. Partitioning doesn't buy a thing here!
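To make the bookkeeping explicit, here is a minimal sketch of that block partitioning in Python (NumPy is used only for convenience; it is not part of the argument). Each of the eight block products is itself a conventional NxN multiplication, so the scalar operation count is unchanged:

```python
import numpy as np

# Block-partitioned product of two 2N x 2N matrices. Each block product is a
# conventional N x N multiplication, so the total scalar work is 8 * N^3
# multiplications, exactly (2N)^3, the same as multiplying directly.
def block_multiply(A, B):
    n = A.shape[0] // 2
    A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
    B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]
    C11 = A11 @ B11 + A12 @ B21   # 2 block multiplications, 1 block addition
    C12 = A11 @ B12 + A12 @ B22
    C21 = A21 @ B11 + A22 @ B21
    C22 = A21 @ B12 + A22 @ B22
    return np.block([[C11, C12], [C21, C22]])

A = np.random.rand(8, 8)
B = np.random.rand(8, 8)
assert np.allclose(block_multiply(A, B), A @ B)
```

The assert just confirms that the block-partitioned product equals the direct product; the work done is the same either way.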

There exists a non-conventional technique for multiplying 2x2 matrices (Strassen's algorithm) that requires only seven multiplications, at the cost of extra bookkeeping and a good deal of extra additions. Partitioning can be of benefit here. Infinite recursion is not possible; a 1x1 matrix cannot be split into smaller parts. For large matrices, this technique can be used to reduce the order from N^3 to N^(log2 7), about N^2.81.
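For reference, here is a one-level sketch of that seven-multiplication scheme, again assuming the matrix splits evenly into four blocks; it is an illustration, not a tuned implementation:

```python
import numpy as np

def strassen_step(A, B):
    """One level of Strassen's scheme: 7 block multiplications instead of 8,
    paid for with 18 block additions/subtractions. Recursing on each block
    product gives T(n) = 7*T(n/2) + O(n^2), i.e. O(n^(log2 7))."""
    n = A.shape[0] // 2
    A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
    B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]
    M1 = (A11 + A22) @ (B11 + B22)
    M2 = (A21 + A22) @ B11
    M3 = A11 @ (B12 - B22)
    M4 = A22 @ (B21 - B11)
    M5 = (A11 + A12) @ B22
    M6 = (A21 - A11) @ (B11 + B12)
    M7 = (A12 - A22) @ (B21 + B22)
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])

A = np.random.rand(8, 8)
B = np.random.rand(8, 8)
assert np.allclose(strassen_step(A, B), A @ B)
```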

Partitioning is the key tactic needed to make sorting an O(n*log(n)) problem rather than O(n^2). However, the partitioning can only be carried so far (you can't split a list of one into parts), and the resulting sorted lists need to be merged.
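Here is a minimal merge-sort sketch showing both halves of that statement: the recursion bottoms out at lists of length one, and the sorted halves still have to be merged back together:

```python
def merge_sort(xs):
    """Split the list in half, sort each half, then merge: O(n log n) overall."""
    if len(xs) <= 1:              # recursion stops; a one-element list is sorted
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge step: reconstruct the solution
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7, 3]))
```

The merge step is the "reconstruct the solution" cost: O(n) work per level, which is what keeps the total at O(n log n).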
 
Yeah, I definitely understand what you're saying. But take your matrix multiplication example: say it runs in some time T, and then you partition it. The time the computation takes would be less, right (assuming it can be partitioned, you have a large enough matrix, etc.)? That's roughly what my formula above is trying to say (though you can modify the bounds of the summation depending on the situation). Is this right?
 
No. Partitioning the standard matrix multiplication algorithm doesn't buy a thing (the total number of multiplications and additions doesn't change). In fact, it hurts: it takes time to perform the partitioning, extra time to do the extra bookkeeping that comes with partitioning, and extra time to reconstruct the solution from the partitioned products. Divide and conquer does not always work. Sometimes using divide-and-conquer costs more than just solving the original problem directly. Matrix multiplication is one case where this occurs; there are many, many others.
 