What is the lowest known complexity for multiplication?

In summary, there are several multiplication algorithms, each with its own time complexity. The 'schoolbook' method takes O(n^2) operations, Karatsuba multiplication takes O(n^{\lg 3}) \approx O(n^{1.59}), and the fastest method discussed, Schönhage–Strassen (practical only for very large numbers), takes O(n \log n \log\log n). Despite efforts to find a nontrivial lower bound on the complexity of multiplication, there has been limited progress. Donald Knuth's book discusses the lack of such a bound, and while an n log n algorithm has been claimed in some sources, the thread concludes it is still a conjecture and that Schönhage–Strassen remains the best known upper bound.
  • #1
CRGreathouse
'Schoolbook' multiplication takes [itex]O(n^2)[/itex] operations. Karatsuba multiplication takes [itex]O(n^{\lg 3})\approx O(n^{1.59})[/itex] operations. The best method I know of (which is only practical for very large numbers) is [itex]O(n \log n \log\log n)[/itex].
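For concreteness, here is a minimal Python sketch of the Karatsuba scheme mentioned above (the code and function name are my own illustration, not from the thread). It splits each operand in half and performs three recursive products instead of the naive four, which is where the [itex]O(n^{\lg 3})[/itex] bound comes from:

```python
# Minimal sketch of Karatsuba multiplication for non-negative integers.
# Each level replaces four half-size products with three, giving
# T(n) = 3 T(n/2) + O(n), i.e. O(n^{lg 3}) ~ O(n^1.59) digit operations.
def karatsuba(x: int, y: int) -> int:
    # Base case: single-digit factors are multiplied directly.
    if x < 10 or y < 10:
        return x * y
    # Split both numbers around half the larger digit count.
    m = max(len(str(x)), len(str(y))) // 2
    base = 10 ** m
    x_hi, x_lo = divmod(x, base)
    y_hi, y_lo = divmod(y, base)
    # Three recursive multiplications instead of four.
    hi = karatsuba(x_hi, y_hi)
    lo = karatsuba(x_lo, y_lo)
    mid = karatsuba(x_hi + x_lo, y_hi + y_lo) - hi - lo
    # Recombine: x*y = hi*base^2 + mid*base + lo.
    return hi * base * base + mid * base + lo
```

(Of course Python's built-in `*` is already fast; this is only to show the recursion structure behind the exponent.)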

Is there a known nontrivial lower bound on the complexity of multiplication? I read a paper that discussed the time-(silicon) area tradeoff on multiplication, but I wasn't able to get a useful bound from that (caring only about the time and not knowing how to 'remove' the area):
The Area-Time Complexity of Binary Multiplication

Can anyone help me out here?
 
  • #2
Well, Donald Knuth's The Art of Computer Programming, Volume 2: Seminumerical Algorithms has a chapter entitled "How Fast Can We Multiply?", and if I remember correctly he comments on the lack of a lower bound for multiplication algorithms. However, that was written over 35 years ago, so you'd think there would have been some progress since then. I tried searching for papers that reference that chapter of Knuth's book, but didn't find very many.
 
  • #3
Yeah, I've been searching but haven't found anything. I hoped the new Algorithms by Dasgupta, Papadimitriou, and Vazirani might have something, but no such luck. I don't have a copy of Knuth's book or that's where I would have started (old as it is).

I did find something online, but I don't think I trust it: "Surprisingly, there are several better [multiplication] algorithms, taking as few as [itex]n\log n[/itex] operations (although whether it's possible to solve this in [itex]O(n)[/itex] operations is a major open problem)."
http://www.cs.princeton.edu/courses/archive/spr06/cos522/lec1.pdf
What do you think? On one hand it supports what you remember reading; on the other, it's written for a basic audience and seems to misquote the complexity of (what I presume to be) Schönhage–Strassen multiplication.
 
  • #4
No, I don't believe the n log n claim either. I did find http://www.csc.liv.ac.uk/~ped/teachadmin/algor/complex.html which reports it as a conjecture, but even then I'm doubtful; I would have thought that an n log n algorithm would be straightforward enough to have been found by now. It looks like Schönhage–Strassen is still the optimum. The lack of a good lower bound isn't particularly surprising; proving such things is never easy.
 

1. What is computational complexity?

Computational complexity is a measure of the amount of time and resources required to solve a given problem using a computer. It is used to analyze and compare the efficiency of different algorithms.

2. How is computational complexity measured?

Computational complexity is typically measured in terms of time complexity and space complexity. Time complexity refers to the number of steps or operations required to solve a problem, while space complexity refers to the amount of memory or storage space needed.
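As a toy illustration of counting operations (my own example, not part of the original thread), schoolbook multiplication of two n-digit numbers performs one elementary digit-by-digit product per pair of digits, for n × n operations in total:

```python
# Count the elementary digit products in schoolbook multiplication.
# Digits are given least-significant first, e.g. [4, 3, 2, 1] = 1234.
def schoolbook_digit_products(a_digits, b_digits):
    count = 0
    total = 0
    for i, da in enumerate(a_digits):
        for j, db in enumerate(b_digits):
            total += da * db * 10 ** (i + j)  # place-value contribution
            count += 1                        # one elementary digit product
    return total, count

product, ops = schoolbook_digit_products([4, 3, 2, 1], [8, 7, 6, 5])
# Two 4-digit numbers -> 4 * 4 = 16 digit products; product = 1234 * 5678
```

This operation count, as a function of the input size n, is exactly the time complexity in the sense defined above.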

3. What is the difference between worst-case and average-case complexity?

Worst-case complexity refers to the maximum amount of time or resources required to solve a problem, regardless of the input. Average-case complexity, on the other hand, takes into account the distribution of different inputs and calculates the expected time or resources needed to solve the problem.
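A standard example of this distinction (mine, not from the thread) is linear search: the worst case scans all n elements, while the average over uniformly distributed targets needs about (n+1)/2 comparisons:

```python
# Linear search that also reports how many comparisons it made.
def linear_search(items, target):
    comparisons = 0
    for i, v in enumerate(items):
        comparisons += 1
        if v == target:
            return i, comparisons
    return -1, comparisons

n = 1000
items = list(range(n))
# Worst case: the target is the last element -> n comparisons.
_, worst = linear_search(items, n - 1)
# Average case: mean comparisons over all possible targets -> (n + 1) / 2.
avg = sum(linear_search(items, t)[1] for t in items) / n
```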

4. How is computational complexity used in real-world applications?

In real-world applications, computational complexity is used to determine the most efficient algorithms for solving a given problem. This can help improve the performance of software and systems, reducing the time and resources needed to complete a task.

5. Can computational complexity be reduced?

Yes, computational complexity can often be reduced by choosing a more efficient algorithm or by optimizing an existing one. However, some problems have proven lower bounds on their complexity, meaning no algorithm, known or unknown, can solve them faster than that bound.
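As a small illustration of reducing complexity by changing the algorithm (my own example, not from the thread), consider detecting a duplicate in a list, first by pairwise comparison and then with a hash set:

```python
# Naive pairwise comparison: O(n^2) time, O(1) extra space.
def has_duplicate_quadratic(items):
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

# Hash set of seen values: O(n) expected time, O(n) extra space.
def has_duplicate_linear(items):
    seen = set()
    for v in items:
        if v in seen:
            return True
        seen.add(v)
    return False
```

The two functions return the same answers, but the second trades extra memory for a much better time complexity, a typical example of the time/space tradeoff mentioned above.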
