Is this new multiplication method the key to faster quantum computers?

  • Context: High School 
  • Thread starter: jedishrfu
  • Tags: Multiplication
SUMMARY

The recent forum discussion centers on a new multiplication method proposed for quantum computing, which theoretically offers improved speed for multiplying extremely large numbers, specifically those with more than roughly ##10^{214857091104455251940635045059417341952}## digits in binary. However, practical application is limited because no current hardware can store, let alone compute with, such vast numbers. The discussion critiques the method's relevance for everyday calculations and highlights existing algorithms like Karatsuba and Schönhage-Strassen, which are more efficient for numbers within computable limits. The new method's time complexity is ##O(n \log n)##, but its practical utility remains questionable.

PREREQUISITES
  • Understanding of quantum computing principles
  • Familiarity with time complexity analysis in algorithms
  • Knowledge of existing multiplication algorithms such as Karatsuba and Schönhage-Strassen
  • Basic concepts of binary number representation
NEXT STEPS
  • Research the implications of the new multiplication method on quantum computing performance
  • Study the time complexity of existing multiplication algorithms in detail
  • Explore the concept of prime factorization in number representation
  • Investigate the practical limitations of current computing hardware for large number calculations
USEFUL FOR

Mathematicians, computer scientists, quantum computing researchers, and anyone interested in advanced algorithms for large number multiplication.

I was disappointed to find that it works only for incredibly large numbers, not for the range of numbers that grade-school kids suffer with, so we are left with multiplication tables or the cool Trachtenberg method, which never caught on despite its promise.
 
Quote from the article:
In the new study, the researchers considered only numbers with more than roughly ##10^{214857091104455251940635045059417341952}## digits when written in binary, in which numbers are encoded with a sequence of 0s and 1s. But the scientists didn’t actually perform any of these massive multiplications, because that’s vastly more digits than the number of atoms in the universe. That means there’s no way to do calculations like that on a computer, because there aren’t enough atoms to even represent such huge numbers, much less multiply them together. Instead, the mathematicians came up with a technique that they could prove theoretically would be speedier than other methods, at least for these large quantities.
I don't get the reasoning on why the multiplication couldn't be done. So what if there aren't that many atoms in the universe? We can already do calculations with numbers much larger than the number of atoms in the universe, and get the right answer.

Typical (insert your favorite pejorative) "journalist," which is synonymous with "can't do any math past arithmetic, and even has trouble with fractions and percents, at that."
 
Mark44 said:
Typical (insert your favorite pejorative) "journalist," which is synonymous with "can't do any math past arithmetic, and even has trouble with fractions and percents, at that."
I like the "roughly" in front of this number.
 
  • Haha
Likes: FactChecker
Mark44 said:
We can already do calculations with numbers much larger than the number of atoms in the universe, and get the right answer.
Numbers whose decimal representations have far more digits than there are atoms in the universe are more challenging.
 
jbriggs444 said:
Numbers whose decimal representations have far more digits than there are atoms in the universe are more challenging.
Maybe we can convince everyone to start representing numbers as their prime factorizations. Then multiplication would have a time complexity of just ##O(n)## (because it’s basically string concatenation) and, thanks to the prime counting function being logarithmic, we could represent much larger numbers efficiently. Though it might be difficult to remember my wife’s ##p_1^3p_3## rd birthday...
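
As a toy illustration of that "multiplication is just adding exponents" idea, here is a minimal Python sketch (my own example, not from the thread; the {prime: exponent} maps and the multiply_factored name are purely illustrative):

```python
from collections import Counter

def multiply_factored(a, b):
    """Multiply two numbers given as {prime: exponent} maps.

    The product's factorization is the term-wise sum of the exponents,
    so the work is linear in the number of distinct primes involved.
    """
    return Counter(a) + Counter(b)

# 12 = 2^2 * 3 and 45 = 3^2 * 5, so 12 * 45 = 2^2 * 3^3 * 5 = 540
print(multiply_factored({2: 2, 3: 1}, {3: 2, 5: 1}))  # Counter({3: 3, 2: 2, 5: 1})
```

Getting a number into that representation in the first place is, of course, the expensive part.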
 
  • Like
Likes: jedishrfu
TeethWhitener said:
Maybe we can convince everyone to start representing numbers as their prime factorizations. Then multiplication would have a time complexity of just ##O(n)## (because it’s basically string concatenation) and, thanks to the prime counting function being logarithmic, we could represent much larger numbers efficiently. Though it might be difficult to remember my wife’s ##p_1^3p_3## rd birthday...

We could have an app for that and save your marriage.
 
  • Haha
Likes: TeethWhitener
Mark44 said:
I don't get the reasoning on why the multiplication couldn't be done. So what if there aren't that many atoms in the universe? We can already do calculations with numbers much larger than the number of atoms in the universe, and get the right answer.
Wrong power.
We can multiply numbers larger than ##10^{80}##, numbers with 80 (decimal) digits - easily. But we cannot (in general) store numbers of the order of ##10^{10^{80}}##, with ##10^{80}## digits. In some special cases you can, of course. As an example, ##10^{{10}^{80}} \cdot 10^{{10}^{80}} = 10^{2\cdot 10^{80}}##, but what the article is about is a multiplication scheme that works for all numbers. In all cases where we can store the numbers on computers in this universe, the new multiplication method is not faster.
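
A small Python sketch of that distinction (my own illustration; the storage estimate just uses ##\log_2 10 / 8 \approx 0.42## bytes per decimal digit):

```python
import math

# Values around 10^80 (more than the number of atoms in the universe)
# are only a few hundred bits, so multiplying them is trivial.
a = 10 ** 80 + 12345
print(a * a)  # exact result, essentially instant

# A number with ~10^80 decimal *digits* is a different story:
digits = 10 ** 80
bytes_needed = digits * math.log2(10) / 8
print(f"storage needed: {bytes_needed:.2e} bytes")  # ~4.2e79 bytes
```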
 
  • Like
Likes: dRic2
Thank you for this thread, as the topic had eluded PhysOrg news.

Per a wry comment on another site, it might offer a nut-cracker for previously intractable 'integer' problems, such as the arcane extremities of Diophantine stuff...
 
  • #10
I'll make a table for multiplying ##n##-digit numbers:
  • Explicit: ##O(n^2)##
  • This new one: ##O(n \log n)##
  • Karatsuba: ##O(n^{\log 3 / \log 2}) \approx O(n^{1.58})##
  • Toom-Cook (Toom-3): ##O(n^{\log 5 / \log 3}) \approx O(n^{1.46})##
  • Schönhage-Strassen: ##O(n \log n \cdot \log \log n)##
  • Fürer: ##O(n \log n \cdot 2^{O(\log^* n)})##
Here, ##\log^* n## is the "iterated logarithm", how many times one has to take a logarithm before the result becomes less than 1.

These formulas omit overall multiplicative constants, and those constants tend to be larger for the asymptotically faster algorithms.
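
To make the Karatsuba entry concrete, here is a minimal, unoptimized Python sketch (my own illustration, not code from the paper or this thread). It replaces four half-size products with three, which is where the ##O(n^{\log 3 / \log 2})## bound comes from.

```python
def karatsuba(x, y):
    """Multiply non-negative integers with three recursive half-size
    products instead of four, giving ~O(n^1.58) digit operations."""
    if x < 10 or y < 10:                     # base case: a single-digit factor
        return x * y
    m = max(len(str(x)), len(str(y))) // 2   # split position, in decimal digits
    hi_x, lo_x = divmod(x, 10 ** m)
    hi_y, lo_y = divmod(y, 10 ** m)
    z0 = karatsuba(lo_x, lo_y)               # low * low
    z2 = karatsuba(hi_x, hi_y)               # high * high
    z1 = karatsuba(lo_x + hi_x, lo_y + hi_y) - z0 - z2  # cross terms, one product
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0

assert karatsuba(123456789, 987654321) == 123456789 * 987654321
```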
 
  • Like
Likes: Greg Bernhardt and fresh_42
  • #11
Here I will estimate the effect of a large overall constant. Consider Karatsuba or Toom-Cook, with ##K n^c##. The crossover point for ordinary multiplication is at ##K n^c \approx n^2##, with solution ##n_{co} \approx K^{1/(2-c)}##. For ##c \approx 1.5## and ##K \approx 10##, ##n_{co} \approx 100##.

For the best asymptotic case, ##K n \log n \approx n^2##, with ##n_{co} \approx K \log K##. So in this most recent paper, ##K## must be very large.
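
A back-of-the-envelope way to play with these crossover estimates numerically (my own sketch under the same idealized cost models; the ##K = 10^6## constant for the ##n \log n## case is made up for illustration, not a value from the paper):

```python
import math

def crossover(cost_fast, cost_school=lambda n: n ** 2, n_max=10 ** 9):
    """Smallest power-of-two n at which the 'fast' cost model beats n^2.
    This compares idealized operation counts only, not real benchmarks."""
    n = 2
    while n <= n_max:
        if cost_fast(n) < cost_school(n):
            return n
        n *= 2
    return None

K, c = 10, 1.5
print(crossover(lambda n: K * n ** c))             # ~128, near K^(1/(2-c)) = 100
print(crossover(lambda n: 1e6 * n * math.log(n)))  # ~1.7e7 for an n log n method with a huge constant
```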

I used the Wikipedia articles as sources for all but the first two algorithms.
 
