Space Complexity of Number-theoretic Algorithms

In summary: Yes, this is close to what you said. The logarithmic cost criterion is when we say that we need O(log N) units of space to store the information for a number N. This is equivalent to O(log N) bits.
  • #1
nworm
Can you write an example where a space complexity of any number-theoretic algorithm is calculated? Thanks In Advance.
 
  • #2
Have you tried Google? Search for 'euclidean algorithm' and 'complexity' and you'll get piles of examples where both the time and space complexity are discussed. I'd expect most textbooks covering complexity to have an example from number theory as well; the Euclidean algorithm is probably common there too, being a very 'classic' algorithm.
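For example, here is a minimal sketch of the iterative Euclidean algorithm (my own illustration, not taken from any particular textbook). Apart from the inputs, it keeps only two numbers alive at any time, so its space usage is just the bits needed to hold numbers up to N, i.e. O(log N) bits:

```python
def gcd(a, b):
    # Iterative Euclidean algorithm. At every step only the two
    # current remainders a and b are stored, each at most O(log N)
    # bits, so the total space complexity is O(log N) bits.
    while b:
        a, b = b, a % b
    return a
```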
 
  • #3
Yes, I have. But I have found only examples where the time complexity of number-theoretic algorithms is discussed.
 
  • #4
Try

http://pharos.cpsc.ucalgary.ca/Dienst/UI/2.0/Describe/ncstrl.ucalgary_cs/1989-376-38

(first hit from "space complexity" euclidean algorithm)

You might want to check Andrew Odlyzko's webpage

http://www.dtc.umn.edu/~odlyzko/doc/cnt.html

It's filled with number theory algorithms; the handful I've read included both time and space complexity discussions.
 
  • #5
Thank you. These are very interesting links.
 
  • #6
Dear shmoe,

I have read part of these links. Not long ago I read "A Course in Number Theory and Cryptography" by Neal Koblitz, and I used his model of bit operations to estimate the time complexity of my algorithm.

If you have read Koblitz's book, please tell me what you think about the following questions:
Is this model a RAM with unbounded space and word length?
Can I say that the space complexity of my algorithms is the number of bits used to store information?
Thanks in advance.
 
  • #7
I haven't read the book, but normally when we speak of time and space complexity, we do so in terms of the Turing machine model. I presume you are aware of this, so I won't delve further into it.

Now when one speaks of RAM model, one has to be very precise as to which one he/she is referring to.

There are two types of RAM models:
1. General unit-cost RAM model
A unit cost is incurred per instruction executed.
2. Log-cost RAM model
The cost incurred per instruction is proportional to the logarithm of the size of its operands.

The general unit-cost RAM model is much more powerful than the Turing machine; the Turing-machine-equivalent RAM model is the log-cost one. Hence, if you are using a RAM model, you should normally adhere to the log-cost RAM model. However, working with the unit-cost RAM model is simpler, so we use a workaround.

Workaround:
We make sure that all operands are polynomially bounded in the input size. For such a unit-cost RAM model there is an equivalent log-cost RAM model, and thereby an equivalent Turing machine model.

Most algorithms today (especially the number-theoretic algorithms you are interested in) commonly appeal to the unit-cost RAM model in the way I described above.
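As a rough illustration of the difference between the two cost criteria (the cost functions below are a toy model of my own, not any standard library), one can charge a single addition either one unit, or a cost proportional to the bit lengths of its operands:

```python
def log_cost(x):
    # Log-cost RAM charge for touching operand x:
    # proportional to its bit length (at least 1).
    return max(1, x.bit_length())

def add_cost(a, b):
    unit = 1                          # unit-cost RAM: any instruction costs 1
    logc = log_cost(a) + log_cost(b)  # log-cost RAM: pay per operand bit
    return unit, logc
```

On small operands the two models agree up to a constant, but on the big numbers of number theory the log-cost charge grows with the operand size while the unit cost stays 1, which is why the unit-cost model is only safe when operands are polynomially bounded.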

-- AI
 
  • #8
Thank you.
If I understand correctly, you mean "the uniform cost criterion" and "the logarithmic cost criterion" from

http://www.cse.ohio-state.edu/~gurari/theory-bk/theory-bk-fivese1.html

I deal with big numbers, so I think I should use the logarithmic cost criterion, i.e. O(log N) units of space for a number N (or O(log N) bits for a number N). Am I right?
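For example, a quick check of this relationship (the helper name below is my own): for N >= 1, the number of bits needed to store N is exactly floor(log2(N)) + 1, which Python exposes as int.bit_length():

```python
import math

def bits_to_store(N):
    # Exact number of bits needed to store N (N >= 1):
    # equals floor(log2(N)) + 1, i.e. O(log N).
    return N.bit_length()

for N in (5, 1000, 10**30):
    print(N, "needs", bits_to_store(N), "bits;",
          "floor(log2(N)) + 1 =", math.floor(math.log2(N)) + 1)
```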
 
  • #9
nworm said:
Thank you.
If I understand correctly, you mean "the uniform cost criterion" and "the logarithmic cost criterion" from

http://www.cse.ohio-state.edu/~gurari/theory-bk/theory-bk-fivese1.html
Yes.

I deal with big numbers, so I think I should use the logarithmic cost criterion, i.e. O(log N) units of space for a number N (or O(log N) bits for a number N). Am I right?
Sort of close, but not completely. Our computers are known to simulate Turing machines. As I said earlier, the unit-cost RAM model is far more powerful than the Turing machine: a unit-cost RAM can outperform our computers at certain things, and since we are looking at things our computers are capable of doing, we wouldn't want to work with the unit-cost RAM model.

-- AI
 

Related to Space Complexity of Number-theoretic Algorithms

1. What is space complexity and how is it different from time complexity?

Space complexity refers to the amount of memory or storage required by an algorithm to solve a problem. It is different from time complexity, which measures the amount of time taken by an algorithm to solve a problem.

2. How is space complexity calculated for number-theoretic algorithms?

The space complexity of a number-theoretic algorithm is typically calculated by determining the amount of memory required for data structures such as arrays, matrices, or other variables used in the algorithm. It can also be calculated by counting the number of variables declared and used during the execution of the algorithm.
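As a concrete illustration, here is a short sketch of the sieve of Eratosthenes (the function name is my own). Its space complexity is dominated by a single data structure, the boolean array of n + 1 entries, so the algorithm uses O(n) space:

```python
def sieve(n):
    # Dominant storage: the boolean array of n + 1 entries -> O(n) space.
    # All other variables (p, m) are single numbers of O(log n) bits.
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, n + 1, p):
                is_prime[m] = False
    return [p for p in range(n + 1) if is_prime[p]]
```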

3. What factors affect the space complexity of number-theoretic algorithms?

The space complexity of a number-theoretic algorithm can be affected by various factors such as the size and type of input data, the number and type of variables used, and the data structures and their sizes. Additionally, the space complexity can also be influenced by any auxiliary space used for temporary storage during the execution of the algorithm.
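One such factor is auxiliary stack space. As a sketch (my own illustration), the recursive and iterative forms of the Euclidean algorithm do the same arithmetic, but the recursive one holds O(log N) stack frames at its deepest point while the iterative one holds O(1):

```python
def gcd_recursive(a, b):
    # Same time complexity as the iterative version, but each pending
    # call keeps a stack frame alive: O(log N) frames of auxiliary space.
    return a if b == 0 else gcd_recursive(b, a % b)

def gcd_iterative(a, b):
    # O(1) frames; only the two current remainders are stored.
    while b:
        a, b = b, a % b
    return a
```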

4. How does the space complexity of number-theoretic algorithms impact their efficiency?

The space complexity of an algorithm can have a direct impact on its efficiency. Algorithms with lower space complexity generally require less memory and therefore can be executed faster and more efficiently. However, in some cases, the trade-off between space and time complexity may need to be considered, and a higher space complexity may result in a more efficient algorithm.
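A common form of this trade-off is precomputation. As a hypothetical sketch (the function name and the choice of modulus are assumptions for illustration), spending O(n) extra memory on a table of factorials modulo a prime makes every later factorial query a single O(1) lookup instead of an O(n) recomputation:

```python
MOD = 10**9 + 7  # an assumed prime modulus, chosen for illustration

def build_factorials(n, mod=MOD):
    # The O(n) table is the space cost of the trade-off...
    fact = [1] * (n + 1)
    for i in range(1, n + 1):
        fact[i] = fact[i - 1] * i % mod
    # ...and each later query fact[k] is a single O(1) lookup.
    return fact
```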

5. How can the space complexity of number-theoretic algorithms be optimized?

The space complexity of an algorithm can be optimized by using efficient data structures and algorithms, avoiding unnecessary variables and data storage, and minimizing the use of auxiliary space. Additionally, analyzing and understanding the problem at hand can also help in developing more efficient algorithms with lower space complexity.
