Difficulty of Project Euler problems 1-100; now versus then

AI Thread Summary
The discussion centers on how solving Project Euler problems has changed since their publication began in 2001, focusing on the impact of hardware and software advances. Participants note that modern hardware, such as faster CPUs, speeds up computation, and that Python's standard library, especially itertools, simplifies coding solutions without relying on third-party libraries like NumPy. The consensus, however, is that while these tools make coding easier, they do not change the intrinsic difficulty of the problems: efficient problem-solving rests on understanding algorithms, and the essence of the challenges remains algorithmic complexity rather than raw computational power.
Avatrin
Hi

I recently noticed that the first Project Euler problem was published in 2001! The first 100 problems were all published before 2006. That made me curious: how different is the difficulty now versus then? I can think of two factors that make things easier:

1. The hardware. The first 100 problems were meant to be solved on Pentium 4 CPUs. How much of a difference does having faster hardware make?

2. The software. I have been using Python to solve a lot of these problems. I generally try not to use NumPy, but I have done so at times. NumPy 1.0 was released in 2006. How much of a difference does this make?
 
Arman777
Well, the site has an under-1-minute rule. I think the computations can still be done pretty fast, definitely with C. For Python I am not sure. The problems do not take much computation time if they are solved cleverly.
 
Arman777 said:
For Python I am not sure.

PeterDonis
Every Euler problem that I have solved so far, I have solved with just Python's standard library, no third party libraries like NumPy. I have had no issues so far getting solution times well under 1 minute.

There are plenty of them that I have not solved, but I would not expect Python to be significantly slower for them. The key is always finding the right solution algorithm that avoids combinatorial explosion.
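
As a minimal sketch of what I mean, here is Problem 10 (the sum of all primes below two million) with only the standard library. A Sieve of Eratosthenes finishes in well under a second, while trial-dividing every candidate would be far slower:

Code:
# Illustrative sketch: Project Euler Problem 10, standard library only.
# A Sieve of Eratosthenes marks composites in bulk instead of testing
# each candidate for primality.
def sum_primes_below(limit):
    sieve = bytearray([1]) * limit           # sieve[n] == 1: n may be prime
    sieve[0] = sieve[1] = 0
    for n in range(2, int(limit ** 0.5) + 1):
        if sieve[n]:
            # Zero out every multiple of n from n*n upward.
            sieve[n * n::n] = bytearray(len(range(n * n, limit, n)))
    return sum(i for i, is_prime in enumerate(sieve) if is_prime)

print(sum_primes_below(2_000_000))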
 
Avatrin
Well, my assumption when I asked the question was actually the opposite: isn't it easier now to solve these problems?
PeterDonis said:
Every Euler problem that I have solved so far, I have solved with just Python's standard library, no third party libraries like NumPy. I have had no issues so far getting solution times well under 1 minute.
This doesn't change that; itertools is part of Python's standard library and, maybe even more than NumPy, would make solving Project Euler problems a lot easier.

Almost all of the Project Euler problems I have solved have been done with code producing the solution in less than five seconds. I just wanted to know how this compares to the tools that were available back when those problems were published.
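
To illustrate the kind of shortcut I mean, take Problem 24, which asks for the millionth lexicographic permutation of the digits 0 through 9. Since permutations() of a sorted sequence yields results in lexicographic order, itertools reduces the whole problem to a couple of lines:

Code:
from itertools import islice, permutations

# Illustrative sketch: Project Euler Problem 24 via itertools.
# permutations() of a sorted input yields tuples in lexicographic order,
# so the millionth permutation is simply the element at index 999,999.
millionth = next(islice(permutations('0123456789'), 999_999, None))
print(''.join(millionth))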
 
jim mcnamara
No, it is not intrinsically "easier", IMO. Why?

You are assuming that if we had a super-fast box, any old algorithm would work. That is not a good assumption unless you are trying to sell management on upgrading existing hardware.

I think the main takeaway from Project Euler is that 90% or more of run-time reduction comes from knowing, creating, and understanding algorithms, not from parallelizing with CUDA on a high-end desktop Nvidia graphics card. If you work as a software engineer, you have to fully understand the concept, not rely on brute force.

Example:
Have you ever played modern computer games, e.g. esports titles? They have to run on all kinds of systems.

A deeper dive into sort algorithms:
https://www.geeksforgeeks.org/time-complexities-of-all-sorting-algorithms/

No single sort algorithm is universally better. You have to understand all kinds of requirements to make a good choice. Commercial sort libraries have that kind of "smarts" built in, picking the best available algorithm based on the input data.
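
As a minimal sketch of that point, take Problem 1 (the sum of all multiples of 3 or 5 below a limit). The closed-form version below makes hardware speed irrelevant; no CPU upgrade makes the brute-force loop viable at large limits:

Code:
# Illustrative sketch: algorithm choice vs. hardware, via Problem 1.
def brute_force(limit):
    # O(limit) scan: fine for 1000, hopeless for very large limits.
    return sum(n for n in range(limit) if n % 3 == 0 or n % 5 == 0)

def sum_of_multiples(k, limit):
    # Sum of multiples of k below limit, via the arithmetic-series
    # formula: k * (1 + 2 + ... + m) where m = (limit - 1) // k.
    m = (limit - 1) // k
    return k * m * (m + 1) // 2

def closed_form(limit):
    # O(1) inclusion-exclusion: multiples of 15 would be counted twice.
    return (sum_of_multiples(3, limit) + sum_of_multiples(5, limit)
            - sum_of_multiples(15, limit))

assert brute_force(1000) == closed_form(1000)
print(closed_form(10 ** 18))  # instant, at a limit no brute force could touch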
 
Avatrin
jim mcnamara said:
No, it is not intrinsically "easier", IMO. Why?
I don't know. Sometimes, when I use itertools, it almost feels like I am cheating. I mean, the earliest code examples in the problem threads use assembly and C++, and I assume their authors didn't have access to something like itertools. I just wonder if my assumption is correct.

But I guess you're right... Maybe I just need to expand my algorithm repertoire so that I can rely on fewer libraries for future problems...
 
Avatrin said:
itertools is part of Python's standard library and, maybe even more than NumPy, would make solving Project Euler problems a lot easier.

A lot easier compared to what?

If you didn't have itertools in the Python standard library, you'd probably end up writing your own equivalents anyway. And if you were solving the problems in, say, C, you'd end up writing the C equivalent of itertools. But that's more because it would make it easier to express your algorithms in understandable code than because of any speed advantage. The speed advantage of itertools over less idiomatic Python is tiny compared to the speed advantage of, say, using an O(log n) algorithm instead of an O(n) one.
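
For instance, if you had to hand-roll a stand-in for itertools.combinations, it might look something like this (illustrative only; the real itertools version is implemented in C and is faster):

Code:
# Illustrative sketch: a hand-rolled equivalent of itertools.combinations.
def combinations(iterable, r):
    # Yield r-length tuples in the same order as itertools.combinations.
    pool = tuple(iterable)
    if r == 0:
        yield ()
        return
    for i in range(len(pool) - r + 1):
        for tail in combinations(pool[i + 1:], r - 1):
            yield (pool[i],) + tail

print(list(combinations('ABCD', 2)))
# [('A', 'B'), ('A', 'C'), ('A', 'D'), ('B', 'C'), ('B', 'D'), ('C', 'D')]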
 
Avatrin said:
Sometimes, when I use itertools, it almost feels like I am cheating. I mean, the earliest code examples in the problem threads use assembly and C++, and I assume their authors didn't have access to something like itertools.

Python with itertools is still going to be something like an order of magnitude slower than assembly and C++ running the same basic algorithm. The advantage of Python is ease of expression, not speed.
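
One rough way to see that overhead from inside Python itself is a micro-benchmark (a sketch only; exact ratios vary by machine and interpreter version) timing an interpreted loop against the C-implemented builtin doing the same work:

Code:
import timeit

def python_loop(n):
    # The same summation as sum(range(n)), written as an interpreted loop.
    total = 0
    for i in range(n):
        total += i
    return total

n = 1_000_000
print("interpreted loop:", timeit.timeit(lambda: python_loop(n), number=10))
print("builtin sum:     ", timeit.timeit(lambda: sum(range(n)), number=10))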
 