Difficulty of Project Euler problems 1-100; now versus then


Discussion Overview

The discussion revolves around the perceived difficulty of solving Project Euler problems from the first 100 published between 2001 and 2006, comparing the challenges faced by solvers then versus now. Participants explore factors such as advancements in hardware and software, particularly the use of Python and its libraries, in relation to problem-solving efficiency.

Discussion Character

  • Debate/contested
  • Technical explanation
  • Conceptual clarification

Main Points Raised

  • Some participants suggest that faster hardware, such as modern CPUs, significantly aids in solving problems quickly.
  • Others argue that the key to reducing computation time lies in understanding and implementing efficient algorithms rather than solely relying on hardware improvements.
  • One participant notes that they have successfully solved problems using only Python's standard library, implying that clever algorithm design can mitigate performance concerns.
  • Another participant expresses skepticism about the notion that solving problems is intrinsically easier now, questioning whether reliance on libraries like itertools diminishes the challenge.
  • There is a discussion about the relative speed of Python with libraries compared to lower-level languages like C and assembly, with some suggesting that ease of expression in Python is more beneficial than raw speed.

Areas of Agreement / Disagreement

Participants do not reach a consensus on whether solving Project Euler problems is easier now compared to the past. There are competing views regarding the impact of hardware and software advancements on problem difficulty, and the discussion remains unresolved.

Contextual Notes

Participants express varying assumptions about the role of algorithm efficiency versus hardware capabilities, and there is uncertainty regarding the comparative advantages of different programming languages and libraries.

Avatrin
Hi

I recently noticed that the first Project Euler problem was published in 2001! The first 100 problems were all published before 2006. That made me curious: how different is the difficulty now versus then? I can think of two factors that make things easier:

1. The hardware. The first 100 problems were meant to be solved on Pentium 4 CPUs. How much of a difference does having faster hardware make?

2. The software. I have been using Python to solve a lot of these problems. I generally try not to use NumPy, but I have done so. NumPy 1.0 was released in 2006. How much of a difference does this make?
 
Well, the site has an under-one-minute rule. I think the computations can still be done pretty fast, definitely with C. For Python I am not sure. The problems do not take much computation time if they are solved cleverly.
 
Arman777 said:
For Python I am not sure.

Every Euler problem that I have solved so far, I have solved with just Python's standard library, no third party libraries like NumPy. I have had no issues so far getting solution times well under 1 minute.

There are plenty of them that I have not solved, but I would not expect Python to be significantly slower for them. The key is always finding the right solution algorithm that avoids combinatorial explosion.
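A minimal sketch of the kind of algorithmic shortcut meant here (my illustration, not from the post), using the well-known first Project Euler problem, the sum of all multiples of 3 or 5 below 1000: a brute-force scan and a closed-form inclusion-exclusion answer agree, but the latter runs in constant time no matter how large the limit gets.

```python
def brute_force(limit):
    # Check every integer below the limit -- O(n).
    return sum(n for n in range(limit) if n % 3 == 0 or n % 5 == 0)

def sum_of_multiples(k, limit):
    # Arithmetic series: k + 2k + ... up to the largest multiple below limit.
    m = (limit - 1) // k
    return k * m * (m + 1) // 2

def closed_form(limit):
    # Inclusion-exclusion: multiples of 15 would otherwise be counted twice.
    return (sum_of_multiples(3, limit)
            + sum_of_multiples(5, limit)
            - sum_of_multiples(15, limit))

print(brute_force(1000), closed_form(1000))  # both print 233168
```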
 
Well, my assumption when I asked the question was actually the opposite: isn't it easier now to solve these problems?
PeterDonis said:
Every Euler problem that I have solved so far, I have solved with just Python's standard library, no third party libraries like NumPy. I have had no issues so far getting solution times well under 1 minute.
This doesn't change that; itertools is part of Python's standard library and, maybe even more than NumPy, would make solving Project Euler problems a lot easier.

Almost all of the Project Euler problems I have solved have been done with code that produces the solution in less than five seconds. I just wanted to know how this compares to the tools that were available back when those problems were published.
 
No, it is not intrinsically "easier", IMO. Why?

You are assuming that if we had a super-fast box, any old algorithm would work. That is not a good approach unless you are trying to sell management on upgrading existing hardware.

I think the main takeaway from Project Euler is that 90% or more of runtime reduction comes from knowing, creating, and understanding algorithms, not from parallelizing with CUDA on a high-end desktop Nvidia graphics card. If you work as a software engineer you have to fully understand the concept, not rely on brute force.
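A sketch of that point (my illustration, not from the post): summing the primes below a limit with per-number trial division is roughly O(n&#183;&#8730;n), while the sieve of Eratosthenes does the same job in about O(n log log n). The algorithm, not the hardware, is what makes the limit scale.

```python
def primes_trial(limit):
    # Test each candidate by dividing by every d with d*d <= n.
    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True
    return [n for n in range(limit) if is_prime(n)]

def primes_sieve(limit):
    # Sieve of Eratosthenes: cross off multiples of each prime once.
    sieve = [True] * limit
    sieve[:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [n for n, flag in enumerate(sieve) if flag]

# Same answers; very different growth rates as the limit increases.
assert primes_trial(100) == primes_sieve(100)
```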

Example:
Have you ever played modern computer games, e.g. esports titles? They have to run on all kinds of systems.

A deeper dive into sort algorithms:
https://www.geeksforgeeks.org/time-complexities-of-all-sorting-algorithms/

No one sort algorithm is universally better. You have to understand all kinds of requirements to make a good choice. Commercial sort libraries have that kind of "smarts" built in, picking the best available algorithm based on the input data.
 
jim mcnamara said:
No, it is not intrinsically "easier", IMO. Why?
I don't know. Sometimes, when I use itertools, it almost feels like I am cheating. I mean, the earliest code examples in the problem threads use assembly and C++, and I assume they didn't have access to something like itertools. I just wonder if my assumption is correct.

But, I guess you're right... Maybe I just need to up my algorithms repertoire so that I can rely on fewer libraries for future problems...
 
Avatrin said:
itertools is part of Python's standard library and, maybe even more than NumPy, would make solving Project Euler problems a lot easier.

A lot easier compared to what?

If you didn't have itertools in the Python standard library, you'd probably end up writing your own equivalents anyway. And if you were solving the problems in, say, C, you'd end up writing the C equivalent of itertools. But that's more because it would make it easier to express your algorithms in understandable code, than because of any speed advantage. The speed advantage of itertools over less idiomatic Python is tiny compared to the speed advantage of, say, using an O(log n) algorithm instead of an O(n) one.
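A sketch of that point (my illustration, not from the post): a minimal hand-rolled stand-in for `itertools.combinations` yields exactly the same tuples in the same order. Having it in the standard library saves you the typing, not any asymptotic cost.

```python
from itertools import combinations

def my_combinations(seq, r):
    # Yield r-element tuples in the same order as itertools.combinations.
    if r == 0:
        yield ()
        return
    for i in range(len(seq) - r + 1):
        for rest in my_combinations(seq[i + 1:], r - 1):
            yield (seq[i],) + rest

# Identical output to the library version.
assert list(my_combinations([1, 2, 3, 4], 2)) == list(combinations([1, 2, 3, 4], 2))
```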
 
Avatrin said:
Sometimes, when I use itertools, it almost feels like I am cheating. I mean, the earliest code examples in the problem threads use assembly and C++, and I assume they didn't have access to something like itertools.

Python with itertools is still going to be something like an order of magnitude slower than assembly and C++ running the same basic algorithm. The advantage of Python is ease of expression, not speed.
 