Spathi

It seems to me that all mathematics, to one degree or another, is devoted to solving inverse problems. Let's take five equations as an example:

1) ax+b=0

2) ax^2+bx+c=0

3) ax^3+bx^2+cx+d=0

4) ax^4+bx^3+cx^2+dx+e=0

5) ax^5+bx^4+cx^3+dx^2+ex+f=0

To solve the first problem, a single division is enough. The second is solved via the discriminant, the third by Cardano's formula, the fourth by Ferrari's formula, and the fifth is, in general, “not solved”: more precisely, it has been proven (the Abel–Ruffini theorem) that no general solution in radicals exists.
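For the low degrees, these closed-form solutions are tiny programs. A minimal sketch in Python (the function names are my own; `cmath.sqrt` is used so that a negative discriminant still yields the two complex roots):

```python
import cmath

def solve_linear(a, b):
    # Degree 1: a single division solves ax + b = 0.
    return [-b / a]

def solve_quadratic(a, b, c):
    # Degree 2: via the discriminant; cmath.sqrt keeps complex roots.
    d = cmath.sqrt(b * b - 4 * a * c)
    return [(-b + d) / (2 * a), (-b - d) / (2 * a)]

print(solve_linear(2, -6))        # [3.0]
print(solve_quadratic(1, -5, 6))  # roots 3 and 2
```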

In fact, all five problems are inverse problems for the simple evaluation of an nth-degree polynomial. That is, if we write, for example, y=ax^2+bx+c, then finding y from a known x is the direct problem, and finding x from a known y is the inverse problem.

Differences between direct problems and inverse ones:

1) The direct problem can be solved for any x, while the inverse one cannot be solved for every y;

2) The solution to the direct problem is unique, while there can be several solutions to the inverse problem;

3) The complexity (time) and reliability of solving the direct problem are practically independent of the value of x or of the initial approximation;

4) The solution of the inverse problem requires much more computational resources than the solution of the direct problem;

5) There are simple and universal methods for solving the inverse problem via the direct one, such as brute-force enumeration or gradient descent (although they are usually far from efficient in terms of the resources required).
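The points above can be illustrated with a toy inverse solver. In this sketch (the search range, step, and tolerance are my own arbitrary choices), y = x^2 is inverted by simple enumeration over the direct problem: it finds both solutions for y = 4, finds none for y = -1, and costs thousands of evaluations where the direct problem costs one multiplication:

```python
def f(x):
    # Direct problem: evaluating y = x^2 is one multiplication,
    # works for every x, and always has exactly one answer.
    return x * x

def invert(y, lo=-10.0, hi=10.0, step=1e-3, tol=1e-3):
    # Inverse problem by simple enumeration over the direct problem:
    # universal but expensive, and may return zero or several answers.
    roots, x = [], lo
    while x <= hi:
        if abs(f(x) - y) < tol and (not roots or x - roots[-1] > 10 * step):
            roots.append(round(x, 3))
        x += step
    return roots

print(invert(4))   # two solutions, near -2 and 2
print(invert(-1))  # [] - no real x has x^2 = -1
```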

Now I propose the following idea: take the five equations above and determine their complexity. The complexity of solving a problem can be measured quantitatively as the amount of computer time spent solving it with the best known algorithm. I strongly suspect that for the first four problems the complexity will follow some simple dependence on the degree (for example, an exponential one), while for the fifth the complexity will clearly fall outside that dependence. From this one could conclude that mathematics has an open problem here: finding an analytical solution to the fifth-degree equation.
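As a rough sketch of how such a measurement could be set up (this is only an illustration, not the “best algorithm” for either degree, and the bracketing interval for the quintic is an assumption), one can time a closed-form degree-1 solve against a numeric bisection fallback for degree 5:

```python
import time

def time_it(solve, args, repeats=1000):
    # The proposed complexity measure: average wall-clock time per solve.
    t0 = time.perf_counter()
    for _ in range(repeats):
        solve(*args)
    return (time.perf_counter() - t0) / repeats

def solve_deg1(a, b):
    # Degree 1: closed form, one division.
    return -b / a

def solve_deg5(coeffs, lo=-10.0, hi=10.0, iters=60):
    # Degree 5: no closed form, so fall back to bisection
    # (assumes a sign change of the polynomial on [lo, hi]).
    def p(x):
        return sum(c * x ** k for k, c in enumerate(reversed(coeffs)))
    for _ in range(iters):
        mid = (lo + hi) / 2
        if p(lo) * p(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

print(time_it(solve_deg1, (2.0, -6.0)))
print(time_it(solve_deg5, ([1, 0, 0, 0, 0, -32],), repeats=100))
```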

If I'm not mistaken, it is believed that decomposing a number into its prime factors requires far more computational resources than multiplying those factors (although formally this is only a conjecture). To what extent is the notion of “computational resource cost” formalized in this conjecture?
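The asymmetry is easy to feel even with naive code. A sketch (trial division, which is far from the best factoring algorithm): multiplying two primes is a handful of machine operations, while recovering them takes on the order of the square root of n divisions:

```python
def multiply(factors):
    # Direct problem: a handful of machine multiplications.
    n = 1
    for f in factors:
        n *= f
    return n

def factorize(n):
    # Inverse problem by trial division: roughly sqrt(n) steps,
    # i.e. exponential in the bit length of n.
    out, d = [], 2
    while d * d <= n:
        while n % d == 0:
            out.append(d)
            n //= d
        d += 1
    if n > 1:
        out.append(n)
    return out

print(factorize(multiply([101, 103])))  # [101, 103]
```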

In programming, all these terms, as I understand it, are also quite applicable. A clear example is neural networks: running a neural network (inference) is the solution of a direct problem, while training it is the solution of an inverse one.
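A one-parameter “network” makes the contrast concrete. In this sketch (the learning rate and epoch count are arbitrary choices of mine), the forward pass y = w·x is one multiplication, while training must run gradient descent over the data many times to recover w:

```python
def forward(w, x):
    # Direct problem (inference): one multiplication.
    return w * x

def train(xs, ys, lr=0.01, epochs=500):
    # Inverse problem (training): gradient descent on mean squared error,
    # hundreds of passes over the data to recover the weight.
    w, n = 0.0, len(xs)
    for _ in range(epochs):
        grad = sum(2 * (forward(w, x) - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad
    return w

xs, ys = [1.0, 2.0, 3.0], [3.0, 6.0, 9.0]  # data generated with w = 3
print(train(xs, ys))  # close to 3.0
```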

In my opinion, all of mathematics and all of computer science comes down, in one way or another, to optimal ways of solving inverse problems. And here I suggest one more idea for programmers. I have come to the conclusion that when programming complex algorithms it is useful to use “heavy assertions”: assertions that check everything at once.

These assertions are quite easy to implement, but they greatly slow down the program, so they should be disabled not only in the final build but almost always, except at the moments when you need to check a critical section of the algorithm. With heavy assertions, bugs should be easy to find. This also fits my topic: an assertion is an auxiliary solution of the direct problem that lets you check how correctly the inverse problem was solved.
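A sketch of the idea in Python (the Newton-based solver and the `HEAVY_ASSERTS` flag are my own illustration, not an established library): the heavy assertion re-runs the cheap direct problem, substituting the computed root back into the polynomial, and is guarded by a flag so it can stay off in normal runs:

```python
HEAVY_ASSERTS = True  # flip to False except when auditing a critical section

def poly(coeffs, x):
    # Direct problem: cheap, reliable polynomial evaluation (Horner).
    y = 0.0
    for c in coeffs:
        y = y * x + c
    return y

def newton_root(coeffs, x0, iters=50):
    # Inverse problem: one real root via Newton's method.
    deriv = [c * k for c, k in zip(coeffs[:-1], range(len(coeffs) - 1, 0, -1))]
    x = x0
    for _ in range(iters):
        x -= poly(coeffs, x) / poly(deriv, x)
    # Heavy assertion: verify the inverse-problem answer by re-running
    # the direct problem - substitute the root back into the polynomial.
    assert not HEAVY_ASSERTS or abs(poly(coeffs, x)) < 1e-6, "bad root"
    return x

print(newton_root([1, 0, 0, 0, 0, -32], 1.0))  # x^5 = 32 -> about 2.0
```

Python itself supports the “disabled almost always” workflow: running the interpreter with `-O` strips plain `assert` statements entirely.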
