
SSequence


> Can AI be trained to prove only interesting theorems? If yes, can it be trained to explain to us why they are interesting? As a specific example, I have in mind the Gödel incompleteness theorems.

Well, for one, many/most of the theorems of (ordinary) computation theory **[**as stated in the literature etc.**]** should, in principle, be provable in sufficiently powerful axiomatic systems.

I suspect that if one looks for results of a negative nature (no program can do this, etc.), then this can probably serve as the most basic "filter", a starting point for finding results similar to the ones you mentioned.
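As a toy illustration of such a filter (everything here is hypothetical: real theorem statements would be formal terms, not English strings, and the patterns are my own stand-ins), one could flag statements phrased as impossibility results:

```python
import re

# Hypothetical toy "negative result" filter: flag theorem statements
# phrased as impossibility results ("no program can...", "there is no
# algorithm that...", "... is undecidable"). A real system would work
# on formal statements, not English strings.
NEGATIVE_PATTERNS = [
    r"\bno (program|algorithm|Turing machine)\b",
    r"\bthere is no\b",
    r"\bundecidable\b",
    r"\bcannot be (computed|decided|proved)\b",
]

def looks_negative(statement: str) -> bool:
    """Return True if the statement matches any impossibility pattern."""
    return any(re.search(p, statement, re.IGNORECASE)
               for p in NEGATIVE_PATTERNS)

examples = [
    "The halting problem is undecidable.",
    "No program can decide whether an arbitrary program halts.",
    "Every even number greater than 2 is the sum of two primes.",
]
for s in examples:
    print(looks_negative(s), "-", s)
```

The first two examples are flagged, the third (a positive universal claim) is not; of course the whole point of the question is that such surface-level filters miss why a negative result is interesting.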

===========================

Of course, the question of what is "interesting" is in general much broader than this. As with most other questions of this type **[**including the one in the OP**]**, one can take two different viewpoints (a mechanical/aesthetic or a theoretical/practical distinction, depending on the question).

Here we have:

(i) a purely mechanical **[**"mechanical" isn't the best word here, but I don't know of an alternative**]** viewpoint;

(ii) an aesthetic viewpoint.

(i) If we take the mechanical viewpoint, we can say that nothing is really interesting or uninteresting in itself. It is just that, based on several factors (our lifespans, information-processing speed, physical limitations on movement, etc.), we take only those statements to be interesting which feel "short"/"elegant" enough to us.

(ii) The aesthetic viewpoint would not accept (i).
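Viewpoint (i) can be caricatured in a few lines (a toy sketch; the length cutoff and the use of description length as a proxy for our resource limits are my own hypothetical choices, not anything from the discussion above):

```python
# Toy sketch of the "mechanical" view: interestingness as a
# resource-bounded proxy. Description length stands in for the factors
# mentioned above (lifespan, processing speed, physical limits); a
# statement counts as "interesting" only if it is short enough to grasp.
def mechanically_interesting(statement: str, max_len: int = 80) -> bool:
    # max_len is an arbitrary stand-in for our cognitive limits
    return len(statement) <= max_len

print(mechanically_interesting("There are infinitely many primes."))  # True
print(mechanically_interesting("x" * 200))                            # False
```

On this view the cutoff is contingent, which is exactly what the aesthetic viewpoint (ii) refuses to accept.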