jobyts said:
It's different from art. In art, a computer cannot sort out a good picture from garbage. In the case of mathematical proofs, a computer can clearly rule out the invalid ones. Using it to derive all mathematical proofs would be nearly impossible, but for a given specific problem, a computer may be able to find a proof for it and rule out the invalid ones.
For theorems it's actually a lot like art. Most theorems are worthless and the computer won't be able to rule out garbage theorems.
If you already have a statement you want to find a proof for, then it could be of some use, but that use is limited for the following reasons:
1) If the statement is false, then it will keep searching and you won't get any information.
2) The theorem may be undecidable in your axiom system, in which case it will keep searching.
3) Even if your statement is true AND provable, you have no way to know when the search will stop.
Also, when you finally find a proof, it will not be in an easily understandable format (200 million logical symbols is not easily readable).
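To make points 1)–3) concrete, here is a minimal sketch of the kind of brute-force search being discussed. It's Python, and the proof checker is_valid_proof_of is purely hypothetical (nothing in this thread specifies one); the only point is the shape of the loop.

```python
from itertools import count, product

SYMBOLS = ["A", "B", "C", "D", "E"]  # stand-in alphabet of 5 symbols

def is_valid_proof_of(candidate, statement):
    """Hypothetical checker: does this symbol string encode a valid derivation
    of `statement` in the chosen axiom system? Assumed, not implemented."""
    raise NotImplementedError

def brute_force_prove(statement):
    # Enumerate every symbol string, shortest first.
    for length in count(1):
        for candidate in product(SYMBOLS, repeat=length):
            if is_valid_proof_of(candidate, statement):
                return candidate
    # If the statement is false or unprovable (points 1 and 2), the loops above
    # never terminate; and even when a proof exists, there is no way to predict
    # in advance how long the search will run (point 3).
```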
Of course it could be useful once in a while, but keep in mind that this is a hypothetical scenario and we likely won't have completely automated theorem provers capable of doing anything significant by brute force.
At least we could prove that there is no proof of, say, the Riemann hypothesis, within x steps, where x could be in the millions.
If there are 5 different symbols, there are 5^1000000 combinations of length 1 million, so that is roughly how many strings you would have to go through to cover proofs of that length. If the computer could go through 5^10000 combinations each nanosecond (5^10000 has ~7000 digits), it would still take well over 10^20000 years just to reach a proof of length 40000 symbols. Given all the short-cuts and cross-references mathematicians employ, I suspect that we already have many proofs much larger than 10 million symbols if they were written down explicitly.
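A quick back-of-the-envelope check of those numbers (Python, log arithmetic only; the 5-symbol alphabet and the 5^10000-per-nanosecond rate are the assumptions from the paragraph above):

```python
from math import log10

LOG5 = log10(5)                      # alphabet of 5 symbols

# Digits of 5^10000, the assumed per-nanosecond search rate:
digits = int(10000 * LOG5) + 1       # ~6990, i.e. roughly 7000 digits

# Nanoseconds to sweep every string of length 40000 at that rate:
# 5^40000 / 5^10000 = 5^30000 ns.
log_ns = 30000 * LOG5                # ~20969, i.e. about 10^20969 ns
log_years = log_ns - log10(1e9 * 3600 * 24 * 365)   # convert ns to years

print(f"5^10000 has about {digits} digits")
print(f"sweeping length-40000 strings takes about 10^{log_years:.0f} years")
```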
The world's fastest supercomputer can perform almost 2*10^15 floating-point operations a second (stuff like additions and multiplications). Suppose we improved on that 5-billion-fold, giving us a computer performing 10^25 floating-point operations a second. Now if somehow each floating-point operation could construct an entire combination (in reality we'd probably need at least a couple of hundred per symbol), we can construct 10^25 combinations per second. Now suppose we collect 10 billion of these computers, giving us a total power of 10^35 combinations per second. Now let these run for 10 billion years. This gives us about 10^52 combinations, or in other words all proofs of length up to 74 symbols. If we instead let it run for 10^100 years, we would get up to 203 symbols.
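The same kind of sanity check for the fleet-of-supercomputers scenario, counting every string of length 1, 2, ... until the combination budget runs out (the 10^35-per-second rate is the figure assumed above):

```python
RATE = 1e35                        # combinations per second (10^25 * 10^10 machines)
SECONDS_PER_YEAR = 3600 * 24 * 365
ALPHABET = 5

def max_proof_length(years):
    """Largest length L such that all strings of length 1..L fit in the budget."""
    budget = RATE * SECONDS_PER_YEAR * years   # total combinations examined
    total, length = 0, 0
    while total + ALPHABET ** (length + 1) <= budget:
        length += 1
        total += ALPHABET ** length
    return length

print(max_proof_length(1e10))    # 74  symbols after 10 billion years
print(max_proof_length(1e100))   # 203 symbols after 10^100 years
```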
Asymptotically the problem just doesn't scale. Increasing the proof length by 1 symbol multiplies the search time by k, where k is the number of distinct symbols in the alphabet (so going from 10 symbols to 20 symbols multiplies the time by k^10). For this to work you would need a computer that improved itself at a pretty steep exponential rate. In fact it's even worse in reality, as longer combinations clearly take longer to compose. Of course in a thought experiment such a machine may exist, but in reality this doesn't work.
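And the per-symbol blow-up itself, in a few lines of arithmetic (k = 5 as in the example above):

```python
k = 5                 # distinct symbols in the alphabet
print(k)              # 1 extra symbol in the proof  -> 5x more candidates
print(k ** 10)        # 10 extra symbols             -> 9765625x (~10^7) more
print(k ** 100)       # 100 extra symbols            -> ~8*10^69 more
```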