Do you mean that,
TimeSkip said:
Namely a precise number of steps to call a proof minimal (qualitatively less 'complex'), or where a necessarily minimal number of steps has been taken in the proof of the theorem.
Can you conceptualize 'complexity', or at least quantify it with regard to proofs, if this is still hard to understand?
Maybe the question is the same as asking whether a function ##f_L : T \longrightarrow \mathbb{N}##, defined as ##f_L(t)=\min_{p\in P_{t,L}}|p|##, is computable, where ##T## is the set of all provable theorems, ##L## is the formal language of the axiomatic system, over a given alphabet ##A##, in which the theorems and proofs are expressed, and ##P_{t,L}## is the set of proofs of theorem ##t## in language ##L##.
I think the answer is yes, granted that you have ##T##, the set of provable theorems, because you could compute this function by simply testing all possible proofs one by one, from smallest to largest, checking whether each one proves the theorem, and stopping when one does. There is one catch: we don't know what the domain is, because we can't in general tell whether a given statement is provable at all. But say you had a proof of a theorem and wanted to know whether it was the shortest possible one: just test every shorter string in the language and see whether any of them is also a proof of the same theorem.
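To make the brute-force idea concrete, here is a minimal Python sketch. The function name and signature are my own; `is_proof_of(candidate, theorem)` stands in for a decidable proof checker of the formal system (checking a given proof is decidable even when provability is not), and the search terminates only if the theorem really is provable, mirroring the catch above.

```python
from itertools import product

def minimal_proof_length(theorem, alphabet, is_proof_of):
    """Shortest-proof search by brute force.

    `is_proof_of` is an assumed decidable proof checker for the
    system. Enumerates every string over the alphabet, shortest
    first, and returns the length of the first one that proves
    `theorem`. Loops forever if `theorem` is not provable.
    """
    length = 1
    while True:
        for symbols in product(alphabet, repeat=length):
            if is_proof_of("".join(symbols), theorem):
                return length
        length += 1
```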
The other question you seem to be asking is whether the size of the alphabet matters. Say you had the provable theorems ordered according to the size of their minimal proofs in a given language. Would the order change if the language changed? It could, through different syntax and a larger or smaller alphabet. Yes, I think a theorem may have a smaller or larger minimal proof relative to another theorem in a different language.
One thing that might be neglected, with a finite alphabet, is the need to encode each symbol. A Turing machine can only handle a finite number of symbols, and we pretend that we can read and write symbols with atomic operations. Maybe in a real physical system that is not true, because we need to decipher each symbol, and if you have many different ones, there will be more work to be done deciphering and distinguishing them. With modern computers, the alphabet is actually 0s and 1s, and characters are represented as strings of 0s and 1s that encode them. We usually neglect the cost of encoding characters when they are finite in number, since it adds only a constant amount of overhead: it is still ##O(1)## to read and write, say, an ASCII character when modelling a language with a larger alphabet than just 0s and 1s.
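As an illustration of that constant-overhead point, here is a small sketch (the function and setup are mine, just for illustration) that builds a fixed-width binary code for a finite alphabet. The codeword width depends only on the alphabet size, which is a constant, so each read or write stays ##O(1)##.

```python
from math import ceil, log2

def fixed_width_codec(alphabet):
    """Fixed-width binary encoding of a finite alphabet.

    With |alphabet| symbols, each codeword is ceil(log2 |alphabet|)
    bits, a constant, so handling one symbol costs O(1).
    """
    width = max(1, ceil(log2(len(alphabet))))
    encode = {s: format(i, f"0{width}b") for i, s in enumerate(alphabet)}
    decode = {code: s for s, code in encode.items()}
    return encode, decode, width

# Example: an 8-symbol alphabet always needs exactly 3 bits per symbol.
enc, dec, w = fixed_width_codec(list("abcdefgh"))
assert w == 3 and dec[enc["c"]] == "c"
```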
However, if we want to model a formal language with an infinite alphabet, I don't think you could make the simplifying assumption that each symbol is read or written with an atomic operation, at least not with a Turing machine. What you would have instead of a finite alphabet is something like the set of all natural numbers, and the complexity of reading or writing one of them would be a function of the length ##n## of its encoding. So reading and writing symbols would be ##O(n)##.
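By contrast, here is a sketch of what reading one symbol from such an infinite alphabet might look like: natural numbers written in binary, with a separator '#' as an assumed delimiter between codewords. The machine has to scan every bit of the codeword, so the cost grows with the encoding length ##n## instead of being constant.

```python
def read_symbol(tape, pos):
    """Read one variable-length symbol from `tape` starting at `pos`.

    Symbols are natural numbers in binary; '#' is an assumed
    end-of-codeword marker. Scanning every bit makes the cost
    O(n) in the encoding length n, not O(1).
    """
    bits = []
    while pos < len(tape) and tape[pos] != "#":
        bits.append(tape[pos])
        pos += 1
    return int("".join(bits), 2), pos + 1  # (symbol value, next cell)

# A tape holding the symbols 5, 2, 13 from the alphabet of naturals.
tape = "101#10#1101#"
value, nxt = read_symbol(tape, 0)
assert value == 5 and nxt == 4
```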
But suppose you did have a formal axiomatic system whose language is over an infinite alphabet. To model it with a Turing machine, you would have to choose which symbols, with their particular meanings, get represented by the smaller or larger natural numbers that encode them. Supposing the language still makes sense, the relative size of the minimal proof of one theorem compared to another could change simply by making a different choice about how the symbols are encoded.
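A toy illustration of that last point, with made-up symbols and codewords: two proofs whose relative encoded lengths flip when we swap which symbol gets the short codeword.

```python
def encoded_length(proof, code):
    """Total length of a proof once each abstract symbol is
    replaced by its binary codeword under a given encoding."""
    return sum(len(code[s]) for s in proof)

# Two hypothetical proofs over abstract symbols A and B.
proof1 = ["A"] * 4                  # leans on symbol A
proof2 = ["B"] * 3                  # leans on symbol B

code_x = {"A": "0", "B": "111"}     # A gets the short codeword
code_y = {"A": "111", "B": "0"}     # roles reversed

# Under code_x, proof1 is shorter; under code_y, the order flips.
assert encoded_length(proof1, code_x) < encoded_length(proof2, code_x)
assert encoded_length(proof1, code_y) > encoded_length(proof2, code_y)
```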