lodbrok said:
If it is reducible, then it is predictable and not random. If we don't know how to reduce it, then we are ignorant of its cause. It makes no sense to suggest that "uncertainty" is NOT due to "ignorance". The very definitions of the concepts of "uncertainty" and "ignorance" depend on each other.
Perhaps it's again about words and I don't know the correct definitions... but I will explain how I distinguish between uncertainty and ignorance.
Uncertainty ~ is a measure of an observer's/agent's predictive ability, typically quantified by some "statistical uncertainty" score, say a confidence interval at some confidence level, in predicting the future (for example the outcome of an experiment) or the responses from the environment.
But without specifying WHY.
Ignorance ~ supposedly indicates that the lack of certainty/confidence is explained by the agent/observer being uninformed, but where it at least in principle could have been informed. I.e. the information it needs is in principle information-theoretically and computationally accessible on a given time scale.
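To make the distinction concrete, here is a toy sketch (my own illustration, nothing more): the width of a confidence interval is one possible "statistical uncertainty" score, and whether that width counts as ignorance depends on whether the data that would shrink it is in principle accessible to the agent.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a Bernoulli parameter.

    The interval width is one concrete 'statistical uncertainty' score:
    it quantifies the agent's predictive ability without saying WHY
    the uncertainty is there.
    """
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Agent A has seen 100 flips, agent B has seen 10 000 flips of the same coin.
print(wilson_interval(53, 100))      # wide interval  -> large uncertainty
print(wilson_interval(5300, 10000))  # narrow interval -> small uncertainty

# If the extra 9 900 flips were in principle accessible to agent A, I would
# call A's wider interval "ignorance"; if no amount of in-principle accessible
# data could shrink the interval further, I would not.
```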
lodbrok said:
Because it is internally inconsistent.
Wherein lies the inconsistency? Do you mean because the randomness is not absolute, or something else?
What I refer to as the relativity of randomness is nothing strange; I have in mind something similar to cryptographic classifications such as computational hardness, or even limits due to information-theoretic constraints. I.e. if the decrypting algorithm requires more memory and computational power than agent A possesses, then I would not label agent A as ignorant, because it may well perform optimally (given the constraints, which are ultimately physical constraints) and yet fail to distinguish the input from noise.
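A minimal sketch of the cryptographic analogy (my own toy example; I use SHA-256 in counter mode as a stand-in keystream, not any particular cipher): to an agent whose test budget only covers cheap statistics, a fully deterministic keystream and genuine noise are indistinguishable, so relative to that agent the keystream is effectively random.

```python
import hashlib, os
from collections import Counter

def keystream(key: bytes, nbytes: int) -> bytes:
    """Deterministic 'ciphertext-like' stream: SHA-256 in counter mode.

    Fully reducible to (key, counter), so not random in any absolute sense.
    """
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:nbytes])

def cheap_statistics(data: bytes):
    """The kind of test a resource-bounded agent A can afford:
    mean byte value and the frequency of the most common byte."""
    counts = Counter(data)
    return sum(data) / len(data), counts.most_common(1)[0][1] / len(data)

n = 1_000_000
pseudo = keystream(os.urandom(32), n)   # reducible, but the key is out of A's reach
noise = os.urandom(n)                   # OS entropy source

print("pseudo:", cheap_statistics(pseudo))
print("noise :", cheap_statistics(noise))
# Both print roughly (127.5, ~0.004): within A's budget the streams are
# indistinguishable, so relative to A the keystream *is* random, even though
# a larger agent holding the key could predict it perfectly.
```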
The presumed relevance of the "size of the observer", scaling from the microscopic to the macroscopic domain, is that it implies a bound on the observer's computational capacity. I don't have a simple reference that gets right to the point, but I am associating around these:
https://en.wikipedia.org/wiki/Bekenstein_bound
https://en.wikipedia.org/wiki/Landauer's_principle
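To give the "size implies a bound" idea some numbers, here is a back-of-the-envelope sketch using the standard formulas (Bekenstein: I <= 2*pi*R*E/(hbar*c*ln 2) bits, Landauer: E >= k_B*T*ln 2 per erased bit); the specific 1 kg / 1 m observer is just my example.

```python
import math

# Physical constants (SI)
hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m/s
k_B  = 1.380649e-23      # J/K

def bekenstein_bound_bits(radius_m: float, energy_J: float) -> float:
    """Maximum information (in bits) in a sphere of given radius and energy:
    I <= 2*pi*R*E / (hbar*c*ln 2)."""
    return 2 * math.pi * radius_m * energy_J / (hbar * c * math.log(2))

def landauer_limit_J(temperature_K: float) -> float:
    """Minimum energy to erase one bit: k_B * T * ln 2."""
    return k_B * temperature_K * math.log(2)

# Example: a 1 kg, 1 m "observer" (rest energy E = m c^2)
E = 1.0 * c**2
print(f"Bekenstein bound: {bekenstein_bound_bits(1.0, E):.2e} bits")  # ~2.6e43
print(f"Landauer limit at 300 K: {landauer_limit_J(300):.2e} J/bit")  # ~2.9e-21
```

So whatever such an observer does, its memory and its "test budget" are finite, which is why I think the size matters for what it can resolve at all.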
Gia Dvali has interesting ideas and talks for just a few minutes about information in black holes (what-fundamental-physics-behind-information-processing-black-holes), but the next logical step is: what about information storage in, say, any system?
The connection is that if we "ignore" the information capacity limits and use "fictive" objects that really would imply creating black holes, then from the information-theoretic perspective we have an inconsistency. These problems are, IMO, rooted in the foundations of QM. The train of association may be long, but this isn't supposed to be easy; if it were, all this would have been solved already.
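As a rough numerical illustration of the kind of inconsistency I mean (my own estimate, using the standard Bekenstein-Hawking entropy): the most information a region of radius R can hold is that of a black hole filling it, so any "fictive" memory claiming more than that inside the region has implicitly assumed a black hole without saying so.

```python
import math

G    = 6.67430e-11       # m^3 kg^-1 s^-2
hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m/s

def max_bits_in_region(radius_m: float) -> float:
    """Bekenstein-Hawking entropy of a black hole filling the region,
    converted to bits: S/k_B = pi * R^2 * c^3 / (G * hbar), divided by ln 2."""
    return math.pi * radius_m**2 * c**3 / (G * hbar * math.log(2))

R = 1.0  # a 1 m region
print(f"Storage ceiling for R = 1 m: {max_bits_in_region(R):.2e} bits")  # ~1.7e70

# Any "fictive" memory that claims to hold more than ~1.7e70 bits inside 1 m
# needs so much energy that its Schwarzschild radius exceeds 1 m, i.e. the
# description has quietly assumed a black hole.
```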
/Fredrik