When discussing the fundamental principles of quantum mechanics we always assume ideal experiments to purposely exclude effects caused by any limitations in the devices used in the experiments. It is understood, then, that any non-classical behavior is due to the quantum nature of the experiment and not due to any human error or experimental imperfections.
We are currently able to do real experiments with only one photon in the experimental apparatus at any time. Only one detector is ever triggered. This is an experimental fact. We never see the two detectors in a Mach-Zehnder interferometer, for example, triggered simultaneously when only one photon is present. This seems obvious if there is only one photon available to do anything. A wave, being associated with a continuum, should trigger both detectors at least some of the time, even with imperfect devices.
The effect is more dramatic with photons hitting a detection screen, which is a continuum of detectors. In a one-photon interference experiment, the one photon produces one dot on the screen. We do not observe the total distribution of dots all appearing simultaneously. (As an aside, how would we get a zillion photons from the original one??) The interference pattern is built up one dot at a time, not continuously as a wave would do. The result obtained in a single measurement is always a single dot on the detection screen.
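To make the dot-by-dot buildup concrete, here is a small sketch (my own illustration, not data from any real experiment) that samples one "photon" at a time from an idealized cos² two-slit fringe profile. Each accepted sample is one dot on the screen; only after many dots does the interference pattern emerge statistically:

```python
import math
import random

random.seed(0)

def intensity(x):
    # Idealized fringe profile |psi|^2 ~ cos^2(pi x) on the interval [-1, 1];
    # this particular form is an assumption chosen for illustration.
    return math.cos(math.pi * x) ** 2

def sample_dot():
    # Rejection sampling: each accepted x plays the role of one dot
    # left by one photon on the detection screen.
    while True:
        x = random.uniform(-1.0, 1.0)
        if random.random() < intensity(x):
            return x

# Build the pattern one dot at a time.
dots = [sample_dot() for _ in range(20000)]

# Compare counts near a bright fringe (x ~ 0) and a dark fringe (x ~ 0.5).
bright = sum(1 for x in dots if abs(x) < 0.1)
dark = sum(1 for x in dots if abs(x - 0.5) < 0.1)
print(bright, dark)  # far more dots accumulate at the bright fringe
```

Any single run of `sample_dot()` returns just one position, with no hint of the fringes; the pattern only appears in the accumulated statistics, which is the point being made above.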
The point is this – Some experiments with light exhibit particle properties. This has been demonstrated in many experiments done over many years. That is what is being discussed here. That is why quantum mechanics was invented. The wave nature of light cannot explain the results of all these experiments. Nor can we explain all such experiments as being due to imperfections in our instruments or due to human ignorance, as you suggest.
Quantum mechanics is indeterminate. There are many different possible results of a quantum measurement and, generally, quantum mechanics does not predict which result will happen. It only predicts the probability of getting each possible result. Each result is an eigenvalue of the observable being measured. For example, if we perform a position measurement, there are very many locations where the particle can be found. If we repeat the same experiment many times, we generate a statistical distribution of all the dots that contains the entire eigenvalue spectrum of the position operator. The position does not have a unique value as it does in classical physics, where the same experiment always yields the same result. This is what we mean when we say that the position is uncertain. (Bohr, among others, preferred to say the position is “indeterminate”, which I believe is more descriptive and less confusing.) The uncertainty principle is a consequence of the purely statistical nature of quantum events.
Theoretically, the uncertainty in position is defined to be the root-mean-square deviation from the mean value of all the measurement results, called the standard deviation in ordinary statistics: \Delta x = \sqrt{\left\langle \psi \left| \left( \hat x - \langle \hat x \rangle \right)^2 \right| \psi \right\rangle}. Likewise, the uncertainty in momentum is:
\Delta p_x = \sqrt{\left\langle \psi \left| \left( \hat p_x - \langle \hat p_x \rangle \right)^2 \right| \psi \right\rangle}. Notice that the uncertainties depend on the wavefunction \psi(x). Every wavefunction gives uncertainties that satisfy \Delta x \Delta p_x \ge \hbar/2. Notice, also, that the uncertainties depend on the operators \hat x and \hat p_x.
This is how we calculate uncertainties. It is these definitions, along with the commutation relation \left[ \hat x, \hat p_x \right] = i\hbar, that give the uncertainty relation \Delta x \Delta p_x \ge \hbar/2.
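As a numerical check of these definitions, here is a sketch (units with \hbar = 1, and a Gaussian wavefunction chosen by me for illustration) that computes \Delta x and \Delta p_x on a grid. For a real wavefunction, \langle \hat p_x \rangle = 0 and \langle \hat p_x^2 \rangle = \hbar^2 \int |\psi'(x)|^2 dx, which lets us use a simple finite difference:

```python
import math

HBAR = 1.0   # work in units where hbar = 1
a = 0.7      # arbitrary Gaussian width parameter, assumed for illustration

# Normalized Gaussian wavefunction psi(x) = (2a/pi)^(1/4) exp(-a x^2)
N, L = 4000, 20.0
dx = L / N
xs = [-L / 2 + i * dx for i in range(N + 1)]
psi = [(2 * a / math.pi) ** 0.25 * math.exp(-a * x * x) for x in xs]

# <x> and <x^2> computed from |psi|^2, then the rms deviation Delta x
mean_x = sum(x * p * p for x, p in zip(xs, psi)) * dx
mean_x2 = sum(x * x * p * p for x, p in zip(xs, psi)) * dx
dx_unc = math.sqrt(mean_x2 - mean_x ** 2)

# For a real psi: <p> = 0 and <p^2> = hbar^2 * integral of |psi'(x)|^2
dpsi = [(psi[i + 1] - psi[i - 1]) / (2 * dx) for i in range(1, N)]
mean_p2 = HBAR ** 2 * sum(d * d for d in dpsi) * dx
dp_unc = math.sqrt(mean_p2)

print(dx_unc * dp_unc)  # ~0.5: a Gaussian saturates the bound hbar/2
```

The Gaussian is the special case that makes the inequality an equality; any other wavefunction plugged into the same definitions gives a product strictly greater than \hbar/2.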
I apologize for having to resort to the mathematical formalism, but, in the hope of minimizing misconceptions, the exact definitions should be used in any discussion of the uncertainty principle.
Of course, it is \left| \psi \right|^2 that predicts the statistical distribution of all the measurement results. When we see the statistical distribution of many repeated position measurements, we know there is an uncertainty in position. If we always get the same position in repeated measurements, then, when we calculate the position uncertainty, we get \Delta x = 0, and we say the position is absolutely certain, as in classical physics. If the statistical spread is clustered around only one location, then the position is less uncertain. If the dots are scattered over a wide range of positions, then the position is more uncertain. In any case, you must repeat the experiment many times to determine whether the position is certain (\Delta x = 0) or uncertain (\Delta x \ne 0).
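The "repeat the experiment many times" procedure can be sketched as follows (an assumed setup: position measurements drawn from a Gaussian |\psi|^2 whose true \Delta x I set to 1.0 for illustration). One dot gives a position value but no uncertainty; the spread of many dots estimates \Delta x:

```python
import random
import statistics

random.seed(1)
TRUE_DX = 1.0  # the "true" position uncertainty of the assumed wavefunction

# A single measurement: one dot, one number, and no way to get Delta x from it.
one_dot = random.gauss(0.0, TRUE_DX)

# Many repetitions of the same experiment: the statistical spread of the
# dots is what the uncertainty Delta x actually refers to.
dots = [random.gauss(0.0, TRUE_DX) for _ in range(100000)]
estimated_dx = statistics.pstdev(dots)
print(estimated_dx)  # close to 1.0, the true Delta x
```

Note that `one_dot` by itself is compatible with any value of \Delta x whatsoever; only the ensemble of repeated measurements pins it down.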
We intentionally use “certain” in place of words like “accurate” and “precise”, which refer to a single measurement and which can be misleading in this discussion. A single measurement tells us nothing about uncertainties. Uncertainties are not measured values. When a particle hits a detection screen we see a dot on the screen that gives the particle’s position at the instant it hit. We have a value for the position. Yet, there is no way to obtain the uncertainty in position from that number.
We repeat, for emphasis, THE UNCERTAINTY PRINCIPLE IS ABOUT UNCERTAINTIES. (I apologize for yelling, but too many discussions ignore this essential fact.) It is not about the actual values of position and momentum obtained in single measurements. The uncertainty principle tells us that there is no experiment, and no wavefunction, for which both position and momentum are certain. The uncertainty principle also states that an uncertainty in position is accompanied by an uncertainty in momentum, but in no case will the product of the uncertainties be less than \hbar /2.
The part of the Wiki article you cite is:
In quantum mechanics, the Heisenberg uncertainty principle states a fundamental limit on the accuracy with which certain pairs of physical properties of a particle, such as position and momentum, can be simultaneously known. In layman's terms, the more precisely one property is measured, the less precisely the other can be controlled, determined, or known.
IMHO, a more meaningful statement is
In quantum mechanics, the Heisenberg uncertainty principle states a fundamental limit on the product of certain pairs of uncertainties, such as the position uncertainty and the momentum uncertainty. There is no experiment in which the product of those uncertainties can be less than \hbar /2. The more certain we are of one property, the more uncertain is the other.
Best wishes