ftr said:
So what is an electron in a hydrogen atom? Or electrons in a silver atom for that matter?
Well, even in classical relativistic physics "point particles are strangers", as Sommerfeld put it. The troubles with the point-particle concept were apparent from the very beginning of Lorentz's "Theory of Electrons". I'm not sure whether it's the first source, but already around 1916 the divergent self-energy showed up in attempts to find closed equations of motion for charged point particles ("electrons") coupled to their own electromagnetic fields. The problem has been attacked by some of the greatest physicists, like Dirac and Schwinger, with no real success. Today, as far as we know, the best one can do is to approximate the famous Abraham-Lorentz-Dirac equation even further, boiling it down to the Landau-Lifshitz equation, as can be found in the famous textbook series by Landau and Lifshitz (vol. 2, among the best textbooks on classical relativistic field theory ever written).
Even in the classical regime the most natural way to describe the mechanics of charged particles is a continuum description like hydrodynamics or relativistic kinetic theory (aka the Boltzmann equation). One very practical application is the construction of modern particle accelerators like the FAIR accelerator here in Darmstadt, Germany, where the high-intensity particle bunches need a description that takes into account not only the interaction between the particles ("space-charge effects") but also radiation losses. There a hydro simulation (i.e., a continuum description of the particles) leads to the conclusion that in the discrete-particle picture one should use the Landau-Lifshitz approximation to the Abraham-Lorentz-Dirac equation, which describes the (accelerated) motion of charged particles including the radiation-reaction forces.
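For reference, the equation mentioned above can be sketched as follows (a standard textbook form in Gaussian units with ##c=1##, metric signature ##(+,-,-,-)##, dots denoting proper-time derivatives; signs are convention-dependent, so check against your favorite source). The Abraham-Lorentz-Dirac equation for a particle of mass ##m## and charge ##e## in an external field ##F^{\mu\nu}## reads
$$m \dot{u}^{\mu} = e F^{\mu\nu} u_{\nu} + \frac{2e^2}{3} \left( \ddot{u}^{\mu} + u^{\mu} \, \dot{u}^{\nu} \dot{u}_{\nu} \right), \qquad u^{\mu} u_{\mu} = 1.$$
The Landau-Lifshitz equation is obtained by "reduction of order": one substitutes the lowest-order equation of motion, ##\dot{u}^{\mu} \simeq (e/m) F^{\mu\nu} u_{\nu}##, into the radiation-reaction term on the right-hand side. This removes the third proper-time derivative and with it the unphysical runaway solutions of the original third-order equation.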
The most fundamental theory we have today about "elementary particles" is the Standard Model of elementary-particle physics, which is based on relativistic, local (microcausal) quantum field theory (QFT). Here the trouble persists but is quite a lot milder. The early failed attempts to formulate a relativistic quantum mechanics clearly show that relativity demands a many-body description even if you start with only a few particles, as in the usual scattering experiments, where you consider reactions of two particles in the initial state. The reason is that at relativistic collision energies, i.e., where these energies come into the same order of magnitude as the masses (##\times c^2##, but I set ##c=\hbar=1##) of the lightest particles allowed to be created in the reaction (where "allowed" means not violating any of the empirically known conservation laws for energy, momentum, angular momentum, and various conserved charges), there is always some probability to create new particles and/or destroy the initial colliding particles.
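To make the kinematics explicit with a standard textbook example (not tied to any particular experiment): with four-momenta ##p_1, p_2## of the incoming particles, production of a final state with particles of masses ##m_f## is kinematically allowed when the invariant mass of the collision exceeds the sum of the final-state masses,
$$\sqrt{s} = \sqrt{(p_1 + p_2)^2} \geq \sum_f m_f.$$
For instance, pion production ##p\,p \to p\,p\,\pi^0## requires ##\sqrt{s} \geq 2 m_p + m_{\pi^0} \approx 2 \times 938\,\text{MeV} + 135\,\text{MeV} \approx 2011\,\text{MeV}##, i.e., the collision energy must be comparable to the particle rest masses, exactly the regime described above.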
In QFT the fundamental concepts are fields, as the name suggests. Field quantization was there from the very beginning of the development of modern quantum mechanics. Immediately after Heisenberg's ingenious insight during his hay-fever-enforced stay on Helgoland in the summer of 1925, his vague ideas were amazingly quickly worked out by Born and Jordan, and also by Heisenberg himself, into the formalism today known as "matrix mechanics", and already in one of these very early papers (the famous "Dreimännerarbeit" by Born, Heisenberg, and Jordan) everything was "quantized", i.e., not only the particles (electrons) but also the electromagnetic field.

Ironically, at the time many physicists thought that quantizing the em. field too was "too much of a revolution", and for a short while it was considered unnecessary. The reason is simple: it is not so easy to see the necessity for field quantization at the low energies available in atomic physics at this time. It was well known that for some phenomena a "particle picture for radiation", as proposed in Einstein's famous 1905 paper on what we nowadays call "light quanta", explains several phenomena (like the photoelectric effect and Compton scattering) more easily than the classical-field picture. Nevertheless, for almost everything in atomic physics a treatment sufficed where only the electrons were quantized, the interaction was described by electrostatics, and the radiation by classical electromagnetic fields. What was known at the time, however, was the necessity for "spontaneous emission": even if there is no radiation field present which could lead to induced emission, there must be some probability for an excited atomic state (i.e., an energy eigenstate of the electrons around a nucleus) to emit a photon. This was the only phenomenon at the time which could not be described by the semiclassical theory, where only the electrons are quantized but not the electromagnetic field.
Everything else, including the photoelectric effect and Compton scattering as well as first applications to condensed-matter phenomena like the theory of dispersion of em. waves in matter can be successfully described in the semiclassical approximation.
The idea of field quantization was rediscovered by Dirac in 1927, when he formulated the theory of emission and absorption of electromagnetic radiation in terms of annihilation and creation operators for photons, leading to the correct postdiction of spontaneous emission, which was needed to explain Planck's black-body radiation formula, which had started the entire quantum business in 1900. It was well known from Einstein's (also very famous) 1917 paper on the quantum-kinetic derivation of the Planck spectrum within "old quantum mechanics" that spontaneous emission had to be postulated in addition to induced emission and absorption to get the correct Planck formula from kinetic considerations, but before Dirac there was no clear formalism for it.
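Einstein's 1917 rate argument can be sketched in a few lines (assuming non-degenerate levels, so the ##B## coefficients for absorption and induced emission are equal). In thermal equilibrium the emission and absorption rates between two levels of energy difference ##\hbar\omega## balance:
$$N_2 \left[ A + B \rho(\omega) \right] = N_1 B \rho(\omega), \qquad \frac{N_2}{N_1} = e^{-\hbar\omega/k_B T},$$
which gives
$$\rho(\omega) = \frac{A/B}{e^{\hbar\omega/k_B T} - 1}.$$
Without the spontaneous-emission term ##A## no consistent thermal spectrum results; with it, matching the Rayleigh-Jeans limit at high ##T## fixes ##A/B = \hbar\omega^3/(\pi^2 c^3)## and yields exactly Planck's law. Before Dirac, the coefficient ##A## had to be put in by hand; field quantization derives it.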
Shortly thereafter, Heisenberg and Pauli (among others) formulated quantum electrodynamics, and the use of perturbation theory led to quite some success as long as one used only the lowest-order approximations (what we nowadays call the tree-level approximations, using the pictorial notation in terms of Feynman diagrams). Going to higher orders, however, was plagued by the old demon of divergences known from the classical theory of radiation reaction, i.e., the interaction of charged particles with their own radiation fields. The same self-energy divergences appeared as in the classical theory, though less severe, and the solution of the problem within perturbation theory was found in 1948, when Tomonaga, Schwinger, and Feynman developed their renormalization theory. This was largely triggered by the fact that the "radiative corrections", i.e., the higher-order corrections leading to divergences in naive perturbation theory, became measurable (particularly Lamb's discovery of a little shift in the fine structure of the hydrogen spectrum, now named the "Lamb shift" after him). The final solution of the problem within perturbative QFT, which then proved crucial for the Standard Model, came in 1971, when 't Hooft and Veltman proved the perturbative renormalizability of Abelian as well as non-Abelian gauge theories to any order of perturbation theory.
The upshot of this long story is that the particle picture of subatomic phenomena is quite restricted. One cannot make true sense of the particle picture except for asymptotically free states: only when the quantum fields can be seen as essentially non-interacting does a particle interpretation in terms of Fock states (eigenstates of the particle-number operators) become sensible.
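In formulas (for a free field, in the usual mode decomposition with momentum ##\vec{k}## and polarization/spin label ##\lambda##): the particle-number operator and its eigenstates, the Fock states, are
$$\hat{N} = \sum_{\vec{k},\lambda} \hat{a}^{\dagger}(\vec{k},\lambda) \hat{a}(\vec{k},\lambda), \qquad |\vec{k}_1 \lambda_1, \ldots, \vec{k}_n \lambda_n \rangle \propto \hat{a}^{\dagger}(\vec{k}_1,\lambda_1) \cdots \hat{a}^{\dagger}(\vec{k}_n,\lambda_n) |0\rangle.$$
For interacting fields ##\hat{N}## is in general not conserved (the Hamiltonian contains terms that create and destroy quanta), which is precisely why a particle number, and with it the particle picture, is only well-defined for the asymptotically free in- and out-states.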
Particularly for photons a classical-particle picture, as envisaged by Einstein in his famous 1905 paper on "light quanta" (carefully titled a "heuristic point of view"), is highly misleading. There is not even a formal way to define a position operator for massless quanta (as I prefer to say instead of "particles") in the narrow sense. All we can calculate is the probability for a photon to hit a detector at the place where this detector is located.