Conceptually, the limits we take in mathematics are indeed idealizations and simplifications. E.g., in classical continuum mechanics we work with continuous quantities like the mass density: you take a fluid and, mathematically, an infinitely small volume out of it, determine the mass within it, and call that mass divided by the volume the density of the matter at this point in space at the given time. In fact, what's described by this idealized quantity is a macroscopically small volume (i.e., a volume within which the spatial changes of the relevant quantities can be considered negligibly small) but a microscopically large one (i.e., there must be a large number of particles within this volume element, and the fluctuations (quantum and/or thermal) should average out to be small over this volume).
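In formulas this coarse-graining idea reads (a sketch; here ##\Delta V## denotes the macroscopically small but microscopically large volume element around the point ##\vec{x}##, and ##\Delta M## the mass it contains):
$$\rho(\vec{x},t) = \frac{\Delta M(\vec{x},t)}{\Delta V}, \qquad \ell_{\text{micro}}^3 \ll \Delta V \ll \ell_{\text{macro}}^3,$$
where ##\ell_{\text{micro}}## is of the order of the interparticle distance and ##\ell_{\text{macro}}## the scale over which the macroscopic quantities vary appreciably.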
The same holds true for QFT. You (try to) define it as a Poincaré-covariant theory in Minkowski space ##\mathbb{R}^4##, but that fails for all physically relevant models, and it's likely that at this level of rigor it's doomed to fail for fundamental reasons related to Haag's theorem and all that. On the other hand, of course, relativistic QFT in the way it is treated by physicists, as an effective theory, is very successful, and the way to cure Haag's disastrous theorem is indeed to regularize it somehow so as to make space and energy-momentum finite in some sense. E.g., you can put the system in a box, impose convenient periodic boundary conditions to have well-defined momentum operators, etc., and then also impose a momentum cutoff to get rid of the UV trouble. That lets you at least define something like scattering-matrix elements within this regularized model and then take appropriate limits to get S-matrix elements from appropriately renormalized perturbative N-point functions comparable with experiment.
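To make the box regularization concrete (a standard sketch; ##L## is the box length and ##\Lambda## the momentum cutoff): with periodic boundary conditions the momenta become discrete, and the cutoff keeps only finitely many modes,
$$\vec{p} = \frac{2\pi}{L}\vec{n}, \qquad \vec{n} \in \mathbb{Z}^3, \qquad |\vec{p}| \leq \Lambda,$$
so that all momentum sums are finite, and at the end one studies the limits ##L \to \infty## and ##\Lambda \to \infty## of suitably renormalized quantities.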
Of course, with about 70 years of experience in such regularization procedures, nobody would do such a brute-force regularization anymore but would rather use more convenient prescriptions, working, e.g., in a manifestly covariant way with dimensional regularization or the heat-kernel/##\zeta##-function method, because that simplifies the practical calculation. At the end you have an effective theory defined by renormalized perturbation theory.
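To illustrate how dimensional regularization works (a textbook example, not tied to any specific model here): a typical logarithmically divergent (Euclidean) one-loop integral is evaluated in ##d = 4 - 2\epsilon## dimensions, where it is finite, and the divergence reappears as a pole in ##\epsilon##,
$$\int \frac{\mathrm{d}^d k}{(2\pi)^d}\,\frac{1}{(k^2+\Delta)^2} = \frac{\Gamma(2-d/2)}{(4\pi)^{d/2}}\,\Delta^{d/2-2} = \frac{1}{16\pi^2}\left(\frac{1}{\epsilon} - \gamma_{\text{E}} + \ln(4\pi) - \ln\Delta\right) + \mathcal{O}(\epsilon).$$
The ##1/\epsilon## pole is then subtracted in the renormalization step, keeping Poincaré (and gauge) covariance manifest throughout, which is exactly what makes this prescription so convenient in practice.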
Lattice regularization is of course another route, used in lattice-QCD calculations, and here too you have to employ continuum extrapolations, using scaling laws and other mathematical tricks, to get the numbers of the continuum theory out beyond the perturbative approach.
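Schematically, the continuum extrapolation looks like this (a hedged sketch; the power ##n## of the leading lattice artifact depends on the discretization used): one computes an observable ##O## at several lattice spacings ##a## and extrapolates,
$$O(a) = O_{\text{cont}} + c_1 a^{n} + \text{higher-order corrections},$$
with, e.g., ##n = 2## for ##\mathcal{O}(a)##-improved actions, fitting the coefficient ##c_1## and reading off the continuum value ##O_{\text{cont}}## as ##a \to 0##.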
Of course each of these approximation schemes has its limitations, but the underlying theory we approximate with this practitioners' version of relativistic QFT is not known today (nor do we know whether such a theory really exists).