So here's my concrete challenge: show me how we can analyze the speed of convergence of finite element methods without using rigorous mathematics.
I mean, I can implement the algorithm in my language of choice (Python, or Chicken Scheme if I can get away with it, Fortran or C/C++ if I wanted to suffer or needed the performance) and benchmark how long it takes to converge. If that exceeds my optimization constraints (e.g. if Pointy Haired Boss wants it to run in under 20 minutes) I need to consider implementing a different algorithm or optimizing my existing implementation.
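A minimal sketch of what that loop looks like in practice (the toy Jacobi iteration on a 1D Poisson problem is just a stand-in for a real FEM solver, and the problem size, tolerance, and time budget are all made-up placeholders):

```python
# Time an iterative solver to a target tolerance and check it against a
# wall-clock budget. The Jacobi sweep on a 1D Poisson problem is a toy
# stand-in for whatever solver is actually under test.
import time
import numpy as np

def solve_with_budget(n=200, tol=1e-8, budget_seconds=20 * 60):
    # Toy problem: -u'' = 1 on (0, 1) with homogeneous Dirichlet BCs,
    # discretized on n interior points.
    h = 1.0 / (n + 1)
    b = np.full(n, h * h)   # right-hand side scaled by h^2
    u = np.zeros(n)

    start = time.perf_counter()
    iterations = 0
    while True:
        # Jacobi update for the tridiagonal [-1, 2, -1] operator;
        # the boundary rows are patched up after the vectorized sweep.
        u_new = 0.5 * (b + np.roll(u, 1) + np.roll(u, -1))
        u_new[0] = 0.5 * (b[0] + u[1])
        u_new[-1] = 0.5 * (b[-1] + u[-2])
        iterations += 1

        elapsed = time.perf_counter() - start
        # Crude stopping rule: max change between iterates, not a rigorous
        # error estimate -- in keeping with the brute-force spirit.
        if np.max(np.abs(u_new - u)) < tol:
            return iterations, elapsed, True
        if elapsed > budget_seconds:
            return iterations, elapsed, False   # blew the time budget
        u = u_new

if __name__ == "__main__":
    iterations, elapsed, converged = solve_with_budget()
    status = "converged" if converged else "hit the time budget"
    print(f"{status} after {iterations} iterations in {elapsed:.2f} s")
```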
In other words, I couldn't care less if it takes *insert expression here* steps/terms/increments to converge; I only care about the time it takes, a question that can be settled by brute force.
If a mathematician hands me *closed form # of steps expression*, that's all well and good, but it's probably useless, given that different architectures, hardware, and languages will muddle any attempt to extract useful information about how long it will take to reach the precision I need.
If we're at the drawing board and he hands me *expression1* and *expression2* for two different algorithms, it would almost certainly still be easier to just implement algorithms 1 and 2 and benchmark them, assuming the first one I tried wasn't quick enough.
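A sketch of that head-to-head comparison (algorithm_1 and algorithm_2 are dummy placeholders, not real candidates; the point is that the harness only ever reports wall-clock time to a fixed tolerance):

```python
# Run two candidate implementations to the same tolerance and compare
# wall-clock time directly, ignoring any closed-form step counts.
import time

def algorithm_1(tol):
    # Placeholder: pretend each iteration halves the error.
    error, steps = 1.0, 0
    while error > tol:
        error *= 0.5
        steps += 1
    return steps

def algorithm_2(tol):
    # Placeholder: slower per-iteration error reduction.
    error, steps = 1.0, 0
    while error > tol:
        error *= 0.9
        steps += 1
    return steps

def benchmark(solver, tol=1e-10, repeats=5):
    # Best of several runs, to dampen OS and timer noise.
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        solver(tol)
        best = min(best, time.perf_counter() - start)
    return best

if __name__ == "__main__":
    for name, solver in [("algorithm 1", algorithm_1), ("algorithm 2", algorithm_2)]:
        print(f"{name}: {benchmark(solver) * 1e6:.1f} microseconds (best of 5)")
```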
In my experience, these expressions don't exist. Last year I implemented, from scratch, a Monte Carlo approach to computing perturbation expansions for three-body decays in QED (a triplet pair production reaction, to be precise), and the literature was not very helpful. I've implemented many different algorithms for complex networks and for solving SDEs derived from solvent simulations around proteins, and apart from complexity classes in the CS papers, we're stuck with straight-up brute-force benchmarks.
Does this answer your question or do I still not understand it? In short, the answer is that I only care about real time, not number of steps/increments/terms.