-A rigorous proof does not always imply a correct proof...
That's incorrect -- a (completely) rigorous proof must be a correct proof: to be completely rigorous, it must start from the hypotheses, and each step must follow logically from the previous steps (with nothing left implicit). That is essentially the definition of a correct proof.
Of course, if the hypotheses are false, then even if we have a correct proof, that does not imply the conclusion is true.
-Take a look at E. Rips's paper, which inspired the "Bible codes": the reasoning is correct, but the results are simply nonsense. Of course, he was famous, so his paper was published.
I don't know much about this particular scenario. But from what little I do know, he makes a statistical claim without following any of the proper procedure that one is supposed to use in statistics.
-By the way, how do you prove with your math "rigor" that 1+1=2? Of course, I mean without taking it as an axiom, or proving it empirically.
If possible at all, it would depend on the definitions of "1", "+", and "2", and whichever axioms you chose.
For example, in the theory of the natural numbers, 2 is generally defined to be the successor of 1 (which is defined to be the successor of 0). The addition operation is defined recursively (using s(a) to denote the successor of a) by:
a + 0 = a
a + s(b) = s(a) + b
And the proof that 1+1=2 is a one-liner:
1+1 = 1+s(0) = s(1)+0 = 2+0 = 2
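If you want to watch that recursion actually run, here's a quick sketch -- my own toy encoding, nothing canonical -- of Peano numerals as nested tuples:

[code]
# Toy encoding (my own, for illustration only): a Peano numeral is a
# nested tuple, with 0 as the empty tuple and s(a) wrapping one layer.

ZERO = ()

def s(a):
    """Successor: wrap the numeral in one more layer."""
    return (a,)

def add(a, b):
    """The two rules above:  a + 0 = a;  a + s(b) = s(a) + b."""
    if b == ZERO:
        return a
    return add(s(a), b[0])  # b[0] peels s(b) back down to b

ONE = s(ZERO)
TWO = s(ONE)

# 1 + 1 = 1 + s(0) = s(1) + 0 = 2 + 0 = 2
assert add(ONE, ONE) == TWO
[/code]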
Of course, in other contexts, 2 is simply defined as 1+1. After all, it needs to be defined as something. Can you think of a better choice?
In another context, the ordinal 1 is defined to be the ordering consisting of a single element *. The ordinal 2 is defined to be the ordering consisting of two elements: * < *. In this context, u + v is defined by concatenating u and v and putting a < between them. So, 1+1 is:
(*) < (*) = * < *
and thus 1 + 1 = 2.
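Same game in code, if it helps. This is just my own throwaway encoding of a finite linear order as a list read left to right, with + as concatenation:

[code]
# Assumed encoding (mine, not standard): a finite linear order is a
# list of its elements, read left to right.

def ordinal(n):
    """The finite ordinal n: n copies of '*' in a row."""
    return ['*'] * n

def ordinal_add(u, v):
    """Put every element of u before every element of v."""
    return u + v

# (*) < (*) concatenates to * < *, which is exactly the ordinal 2.
assert ordinal_add(ordinal(1), ordinal(1)) == ordinal(2)
[/code]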
And in still other contexts, 1+1=2 is false. For example, in category theory, 1 is (by definition) the diagram that looks like this:
*
1+1 is the diagram that looks like this (as computed by the definition of + as the coproduct):
*    *
2 is the diagram that looks like this (by definition):
*--->*
So 1+1 is clearly unequal to 2.
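You can even count this mechanically. Here's a toy encoding I'm making up purely for illustration -- a diagram as an (objects, arrows) pair, with the coproduct as a disjoint union:

[code]
# Assumed encoding (my own invention): a diagram is (objects, arrows),
# with each arrow an ordered pair of objects.

def coproduct(d1, d2):
    """Disjoint union: tag objects so the two copies can't collide."""
    objs1, arrs1 = d1
    objs2, arrs2 = d2
    objs = {(0, x) for x in objs1} | {(1, x) for x in objs2}
    arrs = ({((0, a), (0, b)) for (a, b) in arrs1} |
            {((1, a), (1, b)) for (a, b) in arrs2})
    return (objs, arrs)

ONE = ({'*'}, set())               # a single object, no arrows
TWO = ({'a', 'b'}, {('a', 'b')})   # two objects and one arrow a ---> b

one_plus_one = coproduct(ONE, ONE)

# 1+1 has no arrows while 2 has one, so no relabeling of objects can
# ever match them up.
assert len(one_plus_one[1]) == 0 and len(TWO[1]) == 1
[/code]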
So I hope you see why definitions are important.

Without definitions, you can't prove anything at all!
-How do you justify "infinitesimals"? (I know that nowadays there's a rigorous theory about them, but what about in the 17th century?) Take a value dx to be the smallest positive number; then dx/2 is smaller -- a contradiction, so infinitesimals can't exist, so calculus can't exist, so why use it? The same arguments were used to ban Fourier series.
These are wonderful examples of why rigor and the axiomatic method
are useful!
Newton faced criticisms that he was pushing symbols around in strange and mysterious ways, and that there was no particular reason he should be getting any sort of sensible results at all.
But if Newton could have rigorously derived calculus, then such criticisms would have held no weight: people would have had to accept the validity of his calculus. Without the force of fully rigorous logic, though, he had to rely on people sharing his intuition, and on its outstanding empirical success.
I would say the same about Fourier, but my memory of history is fuzzy -- I think he
did eventually manage to prove that his method works, and thus it was accepted by the mathematical community.
-(I wish some teachers at my university had also given me a chance, not in math but in physics.)
Anyways, I've warned you before about griping about how nobody listens to you, so this is strike #1.