Well, first of all, it's not! Unless f happens to be "Riemann integrable".
So of course the answer depends on the definition of "Riemann integrable" and "definite integral".
Let f be defined and non-negative on the interval [a, b]. Partition [a, b] into n subintervals. For each i choose a number x_i^* in the ith subinterval, and let \Delta x_i be the length of that subinterval. Then
\sum_{i=1}^n f(x_i^*)\Delta x_i
is the Riemann sum for those choices.
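Just to make the definition concrete, here is a small numerical sketch (the choices of f, a, b, n, and the left-endpoint rule are mine, purely for illustration):

```python
def riemann_sum(f, a, b, n, choose=lambda left, right: left):
    """Sum f(x_i^*) * dx over n equal subintervals of [a, b].

    `choose` picks x_i^* from each subinterval; by default it takes
    the left endpoint, but any point in [left, right] is allowed.
    """
    dx = (b - a) / n  # every \Delta x_i is the same for a uniform partition
    total = 0.0
    for i in range(n):
        left = a + i * dx
        right = left + dx
        total += f(choose(left, right)) * dx
    return total

# Left-endpoint Riemann sum for f(x) = x^2 on [0, 1]; as n grows,
# this approaches 1/3.
print(riemann_sum(lambda x: x * x, 0.0, 1.0, 1000))
```

Different choices of x_i^* give different Riemann sums for the same n; integrability is exactly the statement that this difference washes out in the limit.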
If f is bounded on [a, b], then it is easy to show that the set of all such Riemann sums, for a given n, has an upper bound (a lower bound is trivially 0, since f is non-negative), and so has a greatest lower bound and a least upper bound, U_n and V_n. f is said to be "Riemann integrable" on [a, b] if and only if \lim_{n\rightarrow\infty} U_n = \lim_{n\rightarrow\infty} V_n.
If f is Riemann integrable, one way of defining \int_a^b f(x)\,dx is as that common value. With that definition there is essentially nothing to prove: the sum you give is, for each n, a Riemann sum, so its limit must be the integral.
Another way to define the definite integral is as the "area bounded by x = a, y = f(x), x = b, and y = 0". To prove your equality, choose each x_i^* to be, first, the x in that subinterval giving the largest value of f, and, second, the x giving the smallest value of f. Calling the first sum M_n and the second m_n, it is clear that m_n \le \text{area} \le M_n. Since that is true for all n, if f is Riemann integrable, so that those two sums have a common limit, then by the "pinching theorem" that common limit must be the area under the curve.
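The pinching argument is easy to watch numerically. In the sketch below (again, f, a, b, and the values of n are my illustrative choices), f(x) = x^2 is increasing on [0, 1], so the minimum on each subinterval sits at its left endpoint and the maximum at its right endpoint, which makes m_n and M_n easy to compute:

```python
def upper_lower_sums(f, a, b, n):
    """Return (m_n, M_n) for an increasing f on [a, b] with n equal parts.

    For increasing f, the min on each subinterval is at the left
    endpoint and the max at the right endpoint.
    """
    dx = (b - a) / n
    m_n = sum(f(a + i * dx) * dx for i in range(n))        # lower sum
    M_n = sum(f(a + (i + 1) * dx) * dx for i in range(n))  # upper sum
    return m_n, M_n

# Both sums squeeze toward the true area 1/3, with m_n <= 1/3 <= M_n.
for n in (10, 100, 1000):
    lo, hi = upper_lower_sums(lambda x: x * x, 0.0, 1.0, n)
    print(n, lo, hi)
```

For this f the gap M_n - m_n is exactly (f(b) - f(a)) \Delta x, which goes to 0 as n grows; that is the pinching at work.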