I'm trying to prove a point to someone about compounding periods and mortgage payments. Compound interest over one year is calculated as A = P(1 + I/x)^x, where A is the final balance, P is how much you borrowed, I is the annualized interest rate, and x is how many compounding periods are in a year. For example, a credit card at 20% compounded monthly would be A = P(1 + 0.2/12)^12.

If you strip out the constants, you're left with Y = (1 + 1/x)^x. The real-life limits of this equation are Y → 1 as x → 0 (no interest accumulates when there's no compounding period) and Y → e as x gets large. If you graph it from just above x = 0 out to x = 999, you'll see that the slope is always positive. In real terms, that means the bank screws you more the more often the money compounds, and this is always true: there's no point where more frequent compounding periods means less interest.

To make differentiating easier, I can rewrite the equation as Y = (1 + x^-1)^x. Applying the power rule:

dy/dx = (x)(-x^-2)(1 + x^-1)^(x-1)
dy/dx = -(1/x)(1 + x^-1)^(x-1)

This dy/dx, which apparently matches several Google search results, says the slope is always negative. What went wrong?

The result I'm anticipating is a slope graph that shows large positive changes at the start, with the changes getting smaller and smaller. In real terms, that means the effective interest grows the most for the first jumps in frequency, and the jumps after that are smaller: going from compounding once yearly to 12 times monthly is a huge change, but going from 12 times monthly to 365 times daily is a fairly small change.
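The limit claims above are easy to check numerically. Here's a minimal sketch in Python (`growth_factor` is just my name for the stripped-down expression, nothing standard):

```python
import math

def growth_factor(x):
    """One-year growth factor at a 100% nominal rate compounded x times: (1 + 1/x)**x."""
    return (1.0 + 1.0 / x) ** x

# The factor rises monotonically with compounding frequency and approaches e.
for x in (1, 12, 365, 100_000):
    print(f"x = {x:>6}: Y = {growth_factor(x):.6f}")
```

At x = 1 this is exactly 2, and each jump in frequency adds less and less as the value creeps up toward e ≈ 2.718282, which matches the graph.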
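To make the disagreement concrete without doing any symbolic calculus, a central finite difference can stand in for the "true" slope and be compared against the power-rule formula quoted above. This is just a sanity-check sketch; the function names are mine:

```python
def y(x):
    return (1.0 + 1.0 / x) ** x

def dydx_numeric(x, h=1e-6):
    # Central finite difference: approximates the actual slope of the graph.
    return (y(x + h) - y(x - h)) / (2.0 * h)

def dydx_power_rule(x):
    # The power-rule result quoted above: -(1/x)(1 + 1/x)**(x - 1)
    return -(1.0 / x) * (1.0 + 1.0 / x) ** (x - 1)

for x in (1.0, 12.0, 365.0):
    print(f"x = {x:>5}: numeric slope = {dydx_numeric(x):+.6f}, "
          f"power-rule slope = {dydx_power_rule(x):+.6f}")
```

The numeric slope comes out positive and shrinking as x grows, exactly the anticipated behaviour, while the power-rule formula is negative everywhere, so the two genuinely disagree rather than this being a graphing mistake.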