Why doesn't the residue theorem work for branch cut integration?

Summary
The discussion centers on the challenges of using the residue theorem for branch cut integration, specifically for the integral of ##x^{\alpha-1}/(x+1)## over ##(0,\infty)##. The user applied a keyhole contour but got a result that disagrees with the expected answer except when ##\alpha = 1/2##. The key issue was the handling of the branch cut along the positive real axis, which makes ##z^{\alpha}## take different values just above and just below the x-axis. Once the user saw that the two edges of the cut do not contribute equally, the problem resolved trivially. The discussion highlights the importance of tracking branch cuts carefully in complex integration.
gonzo
I need help with a branch cut integration. The problem is to show the following for ##0 < \alpha < 1##:

$$\int_{0}^{\infty}\frac{x^{\alpha - 1}}{x+1}\,dx=\frac{\pi}{\sin\alpha\pi}$$

I used the standard keyhole contour around the positive real axis (taking that as the branch cut), but applying the residue theorem I end up with:

$$-\pi i e^{i\alpha\pi}$$

which obviously doesn't match, although it does agree for ##\alpha = 1/2##.
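(Presumably that value arises along these lines, taking the branch ##\arg z \in (0, 2\pi)## and assuming the two edges of the cut contribute equally:

$$\operatorname*{Res}_{z=-1}\frac{z^{\alpha-1}}{z+1}=\left(e^{i\pi}\right)^{\alpha-1}=e^{i\pi(\alpha-1)}=-e^{i\alpha\pi},$$

so the residue theorem gives ##-2\pi i e^{i\alpha\pi}## for the whole contour, and halving it yields the ##-\pi i e^{i\alpha\pi}## above.)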

Some help would be appreciated.
 
It would be easier to see what went wrong if you showed more work!

What was your residue? (I think you were OK here.)

How did you deal with the part of the keyhole contour that lies just below the x-axis? I think this is what went wrong. On this part you will be working with a different branch of the logarithm, so ##z^{\alpha}## will take on different values here than on the bit above the x-axis.
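To spell that out, here is the standard bookkeeping on the two edges of the cut, a sketch assuming the branch is fixed by ##\arg z \in (0, 2\pi)##:

$$\text{just above the cut: } z = xe^{i0} \implies z^{\alpha-1} = x^{\alpha-1},$$
$$\text{just below the cut: } z = xe^{2\pi i} \implies z^{\alpha-1} = x^{\alpha-1}e^{2\pi i(\alpha-1)} = x^{\alpha-1}e^{2\pi i\alpha}.$$

Since the lower edge is traversed in the opposite direction, the two edges combine with coefficient ##1 - e^{2\pi i\alpha}##, which equals 2 only when ##\alpha = 1/2##.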
 
Thanks, you actually showed me my stupid mistake. I had previously worked through the problem with ##\alpha = 1/2##, in which case the values on the lower and upper edges of the branch cut are the same (well, negatives of each other, but traversed in opposite directions, so you end up dividing by 2).

I had naively assumed that it would be the same for this problem. Your comment inspired me to take a closer look and I realized it wasn't the case, and then the answer was trivial.
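For completeness, here is how the calculation goes through once the two edges are handled separately (a sketch, same branch convention as above):

$$\left(1 - e^{2\pi i\alpha}\right)\int_{0}^{\infty}\frac{x^{\alpha-1}}{x+1}\,dx = 2\pi i\operatorname*{Res}_{z=-1}\frac{z^{\alpha-1}}{z+1} = -2\pi i e^{i\alpha\pi},$$

so

$$\int_{0}^{\infty}\frac{x^{\alpha-1}}{x+1}\,dx = \frac{-2\pi i e^{i\alpha\pi}}{1 - e^{2\pi i\alpha}} = \frac{2\pi i}{e^{i\alpha\pi} - e^{-i\alpha\pi}} = \frac{\pi}{\sin\alpha\pi}.$$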

Thanks!
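As a quick numerical sanity check (not from the thread; a minimal sketch using SciPy's quad):

```python
import numpy as np
from scipy.integrate import quad

def f(x, alpha):
    # Integrand x^(alpha-1) / (x + 1) for 0 < alpha < 1.
    return x**(alpha - 1) / (x + 1)

for alpha in (0.25, 0.5, 0.75):
    # Split at x = 1 so QUADPACK handles the integrable singularity
    # at 0 and the infinite tail as separate, well-behaved pieces.
    head, _ = quad(f, 0, 1, args=(alpha,))
    tail, _ = quad(f, 1, np.inf, args=(alpha,))
    exact = np.pi / np.sin(alpha * np.pi)
    print(f"alpha={alpha}: numeric={head + tail:.6f}, pi/sin(alpha*pi)={exact:.6f}")
```

The two columns should agree to several decimal places.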
 