D'Alembert question - boundary conditions parts

Ratpigeon
Homework Statement



I have a general wave equation on the half line
##u_{tt} - c^2 u_{xx} = 0##
##u(x,0) = \alpha(x)##
##u_t(x,0) = \beta(x)##
and the boundary condition
##u_t(0,t) = c\eta u_x(0,t)##
where ##\bar\alpha## is ##\alpha## extended as an odd function to the real line (and likewise for ##\beta##).
I have to find the d'Alembert solution for ##x \ge 0##, and show that in general it doesn't exist for ##\eta = -1##.

Homework Equations



The d'Alembert solution is
##u(x,t) = \frac{1}{2}\left(\alpha(x+ct) + \alpha(x-ct)\right) + \frac{1}{2c}\int_{x-ct}^{x+ct} \beta(y)\,dy##
for ##x > ct##.
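As a quick sanity check (not part of the assignment), here is a minimal Python sketch that plugs an illustrative odd pair, ##\alpha(x)=\sin x## and ##\beta(y)=y e^{-y^2}##, into that formula and verifies by finite differences that it satisfies ##u_{tt}=c^2 u_{xx}## at a point with ##x>ct##; the particular ##\alpha##, ##\beta##, and ##c=1## are my own choices, not from the problem.

```python
import math

c = 1.0

def alpha(x):
    # illustrative odd initial displacement
    return math.sin(x)

def B(y):
    # closed-form antiderivative of the odd beta(y) = y * exp(-y^2)
    return -0.5 * math.exp(-y * y)

def u(x, t):
    """d'Alembert formula: averaged shifts of alpha plus the integral of beta."""
    return 0.5 * (alpha(x + c * t) + alpha(x - c * t)) \
         + (B(x + c * t) - B(x - c * t)) / (2 * c)

# central-difference check of u_tt - c^2 u_xx at an interior point with x > c t
x, t, h = 2.0, 1.0, 1e-4
u_tt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h**2
u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
print(abs(u_tt - c * c * u_xx))  # small residual, limited by finite-difference error
```

It also recovers the initial condition exactly: ##u(x,0)=\alpha(x)##, since the two ##B## terms cancel at ##t=0##.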

The Attempt at a Solution



I know that to restrict it to the whole of the ##x \ge 0,\ t \ge 0## region, I need to use the boundary condition; but I get that
##u(0,t) = 0## because ##\alpha## and ##\beta## are odd, which makes ##\alpha(ct) + \alpha(-ct) = 0##
and the integral of ##\beta(y)## from ##-ct## to ##ct## zero; and so ##u_t(0,t)## is zero, which is supremely not useful...
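That cancellation can be checked numerically. The sketch below uses illustrative odd choices of my own (##\alpha(x)=\sin x##, ##\beta(y)=y e^{-y^2}##, ##c=1##, none of them from the problem) and confirms that the d'Alembert formula with odd extensions gives ##u(0,t)=0##, and hence ##u_t(0,t)=0##, for every ##t##:

```python
import math

c = 1.0

def alpha(x):
    # odd: alpha(-x) = -alpha(x)
    return math.sin(x)

def B(y):
    # antiderivative of the odd beta(y) = y * exp(-y^2); B itself is even
    return -0.5 * math.exp(-y * y)

def u(x, t):
    return 0.5 * (alpha(x + c * t) + alpha(x - c * t)) \
         + (B(x + c * t) - B(x - c * t)) / (2 * c)

# alpha(ct) + alpha(-ct) = 0 and B(ct) - B(-ct) = 0, so u(0, t) = 0 for all t,
# and differentiating that identity in t gives u_t(0, t) = 0 as well
h = 1e-5
for t in (0.5, 1.0, 2.0):
    ut0 = (u(0.0, t + h) - u(0.0, t - h)) / (2 * h)
    print(u(0.0, t), ut0)
```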
 

Are you sure you copied down the problem statement correctly? It would make a lot more sense if ##\beta## were an even function, and your boundary condition was ##u_t(0,t) = c\eta u_x(0,t)## for some constant ##\eta##.
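Assuming the boundary condition really is ##u_t(0,t) = c\eta u_x(0,t)##, here is a sketch of why ##\eta = -1## is special. Write the general solution as ##u(x,t) = F(x-ct) + G(x+ct)##, so ##u_t = -cF' + cG'## and ##u_x = F' + G'##. Imposing the boundary condition at ##x=0## gives
##-cF'(-ct) + cG'(ct) = c\eta\left(F'(-ct) + G'(ct)\right)##
i.e. ##(1-\eta)\,G'(ct) = (1+\eta)\,F'(-ct)##. For ##\eta = -1## the right-hand side vanishes, forcing ##G'(ct) = 0## for all ##t > 0##; but the initial data already determine ##G'(s) = \frac{1}{2}\left(\alpha'(s) + \beta(s)/c\right)## for ##s > 0##, which is generally nonzero. So for ##\eta = -1## a solution exists only for special initial data.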
 