Solving a system of two nonlinear second order ODEs (Mechanical vibrations)

1. Aug 13, 2012

Bartok

I was wondering what the common methods for solving such a system are:

$2 m \ddot{x} - m l \ddot{\theta}\,\theta + k x = 0$
$m l^{2} \ddot{\theta} - m l \ddot{x}\,\theta + m g l \theta = 0$

2. Aug 15, 2012

bigfooted

You have probably already used the small-angle assumption $\sin(\theta)\approx\theta$.
Since $\theta$ is small, you can also neglect products of small quantities, such as $\theta^{2}$ and $\theta\cdot\ddot{\theta}$.
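If you drop those products ($\ddot{\theta}\,\theta$ in the first equation, $\ddot{x}\,\theta$ in the second — this is my reading of the suggestion), the system decouples:

$2 m \ddot{x} + k x = 0$
$m l^{2} \ddot{\theta} + m g l \theta = 0$

i.e. two independent harmonic oscillators with natural frequencies $\omega_x = \sqrt{k/(2m)}$ and $\omega_\theta = \sqrt{g/l}$.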

3. Aug 15, 2012

chiro

Hey Bartok and welcome to the forums.

Do you need an analytic solution? Even if this is required, it would be beneficial if you simulated the system of equations using a numerical integration scheme. Have you come across these?

4. Aug 15, 2012

Bartok

Yes, I have used the $\sin(\theta)\approx\theta$ assumption, but I don't think the $\theta\cdot\ddot{\theta}$ term can be neglected without prior information about the order of $\ddot{\theta}$.

Thank you.

Not necessarily looking for an analytic solution; I just wanted to know about the available options. Got a deadline coming up and gotta solve this somehow!
Could you explain a bit about the numerical approach to a system like this, or link me to some resources? I derived it in a nonlinear vibrations problem, which I'm quite new to.

5. Aug 15, 2012

chiro

The basic idea behind most of these schemes is that they match more terms of the Taylor series expansion of the solution. The scheme doesn't literally evaluate the Taylor series, but it is constructed so that the low-order error terms cancel, leaving only a higher-order remainder. For example, an error of O(h^5) means that all powers of the step size h below the fifth have been handled by the scheme, so the leading error term scales with the fifth power of h (this is a one-dimensional example, but the same idea holds in more dimensions).
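You can see the order of a scheme directly in numerical experiments. A minimal Python sketch (using the standard test equation $y' = y$, $y(0) = 1$, whose exact solution is $e^t$, chosen purely for illustration): halving the step size should halve the global error of forward Euler (first order) but shrink the error of classical Runge-Kutta by roughly $2^4 = 16$ (fourth order).

```python
import math

def euler_step(f, t, y, h):
    # One forward-Euler step: local error O(h^2), global error O(h).
    return y + h * f(t, y)

def rk4_step(f, t, y, h):
    # One classical 4th-order Runge-Kutta step: global error O(h^4).
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def integrate(step, f, y0, t0, t1, n):
    # Integrate y' = f(t, y) from t0 to t1 in n equal steps.
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = step(f, t, y, h)
        t += h
    return y

f = lambda t, y: y        # test equation y' = y, exact solution e^t
exact = math.e            # exact value of y(1)

for n in (100, 200):
    e_err = abs(integrate(euler_step, f, 1.0, 0.0, 1.0, n) - exact)
    r_err = abs(integrate(rk4_step, f, 1.0, 0.0, 1.0, n) - exact)
    print(f"n={n}: Euler error {e_err:.2e}, RK4 error {r_err:.2e}")
```

Doubling n (halving h) reduces the Euler error by about a factor of 2 and the RK4 error by about a factor of 16, which is exactly what the order notation predicts.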

The Taylor series gives a way to represent a function as a series in terms of its derivatives evaluated at a single point: basically, if we know every derivative at one point, we can represent the entire function.

So what a lot of schemes do is evaluate enough derivative terms to reach a specific order. The Euler method evaluates one derivative term per step, while methods like Runge-Kutta evaluate more (and thus cost more computation time). The usual tradeoff is: more accurate and stable scheme = more computation time.

But it has to be done to get specific guarantees on the errors: the goal of numerical analysis is to devise schemes with known properties, in particular local and global error bounds. These help analyze the accuracy and stability of the scheme with respect to classes of DEs.

Here are some links to some methods. The Euler method is very simple, but I would look at the other ones first, considering you have a nonlinear DE:

http://en.wikipedia.org/wiki/Euler_method

http://en.wikipedia.org/wiki/Runge–Kutta_methods
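To make this concrete for the system in the first post: rewrite it as four first-order equations in the state $(x, \dot{x}, \theta, \dot{\theta})$, solve the two original equations algebraically for $\ddot{x}$ and $\ddot{\theta}$, and step the state forward with RK4. A minimal Python sketch — the parameter values and initial conditions here are assumed purely for illustration, not taken from the original problem:

```python
# Illustrative parameters (assumed, not from the original problem).
m, l, k, g = 1.0, 1.0, 10.0, 9.81

def derivs(t, s):
    # State s = (x, xdot, theta, thetadot).
    # From the second equation:  thetaddot = theta*(xddot - g)/l.
    # Substituting into the first and solving for xddot (valid while
    # theta^2 != 2, which holds for small angles):
    #   xddot = -(k*x + m*g*theta^2) / (m*(2 - theta^2))
    x, xd, th, thd = s
    xdd = -(k * x + m * g * th**2) / (m * (2.0 - th**2))
    thdd = th * (xdd - g) / l
    return (xd, xdd, thd, thdd)

def rk4_step(f, t, s, h):
    # Classical 4th-order Runge-Kutta step on a tuple-valued state.
    k1 = f(t, s)
    k2 = f(t + h/2, tuple(si + h/2 * ki for si, ki in zip(s, k1)))
    k3 = f(t + h/2, tuple(si + h/2 * ki for si, ki in zip(s, k2)))
    k4 = f(t + h,   tuple(si + h * ki for si, ki in zip(s, k3)))
    return tuple(si + h/6 * (a + 2*b + 2*c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

# Small initial displacement and angle, integrated for 10 seconds.
s, t, h = (0.1, 0.0, 0.05, 0.0), 0.0, 1e-3
for _ in range(10000):
    s = rk4_step(derivs, t, s, h)
    t += h
print(s)  # final state (x, xdot, theta, thetadot)
```

With small initial values, the trajectory stays close to the two decoupled linear oscillators from the small-angle approximation, which is a useful sanity check on both the derivation and the integrator.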

6. Aug 15, 2012

Bartok

Thanks chiro. I don't have much numerical experience and I was under the impression that the well-known methods are applicable only to linear DEs.

7. Aug 15, 2012

chiro

Numerical methods are mainly applied to nonlinear DEs, since those are the ones where an analytic solution usually can't be found. That's the power of numerical methods: as long as the scheme is stable and accurate enough, it doesn't matter what system of DEs you're solving.

In fact, there are situations where numerical schemes are used even when an analytic solution exists, because evaluating the numerical scheme can be computationally cheaper than evaluating the analytic solution (it sounds crazy, but these situations do exist).