Solve for x, a and b in matrix equation aAx + bBx = C

  • Thread starter: Panteren
  • Tags: Matrix
Panteren
Hello everybody

I recently encountered the following equation:
$$C(t) = a\int_0^t x(\tau)\,d\tau + b\int_0^t \int_0^\tau x(\tau')\,d\tau'\,d\tau,$$
where C, a, b and x are all greater than or equal to zero. C and x are vectors, in my case around 3500 elements long, and a and b are constants.

If we take sufficiently small steps, we can replace the integrals with summations:
$$C(t) = a\sum_0^t x + b\sum_0^t \sum_0^t x.$$
Such a summation can also be written as a matrix of the form [1 0 0; 1 1 0; 1 1 1] etc., using Matlab notation, and [1 0 0; 2 1 0; 3 2 1] etc. for the double summation.
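In Matlab these matrices can be built, for example, as:

    N = 3500;             % length of the vectors C and x
    A = tril(ones(N));    % single summation: [1 0 0; 1 1 0; 1 1 1; ...]
    B = A * A;            % double summation: [1 0 0; 2 1 0; 3 2 1; ...]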

Now we have a system
$$C = aAx + bBx,$$
where C and x are N×1 vectors, a and b are constants, and A and B are N×N matrices. I wish to solve it in some least-norm sense for x, a and b, with the constraint that x, a and b are all greater than or equal to zero.

I have tried to solve the original integral equation using some of the nonlinear optimization tools in Matlab, with poor results. I hoped it would be easier to solve when rewritten as a linear system, but I cannot see how.

Any suggestions would be most welcome.
 
I suppose τ' is a different τ? Like saying τ1 and τ2?

If so, then the inner integral is equal to x(τ2), which yields:

$$C(t) = a\sum_0^t x + b\sum_0^t x(\tau_2)$$

Edit: Never mind, I thought it was x'(τ')
 
Ok, I found what was bugging me:

You replaced the double integral with two sums from 0 to t. However, the inner sum should run from τ' = 0 to τ' = τ, and the outer sum from τ = 0 to τ = t.
 
meldraft said:
Ok, I found what was bugging me:

You replaced the double integral with two sums from 0 to t. However, the inner sum should run from τ' = 0 to τ' = τ, and the outer sum from τ = 0 to τ = t.

Thank you. You are correct. That is what I meant. Equivalent to applying 'cumsum' twice in Matlab.
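For example (illustrative names only):

    x = rand(N, 1);                    % any test vector
    A = tril(ones(N));                 % single-summation matrix
    % cumsum(x) equals A*x, so cumsum(cumsum(x)) equals (A*A)*x:
    norm(cumsum(cumsum(x)) - A*A*x)    % ~0 up to round-off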
 
Welcome to PF, Panteren! :smile:

I take it your system is actually the following?
$$C(t_i) = aAx(t_i) + bBx(t_i)$$

In that case the solution in a least-norm-sense is given by a least-squares solution.

What you'd do is minimize ##\sum_i (C(t_i) - aAx(t_i) - bBx(t_i))^2##, which is the sum-squared deviation for given a and b.
To solve it, you'd calculate the partial derivatives with respect to a and to b and set them to zero.

You'll find the system of equations:
$$a \sum_i (Ax(t_i))^2 + b \sum_i Ax(t_i) \cdot Bx(t_i) = \sum_i C(t_i) \cdot Ax(t_i)$$
$$a \sum_i Ax(t_i) \cdot Bx(t_i) + b \sum_i (Bx(t_i))^2 = \sum_i C(t_i) \cdot Bx(t_i)$$

Its solution (for a and b) appears to be what you want.
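In Matlab, solving this 2-by-2 system might look like the following sketch (assuming A, B, x, and C are already defined):

    u = A * x;                        % the vector Ax
    v = B * x;                        % the vector Bx
    M = [u'*u, u'*v; u'*v, v'*v];     % normal-equation matrix
    rhs = [C'*u; C'*v];               % right-hand side
    ab = M \ rhs;                     % ab(1) = a, ab(2) = b

Note that this does not enforce a, b ≥ 0; that constraint would still need to be handled separately.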
 
I like Serena said:
Welcome to PF, Panteren! :smile:

Thank you. :smile: I have read the forums for quite some time. Lots of interesting stuff and insight to be found.


I take it your system is actually the following?
$$C(t_i) = aAx(t_i) + bBx(t_i)$$
Yes, but I am not sure I understand the distinction between that and what I wrote? Please elaborate on what I have misunderstood or stated unclearly :-/

You'll find the system of equations:
$$a \sum_i (Ax(t_i))^2 + b \sum_i Ax(t_i) \cdot Bx(t_i) = \sum_i C(t_i) \cdot Ax(t_i)$$
$$a \sum_i Ax(t_i) \cdot Bx(t_i) + b \sum_i (Bx(t_i))^2 = \sum_i C(t_i) \cdot Bx(t_i)$$

Its solution (for a and b) appears to be what you want.

Thank you, but the problem is that x is also unknown. Perhaps it is obvious how to get that in addition to a and b from the system of equations, but I do not follow :-(
If I had x, I could just turn it into a standard linear regression problem, and likewise if I had the constants a and b, but when I only have A, B, C and the non-negativity constraints on a, b and x ... ?
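The best I can think of along those lines is to alternate between the two regressions, e.g. with Matlab's lsqnonneg, though this is just an untested sketch:

    ab = [1; 1];                                % initial guess for [a; b]
    for k = 1:50
        x  = lsqnonneg(ab(1)*A + ab(2)*B, C);   % fix a, b; solve for x >= 0
        ab = lsqnonneg([A*x, B*x], C);          % fix x; solve for [a; b] >= 0
    end

One catch: scaling a and b by some s and x by 1/s leaves aAx + bBx unchanged, so some normalization of x (or of a and b) would be needed to pin down a unique solution.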
 