MHB Solving this system of equations.

jasonc
I have a personal project I'm working on that involves calibrating some values from a sensor input. So far I've been doing the calibrations in a very tedious manner but I'd like to write a program to solve the calibration for me. Very long story very short, this basically boils down to the following:

I have this equation:

\[v=\frac{-gG-bB+r-C}{-gG-bB+a-C}\]

I have a set of many v, r, g, b, and a values (v is either 0 or 1, a is a known constant, and r, g, b vary between 0 and 1), and I want to find the best-fit values for G, B, and C.

Is this possible? I don't know much about linear equations but I don't think this is one. Can anybody recommend any methods for solving these?

Thanks!
J
 


Hi jasonc, :)

What you have is a system of equations which you have to solve for \(G,\,B,\mbox{ and }C\).

Whenever \(v=1\),

\[1=\frac{-gG-bB+r-C}{-gG-bB+a-C}\Rightarrow r=a\]

Hence the data you have should satisfy \(r=a\) whenever \(v=1\); otherwise the system is not solvable.

Whenever \(v=0\),

\[0=\frac{-gG-bB+r-C}{-gG-bB+a-C}\Rightarrow -gG-bB+r-C=0\]

When you plug in values for \(r,\,g,\,b\) you get a set of linear equations in the three unknowns \(G,\,B,\mbox{ and }C\). If the equations are inconsistent (which typically happens with noisy data once there are more than three of them) the system has no exact solution. If there are exactly three linearly independent, consistent equations the system has a unique solution. If there are fewer than three linearly independent equations the system has infinitely many solutions.
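
A quick NumPy sketch of this check (the array names g, b, r and the sample values below are purely illustrative; the r values were generated from G = 0.5, B = 0.2, C = 0.1 so the example system is consistent):

```python
import numpy as np

# Illustrative data for the rows where v = 0 (values made up).
g = np.array([0.2, 0.5, 0.9, 0.4])
b = np.array([0.7, 0.1, 0.3, 0.6])
r = np.array([0.34, 0.37, 0.61, 0.42])

# Each v = 0 row gives the linear equation  g*G + b*B + C = r.
A = np.column_stack([g, b, np.ones_like(g)])  # columns multiply G, B, C

rank_A = np.linalg.matrix_rank(A)
rank_aug = np.linalg.matrix_rank(np.column_stack([A, r]))

if rank_aug > rank_A:
    print("Inconsistent: no exact solution, so fall back to a best-fit approach.")
elif rank_A < 3:
    print("Fewer than three independent equations: infinitely many solutions.")
else:
    # Unique solution; lstsq also copes with the overdetermined-but-consistent case.
    (G, B, C), *_ = np.linalg.lstsq(A, r, rcond=None)
    print(G, B, C)  # ~0.5, 0.2, 0.1 for the data above
```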

Kind Regards,
Sudharaka.
 

You want a non-linear optimisation tool that finds the minimum of:

\[ \operatorname{Ob}(G,B,C)=\sum_i \left|v_i-\frac{-g_iG-b_iB+r_i-C}{-g_iG-b_iB+a-C}\right|^{\alpha} \]

Usual choices of \(\alpha\) are 1 and 2 (2 is usually better, since solvers that assume smoothness work better with it).

I would initially suggest you look at the (non-linear) solvers that ship with Excel and/or Gnumeric.
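
If you would rather script it than use a spreadsheet solver, here is a minimal SciPy sketch of the same idea with \(\alpha=2\) (the value of a and the v, r, g, b arrays below are made-up placeholders for your own data):

```python
import numpy as np
from scipy.optimize import minimize

a = 0.9  # the known constant (illustrative value only)

# Illustrative data, one entry per sample.
v = np.array([1, 0, 0, 1, 0], dtype=float)
r = np.array([0.90, 0.40, 0.55, 0.90, 0.30])
g = np.array([0.20, 0.50, 0.70, 0.10, 0.40])
b = np.array([0.60, 0.30, 0.20, 0.80, 0.10])

def objective(params, alpha=2):
    """Sum over samples of |v_i - model_i|**alpha."""
    G, B, C = params
    # Assumes the denominator stays away from zero for sensible parameter values.
    model = (-g * G - b * B + r - C) / (-g * G - b * B + a - C)
    return np.sum(np.abs(v - model) ** alpha)

result = minimize(objective, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
G, B, C = result.x
print(G, B, C)
```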

CB
 
I'm not so sure it's nonlinear. Consider that you can re-arrange the equation thus:

$$va-r=(v-1)gG+(v-1)bB+(v-1)C=(v-1)(gG+bB+C).$$

Thus, you could try minimizing the difference

$$Ob(G,B,C)=\sum_{i}\left(v_{i}a-r_{i}+(1-v_{i})(g_{i}G+b_{i}B+C)\right)^{2}.$$

You might even be able to derive the explicit formulas you need by using the standard calculus treatment of setting the derivatives

$$\frac{\partial Ob}{\partial G}=\frac{\partial Ob}{\partial B}=\frac{\partial Ob}{\partial C}=0.$$
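
A minimal NumPy sketch of that linear least squares fit (array names and sample values are illustrative only); lstsq returns the \(G,\,B,\,C\) at which those partial derivatives vanish:

```python
import numpy as np

a = 0.9  # the known constant (illustrative value only)

# Illustrative data, one entry per sample.
v = np.array([1, 0, 0, 1, 0], dtype=float)
r = np.array([0.90, 0.40, 0.55, 0.90, 0.30])
g = np.array([0.20, 0.50, 0.70, 0.10, 0.40])
b = np.array([0.60, 0.30, 0.20, 0.80, 0.10])

# Residual_i = v_i*a - r_i + (1 - v_i)*(g_i*G + b_i*B + C) is linear in (G, B, C),
# so minimising the sum of squares is an ordinary linear least squares problem:
#   (1 - v_i) * (g_i*G + b_i*B + C)  ~  r_i - v_i*a
w = 1.0 - v
A = np.column_stack([w * g, w * b, w])   # coefficients of G, B, C
y = r - v * a                            # constant part moved to the right-hand side

(G, B, C), *_ = np.linalg.lstsq(A, y, rcond=None)
print(G, B, C)
```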
 
Ackbach said:
I'm not so sure it's nonlinear.

I'm not sure I said it was intrinsically non-linear, only that it could be solved fairly easily with a non-linear optimisation tool. But in the sense of mathematical programming it is non-linear.

In terms of regression it can be reduced to a linear least squares problem, but are we sure that least squares is the desired optimality condition? Also, you are no longer minimising the sum of squared residuals between the variable of interest and the model, so you have lost any obvious sense in which this is a good fit; that is, you have introduced an optimality condition different from minimising the sum of some strictly increasing function of the absolute residuals (or some even more general function of the residuals).

If you want to go down this route either you need to prove that the two solutions are the same, or invoke the principle of "good enough for government purposes".

CB

PS I know that I did not leave the optimality condition fully general, but least squares and least absolute value are the two most popular optimality conditions.

PPS Another approach is to treat this as a probabilistic problem where the RHS is the probability of v being 1; then we could go down the route of a maximum likelihood (or maximum posterior probability) estimator for the model parameters - but we would still probably end up with a numerical non-linear least squares problem to solve. See my warship battle-damage survival probability paper for an example of this approach.
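
As a rough sketch of that probabilistic reading (data values made up; the model ratio is simply clipped into (0, 1) to keep the logs finite, which a more careful treatment would handle properly):

```python
import numpy as np
from scipy.optimize import minimize

a = 0.9  # the known constant (illustrative value only)

# Illustrative data, one entry per sample.
v = np.array([1, 0, 0, 1, 0], dtype=float)
r = np.array([0.90, 0.40, 0.55, 0.90, 0.30])
g = np.array([0.20, 0.50, 0.70, 0.10, 0.40])
b = np.array([0.60, 0.30, 0.20, 0.80, 0.10])

def neg_log_likelihood(params):
    """Bernoulli negative log-likelihood, treating the model ratio as P(v = 1)."""
    G, B, C = params
    p = (-g * G - b * B + r - C) / (-g * G - b * B + a - C)
    p = np.clip(p, 1e-9, 1.0 - 1e-9)  # keep p strictly inside (0, 1)
    return -np.sum(v * np.log(p) + (1.0 - v) * np.log(1.0 - p))

result = minimize(neg_log_likelihood, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
print(result.x)  # estimated G, B, C
```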
 