Solving a set of quadratic equations

In summary, the conversation discusses solving a set of quadratic equations of the form ax^2 - bxy + cy^2 = d, where a, b, c, d are given constants. The original poster first tried a least-squares approach, but the resulting values were not feasible. Suggestions include Buchberger's algorithm (Gröbner bases), diagonalizing the quadratic forms with software such as Maple, Wu's elimination method, and iterative numerical methods. The conversation also discusses whether the coefficients in the equations are exact or only approximate.
  • #1
biomedicalhat
Hey,

I have a set of quadratic equations in the form of:

ax^2 - bxy + cy^2 = d
ax^2 - bxz + cz^2 = d
ay^2 - byz + cz^2 = d

where a,b,c,d are given constants.

I am really stuck trying to figure out the values of x,y,z :confused:

My previous attempt was to solve it as a least-squares problem. It went something like this (I plugged in nice numbers):

Put them in matrix form:

Ax = d

A = [100 200 0 -200 0 0;
100 0 500 0 -200 0;
0 200 500 0 0 -600];

x = [x^2
y^2
z^2
xy
xz
yz]

d = [25
100
25]

Form a square system (normal equations):

Cx = b

where C = A'*A and b = A'*d (A' denotes the transpose of A)

Then I solve for x:

x = C^-1*b

The expected values are supposed to be x = y = z = 0.5.
Unfortunately, the values I get are not feasible (negative values for squared variables).
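
A minimal NumPy sketch of the lifted least-squares attempt described above (same numbers; np.linalg.lstsq is used because C = A'*A is 6x6 with rank at most 3 and therefore cannot be inverted):

Code:
import numpy as np

# Coefficient matrix over the lifted variables [x^2, y^2, z^2, xy, xz, yz]
A = np.array([[100.0, 200.0,   0.0, -200.0,    0.0,    0.0],
              [100.0,   0.0, 500.0,    0.0, -200.0,    0.0],
              [  0.0, 200.0, 500.0,    0.0,    0.0, -600.0]])
d = np.array([25.0, 100.0, 25.0])

# Minimum-norm least-squares solution of the underdetermined linear system
v, residuals, rank, sv = np.linalg.lstsq(A, d, rcond=None)
print("lifted solution:", v, "rank of A:", rank)

# The linear solve ignores the nonlinear coupling between the lifted
# variables (e.g. feasibility requires v[3]**2 == v[0]*v[1]), so v need
# not correspond to any real x, y, z, and the "squared" entries can even
# come out negative.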

I would really appreciate any hint on what is the best method to solve this problem!

Thank you in advance! :)
 
  • #2
Give us a hint about where you encountered this problem. If it's from a course, which one? Or is it from "real life"? Or a question from a mathematics competition?

A systematic way to solve simultaneous polynomial equations is to use a procedure known as Buchberger's Algorithm, but that might be overkill.
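
For a concrete feel of what this looks like in practice, a minimal SymPy sketch applied to the nice-number test system from post #1 (SymPy's groebner routine uses a Buchberger-type algorithm by default; illustrative only):

Code:
from sympy import symbols, groebner, solve

x, y, z = symbols('x y z')

# Nice-number test system from post #1 (known solution x = y = z = 1/2)
polys = [100*x**2 - 200*x*y + 200*y**2 - 25,
         100*x**2 - 200*x*z + 500*z**2 - 100,
         200*y**2 - 600*y*z + 500*z**2 - 25]

# With lex ordering, if the system has finitely many solutions the basis
# contains a polynomial in z alone, so the system can be solved by
# back-substitution.
G = groebner(polys, x, y, z, order='lex')
print(G)

# For a system this small, SymPy can also attempt a direct solve.
print(solve(polys, [x, y, z]))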
 
  • #3
It is a real-life problem. I am a graduate engineering student, and I am trying to solve it as part of my research work.

Thanks for suggesting Buchberger's Algorithm :)

By overkill, do you mean it is computationally expensive, or does it need extensive manual formulation labor?

Since I am certainly not the expert here, do you recommend any literature or software/code that could help me implement it?
 
  • #4
By "overkill", I mean that it is hard to find a down to Earth exposition of the algorithm that doesn't get into the abstractions of ring theory, ideals, etc. However, many computer algebra packages now include this algorithm. I once read something by Bruno Buchberger himself (it might have been his thesis) that had many simple examples using ordinary polynomials. I don't remember the algorithm well. It is connected with a topic called Grobner bases (an umlaut over the "o").

What you get from the algorithm is a set of simpler polynomials that provide a convenient method for generating all possible solutions for the equations. Simultaneous polynomial equations in several variables may have infinitely many solutions.

I'll look for some concrete recommendations. You can probably find them as fast as I can by Googling.
 
  • #5
biomedicalhat said:
I have a set of quadratic equations in the form of:

ax^2 - bxy + cy^2 = d
ax^2 - bxz + cz^2 = d
ay^2 - byz + cz^2 = d

where a, b, c, d are given constants. [...] I would really appreciate any hint on what is the best method to solve this problem!

Hi there, do you need an explicit solution/framework/algorithm, or will anything that can give the answer do?

Have you considered using something like Maple to turn your quadratic forms into new quadratic forms without the middle terms? Using that, you can turn your original forms into ones of the form eu^2 + fv^2 = g. With these kinds of relationships, you should be able to solve your system.

Maybe something along the lines of this?

http://www.maplesoft.com/demo/adoption/Quadratic_Forms.aspx
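
As a rough illustration of this idea, a minimal NumPy sketch that removes the cross term of a single form ax^2 - bxy + cy^2 by diagonalizing its symmetric matrix (the helper name diagonalize_form is only for illustration):

Code:
import numpy as np

def diagonalize_form(a, b, c):
    """For a*x**2 - b*x*y + c*y**2, return eigenvalues (e, f) and the rotation Q
    such that with [u, v] = Q.T @ [x, y] the form becomes e*u**2 + f*v**2."""
    M = np.array([[a, -b / 2.0],
                  [-b / 2.0, c]])      # symmetric matrix of the quadratic form
    eigvals, Q = np.linalg.eigh(M)     # M = Q @ diag(eigvals) @ Q.T
    return eigvals, Q

# Example with the coefficients of the first nice-number test equation
(e, f), Q = diagonalize_form(100.0, 200.0, 200.0)
print(e, f)   # the transformed equation reads e*u**2 + f*v**2 = 25

Note that each equation yields its own rotation Q, so the (u, v) coordinates differ from one equation to the next, which is the difficulty raised later in the thread.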
 
  • #6
Subtract the first two equations:

ax^2 -bxy + cy^2 = d
ax^2 -bxz + cz^2 = d
-b(xy - xz) + c(y^2 - z^2) = 0
-bx(y - z) + c(y + z)(y - z) = 0

So either y = z or -bx + cy + cz = 0

The other two pairs of equations will give you similar equations.

Solutions with x = y = z are easy to find if they exist at all, since your three equations are then identical.
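
A quick SymPy check of that factorization (illustrative sketch):

Code:
from sympy import symbols, factor

a, b, c, x, y, z = symbols('a b c x y z')

diff = (a*x**2 - b*x*y + c*y**2) - (a*x**2 - b*x*z + c*z**2)
# Should factor as (y - z) times the linear factor -b*x + c*y + c*z (up to sign)
print(factor(diff))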
 
  • #7
biomedicalhat,

One thing you should clarify is whether the coefficients in your equations are exact or approximate. For example, if they are integers representing numbers of patients or something like that, there is no doubt about them. But if they are from measurements of weights or concentrations then they are approximate.

Algebraic techniques like Buchberger's algorithm rely on doing things that exactly cancel out terms in equations. If the coefficients aren't exactly the correct numbers then it's hard to analyze what error is being made.
 
  • #8
chiro said:
Have you considered using something like Maple to turn your quadratic forms into new quadratic forms without the middle terms? [...] With these kinds of relationships, you should be able to solve your system.

My final goal is to implement the solution method within my own code. The code currently generates the quadratic equations; it then needs to solve them and take the results for further processing.

I used Matlab to implement the method described in your Maple video. I was able to simplify each quadratic equation into a form without cross terms, as you mentioned. I also solved the equation and got "a" solution for it. Unfortunately, that solution would not satisfy the constraints of the other equations.

Is there a way to solve the simplified equations simultaneously? The way I understand it, every simplified equation has two unique variables Ui, Vi, which are the original variables multiplied by the eigenvector matrix of that equation's Hessian.
 
  • #9
AlephZero said:
Subtract the first two equations: [...] So either y = z or -bx + cy + cz = 0. The other two pairs of equations will give you similar equations.

AlephZero, your answer makes sense, but I made a mistake by not mentioning that the a, b, c, d constants are different for every equation, i.e.:

a1x^2 - b1xy + c1y^2 = d1
a2x^2 - b2xz + c2z^2 = d2

so subtracting the equations probably wouldn't cancel out terms.
 
  • #10
Stephen Tashi said:
One thing you should clarify is whether the coefficients in your equations are exact or approximate. [...] If the coefficients aren't exactly the correct numbers then it's hard to analyze what error is being made.

Stephen, I got these equations from Euclidean distance relationships. A typical equation looks like this:

10520.364253 x^2 -21072.026517 xy + 10551.148164 y^2 = 52.61719

Do you think it is still feasible to use Buchberger's algorithm with floating-point errors? Maybe by introducing an error tolerance?
 
  • #11
biomedicalhat said:
Do you think it is still feasible to use Buchberger's algorithm with floating-point errors? Maybe by introducing an error tolerance?

I'll have to investigate that question. As mathematics history goes, Buchberger's algorithm is a modern development. There is probably much current research going on about its applications. From what I know about the original version of it, I would say "No", it would not be reliable for your problem. However, if you are using a numerical technique that requires some initial guess for a solution, it could provide it.

Another thing we need to clarify is whether you need to publish a solution method or computer program that works for such equations or whether you only need to solve a finite set of them and utilize those answers in your work.

Iteration (essentially iterative guessing), beginning with an initial guess for a solution, works surprisingly well in many problems involving simultaneous equations.
 
  • #12
Stephen Tashi said:
[...] Another thing we need to clarify is whether you need to publish a solution method or computer program that works for such equations or whether you only need to solve a finite set of them and utilize those answers in your work.

Iteration (essentially iterative guessing), beginning with an initial guess for a solution, works surprisingly well in many problems involving simultaneous equations.

I basically want to solve the equations to get the variable values. Each variable represents the parameter t of a parametric line equation, so they should all lie between 0 and 1. Furthermore, physical constraints narrow the range to between 0.6 and 0.95, and the number of equations should guarantee a single unique solution to the set, even though individual equations may have multiple solutions.

I tried out several optimization techniques, but the search spaces I could come up with were too irregular to reach my global minimum, and heuristic methods just take too much computational time.

Iteration seems to be a good idea. Do you think a formulation can be achieved that ensures convergence?
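
One way to fold those bounds directly into a solver is a bounded nonlinear least-squares fit of the three residuals with scipy.optimize.least_squares; an illustrative sketch, with the nice-number test values standing in for the real per-equation constants:

Code:
import numpy as np
from scipy.optimize import least_squares

# Per-equation constants a_i, b_i, c_i, d_i (placeholders: nice-number test values)
a = np.array([100.0, 100.0, 200.0])
b = np.array([200.0, 200.0, 600.0])
c = np.array([200.0, 500.0, 500.0])
d = np.array([ 25.0, 100.0,  25.0])

def residuals(v):
    x, y, z = v
    return [a[0]*x**2 - b[0]*x*y + c[0]*y**2 - d[0],
            a[1]*x**2 - b[1]*x*z + c[1]*z**2 - d[1],
            a[2]*y**2 - b[2]*y*z + c[2]*z**2 - d[2]]

# 0-1 bounds for the line parameters; tighten to (0.6, 0.95) for the
# physically constrained problem.
result = least_squares(residuals, x0=[0.7, 0.7, 0.7], bounds=(0.0, 1.0))
print(result.x, result.cost)

Like any local method, it can still stall in a non-global minimum if the residual landscape is as irregular as described above.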
 
  • #13
biomedicalhat said:
AlephZero, your answer makes sense, but I made a mistake by not mentioning that the a, b, c, d constants are different for every equation [...] so subtracting the equations probably wouldn't cancel out terms.

Oh well. I can usually read equations OK, but unfortunately I can't read minds :smile:
 
  • #14
The Wu Elimination Method?

Sometimes 10 minutes of Googling can make anyone look like a genius - or an idiot.

Suppose I said:

"The Wu Elimination Method! Well, of course, that's what you need. We Westerners are becoming so backward."

Which would I be?

http://www.scichina.com:8081/sciAe/fileup/PDF/05ya1260.pdf
http://www.wseas.us/e-library/conferences/2008/rhodes/istasc/istasc39.pdf

What the heck is that about?
 
  • #15
biomedicalhat said:
I used Matlab to implement the method described in your Maple video. I was able to simplify each quadratic equation into a form without cross terms, as you mentioned. I also solved the equation and got "a" solution for it. Unfortunately, that solution would not satisfy the constraints of the other equations.

Is there a way to solve the simplified equations simultaneously? [...]

So are you saying that when you transformed your quadratic forms into ones without middle terms and obtained "a solution" for each, you didn't get any unique solution (or even a set of solutions) satisfying all of your original relationships?
 
  • #16
biomedicalhat,

Let us know your progress on this problem.

I myself am getting curious about the Wu Elimination method using interval arithmetic.
 
  • #17
I tried some GIAC code that I found online for the basic Wu elimination. It worked fine for "nice" input polynomials, like those found in papers. As for my problem, it took an extensive amount of time, and the output equations were infeasible, in that they would lead to complex results. When I tried it on the test equations (with the known expected result of 0.5), it actually succeeded. Maybe it failed on my problem because the input coefficients were not integers but high-precision floats. According to the literature, this should be handled by the interval method. When I tried to plug in variables instead of the numerical coefficients, it failed altogether after a long processing time.

I think I will need to implement Wu's interval method myself to get that done. Unfortunately, it doesn't seem to be a very simple piece of code to write, but it doesn't seem possible to solve the system manually either...
 
  • #18
Chiro,

I meant that I could get "one" solution for every individual equation. This solution would not satisfy all the other equations' constraints.
 
  • #19
biomedicalhat said:
[...] Each variable represents the parameter t of a parametric line equation, so they should all lie between 0 and 1. Furthermore, physical constraints narrow the range to between 0.6 and 0.95 [...]

Iteration seems to be a good idea. Do you think a formulation can be achieved that ensures convergence?

You have a set of simultaneous nonlinear equations. If you don't mind an iterative numerical technique, what about good old Newton-Raphson? That should work well given that you know the solution has to be within a relatively narrow range of values.
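
A minimal sketch of that suggestion with a hand-coded Jacobian, assuming the general per-equation constants a_i, b_i, c_i, d_i from post #9 (the values at the bottom are placeholders):

Code:
import numpy as np

def newton_raphson(a, b, c, d, v0, tol=1e-10, max_iter=50):
    """Newton-Raphson for
         a[0]*x^2 - b[0]*x*y + c[0]*y^2 = d[0]
         a[1]*x^2 - b[1]*x*z + c[1]*z^2 = d[1]
         a[2]*y^2 - b[2]*y*z + c[2]*z^2 = d[2]
    starting from the guess v0 = (x, y, z)."""
    v = np.asarray(v0, dtype=float)
    for _ in range(max_iter):
        x, y, z = v
        F = np.array([a[0]*x**2 - b[0]*x*y + c[0]*y**2 - d[0],
                      a[1]*x**2 - b[1]*x*z + c[1]*z**2 - d[1],
                      a[2]*y**2 - b[2]*y*z + c[2]*z**2 - d[2]])
        if np.max(np.abs(F)) < tol:
            break
        # Jacobian of F with respect to (x, y, z)
        J = np.array([[2*a[0]*x - b[0]*y, -b[0]*x + 2*c[0]*y, 0.0],
                      [2*a[1]*x - b[1]*z, 0.0, -b[1]*x + 2*c[1]*z],
                      [0.0, 2*a[2]*y - b[2]*z, -b[2]*y + 2*c[2]*z]])
        # lstsq tolerates a nearly singular Jacobian better than a plain solve
        v = v + np.linalg.lstsq(J, -F, rcond=None)[0]
    return v

# Placeholder coefficients; start the iteration inside the expected range.
a = [100.0, 100.0, 200.0]; b = [200.0, 200.0, 600.0]
c = [200.0, 500.0, 500.0]; d = [25.0, 100.0, 25.0]
print(newton_raphson(a, b, c, d, v0=[0.8, 0.8, 0.8]))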
 
  • #20
Hotvette,
That might work if the search space didn't contain multiple minima, which are introduced by the different functions, right?
 
  • #21
biomedicalhat said:
That might work if the search space didn't contain multiple minima, which are introduced by the different functions, right?

Correct, if there are indeed multiple minima, NR probably isn't the best method to find the global minimum.

Wait a minute. Unless I read it wrong, the problem as originally stated isn't a minimization problem. The objective was to find the zeros of a set of (nonlinear) equations. Perhaps the problem has morphed into something else (I didn't read every post in detail).
 

1. What is a set of quadratic equations?

A set of quadratic equations is a group of equations that involve variables raised to the power of two (quadratic terms). These equations can be written in the form of ax^2 + bx + c = 0, where a, b, and c are constants and x is the variable.

2. How do you solve a set of quadratic equations?

To solve a set of quadratic equations, you can use different methods such as factoring, completing the square, or using the quadratic formula. The method used depends on the form of the equations and personal preference.

3. What is the quadratic formula?

The quadratic formula is a mathematical formula used to solve quadratic equations. It is written as x = (-b ± √(b^2 - 4ac)) / (2a), where a, b, and c are the constants in the equation ax^2 + bx + c = 0. This formula gives the values of x that satisfy the equation.
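
A small Python illustration of the formula, using cmath so that the complex case mentioned in the next answer is handled as well:

Code:
import cmath

def quadratic_roots(a, b, c):
    """Roots of a*x**2 + b*x + c = 0 via the quadratic formula."""
    disc = cmath.sqrt(b**2 - 4*a*c)
    return (-b + disc) / (2*a), (-b - disc) / (2*a)

print(quadratic_roots(1, -3, 2))   # roots of x^2 - 3x + 2: (2+0j) and (1+0j)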

4. Can all quadratic equations be solved?

Yes, all quadratic equations can be solved using the quadratic formula. However, some equations have complex solutions (involving imaginary numbers) rather than real solutions.

5. Why is solving quadratic equations important?

Solving quadratic equations is important in many fields of science, such as physics, engineering, and economics. It allows us to find the values of variables that satisfy a given equation, which is essential in understanding and predicting real-world phenomena.
