
Remove Known Solution from System of Polynomials?

  1. Jul 29, 2013 #1

    I'm here because I lack experience and education in the subject of systems of polynomial equations (univariate and multivariate). I've also recently found a need to work with them. If this is the wrong subforum, then I apologize for my ignorance and would be grateful if a moderator would move it into the correct section.

    Now, onto the subject.

I have a system of multivariate polynomial equations. The system is over-determined, and I know that at least one solution exists. This solution is 'trivial' (with regard to what I'm doing) and simply consists of all indeterminates, y_j, taking the value 0. I also know that only a finite number of solutions exist (I think this is called zero-dimensional?).

Now, the thing is, I don't really care about this solution, and I don't need to find every solution of the system. I just need to find out whether at least one non-trivial solution exists. If one (or several) exists, I'd like to be able to determine at least one of them; it doesn't matter which, as long as it is not the trivial solution y_j = 0 for all j.

    If one solution is known, can that solution be somehow 'removed' from the system, leaving a system sharing every other zero of the original system except the one which was removed? If so, is there an efficient algorithm to do this?
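One standard trick (not mentioned in this thread, but relevant to the special case where the known solution is the all-zero point) is the Rabinowitsch trick: introduce a fresh variable t and the extra equation t*(y_1^2 + ... + y_n^2) = 1, which is satisfiable exactly when some y_j is nonzero. This relies on the y_j being real or integer valued, since over the complex numbers a sum of squares can vanish away from the origin. A minimal SymPy sketch on a toy system:

```python
# A sketch of the Rabinowitsch trick for excluding the all-zero solution.
# Assumes SymPy, and that the y_j are real/integer (over C a sum of squares
# can vanish away from the origin, so the trick would not be safe there).
from sympy import symbols, solve

y1, y2, t = symbols('y1 y2 t')

# Toy zero-dimensional system; its solutions are (0,0), (0,1), (1,0).
system = [y1 * (y1 - 1), y2 * (y2 - 1), y1 * y2]

# t*(y1**2 + y2**2) = 1 is satisfiable exactly when (y1, y2) != (0, 0),
# so the augmented system keeps every solution except the trivial one.
augmented = system + [t * (y1**2 + y2**2) - 1]

sols = solve(augmented, [y1, y2, t], dict=True)
print([(s[y1], s[y2]) for s in sols])
```

The price is one extra variable and a higher-degree equation, so this does not necessarily make the system cheaper to solve; it only guarantees the trivial zero is excluded.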

    Is there a fast algorithm to find only 1-2 solutions for a system rather than every solution of the system?

I've looked around online a bit, and it seems that finding every solution of a system is quite computationally expensive. I can also only seem to find information about finding all solutions. Most sources point towards computing a Gröbner basis or a rational univariate representation for the system. I can understand the computation of the Gröbner basis, but it seems REALLY expensive. On the other hand, I lack the education to understand how to compute the rational univariate representation, and I don't want to take a long detour to learn it at the moment.
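For concreteness, computing a lex-order Gröbner basis of a small zero-dimensional system is a one-liner in SymPy (a sketch on a toy system, not the system in question). The lex basis is triangular: its last element is univariate, so it can be solved first and the remaining variables back-substituted, much like an eliminated linear system:

```python
# Sketch: a lex Groebner basis of a zero-dimensional toy system (SymPy).
# The basis is triangular: its last element is univariate in y2, so the
# system can be solved back-to-front.
from sympy import symbols, groebner

y1, y2 = symbols('y1 y2')

system = [y1**2 + y2**2 - 5, y1 * y2 - 2]
gb = groebner(system, y1, y2, order='lex')
print(gb.exprs)
```

The expense the poster worries about is real: Gröbner basis computation is doubly exponential in the worst case, which is why structure-specific shortcuts matter.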

    If anyone knows anything about the subject and could help me out I'd be very appreciative. Heck, even if you can just point me in the right direction I'd be happy.

    I appreciate your time and consideration.
  3. Jul 29, 2013 #2

    Stephen Tashi

    Science Advisor

    Do you have just one system of polynomial equations? If so, why does the expense of computing a Grobner basis matter? Or do you have a type of problem that produces a system of multivariate polynomial equations and need to solve many examples of this type of problem?

    If there exists a fast way to solve your problem, I think it would be special to the details of the problem. I don't know of any rapid way to remove solutions from a general set of polynomial equations.
  4. Jul 30, 2013 #3
    Ah, yeah, it's not a fixed system: it has parameters assumed constant, and I'm looking to make a template.

    As for additional details: the initial system can be reduced to one involving only 2nd-degree polynomials, so assume every polynomial in the system is quadratic. Also, every polynomial in the system has one of two forms.

    For each index j (a natural number or 0), let c_j represent a parameter assumed constant, and let y_j represent one of the indeterminates. Let X_1, X_2, X_3, and X_4 (there may be many of these X combinations, not just 4) each represent a linear combination of the y_j, where each y_j does not necessarily appear in each X term, and each weight is either a known constant or a parameter constant c_k for some k. Then the two forms are:

    [tex] X_1 X_2 - X_3 X_4 = 0 [/tex]

    [tex] X_1 (X_1 - 1) = 0 [/tex]

    Last details: one of the X(X - 1) terms is in a single indeterminate, y_1 or y_0, plus constants, at most one of which is a parameter c_0 or c_1; and there is a little bit of symmetry among the linear combination terms.

    The sets of linear combinations X_k can be arranged so that initially they are very small, in one or two indeterminates and one to three total sum terms. They grow larger (in number of terms in the sum) until about half of the total number of such combinations, then shrink back down to one to three terms near the end.

    So if you arrange them this way, listing them in a column (maybe on a really, really big piece of paper), they take on a diamond shape: symmetric in the number of terms of the linear combinations.

    I don't know if this helps? But either way, I'm really grateful for your time. I feel pretty decent with linear algebra, but I just know next to nothing about systems of polynomials.

    It sucks that there is not a quick way to remove a known solution from a system. :(

    Oh yeah, one more detail I think is important, which I totally forgot: the coefficients of the polynomials in the initial system are integers! Each c_j may be assumed, in this case, to be zero or one, and the y_j indeterminates are also assumed to be integers.
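    Given those last constraints - each y_j an integer that a factor y_j(y_j - 1) = 0 forces to be 0 or 1 - a direct search over the 2^n assignments in {0,1}^n finds a nontrivial solution whenever one exists, with no algebra at all, though it is feasible only for modest n. A sketch on a made-up toy instance of the two forms, not the actual system:

```python
# Sketch: if every y_j is constrained to {0, 1}, brute force over the 2**n
# assignments can find a nontrivial solution directly (toy instance, not
# the actual system; feasible only for modest n).
from itertools import product

def nontrivial_solution(polys, n):
    """Return the first assignment in {0,1}**n, other than all-zeros,
    that zeroes every polynomial in `polys`, or None if there is none."""
    for ys in product((0, 1), repeat=n):
        if any(ys) and all(p(ys) == 0 for p in polys):
            return ys
    return None

# Hypothetical toy instance of the two forms:
#   X1*X2 - X3*X4 = 0  with X1 = y0 + y1, X2 = y2, X3 = y0, X4 = y1 + y2
#   y0*(y0 - 1) = 0    (automatically satisfied on {0,1})
polys = [
    lambda y: (y[0] + y[1]) * y[2] - y[0] * (y[1] + y[2]),
    lambda y: y[0] * (y[0] - 1),
]
print(nontrivial_solution(polys, 3))  # first nontrivial root found
```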
    Last edited: Jul 30, 2013
  5. Jul 31, 2013 #4

    Stephen Tashi

    Science Advisor

    I suggest that you search the web on a combination of keywords that includes your special type of problem (e.g. "pose estimation", "automatic theorem proving" - or whatever it is) and keywords like "Ritt", "Wu", "Ritt-Wu characteristic set", "regular chain" which are associated with methods of solving simultaneous polynomial equations and are alternatives to the Grobner basis method.

    I'm not an expert on methods of solving polynomial equations, but I've always been curious about how it's done - although only curious enough to glance at articles about it.

    If we have a system of equations expressed with factors, such as

    [tex] (y_1 + 3 y_2 + 6 y_3)(y_1 + 8 y_2) (y_2 - 4y_3) = 0 [/tex]
    [tex] (2y_1 + 4 y_3)(2y_1 + 4y_3 - 1) = 0 [/tex]

    One way to hunt for solutions is to pick one factor from each equation and try to solve the system that sets those factors simultaneously to zero. So, for example, we could try to solve the system:

    [tex] y_2 - 4y_3 = 0 [/tex]
    [tex] 2y_1 + 4y_3 - 1 = 0 [/tex]
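    Since each chosen factor is linear in the y_j, every such candidate system is just a linear system. A SymPy sketch of solving the pair above (it is underdetermined, so the solution comes back parametrized by the free variable y3):

```python
# Sketch: the factors picked above are linear, so the candidate subsystem
# is a plain linear system; SymPy's linsolve returns its solution set,
# here parametrized by the free variable y3.
from sympy import symbols, linsolve

y1, y2, y3 = symbols('y1 y2 y3')

subsystem = [y2 - 4*y3, 2*y1 + 4*y3 - 1]
sol = linsolve(subsystem, [y1, y2, y3])
print(sol)
```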

    If each equation in the system is a sum of factored terms, we can still hunt for zeros by looking at somewhat simpler systems of equations.

    For example if the system is:

    [tex] (y_1 + 3 y_2 + 6 y_3)(y_1 + 8 y_2) (y_2 - 4y_3) + (6y_1 + 3y_2)( y_2 + y_3) = 0 [/tex]
    [tex] (2y_1 + 4 y_3)(2y_1 + 4y_3 - 1) + (y_2-5y_3) = 0 [/tex]

    We can hunt for solutions by picking one factor from each term and forming simpler systems like

    [tex] y_2 - 4y_3 = 0 [/tex]
    [tex] 6y_1 + 3y_2 = 0 [/tex]
    [tex] 2y_1 + 4y_3 -1 = 0 [/tex]
    [tex] y_2 - 5y_3 = 0 [/tex]

    but the problem with doing this is that even if we look at all possible ways of selecting one factor per term, we may miss some or all of the solutions to the original set of equations. The general idea behind systematic methods for solving simultaneous polynomial equations in several variables seems (to me) to be to express the equations as sums of terms involving a special set of factors, chosen so that working with those factors overcomes the problem of missing solutions.