Regression Analysis - Constructing a Model

SUMMARY

This discussion focuses on constructing a general function that adheres to specific minimum and maximum constraints using regression analysis. The objective is to find a function f(x) that satisfies N inequality constraints and two equality constraints while minimizing the mean square distance from a defined line. The proposed approach involves using a quadratic function of the form f(x) = Ax² + Bx + C and applying optimization techniques such as simulated annealing and the conjugate gradient method to solve the problem effectively.

PREREQUISITES
  • Understanding of regression analysis and optimization techniques
  • Familiarity with quadratic functions and their properties
  • Knowledge of numerical methods for constraint satisfaction
  • Basic proficiency in programming for implementing algorithms
NEXT STEPS
  • Research "simulated annealing" for optimization in constrained environments
  • Explore "genetic programming" as a method for function approximation
  • Study the "conjugate gradient method" for solving optimization problems
  • Learn about "mean square error" and its application in regression analysis
USEFUL FOR

Mathematicians, data scientists, and engineers involved in optimization problems, particularly those requiring regression analysis and function modeling under constraints.

Fjolvar
Hello,

I am trying to construct a general function/method based on two sets of minimum/maximum data point constraints, which can take on new values in different situations. The only known data for this general function are the starting point (y-axis intercept) and the x-range. The rate of change over time must equal zero, so any amount increased/decreased must be compensated for within the x-range. Ideally, the function will vary as little as possible while still meeting the constraints.

I attached a figure of an example plot. The minimum constraint data points are marked as the 'necessary' function. The maximum constraint data points are marked as 'maximal.' The function marked as 'case 2' is an example of the general function that would satisfy the constraints.

I would appreciate any advice or suggestions on how to approach this problem as I've had very little success so far. Thank you in advance.
 

Attachments

  • Example Graph.jpg
In my opinion, a procedure for solving this general type of problem would boil down to having a computer program that minimizes a function of several variables subject to many constraints using a numerical method. There are many methods known for solving this type of problem, but they are not methods where you apply a relatively simple formula that produces the answer.

Let me try to interpret your description of the problem as mathematics.

We are given data for two functions [itex]y_{lower}, y_{upper}[/itex] that are defined on the [itex]N+1[/itex] points [itex]\{x_0, x_1, ..., x_N\}[/itex]. We want to find a function [itex]f(x)[/itex] satisfying:

[itex]N[/itex] inequality constraints:

[itex]y_{lower}(x_i) \lt f(x_i) \lt y_{upper}(x_i)[/itex] for [itex]i = 1, 2, ..., N[/itex]

Two equality constraints:

[itex]f(x_0) = f(x_N) = y_0[/itex] for some number [itex]y_0[/itex]

You want [itex]f(x)[/itex] to vary as little as possible. One way to express that in mathematics is to say that we want to minimize the mean square distance between [itex]f(x)[/itex] and the line between [itex](x_0,y_0)[/itex] and [itex](x_N,y_0)[/itex].

So the "objective function" is [itex]\frac{1}{N} \sum_{i=1}^N (f(x_i) - y_0)^2[/itex]
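As a small sketch, this objective can be computed directly (the sample values below are made up for illustration, not taken from your data):

```python
import numpy as np

def objective(f_vals, y0):
    """Mean square distance between the values f(x_i) and the constant level y0."""
    return np.mean((f_vals - y0) ** 2)

# Example: f values hovering near y0 = 2.0
print(objective(np.array([2.1, 1.9, 2.0, 2.2]), 2.0))
```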

If you know the mathematical definition of "a function", you know that functions can be defined by symbols, words, or combinations thereof. So trying to solve the above problem for "the" best function over all possible functions is intractable because there is such a great variety of functions.

To make the problem tractable, we could assume [itex]f[/itex] comes from a family of functions defined by a few parameters, such as the quadratic functions.

If we assume [itex]f(x) = Ax^2 + Bx + C[/itex] and also treat [itex]y_0[/itex] as a variable, the problem becomes:

Minimize [itex]\frac{1}{N} \sum_{i=1}^N (f(x_i) - y_0)^2[/itex]
with respect to the variables, [itex]A,B,C,y_0[/itex]
subject to the constraints
[itex]y_{lower}(x_i) \lt f(x_i) \lt y_{upper}(x_i)[/itex] for [itex]i = 1, 2, ..., N[/itex]
[itex]f(x_0) = f(x_N) = y_0[/itex]
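A problem in this form can be handed to an off-the-shelf constrained solver. Here is a minimal sketch using SciPy's SLSQP method; the grid and the [itex]y_{lower}, y_{upper}[/itex] curves below are placeholder assumptions standing in for your 'necessary' and 'maximal' data, and the solver uses non-strict inequalities rather than the strict ones above:

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder constraint data (not from the attachment).
x = np.linspace(0.0, 1.0, 11)            # x_0 .. x_N
y_lower = 0.5 * np.sin(np.pi * x) - 0.1  # stands in for the 'necessary' curve
y_upper = y_lower + 0.5                  # stands in for the 'maximal' curve

def f(p, xs):
    """Quadratic model f(x) = A x^2 + B x + C; p = (A, B, C, y0)."""
    A, B, C, _y0 = p
    return A * xs**2 + B * xs + C

def objective(p):
    y0 = p[3]
    return np.mean((f(p, x[1:]) - y0) ** 2)   # (1/N) sum_i (f(x_i) - y0)^2

constraints = [
    # SLSQP treats 'ineq' as fun(p) >= 0, so the two one-sided bounds are:
    {"type": "ineq", "fun": lambda p: f(p, x[1:]) - y_lower[1:]},
    {"type": "ineq", "fun": lambda p: y_upper[1:] - f(p, x[1:])},
    # Equality constraints f(x_0) = y_0 and f(x_N) = y_0:
    {"type": "eq", "fun": lambda p: f(p, x[0]) - p[3]},
    {"type": "eq", "fun": lambda p: f(p, x[-1]) - p[3]},
]

# Feasible starting guess: f(x) = 0.2 + 2x(1 - x), y_0 = 0.2
p0 = np.array([-2.0, 2.0, 0.2, 0.2])
result = minimize(objective, p0, method="SLSQP", constraints=constraints)
A, B, C, y0 = result.x
```

Because the quadratic model is linear in its parameters, both the objective and the constraints here are smooth, which is the situation gradient-based solvers handle well.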

This format (Minimize ... with respect to ... subject to the constraints ...) is a standard way to state optimization problems. There are methods of solving such problems that resemble organized forms of trial and error, such as "simulated annealing" and "genetic programming". There are also algorithms that rely on calculus and "if ... then ..." decisions, such as "the conjugate gradient method" and variants of it that handle constraints.
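To give a flavor of the trial-and-error family, here is a simulated annealing sketch for the same quadratic model, handling the constraints with a penalty term added to the objective. Everything here (the grid, the bound curves, the penalty weight, the cooling schedule) is an illustrative assumption, not a tuned implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder constraint data (not from the attachment).
x = np.linspace(0.0, 1.0, 11)
y_lower = 0.5 * np.sin(np.pi * x) - 0.1
y_upper = y_lower + 0.5

def penalized(p):
    """Mean square objective plus a penalty for violated constraints."""
    A, B, C, y0 = p
    fx = A * x**2 + B * x + C
    obj = np.mean((fx[1:] - y0) ** 2)
    viol = (np.maximum(y_lower[1:] - fx[1:], 0.0).sum()   # below lower bound
            + np.maximum(fx[1:] - y_upper[1:], 0.0).sum() # above upper bound
            + abs(fx[0] - y0) + abs(fx[-1] - y0))         # endpoint equalities
    return obj + 100.0 * viol

best = np.zeros(4)
best_val = penalized(best)
current, current_val = best.copy(), best_val
T = 1.0
for step in range(5000):
    cand = current + rng.normal(scale=0.1, size=4)   # random nearby proposal
    cand_val = penalized(cand)
    # Always accept downhill moves; accept uphill moves with prob. e^{-delta/T}
    if cand_val < current_val or rng.random() < np.exp(-(cand_val - current_val) / T):
        current, current_val = cand, cand_val
        if current_val < best_val:
            best, best_val = current.copy(), current_val
    T *= 0.999   # geometric cooling schedule
```

The occasional acceptance of uphill moves is what lets the method escape local minima; as the temperature T cools, the search settles into refining the best region found.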
 
