Optimization methods with bivariate functions

In summary, the problem described involves a bivariate function g(z) with one or two local minima, and a set of reference points zr = {(x1,y1),(x2,y2),...,(xn,yn)} in the Euclidean 2D plane. The goal is to find the best parameter b such that the function f(z) = g(z) + b*u(z), where u(z) is a random noise term following a uniform distribution, produces points as close as possible to the reference points. However, the exact nature of the problem and the relevance of the functions g(z) and u(z) remained unclear over the course of the thread.
  • #1
Symeon
Hi, I have the following equation:

f(z)=g(z)+b*u(z)

where z=(x,y), i.e. bivariate, b is a parameter, u(z) the uniform distribution, and g(z) a function that represents distance.

By considering for a moment b=0, min(f(z)) can give me the location of the minimum distance. However, because I want to have locations that are not always the same, I add u(z). With b it's possible to change the influence of u(z): very high values of b give very random positions, while if b is very small, only locations around the minimum are chosen.

Furthermore, I have some reference locations zr={(x1,y1),(x2,y2),...,(xn,yn)}. I'm trying to figure out the best b I could have in order to produce from f(z) locations as close as possible to zr.

Do you have any ideas for an optimisation method I could use, or whether I could even find an analytical solution?

Thanks
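
For illustration, a minimal sketch of one possible numerical approach (the function g, the grid, and the reference points below are all hypothetical stand-ins): simulate draws of the minimiser of f(z) = g(z) + b*u(z) for a range of candidate values of b, and keep the b whose sampled locations lie closest on average to the reference points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical g(x, y): smooth, with two local minima (one near the
# origin, one near (2, 1)); a stand-in for the real distance-like g.
def g(x, y):
    return (x**2 + y**2) * ((x - 2)**2 + (y - 1)**2)

def sample_location(b, xs, ys):
    """Draw one location from f(z) = g(z) + b*u(z) on a grid,
    with u ~ Uniform(0, 1) drawn independently at each grid point."""
    f = g(xs, ys) + b * rng.uniform(size=xs.shape)
    i, j = np.unravel_index(np.argmin(f), f.shape)
    return xs[i, j], ys[i, j]

# Hypothetical reference locations z_r.
z_ref = np.array([[0.2, 0.1], [1.8, 0.9], [0.1, -0.2]])

xs, ys = np.meshgrid(np.linspace(-1, 3, 200), np.linspace(-1, 2, 200))

best_b, best_err = None, np.inf
for b in np.geomspace(1e-3, 1e2, 30):          # coarse grid search over b
    pts = np.array([sample_location(b, xs, ys) for _ in range(300)])
    # Mean distance from each sampled location to its nearest reference point.
    err = np.linalg.norm(pts[:, None] - z_ref[None], axis=2).min(axis=1).mean()
    if err < best_err:
        best_b, best_err = b, err

print(f"best b ~ {best_b:.3g}, mean nearest-reference distance {best_err:.3g}")
```

Because each draw is random, the error estimate for each candidate b is noisy; averaging over more samples per b makes the grid search more stable.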
 
  • #2
What do you mean by 'u(z) [is] the uniform distribution'? You have written it as though u is a function ##u:\mathscr{R}^2\to\mathscr{R}##, but a distribution is not such a function.

What do you mean by 'g(z) [is] a function that represents distance'? A distance function will typically have two arguments, but z is only a single argument unless we consider x and y as separate arguments, in which case why not just write |x-y| rather than g(z)?

The problem needs to be specified much more clearly to have a good chance of receiving help.
 
  • #3
Hi andrewkirk,

Thanks for your reply, and sorry for the misuse of the terms.

g(z) is just a bivariate function that has some local minima, one or two. The reason I have used the word distance is that it has come up from the subtraction of two functions squared. I believe this is not of interest, as it's part of a pattern recognition technique I'm using.

What I believe is important is that ##g(z)\equiv g(x,y)## has one or two local minima. What I'm trying to get from g(z) are some random values, i.e. some (x,y) such that they are close to my reference points ##z_r=\{ (x_1,y_1),(x_2,y_2),\ldots,(x_n,y_n) \}##.

The way I'm doing it is by adding a random noise term, or random variable if you like, ##u(x,y)## (the notation is probably not correct), that follows a uniform distribution and whose level is changed by the parameter ##b##. In this way I can change the location of the minimum point, but in a way that still follows g(z). So I'm trying to get the best ##b## such that the points I get are as close as possible to my reference points.

I hope now the explanation is better.
Thanks
 
  • #4
I am afraid the problem is still unclear. Is it from a textbook or assignment sheet? If so, perhaps you could type it out in full to make it clear.

It sounds like you have a finite set of points, labelled zr, in the Euclidean 2D plane. And you are trying to get the function f to return a point as close as possible to any point in zr. The solution to that is just to make f the constant function that always returns the pair of coordinates of one of the points in zr.

Or do you mean that you want f to be the function that, given the coordinates of a point in ##\mathscr{R}^2##, returns the coordinates of the nearest point in zr? If so, I don't see the relevance of the function g. It would not be used in defining the function f.

Or do you mean that g is the function that, given the coordinates of a point in ##\mathscr{R}^2##, returns the greatest distance from that point to any of the points in zr, and you want to find the point in the number plane that minimises the value of g? In that case it becomes a problem of minimising the value of g over the convex hull of the points in zr.
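
If that third reading is the intended one, the minimiser of g is the centre of the smallest circle enclosing zr, and it can be found numerically. A minimal sketch, assuming hypothetical reference points; since a maximum of smooth functions is non-smooth, a derivative-free method such as Nelder-Mead is a safe choice:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical reference points z_r.
z_ref = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.5], [0.5, 2.0]])

# g(p) = greatest distance from p to any reference point; its
# minimiser is the centre of the smallest circle enclosing z_r.
def g(p):
    return np.linalg.norm(z_ref - p, axis=1).max()

# Start from the centroid and minimise with a derivative-free method.
res = minimize(g, x0=z_ref.mean(axis=0), method="Nelder-Mead")
print("minimax point:", res.x, "max distance:", res.fun)
```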
 

FAQ: Optimization methods with bivariate functions

What is the purpose of using optimization methods with bivariate functions?

The purpose of using optimization methods with bivariate functions is to find the maximum or minimum value of a function with two variables. This is useful in many scientific and engineering applications, such as finding the most efficient solution to a problem or determining the best fit for a model.

What are some common optimization methods used for bivariate functions?

Some common optimization methods for bivariate functions include gradient descent, Newton's method, and the simplex method (for nonlinear problems, usually the Nelder-Mead simplex). These methods use different approaches to iteratively approach the optimum of a function.
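
For instance, a library such as SciPy exposes these methods through scipy.optimize.minimize. A minimal sketch on the Rosenbrock function, a standard non-convex bivariate test problem (chosen here purely for illustration):

```python
import numpy as np
from scipy.optimize import minimize

# Rosenbrock function: a classic non-convex bivariate test problem
# whose global minimum is at (1, 1).
def rosen(p):
    x, y = p
    return (1 - x)**2 + 100 * (y - x**2)**2

def rosen_grad(p):
    x, y = p
    return np.array([-2 * (1 - x) - 400 * x * (y - x**2),
                     200 * (y - x**2)])

x0 = np.array([-1.5, 2.0])
for method in ("Nelder-Mead", "BFGS", "Newton-CG"):
    # Newton-CG requires the gradient; Nelder-Mead is derivative-free.
    jac = rosen_grad if method in ("BFGS", "Newton-CG") else None
    res = minimize(rosen, x0, method=method, jac=jac)
    print(f"{method:12s} x = {res.x}  f = {res.fun:.2e}")
```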

How does gradient descent work in optimizing bivariate functions?

Gradient descent works by calculating the gradient of a function at a given point and then taking a step in the direction of steepest descent, i.e. the negative of the gradient. This process is repeated until the gradient becomes sufficiently small and the algorithm has converged to a (possibly local) optimum.
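
A minimal sketch of plain gradient descent on a hypothetical bivariate quadratic, using a fixed step size (practical implementations usually add a line search or an adaptive step size):

```python
import numpy as np

# Hypothetical bivariate objective and its gradient.
def f(p):
    x, y = p
    return (x - 1)**2 + 4 * (y + 2)**2

def grad_f(p):
    x, y = p
    return np.array([2 * (x - 1), 8 * (y + 2)])

p = np.array([5.0, 5.0])              # starting point
lr = 0.1                              # fixed step size (learning rate)
for _ in range(200):
    gvec = grad_f(p)
    if np.linalg.norm(gvec) < 1e-8:   # stop once the gradient is tiny
        break
    p = p - lr * gvec                 # step along the negative gradient

print("minimiser ~", p, " f =", f(p))  # expected: close to (1, -2)
```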

Can optimization methods with bivariate functions handle non-linear functions?

Yes, optimization methods with bivariate functions can handle non-linear functions. In fact, these methods are particularly useful for non-linear functions where no closed-form solution can be found by traditional algebraic methods.

Are there any limitations to using optimization methods with bivariate functions?

One limitation of using optimization methods with bivariate functions is that they may converge to a local minimum rather than the global minimum. In addition, some methods may be computationally expensive for complex functions, especially when generalised to many variables.
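
A common mitigation of the local-minimum problem is a multi-start strategy: run a local optimiser from several random starting points and keep the best result. A minimal sketch with a hypothetical double-well objective:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical double-well: a local minimum near (-1, 0) and a deeper
# global minimum near (1, 0); the -0.5*x term tilts the wells.
def f(p):
    x, y = p
    return (x**2 - 1)**2 - 0.5 * x + y**2

rng = np.random.default_rng(0)
starts = rng.uniform(-2, 2, size=(10, 2))   # random starting points
results = [minimize(f, x0) for x0 in starts]
best = min(results, key=lambda r: r.fun)
print("best minimiser:", best.x, "value:", best.fun)
```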
