Minimization of a many-variable function

In summary, the conversation is about a person who is learning Python and trying to minimize a function of many variables. They are having trouble with their code, specifically with defining a function from a table of data and using it to sum elements of a second table; the replies point out several NameErrors in the code.
  • #1
gaby287
Hi, I'm learning Python and I'm just trying to minimize a function of many variables, but I have some problems with my code.

Python:
import numpy as np
import scipy.optimize as op
from scipy.optimize import minimize

table1_np = np.genfromtxt('Data/tabla1.txt', usecols=0)
#--------------------------
def function(r0, rs):
      r0, rs = parameters
     return (r0*rs**3/table1_np[I])
def function2(parameters):
      return sum(table2_np[I] - function[I])
           x0=np.array[0.7, 1.1]
      res=minimize(function2, x0, method = 'nelder-mead', options={'xtol':1e-8, 'disp':True})
 
  • #2
Perhaps you could tell us more about the function you want to minimize and the steps you need to take to do it.
 
  • #3
gaby287 said:
I have some problems with my code

What are the symptoms of those problems?
 
  • #4
Well, I have a table of data (table1_np) and I want to use it to define a function. The problem is that I don't know if the definition is correct, because what I need to do is sum the elements of another table with that function and after that minimize the result.
 
  • #5
There are some NameErrors in the code, as you've likely seen. The tuple 'parameters' you presumably intend to be passed into the first function, as it is in the second one, but it is never defined there. The index name 'I' and 'table2_np' are also undefined.
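For reference, here is a minimal sketch of what I think you are trying to do. The path 'Data/tabla2.txt' for the second table is my guess (your snippet never loads table2_np), and I've assumed you want to minimize the sum of squared residuals rather than the plain sum of differences, which could become arbitrarily negative:

Python:
import numpy as np
from scipy.optimize import minimize

# Both data columns; the second file name is assumed, since the original
# snippet uses table2_np without ever loading it.
table1_np = np.genfromtxt('Data/tabla1.txt', usecols=0)
table2_np = np.genfromtxt('Data/tabla2.txt', usecols=0)

def model(parameters):
    """Model value for every entry of table1_np, given (r0, rs)."""
    r0, rs = parameters
    return r0 * rs**3 / table1_np        # NumPy broadcasts over the whole array

def objective(parameters):
    """Sum of squared residuals between the data and the model (assumed)."""
    residuals = table2_np - model(parameters)
    return np.sum(residuals**2)

x0 = np.array([0.7, 1.1])                # initial guess for (r0, rs)
res = minimize(objective, x0, method='nelder-mead',
               options={'xatol': 1e-8, 'fatol': 1e-8, 'disp': True})
print(res.x)

Note that newer SciPy versions spell the Nelder-Mead tolerance options 'xatol' and 'fatol'; 'xtol' is the old, deprecated name.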
 

1. What is minimization of a many-variable function?

Minimization of a many-variable function is the process of finding the lowest possible value of a function that depends on multiple variables. It comes up in optimization problems where the goal is to find the combination of variable values that gives the best outcome.
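As a minimal illustration (the quadratic function here is chosen purely for concreteness, and scipy.optimize.minimize is just one of many tools that can do the job):

Python:
import numpy as np
from scipy.optimize import minimize

# f(x, y) = (x - 1)^2 + (y - 2)^2 has its minimum at (1, 2)
def f(v):
    x, y = v
    return (x - 1)**2 + (y - 2)**2

res = minimize(f, x0=np.array([0.0, 0.0]))
print(res.x)   # approximately [1. 2.]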

2. How is minimization of a many-variable function different from single-variable minimization?

The main difference is that in single-variable minimization, there is only one variable involved, while in many-variable minimization, there are multiple variables that can affect the value of the function. This makes the optimization process more complex and requires different techniques.

3. What techniques are commonly used for minimization of many-variable functions?

Some common techniques include gradient descent, Newton's method, and the Levenberg-Marquardt algorithm, as well as derivative-free methods such as Nelder-Mead (the method used via scipy.optimize.minimize in the thread above). Gradient-based methods follow the direction of steepest descent, sometimes combined with curvature information, and iteratively update the values of the variables to approach a minimum.
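As an example of the simplest of these, a bare-bones gradient-descent loop for a smooth two-variable function might look like the following sketch (the test function, step size, and stopping rule are illustrative choices only):

Python:
import numpy as np

def f(v):
    x, y = v
    return (x - 1)**2 + 10 * (y + 2)**2

def grad_f(v):
    x, y = v
    return np.array([2 * (x - 1), 20 * (y + 2)])

v = np.array([5.0, 5.0])                 # starting point
step = 0.04                              # fixed step size
for _ in range(1000):
    g = grad_f(v)
    if np.linalg.norm(g) < 1e-8:         # stop when the gradient is tiny
        break
    v = v - step * g                     # move against the gradient

print(v, f(v))                           # close to the minimizer (1, -2)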

4. What are some challenges in minimization of many-variable functions?

One of the main challenges is the presence of local minima: points where the function is lower than at all nearby points but still higher than the global minimum. These can trap an optimizer and make the true global minimum difficult to find. The complexity of the function and the number of variables can also pose challenges.
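A common, if simple, way to mitigate the local-minimum problem is to restart a local optimizer from several random initial points and keep the best result; in the sketch below the test function and the number of restarts are purely illustrative:

Python:
import numpy as np
from scipy.optimize import minimize

# A one-dimensional function with several local minima
def f(v):
    x = v[0]
    return np.sin(3 * x) + 0.1 * x**2

rng = np.random.default_rng(0)
best = None
for _ in range(20):                      # 20 random restarts
    x0 = rng.uniform(-10, 10, size=1)    # random starting point
    res = minimize(f, x0, method='nelder-mead')
    if best is None or res.fun < best.fun:
        best = res

print(best.x, best.fun)                  # best local minimum found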

5. How is minimization of many-variable functions used in real-world applications?

Minimization of many-variable functions has a wide range of applications, including in machine learning, finance, and engineering. It can be used to optimize processes and systems, such as finding the best parameters for a machine learning model or maximizing profits in a financial portfolio.
