Can you help me find the bug in my optimal control code?

In summary, the conversation discussed the use of nonlinear control systems and fuzzy control for trajectory control. It was suggested that local linear control could be used for simpler trajectories, as long as the system stays close to the intended trajectory. The concept of "phase space" was also mentioned. The conversation then shifted to discussing an optimal control problem and a Fortran code that was not producing accurate results. Suggestions were made for improving the algorithm, and the code was shared for further analysis.
  • #1
enroger0
I've been looking at nonlinear control systems, and fuzzy control in particular.

If I simply want to do trajectory control (have the system follow a certain trajectory in phase space), can I use much simpler local linear control?

I mean, for each point in phase space, do a local linearization and find the local linear control transfer function. As long as the system stays close to the intended trajectory, this would work, right?
 
  • #2
What do you refer to by "phase space"?

And yes, linear controllers can usually be used successfully on nonlinear systems, within certain limits.
Most real systems are nonlinear, some much more so than others.
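
To make that concrete, here is a rough sketch (my own toy example, with made-up numbers, not anything specific to your system) of trajectory tracking on a pendulum: at every step the gravity term is re-linearized about the current reference point, and a linear state-feedback law is designed for that local model on top of a feedforward term that keeps the nominal system on the reference.

! Toy example: track th_ref(t) = amp*sin(om*t) on a pendulum
!   x1' = x2,  x2' = -(g/l)*sin(x1) + u
! by re-linearizing the gravity term about the reference point at each
! step and applying a local linear state-feedback law to the deviation.
program local_linear_tracking
  implicit none
  real :: g_over_l, amp, om       ! plant and reference parameters (made up)
  real :: wn, zeta                ! desired local error dynamics
  real :: t, dt, x1, x2, u, uff, ufb
  real :: r, rdot, rddot          ! reference angle and its derivatives
  real :: a21, dx1, dx2
  integer :: i, nsteps

  g_over_l = 9.81
  amp = 0.5; om = 1.0
  wn = 4.0; zeta = 1.0            ! target: e'' + 2*zeta*wn*e' + wn**2*e = 0
  dt = 0.001; nsteps = 10000
  t = 0.0; x1 = 0.3; x2 = 0.0     ! start away from the reference on purpose

  do i = 1, nsteps
    r     =  amp*sin(om*t)
    rdot  =  amp*om*cos(om*t)
    rddot = -amp*om**2*sin(om*t)

    ! feedforward: input that keeps the nominal system exactly on the reference
    uff = rddot + g_over_l*sin(r)

    ! local linearization of the gravity term about x1 = r
    a21 = -g_over_l*cos(r)

    ! linear feedback designed for that local model (pole placement)
    ufb = -(wn**2 + a21)*(x1 - r) - 2.0*zeta*wn*(x2 - rdot)
    u = uff + ufb

    ! explicit Euler step of the true nonlinear plant
    dx1 = x2
    dx2 = -g_over_l*sin(x1) + u
    x1 = x1 + dt*dx1
    x2 = x2 + dt*dx2
    t  = t + dt
  end do

  print *, 'final tracking error (rad):', x1 - amp*sin(om*t)
end program local_linear_tracking

As long as the tracking error stays small, the local linear design behaves as intended; if the state wanders far from the reference, the linearization (and hence the gain calculation) is no longer valid.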
 
  • #3
enroger0 said:
I've been looking at nonlinear control systems, and fuzzy control in particular.

If I simply want to do trajectory control (have the system follow a certain trajectory in phase space), can I use much simpler local linear control?

I mean, for each point in phase space, do a local linearization and find the local linear control transfer function. As long as the system stays close to the intended trajectory, this would work, right?

Is this trajectory fixed, or will it change in time?

First, the trajectory setting.

It is easier to follow a trajectory if the path satisfies the following conditions:

1. The position is a continuous function along the path, say x = f(s), with s a parameter along the path.
2. The speed is continuous along the segment, i.e. f'(s) is continuous.
3. The acceleration f''(s) is continuous.

This will keep you from trying to follow a difficult or impossible trajectory (a small example is sketched just below).
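
As a small illustration of conditions 1-3 (my own example, with made-up numbers): a quintic "rest-to-rest" profile has continuous position, speed and acceleration, and it starts and ends with zero speed and zero acceleration, so it is an easy trajectory to follow.

! A quintic rest-to-rest profile: position, speed and acceleration are
! all continuous, and the move starts and ends at rest, so conditions
! 1-3 above are met.
program quintic_profile
  implicit none
  real :: xa, xb          ! start and end positions (made-up numbers)
  real :: tf, t, tau      ! move duration, time, normalized time
  real :: s, sd, sdd      ! shape function and its time derivatives
  integer :: i, n

  xa = 0.0; xb = 1.0
  tf = 2.0
  n  = 10

  write(*,'(a)') '        t        pos        vel        acc'
  do i = 0, n
    t   = tf*real(i)/real(n)
    tau = t/tf
    s   = 10.0*tau**3 - 15.0*tau**4 + 6.0*tau**5
    sd  = (30.0*tau**2 - 60.0*tau**3 + 30.0*tau**4)/tf
    sdd = (60.0*tau - 180.0*tau**2 + 120.0*tau**3)/tf**2
    write(*,'(4f11.4)') t, xa + (xb - xa)*s, (xb - xa)*sd, (xb - xa)*sdd
  end do
end program quintic_profile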

Choose the variable to use as the control input. Usually time is used to decide the position: time is used to compute the error, and you end up with some error in position. Alternatively, you can command position and try to reach it in the required time, which gives you errors in time but not in position. I usually use position rather than time, because I prefer to have errors in time rather than in position.


If you can build a plant model or function that gives the inputs that "should" produce the desired trajectory (the output), use it as the main feedforward term, then add a PI controller on top and you will get a good result (a sketch of this structure follows).
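
Here is a minimal sketch of that structure, with a made-up first-order plant and gains chosen only for illustration: the feedforward term is the input the nominal model says should produce the reference, and the PI controller acts on the tracking error to absorb the model mismatch.

! Feedforward from a (slightly wrong) model plus a PI correction.
! Plant (made up for illustration):  y' = -a*y + b*u.
! For y to follow r(t), the model says u should be (r' + a*r)/b; the PI
! term then cleans up whatever the model mismatch leaves behind.
program ff_plus_pi
  implicit none
  real :: a_true, b_true      ! "real" plant parameters
  real :: a_mod, b_mod        ! imperfect model used for the feedforward
  real :: kp, ki              ! PI gains
  real :: y, r, rdot, e, ierr, uff, u, t, dt
  integer :: i, nsteps

  a_true = 1.0; b_true = 2.0
  a_mod  = 1.2; b_mod  = 1.8  ! deliberately off by 10-20 %
  kp = 2.0; ki = 4.0
  dt = 0.001; nsteps = 10000
  y = 0.0; ierr = 0.0; t = 0.0

  do i = 1, nsteps
    r    = sin(0.5*t)                 ! reference trajectory
    rdot = 0.5*cos(0.5*t)

    uff = (rdot + a_mod*r)/b_mod      ! what the model says the input "should" be

    e    = r - y                      ! PI correction on the tracking error
    ierr = ierr + e*dt
    u    = uff + kp*e + ki*ierr

    y = y + dt*(-a_true*y + b_true*u) ! one Euler step of the true plant
    t = t + dt
  end do

  print *, 'final tracking error:', sin(0.5*t) - y
end program ff_plus_pi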
 
  • #4
Hello, guys!
I have been trying to solve an optimal control problem that requires writing some code. I used the forward RK fourth-order method (for the state variable) and the backward RK fourth-order method (for the adjoint equation), but the results I obtained are not right. I say this because the article I took the problem from includes a graph of the solution, which looks nothing like mine. Now I would like you to help me find the bug, if any, or suggest a better algorithm.
Here is the Fortran code:
!===========================
PROGRAM opct
  IMPLICIT NONE
  !Declaration
  real, DIMENSION(1010) :: t, z, eta, w, u, x_1, x_2, A, B, C, D
  INTEGER :: k, i, j
  real :: h, n, k1, k2, k3, k4, l1, l2, l3, l4, dz, deta, tol
  real :: gama, g_1, g_2, phi_1, phi_2, z_0, eta_T, z_1, z_2, u_1, u_2

  !Initial conditions
  x_1(1) = 427.22; x_2(1) = 157.10
  z(1) = x_2(1)/x_1(1)
  !z(1)=0.37
  phi_1 = 0.14; phi_2 = 0.1; gama = 0.5; g_1 = 0.67; g_2 = 1.0; u_1 = 0.01; u_2 = 1.0
  u(1) = 0.05; n = 1000.00; t(1) = 0.0; z_1 = 0.1033; z_2 = 31.5502
  t(1001) = 5.0; eta(1001) = 0.0; h = (t(1001) - t(1))/n; tol = 0.001
  w(1) = (phi_1 + phi_2*(z(1)**gama))*z(1)/(g_1*z(1) + g_2)
  !print*,w(1)

  !The header of the output
  write(19,100)
100 format(5X,"t",10X,"z",9X,"eta",8X,"u")
  !201 format (2f10.6,2f10.6,2f16.6,2f10.6)

  !do i = 1,1001
  !Do loop of Runge-Kutta fourth order forward method to get state variable
  do i = 1, 1001

    do j = 1, 1001
      k1 = h*dz(t(j), z(j))
      k2 = h*dz(t(j) + h/2.0, z(j) + k1/2.0)
      k3 = h*dz(t(j) + h/2.0, z(j) + k2/2.0)
      k4 = h*dz(t(j) + h, z(j) + k3)
      !The values of z will be calculated
      z(j+1) = z(j) + (k1 + 2.0*k2 + 2.0*k3 + k4)/6.0
      t(j+1) = t(j) + h !Getting set the time for the next iteration
      w(j+1) = (phi_1 + phi_2*(z(j+1)**gama))*z(j+1)/(g_1*z(j+1) + g_2)
      !print*, k1,k2,k3,k4
      A(j) = k1; B(j) = k2; C(j) = k3; D(j) = k4
    end do ! for z
    !print*,w
    !end do
    !write(19,101)
    !101 format(5X,"A",10X,"B",9X,"C",8X,"D")
    !202 format (2f10.6,2f10.6,2f10.6,2f10.6)

    !Do loop of Runge-Kutta fourth order backward method to get adjoint variable
    !do i=1001,2,-1
    do k = 1001, 1, -1
      l1 = h*deta(t(k), z(k), eta(k))
      l2 = h*deta(t(k) - h/2.0, z(k) + A(k)/2.0, eta(k) + l1/2.0)
      l3 = h*deta(t(k) - h/2.0, z(k) + B(k)/2.0, eta(k) + l2/2.0)
      l4 = h*deta(t(k) - h, z(k) + C(k), eta(k) + l3)
      !===========================
      !The values of lambda will be calculated
      eta(k-1) = eta(k) - (l1 + 2.0*l2 + 2.0*l3 + l4)/6.0
      t(k-1) = t(k) - h !Getting set the time for the next iteration
      !write(8,*) l1,l2,l3,l4
      !write(9,*) eta(k)
    end do !for eta
    !end do
    !print*,eta

    !Do i=1,1001
    if (eta(i) .lt. (g_1/g_2)) then
      u(i) = u_1
    else if (eta(i) .eq. (g_1/g_2)) then
      if (z(i) .le. z_1) then
        u(i) = u_1
      else if (z_1 .lt. z(i) .and. z(i) .lt. z_2) then
        u(i) = w(i)
      else
        u(i) = u_2
      end if
    else
      u(i) = u_2
    end if
  end do !for u

  !======================================= saved slopes
  do j = 1, 100
    write(*,*) j, A(j), B(j), C(j), D(j)
  end do
  !=======================================

  !if (abs(x(i)-x(i+1)) + abs(u(i)-u(i+1))+ abs(lamd(i)-lamd(i+1)) .lt. tol )then
  ! stop
  !else
201 format (2f10.6,2f10.6,2f16.6,2f10.6)
  do i = 1, 1001
    write(19,201) t(i), z(i), eta(i), u(i)
  end do
  !end if
  !print*,u
end program opct

!=======================================================================================
function dz(t,z)
  integer :: i
  real, intent(in) :: t, z
  real, DIMENSION(1010) :: u
  dz = -(phi_1 + phi_2*(z**gama))*z + (g_1*z + g_2)*u(1)
end function dz
!================================
function deta(t,z,eta)
  integer :: i
  real, intent(in) :: t, z, eta
  real, DIMENSION(1010) :: u
  deta = (phi_1 + (1-gama)*phi_2*(z**gama))*eta + g_2*u(1)*(eta - g_1/g_2)*eta - (gama*phi_2/(z**(1-gama)))
end function deta

!=================================================

Thanks a lot!
 
  • #5


I am happy to help you look for the bug in your optimal control code. However, it would help to have more information, such as the state equation, adjoint equation, and cost functional you are solving and the article you took the problem from; without that, it is difficult to say whether the code matches the problem.

In terms of your question about using simpler local linear control for trajectory control, it is possible to use this approach as long as your system remains close to the intended trajectory. However, it is important to consider the limitations of local linearization and the potential for your system to deviate from the desired trajectory. In some cases, a more robust control strategy, such as optimal control, may be necessary to ensure the desired trajectory is followed accurately.

I would also recommend consulting with other experts in the field of nonlinear and fuzzy control to get their insights and recommendations for your specific system and goals. Collaborating with others can often lead to new ideas and solutions that may not have been considered before.
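
That said, a couple of concrete things do stand out in the posted Fortran. dz and deta are external functions, so the phi_1, phi_2, gama, g_1, g_2 and u they refer to are their own implicitly typed, uninitialized local variables, not the values set in the main program. Also, the backward loop writes eta(0) and t(0) when k reaches 1, which is outside the declared array bounds. Below is a minimal sketch of how a forward-backward sweep is often organized, using a simple linear-quadratic problem chosen only for illustration (it is not the model from your article): parameters are shared through a module, the backward sweep stops at index 2, and the control is updated between sweeps until it stops changing.

! Minimal sketch of a forward-backward sweep, using a linear-quadratic
! problem chosen only for illustration (NOT the model from the article):
!   minimize  integral of (z**2 + u**2)/2   subject to  z' = -a*z + b*u,
! so the adjoint is  eta' = a*eta - z  with eta(T) = 0, and the optimal
! control from the optimality condition is  u = -b*eta.
module ocp_model
  implicit none
  real, parameter :: a = 1.0, b = 1.0
contains
  real function fz(z, u)            ! state equation right-hand side
    real, intent(in) :: z, u
    fz = -a*z + b*u
  end function fz
  real function feta(z, eta)        ! adjoint equation right-hand side
    real, intent(in) :: z, eta
    feta = a*eta - z
  end function feta
end module ocp_model

program fb_sweep
  use ocp_model
  implicit none
  integer, parameter :: n = 1000
  real :: z(n+1), eta(n+1), u(n+1), uold(n+1)
  real :: h, k1, k2, k3, k4, err
  integer :: i, iter

  h = 5.0/real(n)                ! time horizon [0, 5]
  z(1) = 1.0                     ! initial state
  u = 0.0                        ! initial guess for the control

  do iter = 1, 200
    uold = u

    ! forward RK4 sweep for the state, using the current control
    do i = 1, n
      k1 = h*fz(z(i),          u(i))
      k2 = h*fz(z(i) + k1/2.0, 0.5*(u(i) + u(i+1)))
      k3 = h*fz(z(i) + k2/2.0, 0.5*(u(i) + u(i+1)))
      k4 = h*fz(z(i) + k3,     u(i+1))
      z(i+1) = z(i) + (k1 + 2.0*k2 + 2.0*k3 + k4)/6.0
    end do

    ! backward RK4 sweep for the adjoint, with eta(T) = 0
    eta(n+1) = 0.0
    do i = n+1, 2, -1
      k1 = h*feta(z(i),                 eta(i))
      k2 = h*feta(0.5*(z(i) + z(i-1)),  eta(i) - k1/2.0)
      k3 = h*feta(0.5*(z(i) + z(i-1)),  eta(i) - k2/2.0)
      k4 = h*feta(z(i-1),               eta(i) - k3)
      eta(i-1) = eta(i) - (k1 + 2.0*k2 + 2.0*k3 + k4)/6.0
    end do

    ! control update from the optimality condition, with relaxation
    u = 0.5*(-b*eta) + 0.5*uold

    err = maxval(abs(u - uold))
    if (err < 1.0e-5) exit
  end do

  print *, 'iterations:', iter, '  z(T) =', z(n+1), '  u(0) =', u(1)
end program fb_sweep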
 

1. What is a nonlinear control problem?

A nonlinear control problem is a type of control problem in which the relationship between the input and output of a system is not linear. This means that the output is not directly proportional to the input, making it more challenging to design a control system that can accurately regulate the output.

2. Why are nonlinear control problems difficult to solve?

Nonlinear control problems are difficult to solve because they require more complex control algorithms compared to linear control problems. Nonlinear systems also exhibit more complex behaviors and are more sensitive to changes in parameters, making it challenging to design a control system that can handle these variations.

3. What are the main approaches to solving nonlinear control problems?

The main approaches to solving nonlinear control problems are feedback linearization, sliding mode control, and adaptive control. Feedback linearization transforms the nonlinear system into a linear system, while sliding mode control uses a discontinuous control law to force the system to reach a desired state. Adaptive control uses a learning algorithm to adjust the control parameters based on the system's behavior.
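
As a small illustration of the first approach (an illustrative example with made-up gains, not tied to any particular system): for a pendulum with dynamics th'' = -(g/l)*sin(th) + u, the control u = (g/l)*sin(th) + v cancels the nonlinearity exactly, and v can then be designed with ordinary linear methods.

! Feedback linearization on a pendulum (illustrative example):
! plant:    th'' = -(g/l)*sin(th) + u
! control:  u = (g/l)*sin(th) + v,  v = -k1*th - k2*thd
! The sin() term is cancelled exactly, so the closed loop is the linear
! system th'' = -k1*th - k2*thd, and k1, k2 are picked by linear design.
program fblin_pendulum
  implicit none
  real :: g_over_l, k1, k2, th, thd, u, v, dt, dth, dthd
  integer :: i, nsteps

  g_over_l = 9.81
  k1 = 4.0; k2 = 4.0          ! places both closed-loop poles at s = -2
  th = 1.0; thd = 0.0         ! start 1 rad away from the hanging equilibrium
  dt = 0.001; nsteps = 5000

  do i = 1, nsteps
    v = -k1*th - k2*thd             ! linear design on the linearized system
    u = g_over_l*sin(th) + v        ! cancel the gravity nonlinearity

    dth  = thd                      ! one Euler step of the true nonlinear plant
    dthd = -g_over_l*sin(th) + u    ! note: equals v exactly after cancellation
    th  = th  + dt*dth
    thd = thd + dt*dthd
  end do

  print *, 'theta after 5 s (rad):', th
end program fblin_pendulum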

4. What are some applications of nonlinear control?

Nonlinear control has a wide range of applications, including aerospace and aviation systems, robotic systems, power systems, and biological systems. Nonlinear control is also essential in industries such as automotive, chemical, and process control, where systems often exhibit nonlinear behaviors.

5. What are the advantages of using nonlinear control?

The advantages of using nonlinear control include better performance, robustness to system uncertainties, and the ability to handle complex and highly nonlinear systems. Nonlinear control also allows for more flexibility in system design and can achieve better control of process variables compared to linear control methods.
