# Control Theoretical Inquiry

Hey everyone,

I have a hypothesis I'd like to confirm. I won't bore anyone with the nitty-gritty details, so I'll keep things as general as possible.

I'm doing a project on gradient ascent methods and their application to quantum control. The quantum part isn't important as my question is mathematical in nature, though a small caveat will appear and I'll make that clear.

Essentially, I'm trying to find an optimal control that drives an operator $\rho(t)$, with $\rho(0) = \rho_0$, toward a target operator $\tau$ in time $T$ so as to maximize their inner product, say
[tex] C = \langle \tau , \rho(T) \rangle [/tex]
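For concreteness, here is a minimal sketch of evaluating this cost function, under the assumption (mine, not stated above) that the operators are given as plain matrices and the inner product is the Hilbert–Schmidt one, $\langle A, B \rangle = \mathrm{Tr}(A^\dagger B)$:

```python
# Hilbert-Schmidt inner product <A, B> = Tr(A^dagger B), with the
# operators represented as nested lists (a toy stand-in for whatever
# representation one actually uses).
def hs_inner(a, b):
    n = len(a)
    # Tr(A^dagger B) = sum over i, j of conj(A[j][i]) * B[j][i]
    return sum(a[j][i].conjugate() * b[j][i]
               for i in range(n) for j in range(n))

tau = [[1, 0], [0, 0]]        # example target: projector onto the first basis state
rho_T = [[0.5, 0], [0, 0.5]]  # example rho(T): maximally mixed state

C = hs_inner(tau, rho_T)      # overlap of the two operators, here 0.5
```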
The gradient ascent method says that we should compute the gradient of $C$ and then step in that direction, since the gradient is the direction of steepest ascent. This is very useful from a numerical standpoint, and that is the context in which I will be using it.

I was asked during a seminar whether, in the event that we could directly calculate $C$, there was any way of formulating an optimal control using only the value of $C$, and whether this could potentially be more efficient. Incidentally, this is where the quantum caveat comes in: in general there's no guarantee we can calculate $C$.

So my question comes down to this: under the assumption that we can evaluate the cost function directly, can I find an algorithm that optimizes my control variables using only those values?

I suspect not, since the inner product naively represents the overlap of the two operators. Hence calculating the cost function may tell us how close we are to a solution, but in an iterative numerical process it does not tell us "in which direction" to update our control variables.
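For what it's worth, the standard way to squeeze a direction out of cost evaluations alone is a finite-difference estimate of the gradient: a single value of $C$ gives no direction, but perturbing each control and re-evaluating does, at the price of one extra cost evaluation per control variable. A sketch, with all names and the toy cost my own:

```python
# Finite-difference gradient estimate using ONLY evaluations of the
# cost C(u). One value of C tells us nothing about direction; n extra
# perturbed evaluations recover an approximate gradient.
def fd_gradient(C, u, h=1e-6):
    C0 = C(u)
    grad = []
    for i in range(len(u)):
        u_pert = list(u)
        u_pert[i] += h                     # perturb one control variable
        grad.append((C(u_pert) - C0) / h)  # forward difference
    return grad

# Toy cost C(u) = -(u0 - 1)^2 - (u1 + 2)^2; true gradient at (0, 0)
# is (2, -4), which the estimate should roughly reproduce.
C = lambda u: -(u[0] - 1) ** 2 - (u[1] + 2) ** 2
g = fd_gradient(C, [0.0, 0.0])
```

So "cost values only" does not rule out gradient-style optimization outright, but it scales poorly with the number of controls compared to a method that computes the gradient directly.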

Any thoughts on this?