Unique solution to an arbitrary monotonic non-linear system

dmytro
Quick version:

I have a vector field f:\mathbb{R}^n\oplus\mathbb{R}^m \to \mathbb{R}^n of two arguments x \in \mathbb{R}^n, y \in \mathbb{R}^m, which has the following properties:

  1. The Jacobian matrix of f with respect to the first argument, \frac{\partial f}{\partial x}: \mathbb{R}^n\oplus\mathbb{R}^m \to \mathbb{R}^{n\times n}, is lower triangular with negative elements on the main diagonal, for all input argument values. In particular, the negative diagonal means that each f_i is strictly decreasing with respect to x_i for i = 1 \dots n
  2. (optional, if it helps, but I should probably relax it later to monotone w.r.t. y) f is linear in the second argument, i.e. \frac{\partial f}{\partial y} = \text{const} is a constant n\times m matrix; or even f(x,y)=g(x)+Gy, where G is some (known) matrix
  3. f is sufficiently differentiable, nice and all

The question is: given the assumptions above and setting y=W^Tx, is there any hope of finding some constraints on the matrix W \in \mathbb{R}^{n\times m} such that the equation f(x, W^Tx)=0 has a unique solution?

Some details:

I need this to show that a linear feedback control stabilizes my system; that's where W^Tx comes from. This question arose as a generalization of the 1D case, which has a nice solution, as described below. However, I don't know which tools I can use to study the generalized problem.

In the 1D case, i.e. n = m = 1, we have the following:

  • Let f be of the form f(x, y) = g(x) + \gamma y
  • then f(x, wx) = 0 \implies \gamma wx = -g(x)
  • g(x) is strictly decreasing (by assumption 1), so a sufficient condition for g(x) + \gamma w x = 0 to have a unique root is that \gamma w x is also strictly decreasing, i.e. \gamma w < 0; the sum is then strictly decreasing and unbounded in both directions, so it crosses zero exactly once

So, had I asked this question in 1D, the answer would be something like "the system has a unique solution for all w such that \gamma w < 0 holds". I was hoping to get something like that for the general case, but with no luck so far. I'd also be grateful if someone points me to the suitable mathematical apparatus to figure it out.
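To make the 1D argument concrete, here is a minimal numeric sketch; the specific choices g(x) = -x^3 - x, \gamma = 2, w = -1 are hypothetical, picked only to satisfy assumption 1 and the condition \gamma w < 0.

```python
# Minimal numeric sketch of the 1D condition.  The choices below are
# hypothetical: g(x) = -x**3 - x is strictly decreasing (assumption 1)
# and gamma*w = -2 < 0 is the proposed sufficient condition.
import numpy as np

def g(x):
    return -x**3 - x

gamma, w = 2.0, -1.0

def h(x):
    return g(x) + gamma * w * x   # this is f(x, w*x)

# h is strictly decreasing and unbounded in both directions, so it crosses
# zero exactly once; locate the root by bisection.
lo, hi = -10.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if h(mid) > 0:
        lo = mid
    else:
        hi = mid
root = 0.5 * (lo + hi)
print("unique root ~", root, " h(root) ~", h(root))

# Sampling confirms h is strictly decreasing, i.e. there is no second root.
xs = np.linspace(-10.0, 10.0, 1001)
assert np.all(np.diff(h(xs)) < 0)
```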

One more thing: for stable fixed points, the original question can be equivalently restated as: find constraints on W such that the matrix

\frac{\partial}{\partial x}[f(x, W^Tx)] = \frac{\partial f}{\partial x} + \frac{\partial f}{\partial y}W^T

has only negative eigenvalues, for all x. The equivalence can be shown using dynamical systems theory. So an answer to this question is as welcome as an answer to the original one.
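As an illustration of the restated condition, below is a rough numeric sketch that samples points x and checks the eigenvalues of \frac{\partial f}{\partial x} + \frac{\partial f}{\partial y}W^T; the particular system (of the form f(x,y) = g(x) + Gy), the matrix G, and the candidate W are all hypothetical.

```python
# Rough numeric check of the restated condition: sample points x and verify
# that df/dx + (df/dy) W^T has only eigenvalues with negative real part.
# The system below is hypothetical, of the form f(x, y) = g(x) + G*y with
# n = 2, m = 1 (assumptions 1 and 2).
import numpy as np

G = np.array([[1.0], [0.5]])      # hypothetical constant df/dy  (n x m)
W = np.array([[-1.0], [0.0]])     # candidate feedback matrix    (n x m)

def df_dx(x):
    # hypothetical df/dx: lower triangular with negative diagonal
    return np.array([[-1.0 - 3.0 * x[0]**2, 0.0],
                     [0.5,                 -2.0]])

def closed_loop_jacobian(x):
    # d/dx [f(x, W^T x)] = df/dx + (df/dy) W^T
    return df_dx(x) + G @ W.T

rng = np.random.default_rng(0)
for x in rng.normal(size=(100, 2)):
    eigvals = np.linalg.eigvals(closed_loop_jacobian(x))
    assert np.all(eigvals.real < 0), (x, eigvals)
print("all sampled closed-loop Jacobians have only negative eigenvalues")
```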
 
edit:

f:\mathbb{R}^n\oplus\mathbb{R}^m \to \mathbb{R}^n should be f:\mathbb{R}^n\times\mathbb{R}^m \to \mathbb{R}^n

\frac{\partial f}{\partial x}: \mathbb{R}^n\oplus\mathbb{R}^m \to \mathbb{R}^{n\times n} should be \frac{\partial f}{\partial x}: \mathbb{R}^n\times\mathbb{R}^m \to \mathbb{R}^{n\times n}
 
One thought is to see if the Banach fixed point theorem helps. Let T be the mapping of \mathbb{R}^n into itself defined by T(x) = f(x, W^T x).

http://en.wikipedia.org/wiki/Banach_fixed-point_theorem

Correction: let T be the mapping of \mathbb{R}^n into itself defined by T(x) = f(x, W^T x) + x, so that a fixed point of T corresponds to a solution of your equation.
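For illustration, here is a rough numeric sketch of the fixed-point idea, using a damped variant T(x) = x + \alpha f(x, W^T x); the step size \alpha and the specific f and W are hypothetical, and some damping like this is generally needed for T to have a chance of being a contraction (and even then only locally, without further assumptions).

```python
# Rough sketch of the fixed-point idea, with a damping factor alpha
# (hypothetical; the plain T(x) = f(x, W^T x) + x need not be a contraction,
# and even the damped map is only contracting near the solution here).
import numpy as np

G = np.array([[1.0], [0.5]])      # hypothetical constant df/dy
W = np.array([[-1.0], [0.0]])     # hypothetical feedback matrix

def f(x, y):
    # hypothetical f(x, y) = g(x) + G*y satisfying assumptions 1-2
    g = np.array([-x[0] - x[0]**3, 0.5 * x[0] - 2.0 * x[1]])
    return g + G @ y

def T(x, alpha=0.2):
    return x + alpha * f(x, W.T @ x)

x = np.array([0.5, -0.5])         # start reasonably close to the solution
for _ in range(200):
    x = T(x)
print("candidate fixed point:", x, "  residual f(x, W^T x):", f(x, W.T @ x))
```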
 
Stephen Tashi said:
One thought is to see if the Banach fixed point theorem helps
Thanks for the clues. I can't seem to figure out how to make use of that theorem though...

However, I found a solution for a special case of W. If one picks W^T with only its first column non-zero, then the product \Gamma = \frac{\partial f}{\partial y}W^T also has only its first column non-zero, that is, it is a special case of a triangular matrix. Therefore, to make sure that the full derivative of f is a stable triangular matrix, one only needs to require that the (1, 1) element of \Gamma is always negative, that is:
sign(W_{1i}) = -sign(\frac{\partial}{\partial y_i}f_1(x, y)), \forall i, x, y
This condition is easy to fulfill if one assumes that f_1 is monotonic in the second argument.
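A small numeric sanity check of this special-case construction (with hypothetical \frac{\partial f}{\partial y} = G, hypothetical \frac{\partial f}{\partial x}, and n = m = 2) might look like:

```python
# Numeric sanity check of the special case: W^T has only its first column
# non-zero, with sign(W_{1i}) = -sign(df_1/dy_i).  The system is hypothetical,
# with n = m = 2 and constant df/dy = G (assumption 2).
import numpy as np

G = np.array([[1.0, -0.5],        # first row: df_1/dy_1, df_1/dy_2
              [0.3,  2.0]])

WT = np.array([[-1.0, 0.0],       # W_{11} = -1.0, opposite sign to G[0, 0]
               [ 0.7, 0.0]])      # W_{12} =  0.7, opposite sign to G[0, 1]

Gamma = G @ WT                    # only the first column is non-zero
assert np.allclose(Gamma[:, 1], 0.0)
assert Gamma[0, 0] < 0            # the (1, 1) element is negative

def df_dx(x):
    # hypothetical df/dx: lower triangular with negative diagonal
    return np.array([[-1.0 - x[0]**2, 0.0],
                     [0.4,           -3.0]])

# The closed-loop Jacobian stays lower triangular with a negative diagonal,
# hence its eigenvalues (the diagonal entries) are negative.
for x in np.random.default_rng(1).normal(size=(50, 2)):
    J = df_dx(x) + Gamma
    assert np.allclose(np.triu(J, k=1), 0.0) and np.all(np.diag(J) < 0)
print("closed-loop Jacobian is triangular with negative diagonal at all samples")
```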

For the full-rank W, the question is still open, and that's exactly the case I need :(
 