Existence of (sequential) feasible direction

Having trouble with something that's likely too trivial, but here goes...

In Optimization Theory and Nonlinear Programming [Sun, Yuan], the following is discussed in Section 8.2.

Consider the optimisation of $f:\mathbb R^n\to\mathbb R$ with constraints $g_j:\mathbb R^n\to\mathbb R, g_j(x) = 0, j=1,\ldots , m$.

A feasible set $X$ is a subset of $\mathbb R^n$ which contains all those points that satisfy the system of constraints. The following definition is given.

A feasible direction at $x^*\in X$ is a straight line section containing $x^*$ that consists only of feasible points. Formally, there exists a direction $h\in\mathbb R^n\setminus\{0\}$ (which may be assumed normalized) with
$$\exists \delta >0, \forall \eta (0\leq \eta\leq \delta \implies x^* + \eta h\in X)$$
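To make the definition concrete, here is a small numerical sketch (my own illustration, not from the book): take the corner point of the nonnegative quadrant in $\mathbb R^2$. The diagonal direction is feasible there, while a direction pointing out of the quadrant is not.

```python
import math

def in_X(x):
    # Feasible set: the nonnegative quadrant {x in R^2 : x_1 >= 0, x_2 >= 0}.
    return x[0] >= 0 and x[1] >= 0

def is_feasible_direction(x_star, h, delta=1.0, samples=1000):
    # Check x* + eta*h in X for eta on a fine grid of [0, delta].
    return all(in_X((x_star[0] + eta * h[0], x_star[1] + eta * h[1]))
               for eta in (delta * j / samples for j in range(samples + 1)))

x_star = (0.0, 0.0)                               # the corner of the quadrant
diag = (1 / math.sqrt(2), 1 / math.sqrt(2))
print(is_feasible_direction(x_star, diag))        # True
print(is_feasible_direction(x_star, (-1.0, 0.0))) # False
```

Of course, a finite grid only samples the "for all $\eta$" condition; this is an illustration, not a proof.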
Only this definition is given, so one naturally asks, do such feasible directions even exist, in general?

Further development is done via the language of sequences.
Call a direction $h\neq 0$ a sequentially feasible direction at $x^*\in X$ if there exists a sequence $h_k\to h$ such that for every vanishing sequence $\eta _k >0$ it holds that $x^* + \eta_k h_k\in X$.
Evidently, the sequence converges to $x^*$, but the following is problematic for me.
Set $x_k := x^* + \eta_kh_k$. If we set $\eta _k := \|x_k-x^*\|$ then
$$\frac{x_k-x^*}{\|x_k-x^*\|} \xrightarrow[k\to\infty]{} h?!\tag{E}$$
What I understand is that we have a sequence of normed elements, so if it converges to anything, the limit has to be nonzero, but why does it converge?

Let's give some context. Suppose $x^*\in X$ is not a local maximum point (subject to said constraints). Then, by definition,
$$\forall k\in\mathbb N,\ \exists x_k\in X : \|x^*-x_k\|\leq \frac{1}{k}\quad \&\quad f(x^*) < f(x_k)$$
So, we can construct a sequence of feasible points that converges to $x^*$. May we assume, without loss of generality, that this sequence has a limiting direction? More formally, if we write $x_k = x^* + \eta_k h_k,\ k\in\mathbb N$, may we assume that $\lim_k h_k$ exists in $\mathbb R^n\setminus\{0\}$?
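The decomposition implicit here is easy to make explicit: given $x_k \neq x^*$, set $\eta_k := \|x_k - x^*\|$ and $h_k := (x_k - x^*)/\eta_k$, so that $x_k = x^* + \eta_k h_k$ with $\|h_k\| = 1$. A minimal sketch (the numbers are my own):

```python
import math

def decompose(x_star, x_k):
    # Write x_k = x* + eta_k * h_k with eta_k = ||x_k - x*|| and ||h_k|| = 1.
    diff = [a - b for a, b in zip(x_k, x_star)]
    eta = math.sqrt(sum(d * d for d in diff))
    h = [d / eta for d in diff]
    return eta, h

eta, h = decompose((0.0, 0.0), (0.3, 0.4))
print(eta)   # ~0.5
print(h)     # ~[0.6, 0.8], a unit vector
```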

So, like I said, probably something trivial, but I really don't understand why we may assume the argument that culminates in (E).

S.G. Janssens
Only this definition is given, so one naturally asks, do such feasible directions even exist, in general?
No, at the level of generality of your presentation, I do not see why they should exist. What excludes the possibility of ##X## being a finite set?
Further development is done via the language of sequences.
Call a direction $h\neq 0$ a sequentially feasible direction at $x^*\in X$ if there exists a sequence $h_k\to h$ such that for every vanishing sequence $\eta _k >0$ it holds that $x^* + \eta_k h_k\in X$.
Evidently, the sequence converges to $x^*$, but the following is problematic for me.
Set $x_k := x^* + \eta_kh_k$. If we set $\eta _k := \|x_k-x^*\|$ then
Sorry, but this confuses me. You took ##(\eta_k)_k## to be a vanishing sequence (of real numbers), then used it to define ##x_k##, but now you use ##x_k## in turn to define ##\eta_k##? Could you perhaps clarify this?

Without this (unnecessary?) re-definition, I would imagine that (E) follows because
$$\frac{x_k - x^{\ast}}{\|x_k - x^{\ast}\|} = \frac{\eta_k}{|\eta_k|}\frac{h_k}{\|h_k\|} = \frac{h_k}{\|h_k\|} \to \frac{h}{\|h\|} = h,$$
where it was used that ##\eta_k > 0## and ##h## is normalized.
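For what it's worth, this convergence can also be watched numerically (a throwaway sketch; the choices of ##x^{\ast}##, ##h_k## and ##\eta_k## are arbitrary and my own):

```python
import math

h = (1.0, 0.0)                       # normalized target direction
x_star = (2.0, -1.0)

def norm(v):
    return math.sqrt(v[0] ** 2 + v[1] ** 2)

errs = []
for k in range(1, 30):
    h_k = (1.0, 1.0 / 2 ** k)        # h_k -> h
    eta_k = 1.0 / k                  # an arbitrary vanishing positive sequence
    x_k = (x_star[0] + eta_k * h_k[0], x_star[1] + eta_k * h_k[1])
    d = (x_k[0] - x_star[0], x_k[1] - x_star[1])
    u = (d[0] / norm(d), d[1] / norm(d))     # the difference quotient in (E)
    errs.append(norm((u[0] - h[0], u[1] - h[1])))

print(errs[0], errs[-1])             # the error shrinks toward 0
```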

nuuskur
Right, but it assumes ##h_k\to h ##. For the example I've given about the maximal point, I'd like to explicitly show which direction is sequentially feasible. So far, all I've seen in the chapter is a slew of definitions, but there's no lemma or proposition of the form "a (sequentially) feasible direction exists".

I'm kind of getting the same vibe as in algebra: we describe a structure with a whole host of properties, prove a lot of equivalent statements, and then, when it comes to exhibiting examples of said structure, we're left empty-handed.

S.G. Janssens
Right, but it assumes ##h_k\to h ##.
Indeed. What I did is: I read your definition of ##h \neq 0## being a sequentially feasible direction at ##x^{\ast} \in X##. So, I took the involved sequence ##(h_k)## that the definition assumes exists and converges to ##h##. Then I picked an arbitrary vanishing sequence ##(\eta_k)## of positive numbers and checked that the sequence ##(x_k)## defined in terms of ##(h_k)## and ##(\eta_k)## has the behavior (E).

At this point, we are just verifying a simple consequence of the definition of "sequential feasibility". Does this clarify how (E) is obtained from the definition?
For the example I've given about the maximal point, I'd like to explicitly show which direction is sequentially feasible.
Let's give some context. Suppose $x^*\in X$ is not a local maximum point (subject to said constraints). Then, by definition,
$$\forall k\in\mathbb N,\ \exists x_k\in X : \|x^*-x_k\|\leq \frac{1}{k}\quad \&\quad f(x^*) < f(x_k)$$
So, we can construct a sequence of feasible points that converges to $x^*$.
Yes, that looks correct.
May we assume, without loss of generality, that this sequence has a limiting direction? More formally, if we write $x_k = x^* + \eta_k h_k,\ k\in\mathbb N$, may we assume that $\lim_k h_k$ exists in $\mathbb R^n\setminus\{0\}$?
Sorry if I am slow, but I am confused again. Where do the ##h_k## and ##\eta_k## come from? I thought they are provided by the definition of "sequential feasibility", together with the direction ##h## itself. Also, it seems to me that this definition as such does not make any reference to extrema.

For me, the intuition behind the definitions is as follows. In order to solve local optimization problems, we need a way to compare points ##x^{\ast}## to other points nearby. Ideally, in a sense, we would want to be able to approach ##x^{\ast}## in a continuous fashion from all directions, but this requires ##x^{\ast}## to be an interior point of ##X##.

Depending on the constraints that define ##X##, this may not be possible. For example, ##X## may have corners (such that it is only possible to approach corner points from within ##X## along certain directions, but not along others) or ##X## may be a discrete set. I believe that your two definitions ("feasible" and "sequentially feasible") cater for such cases, respectively.
So far, all I've seen in the chapter is a slew of definitions, but there's no lemma or proposition of the form "a (sequentially) feasible direction exists".
One trivial way in which (sequentially) feasible directions exist at any point of ##X##, is of course when ##X## is open. On a more interesting level, I do expect your book (which I unfortunately do not know myself) to formulate certain conditions on the constraints ##g_j## that ensure such existence. Maybe it will still come in a later section?
I'm kind of getting the same vibe as in algebra: we describe a structure with a whole host of properties, prove a lot of equivalent statements, and then, when it comes to exhibiting examples of said structure, we're left empty-handed.
Yes, I know that feeling, although I think that (what I consider) good abstract algebra study books motivate the definitions and properties from the examples, not the other way around. (For monographs or encyclopedic works it may be different, of course.)

nuuskur
Where do the ##h_k## and ##\eta_k## come from?
When we pick a sequence that violates the maximum condition, we can write every sequence element as a sum $x^* +\eta_k h_k$, where $\eta_k >0$ is vanishing and, w.l.o.g., $\|h_k\|=1$. Simply put, the vector ##h_k## points in the direction from which ##x_k## is taken, and the scalar ##\eta_k## scales ##h_k## appropriately.

If $X$ is open or, at the very least, $B(x^*,\delta)\subseteq X$ for some $\delta >0$, then we can of course pick an arbitrary straight line section, and it obviously also provides a sequentially feasible direction. But then, does the non-interior point case become something vacuous?

For instance, if $x^*$ is isolated (formally, $B(x^*,\delta)\cap X = \{x^*\}$ for some $\delta >0$), does $x^*$ automatically become a constrained extremum point?

What happens if it's not an interior point but, say, an accumulation point, i.e., there exist sequences in ##X## that converge to it? So let ##x_k\to x^*##. Write every ##x_k = x^* +\eta_k h_k \in X## for the reasons mentioned earlier. Is it safe to assume the sequence of directions ##h_k## converges?

I've tried exhibiting some trivial counterexamples. For instance, in the 2-dimensional case, suppose ##x^* = 0## and set $x_k = \frac{((-1)^k,0)}{2^k}$. The sequence itself converges, but the sequence of directions keeps oscillating. However, we can extract a converging (in fact constant) subsequence of directions, so that doesn't work as a counterexample.

*lightbulb moment*

We are picking directions in a compact set (the unit sphere). We can immediately extract a converging subsequence, consequently a nonzero sequentially feasible direction.
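The oscillating example from above illustrates this nicely: the directions of ##x_k = ((-1)^k, 0)/2^k## jump between ##(\pm 1, 0)##, but the even-indexed subsequence is constant. A quick sketch (my own):

```python
x_star = (0.0, 0.0)

def direction(x_k):
    # Unit vector from x* toward x_k (here x* = 0, so just normalize x_k).
    n = (x_k[0] ** 2 + x_k[1] ** 2) ** 0.5
    return (x_k[0] / n, x_k[1] / n)

xs = [(((-1) ** k) / 2 ** k, 0.0) for k in range(1, 11)]   # k = 1, ..., 10
dirs = [direction(x) for x in xs]
print(dirs[:4])        # alternates between (-1.0, 0.0) and (1.0, 0.0)
even = dirs[1::2]      # the subsequence with even k
print(all(d == (1.0, 0.0) for d in even))   # True: a constant, hence convergent, subsequence
```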

As I said, probably too trivial, and indeed it is.

S.G. Janssens
When we pick a sequence that violates the maximum condition, we can write every sequence element as a sum $x^* +\eta_k h_k$, where $\eta_k >0$ is vanishing and, w.l.o.g., $\|h_k\|=1$. Simply put, the vector ##h_k## points in the direction from which ##x_k## is taken, and the scalar ##\eta_k## scales ##h_k## appropriately.
Ok, now I understand what you meant earlier, thank you.
*lightbulb moment*

We are picking directions in a compact set (the unit sphere). We can immediately extract a converging subsequence, consequently a nonzero sequentially feasible direction.

As I said, probably too trivial, and indeed it is.
I don't find it that trivial. Sorry for insisting, what you do here is that you start with a sequence ##(x_k)## converging to ##x^{\ast}## and then you define ##(h_k)## and ##(\eta_k)## using ##(x_k)##. Finally, you use compactness to extract a (sub)sequence ##(h_k)## converging to a unit vector ##h##.

Now, I take it that this ##h## should then be a sequentially feasible direction at ##x^{\ast}##. However - if you want to check your definition of sequential feasibility - you would still need to prove that for every vanishing positive sequence ##(\eta_k)## (not necessarily the same sequence that you used to get to ##h## in post #5) the points ##x^{\ast} + \eta_k h_k## all lie in ##X## (are all feasible).

S.G. Janssens
In addition, here are some trivial counterexamples to show that - at least at the great level of generality of your OP - sequentially feasible directions may not exist for any point of ##X##.

1. ##X## is a finite set.
2. ##X = \mathbb{Q}##.

(For 2, select any rational ##x^{\ast}## and let ##(h_k)## be a normalized sequence in ##\mathbb{R}## converging to ##h = \pm 1##. Then ##h_k = 1## or ##h_k = -1## identically for all sufficiently large ##k##. Let ##(\eta_k)## be a vanishing sequence of positive, irrational numbers. Then ##x^{\ast} + \eta_k h_k = x^{\ast} \pm \eta_k## for ##k## sufficiently large, and this is irrational, so not in ##X##.)
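Counterexample 1 can even be checked mechanically: with a finite feasible set (a hypothetical two-point ##X## of my own choosing), every small positive step along any direction leaves ##X##.

```python
X = {(0.0, 0.0), (1.0, 0.0)}   # a finite feasible set in R^2
x_star = (0.0, 0.0)

def step_in_X(h, eta):
    # Is x* + eta*h still a feasible point?
    return (x_star[0] + eta * h[0], x_star[1] + eta * h[1]) in X

# For several unit directions, no vanishing sequence of steps stays in X:
for h in [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0)]:
    print(any(step_in_X(h, 1.0 / 2 ** k) for k in range(1, 20)))   # False each time
```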

I really don't think that there is a problem, though: It seems to me that the authors of your book have continuous optimization in mind, and I think the definitions that you gave are chosen on that basis.

nuuskur
I really don't think that there is a problem, though: It seems to me that the authors of your book have continuous optimization in mind, and I think the definitions that you gave are chosen on that basis.
My mistake. Not only continuous, but even continuously (twice) differentiable.

S.G. Janssens
My mistake. Not only continuous, but even continuously (twice) differentiable.
Ok, very good. Let's review your earlier sequential definition:
Call a direction $h\neq 0$ a sequentially feasible direction at $x^*\in X$ if there exists a sequence $h_k\to h$ such that for every vanishing sequence $\eta _k >0$ it holds that $x^* + \eta_k h_k\in X$.
From what you wrote afterwards, we also assume that ##h## and the ##h_k## are normalized. I also think that after "it holds that..." it is probably meant to write "eventually" (i.e. for ##k## large enough).

Related to what I said at the end of post #6,
Now, I take it that this ##h## should then be a sequentially feasible direction at ##x^{\ast}##. However - if you want to check your definition of sequential feasibility - you would still need to prove that for every vanishing positive sequence ##(\eta_k)## (not necessarily the same sequence that you used to get to ##h## in post #5) the points ##x^{\ast} + \eta_k h_k## all lie in ##X## (are all feasible).
you could consider the following.

If the ##g_j## are linear functionals such that the hyperplanes defined by them intersect nontrivially, then every point in ##X## admits a feasible direction, so it certainly admits a sequentially feasible direction.
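For a concrete instance of the linear case (the values are my own choice): with a single constraint ##a \cdot x = b##, any nonzero ##h## orthogonal to ##a## gives a feasible direction at every feasible point, since ##x^{\ast} + \eta h## remains in the hyperplane for all ##\eta##.

```python
a = (1.0, 1.0, 1.0)   # linear constraint a.x = b in R^3
b = 3.0

def g(x):
    # Constraint function; X = {x : g(x) = 0} is an affine hyperplane.
    return sum(ai * xi for ai, xi in zip(a, x)) - b

x_star = (1.0, 1.0, 1.0)   # feasible: g(x*) = 0
h = (1.0, -1.0, 0.0)       # orthogonal to a, i.e., in the null space of the constraint

# x* + eta*h stays feasible for every step size eta:
print(all(abs(g(tuple(x + eta * hi for x, hi in zip(x_star, h)))) < 1e-12
          for eta in (0.1, 0.5, 1.0, 10.0)))   # True
```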

If we admit nonlinear functionals, then I think it gets more complicated. As an example, take ##\mathbb{R}^2## with the single constraint ##g(x) = \|x\|^2 - 1##, so ##X## is just the unit sphere. Take a point ##x^{\ast} \in X## and let ##h## be a normalized direction, so in fact ##h \in X## as well. Consider a sequence ##(h_k)## of normalized vectors such that ##h_k \to h##. Since ##\|x^{\ast} + \eta h_k\|^2 = 1 + 2\eta \langle x^{\ast}, h_k\rangle + \eta^2##, the point ##x^{\ast} + \eta_k h_k## lies on the unit sphere for at most one positive value of ##\eta_k##, so for a generic vanishing sequence ##(\eta_k)## these points lie off the sphere, hence outside ##X##. Therefore ##x^{\ast}## does not have a sequentially feasible direction.
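One can verify numerically that small steps leave the circle in this example (a quick sketch; the tangent direction at ##(1, 0)## is my own choice):

```python
import math

x_star = (1.0, 0.0)   # a point on the unit circle X
h = (0.0, 1.0)        # a normalized direction (here tangent to the circle at x*)

def off_circle(eta):
    # Does x* + eta*h violate the constraint ||x||^2 - 1 = 0?
    p = (x_star[0] + eta * h[0], x_star[1] + eta * h[1])
    return abs(math.hypot(p[0], p[1]) - 1.0) > 1e-12

# Every small positive step lands off the circle, hence outside X:
print(all(off_circle(1.0 / 2 ** k) for k in range(1, 16)))   # True
```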

Are your definitions aimed at problems with linear equality (and perhaps inequality) constraints?

Constraints are usually sufficiently smooth, say, once or twice continuously differentiable. In the general case, yes, inequality constraints can also be considered, but I'm trying to work my way through equality constraints for now.

S.G. Janssens