Optimizing Vector Magnitude through 180-Degree Rotations

In summary, Jeff is trying to choose a sign (±1) for each of a set of equal-magnitude 2D vectors so that the magnitude of their sum is maximized. A greedy method that processes the vectors in a fixed, pre-determined order was suggested, but it fails because the best choice for each vector depends on the vectors that came before it.
  • #1
Jeff.Nevington
Wasn't sure whether to post this in the computing section or not but here goes anyway,

I have n 2D vectors (all with the same magnitude).
For each vector I can choose whether or not to multiply it by -1 (spinning it through 180 degrees).
All of the vectors are then added together.
The objective is to maximize the magnitude of the final vector; the only variables are the choices of whether or not to multiply each vector by -1.

Here is an example,
I have 2 vectors, both magnitude 1, one at 30° and one at -80° from the horizontal axis. Adding both of these together gives a resultant at -25° with magnitude 1.15. However, if I flip the -80° vector (spin it through 180°), it becomes 100°. The new resultant vector is at 65° with magnitude 1.64.
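For concreteness, the arithmetic in this example can be checked numerically; here is a short Python sketch (mine, not part of the original post):

```python
import math

def resultant(angles_deg):
    """Sum unit vectors given by angles in degrees; return (magnitude, angle in degrees)."""
    x = sum(math.cos(math.radians(a)) for a in angles_deg)
    y = sum(math.sin(math.radians(a)) for a in angles_deg)
    return math.hypot(x, y), math.degrees(math.atan2(y, x))

print(resultant([30, -80]))   # unflipped pair: magnitude ~1.15 at ~ -25 degrees
print(resultant([30, 100]))   # -80 flipped through 180 degrees: magnitude ~1.64 at ~65 degrees
```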

It is easy to check all combinations for a small number of vectors, but with 2^n sign choices it soon becomes infeasible.
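The brute-force check over all 2^n sign assignments can be sketched as follows (my code, not Jeff's):

```python
import itertools
import math

def best_by_brute_force(angles_deg):
    """Try every +/-1 sign assignment and return the largest resultant magnitude."""
    vecs = [(math.cos(math.radians(a)), math.sin(math.radians(a))) for a in angles_deg]
    best = 0.0
    for signs in itertools.product((1, -1), repeat=len(vecs)):
        x = sum(s * vx for s, (vx, _) in zip(signs, vecs))
        y = sum(s * vy for s, (_, vy) in zip(signs, vecs))
        best = max(best, math.hypot(x, y))
    return best
```

The loop body is cheap, but the 2^n outer loop is exactly the exponential blow-up described above.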

The main observation that I keep coming back to is that the objective should be to minimize the sum of the differences between the angles (i.e. the maximum magnitude will be achieved when the vectors change direction the least amount - trying to make them as similar to a straight line as possible). I am having trouble figuring out how to write the problem down mathematically so it makes sense.

Does anyone have any ideas on this? I feel like it should be really quite simple to solve and just requires some good math common sense but I have been fiddling with it for days now and I am going around in circles (literally).

I should point out that I do have a solution that seems to work but is very messy. I pick a single point far away from the origin (further than if all the vectors made a straight line) and then find the arrangement of vectors that gives the minimum distance to that point (this is a convex linear programming problem). I then move the point through several degrees relative to the origin and repeat the process. I keep going until the point has made a full circle. I then find which solution gave the largest magnitude (there are always at least 2 because for every solution there is a symmetrical one). Although I am pretty certain this approach finds the maximum magnitude it seems very messy to me and I cannot prove that it works mathematically.
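One way to write the sweep idea down concretely: for each trial direction, flip every vector whose projection onto that direction is negative, and keep the best resultant seen over a full circle. This is my own sketch of that approach, not Jeff's actual procedure:

```python
import math

def best_by_direction_sweep(angles_deg, steps=3600):
    """For each trial direction around the circle, flip every vector whose
    projection onto that direction is negative, then return the largest
    resultant magnitude found."""
    vecs = [(math.cos(math.radians(a)), math.sin(math.radians(a))) for a in angles_deg]
    best = 0.0
    for k in range(steps):
        t = 2 * math.pi * k / steps
        ux, uy = math.cos(t), math.sin(t)   # trial unit direction
        x = y = 0.0
        for vx, vy in vecs:
            s = 1.0 if vx * ux + vy * uy >= 0 else -1.0
            x += s * vx
            y += s * vy
        best = max(best, math.hypot(x, y))
    return best
```

The sign pattern only changes when the trial direction crosses a line perpendicular to one of the input vectors, so in principle only n such critical directions need checking, which would make this an exact polynomial-time method rather than a fine-grained sweep.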

Any help greatly appreciated,
Jeff
 
  • #2
Someone correct me if I'm wrong, but the way I see it, in order to maximize your sum you have to maximize the inner (scalar) product of the vectors, [itex]\vec{a}\cdot\vec{b}=|\vec{a}|\,|\vec{b}|\cos(\alpha-\beta)[/itex]. Each time you add a new vector, just check whether [itex]\cos(\alpha-\beta)[/itex] is positive (since magnitudes can't be negative), and if not, flip its direction. This guarantees that the scalar product will be positive, since [itex]\cos(x+\pi)=-\cos(x)[/itex] for any x.
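As a sketch, the greedy procedure described here might look like this in Python (my reading of the suggestion, not the poster's code):

```python
import math

def greedy_sum(angles_deg):
    """Accumulate vectors one at a time, flipping each new vector whenever its
    dot product with the running sum is negative."""
    x = y = 0.0
    for a in angles_deg:
        vx, vy = math.cos(math.radians(a)), math.sin(math.radians(a))
        if x * vx + y * vy < 0:   # cos(alpha - beta) < 0, so flip through 180 degrees
            vx, vy = -vx, -vy
        x += vx
        y += vy
    return math.hypot(x, y)
```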
 
  • #3
Thank you for your response!

Your proposed method would work perfectly if the list of vectors had a fixed, pre-determined order in which they were fed into your algorithm. If I understand correctly, you suggest taking the first two vectors, [itex]\vec{a}[/itex] and [itex]\vec{b}[/itex], choosing the direction of [itex]\vec{b}[/itex] that maximizes the dot product, and then forming the resultant. This new vector is then checked against the next vector in the sequence, and so on until all the vectors have been checked.

The problem is that the chosen direction for [itex]\vec{b}[/itex] (the one that makes [itex]\cos(\alpha-\beta)[/itex] positive) is dependent on all of the vectors that came before it.

In the example I gave in the original post with two vectors, this method works fine, giving a resultant of magnitude 1.64 at 65°. However, if I add a third vector of magnitude 1 at -37.6° and maximize the dot product again, the third vector gets flipped and the final resultant has magnitude 2.10 at about 93°. This is not the highest possible magnitude for these three vectors. If I instead checked the third vector (-37.6°) against the second vector (-80°) first and the first vector (30°) last, no flips would occur and I would end up with a resultant of magnitude 2.13 at about -31°.
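The order dependence is easy to demonstrate numerically. Here is a small check using the greedy flip rule from post #2, restated so the snippet is self-contained (my sketch, not code from the thread):

```python
import math

def greedy_sum(angles_deg):
    """Accumulate vectors in the given order, flipping each new vector whenever
    its dot product with the running sum is negative."""
    x = y = 0.0
    for a in angles_deg:
        vx, vy = math.cos(math.radians(a)), math.sin(math.radians(a))
        if x * vx + y * vy < 0:
            vx, vy = -vx, -vy
        x += vx
        y += vy
    return math.hypot(x, y)

print(greedy_sum([30, -80, -37.6]))   # ~2.10: the third vector gets flipped
print(greedy_sum([-37.6, -80, 30]))   # ~2.13: no flips, and a larger resultant
```

Same three vectors, two processing orders, two different answers: the greedy rule is not order-independent.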

Maybe what I need is for [itex]\alpha[/itex] to remain constant in your algorithm. If there were an [itex]\alpha[/itex] that represented the overall trend of the vectors, I could check all of the vectors against it, and the order in which I checked them would have no effect.
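For a fixed trend angle [itex]\alpha[/itex], the flip decision for each vector depends only on [itex]\alpha[/itex], so the result is indeed order-independent; the open question is how to find the right [itex]\alpha[/itex]. A minimal sketch of this fixed-trend check (the function name is my own, hypothetical):

```python
import math

def sum_against_trend(angles_deg, alpha_deg):
    """Flip each vector independently so that cos(angle - alpha) >= 0,
    then return the magnitude of the sum. Input order is irrelevant."""
    x = y = 0.0
    for a in angles_deg:
        s = 1.0 if math.cos(math.radians(a - alpha_deg)) >= 0 else -1.0
        x += s * math.cos(math.radians(a))
        y += s * math.sin(math.radians(a))
    return math.hypot(x, y)
```

Maximizing this quantity over [itex]\alpha[/itex] recovers the sweep described in the first post; intuitively, at the optimum [itex]\alpha[/itex] coincides with the direction of the resultant itself.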

Thanks again for the reply,

Jeff
 

What is a vector optimization problem?

A vector optimization problem is a type of mathematical optimization problem in which the objective function and constraints involve vectors instead of just scalar quantities. It involves finding the optimal values for a set of decision variables in order to maximize or minimize a vector objective function.

What is the difference between a scalar and a vector optimization problem?

In a scalar optimization problem, the objective function and constraints involve only scalar quantities, while in a vector optimization problem, they involve vectors. This means that the decision variables and the optimal solution will also be vectors in a vector optimization problem, whereas they will be scalars in a scalar optimization problem.

What are some real-world applications of vector optimization problems?

Vector optimization problems have many applications in fields such as engineering, economics, and computer science. Some examples include portfolio optimization in finance, resource allocation in logistics, and route optimization in transportation.

How are vector optimization problems solved?

There are various methods for solving vector optimization problems, including gradient-based methods, evolutionary algorithms, and mathematical programming techniques. The most appropriate method depends on the specific problem and its characteristics.

What are the challenges in solving vector optimization problems?

One of the main challenges in solving vector optimization problems is the complexity that arises from dealing with multiple objectives and constraints. This can make it difficult to find a single optimal solution and may require trade-offs between conflicting objectives. Another challenge is the computational cost, as vector optimization problems often involve high-dimensional decision variables and complex objective functions.
