Points and Lines: Computationally fast

  • Thread starter: _Nate_
  • Tags: Lines, Points
AI Thread Summary
The discussion centers on developing a physics engine for a video game, focusing on the need to decompose force vectors into translational and rotational components efficiently. The user seeks a method to perform this decomposition without using square roots, sine, or cosine functions, emphasizing the importance of computational speed due to high-frequency calculations. Participants suggest considering the actual performance of the engine before optimizing and highlight the potential complexity and bugs introduced by premature optimization. They recommend using vector algebra techniques like dot and cross products, which may offer faster alternatives to trigonometric functions. Ultimately, the consensus is to prioritize functionality first and optimize later, while also exploring test programs to measure calculation speeds.
_Nate_
I am working on a basic physics engine for a video game. In it, forces are applied to models, and the models respond to those forces through some physics calculations.

Whenever a force is applied to a model, the force takes the form of a vector originating from a point in space. That vector must be decomposed into two components: the component pointing from the vector's origin toward the object's center of mass is added to the model's translational force vector, while the component perpendicular to the first goes into the rotational force.

I need a way to decompose these vectors along two basis directions: one pointing at the center of mass of the object, and one perpendicular to it. The catch is that, because I am doing this many times per second, the calculation needs to be very fast.
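
To make it concrete: if ##d## is the vector from the point where the force ##F## is applied to the center of mass, the straightforward decomposition is
$$ \hat d = \frac{d}{\lVert d \rVert}, \qquad F_{\text{trans}} = (F \cdot \hat d)\,\hat d, \qquad F_{\text{rot}} = F - F_{\text{trans}}, $$
and the ##\lVert d \rVert = \sqrt{d \cdot d}## in the denominator is exactly the kind of square root I'd like to avoid.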

This problem is very easily solved, but I would like to know if there is any way to solve it *without* using square root, sin, or cos. Is this possible?

Keep in mind that if a formula dictates I find sqrt(a^2 + a^2), that isn't really forcing me to use the square root function, because this is simply sqrt(2) * |a|, and sqrt(2) is a constant. So sqrt, sin, and cos can be used at computationally fast speeds so long as their arguments are constant regardless of the vector used, since those values can be precomputed.

Is this possible?

Here are a couple of other versions of the problem that, if solved, can be used to solve the problem as a whole:

You have a point c and a line. Find the two points on the line such that their distance from c is equal to a given value r.

You have a circle centered at c with radius r. A ray starts at a point on the circle. Find where the ray intersects the circle again.

Anyone have any ideas?
 
First question: have you actually implemented the engine and profiled it? Do you actually know that your calculations are too slow? If so, are you sure that the problem is with calling these numeric functions, rather than other facets of your algorithm?

Second question: how accurate do the calculations need to be? There are ways to quickly compute good approximations to commonly used functions; these might be sufficient for your purposes.
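
These approximations won't be exact, but for game physics they are often good enough. As one illustration (a sketch only, not something specific to your engine -- the function name is made up, the magic constant is the well-known one): a rough reciprocal square root can be computed from a bit-level initial guess refined by a single Newton step.

Code:
#include <cstdint>
#include <cstring>

// Approximate 1/sqrt(x): bit-trick initial guess plus one Newton-Raphson step.
float fast_inv_sqrt(float x)
{
    float half = 0.5f * x;
    std::uint32_t bits;
    std::memcpy(&bits, &x, sizeof bits);   // reinterpret the float's bit pattern
    bits = 0x5f3759dfu - (bits >> 1);      // magic-constant initial guess
    float y;
    std::memcpy(&y, &bits, sizeof y);
    return y * (1.5f - half * y * y);      // one Newton iteration refines the guess
}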

Third question: are you familiar with vector algebra? (e.g. dot products, cross products, etc)
 
1. No -- I thought it would be safe to assume that calculations happening at this frequency would be likely to create a bottleneck, and thus wanted to evaluate my possibilities before getting too deep in the project.

2. This will be the tradeoff -- I'll probably implement both and allow the user to choose the degree of precision (at the expense of speed)

3. Yes, quite.

Thanks.
 
_Nate_ said:
1. No -- I thought it would be safe to assume that calculations happening at this frequency would be likely to create a bottleneck, and thus wanted to evaluate my possibilities before getting too deep in the project.
Some famous quotes from computer science:

“More computing sins are committed in the name of efficiency (without necessarily achieving it) than for any other single reason - including blind stupidity.”

“premature optimization is the root of all evil.”

“Bottlenecks occur in surprising places, so don't try to second guess and put in a speed hack until you have proven that's where the bottleneck is.”


Thinking about efficiency is a good thing -- but optimization really belongs near the end of the development process. IMHO, the most important reason is that optimizations make your code more complex; this means it will take longer to design and develop, it will be more difficult to understand, and it will contain more bugs. (As a side benefit, deferring optimization also helps you avoid wasting time optimizing where none is needed.) It is better in almost every way to first focus on getting your code working, and only later focus on making it fast.


If you want to insist on thinking about speed now, then I suggest trying toy programs -- write short little test programs that allow you to estimate how long it takes to perform various calculations, and thus how many operations per second you can perform. Then, analyze your engine design to see how the numbers compare.
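
Something along these lines, for instance (a rough sketch; the absolute numbers will depend heavily on your compiler, optimization flags, and hardware):

Code:
#include <chrono>
#include <cmath>
#include <cstdio>

// Crude micro-benchmark: time a few million sqrt and sin calls.
// The running sum is printed so the compiler can't discard the loops.
int main()
{
    const int N = 10000000;
    double sum = 0.0;

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 1; i <= N; ++i)
        sum += std::sqrt(static_cast<double>(i));
    auto t1 = std::chrono::steady_clock::now();

    for (int i = 1; i <= N; ++i)
        sum += std::sin(static_cast<double>(i));
    auto t2 = std::chrono::steady_clock::now();

    std::chrono::duration<double> sqrt_s = t1 - t0;
    std::chrono::duration<double> sin_s  = t2 - t1;
    std::printf("sqrt: %.3f s   sin: %.3f s   (checksum %g)\n",
                sqrt_s.count(), sin_s.count(), sum);
    return 0;
}

Dividing N by the measured times gives you calls per second, which you can compare against how many force applications per frame your engine design actually needs.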


_Nate_ said:
3. Yes, quite.
I would expect that working with dot and cross products would be more efficient than working with sines and cosines, except in certain special cases. I doubt you can avoid square roots entirely (though consider working through the algebra to see if you can delay them -- it might be possible to cancel them out, or gather a bunch together to do at once)... but I expect square roots to be faster than trig functions.
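
For instance, the translational/rotational split you describe can be written with dot products alone: projecting onto the direction toward the center of mass only needs (F . d) / (d . d), so that direction never has to be normalized. A rough sketch in 2D (made-up names, just to show the idea):

Code:
struct Vec2 { double x, y; };

inline double dot(Vec2 a, Vec2 b)     { return a.x * b.x + a.y * b.y; }
inline Vec2   sub(Vec2 a, Vec2 b)     { return {a.x - b.x, a.y - b.y}; }
inline Vec2   scale(Vec2 a, double s) { return {a.x * s, a.y * s}; }

// Split a force F applied at point p into the component along the direction
// toward the center of mass c (translational) and the perpendicular remainder
// (rotational). No sqrt, sin, or cos: the projection uses (F . d) / (d . d)
// instead of a unit vector.
void decompose(Vec2 F, Vec2 p, Vec2 c, Vec2& trans, Vec2& rot)
{
    Vec2 d = sub(c, p);                  // from application point to center of mass
    double dd = dot(d, d);               // squared length -- no square root
    double k = (dd > 0.0) ? dot(F, d) / dd : 0.0;
    trans = scale(d, k);                 // component along d
    rot   = sub(F, trans);               // perpendicular component
}

Square roots only reappear if you later need actual magnitudes rather than components.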
 