GRAPHIC CONTENT
by Jeff Lander

Lone Game Developer Battles Physics Simulator

As a real-time 3D graphics developer, I need to wage many battles. I fight with artists over polygon counts, with graphics card manufacturers over incomplete or incorrect drivers, and with some producers’ tendencies to continuously expand feature lists.
However, some of the greatest battles I have fought have been with myself. I fight to bring back the knowledge I have long since forgotten. I fight my desire to play the latest action game when more pressing needs are at hand (deadlines, the semblance of a social life).

This month I document one of the less glamorous battles — the battle of the physics simulator. It’s not going to be fun. It’s going to be a bit bloody. However, if I ever hope to achieve a realistic and interesting physics simulation, it’s a battle that must be fought. So, my brave warriors, join me. Sharpen your pencils, stock your first-aid kit with plenty of aspirin, drag out the calculus book, and fire up the coffeepot. Let’s get started.

I hope you all had a chance to play around with the soft body dynamics simulator from last month. The demo highlighted an interesting problem — the need for stability. While creating my dynamics simulation, I waged a constant battle for stability. However, in order to wage the war effectively, I need to understand the roots of the instability in the system. Last month, I implied that the problem resulted from my use of a simple Euler integrator. But I didn’t really explain why that caused the problem. Let me fix that right now.

Integrators and You

Many game programmers never realize that when they create the physics model for their game, they are using differential equations. One of my first programs on the Apple II was a spaceship flying around the screen. My “physics” loop looked like this:

ShipPosition = ShipPosition + ShipVelocity;
ShipVelocity = ShipVelocity + ShipAcceleration;

Look familiar to anyone? It’s a pretty simple physics model, but it turns out that even here I was integrating. If you look at the Euler integrator from last month, I had

Position = Position + (DeltaTime * Velocity);
Velocity = Velocity + (DeltaTime * Force * OneOverMass);

Now for my simple physics model, DeltaTime = 1 and Mass = 1. Guess what? I was integrating with Euler’s method and didn’t even know it. If I had made this Apple II physics model any more complex, this integrator could have blown up on me. These sorts of problems can be difficult to track down, so it’s important to understand the causes.

When Things Go Wrong

The reason that the Euler integrator can blow up is that it’s an approximation. I’m trying to solve a differential equation by using an iterative numerical method. The approximation can differ from the true value and cause error. When this error gets too large, the simulation can fail. A concrete example may help to explain. Last month, I added a viscous drag force to the simulation to add stability. The formula for this force was

$$F_d = -k_d V \qquad \text{(Eq. 1)}$$

In this formula, $k_d$ represents the coefficient of drag that is multiplied by the velocity of the particle. This coefficient determines how fast the velocity of the object is dragged down to zero. This is a very simple differential equation. In fact, it’s simple enough to be solved for $V$ directly, and I can use this exact solution to check the accuracy of my numerical integrator:

$$V = e^{-k_d t} \qquad \text{(Eq. 2)}$$

Euler’s method is used to approximate the integral curve of Equation 2 with a series of line segments along this path. Each step along this path is taken every time interval $h$, via the formula

$$w_{i+1} = w_i + h\,f(t_i, w_i)$$
$$V_{i+1} = V_i + h\,(-k_d V_i) \qquad \text{(Eq. 3)}$$

In all cases, the viscous drag force should approach zero. However, the size of the step $h$ and the coefficient of drag $k_d$ determine how well the approximation performs. Take a look at Figure 1.

With the given step size and drag coefficient, Euler’s method may not be a great approximation, but it gives the desired result: the velocity converges on zero. But take a look at the relationship between the step size and drag coefficient in Equation 3. If $h > 1/k_d$, then the approximation step will overshoot zero, as you can see in Figure 2.
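To make that overshoot condition explicit (a quick piece of algebra I am adding here; it falls straight out of Equation 3), note that each Euler step of the drag equation simply scales the current velocity by a constant factor:

$$V_{i+1} = V_i + h(-k_d V_i) = (1 - h\,k_d)\,V_i$$

When $h > 1/k_d$, the factor $(1 - h k_d)$ is negative, so a step does not merely shrink the velocity toward zero; it carries it past zero to the opposite sign.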



By increasing the step size, I was trying to get a system that converged to zero more quickly — but I got something entirely different. Things really start to get bad when the drag coefficient increases more, as in Figure 3. As each step is taken, not only does the approximation oscillate across zero, it actually diverges from zero and eventually explodes the system. This is exactly what was happening in the spring demonstration from last month, when the box blew up.

Figure 1. A decent approximation. (Velocity over time, actual solution versus Euler; step size = 0.8, kd = 0.8.)
Figure 2. This looks a lot worse. (Velocity over time, actual solution versus Euler; the approximation overshoots zero.)
Figure 3. Kaboom! (Velocity over time, actual solution versus Euler; the approximation oscillates and diverges.)
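Here is a minimal C sketch of that behavior. To be clear, this is illustrative code added for this write-up rather than the sample application’s source; the step size, drag coefficient, and initial velocity of 50 are arbitrary values chosen to mimic the figures.

#include <math.h>
#include <stdio.h>

/* Integrate the viscous drag equation dV/dt = -kd * V with Euler's method
 * (Eq. 3) and print each estimate next to the exact solution V0 * exp(-kd*t)
 * (Eq. 2 with an initial velocity V0). The behavior depends on h * kd:
 *   h * kd < 1       smooth decay toward zero        (like Figure 1)
 *   1 < h * kd < 2   overshoots zero and oscillates  (like Figure 2)
 *   h * kd > 2       oscillates and diverges         (like Figure 3) */
int main(void)
{
    const double kd = 0.8;   /* drag coefficient              */
    const double h  = 0.8;   /* step size; try 1.5 or 3.0 too */
    const double v0 = 50.0;  /* arbitrary initial velocity    */

    double v = v0;
    for (int i = 0; i <= 10; ++i) {
        double t     = i * h;
        double exact = v0 * exp(-kd * t);
        printf("t = %5.2f   euler = %12.4f   exact = %10.4f\n", t, v, exact);
        v = v + h * (-kd * v);   /* Euler step: V(i+1) = V(i) + h * (-kd * V(i)) */
    }
    return 0;
}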

How Can I Prevent Explosions?

If you find a situation where your simulator blows up, there’s an easy way to see if this kind of numerical instability is the cause: reduce the step size. If you reduce the size of the step and the simulation works, then this numerical instability is the problem.

The easy solution is always to take small steps. However, realize that each step requires quite a few calculations. The simulation will run faster if it can take fairly large step sizes. Unfortunately, when you get lots of objects interacting, these instability problems appear even more. So, just when things start to get interesting, you need to reduce the step size and slow things down. I’d rather create an integrator that would allow me to take large step sizes without sacrificing stability. To do this, I need to look at the origins of Euler’s method.

Taylor’s Theorem

You may remember Taylor’s Theorem from calculus. It’s named after mathematician Brook Taylor’s work in the eighteenth century. This theorem describes a method for converging on the solution to a differential equation.

$$f(x) = P_n(x) + R_n(x)$$
$$P_n(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{f''(x_0)}{2!}(x - x_0)^2 + \cdots + \frac{f^{(n)}(x_0)}{n!}(x - x_0)^n$$
$$R_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}(x - x_0)^{n+1}, \qquad x_0 < \xi < x \qquad \text{(Eq. 4)}$$

In Equation 4, $P_n(x)$ represents the $n$th Taylor polynomial. If you take the limit of $P_n(x)$ as $n \to \infty$, you get the Taylor series for the function. If, however, the infinite series is not calculated and the series is actually truncated, $R_n(x)$ represents the error in the system. This error is called the truncation error of approximation.

How does this apply to the problem with which we are working? If I only look at the first Taylor polynomial and do some substitution — setting $x = t_{i+1}$, $x_0 = t_i$, and writing the differential equation as $w'(t) = f(t, w(t))$ — I get Equation 5.

$$h = x - x_0$$
$$w'(t) = f(t, w(t))$$
$$w(t_{i+1}) = w(t_i) + h\,f(t_i, w(t_i)) + \frac{h^2}{2}\,w''(\xi) \qquad \text{(Eq. 5)}$$

Notice how similar this equation is to Equation 3. In fact, Euler’s method is based on this equation; the only difference is that Euler’s method drops the last error term of Equation 5. By stopping the series at the second term, I get a truncation error of order two. This gives Euler’s method an error of $O(h^2)$.


If I added another term of the Taylor series to the equation, I could reduce the error to $O(h^3)$. However, to compute this exactly, I would need to evaluate the next derivative of $f(x)$. To avoid this calculation, I can do another Taylor expansion and approximate this derivative as well. While this approximation increases the error slightly, it preserves the error bounds of the Taylor method. This method of expansion and substitution is known as the Runge-Kutta technique for solving differential equations. The first expansion beyond Euler’s method is known as the Midpoint method, or RK2 (Runge-Kutta order 2), and is given in Equation 6. It’s called the Midpoint method because it uses the Euler approximation to move to the midpoint of the step and evaluates the function at that new point. It then steps back and takes the full time step with this midpoint approximation.

$$w_{i+1} = w_i + h\,f\!\left(t_i + \frac{h}{2},\; w_i + \frac{h}{2}\,f(t_i, w_i)\right) + O(h^3) \qquad \text{(Eq. 6)}$$
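Written out in C, a single midpoint step looks like the sketch below. This is an illustration I am adding rather than the column’s demo source, and the type and function names (DerivFn, midpoint_step, drag) are simply ones invented for it.

/* One midpoint (RK2) step for a scalar ODE w' = f(t, w), per Equation 6. */
typedef double (*DerivFn)(double t, double w);

double midpoint_step(DerivFn f, double t, double w, double h)
{
    double k  = f(t, w);               /* slope at the start of the step     */
    double wm = w + 0.5 * h * k;       /* Euler half-step to the midpoint    */
    return w + h * f(t + 0.5 * h, wm); /* full step using the midpoint slope */
}

/* Example derivative: the viscous drag of Equation 1, dV/dt = -kd * V. */
#define KD 0.8
double drag(double t, double v) { (void)t; return -KD * v; }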
In fact, I can continue to add Taylor terms to the equation using the Runge-Kutta technique to reduce the error further. Each expansion requires more evaluations per step, so there is a point at which the calculations outweigh the benefit. I don’t have the space to get into it here; however, I understand that smaller step sizes are preferred over methods above RK4, with an error of $O(h^5)$ (Faires & Burden, p. 195). Runge-Kutta order 4 is outlined in Equation 7.

$$k_1 = h\,f(t_i, w_i)$$
$$k_2 = h\,f\!\left(t_i + \tfrac{h}{2},\; w_i + \tfrac{1}{2}k_1\right)$$
$$k_3 = h\,f\!\left(t_i + \tfrac{h}{2},\; w_i + \tfrac{1}{2}k_2\right)$$
$$k_4 = h\,f(t_i + h,\; w_i + k_3)$$
$$w_{i+1} = w_i + \tfrac{1}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right) + O(h^5) \qquad \text{(Eq. 7)}$$
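Equation 7 translates almost line for line into code. Again, this is an illustrative sketch using the same DerivFn convention as the midpoint example above, not the demo application’s source.

/* One classical fourth-order Runge-Kutta step (Eq. 7) for w' = f(t, w). */
double rk4_step(DerivFn f, double t, double w, double h)
{
    double k1 = h * f(t,           w);
    double k2 = h * f(t + 0.5 * h, w + 0.5 * k1);
    double k3 = h * f(t + 0.5 * h, w + 0.5 * k2);
    double k4 = h * f(t + h,       w + k3);
    return w + (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0;
}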
many, as it is available in electronic
face of the field. The integrators I’ve
form. See http://www.nr.com but also
(Eq. 7) described (all explicit one-step meth-
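The step-doubling test can be outlined in a few lines of C. This is only a sketch of the idea described above, not anyone’s production controller; the tolerance handling and the halve/double factors are arbitrary choices, and it reuses the DerivFn and rk4_step sketches from earlier.

#include <math.h>

/* Advance one step from *t with step-doubling error control: take one full
 * step, take two half steps, and compare. Shrink h and retry if they disagree
 * too much; allow a larger h next time if they agree very closely. */
double adaptive_step(DerivFn f, double *t, double w, double *h, double tol)
{
    for (;;) {
        double full = rk4_step(f, *t, w, *h);             /* one full step  */
        double half = rk4_step(f, *t, w, 0.5 * *h);       /* two half steps */
        half        = rk4_step(f, *t + 0.5 * *h, half, 0.5 * *h);

        if (fabs(full - half) <= tol) {                   /* accept the step */
            *t += *h;
            if (fabs(full - half) < 0.1 * tol)
                *h *= 2.0;    /* results agree closely: try a larger step next time */
            return half;      /* the two half steps are the more accurate estimate  */
        }
        *h *= 0.5;            /* too much error: shrink the step and try again      */
    }
}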
Other Techniques

Differential equations are not easy to learn and understand. However, the programmer who pursues this knowledge has many weapons in his arsenal. As witnessed by the birthdates of Euler and Taylor, this research has been going on for centuries. If you ignore this work and strike out on your own, you’re doing yourself a great disservice. Knowledge is available to the developer as never before. While working on these algorithms, I was able to cross-check formulas and techniques in many different sources.

In fact, I’ve barely scratched the surface of the field. The integrators I’ve described (all explicit one-step methods) represent only a subset of the methods available to the programmer. Implicit integrators will also work. For example, an implicit Runge-Kutta integrator trades greater computation per step for greater stability in particularly difficult differential equations. Also, the one-step nature of these integrators reflects the fact that the method does not consider any trends in the past when calculating a new value.

In addition to these one-step methods, there are also multistep methods, extrapolation algorithms, predictor-corrector methods, and certainly many others. Clearly, there is plenty of ground for the adventurous programmer to explore. The book I used, Numerical Algorithms with C, does a good job of comparing different methods under a variety of test conditions.
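To give a flavor of what “implicit” buys you, consider the drag equation once more (an aside added here, not something from the column). A backward Euler step evaluates the derivative at the end of the step rather than the beginning; because the drag force is linear in velocity, that step can be solved in closed form, and it stays stable for any positive step size.

/* Implicit (backward) Euler for dV/dt = -kd * V. Solving
 *   V(i+1) = V(i) + h * (-kd * V(i+1))
 * for V(i+1) gives the update below. The factor 1 / (1 + h*kd) lies between
 * 0 and 1 for any h > 0, so the velocity always decays toward zero. */
double implicit_euler_drag(double v, double h, double kd)
{
    return v / (1.0 + h * kd);
}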
For this month’s sample application (available from Game Developer’s web site), I have implemented both the midpoint method and Runge-Kutta order 4 in the dynamic simulation from last month. You can switch between integrators and adjust the step size and simulation variables to get a feel for how each performs. ■

Jeff is the technical director of Darwin 3D where he spends time calculating his rate of procrastination with respect to his articles. E-mail optimization suggestions to [email protected].

FOR FURTHER INFO

In addition to the references cited last month, a couple of other sources proved very valuable during this article.

• Faires, J. Douglas, and Richard Burden. Numerical Methods. Second edition. Pacific Grove, California: Brooks/Cole, 1998. This book provided a great discussion of measuring error in numerical solutions. It also contains a great deal of source code for all the algorithms.
• Engeln-Müllges, Gisela, and Frank Uhlig. Numerical Algorithms with C. New York, New York: Springer-Verlag, 1996. In addition to the fine sections on the methods discussed in this column, this book describes and compares a great number of other numerical methods. Additionally, the book has a great number of references to articles on the topic.
• Press, William H., et al. Numerical Recipes in C. Cambridge, England: Cambridge University Press, 1998. While not as strong a reference on these topics, this book may be interesting to many, as it is available in electronic form. See http://www.nr.com, but also check out a critical discussion of it at http://math.jpl.nasa.gov/nr/nr.html.
