
Dynamic Analysis of Structures for the Finite Element Method

Francisco J. Montáns, Iván Muñoz

May 7, 2013
Contents

I Fundamentals of dynamics of structures 1

1 Single-Degree-of-Freedom Systems 3
1.1 Introduction 3
1.2 Free undamped vibrations 6
1.2.1 Equation of motion 6
1.2.2 Harmonic motion: more formal approach solution 7
1.3 Free damped vibrations 10
1.3.1 Underdamped motion 12
1.3.2 Overdamped motion 15
1.3.3 Critically damped motion 15
1.4 Response to harmonic excitation 16
1.4.1 Undamped system 17
1.4.2 Damped system 19
1.4.3 Dynamic response factor 25
1.4.4 Frequency response function method 26
1.4.5 Laplace transform based analysis 28
1.5 General forced response 28
1.5.1 Impulse response function 29
1.5.2 Response to an arbitrary excitation 30

2 Multi-Degree-of-Freedom Systems 33
2.1 Introduction 33
2.1.1 Two-Degree-of-Freedom system 34
2.1.2 Mathematical modeling of damping 36
2.1.3 Solutions of the equation of motion 37
2.2 Vibration absorber: an application of Two-Degree-of-Freedom systems 38
2.2.1 The undamped vibration absorber 41
2.2.2 The damped vibration absorber 43
2.3 Free undamped vibrations of MDOF systems 47
2.3.1 Natural frequencies and mode shapes 48
2.3.2 Orthogonality of mode shapes 49
2.3.3 Modal matrices 51
2.3.4 Normalization of modes 52
2.3.5 Response of undamped MDOF systems 53
2.4 Free damped vibrations of MDOF systems 56
2.5 Response of MDOF systems under arbitrary loads 59
2.6 Response of MDOF systems under harmonic loads 60
2.7 Systems with distributed mass and stiffness 64
2.7.1 Vibration of beams 65
2.7.2 Vibration of plates 72
2.8 Component mode synthesis 76
2.8.1 The fixed interface method 76
2.8.2 The free interface method 78

3 Introduction to signal analysis 81
3.1 Introduction to signal types 81
3.1.1 Deterministic signals 81
3.1.2 Random signals 82
3.2 Fourier Analysis of signals 82
3.2.1 The Fourier Series 82
3.2.2 The Fourier integral transform 83
3.2.3 Digital signals 84
3.3 State-space analysis 85

II Finite element procedures for the dynamic analysis of structures 89

4 Finite element discretization of continuous systems. Stiffness and Mass matrices 91
4.1 Stiffness matrix 91
4.1.1 Beam elements 92
4.1.2 Continuum elements 95
4.2 Mass matrices 99
4.2.1 Consistent mass matrix 99
4.2.2 Lumped mass matrix 101

5 Computational procedures for eigenvalue and eigenvector analysis 105
5.1 The modal decomposition revisited. Mode superposition analysis 105
5.2 Other eigenvalue and eigenvector problems 107
5.3 Computation of modes and frequencies 110
5.4 Reduction of the general eigenvalue problem to the standard eigenvalue problem 113
5.5 Static condensation 119
5.6 Model order reduction techniques: the Guyan reduction 121
5.7 Inclusion of damping matrices 125
5.8 Complex eigenvalue problem: complex modes 128
5.8.1 Formulation in nonsymmetric standard form 130
5.8.2 Formulation in general symmetric form 133

6 Computational algorithms for eigenvalue and eigenvector extraction 135
6.1 Some previous concepts 136
6.1.1 Matrix deflation 136
6.1.2 Rayleigh quotient 137
6.1.3 Courant minimax characterization of eigenvalues and Sturm sequence 139
6.1.4 Shifting 142
6.1.5 Krylov subspaces and the Power method 143
6.2 Determinant search method 151
6.3 Inverse iteration method 152
6.4 Forward iteration method 154
6.5 Jacobi method for the standard eigenvalue problem 156
6.6 The QR decomposition and algorithm 161
6.7 Jacobi method for the generalized eigenvalue problem 164
6.8 Bathe's subspace iteration method and Ritz bases 170
6.9 Lanczos method 177

7 Transient analyses in linear elastodynamics 185
7.1 Introduction 185
7.2 Structural dynamics and wave propagation analyses. The Courant condition 185
7.3 Linear multistep methods. Explicit and implicit algorithms. Dahlquist theorem 187
7.4 Explicit algorithms: central difference method 192
7.5 Implicit algorithms 202
7.5.1 Houbolt method 202
7.5.2 Newmark-β method 204
7.5.3 Collocation Wilson-θ methods 218
7.5.4 Hilber-Hughes-Taylor (HHT) α-method 225
7.5.5 Bathe-Baig composite (substep) method 227
7.6 Stability and accuracy analysis 236
7.7 Consistent initialization of algorithms 250

8 Transient analysis in nonlinear dynamics 251
8.1 The nonlinear dynamics equation 251
8.2 Time discretization of the nonlinear dynamics equation 252
8.3 Example: The nonlinear Newmark-β algorithm in predictor-multicorrector d-form 255
8.4 Example: The HHT method in predictor-multicorrector a-form 257
8.5 Example: The Bathe-Baig algorithm in predictor-multicorrector d-form 259

9 Harmonic analyses 271
9.1 Discrete Fourier Transform revisited 271
9.2 Harmonic analysis using the full space 272
9.3 Harmonic analysis using mode superposition 277

10 Spectral and seismic analyses 281
10.1 Accelerograms and ground excitation 281
10.2 The equation of motion for ground excitation. Accelerometers and vibrometers 281
10.3 Elastic Response Spectra: SD, SV, SA, PSV, PSA 284
10.4 Modal superposition methods for spectral analysis. Modal mass 288
10.5 Static correction or mode acceleration method 289

11 Bibliography 297
Part I

Fundamentals of dynamics of structures

Iván M. Díaz
Associate Professor. Universidad Politécnica de Madrid, Spain
1 Single-Degree-of-Freedom Systems

1.1 Introduction

The principal distinctive feature of dynamic analysis (compared with static analysis) is the consideration of inertial forces. That is, the forces acting on the structure cause accelerations such that inertial forces cannot be neglected in the analysis. If the loading is such that the accelerations it causes can be neglected, a static analysis can be performed.

Generally, real systems are continuous and their parameters are distributed. However, in many cases it is possible to simplify the analysis by representing a system with distributed parameters by a discrete one. Hence, mathematical models can be divided into two types:

1. continuous systems, or distributed-parameter systems, and
2. discrete systems, or lumped models.

The number of unknown displacements (or velocities or accelerations) which are of interest is called the number of degrees of freedom (DOFs) of the system. As a general rule, a DOF must be associated with each point on the structure having significant inertial forces. Thus, a continuous model represents an infinite-DOF system, because there is an infinite number of points (each having a different position coordinate) with mass and stiffness properties associated with them. This means that the unknown displacements at each point are represented by a continuous function $u(\mathbf{r}_i, t)$ of the position $\mathbf{r}$ and time $t$.

On the other hand, in a discrete or lumped model, the whole system is assumed to be represented by a number of point masses at particular pre-determined positions on the structure. The finite element modelling technique is currently the most widely used method for developing discrete-parameter models. Mathematically, the behavior of discrete-parameter systems is described by ordinary differential equations, whereas that of distributed-parameter systems is generally governed by partial differential equations. In the case of ordinary differential equations, the unknown functions (typically displacements) depend on time as the only variable. In the case of partial differential equations, time and at least one position coordinate are the variables of the unknown functions. As an example, Figure 1 shows a distributed-parameter model and a discrete-parameter model of a cantilever beam.

Of the discrete models, the simplest is the one described by a first- or second-order ordinary differential equation with constant coefficients. This system is usually referred to as a single-degree-of-freedom (SDOF) system. Such a model is often used as an approximation of a generally more complex system. However, its importance stems from the fact that, in cases in which the technique known as Modal Analysis can be employed, the mathematical formulation associated with many linear multi-degree-of-freedom (MDOF) discrete systems and continuous systems can be reduced to sets of independent second-order differential equations, each having exactly the same form and means of solution as the equation of a SDOF system. Hence, a thorough study of SDOF linear systems is clearly justified and will be carried out here.
Figure 1: (a) Distributed-parameter model. (b) Discrete-parameter model of a cantilever beam.
The elements of a SDOF system are (see Figure 2): a spring of stiffness $k$, which relates forces to displacements; a dashpot with viscous damping coefficient $c$, which relates forces to velocities; and a mass of value $m$, which relates forces to accelerations. The physical properties of a SDOF system are constants, $k$, $c$ and $m$ playing the role of modeling parameters. It should be noted that springs and dampers possess no mass, and masses are assumed to behave as rigid bodies, i.e., they do not deform.

As an example of modeling a real system by a SDOF system, consider the elevated water tank of Figure 3. It is of interest to obtain the lateral vibration under ground motion or under lateral wind excitation. This structure can be initially modelled as a lumped mass supported by a massless structure with stiffness $k$. An energy-dissipating mechanism has been included in the structural idealization to capture the decaying motion observed during free vibration after an excitation has occurred. The most commonly used damping model is viscous damping, since it is the simplest to deal with mathematically.

The main objective of theoretical dynamic analysis is to study the behavior of structures subjected to given excitations. The behavior of the structure is characterized by the motion caused by these excitations and is commonly referred to as the system response. The motion is generally described by displacements, velocities or accelerations. When the displacements are known at each instant of time, the velocities and accelerations (i.e., the first and second derivatives of the time-varying function describing the displacements) and other types of structural response to the excitation, such as stresses and strains, can be calculated.

The excitations can be in the form of initial displacements and velocities, or in the form of externally applied forces. The response of systems to initial conditions is known as free response or free vibration, whereas the response to continuously applied external forces is known as forced response or forced vibration. Moreover, although all real structures have some form of damping which produces decaying free vibrations, the mathematical treatment of vibrating systems can be carried out either without damping (undamped vibrations) or with damping (damped vibrations), depending on the importance of the corresponding damping force.
Figure 2: Elements of a SDOF system.
Figure 3: (a) Elevated water tank. (b) Idealized model. (c) Spring-mass system with damping.
Figure 4: Undamped SDOF oscillator and its free body diagram
1.2 Free undamped vibrations

1.2.1 Equation of motion

Consider Figure 4 as a schematic form of a SDOF spring-mass oscillator. Assume that $c = 0$, i.e. there is no damping. According to Newton's second law of motion, the inertial force $F_I$ is proportional to the acceleration $\ddot{u}(t)$ through the mass $m$. The restoring force due to the spring, $F_S$, is proportional to the spring stiffness $k$:

$$m\ddot{u}(t) + ku(t) = 0 \;\Longrightarrow\; \ddot{u}(t) + \frac{k}{m}u(t) = 0 \tag{1}$$

Equation (1) represents simple harmonic motion and is analogous to

$$\ddot{u}(t) + \omega_n^2 u(t) = 0 \quad\text{with}\quad \omega_n = \sqrt{\frac{k}{m}} \tag{2}$$

In order to predict the response, Equation (2) must be solved. The solution of this equation is of the following form

$$u(t) = A\sin(\omega_n t + \phi) \tag{3}$$

where $A$ is the amplitude, $\omega_n$ is the angular natural frequency [rad/s], which determines the interval in time during which the function repeats itself, and $\phi$, called the phase, determines the initial (at $t = 0$) value of the sine function. From successive differentiation of the displacement Equation (3), the velocity and acceleration functions are obtained

$$\dot{u}(t) = \omega_n A\cos(\omega_n t + \phi) \qquad \ddot{u}(t) = -\omega_n^2 A\sin(\omega_n t + \phi) \tag{4}$$
Then, by substituting Equation (4) into (2), it can be seen that this equation is satisfied. This corresponds to a second-order linear differential equation. Thus, there are two constants of integration to evaluate. These are $A$ and $\phi$, and they are determined by the initial state of motion (initial conditions): $u(0) = u_0$ and $\dot{u}(0) = v_0$. Substitution of these initial conditions into the solution (3) yields

$$u(0) = u_0 = A\sin(\omega_n\cdot 0 + \phi) = A\sin\phi, \quad\text{and}\quad \dot{u}(0) = v_0 = \omega_n A\cos(\omega_n\cdot 0 + \phi) = \omega_n A\cos\phi \tag{5}$$

The solution of these two equations for the unknown constants $A$ and $\phi$ leads to

$$A = \frac{\sqrt{\omega_n^2 u_0^2 + v_0^2}}{\omega_n}, \qquad \phi = \tan^{-1}\left(\frac{\omega_n u_0}{v_0}\right) \tag{6}$$

Thus, the solution (3) of the equation of motion for the spring-mass system is given by

$$u(t) = \frac{\sqrt{\omega_n^2 u_0^2 + v_0^2}}{\omega_n}\,\sin\!\left(\omega_n t + \tan^{-1}\frac{\omega_n u_0}{v_0}\right) \tag{7}$$

An example of a solution $u(t)$ is plotted in Figure 5. This solution is known as the free response of the system, since no external force is applied. The motion of the spring-mass system is called simple harmonic motion.

Considering the simple harmonic motion given by Equations (3) and (4), the velocity is 90° (or $\pi/2$ radians) out of phase with the displacement, while the acceleration is 180° (or $\pi$ radians) out of phase with the displacement and 90° out of phase with the velocity (see Figure 5). Additionally, the velocity response amplitude is greater (or smaller, depending on whether $\omega_n$ is greater or smaller than 1) than that of the displacement response by a multiple of $\omega_n$, and the acceleration response is greater by a multiple of $\omega_n^2$. The angular frequency $\omega_n$ describes the repetitiveness of the oscillation, and the time the cycle takes to repeat itself is the period $T$, which is related to the natural frequency by

$$T = \frac{2\pi}{\omega_n}\ \text{[s]} \tag{8}$$

Quite often the frequency is measured and discussed in terms of cycles per second, called Hertz. The frequency in Hertz [Hz], denoted by $f_n$, is related to the frequency in radians per second by

$$f_n = \frac{1}{T} = \frac{\omega_n}{2\pi}\ \text{[Hz]} \tag{9}$$
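As a quick numerical illustration of Equations (6)–(9), the following minimal Python sketch evaluates the amplitude, phase, period and free response. The numerical values ($m = 100$ kg, $k = 4000$ N/m, $u_0 = 0.01$ m, $v_0 = 0.1$ m/s) are illustrative assumptions, not data taken from the text.

```python
import numpy as np

# Illustrative SDOF data (assumed, not from the text): mass, stiffness, initial conditions
m, k = 100.0, 4000.0          # kg, N/m
u0, v0 = 0.01, 0.1            # m, m/s

wn = np.sqrt(k / m)                          # natural angular frequency, Eq. (2)
A = np.sqrt(wn**2 * u0**2 + v0**2) / wn      # amplitude, Eq. (6)
phi = np.arctan2(wn * u0, v0)                # phase, Eq. (6)
T = 2 * np.pi / wn                           # period, Eq. (8)
fn = wn / (2 * np.pi)                        # frequency in Hz, Eq. (9)

t = np.linspace(0.0, 3 * T, 601)
u = A * np.sin(wn * t + phi)                 # free response, Eq. (7)
v = wn * A * np.cos(wn * t + phi)            # velocity, Eq. (4)
a = -wn**2 * A * np.sin(wn * t + phi)        # acceleration, Eq. (4)

print(f"wn = {wn:.3f} rad/s, fn = {fn:.3f} Hz, T = {T:.3f} s")
print(f"A = {A:.4f} m, phi = {phi:.4f} rad, u(0) = {u[0]:.4f} m")
```

The printed $u(0)$ should reproduce the imposed initial displacement, which is a simple consistency check of Equations (6) and (7).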
1.2.2 Harmonic motion: more formal approach solution

The solution given by (7) was obtained assuming that the response was harmonic. However, the form of the response can also be derived by following the theory of elementary linear differential equations. This approach is reviewed here. Assume that the solution $u(t)$ is of the form

$$u(t) = a\,e^{\lambda t} \tag{10}$$
Figure 5: Free response of an undamped SDOF system: displacement $u(t)$, velocity $v(t)$ and acceleration $a(t)$ versus time.
where $a$ and $\lambda$ are constants to be determined. The velocity and acceleration are $\dot{u}(t) = \lambda a e^{\lambda t}$ and $\ddot{u}(t) = \lambda^2 a e^{\lambda t}$. Substitution of the assumed exponential form into the equation of motion (1) yields

$$\left(m\lambda^2 + k\right) a e^{\lambda t} = 0 \;\Longrightarrow\; m\lambda^2 + k = 0 \tag{11}$$

whose solutions are

$$\lambda_{1,2} = \pm\sqrt{-\frac{k}{m}} = \pm j\sqrt{\frac{k}{m}} = \pm j\omega_n \tag{12}$$

where $j = \sqrt{-1}$ is the imaginary unit. The substitution of the two solutions of (12) into (10) gives two solutions for $u(t)$

$$u(t) = a_1 e^{j\omega_n t}, \quad\text{and}\quad u(t) = a_2 e^{-j\omega_n t} \tag{13}$$

Then, since the system is linear, the sum of two solutions is also a solution

$$u(t) = a_1 e^{j\omega_n t} + a_2 e^{-j\omega_n t} \tag{14}$$

where $a_1$ and $a_2$ must be complex constants in order to make $u(t)$ real after multiplication by $e^{j\omega_n t}$ and $e^{-j\omega_n t}$. Taking into account the Euler relations $e^{\pm j\omega t} = \cos\omega t \pm j\sin\omega t$, Equation (14) can be written as

$$u(t) = A_1\cos\omega_n t + A_2\sin\omega_n t \tag{15}$$

where $A_1$ and $A_2$ are real constants, or

$$u(t) = A\sin(\omega_n t + \phi) \tag{16}$$

$A$ and $\phi$ being real constants. Then Equations (14), (15) and (16) are three equivalent forms of the solution of (1) subjected to non-zero initial conditions.

The relationships between the three sets of constants are given now. The reader is encouraged to prove these relationships.

For sets $(A, \phi)$ and $(A_1, A_2)$:
$$A = \sqrt{A_1^2 + A_2^2} \quad\text{and}\quad \phi = \tan^{-1}\frac{A_1}{A_2}$$

For sets $(A_1, A_2)$ and $(a_1, a_2)$:
$$A_1 = a_1 + a_2 \quad\text{and}\quad A_2 = j\,(a_1 - a_2)$$

For sets $(a_1, a_2)$ and $(A_1, A_2)$:
$$a_1 = \frac{A_1 - jA_2}{2} \quad\text{and}\quad a_2 = \frac{A_1 + jA_2}{2}$$
Finally, we mention some quantities which are frequently employed in structural dynamic analysis. The peak value is defined as the maximum displacement, i.e., the magnitude $A$ of Equation (6). Other useful quantities are the mean value and the mean-square value

$$\bar{u} = \lim_{T\to\infty}\frac{1}{T}\int_0^T u(t)\,dt \quad\text{and}\quad \overline{u^2} = \lim_{T\to\infty}\frac{1}{T}\int_0^T u^2(t)\,dt$$

The square root of the mean-square value, called the root mean square (RMS) value, is commonly used in dynamic analysis

$$u_{RMS} = \sqrt{\overline{u^2}}$$

Because the peak values of the velocity and acceleration are multiples of the natural frequency and of its square, respectively, times the displacement amplitude (see Equations (3) and (4)), these three magnitudes usually differ by an order of magnitude or more. Therefore, logarithmic scales are popular when presenting graphs. A common unit of measurement for amplitude and RMS values is the decibel (dB). The decibel is defined as the base-10 logarithm of the ratio of the squares of the amplitudes of two signals, thus

$$\mathrm{dB} \equiv 10\log_{10}\left(\frac{x_1}{x_2}\right)^2 = 20\log_{10}\frac{x_1}{x_2}$$

In many cases it is useful to employ the dB scale to improve the graphical representation.
1.3 Free damped vibrations

The undamped SDOF system is an idealized system which does not have a mechanism for dissipating energy, so that the total energy initially supplied to the system remains constant. However, damped systems have mechanisms for reducing the total energy supplied to the system. A clear distinction should be made between the mechanisms of damping in real systems (such as heating, radiation, friction, etc.) and the mathematical models used to represent damping (viscous, hysteretic, proportional or Rayleigh-type, non-proportional, etc.). Viscous damping is the model most commonly used and will be treated here.

Consider Figure 6, in which a viscous damper has been added to Figure 4. The force generated by the damper is proportional to the velocity and opposes the direction of motion: $F_D = c\dot{u}(t)$, where $c$ is a constant parameter called the damping coefficient, whose units are N·s/m. It should be noted that, in real systems, $F_D$ usually represents an equivalent damping effect and is taken proportional to the velocity for mathematical convenience. From the free body diagram of Figure 6 and using Newton's second law of motion, the equation of motion of a freely vibrating damped SDOF system is derived

$$m\ddot{u}(t) + c\dot{u}(t) + ku(t) = 0 \tag{17}$$

with the given initial conditions $u(0) = u_0$ and $\dot{u}(0) = v_0$.
Figure 6: Damped SDOF oscillator and its free body diagram.
To solve Equation (17), the same approach used for solving Equation (1) is applied. The substitution of Equation (10) into (17) yields

$$\left(m\lambda^2 + c\lambda + k\right) a e^{\lambda t} = 0 \;\Longrightarrow\; m\lambda^2 + c\lambda + k = 0 \tag{18}$$

since $a e^{\lambda t} \neq 0$. This equation is known as the characteristic equation and has two solutions

$$\lambda_{1,2} = -\frac{c}{2m} \pm \frac{1}{2m}\sqrt{c^2 - 4km} \tag{19}$$

By examining these solutions, it can be seen that the roots $\lambda$ will be real or complex depending on the sign of $c^2 - 4km$. At this point the critical damping coefficient and the damping ratio are defined as

$$c_{cr} = 2m\omega_n = 2\sqrt{km}, \qquad \zeta = \frac{c}{c_{cr}} = \frac{c}{2m\omega_n} = \frac{c}{2\sqrt{km}} \tag{20}$$

respectively. Then, the solutions of the characteristic equation (19) can be rewritten using relations (20) as

$$\lambda_{1,2} = -\zeta\omega_n \pm \omega_n\sqrt{\zeta^2 - 1} \tag{21}$$

and the equation of motion (17) as

$$\ddot{u}(t) + 2\zeta\omega_n\dot{u}(t) + \omega_n^2 u(t) = 0 \tag{22}$$

This equation is sometimes known as the standard form of the equation of motion. It is now clear that the damping ratio determines whether the roots are complex or real. For positive mass, damping and stiffness coefficients (as they are in real problems), there are three cases, analyzed next: underdamped, overdamped and critically damped motion.
1.3.1 Underdamped motion

In this case the damping ratio is less than 1 ($0 < \zeta < 1$) and $\zeta^2 - 1$ is negative in (21). Then, the two solutions form a complex conjugate pair of roots (see (21))

$$\lambda_{1,2} = -\zeta\omega_n \pm j\omega_n\sqrt{1 - \zeta^2} \tag{23}$$

Following the same argument as before, a linear combination of solutions is also a solution

$$u(t) = a_1 e^{\lambda_1 t} + a_2 e^{\lambda_2 t} = e^{-\zeta\omega_n t}\left(a_1 e^{j\omega_n\sqrt{1-\zeta^2}\,t} + a_2 e^{-j\omega_n\sqrt{1-\zeta^2}\,t}\right) \tag{24}$$

Using the Euler relations, the solution can be rewritten as

$$u(t) = e^{-\zeta\omega_n t}\left[A_1\sin\!\left(\omega_n\sqrt{1-\zeta^2}\,t\right) + A_2\cos\!\left(\omega_n\sqrt{1-\zeta^2}\,t\right)\right] = A e^{-\zeta\omega_n t}\sin(\omega_d t + \phi) \tag{25}$$

in which $A$ and $\phi$ are real constants to be determined and $\omega_d$ is called the damped natural frequency, defined as

$$\omega_d = \omega_n\sqrt{1-\zeta^2}, \quad\text{with}\quad \omega_n = \sqrt{\frac{k}{m}} \tag{26}$$

The period of the damped vibration is

$$T_d = \frac{2\pi}{\omega_d} = \frac{T}{\sqrt{1-\zeta^2}} \tag{27}$$

$T$ being the natural period given in Eq. (8). Setting $t = 0$ in Equation (25)

$$u(0) = u_0 = A\sin\phi \tag{28}$$

and differentiating Equation (25) and setting $t = 0$

$$\dot{u}(t) = -\zeta\omega_n A e^{-\zeta\omega_n t}\sin(\omega_d t + \phi) + \omega_d A e^{-\zeta\omega_n t}\cos(\omega_d t + \phi) \tag{29}$$

i.e.

$$\dot{u}(0) = v_0 = -\zeta\omega_n u_0 + \omega_d u_0\cot\phi \tag{30}$$

Solving the last expression for $\phi$, one obtains

$$\tan\phi = \frac{u_0\,\omega_d}{v_0 + \zeta\omega_n u_0} \tag{31}$$

and the sine of $\phi$ is

$$\sin\phi = \frac{u_0\,\omega_d}{\sqrt{\left(v_0 + \zeta\omega_n u_0\right)^2 + \left(u_0\,\omega_d\right)^2}} \tag{32}$$

Finally, taking into account that from Equation (28) $A = u_0/\sin\phi$, the values of both $A$ and $\phi$ are given by
Figure 7: Free response of an underdamped SDOF system considering $u_0 = 1$, $v_0 = 0$, $\omega_n = 2\pi\cdot 1$ rad/s and $\zeta = 0.1$. Successive peaks $u_1$, $u_2$, $u_3$, $u_4$ are marked.
$$A = \frac{\sqrt{\left(v_0 + \zeta\omega_n u_0\right)^2 + \left(u_0\,\omega_d\right)^2}}{\omega_d}, \qquad \phi = \tan^{-1}\frac{u_0\,\omega_d}{v_0 + \zeta\omega_n u_0} \tag{33}$$

where $u_0$ and $v_0$ are the initial displacement and velocity. A plot of $u(t)$ versus time for this underdamped case is shown in Figure 7. The motion is oscillatory with a decaying amplitude governed by the exponential term $\exp(-\zeta\omega_n t)$. The damping ratio sets the rate of decay. As a check, if one sets $\zeta = 0$ in expressions (33), the undamped case is recovered, see expressions (6).

Next, the logarithmic decrement is defined. The ratio between successive peaks of the underdamped free response is related to the damping ratio. The ratio of the displacement at time $t$ to its value a period $T_d$ later is independent of time and, using (25), it reads

$$\frac{u(t)}{u(t+T_d)} = e^{-\zeta\omega_n\left[t-(t+T_d)\right]} = e^{\zeta\omega_n T_d} \tag{34}$$

This expression is also valid if $u(t)$ and $u(t+T_d)$ are successive peaks $u_i$ and $u_{i+1}$. Then, the logarithmic decrement is defined as the natural logarithm of the ratio between these two successive peaks

$$\delta = \ln\left(\frac{u_i}{u_{i+1}}\right) = \zeta\omega_n T_d = \zeta\omega_n\frac{2\pi}{\omega_n\sqrt{1-\zeta^2}} = \frac{2\pi\zeta}{\sqrt{1-\zeta^2}} \tag{35}$$

If $\zeta$ is small, $\sqrt{1-\zeta^2} \simeq 1$ and (35) can be approximated as
$$\delta \simeq 2\pi\zeta \tag{36}$$

Then, if the damping ratio is small ($\zeta < 0.2$, which is the case of most civil engineering structures), it can be estimated from

$$\zeta = \frac{\delta}{2\pi} = \frac{1}{2\pi}\ln\left(\frac{u_i}{u_{i+1}}\right) \tag{37}$$

If the decay of motion is slow, it is recommended to use the ratio between two amplitudes separated by several cycles. For instance, for amplitudes separated by $n$ cycles

$$\frac{u_1}{u_{n+1}} = \frac{u_1}{u_2}\,\frac{u_2}{u_3}\cdots\frac{u_n}{u_{n+1}} = e^{n\delta}$$

Hence

$$\delta = \frac{1}{n}\ln\left(\frac{u_1}{u_{n+1}}\right) \simeq 2\pi\zeta \tag{38}$$

and the damping ratio for lightly damped systems can be determined as

$$\zeta = \frac{1}{2\pi n}\ln\left(\frac{u_1}{u_{n+1}}\right) \quad\text{or}\quad \zeta = \frac{1}{2\pi n}\ln\left(\frac{\ddot{u}_1}{\ddot{u}_{n+1}}\right) \tag{39}$$

The second is the analogous expression in terms of accelerations, which is also valid for lightly damped systems.
Example 1 Consider a single-span footbridge. Assume that a mass of 5000 kg is suspended from the mid-span of the bridge, producing a static sag of 0.16 m. This mass is then suddenly released, causing a free decay response. The response is basically given by one vibration mode, so a SDOF vibration response can be assumed. The maximum peak is 0.078 m/s² at 0.3 s and, 18 cycles later, the amplitude is 0.031 m/s² at 20.1 s. From these data compute the following: (1) damping ratio; (2) natural period; (3) stiffness; (4) mass; (5) damping coefficient; and (6) number of cycles required for the acceleration to decrease to 0.005 m/s².

Solution: The damping ratio can be obtained as follows:

$$\zeta = \frac{1}{2\pi\cdot 18}\ln\left(\frac{0.078}{0.031}\right) = 0.008 = 0.8\%$$

The assumption of small damping is valid since it is smaller than 20%.

The natural period can be obtained from the time between peaks and the number of cycles, assuming that the natural and the damped periods are approximately the same

$$T_d = \frac{20.1 - 0.3}{18} = 1.1\ \text{s}; \qquad T \simeq T_d = 1.1\ \text{s}$$

The stiffness can be obtained from the static equation $F = k u_{st}$, in which $u_{st}$ is the static sag, then

$$k = \frac{50000}{0.16} = 312500\ \text{N/m}$$

The angular natural frequency and the natural frequency are given by

$$\omega_n = \frac{2\pi}{T} = \frac{2\pi}{1.1} = 5.7120\ \text{rad/s} \quad\text{or}\quad f_n = \frac{\omega_n}{2\pi} = \frac{5.7120}{2\pi} = 0.909\ \text{Hz}$$

The mass can be obtained using the expression for the natural frequency

$$m = \frac{k}{\omega_n^2} = 9577.9\ \text{kg}$$

The damping coefficient can be obtained as

$$c = 2\zeta\sqrt{km} = 875.3\ \text{N·s/m}$$

The number of cycles needed for the signal to reduce to 0.005 m/s² can be obtained from the approximate expression for the damping ratio

$$\zeta = \frac{1}{2\pi n}\ln\left(\frac{\ddot{u}_1}{\ddot{u}_{n+1}}\right) \;\Longrightarrow\; n = \frac{1}{2\pi\zeta}\ln\left(\frac{\ddot{u}_1}{\ddot{u}_{n+1}}\right) = \frac{1}{2\pi\cdot 0.008}\ln\left(\frac{0.078}{0.005}\right) = 54.6 \simeq 55\ \text{cycles}$$
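A minimal script reproducing the arithmetic of Example 1 is given below. The input numbers are those of the example; the printed values may differ slightly from the hand calculation because no intermediate rounding is performed.

```python
import numpy as np

a1, a2, ncyc = 0.078, 0.031, 18      # peak accelerations (m/s^2) and cycles between them
t1, t2 = 0.3, 20.1                   # times of the two peaks (s)
W, sag = 50000.0, 0.16               # suspended weight (N) and static sag (m)

zeta = np.log(a1 / a2) / (2 * np.pi * ncyc)      # damping ratio, Eq. (39)
Td = (t2 - t1) / ncyc                            # damped (~natural) period
wn = 2 * np.pi / Td                              # angular natural frequency
k = W / sag                                      # static relation F = k*u_st
m = k / wn**2                                    # mass from wn = sqrt(k/m)
c = 2 * zeta * np.sqrt(k * m)                    # damping coefficient, from Eq. (20)
n_to_0005 = np.log(a1 / 0.005) / (2 * np.pi * zeta)   # cycles to decay to 0.005 m/s^2

print(f"zeta = {zeta:.4f}, T = {Td:.2f} s, k = {k:.0f} N/m")
print(f"m = {m:.1f} kg, c = {c:.1f} N s/m, cycles to 0.005 m/s^2 = {n_to_0005:.1f}")
```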
1.3.2 Overdamped motion

In this case, the damping ratio is greater than 1 ($\zeta > 1$) and $\zeta^2 - 1$ is positive. Then, the two solutions (21) are real roots

$$\lambda_{1,2} = -\zeta\omega_n \pm \omega_n\sqrt{\zeta^2 - 1} \tag{40}$$

The solution of (17) is

$$u(t) = a_1 e^{\lambda_1 t} + a_2 e^{\lambda_2 t} = e^{-\zeta\omega_n t}\left(a_1 e^{\omega_n\sqrt{\zeta^2-1}\,t} + a_2 e^{-\omega_n\sqrt{\zeta^2-1}\,t}\right) \tag{41}$$

which is a non-oscillatory motion, $a_1$ and $a_2$ being real constants determined, again, by the initial conditions. It is quite straightforward to obtain the values of $a_1$ and $a_2$ (the reader is encouraged to derive them) in terms of the initial displacement $u_0$ and velocity $v_0$

$$a_1 = \frac{v_0 + \left(\zeta + \sqrt{\zeta^2-1}\right)\omega_n u_0}{2\omega_n\sqrt{\zeta^2-1}} \quad\text{and}\quad a_2 = \frac{-v_0 - \left(\zeta - \sqrt{\zeta^2-1}\right)\omega_n u_0}{2\omega_n\sqrt{\zeta^2-1}}$$

A plot of $u(t)$ versus time for this overdamped case with three different initial conditions is shown in Figure 8. The motion does not involve oscillation and the system returns to its rest position exponentially.
1.3.3 Critically damped motion

In this last case, the damping ratio is exactly 1 ($\zeta = 1$) and $\zeta^2 - 1$ is zero. Then, the two roots are equal and real

$$\lambda_{1,2} = -\omega_n \tag{42}$$
Figure 8: Free response of an overdamped system considering $\omega_n = 2\pi\cdot 0.3$ rad/s and $\zeta = 1.1$. Initial conditions are $u_0 = 1$, $v_0 = 0$ for the continuous curve, $u_0 = 0$, $v_0 = 1$ for the dashed curve and $u_0 = -1$, $v_0 = 0$ for the dash-dotted curve.
and the solution is as follows (the reader is advised to review second-order differential equations with repeated roots of the characteristic equation)

$$u(t) = a_1 e^{\lambda_1 t} + a_2\,t\,e^{\lambda_2 t} = e^{-\omega_n t}\left(a_1 + a_2 t\right) \tag{43}$$

where, again, the constants $a_1$ and $a_2$ are determined by the initial conditions,

$$a_1 = u_0, \quad\text{and}\quad a_2 = v_0 + \omega_n u_0$$

It can be noted that critically damped systems have the smallest damping ratio that produces a non-oscillating response; they provide the fastest return to zero without oscillation.
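The three free-vibration regimes of Sections 1.3.1–1.3.3 can be compared with a few lines of Python by evaluating the closed-form solutions (25)/(33), (41) and (43). The sketch below uses $\omega_n = 2\pi$ rad/s and initial conditions $u_0 = 1$, $v_0 = 0$, values assumed for illustration (they echo those used in Figure 7).

```python
import numpy as np

wn, u0, v0 = 2 * np.pi, 1.0, 0.0     # assumed natural frequency and initial conditions
t = np.linspace(0.0, 3.0, 301)

def free_response(zeta):
    """Closed-form free response u(t) of a SDOF system for a given damping ratio."""
    if zeta < 1.0:                                   # underdamped, Eqs. (25) and (33)
        wd = wn * np.sqrt(1.0 - zeta**2)
        A = np.sqrt((v0 + zeta * wn * u0)**2 + (u0 * wd)**2) / wd
        phi = np.arctan2(u0 * wd, v0 + zeta * wn * u0)
        return A * np.exp(-zeta * wn * t) * np.sin(wd * t + phi)
    if zeta == 1.0:                                  # critically damped, Eq. (43)
        return np.exp(-wn * t) * (u0 + (v0 + wn * u0) * t)
    s = wn * np.sqrt(zeta**2 - 1.0)                  # overdamped, Eqs. (40)-(41)
    a1 = (v0 + (zeta * wn + s) * u0) / (2 * s)
    a2 = (-v0 - (zeta * wn - s) * u0) / (2 * s)
    return np.exp(-zeta * wn * t) * (a1 * np.exp(s * t) + a2 * np.exp(-s * t))

for z in (0.1, 1.0, 2.0):
    print(f"zeta = {z}: u(t = 1 s) = {free_response(z)[100]:.4f}")
```

Only the underdamped curve oscillates; the critically damped and overdamped curves decay monotonically, the former being the faster of the two.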
1.4 Response to harmonic excitation

This section considers the response of a SDOF system to harmonic excitation. Harmonic excitations are sinusoidal external forces of a single frequency applied to the system. Note that periodic excitations can be represented as sums of harmonic functions thanks to the Fourier series decomposition theorem and the principle of superposition. Harmonic analysis is an important issue in structural dynamics because:

1. the response can be calculated in a quite straightforward manner,
Figure 9: Damped SDOF system acted on by an external force $F(t)$ and its free body diagram.
2. harmonic excitations are simple to produce, and thus they are often used in experimental dynamic analyses, and

3. harmonic inputs are very useful in studying the damping and stiffness properties of systems, that is, in characterizing the dynamic properties of systems.

Consider Figure 6 with an external excitation $F(t)$ (Figure 9) applied in the direction of $u(t)$, being a sine or cosine of a single frequency

$$F(t) = F_0\cos\omega t \tag{44}$$

where $F_0$ is the amplitude and $\omega$ is the frequency of the applied force. The latter is also known as the input frequency or driving frequency, depending on the particular application.
1.4.1 Undamped system

Recall Figure 2, in which $F(t)$, defined by Equation (44), is applied in the direction of $u(t)$. The equation of motion for the undamped case is

$$m\ddot{u}(t) + ku(t) = F_0\cos\omega t \quad\text{or}\quad \ddot{u}(t) + \omega_n^2 u(t) = f_0\cos\omega t \quad\text{with}\quad f_0 = \frac{F_0}{m} \tag{45}$$

In order to solve this equation, which is a second-order linear non-homogeneous differential equation, the solution is written as the sum of the homogeneous solution
$u_h(t)$ (i.e., the solution for $f_0 = 0$, which was studied in the previous subsection) and a particular solution $u_p(t)$

$$u(t) = u_h(t) + u_p(t) \tag{46}$$

The particular solution is assumed to be of the same form as the excitation

$$u_p(t) = U\cos\omega t \tag{47}$$

Substituting (47) into (45), one obtains

$$-\omega^2 U\cos\omega t + \omega_n^2 U\cos\omega t = f_0\cos\omega t \;\Longrightarrow\; U = \frac{f_0}{\omega_n^2 - \omega^2}$$

provided $\omega_n \neq \omega$, then

$$u_p(t) = \frac{f_0}{\omega_n^2 - \omega^2}\cos\omega t$$

The solution of (45) is

$$u(t) = \underbrace{A_1\sin\omega_n t + A_2\cos\omega_n t}_{\text{homogeneous}} + \underbrace{\frac{f_0}{\omega_n^2 - \omega^2}\cos\omega t}_{\text{particular}} \tag{48}$$

in which the form of the homogeneous solution given by (15) has been considered. The initial conditions have to be imposed to obtain $A_1$ and $A_2$. The following two equations are then obtained

$$u(0) = u_0 = A_2 + \frac{f_0}{\omega_n^2 - \omega^2}, \quad\text{and}\quad \dot{u}(0) = v_0 = A_1\omega_n$$

Solving these two equations, $A_1$ and $A_2$ are obtained and the solution (48) is rewritten as

$$u(t) = \frac{v_0}{\omega_n}\sin\omega_n t + \left(u_0 - \frac{f_0}{\omega_n^2 - \omega^2}\right)\cos\omega_n t + \frac{f_0}{\omega_n^2 - \omega^2}\cos\omega t \tag{49}$$

Figure 10 shows an example of the total response of an undamped system to harmonic excitation and given initial conditions.

The case $\omega = \omega_n$ is now considered. The particular solution given by (47) is no longer valid, since this function is also a solution of the homogeneous part of (46). The particular solution is now as follows (the reader may consult a book on ordinary differential equations)

$$u_p(t) = t\,U\sin\omega t \tag{50}$$

Substituting (50) into (45), one obtains $U = f_0/(2\omega)$ and then

$$u_p(t) = \frac{f_0}{2\omega}\,t\sin\omega t \tag{51}$$

and the total solution for $\omega = \omega_n$ is
Figure 10: Response of an undamped system to harmonic excitation considering $\omega_n = 2\pi\cdot 1$ rad/s, an excitation at $\omega = 2\pi\cdot 3$ rad/s, $u_0 = 0$, $v_0 = 0.005$ m/s and $f_0 = 1$ N/kg.
1
sin.t +
2
cos .t +
)
0
2.
t sin.t (52)
Again, imposing the initial conditions, the solution is written as
n(t) =

0
.
sin.t +n
0
cos .t +
)
0
2.
t sin.t (53)
A plot of n(t) is shown in Figure 11 for zero initial conditions. It can be observed
that n(t) grows unboundly. It is here dened the resonance phenomenon, i.e.,
the response becomes unbounded for excitation frequency . equal to the natural
frequency .
a
. Obviously, this is an academic result and should be interpreted as
such. At some point in time, the system would fail
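Equations (49) and (53) are easy to evaluate numerically. The brief sketch below uses the parameter values quoted in the captions of Figures 10 and 11 and prints one point of each response, so that the bounded (off-resonance) and linearly growing (resonant) behaviors can be contrasted.

```python
import numpy as np

wn = 2 * np.pi * 1.0            # natural frequency (rad/s), as in Figures 10-11
f0 = 1.0                        # F0/m (N/kg)
u0, v0 = 0.0, 0.005             # initial conditions of Figure 10
t = np.linspace(0.0, 10.0, 2001)

# Non-resonant case, Eq. (49), excitation at w = 2*pi*3 rad/s
w = 2 * np.pi * 3.0
u_nonres = (v0 / wn) * np.sin(wn * t) \
    + (u0 - f0 / (wn**2 - w**2)) * np.cos(wn * t) \
    + f0 / (wn**2 - w**2) * np.cos(w * t)

# Resonant case, Eq. (53), excitation at w = wn and zero initial conditions
u_res = f0 / (2 * wn) * t * np.sin(wn * t)

print(f"max |u| off resonance: {np.max(np.abs(u_nonres)):.4f} m")
print(f"|u| at t = 10 s on resonance: {abs(u_res[-1]):.4f} m (amplitude grows linearly)")
```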
1.4.2 Damped system

Take Figure 9, in which $F(t)$ is as defined by Equation (44). The equation of motion considering viscous damping is

$$m\ddot{u}(t) + c\dot{u}(t) + ku(t) = F_0\cos\omega t \quad\text{or}\quad \ddot{u}(t) + 2\zeta\omega_n\dot{u}(t) + \omega_n^2 u(t) = f_0\cos\omega t \tag{54}$$

with $\omega_n = \sqrt{k/m}$, $\zeta = c/(2m\omega_n)$ and $f_0 = F_0/m$. The same procedure as before is used to solve this differential equation. In this case, the particular solution is as in the undamped case except for a phase shift compared with (47), then
Figure 11: Response of an undamped system excited harmonically at its natural frequency considering $\omega_n = \omega = 2\pi\cdot 1$ rad/s, $u_0 = v_0 = 0$, and $f_0 = 1$ N/kg.
j
(t) = l cos (.t 0) (55)
or in this alternative form
n
j
(t) = 1
1
sin.t +1
2
cos .t (56)
The derivatives of n
j
(t) are
n
j
(t) = 1
1
. cos .t 1
2
. sin.t, and n
j
(t) = 1
1
.
2
sin.t 1
2
.
2
cos .t (57)
Then, n
j
, n
j
, and n
j
are substituted into the equation of motion (54):

1
2
.
2
+1
1
.2.
a
+.
2
a
1
2
)
0

cos .t +

1
1
.
2
1
2
.2.
a
+.
2
a
1
1

sin.t = 0
This equation must hold at every time. It is particularized for t = , (2.) and t = 0,
yielding these two equations
(2.
a
.) 1
1
+

.
2
a
.
2

1
2
= )
0

.
2
a
.
2

1
1
(2.
a
.) 1
2
= 0
Solving these two linear equations, it is achieved
$$B_1 = \frac{2\zeta\omega_n\omega\,f_0}{\left(\omega_n^2-\omega^2\right)^2 + \left(2\zeta\omega_n\omega\right)^2}, \quad\text{and}\quad B_2 = \frac{\left(\omega_n^2-\omega^2\right) f_0}{\left(\omega_n^2-\omega^2\right)^2 + \left(2\zeta\omega_n\omega\right)^2} \tag{58}$$

From Equations (55) and (56), it can be seen that $U = \sqrt{B_1^2 + B_2^2}$ and $\theta = \tan^{-1}(B_1/B_2)$; then, the particular solution can be written as

$$u_p = \frac{f_0}{\sqrt{\left(\omega_n^2-\omega^2\right)^2 + \left(2\zeta\omega_n\omega\right)^2}}\,\cos\!\left(\omega t - \tan^{-1}\frac{2\zeta\omega_n\omega}{\omega_n^2-\omega^2}\right) \tag{59}$$

The total solution is again the sum of the particular solution and the homogeneous one. For the underdamped case ($0 < \zeta < 1$), see Equation (25), this results in

$$u(t) = \underbrace{A e^{-\zeta\omega_n t}\sin(\omega_d t + \phi)}_{\text{transient}} + \underbrace{U\cos(\omega t - \theta)}_{\text{steady state}} \tag{60}$$

in which $A$ and $\phi$ are determined by the initial conditions following the same procedure as in the undamped case. The result is

$$\phi = \tan^{-1}\frac{\omega_d\left(u_0 - U\cos\theta\right)}{v_0 + \left(u_0 - U\cos\theta\right)\zeta\omega_n - \omega U\sin\theta}, \quad\text{and}\quad A = \frac{u_0 - U\cos\theta}{\sin\phi} \tag{61}$$

Figure 12 illustrates a typical response of a lightly damped SDOF system subjected to a harmonic excitation. Note that for large values of $t$, the first term of Equation (60), which is the homogeneous solution, goes to zero and the total solution approaches the particular solution. Thus $u_p(t)$ is known as the steady-state response and the first term is called the transient response. The difference between both solutions comes from the exponential term, which makes the transient solution negligible after a while. If the system has relatively large damping, the transient part will go to zero quickly and the steady-state response will then dominate. It should be recognized that the largest amplitude peak may occur before the system has reached its steady state.

The case $\omega = \omega_n$ for a damped SDOF system is now examined. If $\omega = \omega_n$ and the system is lightly damped, that is $\omega_d \simeq \omega_n$, the response of the system can be approximated as

$$u(t) = \frac{f_0}{2\zeta\omega_n^2}\underbrace{\left(e^{-\zeta\omega_n t} - 1\right)}_{\text{envelope function}}\cos\omega_n t$$

in which we have used Equation (60) and the corresponding assumptions (the reader is encouraged to derive this expression). The response varies with time as a cosine function, with its amplitude increasing according to the envelope function. Note that the envelope function is strongly affected by the damping ratio, and thus the steady-state response of the system will be strongly affected by the damping ratio as well. Figure 13 illustrates this case.

Bearing this in mind, it is of interest to consider the magnitude $U$ and the phase $\theta$ as functions of the exciting frequency $\omega$ (see Equation (59))
Figure 12: Response of a damped SDOF system to a harmonic force considering $\omega/\omega_n = 0.12$ and $\zeta = 0.04$. Both the total response and the steady-state response are shown.

Figure 13: Response of a damped system with $\zeta = 0.1$ to a sinusoidal force of frequency $\omega = \omega_n$ and zero initial conditions. The envelope curve and the steady-state amplitude are indicated.
$$U = \frac{f_0}{\sqrt{\left(\omega_n^2-\omega^2\right)^2 + \left(2\zeta\omega_n\omega\right)^2}}, \quad\text{and}\quad \theta = \tan^{-1}\frac{2\zeta\omega_n\omega}{\omega_n^2-\omega^2} \tag{62}$$

Equations (62) can be rewritten as

$$\frac{U}{U_{st}} = R_d = \frac{1}{\sqrt{\left(1-r^2\right)^2 + \left(2\zeta r\right)^2}}, \quad\text{and}\quad \theta = \tan^{-1}\frac{2\zeta r}{1-r^2} \tag{63}$$

with $r = \omega/\omega_n$ and $U_{st} = f_0/\omega_n^2 = F_0/k$ being the static displacement. Thus, $R_d = U/U_{st}$ is known as the magnitude normalized to the static displacement. Figure 14 shows the normalized magnitude and phase against the excitation frequency (normalized to the natural frequency) for several values of the damping ratio. The plot of the amplitude of a response quantity, such as $R_d$, against the excitation frequency is called the frequency-response curve. Damping reduces $R_d$ and hence the amplitude of the response at all frequencies. The magnitude of this reduction depends strongly on the excitation frequency. Note that as the exciting frequency approaches $\omega_n$, the magnitude reaches its maximum value when the damping ratio is very small ($\zeta < 0.1$). Observe also that the phase shift crosses through 90° and that the phase lies between 0 and $\pi$. This defines resonance.

From the normalized magnitude $R_d$ in Figure 14, it can be seen that as $\omega$ goes to zero (implying that the force is "slowly varying"), $R_d$ is only slightly larger than 1 and is almost independent of damping. Thus the amplitude $U$ goes to $f_0/\omega_n^2$, which is the static displacement,

$$U \simeq \frac{f_0}{\omega_n^2} = \left(\frac{F_0}{m}\right)\frac{m}{k} = \frac{F_0}{k} = \text{static displacement}$$

This result implies that the amplitude of the dynamic response is almost the same as the static response and is essentially controlled by the stiffness property.

On the other hand, as $\omega$ becomes large (implying that the force is "rapidly varying"), $R_d$ tends to zero and is essentially unaffected by damping. The amplitude $U$ can be approximated as

$$U \simeq \frac{f_0}{\omega_n^2}\,\frac{\omega_n^2}{\omega^2} = \frac{F_0}{m\omega^2}$$

Observe that the response is now governed by the mass value.

When $\omega \simeq \omega_n$, $R_d$ is highly affected by the damping ratio. For small damping values, $R_d$ can be several times larger than 1, implying that the dynamic response can be much larger than the static one. As the damping becomes smaller, the peak of $R_d$ becomes sharper and, finally, if damping is zero, the peak is infinite. This is in accordance with the unbounded undamped response at resonance, see Figure 11. As the damping increases, $R_d$ decreases and finally the peak disappears. In this case, the system response is governed by the damping.

For the undamped case, resonance occurs when $\omega = \omega_n$. However, in the damped case, resonance does not occur exactly at the natural frequency. It can be shown that the maximum value of $R_d$ takes place at $r = \sqrt{1-2\zeta^2}$ if $0 \leq \zeta \leq 1/\sqrt{2} = 0.707$
Figure 14: Normalized magnitude $\left(U\omega_n^2/f_0\right)$ and phase of the steady-state response versus the frequency ratio $\omega/\omega_n$ for several damping ratios ($\zeta = 0.05$, 0.1, 0.2, 0.4 and 0.7).
and at $r = 0$ if $\zeta \geq 1/\sqrt{2}$. Then, the value of the exciting frequency for which $R_d$ reaches its maximum value is the peak frequency

$$\omega_p = \omega_n\sqrt{1-2\zeta^2} \quad\text{for}\quad 0 \leq \zeta \leq 0.707$$

Note that as the damping tends to zero the peak frequency tends to the natural frequency. As the damping increases from zero, the peaks occur farther to the left of the natural frequency, i.e. of $r = 1$. Finally, when the damping ratio is 0.707, the peak occurs at $r = 0$.

It is interesting to examine how the peak magnitude changes with the damping ratio. Substituting $r = \sqrt{1-2\zeta^2}$ into $R_d$, one obtains

$$R_{d,\max} = \frac{1}{2\zeta\sqrt{1-\zeta^2}} \;\Longrightarrow\; U = \frac{F_0}{k\,2\zeta\sqrt{1-\zeta^2}}$$

and when $r = 1$

$$R_d(r=1) = \frac{1}{2\zeta} \;\Longrightarrow\; U = \frac{F_0}{k}\,\frac{1}{2\zeta}$$
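Equation (63) and the peak formulas above are straightforward to tabulate. The following sketch evaluates $R_d$ and $\theta$ over a range of frequency ratios (the damping ratios are those of the curves of Figure 14) and checks the peak location $r = \sqrt{1-2\zeta^2}$ against a numerical search.

```python
import numpy as np

r = np.linspace(0.0, 2.0, 2001)          # frequency ratio w/wn, as in Figure 14

def Rd(r, zeta):
    """Displacement response factor, Eq. (63)."""
    return 1.0 / np.sqrt((1.0 - r**2)**2 + (2.0 * zeta * r)**2)

def phase(r, zeta):
    """Phase angle theta in [0, pi], Eq. (63)."""
    return np.arctan2(2.0 * zeta * r, 1.0 - r**2)

for zeta in (0.05, 0.1, 0.2, 0.4, 0.7):
    i_peak = np.argmax(Rd(r, zeta))
    r_peak_theory = np.sqrt(1.0 - 2.0 * zeta**2) if zeta < 0.707 else 0.0
    print(f"zeta={zeta:4.2f}: numerical peak at r={r[i_peak]:.3f} "
          f"(theory {r_peak_theory:.3f}), Rd,max={Rd(r[i_peak], zeta):.2f}, "
          f"Rd(r=1)={Rd(1.0, zeta):.2f}")
```

The printed values reproduce, for instance, $R_d(r=1) = 1/(2\zeta)$ and the slight shift of the peak to the left of $r = 1$ as the damping grows.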
1.4.3 Dynamic response factor

The displacement, velocity and acceleration response factors are introduced here. They are dimensionless and define the amplitudes of these three quantities. The steady-state displacement can be written as follows, considering Equations (59) and (63),

$$\frac{u_p(t)}{F_0/k} = R_d\cos(\omega t - \theta) \tag{64}$$

where the displacement response factor $R_d$ is the ratio between the amplitude $U$ of the dynamic response and the static displacement $U_{st}$.

Differentiating Equation (64), the velocity response is obtained

$$\frac{\dot{u}_p(t)}{F_0/\sqrt{km}} = -R_v\sin(\omega t - \theta) \tag{65}$$

where the velocity response factor $R_v$ is related to $R_d$ as follows

$$R_v = \frac{\omega}{\omega_n}R_d = r\,R_d$$

Differentiating Equation (65), the acceleration response is obtained

$$\frac{\ddot{u}_p(t)}{F_0/m} = -R_a\cos(\omega t - \theta) \tag{66}$$

where the acceleration response factor $R_a$ is related to $R_d$ as follows

$$R_a = \left(\frac{\omega}{\omega_n}\right)^2 R_d = r^2 R_d$$
The dynamic response factors $R_d$, $R_v$ and $R_a$ are plotted as functions of $\omega/\omega_n$ in Figure 15. The displacement response factor is unity at $\omega/\omega_n = 0$, its peak is reached for $\omega/\omega_n < 1$, and it approaches zero as $\omega/\omega_n \to \infty$. The velocity response factor is zero at $\omega/\omega_n = 0$, its peak is reached at $\omega/\omega_n = 1$, and it approaches zero as $\omega/\omega_n \to \infty$. The acceleration response factor is zero at $\omega/\omega_n = 0$, its peak is reached for $\omega/\omega_n > 1$, and it approaches unity as $\omega/\omega_n \to \infty$.
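The relations $R_v = rR_d$ and $R_a = r^2R_d$ make the limits quoted in the previous paragraph easy to verify numerically; a short continuation of the previous sketch (the damping ratio $\zeta = 0.2$ is one of the curves of Figure 15 and is used only for illustration):

```python
import numpy as np

def response_factors(r, zeta):
    """Displacement, velocity and acceleration response factors Rd, Rv, Ra."""
    Rd = 1.0 / np.sqrt((1.0 - r**2)**2 + (2.0 * zeta * r)**2)
    return Rd, r * Rd, r**2 * Rd     # Rv = r*Rd, Ra = r^2*Rd

zeta = 0.2                            # illustrative damping ratio
for r in (0.0, 0.5, 1.0, 2.0, 10.0):
    Rd, Rv, Ra = response_factors(r, zeta)
    print(f"r = {r:5.1f}: Rd = {Rd:6.3f}, Rv = {Rv:6.3f}, Ra = {Ra:6.3f}")
```

As $r$ grows, $R_d$ and $R_v$ decay while $R_a$ tends to unity, in agreement with Figure 15.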
1.4.4 Frequency response function method

Up to now, we have used the method of undetermined coefficients to solve the differential equations. An alternative method is to treat the solution of Equation (17) as a complex function; a frequency domain analysis can then be carried out. This is useful for problems involving many degrees of freedom. Consider the complex relation

$$e^{j\omega t} = \cos\omega t + j\sin\omega t$$

in which the real and imaginary parts are considered separately when solving complex equations. Thus, the equation of motion for a harmonic excitation can be rewritten as a complex equation

$$m\ddot{u}(t) + c\dot{u}(t) + ku(t) = F_0 e^{j\omega t} \tag{67}$$

Here, the real part of the complex solution is the physical solution $u(t)$. This representation is very useful in solving MDOF systems.

The particular solution of (67) is

$$u_p(t) = U e^{j\omega t} \tag{68}$$

Substitution of this equation into (67) yields

$$\left(-\omega^2 m + jc\omega + k\right)U e^{j\omega t} = F_0 e^{j\omega t} \;\Longrightarrow\; U = \frac{1}{\left(k-\omega^2 m\right) + jc\omega}\,F_0 = H(j\omega)\,F_0 \tag{69}$$

where

$$H(j\omega) = \frac{1}{\left(k-\omega^2 m\right) + jc\omega} = \frac{1/m}{\left(\omega_n^2-\omega^2\right) + j2\zeta\omega_n\omega} \tag{70}$$

is known as the frequency response function (FRF). Multiplying numerator and denominator by the complex conjugate of the denominator and taking the modulus of the result, one obtains

$$U = \frac{F_0}{\sqrt{\left(k-m\omega^2\right)^2 + (c\omega)^2}}\,e^{-j\theta} \quad\text{with}\quad \theta = \tan^{-1}\frac{c\omega}{k-m\omega^2} \tag{71}$$

Substituting the value of $U$ into the particular solution (68) yields the solution

$$u_p(t) = \frac{F_0}{\sqrt{\left(k-m\omega^2\right)^2 + (c\omega)^2}}\,e^{j(\omega t-\theta)} \tag{72}$$

The real part of this expression corresponds to the solution given by (62).
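Because the modulus and argument of the complex FRF give directly the steady-state amplitude and phase of Equations (71)–(72), $H(j\omega)$ is convenient to evaluate numerically. A minimal sketch, with assumed properties $m = 10$ kg, $c = 8$ N·s/m, $k = 4000$ N/m and $F_0 = 100$ N (not values from the text):

```python
import numpy as np

m, c, k = 10.0, 8.0, 4000.0           # assumed SDOF properties
F0 = 100.0                            # force amplitude (N)
w = np.linspace(0.1, 60.0, 600)       # excitation frequencies (rad/s)

H = 1.0 / ((k - m * w**2) + 1j * c * w)   # frequency response function, Eq. (70)
U = np.abs(H) * F0                         # steady-state amplitude, Eq. (71)
theta = -np.angle(H)                       # phase lag theta, Eq. (71)

wn = np.sqrt(k / m)
i_res = np.argmin(np.abs(w - wn))
print(f"wn = {wn:.2f} rad/s")
print(f"near resonance: U = {U[i_res]*1000:.2f} mm, theta = {np.degrees(theta[i_res]):.1f} deg")
print(f"static check F0/k = {F0/k*1000:.2f} mm vs U(w -> 0) = {U[0]*1000:.2f} mm")
```

The low-frequency amplitude approaches the static value $F_0/k$ and the phase approaches 90° near resonance, as discussed in Section 1.4.2.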
Figure 15: Displacement ($R_d$), velocity ($R_v$) and acceleration ($R_a$) response factors for a damped system excited by harmonic forces, plotted against the frequency ratio $\omega/\omega_n$ for $\zeta = 0.05$, 0.1, 0.2, 0.4 and 0.7.
1.4.5 Laplace transform based analysis

Another approach is to use the Laplace transform to obtain the particular solution of the forced harmonic equation of motion (54). The Laplace transform gives (the reader may consult a specialized book)

$$\left(ms^2 + cs + k\right)U(s) = \frac{F_0\,s}{s^2+\omega^2} \tag{73}$$

where $s$ is the complex variable and $U(s)$ is the Laplace transform of $u(t)$. Then

$$U(s) = \frac{F_0\,s}{\left(ms^2 + cs + k\right)\left(s^2+\omega^2\right)} \tag{74}$$

The solution $u(t)$ is obtained by the inverse Laplace transform of $U(s)$, and is equivalent to the solutions (62) and (72).

Consider the equation of motion with an arbitrary excitation and its Laplace transform,

$$m\ddot{u}(t) + c\dot{u}(t) + ku(t) = F(t) \quad\stackrel{\mathcal{L}}{\longrightarrow}\quad \left(ms^2 + cs + k\right)U(s) = F(s) \tag{75}$$

then, one obtains

$$H(s) = \frac{U(s)}{F(s)} = \frac{1}{ms^2 + cs + k} \tag{76}$$

$H(s)$ is the transfer function between the output (the system response) and the input (the excitation). Note that if the value of $s$ is restricted to lie along the imaginary axis ($s = j\omega$), the transfer function becomes the FRF (70)

$$H(j\omega) = \frac{1}{k - m\omega^2 + jc\omega} \tag{77}$$

Hence, the FRF of the system is the transfer function of the system evaluated along $s = j\omega$.
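Since $H(s) = 1/(ms^2 + cs + k)$ is a standard linear time-invariant transfer function, general-purpose signal-processing tools can evaluate it. The sketch below (same assumed $m$, $c$, $k$ as in the previous example) builds $H(s)$ with scipy.signal and checks that its evaluation along $s = j\omega$ matches the direct formula (77).

```python
import numpy as np
from scipy import signal

m, c, k = 10.0, 8.0, 4000.0                 # assumed SDOF properties
sys = signal.lti([1.0], [m, c, k])          # H(s) = 1 / (m s^2 + c s + k), Eq. (76)

w = np.linspace(0.1, 60.0, 600)
w_out, H_lti = sys.freqresp(w=w)            # H(s) evaluated along s = j*w
H_direct = 1.0 / (k - m * w**2 + 1j * c * w)   # FRF, Eq. (77)

print("max |difference| between the two evaluations:",
      np.max(np.abs(H_lti - H_direct)))
```

The printed difference is at machine-precision level, confirming that the FRF is simply the transfer function restricted to the imaginary axis.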
1.5 General forced response

Up to now, the dynamic response of a SDOF system has been studied for given initial conditions (initial disturbances) and/or forced harmonic excitations. Harmonic excitation refers to an applied force that is sinusoidal with a single frequency. In this section, the response of a SDOF system to different forces is considered, as well as a general formulation for calculating the forced response to any applied load. Since the SDOF system considered here is linear, the principle of superposition can be used to calculate the response to combinations of forces based on the individual responses to each specific force.

Periodic forces are those that repeat in time. An example is an applied force consisting of the sum of two harmonic forces at different frequencies where one frequency is an integer multiple of the other. A non-periodic force is one that does not repeat itself in time. A step function is an example of a non-periodic excitation. A transient force is one that reduces to zero after a finite,
usually short, period of time. An impulse or a shock loading are examples of transient excitations. All of the aforementioned classes of excitation are deterministic (i.e., they are known precisely as functions of time). On the other hand, a random excitation is one that is unpredictable in time and must be described in terms of probability and statistics. This section introduces a sample of these various classes of force excitations and outlines strategies for the calculation and analysis of the resulting motion when they are applied to a SDOF spring-mass-damper system.

The objective is then to find the solution of the differential equation of motion

$$m\ddot{u}(t) + c\dot{u}(t) + ku(t) = F(t) \tag{78}$$

subjected to the initial conditions

$$u(0) = 0, \qquad \dot{u}(0) = 0 \tag{79}$$

in which $F(t)$ is a force varying arbitrarily with time.
1.5.1 Impulse response function

A common excitation is the application of a short-duration force known as an impulse. An impulse excitation is a force applied for a very short length of time. Consider $F(t) = 1/\varepsilon$, applied with a duration $\varepsilon$ starting at $t = \tau$ (Figure 16). As $\varepsilon \to 0$ the magnitude of the impulse tends to infinity, but the area remains equal to unity, $(1/\varepsilon)\,\varepsilon = 1$. This force is known as a unit impulse.

The application of Newton's second law of motion to a force acting on a body of mass $m$ and the integration of both sides yields

$$\frac{d}{dt}\left(m\dot{u}(t)\right) = F(t) \;\Longrightarrow\; m\left(\dot{u}(t_2) - \dot{u}(t_1)\right) = \int_{t_1}^{t_2} F(t)\,dt \tag{80}$$

In the case of a short unit impulse applied to a system which was not moving before the impulse ($u(0) = 0$ and $\dot{u}(0) = 0$), with $t_1 = 0$ and $t_2 = \varepsilon$, the velocity at $t_2 = \varepsilon$ can be calculated as

$$\dot{u}(\varepsilon) = \frac{1}{m}\int_{t_1}^{t_2} F(t)\,dt = \frac{1}{m} \tag{81}$$

After the application of the impulse, the force is removed and the response of the SDOF system is a free vibration response initiated by $\dot{u}(\varepsilon) = v_0 = \dot{u}(0^+) = 1/m$. Due to the short duration of the impulse, it can be assumed that $u(0^+) = 0$. Using Equations (26) and (33) for free damped vibrations, one obtains

$$A = \frac{1}{m\omega_d} \quad\text{and}\quad \phi = 0$$

Then, using (25), the response of the system at time $t$ to a unit impulse applied at $\tau$ ($t \geq \tau$) is as follows

$$u(t) = h(t-\tau) = \frac{1}{m\omega_d}\,e^{-\zeta\omega_n(t-\tau)}\sin\omega_d(t-\tau) \tag{82}$$
Figure 16: Time history of an impulse applied at time $\tau$ and the corresponding response.

which is illustrated in Figure 16. The response $u(t)$ is usually referred to as $h(t-\tau)$, which is called the impulse response function.
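The unit impulse response (82) is trivial to code and is reused below when the Duhamel integral is evaluated numerically. The sketch assumes again the illustrative properties $m = 10$ kg, $c = 8$ N·s/m, $k = 4000$ N/m.

```python
import numpy as np

m, c, k = 10.0, 8.0, 4000.0               # assumed SDOF properties
wn = np.sqrt(k / m)                       # natural frequency
zeta = c / (2.0 * m * wn)                 # damping ratio, Eq. (20)
wd = wn * np.sqrt(1.0 - zeta**2)          # damped frequency, Eq. (26)

def h(t):
    """Unit impulse response h(t), Eq. (82), zero before the impulse is applied."""
    t = np.asarray(t, dtype=float)
    return np.where(t >= 0.0,
                    np.exp(-zeta * wn * t) * np.sin(wd * t) / (m * wd),
                    0.0)

t = np.linspace(0.0, 2.0, 5)
print("h(t) samples:", np.round(h(t), 6))
```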
1.5.2 Response to an arbitrary excitation

The response of a SDOF system to an arbitrary time-varying excitation can be calculated as the sum of the responses to a series of short impulses, as shown in Figure 17. Thus, the response at $t_j$ can be calculated by considering the contributions of all impulses applied at times $\tau_i$ before time $t_j$. Then

$$u(t_j) = \sum_{i=1}^{j} F(\tau_i)\,h(t_j-\tau_i)\,\Delta\tau \tag{83}$$

As $\Delta\tau \to 0$, the latter expression can be written as

$$u(t) = \int_0^t F(\tau)\,h(t-\tau)\,d\tau \tag{84}$$

which is known as the convolution or Duhamel integral.

For an underdamped SDOF system, the Duhamel integral (84), considering (82), is as follows

$$u(t) = \frac{1}{m\omega_d}\int_0^t F(\tau)\,e^{-\zeta\omega_n(t-\tau)}\sin\omega_d(t-\tau)\,d\tau \tag{85}$$

assuming zero initial conditions.
Figure 17: Schematic explanation of the convolution integral: the excitation $F(t)$ is decomposed into a train of impulses, the responses to the individual impulses are computed, and the total response $u(t)$ is their sum.
The response to a general input force derived from (85) assumes zero initial conditions. The total response to an arbitrary input plus the initial conditions for an undamped system is as follows

$$u(t) = \frac{v_0}{\omega_n}\sin\omega_n t + u_0\cos\omega_n t + \int_0^t F(\tau)\,h(t-\tau)\,d\tau \tag{86}$$

Example 2 Using the Duhamel integral, determine the response of an undamped SDOF system, assumed to be initially at rest, to a step function $F(t) = F_0$, $t \geq 0$.

Solution: The displacement can be obtained from Equation (84) or (85) as

$$u(t) = \frac{F_0}{m\omega_n}\int_0^t \sin\omega_n(t-\tau)\,d\tau = \frac{F_0}{m\omega_n}\left[\frac{\cos\omega_n(t-\tau)}{\omega_n}\right]_{\tau=0}^{\tau=t} = \frac{F_0}{k}\left(1-\cos\omega_n t\right)$$

It is left to the reader to: (1) obtain the same result using the classical approach and (2) obtain the result when damping is considered.
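The convolution (84)–(85) is easy to approximate with the discrete sum (83). The sketch below does this for the step force of Example 2 with damping set to zero, so that the result can be compared against the closed-form answer $(F_0/k)(1-\cos\omega_n t)$; the numerical values $m = 10$ kg, $k = 4000$ N/m and $F_0 = 100$ N are illustrative assumptions.

```python
import numpy as np

m, k, F0 = 10.0, 4000.0, 100.0          # assumed properties and step amplitude
wn = np.sqrt(k / m)

dt = 1e-3
t = np.arange(0.0, 2.0 + dt, dt)
F = np.full_like(t, F0)                  # step force F(t) = F0 for t >= 0
h = np.sin(wn * t) / (m * wn)            # undamped impulse response, Eq. (82) with zeta = 0

# Discrete Duhamel integral, Eq. (83): u(t_j) ~ sum_i F(tau_i) h(t_j - tau_i) * dt
u_num = np.convolve(F, h)[:len(t)] * dt
u_exact = F0 / k * (1.0 - np.cos(wn * t))   # closed-form solution of Example 2

print("max |u_num - u_exact| =", np.max(np.abs(u_num - u_exact)))
```

The agreement improves as the time step $\Delta\tau$ (here `dt`) is reduced, which mirrors the limit process leading from (83) to (84).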
2 Multi-Degree-of-Freedom Systems

In the preceding section, a single coordinate and a single second-order differential equation described the dynamic motion of a mass-spring-damper system. However, many structures cannot be modelled successfully by a SDOF model. Mass, damping and stiffness are the important dynamic properties of any real problem and they are, due to their inherent nature, distributed. The description of the dynamic behavior of such a system is difficult, as it requires knowledge of the movement of an infinite number of points on the structure. Nevertheless, by a proper discretization of the distributed system into elements of finite size (finite elements), which are connected at a limited number of points (nodes), the problem can be simplified considerably. The analysis is then reduced to the analysis of the movement of a finite number of nodal points in the directions of the prescribed DOFs at each node. The problem now consists in solving a MDOF system. The limiting MDOF case is a SDOF system, where only one DOF is considered sufficient to describe the dynamic behavior of the distributed system.

In this section, an undamped two-degree-of-freedom example is first studied in order to introduce the concepts of MDOF systems, natural frequencies and mode shapes. This is then extended to systems with an arbitrary number of degrees of freedom.

2.1 Introduction

The distributed-parameter system is discretized such that mass, damping and stiffness properties are lumped at each of the pre-selected DOFs. Thus, the analysis of a distributed-parameter system is simplified to the analysis of a lumped-parameter (discrete) MDOF system. The locations of the nodal points and the DOFs associated with each nodal point must be selected carefully, in such a way that the movement of the structure is described by a displacement vector $\mathbf{u}(t)$ at any time. The elements of $\mathbf{u}(t)$ are the displacements $u_i(t)$ ($i = 1, \ldots, N$) describing the movement of each DOF as a function of time.

For example, consider the three-story building represented in Figure 18. If the position of each of the floors is of interest, it is necessary to know the displacement of every floor, so $\mathbf{u}(t) = \left[\,u_1(t)\;\; u_2(t)\;\; u_3(t)\,\right]^T$, where $u_i$ is the horizontal movement of the $i$-th floor. The loads act at each DOF, so a vector of loads $\mathbf{F}(t) = \left[\,F_1(t)\;\; F_2(t)\;\; F_3(t)\,\right]^T$ is applied to all DOFs. After applying the loads and the equilibrium conditions to all DOFs, the following matrix equation of motion (see Equation (17) for a SDOF system) is obtained

$$\mathbf{M}\ddot{\mathbf{u}}(t) + \mathbf{C}\dot{\mathbf{u}}(t) + \mathbf{K}\mathbf{u}(t) = \mathbf{F}(t) \tag{87}$$

in which the vector $\mathbf{u}(t)$ is unknown. Methods for the development of this matrix equation, which is fundamental in the dynamic analysis of damped MDOF systems, are given in books on dynamics of structures, for instance Chopra 1995 and Clough and Penzien 1993, and are briefly reviewed in Chapter 4. The FE method, implemented through commercially available FE codes, is the most commonly used approach. In the case of linear systems, the key assumption made in the development of this
Figure 18: a) Simple undamped model of the horizontal vibration of a three-story
building. b) Restoring forces.
matrix equation is that a linear time-invariant dynamic system is being analyzed. This leads to the mass $\mathbf{M}$, damping $\mathbf{C}$ and stiffness $\mathbf{K}$ matrices being time independent. The superposition principle and Maxwell's reciprocity theorem therefore apply.

Similarly to the SDOF case, the matrix equation (87) can be understood as a statement that equilibrium of the inertial forces $\mathbf{F}_I(t)$, damping forces $\mathbf{F}_D(t)$, elastic (stiffness, or restoring) forces $\mathbf{F}_S(t)$ and external forces $\mathbf{F}(t)$, acting in the direction of each DOF, is satisfied at each instant of time

$$\mathbf{F}_I(t) + \mathbf{F}_D(t) + \mathbf{F}_S(t) = \mathbf{F}(t)$$

where

$$\mathbf{F}_I(t) = \mathbf{M}\ddot{\mathbf{u}}(t), \qquad \mathbf{F}_D(t) = \mathbf{C}\dot{\mathbf{u}}(t), \qquad \mathbf{F}_S(t) = \mathbf{K}\mathbf{u}(t)$$

For $N$ DOFs, the matrix equation of motion is formed by $N$ second-order ordinary linear differential equations. Generally, these equations are coupled, which means that the unknown displacement $u_i(t)$ can appear in more than one differential equation.
2.1.1 Two-Degree-of-Freedom system

The simplest MDOF system is now analyzed. It corresponds to a lumped model of a two-story building frame subjected to external forces $F_1(t)$ and $F_2(t)$. The
- 34-
2.1 Introduction
Figure 19: Two-story shear frame and forces acting on the masses
forces acting in each oor mass :
)
are shown in Figure 19. These include two
external forces 1
1
(t) and 1
2
(t), the elastic stiness forces 1
c1
(t) and 1
c2
(t), and
the damping forces 1
o1
(t) and 1
o2
(t). The external forces are supposed to act
along the positive direction. The elastic and damping forces are shown acting in the
opposite direction since they are internal forces that resist the motions.
The application of the Newtons second law of motion gives the following equation
for each mass:
1
)
(t) 1
o)
(t) 1
c)
(t) = :
)
n
)
(t) , or :
)
n
)
(t) +1
o)
(t) +1
c)
(t) = 1
)
(t) ; , = 1, 2
or, for both degrees of freedom in a matrix form

:
1
0
:
2

n
1
(t)
n
2
(t)

1
o1
(t)
1
o2
(t)

1
c1
(t)
1
c2
(t)

=

1
1
(t)
1
2
(t)

(88)
Assuming linear behavior, the elastic resisting forces are related to the relative displacement
between the storeys and to the storey stiffness. The storey stiffness is the
sum of the lateral stiffnesses of all columns in the storey. For a storey of height h,
and a column with modulus E and second moment of area I, the lateral stiffness of
a column with clamped ends, implied by the shear-building idealization, is 12EI/h³.
Thus, the storey stiffness is

k_j = Σ_{columns} 12EI/h³

which relates the elastic stiffness forces to the displacements. The force F_S1 at the
first floor is made of two contributions: F_S1^a from the storey above and F_S1^b from the
storey below. Thus

F_S1 = F_S1^a + F_S1^b   with   F_S1^a = −k_2 (u_2 − u_1)   and   F_S1^b = k_1 u_1

When developing these equations it is assumed that the force F_S acts in the direction
shown in Figure 19. The sign "−" in F_S1^a appears because a positive inter-storey
drift (u_2 − u_1) generates a force which acts on the floor in the left-to-right direction,
which is the negative direction as defined in Figure 19. Therefore,

F_S1 = k_1 u_1 + [−k_2 (u_2 − u_1)]   or   F_S1 = (k_1 + k_2) u_1 − k_2 u_2   (89)

Finally, similarly as for F_S1^b, the force F_S2 is

F_S2 = k_2 (u_2 − u_1)   (90)

Equations (89) and (90) can be written in matrix form as

{F_S1; F_S2} = [k_1+k_2  −k_2; −k_2  k_2] {u_1; u_2}   or   F_S = K u   (91)

The damping forces F_D1(t) and F_D2(t) are related to the storey horizontal velocities
u̇_1(t) and u̇_2(t) and to their difference (u̇_2(t) − u̇_1(t)). Following the same
procedure as for the elastic stiffness forces, it can be shown that

{F_D1; F_D2} = [c_1+c_2  −c_2; −c_2  c_2] {u̇_1; u̇_2}   or   F_D = C u̇   (92)

When Equations (91) and (92) are substituted into Equation (88), the following
matrix equation is obtained

[m_1  0; 0  m_2] {ü_1(t); ü_2(t)} + [c_1+c_2  −c_2; −c_2  c_2] {u̇_1(t); u̇_2(t)} + [k_1+k_2  −k_2; −k_2  k_2] {u_1(t); u_2(t)} = {F_1(t); F_2(t)}   (93)

This matrix differential equation can be presented as a set of two second-order
differential equations

m_1 ü_1(t) + (c_1+c_2) u̇_1(t) + (k_1+k_2) u_1(t) − (c_2 u̇_2(t) + k_2 u_2(t)) = F_1(t)
m_2 ü_2(t) + c_2 u̇_2(t) + k_2 u_2(t) − (c_2 u̇_1(t) + k_2 u_1(t)) = F_2(t)

The analysis of these equations indicates that: (i) the two equations of motion
describing the motion of the two DOFs u_1 and u_2 are coupled, because the unknown
function u_2(t) appears in the dynamic equilibrium equation of DOF u_1 and vice
versa, and (ii) the coupling is physically provided by the stiffness k_2 and the damping
coefficient c_2.
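To make the assembly concrete, the following minimal MATLAB sketch builds the matrices of Equation (93); all numerical values are illustrative assumptions, not data taken from these notes. The coupling discussed above appears only in the off-diagonal −c_2 and −k_2 entries:

% Assembly of the two-storey matrices of Eq. (93); values are illustrative only
m1 = 2000; m2 = 1500;            % storey masses (kg), assumed
c1 = 800;  c2 = 600;             % storey damping coefficients (Ns/m), assumed
k1 = 3e6;  k2 = 2.5e6;           % storey stiffnesses (N/m), assumed
M = [m1 0; 0 m2];                % diagonal (uncoupled) mass matrix
C = [c1+c2 -c2; -c2 c2];         % off-diagonal -c2 couples the two DOFs
K = [k1+k2 -k2; -k2 k2];         % off-diagonal -k2 couples the two DOFs
disp(M), disp(C), disp(K)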
2.1.2 Mathematical modeling of damping
Whereas the formulation of the mass and stiffness matrices M and K is based on the
summation of the physical properties of the individual discretized elements, the damping
matrix C cannot be formulated in the same way. Nevertheless, it is usually
convenient to assume that the damping is viscous, meaning that the damping force
vector F_D(t) is directly proportional to the velocity vector u̇(t). This might not
represent the actual damping mechanism which physically dissipates vibration energy
from the structural system. However, it is a good modeling approximation and
is very useful from a mathematical point of view, as it simplifies the solution of the
equations of motion.

Thus, for the sake of simplicity, linear viscous damping will be assumed in the
remainder of this section.
2.1.3 Solutions of the equation of motion
Generally speaking, the existing methods to solve equations of motion formulated
by the FE method can be divided into two groups:
1. mode superposition (or modal solution, or normal mode) methods, and
2. direct (or step-by-step) time integration methods.
For linear dynamic problems, the mode superposition method is the one usually
used since: (i) it provides physical insight into the dynamic behavior of the structure
modeled (as compared with direct integration methods, which do not provide physical
insight into the actual dynamic behavior of the structure), and (ii) it is considerably
more efficient computationally than direct integration methods.
The mode superposition solution of the equations of motion of a MDOF discretized
dynamic system is based on the so-called expansion theorem of linear algebra. This
theorem states that any N-dimensional vector u_{N×1} can be expressed as a linear
combination of N-dimensional vectors e_{r,N×1} which are orthogonal and, therefore,
linearly independent (none of them can be expressed as a linear combination of the
others). That is

u = Σ_{r=1}^{N} γ_r e_r

The expansion theorem is now illustrated by the example of expressing the position
of a point {a, b, c}^T in 3D space using the following unit vectors

e_1 = {1, 0, 0}^T,   e_2 = {0, 1, 0}^T,   e_3 = {0, 0, 1}^T

so

{a; b; c} = a {1; 0; 0} + b {0; 1; 0} + c {0; 0; 1}   (94)

where the three coordinates a, b and c are the scaling factors of the three linearly
independent vectors. To conclude, the main idea behind the mode-superposition
methods is to express a time-dependent N-dimensional vector u_{N×1}(t), which is the
solution of the MDOF equation of motion (87), as the following sum

u(t) = Σ_{r=1}^{N} q_r(t) e_r   (95)

where q_r(t) is known as a generalized coordinate, which is a function of time. Therefore,
at each instant of time t_i the displacement vector u(t) can be presented as the
following linear combination

u(t_i) = Σ_{r=1}^{N} q_r(t_i) e_r

where q_r(t_i) can be seen as a constant which will change at the next instant of time.
The set of vectors e_r is known as a basis of the N-dimensional vector space defined
by the MDOF vibration problem. The key is then how to find the sets of q_r(t) and
e_r in order to obtain the solution given in Equation (95). To do this, and following
a procedure similar to that used for SDOF systems, the special case of an initially
disturbed undamped system will be studied first.
2.2 Vibration absorber: an application of Two-Degree-of-Freedom
systems
The goal of this section is to provide the theoretical background for the design of
passive vibration absorbers. Damping devices can be categorized as passive devices,
active devices and semi-active devices. Within the passive devices one can find a large
number of different devices: viscous dampers, viscoelastic dampers, Coulomb friction
dampers, hysteretic dampers, tuned mass dampers (TMD) or tuned liquid dampers.

A passive TMD, or tuned vibration absorber, is basically an energy dissipation
device that in its simplest form consists of a mass (secondary mass) that is attached
to a structure (primary system) with spring and damper elements. Figure 20 shows
a CAD model of a TMD and Figure 21 shows the installation of a TMD under
the deck of a footbridge together with a detail view of the TMD. Due to the damper, energy
dissipation happens when the secondary mass of the TMD oscillates. This is achieved
by transferring as much energy as possible from the primary system to the TMD
by a careful tuning of the natural frequency and damping ratio of the TMD. Since
the mass of the TMD is significantly smaller than that of the primary system,
transferring energy from the primary system to the TMD generates a large relative
oscillation of the mass of the TMD. A TMD operates efficiently only in a narrow
frequency band; that is, high energy transfer occurs when the natural frequency of
the TMD is tuned to the natural frequency of the primary structure. Therefore, if
the TMD is attached to a continuous structure, the TMD mitigates only one specific
vibration mode.

Figure 20: CAD model of a TMD designed to cancel vertical vibrations.

Figure 21: Typical implementation of a TMD for the mitigation of vertical vibrations
on bridges.

The concept of a TMD without an integrated damper was first developed by
Frahm in 1909 to reduce the rolling motion of ships (US Patent No. 989958). In
the third edition of Den Hartog's book "Mechanical Vibrations" (1947), an analysis
for the optimal design of the TMD parameters considering the TMD damper was
presented. Thus, the model of a primary mass-spring-damper system, modeling the
primary structure, with an attached secondary mass-spring-damper system, modeling
the TMD, is represented by the two-degree-of-freedom system shown in Figure 22.

Figure 22: Two-degree-of-freedom model of a vibration absorber (TMD) attached
to a primary system, considering an excitation force F(t) acting on the primary mass
and an excitation through the base acceleration ü_g(t).
The equations of motion of the two-degree-of-freedom model of Figure 22 are (the
reader is encouraged to derive them)

m_1 ü_1(t) + (c_1+c_2) u̇_1(t) + (k_1+k_2) u_1(t) − c_2 u̇_2(t) − k_2 u_2(t) = F(t) − m_1 ü_g(t)   (96)
m_2 ü_2(t) + c_2 u̇_2(t) + k_2 u_2(t) − c_2 u̇_1(t) − k_2 u_1(t) = −m_2 ü_g(t)   (97)

These two equations can be written as a matrix equation, as was done before in (93),

[m_1  0; 0  m_2] {ü_1(t); ü_2(t)} + [c_1+c_2  −c_2; −c_2  c_2] {u̇_1(t); u̇_2(t)} + [k_1+k_2  −k_2; −k_2  k_2] {u_1(t); u_2(t)} =   (98)
= {F(t); 0} − {m_1; m_2} ü_g(t)   (99)

where u_1 is the displacement of the primary system and u_2 is the displacement of
the TMD mass. The parameters m_1, c_1, k_1 are the mass, damping and stiffness of the
primary mass, and m_2, c_2, k_2 are those of the TMD. F(t) is the force acting on the
primary system and ü_g is the base acceleration.
2.2.1 The undamped vibration absorber
Let us consider the model of Figure 22 with c_1 = c_2 = 0, subjected only to a harmonic
excitation on the primary mass, F(t) = F_0 sin(ωt). The equations of motion are now
as follows

m_1 ü_1(t) + (k_1+k_2) u_1(t) − k_2 u_2(t) = F_0 sin(ωt)
m_2 ü_2(t) + k_2 u_2(t) − k_2 u_1(t) = 0

Let u_1 = U_1 sin(ωt) and u_2 = U_2 sin(ωt) (considering only the steady-state response,
that is, the particular solution for an undamped system), and thus ü_1 = −ω² U_1 sin(ωt),
ü_2 = −ω² U_2 sin(ωt). Substituting these values into the equations of motion,

(k_1 + k_2 − m_1 ω²) U_1 − k_2 U_2 = F_0
−k_2 U_1 + (k_2 − m_2 ω²) U_2 = 0

Solving,

U_1 = F_0 (k_2 − m_2 ω²) / [ (k_1 + k_2 − m_1 ω²)(k_2 − m_2 ω²) − k_2² ]

In order to cut down the amplitude of the vibration of the primary mass m_1, U_1 = 0,
so (k_2 − m_2 ω²) must be equal to zero. Hence k_2 = m_2 ω² and ω² = k_2/m_2.

Then, the absorber must be designed such that its natural frequency is equal
to the frequency of the applied force. When this happens, the amplitude of the
vibration of the primary mass is practically zero. In general, a TMD is used only
when the natural frequency of the original system is close to the excitation frequency.
Hence, k_1/m_1 = k_2/m_2.
Example 3 A small reciprocating machine weighs 25 kg and runs at a constant speed
of 6000 rpm (see Figure 23). After it was installed, it was found that the excitation
frequency was too close to the natural frequency of the system. What vibration absorber
should be added if the nearest natural frequency of the resulting system should be at least
20% away from the excitation frequency?

Solution. After the absorber is added to the machine, the whole system becomes
a two-degree-of-freedom system, which is simplified and represented by Figure 22.
The amplitudes of the steady-state vibration of the two masses are given by

U_1 = F_0 (k_2 − m ω²) / [ (k_1 + k_2 − M ω²)(k_2 − m ω²) − k_2² ]
U_2 = F_0 k_2 / [ (k_1 + k_2 − M ω²)(k_2 − m ω²) − k_2² ]

in which m_1 = M and m_2 = m. The natural frequencies of the whole system are
obtained from

(k_1 + k_2 − M ω²)(k_2 − m ω²) − k_2² = 0

and these frequencies should be at least 20% away from the excitation frequency in
order to avoid resonant behavior. Dividing the last expression by k_1 k_2 yields

(1 + k_2/k_1 − M ω²/k_1)(1 − m ω²/k_2) − k_2/k_1 = 0

On the other hand, k_1/M = k_2/m = ω_a² must be fulfilled. Defining x² = ω²/ω_a²,
the former equation can be rewritten as

x⁴ − (2 + k_2/k_1) x² + 1 = 0

and since x = 0.8, then k_2/k_1 = x² + 1/x² − 2 = 0.2025. Since k_1/M = k_2/m, then
k_2/k_1 = m/M = 0.2025. Therefore, m = 0.2025 M ≈ 5.06 kg. Thus the absorber
should have a mass of about 5.06 kg and a spring of stiffness 0.2025 k_1.

Figure 23: Illustration of a reciprocating machine.
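The arithmetic of this example can be checked with a few MATLAB lines (a sketch using our own variable names; the quartic gives the two frequency ratios 0.80 and 1.25, i.e. both new natural frequencies are at least 20% away from the excitation):

% Numerical check of Example 3
Mm = 25;                              % machine mass M (kg)
wa = 2*pi*6000/60;                    % excitation = tuning frequency (rad/s)
x  = 0.8;                             % required lower frequency ratio (20% below)
kratio = x^2 + 1/x^2 - 2              % k2/k1 from x^4-(2+k2/k1)x^2+1=0 -> 0.2025
m2 = kratio*Mm                        % absorber mass m = (k2/k1)*M -> ~5.06 kg
k2 = m2*wa^2                          % absorber stiffness (N/m)
x12 = sqrt(roots([1 -(2+kratio) 1]))  % both roots: 1.25 and 0.80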
2.2.2 The damped vibration absorber
Consider now the two-degree-of-freedom system represented in Figure 22, including
damping in both the primary and the secondary mass.

Example 4 Determine the general motion of a damped two-degree-of-freedom system
subjected to a harmonic excitation on the primary mass.

Solution: From Equations (96), the motion is described by the following equations

m_1 ü_1(t) + (c_1+c_2) u̇_1(t) + (k_1+k_2) u_1(t) − c_2 u̇_2(t) − k_2 u_2(t) = F_0 sin(ωt)
m_2 ü_2(t) + c_2 u̇_2(t) + k_2 u_2(t) − c_2 u̇_1(t) − k_2 u_1(t) = 0

Using the frequency response method (studied in Chapter 1), we substitute F_0 sin(ωt)
by F_0 e^{jωt}, and u_1 = U_1 e^{jωt} and u_2 = U_2 e^{jωt}. Rearranging and dividing by e^{jωt}, the
equations of motion can be rewritten as

[ (k_1+k_2) − m_1 ω² + j (c_1+c_2) ω ] U_1 − (k_2 + j c_2 ω) U_2 = F_0
−(k_2 + j c_2 ω) U_1 + [ k_2 − m_2 ω² + j c_2 ω ] U_2 = 0

Using Cramer's rule,

U_1 = det[ F_0   −(k_2 + j c_2 ω) ; 0   k_2 − m_2 ω² + j c_2 ω ] / { [ (k_1+k_2) − m_1 ω² + j (c_1+c_2) ω ] [ k_2 − m_2 ω² + j c_2 ω ] − (k_2 + j c_2 ω)² }

which is of the form (A + jB)/(C + jD), or (G + jH), whose modulus is
|U_1| = F_0 √( (A² + B²)/(C² + D²) ), giving

|U_1| = F_0 √{ [ (k_2 − m_2 ω²)² + c_2² ω² ] / ( R² + S² ) }
|U_2| = F_0 √{ ( k_2² + c_2² ω² ) / ( R² + S² ) }

where R and S are the real and imaginary parts of the common denominator,

R = m_1 m_2 ω⁴ − ( m_1 k_2 + m_2 k_1 + m_2 k_2 + c_1 c_2 ) ω² + k_1 k_2
S = ( k_1 c_2 + k_2 c_1 ) ω − ( m_1 c_2 + m_2 c_1 + m_2 c_2 ) ω³

From complex variable theory, we can write

U_1 = F_0 (G + jH) = |U_1| e^{jθ_1} = |U_1| (cos θ_1 + j sin θ_1)
U_2 = F_0 (E + jF) = |U_2| e^{jθ_2} = |U_2| (cos θ_2 + j sin θ_2)

where θ_1 = tan⁻¹(H/G) and θ_2 = tan⁻¹(F/E). The excitation force is F_0 sin(ωt) =
Im{F_0 e^{jωt}}, therefore

u_1(t) = Im{U_1 e^{jωt}} = Im{|U_1| e^{j(ωt+θ_1)}} = |U_1| sin(ωt + θ_1)
u_2(t) = Im{U_2 e^{jωt}} = Im{|U_2| e^{j(ωt+θ_2)}} = |U_2| sin(ωt + θ_2)

If the excitation force were F_0 cos(ωt) = Re{F_0 e^{jωt}}, then

u_1(t) = Re{U_1 e^{jωt}} = Re{|U_1| e^{j(ωt+θ_1)}} = |U_1| cos(ωt + θ_1)
u_2(t) = Re{U_2 e^{jωt}} = Re{|U_2| e^{j(ωt+θ_2)}} = |U_2| cos(ωt + θ_2)
Example 5 In Example 1, the stiffness, mass and damping coefficient of the
single-vibration-mode response of a footbridge were obtained. Consider a harmonic
excitation of unit amplitude and a frequency 5% higher than the natural frequency of
the structure. Obtain the steady-state response. Imagine now that a secondary mass
of value 5% of the structure (primary) mass is added. Choose the stiffness of the secondary
mass in such a way that the natural frequency of the secondary mass is the same as
that of the primary mass. Obtain the response of the resulting two-degree-of-freedom system
for different values of the damping coefficient of the secondary mass. Then plot the
amplitude of the displacement of the primary mass versus the damping coefficient of
the secondary mass. What is the effect of the secondary mass on the primary mass
displacement? What is the effect of the damping coefficient of the secondary mass
on the primary mass displacement?
Up to now, many optimization criteria have been proposed for the design of
damped vibration absorbers for damped structures. One of the most common criteria
is the one based on the H∞ norm. The H∞ norm of an
FRF G(jω) represents the maximum amplitude of the absolute value |G(jω)|. The
H∞ norm of the FRF between the primary mass displacement and the harmonic
excitation on the primary mass is minimized as follows

min_{k_2, c_2} ||G(jω)||_∞ = min_{k_2, c_2} max_ω |G(jω)|

It is usual to define the frequency ratio ν = √[ (k_2/m_2) / (k_1/m_1) ], the mass ratio
μ = m_2/m_1, and the damping ratios ζ_1 and ζ_2. The optimization problem is then stated as

min_{ν, ζ_2} ||G(jω)||_∞ = min_{ν, ζ_2} max_ω |G(jω)|

The minimum value of ||G(jω)||_∞ is achieved if and only if there exist two local
maxima and both have exactly the same amplitude (see Figure 24). If one assumes
that ζ_1 = 0, that is, the damping of the primary mass vanishes, there exist
at least two frequencies where |G(jω)| is invariant with respect to ζ_2 (ν being fixed).
This observation was used by Den Hartog to develop the well-known fixed-points
method to design absorbers for structures with vanishing damping. Thus, simple
closed-form approximations can be derived. These approximations are sufficiently
accurate for primary structures with damping ratios of less than 10% (ζ_1 < 0.10). If
the primary mass damping is high, the fixed-points method does not work, and
the optimization problem should be solved numerically. Thus, the optimal absorbing
capacity is obtained if both peaks have the same height, and for a structure with
vanishing damping the optimal parameters ν and ζ_2 were determined by Den
Hartog as

ν = 1/(1 + μ),   ζ_2 = √[ 3μ / (8(1 + μ)³) ]

Figure 24: Illustration of Den Hartog's fixed-points method for a primary structure
with vanishing damping, ζ_1 = 0.
Example 6 An example of a TMD design for the cancellation of a single-degree-of-freedom
vibration mode of a structure. Figure 25 shows the dynamic response factor between
the primary mass displacement and the excitation on the primary mass; the MATLAB
script used is listed below.
%Vibration absorber design
close all; clear all; clc
%Structure modal properties
f1=3.51;                 %natural frequency of the structure (Hz)
w1=f1*2*pi;              %in rad/s
m1=18500;                %modal mass associated with the mode (kg)
k1=m1*w1^2;              %modal stiffness
c1=m1*2*0.007*w1;        %modal damping (0.7% damping ratio)
%Vibration absorber (Den Hartog optimal tuning)
rmass=0.02;              %mass ratio
m2=m1*rmass;             %secondary mass value
dampratio=sqrt(3*rmass/(8*(1+rmass)^3));
freqratio=1/(1+rmass);
f2=f1*freqratio
w2=w1*freqratio
c2=2*dampratio*w2*m2
k2=(w2^2)*m2
%FRFs (Control System Toolbox)
s=tf('s');
%FRF without absorber
G=(1/m1)/(s*s + s*(c1/m1) + (k1/m1));
%FRF with vibration absorber
A=s*s*m1 + s*(c1+c2) + (k1+k2);
B=((s*c2+k2)*(s*c2+k2))/(s*s*m2 + s*c2 + k2);
Gtotal=minreal(1/(A-B));
%Bode magnitude diagram
W=2*pi*linspace(2.5,4.5,1000);
bodemag(G,'k-',Gtotal,'b',W)
xlabel('Frequency','FontName','Times New Roman','FontSize',12);
ylabel('|G(jw)|','FontName','Times New Roman','FontSize',12);
title('TMD design','FontName','Times New Roman','FontSize',12);
legend('Without TMD','With TMD','FontSize',12)
The reader can easily analyze the effects of poor frequency tuning and poor
damping tuning. For instance, it can be seen that the effect of the TMD is very
sensitive to an inaccurate frequency tuning, while the tuning of the damping ratio
is much less critical. Plot the dynamic amplification factor considering an error of
5% in the frequency, f_2 = 0.95 f_1 ν and f_2 = 1.05 f_1 ν. Plot the dynamic amplification
factor considering an error of 50% in the damping.

Figure 25: Example of TMD design: magnitude |G(jω)| of the FRF of the primary
mass versus frequency (Hz), with and without the TMD.
2.3 Free undamped vibrations of MDOF systems
If the system is undamped, C = 0, and it vibrates freely after an initial disturbance,
the set of equations that describes the dynamics of this system is

M ü(t) + K u(t) = 0   (100)

with the corresponding initial conditions at t = 0

u(t = 0) = u_0   (101)
u̇(t = 0) = u̇_0   (102)

It is assumed that after the initial disturbance the motion of the MDOF system is
harmonic (as in the SDOF case, which is nothing else but the limiting MDOF case
when N = 1), where all points vibrate with the same frequency but with a different
amplitude for each DOF. Therefore, similarly as for SDOF systems,

u(t) = a sin(ωt + θ),   ü(t) = −a ω² sin(ωt + θ)   with a ∈ R^{N×1}

After replacing the displacement and acceleration vectors in Equation (100), the
following matrix equation is obtained

(−ω² M + K) a sin(ωt + θ) = 0

Since sin(ωt + θ) is not zero for every t, the only option for this equation to be
satisfied is that the following set of linear homogeneous equations is satisfied

(−ω² M + K) a = 0   or   ω² M a = K a   (103)

where the unknowns are the vibration amplitudes in a. Equation (103) is known as
an undamped eigenvalue problem. Possible solutions are:
1. The trivial solution a = 0, which implies no motion, so it is not appropriate, or
2. To find ω which guarantees the existence of a non-trivial solution of the
system. In this case, it must be satisfied that

det(−ω² M + K) = 0   (104)

When this determinant is expanded, a polynomial of order N in ω² is obtained.
Equation (104) is known as the characteristic equation. This equation has N real
and positive roots ω_r² (r = 1, ..., N) for ω². This is because structural mass and
stiffness matrices are symmetric and positive definite.
2.3.1 Natural frequencies and mode shapes
Each solution ω_r² of (104) leads to a non-trivial solution of (103)

a = φ_r,   r = 1, ..., N

so that

(−ω_r² M + K) φ_r = 0,   r = 1, ..., N   (105)

Therefore, considering that every vector γ_r φ_r, that is, a constant multiplying φ_r,
is also a solution, there are N time-domain solutions of Equation (100), each one
of the form

u_r(t) = φ_r A_r sin(ω_r t + θ_r)   (106)

where A_r and θ_r are two constants to be determined from the initial conditions.
Equation (106) can also be written as

u_r(t) = φ_r (B_r sin(ω_r t) + D_r cos(ω_r t))   (107)

where B_r and D_r are the new constants to be determined.

Equation (107) shows that the solution of an undamped problem is in fact a collection
of sinusoidal functions, each one having frequency ω_r, which is called a natural
frequency. The equation also indicates that during this oscillation the structure
takes a shape defined by the vector φ_r, which is known as a mode shape (or natural mode
of vibration, or normal mode, or characteristic vector). The pair ω_r and φ_r defines the
r-th mode of vibration.

The process of extracting the pairs ω_r and φ_r is usually known as eigenvalue
extraction, where λ_r = ω_r² is called an eigenvalue and φ_r is an eigenvector of the following
eigenproblem

(M⁻¹ K) φ_r = λ_r φ_r,   r = 1, ..., N   (108)

Recall that an eigenproblem is defined as the problem of finding vectors b and
constants λ, for a given matrix A, such that A b = λ b.

As an example, consider the three-story shear building model of Figure 18, for
which the following system properties are assumed

m_1 = m_2 = m_3 = 5000 kg
k_1 = k_2 = k_3 = 6000 N/m

The natural frequencies and mode shapes calculated by solving the eigenproblem
are shown in Figure 26. The natural frequencies calculated are

ω_1 = 0.487 rad/s,   f_1 = 0.077 Hz
ω_2 = 1.366 rad/s,   f_2 = 0.217 Hz
ω_3 = 1.974 rad/s,   f_3 = 0.314 Hz

and the mode shapes (in which the largest value is scaled to unity)

φ_1 = {0.445, 0.802, 1.000}^T,   φ_2 = {1.000, 0.445, −0.802}^T,   φ_3 = {−0.802, 1.000, −0.445}^T

Figure 26: Plot of the three mode shapes of the system of Figure 18.
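These values can be reproduced with a minimal MATLAB sketch of the eigenproblem; the script below is an illustration using the system properties given above, not the code used to generate Figure 26:

% Eigenproblem of the 3-story shear building (values as above)
m = 5000; k = 6000;                      % kg, N/m (equal for all storeys)
M = m*eye(3);                            % lumped mass matrix
K = k*[ 2 -1  0;                         % shear-building stiffness matrix
       -1  2 -1;
        0 -1  1];
[V,D] = eig(K,M);                        % generalized eigenproblem K*phi = w^2*M*phi
[w2,idx] = sort(diag(D));                % sort eigenvalues in ascending order
V = V(:,idx);
w = sqrt(w2);                            % natural frequencies (rad/s)
f = w/(2*pi);                            % natural frequencies (Hz)
for r = 1:3
    [~,p] = max(abs(V(:,r)));            % scale each mode so its largest entry is 1
    V(:,r) = V(:,r)/V(p,r);
end
disp([w f]); disp(V)                     % ~0.487/1.366/1.974 rad/s and the modes above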
2.3.2 Orthogonality of mode shapes


It can be shown that mode shapes are orthogonal with regard to the mass and stiffness
matrices, that is, they are independent and can be used to express the solution of
(100) as

u(t) = Σ_{r=1}^{N} q_r(t) φ_r   (109)

where q_r(t) is termed a generalized or modal coordinate. This expression can be
re-written in matrix form as

u(t) = Φ q(t)

The proof of this key property is as follows. If ω_n, φ_n and ω_r, φ_r are
two different modes of vibration with different frequencies ω_n ≠ ω_r, then Equation
(105) for mode n is

ω_n² M φ_n = K φ_n   (110)

By multiplying by the transpose of φ_r from the left,

ω_n² φ_r^T M φ_n = φ_r^T K φ_n

and transposing both sides of the equation, it is obtained that

ω_n² φ_n^T M φ_r = φ_n^T K φ_r   (111)

where we consider that M and K are symmetric matrices. Equation (105) is now
re-written for mode r and multiplied by the transpose of φ_n:

ω_r² φ_n^T M φ_r = φ_n^T K φ_r   (112)

The right-hand sides of Equations (111) and (112) are the same, which means, after
subtracting them, that

(ω_n² − ω_r²) φ_n^T M φ_r = 0

As ω_n ≠ ω_r, the only option to fulfil the latter equation is

φ_n^T M φ_r = 0   with n ≠ r   (113)

Similarly, it can be proven that

φ_n^T K φ_r = 0   with n ≠ r   (114)

Then, the orthogonality of the modes with different frequencies is proven. The orthogonality
of natural modes implies that the following square matrices are diagonal

K* = Φ^T K Φ,   M* = Φ^T M Φ   (115)

where the diagonal elements are positive, since M and K are positive definite

φ_r^T K φ_r = k_r > 0   and   φ_r^T M φ_r = m_r > 0   (116)

As we see below, these diagonal terms are related by

k_r = ω_r² m_r   (117)

The conclusion is that natural modes are orthogonal with respect to matrices M
and K and linearly independent, so they can be used in Equation (109) to describe
the solution vector u(t). This procedure is called modal expansion.
2.3.3 Modal matrices
As previously mentioned, considering Equation (105), each of the N natural frequencies
and mode shapes satisfies

K φ_r = M φ_r ω_r²   (118)

If the modal matrix Φ is defined as an N×N matrix in which each column is a mode
shape

Φ = [φ_1 ... φ_N]

and the spectral matrix Ω² is defined as an N×N diagonal matrix containing the squared
natural frequencies on its diagonal

Ω² = diag(ω_1², ..., ω_N²)

then all N modal equations (118) can be written in compact form as the following
single matrix equation

K Φ = M Φ Ω²   (119)

Due to the orthogonality of the modes of vibration, the following square matrices are
diagonal

K* = diag(k_1, ..., k_N) = Φ^T K Φ,   M* = diag(m_1, ..., m_N) = Φ^T M Φ

Then, multiplying Eq. (119) by Φ^T from the left side, it is obtained that

diag(k_1, ..., k_N) = diag(m_1, ..., m_N) Ω²   or   k_r = ω_r² m_r
We now pay attention to the de-coupling of the equations of motion. From the
equation of motion (100), and after differentiating the assumed solution (109), the
new equation of motion in the generalized (modal) coordinates is

Σ_{r=1}^{N} ( M φ_r q̈_r(t) + K φ_r q_r(t) ) = 0

If both sides are multiplied from the left by φ_n^T, this equation becomes

Σ_{r=1}^{N} ( φ_n^T M φ_r q̈_r(t) + φ_n^T K φ_r q_r(t) ) = 0

Due to the orthogonality conditions,

m_r q̈_r(t) + k_r q_r(t) = 0   (120)

This operation can be performed for every φ_n^T, n = 1, ..., N, which means that it
is possible to establish N equations (120) which can be arranged in the following
matrix form

diag(m_1, ..., m_N) q̈(t) + diag(k_1, ..., k_N) q(t) = 0   or   M* q̈(t) + K* q(t) = 0   (121)

This means that the system of N coupled equations of motion in the physical coordinates
u(t) has been transformed into a system of N independent (de-coupled) equations
of motion in the generalized coordinates q(t).
2.3.4 Normalization of modes
Being solutions of a homogeneous system of equations, mode shapes do not have an
absolute value and, after eigenvalue extraction, they can be scaled (i.e. multiplied
by a constant) in a number of ways. Two scaling or normalization methods are
usually used:

1. A unity-scaled mode shape, which is scaled in such a way that its maximum value
has an amplitude of 1.0, or

2. A mass-normalized mode shape ψ_r, which is scaled using any mode shape φ_r
obtained via eigenvalue extraction, as follows

ψ_r = φ_r / √(m_r)   with   m_r = φ_r^T M φ_r   (122)

Scaled in this way, a mass-normalized mode shape leads to the following diagonalization
of the mass matrix

ψ_r^T M ψ_r = (1/√(m_r)) φ_r^T M (1/√(m_r)) φ_r = (1/m_r) m_r = 1 kg

Therefore, considering Equation (117), in the case of mass-normalized mode shapes

k_r = ω_r² · 1 = ω_r²

Hence option 2 is the most used one.
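Continuing the earlier sketch, mass-normalization according to Equation (122) can be carried out as follows (illustrative code; it assumes the matrices M, K and the unity-scaled modes V and frequencies w from the previous snippet are available in the workspace):

% Mass-normalization of the unity-scaled modes, Eq. (122)
Phi = V;                                  % columns: unity-scaled mode shapes
for r = 1:3
    mr = Phi(:,r).'*M*Phi(:,r);           % modal mass of mode r
    Phi(:,r) = Phi(:,r)/sqrt(mr);         % mass-normalized mode shape
end
disp(Phi.'*M*Phi)                         % ~identity matrix
disp(Phi.'*K*Phi)                         % ~diag(w.^2)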
2.3.5 Response of undamped MDOF systems
As has been seen previously, the time-domain solution of the undamped system after
an initial disturbance can be written as (see Equation (107))

u_r(t) = φ_r (B_r sin(ω_r t) + D_r cos(ω_r t))   (123)

Since each u_r(t) is a solution of Equation (100), a linear superposition of these
individual solutions is also a solution, so the final solution is

u(t) = Σ_{r=1}^{N} u_r(t) = Σ_{r=1}^{N} φ_r (B_r sin(ω_r t) + D_r cos(ω_r t))   (124)

with initial conditions at t = 0

u(0) = Σ_{r=1}^{N} φ_r D_r   (125)

and

u̇(0) = Σ_{r=1}^{N} φ_r ω_r B_r   (126)

With the vectors of initial displacements and velocities, and the N natural frequencies
and mode shapes known, each of equations (125) and (126) is a set of N linear
algebraic equations in the N unknown constants B_r and D_r.

From the modal expansion of the vectors u(t) and u̇(t) (Equation (109)) it
follows that

u(t) = Σ_{r=1}^{N} q_r(t) φ_r   (127)
u̇(t) = Σ_{r=1}^{N} q̇_r(t) φ_r   (128)

and then

u(0) = Σ_{r=1}^{N} q_r(0) φ_r   (129)
u̇(0) = Σ_{r=1}^{N} q̇_r(0) φ_r   (130)

By comparing Equations (129) and (130) with Equations (125) and (126), it is clear
that q_r(0) = D_r and q̇_r(0) = ω_r B_r, where ω_r is the known natural frequency.

By multiplying both sides of Equation (129) by φ_r^T M, and changing the summation
index from r to n, one obtains

φ_r^T M u(0) = Σ_{n=1}^{N} φ_r^T M φ_n q_n(0) = m_r q_r(0)   (131)
φ_r^T M u̇(0) = Σ_{n=1}^{N} φ_r^T M φ_n q̇_n(0) = m_r q̇_r(0)

Therefore

q_r(0) = φ_r^T M u(0) / m_r   (132)
q̇_r(0) = φ_r^T M u̇(0) / m_r

Thus, Equations (132) convert the initial conditions given in physical coordinates into
initial conditions of a generalized coordinate for each of the N modes. Finally,
considering that

D_r = q_r(0),   B_r = q̇_r(0)/ω_r

and based on Equation (124), the final solution can be expressed as

u(t) = Σ_{r=1}^{N} φ_r [ (q̇_r(0)/ω_r) sin(ω_r t) + q_r(0) cos(ω_r t) ]   (133)

Also, by replacing q_r(0) = A_r sin θ_r and q̇_r(0)/ω_r = A_r cos θ_r, the following relationship
can be obtained (the same as for a SDOF system)

u(t) = Σ_{r=1}^{N} φ_r A_r sin(ω_r t + θ_r)   (134)

where

A_r = √( q_r²(0) + (q̇_r(0)/ω_r)² )   (135)
θ_r = tan⁻¹( ω_r q_r(0) / q̇_r(0) )

The initial (modal) conditions of the generalized coordinates q_r(t) are calculated
from Equation (132).
The following observations regarding the above set of equations can be made:

1. Equations (132) and (133) indicate that any scaling of a mode shape does not
affect the final amplitude of the response, because the same scaling is used
when calculating the modal mass, so the selected scaling factors cancel out.

2. The total response of a freely vibrating MDOF system can be obtained as a sum
of responses corresponding to N undamped SDOF systems. These systems
have properties m_r and k_r, and natural frequencies ω_r which satisfy the condition
k_r = ω_r² m_r. The individual SDOF responses are weighted by the mode shapes.

3. Due to the orthogonality of mode shapes, when the initial displacements and/or
velocities are a multiple of the r-th mode shape (i.e., when the initial shape
visually resembles it), Equation (132) indicates that the MDOF system
will vibrate only in the r-th mode, as the other modes will not be excited (their
initial modal conditions are zero).

Figure 27: Undamped response of each of the three floors of the building model
given in Figure 18 to an initial displacement u_3(0) = 0.1 m.

An example of a typical multi-mode undamped response to an initial disturbance
is shown in Figure 27; a minimal script that reproduces this type of response is sketched below.
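The sketch reuses M, K, V and w from the earlier snippets and assumes the initial condition u_3(0) = 0.1 m of the figure caption; it is an illustration, not the script used to produce Figure 27:

% Free undamped response by mode superposition, Eqs. (132)-(133)
u0  = [0; 0; 0.1];                        % initial displacements (m)
v0  = [0; 0; 0];                          % initial velocities (m/s)
t   = 0:0.05:200;                         % time vector (s)
u   = zeros(3,numel(t));
for r = 1:3
    mr  = V(:,r).'*M*V(:,r);              % modal mass
    q0  = (V(:,r).'*M*u0)/mr;             % Eq. (132): initial modal displacement
    qd0 = (V(:,r).'*M*v0)/mr;             % Eq. (132): initial modal velocity
    qr  = qd0/w(r)*sin(w(r)*t) + q0*cos(w(r)*t);
    u   = u + V(:,r)*qr;                  % superpose mode r, Eq. (133)
end
plot(t,u); xlabel('Time (s)'); ylabel('u_i (m)'); legend('u_1','u_2','u_3')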
2.4 Free damped vibrations of MDOF systems
A system of equations which describes the motion of a damped MDOF system after
initial disturbances is as follows

M ü(t) + C u̇(t) + K u(t) = 0   (136)

with initial conditions at t = 0 given as

u(t = 0) = u_0   (137)
u̇(t = 0) = u̇_0   (138)

Following the same procedure as for an undamped MDOF system, the physical (displacement)
coordinates are replaced by generalized coordinates using the mode shapes
determined from the undamped MDOF system, as follows

u(t) = Φ q(t)   (139)

and the vectors of velocities and accelerations are

u̇(t) = Φ q̇(t)   (140)
ü(t) = Φ q̈(t)

By substituting Equations (139) and (140) into the equation of motion (136), and
premultiplying the result by Φ^T, the following equations are obtained

diag(m_r) q̈(t) + Φ^T C Φ q̇(t) + diag(k_r) q(t) = 0   (141)

Ideally, the matrix product Φ^T C Φ should be a diagonal matrix too; that is, the
mode shapes should be able to diagonalize the damping matrix as well. However,
this is not always the case, and for some types of structures this is not possible.
Only the case when this is possible will be described here, because in most cases this
assumption can be adopted.

Systems where the diagonalization is possible are called systems with proportional
damping or Rayleigh damping. To achieve the required diagonalization of the
damping matrix C, it can be assumed that it is proportional to the mass and stiffness
matrices in the form

C = α M + β K   (142)

(The construction of damping matrices is addressed in more detail in Part II of these
notes.) Pre- and post-multiplying C by Φ^T and Φ, respectively, it is obtained that

Φ^T C Φ = α Φ^T M Φ + β Φ^T K Φ = α diag(m_r) + β diag(k_r) = diag(c_r)   (143)

With this assumption, the system of equations (136) can be transformed into a
system of N independent (de-coupled) equations of the following form

m_r q̈_r(t) + c_r q̇_r(t) + k_r q_r(t) = 0,   r = 1, ..., N   (144)

These equations have the same form as the SDOF equation of motion of a damped
system under an initial disturbance. Dividing by m_r, Equation (144) becomes

q̈_r(t) + 2 ζ_r ω_r q̇_r(t) + ω_r² q_r(t) = 0,   r = 1, ..., N   (145)

with the generalized initial conditions

q_r(0) = φ_r^T M u(0) / m_r   (146)
q̇_r(0) = φ_r^T M u̇(0) / m_r

Taking into account that each vibration mode behaves as a SDOF system, with
m_r being the modal mass, k_r the modal stiffness and ζ_r the modal
damping ratio, and having in mind the SDOF solution, the overall solution is given as

u(t) = Σ_{r=1}^{N} q_r(t) φ_r   (147)

where

q_r(t) = e^{−ζ_r ω_r t} [ q_r(0) cos(ω_rD t) + ( (q̇_r(0) + q_r(0) ζ_r ω_r) / ω_rD ) sin(ω_rD t) ]   (148)

This solution can also be written as

q_r(t) = A_r e^{−ζ_r ω_r t} sin(ω_rD t + θ_r)   (149)

with

A_r = √{ q_r²(0) + [ (q̇_r(0) + q_r(0) ζ_r ω_r) / ω_rD ]² }   (150)
θ_r = tan⁻¹[ q_r(0) ω_rD / (q̇_r(0) + q_r(0) ζ_r ω_r) ]   (151)

This is a set of decaying sinusoids with frequency ω_rD = ω_r √(1 − ζ_r²), so the overall
response decays as well, as would be expected for a damped structure which is
freely vibrating after being initially disturbed. Figure 28 shows the damped response
of the three floors of the building model.
Figure 28: Damped response of each of the three floors of the system shown in Figure 18,
assuming c_1 = c_2 = c_3 = 500 Ns/m.
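For illustration, the damped free response of Equations (146)-(148) can be computed by extending the same MATLAB sketch; the storey damping c = 500 Ns/m follows the figure caption (this pattern of C is proportional to K, hence Rayleigh damping), and the initial condition is assumed to be the same as in Figure 27:

% Damped free response by mode superposition, Eqs. (146)-(148)
c  = 500;                                  % storey damping (Ns/m), as in Figure 28
C  = c*[ 2 -1 0; -1 2 -1; 0 -1 1];         % same pattern as K -> proportional damping
u0 = [0; 0; 0.1]; v0 = [0; 0; 0];          % assumed initial conditions
t  = 0:0.05:200;  u = zeros(3,numel(t));
for r = 1:3
    mr  = V(:,r).'*M*V(:,r);               % modal mass
    cr  = V(:,r).'*C*V(:,r);               % modal damping (diagonal since C ~ K)
    zr  = cr/(2*mr*w(r));                  % modal damping ratio
    wrd = w(r)*sqrt(1-zr^2);               % damped natural frequency
    q0  = (V(:,r).'*M*u0)/mr;  qd0 = (V(:,r).'*M*v0)/mr;   % Eq. (146)
    qr  = exp(-zr*w(r)*t).*( q0*cos(wrd*t) + (qd0+q0*zr*w(r))/wrd*sin(wrd*t) );
    u   = u + V(:,r)*qr;                   % Eq. (147)
end
plot(t,u); xlabel('Time (s)'); ylabel('u_i (m)'); legend('u_1','u_2','u_3')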
2.5 Response of MDOF systems under arbitrary loads
This is the most general case, described by Equation (87), which is repeated here

M ü(t) + C u̇(t) + K u(t) = F(t)   (152)

To solve this equation, the same procedure is applied as for damped free vibration.
Therefore, it is assumed that the solution can be presented using the undamped mode
shapes and the modal expansion

u(t) = Φ q(t)   (153)

and that the damping is proportional.

Again, after pre- and post-multiplication by Φ^T and Φ, N independent equations
of motion of the following form are obtained

m_r q̈_r(t) + c_r q̇_r(t) + k_r q_r(t) = φ_r^T F(t),   r = 1, ..., N   (154)

or

m_r q̈_r(t) + c_r q̇_r(t) + k_r q_r(t) = F_r(t),   r = 1, ..., N   (155)

with

F_r(t) = Σ_{j=1}^{N} φ_{jr} F_j(t)   (156)

which is defined as the generalized load for mode r, or the r-th modal force. As before,
the initial conditions for the generalized coordinate q_r(t) can be calculated using
Equation (132).

The form of Equation (155) is identical to that of a general forced excitation of a SDOF
system, in such a way that its solution can be written as

q_r(t) = q_{r,transient}(t) + q_{r,forced}(t)   (157)

where Equation (148) can be used to calculate the transient response due to the initial
conditions, yielding

q_{r,transient}(t) = e^{−ζ_r ω_r t} [ q_r(0) cos(ω_rD t) + ( (q̇_r(0) + q_r(0) ζ_r ω_r)/ω_rD ) sin(ω_rD t) ]   (158)

and the forced response to the modal force can be calculated using the convolution
integral, as for a SDOF system (see Equation (85))

q_{r,forced}(t) = ( 1/(m_r ω_rD) ) ∫_0^t F_r(τ) e^{−ζ_r ω_r (t−τ)} sin(ω_rD (t−τ)) dτ   (159)

If the initial conditions are zero, q_r(0) = 0 and q̇_r(0) = 0, the transient response
q_{r,transient} vanishes and the modal response q_r(t) corresponds to the forced response
q_{r,forced}. To calculate q_{r,forced}, commercial FE software usually integrates Equation
(159) numerically. Finally, when all the generalized coordinates q_r(t) are known, the
unknown displacement vector u(t) can be obtained by the superposition of modal
responses using Equation (153).

An alternative to the modal analysis and mode superposition approach considered
up to now is to calculate u(t) by direct integration of the equation of
motion (152). We will address these methods in Part II.
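As an illustration of how Equation (159) can be evaluated numerically, the following MATLAB sketch computes the forced response by trapezoidal evaluation of the convolution integral for each mode of the 3-DOF frame used before; the applied load (a 1 kN sine at DOF 3) and the 1% modal damping are illustrative assumptions, not data from the notes:

% Numerical evaluation of the Duhamel integral, Eq. (159), per mode
dt = 0.05;  t = 0:dt:60;                     % time grid (s)
F  = [0;0;1]*(1000*sin(1.2*t));              % illustrative load: 1 kN sine at DOF 3
u  = zeros(3,numel(t));
for r = 1:3
    mr  = V(:,r).'*M*V(:,r);                 % modal mass
    zr  = 0.01;                              % assumed modal damping ratio
    wrd = w(r)*sqrt(1-zr^2);                 % damped natural frequency
    Fr  = V(:,r).'*F;                        % modal force F_r(t), Eq. (156)
    qr  = zeros(1,numel(t));
    for i = 2:numel(t)                       % q_r(t_i) by trapezoidal convolution
        tau = t(1:i);
        h   = exp(-zr*w(r)*(t(i)-tau)).*sin(wrd*(t(i)-tau));
        qr(i) = trapz(tau, Fr(1:i).*h)/(mr*wrd);
    end
    u = u + V(:,r)*qr;                       % mode superposition, Eq. (153)
end
plot(t,u(3,:)); xlabel('Time (s)'); ylabel('u_3 (m)')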
2.6 Response of MDOF systems under harmonic loads
Consider now the special case of a MDOF system excited by a single harmonic force
of unit amplitude (i.e. 1 N) acting in the direction of a DOF k, with the response
calculated in the direction of a DOF i. The response will be calculated using mode superposition.
Therefore, there is only one force F_k(t) in the sum shown in Equation (156)

F_k(t) = 1 cos(ωt)   (160)

When calculating the response u_i(t), Equation (147) can be written as

u_i(t) = Σ_{r=1}^{N} q_r(t) φ_{ir}   (161)

where φ_{ir} is the amplitude of the vibration mode r corresponding to DOF i. Considering
Equations (160) and (156), the sum required to calculate the r-th modal force has
only one element

F_r = φ_{kr} F_k(t)   (162)

Therefore, after replacing (162) into (155), the modal equation of motion for q_r(t), which
is required to calculate the response u_i(t) from Equation (161), is as follows

m_r q̈_r(t) + c_r q̇_r(t) + k_r q_r(t) = φ_{kr} cos(ωt)   (163)

This modal equation has the same form as the equation of motion (54) of a SDOF
system under harmonic excitation, having an amplitude of φ_{kr} instead of F_0.
Then, the solution for the generalized coordinate q_r(t) for mode r can be calculated
using expression (60), as follows

q_r(t) = A_r e^{−ζ_r ω_r t} sin(ω_rD t + ψ_r) + U_r cos(ωt − θ_r)   (164)

Assuming that time is sufficiently large so that the transient response of each mode
of vibration dies out, the steady-state solution for the generalized coordinate can be
written in the exponential form given for a SDOF system, as follows (see Equation
(72))

q_r(t) = [ φ_{kr} / √( (k_r − m_r ω²)² + (c_r ω)² ) ] e^{j(ωt − θ_r)}   (165)

Hence

u_i(t) = Σ_{r=1}^{N} φ_{ir} q_r(t) = Σ_{r=1}^{N} [ φ_{ir} φ_{kr} / √( (k_r − m_r ω²)² + (c_r ω)² ) ] e^{j(ωt − θ_r)}

or

u_i(t) = e^{jωt} Σ_{r=1}^{N} [ φ_{ir} φ_{kr} / √( (k_r − m_r ω²)² + (c_r ω)² ) ] e^{−jθ_r} = U_i(ω) e^{jωt}   (166)

Equation (166) demonstrates that the steady-state response of DOF i to the unit
harmonic excitation of DOF k having frequency ω is also a harmonic function having
the same frequency. From Equation (166), the complex amplitude of this harmonic
motion, U_i(ω), containing magnitude and phase information, is given as the following
complex sum over all modes of vibration

U_i(ω) = Σ_{r=1}^{N} [ φ_{ir} φ_{kr} / √( (k_r − m_r ω²)² + (c_r ω)² ) ] e^{−jθ_r}   (167)

with

θ_r = tan⁻¹[ 2 ζ_r ω_r ω / (ω_r² − ω²) ]

Equation (167) shows that the complex amplitude is independent of time and depends
on the excitation frequency ω, the modal properties of mode r (k_r, m_r and c_r),
the modal amplitudes of the excitation and response DOFs (φ_{kr} and φ_{ir}), and θ_r.

If one considers the analogy with Equations (70)-(72), the sum of complex
exponential functions of ω in (167) can be rewritten as the following sum of complex
functions

U_i(ω) = Σ_{r=1}^{N} φ_{ir} φ_{kr} / [ (k_r − m_r ω²) + j c_r ω ] = Σ_{r=1}^{N} (1/m_r) φ_{ir} φ_{kr} / [ (ω_r² − ω²) + j 2 ζ_r ω_r ω ]   (168)

The complex function U_i(ω) represents the steady-state harmonic response of DOF i
due to a unit harmonic excitation of DOF k at frequency ω. This function is similar
to the SDOF FRF obtained previously. A usual notation for the FRF of a MDOF
system is α_ik(ω), to denote the pair of excitation and response DOFs

α_ik(ω) = Σ_{r=1}^{N} φ_{ir} φ_{kr} / [ (k_r − m_r ω²) + j c_r ω ] = Σ_{r=1}^{N} (1/m_r) φ_{ir} φ_{kr} / [ (ω_r² − ω²) + j 2 ζ_r ω_r ω ]   (169)

Due to the linearity of the problem, the amplitude of the displacement response due
to a harmonic excitation having amplitude F_{k0} (instead of 1 N) is

U_i(ω) = α_ik(ω) F_{k0}
In other words, by scaling the FRF amplitudes by the actual amplitude of the harmonic
force, the actual steady-state harmonic response can be calculated. Moreover,
the expressions for the FRFs of MDOF systems (see Equation (169)) indicate that, similarly
to SDOF systems, there will be an amplification of the steady-state response every time
the excitation frequency approaches one of the natural frequencies ω_r.

Figure 29 shows an example of an FRF plot corresponding to the modal properties
of the 3-DOF shear frame presented previously. Note that, when the FRF is expressed as a
function of f [Hz] rather than ω [rad/s], it should be considered that ω = 2πf. It
is assumed that the modal damping ratios are 1% in all three modes of vibration and
that the excitation is at DOF 3 (top of the building). The FRF peaks corresponding
to the natural frequencies are observed, as well as rapid phase changes (between the
response and the force) which happen when the excitation frequency passes through
one of the excited natural frequencies of the MDOF system.

Figure 29: Receptance FRF plots, α_33(ω). Above: magnitude and phase of the
FRF. Below: Nyquist plot of the FRF, showing the imaginary and the real parts of
the complex function.
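A minimal MATLAB sketch of Equation (169) for the 3-DOF frame is given below; it reuses M, V and w from the earlier snippets, and the 1% modal damping and the choice i = k = 3 follow the description of Figure 29 (this is an illustration, not the script used to produce the figure):

% Receptance FRF alpha_ik(w), Eq. (169), for i = k = 3
zeta = 0.01*ones(3,1);                    % 1% modal damping in all modes
ii = 3; kk = 3;                           % response and excitation DOFs
W = linspace(0.1,3,1000);                 % frequency axis (rad/s)
alpha = zeros(size(W));
for r = 1:3
    mr    = V(:,r).'*M*V(:,r);            % modal mass
    alpha = alpha + (V(ii,r)*V(kk,r)/mr) ./ ...
            (w(r)^2 - W.^2 + 1i*2*zeta(r)*w(r)*W);
end
subplot(2,1,1); semilogy(W,abs(alpha));      ylabel('|\alpha_{33}| (m/N)')
subplot(2,1,2); plot(W,angle(alpha)*180/pi); ylabel('Phase (deg)')
xlabel('Frequency (rad/s)')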
There are three types of FRFs, depending on the type of the response. If the
response is displacement, then the corresponding FRF α_ik(ω) is known as receptance.
If the steady-state response is velocity, then the corresponding FRF is known as
mobility. After calculating the first derivative of both sides of Equation (166), the mobility
FRF can be calculated as

Y_ik(ω) = jω α_ik(ω)   (170)

Figure 30 shows the mobility FRF corresponding to the FRF in Figure 29. To obtain
Figure 30, at each excitation frequency ω the complex amplitude of the receptance
FRF in Figure 29 is multiplied by jω, yielding a new complex function for the mobility
FRF.

Figure 30: Mobility FRF corresponding to the receptance FRF shown in Figure 29.

Similarly, if the response is acceleration, then the corresponding FRF is called
accelerance, and its relationship with the mobility and the receptance is as follows

A_ik(ω) = jω Y_ik(ω) = (jω)² α_ik(ω) = −ω² α_ik(ω)

Figure 31 shows the accelerance FRF corresponding to the receptance FRF in Figure
29. To obtain Figure 31, at each excitation frequency ω the complex amplitude
of the receptance FRF in Figure 29 is multiplied by the real negative number −ω²,
yielding a new complex function for the accelerance FRF.

Figure 31: Accelerance FRF corresponding to the receptance FRF shown in Figure 29.
2.7 Systems with distributed mass and stiffness
The main objective of this section is to understand the differences and similarities
between discrete and distributed models of dynamic systems. Most engineering
structures are continuous and have distributed material properties. A simple bridge
can be considered as a beam with uniform mass per unit length and uniform flexural
stiffness. A slab or plate has two-dimensional flexural stiffness properties with its mass
distributed over its area. A curved shell is similar to a plate in its distribution of mass
and stiffness, but its geometry is three-dimensional, and this gives rise to membrane
deformation in addition to flexure. Solid objects are fully three-dimensional, with
stiffness and mass distributed throughout the volume of the material.

Figure 32: Beam with distributed mass and stiffness: (a) properties m(x), EI(x) and
applied load f(x) over the span L; (b) internal shear forces V, V + dV and bending
moments M, M + dM acting on a differential element dx.

The formulation of the structural dynamic problem for one-dimensional systems
with distributed mass, such as a beam, is presented here. The solutions for these simple
cases have an infinite number of DOFs, differently from lumped-mass systems with a
finite number of DOFs. The section ends with a discussion of why this infinite-DOF
approach is not feasible for accurate predictions of practical engineering systems,
hence the need for discretization methods for distributed-mass systems.
2.7.1 Vibration of beams
Equation of motion. Consider a beam with distributed mass m(x) per unit length
and flexural rigidity EI(x), both of which may vary with the position x, as shown in Figure 32.
The relationship between the bending moment and the curvature is the well-known
expression

M(x) = EI(x) d²u(x)/dx²   (171)

where u(x) is the transverse displacement and M(x) is the bending moment. Taking
into account the equilibrium of forces in the y-direction of a differential element of
the beam, it is obtained that

f(x) dx − dV = 0

where f(x) is the distributed force and V(x) is the shear force. The equilibrium of
moments yields

V dx − dM = 0

Using the last two equations, the following is obtained

f(x) = d²M(x)/dx²   (172)

Then, differentiating Equation (171) two times and using (172), it is derived that

f(x) = EI(x) d⁴u(x)/dx⁴   (173)

In a dynamic problem, the distributed force is formed by an applied dynamic
force and an inertial force, which is included following D'Alembert's principle. The
former will be denoted by p(x,t) and will be considered positive in the y-direction
(see Figure 32). The inertial force is the product of the acceleration and the mass
per unit length, and is opposed to the positive direction of the acceleration. Hence

f(x,t) = p(x,t) − m(x) ∂²u(x,t)/∂t²   (174)

Substituting Equation (174) into (173), the differential equation of motion of a beam
subjected to dynamic loading is obtained

m(x) ∂²u(x,t)/∂t² + EI(x) ∂⁴u(x,t)/∂x⁴ = p(x,t)   (175)

To obtain a unique solution to this equation, two boundary conditions at each end
of the beam and the initial displacement u(x,0) and initial velocity u̇(x,0) must be
specified.

Viscous damping may be included in the equation of motion assuming that the
damping force is proportional to the velocity of the beam in the y-direction. Equation
(174) could be rewritten as

f(x,t) = p(x,t) − c ∂u(x,t)/∂t − m(x) ∂²u(x,t)/∂t²

and the equation of motion would then become

m(x) ∂²u(x,t)/∂t² + c ∂u(x,t)/∂t + EI(x) ∂⁴u(x,t)/∂x⁴ = p(x,t)   (176)

The above assumption of damping implies that the viscous damping is distributed uniformly
along the beam.
Free undamped vibrations of beams. Consider the equation of free motion,
p(x,t) = 0, the absence of damping, c = 0, and a uniform beam (just for the sake of
simplicity), EI(x) = EI and m(x) = m. The equation of motion (176) is then reduced to

m ∂²u(x,t)/∂t² + EI ∂⁴u(x,t)/∂x⁴ = 0   (177)

One of the methods to solve this equation is the use of separation of variables. It is
assumed that the displacement can be separated into two functions, one varying with x
and the other varying with t. Then

u(x,t) = φ(x) q(t)   (178)

where φ(x) is only a function of the distance x along the beam, defining its deflected
shape when it vibrates, and q(t) defines the amplitude of the vibration with time.
Then

∂²u(x,t)/∂t² = φ(x) q̈(t),   ∂⁴u(x,t)/∂x⁴ = φ''''(x) q(t)

where φ'''' = d⁴φ/dx⁴. Substituting Equation (178) into (177),

m φ(x) q̈(t) + EI q(t) φ''''(x) = 0   (179)

This equation can be rewritten so that the x and t variables are collected separately

−q̈(t)/q(t) = EI φ''''(x) / (m φ(x)) = const = λ   (180)

The first expression is a function of t only and the second expression is a function
of x only. Then, both must be constant, say λ = ω². The partial differential equation
(177) is thus converted into two ordinary differential equations, one a function of time
and one a function of the spatial coordinate x

q̈(t) + ω² q(t) = 0   (181)
EI φ''''(x) − ω² m φ(x) = 0

The first equation is the equation of the free vibration of an undamped SDOF
system. Therefore, the solution of the time-dependent function is

q(t) = A_1 cos(ωt) + A_2 sin(ωt)   (182)

The second equation can be rewritten as

φ''''(x) − β⁴ φ(x) = 0   with   β⁴ = ω² m / EI   (183)

The general solution of this equation is

φ(x) = C_1 sin(βx) + C_2 cos(βx) + C_3 sinh(βx) + C_4 cosh(βx)   (184)

The solution contains four unknown constants C_1, C_2, C_3 and C_4 and the eigenvalue
parameter β. The application of the four boundary conditions, two at each end of
the beam, will yield a solution for β (and then for the natural frequency ω) and for
three of the constants in terms of the fourth, resulting in the mode shape of (184).
Uniform simply supported beam. The natural frequencies and mode shapes of
a uniform beam simply supported at both ends are determined next. At x = 0 and
x = L, the displacement and the bending moment are zero. Thus:

u(0,t) = φ(0) q(t) → φ(0) = 0 → C_2 + C_4 = 0   (185)

and

M(0,t) = EI ∂²u(0,t)/∂x² → EI φ''(0) = 0 → β² (C_4 − C_2) = 0   (186)

Solving these two equations, it is obtained that C_2 = C_4 = 0. The general solution
reduces to

φ(x) = C_1 sin(βx) + C_3 sinh(βx)   (187)

The boundary conditions at x = L give

u(L,t) = φ(L) q(t) → φ(L) = 0 → C_1 sin(βL) + C_3 sinh(βL) = 0   (188)

and

M(L,t) = EI ∂²u(L,t)/∂x² → EI φ''(L) = 0 → β² (−C_1 sin(βL) + C_3 sinh(βL)) = 0   (189)

Since sinh(βL) cannot be zero (βL = 0 would give the trivial solution, that is, ω would be
zero), these two conditions give C_3 = 0 and

C_1 sin(βL) = 0   (190)

If C_1 = 0, then φ(x) = 0, a trivial solution. Therefore, sin(βL) must be zero, thus

βL = nπ,   n = 1, 2, 3, ...   (191)

Equation (183) then gives the natural frequencies

ω_n = (n²π²/L²) √(EI/m),   n = 1, 2, 3, ...   (192)

The mode shape corresponding to ω_n is obtained by substituting Equation (191) into
Equation (187) with C_3 = 0

φ_n(x) = C_1 sin(nπx/L),   n = 1, 2, 3, ...   (193)

The value of C_1 is arbitrary. If C_1 = 1, the maximum value of φ_n(x) is equal to
unity. An infinite number of mode shapes, each with its natural frequency, is obtained.
Equations (192) and (193) say that the first mode is a half sine wave and that its frequency is

ω_1 = π² √(EI/(mL⁴))   (194)

and the second mode is a complete sine wave with frequency ω_2 = 4ω_1 (see Figure 33).
The third mode is one and a half sine waves with frequency ω_3 = 9ω_1, and so on.

Figure 33: Simply supported beam and its natural frequencies and mode shapes:
φ_1(x) = sin(πx/L) with ω_1 = (π²/L²)√(EI/m); φ_2(x) = sin(2πx/L) with ω_2 = (4π²/L²)√(EI/m);
φ_3(x) = sin(3πx/L) with ω_3 = (9π²/L²)√(EI/m).

It should be taken into account that closed-form solutions can be obtained only for the
simplest forms of structures and boundary conditions. Often, structures are much
more complex and cannot be modeled in such a simple way. In those cases, the FE method and
MDOF discrete modeling are required.
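As a quick numerical illustration of Equation (192), the following MATLAB lines list the first natural frequencies of a simply supported beam; the material and section values are illustrative assumptions only:

% First natural frequencies of a uniform simply supported beam, Eq. (192)
E  = 210e9;                 % Young's modulus (Pa), illustrative steel value
I  = 8.0e-5;                % second moment of area (m^4), illustrative
m  = 120;                   % mass per unit length (kg/m), illustrative
L  = 20;                    % span (m), illustrative
n  = (1:5)';                % mode numbers
wn = (n.^2)*pi^2/L^2*sqrt(E*I/m);   % natural frequencies (rad/s)
fn = wn/(2*pi);                     % natural frequencies (Hz)
disp([n wn fn])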
Orthogonality of modes. Natural modes of vibration always have the property
of orthogonality. If they are seen as vectors, any two of them are found to be normal to one
another. This is a useful property for the dynamic analysis of structures under
different forcing functions. We now demonstrate the existence of this property for
the case of a beam.

Take the spatial equation of (181), but considering the general case of non-uniform
mass and stiffness

[EI(x) φ_r''(x)]'' = ω_r² m(x) φ_r(x)   (195)

Multiplying both sides by φ_n(x) and integrating from 0 to L,

∫_0^L φ_n(x) [EI(x) φ_r''(x)]'' dx = ω_r² ∫_0^L m(x) φ_n(x) φ_r(x) dx   (196)

The left side is integrated by parts

∫_0^L φ_n(x) [EI(x) φ_r''(x)]'' dx = { φ_n(x) [EI(x) φ_r''(x)]' }_0^L − { φ_n'(x) [EI(x) φ_r''(x)] }_0^L + ∫_0^L EI(x) φ_r''(x) φ_n''(x) dx   (197)

It is quite clear that the quantities between {...} are zero at x = 0 and L if the ends
of the beam are free, simply supported, or clamped. At a clamped end φ = 0 and
φ'(0) = 0. At a simply supported end φ(0) = 0 and φ''(0) = 0. At a free end
φ''(x) = 0 and φ'''(x) = 0. Then Equation (196) can be rewritten as

∫_0^L EI(x) φ_n''(x) φ_r''(x) dx = ω_r² ∫_0^L m(x) φ_n(x) φ_r(x) dx   (198)

Similarly, Equation (195) can be written for φ_n(x) and multiplied on both sides by
φ_r(x), giving

∫_0^L EI(x) φ_n''(x) φ_r''(x) dx = ω_n² ∫_0^L m(x) φ_n(x) φ_r(x) dx   (199)

Subtracting Equation (198) from (199) yields

(ω_n² − ω_r²) ∫_0^L m(x) φ_n(x) φ_r(x) dx = 0   (200)

Therefore, provided that the frequencies are different, r ≠ n, the orthogonality conditions
are

∫_0^L m(x) φ_n(x) φ_r(x) dx = 0   (201)

and, considering Equation (196),

∫_0^L φ_n(x) [EI(x) φ_r''(x)]'' dx = 0   for n ≠ r   (202)
Forced response We again consider the partial dierential equation (175), which
is to be solved for a given j (r, t). Generally, the excitation force tends to excite
several natural frequencies and modes of the structure simultaneously. In fact, every
mode is a possible solution of the dierential equation and hence, the displacement
under any arbitrary loads will be a linear combination of all possible modes. Then,
the total displacement can be expressed as
n(r, t) =

X
v=1
c
v
(r)
v
(t) (203)
Substituting this equation into the equation of motion (177), it is obtained

X
v=1
:(r) c
v
(r)
v
(t) +

X
v=1
h
11 (r) c
00
v
(r)
i00

v
(t) = j (r, t)
Now, this equation is multiplied by c
a
(r) and integrated over the length of the
beam, yielding
- 70-
2.7 Systems with distributed mass and stiffness

X
v=1

v
(t)
Z
1
0
:(r) c
a
(r) c
v
(r) dr+

X
v=1

v
(t)
Z
1
0
c
a
(r)
h
11 (r) c
00
v
(r)
i00
dr =
Z
1
0
c
a
(r) j (r, t)
(204)
in which the integral and the summation have been interchanged.
Recalling the orthogonality conditions (201) and (202), all the terms must be
zero, except for the case where : = r. Equation (204) can be rewritten as follows

a
(t)
Z
1
0
:(r) c
2
a
(r) dr+
v
(t)
Z
1
0
c
2
a
(r)
h
11 (r) c
00
v
(r)
i00
dr =
Z
1
0
c
a
(r) j (r, t) dr
and again rewritten as
'
a

a
(t) +1
a

a
(t) = 1
a
(t) (205)
where
'
a
=
Z
1
0
:(r) c
2
a
(r) dr
1
a
=
Z
1
0
c
2
a
(r)
h
11 (r) c
00
v
(r)
i00
dr
1
a
(t) =
Z
1
0
c
a
(r) j (r, t) dr
An alternative expression for K_n can be obtained using Eq. (197) if each end of the beam is free, hinged, or clamped:

K_n = ∫₀^L EI(x) [ φ''_n(x) ]² dx

Thus M_n is the generalized mass (or modal mass), K_n is the generalized stiffness and P_n is the generalized force of the n-th mode. An infinite number of equations like (205) is obtained, one for each mode. Therefore the partial differential equation (175) in the unknown function u(x, t) has been transformed into an infinite number of ordinary differential equations in the unknowns η_n(t). Hence the motion u(x, t) can be obtained by solving the modal equation for each η_n(t). The equation of each mode is independent of the equations of the other modes and can therefore be solved separately. Each modal equation is equivalent to the equation of motion of a SDOF system, so the results obtained earlier for a SDOF system under different excitations can be used here.

Once η_n(t) is solved, the contribution of the n-th mode to the displacement is

u_n(x, t) = φ_n(x) η_n(t)

and the total displacement is the sum of the contributions of all modes:

u(x, t) = Σ_{n=1}^{∞} u_n(x, t) = Σ_{n=1}^{∞} φ_n(x) η_n(t)   (206)

As was done for a SDOF system, Equation (205) can be rewritten as

η̈_n(t) + ω²_n η_n(t) = P_n(t)/M_n,   n = 1, 2, ...   (207)

or, for the damped case,

η̈_n(t) + 2ξ_n ω_n η̇_n(t) + ω²_n η_n(t) = P_n(t)/M_n,   n = 1, 2, ...   (208)

It must be noted that the generalized force, or modal force, has exactly the same meaning as for a discrete MDOF system. Indeed, the only difference is that an integral is used for continuous systems instead of a discrete sum. Note that, in theory, there is an infinite number of modes; in practice, however, only a few of them are excited by a given loading.

For the particular case of simply supported beams, where φ_n(x) is given by (193), the modal mass of each mode of vibration is exactly 50% of the total physical mass M_total of the beam, provided that φ_n(x) is scaled to unit amplitude:

M_n = 0.5 M_total
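To make the modal equations above concrete, the following Python sketch (not part of the original notes; the beam data, the load and the number of retained modes are illustrative assumptions) evaluates η_n(t) for a simply supported beam under a suddenly applied point load using a discrete Duhamel integral, and recovers u(x, t) by mode superposition as in Equations (205)-(207) and (206).

import numpy as np

L, EI, m = 10.0, 2.0e7, 500.0        # length [m], flexural stiffness [N m^2], mass per unit length [kg/m]
nmodes, npts = 5, 201
x = np.linspace(0.0, L, npts)
t = np.linspace(0.0, 2.0, 2001); dt = t[1] - t[0]
P0, xP = 1.0e4, 0.5 * L              # point load amplitude and position (example values)

u = np.zeros((len(t), npts))
for n in range(1, nmodes + 1):
    phi = np.sin(n * np.pi * x / L)                 # mode shape, Eq. (193), unit amplitude
    wn  = (n * np.pi / L) ** 2 * np.sqrt(EI / m)    # natural frequency of the n-th mode
    Mn  = 0.5 * m * L                               # modal mass (50% of the total mass)
    Pn  = P0 * np.sin(n * np.pi * xP / L) * np.ones_like(t)   # modal force of a constant point load
    # discrete Duhamel integral for eta_n(t) with zero initial conditions, Eq. (207)
    eta = np.array([np.sum(Pn[:k] * np.sin(wn * (t[k] - t[:k]))) * dt / (Mn * wn) for k in range(len(t))])
    u += np.outer(eta, phi)                         # mode superposition, Eq. (206)

The array u then contains the deflection history at the npts stations of the beam; adding more modes sharpens the local response near the load.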
2.7.2 Vibration of plates
Free undamped vibrations of plates   The equation of motion of a thin plate is more complicated than that of a beam. Stresses produced by bending in one direction (the x axis) produce strains in both directions (the x and y axes) due to Poisson's ratio, which is the ratio of the transverse strain to the strain in the direction of the stress (see Figure 34). Thus, stresses and strains along orthogonal axes interact with each other.

The reader is referred to a book on plates and shells for the full derivation of the equilibrium equation, for instance Ventsel & Krauthammer (2001), which we simply quote:

D ( ∂⁴w(x, y)/∂x⁴ + 2 ∂⁴w(x, y)/∂x²∂y² + ∂⁴w(x, y)/∂y⁴ ) = f(x, y)   (209)

where w(x, y) is the displacement perpendicular to the plane of the plate (the z direction) and f(x, y) is the load normal to the surface of the plate. The flexural stiffness is given by

D = E h³ / ( 12 (1 − ν²) )   (210)

with E being the modulus of elasticity, h the thickness of the plate and ν the Poisson's ratio.

Figure 34: Bending of a thin plate (rectangular plate of dimensions a × b, bending moments M_x and M_y, normal load f(x, y)).

If free vibration without damping is considered, then

f(x, y) = −m ∂²w(x, y, t)/∂t²   (211)

with m being the mass per unit area. Thus, the equation of motion for free vibration is

m ∂²w(x, y, t)/∂t² + D ( ∂⁴w/∂x⁴ + 2 ∂⁴w/∂x²∂y² + ∂⁴w/∂y⁴ ) = 0   (212)

Again, the solution can be obtained by the method of separation of variables. The solution is written as the product of a spatial function φ(x, y) and a function of time η(t):

w(x, y, t) = φ(x, y) η(t)   (213)
Following the same procedure as for the beam, Equation (213) is substituted into (212), yielding two equations:

η̈(t) + ω² η(t) = 0   (214)

∂⁴φ(x, y)/∂x⁴ + 2 ∂⁴φ(x, y)/∂x²∂y² + ∂⁴φ(x, y)/∂y⁴ = β⁴ φ(x, y)   (215)

where

β⁴ = ω² m / D

The first equation, as in the beam case, is the free-vibration equation of a SDOF system, whose frequency is obtained from the solution of the spatial equation together with the boundary conditions of the plate.

Generally, there is no analytical solution of the spatial equation. However, analytical solutions can be derived for circular and rectangular plates. For a rectangular plate of dimensions a × b (see Figure 34), of constant thickness, uniformly distributed mass and simply supported at all edges, the displacement function is

φ(x, y) = sin( rπx/a ) sin( nπy/b ) = φ_rn(x, y)   (216)

where r and n are integers. If one substitutes (216) into the spatial equation (215), it is straightforward to find

β² = π² ( r²/a² + n²/b² )

and then the natural frequencies are

ω_rn = π² ( r²/a² + n²/b² ) √(D/m)

The lowest natural frequency occurs for r = n = 1 and the corresponding mode shape is given by (216): a half sine wave in both x and y. The next two natural frequencies occur for r = 2, n = 1 and for r = 1, n = 2. These result in a half sine wave in one direction and a complete sine wave in the other, dividing the plate in two with a line of zero displacement at mid-span. The plate moves in opposite directions on each side of this nodal line. Other combinations of the integers r and n produce more complicated patterns of the mode shape. Figure 35 shows some examples of mode shapes.
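A minimal numerical illustration of the plate frequency formula above (Python with NumPy; the material, thickness and plan dimensions are assumed example values) is:

import numpy as np

E, h, nu = 210e9, 0.01, 0.3            # example modulus [Pa], thickness [m], Poisson's ratio
a, b = 2.0, 1.5                        # plate dimensions [m]
m = 7850.0 * h                         # mass per unit area [kg/m^2] (steel density assumed)
D = E * h**3 / (12.0 * (1.0 - nu**2))  # flexural stiffness, Eq. (210)

freqs = []
for r in range(1, 4):
    for n in range(1, 4):
        w_rn = np.pi**2 * (r**2 / a**2 + n**2 / b**2) * np.sqrt(D / m)
        freqs.append((r, n, w_rn / (2.0 * np.pi)))   # ordinary frequency in Hz

for r, n, f in sorted(freqs, key=lambda item: item[2]):
    print("mode (r=%d, n=%d): %8.2f Hz" % (r, n, f))

Sorting by frequency shows that the ordering of the (r, n) pairs depends on the aspect ratio a/b, as discussed above.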
Forced response   A forcing function p(x, y, t) is now considered. For this case, Equation (211) can be rewritten as

f(x, y) = p(x, y, t) − m ∂²w(x, y, t)/∂t²

and the equation of motion is now

m ∂²w(x, y, t)/∂t² + D ( ∂⁴w/∂x⁴ + 2 ∂⁴w/∂x²∂y² + ∂⁴w/∂y⁴ ) = p(x, y, t)   (217)

As was done for the beam, the principle of mode superposition is followed again and the displacement can be written as

w(x, y, t) = Σ_{n=1}^{∞} φ_n(x, y) η_n(t)

Once φ_n(x, y) is obtained (as shown in the preceding subsection), it is necessary to find η_n(t) from the following equations:

η̈_n(t) + ω²_n η_n(t) = P_n(t)/M_n,   n = 1, 2, ...

or, for the damped case,

η̈_n(t) + 2ξ_n ω_n η̇_n(t) + ω²_n η_n(t) = P_n(t)/M_n,   n = 1, 2, ...

Figure 35: Mode shapes (r, n) = (1, 1), (2, 1), (1, 2) and (2, 2) of a simply supported rectangular thin plate.

The form of the preceding two equations is the same as in the beam case. The only difference is that the modal force and the modal mass are now functions of two variables:

M_n = m ∫₀^a ∫₀^b φ²_n(x, y) dx dy,   P_n(t) = ∫₀^a ∫₀^b φ_n(x, y) p(x, y, t) dx dy

For the special case of a simply supported rectangular plate with uniformly distributed properties and unit-amplitude mode shapes, as given by Equation (216), the modal mass is exactly 25% of the total physical mass M_total:

M_n = 0.25 M_total
2.8 Component mode synthesis
Many structures, such as fixed-wing aircraft, helicopters, flexible spacecraft, flexible robots, a variety of civil structures, etc., can be modeled as assemblages of interacting bodies. We describe here the basis of the component mode synthesis methods (also known as dynamic substructuring). Component mode synthesis methods enable structures to be analysed in parts (substructures) which, when joined together, recover the dynamics of the complete structure. This can be advantageous when complex systems have to be analysed, since it permits analysing independently simpler substructures with easier and more understandable dynamics. Each of the individual analyses is usually carried out by the finite element method.

Component mode synthesis can be carried out in two forms: (1) fixed interface and (2) free interface. It is assumed here that the equations of motion are available for two substructures, R and S, which are to be joined. Damping and external forces are ignored for simplicity. It is also assumed that the substructures have no rigid-body degrees of freedom.
2.8.1 The fixed interface method
This method was first described by Craig and Bampton in their 1968 paper "Coupling of substructures for dynamic analysis"; it is therefore usually known as the Craig-Bampton method or the "fixed constraint mode method". Consider the equations of motion of each of the substructures, R or S, in global coordinates, without damping or external forces:

M ü + K u = 0

where the displacement vector u contains two kinds of coordinates:

1. those not at junctions or boundaries, designated u_I; these are always free to move;
2. those at junctions or boundaries, designated u_B; these are initially fixed and will later be joined to the other substructure.

The displacement vector can therefore be partitioned as

u = { u_I ; u_B }

and the mass and stiffness matrices can be partitioned accordingly, so that

[ M_II  M_IB ; M_BI  M_BB ] { ü_I ; ü_B } + [ K_II  K_IB ; K_BI  K_BB ] { u_I ; u_B } = 0   (218)

Taking into account that the boundary coordinates are fixed, u_B = 0, the natural frequencies and mode shapes of the substructure with its junctions fixed are given by the solution of

M_II ü_I + K_II u_I = 0

Thus, the natural frequencies and mode shapes are obtained from the eigenvalue problem

( −ω²_r M_II + K_II ) φ_r = 0,   r = 1, 2, ...

and the corresponding modal matrix is

Φ_I = [ φ_I,1  φ_I,2  ... ]

The modal coordinates q_I are related to the non-boundary coordinates by

u_I = Φ_I q_I

All this is carried out for both substructures, R and S.

Constraint modes are now generated, for each substructure, by applying a unit static displacement to each element of u_B. The displacements of each constraint mode are the resulting values of u_I. From Equation (218) for static loading (all accelerations zero) one obtains

K_II u_I + K_IB u_B = 0

or

u_I = −K_II⁻¹ K_IB u_B = Ψ_C u_B

where Ψ_C = −K_II⁻¹ K_IB can be seen as a "modal matrix" giving the displacements of the non-boundary coordinates for unit displacements of the boundary coordinates. These operations are carried out for both substructures.

The following transformation,

{ u_I ; u_B } = [ Φ_I  Ψ_C ; 0  I ] { q_I ; u_B }   (219)
is now applied to Equation (218), giving for substructure R

[M*]^R {q̈}^R + [K*]^R {q}^R = 0   (220)

with

{q}^R = { q_I ; u_B }^R   (221)

and where

[M*]^R = [ Φ_I  Ψ_C ; 0  I ]^{R,T} [ M_II  M_IB ; M_BI  M_BB ] [ Φ_I  Ψ_C ; 0  I ]^R = [ M*_II  M*_IB ; M*_BI  M*_BB ]^R   (222)

[K*]^R = [ Φ_I  Ψ_C ; 0  I ]^{R,T} [ K_II  K_IB ; K_BI  K_BB ] [ Φ_I  Ψ_C ; 0  I ]^R = [ K*_II  0 ; 0  K*_BB ]^R   (223)

In Equations (222) and (223), M*_II and K*_II are diagonal matrices. It can also be shown that K*_IB and K*_BI are zero.
A similar equation to (220) can be written for substructure S:

[M*]^S {q̈}^S + [K*]^S {q}^S = 0   (224)

Hence, the equations of motion of the two substructures, R and S, combined can be expressed in a single matrix equation,

M̂ q̂̈ + K̂ q̂ = 0   (225)

with

q̂ = { q_I^R ; q_I^S ; u_B }   (226)

and where

M̂ = [ [M*_II]^R      0            [M*_IB]^R ;
      0              [M*_II]^S    [M*_IB]^S ;
      [M*_BI]^R      [M*_BI]^S    [M*_BB]^R + [M*_BB]^S ]   (227)

K̂ = [ [K*_II]^R      0            0 ;
      0              [K*_II]^S    0 ;
      0              0            [K*_BB]^R + [K*_BB]^S ]   (228)

where the superscript R or S indicates the substructure. The boundary coordinates u_B in Equation (226) are, of course, common to both substructures when joined and therefore appear only once. The total number of generalized coordinates in the final equations is thus the sum of the fixed-interface modes retained in the two substructures plus the number of junction displacement coordinates. Equation (225) represents the complete structure and is solved using q̂ as a set of generalized coordinates. The local displacements u_I and u_B are then recovered from (219).
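The fixed-interface reduction described above is easy to express in matrix form. The following Python sketch (an illustration, not a production implementation; the index sets and the number of retained modes are assumptions of the example) builds the Craig-Bampton transformation of Equation (219) for one substructure and returns the reduced matrices of Equations (222)-(223).

import numpy as np
from scipy.linalg import eigh

def craig_bampton(M, K, interior, boundary, n_keep):
    """Fixed-interface (Craig-Bampton) reduction of one substructure.
    interior/boundary: index arrays of u_I and u_B; n_keep fixed-interface modes are retained."""
    Mii, Mib = M[np.ix_(interior, interior)], M[np.ix_(interior, boundary)]
    Mbi, Mbb = M[np.ix_(boundary, interior)], M[np.ix_(boundary, boundary)]
    Kii, Kib = K[np.ix_(interior, interior)], K[np.ix_(interior, boundary)]
    Kbi, Kbb = K[np.ix_(boundary, interior)], K[np.ix_(boundary, boundary)]

    lam, Phi = eigh(Kii, Mii)              # fixed-interface modes (u_B = 0)
    Phi = Phi[:, :n_keep]                  # keep the lowest n_keep modes
    Psi = -np.linalg.solve(Kii, Kib)       # constraint modes, Psi_C = -Kii^-1 Kib

    ni, nb = len(interior), len(boundary)
    T = np.block([[Phi, Psi],
                  [np.zeros((nb, n_keep)), np.eye(nb)]])   # transformation of Eq. (219)
    M_full = np.block([[Mii, Mib], [Mbi, Mbb]])
    K_full = np.block([[Kii, Kib], [Kbi, Kbb]])
    return T.T @ M_full @ T, T.T @ K_full @ T               # reduced M*, K*, Eqs. (222)-(223)

Applying the function to substructures R and S and assembling the results as in Equations (227)-(228) gives the coupled system of Equation (225).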
2.8.2 The free interface method
The two substructures to be joined, R and S, are analysed independently with the boundary coordinates free instead of fixed. Thus, the equations of motion for free undamped vibration in global coordinates are

[M]^R {ü}^R + [K]^R {u}^R = 0   (229)

for substructure R and

[M]^S {ü}^S + [K]^S {u}^S = 0   (230)

for substructure S. Equations (229) and (230) are now both transformed into modal coordinates using

{u}^R = [Φ]^R {q}^R   and   {u}^S = [Φ]^S {q}^S

The resulting equations in modal coordinates for the two substructures, still independent, are

[M*]^R {q̈}^R + [K*]^R {q}^R = 0   and   [M*]^S {q̈}^S + [K*]^S {q}^S = 0   (231)
where

[M*]^R = [Φ]^{R,T} [M]^R [Φ]^R,   [K*]^R = [Φ]^{R,T} [K]^R [Φ]^R

and

[M*]^S = [Φ]^{S,T} [M]^S [Φ]^S,   [K*]^S = [Φ]^{S,T} [K]^S [Φ]^S

The matrices [M*]^R, [K*]^R, [M*]^S and [K*]^S are diagonal as usual. Equations (231) can be combined as

[ [M*]^R  0 ; 0  [M*]^S ] { q̈^R ; q̈^S } + [ [K*]^R  0 ; 0  [K*]^S ] { q^R ; q^S } = 0   (232)

This equation contains the equations of motion of the two substructures, not yet connected, so there is still no coupling between them.
The modal matrices [Φ]^R and [Φ]^S have some rows which correspond to global coordinate displacements at the boundary (junction) nodes, and some which do not. If the global coordinates at the boundary are separated out and designated {u_B}^R and {u_B}^S for substructures R and S respectively, then they must be equal when the substructures are joined, that is,

{u_B}^R = {u_B}^S

Using the modal transformation we have

{u_B}^R = [Φ_B]^R {q}^R,   {u_B}^S = [Φ_B]^S {q}^S

and then

[Φ_B]^R {q}^R = [Φ_B]^S {q}^S   (233)

Each connection in Equation (233), between a mode of substructure R and a mode of substructure S, reduces the number of independent degrees of freedom of the system by one. Since the generalized coordinates must be independent, the following method can be used to eliminate the dependent coordinates.

Firstly, Equation (233) can be rearranged as

[ [Φ_B]^R   −[Φ_B]^S ] { q^R ; q^S } = 0

or

A q = 0   (234)

The vector q has size n_q, where n_q is the total number of modal coordinates of both substructures together. The matrix A has n_q columns and n_B rows, n_B being the number of global displacement coordinates at the boundary of either substructure; this number is obviously the same for both substructures.

Equation (234) can be partitioned as

[ A₁  A₂ ] { q_d ; q_f } = 0   (235)
where A₁, which must not be singular, is square and formed by the columns of A associated with the dependent coordinates q_d. The matrix A₂ contains the remaining columns, associated with the independent coordinates q_f. In theory it does not matter which n_B coordinates are chosen as dependent, but some choices may be more convenient than others in practical cases.

Writing Equation (235) as

A₁ q_d + A₂ q_f = 0

then

q_d = −A₁⁻¹ A₂ q_f

Introducing the trivial identity q_f = I q_f, the latter equation can be rewritten as

{ q_d ; q_f } = [ −A₁⁻¹ A₂ ; I ] q_f   (236)

Equation (236) is the required transformation; however, its individual terms will usually have to be reorganized to make it compatible with Equation (232). Thus, the elements of {q_d ; q_f} must be reordered back to the original ordering {q^R ; q^S}, with the corresponding changes applied to the matrix, which is renamed B. Equation (236) then becomes

{ q^R ; q^S } = B q_f

This transformation is now applied to (232):

Bᵀ [ [M*]^R  0 ; 0  [M*]^S ] B q̈_f + Bᵀ [ [K*]^R  0 ; 0  [K*]^S ] B q_f = 0

or

M̂ q̂̈ + K̂ q̂ = 0   (237)

with

q̂ = q_f   (238)

and where

M̂ = Bᵀ [ [M*]^R  0 ; 0  [M*]^S ] B   (239)

K̂ = Bᵀ [ [K*]^R  0 ; 0  [K*]^S ] B   (240)

Equation (237) is the equation of motion of the complete joined structure. It is usually solved for natural frequencies and mode shapes in the usual way; damping and external forces can then be added afterwards.
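As an illustration of the coupling step only, the sketch below (a hypothetical helper written for these notes; it assumes the indices of the n_B dependent coordinates are supplied and that no further reordering of the remaining coordinates is required) builds the matrix B of Equation (236) and applies Equations (239)-(240).

import numpy as np

def couple_free_interface(Mstar, Kstar, PhiB_R, PhiB_S, dep):
    """Free-interface coupling. Mstar, Kstar: block-diagonal modal matrices of Eq. (232);
    PhiB_R, PhiB_S: boundary rows of the modal matrices; dep: indices of the dependent coordinates."""
    A = np.hstack([PhiB_R, -PhiB_S])               # compatibility  A q = 0, Eq. (234)
    nq = A.shape[1]
    ind = [j for j in range(nq) if j not in dep]   # independent coordinates q_f
    A1, A2 = A[:, dep], A[:, ind]                  # partition of Eq. (235); A1 must be non-singular
    B = np.zeros((nq, len(ind)))
    B[dep, :] = -np.linalg.solve(A1, A2)           # q_d = -A1^-1 A2 q_f
    B[ind, :] = np.eye(len(ind))                   # q_f = I q_f, Eq. (236)
    return B.T @ Mstar @ B, B.T @ Kstar @ B        # coupled matrices, Eqs. (239)-(240)

The returned matrices correspond to M̂ and K̂ of Equation (237) and can be passed to any generalized eigensolver.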
3 Introduction to signal analysis
Understanding the principles of signal analysis is important for the dynamic analysis of structures. Measurements on real systems always produce signals, which may represent motions of a structure (displacements, accelerations, etc.), loads applied to the structure (forces, pressures, etc.) or other physical quantities as functions of time. These signals are generally continuous in reality. However, analogue signals are nowadays always treated as discrete signals, since digital computers are used for the analysis. Therefore, in experimental dynamic analysis or FE analysis we have to cope with discretized (digital) signals rather than continuous signals. Of course, the use of discretized signals may introduce errors in the results, which must be understood.
3.1 Introduction to signal types
A signal can be defined as a simple mathematical function

u = f(x)   (241)

where x is the independent variable, which specifies the domain of the signal. For example:

- u = sin(ωt) is a function of time and hence is a time-domain signal.
- U(ω) = 1/(−mω² + jcω + k) is a frequency-domain signal.
- φ(x, y) is a spatial-domain signal.

Dynamic signals can be deterministic or random. Deterministic signals can be defined by a mathematical function and hence their value at any instant of time can be predicted. Random signals cannot be described precisely by a mathematical function and can only be described in terms of their statistical properties.
3.1.1 Deterministic signals
Deterministic signals can be further classified as periodic and aperiodic.

Periodic signals are those that repeat themselves in time with a period T, such that

u(t) = u(t + nT),   n = 1, 2, ...

An example of a periodic function is x(t) = sin(ωt + φ). Any periodic function of period T can be represented as a summation of an infinite series of sinusoids, i.e. a Fourier series, given by

u(t) = (1/2) a₀ + Σ_{n=1}^{∞} ( aₙ cos(ωₙ t) + bₙ sin(ωₙ t) )

Aperiodic (non-periodic) signals do not repeat themselves in time. A distinction can be made between two types of non-periodic signals: transient and infinite aperiodic.

Transient signals tend to be characterized by no significant variation for long periods of time with short periods of intense activity. For a signal to be truly transient, it would have to be zero for an infinite time before and after the transient event. In practice, however, it is only necessary for the transient event to be captured completely within the period of time during which the signal is observed.

Infinite aperiodic signals are those which are continuous but do not repeat in time. Examples might be sinusoidal signals with a time-varying mean, amplitude or frequency. Conceptually, aperiodic signals may be considered as periodic signals with an infinite period.
3.1.2 Random signals
Random signals cannot be predicted and must be described in terms of probability functions and statistical averages rather than by explicit equations. Some definitions usually employed are:

Sample function: a single time history representing a random phenomenon.
Ensemble: a collection of sample functions from a random process.
Sample record: a sample function observed over a finite time interval.
Random (or stochastic) process: the collection of all possible sample functions that the random phenomenon might have produced.

Random processes may be categorized as either stationary or nonstationary. Stationary random processes may be further categorized as either ergodic or nonergodic. For an ergodic random process, the statistical properties calculated over a single sample function are the same as those calculated for the entire random process. There are many specialized subclassifications of nonstationary random processes that are beyond the scope of this book.
3.2 Fourier analysis of signals
3.2.1 The Fourier series
A function u(t) which is periodic with period T can be represented as an infinite series of sinusoids:

u(t) = (1/2) a₀ + Σ_{n=1}^{∞} ( aₙ cos(ωₙ t) + bₙ sin(ωₙ t) )   (242)

where ωₙ = 2πn/T and

a₀ = (2/T) ∫₀^T u(t) dt,   aₙ = (2/T) ∫₀^T u(t) cos(ωₙ t) dt,   bₙ = (2/T) ∫₀^T u(t) sin(ωₙ t) dt   (243)

Alternative forms of Equations (242) and (243) are

u(t) = c₀ + Σ_{n=1}^{∞} cₙ cos(ωₙ t + φₙ)   (244)

where

cₙ = √(aₙ² + bₙ²),   φₙ = tan⁻¹( −bₙ / aₙ )   (245)

or

u(t) = Σ_{n=−∞}^{∞} Uₙ e^{jωₙt}

where

ωₙ = 2πn/T,   Uₙ = (1/T) ∫₀^T u(t) e^{−jωₙt} dt

Note that U₋ₙ = Uₙ*, Re(Uₙ) = aₙ/2, Im(Uₙ) = −bₙ/2 and |Uₙ| = cₙ/2. It can be seen that all frequency components of this signal are discrete, at integer multiples of ω₁ = 2π/T, and they describe its linear spectrum. Note that the units of the linear spectrum are the same as those of the original signal.
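The coefficients of Equation (243) are easily evaluated numerically. The sketch below (a square wave is used as an assumed example signal) computes a₀, aₙ and bₙ by simple Riemann sums and reconstructs the truncated series of Equation (242).

import numpy as np

T, N = 2.0, 2000
t = np.linspace(0.0, T, N, endpoint=False)
dt = t[1] - t[0]
u = np.sign(np.sin(2.0 * np.pi * t / T))        # example periodic signal (square wave)

nmax = 9
a0 = 2.0 / T * np.sum(u) * dt
u_rec = 0.5 * a0 * np.ones_like(t)
for n in range(1, nmax + 1):
    wn = 2.0 * np.pi * n / T
    an = 2.0 / T * np.sum(u * np.cos(wn * t)) * dt   # coefficients of Eq. (243)
    bn = 2.0 / T * np.sum(u * np.sin(wn * t)) * dt
    u_rec += an * np.cos(wn * t) + bn * np.sin(wn * t)   # truncated series of Eq. (242)

For the square wave only the odd sine terms are non-zero, and u_rec converges to u (with the well-known Gibbs oscillations at the jumps) as nmax increases.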
3.2.2 The Fourier integral transform
Transient signals may be defined as being zero for a long period of time except for a short duration in which there are significant amplitude changes, so that

∫_{−∞}^{∞} |u(t)| dt < ∞

For the purpose of a theoretical analysis, a transient signal may be considered as a periodic signal with an infinite repeat period. Thus,

u(t) = ∫₀^{∞} ( A(ω) cos(ωt) + B(ω) sin(ωt) ) dω   (246)

where

A(ω) = (1/π) ∫_{−∞}^{∞} u(t) cos(ωt) dt,   B(ω) = (1/π) ∫_{−∞}^{∞} u(t) sin(ωt) dt   (247)

An alternative form of Equations (246) and (247) is

u(t) = ∫_{−∞}^{∞} U(ω) e^{jωt} dω

where

U(ω) = (1/2π) ∫_{−∞}^{∞} u(t) e^{−jωt} dt

The frequency content of the signal, given by U(ω), is now a continuous function of the frequency ω, as opposed to periodic signals, whose frequency content is given by components at discrete frequencies. This continuous function is therefore termed a "spectral density" (as opposed to a "linear spectrum"). The units of the spectral density are the units of the original signal per Hertz (e.g. (m/s²)/Hz for an acceleration).
3.2.3 Digital signals
The sampling of a continuous signal introduces additional complexity into signal analysis. The process of obtaining a discrete time history (a digital signal) from a continuous signal has two implications:

1. The information content of a discretized (sampled) signal is less than that of the continuous signal. The continuous signal contains an infinite number of independent samples, which is reduced by the sampling process to a finite number of independent samples.
2. Uncertainty is added to sampled signals. Quantification of this error is part of the sampling process, since the number of intervals is finite.

A discretized signal consists of a set of N discrete values spaced along the period. Thus the sampling interval Δt is given by

Δt = T/N

When dealing with discrete signals of finite duration T, it is necessary to use the discrete Fourier transform (as opposed to the Fourier series or the integral Fourier transform). A function which is defined only at discrete points (at t = t_k, k = 1, ..., N) can be represented by a finite series:

u(t_k) ≡ u_k = (1/2) a₀ + Σ_{n=1}^{N/2} ( aₙ cos(2πnk/N) + bₙ sin(2πnk/N) )   (248)

where

a₀ = (2/N) Σ_{k=1}^{N} u_k,   aₙ = (2/N) Σ_{k=1}^{N} u_k cos(2πnk/N),   bₙ = (2/N) Σ_{k=1}^{N} u_k sin(2πnk/N)   (249)

An alternative form is

u(t_k) ≡ u_k = Σ_{n=0}^{N−1} Uₙ e^{j 2πnk/N}   (250)

where

Uₙ = (1/N) Σ_{k=0}^{N−1} u_k e^{−j 2πnk/N}   (251)

and t_k = kΔt. Expressions (251) and (250) are known as the forward and the inverse discrete Fourier transform, respectively. It can be seen that the discretization of both the time and the frequency domains makes an inherent assumption of periodicity of both the time history and the spectrum.

This periodicity means that the range of frequencies that can be represented using a discretized time history is limited to the Nyquist frequency f_s/2, with f_s = 1/Δt being the sampling rate. Frequency components of a continuous signal above f_s/2 cannot be represented: they appear in the discretized signal at lower, incorrect frequencies, so the discretized signal misrepresents the continuous one. This phenomenon is known as aliasing. Hence the frequency content of the signal should be sufficiently below f_s/2 or, equivalently, the sampling rate f_s should be sufficiently larger than the highest frequency that must be represented accurately. The reader is recommended to consult a specialized book on signal processing for vibration analysis.
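The following sketch illustrates aliasing with NumPy's FFT, which implements Equations (250)-(251) up to its own normalization convention. The sampling rate and the signal frequency are example values chosen so that the aliased component falls exactly on a frequency bin.

import numpy as np

fs, N = 100.0, 200                  # sampling rate [Hz] and number of samples (example values)
t = np.arange(N) / fs
f_true = 70.0                       # signal frequency above the Nyquist frequency fs/2 = 50 Hz
u = np.sin(2.0 * np.pi * f_true * t)

U = np.fft.rfft(u) / N              # forward DFT (NumPy convention), cf. Eq. (251)
f = np.fft.rfftfreq(N, d=1.0 / fs)
print("peak appears at %.1f Hz instead of %.1f Hz" % (f[np.argmax(np.abs(U))], f_true))
# the 70 Hz component is aliased to fs - 70 = 30 Hz

An anti-aliasing (low-pass) filter applied before sampling is the practical remedy, as any text on vibration signal processing explains.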
3.3 State-space analysis
Basically, the state-space analysis allows us to transform the N second-order differential equations of an N-degree-of-freedom system into 2N first-order differential equations. The first-order form of the equation of motion is known as the state-space form. The state-space form is especially important when one has to deal with complex modes.

Let us consider the two-degree-of-freedom system of Figure 19, whose equation of motion is repeated here:

[ m₁  0 ; 0  m₂ ] { ü₁(t) ; ü₂(t) } + [ c₁+c₂  −c₂ ; −c₂  c₂ ] { u̇₁ ; u̇₂ } + [ k₁+k₂  −k₂ ; −k₂  k₂ ] { u₁ ; u₂ } = { F₁(t) ; F₂(t) }   (252)
Expanding the equations,

m₁ ü₁ + (c₁+c₂) u̇₁ − c₂ u̇₂ + (k₁+k₂) u₁ − k₂ u₂ = F₁   (253)
m₂ ü₂ − c₂ u̇₁ + c₂ u̇₂ − k₂ u₁ + k₂ u₂ = F₂

The two equations above are second-order differential equations which require knowledge of the initial position and velocity of both degrees of freedom in order to solve for the transient response. In the state-space formulation, the two second-order differential equations are transformed into four first-order differential equations. Following the usual state-space notation, we refer to the states as x and to the output as y.

Start by solving (253) for the highest derivatives, in this case the two second derivatives:

ü₁ = [ F₁ − (c₁+c₂) u̇₁ + c₂ u̇₂ − (k₁+k₂) u₁ + k₂ u₂ ] / m₁   (254)
ü₂ = [ F₂ + c₂ u̇₁ − c₂ u̇₂ + k₂ u₁ − k₂ u₂ ] / m₂
We now change notation, using x to define the four states: two displacements and two velocities,

x₁ = u₁,   x₂ = u₂,   x₃ = u̇₁,   x₄ = u̇₂   (255)

One can observe the relationship between the states and their first derivatives,

ẋ₁ = x₃ = u̇₁,   ẋ₂ = x₄ = u̇₂   (256)

and between the first and second derivatives,

ẋ₃ = ü₁,   ẋ₄ = ü₂   (257)

Using the above equations, one can write

ẋ₁ = x₃
ẋ₂ = x₄
ẋ₃ = [ F₁ − (c₁+c₂) x₃ + c₂ x₄ − (k₁+k₂) x₁ + k₂ x₂ ] / m₁
ẋ₄ = [ F₂ + c₂ x₃ − c₂ x₄ + k₂ x₁ − k₂ x₂ ] / m₂
which can be rewritten in matrix form as

{ ẋ₁ ; ẋ₂ ; ẋ₃ ; ẋ₄ } = [ 0  0  1  0 ;
                         0  0  0  1 ;
                         −(k₁+k₂)/m₁   k₂/m₁   −(c₁+c₂)/m₁   c₂/m₁ ;
                         k₂/m₂   −k₂/m₂   c₂/m₂   −c₂/m₂ ] { x₁ ; x₂ ; x₃ ; x₄ }
                       + [ 0  0 ; 0  0 ; 1/m₁  0 ; 0  1/m₂ ] { F₁ ; F₂ }

or, in short,

ẋ = A x + B u   (258)

in which we have used the common nomenclature: x is the state vector, A is the system matrix, B is the input matrix and u is the input vector (not to be confused with the displacement vector). The input vector contains the forces applied to each degree of freedom.

To account for cases where the desired output is not just the states but a linear combination of the states, an output matrix C is defined to relate the outputs y to the states. In addition, a matrix D, known as the feed-through matrix, multiplies the input u to account for outputs that are related directly to the inputs:

y = C x + D u

In our example, if we are interested in all four states (displacements and velocities), the output matrix C becomes the identity and D is zero:
{ y₁ ; y₂ ; y₃ ; y₄ } = [ 1 0 0 0 ; 0 1 0 0 ; 0 0 1 0 ; 0 0 0 1 ] { x₁ ; x₂ ; x₃ ; x₄ } + [ 0 0 ; 0 0 ; 0 0 ; 0 0 ] { F₁ ; F₂ }

If we were interested only in the two displacements and not in the two velocities, the output equation would be

{ y₁ ; y₂ } = [ 1 0 0 0 ; 0 1 0 0 ] { x₁ ; x₂ ; x₃ ; x₄ } + [ 0 0 ; 0 0 ] { F₁ ; F₂ }

If we were interested only in the accelerations, the output equation would be (note that the accelerations are the time derivatives of the states x₃ and x₄)

{ y₁ ; y₂ } = [ −(k₁+k₂)/m₁   k₂/m₁   −(c₁+c₂)/m₁   c₂/m₁ ;
               k₂/m₂   −k₂/m₂   c₂/m₂   −c₂/m₂ ] { x₁ ; x₂ ; x₃ ; x₄ } + [ 1/m₁  0 ; 0  1/m₂ ] { F₁ ; F₂ }

Whatever the choice of output equation, the state equation (258) never changes. Note that, for the general matrix equation of motion

M ü(t) + C u̇(t) + K u(t) = F(t)

the state equation can always be written as

ẋ(t) = [ 0  I ; −M⁻¹K  −M⁻¹C ] x(t) + [ 0 ; M⁻¹ ] u(t)

where x₁, ..., x_N are the displacements, x_{N+1}, ..., x_{2N} are the velocities and u₁, ..., u_N are the forces applied at each degree of freedom.
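The general state-space form above is straightforward to build from M, C and K. The sketch below (the two-DOF numerical values and the step force are arbitrary example assumptions) assembles A and B and integrates the response with SciPy.

import numpy as np
from scipy.integrate import solve_ivp

m1, m2, c1, c2, k1, k2 = 2.0, 1.0, 0.4, 0.2, 100.0, 50.0   # example values
M = np.diag([m1, m2])
C = np.array([[c1 + c2, -c2], [-c2, c2]])
K = np.array([[k1 + k2, -k2], [-k2, k2]])
n = M.shape[0]

Minv = np.linalg.inv(M)
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-Minv @ K,        -Minv @ C]])    # system matrix of Eq. (258)
B = np.vstack([np.zeros((n, n)), Minv])          # input matrix

def force(t):                      # step force on DOF 2 (example input)
    return np.array([0.0, 10.0])

def rhs(t, x):
    return A @ x + B @ force(t)

sol = solve_ivp(rhs, (0.0, 5.0), np.zeros(2 * n), max_step=1e-3)
displacements = sol.y[:n, :]       # output y = C x with C = [I 0], D = 0

Other output choices (velocities, accelerations) are recovered from the same solution by changing only C and D, exactly as discussed above.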
Part II
Finite element procedures for the dynamic analysis of structures
Francisco Javier Montáns
Professor, Universidad Politécnica de Madrid, Spain
4 Finite element discretization of continuous systems. Stiffness and mass matrices
In the following sections we briefly review some concepts of the discretization of continuous systems using finite elements. The idea is to obtain the stiffness, mass and damping matrices of a MDOF problem that approximates the continuous one.
4.1 Stiffness matrix
The stiffness of a structure is readily defined from Hooke's law for a spring element,

F = K u   (259)

where F is the force, u is the displacement and K is the stiffness. When u = 1 we have F = K, i.e. the stiffness is the force we have to apply to obtain a unit displacement. This concept is valid regardless of the actual problem to which K corresponds. The simplest spring element has two degrees of freedom, as shown in Figure 36. One of them must be restrained if it is the only element of the "structure". If the figure is not obvious to the reader, we recommend taking particular cases of the element shown in Figure 36 to become convinced that the stiffness matrix is indeed the one shown in the figure, and we also recommend obtaining some background from a book or notes on matrix structural analysis.

Figure 36: Stiffness matrix of one spring element. Natural boundary conditions are "forces of nature"; essential boundary conditions are prescribed displacements, so that the structure is correctly restrained and the system of equations can be solved.

For the case of an assembly of elements, like the one shown in Figure 37, the stiffness matrices of the different elements are mounted (assembled) into the global stiffness matrix in the way shown in Figure 37. We again leave to the reader the task of considering different boundary condition cases and verifying that the system of equations shown in Figure 37 represents the actual behaviour of the "structure".

In the following subsections we consider two typical cases of stiffness matrix. The first one is that of a structural element (a 2D beam) and the second one that of a continuum element. The reader should consult notes or a book on the basic finite element formulation for a proper description of the procedure.
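The assembly rule of Figure 37 can be illustrated with a few lines of Python (the spring topology and numerical values below are arbitrary examples, not those of the figure):

import numpy as np

def assemble_springs(n_dof, springs):
    """springs: list of (i, j, k) connecting global DOFs i and j with stiffness k."""
    K = np.zeros((n_dof, n_dof))
    for i, j, k in springs:
        ke = k * np.array([[1.0, -1.0], [-1.0, 1.0]])   # element stiffness of Figure 36
        for a, p in enumerate((i, j)):
            for b, q in enumerate((i, j)):
                K[p, q] += ke[a, b]                      # scatter into the global matrix
    return K

# example: two springs in parallel between DOFs 0 and 1, one in series to DOF 2
K = assemble_springs(3, [(0, 1, 100.0), (0, 1, 150.0), (1, 2, 200.0)])
K_ff = K[1:, 1:]                                          # restrain DOF 0 (essential boundary condition)
u = np.linalg.solve(K_ff, np.array([0.0, 10.0]))          # solve K u = f for a force on DOF 2

The restrained row and column are simply removed before solving, which is the algebraic counterpart of applying the essential boundary condition.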
4.1.1 Beam elements
For the case of beam elements, the local stiffness matrix is obtained from matrix structural analysis (the so-called direct stiffness method) such that

f_eˡ = K_eˡ u_eˡ   (260)

where the superindex l indicates the local coordinate system in which the equation is written and the subindex e the element. In expanded form (omitting for clarity the l superindex and the e subindex), if A is the section area, I the geometric inertia of the section (second moment of area), E Young's modulus and L the length of the beam,

{ f₁ ; f₂ ; f₃ ; f₄ ; f₅ ; f₆ } =
⎡  EA/L        0           0        −EA/L        0           0      ⎤
⎢   0       12EI/L³     6EI/L²        0      −12EI/L³     6EI/L²    ⎥
⎢   0        6EI/L²      4EI/L        0       −6EI/L²      2EI/L    ⎥
⎢ −EA/L        0           0         EA/L        0           0      ⎥
⎢   0      −12EI/L³    −6EI/L²        0       12EI/L³    −6EI/L²    ⎥
⎣   0        6EI/L²      2EI/L        0       −6EI/L²      4EI/L    ⎦ { u₁ ; u₂ ; u₃ ; u₄ ; u₅ ; u₆ }   (261)

where f₁ = N₁, f₂ = V₁, f₃ = M₁ are the axial, shear and moment resultants in the section of the beam at local node 1. The displacements u₁ = u_x, u₂ = u_y, u₃ = θ_z are, respectively, the displacement along the longitudinal direction of the beam, the displacement along the transverse (vertical) direction, and the cross-section rotation at node 1, see Figure 38 (left). Similar meanings apply to the forces and displacements at node 2 (degrees of freedom 4, 5 and 6).

Figure 37: Assembly of a set of springs (two in parallel and one in series). Note how the different element matrices are mounted into the global stiffness (system of equations) matrix.

These local vectors and matrix are transformed to global coordinates using the proper local-to-global transformation matrix

T = [ R  0 ; 0  R ]   (262)

where, if α is the angle between the longitudinal axis of the bar and the global x axis (see Figure 38, right),

R = ⎡ cos α   sin α   0 ; −sin α   cos α   0 ; 0   0   1 ⎤   (263)

so that the element vectors and matrix in global coordinates become (the reader should verify these formulae as an exercise)

K^g = T Kˡ Tᵀ,   u^g = T uˡ,   f^g = T fˡ   (264)

These matrices are assembled into the global stiffness matrix according to the usual rules:

K = A_{e=1}^{NE} K^g_e ;   f = A_{e=1}^{NE} f^g_e ;   u = A_{e=1}^{NE} u^g_e   (265)

where NE is the number of elements and A denotes the assembly operator. If a static analysis is to be carried out, we solve the equation

K u = f   (266)
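Equations (261)-(264) translate directly into code. The following sketch is illustrative only (the sense of the angle alpha must be taken consistent with Figure 38); it builds the local matrix of a planar beam element and rotates it to global coordinates with the document's convention K^g = T K^l T^T.

import numpy as np

def beam2d_local(E, A, I, L):
    """Local stiffness matrix of Eq. (261)."""
    a, b, c, d = E * A / L, 12 * E * I / L**3, 6 * E * I / L**2, E * I / L
    return np.array([[ a,  0,    0,  -a,  0,    0 ],
                     [ 0,  b,    c,   0, -b,    c ],
                     [ 0,  c, 4*d,    0, -c,  2*d ],
                     [-a,  0,    0,   a,  0,    0 ],
                     [ 0, -b,   -c,   0,  b,   -c ],
                     [ 0,  c, 2*d,    0, -c,  4*d ]])

def beam2d_global(E, A, I, L, alpha):
    """Rotation to global axes with T of Eqs. (262)-(263)."""
    c, s = np.cos(alpha), np.sin(alpha)
    R = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])
    T = np.block([[R, np.zeros((3, 3))], [np.zeros((3, 3)), R]])
    return T @ beam2d_local(E, A, I, L) @ T.T    # K_g = T K_l T^T, document convention of Eq. (264)

Assembling the returned 6 x 6 matrices with the standard scatter operation (as in the spring sketch above) gives the global K of Equation (265).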
Example 7 Obtain the stiffness terms K₆₆ and K₅₆ of a beam using strength-of-materials procedures.

Solution: The stiffness terms Kᵢ₆ may be obtained by imposing a unit end slope u₆ = θ₆ = 1 while keeping the rest of the displacements equal to zero, see Figure 39. The forces and reactions at each degree of freedom i are then the stiffness terms Kᵢ₆ = K₆ᵢ.

Figure 38: Local (left) and global (right) coordinates of a planar beam element.
Figure 39: Obtaining the terms of the stiffness matrix of a beam.

For this particular case the problem is statically indeterminate (hyperstatic), but it may be solved using the compatibility method and Mohr's theorems:

1 = θ₆ = θ₁ + ∫₀^L M̄(x)/(EI) dx = 0 + M L/(EI) + (1/2) V L²/(EI)

and

0 = u₅ = u₁ + θ₁ L + ∫₀^L M̄(x)(L − x)/(EI) dx = 0 + 0·L + M L²/(2EI) + V L³/(3EI)

i.e., solving the system of equations,

M = 4EI/L = K₆₆   and   V = −6EI/L² = K₅₆

where I is the geometric inertia of the section (second moment of area), E is Young's modulus, L is the length of the beam and M̄(x) is the moment resultant at section x of the beam. We leave to the reader the task of obtaining the rest of the terms of the stiffness matrix of a beam.
Example 8 Obtain the deflection law u(x)/L and the slope law θ(x) for the case of Example 7.

Solution: Once the values of M and V are known, the Navier-Bresse and Mohr theorems may be used again to obtain the deflection of the beam:

M̄(x) = M + V (L − x) = 4EI/L − (6EI/L²)(L − x)

u(x) = u₁ + θ₁ x + ∫₀^x M̄(x̄)(x − x̄)/(EI) dx̄ = ∫₀^x [ 4/L − (6/L²)(L − x̄) ] (x − x̄) dx̄ = x³/L² − x²/L

i.e., for ξ = x/L with ξ ∈ [0, 1],

u(ξL)/L = ξ² (ξ − 1)   (267)

and

θ(x = ξL) = du/dx = ξ (3ξ − 2)   (268)

We leave to the reader to verify that the boundary conditions are met.
4.1.2 Continuum elements
For continuum elements, the finite element formulation is used. In that formulation, which we review very briefly, shape functions Nᵢ(X) are employed to interpolate the displacements; see Figure 40 for the case of a bilinear 2D continuum element using an isoparametric interpolation. For example, for the bilinear plane element e,

u(Xᵉ) = { u(X) ; v(X) } = { Σ_{i=1}^{4} Nᵢ(X) uᵢᵉ ; Σ_{i=1}^{4} Nᵢ(X) vᵢᵉ }
      = ⎡ N₁ 0  N₂ 0  N₃ 0  N₄ 0 ; 0 N₁  0 N₂  0 N₃  0 N₄ ⎤ { u₁ᵉ  v₁ᵉ  u₂ᵉ  v₂ᵉ  u₃ᵉ  v₃ᵉ  u₄ᵉ  v₄ᵉ }ᵀ
      = N uᵉ,   with  N = [ N₁  N₂  N₃  N₄ ]  of size 2 × nDOF   (269)

where X are the coordinates and the nDOF element nodal displacements are

uᵉ = { u₁ᵉ  v₁ᵉ  u₂ᵉ  v₂ᵉ  u₃ᵉ  v₃ᵉ  u₄ᵉ  v₄ᵉ }ᵀ   (nDOF × 1)   (270)

so that

u(Xᵉ) = N uᵉ   (271)
Then the strains are obtained from

ε = ½ [ ∇u + (∇u)ᵀ ]   (272)

Figure 40: Shape functions of a bilinear element using Lagrange polynomials and local isoparametric coordinates.

In Voigt notation,

{ εₓ ; ε_y ; γ_xy } = ⎡ N₁,ₓ  0     N₂,ₓ  0     N₃,ₓ  0     N₄,ₓ  0 ;
                      0     N₁,_y  0     N₂,_y  0     N₃,_y  0     N₄,_y ;
                      N₁,_y N₁,ₓ   N₂,_y N₂,ₓ   N₃,_y N₃,ₓ   N₄,_y N₄,ₓ ⎤ { u₁ᵉ ; v₁ᵉ ; ... ; u₄ᵉ ; v₄ᵉ }
                   = [ B₁ᵉ  B₂ᵉ  B₃ᵉ  B₄ᵉ ] uᵉ   (273)

which is written in compact form as

εᵉ = Bᵉ uᵉ   (274)

We have used the notation N₁,ₓ = ∂N₁/∂x. Then the appropriate constitutive equation is written in matrix (Voigt) form as

σᵉ = Dᵉ εᵉ   (275)

where σᵉ are the stresses in the element and Dᵉ is the constitutive matrix (which differs for plane stress and plane strain). Then, in each element,

σᵉ = Dᵉ Bᵉ uᵉ   (276)

If w are virtual displacements, using the same interpolation (Galerkin formulation),

w(Xᵉ) = N wᵉ   (277)

and the virtual strains are

εᵉ_w = Bᵉ wᵉ   (278)

so the virtual work in the element is

∫_{Ωᵉ} (εᵉ_w)ᵀ σᵉ dΩᵉ = wᵉᵀ [ ∫_{Ωᵉ} Bᵉᵀ Dᵉ Bᵉ dΩᵉ ] uᵉ   (279)

Hence the element stiffness matrix is

Kᵉ = ∫_{Ωᵉ} Bᵉᵀ Dᵉ Bᵉ dΩᵉ   (280)

which is usually evaluated numerically using Gauss quadrature,

Kᵉ = Σ_{i=1}^{NIP} Bᵉᵢᵀ Dᵉ Bᵉᵢ Wᵢ dΩᵉᵢ   (281)

where i is the integration point, Wᵢ is the weight and NIP is the number of integration points. We note that the Bᵉᵢ are evaluated at the Gauss quadrature points and that dΩᵉᵢ is also evaluated at those points, because it contains the Jacobian of the transformation from the normalized domain (a 2 × 2 square for bilinear elements, (ξ, η) ∈ [−1, 1] × [−1, 1]) to the actual element domain Xᵉ. We omit further details here; the reader should review basic notes or books on linear static finite elements.
Example 9 Obtain the weak form of a beam problem.

Solution: From the Bernoulli-Euler beam theory we know that the field equation (also known as the strong form) is

d/dx [ d/dx ( EI d²u/dx² ) ] = q   (282)

where q is the distributed (downward) load on the beam. Assuming a constant EI, and weighting the differential equation by an arbitrary function w(x) over the whole domain (the so-called variational or weighted form),

∫₀^L w d/dx [ d/dx ( EI d²u/dx² ) ] dx = ∫₀^L w q dx   (283)

If Eq. (282) holds, it is obvious that Eq. (283) also holds for whatever well-behaved w(x). Integrating twice by parts,

[ w ( EI d³u/dx³ ) ]₀^L − ∫₀^L (dw/dx) ( EI d³u/dx³ ) dx = ∫₀^L w q dx

[ w ( EI d³u/dx³ ) ]₀^L − [ (dw/dx) ( EI d²u/dx² ) ]₀^L + ∫₀^L (d²w/dx²) EI (d²u/dx²) dx = ∫₀^L w q dx

i.e.

∫₀^L (d²w/dx²) EI (d²u/dx²) dx = ∫₀^L w q dx − [ w ( EI d³u/dx³ ) ]₀^L  (end shear terms) + [ (dw/dx) ( EI d²u/dx² ) ]₀^L  (end moment terms)

If we choose the weighting function w(x) to be a virtual displacement field of the same type as u(x), then, since at the ends where there are forces there are no prescribed displacements and vice versa, the boundary terms vanish and

∫₀^L (d²w/dx²) EI (d²u/dx²) dx = ∫₀^L w q dx   (284)

an expression known as the weak form, which may alternatively be obtained from the principle of minimum potential energy.
Example 10 Show, using Equation (284) and the solution of Example 8, that the stiffness terms may be obtained as, for example,

K₆₆ = ∫₀^L (d²u₆/dx²) EI (d²u₆/dx²) dx   (285)

where u₆ refers to the deflection function obtained in Example 8.

Solution: It is straightforward to verify it using

u₆(x) = x² (x − L) / L²,   u₆''(x) = (2/L²)(3x − L)

K₆₆ = ∫₀^L (d²u₆/dx²) EI (d²u₆/dx²) dx = (4EI/L⁴) ∫₀^L (3x − L)² dx = 4EI/L

which is the corresponding term of the stiffness matrix. We leave to the reader the task of obtaining the different uᵢ(x) functions and the stiffness coefficients as

Kᵢⱼ = ∫₀^L (d²uᵢ/dx²) EI (d²uⱼ/dx²) dx   (286)

The uᵢ(x) are usually written in terms of the dimensionless coordinate ξ ∈ [0, 1] and are known as the Hermite (cubic) polynomials. These polynomials are the deflections obtained from the Bernoulli-Euler theory, and they link matrix structural analysis with the finite element theory. The Hermite polynomials have the same shape as the cubic splines used in many fields of engineering. We leave to the reader the task of obtaining all the Hermite polynomials for the different boundary conditions which define the different stiffness terms.
Example 11 Compute the stiffness matrix of the building of Figure 41, considering only the horizontal degrees of freedom shown, where EI is the column bending stiffness coefficient, L is the length of the columns and m is the total mass of each floor. Assume that the bending deformation of the floors is negligible compared to that of the columns.

Figure 41: Three-storey building of Example 11 (columns of stiffness EI and length L at each storey; floor masses m; horizontal DOFs 1, 2 and 3).

Solution: Each column has a lateral bending stiffness given by the K₂₂ term of Equation (261),

K_one column = 12EI/L³

Then, as can be seen on the right-hand side of Figure 41, degrees of freedom 1 and 2 both receive the stiffness contribution of 4 columns, whereas DOF 3 receives the contribution of only 2 columns. The coupling between one DOF and the adjacent one is that of 2 columns. If this reasoning is not obvious to the reader, we leave it as an exercise to obtain the stiffness matrix from the definition of stiffness. Therefore, the stiffness matrix of the structure is

K = (12EI/L³) ⎡ 4  −2  0 ; −2  4  −2 ; 0  −2  2 ⎤
4.2 Mass matrices


Mass matrices are also derived using matrix structural analysis for structural elements (from the kinetic energy) or from the variational formulation in the case of continuum elements. In these cases the resulting matrix is called the consistent mass matrix. However, we will see that on some occasions this "consistent" matrix does not have a desirable layout, so it is sometimes diagonalized to form what is known as the lumped mass matrix. We briefly review these matrices.

4.2.1 Consistent mass matrix
For the case of structural elements, for example the beam element, the mass matrix in local coordinates is (we omit the details of how to obtain it, but we note that it can be obtained using the Hermite polynomials)

M_eˡ = (ρAL/420) ×
⎡ 140    0      0     70     0      0    ⎤
⎢  0    156    22L     0    54    −13L   ⎥
⎢  0    22L    4L²     0    13L   −3L²   ⎥
⎢ 70     0      0    140     0      0    ⎥
⎢  0    54     13L     0    156   −22L   ⎥
⎣  0   −13L   −3L²     0   −22L    4L²   ⎦   (287)

where L is the length of the beam, A is the section area and ρ is the density of the beam. This matrix is converted to global coordinates and assembled into the global mass matrix M in the same manner as the stiffness matrix.
For continuum elements, the velocities and accelerations are interpolated using the same shape functions as the displacements:

v(Xᵉ) = N vᵉ   (288)
a(Xᵉ) = N aᵉ   (289)

If the density is ρ, the principle of virtual work gives, for the inertia forces of the element f_Mᵉ = ∫_{Ωᵉ} ρ aᵉ dΩᵉ, the expression

wᵉᵀ f_Mᵉ = wᵉᵀ [ ∫_{Ωᵉ} Nᵀ ρ N dΩᵉ ] aᵉ   (290)

So we define the mass matrix as

Mᵉ (nDOF × nDOF) = ∫_{Ωᵉ} Nᵀ (nDOF × 2) ρ N (2 × nDOF) dΩᵉ   (291)

or, in general,

Mᵉ = ∫_{Ωᵉ} Nᵀ m N dΩᵉ   (292)

where m is a small matrix (or a scalar) which characterizes the mass "density" of the element for each nodal degree of freedom. For 2D continuum mechanics elements it is the 2 × 2 matrix

m = ⎡ ρ  0 ; 0  ρ ⎤   (293)

where ρ is the density.
Example 12 Obtain the consistent mass matrix of a beam element.

Solution: If one used a "rule of thumb" to obtain the mass matrix of a beam element, it would be something like

M = (ρAL/2) diag( 1, 1, αL², 1, 1, αL² )   (294)

where α is a number difficult to determine from a "rule of thumb", so α = 0 is a common choice (there is no consensus on what this α should be). This intuitive "rule of thumb" has the advantage of yielding a diagonal mass matrix, but the inconvenience that it is not "consistent" with the way the stiffness matrix is obtained. Hence, if the stiffness matrix terms are obtained from Hermite polynomials, see Example 10, the mass matrix terms should be obtained in the same way, i.e. for the term of Example 10,

M₆₆ = ∫₀^L u₆ (ρA) u₆ dx = ρA ∫₀^L [ x²(x − L)/L² ]² dx = ρA L³ / 105

which is the value given in Equation (287). We leave to the reader the task of obtaining the remaining terms of Equation (287) in a similar way.
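The consistency between stiffness and mass terms can be checked numerically. The sketch below (unit properties assumed as example values) evaluates M₆₆ with the deflection function of Example 8 and compares it with ρAL³/105.

import numpy as np

rhoA, L = 1.0, 1.0                      # unit mass per length and unit length (example values)
x = np.linspace(0.0, L, 100001)
dx = x[1] - x[0]
u6 = x**2 * (x - L) / L**2              # Hermite shape function of Example 8

M66 = rhoA * np.sum(u6 * u6) * dx       # M_66 = integral of rhoA * u6^2 over the beam
print(M66, rhoA * L**3 / 105.0)         # both approximately 0.009524

The same loop over the six Hermite functions reproduces every entry of Equation (287).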
4.2.2 Lumped mass matrix
Diagonal or "lumped" mass matrices are of much interest in structural dynamics, as we will see later. In transient analysis, very efficient explicit algorithms may be developed in which the solution of a system of equations is avoided. In modal extraction, the generalized eigenproblem may be simply converted into a standard one, for which many more, and faster, solution procedures exist. Lumping techniques are more adequate in problems in which the degrees of freedom are homogeneous (i.e. when they have the same physical interpretation, for example when they are all translational DOFs, as in 2D and 3D solid elements). When the problem has DOFs of different nature, care must be exercised so that the procedure preserves the correct physical meaning.

There are many techniques to produce diagonal mass matrices. Almost all of them are applied at the element level. We briefly review some of the most used ones.

Total mass scaling   One method is to take the consistent mass matrix, eliminate the entries outside the diagonal and scale the diagonal terms so as to preserve the total mass of the element. This method, also called the special lumping technique or the Hinton-Rock-Zienkiewicz method, should be used only in the case of homogeneous DOFs. The total mass of the element is computed as

M = ∫_{Ωᵉ} m dΩᵉ   (295)

and the mass associated with the diagonal terms of the consistent mass matrix for each spatial direction is (2D case)

⎡ M_d  0 ; 0  M_d ⎤ = Σ_{i=1}^{n} ∫_{Ωᵉ} Nᵢᵀ m Nᵢ dΩᵉ   (296)

Then, for the lumped mass matrix Mᴸ, we simply set

Mᴸᵢⱼ = 0 if i ≠ j,   Mᴸᵢᵢ = (Mᵢᵢ / M_d) M   (297)

i.e.

Mᴸ = diag( (M/M_d) Mᵢᵢ )   (298)

A handy way to write down the total mass is to use the influence vector J, which contains 1 in the DOFs of one spatial direction and 0 in the remaining entries; for the 2D case we may take J = [1, 0]ᵀ and

M = Σ_{i=1}^{n} ∫_{Ωᵉ} Jᵀ Nᵢᵀ m J dΩᵉ   (299)
Row sum method   This method is also valid when all DOFs are homogeneous. In this case we simply set

Mᴸᵢⱼ = 0 if i ≠ j,   Mᴸᵢᵢ = Σ_{k=1}^{n} Mᵢₖ = Σ_{k=1}^{n} ∫_{Ωᵉ} Jᵀ Nᵢᵀ m Nₖ J dΩᵉ   (300)

This method is similar to using piecewise (lumped) shape functions in the computation of the mass matrix.
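Both lumping rules above are one-liners in practice. The sketch below applies them to the standard consistent mass matrix of a three-node (quadratic) rod element; for this element both rules reproduce the lumped matrix quoted later in Example 14. The total element mass is an arbitrary example value.

import numpy as np

def hrz_lump(Mc, M_total):
    """Total-mass-scaling (Hinton-Rock-Zienkiewicz) lumping, Eqs. (297)-(298)."""
    d = np.diag(Mc).copy()
    return np.diag(M_total * d / d.sum())

def row_sum_lump(Mc):
    """Row-sum lumping, Eq. (300)."""
    return np.diag(Mc.sum(axis=1))

mL = 1.0                                         # total mass of the element (example value)
Mc = mL / 30.0 * np.array([[ 4.0,  2.0, -1.0],
                           [ 2.0, 16.0,  2.0],
                           [-1.0,  2.0,  4.0]])  # consistent mass of a quadratic rod element
print(hrz_lump(Mc, mL))                          # -> (mL/6) diag(1, 4, 1), cf. Example 14 below
print(row_sum_lump(Mc))                          # same result for this element

For higher-order or distorted elements the two rules no longer coincide, which is one reason several lumping techniques coexist.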
Change of quadrature method   In this procedure, the mass matrix is computed with a different integration rule from that used for the stiffness matrix. The "consistent" mass matrix is actually computed numerically using Gauss quadrature; its generic nodal block is

Mᵢⱼ = ∫_Ω Nᵢᵀ m Nⱼ dΩ = Σ_{p=1}^{NIP} W_p Nᵢᵀ(ξ_p) m Nⱼ(ξ_p) J_p   (301)

where Nᵢ(ξ_p) is the value of the nodal shape function matrix evaluated at integration point p (of coordinates ξ_p), W_p is the weight of the quadrature rule at integration point p and J_p is the volumetric Jacobian determinant of the isoparametric transformation X(ξ). Usually the mass matrix is not diagonal because the optimal ξ_p are not located at the nodes of the element; at those integration points all the shape functions are generally nonzero and coupling terms therefore appear. However, at the nodes of the element all but one of the shape functions are zero. Thus, selecting the ξ_p to be at the nodes of the element, if Nᵢ(ξ_p) ≠ 0 then Nⱼ(ξ_p) = 0 for i ≠ j, and the resulting mass matrix is diagonal. The weights of the integration rule must be calculated so as to integrate exactly the relevant polynomials in the isoparametric coordinates. The resulting weights are those of the Lobatto integration rules, which always include the endpoints; the first two Lobatto rules are the trapezoidal and Simpson rules. For elements with quartic and higher-order shape functions (very uncommon when lumping is used), the position of the midside nodes should be changed in order to integrate the corresponding polynomials exactly.

Type of element    Rule           Isoparametric coordinates           Weights
Linear             Trapezoidal    ξ = ±1                              W₁ = W₂ = 1
Quadratic          Simpson        ξ₁ = −1, ξ₂ = 0, ξ₃ = 1             W₁ = 1/3, W₂ = 4/3, W₃ = 1/3
Example 13 Compute the lumped mass matrix of a rod element of length L_e and mass per unit length m_e.

Solution: We leave the details to the reader. We note that simple intuition yields the correct answer:

M = (m_e L_e / 2) ⎡ 1  0 ; 0  1 ⎤   (302)

Example 14 Assume a rod element with quadratic shape functions. Compute the consistent and lumped mass matrices.

Solution: The lumped mass matrix for this case is

M = (m_e L_e / 6) ⎡ 1  0  0 ; 0  4  0 ; 0  0  1 ⎤   (303)

We again leave the details to the reader.

Example 15 Compute the mass matrix of the building of Example 11. Assume that the mass of the columns is negligible compared to that of the floors.

Solution: Since the masses are lumped at the degrees of freedom, the mass matrix is obviously

M = diag( m, m, m )   (304)

Example 16 Consider the four-storey building of Figure 42. Use EI/L³ = 1 (in consistent units of force/displacement) and M = 1 (in consistent units of mass). Compute the stiffness and mass matrices of the building. Consider the mass of the columns negligible with respect to that of the floors and the bending stiffness of the floors much larger than that of the columns (i.e. rigid floors). Consider also only the horizontal degrees of freedom.

Figure 42: Four-storey building of Example 16 (column stiffnesses E·10I, E·5I, E·2I and E·I from the first to the fourth storey, all of length L; floor masses 2M, M, M and 2M at DOFs 1 to 4).

Solution: The element stiffness matrix of a column, retaining only the horizontal degrees of freedom, is

Kᵉ = (12EI/L³) ⎡ 1  −1 ; −1  1 ⎤   (305)

Since there are two columns per storey,

K_storey = (24EI/L³) ⎡ 1  −1 ; −1  1 ⎤   (306)

Then, for each level,

K_1st = 240 ⎡ 1  −1 ; −1  1 ⎤   (307)
K_2nd = 120 ⎡ 1  −1 ; −1  1 ⎤   (308)
K_3rd = 48 ⎡ 1  −1 ; −1  1 ⎤   (309)
K_4th = 24 ⎡ 1  −1 ; −1  1 ⎤   (310)

So

K = A_{storey=1}^{4} K_storey = ⎡ 360  −120   0    0 ; −120  168  −48   0 ; 0  −48  72  −24 ; 0  0  −24  24 ⎤   (311)

Since the masses are lumped and M = 1,

M = ⎡ 2  0  0  0 ; 0  1  0  0 ; 0  0  1  0 ; 0  0  0  2 ⎤   (312)
5 Computational procedures for eigenvalue and eigenvector analysis
5.1 The modal decomposition revisited. Mode superposition analysis
In the first part of these notes we have seen that the linear dynamics equation can be written as

M ü + C u̇ + K u = f   (313)

where M is the mass matrix, K is the stiffness matrix, C is the damping matrix, u is the vector of unknown displacements and f is the load vector. In practice, if the response of the structure is linear, a very efficient method to analyse the dynamic behaviour is mode superposition, as we will see later. Modal decomposition is basically a change of coordinates in the N-dimensional space (N being the number of degrees of freedom). The displacement vector u(X, t), where X are the coordinates and t is the time, can be expressed in such a system as

u(X, t) = Σ_{i=1}^{N} ηᵢ(t) φᵢ(X) = [ φ₁  φ₂  ...  φ_N ] { η₁ ; η₂ ; ... ; η_N } = Φ η   (314)

where the φᵢ(X) are the "base vectors", named modes of vibration or eigenvectors, which we compute in this section, and the ηᵢ(t) are the coordinates in that system; Φ is the modal matrix. This may also be viewed as a separation-of-variables technique for solving differential equations. Using this basis, the equation of motion is

M Φ η̈ + C Φ η̇ + K Φ η = f   (315)

which can be premultiplied by Φᵀ to obtain

Φᵀ M Φ η̈ + Φᵀ C Φ η̇ + Φᵀ K Φ η = Φᵀ f   (316)

Usually, damping in structures is small, so the essence of the dynamic behaviour remains unaltered. Consider then the undamped equation of motion (the damping terms will be addressed again later):

Φᵀ M Φ η̈ + Φᵀ K Φ η = Φᵀ f   (317)

Now we choose the vectors φᵢ (the modes) to be the special basis such that (this is the most important equation in eigenvalue/eigenvector analysis, the generalized eigenvalue/eigenvector equation)

K φᵢ = λᵢ M φᵢ (no sum)   ⇔   K Φ = M Φ diag(λᵢ) = M Φ Λ   (318)

If we premultiply by Φᵀ we have

Φᵀ K Φ = Φᵀ M Φ Λ   (319)

It is obvious that, if φᵢ fulfils Eq. (318), any multiple of it will do so as well:

K (c φᵢ) = λᵢ M (c φᵢ)   (320)
and, hence, we can choose the moduli such that

φᵢᵀ M φⱼ = 1 if i = j,   φᵢᵀ M φⱼ = 0 if i ≠ j   (321)

The last identity follows from the symmetry of K and M:

K φᵢ = λᵢ M φᵢ and K φⱼ = λⱼ M φⱼ   ⇒   φⱼᵀ K φᵢ = λᵢ φⱼᵀ M φᵢ and φᵢᵀ K φⱼ = λⱼ φᵢᵀ M φⱼ;
subtracting both equations:   λᵢ = λⱼ unless φⱼᵀ M φᵢ = 0   (322)

Thus

Φᵀ M Φ = I   and   Φᵀ K Φ = Λ   (323)

If we insert Equation (321) into (317) and use b = Φᵀ f (whose components are known as the modal participation factors), we obtain

η̈ + Λ η = b   (324)

i.e.

{ η̈₁ ; η̈₂ ; ... ; η̈_N } + diag(λ₁, λ₂, ..., λ_N) { η₁ ; η₂ ; ... ; η_N } = { b₁ ; b₂ ; ... ; b_N }
Since Λ = diag(λ₁, ..., λ_N) is a diagonal matrix, we obtain an uncoupled system of equations,

η̈₁(t) + λ₁ η₁(t) = b₁(t)
...
η̈_N(t) + λ_N η_N(t) = b_N(t)   (325)

which can be solved and integrated independently. We will address the time integration of such equations later. However, we note that the solution of the homogeneous equation is of the type

ηᵢ(t) = Aᵢ sin(ωᵢ t + αᵢ)   (326)

where Aᵢ is the amplitude and ωᵢ = √λᵢ is the (natural, circular, angular) frequency, in radians per second in SI units. It can readily be verified that (326) is the solution of (325) by simple substitution:

−ωᵢ² Aᵢ sin(ωᵢ t + αᵢ) + λᵢ Aᵢ sin(ωᵢ t + αᵢ) = 0   (327)

For the forced case, the solution may be obtained through the time-integration algorithms addressed later or through the Duhamel integral (which can of course also be evaluated numerically):

ηᵢ(t) = (1/ωᵢ) ∫₀^t bᵢ(τ) sin ωᵢ(t − τ) dτ + αᵢ sin ωᵢ t + βᵢ cos ωᵢ t   (328)

where the constants αᵢ, βᵢ are determined from the initial conditions. Once the ηᵢ(t), η̇ᵢ(t) and η̈ᵢ(t) have been solved for, the displacements, velocities and accelerations of the structure may be obtained as

u(X, t) = Σ_{i=1}^{N} ηᵢ(t) φᵢ(X) = Φ η   (329)
u̇(X, t) = Σ_{i=1}^{N} η̇ᵢ(t) φᵢ(X) = Φ η̇   (330)
ü(X, t) = Σ_{i=1}^{N} η̈ᵢ(t) φᵢ(X) = Φ η̈   (331)

This procedure is usually called mode superposition analysis. In practice we note that, for a large structure (say thousands of degrees of freedom), only a few modes n << N are needed to obtain a rather accurate prediction of the response of the structure. Furthermore, in large structures all the modes are almost never computed.

For general matrices, eigenvalues and eigenvectors may be complex. However, in structural dynamics the mass and stiffness matrices are symmetric and at least one of them is positive definite, which is a sufficient condition for the eigenvalues to be real.
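The whole mode-superposition procedure can be condensed into a few lines. The following sketch (using the stiffness and mass matrices of Example 16 and an assumed harmonic load pattern) solves the generalized eigenproblem (318) with SciPy, forms the modal participation factors b = Φᵀf, and evaluates the steady-state amplitudes of the uncoupled equations (325) for an undamped harmonic excitation.

import numpy as np
from scipy.linalg import eigh

K = np.array([[ 360., -120.,   0.,   0.],
              [-120.,  168., -48.,   0.],
              [   0.,  -48.,  72., -24.],
              [   0.,    0., -24.,  24.]])     # Example 16
M = np.diag([2., 1., 1., 2.])

lam, Phi = eigh(K, M)                          # K Phi = M Phi diag(lam); Phi is M-orthonormal
w = np.sqrt(lam)                               # natural frequencies [rad/s]

f = np.array([0., 0., 0., 1.])                 # example load pattern (top-floor force)
b = Phi.T @ f                                  # modal participation factors, b = Phi^T f

# steady-state response of each uncoupled equation eta_i'' + lam_i eta_i = b_i sin(Omega t)
Omega = 2.0                                    # assumed excitation frequency [rad/s]
eta_amp = b / (lam - Omega**2)
u_amp = Phi @ eta_amp                          # physical amplitudes by mode superposition

Truncating Phi to its first columns gives the few-mode approximation mentioned above.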
5.2 Other eigenvalue and eigenvector problems
There are many other problems in physics which also result in an eigenvalue problem and to which the algorithms given in Section 6 are applicable. One problem frequently found in structural dynamics is the buckling problem. In the buckling problem the mass matrix is replaced by the geometric (or nonlinear) stiffness matrix. The main ideas are presented in the following example.

Example 17 Obtain the finite element formulation (which can also be obtained from matrix structural analysis) of the buckling equation of a Bernoulli-Euler column, see Figure 43. Compare the result with the finite element formulation of the free bending vibration of the same column.

Solution: The equilibrium of a Bernoulli-Euler beam section under a compressive load is given by the equations

Σ forces = 0:   dV/dx = q(x)   (332)
Σ moments = 0:   V = dM/dx + P du/dx   (333)

where x is the longitudinal beam coordinate, q(x) is the distributed load on the beam, P is the compressive load, V(x) is the shear force in the section and M(x) is the section moment resultant. Since, from the Bernoulli-Euler beam theory,

M = EI d²u/dx²   (334)

Figure 43: Differential beam-column element of Example 17 (compressive load P, distributed load q, internal forces V, M and V + dV, M + dM over a length dx).
we obtain

d/dx [ d/dx ( EI d²u/dx² ) + P du/dx ] = q   (335)

If we assume that q(x) = 0 (no lateral load), then

d²/dx² ( EI d²u/dx² ) + P d²u/dx² = 0   (336)

The weak form (or principle of virtual work) is obtained by weighting the equation by a function w which vanishes at the boundaries and integrating over the whole domain:

∫₀^L w d²/dx² ( EI d²u/dx² ) dx + ∫₀^L P w d²u/dx² dx = 0   (337)

Applying integration by parts with w(0) = w(L) = 0,

−∫₀^L (dw/dx) d/dx ( EI d²u/dx² ) dx − ∫₀^L P (dw/dx)(du/dx) dx = 0   (338)

and doing so again,

∫₀^L (d²w/dx²) EI (d²u/dx²) dx − [ (dw/dx) EI d²u/dx² ]₀^L − ∫₀^L P (dw/dx)(du/dx) dx = 0   (339)

If we consider pinned boundary conditions, M(0) = M(L) = 0, the second term vanishes and the weak form is

∫₀^L (d²w/dx²) EI (d²u/dx²) dx − P ∫₀^L (dw/dx)(du/dx) dx = 0   (340)
Using the Galerkin approach,

∫₀^L (·) dx = Σ_{e=1}^{NE} ∫_{L_e} (·) dx,   u(x ∈ [0, L_e]) = Σ_{i=1}^{4} Nᵢ uᵢᵉ,   w(x ∈ [0, L_e]) = Σ_{i=1}^{4} Nᵢ wᵢᵉ   (341)

where NE is the number of elements, L_e is the length of each element, the Nᵢ are the shape functions (in this case Hermite polynomials) and the uᵢᵉ are the nodal "displacements" (usually two end displacements and two end rotations). Then, following the usual procedure, we obtain

K = A_{e=1}^{NE} Kᵉ ;   K_G = A_{e=1}^{NE} Kᵉ_G   (342)

where

Kᵉᵢⱼ = ∫_{L_e} (d²Nᵢ/dx²) EI (d²Nⱼ/dx²) dx   and   Kᵉ_{G,ij} = ∫_{L_e} (dNᵢ/dx)(dNⱼ/dx) dx   (343)

and the matrix form of Eq. (340) is

K u − P K_G u = 0   (344)

where u is the global finite element displacement vector. Equation (344) is recognized as an eigenvalue problem, where P is the eigenvalue and u is the eigenvector (buckling mode). The matrix K is the global stiffness matrix, whereas K_G plays the role of a mass matrix and is usually named the geometric (or nonlinear, or initial-stress) stiffness matrix.

In general, the linearized buckling problem is written as

( K_L + λ K_G ) φ = 0   (345)

where φ is the buckling mode, λ is the load multiplier and K_G is the geometric (or nonlinear) stiffness contribution, such that

K_G = K − K_L   (346)

where K is the full (nonlinear) stiffness matrix and K_L is the linear one (for small strains and displacements). The problem may also be written in the form

K φ = ( (λ − 1)/λ ) K_L φ   (347)
On the other hand, for the second case, the differential equation of the free bending vibration of a column may be obtained as before, with q = −m ü being the inertia load, so the equilibrium is

d²/dx² ( EI d²u/dx² ) + m ∂²u/∂t² = 0   (348)

where m is the mass of the beam per unit length. In free vibration analysis the solution is of the type

u(x, t) = φ(x) sin(ωt + β)   (349)

so

∂²u/∂t² = −ω² φ(x) sin(ωt + β)   (350)

and

d⁴u/dx⁴ = ( d⁴φ(x)/dx⁴ ) sin(ωt + β)   (351)

i.e.

[ EI d⁴φ/dx⁴ − m ω² φ ] sin(ωt + β) = 0   for all t   (352)

Now the finite element discretization is applied to the mode φ(x) and to the corresponding weighting function,

φ(x ∈ [0, L_e]) = Σ_{i=1}^{4} Nᵢ φᵢᵉ   (353)

Following the same steps as before, we obtain (we leave the details to the reader)

( K − ω² M ) φ = 0   (354)

where

Kᵉᵢⱼ = ∫_{L_e} (d²Nᵢ/dx²) EI (d²Nⱼ/dx²) dx   and   Mᵉᵢⱼ = ∫_{L_e} Nᵢ m Nⱼ dx   (355)

Note that, once the matrices are formed, both problems are numerically identical; the "mass" matrix is the only difference between them. In continuum mechanics problems of solids, where the finite elements are bricks, the geometric stiffness matrix is built from the stresses.
5.3 Computation of modes and frequencies
We have seen that the conditions the modes have to fulfil in order to obtain the decoupled system of equations are

K φᵢ = λᵢ M φᵢ   with   φᵢᵀ M φⱼ = δᵢⱼ = { 1 if i = j ; 0 if i ≠ j }   (356)

If the system is small, the computation of the eigenvalues, and hence of the natural frequencies, can be carried out through the roots of the so-called characteristic polynomial. This polynomial is obtained from the following equation:

K φᵢ = λᵢ M φᵢ   ⇒   ( K − λᵢ M ) φᵢ = 0   (357)

If ( K − λᵢ M ) φᵢ = 0 with φᵢ ≠ 0 (nontrivial solution), then

p(λᵢ) = det( K − λᵢ M ) = 0   (358)

If the system is small, this is an efficient way of obtaining the eigenvalues λᵢ. The eigenvectors are then obtained by solving the following system of equations:

[ K − λᵢ M ] φᵢ = 0   (359)
Example 18 Consider a system such that the stiness and mass matrices are given
by (assuming consistent units, for example SI units)
K =

4 1 0
1 2 1
0 1 1

and M =

4 1 0
1 8 1
0 1 2

(360)
Compute the natural frequencies and modes of vibration. Verify the result.
Solution: In order to compute the eigen-frequencies, we establish the characteristic
polynomial
j (`) = det

4 1 0
1 2 1
0 1 1

4 1 0
1 8 1
0 1 2

= det

4 4` ` 1 0
` 1 2 8` ` 1
0 ` 1 1 2`

= 58`
3
+ 119`
2
60` + 3 = 0 (361)
The solution of this polynomial can be computed explicitly
2

`
1
= 0.056
`
2
= 0.728
`
3
= 1. 268

.
1
= 0.237
.
2
= 0.853
.
3
= 1. 126

)
1
= 0.0377 Hz
)
2
= 0.136 Hz
)
3
= 0.179 Hz

(362)
2
The solution of a cubic polinomio can be performed iteratively (say by bisection method) or
explicitly. Since both 1 and 1 are symmetric possitive semi-denite matrices, all eigenvalues are
real, as can be checked from the discriminant
= 18o/cd 4/
3
d + /
2
c
2
4oc
3
27o
2
/
2
_
_
_
0 three distinct real roots
= 0, thee real roots, two of them equal
< 0, two complex roots, one real root
where o, /, c, d are the coecients of the polinomio:
o`
3
+ /`
2
+ c` + d = 0
The explicit form uses Cardanos reduction by means of the change of variable
` = t
/
3o
to
t
3
+ jt + q = 0
with
j =
3oc /
2
3o
2
; q =
2/
3
9o/c + 27o
2
d
27o
3
which solution is due to Vite
t
I
= 2
_

j
3
cos
_
1
3
arccos
_
3q
2j
_

3
j
_
I
2
3
_
; with I = 1, 2, 3
so `
I
= t
I
/ (3o). This solution is well known in continuum mechanics because it also allows
to explicitly obtain the principal values of a tensor. Its geometric representation is the Haigh-
Wertergaard representation.
- 111-
5 Computational procedures for eigenvalue and eigenvector
analysis.
where we have also converted circular frequencies .
i
(assumed in radians per second)
into the usual, ordinary frequencies (expressed in cycles per second or Hz in SI
units).
The modes of the system can be computed solving a system of equations for each
frequency. For the third frequency we have

4 1 0
1 2 1
0 1 1

1. 268 1

4 1 0
1 8 1
0 1 2

c
1
c
2
c
3

0
0
0

1. 072 4 2. 268 1 0
2. 268 1 8. 144 8 2. 268 1
0 2. 268 1 1. 536 2

c
1
c
2
c
3

0
0
0

Since the modulus of the eigenvector is arbitrary, we can choose one arbitrary value;
for example take

c
1
= 1 (if no solution is possible, then

c
1
= 0). We place a hat
over the mode to emphasize that it is not normalized. The rst equation is
1. 072 4 2. 268 1

c
2
= 0

c
2
= 0.472 82 (363)
and substituting in the second equation we obtain
2. 268 1 + 8. 144 8 0.472 82 2. 268 1

c
3
= 0

c
3
= 0.697 91
The mass-weighted modulus is

T
3
M

3
=

1 0.473 0.698

4 1 0
1 8 1
0 1 2

1.0
0.473
0.698

= 5. 157
Hence, since we want this modulus to be one, the third mode is

3
=

3
q

T
3
M

3
=
1

5. 157

1.0
0.473
0.698

0.440 3
0.208 3
0.307 4

(364)
We leave to the reader to verify following the same procedure, that the remaining
modes are
= [
1
,
2
,
3
] =

0.0745 0.2433 0.4403


0.2665 0.1535 0.2082
0.3170 0.5827 0.3074

(365)
It is straightforward to verify that they are indeed the solution using

T
K =

0.074 0.243 0.440


0.266 0.153 0.208
0.317 0.583 0.307

4 1 0
1 2 1
0 1 1

0.074 0.243 0.440


0.266 0.153 0.208
0.317 0.583 0.307

0.056
0.727
1. 27

=
and we leave to the reader to verify that also
T
M = I.
- 112-
5.4 Reduction of the general eigenvalue problem to the standard
eigenvalue problem.
Example 19 Use Matlab (or similar) to compute the modes and natural frequencies
of the building of Example 16 (page 103), Figure 42. Plot the modes using matlab
(use 1 = 1 for the plots). Use subplot(1,4,i) to plot mode i
Solution:
The script for Matlab is
[modes,evals]=eigs(K,M), %compute eigenvalues and eigenvectors
ws = sqrt(evals), %circular freqs (rd/s)
fs = ws./(2*pi), %frequencies in Hz
subplot(1,4,1); %creates first figure
ylabel(height (floor)), hold on, %to place y-label
for i=1:4, %for each of the four modes...
subplot(1,4,i); %box for plotting mode i
plot([0;modes(:,5-i)],[0,1,2,3,4]); %plots mode i
xlabel([num2str(fs(5-i,5-i)), Hz]); %labels frequency
end,
The answer of Matlab is
modes =
-0.4923 -0.4512 0.2279 0.0460
0.6963 -0.4897 0.5076 0.1334
-0.1743 0.5863 0.7165 0.3356
0.0083 -0.0684 -0.2501 0.6578
evals =
264.8615 0 0 0
0 114.8894 0 0
0 0 46.3710 0
0 0 0 5.8781
ws =
16.2746 0 0 0
0 10.7186 0 0
0 0 6.8096 0
0 0 0 2.4245
fs =
2.5902 0 0 0
0 1.7059 0 0
0 0 1.0838 0
0 0 0 0.3859
And the gure with the modes is shown in Figure
5.4 Reduction of the general eigenvalue problem to the standard
eigenvalue problem.
In the previous section we have solved what is called "the general eigenvalue prob-
lem", that is to obtain the eigenvalue and eigenvector pairs (`
i
,
i
) such that
- 113-
5 Computational procedures for eigenvalue and eigenvector
analysis.
0 0.5 1
0
0.5
1
1.5
2
2.5
3
3.5
4
0.38587 Hz
h
e
i
g
h
t

(
f
l
o
o
r
)
1 0 1
0
0.5
1
1.5
2
2.5
3
3.5
4
1.0838 Hz
1 0 1
0
0.5
1
1.5
2
2.5
3
3.5
4
1.7059 Hz
1 0 1
0
0.5
1
1.5
2
2.5
3
3.5
4
2.5902 Hz
Figure 44: Modes of vibration of the structure of Example 16
K
i
= `
i
M
i
. However, many procedures are available for solving the ordinary
eigenvalue problem, that is the eigenvalue and eigenvector pairs (`
i
,
i
) such that

i
= `
i

i
(366)
These procedures may be used to solve the general eigenvalue problem with some
transformations. If the mass matrix is diagonal M = diaq ('
i
), then we can trans-
form K
i
= `
i
M
i
through an ecient and rather straightforward way
K
i
= `
i
M
1
2
M
1
2

i
M

1
2
KM

1
2
| {z }

K
M
1
2

i
| {z }

i
= `
i
M
1
2

i
| {z }

i
(367)
where
M

1
2
= diaq

1,
p
'
i

(368)
and

1
i)
= 1
i)
,
p
'
i
'
)
Once the standard eigenproblem

K

i
= `
i

i
has been solved, the eigenvalues re-
main unaltered and the modes are recovered by the following component scaling

i
= M

1
2

c
i

I
=

c
i

'
I
with / = 1, ..., (369)
- 114-
5.4 Reduction of the general eigenvalue problem to the standard
eigenvalue problem.
to yield the mass matrix normalized modes of the general problem

i
=

i
q

T
i
M

i
=
M

1
2

i
q

T
i

i
= M

1
2

(c
i
)
I
=

c
i

'
I
with / = 1, ...,
(370)
On the other hand, if the mass matrix is not lumped, a Cholesky factorization (for
possitive-denite matrices), or a similar decomposition, may be applied to the mass
matrix. The Cholesky decomposition is such that
M = LL
T
(371)
where L is a lower triangular matrix and L
T
is its transpose. The nice feature of this
decomposition for nite element usage is that the decomposition may be performed
over the storage of M without destroying the skyline format typical of the nite
element problems. This type of format brings huge savings for large problems in
terms of high speed memory storage and computation time and, hence, it is crucial
to preserve these type of storages. Modern commercial FE codes have fully sparse
storage (only nonzero entries are stored). In such cases, some zero entries become
nonzero during the decomposition. However, even in this case a so-called "symbolic
Cholesky decomposition" can be performed in advance (before actually doing any
operation over M) in order to account for the new structure.
Using the Cholesky decomposition the generalized eigenvalue problem may be
re-written in the format
K
i
= `
i
LL
T

i
L
1
KL
T
| {z }

K
L
T

i
| {z }

i
= `
i
L
T

i
| {z }

i
= `
i

i
(372)
Cholesky, LU and LDL
T
(Crout) factorizations are basic procedures for the solu-
tion of a system of equations and are available in any F.E. program. The procedure
is as follows:
Ax = b =
How we obtain
x = A
1
b?
LL
T
x
|{z}
y
= b

Ly = b (Forward reduction)
L
T
x = y (Back-substitution)
(373)
Note that
x = A
1
b x = L
T
L
1
b
| {z }
y
Once the Cholesky factorization of the mass matrix is obtained,

K is implicitly
known. Usually there is no need to obtain the explicit value of K. As we will see
later, all we need is to obtain the eect on a vector y, which can be done without
actually computing

K, simply storing K and L:
z =

Ky = L
1
KL
T
y
| {z }
x

"Backsubstitution": L
T
x = y
Multiplication: b = Kx
"Forward reduction": Lz = b
(374)
- 115-
5 Computational procedures for eigenvalue and eigenvector
analysis.
As before, once the eigenvectors of the standard problem are obtained, the modes
of the generalized problem may be recovered as

i
= L
T

i
(375)
which again is performed as a backsubstitution operation.
Example 20 Compute the eigenvalues (squared natural frequencies) and eigenvec-
tors (modes of vibration) of the structure of Examples 11 and 15. Take for example
(consistent units, for example SI units)
c = 12
11
:1
3
= 100 (376)
Solution: The stiness and mass matrices are given in Examples 11 and 15 respec-
tively
K = 12
11
1
3

4 2 0
2 4 2
0 2 2

and M =

:
:
:

Hence we can reduce the problem to standard problem where

K = M
12
KM
12
= 12
11
:1
3
| {z }

4 2 0
2 4 2
0 2 2

which characteristic polynomial is


det


K `I

= det

4 ` 2 0
2 4 ` 2
0 2 2 `

= 0
i.e.
8
3
24
2
` + 10`
2
`
3
= 0
We leave to the reader to check that the following are the eigenvalues of

K for the
case = 100
`
1
= 649.3959; `
2
= 310.9916; `
3
= 39.6125
and in general
`
1
= 6.4940; `
2
= 3.1099; `
3
= 0.3961
and the eigenvectors are, regardless of the value of
= [
1
,
2
,
3
] =

0.5910 0.7370 0.3280


0.7370 0.3280 0.5910
0.3280 0.5910 0.7370

These eigenvectors should be scaled to be mass-normalized.


The natural circular frequencies and periods of vibration are
.
1
=
p
`
1
= 2.5483

, .
2
= 1.7635

, .
3
= 0.6294

- 116-
5.4 Reduction of the general eigenvalue problem to the standard
eigenvalue problem.
T
1
=
2
.
1
=
2.4656

; T
2
=
3.5629

, T
3
=
9.9831

where

= 10 for the particular case.
Example 21 Compute the eigenvalues and eigenvectors of Example 18 using a
lumped mass matrix by the row sum method
Solution: The lumped mass matrix of Example 18 is
M =

5
10
3

M
1
2
= diaq

5,

10,

(377)
and

K =

4
5

1

50
0

50
1
5

1

30
0
1

30
1
3

0.8 0.141 0
0.141 0.2 0.183
0 0.183 0.333

(378)
The eigenvalues are given by the roots of the characteristic polynomial
j (`) = det

4
5
`
1

50
0

50
1
5
`
1

30
0
1

30
1
3
`

= `
3
+
4
3
`
2

11
25
` +
1
50
= 0 (379)
which are (we can use Vites procedure)
`
1
= 0.0539; `
2
= 0.4443; `
3
= 0.8352 (380)
The result can be compared to (362). It is seen that the rst eigenvalue is very
similar in both cases, whereas the approximation for the higher eigenvalues is worse.
This is usually the case for lumped masses. The eigenvectors of the standard problem
can be obtained as before

0.1567 0.2022 0.9667


0.8268 0.5085 0.2404
0.5402 0.8370 0.0875

(381)
We have already normalized these vectors respect to their moduli. The modes of the
generalized problem can be recovered as
= M

1
2

=

0.070 0.090 0.432


0.261 0.161 0.076
0.319 0.483 0.051

(382)
We see that again the best approximation respect to the unlumped problem is obtained
for the rst mode. One technique frequently employed to quantify a mode compar-
ison is the MAC (Modal Assurance Criterion) measures. This criterion is used to
- 117-
5 Computational procedures for eigenvalue and eigenvector
analysis.
compare experimental versus numerical modes. We can employ it here to compare
the modes obtained from both methods. The usual MAC measure is dened as
'C
i)
=

i
W

i
W
i
q

)
W

)
(383)
where
i
are the reference modes (i.e. experimental modes) and

)
are the modes to
be compared (i.e. the analytic modes). The matrix W is a weighting matrix (usually
the mass matrix, although sometimes the stiness matrix is also employed). For our
case we use W = M to compare the modes from both methods. One set is already
normalized to the mass matrix (

), whereas the other set is normalized to a dierent


mass matrix, so we have re-normalized it (set )
'C =
T
M

0.071 0.118 0.508


0.265 0.211 0.089
0.324 0.634 0.060

4 1 0
1 8 1
0 1 2

0.075 0.243 0.440


0.267 0.154 0.208
0.317 0.583 0.307

1 0.01 0.001
0.03 0.96 0.26
0.12 0.43 0.89

(384)
If the entry in the MAC matrix has an absolute value close to unity, the modes show
a good agreement, whereas absolute values below, say 0.8, show a bad agreement. It
is seen that both sets show a good agreement, being for the rst one an excellent
match.
Example 22 Compute the eigenvalues and eigenvectors of Example 18 by means
of a Cholesky factorization and a standard eigenvalue method.
Solution: The Cholesky factorization of the mass matrix is
M =

4 1 0
1 8 1
0 1 2

2 0 0
1
2

31
2
0
0
2

31

58

31

2
1
2
0
0

31
2
2

31
0 0

58

31

= LL
T
(385)
Hence (in actual FE computations the inverse and matrix multiplications are never
done)

K = L
1
KL
T
=

2 0 0
1
2

31
2
0
0
2

31

58

31

4 1 0
1 2 1
0 1 1

2
1
2
0
0

31
2
2

31
0 0

58

31

1
=

1
2

31
2

31
11
31

42

58
899
2

2
42

58
899
1253
1798

1.0 0.359 21 0.09433


0.359 21 0.354 84 0.355 80
0.09433 0.355 80 0.696 89

(386)
- 118-
5.5 Static condensation
We leave to the reader to show that the eigenfrequencies are those given in Eq. (362),
and that the eigenvectors are those given in Eq. (365) once the proper recovery
(
i
= L
T

i
) and mass scaling are performed.
Example 23 Compute the eigenvalues and eigenvectors of Example 16 by means of
a MATLAB standard eigenvalue subroutine and verify that the results are coincident.
Solution: We note that the mass matrix is diagonal, so the procedure y very simple,
see Eq. (367). We leave the exercise to the reader.
5.5 Static condensation
When the mass matrix is not denite-possitive, i.e., when there are zeroes in the
diagonal terms (massless degrees of freedom), the Cholesky factorization cannot be
performed (nor the inversion). In these cases, those degrees of freedom do not add
in themselves useful information to the dynamics equations. A usual procedure to
eliminate such degrees of freedom is the "static condensation" of those degrees of
freedom. Assume the following block-partitioned structure of the undamped dy-
namics equation
M u +Ku = f

M
nn
0
0 0

u
n
u
c

K
nn
K
nc
K
cn
K
cc

u
n
u
c

f
n
f
c

(387)
and of the statics one

K
nn
K
nc
K
cn
K
cc

u
n
u
c

f
n
f
c

(388)
where subindex : implies degrees of freedom to preserve (usually called masters
DOF) and subindex : implies degrees of freedom to eliminate (usually called slaves
DOF). Symmetry of K implies that K
nc
= K
T
cn
and that both K
nn
and K
cc
are
symmetric. The second matrix equation on both systems is
K
cn
u
n
+K
cc
u
c
= f
c
u
c
= K
1
cc
f
c
K
1
cc
K
cn
u
n
(389)
The rst equation of (387) may be written as
M
nn
u
n
+K
nn
u
n
+K
nc
u
c
= f
n
(390)
so substituting u
c
from (389) into (390) and arranging terms
M
nn
u
n
+

K
nn
K
nc
K
1
cc
K
cn

u
n
= f
n
K
nc
K
1
cc
f
c
(391)
which may be written as

M u
n
+

Ku
n
=

f (392)
and for the static case

Ku
n
=

f (393)
where

M = M
nn
,

K = K
nn
K
nc
K
1
cc
K
cn
and

f = f
n
K
nc
K
1
cc
f
c
(394)
- 119-
5 Computational procedures for eigenvalue and eigenvector
analysis.
In this case

M is a possitive-denite matrix which admits the Cholesky decomposi-
tion. The generalized eigenvalue problem is

K
n
= `

M
n
(395)
as it is straightforward to verify. Once the u
n
or
n
degrees of freedom are com-
puted, the eliminated ones may be readily recovered using Eq. (389), i.e. since in
this case of free vibrations there are no loads

c
= K
1
cc
K
cn

n
= S
n
(396)
where S is the static condensation matrix. In the absence of loads at the slave DOFs
we also have
u
c
= Su
n
(397)
Hence we can write
u = Tu
n
and = T
n
where T =

I
S

(398)
where T is the transformation matrix, mapping matrix or reduction matrix. We
note that
T
T
KT=

I, K
nc
K
1
cc

K
nn
K
nc
K
cn
K
cc

I
K
1
cc
K
cn

=

I, K
nc
K
1
cc

K
nn
K
nc
K
1
cc
K
cn
K
cn
K
cc
K
1
cc
K
cn
(= 0)

= K
nn
K
nc
K
1
cc
K
cn
=

K (399)
and

f = T
T
f
n
(400)
It is important to note that partitions (387) and (388) and inversion of matri-
ces are actually never performed explicitly. In fact, operation (389) is basically a
Gauss condensation procedure and, hence, the operations may be performed just
selecting those equations to be eliminated and performing a Gauss elimination on
them as usually done in order to solve the system of equations. The name of static
condensation corresponds to the fact that it is the same procedure as for statics.
It is instructive to note that the static condensation of the slave DOFs is the
same as the Gauss reduction of the corresponding equations. As in Gauss operations,
take Equation (388) and multiply the second row of the enlarged matrix by K
1
cc

K
nn
K
nc
K
1
cc
K
cn
I

f
n
K
1
cc
f
c

(401)
then multiply the second row by (K
nc
) and add the result to the rst row

K
nn
K
nc
K
1
cc
K
cn
0
K
1
cc
K
cn
I

f
n
K
nc
K
1
cc
f
c
K
1
cc
f
c

(402)
- 120-
5.6 Model order reduction techniques: the Guyan reduction
i.e. the reduced system is


K 0
S I

u
n
u
c


f
K
1
cc
f
c

(403)
which is solved by forward substitution yielding the same result as with static con-
densation

Ku
n
=

f and u
c
= K
1
cc
f
c
+Su
n
(404)
We note that in static condensation operations the matrices are never rearranged
in a FE code. Matrix storage is sparse and the is an index vector which locates the
equations. These equations are reduced one by one for all and each of the slaves
DOF with the aid of that index vector. The partitioned format given in this section
is just an "ideal" partition in order to make math more understandable.
5.6 Model order reduction techniques: the Guyan reduction
The previous static condensation method is useful when there are massless degrees of
freedom. Furthermore, an additional advantage is obtained: the number of "active"
degrees of freedom is reduced, so the computational cost in the eigenvalue analysis
is also reduced.
There are methods to reduce the number of degrees of freedom of the model in
order to increase the speed in obtaining a solution, even if they are not massless.
These model order reduction techniques are specially important in large models
(of thousands of degrees of freedom). They are also important when comparing
experimental measures with numerical ones.
Many model order reductions are obtained by common inspection of the model
and physical understanding of the model. These are the best methods (if the un-
derstanding is correct) and the cost in terms of accuracy is usually not large for
the rst, relevant modes. One simple example would be to condense the horizontal
and rotational degrees of freedom in a structure in which only vertical modes are of
interest. Another "intuitive" method would be to condense those degrees of freedom
that have "little" mass when compared to other ones (of course one would have to
"dene" the word "little") if we are interested only in global, lower modes. Hence,
physical insight is crucial to perform ecient, accurate model order reductions. Of
course there are also many quantitative "measures" over degrees of freedom to help
in the decision of selecting the degrees of freedom to condense-out. However, they
have to be used with great care and having in mind the objective of the analysis.
There are also many procedures to actually perform the condensation of the
selected degrees of freedom. We briey review one of them, the Guyan reduction,
which is the most known and used one because of its simplicity and because it was
probably the rst one
The Guyan reduction is based on the static condensation (and hence it is fre-
quently called Guyan static condensation) and is used (as most reduction techniques)
when there are no forces applied on those degrees of freedom to be eliminated. Equa-
tion (389) can be written for f
c
= 0 as
u
c
= K
1
cc
K
cn
u
n
(405)
- 121-
5 Computational procedures for eigenvalue and eigenvector
analysis.
i.e.
u =

u
n
u
c

I
K
1
cc
K
cn

u
n
= Tu
n
(406)
Then, the elastic energy, using Equation (399) can be written as
U =
1
2
u
T
Ku =
1
2
u
T
n
T
T
KTu
n
=
1
2
u
T
n

Ku
n
(407)
Hence, Guyan approximated the kinetic energy in a similar way, assuming u = T u
j
as
K =
1
2
u
T
M u '
1
2
u
T
n
T
T
MT u
n
=
1
2
u
T
n

M u
n
(408)
where

M = T
T
MT =

I, K
nc
K
1
cc

M
nn
M
nc
M
cn
M
cc

I
K
1
cc
K
cn

=

I, K
nc
K
1
cc

M
nn
M
nc
K
1
cc
K
cn
M
cn
M
cc
K
1
cc
K
cn

= M
nn
M
nc
K
1
cc
K
cn
K
nc
K
1
cc
M
cn
+K
nc
K
1
cc
M
cc
K
1
cc
K
cn
(409)
So the generalized eigenvalue problem is

i
= `
i

M

i
(410)
and the modes may be recovered by

i
= T

i
(411)
This reduction may be used both in the transient time integration procedures and in
the eigenvalue analysis. Again, it is noted that matrix inversions are actually never
computed in a nite element program because the operations are equivalent to those
employed solving a system of equations. It is also noted that Guyan reduction is
an approximate procedure because the kinetic energy is only approximated. Hence,
both the response and the eigenvalue-eigenvector pairs are approximate. The degree
of accuracy of this approximation depends on the selected degrees of freedom and
on the frequencies to approximate. It is noted that the reduced mass and stiness
matrices do not preserve the sparsity of the original ones. Also, the computation of
the reduced mass matrix is a costly operation. Hence, the procedure is economical
if : = dim(u
j
) << dim(u) = . If j is the number of eigenvalues/eigenvectors to
be obtained or considered in the analysis, a recommendation is to select : 10j
master degrees of freedom using the following guidelines
Select degrees of freedom in the main direction of the analysis (for example
horizontal for horizontal earthquakes).
Select degrees of freedom with the highest '
ii
,1
ii
ratios
Select degrees of freedom that are not too close to each other, spanning the
whole structure (or the relevant part for the given analysis)
- 122-
5.6 Model order reduction techniques: the Guyan reduction
Use physical insight!!
We nally note that Guyan reduction may be sought as a Rayleigh-Ritz subspace
projection given by T. We will work on Rayleigh-Ritz subspaces later.
Example 24 Apply Guyan reduction to a degree of freedom of Example 18.
Solution. We select the degree of freedom with less mass, which is the third DOF
K =

4 1
1 2

0
1

0 1

[1]

K
nn
K
nc
K
cn
K
cc

(412)
and
M =

4 1
1 8

0
1

0 1

[2]

M
nn
M
nc
M
cn
M
cc

(413)
Then (here for clarity we do compute the operations in matrix notation rather than
the actual way a FE program does)

K = K
nn
K
nc
K
1
cc
K
cn
=

4 1
1 2

0
1

[1]
1

0 1

=

4 1
1 1

(414)

M = M
nn
M
nc
K
1
cc
K
cn
K
nc
K
1
cc
M
cn
+K
nc
K
1
cc
M
cc
K
1
cc
K
cn
=

4 1
1 8

0
1

[1]
1

0 1

0
1

[1]
1

0 1

+
+

0
1

[1]
1
[2] [1]
1

0 1

=

4 1
1 12

(415)
The eigenvalues of the reduced problem are
det

4 1
1 1

4 1
1 12

= 0

`
1
= 0.0585
`
2
= 1.090
(416)
which are to be compared to the lowest eigenvalues of the complete problem (`
1
=
0.056 and `
2
= 0.728). As it is seen, the rst eigenvalue is approximated rather
well, whereas the second one is not so well approximated. We leave to the reader
to verify that a better approximation would be obtained if the rst degree of freedom
were condensed instead. This DOF has the highest 1
ii
,'
ii
ratio.
Example 25 For the structure of example 16, use the Guyan reduction technique
to reduce the system to a 2DOF system. Eliminate the DOF of oors 1 and 3.
Compute the new modes and frequencies (recover the DOF from the transformation
matrix T).
- 123-
5 Computational procedures for eigenvalue and eigenvector
analysis.
Solution: In order to be compact in notation we reorder the degrees of freedom (the
procedure is the same, but this way we keep this notes "cleaner")
K =

360 120 0 0
120 168 48 0
0 48 72 24
0 0 24 24

1
2
3
4

360 0 120 0
0 72 48 24
120 48 168 0
0 24 0 24

1
3
2
4
(417)
M =

2 0 0 0
0 1 0 0
0 0 1 0
0 0 0 2

1
2
3
4

2 0 0 0
0 1 0 0
0 0 1 0
0 0 0 2

1
3
2
4
(418)
i.e.
K =

360 0 120 0
0 72 48 24
120 48 168 0
0 24 0 24

360 0
0 72

120 0
48 24

120 48
0 24

168 0
0 24

K
cc
K
cj
K
jc
K
jj

(419)
M =

2 0 0 0
0 1 0 0
0 0 1 0
0 0 0 2

M
cc
M
cj
M
jc
M
jj

(420)

K = K
jj
K
jc
K
1
cc
K
cj
=

168 0
0 24

120 48
0 24

360 0
0 72

120 0
48 24

96 16
16 16

(421)

M = M
jj
M
jc
K
1
cc
K
cj
K
jc
K
1
cc
M
cj
+K
jc
K
1
cc
M
cc
K
1
cc
K
cj
(422)

M =

1 0
0 2

0 0
0 0

360 0
0 72

120 0
48 24

120 48
0 24

360 0
0 72

0 0
0 0

120 48
0 24

360 0
0 72

2 0
0 1

360 0
0 72

120 0
48 24

5
3
2
9
2
9
19
9

(423)
Using MATLAB or similar to solve the problem (which can also be easily done by
hand), the eigenvalues and modes are
- 124-
5.7 Inclusion of damping matrices
modesb =
-0.7686 -0.1335
0.1987 -0.6640
evalsb =
62.2272
5.9294
It is clearly seen that the rst eigenvalue is almost the same, whereas in the
second one the error is larger. The matrix T is
T =

1 0
0 1

360 0
0 72

120 0
48 24

1 0
0 1
1
3
0
2
3
1
3

(424)
so
= T

=

1 0
0 1
1
3
0
2
3
1
3

0.7686 0.1335
0.1987 0.6640

0.768 6 0.133 5
0.198 7 0.664
0.256 2 0.044 5
0.446 17 0.310 33

1
3
2
4
(425)
which in order are
=

0.768 6 0.133 5
0.256 2 0.044 5
0.198 7 0.664
0.446 17 0.310 33

1
2
3
4
(426)
It is seen that the error in the eigenvectors is larger than that of the eigenvalues.
We will see later that this is always the case.
5.7 Inclusion of damping matrices
Up to now we have deal with the undamped problem. For modal extraction, it is
the usual assumption for most problems, since the structural damping is usually
small (say 1% of the critical value). For the slightly damped problem, the modes are
assumed to be the same as those of the undamped problem, whereas the frequencies
are modied to damped ones (which in this case are also close to the undamped ones).
We have seen in Section 5.1 that once eigenvalues and eigenvectors are computed,
the response of a linear structure may be computed by a mode superposition analysis
in which the displacements x are computed as
u(X, t) = =
.
X
i=1

i
(X)
i
(t) (427)
- 125-
5 Computational procedures for eigenvalue and eigenvector
analysis.
where the modes
i
are computed through an eigenvalue/eigenvector analysis and
the modal coordinates
i
are computed solving the ordinary linear dierential equa-
tion

i
(t) +.
2
i

i
(t) = /
i
(t) (428)
An intuitive, eective, "engineering style" method to introduce slight damping all
over the structure is to introduce a modal damping ratio
i
(typically 0.5 2%) in
this equation, such that it becomes similar to the dierential equation of a damped
single degree of freedom problem

i
(t) + 2
i
.
i

i
(t) +.
2
i

i
(t) = /
i
(t) (429)
where
i
is the portion of critical damping of the SDOF problem

i
=
c
c
cv
=
c
2.
i
(430)
We note, however, that
i
is the value directly prescribed, so c is never needed. The
solution of this equation may be also obtained from the Duhamel integral

i
=
1
.
o
i
Z
t
0
/ (t) exp[
i
.
i
(t t)] sin.
o
i
(t t) dt+
+ exp(
i
.
i
t)

c
i
sin.
o
i
t +,
i
cos .
o
i
t

(431)
where c
i
, ,
i
are constants to be determined from the initial conditions and .
o
i
is de
damped circular frequency which value is
.
o
i
= .
i
q
1
2
i
(432)
This procedure is equivalent to prescribing

T
C = diaq(2
i
.
i
) C =
T
diaq(2
i
.
i
)
1
(433)
A question is generally raised on the physical meaning of the entries of C for this
case. In fact, it can be easily seen (you can verify it computing C for Example
18) that C is in general a fully populated matrix, raising two concerns. First what
is the actual physical meaning of having a damping term between two degrees of
freedom not connected through stiness and mass terms, and second, since it is fully
populated, the storage in typical, large nite element models is too high, so C is
used in this form only with uncoupled equations.
One procedure to establish a structural damping matrix without resorting to
that procedure is to use what is called Rayleigh damping or proportional damping.
This type of damping assumes the form
C = cK +,M (434)
where c and , are constants that we determine below. This type of damping has the
same matrix skyline (nonzero terms) as the stiness and consistent mass matrices,
- 126-
5.7 Inclusion of damping matrices
and hence, the same storage format can be used. However, since they are propor-
tional to the stiness and mass matrices, only the two scalars c and , needs to be
stored for this type of damping. These parameters are determined from the desired
damping at two frequencies/modes. These two frequencies should be representative
of the range of frequencies and modes of interest in our problem. The computation
of the parameters is shown in the following example. We note that Rayleigh damp-
ing is not the only possible proportional damping. Caughey damping is of the more
general form
C = M
j1
X
I=0
a
I

M
1
K

I
(435)
which includes the case of Rayleigh damping. In this equation j is the number of
sampling frequencies, and the a
I
parameters are obtained solving a set of j equations
of the type

i
=
1
2
j1
X
I=0

a
I
.
2I1
i

(436)
as it is easy to show using Eq. (433)we leave this to the reader. However, in
practice Rayleigh damping is mostly used because even though Caughey damping
would allow more damping accuracy still preserving the possibility of uncoupling the
equations, the resulting matrix is fully populated instead of being sparse, something
we strongly try to avoid in nite elements.
Example 26 For Example 18, compute the c and , parameters for a 1% of damping
for the lowest frequency mode and a 2% damping at the highest frequency mode.
Compute the eective damping of the middle mode and the damped frequency for
that mode. Plot the eective damping for the range of frequencies . [0.1, 3] rad, s
Solution: Using Eqs. (433) and (434)

T
C =
T
(cK +,M) = c+,I = c
2
+,I = diaq(2.
i

i
) (437)
so we obtain the system of equations
c.
2
i
+, = 2.
i

i


0.2368
2
c +, = 2 0.2368 0.01
1.1261
2
c +, = 2 1.1261 0.02

c = 0.0333
, = 0.00287
(438)
The damping ratio at the middle mode is

i
=
c.
2
i
+,
2.
i

2
=
0.0333 0.853
2
+ 0.00287
2 0.853
= 1.6% (439)
and the damped frequency is
.
o
= .
p
1
2
= 0.853
p
1 0.016
2
= 0.852 9
The plot shown in Figure 45 is obtained from
=
0.0333.
2
+ 0.00287
2.
(440)
Both contributions to Rayleigh damping are also shown.
- 127-
5 Computational procedures for eigenvalue and eigenvector
analysis.
0 0.5 1 1.5 2
0
0.005
0.01
0.015
0.02
0.025
0.03
0.035
Circular frequency [Hz]
D
a
m
p
i
n
g

r
a
t
i
o

Rayleigh damping


M + K
K
M
Figure 45: Rayleigh damping and contribution from both terms. Note that damping
of high frequencies may be very high.
Example 27 For the structure of Example 16, which modes have been calculated in
Example 19, we wish to model a structural damping of a 2% of the critical value.
Obtain the Rayleigh damping coecients.
Solution: The two equations for the Rayleigh damping, taking the larger and shorter
frequencies are
c.
2
i
+, = 2.
i

c 16.17
2
+, = 2 16.27 0.02
c 2.425
2
+, = 2 2.425 0.02
(441)
Then, the solution is:

c = 2. 166 8 10
3
, , = 8. 425 8 10
2

5.8 Complex eigenvalue problem: complex modes


Structural damping is usually distributed along all the dierent degrees of freedom of
the structure. However, there is a possibility that the structure has damping devices
located a some (usually few) degrees of freedom. In this case, the structure of the
previous damping matrix is not valid any more, and it is in general not possible
to obtain a decoupled system of equations. However, this is an important case,
since damping devices, both in engineering mechanics and in structural mechanics
(for example isolating devices in seismic engineering) are not uncommon and must
be modeled directly (even specifying the damping constant c). Furthermore, the
interaction with uids as in aeroelasticity imply always a nonproportional damping
matrix.ln c-fddf
If dierent parts of the structure have dierent damping (for example the soil
and the structure, or a metallic and a concrete part) one "engineering" (approximate
- 128-
5.8 Complex eigenvalue problem: complex modes
but handy) approach is to set dierent c and , parameters in Rayleigh damping for
dierent parts of the structure. Of course the equations will not decouple, but we
can take the "engineering approach" of neglecting the coupling terms (assuming the
are not too large).
If nonproportional damping is due to the presence of located damping devices
(or similar situation), then the previous approach is not useful. In this case, the
damping matrix may be split into two components, the proportional one (which
may or may not be zero) and the nonproportional one. The proportional one may
be computed as before. The nonproportional one is built as the stiness and mass
matrix: node by node (element by element)
In general, the dynamics equation is of the form
M u +C u +Ku = f (442)
We note that before we actually solved the eigenvalues and eigenvectors of equation
M u +Ku = f (443)
The eigenvalues and eigenvectors of Eq. (443) are both real because both K and
M are positive denite. However, the eigenvalues and eigenvectors of Eq. (442) are
only real when M
1
C and M
1
K commute (or C and K are M-commutative)
i.e. CM
1
K = KM
1
C or equivalently when they have the same characteristic
spaces, as it is the case of Rayleigh and Caughey damping. Actually this observation
is due to Caughey and it prompted him to propose his type of damping. We leave as
an exercise to the reader to check that both Rayleigh and Caughey dampings verify
this condition.
Of course, in the presence of damping devices, the Caughey condition will not be
fullled and the modes for problems (443) and (442) will not be the same. For this
case, the procedure diers substantially. For free vibration, the equation of motion
(442) is
M u +C u +Ku = 0 (444)
which is also solved using separation of variables
u(X, t) = (X) c
vt
(445)
so
u = cc
vt
and u = c
2
c
vt
(446)
and the eigenvalue problem results in

c
2
M +cC +K

= 0 (447)
which is sometimes referred to as nonlinear eigenvalue problem, complex eigenvalue
problem, nonclassical damping eigenvalue problem or quadratic eigenvalue problem
in the original displacements space. The value of c is referred to as complex eigen-
value, quadratic eigenvalue or nonclassical damping eigenvalue, and is the complex
eigenvector (in the displacements space) or nonclassical mode.
- 129-
5 Computational procedures for eigenvalue and eigenvector
analysis.
There are several formulations that can be used to obtain the non-classical modes.
We will see two of them. In the rst formulation the problem is converted through
a standard linear eigenvalue form, whereas in the second case, more typical of struc-
tural mechanics, the problem is analyzed using a general linear complex eigenvalue
form.
5.8.1 Formulation in nonsymmetric standard form
The rst step to perform an eigenvalue/eigenvector analysis of Eq. (442) is to write
this Equation using the following vectors which usually receive the name of state
vector and state vector velocity (note that once the displacement and velocity are
known, the acceleration is given by equilibrium, hence the name)
U :=

u
v

,

U :=

u
v

, where v := u (448)
Then, Eq. (442) can be written as the following system of dierential equations

u = v
M v +Cv +Ku = f
(449)
which may be written in matrix form as

u
v

0 I
M
1
K M
1
C

| {z }
A

u
v

0
M
1
f (t)

| {z }
B
(450)
i.e.

U = AU +B(t) (451)
The solution of this dierential equation may be performed using also "modal su-
perposition", i.e. using a decomposition of the type
U (X,t) = (X) (t) (452)
where is the mapping matrix, which contains eigenvectors of dimension 2 of
the modied damped model. These modes are the complex eigenvectors of the new
eigenproblem
A = with = diaq (`
i
) (453)
Vector contains the modal multipliers and `
i
are the 2 complex eigenvalues of
the new problem. As it is well known, complex modes and complex eigenvalues of
this type of problems appear as conjugate pairs. Eq. (451) is written now in the
form

= A +B(t) (454)
Since A is nonsymmetric, a new set of vectors is introduced to pre-multiply the
previous equation


T
A =
T
B(t) (455)
- 130-
5.8 Complex eigenvalue problem: complex modes
such that they fulll
T
i
A
)
= `
)

T
i

)
, where
i
are the columns of (the eigen-
vectors of A
T
) and
i
are those of .

T
A =
T
(456)
or alternatively eliminating we obtain
T
i
A = `
)

T
i
, which in matrix forma reads
A
T
= (457)
In order to simplify Eq.(455) we proceed to the usual normalization. Using

T
i
(A
)
) = `
)

T
i

)
(458)

T
i
A

)
= `
i

T
i

)
(459)
we obtain subtracting both equations
0 = (`
)
`
i
)
T
i

)
if `
i
6= `
)
then
T
i

)
= 0 (460)
so
T
is a diagonal matrix (
i
and
)
are self-orthogonal). We then can re-dene
the eigenvectors to obtain

= I; for example

i

1

T
i

i

i
(461)
and Eq. (455) takes now the form

=
T
B(t) (462)
which is now a system of uncoupled dierential equations

i
`
i

i
= /
i
(t) with /
i
=
T
i
B) (t) (463)
where without loss of generality we have done the typical split of the load into a load
vector B and a time function or multiplier ) (t). The solution of this dierential
equation is Duhamels integral

i
(t) = exp(/)
Z
t
0
exp(`
i
t) /
i
) (t) dt +co::t

(464)
with / =
Z
t
0
(`
i
) dt = `
i
t
If
i
(0) = 0 then

i
(t) = /
i
Z
t
0
c
A
.
(tt)
) (t) dt (465)
Assume ) (t) to be a superposition of ' trigonometric functions (or apply Fourier
transform)
) (t) =
A
X
n=1
a
n
sin(c
n
t +,
n
) (466)
- 131-
5 Computational procedures for eigenvalue and eigenvector
analysis.
Then for each term, the solution is

in
(t) = /
i
Z
t
0
c
(tt)A
.
sin(c
n
t +,
n
) dt
=
/
i
`
2
i
+c
2
n
[c
n
(cos ,
n
) c
tA
.
+`
i
(sin,
n
) c
tA
.
c
n
cos (,
n
+tc
n
) `
i
sin(,
n
+tc
n
)] (467)
and

i
(t) =
A
X
n=1

in
(t) (468)
One is often interested on the permanent part of the response. In such case, as
usual, we can decompose the complex eigenvalue into its real and imaginary parts
`
i
= o
i
+,.
o
(469)
so
c
A
.
t
= c
o
.
t
c
).
u.
t
(470)
and

in
(t) =
a
n
/
i
`
2
i
+c
2
n
[c
n
(cos ,
n
) c
o
.
t
c
).
u.
t
+`
i
(sin,
n
) c
o
.
t
c
).
u.
t
| {z }
transient
c
n
cos (,
n
+tc
n
) `
i
sin(,
n
+tc
n
)
| {z }
permanent
] (471)
For bounded responses with o < 0 and ) (0) = 0

in
:=

in
(t ) =
a
n
/
i
`
2
i
+c
2
n
[c
n
cos (tc
n
+,
n
) +`
i
sin(tc
n
+,
n
)]
=

in
sin(tc
n
+,
n
) +

1
in
cos (tc
n
+,
n
) (472)
and

t
in
:= c
o
.
t
c
).
u.
t

1
in
cos ,
n
+

in
sin,
n

(473)
such that

in
(t) =

t
in
(t) +

in
(t) (474)
with the amplitudes

in
=
/
i
`
i
`
2
i
+c
2
n
a
n
and

1
in
=
/
i
c
n
`
2
i
+c
2
n
a
n
(475)
Then, the computations may be performed in terms of those amplitudes

i
(t) =
A
X
n=1

in
(t) (476)
- 132-
5.8 Complex eigenvalue problem: complex modes
so
=
A
X
n=1

n
=
A
X
n=1


A
n
sin(tc
n
+,
n
) +

B
n
cos (tc
n
+,
n
)

(477)
where

A
n
and

B
n
are vectors with

in
of each mode. Thus
U =
A
X
n=1

A
n
sin(tc
n
+,
n
) +

B
n
cos (tc
n
+,
n
)

(478)
Let us consider the following partition

A
n
=
"

A
n

A
n

1
#
(479)
Then
u =
A
X
n=1
[

A
n

l
| {z }
:= a
n
sin(tc
n
+,
n
) +

B
n

l
| {z }
:=

b
n
cos (tc
n
+,
n
) ] (480)
5.8.2 Formulation in general symmetric form
The general form is more often employed in structural mechanics than the general
form because the resulting matrices are symmetric. Furthermore, the layout is
similar to the classicaly damped case, so many of the methods used in Section 6
may be used without relevant changes.
There are often used two versions of the general symmetric form. The rst one
is obtained if we add to the system of equations of equilibrium the equations given
by the identity
M u M u = 0 (481)
so the new following system of equations is formed

K 0
0 M

| {z }
K

u
u

| {z }
U
+

C M
M 0

| {z }
M

u
u

| {z }

U
=

f
0

| {z }
F
(482)
i.e.
K
|{z}
2.2.
U
|{z}
2.
+ M
|{z}
2.2.

U
|{z}
2.
= F
|{z}
2.
(483)
Alternatively, it can be written using the following identity
Ku Ku = 0 (484)
so

0 K
K C

| {z }

u
u

| {z }
U
+

K 0
0 M

| {z }

u
u

| {z }

U
=

0
f

| {z }

F
(485)
- 133-
5 Computational procedures for eigenvalue and eigenvector
analysis.
Note that the 2 2 matrices are symmetric if K, C and M are symmetric.
Furthermore, no inverse is needed. We note that using this last equation the standard
form may also be recovered simply premultiplying by

M
1
as if it were the case of
classical damping. We leave to the reader to show that Eq. (451) is recovered.
- 134-
6 Computational algorithms for eigenvalue and eigen-
vector extraction
The eigenvalue/eigenvector extraction is a computational expensive task for large
systems as those frequently encountered in nite element models (with thousands
of degrees of freedom). Usually, not all the modes and frequencies are of interest.
Depending on the problem, only the rst (say 10) modes of a relatively large problem
(say 1000 dOF) are relevant in the computation of the response of the structure
in a, for example, mode superposition analysis. Sometimes, the loading has some
dened frequency content (spectrum) so the analysis may be performed with a small
subset of the modes of the problem. Hence, many computational algorithms only
compute a relatively small number of modes and frequencies and the savings in
computational time and storage are huge for large problems when compared to
solving all eigenvalues and eigenvectors.
There are many computational methods developed both for solving the standard
eigenvalue problem and for solving the generalized eigenvalue problem. The proce-
dures may be classied attending to the property used in the computation of the
eigenvalues and eigenvectors
1. Characteristic polynomial iteration methods, and Sturm sequence (or spectrum
slicing) methods in which the objective is to obtain the roots of the charac-
teristic polynomial j (`
i
) = det (K`
i
M) = 0. In Sturm sequence methods
shifts and matrix factorizations are employed with the same objective.
2. Matrix projection/transformation methods. These are the typical diagonaliza-
tion or tridiaginalization algorithms. The objective is to obtain the transfor-
mation matrices (containing the modes) that diagonalize the stiness and mass
matrices, i.e. to nd such that
T
K = and
T
M = I. With these
methods we obtain simultaneously all eigenvalues and all eigenvectors. Usually
they are costly for large problems, but they also may be part of more sophis-
ticated algorithms to compute only a small set of eigenvalues/eigenvectors.
Jacobi algorithms are probably the best known of this type.
3. Vector iteration methods. In these type of methods we iterate on vectors
i
in order to obtain K
i
`
i
M
i
= 0 or, alternatively, in subspaces expanded
by those vectors. These algorithms are usually the most successful ones for
large problems, specially in FEA, but they usually use also other procedures
of the previous types. Most methods are based on Krylov subspaces of which
the Power method is the simplest one.
There is a large amount of possible algorithms of each type and of dierent
combinations of them. Below we will only review some procedures, specially those
simple and most used in the nite element context.
We rst review some concepts needed for a correct understanding of the al-
gorithms as matrix deation, the Rayleigh quotient, the Sturm sequence and the
shifting
- 135-
6 Computational algorithms for eigenvalue and eigenvector
extraction
6.1 Some previous concepts
6.1.1 Matrix deation
There is a possibility to apply a restriction to a matrix such that the restriction is
not part of the space spanned by the eigenvectors, i.e., the eigenvectors are perpen-
dicular to the restriction. This technique is frequently used to eliminate from the
eigenvalue problem an eigenvector already computed, so the iterative procedure does
not converge again to the same eigenvector. Assume that the restriction is given by
a vector w. As a usual case we want
I
= w to be the restriction, so
A
I
= `
I

I
Aw = `w (486)
and we want a deated matrix

A such that

A
I
= 0 (487)
i.e., the `
I
has been eliminated (turn to zero). An immediate choice is (called
Hotellings deation)

A = A`
I

T
I
(488)
as it is straightforward to show using
T
i

)
= c
i)
. If the matrix is diagonalized in a
given eigenvalue, we can write it as
A = P

`
I
0
0

A

P
T
(489)
with P being a projection matrix. Matrix

A contains the same eigenvalues as those
of A except for `
I
. Hence

A is also referred to as the deated matrix. We note
that P is not unique. Furthermore, this deation procedure is not always well
conditioned, so dierent deation techniques are used computationally. The specic
technique depends on the procedure to obtain the eigenvalues and eigenvectors. For
example, in vector iteration methods, it is customary to deate the vectors instead
of the matrices (i.e. eliminate the desired component from the vector), a procedure
which leaves the matrices untouched (a nice property in FEs). A (Gram-Schmidt)
deacted vector is built eliminating the component from a vector. For example
v = v

v
T

I
(490)
such that
v
T

I
= v
T

v
T

T
I

I
| {z }
=1
= 0 and v
T

i
= v
T

v
T


T
I

i
| {z }
=0 for i6=I
= v
T

i
(491)
For the generalized eigenvalue problem
v = v

v
T
M
I

I
(492)
such that
v
T
M
i
= v
T
M
i

v
T
M
I

T
I
M
i
| {z }
c
iI
=

v
T
M
i
if i 6= /
0 if i = /
(493)
- 136-
6.1 Some previous concepts
6.1.2 Rayleigh quotient
Assume that is an eigenvector of the standard or of the generalized eigenproblem.
Then
A = ` (standard problem) or K = `M (generalized problem) (494)
We can factor-out ` pre-multiplying by
T
as
` = j () =

T
A

or ` = j () =

T
K

T
M
(495)
These expressions are known as Rayleigh quotients (and have to do much with the
birth of the nite elements). If a mode is known, the frequency is explicitly computed
through Rayleigh quotient. And vice-versa, if the frequency is known, the following
system of equations is solved for
B = 0 (496)
where B = A`I or B = K`M, depending on the problem at hand. Of course
since the modulus of is undetermined care must be exercised when solving the
system of equations.
The Rayleigh quotient is the root of the nite element method and Rayleigh-Ritz
approximations. If x is an approximation of a mode, then
j (x) =
x
T
Ax
x
T
x
or j (x) =
x
T
Kx
x
T
Mx
(497)
are approximations to the eigenvalues corresponding to that eigenvectors. Fre-
quently, the "shape" of the rst modes of the structure may be guessed by intuition
(imagine the modes of a beam). Then, the frequencies can also be approximated.
The method is extensively applied to structures in order to obtain displacements and
stresses via energetic methods. Note that Rayleigh quotient yields a ratio between
the approximation of the elastic and kinetic energies.
One important property of Rayleigh quotient is that it approximates the eigen-
frequency at a quadratic rate when compared to the eigenvectors. To show it assume
that
x =
1
+c y (498)
where
1
is the eigenvector to be approximated (for simplicity but without loss of
generality we consider the rst one), c is an error (scalar) and y is the direction of
that error which can be written in terms of the remaining eigenvectors
y =
.
X
i=2
j
i

i
(note that there is no component in
1
) (499)
- 137-
6 Computational algorithms for eigenvalue and eigenvector
extraction
Hence
j (x) =
x
T
Ax
x
T
x
=

1
+c
P
.
i=2
j
i

T
A

1
+c
P
.
i=2
j
i

1
+c
P
.
i=2
j
i

1
+c
P
.
i=2
j
i

=
= `
1
z }| {

T
1
A
1
+
= 0
z }| {
2c
T
1
A
.
X
i=2
j
i

i
+c
2
P
.
i=2
j
2
i
= `
i
z }| {

T
i
A
i
+
=0 because
T
.

=0
z }| {
c
2
.
X
i=2
X
)6=i
...

T
1

1
| {z }
= 1
+ 2c
T
1
.
X
i=2
j
i

i
| {z }
= 0
+c
2
P
.
i=2
j
2
i

T
i

i
| {z }
= 1
+
=0 because
T
.

=0
z }| {
c
2
.
X
i=2
X
)6=i
...
=
`
1
+c
2
P
.
i=2
j
2
i
`
i
1 +c
2
P
.
i=2
j
2
i
=

1 +c
2
.
X
i=2
j
2
i
!
1

`
1
+c
2
.
X
i=2
j
2
i
`
i
!
= `
1
+c
2

.
X
i=2
j
2
i
`
i

.
X
i=2
j
2
i
`
1
!
+H.O.T. (500)
which shows that the eigenvalues are better approximated than the eigenvectors (the
proof for the generalized eigenproblem is identical).
One interesting property of the Rayleigh quotient is that, if we sort the eigen-
values such that `
1
`
2
... `
.
, then
`
1
j (x) `
.
(501)
This property follows from the completeness of
i
, i = 1, ..., in the representation
of the dimensional space, and from x =
P
.
i=1
r
i

i
(we leave the proof to the
reader).
Example 28 Consider again Example 16, page 103 (see also Example 19, page
113). Assume a "virtual modal shape" that is linear in height, i.e. c(j) = j, where
j is the height. Compute the Rayleigh quotient. Is the frequency obtained a good
approximation for the rst frequency. Guess a "virtual modal shape" to capture
the second frequency (the solution is of course not unique). Compute the Rayleigh
quotient and compare the result with the real one.
Solution:In this case, the "virtual mode" to consider is
=

1
2
3
4

- 138-
6.1 Some previous concepts
and the Rayleigh quotient is
.
2
' j =

T
K

T
M
=

1
2
3
4

360 120 0 0
120 168 48 0
0 48 72 24
0 0 24 24

1
2
3
4

1
2
3
4

2 0 0 0
0 1 0 0
0 0 1 0
0 0 0 2

1
2
3
4

=
432
47
= 9. 191 5
so . =

9. 191 5 = 3. 031 7, whereas the correct one is 2.4245. The approximation is


rough (the mode is not very well approximated), but valid as an "order of magnitude".
For the second frequency one could prescribe something like
=

1
2
1
0

so
. '

j =
s

T
K

T
M
=
v
u
u
u
u
u
u
u
u
u
u
u
u
u
u
u
t

1
2
1
0

360 120 0 0
120 168 48 0
0 48 72 24
0 0 24 24

1
2
1
0

1
2
1
0

2 0 0 0
0 1 0 0
0 0 1 0
0 0 0 2

1
2
1
0

= 7. 855 8
which is a rather good approximation to the second mode frequency which was .
2
=
6.8
6.1.3 Courant minimax characterization of eigenvalues and Sturm se-
quence
It is intuitive to assume that the bounds of Rayleigh quotient are the lowest and
highest eigenvalues. If they are sorted
`
1
j (x) `
.
(502)
or
`
1
= min
x 6= 0
j (x) and `
.
= max
x6=0
j (x) (503)
- 139-
6 Computational algorithms for eigenvalue and eigenvector
extraction
The proof is straightforward. Simply represent x is terms of normalized modal
coordinates
x =
.
X
i=1
r
i

i
j (x) =
x
T
Ax
x
T
x
=

.
X
i=2
r
i

T
i
!
A

.
X
i=2
r
i

i
!

.
X
i=2
r
i

i
!
T

.
X
i=2
r
i

i
!
=
.
X
i=2
r
2
i
`
i
.
X
i=1
r
2
i
(504)
Take the unit vector to be
x =
x
|x|
=
x
v
u
u
t
.
X
i=1
r
2
i
(505)
so
j (x) =
.
X
i=1
r
2
i
`
i
(506)
which is basically the equation of an ellipsoid. The sum is minimum when r
i
= 1
with i being the index of the minimum `
i
, and maximum when r
i
= 1 with i being
the index of the maximum `
i
. Hence Eqs. (503) are proven.
One interesting issue is what happens to the Rayleigh quotient when the pos-
sible vectors x are constrained by some condition. One possible condition is to be
perpendicular to the rst eigenvector
x
T

1
= 0 (507)
then since r
1
= 0, one immediately obtains
`
2
j (x) `
.
(508)
so `
2
is the minimum of all possible x subjected to such restriction. This is a useful
condition in iterative procedures to avoid to re-calculate a known eigenvalue.
A restriction may be thought as of a vector w to which the solution is perpendic-
ular. If we insert a restriction, we decrease the dimension of the possible solutions.
Assume that we have r independent restrictions. Then, the dimension of the possi-
ble solutions is r. If the r conditions are coincident with the rst r eigenvectors,
then a vector fullling such conditions may be written as
v =
.
X
i=v+1

i
j (v) =
.
X
i=v+1

2
i
`
i
`
v+1
`
v+1
= min(j (v)) (509)
However, if the conditions have a component on any of the rst r eigenvectors (say
/ instead of the component in r + 1), then in v we should include such eigenvector
v =
I

I
+
.
X
i=v+2

i
j (v) `
I
`
I
= min(j (v)) `
v+1
(510)
- 140-
6.1 Some previous concepts
Hence, `
v+1
is the maximum value of `
I
of all possibilities, i.e.
`
v+1
= max (`
I
) = max
subspaces
with v restrictions

min
vectors in
those subspaces
j (v)

= max

min
x
T
Ax
x
T
x

(511)
This is the Courant or Rayleigh min-max principle.
Assume that we have a structure of three degrees of freedom. If we restrain one
of the degrees of freedom, then the resulting structure has two degrees of freedom,
two modes and two frequencies. Of course the rst frequency of the restrained
structure is larger than that of the unrestrained structure. The same will apply to
the remaining frequencies. Thus
`
i
`
(1)
i
, i = 1, ...., 1 (512)
where the (1) implies one restriction. On the other hand, the min-max principle
says that
`
2
= max

`
(1)
1

(513)
for whatever restriction we apply. Hence
`
1
`
(1)
1
`
2
(514)
Or course, we can apply matrix deation to our problem to eliminate the corre-
sponding eigenvalue and reducing the dimension of the problem by one. Then
`
2
`
(1)
2
`
3
(515)
and so on. In general, to a restricted problem we can add a new restriction. There-
fore, in general we can write
`
(v)
1
`
(v+1)
1
`
(v)
2
`
(v+1)
2
... `
(v+1)
.v1
`
(v)
.v
(516)
This is the separation property or the Cauchy interlacing theorem of the eigenvalues.
Of special interest in structural dynamics is the case when the restriction consist
on just eliminating one row and one column of the stiness matrix (which physically
is like adding a support to the structure). In this case the successive application of
the Cauchy interlacing theorem to progressively restrained structures is called Sturm
sequence, i.e. the sequence given that the roots of the successive polynomials.
Example 29 For Example 18, compute eigenvalue bounds and perform the Sturm
sequence check.
Solution. If we restrain two degrees of freedom we are left with only one. Depending
on the degrees of freedom eliminated, the eigenvalue of the remaining structure is
`
(2)
case 1
=
1
11
'
11
=
4
4
= 1; `
(2)
case 2
=
1
11
'
11
=
2
8
= 0.25; `
(2)
case 3
=
1
11
'
11
=
1
2
= 0.5
(517)
- 141-
6 Computational algorithms for eigenvalue and eigenvector
extraction
Given the separation property of these eigenvalues, we can say that the structure
has at least one eigenvalue below 0.25 and one above 1 (which is true as the reader
should verify form the solution of Example 18). If for instance we restrain the last
degree of freedom, the remainig stiness and mass matrices are
K =

4 1
1 2

and M =

4 1
1 8

(518)
and the characteristic polynomial is
j (`) = det (K `M) = det

4 4` ` 1
` 1 2 8`

= 31`
2
42` + 7 (519)
which roots are
`
(1)
1
= 0.195, `
(1)
2
= 1.160 (520)
We recall that the eigenvalues of the complete problem are
`
1
= 0.056; `
2
= 0.728; `
3
= 1. 268
We note that
`
1
`
(1)
1
`
2
`
(1)
2
`
3
0.056 0.195 0.728 1.160 1.268
and that for both cases of `
(2)
(the third case is not in the same restraining, Sturm
sequence)
`
(1)
1
`
(2)
1
`
(1)
2
0.195

0.25
1

1.160
6.1.4 Shifting
Shifting is a technique used in many eigenvalue-eigenvector extraction algorithms to
extract other than the lower or higher eigenvalues. It is also used to determine the
number of eigenvalues below or above a given frequency, and also to improve the
conditioning in many operations. In essence it is a powerful technique with multiple
applications.
Shifting consists on altering (shifting) the original eigenvalue problem by a known
(given) parameter j (we consider the generalized eigenvalue problem, but the pro-
cedure is identical for the standard case)
[KjM]
| {z }
K
j

i
= `
j
i
M
i
(521)
such that the new eigenproblem is
K
j

i
= `
j
i
M
i
(522)
- 142-
6.1 Some previous concepts
The eigenvectors of the new problem are the same as those of the original one,
whereas the eigenvalues are shifted by the quantity j
`
j
i
= `
i
j (523)
Hence, since for structural dynamics all eigenvalues are positive, the number of
negative `
j
i
is the number of eigenvalues `
i
less that j, and the number of positive
`
j
i
is the number of eigenvalues `
i
larger than j. This conclusion seems obvious.
But consider `
j
i
< 0, then

T
i
K
j

i
= |`
j
i
|
T
i
M
i
< 0 (524)
i.e. the resulting K
j
is not positive denite. If we perform a LDU decomposition
(or a Crout or a Doolittle decomposition
3
), the number of diagonal terms less than
zero is the number of eigenvalues less than zero. Hence, given a problem, we can
determine the number of eigenvalues less than a given quantity j just performing
a shifting on the stiness matrix and computing the LDU decomposition. This
is a useful property used in many algorithms and is also frequently named Sturm
sequence check or spectrum slicing. Furthermore, this property may be used to know
the number of eigenvalues between two given values.
Shifting may also be used to avoid problems with null eigenvalues (as rigid body
motions) and to compute eigenvalues higher than a given value. Hence, it is a
technique extensively used in nite element programs.
6.1.5 Krylov subspaces and the Power method
The Power method is a simple iterative method to obtain the dominant eigenvalue
and eigenvector of a matrix A (standard eigenvalue problem). The essence of the
method is the fact that given a vector x
I
, then the vector
x
I+1
= Ax
I
(525)
3
The usual matrix factorizations of a matrix 1 are Gauss-type operations which nd matrices
such that
1 = 1L
where 1 is a lower triangular matrix and L is an upper triangular matrix. The dierences between
the decompositions are on the diagonal terms. If the diagonal tems in 1 are all ones (and those
of L not ones), the decomposition is a Crout decomposition. If the diagonal terms in L are all
ones, then the decomposition is named Doolittle decomposition. The non-one diagonal terms may
be given their own matrix such that
1 =

11

L
and the previous decompositions are recovered simply setting 1 =

11 or L = 1

L. For symmetric
possitive denite matrices we can obtain the Cholesky decomposition, which is of the type
1 =

1

1
T
=

L
T

L =

11

1
T
where

1 =

11
12
. Usually 111
T
are preferred in nite elements because of the conditioning of
the operations involved. The number of diagonal terms in 1 which are zero equals the number of
restrictions missing in the problem, so it is a typical check for a given mesh done by FE programs.
- 143-
6 Computational algorithms for eigenvalue and eigenvector
extraction
is a better approximation to the dominant eigenvector (that is paired with the largest
eigenvalue). The vector is usually normalized
x
I+1
=
x
I+1
q
x
T
I+1
x
I+1
(526)
The eigenvalue is then approximated by the Rayleigh quotient
j ( x
I+1
) =
x
T
I+1
A x
I+1
x
T
I+1
x
I+1
= x
T
I+1
Ax
I+1
(527)
The proof that the method converges to the dominant eigenvector is straightforward
and very similar to that of the Rayleigh quotient. After / iterations
x
I+1
= A
I
x
1
(528)
Since the modes span the whole dimensional space we can write
x
1
= c
1

1
+c
2

2
+... +c
.

.
=
.
X
i=1
c
i

i
(529)
so
x
I+1
= A
I
x
1
=
.
X
i=1
c
i
A
I

i
=
.
X
i=1
c
i
`
I
i

i
= c
.
`
I
.
.
X
i=1
c
i
c
.

`
i
`
.

i
(530)
and
x
T
I+1
x
I+1
=

c
.
`
I
.
.
X
i=1
c
i
c
.

`
i
`
.

i
!
T

c
.
`
I
.
.
X
i=1
c
i
c
.

`
i
`
.

i
!
= c
2
.
`
2I
.
.
X
i=1

c
i
c
.

`
i
`
.

2I
(531)
because
T
i

)
= c
i)
. Hence
x
I+1
=
x
I+1
q
x
T
I+1
x
I+1
=
.
X
i=1
c
i
c
.

`
i
`
.

i
v
u
u
t
.
X
i=1

c
i
c
.

`
i
`
.

2I
(532)
We note that
.
X
i=1

c
i
c
.

`
i
`
.

2I
=

c
.
c
.

`
.
`
.

2I
+
.1
X
i=1

c
i
c
.

`
i
`
.

2I
| {z }
0 if `
.
`
i
(533)
- 144-
6.1 Some previous concepts
The last summation goes to zero when / if `
.
`
i
. Hence
x
I

.
as / (534)
Of course, x
I

.
because in Eq. (529) component c
.
6= 0. If x
I
is perpendicular
to
.
, then x
I

.1
. But this is true only in exact arithmetics. The iterative
procedure is halted once a desired convergence is obtained. The convergence may
be measured in terms of the eigenvalue convergence, i.e.
j (x
I+1
) j (x
I
)
j (x
I+1
)
tol (535)
The Power method continuously builds vectors x
I+1
= A
I
x
1
. However, at each
time the information of the previously built vectors is discarded (or only accounted
for as a starting vector for the next iteration). However, from the Cayley-Hamilton
Theorem we known that if the dimension of matrix A is , then the powers A
I
with / 1 are linearly dependent on the powers up to / = 1. Hence
vectors x
I
with / can be written in terms of the previously computed ones. As
a consequence, these vectors are independent and span the whole space K
.
of A
K
.
(A, x) = :ja:

x, Ax, A
2
x, ..., A
.1
x

(536)
This is known as the Krylov space of order . The vectors are not orthogonal. Of
course, approximations to the eigenvectors may be obtained using Krylov subspaces
of order < . There are many numerical algorithms both to solve the eigenvalue
problem (Lanczos, Arnoldi,...) and to iteratively solve a system of equations (CG,
GMRES, BiCGSTAB,...) that are based on Krylov subspaces. Inherent to these
methods are the deation/orthogonalization procedures.
Example 30 Program the Power iteration method in your favorite language and
apply the algorithm to nd the dominant eigenvalue and eigenvector of matrix of
Eq.(378), page 117.
Solution. We use Matlab to program the algorithm. The resulting code and itera-
tions using x
1
= [1, 1, 1]
T
are as follows.
function [x,l] = powerm(A,y,tol)
% [x,l] = powerm(A,y,tol)
% Function to obtain the dominant eigenvalue
% and eigenvector using the power method
% A = Matrix
% y = starting vector
% x = eigenvector
% l = eigenvalue computed using Rayleigh quotient
% tol = relative tolerance on the eigenvalue
%
err = tol * 100; % initialization of error
i = 0; % iteration counter
while(err > tol),
- 145-
6 Computational algorithms for eigenvalue and eigenvector
extraction
x = y;
norm = sqrt(x*x); % normalizing of vector
x = x / norm;
y = A * x; % new iteration vector
l = x * y; % eigenvalue
if (i > 0),
err = abs(l - lold)/abs(l);
end,
lold = l;
i = i+1;
disp([Iter= ,num2str(i), Error=,num2str(err)]);
end,
return
Iterations
----------------------------
Iter= 1 Error=0.001 (this is a dummy iteration)
Iter= 2 Error=0.72187
Iter= 3 Error=0.015556
Iter= 4 Error=0.00070199
Iter= 5 Error=0.00018317
Iter= 6 Error=5.1787e-005
Iter= 7 Error=1.4655e-005
Iter= 8 Error=4.1471e-006
x =
0.9664
-0.2413
0.0890
l = 0.8352
Example 31 Show that if the starting vector is perpendicular to the eigenvector,
then many more iterations are needed. Why is the dominant eigenvector still ob-
tained?
Solution. In this case, in order to make a starting vector perpendicular to the
dominant eigenvector, we simply deate it
x
1
x
1

x
T
1

.
(537)
i.e.
x
1
=

1
1
1

0.814 1

0.9664
0.2413
0.0890

0.213 25
1.196 4
0.927 55

(538)
Then, using this starting vector, the iterations are as follows. Note that the dominant
eigenvector is still obtained. The reason is because computational arithmetics are not
- 146-
6.1 Some previous concepts
exact, so during the iterations, orthogonality respect to the eigenvector is successively
lost.
Iter= 1 Error=0.001
Iter= 2 Error=0.67912
Iter= 3 Error=0.59242
Iter= 4 Error=0.027273
Iter= 5 Error=0.001057
Iter= 6 Error=0.0022753
Iter= 7 Error=0.007864
Iter= 8 Error=0.02599
Iter= 9 Error=0.074243
Iter= 10 Error=0.14908
Iter= 11 Error=0.16932
Iter= 12 Error=0.1041
Iter= 13 Error=0.040761
Iter= 14 Error=0.012852
Iter= 15 Error=0.0037561
Iter= 16 Error=0.0010728
Iter= 17 Error=0.00030435
Iter= 18 Error=8.6185e-005
Iter= 19 Error=2.4392e-005
Iter= 20 Error=6.9025e-006
x =
-0.9672
0.2392
-0.0854
l = 0.8352
Example 32 Modify the power method program of Example 30 to compute the sec-
ond dominant eigenvector.
Solution. The second dominant eigenvector may be computed if during iterations
the second iteration vector is deated respect to the rst eigenvector. The Matlab
source code and the iterations output follows. Note that the two most dominant
modes of matrix of Eq. (378) are obtained. The starting vectors for the iterations
were
x
1
=

1 1
1 0
1 0

(539)
We note that although the procedure seems to be amenable of generalization, round-
o errors makes it useless for large problems (or many eigenvectors)
function [x,l] = powerm2(A,y,tol)
% [x,l] = powerm2(A,y,tol)
- 147-
6 Computational algorithms for eigenvalue and eigenvector
extraction
% Function to obtain two dominant eigenvalues
% and eigenvectors using the power method
% A = Matrix
% y = starting vectors
% x = eigenvectors
% l = eigenvalues computed using Rayleigh quotient
% tol = relative tolerance on the eigenvalues
%
err = tol * 100; % initialization of error
i = 0; % iteration counter
while(err > tol),
x = y;
for j=1:2, % loop on the two vectors
norm = sqrt(x(:,j)*x(:,j)); % normaliz. of vector
x(:,j) = x(:,j) / norm;
y(:,j) = A * x(:,j); % new iteration vector
l(j) = x(:,j) * y(:,j); % eigenvalue
end,
if (i > 0),
for j=1:2, % check for maximum error
errj = abs(l(j) - lold(j))/abs(l(j));
if (j == 1), err = errj; end,
if (j > 1 && errj > err), err = errj; end,
end,
end,
lold = l; % save old eigenvalues to meassure error
i = i+1; % iteration counter
disp([Iter= ,num2str(i), Error=,num2str(err)]);
y(:,2) = y(:,2) - (y(:,1)*y(:,2)) * y(:,1); % deflation
end,
return
Iter= 1 Error=0.001
Iter= 2 Error=0.72187
Iter= 3 Error=0.04004
Iter= 4 Error=0.10072
Iter= 5 Error=0.17471
Iter= 6 Error=0.17613
Iter= 7 Error=0.10297
Iter= 8 Error=0.042817
Iter= 9 Error=0.015306
Iter= 10 Error=0.0051707
Iter= 11 Error=0.0017121
Iter= 12 Error=0.00056277
Iter= 13 Error=0.00018443
Iter= 14 Error=6.0346e-005
- 148-
6.1 Some previous concepts
Iter= 15 Error=1.9726e-005
Iter= 16 Error=6.4426e-006
x =
0.9667 0.2040
-0.2404 0.5081
0.0875 -0.8368
l =
0.8352 0.4443
Example 33 Apply the power method to compute the lowest (less dominant) eigen-
value and eigenvector of the matrix given in Eq. (378).
Solution. To compute the lower eigenvalue and eigenvector, we simply note that
the lower eigenvector of

K is the inverse of the higher eigenvector of

K
1
Then

K
1
=

4
5

1

50
0

50
1
5

1

30
0
1

30
1
3

1
=

1.667 2.357 1.291


2.357 13.33 7.303
1.291 7.303 7.00

(540)
and the corresponding iterations using the program of Example 30 are
Iter= 1 Error=0.001
Iter= 2 Error=0.21019
Iter= 3 Error=0.0011926
Iter= 4 Error=5.8917e-006
x =
0.1568
0.8266
0.5404
l = 18.5495 (so lambda=1/l=0.0539)
Example 34 Apply the power method code of Example 30 to solve the modes and
frequencies of Example 16, page 103.
Solution: To use the Power method, we reduce the problem to the standard eigen-
- 149-
6 Computational algorithms for eigenvalue and eigenvector
extraction
value problem

K= M

1
2
KM

1
2
=

2 0 0 0
0 1 0 0
0 0 1 0
0 0 0 2

1
2

360 120 0 0
120 168 48 0
0 48 72 24
0 0 24 24

2 0 0 0
0 1 0 0
0 0 1 0
0 0 0 2

1
2
=

180 60

2 0 0
60

2 168 48 0
0 48 72 12

2
0 0 12

2 12

180.0 84.853 0 0
84.853 168.0 48.0 0
0 48.0 72.0 16.971
0 0 16.971 12.0

The results using the code of Example 30 follow. It is seen that the second mode
converges to the rst mode regardless of the continuous deation performed. The
same would occur in the rest of the cases. If you want to obtain a second mode you
have to deate the matrix itself, i.e.

K = K `
T
. We leave this to the reader.
Y =
1 1 1 1
1 0 0 0
0 0 1 1
1 0 1 0
>> [x,l] = powerm2(K,Y,0.0001)
Iter= 1 Error=0.01
Iter= 2 Error=0.59173
Iter= 3 Error=0.016717
Iter= 4 Error=0.016181
Iter= 5 Error=0.066173
Iter= 6 Error=0.20323
Iter= 7 Error=0.27335
Iter= 8 Error=0.1435
Iter= 9 Error=0.037813
Iter= 10 Error=0.0076735
Iter= 11 Error=0.0014653
Iter= 12 Error=0.00027649
Iter= 13 Error=5.2051e-005
x =
-0.6932 -0.6932 1.0000 1.0000
0.6985 0.6985 0 0
-0.1770 -0.1770 1.0000 1.0000
0.0121 0.0121 1.0000 0
- 150-
6.2 Determinant search method
l =
264.8585 264.8585
6.2 Determinant search method
The determinant search method is probably the most immediate method. If the
dimension of the problem is less than 4, then the solution is explicit. However, for
problems with larger dimension, it is not possible to obtain an explicit solution.
Then, any method to obtain the roots of a function may be employed. In this case
the function is
j (`) = det (K`M) (541)
Examples are the bisection method, the Regula-falsi, secant methods, etc.
Example 35 For Example 18 compute the eigenvalues using the bisection method
and the regula-falsi method.
Solution. In next table we show the procedure to compute an eigenvalue using both
methods. Recall that the extra point in the Regula-falsi method is obtained as
` =
j

`
(i)

`
())
j

`
())

`
(i)
j

`
(i)

`
())

where `
(i)
and `
())
are two iterations such that :iq:

`
(i)

6= :iq:

`
())

. As
it is seen, although the Regula-falsi should be superlinear, it is often the case that the
bisection method works better. It is common to use a combination of the methods.
Bisection method
` det (K`M)
0. 3.0
2.0 105.0
1.0 4.0
1.5 15.0
1.25 0.656 25
1.375 5.29
1.3125 1.891
1.2812 0.513
1.2656 0.0958
....
...
Regula-falsi method
` det (K`M)
0. 3.0
2.0 105.0
1.944 90.028
1.881 74.825
() bisection
1.571 22.446
0.9405 3.579
1.484 13.524
1.370 4.99
1.190 2.37
...
Table 1: Computational procedure for case 1
Note that we have converged to an eigenvalue, but we do not know to which eigen-
value, and we do not know where the remaining eigenvalues are (in fact in the rst
search domain we have all three eigenvalues). We can use Example 29 to make a
- 151-
6 Computational algorithms for eigenvalue and eigenvector
extraction
better guess. If we solve the system with one restriction
`
1
`
(1)
1
`
2
`
(1)
2
`
3
? 0.195 ? 1.160 ?
(542)
we can determine the regions where the three eigenvalues are. If the problem had
degrees of freedom, departing from a constrained problem up to 3 degrees of free-
dom (which solution is explicit), we could have bounds on the problem with one less
restriction, and so on.
6.3 Inverse iteration method
The inverse iteration method is one of the best known methods to compute the
eigenvalues and eigenvectors. It is closely related to the Power Iteration Method
seen before. It is also used as part of more sophisticated procedures as the subspace
iteration method. We present the method for the generalized eigenvalue problem
since the case of the standard problem is simpler and readily obtained from the
generalize one. The basic idea is to use the fundamental eigenvalue equation to
iterate on the eigenvector
K
1
`
i

i
= M
i
; K x
I+1
= Mx
I
(solve for x
I+1
) (543)
where x
I
is a known vector (guess) for iteration / and x
I+1
is the new vector. Note
that since the modulus of the eigenvector is undened, we only need the direction
of the vectors and, hence, we embed the eigenvalues in the iteration vectors x
I+1
.
The vector is then normalized as usual
x
I+1
=
x
I+1
q
x
T
I+1
M x
I+1
(544)
If x
1
is not orthogonal to
1
, then x
I+1

1
as / . Then the eigenvalue is
obtained through Rayleigh quotient
`
1
' j (x
I+1
) = x
T
I+1
Kx
I+1
(545)
Again, the procedure is halted once a desired convergence is obtained, measured in
terms of the eigenvalue convergence, i.e.
j (x
I+1
) j (x
I
)
j (x
I+1
)
tol (546)
A computationally more ecient algorithm is as follows
For / = 1, ...with y
1
a starting vector
1. K x
I+1
= y
I
(solve for x
I+1
)
2. y
I+1
= M x
I+1
(multiply x
I+1
by M)
3. j ( x
I+1
) =
x
T
I+1
y
I
x
T
I+1
y
I+1
(note that =
x
T
I+1
K x
I+1
x
T
I+1
M x
I+1
)
4. y
I+1
=
y
I+1
q
x
T
I+1
y
I+1
(note that Mx
I+1
| {z }
y
I+1
= M
x
I+1
q
x
T
I+1
M x
I+1
)
(547)
- 152-
6.3 Inverse iteration method
Upon convergence, the eigenvector is (the matrix is of course usually not inverted
but factorized)

1
= M
1
y
I+1
(548)
Once one eigenvalue/eigenvector is obtained, a shift or a deation may be performed
to obtain the next eigenvalue.
A question raised now is whether the procedure converges and how is the rate of
convergence. An easy check may be performed if we "ideally" transform the problem
to the canonical form assuming that we know the eigenvectors (which we dont, but
we know there exist such transformation, hence the word "ideally")
K x
I+1
= Mx
I
; K z
I+1
= Mz
I
; z
I+1
= z
I
(549)
where x
I
= z
I
and hence z
I
are the components of the vector in the modal basis-
The re-scaling is `
1
z
I+1
= z
I+1
for the rst eigenvector. Recall that the matrix
= diaq (`
i
) contains the eigenvalues. If z
1
= [1, 1, ..., 1]
T
, after / +1 iterations we
have (assuming we converge to the smallest `
1
)
z
I+1
= `
I
1
h
1,`
I
1
, 1,`
I
2
, ..., 1,`
I
.
i
T
=
h
1, `
I
1
,`
I
2
, ..., `
I
1
,`
I
.
i
T
(550)
In canonical form, the rst eigenvector is e
1
= [1, 0, ..., 0]
T
and, hence, this shows
that convergence rate is `
1
,`
2
for `
1
< `
2
< ... < `
.
. If they are the same, z
I+1
converges with `
1
,`
i
where i is the next distinct eigenvalue.
Finally, we note that the inverse iteration method may be carried-out with several
vectors simultaneously, taking advantage of the fact that K is factorized only once.
However, since any vector converges to the eigenvector of smallest eigenvalue except
for the case it is perpendicular to that eigenvector, all vectors would converge to
the same eigenvector. To avoid it, each eigenvector may be deated to the remainig
ones, making them orthogonal. Nonetheless, there are more ecient algorithms for
simultaneous iterations that we will see below.
Example 36 Compute the rst eigenvalue and eigenvector of Example 18
Solution We begin with y
1
= [1, 1, 1]
T
. Then the procedure is as given in table
2. Note that in a nite element program, matrix inversions are never performed.
Instead, K is factorized and then forward reductions/backsubstitutions are performed
on the dierent vectors (computational gains are huge). Note that after 3 iterations
a rather accurate solution is obtained
Example 37 Program the inverse iteration method in your favorite programming
language.
Solution. This is the source code in MATLAB. You can employ it to repeat the
previous example.
function [y,r] = inverse_iter(k,m,y,tol)
% *** [y] = inverse_iter(k,m,y)
% Funtion to compute the lowest eigenvalue-eigenvector pair
- 153-
6 Computational algorithms for eigenvalue and eigenvector
extraction
Inverse iterations
/ y
I
x
I+1
= M
1
y
I
y
I+1
= K x
I+1
j ( x
I+1
) y
I+1
=
y
I+1
q
x
T
I+1
y
I
1

1
1
1

1
3
4

7
29
11

0.0580

0.5959
2.4686
0.9364

0.5959
2.4686
0.9364

1.3336
4.7387
5.6750

10.0732
44.9180
16.0887

0.0561

0.5652
2.5205
0.9028

0.5652
2.5205
0.9028

1.3296
4.7528
5.6556

10.0709
45.0076
16.0640

0.0561

0.5646
2.5233
0.9006

Table 2: Computations using the inverse iteration method


% of the generalized eigenvalue problem
% k = stiffness matrix
% m = mass matrix
% y = trial load vector / eigenvector upon exit
%
i = 0; % iteration counter
U = chol(k); % cholesky factorization
err = tol*100; % initialization of error
while(err > tol)
xhat = U\y; % forward reduction
xhat = U\xhat; % backsubstitution
yhat = m * xhat,
norm = (xhat*yhat),
r = xhat*y/norm, % Rayleigh quotient
y = yhat / sqrt(norm) % Normalization
if (i > 0), err = abs(r - rold)/abs(r), end,
i = i + 1; rold = r; % increase counter and save old eigenv.
disp([Iteration ,num2str(i),; Error = , num2str(err)]);
end
y = m\y; % final mass scaling
return
6.4 Forward iteration method
The forward iteration method is similar to the inverse iteration method, but in this
case the higher eigenvalues are computed. It can be considered as the Power iteration
method for the generalized eigenvalue problem. This method may be interesting for
explicit transient analysis, where the largest eigenvalue of a mesh is needed to estab-
lish the time increment, as we will see later. The basic idea is to read the eigenvalue
- 154-
6.4 Forward iteration method
problem in a "forward" way, i.e.
M
i

i
= K
i
; M x
I+1
= Kx
I
(551)
and normalize x
I+1
x
I+1
=
x
I+1
q
x
T
I+1
M x
I+1
(552)
The algorithm converges to the largest eigenvector, i.e. x
I+1

.
for / .
The ecient version is
For / = 1, ...with y
1
a starting vector
1. M x
I+1
= y
I
(solve for x
I+1
)
2. y
I+1
= K x
I+1
(multiply x
I+1
by M)
3. j ( x
I+1
) =
x
T
I+1
y
I+1
x
T
I+1
y
I
(note that =
x
T
I+1
K x
I+1
x
T
I+1
M x
I+1
)
4. y
I+1
=
y
I+1
q
x
T
I+1
y
I
(note that Kx
I+1
| {z }
y
I+1
=
K x
I+1
q
x
T
I+1
M x
I+1
)
(553)
Upon convergence, the eigenvector is obtained (of course, again we note that the
matrix is usually not inverted).

.
= K
1
y
I+1
(554)
Example 38 Compute the last eigenvalue and eigenvector of Example 18
Solution We begin with y
1
= [1, 1, 1]
T
. Then the procedure is as given in the fol-
lowing table. Note that after 8 iterations a good solution is obtained
Forward iterations
/ y
I
x
I+1
= K
1
y
I
y
I+1
= M x
I+1
j ( x
I+1
) y
I+1
=
y
I+1
q
x
T
I+1
y
I+1
1

1
1
1

0.2414
0.0345
0.4828

0.9310
0.6552
0.4482

0.0552

1.0689
0.7522
0.5147

1.0689
0.7522
0.5147

0.3113
0.1761
0.3454

1.4212
1.0089
0.5215

0.6430

1.7724
1.2582
0.6504

... ... ... ... ... ...


8

1.9593
1.1720
0.5246

0.5562
0.2654
0.3950

2.4901
1.4820
0.6604

1.2681

1.9637
1.1687
0.5208

Forward iterations for the example


The eigenvector is
x = K
1
y =

4 1 0
1 2 1
0 1 1

1.9637
1.1687
0.5208

0.438 6
0.209 3
0.311 5

- 155-
6 Computational algorithms for eigenvalue and eigenvector
extraction
which is approximately the same as the computed one in Example 18. We note that
more iterations are needed than with the inverse iteration method. However, this
is due to the fact that the starting vector is a good choice for the eigenvector with
lowest eigenvalue, but not as good for that of the highest eigenvalue. We leave to
the reader to perform the computations with y
1
= [1, 1, 1]
T
and to see that fewer
iterations are needed.
6.5 Jacobi method for the standard eigenvalue problem
The Jacobi method is one projection method to compute eigenvalues and eigenvec-
tors of a symmetric matrix A. Its stability and simplicity makes this method one
of the preferred methods for small problems (say dimensions up to 30 30). The
problem to be solved is that of the standard eigenvalue problem
A = ;
T
A =
T
(555)
where we can normalize such that
T
= I. The method consists on the appli-
cation of successive rotation matrices P
i
A
I+1
= P
T
I+1
P
T
I
...P
T
2
P
T
1
AP
1
P
2
...P
I
P
I+1
(556)
such that

I+1
= P
1
P
2
...P
I
P
I+1

T
A = A
I+1
=
T
I+1
A
I+1
so
A
I+1
and
I+1
as / (557)
These rotation matrices are computed such that they annihilate an o-diagonal term
(and its symmetric term) at a time. Assume that we want to annihilate
24
and

42
. Then P
I
is such that
P
T
I+1
A
I
P
I+1
=

1 0 0 0
0 c 0 :
0 0 1 0
0 : 0 c

11

12

13

14

12

22

23

24

13

23

33

34

14

24

34

44

1 0 0 0
0 c 0 :
0 0 1 0
0 : 0 c

11
c
12
+:
14
c
12
+:
14

22
c
2
+ 2
24
c: +
44
:
2

13
c
23
+:
34
c
14
:
12

24

c
2
:
2

+c: (
44

22
)
...
...

13
c
14
:
12
c
23
+:
34

24

c
2
:
2

+c: (
44

22
)

33
c
34
:
23
c
34
:
23

44
c
2
2
24
c: +
22
:
2

(558)
where
c = cos 0; : = sin0 (559)
- 156-
6.5 Jacobi method for the standard eigenvalue problem
Hence, if we want to annihilate
24
, we have to make

24

c
2
:
2

+c: (
44

22
) = 0 (560)
i.e. in general, to annihilate
i)
(i 6= ,)

i)
cos 20 +
1
2
(
))

ii
) sin20 = 0 (561)
tan20 =
2
i)

ii

))
and 0 =

4
if
ii
=
))
(562)
In Equation (558) it is also seen that if, for example,
12
and
14
where zero, they
become nonzero. However, they depend on other nondiagonal terms. Hence, after
trying to make zero all nondiagonal terms, they only become smaller, but the proce-
dure may be applied repeatedly to make them as small as we want. Convergence has
been proven to be quadratic. Thus, once an approximate solution has been attained,
a very accurate one is obtained in few extra iterations.
In theory, the best way to obtain a fast convergence is to check which o-diagonal
terms are larger and to annihilate them rst. However, this is a time consuming task
and, hence, annihilation is usually performed just in order (row by row or column
by column). One pass over all the matrix is called a sweep. After each sweep,
convergence is checked and if not achieved, a new sweep is performed. As in all
cases, convergence is measured in terms of relative error in the eigenvalues

`
i(I+1)
`
i(I)

`
i(I+1)

< to|
A
, i = 1, ..., (563)
where `
i(I)
is the i t/ diagonal term of A matrix (
ii
) for iteration /. In case
convergence is reached to the desired tolerance, an additional check can be performed
in order to assure that all o-diagonal terms are also suciently zero respect to the
diagonal terms
v
u
u
t

2
i)(I+1)

ii(I+1)

))(I+1)
to|

(564)
In order to save some computational time, a transformation may be omitted if the
corresponding o-diagonal term is less than a given tolerance, which depends on
the number of sweeps performed. Since convergence is quadratic, for sweep o, the
tolerance may be set to
v
u
u
t

2
i)(I+1)

ii(I+1)

))(I+1)
10
2S
The layout of the procedure is as follows. Note that P
I
is never explicitly built
in ecient programs. The operations are carried-out using expressions of the type
- 157-
6 Computational algorithms for eigenvalue and eigenvector
extraction
(558).
otart For sweep o = 1, ...(until convergence is obtained)
1 otart For i = 1 to (sweep loop)
C otart For , = i + 1 to (row loop)
C.1. Rotation angle:

tan20 =
2
i)

ii

))
if
ii
6=
))
0 =

4
if
ii
=
))
C.2. c = cos 0; : = sin0;
C.3. Update A
I+1
= P
T
I+1
A
I
P
I+1
using (558)
C.4. Update
I+1
=
I
P
I+1
using operations (558)
C c:d End row (/ / + 1)
1 c:d End sweep
c:d Check convergence. If obtained, exit
(565)
Example 39 Compute the eigenvalues and eigenvectors of the

K matrix of Exam-
ple 22 using a standard Jacobi procedure, where

K =

1.0 0.359 0.094


0.359 0.355 0.356
0.094 0.356 0.697

(566)
This matrix corresponds to the standard form of the general eigenvalue problem of
Example 18 in which the mass matrix is decomposed using the Cholesky method.
Solution. We will do the procedure row by row. To annihilate

1
12
= 0.359, we
use Formula (562)
tan20 =
2
i)

ii

))
=
2 (0.359)
1.0 0.355
= 1.113
0 =
1
2
arctan(1.113) = 0.419 41; c = 0.913 33; : = 0.407 22 (567)
Hence
P
1
=

c : 0
: c 0
0 0 1

0.913 33 0.407 22 0
0.407 22 0.913 33 0
0 0 1

K
1
= P
T
1

K
0
P
1
=

1. 160 0 0.231
0 0.195 0.287
0.231 0.287 0.697

(568)
To annihilate

1
13(2)
= 0.231 we proceed again with Formula (562)
tan20 =
2
i)

ii

))
=
2 (0.231)
1. 160 0.697
= 0.997 84
0 =
1
2
arctan(0.999) = 0.392; c = 0.924; : = 0.382 (569)
- 158-
6.5 Jacobi method for the standard eigenvalue problem
Hence
P
2
=

c 0 :
0 1 0
: 0 c

0.924 0 0.382
0 1 0
0.382 0 0.924

(570)
and

K
2
= P
T
2

K
1
P
2
= P
T
2
P
T
1

KP
1
P
2
=

1. 255 2 0.109 63 0
0.109 63 0.195 0.265 19
0 0.265 19 0.601 28

(571)
where we see that the

1
12
has become nonzero, but smaller than before. The third
term is annihilated using
tan20 =
2
i)

ii

))
=
2 (0.265 19)
0.195 0.601 28
= 1. 305 5
0 =
1
2
arctan(1. 305 5) = 0.458 57; c = 0.896 69; : = 0.442 67 (572)
Thus
P
3
=

1 0 0
0 c :
0 : c

1 0 0
0 0.896 69 0.442 67
0 0.442 67 0.896 69

K
3
= P
T
3

K
2
P
3
= P
T
3
P
T
2
P
T
1

KP
1
P
2
P
3
=

1. 255 2 0.0983 0.0485


0.0983 0.0641 0
0.0485 0 0.732 2

which concludes the rst sweep. The new approximation to the eigenvectors after
the rst sweep is

3
= P
1
P
2
P
3
=

0.672 56 0.091 61 0.734 36


0.299 87 0.940 94 0.157 25
0.676 57 0.325 97 0.660 3

We leave to the reader the task of checking that after four sweeps we obtain a result
with an error less that 10
10

I
= P
1
P
2
...P
I
=

0.777 0.282 0.564


0.469 0.856 0.218
0.421 0.434 0.797

(573)
and

I
=

K
I
=

1.2679 0 0
0 0.0561 0
0 0 0.7280

(574)
which can be easily veried using

K =

I

K
I

T
I
. The eigenvectors of the generalized
- 159-
6 Computational algorithms for eigenvalue and eigenvector
extraction
eigenvalue problem may be obtained as

I
= L
T

I
=

2 0 0
1
2

31
2
0
0
2

31

58

31

0.777 0.282 0.564


0.469 0.856 0.218
0.421 0.434 0.797

0.440 0.0745 0.243


0.208 0.266 41 0.154
0.307 0.317 22 0.583

(575)
which are to be compared to those of Eq. (365)
Example 40 Program the Jacobi procedure for the standard eigenvalue problem in
your favorite computer language.
Solution. The source code in Matlab follows. For readability we built the P matrix.
We leave to the reader to optimize the code using row-column multiplications in the
form (558)
function [V,A] = jacobi(A,tol)
%*** [V,A] = jacobi(A,tol)
% function to compute the eigenvalues and eigenvectors
% of the standard eigenvalue problem using Jacobi rotations
% AV = VD; tol = tolerance in the eigenvalues
%
n = size(A,1); % size of the problem
err = 100.*tol; % the error in the eigenvalues
V = eye(n); % eigenvectors, initially identity
Aold = A; % initialization of D
S = 0; % sweep counter
while (err > tol),
for i = 1: n, % this loop accounts for a sweep (all rows)
for j = i+1:n, % all off-diagonal terms of the row
if (A(i,i) == A(j,j)),
c = 1/sqrt(2); s = 1/sqrt(2);
else,
t = 2*A(i,j)/(A(i,i)-A(j,j)); t = 0.5*atan(t);
c = cos(t); s = sin(t);
end,
% vvvvv this is usually optimized in a program
P = eye(n); % Jacobi transf. matrix
P(i,i) = c; P(j,j) = c; P(i,j) = -s; P(j,i) = s;
A = P*A*P, % Eigenvalues
V = V*P, % Eigenvectors
% ^^^^^this is usually optimized in a program
end,
end,
for j = 1:n, % looks for maximum error in eigenvalues
- 160-
6.6 The QR decomposition and algorithm
errj = abs(A(j,j) - Aold(j,j)) / abs(A(j,j));
if (j == 1),
err = errj;
else,
if (errj > err), err = errj; end,
end
end,
S = S+1;
disp([sweep= ,num2str(S), err= , num2str(err)]);
Aold = A;
end,
return,
Example 41 Use the previous Jacobi code to solve the eiganvalues and eigenvectors
of Example 16, page 103.
Solution: The following result is obtained in just three iterations (sweeps):
>> [V,D] = gjacobi(K,M,0.000001)
sweep = 1 error = 0.76449
sweep = 2 error = 0.0020468
sweep = 3 error = 2.9292e-008
V =
0.0460 0.2279 0.4512 0.4923
0.1334 0.5076 0.4897 -0.6963
0.3356 0.7165 -0.5863 0.1743
0.6578 -0.2501 0.0684 -0.0083
D =
5.8781 46.3710 114.8894 264.8615
6.6 The QR decomposition and algorithm
The QR decomposition or factorization is a decomposition of the type
A = QR (576)
where Q is an orthogonal matrix and R is an upper triangular matrix. This decom-
position may be obtained by several methods: using Gram-Schmidt orthogonaliza-
tions of the columns of A respect to the previous ones, by Housholder transforma-
tions or reections of the type P = I 2vv
T
, and by Jacobi-Givens rotations. The
purpose of the transformations are such that they annihilate the terms below the
diagonal of A. Hence the procedure is similar to that given for the Jacobi algorithm
- 161-
6 Computational algorithms for eigenvalue and eigenvector
extraction
(but applied only to the elements below the diagonal). After the transformations
we have
P
.
...P
2
P
1
| {z }
Q
T
A = R (577)
i.e. A = QR.
The QR algorithm uses this type of decomposition repeatedly, instead of per-
forming several "sweeps" as in the Jacobi method. The "trick" is that in the QR
decomposition the zeroed elements do not become nonzero until the next step (and
hence it is a strong candidate for tridiagonal matrices). Assume that we are in step
/. Then
R
I
= Q
T
I
A
I
(578)
and
R
I
Q
I
= Q
T
I
A
I
Q
I
(579)
which is a transformation equivalent to the Jacobi procedure. Obviously Q
T
I
A
I
Q
I
is not necessary diagonal; it will be only after enough iterations. Then for next
iteration
A
I+1
= R
I
Q
I
= Q
T
I
A
I
Q
I
(580)
and
A
I+1
and Q
1
Q
2
...Q
I
as / (581)
The algorithm is as follows
. For / = 1, ...(for each iteration)
1. For i = 1, (for each column)
C. For , = i + 1, (for each value below diagonal)
C.1 Obtain P such that
i)
is zeroed (*)
C.2 Obtain A PA (R shares storage with A)
C.3 Obtain Q
I
Q
I
P
0
1 c:d End loops i, ,
.2 Compute A AQ
I
.3 Compute Q
I
.4 Check convergence, exit if obtained.
(582)
(*) = use any transformation!
Exercise 42 Example 43 Program the QR algorithm using Jacobi-Givens trans-
formations in your favorite language.
Solution. The source code in Matlab follows. We did not optimize it for readability.
We leave the user the task of optimizing it.
function [V,A] = qrgivens(A,tol)
%*** [V,A] = qrgivens(A,tol)
% function to compute the eigenvalues and eigenvectors
% of the standard eigenvalue problem
- 162-
6.6 The QR decomposition and algorithm
% using the QR algorithm with Jacobi-Givens rotations
% AV = VD; tol = tolerance in the eigenvalues
%
n = size(A,1); % size of the problem
err = 100.*tol; % the error in the eigenvalues
V = eye(n); % eigenvectors, initially identity
Aold = A; % initialization of D
S = 0; % sweep counter
while (err > tol), % loop on QR-iterations
Q = eye(n); % rotation matrix, initially the identity
for i = 1: n, % this loop accounts for each column
for j = i+1:n, % all off-diagonal terms below diagonal
if (A(i,i) == A(j,j)),
c = 1/sqrt(2); s = 1/sqrt(2);
else,
t = 2*A(i,j)/(A(i,i)-A(j,j)); t = 0.5*atan(t);
c = cos(t); s = sin(t);
end,
% vvvvv this is usually optimized in a program
P = eye(n); % Givens transf. matrix
P(i,i) = c; P(j,j) = c; P(i,j) = s; P(j,i) = -s;
A = P*A; % This is R
Q = Q*P; % Eigenvectors
% ^^^^^this is usually optimized in a program
end,
end,
A = A*Q; % symmetric matrix RQ (eigenvalues)
V = V*Q; % eigenvector update
for j = 1:n, % looks for maximum error in eigenvalues
errj = abs(A(j,j) - Aold(j,j)) / abs(A(j,j));
if (j == 1),
err = errj;
else,
if (errj > err), err = errj; end,
end
end,
S = S+1;
disp([QR iter= ,num2str(S), err= , num2str(err)]);
Aold = A;
end,
return,
Exercise 44 Example 45 Apply the program developed in Example 43 to the ma-
trix of Exercises 22 and 39. Compare the results to those of the Jacobi subroutine.
Solution. The output of both functions follow.
Givens-QR iterations
- 163-
6 Computational algorithms for eigenvalue and eigenvector
extraction
QR iter= 1 err= 3.8757
QR iter= 2 err= 0.29841
QR iter= 3 err= 5.1019e-005
QR iter= 4 err= 6.0636e-015
V =
0.7765 0.2822 -0.5634
-0.4693 0.8556 -0.2182
0.4205 0.4339 0.7968
D =
1.2679 0.0000 0
-0.0000 0.0561 0.0000
-0.0000 -0.0000 0.7280
Jacobi iterations
sweep = 1 error = 4.5341
sweep = 2 error = 0.144
sweep = 3 error = 1.861e-006
V =
0.7765 0.2822 -0.5634
-0.4693 0.8556 -0.2182
0.4205 0.4339 0.7968
D =
1.2679 -0.0000 0.0000
-0.0000 0.0561 0.0000
0.0000 -0.0000 0.7280
6.7 Jacobi method for the generalized eigenvalue problem
The generalized eigenvalue problem can be solved in a similar way as the standard
problem. However, in this case we solve the following equation
K = M (583)
which is pre-multiplied by
T

T
K =
T
M (584)
Again, we approximate by successive transformations

I+1
= P
1
P
2
...P
I
P
I+1
(585)
and intend to apropiately normalize the vectors so
T
M = I. Then

T
I+1
K
I+1
=
T
I+1
M
I+1
(586)
i.e, ideally
K
I+1
and M
I+1
I and
I+1
as / (587)
- 164-
6.7 Jacobi method for the generalized eigenvalue problem
In this case, the intention of transformations is to annihilate both o-diagonal terms
of both K and M, and are of the form
P
I
=
ith ,th col

1
1 a
1
/ 1
1

ith row
,th row
(588)
Note that the transformation matrix P
I
of the standard Jacobi algorithm has only
one independent variable, 0, since both c and : are related to each other by 0. In this
case we have two independent variables a and /, to be determined from conditions
that both 1
i)
and '
i)
(i 6= ,) become zero. Applying the transformation to any
matrix A, we obtain, for the task of annihilating terms
24
P
T
I+1
A
I
P
I+1
=

1 0 0 0
0 1 0 /
0 0 1 0
0 a 0 1

11

12

13

14

12

22

23

24

13

23

33

34

14

24

34

44

1 0 0 0
0 1 0 a
0 0 1 0
0 / 0 1

11

12
+/
14

12
+/
14

44
/
2
+ 2
24
/ +
22

13

23
+/
34

14
+a
12
a
22
+/
44
+ (1 +a/)
24
...
...

13

14
+a
12

23
+/
34
a
22
+/
44
+ (1 +a/)
24

33

34
+a
23

34
+a
23

22
a
2
+ 2
24
a +
44

(589)
i.e. we want to impose
a
22
+/
44
+ (1 +a/)
24
= 0 (590)
Hence, for A being K and M, we respectively obtain the equations

a1
22
+/1
44
+ (1 +a/) 1
24
= 0
a'
22
+/'
44
+ (1 +a/) '
24
= 0
(591)
which is a system of two equations with two unknowns (a and /). This system may
be written as

a1
22
+/1
44
= (1 +a/) 1
24
a'
22
+/'
44
= (1 +a/) '
24
(592)
i.e.
a1
22
+/1
44
a'
22
+/'
44
=
1
24
'
24
(593)
- 165-
6 Computational algorithms for eigenvalue and eigenvector
extraction
hence
a(1
22
'
24
'
22
1
24
)
| {z }
1
+/(1
44
'
24
'
44
1
24
)
| {z }
J
= 0 / =
1
J
a (594)
which can be substituted in one of the other equations
a1
22

1
J
a1
44
=

1
1
J
a
2

1
24
(595)
i.e.
1
J
1
24
a
2
+

1
J
1
44
1
22

| {z }
=
1
24
J
(1
22
'
44
'
22
1
44
)
| {z }
=1
a 1
24
= 0 (596)
Thus
a
2
+1a J = 0 (597)
and in a similar way
/
2
+1/ +1 = 0
The solution to this system is
a =
J
C
and / =
1
C
(598)
where in general
1 = 1
ii
'
i)
'
ii
1
i)
J = 1
))
'
i)
'
))
1
i)
1 = 1
ii
'
))
1
))
'
ii
and
C =
1
2
+:iq:(1)
r
1
2
4
+1J (599)
The discriminant 1
2
,4 + 1J is positive for positive-denite matrices. In the un-
common case that both K and M have proportional coecients in (i, ,), then we
simply set a = 0 and / = 1
i)
,1
))
.
With the given transformation we have forced the o-diagonal terms of both the
stiness and mass matrices to be zero. However, we have not enforced the diagonal
terms of the mass matrix to be the identity. Hence the eigenvectors and eigenvalues
need to be scaled at the end of the procedure.
Of course, if M = I we recover the standard eigenvalue problem, and this
algorithm results equivalent to the standard Jacobi procedure. We leave the proof
to the reader.
- 166-
6.7 Jacobi method for the generalized eigenvalue problem
Example 46 Program a Generalized Jacobi subroutine in your preferred language.
Solution. This is an example of source code in Matlab. Usually, for eciency, the
P matrix is never explicitly built and the matrix multiplications are performed using
operations of the type (589).
function [V,D] = gjacobi(K,M,tol)
%*** GJacobi
% function to compute the eigenvalues and eigenvectors
% of the generalized eigenvalue problem
% KV = MVD; tol = tolerance in the eigenvalues
%
n = size(K,1); % size of the problem
err = 100.*tol; % the error in the eigenvalues
v = eye(n); % eigenvectors, initially identity
Kold = K; % initialization of Dold
s = 0;
while (err > tol),
for i = 1: n, % this loop accounts for a sweep (all rows)
for j = i+1:n, % all off-diagonal terms of the row
I = K(i,i)*M(i,j) - M(i,i)*K(i,j); % These are
J = K(j,j)*M(i,j) - M(j,j)*K(i,j); % the Jacobi
k = K(i,i)*M(j,j) - K(j,j)*M(i,i); % transformation
C = k/2 + sign(k)*sqrt(k*k/4 + I*J); % parameters
if (C == 0),
a = 0; b = -K(i,j)/K(j,j);
else,
a = J/C; b = -I/C;
end,
% vvvvv this is usually optimized in a program
P = eye(n); % Jacobi
P(i,j) = a; P(j,i) = b; % transformation matrix
K = P*K*P, % Eigenvalues
M = P*M*P, % Mass-orthogonality check
v = v*P, % Eigenvectors
% ^^^^^this is usually optimized in a program
end,
end,
for j = 1:n,
errj = abs(K(j,j) - Kold(j,j)) / abs(K(j,j));
if (j == 1),
err = errj;
else,
if (errj > err), err = errj; end,
end
end,
- 167-
6 Computational algorithms for eigenvalue and eigenvector
extraction
s = s+1; disp([sweep = ,num2str(s), error = , num2str(err)]);
Kold = K;
end,
for j=1:n, % mass-normalize results
v(:,j) = v(:,j)/sqrt(M(j,j)); % final eigenvectors
D(j) = K(j,j) / M(j,j); % final eigenvalues
end,
[D,indx] = sort(D); % sort eigenvalues
for j=1:n, % sort eigenvectors
V(:,j) = v(:,indx(j));
end,
return,
Example 47 Compute the eigenvalues and eigenvectors of Example 18, i.e.
K =

4 1 0
1 2 1
0 1 1

and M =

4 1 0
1 8 1
0 1 2

using the generalized Jacobi algorithm.


Solution. We will rst annihilate the 1
12
= 1 and '
12
= 1. Then
1 = 1
11
'
12
'
11
1
12
= 4 1 4 (1) = 8 (600)
J = 1
22
'
12
'
22
1
12
= 2 1 8 (1) = 10 (601)
1 = 1
11
'
22
1
22
'
11
= 4 8 2 4 = 24 (602)
and
C =
1
2
+:iq:(1)
r
1
2
4
+1J = 12 +
r
24
2
4
+ 80 = 26. 967 (603)
a =
J
C
=
10
26. 967
= 0.370 8; / =
1
C
=
8
26. 967
= 0.296 7 (604)
Thus
P
1
=

1 0.370 8 0
0.296 7 1 0
0 0 1

(605)
and
K
1
= P
T
1
KP
1
=

4. 769 5 0 0.296 7
0 1. 808 4 1.0
0.296 7 1.0 1.0

(606)
and
M
1
= P
T
1
MP
1
=

4. 110 8 0 0.296 7
0 9. 291 6 1.0
0.296 7 1.0 2.0

(607)
If we want to annihilate 1
13
and '
13
, then
1 = 1
11
'
13
'
11
1
13
= 2. 634 8 (608)
J = 1
33
'
13
'
33
1
13
= 0.890 1 (609)
1 = 1
11
'
33
1
33
'
11
= 5. 428 2 (610)
- 168-
6.7 Jacobi method for the generalized eigenvalue problem
and
C =
1
2
+:iq:(1)
r
1
2
4
+1J = 5. 830 4 (611)
a =
J
C
=
0.890 1
5.830 4
= 0.152 67; / =
1
C
=
2. 634 8
5. 830 4
= 0.451 91 (612)
Thus, the second transformation matrix is
P
2
=

1 0 0.152 67
0 1 0
0.451 91 0 1

(613)
and
K
2
= P
T
2
K
1
P
2
=

5. 242 0.452 0
0.452 1. 808 1.0
0 1.0 1. 021

(614)
M
2
= P
T
2
M
1
P
2
=

4. 251 1 0.451 91 0
0.451 91 9. 291 6 1.0
0 1.0 2. 186 4

(615)
To annihilate 1
23
and '
23
, we proceed the same way
1 = 1
22
'
23
'
22
1
23
= 11. 100 (616)
J = 1
33
'
23
'
33
1
23
= 3. 207 4 (617)
1 = 1
22
'
33
1
33
'
22
= 5. 533 7 (618)
C =
1
2
+:iq:(1)
r
1
2
4
+1J = 9. 343 9 (619)
a =
J
C
=
3. 207 4
9. 343 9
= 0.343 26; / =
1
C
=
11. 100
9. 343 9
= 1. 187 9 (620)
so the third transformation matrix which concludes the sweep is
P
3
=

1 0 0
0 1 0.343 26
0 1. 187 9 1

(621)
and the stiness and mass matrices after a sweep are
K
3
= P
T
3
K
2
P
3
=

5. 242 0.452 0.155


0.452 0.872 94 0
0.155 0 1. 921

(622)
M
3
= P
T
3
M
2
P
3
=

4. 251 1 0.451 91 0.155 12


0.451 91 14. 753 0
0.155 12 0 2. 594 7

(623)
The approximation to the eigenvectors after the rst sweep is

3
= P
1
P
2
P
3
=

1.0 0.189 0.280


0.297 1. 054 0.298
0.452 1. 188 1.0

(624)
- 169-
6 Computational algorithms for eigenvalue and eigenvector
extraction
After just three sweeps, the problem converged to a tolerance of 10
11
in the eigen-
values, obtaining
K
9
=

5.5032 0 0
0 0.8343 0
0 0 1.9610

(625)
M
9
=

4.3396 0 0
0 14.8818 0
0 0 2.6954

(626)

9
=

0.9193 0.2875 0.3994


0.4338 1.0281 0.2520
0.6404 1. 2228 0.9566

(627)
These values must be scaled by the resulting mass matrix
= M
1
9
K
9
=

1. 268 1 0 0
0 0.05606 0
0 0 0.727 5

(628)

9
=

0.4403 0.0745 0.2433


0.2082 0.2665 0.1535
0.3074 0.3170 0.5827

(629)
which is again the same result as computed in Example 18. Subroutine gjacobi of
the previous example returns the eigenvalues (and hence their corresponding eigen-
vectors) sorted from lower to higher.
6.8 Bathes subspace iteration method and Ritz bases.
The subspace iteration method was developed and coined by Bathe in the beginning
of the 1970s. The idea behind the subspace iteration method is to iterate on a re-
duced subspace of dimension such that j, where j is the number of eigenvalues
and eigenvectors to compute but << , with the dimension of the problem.
The brightness of the idea is that it is more robust to iterate in a subspace which
somehow spans the desired eigenvectors (i.e. the iterating vectors span the same
subspace than the combination of the eigenvectors but are not necessarily the eigen-
vectors themselves) than to pretend to simultaneously iterate on the eigenvectors (i.e.
the iterating vectors are 'normalized eigenvectors from the beginning). Thus, it
is basically a nonorthogonal "change of coordinates". The method is powerful when
computing few eigenvalues and eigenvectors of a large system. Furthermore, it is
easy to program and very robust, so it is available in most commercial nite element
codes.
The used (discrete) Rayleigh-Ritz approach
4
consists on considering load vec-
4
The idea is the same as considering the solution of a problem &(r) to be the superposition of
several known functions
&(r) =
n

.=1
o
.
`
.
(r)
where the parameters o. are to be obtained from some basic principle (Principle of Virtual Work
or any similar variational principle). Fourier and Laplace transformations can also be viewed in a
similar way.
- 170-
6.8 Bathes subspace iteration method and Ritz bases.
tors Y = [y
1
, y
2
, ..., y
q
], where the dimension of y
i
is that of the problem ().
Then we can generate a set of displacement vectors

X = [ x
1
, x
2
, ..., x
q
] which can
be considered as a reduced base (a subspace) of the dimensional solution. The
subspace of displacements is generated simply solving the simultaneous systems of
equations
K
|{z}
..

X
|{z}
.q
= Y
|{z}
.q
(630)
The cost of solving this equation is mainly that of the matrix factorization, which
is performed only once. Then, the displacements u (and modes ) can be approxi-
mately represented using this subspace as
u =
q
X
i=1
j
i
x
i
=

X and
i
=
q
X
)=1

i)
x
)
=

Xq
i
(631)
i.e. in matrix notation
[
1
, ..,
q
]
| {z }

= [ x
1
, .., x
q
]
| {z }

X
[q
1
, .., q
q
]
| {z }
Q

|{z}
.q
=

X
|{z}
.q
Q
|{z}
qq
(632)
The eigenvalue problem is

T
K

= `

T
M

(633)
i.e.
Q
T

X
T
K

X
| {z }

K
Q = Q
T

X
T
M

X
| {z }

M
Q (634)
where we dene the projected stiness and mass matrices on the subspace as

K =

X
T
K

X =

X
T
Y ;

M =

X
T
M

X
| {z }

Y
=

X
T

Y (635)
and the projected eigenvalue problem is

KQ =

MQ (636)
Hence, we have converted the problem of dimension into a projected problem of
dimension . The reduced problem may be solved by any of the available procedures
for small to medium problems. The Generalized Jacobi procedure is one of the most
used ones. Note that the reduced matrices are full even if the original ones are
usually sparse.
Convergence is checked as usual
max
i=1,..,q

`
i(I+1)
`
i(I)

`
i(I+1)

< to|
A
(637)
The lowest eigenvalues converge faster, so if sorted, the convergence check may
be performed on the highest eigenvalue. Additional convergence checks may be
- 171-
6 Computational algorithms for eigenvalue and eigenvector
extraction
performed on the eigenvectors if desired (usually once eigenvalue convergence has
been achieved)
kK
i
`
i
M
i
k
kK
i
k
< to|

(638)
where for example to|
A
= (to|

)
2
. Finally, we recall that the lowest eigenvectors are
found only in case the initial subspace is not perpendicular to the desired eigenvec-
tors. In order to guarantee that all j lowest eigenvalues and eigenvectors have been
found, we perform a swift with
j = (1 +c) `
j
< `
j+1
(639)
where c 0 is a small tolerance such that takes into account numerical errors in
the computation of the eigenvalues but at the same time does not make j larger
that the next eigenvalue, say c = (`
j+1
`
j
) ,100. Then, as explained in Section
6.1.4, a LDL
T
(or a Crout or Doolittle) factorization is performed so the number
of negative terms in the diagonal of D is the actual number of eigenvalues less that
`
j
. This number should be coincident with the expected number.
The algorithm is as follows
. Perform factorization K = LDL
T
1. For / = 1, ...with Y
0
a starting vector set
1.1. K

X
I+1
= Y
I
(solve for

X
I+1
; this is basically an inverse iteration)
1.2. Compute projected stiness

K
I+1
=

X
T
I+1
Y
I
( =

X
T
I+1
K

X
I+1
)
1.3. Compute mass-scaled vectors

Y
I+1
= M

X
I+1
1.4. Obtain projected mass matrix

M
I+1
=

X
T
I+1

Y
I+1
( =

X
T
I+1
M

X
I+1
)
1.5. Solve reduced eigenvalue problem

K
I+1
Q
I+1
=

M
I+1
Q
I+1

I+1
1.6. Convergence check for eigenvalues

`
i(I+1)
`
i(I)

`
i(I+1)

< to|
A
1.7. If no convergence, update loads Y
I+1
=

Y
I+1
Q
I+1
and go to step 1
C. If convergence, Sturm check: K
j
= KjM; K
j
= LDL
T
;
1. If eigenvalues are missing, restart procedure with new Y
0
set.
(640)
In this algorithm K is factorized only once at the beginning.
Up to now we have said nothing about the starting subspace X
1
or Y
1
. The
vectors Y
1
can be seen as load (inertial) vectors. The better guess on these vectors,
the faster the convergence. Since we are interested on the lowest eigenvectors, we
should include as (inertial) load vectors those with a relevant component on the
degrees of freedom with larger ratios between mass and stiness. However, at the
same time we should also "move" the whole structure. Using this reasoning, Bathe
recommends to use a starting load matrix Y = [y
1
, y
2
, ..., y
q
] such that the rst
vector contains all entries equal to one, the last vector contains random numbers
and the remaining vectors have all entries zero except for a one in a selected degree of
- 172-
6.8 Bathes subspace iteration method and Ritz bases.
freedom with high mass-to-stiness ratio ('
ii
,1
ii
). The selected degree of freedom
should have a large '
ii
,1
ii
value but at the same time be far from the other ones,
so the unit entries are reasonably distributed along the structure. Note that the
recommendations are similar to those given for selecting master DOF in the Guyan
model order reduction method
The other recommendation from Bathe, based on his experience, is to use a
number of vectors in the subspace such that
= min(2j, j + 8) (641)
The reason is that the larger , the faster the convergence, but at the same time
the more costly each iteration. Hence, this recommendation present a compromise
for typical large structural systems. Usually, convergence to a strict tolerance is
obtained in about 20 subspace iterations.
For extremely large problems (say 100.000), matrix operations become very
costly and several options exist in order to speed-up the algorithm. Many commercial
programs give the user a choice on some of the steps given in (640). Typical choices
are
Change the algorithm to solve the linear system of equations 1.1, for exam-
ple using a Pre-conditioned Conjugate Gradient (PCG) algorithm, which is
iterative and very ecient for large systems and advantage may be taken of
the iterative system to guess the rst iteration vector for the PCG algorithm.
Tolerances may be tuned so time is saved in the rst iterations. In this case
factorization is not needed.
Use a lumped mass matrix in step 1.3 (hence the operation is just a scaling).
Change the eigenvalue algorithm for the reduced space in 1.5
Example 48 Compute the rst two eigenvectors of Example 18, where recall
K =

4 1 0
1 2 1
0 1 1

and M =

4 1 0
1 8 1
0 1 2

(642)
Use j = = 2 and the following starting loads
Y
0
=

0.5 0.3
1.0 0.9
0 1.0

(643)
Solution. We proceed with the algorithm given in (640)

X
1
= K
1
Y
I
=

0.5 0.533 3
1.5 2.433 3
1.5 3.433 3

(644)

K
1
= X
T
1
Y
0
=

1.75 2.7
2.70 5.46

(645)
- 173-
6 Computational algorithms for eigenvalue and eigenvector
extraction

Y
1
= M

X
1
=

3.5 4.57
14.0 23.43
4. 5 9. 30

(646)

M
1
=

X
T
1

Y
1
=

29.50 51.38
51.38 91.38

(647)
The eigenvalues and eigenvectors are (compute them in your favorite way)
`
1
= 0.0561 and `
2
= 0.7277 (648)
Q
1
=

0.0953 1.2778
0.0507 0.7262

(649)
The eigenvalues are already very close to the exact ones, whereas the eigenvectors
are

1
= M
1
Y = M
1

Y Q
1
=

0.0747 0.2509
0.2663 0.1500
0.3170 0.5768

(650)
which are also very close to the solution. The reason that we obtained such a good
approximation in just one iteration is that the initial load vector spanned a sub-
space very close to that of the eigenvectors, even though they were not close to the
eigenvectors themselves and not even M perpendicular. In order to check that we
actually obtained the 2 lowest eigenvalues, we perform a shift
K
j
= K (`
2
1.01)
| {z }
j
M =

1.060 1.735 0
1.735 3.880 1.735
0 1.735 0.469 95

(651)
which LU factorization is
K
j
= LU =

1.0 0 0
1. 636 8 1.0 0
0 0.258 19 1.0

1. 06 1. 735 0
0 6. 719 8 1. 735
0 0 0.0220

(652)
Since the number of negative terms in the diagonal of U are just 2, this means that
only 2 eigenvalues are below 1.01`
2
.
Of course if we had not been so fortunate with the starting vectors, more iterations
would have been needed.
Example 49 Program the subspace iteration method in your preferred computer
language.
Solution. The source code in Matlab is as follows. It uses function gjacobi of
Example 46. We leave to the reader to use the code in a large problem and to see
that a solution is generally obtained in few iterations.
- 174-
6.8 Bathes subspace iteration method and Ritz bases.
function [V,D] = bathesubs(K,M,Y,p,tol)
% **** [V,D] = bathesubs(K,M,Y,p,tol)
% function to compute p eigenvalues and eigenvectors
% of the generalized eigenvalue problem
% using Bathes subspace iteration method
% K = stiffness matrix
% M = mass matrix
% p = number of eigenvalues-eigenvectors
% q = dimension of subspace, for example = min(2p,p+8)
% tol = tolerance on the eigenvalues
% Y = subspace, given initialized
U = chol(K); % Cholesky decomposition
err = 100*tol; % initialization
N = size(K,1); % dimension of the problem
q = size(Y,1); % dimension of the subspace
if (p>q | p>N | q>N), disp(Error in dimensions); return, end
Dold = ones(p,1); % initialization
itmax = 100; % maximum number of iterations
for i=1:itmax,
Xbar = U\Y; % forward reduction
Xbar = U\Xbar; % backsubstitution
Kbar = Xbar*Y; % projected stiffness matrix
Ybar = M*Xbar; % inertia load vector
Mbar = Xbar*Ybar; % projected mass matrix
% compute eigenvals and eigenvects of reduced problem
[Q,D] = gjacobi(Kbar,Mbar,tol);
for j = 1:p,
errj = abs(D(j)-Dold(j))/abs(D(j)); % compute errors
if (j == 1),
err = errj;
else,
if (errj > err), err = errj; end,
end
end,
Y = Ybar*Q; % improve load vectors
Dold = D; % save old eigenvalues for checking
if (err < tol), break; end, % exits if convergence reached
disp([ ** ITER = ,num2str(i), err = ,num2str(err)]);
end,
V = M\Y; %eigenvectors
mu = D(p) * 1.01; % Sturm sequence check:
Kmu = K - mu * M; % shift
[L,U] = lu(Kmu); % LU decomposition
neigs = 0; % number of eigenvalues (initialization)
for j=1:N
if(U(j,j) < 0.), neigs = neigs + 1; end
- 175-
6 Computational algorithms for eigenvalue and eigenvector
extraction
end,
if (neigs < p),
disp([Only ,num2str(neigs),<,num2str(p), evals found]);
disp([Change starting subspace]);
end,
return
Example 50 Use the previous subspace iteration code to solve the eigenvalues and
eigenvectors of the structure of Example 16, page 103. Repeat the procedure com-
puting only 2 eigenvalues and eigenvectors.
Solution: Using the subspace iteration method of the noted, the following result
is obtained if four modes are seek (because all modes are wanted, the procedure
converges in just one subspace iteration: it is identical to the generalized Jacobi
procedure)
>> [V,D] = bathesubs(K,M,Y,4,0.0000001)
sweep = 1 error = 14.3552
sweep = 2 error = 0.57577
sweep = 3 error = 0.0040998
sweep = 4 error = 1.0306e-007
sweep = 5 error = 0
** ITER = 1 err = 0.99622
sweep = 1 error = 0
V =
0.0460 0.2279 0.4512 0.4923
0.1334 0.5076 0.4897 -0.6963
0.3356 0.7165 -0.5863 0.1743
0.6578 -0.2501 0.0684 -0.0083
D =
5.8781 46.3710 114.8894 264.8615
If only two modes are wanted, then the iterations are:
>> [V,D] = bathesubs(K,M,Y,2,0.0000001)
sweep = 1 error = 0.39849
sweep = 2 error = 0
** ITER = 1 err = 0.98282
sweep = 1 error = 0.0061282
sweep = 2 error = 0
** ITER = 2 err = 0.23545
sweep = 1 error = 9.5609e-007
sweep = 2 error = 0
** ITER = 3 err = 0.01428
sweep = 1 error = 2.4376e-010
** ITER = 4 err = 0.001494
- 176-
6.9 Lanczos method
sweep = 1 error = 9.2034e-014
** ITER = 5 err = 0.00021751
sweep = 1 error = 0
** ITER = 6 err = 3.4638e-005
sweep = 1 error = 0
** ITER = 7 err = 5.6183e-006
sweep = 1 error = 0
** ITER = 8 err = 9.145e-007
sweep = 1 error = 0
** ITER = 9 err = 1.4895e-007
sweep = 1 error = 0
V =
0.0460 0.2279
0.1334 0.5076
0.3356 0.7165
0.6578 -0.2501
D =
5.8781 46.3710
6.9 Lanczos method
Lanczos method was proposed in 1950 as a method to compute the most extreme
(lowest or highest) eigenvalues and as a method to tri-diagonalize a matrix. As
mentioned previously (see Section 6.1.5), it is a method which uses a Krylov subspace
to compute a Ritz base which reduces the problem to a smaller subspace or order
<< . The resulting problem is a dimensional standard eigenvalue problem
which is tridiagonal and can be eciently computed using, for example, the Q1
iteration method. In this section we explain the basic formulation of the method.
However, we note that Lanczos methods are sensible to round-o errors and loss of
orthogonality of the vectors during iterations. Hence, an ecient method needs a
lot of "tips and tricks" in order to make it work for large problems. This is the main
reason because it was not extensively used in structural mechanics until the 1980s.
And it is the reason why most books recommend to use canned (and ne-tuned)
subroutines as LAPACK.
The generalized eigenvalue problem is
K
i
= `
i
M
i
(653)
Usually, the method is carried out in several stages, each stage having a shift on the
problem, generating the shifted eigenvalue problem
(KjM)
| {z }
K
j

i
= `
j
i
M
i
(654)
where
`
i
= `
j
i
+j (655)
- 177-
6 Computational algorithms for eigenvalue and eigenvector
extraction
As mentioned, the Lanczos procedure uses the Krylov sequence as the bases to form
a Ritz subspace. The Krylov sequence for the shifted generalized eigenvalue problem
is given by the inverse iteration methodsee Section 6.3, Eq. (543)
x
I+1
= K
1
j
Mx
I
(656)
Hence, the Krylov sequence is
K
I

K
1
j
M, x

= :ja:
n
x
1
, K
1
j
Mx
1
,

K
1
j
M

2
x
1
, ...,

K
1
j
M

I1
x
1
o
(657)
For simplicity of the exposition, we rename these Krylov vectors as {k
i
}, i = 1, ..., /;
i.e.
k
i
=

K
1
j
M

i1
x =

K
1
j
M

k
i1
(658)
We wish to use these vectors to generate an Morthogonal basis of Lanczos vectors
L
I
= {q
1
, q
2
, ..., q
I
} Q
I
= [q
1
, q
2
, ..., q
I
] (659)
such that
q
T
i
Mq
)
= c
i)
Q
T
I
MQ
I
= I (660)
Assume that we have already done that (i.e., we know Q
I
), and we want to generate
the next Lanczos vector q
I+1
. We generate this vector from k
I+1
, orthonormalizing
it to the previous vectors. We note that any vector r may be written in terms of
the Lanczos vectors as
r = r +
I
X
i=1
j
i
q
i
(661)
where j
i
are the components of the vector on L
I
and r is the remaining part (dened
as r = r
P
I
i=1
j
i
q
i
) which is normal to all Lanczos vectors. The new Krylov vector
may also be written as
k
I+1
=

k
I+1
+
I
X
i=1
i
i
q
i
(662)
where i
i
are the components on that base and

k
I+1
is the remaining part, which
is normal to all Lanczos vectors up to /. We want the new vector q
I+1
(to be
computed) to have the direction of

k
I+1
(which we know once k
I+1
is computed as

k
I+1
= k
I+1

P
I
i=1

k
T
I+1
q
i

q
i
) so we can write
k
I+1
=
I+1
X
i=1
i
i
q
i
(663)
- 178-
6.9 Lanczos method
Then using Eq. (658)
k
I+1
=

K
1
j
M

k
I
=
I
X
i=1
i
i

K
1
j
M

q
i
= i
I

K
1
j
M

q
I
+
I1
X
i=1
i
i

K
1
j
M

q
i
| {z }
r
i
(664)
= i
I
r
I
+
I1
X
i=1
i
i
r
i
=
I
X
i=1
i
i
r
i
(665)
where we have renamed
r
i
=

K
1
j
M

q
i
(666)
Looking at Eq. (661), we see that each vector $\mathbf{r}_i$ can itself be written in terms of the
Lanczos vectors $\mathbf{q}_j$, assuming that $\bar{\mathbf{r}}_i = \rho_{i+1}\mathbf{q}_{i+1}$ (a fact that will become clear later and
that is the purpose of this algebra)
$$\mathbf{r}_i = \sum_{j=1}^{i+1}\rho_j\mathbf{q}_j \quad (667)$$
so
$$\mathbf{k}_{k+1} = \underbrace{\alpha_k\left(K_{\mu_j}^{-1}M\right)\mathbf{q}_k}_{\bar{\mathbf{k}}_{k+1}} + \sum_{j=1}^{k}\rho_j\mathbf{q}_j \quad (668)$$
where $\rho_j$ are some coefficients. If we $M$-orthonormalize $\mathbf{k}_{k+1}$ with respect to $L_k$
(the previous $\mathbf{q}_j$, $j = 1, \ldots, k$ Lanczos vectors) and deflate it, we are left with $\alpha_k\left(K_{\mu_j}^{-1}M\right)\mathbf{q}_k$:
$$\bar{\mathbf{k}}_{k+1} = \mathbf{k}_{k+1} - \sum_{i=1}^{k}\left(\mathbf{q}_i^T M\mathbf{k}_{k+1}\right)\mathbf{q}_i = \alpha_k\left(K_{\mu_j}^{-1}M\right)\mathbf{q}_k = \alpha_k\mathbf{r}_k \quad (669)$$
i.e. $\mathbf{r}_k$ has the same direction as $\bar{\mathbf{k}}_{k+1}$, which we want to be the direction of $\mathbf{q}_{k+1}$.
Recalling again (661) and renaming $\alpha_k = \rho_k$, $\beta_k = \rho_{k-1}$, $\gamma_k = \rho_{k-2}$
$$\mathbf{r}_k = \bar{\mathbf{r}}_k + \alpha_k\mathbf{q}_k + \beta_k\mathbf{q}_{k-1} + \gamma_k\mathbf{q}_{k-2} + \ldots \quad (670)$$
If we $M$-orthogonalize $\mathbf{r}_k$ with respect to $\mathbf{q}_k$
$$\mathbf{q}_k^T M\mathbf{r}_k = \underbrace{\mathbf{q}_k^T M\bar{\mathbf{r}}_k}_{=0} + \alpha_k\underbrace{\mathbf{q}_k^T M\mathbf{q}_k}_{=1} + \underbrace{\beta_k\mathbf{q}_k^T M\mathbf{q}_{k-1} + \gamma_k\mathbf{q}_k^T M\mathbf{q}_{k-2} + \ldots}_{=0} \quad (671)$$
so
$$\alpha_k = \mathbf{q}_k^T M\mathbf{r}_k \;\left(= \mathbf{q}_k^T M K_{\mu_j}^{-1}M\mathbf{q}_k\right) \quad (672)$$
and if we $M$-orthogonalize $\mathbf{r}_k$ with respect to $\mathbf{q}_{k-1}$
$$\mathbf{q}_{k-1}^T M\mathbf{r}_k = \underbrace{\mathbf{q}_{k-1}^T M\bar{\mathbf{r}}_k}_{=0} + \alpha_k\underbrace{\mathbf{q}_{k-1}^T M\mathbf{q}_k}_{=0} + \beta_k\underbrace{\mathbf{q}_{k-1}^T M\mathbf{q}_{k-1}}_{=1} + \underbrace{\gamma_k\mathbf{q}_{k-1}^T M\mathbf{q}_{k-2} + \ldots}_{=0} \quad (673)$$
so
$$\beta_k = \mathbf{q}_{k-1}^T M\mathbf{r}_k = \mathbf{q}_{k-1}^T M\left(K_{\mu_j}^{-1}M\right)\mathbf{q}_k = \underbrace{\left(\mathbf{q}_{k-1}^T M K_{\mu_j}^{-1}\right)}_{\mathbf{r}_{k-1}^T}M\mathbf{q}_k \quad (674)$$
Hence
$$\beta_k = \mathbf{r}_{k-1}^T M\mathbf{q}_k \quad (675)$$
Now going back to Eq. (670), written for step $k-1$,
$$\mathbf{r}_{k-1} = \bar{\mathbf{r}}_{k-1} + \alpha_{k-1}\mathbf{q}_{k-1} + \beta_{k-1}\mathbf{q}_{k-2} + \gamma_{k-1}\mathbf{q}_{k-3} + \ldots \quad (676)$$
so
$$\beta_k = \mathbf{r}_{k-1}^T M\mathbf{q}_k = \bar{\mathbf{r}}_{k-1}^T M\mathbf{q}_k + \underbrace{\alpha_{k-1}\mathbf{q}_{k-1}^T M\mathbf{q}_k + \beta_{k-1}\mathbf{q}_{k-2}^T M\mathbf{q}_k + \ldots}_{=0} \quad (677)$$
Thus
$$\mathbf{q}_k = \frac{\bar{\mathbf{r}}_{k-1}}{\sqrt{\bar{\mathbf{r}}_{k-1}^T M\bar{\mathbf{r}}_{k-1}}} \quad (678)$$
and
$$\beta_k = \sqrt{\bar{\mathbf{r}}_{k-1}^T M\bar{\mathbf{r}}_{k-1}} \quad (679)$$
which is the preferred alternative. Thus we obtain from (678) and (679) what we
wanted: $\bar{\mathbf{r}}_k = \beta_{k+1}\mathbf{q}_{k+1}$. Then, if we $M$-orthogonalize $\mathbf{r}_k$ with respect to $\mathbf{q}_{k-2}$,
we obtain
$$\gamma_k = \mathbf{q}_{k-2}^T M\mathbf{r}_k \quad (680)$$
and proceeding as with Eq. (677) we obtain
$$\gamma_k = \mathbf{r}_{k-2}^T M\mathbf{q}_k = \underbrace{\bar{\mathbf{r}}_{k-2}^T M\mathbf{q}_k}_{=0} + \underbrace{\alpha_{k-1}\mathbf{q}_{k-2}^T M\mathbf{q}_k + \beta_{k-1}\mathbf{q}_{k-3}^T M\mathbf{q}_k + \ldots}_{=0} = 0 \quad (681)$$
The term $\bar{\mathbf{r}}_{k-2}^T M\mathbf{q}_k$ is zero because Eq. (678) shows that $\bar{\mathbf{r}}_{k-2}$ is parallel
to $\mathbf{q}_{k-1}$ and $\mathbf{q}_{k-1}^T M\mathbf{q}_k = 0$. Hence, all terms of $\gamma_k$ are zero, and so is $\gamma_k$ itself, which is an
important observation made by Lanczos. This implies that the orthogonalization
needs to be applied only to the previous two vectors.
Now that the coefficients of (670) have been determined, we can write Eq. (678),
using Eq. (679), as
$$\bar{\mathbf{r}}_k = \beta_{k+1}\mathbf{q}_{k+1} \quad (682)$$
but since from Equations (666) and (670)
$$\mathbf{r}_k = K_{\mu_j}^{-1}M\mathbf{q}_k = \bar{\mathbf{r}}_k + \alpha_k\mathbf{q}_k + \beta_k\mathbf{q}_{k-1} \quad (683)$$
we obtain
$$\bar{\mathbf{r}}_k = K_{\mu_j}^{-1}M\mathbf{q}_k - \alpha_k\mathbf{q}_k - \beta_k\mathbf{q}_{k-1} \quad (684)$$
For all Lanczos steps $i = 1, \ldots, k$, we can write the equations for $\bar{\mathbf{r}}_i$ in the
following matrix format (we leave the verification details to the reader)
$$K_{\mu_j}^{-1}M\,Q_k - Q_k T_k = \bar{\mathbf{r}}_k\mathbf{e}_k^T \quad (685)$$
where $T_k$ is the Lanczos tridiagonal matrix
$$T_k = \begin{bmatrix}\alpha_1 & \beta_2 & & & \\ \beta_2 & \alpha_2 & \beta_3 & & \\ & \beta_3 & \alpha_3 & \ddots & \\ & & \ddots & \ddots & \beta_k \\ & & & \beta_k & \alpha_k\end{bmatrix} \quad (686)$$
and
$$\mathbf{e}_k = [0, 0, 0, \ldots, 0, 1]^T \quad (687)$$
Pre-multiplying Eq. (685) by $Q_k^T M$
$$Q_k^T M K_{\mu_j}^{-1}M\,Q_k - \underbrace{Q_k^T M Q_k}_{=I}\,T_k = \underbrace{Q_k^T M\bar{\mathbf{r}}_k}_{=\mathbf{0}}\,\mathbf{e}_k^T \quad (688)$$
where $Q_k^T M Q_k = I$ because of the $M$-orthonormality of the Lanczos vectors and
$Q_k^T M\bar{\mathbf{r}}_k = \mathbf{0}$ because $\bar{\mathbf{r}}_k$ is parallel to $\mathbf{q}_{k+1}$, see Eq. (682). Then
$$T_k = Q_k^T M K_{\mu_j}^{-1}M\,Q_k \quad (689)$$
Now note that the generalized eigenvalue problem may be written as
$$\frac{1}{\lambda_i^{\mu_j}}\boldsymbol{\phi}_i = K_{\mu_j}^{-1}M\boldsymbol{\phi}_i \quad (690)$$
so pre-multiplying by $Q_k^T M$
$$\frac{1}{\lambda_i^{\mu_j}}Q_k^T M\boldsymbol{\phi}_i = Q_k^T M K_{\mu_j}^{-1}M\boldsymbol{\phi}_i \quad (691)$$
and using the Ritz subspace transformation
$$\boldsymbol{\phi}_i = Q_k\boldsymbol{\psi}_i \quad (692)$$
we have
$$\frac{1}{\lambda_i^{\mu_j}}\underbrace{Q_k^T M Q_k}_{=I}\boldsymbol{\psi}_i = \underbrace{Q_k^T M K_{\mu_j}^{-1}M Q_k}_{T_k}\boldsymbol{\psi}_i \quad (693)$$
i.e.
$$\frac{1}{\lambda_i^{\mu_j}}\boldsymbol{\psi}_i = T_k\boldsymbol{\psi}_i \quad (694)$$
Hence the eigenvalues of $T_k$ are the reciprocals of those of $K_{\mu_j}\boldsymbol{\phi}_i = \lambda_i^{\mu_j}M\boldsymbol{\phi}_i$, and
the eigenvectors of both problems are related by the Ritz approximation (692). The
eigenvalues of $T_k$ may be computed using any algorithm. However, the QR algorithm
is very efficient for this type of matrix.
The Lanczos algorithm for the tridiagonalization (one Lanczos stage) is given in
the following frame. Note that $\mathbf{q}_k$ plays the role of $\mathbf{x}_k$ in the inverse iterations and
$\bar{\mathbf{r}}_k$ the role of $\bar{\mathbf{x}}_k$.
Lanczos iterations (one stage)
A. Initialization. Given a starting vector $\bar{\mathbf{r}}_0$: set $\mathbf{q}_0 = \mathbf{0}$; $\beta_1 = \sqrt{\bar{\mathbf{r}}_0^T M\bar{\mathbf{r}}_0}$; $\mathbf{q}_1 = \bar{\mathbf{r}}_0/\beta_1$; $\mathbf{y}_1 = M\mathbf{q}_1$
B. For $k = 1, \ldots$ (until enough vectors are generated)
B.1 Solve $\mathbf{r}_k$ from $K_{\mu_j}\mathbf{r}_k = \mathbf{y}_k$, Eq. (666)
B.2 First orthogonalization: $\mathbf{r}_k = \mathbf{r}_k - \beta_k\mathbf{q}_{k-1}$, Eq. (684)
B.3 $\alpha_k = \mathbf{y}_k^T\mathbf{r}_k$ $\left(= \mathbf{q}_k^T M\mathbf{r}_k\right)$, Eq. (672)
B.4 Second orthogonalization: $\bar{\mathbf{r}}_k = \mathbf{r}_k - \alpha_k\mathbf{q}_k$, Eq. (684)
B.5 Mass scaling: $\bar{\mathbf{y}}_k = M\bar{\mathbf{r}}_k$
B.6 Next $\beta$ coefficient: $\beta_{k+1} = \sqrt{\bar{\mathbf{y}}_k^T\bar{\mathbf{r}}_k}$ $\left(= \sqrt{\bar{\mathbf{r}}_k^T M\bar{\mathbf{r}}_k}\right)$, Eq. (679)
B.7 If enough vectors have been obtained, exit the loop
B.8 Next Lanczos vector: $\mathbf{q}_{k+1} = \bar{\mathbf{r}}_k/\beta_{k+1}$, Eq. (678)
B.9 Next load vector: $\mathbf{y}_{k+1} = \bar{\mathbf{y}}_k/\beta_{k+1}$
(695)
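As an illustration, the following is a minimal Matlab sketch of one Lanczos stage. It follows the steps of the frame above; it assumes that the shifted matrix Kmu = K - mu*M is symmetric positive definite (so that a Cholesky factorization can be computed once and reused), and it performs no re-orthogonalization or restarts, which a production implementation would need.

function [T,Q,be] = lanczos_stage(Kmu,M,r0,m)
% Minimal sketch of one Lanczos stage (no re-orthogonalization, no restarts).
% Kmu = shifted matrix K - mu*M (assumed symmetric positive definite)
% M   = mass matrix, r0 = starting vector, m = number of Lanczos vectors (m>=2)
n  = length(r0);
Q  = zeros(n,m); al = zeros(m,1); be = zeros(m+1,1);
R  = chol(Kmu);                   % factorize once, reuse at every step
be(1) = sqrt(r0'*(M*r0));         % beta_1
q   = r0/be(1); qm1 = zeros(n,1); % q_1 and q_0 = 0
y   = M*q;                        % y_1 = M*q_1
for k = 1:m
    Q(:,k) = q;
    r = R\(R'\y);                 % solve Kmu*r = y_k, Eq. (666)
    r = r - be(k)*qm1;            % first orthogonalization, Eq. (684)
    al(k) = y'*r;                 % alpha_k = q_k' M r_k, Eq. (672)
    r = r - al(k)*q;              % second orthogonalization, Eq. (684)
    yb = M*r;                     % mass scaling
    be(k+1) = sqrt(yb'*r);        % beta_{k+1}, Eq. (679)
    if k == m, break, end
    qm1 = q;
    q = r/be(k+1);                % next Lanczos vector, Eq. (678)
    y = yb/be(k+1);               % next load vector
end
T = diag(al) + diag(be(2:m),1) + diag(be(2:m),-1); % tridiagonal T_k, Eq. (686)
end

The Ritz approximations of Eqs. (692) and (694) can then be recovered with [S,D] = eig(T); lambda = mu + 1./diag(D); phi = Q*S;, and the convergence test of Eq. (702) below becomes abs(S(end,i))*be(m+1) < tol for each eigenpair i.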
Once the tridiagonal matrix is obtained, its eigenvalues and eigenvectors are computed.
These are an approximation of the desired eigenvalues and eigenvectors
using the Ritz subspace given by the Lanczos vectors. In order to know whether
they are a good approximation, some error bounds are needed. These can be computed
by post-multiplying Eq. (685) by an eigenvector $\boldsymbol{\psi}_i$ of $T_k$
$$K_{\mu_j}^{-1}M\,Q_k\boldsymbol{\psi}_i - Q_k T_k\boldsymbol{\psi}_i = \bar{\mathbf{r}}_k\mathbf{e}_k^T\boldsymbol{\psi}_i \quad (696)$$
$$K_{\mu_j}^{-1}M\,Q_k\boldsymbol{\psi}_i - \frac{1}{\lambda_i^{\mu_j}}Q_k\boldsymbol{\psi}_i = \bar{\mathbf{r}}_k\mathbf{e}_k^T\boldsymbol{\psi}_i \quad (697)$$
$$K_{\mu_j}^{-1}M\boldsymbol{\phi}_i - \frac{1}{\lambda_i^{\mu_j}}\boldsymbol{\phi}_i = \psi_{ik}\,\bar{\mathbf{r}}_k \quad (698)$$
where $\psi_{ik} = \mathbf{e}_k^T\boldsymbol{\psi}_i$ is the $k$-th component of $\boldsymbol{\psi}_i$. Taking the $M$-norm (defined
as $\|\mathbf{v}\|_M = \sqrt{\mathbf{v}^T M\mathbf{v}}$)
$$\left\|K_{\mu_j}^{-1}M\boldsymbol{\phi}_i - \frac{1}{\lambda_i^{\mu_j}}\boldsymbol{\phi}_i\right\|_M = \left|\psi_{ik}\right|\,\|\bar{\mathbf{r}}_k\|_M \quad (699)$$
We note that the left hand side of this equation is the $M$-weighted error in the
eigenvalue problem (690), whereas on the right hand side, using Eq. (682),
$$\|\bar{\mathbf{r}}_k\|_M = \beta_{k+1}\sqrt{\mathbf{q}_{k+1}^T M\mathbf{q}_{k+1}} = \beta_{k+1} \quad (700)$$
Hence
$$\left\|K_{\mu_j}^{-1}M\boldsymbol{\phi}_i - \frac{1}{\lambda_i^{\mu_j}}\boldsymbol{\phi}_i\right\|_M = \left|\psi_{ik}\right|\beta_{k+1} \quad (701)$$
i.e. convergence is reached if
$$\left|\psi_{ik}\right|\beta_{k+1} < tol \quad (702)$$
The algorithm requires several re-orthogonalizations and restarts in order to compute
several eigenvalues and eigenvectors. We do not get involved in such programming
details.
7 Transient analyses in linear elastodynamics
7.1 Introduction
The dynamic equilibrium equation to be solved may be written as
$$\mathbf{f}_M(t) + \mathbf{f}_C(t) + \mathbf{f}_K(t) = \mathbf{f}(t) \quad (703)$$
where
$$\begin{cases}\mathbf{f} = \text{external loads} \\ \mathbf{f}_M = \text{inertia (mass) loads} \\ \mathbf{f}_C = \text{damping loads} \\ \mathbf{f}_K = \text{elastic (stiffness) loads}\end{cases} \quad (704)$$
In a completely nonlinear problem this equation has to be solved by equilibrium
iterations. The inertia terms are usually linear, so
$$\mathbf{f}_M(t) = M\ddot{\mathbf{u}}(t) \quad (705)$$
where $M$ is the constant mass matrix and $\mathbf{u}$ are the finite element global nodal
displacements. However, the elastic and damping terms are frequently nonlinear.
When all terms are linear, especially for proportional Rayleigh damping, mode
superposition analysis is very efficient. Using this technique, if the transient response is
required, each uncoupled ordinary differential equation in time is integrated using one
of the procedures presented in this section or, alternatively, the Duhamel integral is
numerically evaluated. Depending on the frequency content of the load, only a few
modes need to be considered, so modal decomposition analysis pays off even if the modes
and frequencies of the structure need to be obtained prior to the time integration.
When there are some nonlinear terms, modal decomposition is no longer efficient
(the modes change during the analysis), so one of the direct time-integration algorithms
given below needs to be used. A similar situation is found when the spectrum of the
load is wide, because too many modes would have to be considered, probably making
the modal superposition analysis uneconomical.
In any case, as we will see later, when the elastic and/or damping loads are
nonlinear, the equilibrium equations are solved iteratively and the equations are
linearized with respect to the incremental displacements in each iteration. Hence, for
presenting the algorithms we may think of the problem as an incrementally linear
problem. Later, we will make some considerations about nonlinearities. In the linear
case we have $\mathbf{f}_K = K\mathbf{u}$, where $K$ is the stiffness matrix, and $\mathbf{f}_C = C\dot{\mathbf{u}}$, where $C$
is the damping matrix. Hence
$$M\ddot{\mathbf{u}}(t) + C\dot{\mathbf{u}}(t) + K\mathbf{u}(t) = \mathbf{f}(t) \quad (706)$$
is the equation we want to integrate in time.
7.2 Structural dynamics and wave propagation analyses. The Courant
condition
There is really no fundamental difference between structural dynamics and wave
propagation analysis. The difference lies in the frequency content of the loading.
Whereas in structural dynamics the loading spectrum goes from a few Hz to hundreds
of Hz, in wave propagation analysis it goes from thousands of Hz to millions of
Hz. These are typical values, but whether a problem is considered structural dynamics or wave
propagation also depends on the structure: the values are relative to the frequencies or
dimensions of the "structure". The first type of loading comes from earthquake
engineering, engine vibrations, wind loadings, etc. The second type of loading comes
from impacts, blasts, drops, noise propagation, etc. In each oscillation, a wave
dissipates slightly. Thus, low frequency waves dissipate slowly in time and space,
so they travel, are reflected at finite boundaries and come back. High frequency
waves dissipate rapidly in both time and space. Hence, a high frequency wave sees a
large finite structure as an infinite or semi-infinite medium. Finite elements, if truly
"finite", are not able to model infinite media, which, besides, do not have discretely
spaced "modes" (and mode superposition is not possible). However, up to some
frequency, with finite elements we are able to perform some "wave propagation"
analysis. Of course, all these considerations depend on the relative wavelength and
"structure" dimension, and there is no sharp border between the two types of analyses. Sea
waves are considered traveling waves because their wavelength is small compared
to the dimensions of the sea. Earthquake and wind loadings on a structure are considered structural
dynamics, because their wavelength is large when compared to the dimension of the
structure. However, these same loadings are considered traveling waves when the
"structure" is the Earth.
The practical consequence of the previous reasoning is that if we want to perform
a wave propagation analysis, we need to be able to map or represent the wavelength
on the structure. In other words, we need to "capture" the traveling of the wave
between nodes.
Assume that $\lambda_w$ is the wavelength that we wish to capture, and that this
wave travels at a speed $c$ (which depends on the material properties). Then a
complete wavelength passes through a point in a time
$$t_w = \frac{\lambda_w}{c} \quad (707)$$
If we want our finite element spatial discretization to capture this traveling wave, the
distance between nodes must be $L_e \leq \lambda_w$. At the same time, if we want our finite
element time discretization to capture it, we must have $\Delta t \leq t_w$. Of
course, in both cases we usually want to do better, for example
$$\Delta t = \frac{t_w}{s} \quad\text{and}\quad L_e = \frac{\lambda_w}{s} \quad (708)$$
so we capture $s$ points of a wavelength. Typically $s = 10$. The reader will
now see the limitations of the finite element method to perform wave propagation
analysis. If $\lambda_w$ is very small (high frequencies), then the mesh discretization must
be extremely fine, so the number of elements may be very large and the time
step size $\Delta t$ very small. Furthermore, the number of time steps may also be very
large. However, since these waves dissipate fast, the total analysis time is usually very short.
As we will see later, one of the best time marching algorithms for wave propagation
analysis is the central difference method.
The previous reasoning brings us to another consideration. The given $\Delta t$ and $L_e$
are consistent, since they yield the same spatial and time wave discretization, which
are related by the wave speed
$$\Delta t = \frac{t_w}{s} = \frac{\lambda_w}{s\,c} \;\overset{L_e = \lambda_w/s}{\longrightarrow}\; \Delta t = \frac{L_e}{c} \quad (709)$$
When $\Delta t$ is larger than this value we are not able to capture in time the minimum
wavelength that the mesh itself is able to capture. Hence, if we want our time
discretization to be able to capture the waves traveling in the mesh, we need
$$\Delta t \leq \frac{L_e}{c} \quad (710)$$
This condition is usually known as the Courant or CFL (Courant-Friedrichs-Lewy)
condition, and $c\,\Delta t/L_e$ is called the Courant number.
Finally, we note that although in 1D analysis the determination of $L_e$ is obvious,
this is not the case in 2D and 3D analyses, nor when high order elements
or structural elements (beams and plates) are used. In those cases we rely on ad-hoc
"approximate" measures or "bounds", or otherwise on bounds given by element
eigenvalue analysis.
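As a quick numerical illustration, the Courant limit of Eq. (710) for a 1D bar mesh can be estimated with the following Matlab snippet (the material values and element size are just assumed for the example):

% Courant (CFL) limited time step for a 1D bar mesh, Eq. (710)
E   = 210e9;            % Young's modulus [Pa]   (assumed value)
rho = 7800;             % density [kg/m^3]       (assumed value)
Le  = 0.005;            % smallest element length [m] (assumed value)
c   = sqrt(E/rho);      % bar wave speed
dt_cr = Le/c;           % critical time step
fprintf('c = %.0f m/s, dt_cr = %.2e s\n', c, dt_cr);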
7.3 Linear multistep methods. Explicit and implicit algorithms.
Dahlquist theorem.
We focus now on the dynamics equation which we have repeatedly addressed
$$K\mathbf{u} + C\dot{\mathbf{u}} + M\ddot{\mathbf{u}} = \mathbf{f} \quad (711)$$
This is a second order differential equation that may be integrated using many
algorithms. In general
$$\ddot{\mathbf{u}} = M^{-1}\mathbf{f} - M^{-1}C\dot{\mathbf{u}} - M^{-1}K\mathbf{u} \quad (712)$$
so we can write
$$\ddot{\mathbf{u}} = -G\mathbf{u} - D\dot{\mathbf{u}} + \mathbf{h} \quad (713)$$
where $D = M^{-1}C$ and $G = M^{-1}K$ are matrices of constants and $\mathbf{h} = M^{-1}\mathbf{f}$ is
a vector of modified loadings. As we will see later, velocities and accelerations are
obtained using expressions of the type (this is just an example)
$$\dot{\mathbf{u}}_{n+1} = \frac{\mathbf{u}_{n+1}-\mathbf{u}_n}{\Delta t}\,;\qquad \ddot{\mathbf{u}}_{n+1} = \frac{\dot{\mathbf{u}}_{n+1}-\dot{\mathbf{u}}_n}{\Delta t} = \frac{\mathbf{u}_{n+1}-2\mathbf{u}_n+\mathbf{u}_{n-1}}{\Delta t^2} \quad (714)$$
so, multiplying by $\Delta t^2$, Equation (713) can be discretized and set in the following
general form
$$\sum_{i=0}^{k}\left[\alpha_{n+1-i}\mathbf{u}_{n+1-i} + \Delta t\,\beta_{n+1-i}D\mathbf{u}_{n+1-i} + \Delta t^2\gamma_{n+1-i}G\mathbf{u}_{n+1-i} - \Delta t^2\gamma_{n+1-i}\mathbf{h}_{n+1-i}\right] = \mathbf{0} \quad (715)$$
where $k$ is the number of steps involved in the algorithm and $\alpha_{n+1-i}$, $\beta_{n+1-i}$ and
$\gamma_{n+1-i}$ are constants defining the specific algorithm. This expression is the general
expression of a $k$-step linear multistep method (LMS). We will see that most
algorithms used in structural dynamics are LMS methods. Of course, when establishing
Equation (715) some Taylor series truncation errors are introduced (which
are sometimes tediously determined). These truncation errors establish the order
of convergence of the algorithm. For the linear multistep algorithms the truncation
error is typically of second order, i.e. $\Delta t^2 c$, with $c$ being a constant which depends
on the specific algorithm at hand.
Explicit algorithms are those for which $\alpha_{n+1} \neq 0$ and $\beta_{n+1} = \gamma_{n+1} = 0$, because
$\mathbf{u}_{n+1}$ is given explicitly from the displacements of the previous steps $\mathbf{u}_{n+1-i}$,
$i > 0$. Hence in explicit algorithms it is not necessary to solve a system of equations
and the computation of each time step is extremely economical (but, as we will see
later, usually many more steps than in implicit algorithms are necessary). Implicit
algorithms are those for which $\beta_{n+1}$ or $\gamma_{n+1}$ do not vanish and therefore, since the
displacements at time step $n + 1$ are not explicitly known, the system of equations
(715) needs to be solved. The implicit backward difference methods are those in
which $\beta_j = \gamma_j = 0$ for all $j \neq n + 1$. We have written Equation (715) in terms of
displacements, but of course this equation may be written in terms of accelerations
(with different coefficients). We leave this task to the reader as an exercise. Explicit
methods in structural dynamics are also frequently defined (these are "weaker" definitions)
as those which enforce the equation of motion at a known step $n$, whereas
implicit methods are those that enforce the equation of motion at some still unknown
instant of time, for example at $n + 1$ or at $n + \theta$ with $0 < \theta \leq 1$.
Modal superposition may also be applied to equations of the type (715) in order
to decouple the system of equations, so that each equation may be integrated
independently of the others. The only restriction is that $D = M^{-1}C$
must have the same eigenvectors as $G = M^{-1}K$, which happens with Rayleigh or
Caughey damping. Then Eq. (715) may be written as
$$\sum_{i=0}^{k}\left[\alpha_{n+1-i}\,\eta_{j,n+1-i} + 2\Delta t\,\beta_{n+1-i}\,\xi_j\omega_j\,\eta_{j,n+1-i} + \Delta t^2\gamma_{n+1-i}\,\omega_j^2\,\eta_{j,n+1-i} - \Delta t^2\gamma_{n+1-i}\,h_{j,n+1-i}\right] = 0 \quad (716)$$
where $\omega_j^2$ are the eigenvalues of $G = M^{-1}K$ and $2\xi_j\omega_j$ those of $D = M^{-1}C$. The
modal coordinates are $\eta_j$, i.e.
$$\eta_{j,n+1-i} = \boldsymbol{\phi}_j^T\mathbf{u}_{n+1-i} \quad (717)$$
For linear problems, Equations (715) and (716) are completely equivalent and the
problem may be integrated either way. If the problem is solved using Eq. (716),
then the modes need to be computed first. The time step integration is performed
on the decoupled system of equations and then the total displacements $\mathbf{u}_n$ are recovered
using the modal superposition
$$\mathbf{u}_n = \sum_{i=1}^{N}\eta_{i,n}\boldsymbol{\phi}_i \quad (718)$$
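A sketch of how Eqs. (716)-(718) are used in practice follows; the scalar routine integrate_sdof is a hypothetical helper standing for any of the time-stepping schemes of this chapter applied to a single modal equation, and Phi, omega, xi, f, dt and Nsteps are assumed to be available from the modal analysis and the load definition.

% Sketch of modal time integration and recovery, Eqs. (716)-(718)
m   = size(Phi,2);                                   % number of retained modes
eta = zeros(m,Nsteps);                               % modal coordinates
for j = 1:m
    hj = Phi(:,j)'*f;                                % modal load history h_j (1 x Nsteps)
    eta(j,:) = integrate_sdof(omega(j),xi(j),hj,dt); % hypothetical SDOF integrator
end
u = Phi*eta;                                         % recovery of displacements, Eq. (718)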
There is no general guideline about which method is more efficient in linear analysis.
The answer depends on the size of the problem, on the number of modes to consider
and on the number of needed time steps. For large problems, if a few modes are
capable of capturing the response (because of the frequency content of the loads),
then mode superposition is more efficient. If many modes need to be considered,
then the computation of those modes may be a significant computational effort.
In any case, modal superposition allows for a simple understanding of the behavior
of both the structure and the integration algorithms. Hence it is an extensively
used type of analysis.
In Section 5.8 we have seen that the equation of motion may be arranged in a
different form, converting the second order differential equation into a first order
multivariable differential equation. The same philosophy may also be applied to LMS
algorithms. Equations of the type (716) may frequently be written in the alternative
form
$$\mathbf{y}_{n+1} = A\mathbf{y}_n + \mathbf{L}_n \quad (719)$$
where $\mathbf{y}$ is a vector containing several variables, for example displacements and
velocities, say
$$\mathbf{y}_n = \begin{bmatrix}\mathbf{u}_n \\ \dot{\mathbf{u}}_n\end{bmatrix} \quad (720)$$
or displacements at different steps, say
$$\mathbf{y}_n = \begin{bmatrix}\mathbf{u}_n \\ \mathbf{u}_{n-1}\end{bmatrix} \quad (721)$$
$\mathbf{L}_n$ is a known algorithmic load vector, computed from the loads, the step size
and the coefficients which define the specific algorithm, and $A$ is an algorithmic
matrix (usually nonsymmetric) containing the algorithmic variables, called
the amplification matrix. The rank of the amplification matrix equals the number of
steps involved in the algorithm. Equation (719) is defined as a one step multivariate
algorithm. Equations of the form (719) are very important to analyze the stability of
algorithms. An algorithm is stable if the algorithmic response of a system remains
bounded in the absence of loads ($\mathbf{L}_i = \mathbf{0}$), i.e. if $\|\mathbf{y}_{n+1}\|$ is bounded. Using Equation
(719) recursively we obtain, in the absence of loads and for given initial conditions,
$$\mathbf{y}_{n+1} = A\mathbf{y}_n = A^n\mathbf{y}_1 \quad (722)$$
Since we want $\mathbf{y}_{n+1}$ to remain bounded for whatever $\mathbf{y}_1$, then $A^n$ must be bounded.
This happens if the (possibly complex) eigenvalues $\lambda_i$ are such that
$$\begin{cases}|\lambda_i| \leq 1 & \text{for real, simple } \lambda_i \\ |\lambda_i| = \left(\lambda_i\bar{\lambda}_i\right)^{1/2} < 1 & \text{for complex } \lambda_i \text{ or multiple eigenvalues}\end{cases} \quad (723)$$
where $\bar{\lambda}_i$ is the complex conjugate of $\lambda_i$, i.e. $\bar{\lambda}_i = \mathrm{Re}(\lambda_i) - j\,\mathrm{Im}(\lambda_i)$, with $j = \sqrt{-1}$.
The spectral radius of a matrix is defined as
$$\rho(A) = \max\left(|\lambda_i|\right) \quad\text{for } i = 1, \ldots, \dim(A) \quad (724)$$
Hence, for an algorithm to be stable we require $\rho(A) \leq 1$. In practice, since
numerical errors show up during the integration process, it is convenient that those
numerical errors are dissipated, i.e. $\rho(A) < 1$, especially for the high frequencies.
The requirement on the spectral radius of $A$ comes from the canonical and
Jordan forms of matrix $A$:
$$A = \Phi\Lambda\Phi^{-1} \quad\text{with}\quad \Lambda = \begin{bmatrix}\lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_N\end{bmatrix} \quad (725)$$
Then, defining $\mathbf{z}_n = \Phi^{-1}\mathbf{y}_n$, since $\Phi$ is constant and contains the eigenvectors of $A$,
$$\mathbf{z}_{n+1} = \Phi^{-1}\mathbf{y}_{n+1} = \Phi^{-1}A^n\underbrace{\Phi\mathbf{z}_1}_{\mathbf{y}_1} = \Lambda^n\mathbf{z}_1 \quad (726)$$
and
$$\Lambda^n = \begin{bmatrix}\lambda_1^n & & & \\ & \lambda_2^n & & \\ & & \ddots & \\ & & & \lambda_N^n\end{bmatrix} \quad (727)$$
In a similar way, if $\mathbf{q}_i^*$ is a generalized eigenvector of a repeated eigenvalue $\lambda_i$ (when a
defective null-space of $A$ exists, see footnote 5) and $J$ is the Jordan form of $A$,
$$J\mathbf{q}_i^* = \lambda_i\mathbf{q}_i^* + \mathbf{q}_i \quad (738)$$
5. A matrix may have a defective kernel, i.e. fewer independent eigenvectors than repeated eigenvalues, if the rank of
$$\lambda_N I - A \quad (728)$$
is larger than $N$ minus the multiplicity of $\lambda_N$, with $N = \dim(A)$. Otherwise there exist as many independent vectors as repeated eigenvalues such that
$$\left[\lambda_N I - A\right]\mathbf{q}_N = \mathbf{0} \quad (729)$$
and the dimension of the null subspace (nullspace) is the same as the number of equal $\lambda_N$ eigenvalues. Assuming that the dimension of the nullspace should be two but only one eigenvector $\mathbf{q}_N$ of $\lambda_N$ exists, there is a unique vector such that
$$\left[A - \lambda_N I\right]\mathbf{q}_{(N)}^* = \mathbf{q}_N \quad\text{and}\quad \mathbf{q}_N^T\mathbf{q}_{(N)}^* = 0 \;\left(\text{i.e. } \mathbf{q}_{(N)}^* \perp \mathbf{q}_N\right) \quad (730)$$
and hence it provides the remaining vector for the set to form an $N$-dimensional basis. The vector $\mathbf{q}_{(N)}^*$ is usually named the generalized eigenvector corresponding to $\lambda_N$ and
$$A\mathbf{q}_{(N)}^* = \lambda_N\mathbf{q}_{(N)}^* + \mathbf{q}_N \quad (731)$$
Relation (731) allows us to write (we leave the reader to perform the checking)
$$AQ = A\left[\mathbf{q}_1, \ldots, \mathbf{q}_{N-1}, \mathbf{q}_{(N-1)}^*\right] \quad (732)$$
$$= QJ = \left[\mathbf{q}_1, \ldots, \mathbf{q}_{N-1}, \mathbf{q}_{(N-1)}^*\right]\begin{bmatrix}\lambda_1 & & & \\ & \ddots & & \\ & & \lambda_{N-1} & 1 \\ & & & \lambda_{N-1}\end{bmatrix} \quad (733)$$
where $J$ is known as Jordan's canonical form and $Q$ is the matrix of eigenvectors and generalized eigenvectors. Then
$$A = QJQ^{-1} \quad (734)$$
and if $Q = \left[\mathbf{q}_1, \ldots, \mathbf{q}_i, \mathbf{q}_i^*, \ldots, \mathbf{q}_{N-1}\right]$ is the basis of eigenvectors and generalized
eigenvectors, we can define $\mathbf{z}_n = Q^{-1}\mathbf{y}_n$ and
$$\mathbf{z}_{n+1} = Q^{-1}\mathbf{y}_{n+1} = Q^{-1}A^n\underbrace{Q\mathbf{z}_1}_{\mathbf{y}_1} = J^n\mathbf{z}_1 \quad (739)$$
where, if $\lambda_i$ is a repeated eigenvalue with a defective null-space,
$$J^n = \begin{bmatrix}\lambda_1^n & & & & \\ & \ddots & & & \\ & & \lambda_i^n & n\lambda_i^{n-1} & \\ & & & \lambda_i^n & \\ & & & & \ddots \\ & & & & & \lambda_N^n\end{bmatrix} \quad (740)$$
Hence, it is clear that if $|\lambda_i| < 1$ then $\mathbf{z}_{n+1}$ is bounded; if $|\lambda_i| = 1$ it becomes
unbounded only when $\lambda_i$ is a repeated (defective) eigenvalue, due to the term
$n\lambda_i^{n-1}$. However, in this case $\left|n\lambda_i^{n-1}\right| = n$ grows only linearly, so it is called a weak
instability. We will address the stability of algorithms below.
The structural mechanics algorithms may be written in forms where the amplification
matrix is $2\times 2$ or $3\times 3$. Then the eigenvalues may be computed from the
well known characteristic polynomial (used for example to compute the principal
stresses or strains)
$$\lambda^3 - I_A\lambda^2 + II_A\lambda - III_A = 0 \quad (741)$$
where $I_A$, $II_A$ and $III_A$ are the invariants
$$I_A = A_{11} + A_{22} + A_{33} = \mathrm{trace}(A) \quad (742)$$
$$II_A = \begin{vmatrix}A_{11} & A_{12} \\ A_{21} & A_{22}\end{vmatrix} + \begin{vmatrix}A_{22} & A_{23} \\ A_{32} & A_{33}\end{vmatrix} + \begin{vmatrix}A_{11} & A_{13} \\ A_{31} & A_{33}\end{vmatrix} \quad (743)$$
$$III_A = \begin{vmatrix}A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33}\end{vmatrix} = \det A \quad (744)$$
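For a small amplification matrix, the route through the invariants and the direct eigenvalue computation are easy to compare numerically; a minimal Matlab check (the 3x3 matrix below is an arbitrary example, not one of the algorithms of this chapter) is:

% Eigenvalues of a 3x3 amplification matrix from its invariants, Eq. (741)
A    = [0.9 -0.4 0.1; 1 0 0; 0 1 0];        % example matrix (assumed values)
IA   = trace(A);                             % Eq. (742)
IIA  = (trace(A)^2 - trace(A^2))/2;          % sum of principal 2x2 minors, Eq. (743)
IIIA = det(A);                               % Eq. (744)
lam1 = roots([1 -IA IIA -IIIA]);             % roots of Eq. (741)
lam2 = eig(A);                               % direct computation
disp([sort(lam1) sort(lam2)]);               % both columns should coincide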
There is a much celebrated theorem, Dahlquist's theorem, which we will state here
without proof. Dahlquist's theorem states that:
We also leave to the reader to show, by induction on (731), that
$$A^n\mathbf{q}_{(N)}^* = \lambda_N^n\mathbf{q}_{(N)}^* + n\lambda_N^{n-1}\mathbf{q}_N \quad (735)$$
and that in general
$$f(A)\,\mathbf{q}_{(N)}^* = f(\lambda_N)\,\mathbf{q}_{(N)}^* + f'(\lambda_N)\,\mathbf{q}_N \quad (736)$$
So we can write
$$A^n Q = QJ^n = \left[\mathbf{q}_1, \ldots, \mathbf{q}_{N-1}, \mathbf{q}_{(N-1)}^*\right]\begin{bmatrix}\lambda_1^n & & & \\ & \ddots & & \\ & & \lambda_{N-1}^n & n\lambda_{N-1}^{n-1} \\ & & & \lambda_{N-1}^n\end{bmatrix} \quad (737)$$
- An explicit A-stable linear multistep method (LMS) does not exist. Hence all explicit methods have at most conditional stability.
- A third order accurate A-stable linear multistep method (one which neglects only $O\!\left(\Delta t^3\right)$ terms) does not exist. Hence A-stable methods are at most second order accurate.
- The second order accurate A-stable LMS method with the smallest error constant (the $c$ constant in $O\!\left(\Delta t^2\right) = c\,\Delta t^2$) is the trapezoidal rule.
In the following sections we will review some of the most used algorithms in
structural mechanics. We will see that the trapezoidal rule is a particular case,
probably the most used one, of the Newmark-β family of integration algorithms,
because by virtue of Dahlquist's theorem we cannot do better in accuracy while still
keeping stability. However, we will see that some numerical dissipation is often desirable,
so the trapezoidal rule is frequently modified into many alternative algorithms.
7.4 Explicit algorithms: central difference method
The most used explicit algorithm in structural mechanics is by far the central
difference method. The algorithm may be constructed using the Taylor expansion series
of a function
$$u(t+\Delta t) = u(t) + \left.\frac{du}{dt}\right|_t\Delta t + \frac{1}{2}\left.\frac{d^2u}{dt^2}\right|_t(\Delta t)^2 + \ldots + \frac{1}{n!}\left.\frac{d^nu}{dt^n}\right|_t(\Delta t)^n + O\!\left((\Delta t)^{n+1}\right) \quad (745)$$
so between steps $n$ and $n+1$, and using the notation $(\Delta t)^2 = \Delta t^2$,
$$\mathbf{u}_{n+1} = \mathbf{u}_n + \dot{\mathbf{u}}_n\Delta t + \frac{1}{2}\ddot{\mathbf{u}}_n\Delta t^2 + O\!\left(\Delta t^3\right) \quad (746)$$
where $O\!\left(\Delta t^3\right)$ means that there are some higher order terms that we are neglecting,
which are of the order of $(\Delta t)^3$. However, we can also apply the Taylor formula
backwards
$$\mathbf{u}_{n-1} = \mathbf{u}_n - \dot{\mathbf{u}}_n\Delta t + \frac{1}{2}\ddot{\mathbf{u}}_n\Delta t^2 + O\!\left(\Delta t^3\right) \quad (747)$$
Now, subtracting (747) from (746) we have
$$\mathbf{u}_{n+1} - \mathbf{u}_{n-1} = 2\dot{\mathbf{u}}_n\Delta t + O\!\left(\Delta t^3\right) \quad (748)$$
hence we can solve for $\dot{\mathbf{u}}_n$ to obtain a central difference formula (i.e. the velocity
at a step is computed from the finite difference of the displacements of the previous and
following steps)
$$\dot{\mathbf{u}}_n = \frac{\mathbf{u}_{n+1} - \mathbf{u}_{n-1}}{2\Delta t} + O\!\left(\Delta t^2\right) \simeq \frac{\mathbf{u}_{n+1} - \mathbf{u}_{n-1}}{2\Delta t} \quad (749)$$
If we now add (747) and (746) (note that the terms in $\Delta t^3$ cancel out, leaving terms
in $\Delta t^4$ leading the error)
$$\mathbf{u}_{n+1} + \mathbf{u}_{n-1} = 2\mathbf{u}_n + \ddot{\mathbf{u}}_n\Delta t^2 + O\!\left(\Delta t^4\right) \quad (750)$$
so, solving for $\ddot{\mathbf{u}}_n$, we have another central difference formula for the acceleration
$$\ddot{\mathbf{u}}_n = \frac{\mathbf{u}_{n+1} - 2\mathbf{u}_n + \mathbf{u}_{n-1}}{\Delta t^2} + O\!\left(\Delta t^2\right) \quad (751)$$
Now we enforce the equation of motion at the known step $n$,
$$M\ddot{\mathbf{u}}_n + C\dot{\mathbf{u}}_n + K\mathbf{u}_n = \mathbf{f}_n \quad (752)$$
so, using Equations (749) and (751), we have
$$M\left(\frac{\mathbf{u}_{n+1} - 2\mathbf{u}_n + \mathbf{u}_{n-1}}{\Delta t^2} + O(\Delta t^2)\right) + C\left(\frac{\mathbf{u}_{n+1} - \mathbf{u}_{n-1}}{2\Delta t} + O(\Delta t^2)\right) + K\mathbf{u}_n = \mathbf{f}_n \quad (753)$$
where we can group the terms multiplying $\mathbf{u}_{n+1}$ as
$$\left(\frac{1}{\Delta t^2}M + \frac{1}{2\Delta t}C\right)\mathbf{u}_{n+1} = \mathbf{f}_n - \left(K - \frac{2}{\Delta t^2}M\right)\mathbf{u}_n - \left(\frac{1}{\Delta t^2}M - \frac{1}{2\Delta t}C\right)\mathbf{u}_{n-1} \quad (754)$$
where we have neglected the terms $O\!\left(\Delta t^2\right)$. Thus the accuracy of the algorithm is
of second order. Equation (754) may be written as
$$K^*\mathbf{u}_{n+1} = \mathbf{f}_n^* \quad (755)$$
with
$$K^* = \frac{1}{\Delta t^2}M + \frac{1}{2\Delta t}C \quad (756)$$
$$\mathbf{f}_n^* = \mathbf{f}_n - \left(K - \frac{2}{\Delta t^2}M\right)\mathbf{u}_n - \left(\frac{1}{\Delta t^2}M - \frac{1}{2\Delta t}C\right)\mathbf{u}_{n-1} \quad (757)$$
We note that the central difference method is truly explicit only in some special
cases, because otherwise we need to solve Equation (755). These are the cases in which $K^*$
is a diagonal matrix. Therefore, in central differences $M$ is almost always built as
a lumped matrix (hence the importance of this type of mass matrix) and the
damping matrix is considered mass proportional (i.e. neglecting the stiffness
proportional component of Rayleigh damping)
$$C = a\,M \quad (758)$$
so $K^*$ is a diagonal matrix and $(K^*)^{-1}\mathbf{f}_n^*$ is simply a scaling operation of each
component of $\mathbf{f}_n^*$ by the corresponding diagonal component of $(K^*)^{-1}$, i.e.
$$\left(\mathbf{u}_{n+1}\right)_{\text{component } j} = \frac{\left(\mathbf{f}_n^*\right)_{\text{component } j}}{\left(K^*\right)_{\text{component } jj}} \quad (759)$$
Since no system of equations is solved, the computational savings per time step are large.
Furthermore, the matrices in $\mathbf{f}_n^*$ need not be built and stored, since all we need
is their effect on known displacements. Hence, for very large systems in which the
matrices need to be stored out-of-core, it may be adequate to assemble $\mathbf{f}_n^*$ element
by element, although in this case the element matrices need to be computed at
each time step (so the trade-off may not be beneficial if we do not take advantage of
parallelization).
Given the above advantages, the reader may wonder why the central difference
method is not by far the most used one for the integration of the dynamics equation.
The reason is its conditional stability. In order to analyze the stability of the
algorithm, we first apply the modal decomposition to Equation (754). Pre-multiplying
by $\Phi^T$
$$\left(\frac{1}{\Delta t^2}\Phi^T M + \frac{1}{2\Delta t}\Phi^T C\right)\mathbf{u}_{n+1} = \Phi^T\mathbf{f}_n - \left(\Phi^T K - \frac{2}{\Delta t^2}\Phi^T M\right)\mathbf{u}_n - \left(\frac{1}{\Delta t^2}\Phi^T M - \frac{1}{2\Delta t}\Phi^T C\right)\mathbf{u}_{n-1} \quad (760)$$
and using $\mathbf{u}_n = \Phi\boldsymbol{\eta}_n$ we obtain the following uncoupled system of equations (please,
verify!)
$$\left(\frac{1}{\Delta t^2} + \frac{\xi_i\omega_i}{\Delta t}\right)\eta_{i,n+1} = h_{i,n} - \left(\omega_i^2 - \frac{2}{\Delta t^2}\right)\eta_{i,n} - \left(\frac{1}{\Delta t^2} - \frac{\xi_i\omega_i}{\Delta t}\right)\eta_{i,n-1} \quad (761)$$
i.e.
$$\eta_{i,n+1} = \frac{1}{1 + \xi_i\omega_i\Delta t}\left[\Delta t^2 h_{i,n} - \left(\Delta t^2\omega_i^2 - 2\right)\eta_{i,n} - \left(1 - \Delta t\,\xi_i\omega_i\right)\eta_{i,n-1}\right] \quad (762)$$
where $\xi_i$ is the modal damping ratio and $\omega_i$ is the modal circular frequency for mode $i$.
These are actually the equations to be used in the case of modal superposition. Then
we write this equation for mode $i$ in the form of Equation (719), using
$$\mathbf{y}_{n+1} = \begin{bmatrix}\eta_{n+1} \\ \eta_n\end{bmatrix} \quad (763)$$
where we omit the mode index for simplicity; we have (verify!)
$$\underbrace{\begin{bmatrix}\eta_{n+1} \\ \eta_n\end{bmatrix}}_{\mathbf{y}_{n+1}} = \underbrace{\begin{bmatrix}\dfrac{2 - \omega^2\Delta t^2}{1 + \xi\omega\Delta t} & -\dfrac{1 - \xi\omega\Delta t}{1 + \xi\omega\Delta t} \\ 1 & 0\end{bmatrix}}_{A}\underbrace{\begin{bmatrix}\eta_n \\ \eta_{n-1}\end{bmatrix}}_{\mathbf{y}_n} + \underbrace{\begin{bmatrix}\dfrac{\Delta t^2 h_n}{1 + \xi\omega\Delta t} \\ 0\end{bmatrix}}_{\mathbf{L}_n} \quad (764)$$
Then, the characteristic equation is
$$\det(A - \lambda I) = \begin{vmatrix}\dfrac{2 - \omega^2\Delta t^2}{1 + \xi\omega\Delta t} - \lambda & -\dfrac{1 - \xi\omega\Delta t}{1 + \xi\omega\Delta t} \\ 1 & -\lambda\end{vmatrix} = 0 \quad (765)$$
The most critical case is for zero damping, i.e. $\xi = 0$. Then
$$\det(A - \lambda I) = \begin{vmatrix}2 - \omega^2\Delta t^2 - \lambda & -1 \\ 1 & -\lambda\end{vmatrix} = 0 \quad (766)$$
i.e.
$$p(\lambda) = \lambda^2 + \lambda\left(\omega^2\Delta t^2 - 2\right) + 1 = 0 \quad (767)$$
so
$$\lambda = \left(1 - \frac{1}{2}\omega^2\Delta t^2\right) \pm\sqrt{\left(1 - \frac{1}{2}\omega^2\Delta t^2\right)^2 - 1} \quad (768)$$
The critical value is when $|\lambda| = 1$. This happens when
$$\Delta t = \frac{T}{\pi} = \frac{2}{\omega} \quad (769)$$
as we can verify in Eq. (768) using $\Delta t = 2/\omega$
$$\lambda = \underbrace{\left(1 - \frac{1}{2}\omega^2\frac{4}{\omega^2}\right)}_{=-1} \pm \underbrace{\sqrt{\left(1 - \frac{1}{2}\omega^2\frac{4}{\omega^2}\right)^2 - 1}}_{=0} = -1 \quad (770)$$
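This stability limit is easy to verify numerically; a small Matlab sketch that evaluates the spectral radius of the undamped amplification matrix of Eq. (764) as a function of the product omega*dt follows.

% Spectral radius of the central difference amplification matrix (xi = 0)
wdt = linspace(0.1,3,300);                 % values of omega*dt
rho = zeros(size(wdt));
for i = 1:length(wdt)
    A = [2-wdt(i)^2, -1; 1, 0];            % Eq. (764) with xi = 0
    rho(i) = max(abs(eig(A)));             % spectral radius, Eq. (724)
end
plot(wdt,rho); xlabel('\omega\Deltat'); ylabel('\rho(A)');
% rho(A) = 1 for omega*dt <= 2 and grows beyond the limit of Eq. (769)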
Equation (769) forces us to select a time step less than or equal to the so-called critical
time step, which is
$$\Delta t \leq \Delta t_{cr} = \frac{T}{\pi} \quad (771)$$
where $T$ is the smallest period in our problem, i.e. that of the largest modal frequency.
Usually the selection is as close as possible to the critical time step.
If our mesh is very fine, then the largest frequency will be very high, and the
critical time step very small. The number of steps needed to solve our problem may
be prohibitive if the total time of the analysis is large. This is the reason why
the central difference method is preferable for blast and impact loadings and similar problems:
the total time of the analysis is small, the spectrum of the loading
has high frequencies, and we need small time steps to capture the behavior of the
structure. For other problems, like earthquakes and wind vibrations, where the
frequencies to consider are low and the total time of the analysis is long, the central
difference method may be very inefficient because of the restriction given
by the critical time step. However, if modal decomposition is applied, the time step
may be selected according to the highest mode to consider. Since this frequency is
usually not too large, the number of time steps may be small. However, note that
computing the modes may also be time consuming.
Finally, we note that the central difference method is a two-step method (two
time increments, three time points) because Equation (754) involves two steps (three
points: $n+1$, $n$ and $n-1$). These types of methods need a starting procedure because
in the first step, when computing $\mathbf{u}_{n+1}$, we do not know $\mathbf{u}_{n-1}$. However, we may use
a second initial condition on $\dot{\mathbf{u}}_0$, which upon substitution in Eq. (747) yields
$$\mathbf{u}_{-1} = \mathbf{u}_0 - \Delta t\,\dot{\mathbf{u}}_0 + \frac{\Delta t^2}{2}\ddot{\mathbf{u}}_0 \quad (772)$$
where
$$\ddot{\mathbf{u}}_0 = M^{-1}\left(\mathbf{f}_0 - C\dot{\mathbf{u}}_0 - K\mathbf{u}_0\right) \quad (773)$$
The layout of the procedure is as follows
The central difference method
A. Initial phase:
A.1 Form stiffness, damping and mass matrices $K$, $C$, $M$ (note that usually $M$ is lumped and $C = aM$)
A.2 Compute constants with $\Delta t \leq \Delta t_{cr}$:
A.3 $a_0 = 1/\Delta t^2$; $a_1 = 1/(2\Delta t)$; $a_2 = 2a_0$; $a_3 = 1/a_2$
A.4 Starting acceleration: $\ddot{\mathbf{u}}_0 = M^{-1}\left(\mathbf{f}_0 - C\dot{\mathbf{u}}_0 - K\mathbf{u}_0\right)$
A.5 Starting vector: $\mathbf{u}_{-1} = \mathbf{u}_0 - \Delta t\,\dot{\mathbf{u}}_0 + a_3\ddot{\mathbf{u}}_0$
A.6 $K^* = a_0 M + a_1 C$
A.7 Factorize $K^*$, for example $K^* = LDL^T$
B. For $n = 1, \ldots$ (for each time step)
B.1 $\mathbf{f}_n^* = \mathbf{f}_n - (K - a_2 M)\mathbf{u}_n - (a_0 M - a_1 C)\mathbf{u}_{n-1}$
B.2 Solve the displacements using the factorized matrix: $K^*\mathbf{u}_{n+1} = \mathbf{f}_n^*$
B.3 Compute velocities and accelerations if needed (for step $n$):
B.4 $\ddot{\mathbf{u}}_n = a_0\left(\mathbf{u}_{n+1} - 2\mathbf{u}_n + \mathbf{u}_{n-1}\right)$; $\dot{\mathbf{u}}_n = a_1\left(\mathbf{u}_{n+1} - \mathbf{u}_{n-1}\right)$
Example 51 Write a program using your favorite computer language to integrate
the equation of motion using the central difference method.
Solution. The source code in Matlab follows.
function [t,u,v,a] = centraldif(K,M,C,f,dt,N,u0,v0)
%*** function [t,u,v,a] = centraldif(K,M,C,f,dt,N,u0,v0)
% This is a program to integrate the equation of motion
% using the central difference algorithm
%
% K = stiffness matrix
% M = Mass matrix
% C = Damping matrix
% f = force vector: f(dim(K),1:N); 2nd dimens. may be >N
% dt= time increment
% N = number of steps
% u0= initial displacements
% v0= initial velocities
% t = time (for plots)
% u,v,a = displacements, velocities and accelerations
%
%* initial calculations
a0=1/dt^2; a1=1/(2*dt); a2=2*a0; a3=1/a2;
u(:,1) = u0;
v(:,1) = v0;
feff = f(:,1)-C*v0-K*u0;
a(:,1) = M\feff;                         % initial acceleration by equilibrium
u00 = u(:,1) - dt*v(:,1) + dt^2 / 2 * a(:,1); % fictitious u_{-1}, Eq. (772)
Keff = a0*M+a1*C;                        % Effective stiffness/mass
U = chol(Keff);                          % factorization (Cholesky), U'*U = Keff
%* first step (uses the fictitious displacement u00 = u_{-1})
feff = f(:,1) - (K-a2*M)*u(:,1) - (a0*M-a1*C)*u00;
u(:,2) = U\(U'\feff);                    % forward reduction + backsubstitution
t(1) = 0;
%* loop on the different steps
for n=2:N, % initial conditions are n = 1
    t(n) = t(n-1) + dt;
    % effective load vector, Eq. (757)
    feff = f(:,n) - (K-a2*M)*u(:,n) - (a0*M-a1*C)*u(:,n-1);
    % solve displacements for next step
    u(:,n+1) = U\(U'\feff);              % forward reduction + backsubstitution
    % central difference velocities and accelerations, Eqs. (749) and (751)
    v(:,n) = a1*(u(:,n+1) - u(:,n-1));
    a(:,n) = a0*(u(:,n+1) - 2*u(:,n) + u(:,n-1));
end,
t(n+1) = t(n) + dt;
v(:,n+1)=v(:,n); a(:,n+1)=a(:,n);        % pad last step so all outputs match
return,
end
Example 52 A structural system is characterized or modeled using the following
stiffness and mass matrices
$$K = \begin{bmatrix}1993.7 & -1954.2 \\ -1954.2 & 1993.7\end{bmatrix} \quad\text{and}\quad M = \begin{bmatrix}1 & 0 \\ 0 & 1\end{bmatrix} \quad (774)$$
and $C = 0$. Assume that $\mathbf{f}(t) = \mathbf{0}$ for all $t$. Use the code of Example 51 to compute
the response for 100 steps using the time steps
$$\Delta t = 0.099/\pi\,;\quad \Delta t = 0.1/\pi\,;\quad \Delta t = 0.1\,;\quad \Delta t = 1 \quad (775)$$
with the starting vectors $\mathbf{u}_0 = [1,\,1]^T$ and $\mathbf{u}_0 = [1,\,-1]^T$ (and zero initial velocities),
and plot the result for the displacement at degree of freedom 1. Compute the eigenvalues
and eigenvectors and the corresponding periods, and comment on the results.
Solution: The solution is given in Figures 46 and 47. We leave to the reader to
compute the eigenvalues and eigenvectors of the problem, which are
$$\lambda_1 = 4\pi^2 \;\text{ for }\; \boldsymbol{\phi}_1 = \frac{1}{\sqrt{2}}\begin{bmatrix}1 \\ 1\end{bmatrix} \quad\text{and}\quad \lambda_2 = 400\pi^2 \;\text{ for }\; \boldsymbol{\phi}_2 = \frac{1}{\sqrt{2}}\begin{bmatrix}1 \\ -1\end{bmatrix} \quad (776)$$
so the periods are
$$T_1 = \frac{2\pi}{\omega_1} = \frac{2\pi}{\sqrt{\lambda_1}} = 1 \quad\text{and}\quad T_2 = \frac{2\pi}{\omega_2} = \frac{2\pi}{\sqrt{\lambda_2}} = 0.1 \quad (777)$$
The critical time steps for integrating each mode are
$$\Delta t_{cr(1)} = \frac{T_1}{\pi} = \frac{1}{\pi} \quad\text{and}\quad \Delta t_{cr(2)} = \frac{T_2}{\pi} = \frac{0.1}{\pi} \quad (778)$$
The first initial displacement vector was exciting the first mode, so all computing
time steps except the last one were below the critical time step. All predicted responses look
correct except the last one, where the solution has blown up. Please note the exponent
in the y-axis, which makes the sinusoidal behavior look like a straight line.
The second initial displacement vector was exciting the second mode, hence the critical time
step is $\Delta t_{cr} = 0.1/\pi$. We see in Figure 47 several interesting phenomena. In the
first graph of the figure, the time step is $0.099/\pi$, just below the critical time step.
This leads to a well known beating phenomenon, very similar to that encountered
when sampling a signal with a time spacing close to a multiple of its period. The
second graph of the figure shows a weak increase in the response, which is due to a
weak instability (recall that Jordan's form had a term $n\lambda^{n-1}$, which equals $n$ at the critical
time step). The last two graphs of the figure show completely blown-up responses.
Figure 46: Calculated response of Example 52 for a starting vector $\mathbf{u} = [1, 1]^T$ using the central difference method (panels for $\Delta t = 0.099/\pi$, $0.1/\pi$, $0.1$ and $1.0$; displacement $u_1$ versus time).
Figure 47: Calculated response of Example 52 for a starting vector $\mathbf{u} = [1, -1]^T$ using the central difference method (panels for $\Delta t = 0.099/\pi$, $0.1/\pi$, $0.1$ and $1.0$; displacement $u_1$ versus time).
The Matlab code to run the plots follows.
figure;
K = [1993.7,-1954.2;-1954.2,1993.7]; % stiffness matrix
M = [1,0;0,1];           % mass matrix
C = [0,0;0,0];           % damping matrix
N = 1000;                % number of steps
f(1:2,1:N+2)=0;          % loading
u0 = [1;1]; v0 = [0;0];  % initial conditions (column vectors)
dt(1) = 0.099/pi; dt(2) = 0.1/pi; dt(3) = 0.1; dt(4) = 1;
tpend = 3;               % time at end of plot
subplot(2,2,1);
[t,u,v,a] = centraldif(K,M,C,f,dt(1),N,u0,v0);
plot(t,u(1,:)); xlabel('time'); ylabel('u_1');
title('\Deltat=0.099/\pi');
set(gca,'xlim',[0,tpend]);
subplot(2,2,2);
[t,u,v,a] = centraldif(K,M,C,f,dt(2),N,u0,v0);
plot(t,u(1,:)); xlabel('time'); ylabel('u_1');
title('\Deltat=0.1/\pi');
set(gca,'xlim',[0,tpend]);
subplot(2,2,3);
[t,u,v,a] = centraldif(K,M,C,f,dt(3),N,u0,v0);
plot(t,u(1,:)); xlabel('time'); ylabel('u_1');
title('\Deltat=0.1');
set(gca,'xlim',[0,tpend]);
subplot(2,2,4);
[t,u,v,a] = centraldif(K,M,C,f,dt(4),N,u0,v0);
plot(t,u(1,:)); xlabel('time'); ylabel('u_1');
title('\Deltat=1.0');
set(gca,'xlim',[0,tpend]);
Example 53 Plot the function $f = \sin(2\pi t/T)$ with $T = 10$ and the following time
steps
$$\Delta t = 10\,;\quad \Delta t = 9\,;\quad \Delta t = 7.3\,;\quad \Delta t = 2 \quad (779)$$
Solution. The objective of this example is to show that what you see in a plot is not
necessarily the true response, but a discretized version which may show a different
frequency and amplitude modulation. Have you ever seen the spokes of the wheel of
a car looking like they turn in the opposite direction to that of the car? The phenomenon
is similar to how a stroboscope works. The Matlab commands follow. The results
are shown in Figure 48. Note that the theoretical frequency of the function is the
same; we have only changed the number and spacing of the sampling points.
subplot(2,2,1); t=[0:10:200]; f = sin(2*pi/10*t); plot(t,f);
xlabel('time t = [0:10:200]'); ylabel('f = sin(2*pi/10*t)');
subplot(2,2,2); t=[0: 9:200]; f = sin(2*pi/10*t); plot(t,f);
xlabel('time t = [0: 9:200]'); ylabel('f = sin(2*pi/10*t)');
subplot(2,2,3); t=[0: 7.3:200]; f = sin(2*pi/10*t); plot(t,f);
Figure 48: Effect of the discrete representation of a continuous function with an insufficient number of points per period (panels for the sampling spacings 10, 9, 7.3 and 2; $f = \sin(2\pi t/10)$ versus time).
xlabel('time t = [0: 7.3:200]'); ylabel('f = sin(2*pi/10*t)');
subplot(2,2,4); t=[0: 2:200]; f = sin(2*pi/10*t); plot(t,f);
xlabel('time t = [0: 2:200]'); ylabel('f = sin(2*pi/10*t)');
7.5 Implicit algorithms
In contrast to explicit algorithms, in implicit algorithms we need to solve a system of
equations. However, implicit algorithms are usually unconditionally stable; at least
we will seek such versions. In the following subsections we will address some of the
best known ones. The most used one is the Newmark-β algorithm.
7.5.1 Houbolt method
The Houbolt method is one of the earliest methods and nowadays it is rarely used.
However, it is instructive, so it is presented here. In its basic form, it is a three step
integration method obtained in a similar way as the central difference method.
In this case velocities and accelerations are obtained as (the math is lengthy, but
the reader should verify these formulas in a similar way as we did for the central
difference method)
$$\ddot{\mathbf{u}}_{n+1} = \frac{1}{\Delta t^2}\left(2\mathbf{u}_{n+1} - 5\mathbf{u}_n + 4\mathbf{u}_{n-1} - \mathbf{u}_{n-2}\right) + O\!\left(\Delta t^2\right) \quad (780)$$
$$\dot{\mathbf{u}}_{n+1} = \frac{1}{6\Delta t}\left(11\mathbf{u}_{n+1} - 18\mathbf{u}_n + 9\mathbf{u}_{n-1} - 2\mathbf{u}_{n-2}\right) + O\!\left(\Delta t^2\right) \quad (781)$$
Houbolt's method is thus second order accurate. It is an implicit algorithm and the
equilibrium equation is enforced at time step $n+1$
$$M\ddot{\mathbf{u}}_{n+1} + C\dot{\mathbf{u}}_{n+1} + K\mathbf{u}_{n+1} = \mathbf{f}_{n+1} \quad (782)$$
Then, upon substitution of the velocities and accelerations,
$$\left(\frac{2}{\Delta t^2}M + \frac{11}{6\Delta t}C + K\right)\mathbf{u}_{n+1} = \mathbf{f}_{n+1} + \left(\frac{5}{\Delta t^2}M + \frac{3}{\Delta t}C\right)\mathbf{u}_n - \left(\frac{4}{\Delta t^2}M + \frac{3}{2\Delta t}C\right)\mathbf{u}_{n-1} + \left(\frac{1}{\Delta t^2}M + \frac{1}{3\Delta t}C\right)\mathbf{u}_{n-2} \quad (783)$$
which can be written as
$$K^*\mathbf{u}_{n+1} = \mathbf{f}_{n+1}^* \quad (784)$$
with
$$K^* = \frac{2}{\Delta t^2}M + \frac{11}{6\Delta t}C + K \quad (785)$$
$$\mathbf{f}_{n+1}^* = \mathbf{f}_{n+1} + \left(\frac{5}{\Delta t^2}M + \frac{3}{\Delta t}C\right)\mathbf{u}_n - \left(\frac{4}{\Delta t^2}M + \frac{3}{2\Delta t}C\right)\mathbf{u}_{n-1} + \left(\frac{1}{\Delta t^2}M + \frac{1}{3\Delta t}C\right)\mathbf{u}_{n-2} \quad (786)$$
Of course the algorithm necessitates a starting method. For this task other
algorithms are used, such as the central difference method with a small fraction of the
time increment used for the Houbolt method. In the following examples we develop
an algorithm and compute the amplification matrix for the Houbolt method.
Example 54 Sketch a computational procedure for the Houbolt method.
Solution. The computational procedure is as follows.
The Houbolt time integration method
A. Initial phase:
A.1 Form stiffness, damping and mass matrices $K$, $C$, $M$
A.2 Compute constants with $\Delta t$:
A.3 $a_0 = 2/\Delta t^2$; $a_1 = 11/(6\Delta t)$; $a_2 = 5/\Delta t^2$; $a_3 = 3/\Delta t$; $a_4 = -2a_0$; $a_5 = -a_3/2$; $a_6 = a_0/2$; $a_7 = a_3/9$
A.4 Use a starting procedure to compute $\mathbf{u}_1$ and $\mathbf{u}_2$
A.5 $K^* = K + a_0 M + a_1 C$
A.6 Factorize $K^*$, for example $K^* = LDL^T$
B. For $n = 3, \ldots$ (for each time step after the starting procedure)
B.1 $\mathbf{f}_{n+1}^* = \mathbf{f}_{n+1} + M\left(a_2\mathbf{u}_n + a_4\mathbf{u}_{n-1} + a_6\mathbf{u}_{n-2}\right) + C\left(a_3\mathbf{u}_n + a_5\mathbf{u}_{n-1} + a_7\mathbf{u}_{n-2}\right)$
B.2 Solve the displacements using the factorized matrix: $K^*\mathbf{u}_{n+1} = \mathbf{f}_{n+1}^*$
B.3 Compute velocities and accelerations:
B.4 $\ddot{\mathbf{u}}_{n+1} = a_0\mathbf{u}_{n+1} - a_2\mathbf{u}_n - a_4\mathbf{u}_{n-1} - a_6\mathbf{u}_{n-2}$; $\dot{\mathbf{u}}_{n+1} = a_1\mathbf{u}_{n+1} - a_3\mathbf{u}_n - a_5\mathbf{u}_{n-1} - a_7\mathbf{u}_{n-2}$
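A minimal Matlab sketch of the Houbolt loop follows. It assumes that the displacements of the first three steps have already been obtained by some starting procedure (e.g. the central difference method with a reduced time step), and that the matrices are constant.

function [u] = houbolt(K,M,C,f,dt,N,u1,u2,u3)
% Minimal sketch of the Houbolt method, Eqs. (783)-(786).
% u1,u2,u3 = displacements at the first three steps (from a starting procedure)
a0 = 2/dt^2; a1 = 11/(6*dt); a2 = 5/dt^2;  a3 = 3/dt;
a4 = -2*a0;  a5 = -a3/2;     a6 = a0/2;    a7 = a3/9;
u  = zeros(size(K,1),N); u(:,1) = u1; u(:,2) = u2; u(:,3) = u3;
Keff  = K + a0*M + a1*C;                   % effective stiffness, Eq. (785)
[L,U] = lu(Keff);                          % factorize once
for n = 3:N-1
    feff = f(:,n+1) + M*(a2*u(:,n) + a4*u(:,n-1) + a6*u(:,n-2)) ...
                    + C*(a3*u(:,n) + a5*u(:,n-1) + a7*u(:,n-2)); % Eq. (786)
    u(:,n+1) = U\(L\feff);                 % solve Keff*u_{n+1} = feff
end
end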
Example 55 Compute the amplification matrix for the Houbolt method.
Solution. Since the Houbolt method is a three step method, the amplification matrix
is a $3\times 3$ matrix, and the one step equivalent method is
$$\begin{bmatrix}\eta_{n+1} \\ \eta_n \\ \eta_{n-1}\end{bmatrix} = A\begin{bmatrix}\eta_n \\ \eta_{n-1} \\ \eta_{n-2}\end{bmatrix} + \mathbf{L}_{n+1} \quad (787)$$
where $\eta$ are the modal coordinates from the modal decomposition. The modal decomposition
is quickly obtained if in Equation (783) we perform the following substitutions,
which come from the modal projection (we omit the mode index)
$$\mathbf{u} \to \eta\,;\quad K \to \omega^2\,;\quad M \to 1\,;\quad C \to 2\xi\omega\,;\quad \mathbf{f}_{n+1} \to h_{n+1} \quad (788)$$
so we obtain
$$\overbrace{\left(\frac{2}{\Delta t^2} + \frac{11\xi\omega}{3\Delta t} + \omega^2\right)}^{p_{n+1}}\eta_{n+1} = h_{n+1} + \underbrace{\left(\frac{5}{\Delta t^2} + \frac{6\xi\omega}{\Delta t}\right)}_{p_n}\eta_n - \underbrace{\left(\frac{4}{\Delta t^2} + \frac{3\xi\omega}{\Delta t}\right)}_{p_{n-1}}\eta_{n-1} + \underbrace{\left(\frac{1}{\Delta t^2} + \frac{2\xi\omega}{3\Delta t}\right)}_{p_{n-2}}\eta_{n-2} \quad (789)$$
and the single step multivalue form may be written as
$$\begin{bmatrix}\eta_{n+1} \\ \eta_n \\ \eta_{n-1}\end{bmatrix} = \begin{bmatrix}p_n/p_{n+1} & -p_{n-1}/p_{n+1} & p_{n-2}/p_{n+1} \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{bmatrix}\begin{bmatrix}\eta_n \\ \eta_{n-1} \\ \eta_{n-2}\end{bmatrix} + \begin{bmatrix}h_{n+1}/p_{n+1} \\ 0 \\ 0\end{bmatrix} \quad (790)$$
7.5.2 Newmark-β method
The Newmark-β method is by far the most used method in structural dynamics. It
contains as special cases the trapezoidal rule, the linear acceleration method and the
central difference method. The idea behind the method is, instead of using finite
differences between steps, to assume a constant "average" acceleration during the
step from $n$ to $n+1$. This average acceleration is different when computing velocities
or displacements, which are computed using the uniform (constant) acceleration
formulae:
$$\dot{\mathbf{u}}_{n+1} = \dot{\mathbf{u}}_n + \Delta t\underbrace{\left[(1-\gamma)\ddot{\mathbf{u}}_n + \gamma\ddot{\mathbf{u}}_{n+1}\right]}_{\text{"average" accel. for velocities}} \quad (791)$$
$$\mathbf{u}_{n+1} = \mathbf{u}_n + \Delta t\,\dot{\mathbf{u}}_n + \frac{1}{2}\Delta t^2\underbrace{\left[(1-2\beta)\ddot{\mathbf{u}}_n + 2\beta\ddot{\mathbf{u}}_{n+1}\right]}_{\text{"average" accel. for displacements}} \quad (792)$$
The method is implicit and enforces the equation of motion at time step $n + 1$
$$M\ddot{\mathbf{u}}_{n+1} + C\dot{\mathbf{u}}_{n+1} + K\mathbf{u}_{n+1} = \mathbf{f}_{n+1} \quad (793)$$
It is clearly seen that the method only involves two time instants, $n$ and $n + 1$
(one time increment). Hence, it needs no starting procedure. The accuracy of the
method is only of first order, except in the case $\gamma = 1/2$. We leave to the reader
to show this using the finite difference formulas.
The truly average acceleration method (or trapezoidal rule) is obtained in the
case $\gamma = 1/2$ and $\beta = 1/4$, as can be seen upon substitution in the formulas. We
have seen from Dahlquist's theorem that this is, in principle, the optimal method.
The central difference method is obtained using $\beta = 0$ and $\gamma = 1/2$. We
leave to the reader to compare the resulting equations with those obtained using
the central difference method. For the case $\gamma = 1/2$ and $\beta = 1/6$ we obtain the
linear acceleration method (the acceleration varies linearly over the time step) as a
particular case.
The Newmark-β method has several implementations. We will address the most
common ones.
D-form The most common and traditional implementation is the d-form (d for
displacements), in which the variable to be solved for in the linear system of equations
is the displacement vector. From Equation (792) we can solve for the acceleration
at step $n+1$, and from Eq. (791) the corresponding velocity upon substitution
of the now known acceleration
$$\ddot{\mathbf{u}}_{n+1} = \frac{1}{\beta\Delta t^2}\left(\mathbf{u}_{n+1} - \mathbf{u}_n\right) - \frac{1}{\beta\Delta t}\dot{\mathbf{u}}_n - \left(\frac{1}{2\beta} - 1\right)\ddot{\mathbf{u}}_n \quad (794)$$
$$\dot{\mathbf{u}}_{n+1} = \dot{\mathbf{u}}_n + (1-\gamma)\Delta t\,\ddot{\mathbf{u}}_n + \gamma\Delta t\,\ddot{\mathbf{u}}_{n+1} \quad (795)$$
After substitution in the equilibrium equation we have
$$\left(K + \frac{1}{\beta\Delta t^2}M + \frac{\gamma}{\beta\Delta t}C\right)\mathbf{u}_{n+1} = \mathbf{f}_{n+1} + M\left[\frac{1}{\beta\Delta t^2}\mathbf{u}_n + \frac{1}{\beta\Delta t}\dot{\mathbf{u}}_n + \left(\frac{1}{2\beta}-1\right)\ddot{\mathbf{u}}_n\right] + C\left[\frac{\gamma}{\beta\Delta t}\mathbf{u}_n + \left(\frac{\gamma}{\beta}-1\right)\dot{\mathbf{u}}_n + \frac{\Delta t}{2}\left(\frac{\gamma}{\beta}-2\right)\ddot{\mathbf{u}}_n\right] \quad (796)$$
which can be written as usual
$$K^*\mathbf{u}_{n+1} = \mathbf{f}_{n+1}^* \quad (797)$$
There are implementations in terms of predictor-corrector algorithms. The predictor
quantities are those which depend on values known before solving the system of
equations. The corrector quantities are the terms which depend on the solution
obtained from the system of equations. In this framework, Equations (794) and
(795) can be written as
$$\ddot{\mathbf{u}}_{n+1} = \underbrace{-\frac{1}{\beta\Delta t^2}\mathbf{u}_n - \frac{1}{\beta\Delta t}\dot{\mathbf{u}}_n - \left(\frac{1}{2\beta}-1\right)\ddot{\mathbf{u}}_n}_{\text{predictor: } \ddot{\mathbf{u}}_{n+1}^p} + \underbrace{\frac{1}{\beta\Delta t^2}\mathbf{u}_{n+1}}_{\text{corrector: } \ddot{\mathbf{u}}_{n+1}^c} \quad (798)$$
$$\dot{\mathbf{u}}_{n+1} = \underbrace{\dot{\mathbf{u}}_n + (1-\gamma)\Delta t\,\ddot{\mathbf{u}}_n + \gamma\Delta t\,\ddot{\mathbf{u}}_{n+1}^p}_{\text{predictor: } \dot{\mathbf{u}}_{n+1}^p} + \underbrace{\gamma\Delta t\,\ddot{\mathbf{u}}_{n+1}^c}_{\text{corrector: } \dot{\mathbf{u}}_{n+1}^c} \quad (799)$$
so we can write
$$\mathbf{f}_{n+1}^* = \mathbf{f}_{n+1} - M\ddot{\mathbf{u}}_{n+1}^p - C\dot{\mathbf{u}}_{n+1}^p \quad (800)$$
and solve $K^*\mathbf{u}_{n+1} = \mathbf{f}_{n+1}^*$ with
$$K^* = K + \frac{1}{\beta\Delta t^2}M + \frac{\gamma}{\beta\Delta t}C \quad (801)$$
Then we correct the predictor values
$$\ddot{\mathbf{u}}_{n+1} = \ddot{\mathbf{u}}_{n+1}^p + \frac{1}{\beta\Delta t^2}\mathbf{u}_{n+1}\,;\qquad \dot{\mathbf{u}}_{n+1} = \dot{\mathbf{u}}_{n+1}^p + \gamma\Delta t\,\ddot{\mathbf{u}}_{n+1}^c \quad (802)$$
We now make an important note. As mentioned before, a special case of the
Newmark-β method is the central difference method when $\beta = 0$ and $\gamma = 1/2$. However, in
this form β is in the denominator and hence, if we choose this case for the unmodified
d-form, we may get into trouble. The equations need to be multiplied by β in order to
become solvable in this case. The a-form addressed below is a better suited form
if we want to include the central difference method as a special case. In fact, the
a-form is also the one used for mixed implicit-explicit integration.
Since the Newmark-β method is an LMS method, its stability may also be studied
using the amplification matrix. The math to obtain the theoretical
expression of the stability condition and of the critical time step in the conditionally stable cases is lengthy.
In Example 56 below we derive the amplification matrix. The stability condition comes
from requiring the spectral radius $\rho(A)$ of the amplification matrix to be equal to or less than
one. The resulting conditions are
$$\begin{cases}\text{Unconditionally stable if} & 0 \leq \xi < 1,\;\; \gamma \geq \dfrac{1}{2},\;\; \beta \geq \dfrac{\left(\gamma + 1/2\right)^2}{4} \\[2mm] \text{Conditionally stable if} & 0 \leq \xi < 1,\;\; \gamma \geq \dfrac{1}{2},\;\; \Delta t \leq \Delta t_{cr} = \dfrac{\xi\left(\gamma - \frac{1}{2}\right) + \sqrt{\left(\gamma + \frac{1}{2}\right)^2 - 4\beta + \left(4\beta - 2\right)\xi^2}}{\left(\gamma + \frac{1}{2}\right)^2 - 4\beta}\,\dfrac{T}{\pi} \\[2mm] \text{For } \xi = 0\text{, conditionally stable if} & \Delta t \leq \Delta t_{cr} = \dfrac{1}{\sqrt{\left(\gamma + 1/2\right)^2 - 4\beta}}\,\dfrac{T}{\pi}\end{cases} \quad (803)$$
A-form The a-form (a for accelerations) is more straightforward. The predictor-corrector
implementation uses the original form given by Eqs. (791) and (792)
$$\mathbf{u}_{n+1} = \underbrace{\mathbf{u}_n + \Delta t\,\dot{\mathbf{u}}_n + \frac{\Delta t^2}{2}(1-2\beta)\ddot{\mathbf{u}}_n}_{\text{predictor: } \mathbf{u}_{n+1}^p} + \underbrace{\beta\Delta t^2\,\ddot{\mathbf{u}}_{n+1}}_{\text{corrector: } \mathbf{u}_{n+1}^c} \quad (804)$$
$$\dot{\mathbf{u}}_{n+1} = \underbrace{\dot{\mathbf{u}}_n + (1-\gamma)\Delta t\,\ddot{\mathbf{u}}_n}_{\text{predictor: } \dot{\mathbf{u}}_{n+1}^p} + \underbrace{\gamma\Delta t\,\ddot{\mathbf{u}}_{n+1}}_{\text{corrector: } \dot{\mathbf{u}}_{n+1}^c} \quad (805)$$
Then, the following system of equations is obtained from Equation (793) (please,
verify!)
$$M^*\ddot{\mathbf{u}}_{n+1} = \mathbf{f}_{n+1}^* \quad (806)$$
where
$$M^* = M + \gamma\Delta t\,C + \beta\Delta t^2 K \quad (807)$$
$$\mathbf{f}_{n+1}^* = \mathbf{f}_{n+1} - C\dot{\mathbf{u}}_{n+1}^p - K\mathbf{u}_{n+1}^p \quad (808)$$
Once the solution in accelerations is obtained, displacements and velocities are corrected
$$\mathbf{u}_{n+1} = \mathbf{u}_{n+1}^p + \beta\Delta t^2\,\ddot{\mathbf{u}}_{n+1}\,;\qquad \dot{\mathbf{u}}_{n+1} = \dot{\mathbf{u}}_{n+1}^p + \gamma\Delta t\,\ddot{\mathbf{u}}_{n+1} \quad (809)$$
As we see, in this implementation the parameter β always appears in the numerator, and
we can substitute $\beta = 0$ without problems.
Mixed implicit-explicit algorithms The predictor-corrector a-form of the Newmark-β
method may be used to implement mixed explicit-implicit algorithms. What for?
The reason we may be interested in a mixed integration scheme is that a problem
may have a "stiff" part due to a very fine mesh or due to the presence of
penalties (for example in contact or when imposing boundary conditions). This stiff part
reduces the critical time step considerably and, hence, we may be unable to perform
an explicit time integration within a reasonable time step (cost). However,
with mixed implicit-explicit algorithms, we may be able to implicitly integrate those
critical parts and still keep a sufficiently large time step (which may even be several
orders of magnitude larger) for explicitly integrating the rest of the model. This is
even more common nowadays, where models come from CAD programs and meshes
may include very fine details that, otherwise, would have to be removed from the model
prior to an explicit time integration.
In order to derive the implicit-explicit formulation, assume that we have the
following mesh partition (it is usually done by element groups)
$$M = M^E + M^I\,;\quad K = K^E + K^I\,;\quad C = C^E + C^I\,;\quad \mathbf{F} = \mathbf{F}^E + \mathbf{F}^I \quad (810)$$
where superscript $E$ stands for explicit and superscript $I$ stands for implicit element
groups. Explicit integration is performed without taking into account the correction
phase, i.e. we use the following equation of motion for the a-form
$$M^E\ddot{\mathbf{u}}_{n+1} + C^E\dot{\mathbf{u}}_{n+1}^p + K^E\mathbf{u}_{n+1}^p = \mathbf{f}_{n+1}^E \quad (811)$$
i.e. we solve
$$M^E\ddot{\mathbf{u}}_{n+1} = \mathbf{f}_{n+1}^E - C^E\dot{\mathbf{u}}_{n+1}^p - K^E\mathbf{u}_{n+1}^p \quad (812)$$
For the implicit part we recover Equation (806), expanded
$$\left(M^I + \gamma\Delta t\,C^I + \beta\Delta t^2 K^I\right)\ddot{\mathbf{u}}_{n+1} = \mathbf{f}_{n+1}^I - C^I\dot{\mathbf{u}}_{n+1}^p - K^I\mathbf{u}_{n+1}^p \quad (813)$$
Adding up both equations and taking into account Eqs. (810), we have
$$\underbrace{\left(M + \gamma\Delta t\,C^I + \beta\Delta t^2 K^I\right)}_{M^*}\ddot{\mathbf{u}}_{n+1} = \underbrace{\mathbf{f}_{n+1} - C\dot{\mathbf{u}}_{n+1}^p - K\mathbf{u}_{n+1}^p}_{\mathbf{f}_{n+1}^*} \quad (814)$$
i.e. the only practical change is that the contributions of $K^E$ and $C^E$ to $M^*$ are
not considered. The rest of the algorithm is exactly the same.
One important observation is that, due to the presence of $K^I$ on the left hand side
of the equation, which is almost never diagonal, the appealing feature of explicit
algorithms seems to be lost: a system of equations needs to be solved. However,
note that $M^*$ will be diagonal except for those entries which correspond to implicit
element groups. Hence, if the number of implicit elements is small, the resulting
$M^*$ is almost diagonal, so the storage needs are small and the time employed solving
the system of equations is almost of the same order as for the explicit method.
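A sketch of how the effective matrix of Eq. (814) can be assembled when the matrices are already split into explicit and implicit element group contributions follows (the partition itself, i.e. the assembly of CI and KI from the implicit element groups only, is assumed to be given):

function Meff = effective_matrix_ie(M,CI,KI,ga,be,dt)
% Effective matrix for the mixed implicit-explicit Newmark a-form, Eq. (814).
% M      : (lumped) mass matrix of the full model
% CI, KI : damping and stiffness assembled only from the implicit element groups
% ga, be : Newmark gamma and beta parameters; dt : time step
Meff = M + ga*dt*CI + be*dt^2*KI;   % almost diagonal if few implicit elements
end

The right hand side of Eq. (814) still uses the full C and K acting on the predictor values, so each step solves Meff against feff = f - C*vp - K*up exactly as in the a-form algorithm of Example 57 below.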
Finally, we note that the explicit integration in the explicit-implicit method is
not the central difference method, so the accuracy and stability conditions are in
principle not the same. In fact, the resulting scheme is no longer a linear multistep
method (LMS) and, hence, the amplification matrix type of analysis is no longer
useful. Since the proof is outside the scope of these notes, we just mention that the
stability condition for this type of method is
$$\Delta t \leq \Delta t_{cr} = \frac{\sqrt{\xi^2 + 2\gamma} - \xi}{2\gamma}\,\frac{T}{\pi} \quad (815)$$
where $\xi$ is the damping ratio of the mode, $\beta = 0$ for the explicit parts, $\gamma \geq 1/2$ for the
explicit parts and $T$ is the period of the mode under consideration (the highest
frequency of the explicit part). It is seen that if $\xi = 0$ and $\gamma = 1/2$ (as in the
central difference case) then
$$\Delta t \leq \Delta t_{cr} = \frac{T}{\pi} \quad (816)$$
as in the central difference case. It can be shown that in this case we also obtain
second order accuracy.
Example 56 Compute the amplification matrix for the Newmark-β method.
Solution. Since Newmark-β is not a finite difference method, the amplification
matrix is better written in terms of displacements, velocities and accelerations as
$$\begin{bmatrix}\eta_{n+1} \\ \dot{\eta}_{n+1} \\ \ddot{\eta}_{n+1}\end{bmatrix} = A\begin{bmatrix}\eta_n \\ \dot{\eta}_n \\ \ddot{\eta}_n\end{bmatrix} + \mathbf{L}_{n+1} \quad (817)$$
where $\eta$ are the modal coordinates from the modal decomposition. The modal decomposition
is quickly obtained if in Equation (796) we perform the following substitutions,
which come from the modal projection (we omit the mode index)
$$\mathbf{u} \to \eta\,;\quad K \to \omega^2\,;\quad M \to 1\,;\quad C \to 2\xi\omega\,;\quad \mathbf{f}_{n+1} \to h_{n+1} \quad (818)$$
so
$$\overbrace{\left(\omega^2 + \frac{1}{\beta\Delta t^2} + \frac{2\xi\omega\gamma}{\beta\Delta t}\right)}^{p_{n+1}}\eta_{n+1} = h_{n+1} + \frac{1}{\beta\Delta t^2}\eta_n + \frac{1}{\beta\Delta t}\dot{\eta}_n + \left(\frac{1}{2\beta}-1\right)\ddot{\eta}_n + 2\xi\omega\left[\frac{\gamma}{\beta\Delta t}\eta_n + \left(\frac{\gamma}{\beta}-1\right)\dot{\eta}_n + \frac{\Delta t}{2}\left(\frac{\gamma}{\beta}-2\right)\ddot{\eta}_n\right] \quad (819)$$
i.e.
$$\eta_{n+1} = \frac{h_{n+1}}{p_{n+1}} + \frac{p_n^u}{p_{n+1}}\eta_n + \frac{p_n^v}{p_{n+1}}\dot{\eta}_n + \frac{p_n^a}{p_{n+1}}\ddot{\eta}_n \quad (820)$$
where we defined
$$p_{n+1} = \omega^2 + \frac{1}{\beta\Delta t^2} + \frac{2\xi\omega\gamma}{\beta\Delta t}\,;\qquad p_n^u = \frac{1}{\beta\Delta t^2} + \frac{2\xi\omega\gamma}{\beta\Delta t}\,;\qquad p_n^v = \frac{1}{\beta\Delta t} + 2\xi\omega\left(\frac{\gamma}{\beta}-1\right)\,;\qquad p_n^a = \left(\frac{1}{2\beta}-1\right) + \xi\omega\Delta t\left(\frac{\gamma}{\beta}-2\right)$$
The accelerations are readily obtained from Eq. (798) using the same substitutions
$$\ddot{\eta}_{n+1} = \frac{1}{\beta\Delta t^2}\eta_{n+1} - \frac{1}{\beta\Delta t^2}\eta_n - \frac{1}{\beta\Delta t}\dot{\eta}_n - \left(\frac{1}{2\beta}-1\right)\ddot{\eta}_n = \frac{1}{\beta\Delta t^2}\frac{h_{n+1}}{p_{n+1}} + a_n^u\eta_n + a_n^v\dot{\eta}_n + a_n^a\ddot{\eta}_n \quad (821)$$
where we defined
$$a_n^u = \frac{1}{\beta\Delta t^2}\left(\frac{p_n^u}{p_{n+1}} - 1\right) \quad (822)$$
$$a_n^v = \frac{1}{\beta\Delta t^2}\frac{p_n^v}{p_{n+1}} - \frac{1}{\beta\Delta t} \quad (823)$$
$$a_n^a = \frac{1}{\beta\Delta t^2}\frac{p_n^a}{p_{n+1}} - \left(\frac{1}{2\beta}-1\right) \quad (824)$$
The velocities may be obtained from Eq. (805), also with the same substitutions
$$\dot{\eta}_{n+1} = \dot{\eta}_n + (1-\gamma)\Delta t\,\ddot{\eta}_n + \gamma\Delta t\,\ddot{\eta}_{n+1} = \frac{\gamma}{\beta\Delta t}\frac{h_{n+1}}{p_{n+1}} + v_n^u\eta_n + v_n^v\dot{\eta}_n + v_n^a\ddot{\eta}_n \quad (825)$$
where
$$v_n^u = \gamma\Delta t\,a_n^u \quad (826)$$
$$v_n^v = 1 + \gamma\Delta t\,a_n^v \quad (827)$$
$$v_n^a = (1-\gamma)\Delta t + \gamma\Delta t\,a_n^a \quad (828)$$
Then
$$\begin{bmatrix}\eta_{n+1} \\ \dot{\eta}_{n+1} \\ \ddot{\eta}_{n+1}\end{bmatrix} = \begin{bmatrix}p_n^u/p_{n+1} & p_n^v/p_{n+1} & p_n^a/p_{n+1} \\ v_n^u & v_n^v & v_n^a \\ a_n^u & a_n^v & a_n^a\end{bmatrix}\begin{bmatrix}\eta_n \\ \dot{\eta}_n \\ \ddot{\eta}_n\end{bmatrix} + \begin{bmatrix}1 \\ \dfrac{\gamma}{\beta\Delta t} \\ \dfrac{1}{\beta\Delta t^2}\end{bmatrix}\frac{h_{n+1}}{p_{n+1}} \quad (829)$$
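The entries of Eq. (829) can be coded directly. The following Matlab sketch builds A for given β, γ, ξ and the product ωΔt (β > 0 is assumed, since the d-form entries are used) and evaluates its spectral radius, which is useful to reproduce the stability statements of Eq. (803):

function r = newmark_specrad(be,ga,xi,wdt)
% Spectral radius of the Newmark amplification matrix, Eq. (829), for one mode.
% be, ga = Newmark beta (>0) and gamma; xi = damping ratio; wdt = omega*dt
dt = 1; w = wdt;                                % the result depends only on omega*dt
p1 = w^2 + 1/(be*dt^2) + 2*xi*w*ga/(be*dt);     % p_{n+1}
pu = 1/(be*dt^2) + 2*xi*w*ga/(be*dt);           % p_n^u
pv = 1/(be*dt)   + 2*xi*w*(ga/be - 1);          % p_n^v
pa = (1/(2*be)-1) + xi*w*dt*(ga/be - 2);        % p_n^a
au = (pu/p1 - 1)/(be*dt^2);                     % Eq. (822)
av = pv/(p1*be*dt^2) - 1/(be*dt);               % Eq. (823)
aa = pa/(p1*be*dt^2) - (1/(2*be) - 1);          % Eq. (824)
vu = ga*dt*au; vv = 1 + ga*dt*av; va = (1-ga)*dt + ga*dt*aa; % Eqs. (826)-(828)
A  = [pu/p1 pv/p1 pa/p1; vu vv va; au av aa];   % Eq. (829)
r  = max(abs(eig(A)));                          % spectral radius, Eq. (724)
end

For example, newmark_specrad(1/4,1/2,0,2) should return 1 up to round-off (undamped trapezoidal rule), whereas newmark_specrad(0.05,1/2,0,3) exceeds 1, since this ωΔt is beyond the conditional stability limit.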
Example 57 Sketch an algorithm for the Newmark-β method in a-form.
Solution. The algorithm may be implemented as follows, following the predictor-corrector
strategy.
The Newmark-β time integration method
A. Initial phase:
A.1 Form stiffness, damping and mass matrices $K$, $C$, $M$
A.2 Initialize the accelerations, solving $M\ddot{\mathbf{u}}_0 = \mathbf{f}_0 - K\mathbf{u}_0 - C\dot{\mathbf{u}}_0$
A.3 Compute constants with $\Delta t$:
A.4 $a_0 = \gamma\Delta t$; $a_1 = \beta\Delta t^2$; $a_2 = \Delta t - a_0$; $a_3 = \frac{1}{2}\Delta t^2\left(1 - 2\beta\right)$
A.5 $M^* = M + a_0 C + a_1 K$
A.6 Factorize $M^*$, for example $M^* = LDL^T$
B. For $n = 0, 1, \ldots$ (for each time step)
B.1 $\mathbf{u}_{n+1}^p = \mathbf{u}_n + \Delta t\,\dot{\mathbf{u}}_n + a_3\ddot{\mathbf{u}}_n$
B.2 $\dot{\mathbf{u}}_{n+1}^p = \dot{\mathbf{u}}_n + a_2\ddot{\mathbf{u}}_n$
B.3 $\mathbf{f}_{n+1}^* = \mathbf{f}_{n+1} - C\dot{\mathbf{u}}_{n+1}^p - K\mathbf{u}_{n+1}^p$
B.4 Solve the accelerations using the factorized matrix: $M^*\ddot{\mathbf{u}}_{n+1} = \mathbf{f}_{n+1}^*$
B.5 Correct displacements and velocities:
B.6 $\mathbf{u}_{n+1} = \mathbf{u}_{n+1}^p + a_1\ddot{\mathbf{u}}_{n+1}$; $\dot{\mathbf{u}}_{n+1} = \dot{\mathbf{u}}_{n+1}^p + a_0\ddot{\mathbf{u}}_{n+1}$
Example 58 Write a program in your favorite language to run the Newmark-β
algorithm.
Solution. The source code in Matlab follows.
function [t,u,v,a] = Newmark(K,M,C,f,dt,N,u0,v0,be,ga)
%*** function [t,u,v,a] = Newmark(K,M,C,f,dt,N,u0,v0,be,ga)
% This is a program to integrate the equation of motion
% using the Newmark-beta integration method
%
% K = stiffness matrix
% M = Mass matrix
% C = Damping matrix
% f = force vector: f(dim(K),1:N); 2nd dimens. may be >N
% dt= time increment
% N = number of steps
% u0= initial displacements
% v0= initial velocities
% be= Newmark beta parameter
% ga= Newmark gamma parameter
- 210-
7.5 Implicit algorithms
% t = time (for plots)
% u,v,a = displacements, velocities and accelerations
%
%* initial calculations
u(:,1) = u0; % initial displacements
v(:,1) = v0; % initial velocities
a(:,1) = M\(f(:,1)-K*u0-C*v0); % initial acceleration by equilibrium
t(1) = 0; % time;
a0 = ga*dt; a1 = be*dt^2; % constants
a2 = dt-a0; a3 = (dt^2)/2-a1; % more constants
%* form effective mass matrix
Meff = M + a0*C + a1*K;
U = chol(Meff); % factorization (Cholesky)
for n=1:N-1;
t(n+1) = t(n) + dt;
u(:,n+1) = u(:,n) + dt*v(:,n) + a3*a(:,n); % predictor displac.
v(:,n+1) = v(:,n) + a2*a(:,n); % predictor veloc.
feff = f(:,n+1)-C*v(:,n+1)-K*u(:,n+1); % r.h.s.
a(:,n+1) = U\(U'\feff); % forward reduction and backsubstitution
u(:,n+1) = u(:,n+1) + a1*a(:,n+1); % displac. correct.
v(:,n+1) = v(:,n+1) + a0*a(:,n+1); % velocities corr.
end,
return,
end
Example 59 Use the program of Example 58 to repeat Example 52.
Solution. As can be seen in Figures 49 and 50, the response predicted using the
Newmark-β method with parameters $\beta = 0$ and $\gamma = 1/2$ is exactly the same as for the
central difference method. The response shows the same blow-ups already discussed
in Example 52. If the Newmark parameters $\beta = 1/4$ and $\gamma = 1/2$ are used, the
algorithm is unconditionally stable and second order accurate. The predictions for
the same cases are shown in Figures 51 and 52. It is shown that, even though for
large time steps the predicted response is not accurate, it does not blow up. Hence,
if we are not interested in the response of the higher modes, we may use a larger time step
and still obtain a meaningful bounded response. Furthermore, Figures 53 and 54
show the predictions for $\gamma = 0.55$ and $\beta = \left[(\gamma + 1/2)/2\right]^2 = 0.27563$; it is seen that
with $\gamma > 1/2$ numerical damping is obtained for all modes. However, the damping
is higher for the higher modes. Sometimes this is a desirable feature because it
"filters" the "noise" of the high modes (usually very inaccurate and largely an artifact of the mesh
discretization) in the response.
Figure 49: Prediction for the problem of Example 52 for $\mathbf{u}_0 = [1, 1]^T$ using the Newmark-β algorithm with parameters $\beta = 0$ and $\gamma = 1/2$ and time steps $\Delta t = 0.099/\pi$, $0.1/\pi$, $0.1$ and $1$. The result is the same as that using the central difference method.
Figure 50: Prediction for the problem of Example 52 for $\mathbf{u}_0 = [1, -1]^T$ using the Newmark-β algorithm with parameters $\beta = 0$ and $\gamma = 1/2$ and time steps $\Delta t = 0.099/\pi$, $0.1/\pi$, $0.1$ and $1$. The result is the same as that using the central difference method.
Figure 51: Prediction for the problem of Example 52 for $\mathbf{u}_0 = [1, 1]^T$ using the Newmark-β algorithm with parameters $\beta = 1/4$ and $\gamma = 1/2$ and time steps $\Delta t = 0.099/\pi$, $0.1/\pi$, $0.1$ and $1$. The results are always bounded by the initial displacements.
Figure 52: Prediction for the problem of Example 52 for $\mathbf{u}_0 = [1, -1]^T$ using the Newmark-β algorithm with parameters $\beta = 1/4$ and $\gamma = 1/2$ and time steps $\Delta t = 0.099/\pi$, $0.1/\pi$, $0.1$ and $1$. The results are always bounded by the initial displacements.
Figure 53: Prediction for the problem of Example 52 for $\mathbf{u}_0 = [1, 1]^T$ using the Newmark-β algorithm with parameters $\gamma = 0.55$, $\beta = \left[(\gamma + 1/2)/2\right]^2$ and time steps $\Delta t = 0.099/\pi$, $0.1/\pi$, $0.1$ and $1$. The results are not only always bounded by the initial displacements, but also slightly damped.
Figure 54: Prediction for the problem of Example 52 for $\mathbf{u}_0 = [1, -1]^T$ using the Newmark-β algorithm with parameters $\gamma = 0.55$, $\beta = \left[(\gamma + 1/2)/2\right]^2$ and time steps $\Delta t = 0.099/\pi$, $0.1/\pi$, $0.1$ and $1$. The results are not only always bounded by the initial displacements, but also considerably damped for high frequencies.
The Matlab commands to obtain the plots follow.
figure;
K = [1993.7,-1954.2;-1954.2,1993.7]; % stiffness matrix
M = [1,0;0,1];           % mass matrix
C = [0,0;0,0]; % damping matrix
N = 1000; % number of steps
f(1:2,1:N+2)=0; % loading
u0 = [1,1]; v0 = [0,0]; % initial conditions
be = 0.; ga = 0.55; % beta and gamma pars.
be = ((ga+0.5)/2)^2; %optimal, modify if needed
dt(1) = 0.099/pi; dt(2) = 0.1/pi; dt(3) = 0.1; dt(4) = 1;
tpend = 3; % time at end of plot
txt = strcat( \beta=,num2str(be), \gamma=,num2str(ga));
subplot(2,2,1);
[t,u,v,a] = Newmark(K,M,C,f,dt(1),N,u0,v0,be,ga);
plot(t,u(1,:)); xlabel(time); ylabel(u_1);
title([\Deltat=,0.099/\pi,txt]);
set(gca,xlim,[0,tpend]);
subplot(2,2,2);
[t,u,v,a] = Newmark(K,M,C,f,dt(2),N,u0,v0,be,ga);
plot(t,u(1,:)); xlabel(time); ylabel(u_1);
title([\Deltat=,0.1/\pi,txt]);
set(gca,xlim,[0,tpend]);
subplot(2,2,3);
[t,u,v,a] = Newmark(K,M,C,f,dt(3),N,u0,v0,be,ga);
plot(t,u(1,:)); xlabel(time); ylabel(u_1);
title([\Deltat=,0.1,txt]);
set(gca,xlim,[0,tpend]);
subplot(2,2,4);
[t,u,v,a] = Newmark(K,M,C,f,dt(4),N,u0,v0,be,ga);
plot(t,u(1,:)); xlabel(time); ylabel(u_1);
title([\Deltat=,1.0,txt]);
set(gca,xlim,[0,tpend]);
7.5.3 Collocation Wilson-0 methods
The collocation and the Wilson-0 methods are neither directly derived from nite
dierence expressions. The idea behind these methods is to advance the equilibrium
equation further than : + 1, i.e.
M u
a+0
+C u
a+0
+Ku
a+0
= f
a+0
(830)
where 0 is a parameter that is usually 0 1. and by denition
u
a+0
= (1 0) u
a
+0 u
a+1
(831)
f
a+0
= (1 0) f
a
+0f
a+1
(832)
Then, the Newmark-, integration formulas are used extended to :+0, i.e. the time
increment is 0t
u
a+0
= u
a
+0t [(1 ) u
a
+ u
a+0
] (833)
u
a+0
= u
a
+0t u
a
+
1
2
(0t)
2
[(1 2,) u
a
+ 2, u
a+0
] (834)
- 218-
7.5 Implicit algorithms
so using Equation (831)
u
a+0
= u
a
+0t (1 0) u
a
| {z }
predictor: u
j
a+0
+ 0
2
t u
a+1
| {z }
corr.: u
c
a+0
(835)
and
u
a+0
= u
a
+0t u
a
+
1
2
(0t)
2
(1 20,) u
a
| {z }
predictor: u
j
a+0
+ ,0
3
t
2
u
a+1
| {z }
corr.: u
c
a+0
These two equations may be substituted into Equation (830) to obtain the following
equation in a-form
M

u
a+1
= f

a+1
(836)
where
M

= 0M +0
2
tC+,0
3
t
2
K (837)
f

a+1
= f
a+0
(1 0) M u
a
C u
j
a+0
Ku
j
a+0
(838)
Once u
a+1
is solved for, the displacements and accelerations are obtained from
Newmarks formulae with t as time increment
u
a+1
= u
a
+t u
a
+
t
2
2
(1 2,) u
a
+,t
2
u
a+1
(839)
u
a+1
= u
a
+ (1 ) t u
a
+t u
a+1
(840)
This method is equivalent to a three point LMS method because it involves time
steps :, :+1 and :+0. The Newmark-, method is recovered for 0 = 1. The original
Wilson-0 method is obtained for the particular choice of , = 1,6 and = 1,2. If
0 = 1 and , = 1,6 and = 1,2, the linear acceleration method is obtained as a
particular case. Second order accurate unconditional stable procedures are obtained
if
0 1; =
1
2
;
0
2 (0 + 1)
,
0
2
1,2
40
3
2
(841)
The Wilson-0 method (with , = 1,6 and = 1,2) is stable for 0 1.37. A much
used 0 value in the literature is 0 = 1.4.
These type of methods have been superseded by the Newmark-, and other meth-
ods which show superior performance, so they are seldom used. One of the drawbacks
of the method is the overshooting phenomena that we will see in the examples.
Example 60 Create a computer code to run the collocation algorithm.
Solution. The source code for Matlab follows.
function [t,u,v,a] = Wilson(K,M,C,f,dt,N,u0,v0,be,ga,te)
%*** function [t,u,v,a] = Wilson(K,M,C,f,dt,N,u0,v0,be,ga,te)
% This is a program to integrate the equation of motion
% using the collocation Wilson-theta integration method
- 219-
7 Transient analyses in linear elastodynamics
%
% K = stiffness matrix
% M = Mass matrix
% C = Damping matrix
% f = force vector: f(dim(K),1:N); 2nd dimens. may be >N
% dt= time increment
% N = number of steps
% u0= initial displacements
% v0= initial velocities
% be= Newmark beta parameter (1/6 for Wilson method)
% ga= Newmark gamma parameter (1/2 for Wilson method)
% te= Wilson theta parameter
% t = time (for plots)
% u,v,a = displacements, velocities and accelerations
%
%* initial calculations
u(:,1) = u0; % initial displacements
v(:,1) = v0; % initial velocities
a(:,1) = M\(f(:,1)-K*u0-C*v0); % initial acceleration by equilibrium
t(1) = 0; % time;
a0 = te^2*ga*dt; % constants for the effective mass
a1 = be*te^3*dt^2; %
a2 = te*dt; % Form here, these are
a3 = 0.5*a2^2*(1-2*te*be); % constants for the predictors for
a4 = a2*(1-ga*te); % computing the r.h.s. (eff. loads)
a5 = 0.5*dt^2*(1-2*be); % These are constants for the disp.
a6 = be*dt^2; % and velocities update at step
a7 = (1-ga)*dt; % n+1
a8 = ga*dt; %
%* form effective mass matrix
Meff = te*M + a0*C + a1*K;
U = chol(Meff); % factorization (Cholesky)
for n=1:N-1;
t(n+1) = t(n) + dt;
u(:,n+1) = u(:,n) + a2*v(:,n) + a3*a(:,n); % predictor displac.
v(:,n+1) = v(:,n) + a4*a(:,n); % predictor veloc.
feff = te*f(:,n+1) + (1-te)*f(:,n); % forces at theta
feff = feff-(1-te)*M*a(:,n)-C*v(:,n+1)-K*u(:,n+1); % r.h.s.
a(:,n+1) = U\feff; % forward reduction
a(:,n+1) = U\a(:,n+1); % backsubstitution
u(:,n+1) = u(:,n) + dt*v(:,n) + a5*a(:,n) + a6*a(:,n+1);
v(:,n+1) = v(:,n) + a7*a(:,n) + a8*a(:,n+1);
end,
return,
end
- 220-
7.5 Implicit algorithms
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.099/ =1 =0.25 =0.5
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1/ =1 =0.25 =0.5
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1 =1 =0.25 =0.5
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=1.0 =1 =0.25 =0.5
Figure 55: Prediction for the problem of Example 52 for u
0
= [1, 1] using the
Wilson-0 algorithm with parameters 0 = 1, , = 1,4 and = 1,2 and time steps
t = 0.099,, t = 0.1,, t = 0.1 and t = 1. The result is exactly the same as
that using the trapezoidal rule.
Example 61 Use the program of Example 60 to run again the problem of Example
52. Use the following two sets of parameters
0 = 1, , = 1,4, = 1,2 (Newmark-,, trapezoidal rule)
0 = 1.4, , = 1,6, = 1,2 (original Wilson-0 method)
Solution. The predictions for both cases are shown in Figures 55 to 58. It is seen
that in gures corresponding to the trapezoidal rule, the same results are obtained
as in Figures 51 and 52. The predictions shown in Figures 57 and 58 show two
eects. The st one is that a high numerical damping is introduced by the method
for high frequencies, a desirable feature. But on the other side, in the last two curves
an overshooting phenomena is observed, i.e., even though the response is bounded,
it is larger than one, its reference value. This is an observation made by several
researchers and which is highly undesirable.
- 221-
7 Transient analyses in linear elastodynamics
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.099/ =1 =0.25 =0.5
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1/ =1 =0.25 =0.5
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1 =1 =0.25 =0.5
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=1.0 =1 =0.25 =0.5
Figure 56: Prediction for the problem of Example 52 for u
0
= [1, 1] using the
Wilson-0 algorithm with parameters 0 = 1, , = 1,4 and = 1,2 and time steps
t = 0.099,, t = 0.1,, t = 0.1 and t = 1. The result is exactly the same as
that using the trapezoidal rule.
- 222-
7.5 Implicit algorithms
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.099/ =1.4 =0.16667 =0.5
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1/ =1.4 =0.16667 =0.5
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1 =1.4 =0.16667 =0.5
0 1 2 3
6
4
2
0
2
4
time
u
1
t=1.0 =1.4 =0.16667 =0.5
Figure 57: Prediction for the problem of Example 52 for u
0
= [1, 1] using the original
Wilson-0 algorithm with parameters 0 = 1.4, , = 1,6 and = 1,2 and time steps
t = 0.099,, t = 0.1,, t = 0.1 and t = 1. Note the slight numerical damping
for the third case.
- 223-
7 Transient analyses in linear elastodynamics
0 1 2 3
1.5
1
0.5
0
0.5
1
time
u
1
t=0.099/ =1.4 =0.16667 =0.5
0 1 2 3
1.5
1
0.5
0
0.5
1
time
u
1
t=0.1/ =1.4 =0.16667 =0.5
0 1 2 3
6
4
2
0
2
4
time
u
1
t=0.1 =1.4 =0.16667 =0.5
0 1 2 3
1000
500
0
500
time
u
1
t=1.0 =1.4 =0.16667 =0.5
Figure 58: Prediction for the problem of Example 52 for u
0
= [1, 1] using the
original Wilson-0 algorithm with parameters 0 = 1.4, , = 1,6 and = 1,2 and time
steps t = 0.099,, t = 0.1,, t = 0.1 and t = 1. Note the high numerical
damping and the overshooting phenomena for large time steps.
- 224-
7.5 Implicit algorithms
7.5.4 Hilbert-Hughes-Taylor (HHT) cmethod
The cmethod or HHT method was developed in order to introduce some numeri-
cal damping in Newmark-like methods without destroying the second order accuracy
and without involving more steps, so the procedure is still self-starting. The HHT
method uses the Newmark-, formulae with the following modied equilibrium equa-
tion
6
M u
a+1
+C u
a+1+c
+Ku
a+1+c
= f
a+1+c
(843)
with
u
a+1+c
= (1 +c) u
a+1
c u
a
(844)
u
a+1+c
= (1 +c) u
a+1
cu
a
(845)
f
a+1+c
= f ((1 +c) t
a+1
ct
a
) = f (t
a+1
+ct) (846)
' (1 +c) f
a+1
cf
a
(847)
and
u
a+1
= u
a
+t u
a
+
t
2
2
(1 2,) u
a
| {z }
predictor u
j
a+1
+ ,t
2
u
a+1
| {z }
correct. u
c
a+1
(848)
u
a+1
= u
a
+ (1 ) t u
a
| {z }
predictor u
j
a+1
+ t u
a+1
| {z }
correct. u
c
a+1
(849)
The method is very similar to a collocation method, but note that in Equation (843)
the inertia term is computed at step :+1 whereas the rest of the terms are computed
at a time t
a+1
+ ct, where c [1,3, 0]. Hence, the equilibrium equation is not
enforced at any particular time. The case c = 0 recovers the Newmark-, method.
For unconditional stability we need to choose
c [1,3, 0] , =
(1 2c)
2
, , =
(1 c)
2
4
(850)
Decreasing the value of c from c = 0, we increase the numerical dissipation. A
typical value often used is c = 0.05. The HHT method is clearly a three point
LMS method (:, : + 1 and : + 1 +c).
In order to obtain the system of equations to be solved, we substitute Eqs.(844)
to (847) into (843) to obtain
M u
a+1
+C[(1 +c) u
a+1
c u
a
] +K[(1 +c) u
a+1
cu
a
] = f
a+1+c
(851)
Substituting the Newmark-, Equations (848) and (849) we can write the system of
equations (please, verify!)
M

u
a+1
= f

a+1+c
(852)
6
Sometimes the method is dened instead by the following equation
1 u
n+1
+C u
n+1
+1u
n+1+o
= ]
n+1
(842)
- 225-
7 Transient analyses in linear elastodynamics
where
M

= M + (1 +c) tC+(1 +c) ,t


2
K (853)
f

a+1+c
= f
a+1+c
C

(1 +c) u
j
a+1
c u
a

(1 +c) u
j
a+1
cu
a

(854)
Example 62 Program the Hilbert-Hughes-Taylor algorithm in your favorite com-
puter language.
Solution. The code has very little changes over the Newmark-, one given in Ex-
ample 58. The updated source code follows.
function [t,u,v,a] = HHTmethod(K,M,C,f,dt,N,u0,v0,be,ga,al)
%*** function [t,u,v,a] = HHTmethod(K,M,C,f,dt,N,u0,v0,be,ga,al)
% This is a program to integrate the equation of motion
% using the Hibert-Hughes-Taylor alpha method
%
% K = stiffness matrix
% M = Mass matrix
% C = Damping matrix
% f = force vector: f(dim(K),1:N); 2nd dimens. may be >N
% dt= time increment
% N = number of steps
% u0= initial displacements
% v0= initial velocities
% be= Newmark beta parameter
% ga= Newmark gamma parameter
% al= Hilbert-Hughes-Taylor alpha parameter
% t = time (for plots)
% u,v,a = displacements, velocities and accelerations
%
%* initial calculations
u(:,1) = u0; % initial displacements
v(:,1) = v0; % initial velocities
a(:,1) = M\(f(:,1)-K*u0-C*v0); % initial acceleration by equilibrium
t(1) = 0; % time;
a0 = ga*dt; a1 = be*dt^2; % constants
a2 = dt-a0; a3 = (dt^2)/2-a1; % more constants
al1= al + 1;
%* form effective mass matrix
Meff = M + al1*a0*C + al1*a1*K;
U = chol(Meff); % factorization (Cholesky)
for n=1:N-1;
t(n+1) = t(n) + dt;
u(:,n+1) = u(:,n) + dt*v(:,n) + a3*a(:,n); % predictor displac.
v(:,n+1) = v(:,n) + a2*a(:,n); % predictor veloc.
feff = f(:,n+1)-C*(al1*v(:,n+1)-al*v(:,n))...
- 226-
7.5 Implicit algorithms
-K*(al1*u(:,n+1)-al*u(:,n)); % r.h.s.
a(:,n+1) = U\feff; % forward reduction
a(:,n+1) = U\a(:,n+1); % backsubstitution
u(:,n+1) = u(:,n+1) + a1*a(:,n+1); % displac. correct.
v(:,n+1) = v(:,n+1) + a0*a(:,n+1); % velocities corr.
end,
return,
end
Example 63 Run again the problem given in Example 52 with the following para-
meters
c = 0; , =
1
4
and =
1
2
(trapezoidal rule again)
c = 0.05; , =
(1 c)
2
4
; =
(1 2c)
2
(the usual choice)
c = 0.3; , =
(1 c)
2
4
; =
(1 2c)
2
(almost maximum damping)
Solution. We leave the reader to verify that for the rst case the same result given
in Figures 51 and 52 for the trapezoidal rule are again obtained. The other two cases
are shown in Figures 59 to 62. It is clearly shown that the lower the value of c the
higher the damping. For values close to zero the lower frequencies are only slightly
damped, whereas higher frequencies are more strongly damped. Figures 59 and 60
should be compared to the Newmark-, comparable Figures 53 and 54. It is seen that
the HHT method is superior in terms of damping more the higher modes but still
preserving the lower ones. Figures 61 and 62 show the case of very high damping.
7.5.5 Bathe-Baig composite (substep) method
The previous algorithms were developed mainly with linear elastodynamics in mind.
It has been shown that the trapezoidal rule is one of the best choices for linear
dynamics if there is no need to numerically damp higher modes. However, in non-
linear dynamics, for long simulations, it has been shown to blow-up. The reason
is that in nonlinear dynamics (for example using large displacements) the method
does not correctly preserve energy and momentum. Many algorithms have been
developed in order to preserve these quantities. However, they usually do not make
way in general purpose commercial nite element codes because they are complex
and implementation in large existing codes is not straightforward. The Bathe-Baig
algorithm has been rst applied to structural dynamics in 2005
7
and subsequently
analyzed in 2007
8
and 2012
9
, being this last version the most ecient one in linear
7
KJ Bathe, MMI Baig. On a composite implicit time integration procedure for nonlinear dynam-
ics. Computers and Structures J, 83(2005) 2513-2524.
8
KJ Bathe. Conserving energy and momentum in nonlinear dynamics: A simple implicit time
integration scheme. Computers and Structures J, 85(2007) 437-445.
9
KJ Bathe, G Noh. Insight into an implicit time integration scheme for structural dynamics.
Computers and Structures J, 98-99(2012) 1-6.
- 227-
7 Transient analyses in linear elastodynamics
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.099/ =0.05 =0.27563 =0.55
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1/ =0.05 =0.27563 =0.55
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1 =0.05 =0.27563 =0.55
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=1.0 =0.05 =0.27563 =0.55
Figure 59: Prediction for the problem of Example 52 for u
0
= [1, 1] using the HHT
c method with parameter c = 0.05 and ,, selected to obtain inconditional
stability. Time steps for the plots are t = 0.099,, t = 0.1,, t = 0.1 and
t = 1.
- 228-
7.5 Implicit algorithms
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.099/ =0.05 =0.27563 =0.55
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1/ =0.05 =0.27563 =0.55
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1 =0.05 =0.27563 =0.55
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=1.0 =0.05 =0.27563 =0.55
Figure 60: Prediction for the problem of Example 52 for u
0
= [1, 1] using the HHT
c method with parameter c = 0.05 and ,, selected to obtain inconditional
stability. Time steps for the plots are t = 0.099,, t = 0.1,, t = 0.1 and
t = 1.
- 229-
7 Transient analyses in linear elastodynamics
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.099/ =0.3 =0.4225 =0.8
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1/ =0.3 =0.4225 =0.8
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1 =0.3 =0.4225 =0.8
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=1.0 =0.3 =0.4225 =0.8
Figure 61: Prediction for the problem of Example 52 for u
0
= [1, 1] using the HHT
c method with parameter c = 0.3 and ,, selected to obtain inconditional
stability. Time steps for the plots are t = 0.099,, t = 0.1,, t = 0.1 and
t = 1.
- 230-
7.5 Implicit algorithms
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.099/ =0.3 =0.4225 =0.8
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1/ =0.3 =0.4225 =0.8
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1 =0.3 =0.4225 =0.8
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=1.0 =0.3 =0.4225 =0.8
Figure 62: Prediction for the problem of Example 52 for u
0
= [1, 1] using the HHT
c method with parameter c = 0.3 and ,, selected to obtain inconditional
stability. Time steps for the plots are t = 0.099,, t = 0.1,, t = 0.1 and
t = 1.
- 231-
7 Transient analyses in linear elastodynamics
analysis. The algorithm has been developed for nonlinear dynamics and has been
shown to preserve energy and momentum. Furthermore, the implementation is sim-
ple into existing codes. Hence, the method may make way in short into general
purpose nite element codes.
The idea behind the method is to split the step into two parts, each one integrated
in a dierent scheme (hence the words composite or substep). The partition is :,
:+, :+1. Then, the velocities and accelerations at step :+ (time t +t) are
computed using the trapezoidal rule
u
a+
= u
a
+t

u
a
+ u
a+
2

+0

t
2

(855)
u
a+
= u
a
+t

u
a
+ u
a+
2

+0

t
2

= u
a
+t u
a
+
1
4
(t)
2
( u
a
+ u
a+
) +0

t
2

(856)
Factoring out u
a+
and u
a+
u
a+
=

2
t

2
(u
a
t u
a
) u
a
| {z }
predictor: u
j
a+
+

2
t

2
u
a+
| {z }
correct.: u
c
a+
(857)
u
a+
=
2
t
u
a
u
a
| {z }
pred.: u
j
a+
+
2
t
u
a+
| {z }
corr.: u
c
a+
(858)
At this substep : +, the dynamics equation is enforced in order to compute u
a+
M u
a+
+C u
a+
+Ku
a+
= f
a+
(859)
with
f
a+
= f (t +t) ' (1 ) f
a
+f
a+1
(860)
Then, upon substitution of Equations (857) and (858) we obtain a d-form
K

u
a+
= f

a+
(861)
where
K

= K +
2
t
C +

2
t

2
M (862)
f

a+
= f
a+
M u
j
a+
C u
j
a+
(863)
For the second substep, we approximate the derivatives by the method of unde-
termined coecients. The rst time derivative may be written as
u
a+1
= c
1
u
a
+c
2
u
a+
+c
3
u
a+1
(864)
- 232-
7.5 Implicit algorithms
where the c
i
coecients are to be determined. Using Taylor backward series from
: + 1,
u
a
= u
a+1
t u
a+1
+
1
2
t
2
u
a+1
+0

t
3

(865)
u
a+
= u
a+1
(1 ) t u
a+1
+
1
2
(1 )
2
t
2
u
a+1
+0

t
3

(866)
Substituting these equations into Equation (864) we obtain
u
a+1
= c
1
u
a
+c
2
u
a+
+c
3
u
a+1
= c
1
u
a+1
c
1
t u
a+1
+c
1
1
2
t
2
u
a+1
+c
2
u
a+1
c
2
(1 ) t u
a+1
+c
2
1
2
(1 )
2
t
2
u
a+1
+c
3
u
a+1
+0

t
3

(867)
Identifying terms we obtain the following equations

c
1
+c
2
+c
3
= 0 (no u
a+1
term on the l.h.s.)
c
1
t c
2
(1 ) t = 1 (just u
a+1
on the l.h.s.)
c
1
+c
2
(1 )
2
= 0 (we want 0

t
3

accuracy)
(868)
Solving the system of equations we obtain the t in the denominator yields
0

t
2

accuracy
c
1
=
(1 )
t
; c
2
=
1
( 1) t
; c
3
=
( 2)
( 1) t
(869)
Of course the same approximation may be used for the accelerations
u
a+1
= c
1
u
a
+c
2
u
a+
+c
3
u
a+1
(870)
wich upon substitution of u
a+1
u
a+1
= c
1
u
a
+c
2
u
a+
| {z }
predictor u
j
a+1
+ c
3
u
a+1
| {z }
corr. u
c
a+1
(871)
is
u
a+1
= c
1
u
a
+c
2
u
a+
+c
3
c
1
u
a
+c
3
c
2
u
a+
| {z }
predictor u
j
a+1
+ c
3
c
3
u
a+1
| {z }
corr u
j
a+1
(872)
The equilibrium equation is also enforced at time step : + 1
M u
a+1
+C u
a+1
+Ku
a+1
= f
a+1
(873)
This equation yields
K
1
u
a+1
= f

a+1
(874)
where
K
1
= K +c
3
C +c
3
c
3
M (875)
f

a+1
= f
a+1
M u
j
a+1
C u
j
a+1
(876)
- 233-
7 Transient analyses in linear elastodynamics
Once u
a+1
is obtained, then the velocities and accelerations are updated using
u
a+1
= u
j
a+1
+c
3
u
a+1
(877)
u
a+1
= u
j
a+1
+c
3
c
3
u
a+1
(878)
An important eciency note given in the 2012 paper (see footnote 9) is that
matrices K

and K
1
in Eqs.(862) and (875) are dierent, so the storage space
is doubled and the factorization is also doubled. In nonlinear analysis this is not
a relevant issue, since the stiness matrix needs to be built and factorized several
times at each time step (frequently once per iteration). However, in linear analysis
this is not a desirable feature. But the solution is simple. Since the step may be
partitioned in any way given by , one may choose a substep : + such that both
matrices are equal
K

= K
1

2
t
C +

2
t

2
M = c
3
C+c
3
c
3
M (879)
so
( 2)
( 1)
=
2

= 2

2 (880)
This time step guarantees that only one K

is needed and factorized, increasing the


eciency considerably.
The method is has second order accuracy and is unconditionally stable, as we
will see in next section.
Example 64 Create a computer code to run the Bathe-Baig method.
Solution. The source code in Matlab language follows. We have kept the value
free, so both K

and K
1
need to be stored and factorized.
function [t,u,v,a] = BatheBaig(K,M,C,f,dt,N,u0,v0,ga)
%*** function [t,u,v,a] = BatheBaig(K,M,C,f,dt,N,u0,v0,ga)
% This is a program to integrate the equation of motion
% using the Bathe-Baig composite substep scheme
%
% K = stiffness matrix
% M = Mass matrix
% C = Damping matrix
% f = force vector: f(dim(K),1:N); 2nd dimens. may be >N
% dt= time increment
% N = number of steps
% u0= initial displacements
% v0= initial velocities
% ga= step partitioning parameter
% t = time (for plots)
% u,v,a = displacements, velocities and accelerations
%
- 234-
7.5 Implicit algorithms
%* initial calculations
u(:,1) = u0; % initial displacements
v(:,1) = v0; % initial velocities
a(:,1) = M\(f(:,1)-K*u0-C*v0); % initial acceleration by equilibrium
t(1) = 0; % time;
a0 = ga*dt; % constants
a1 = 2/a0; %
a1a1 = a1*a1;
c1 = (1-ga)/a0; %
c2 = 1/((ga-1)*a0); % more constants
c3 = (ga-2)/((ga-1)*dt); %
c3c3 = c3*c3;
%* form effective mass matrices (note that here we built both
% matrices to allow for any value of gamma)
Keffg = K + a1*C + a1a1*M; % Keff for first substep
Ug = chol(Keffg); % factorization (Cholesky)
Keff1 = K + c3*C + c3c3*M; % Keff for second substep
U1 = chol(Keff1); % factorization (Cholesky)
for n=1:N-1;
t(n+1) = t(n) + dt;
%
%* substep 1 (n->n+ga); using trapezoidal rule
ag = -a(:,n)-a1a1*(u(:,n)+a0*v(:,n)); % predictor acel n+ga
vg = -a1*u(:,n)-v(:,n); % predictor vel n+ga
feff = (1-ga)*f(:,n)+ga*f(:,n+1); % loads at n+ga
feff = feff - M*ag - C*vg; % effective loads
ug = Ug\feff; % forward reduction
ug = Ug\ug; % backsubstitution
vg = vg + a1*ug; % corrector for vel
%
%* substep 2 (n+ga->n+1); using 3 point deriv. interpolations
v(:,n+1) = c1*u(:,n) + c2*ug; % predictor values
a(:,n+1) = c1*v(:,n) + c2*vg + c3*c1*u(:,n) + c3*c2*ug;
feff = f(:,n+1) - C*v(:,n+1) - M*a(:,n+1);
u(:,n+1) = U1\feff; % forward reduction
u(:,n+1) = U1\u(:,n+1); % backsubstitution
v(:,n+1) = v(:,n+1) + c3*u(:,n+1); % corrector veloc.
a(:,n+1) = a(:,n+1) + c3c3*u(:,n+1); % corrector acels.
end,
return,
end
Example 65 Use the computer program of Example 64 to run (once more) the
problem of Example 52 with the following parameters
= 1,2, = 2

2, = 0.01, = 0.99 (881)


- 235-
7 Transient analyses in linear elastodynamics
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.099/ BB =0.5
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1/ BB =0.5
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1 BB =0.5
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=1.0 BB =0.5
Figure 63: Prediction for the problem of Example 52 for u
0
= [1, 1] using the Bathe-
Baig procedure with = 0.5. Time steps for the plots are t = 0.099,, t = 0.1,,
t = 0.1 and t = 1.
Solution. The results using the Bathe-Baig method are shown in Figures 63 to
70. It can be shown that very good characteristics are obtained for values close
to = 1,2. The predictions using = 2

2 are almost identical to those with


= 0.5, so the value of = 2

2 is a better recommendation for elastic analyses.


The value of = 0.01 oers characteristics similar to the trapezoidal rule, whereas
= 0.9 oers high damping only when the high modes are coarsely integrated.
7.6 Stability and accuracy analysis
We have until now studied the accuracy and stability of some of the given algorithms.
The stability analysis is given by the spectral radii of the amplication matrices, with
are given by one-step multivariate problems, for example, in the form


a+1

a+1

= A

+L
a+1
(882)
We have obtained the amplication matrix for some of the algorithms in this or other
form. For other algorithms the task is large and tedious. However, it is possible for
- 236-
7.6 Stability and accuracy analysis
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.099/ BB =0.5
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1/ BB =0.5
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1 BB =0.5
0 1 2 3
0.5
0
0.5
1
time
u
1
t=1.0 BB =0.5
Figure 64: Prediction for the problem of Example 52 for u
0
= [1, 1] using the
Bathe-Baig procedure with = 0.5. Time steps for the plots are t = 0.099,,
t = 0.1,, t = 0.1 and t = 1.
- 237-
7 Transient analyses in linear elastodynamics
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.099/ BB =0.58579
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1/ BB =0.58579
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1 BB =0.58579
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=1.0 BB =0.58579
Figure 65: Prediction for the problem of Example 52 for u
0
= [1, 1] using the Bathe-
Baig procedure with = 2

2. Time steps for the plots are t = 0.099,,


t = 0.1,, t = 0.1 and t = 1.
- 238-
7.6 Stability and accuracy analysis
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.099/ BB =0.58579
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1/ BB =0.58579
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1 BB =0.58579
0 1 2 3
0.5
0
0.5
1
time
u
1
t=1.0 BB =0.58579
Figure 66: Prediction for the problem of Example 52 for u
0
= [1, 1] using the
Bathe-Baig procedure with = 2

2. Time steps for the plots are t = 0.099,,


t = 0.1,, t = 0.1 and t = 1.
- 239-
7 Transient analyses in linear elastodynamics
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.099/ BB =0.01
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1/ BB =0.01
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1 BB =0.01
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=1.0 BB =0.01
Figure 67: Prediction for the problem of Example 52 for u
0
= [1, 1] using the
Bathe-Baig procedure with = 0.01. Time steps for the plots are t = 0.099,,
t = 0.1,, t = 0.1 and t = 1.
- 240-
7.6 Stability and accuracy analysis
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.099/ BB =0.01
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1/ BB =0.01
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1 BB =0.01
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=1.0 BB =0.01
Figure 68: Prediction for the problem of Example 52 for u
0
= [1, 1] using the
Bathe-Baig procedure with = 0.01. Time steps for the plots are t = 0.099,,
t = 0.1,, t = 0.1 and t = 1.
- 241-
7 Transient analyses in linear elastodynamics
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.099/ BB =0.9
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1/ BB =0.9
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1 BB =0.9
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=1.0 BB =0.9
Figure 69: Prediction for the problem of Example 52 for u
0
= [1, 1] using the Bathe-
Baig procedure with = 0.9. Time steps for the plots are t = 0.099,, t = 0.1,,
t = 0.1 and t = 1.
- 242-
7.6 Stability and accuracy analysis
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.099/ BB =0.9
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1/ BB =0.9
0 1 2 3
1
0.5
0
0.5
1
time
u
1
t=0.1 BB =0.9
0 1 2 3
0.5
0
0.5
1
time
u
1
t=1.0 BB =0.9
Figure 70: Prediction for the problem of Example 52 for u
0
= [1, 1] using the
Bathe-Baig procedure with = 0.9. Time steps for the plots are t = 0.099,,
t = 0.1,, t = 0.1 and t = 1.
- 243-
7 Transient analyses in linear elastodynamics
any algorithm to obtain this amplication matrix numerically. In fact, this is actually
what really happens in a computer and, hence, it is frequently more enlightening
than the analytical one. This is the reason we have delayed the presentation of the
stability and accuracy analysis until now.
To compute the amplication matrix, all we need is to run one step for two
problems. Assume that as initial conditions we prescribe

1
0

(883)
then


1


11

12

21

22

1
0


11

21

(884)
And similarly


0

0
1


21

22

(885)
So once the amplication matrix is known, the spectral radii is the modulus of the
maximum eigenvalue. In the following Examples the method is applied to several
cases. The plots obtained in that example are extremely important in the selection
of the integration method and on the time step size. The problem usually selected
for these type of tasks is
n +.
2
n = 0 (886)
which is equivalent to a problem with unit mass and .
2
as stiness.
On the other hand, we have seen that the amplitude of the physically undamped
problem decreases in most algorithms (the trapezoidal rule is an exception), a fact
also closely related to the spectral radii of the amplication matrix. Hence there is
a "numerical dissipation" given by the algorithm. Furthermore, we have also seen
that the frequency (or period) of the higher modes are badly estimated in many
procedures for large time steps. The question is then which time step should one
use for the dierent schemes in order to obtain a maximum amplitude decay (AD
%) and a maximum period elongation (PE %). These quantities are dened as
1% =
n
max
(t = 1 cycle) n
iaitio|
(t = 0)
n
iaitio|
(t = 0)
100 (887)
11% =
T
real
T
tIccvj
T
tIccvj
100 (888)
The problem usually selected to compute these quantities is

n +.
2
n = 0 with .
2
= (2)
2
, i.e. T = 1
initial conditions: n
0
= 1 and n
0
= 0
(889)
so T
tIccvj
= 1 and n
iaitio|
= 1. The plots of 1 are sometimes given in the equivalent
numerical damping

, as it it where due to a physical damping. The relationship
between both quantities is approximately given by

=
1
2
(890)
- 244-
7.6 Stability and accuracy analysis
10
2
10
0
10
2
10
0
10
2
10
4
10
6
10
8
Adimensional step size t/T
S
p
e
c
t
r
a
l

r
a
d
i
i

(
A
)
Wilson=1.4, =1/6, =1/2 (1 step)
Figure 71: Spectral radii for Wilson-0 method with 0 = 1.4, computed for one step.
It is seen that j (A) 1, which explains the overshoot phenomena in the rst steps
which is observed in the predictions with this algorithm.
These quantities may also be computed theoretically or numerically. In example 69
we give a code to do so, valid for any algorithm.
Example 66 Obtain and plot the spectral radii for the Wilson-0 method for t,T

10
3
, 10
3

. Consider only one step in the computations.


Solution. The solution is given in Figure 71. The scheme looks unstable because
j () 1 for many t. However this is due to the already mentioned overshooting
phenomena.
Example 67 Compute the spectral radii for t,T

10
3
, 10
3

for the following


methods (use several steps)
(1) Wilson-0 with 0 = 1.4
(2) Collocation with 0 = 1.287, = 1,2 and , = 0.18
(3) Trapezoidal rule (Newmark-, with , = 1,4 and = 1,2)
(4) Newmark-, with , = 0.3025 and = 0.6
(5) Hilbert-Hughes-Taylor method with c = 0.05
(6) Hilbert-Hughes-Taylor method with c = 0.3
(7) Bathe-Baig mathod with = 1,2.
Solution. The solution is given in Figure 72. It is shown that the spectral radii show
the dierent characteristics of the methods regarding dissipation of higher modes.
Example 68 Compute the spectral radii of the Wilson-0 methods (with , = 1,6
and = 1,2) for dierent 0 [1, 3] parameters. Use 100 steps.
- 245-
7 Transient analyses in linear elastodynamics
10
2
10
0
10
2
0
0.2
0.4
0.6
0.8
1
Adimensional step size t/T
S
p
e
c
t
r
a
l

r
a
d
i
i


(
A
)


Wilson=1.4
Collocation =1.287 =0.18
Trapezoidal rule
Newmark=0.3025, =0.6
HHT =0.05
HHT =0.3
BatheBaig =0.5
Figure 72: Spectral radii for dierent methods.
- 246-
7.6 Stability and accuracy analysis
1 1.5 2 2.5 3
0
0.2
0.4
0.6
0.8
1
1.2
1.4
1.6
Parameter
S
p
e
c
t
r
a
l

r
a
d
i
i


(
A
)
Wilson method, =1/6, =1/2
Figure 73: Spectral radii of the dierent Wilson-0 methods for 0 [1, 3]. It is seen
that unconditional stability is obtained for 0 1.37.
Solution: The solution is given in Figure 73. It is seen that unconditional
stability (j (A) 1) is obtained for 0 1.37. Hence it is frequently used with
0 = 1.4.
Example 69 Compute the amplitude decays (AD) and the period elongations (PE)
of the following methods
(1) Wilson-0 with 0 = 1.4
(2) Central dierences
(3) Trapezoidal rule (Newmark-, with , = 1,4 and = 1,2)
(4) Hilbert-Hughes-Taylor method with c = 0.05
(5) Hilbert-Hughes-Taylor method with c = 0.3
(6) Bathe-Baig mathod with = 1,2.
(7) Newmark-, with , = 0.3025 and = 0.6
Solution: The solution is given in Figures 74 and 75. These gures are very im-
portant when deciding the time step as shown in next example.
Example 70 We want to integrate a model in which the minimum period of interest
(corresponding to the maximum frequency) is given by T. We desire a maximum
error of about a 5% in both the amplitude decay and the period elongation. Decide
the time increments for the following methods
(1) Wilson-0 with 0 = 1.4
(2) Central dierences
(3) Trapezoidal rule (Newmark-, with , = 1,4 and = 1,2)
- 247-
7 Transient analyses in linear elastodynamics
0 0.05 0.1 0.15 0.2
5
0
5
10
15
20
25
30
A
m
p
l
i
t
u
d
e

d
e
c
a
y

(
A
D
)

%
t/T


Wilson =1.4
Central differences
Trapezoidal
HHT =0.05
HHT =0.3
BatheBaig
Newmark =0.6
Figure 74: Amplitude decays for dierent integration methods after one cycle.
(4) Hilbert-Hughes-Taylor method with c = 0.05
(5) Hilbert-Hughes-Taylor method with c = 0.3
(6) Bathe-Baig method with = 1,2.
(7) Newmark-, with , = 0.3025 and = 0.6
Solution. In view of the plots of Example 69,for the dierent methods the solution
is
(1) Wilson-0 with 0 = 1.4. t = 0.08T, given by an amplitude decay restriction.
(2) Central dierences. In this case the method is conditionally stable, so t would
be given not by T, but by the maximum T
max
of the mesh, and t = T
max
,.
(3) Trapezoidal rule (Newmark-, with , = 1,4 and = 1,2). The time step for
this case is always given by the PE criterion, since it does not suer AD. Then
t = 0.12T.
(4) Hilbert-Hughes-Taylor method with c = 0.05. In this case, the PE gives the
criterion. t = 0.12T
(5) Hilbert-Hughes-Taylor method with c = 0.3. For these parameters, the PE give
again the criterion. t = 0.1T
(6) Bathe-Baig method with = 1,2. This is the method that allows for a larger
time step (more accurate), and hence it is arguably the best one. The criterion is
again given by the PE restriction. t = 0.18T.
(7) Newmark-, with , = 0.3025 and = 0.6. For this case the AD is the more
restrictive criterion: t = 0.025T and, hence, after the central dierences method
(which is explicit), it has the most restrictive time step.
- 248-
7.6 Stability and accuracy analysis
0 0.05 0.1 0.15 0.2
5
0
5
10
15
20
t/T
P
e
r
i
o
d

E
l
o
n
g
a
t
i
o
n

(
P
E

=

(
T

T
t
h
e
o
r
y
)
/
T
t
h
e
o
r
y
)

%


Wilson =1.4
Central differences
Trapezoidal
HHT =0.05
HHT =0.3
BatheBaig
Newmark =0.6
Figure 75: Period elongations obtained with dierent integration methods after one
cycle.
- 249-
7 Transient analyses in linear elastodynamics
7.7 Consistent initialization of algorithms
One of the important issues that have unadvertently lead to the overshooting phe-
nomena are the start-up procedures in the time integration algorithms. A clear
example is the computation of the acceleration using the equilibrium equation at
the initial instant of time when the displacements or velocities do not vanish. These
are relevant problems that occur when equilibrium is not enforced at time steps and
when large time steps are used at the same time. Algorithms that present such
problems are the Wilson method and the HHT method. The Newmark method and
the Bathe method are free from such problems because they employ equilibrium at
the end of the steps. However, it is possible to prescribe consistent initial conditions
also for the Wilson and HHT methods. For more details see Benitez and Montns
10
10
Benitez JM, Montns FJ. The value of numerical amplication matrices in time integration
methods. Computers & Structures Journal, to appear.
- 250-
8 Transient analysis in nonlinear dynamics
8.1 The nonlinear dynamics equation
In the preceding sections we have seen several types of time integration methods
or "time marching methods" in which the solution is obtained using several (may
be thousands of) time steps. These methods may be used for the complete model
where if is the number of degrees of freedom, K, C, M, are matrices. If
the model is large, a model order reduction scheme may be applied; we have seen
an example of such schemes due to Guyan (the most used one). If the model is
linear, i.e. when the innitesimal elasticity assumptions are employed, then a modal
decomposition may be applied. Hence natural frequencies and mode shapes are
obtained. The problem may be integrated using only some modes up to a cutting
frequency, usually four times the highest relevant frequency of the loading. The time
employed computing those modes usually pays o because only some few uncoupled
scalar equations need to be integrated. Furthermore, physical insight is obtained
computing those modes, so an eigenvalue analysis is always the second step when
analyzing a nite element model (the rst step is always a static analysis, never start
with a dynamic analysis without performing and checking a linear static analysis).
However, if the problem is nonlinear, for example due to nonlinear constitutive
equations (plasticity, hyperelasticity, viscoelasticity, contact, etc.) or due to the
presence of large displacements or large strains (or both), then the analysis is much
more complex. First, the eigenvalues and eigenvectors change during the analysis
because the stiness of the problem does. Hence, there is no special Rayleigh-
Ritz space or coordinates in which to project the problem in order to decouple the
dynamic equations. This space was precisely given by the modes. In consequence,
we will need to perform the time integration with the full uncoupled equations. This
was one of the reasons why we did seek unconditional stable algorithms, because if
we integrate the full problem, there is no way to "lter" the higher modes which
yield the critical time step in explicit methods.
Furthermore, if the problem is nonlinear, the usual linear dynamic equation
M u(t) +C u(t) +Ku(t) = f (t) (891)
is no longer valid, because the stiness K and probably the damping C change
during the analysis. The mass M is usually constant for structural problems, but
in some special problems could also change. Thus, the equilibrium needs to be
considered in its original form given by Equation (703), page 185, which we recover
now for the reader comfort
f
A
(t) +f
C
(t) +f
1
(t) = f (t) (892)
where

f = External loads
f
A
= Inertia (mass) loads
f
C
= Damping loads
f
1
= Elastic (stiness) loads
(893)
- 251-
8 Transient analysis in nonlinear dynamics
8.2 Time discretization of the nonlinear dynamics equation
For nonlinear problems, these force vectors, especially the elastic loads, may be
dependent on the displacements at each time step, and the displacements depend
also on this equilibrium equation, i.e. are such that equilibrium is fullled. Hence,
there is no other good option than to establish equilibrium iteratively (the bad
option is to just keep going without establishing equilibrium). This means that at
each time step we iterate on the solution, trying some displacements and looking for
equilibrium. The procedure is as follows.
Assume that we know the solution at step : (the so called converged step)
u
a
, u
a
, u
a
known and such that f
A
a
+f
C
a
+f
1
a
= f
a
(894)
and we wish to obtain the solution at step : + 1.
u
a+1
, u
a+1
, u
a+1
unknown and such that f
A
a+1
+f
C
a+1
+f
1
a+1
6= f
a+1
(895)
We iterate on the displacements, so for iteration (i)
u
(i)
a+1
= u
a
+u
(i)
a+1
= u
(i1)
a+1
+
2
u
(i)
a+1
(896)
and in a similar way
u
(i)
a+1
= u
a
+ u
(i)
a+1
= u
(i1)
a+1
+
2
u
(i)
a+1
(897)
u
(i)
a+1
= u
a
+ u
(i)
a+1
= u
(i1)
a+1
+
2
u
(i)
a+1
(898)
we we have denoted by ()
(i)
= ()
(i)
a+1
()
a
the increment during the step and
by
2
()
(i)
= ()
(i)
a+1
()
(i1)
a+1
the increment between two iterations. Since usually
before convergence f
A
a+1
+f
C
a+1
+f
1
a+1
6= f
a+1
, rst a residual vector is dened to
be
r
(i)
a+1
= f
a+1
f
A(i)
a+1
f
C(i)
a+1
f
1(i)
a+1
(899)
Without loss of generality, we will assume the usual case that both f
A(i)
a+1
and f
C(i)
a+1
are, for all iterations and steps, given in the linear form
f
A(i)
a+1
= M u
(i)
a+1
(900)
and
f
C(i)
a+1
= C u
(i)
a+1
(901)
Hence
r
(i)
a+1
= f
a+1
M u
(i)
a+1
C u
(i)
a+1
f
1(i)
a+1
(902)
where f
1(i)
a+1

u
(i)
a+1

depends on the displacements u


(i)
a+1
in a nonlinear manner given
by the constitutive equation (plasticity, viscoplasticity, creep) or by large strain and
displacements assumptions. The objective of the iterative procedure is to bring
the residual to zero r
(i)
a+1
0. In such case we would have achieved equilibrium.
However, we must iterate on u
(i)
a+1
to obtain it. Once of the simplest and most used
- 252-
8.2 Time discretization of the nonlinear dynamics equation
methods is the Newton-Raphson iterative procedure, which is based on Taylors
expansion series:
r
(i+1)
a+1
= r
(i)
a+1
+
2
r
(i+1)
a+1
= r
(i)
a+1
+
0r
(i)
a+1
0u
(i)
a+1

2
u
(i+1)
a+1
0 (903)
"

0r
(i)
a+1
0u
(i)
a+1
#

2
u
(i+1)
a+1
= r
(i)
a+1
(904)
r
(i)
a+1
+
2
r
(i+1)
a+1
= f
(i)
a+1
+
0f
(i)
a+1
0u
(i)
a+1

2
u
(i+1)
a+1
M

u
(i)
a+1
+
2
u
(i+1)
a+1

u
(i)
a+1
+
2
u
(i+1)
a+1

f
1(i)
a+1
+
0f
1(i)
a+1
0u
(i)
a+1

2
u
(i+1)
a+1
!
+/.o.t.
(905)
Since we want this residual to go to zero to obtain equilibrium, using Equation (902),
we can express our wish as
0 = r
(i+1)
a+1
| {z }
our wish!
= r
(i)
a+1
+

2
r
(i+1)
a+1
=
0r
(i)
a+1
0u
(i)
a+1

2
u
(i+1)
a+1
our wish
= r
(i)
a+1
z }| {
0f
(i)
a+1
0u
(i)
a+1

2
u
(i+1)
a+1
| {z }
due to "follower" loads
M
2
u
(i+1)
a+1
C
2
u
(i+1)
a+1

0f
1(i)
a+1
0u
(i)
a+1

2
u
(i+1)
a+1
| {z }
mat. & geom. nonlinear.
Then, we obtain the more usual case in which only the elastic forces do depend on
the displacements. In such case we dene the eective stiness at iteration (i) of
step (: + 1), which we use at iteration (i + 1)
0f
1(i)
a+1
0u
(i)
a+1
= K
(i)
a+1
(906)
and we dene the geometric matrix for the loads as
0f
(i)
a+1
0u
(i)
a+1
= K
f (i)
a+1
(907)
Then
0 = r
(i+1)
a+1
| {z }
our wish!
= r
(i)
a+1
+K
f (i)
a+1

2
u
(i+1)
a+1
M
2
u
(i+1)
a+1
C
2
u
(i+1)
a+1
K
(i)
a+1

2
u
(i+1)
a+1
| {z }

2
r
(i+1)
a+1
=
0r
(i)
a+1
0u
(i)
a+1

2
u
(i+1)
a+1
our wish
= r
(i)
a+1
(908)
- 253-
8 Transient analysis in nonlinear dynamics
So the dynamics equation for the current iteration in order to fulll our wish is
M
2
u
(i+1)
a+1
+C
2
u
(i+1)
a+1
+

K
(i)
a+1
K
f (i)
a+1

2
u
(i+1)
a+1
| {z }

0r
(i)
a+1
0u
(i)
a+1

2
u
(i+1)
a+1
our wish
= r
(i)
a+1
(909)
The usual case is when the external loads do not depend on the displacements of
the structure. This is the case of dead loads as the weight. Loads like a pressure
does depend on the displacements since they are usually perpendicular to a surface
which may change direction during deformation. We will also assume for simplicity
that the loads are independent of the deformation of the structure, so
K
f (i)
a+1
=
0f
(i)
a+1
0u
(i)
a+1
= 0 (910)
Otherwise, we can redene
K
(i)
a+1

redene

K
(i)
a+1
K
f (i)
a+1

(911)
So in any case
M
2
u
(i+1)
a+1
+C
2
u
(i+1)
a+1
+K
(i)
a+1

2
u
(i+1)
a+1
= r
(i)
a+1
(912)
which is the same equation as the one for linear problems, but this time K
(i)
a+1
changes from iteration to iteration. Note that
2
()
(i+1)
a+1
are the incremental quan-
tities to be applied to the displacements, velocities or accelerations ()
(i)
a+1
at the
previous iteration, see Equation (896). As loads for the iteration we have the resid-
ual at the previous iteration r
(i)
a+1
. The iterative process ends when the convergence
criteria are met. Typical convergence criteria are

r
(i)
a+1

tol
1
, to guarantee no iteration is performed if in equilibrium

r
(i)
a+1

r
(0)
a+1

tol
2
, relative error in out-of-balance forces
r
(i+1)T
a+1

2
u
(i+1)
a+1
,

r
(0)T
a+1
u
a

tol
3
relative error in the out of balance energy
(913)
As the reader is prompted to verify, the basic structure of the previous integration
algorithms for the linear case is applicable to the nonlinear case with few changes,
and this is the reason why we did study in detail the linear case. The equivalences
are

2
u
(i+1)
a+1
u
a+1
;
2
u
(i+1)
a+1
u
a+1
;
2
u
(i+1)
a+1
u
a+1
(914)
and
K
(i)
a+1
K; r
(i)
a+1
f
a+1
(915)
For the nonlinear case, a procedure would be something like the one shown in
table 3. In the procedure we assume an algorithm in d-form.
- 254-
8.3 Example: The nonlinear Newmark-, algorithm in
predictor-multicorrector d-form
Newton-Raphson algorithm for nonlinear dynamic problems.
A Initialize the problem for the specic algorithm. Use elastic properties
B For each time step : = 1, ...
B.1 Compute reference errors and check them:

r
(0)
a+1

tol
1
B.2

r
(0)
a+1

tol
1
compute r
(0)T
a+1
u
a
B.3 Compute predictors u
j(1)
a+1
; u
j(1)
a+1
; u
j(1)
a+1
C For each iteration i = 1, ...
C.1 Form equivalent stiness matrix K
(i)
a+1
and equivalent forces f
(i+1)
a+1
C.2 Obtain displ. increment
2
u
(i+1)
a+1
solving K
(i)
a+1

2
u
(i+1)
a+1
= f
(i+1)
a+1
C.3 Correct displacements, velocities and accelerations
C.4 Check for convergence. If converged, exit iterative procedure
Table 3: Template for a typical integration of a nonlinear dynamics equation
8.3 Example: The nonlinear Newmark- algorithm in predictor-
multicorrector d-form
In this section we see an example of how the linear integration algorithms are ex-
tended to the nonlinear case. We select the Newmark-, algorithm in d-form. The
procedure for the rest of the algorithms is similar. We leave to the reader to extend
those algorithms for the nonlinear case.
In the Newmark-, algorithm the Newmark integration formulae are used, which
we recover in predictor-corrector form for the reader comfort, Eq.(798)
u
a+1
=
1
,t
2
u
a

1
,t
u
a

1
2,
1

u
a
| {z }
predictor: u
j
a+1
+
1
,t
2
u
a+1
| {z }
corrector: u
c
a+1
u
a+1
= u
a
+ (1 ) t u
a
+t u
j
a+1
| {z }
predictor: u
j
a+1
+ t u
c
a+1
| {z }
corrector: u
c
a+1
In the case at hand we are iteratively enforcing equilibrium, so
()
(i+1)
a+1
= ()
a
+()
(i+1)
a+1
= ()
(i)
a+1
+
2
()
(i+1)
a+1
= ()
a
+
i+1
X
)=1

2
()
())
a+1
(916)
- 255-
8 Transient analysis in nonlinear dynamics
Inserting these formulae into the previous equations we obtain
u
(i+1)
a+1
=
1
,t
2
u
a

1
,t
u
a

1
2,
1

u
a
| {z }
predictor: u
j
a+1
+
1
,t
2
i+1
X
)=1

2
u
())
a+1
| {z }
corrector: u
c
a+1
(917)
u
(i+1)
a+1
= u
a
+ (1 ) t u
a
+t u
j
a+1
| {z }
predictor: u
j
a+1
+ t
i+1
X
)=1

2
u
c
a+1
| {z }
corrector: u
c
a+1
(918)
which can be re-written as
u
(i+1)
a+1
=
1
,t
2
u
a

1
,t
u
a

1
2,
1

u
a
+
1
,t
2
i
X
)=1

2
u
())
a+1
| {z }
new predictor corrected: u
j(i+1)
a+1
+
1
,t
2

2
u
(i+1)
a+1
| {z }
corrector: u
c(i+1)
a+1
(919)
u
(i+1)
a+1
= u
a
+ (1 ) t u
a
+t u
j
a+1
+t
i
X
)=1

2
u
c())
a+1
| {z }
new predictor corrected: u
j(i+1)
a+1
+ t
2
u
c(i+1)
a+1
| {z }
corrector: u
c(i+1)
a+1
(920)
and of course
u
(i+1)
a+1
= u
a
+
i
X
)=1

2
u
())
a+1
| {z }
new predictor u
j(i+1)
a+1
+
2
u
(i+1)
a+1
| {z }
corrector u
j(i+1)
a+1
(921)
i.e. at each iteration the predictor quantities are updated by the corrector quantities
and stored as predictors for the next iteration. These type of algorithms are named
predictor-multicorrector algorithms for obvious reasons.
Then, the equilibrium Equation (912) is written as (please, verify to convince
yourself!)
K
(i)
a+1

2
u
(i+1)
a+1
= f
(i+1)
a+1
(922)
where
f
(i+1)
a+1
= f
a+1
M u
j(i+1)
a+1
C u
j(i+1)
a+1
f
1(i)
a+1
(923)
- 256-
8.4 Example: The HHT method in predictor-multicorrector a-form
and
K
(i)
a+1
= K
(i)
a+1
+
1
,t
2
M +

,t
C (924)
Once the solution of Equation (922), the predictor values for displacements, velocities
and acceleration are updated
u
j(i+1)
a+1
= u
j(i)
a+1
+
2
u
(i)
a+1
(925)
u
j(i+1)
a+1
= u
j(i)
a+1
+
1
,t
2

2
u
(i)
a+1
(926)
u
j(i+1)
a+1
= u
j(i)
a+1
+t
2
u
c(i)
a+1
(927)
and we proceed with iteration (i + 1). It is instructive to verify that comparing
Equations (922) and (904), we obtain

0r
(i)
a+1
0u
(i)
a+1
= K
(i)
a+1
and r
(i)
a+1
= f
(i+1)
a+1
(928)
We leave to the reader to obtain the procedure including follower forces.
Finally, we make the following remarks
In the linear case, the iterative procedure obviously converges in just one
iteration and only one correction is performed
In most nonlinear analysis, the stiness of the structure or problem decreases.
Hence, the natural frequencies also do so. In consequence, algorithms stable
for the initial material properties remain stable during all the analysis. The
same occurs in terms of accuracy. During the analysis, the accuracy increases.
However, there are some problems in which the stiness may increase (for
example contact problems). Then the algorithms may become unstable. In
those cases, the parameters and time step size must be selected using the
highest possible stiness.
In this example we have used a full Newton-Raphson procedure. However, fre-
quently this procedure is modied for several reasons as to prevent lack of
convergence using a line search procedure, to prevent multiple factorizations
of the eective stiness matrix K
(i)
a+1
using modied Newton-Raphson schemes
or BFGS (BroydenFletcherGoldfarbShanno) secant updates or similar pro-
cedures. The modications over the given procedures are straightforward upon
study of those techniques.
8.4 Example: The HHT method in predictor-multicorrector a-form
Another example of nonlinear implementation of a time integration algorithm is the
HHT cmethod in a-form, which is also suitable for explicit-implicit implementation
and that contains the Newmark-, algorithm in a-form as a particular case. In
- 257-
8 Transient analysis in nonlinear dynamics
nonlinear problems, the iterative scheme is applied to the Newmark integration
formulas which we reproduce again for the readers comfort
u
(i+1)
a+1
= u
a
+t u
a
+
t
2
2
(1 2,) u
a
+,t
2
u
(i)
a+1
| {z }
predictor u
j(i+1)
a+1
+ ,t
2

2
u
(i+1)
a+1

| {z }
correct. u
c(i+1)
a+1
(929)
u
(i+1)
a+1
= u
a
+ (1 ) t u
a
+t u
(i)
a+1
| {z }
predictor u
j(i+1)
a+1
+ t

2
u
(i+1)
a+1

| {z }
correct. u
c(i+1)
a+1
(930)
The equation of equilibrium is for this case, assuming that the viscous damping
given by C is linear
M u
(i+1)
a+1
+C
h
(1 +c) u
(i+1)
a+1
c u
a
i
+f
1(i+1)
a+1

u
(i+1)
a+1+c

= f
(i+1)
a+1+c
(931)
where
u
(i+1)
a+1+c
= (1 +c) u
(i+1)
a+1
cu
a
= (1 +c) u
(i)
a+1
cu
a
| {z }
predictor u
j(i+1)
a+1+c
+ (1 +c)
2
u
(i+1)
a+1
| {z }
correct. u
c(i+1)
a+1+c
(932)
u
(i+1)
a+1+c
= (1 +c) u
(i+1)
a+1
c u
a
= (1 +c) u
(i)
a+1
c u
a
| {z }
predictor u
j(i+1)
a+1+c
+ (1 +c)
2
u
(i+1)
a+1
| {z }
correct. u
c(i+1)
a+1+c
(933)
The residual form is (we omit the subindex for the residual because equilibrium is
not enforced at a particular time)
r
(i+1)
= f
(i+1)
a+1+c
M u
(i+1)
a+1
C
h
(1 +c) u
(i+1)
a+1
c u
a
i
f
1(i+1)
a+1+c

u
(i+1)
a+1+c

0
(934)
so taking into account that 0u
a+1+c
,0u
a+1
= (1 +c) I
r
(i+1)
' r
(i)
+
0r
(i)
0u
(i)
a+1

2
u
(i+1)
a+1
= r
(i)
+

0r
(i)
0u
(i)
a+1
0u
(i)
a+1
0 u
(i)
a+1
!

2
u
(i+1)
a+1
= r
(i)
+
0r
(i)
0 u
(i)
a+1

2
u
(i+1)
a+1
= f
(i)
a+c
M u
j(i)
a+1
C
h
(1 +c) u
j(i)
a+1
c u
a
i
f
1(i)
a+1+c

u
j(i)
a+1+c

M
2
u
(i+1)
a+1
(1 +c) tC

2
u
(i+1)
a+1

0f
1(i)
a+1+c
0u
(i)
a+1+c

0f
(i)
a+1+c
0u
(i)
a+1+c
!
0u
(i)
a+1+c
0u
(i)
a+1
0u
(i)
a+1
0 u
(i)
a+1

2
u
(i+1)
a+1
(935)
where
0f
1(i)
a+1+c
0u
(i)
a+1+c
0u
(i)
a+1+c
0u
(i)
a+1
0u
(i)
a+1
0 u
(i)
a+1
= (1 +c) ,t
2
K
(i)
a+1+c
(936)
- 258-
8.5 Example: The Bathe-Baig algorithm in
predictor-multicorrector d-form.
and for the follower forces the procedures is similar but results on a geometric
stiness
0f
(i)
a+1+c
0u
(i)
a+1+c
0u
(i)
a+1+c
0u
(i)
a+1
0u
(i)
a+1
0 u
(i)
a+1
= (1 +c) ,t
2
K
f (i)
a+1+c
(937)
Thus the system of equations to solve at each iteration is
M
(i)
a+1

2
u
(i+1)
a+1
= f
(i+1)
a+1
(938)
where
M
(i)
a+1
= M + (1 +c) tC+(1 +c) ,t
2

K
(i)
a+1+c
K
f (i)
a+1+c


0r
(i)
0 u
(i)
a+1
(939)
f
(i+1)
a+1
= f
(i)
a+1+c
M u
j(i)
a+1
C u
j(i)
a+1+c
f
1(i)
a+1+c

u
j(i)
a+1+c

r
(i)
(940)
Once the system of equations is solver, the corrector acceleration
2
u
(i+1)
a+1
is used
to correct the displacements and velocities usinf Eqs. (929) and (930).
8.5 Example: The Bathe-Baig algorithmin predictor-multicorrector
d-form.
In this Section we see, as a second example, a possible implementation of the Bathe-
Baig algorithm for nonlinear problems in predictor-multicorrector d-form. The pro-
cedure is similar to that already developed for the Newmark-, algorithm. Recovering
Equations (857) and (858), page (232) for the reader comfort
u
a+
=

2
t

2
(u
a
t u
a
) u
a
| {z }
predictor: u
j
a+
+

2
t

2
u
a+
| {z }
correct.: u
c
a+
(941)
u
a+
=
2
t
u
a
u
a
| {z }
pred.: u
j
a+
+
2
t
u
a+
| {z }
corr.: u
c
a+
(942)
Since now we must iterate to obtain a solution, such that
()
(i+1)
a+1
= ()
a
+()
(i+1)
a+1
= ()
(i)
a+1
+
2
()
(i+1)
a+1
= ()
a
+
i+1
X
)=1

2
()
())
a+1
(943)
- 259-
8 Transient analysis in nonlinear dynamics
these equations are written in predictor-multicorrector form for a given iteration
u
(i+1)
a+
=

2
t

2
u
(i)
a+
u
a
t u
a

u
a
| {z }
predictor: u
j(i+1)
a+
+

2
t

2
u
(i+1)
a+
| {z }
corr.: u
c(i+1)
a+
(944)
u
(i+1)
a+
=
2
t

u
(i)
a+
u
a

u
a
| {z }
pred.: u
j(i+1)
a+
+
2
t

2
u
(i+1)
a+
| {z }
corr.: u
c(i+1)
a+
(945)
or alternatively using
u
(i+1)
a+
=
i
X
)=1
u
())
a+
| {z }
pred.: u
j(i+1)
a+
+
2
u
(i+1)
a+
| {z }
corr.:u
c(i+1)
a+
(946)
we can write
u
(i+1)
a+
=

2
t

2
u
j(i+1)
a+

4
t
u
a
u
a
| {z }
predictor: u
j(i+1)
a+
+

2
t

2
u
(i+1)
a+
| {z }
corr.: u
c(i+1)
a+
(947)
u
(i+1)
a+
=
2
t
u
j(i+1)
a+
u
a
| {z }
pred.: u
j(i+1)
a+
+
2
t

2
u
(i+1)
a+
| {z }
corr.: u
c(i+1)
a+
(948)
Then, the equilibrium equation in residual form for the rst substep is
r
(i+1)
a+
= f
(i+1)
a+

u
(i+1)
a+

M u
(i+1)
a+
C u
(i+1)
a+
f
1(i+1)
a+

u
(i+1)
a+

0 (949)
i.e. substituting the previous equations and considering the case of possible follower
forces
r
(i+1)
a+
' r
(i)
a+
+
0r
(i)
a+
0u
(i)
a+

2
u
(i+1)
a+
= f
(i)
a+
+
0f
(i)
a+
0u
(i)
a+

2
u
(i+1)
a+
M

u
j(i+1)
a+
+

2
t

2
u
(i+1)
a+
!
C

u
j(i+1)
a+
+
2
t

2
u
(i+1)
a+

f
1(i)
a+

u
j(i)
a+

0f
1(i)
a+
0u
(i)
a+

2
u
(i+1)
a+
(950)
i.e.
K
(i)
a+

2
u
(i+1)
a+

= f
(i+1)
a+
(951)
- 260-
8.5 Example: The Bathe-Baig algorithm in
predictor-multicorrector d-form.
where
K
(i)
a+
=
0f
1(i)
a+
0u
(i)
a+

0f
(i)
a+
0u
(i)
a+
| {z }
K
(i)
a+
+
2
t
C +

2
t

2
M
0r
(i)
a+
0u
(i)
a+
(952)
and K
(i)
a+
includes the geometrical stiness of the follower forces, and
f
(i+1)
a+
= f
(i)
a+

u
j(i+1)
a+

M u
j(i+1)
a+
C u
j(i+1)
a+
f
1(i)
a+

u
j(i+1)
a+

r
(i)
a+
(953)
For the second substep we recover Equations (871) and (872) of page 233 already
converted to the iterative scheme
u
(i+1)
a+1
= c
1
u
a
+c
2
u
a+
+c
3
u
(i)
a+1
| {z }
predictor u
j(i+1)
a+1
+ c
3

2
u
(i+1)
a+1
| {z }
corr. u
c(i+1)
a+1
(954)
u
(i+1)
a+1
= c
1
u
a
+c
2
u
a+
+c
3
c
1
u
a
+c
3
c
2
u
a+
+c
3
c
3
u
(i)
a+1
| {z }
predictor u
j(i+1)
a+1
(955)
+ c
3
c
3

2
u
(i+1)
a+1
| {z }
corr u
c(i+1)
a+1
(956)
Now, the equilibrium equation is
r
(i+1)
a+1
= f
(i+1)
a+1

u
(i+1)
a+1

M u
(i+1)
a+1
C u
(i+1)
a+1
f
1(i+1)
a+1

u
(i+1)
a+1

0 (957)
i.e. substituting the previous equations and considering the case of possible follower
forces
r
(i+1)
a+1
' r
(i)
a+1
+
0r
(i)
a+1
0u
(i)
a+1

2
u
(i+1)
a+1
= f
(i)
a+1
+
0f
(i)
a+1
0u
(i)
a+1

2
u
(i+1)
a+1
M

u
j(i+1)
a+1
+c
3
c
3

2
u
(i+1)
a+1

u
j(i+1)
a+1
+c
3

2
u
(i+1)
a+1

f
1(i)
a+

u
j(i)
a+1

0f
1(i)
a+1
0u
(i)
a+1

2
u
(i+1)
a+1
(958)
i.e.
K
(i)
a+1

2
u
(i+1)
a+1

= f
(i+1)
a+1
(959)
where
K
(i)
a+1
=
0f
1(i)
a+1
0u
(i)
a+1

0f
(i)
a+1
0u
(i)
a+1
| {z }
K
(i)
a+1
+c
3
C +c
3
c
3
M
0r
(i)
a+1
0u
(i)
a+1
(960)
- 261-
8 Transient analysis in nonlinear dynamics
and
f
(i+1)
a+1
= f
(i)
a+1

u
j(i+1)
a+1

M u
j(i+1)
a+1
C u
j(i+1)
a+1
f
1(i)
a+1

u
j(i+1)
a+1

r
(i)
a+1
(961)
Example 71 Derive the eective algorithmic stiness matrix and the eective load
vector for an elastic pendulum, which is composed of a massless bar of length 1 in
static conguration and sectional stiness 1, and an end mass :.
Solution. This is a geometrically nonlinear problem of two degrees of freedom. Let
0 be the angle between the vertical and the bar of the pendulum. Let n
a
and n
j
be the
displacement of the mass from the static vertical position in equilibrium. The initial
length of the bar is 1, which already contains the straining in static equilibrium. For
a given displacement vector u the length of the bar is
| =
q
n
2
a
+ (1 +n
j
)
2
(962)
and for future reference we note that

$$\frac{\partial l}{\partial u_x}=\frac{u_x}{l}\,;\qquad \frac{\partial l}{\partial u_y}=\frac{L+u_y}{l} \qquad (963)$$

$$\frac{\partial^{2}l}{\partial u_x^{2}}=\frac{(L+u_y)^{2}}{l^{3}}\,;\qquad \frac{\partial^{2}l}{\partial u_y^{2}}=\frac{u_x^{2}}{l^{3}}\,;\qquad \frac{\partial^{2}l}{\partial u_x\,\partial u_y}=-\frac{(L+u_y)\,u_x}{l^{3}} \qquad (964)$$

and the position may also be computed from the angle

$$\sin\theta=\frac{u_x}{l} \qquad\text{and}\qquad \cos\theta=\frac{L+u_y}{l} \qquad (965)$$
Let $L_0$ be the length of the bar in the relaxed configuration. Then the logarithmic strain is

$$\varepsilon=\ln\frac{l}{L_0} \qquad (966)$$

so the strain under the static load is

$$\varepsilon_0=\ln\frac{L}{L_0} \qquad (967)$$
and the elastic energy, assuming that the volume $V_0=A_0L_0$ is preserved (isochoric deformation) and that the strains are constant in the bar, is

$$W=\frac{1}{2}\int_{V}E\,\varepsilon^{2}\,dV=\frac{1}{2}\int_{V_0}E\,\varepsilon^{2}\,dV_0=\frac{1}{2}\,EA_0L_0\left(\ln\frac{l}{L_0}\right)^{2}=\frac{1}{2}\,\kappa\left(\ln\frac{l}{L_0}\right)^{2} \qquad (968)$$

where $\kappa=EA_0L_0$. The derivatives are

$$\frac{\partial W}{\partial l}=\frac{\kappa}{l}\,\ln\frac{l}{L_0}\,;\qquad \frac{\partial^{2}W}{\partial l^{2}}=\frac{\kappa}{l^{2}}\left(1-\ln\frac{l}{L_0}\right) \qquad (969)$$
The internal (elastic) force may be computed using Castigliano's theorem

$$f^{\mathrm{int}}_{x}=\frac{\partial W}{\partial u_x}=\frac{\partial W}{\partial l}\,\frac{\partial l}{\partial u_x}=\frac{\kappa}{l^{2}}\left(\ln\frac{l}{L_0}\right)u_x \qquad (970)$$

$$f^{\mathrm{int}}_{y}=\frac{\partial W}{\partial u_y}=\frac{\partial W}{\partial l}\,\frac{\partial l}{\partial u_y}=\frac{\kappa}{l^{2}}\left(\ln\frac{l}{L_0}\right)(L+u_y) \qquad (971)$$
and the effective stiffness components are

$$K_{11}=\frac{\partial f^{\mathrm{int}}_{x}}{\partial u_x}=\frac{\partial^{2}W}{\partial u_x^{2}}=\overbrace{\frac{\partial^{2}W}{\partial l^{2}}\left(\frac{\partial l}{\partial u_x}\right)^{2}}^{\text{material}}+\overbrace{\frac{\partial W}{\partial l}\,\frac{\partial^{2}l}{\partial u_x^{2}}}^{\text{geometric}}=\frac{\kappa}{l^{2}}\left(1-\ln\frac{l}{L_0}\right)\frac{u_x^{2}}{l^{2}}+\frac{\kappa}{l^{2}}\left(\ln\frac{l}{L_0}\right)\frac{(L+u_y)^{2}}{l^{2}}=\frac{\kappa}{l^{4}}\left[u_x^{2}-\left(u_x^{2}-(L+u_y)^{2}\right)\ln\frac{l}{L_0}\right] \qquad (972)$$
$$K_{12}=\frac{\partial f^{\mathrm{int}}_{x}}{\partial u_y}=\frac{\partial^{2}W}{\partial u_x\,\partial u_y}=\overbrace{\frac{\partial^{2}W}{\partial l^{2}}\,\frac{\partial l}{\partial u_y}\,\frac{\partial l}{\partial u_x}}^{\text{material}}+\overbrace{\frac{\partial W}{\partial l}\,\frac{\partial^{2}l}{\partial u_x\,\partial u_y}}^{\text{geometric}}=\frac{\kappa}{l^{2}}\left(1-\ln\frac{l}{L_0}\right)\frac{(L+u_y)\,u_x}{l^{2}}-\frac{\kappa}{l}\left(\ln\frac{l}{L_0}\right)\frac{(L+u_y)}{l^{3}}\,u_x=\frac{\kappa}{l^{4}}\left(1-2\ln\frac{l}{L_0}\right)(L+u_y)\,u_x=K_{21} \qquad (973)$$
$$K_{22}=\frac{\partial f^{\mathrm{int}}_{y}}{\partial u_y}=\frac{\partial^{2}W}{\partial u_y^{2}}=\overbrace{\frac{\partial^{2}W}{\partial l^{2}}\left(\frac{\partial l}{\partial u_y}\right)^{2}}^{\text{material}}+\overbrace{\frac{\partial W}{\partial l}\,\frac{\partial^{2}l}{\partial u_y^{2}}}^{\text{geometric}}=\frac{\kappa}{l^{2}}\left(1-\ln\frac{l}{L_0}\right)\frac{(L+u_y)^{2}}{l^{2}}+\frac{\kappa}{l}\left(\ln\frac{l}{L_0}\right)\frac{u_x^{2}}{l^{3}}=\frac{\kappa}{l^{4}}\left[(L+u_y)^{2}+\left(u_x^{2}-(L+u_y)^{2}\right)\ln\frac{l}{L_0}\right] \qquad (974)$$
Note that all the previous expressions may be written in terms of the coordinates

$$x=u_x \qquad\text{and}\qquad y=u_y+L \qquad (975)$$

and

$$\dot x=\dot u_x \qquad\text{and}\qquad \dot y=\dot u_y \qquad (976)$$

On the other hand, the external load vector is constant,

$$\tilde f=\begin{Bmatrix}0\\-mg\end{Bmatrix} \qquad (977)$$
In order to have a measure of the appropriate time integration step, the elastic (axial) and pendulum frequencies are estimated as

$$\omega_{E}=\sqrt{\frac{k}{m}}=\sqrt{\frac{EA_0/L_0}{m}} \qquad\text{and}\qquad \omega_{p}=\sqrt{\frac{g}{L}} \qquad (978)$$

which give an estimation of the time increment necessary for an accurate description of the problem, for example

$$\Delta t=\min\left(\frac{T_E}{10},\ \frac{T_p}{10}\right) \qquad (979)$$
Example 72 Write a code to compute a step of the time integration of a pendulum
like the one of the previous example.
Solution:
function [fk,K,C,M,W] = Nlbar(u)
% solves a step for a elastic pendulum required for
% time integration of the problem
%
% u = displacement vector (2 dof): (ux, uy), measured from the static position
% fk= internal elastic force given by the spring
% K = stiffness matrix
% C = damping matrix
% M = mass matrix
% W = elastic energy
%
% elastic parameters in local coordinates, change as desired
L0= 1; % relaxed (unstrained) length of the pendulum bar
E = 1000; % Young's modulus
A = 1; % cross-sectional area
m = 1.; % mass at the end of the pendulum
g = 9.81; % gravity
L = L0; % static reference *** place here the static value
%
ux = u(1); uy = u(2);
x = ux; % coordinates
y = uy + L0;
l = sqrt(x^2+y^2); % current length
c = x / l; % angle of the bar with x axis
s = y / l; % (actually the sine and cosine of the angle)
%
M = [m,0;0,m]; % mass matrix (lumped mass)
C = [0,0;0,0]; % damping matrix (no damping)
k = E*A*L0; % stiffness constant of the bar
kl2= k/l^2;
kl4= k/l^4;
ls= log(l/L); % logarithmic strain
%** elastic energy
W = 1/2*k*ls^2;
%** internal force vector (column vector, as expected by the integrator)
fk = zeros(2,1);
fk(1) = kl2*ux*ls;
fk(2) = kl2*ls*y;
%** stiffness matrix
K(1,1) = kl4*(ux^2 - (ux^2 - y^2)*ls);
K(1,2) = kl4*(1-2*ls)*y*ux;   % see Eq. (973); zero at the static vertical position
K(2,1) = K(1,2);
K(2,2) = kl4*(y^2+(ux^2-y^2)*ls);
return,
end
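As a quick consistency check of the routine (a hypothetical usage, not part of the original example), it can be evaluated at the static configuration u = 0, where the internal force must vanish, the coupling term K(1,2) must be zero by symmetry, and K(2,2) must equal the axial stiffness E*A/L0:

u0 = [0; 0];                 % static vertical position
[fk,K,C,M,W] = Nlbar(u0);
disp(fk);                    % expected: [0; 0]
disp(K);                     % expected: K(1,2)=K(2,1)=0, K(2,2)=E*A/L0=1000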
Example 73 Write a code using your preferred computer language to solve the
previous example using the HHT algorithm in a-form. Prescribe a relative tolerance
of the residual forces of 10^-5. Use the following constants ...
Solution: The source code follows
function [t,u,v,a,W,T] = nlHHT(problem,f,dt,N,u0,v0,be,ga,al,nitmx,etol)
%*** [t,u,v,a,W,T] = nlHHT(problem,f,dt,N,u0,v0,be,ga,al,nitmx,etol)
% This is a program to integrate the equation of motion
% using the Hilber-Hughes-Taylor alpha method
%
% @problem = function that contains the problem to run, see below
% f = force vector: f(dim(K),1:N); 2nd dimens. may be >N
% dt= time increment
% N = number of steps
% u0= initial displacements
% v0= initial velocities
% be= Newmark beta parameter
% ga= Newmark gamma parameter
% al= HHT alpha parameter
% nitmx = maximum number of equilibrium iterations
% etol = relative error tolerance in equilibrium iterations
%
% t = time (for plots)
% u,v,a = displacements, velocities and accelerations
% W,T = elastic and kinematic energies
%
% The problem function must have the following format:
% [fk,K,C,M,W] = problem(u)
% where:
% @problem = handle of the function that contains the problem
% u = current displacements
% fk = internal "elastic" forces at displacements u
% K = dfint/du at displacements u
% C, M = damping and mass matrices
% W = elastic energy
%
%* initial calculations
t(1) = 0; % time;
a0 = ga*dt; a1 = be*dt^2; % constants
a2 = dt-a0; a3 = (dt^2)/2-a1; % more constants
al1= al + 1;
u(:,1) = u0; % initial displacements
v(:,1) = v0; % initial velocities
[fk,K,C,M,W(1)] = feval(problem,u0);% initial K,C,M matrices
a(:,1) = M\(f(:,1)-fk-C*v0); % initial acceleration by eq.
T(1) = 1/2*v(:,1)'*M*v(:,1); % initial kinetic energy
%** main time integration loop
for n=1:N-1;
disp(['Step ',num2str(n)]);
t(n+1) = t(n) + dt;
a(:,n+1) = a(:,n); % pred. accelerations
u(:,n+1) = u(:,n) + dt*v(:,n) + a3*a(:,n); % predictor displac.
u(:,n+1) = u(:,n+1) + a1*a(:,n+1); %
v(:,n+1) = v(:,n) + a2*a(:,n); % predictor veloc.
v(:,n+1) = v(:,n+1) + a0*a(:,n+1); %
fal = al1*f(:,n+1)-al*f(:,n); % ext. force n+1+alph
%** equilibrium iterations loop
for it = 1:nitmx
ual = al1*u(:,n+1)-al*u(:,n); % displ. at n+1+alpha
val = al1*v(:,n+1)-al*v(:,n); % velocities at n+1+alpha
[fk,K,C,M,W(n+1)] = feval(problem,ual); % K,C,M matrices
r = fal - M*a(:,n+1)- C*val - fk;
rnorm = norm(r); if (it == 1), rnorm0 = rnorm; end,
disp(['  Iter ',num2str(it),'  rnorm = ',num2str(rnorm)]);
if (rnorm < 1.0e-30), break; end, % already in equilibrium
if (it > 1),
if (rnorm/rnorm0 < etol),
break; % convergence
end,
end,
% no convergence, then solve system of eq.
Meff = M + al1*a0*C + al1*a1*K; % lhs matrix
acorr= Meff\r; % accel. correction
a(:,n+1) = a(:,n+1) + acorr; % corrected accel.
v(:,n+1) = v(:,n+1) + a0*acorr; % corrected veloc.
u(:,n+1) = u(:,n+1) + a1*acorr; % corrected displac.
end,
T(n+1) = 1/2*v(:,n+1)'*M*v(:,n+1);
end,
return,
end
Example 74 Write a script to run the pendulum problem and plot the results. Use
dierent integration parameters, dierent time steps and dierent constants for the
problem.
Solution: The script may be something like
N = 10000; % number of steps
dt = 0.1; % time step
u0(1) = 0.0; u0(2) = 0.0; % initial position
v0(1) = 0.1; v0(2) = 0.0; % initial velocity
al = -0.00; % alpha parameter (0 = Newmark)
be = 0.25*(1-al)^2; % beta parameter (unconditionally stable)
ga = (1-2*al)/2; % gamma parameter (idem)
nitmx = 20; etol = 1e-8; % maximum number of iterations and tolerance
f = zeros(2,N); % loading time function (no external load)
%
%** integrate
%
[t,u,v,a,W,T] = nlHHT(@Nlbar,f,dt,N,u0,v0,be,ga,al,nitmx,etol);
%
%** plot
%
figure;
subplot(3,1,1);
plot(t,u); xlabel('Time'); ylabel('Displacements');
subplot(3,1,2);
plot(t,W); xlabel('Time'); ylabel('Stored energy');
subplot(3,1,3);
plot(t,T); xlabel('Time'); ylabel('Kinetic energy');
%
%** plots the pendulum (nice video!)
%
n = length(t);
L = 1;
figure; set(gca,'xlim',[-2,2],'ylim',[-2,2]); axis square; hold on
for i=1:10:n,
h = plot([0,u(1,i)],[0,L+u(2,i)]);
set(h,'marker','o');
xlabel(['Time: ',num2str(t(i)),'  Step: ',num2str(i)])
m(i) = getframe(gcf);
delete(h);
end
When running the problem, we see convergence histories for the steps like the following (the number of iterations depends on the problem, the tolerances and the step size, among other factors)
...
Step 9996
Iter 1 rnorm = 0.001797
Iter 2 rnorm = 8.0496e-008
Iter 3 rnorm = 3.6057e-012
Step 9997
Iter 1 rnorm = 0.00182
Iter 2 rnorm = 8.3172e-008
Iter 3 rnorm = 3.801e-012
Step 9998
Iter 1 rnorm = 0.0018432
Iter 2 rnorm = 8.5939e-008
Iter 3 rnorm = 4.0069e-012
Step 9999
Iter 1 rnorm = 0.0018667
Iter 2 rnorm = 8.8799e-008
Iter 3 rnorm = 4.2241e-012
...
The plots can be seen in Figures 76 and 77
Figure 76: Displacements, stored energy and kinetic energy of a nonlinear pendulum.
Figure 77: Nonlinear elastic pendulum video (one snapshot).
9 Harmonic analyses
9.1 Discrete Fourier Transform revisited
Harmonic analysis is a type of linear analysis in which the transient response of the structure is not considered: we are only interested in the permanent (steady-state) response of the structure. The problem must be linear in order to perform a harmonic analysis. For nonlinear problems, the procedures explained in Section 8 are the ones to consider.
The permanent response of a structure to an arbitrary load may be obtained as the superposition (recall that we are under linear behavior assumptions) of the responses to a series of trigonometric functions obtained using the Discrete Fourier Transform (DFT)

$$f(t_n=n\Delta t)\equiv f_n=\frac{1}{2}a_0+\sum_{k=1}^{(N-1)/2}\left(a_k\cos\omega_k t_n+b_k\sin\omega_k t_n\right) \qquad (980)$$

$$\phantom{f_n}=\frac{1}{2}a_0+\sum_{k=1}^{(N-1)/2}\left(a_k\cos\frac{2\pi k}{N\Delta t}\,n\Delta t+b_k\sin\frac{2\pi k}{N\Delta t}\,n\Delta t\right) \qquad (981)$$

$$\phantom{f_n}=\frac{1}{2}a_0+\sum_{k=1}^{(N-1)/2}\left(a_k\cos\frac{2\pi kn}{N}+b_k\sin\frac{2\pi kn}{N}\right) \qquad (982)$$

where $n$ is the step number such that $t_n=n\Delta t$, $N$ is the number of steps such that the final time$^{11}$ is $t_{\max}=N\Delta t$, the frequency is $\omega_k=k\omega_1=2\pi k/t_{\max}$, and

$$a_0=\frac{2}{N}\sum_{n=1}^{N}f_n\,;\qquad a_k=\frac{2}{N}\sum_{n=1}^{N}f_n\cos\frac{2\pi kn}{N}\,;\qquad b_k=\frac{2}{N}\sum_{n=1}^{N}f_n\sin\frac{2\pi kn}{N} \qquad (983)$$
We note here that $N$ is not the number of steps of the analysis (in harmonic analysis there will be no "steps"), but the number of points of the loading function. Since $f_n$ is known, the coefficients $a_k$ and $b_k$ can be readily computed. Thus we can read the loading as a static term $\frac{1}{2}a_0$ plus several trigonometric functions

$$a_k\cos\omega_k t_n+b_k\sin\omega_k t_n \qquad (984)$$

where the coefficients $a_k$ and $b_k$ are obtained as mentioned using Eqs. (983). Usually the Fast Fourier Transform (FFT) is employed. An alternative is to write the series in terms of amplitude and phase, using

$$a_k\cos\omega_k t_n+b_k\sin\omega_k t_n=c_k\cos\left(\omega_k t_n-\phi_k\right) \qquad (985)$$

where

$$c_k=\sqrt{a_k^{2}+b_k^{2}} \qquad\text{and}\qquad \phi_k=\arctan\frac{b_k}{a_k} \qquad (986)$$
$^{11}$Usually, to avoid aliasing, the frequencies considered only go up to half of the sampling rate, i.e. $\omega_{\max}=\pi/\Delta t$. Hence, counting the $k=0$ term apart, the upper limit of the sums is usually $(N-1)/2$.
- 271-
9 Harmonic analyses
i.e.

$$f(t_n=n\Delta t)\equiv f_n=c_0+\sum_{k=1}^{(N-1)/2}c_k\cos\left(\omega_k t_n-\phi_k\right) \qquad (987)$$

Alternatively

$$a_k\cos\omega_k t_n+b_k\sin\omega_k t_n=c_k\sin\left(\omega_k t_n+\phi_k\right) \qquad (988)$$

where

$$c_k=\sqrt{a_k^{2}+b_k^{2}} \qquad\text{and}\qquad \phi_k=\arctan\frac{a_k}{b_k} \qquad (989)$$
Yet another alternative to trigonometric functions is to define a complex loading function. Then, with $j=\sqrt{-1}$ being the imaginary unit, we use Euler's formulae

$$e^{j\omega_k t_n}=\cos\omega_k t_n+j\sin\omega_k t_n\,;\qquad e^{-j\omega_k t_n}=\cos\omega_k t_n-j\sin\omega_k t_n \qquad (990)$$

so

$$\cos\omega_k t_n=\frac{e^{j\omega_k t_n}+e^{-j\omega_k t_n}}{2} \qquad\text{and}\qquad \sin\omega_k t_n=-j\,\frac{e^{j\omega_k t_n}-e^{-j\omega_k t_n}}{2} \qquad (991)$$
Then, we can write

$$\overbrace{(a_k-jb_k)}^{A_k}\ \overbrace{\left(\cos\omega_k t_n+j\sin\omega_k t_n\right)}^{e^{j\omega_k t_n}}=a_k\cos\omega_k t_n+b_k\sin\omega_k t_n+j\left(a_k\sin\omega_k t_n-b_k\cos\omega_k t_n\right)=A_k\,e^{j\omega_k t_n} \qquad (992)$$

so the $A_k$ are complex coefficients which account for amplitude (magnitude) and phase

$$f_n=\frac{1}{2}a_0+\sum_{k=1}^{(N-1)/2}\operatorname{Re}\left(A_k\,e^{j\omega_k t_n}\right) \qquad (993)$$
In another way, defining the complex load function as follows, we obtain the typical DFT definition

$$f_n=\sum_{k=0}^{N-1}F_k\,e^{j\omega_k t_n} \qquad\text{with}\qquad F_k=\frac{1}{N}\sum_{n=0}^{N-1}f_n\,e^{-j\omega_k t_n} \qquad (994)$$

Again, since $f_n$ is known, the $F_k$ are readily obtained. We note that we can interpret $F_k$ as a discretized function of the frequencies $\omega_k=2\pi k/(N\Delta t)$; in general, we may think of it as a continuous function $F(\omega)$, which is the Fourier transform of $f(t)$.
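As a small illustration (a snippet added here for convenience, not part of the original text), in Matlab the coefficients of Equation (994) are obtained directly from the built-in fft, which uses the same exponent convention, and the real coefficients of Eqs. (983) follow from them:

N  = 256; dt = 0.01;            % assumed sampling: N points with step dt
t  = (0:N-1)*dt;                % t_n = n*dt
f  = 3 + 2*sin(2*pi*5*t);       % any sampled loading f_n
Fk = fft(f)/N;                  % F_k of Eq. (994)
ak =  2*real(Fk);               % a_k of Eq. (983), for k = 1,2,...
bk = -2*imag(Fk);               % b_k of Eq. (983)
wk = 2*pi*(0:N-1)/(N*dt);       % circular frequencies w_k
fr = real(ifft(Fk)*N);          % reconstruction f_n = sum F_k e^{+j w_k t_n}
max(abs(fr-f))                  % should be at round-off level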
9.2 Harmonic analysis using the full space.
One possible harmonic analysis is performed using the original full matrices. The loading vector is given by

$$f(t)=\tilde f\,f(t) \quad\longmapsto\quad F(\omega)=\tilde f\,F(\omega)\,e^{j\omega t} \qquad (995)$$

where $\tilde f$ is the load vector and $f(t)$ is the modulating time function. Similarly, $F(\omega)$ is the modulating frequency function. In practice, it is given in the discrete form

$$\tilde f\,F_k(\omega_k)\,e^{j\omega_k t} \qquad (996)$$
The linear dynamics equation is now

$$M\ddot u+C\dot u+Ku=\tilde f\,F_k(\omega_k)\,e^{j\omega_k t} \qquad (997)$$
We have seen in the first part of these notes (see the Section on the response to harmonic excitation) that the transient response depends on the natural frequencies of the structure, but the permanent response, which is the part of interest in this type of analysis, has the frequency of the excitation. We have also seen that the permanent (or particular) solution of this type of equation is of the form

$$u=U\,e^{j\omega t} \qquad (998)$$

where $U$ is the vector of complex displacement amplitudes. Hence, for each frequency $\omega_k$

$$\left(-\omega_k^{2}M+j\omega_k C+K\right)U_k\,e^{j\omega_k t}=\tilde f\,F_k(\omega_k)\,e^{j\omega_k t} \qquad (999)$$
i.e.

$$S_k U_k=\tilde f\,F_k \quad\Longrightarrow\quad U_k=F_k\,H_k\,\tilde f \qquad (1000)$$

where

$$S_k=H_k^{-1}=-\omega_k^{2}M+j\omega_k C+K \qquad (1001)$$
$S_k=H_k^{-1}$ is usually named the dynamic stiffness or dynamic impedance matrix, and $H_k$ is named the frequency response matrix or dynamic flexibility matrix. The values of $U$ are the frequency response functions if $F_k=1$. The dynamic stiffness must be factorized for each frequency, so in principle, if the number of frequencies to evaluate is large, the method may not be economical. However, if the frequency content of the load is small (i.e. most $F_k$ may be neglected), then it is a strong candidate.
The method is especially useful when only the response at a few locations of the structure is of interest, for loads at some of those locations. Static condensation (Gauss elimination) is then performed on the remaining degrees of freedom. Let us denote by a star those DOFs to keep and by a cross the remaining ones. Then

$$\begin{bmatrix}S^{**}_k & S^{*\times}_k\\ S^{\times *}_k & S^{\times\times}_k\end{bmatrix}\begin{Bmatrix}U^{*}_k\\ U^{\times}_k\end{Bmatrix}=\begin{Bmatrix}F^{*}_k\\ 0\end{Bmatrix} \qquad (1002)$$

Using the second set of equations

$$S^{\times *}_k U^{*}_k+S^{\times\times}_k U^{\times}_k=0 \quad\Longrightarrow\quad U^{\times}_k=-\left(S^{\times\times}_k\right)^{-1}S^{\times *}_k U^{*}_k \qquad (1003)$$

and the first set yields, upon substitution of this result,

$$\underbrace{\left[S^{**}_k-S^{*\times}_k\left(S^{\times\times}_k\right)^{-1}S^{\times *}_k\right]}_{\bar S^{*}_k=\left(\bar H^{*}_k\right)^{-1}}U^{*}_k=F^{*}_k \qquad (1004)$$
i.e.

$$U^{*}_k=\bar H^{*}_k\,F^{*}_k \qquad (1005)$$

gives the complex amplitude (modulus and phase) of the permanent displacements at the desired locations for a loading at the frequency $\omega_k$. Usually $F^{*}_k$ is a matrix of vectors, each vector being zero except for a one at a possible location of a load. Then $U^{*}_k$ is also a matrix that contains the relevant columns of $\bar H^{*}_k$. Finally, superposition is applied to obtain the response for any load vector at a given frequency. In fact, these quantities allow us to define the transfer function. If $U^{*}_{k,il}$ is the response at DOF $i$ for a load at DOF $l$ and $U^{*}_{k,jl}$ is the response at DOF $j$ for the same load, then

$$U^{*}_{k,il}=\bar H^{*}_{k,il}\,F^{*}_{l} \qquad\text{and}\qquad U^{*}_{k,jl}=\bar H^{*}_{k,jl}\,F^{*}_{l} \qquad (1006)$$

so, eliminating $F^{*}_{l}$ from both equations, we have

$$U^{*}_{k,il}=\frac{\bar H^{*}_{k,il}}{\bar H^{*}_{k,jl}}\,U^{*}_{k,jl} \qquad (1007)$$

Once $U^{*}_{k,jl}$ is known, $U^{*}_{k,il}$ can be determined from

$$\tilde H^{\,l}_{ij}(\omega_k)=\frac{\bar H^{*}_{k,il}}{\bar H^{*}_{k,jl}} \qquad (1008)$$

which is known as the transfer function between DOFs $i$ and $j$ for a harmonic load of circular frequency $\omega_k$ acting at DOF $l$. These transfer functions are of much interest to understand what happens at different locations of the structure, i.e. which parts magnify the response and at which frequencies. In addition, if the response in one of the DOFs is known or measured, these functions allow for the computation of the response at other locations. The transfer functions are valid for displacements, velocities and accelerations. FRFs are also sometimes named transfer functions.
We finally note that, since the analysis is linear, the superposition principle may be applied. Assume a load

$$f(t)=\sum_{l=1}^{L}f^{\,l}(t) \quad\longmapsto\quad F(\omega)=\sum_{l=1}^{L}F^{\,l}(\omega) \qquad (1009)$$

Then we obtain

$$U(\omega)=\sum_{l=1}^{L}U^{\,l}(\omega) \qquad\text{and}\qquad u(t)=\sum_{l=1}^{L}u^{\,l}(t) \qquad (1010)$$

where the superscript $l$ denotes the load or the response of case $l$. The power of the harmonic analysis is that, once the dynamic stiffnesses $\bar S^{*}_k$ have been obtained and factorized for the DOFs of interest (usually a small number), obtaining the response to a variety of load cases is a very economical task.
Finally for this subsection we note that the FRFs and transfer functions may be defined in terms of accelerations instead of displacements. We leave the task of deriving such functions, following similar steps, to the reader. The FRFs have different specific names depending on the quantities they relate. Usual names are:
- Dynamic stiffness functions, when they relate forces to displacements (as the usual stiffness does, but as a function of the frequency)
- Compliance response functions, when they relate displacements to forces (i.e. the inverse of the dynamic stiffness functions)
- Mobility functions, when they relate velocities to forces
- Impedance functions, when they relate forces to velocities (i.e. the inverse of the mobility functions)
- Receptance functions or inertance functions, when they relate accelerations to forces
- Dynamic mass functions, when they relate forces to accelerations (as masses do, but as a function of frequency), i.e. the inverse of the inertance functions
These functions are represented in a number of different ways:
- One plot for the real part against frequency and another plot for the imaginary part. This is frequently named a component plot or Co-quad representation.
- One plot for the amplitude (possibly in logarithmic scale) against frequency (possibly in octaves, especially for sound or human perception) and another plot for the phase. This is usually called a Bode representation.
- A single plot in which the x-axis is the real component and the y-axis is the imaginary part. Dots and legends on the curve mark the different frequencies. This is named a Nyquist plot.
- A less usual representation is the Nichols representation, in which the x-axis represents the phase and the y-axis represents the amplitude (frequently in logarithmic scale).
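As an illustration of these representations (a generic single-DOF sketch with assumed numerical values, not taken from the text), the compliance FRF H(w) = 1/(k - w^2 m + j w c) can be displayed in Bode and Nyquist form as follows:

m = 1; k = 100; c = 0.4;                  % assumed SDOF properties
w = linspace(0.1,30,1000);                % frequency sweep [rad/s]
H = 1./(k - w.^2*m + 1i*w*c);             % compliance (displacement/force) FRF
subplot(3,1,1); plot(w,abs(H));           % Bode: amplitude
xlabel('\omega [rad/s]'); ylabel('|H|');
subplot(3,1,2); plot(w,angle(H)*180/pi);  % Bode: phase
xlabel('\omega [rad/s]'); ylabel('phase [deg]');
subplot(3,1,3); plot(real(H),imag(H));    % Nyquist: real vs imaginary part
xlabel('Re(H)'); ylabel('Im(H)');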
Example 75 For the three-floor building of Examples 11, page 98, and 15, page 103, compute the FRFs of the second and third floors for a load in the first floor. Compute the transfer functions between both floors and the first floor. Assume a 2% damping at every frequency.
Solution: The FRF matrix is

$$H(\omega)=\left(K+j\omega C-\omega^{2}M\right)^{-1}$$

The damping matrix may be chosen at each frequency so as to obtain the desired damping, as if the structure had a mode at that frequency,

$$\alpha\omega^{2}+\beta=2\zeta\omega$$

If we take $\alpha=0$, we have mass-proportional damping with $\beta=2\zeta\omega$, so

$$H(\omega)=\left(K+j\,2\zeta\omega^{2}M-\omega^{2}M\right)^{-1}=\left(K-\omega^{2}\left(1-2\zeta j\right)M\right)^{-1} \qquad (1011)$$
Figure 78: Frequency response functions and transfer functions for the three-storey structure under harmonic loads in the first floor. Upper: amplitudes of the FRFs ($U_1$, $U_2$, $U_3$). Middle: phases of the FRFs. Lower: amplitudes of the transfer functions ($U_2/U_1$ and $U_3/U_1$).
and the FRFs are computed for a unit multiplier $F(\omega)=1$ as

$$U(\omega)=H(\omega)\,\tilde f$$

For this case $\tilde f=\begin{bmatrix}1&0&0\end{bmatrix}^T$ (load in the first floor). In the following lines we reproduce a Matlab function to compute the FRFs and the transfer functions. Figure 78 contains the FRFs and the transfer functions. It is seen that the peaks of the FRFs correspond to the natural frequencies of the structure. To understand the transfer functions, we note that if we restrain the first floor we have

$$K=\frac{12EI}{L^{3}}\begin{bmatrix}4&-2\\-2&2\end{bmatrix} \qquad\text{and}\qquad M=\begin{bmatrix}m&0\\0&m\end{bmatrix}$$

which for $k=12EI/(mL^{3})=100$ has the natural frequencies $\omega_1=8.74$ rad/s and $\omega_2=22.88$ rad/s (i.e. the peaks of the transfer functions). Note that these frequencies separate those of the unrestrained structure, as we should expect (see Section 6.1.3, page 139).
function [U] = harmonic(K,M,seta,w,F,f)
%** function [U] = harmonic(K,M,seta,w,F,f)
% Program to perform a harmonic analysis
% giving FRF
%
% K = stiffness matrix
% M = mass matrix
% seta= damping
% w = frequencies
% F = Fourier coefficients at frequencies w
% f = load vector
j = sqrt(-1);
for i=1:length(w)
S = K - w(i)^2 * (1-2*seta*j) * M;
U(:,i) = S\f * F(i);
end,
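A possible driver for this function, using the data of this example (k = 12EI/(m L^3) = 100 and unit storey masses are the problem data quoted above; the script itself is an added sketch, not part of the original text), is:

k = 100;                                    % 12EI/(m L^3), problem data
K = k*[4 -2 0; -2 4 -2; 0 -2 2];            % stiffness matrix (divided by m)
M = eye(3);                                 % mass matrix (divided by m)
w = linspace(0.1,30,600);                   % frequency sweep [rad/s]
F = ones(size(w));                          % unit Fourier multipliers F(w)=1
f = [1; 0; 0];                              % load in the first floor
U = harmonic(K,M,0.02,w,F,f);               % FRFs, one column per frequency
subplot(2,1,1); plot(w,abs(U));                                     % FRF amplitudes
subplot(2,1,2); plot(w,abs(U(2,:)./U(1,:)),w,abs(U(3,:)./U(1,:)));  % transfer functions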
9.3 Harmonic analysis using mode superposition
For harmonic analysis this is the most common method, since it is the one from which most advantage is obtained. For the presentation of this type of analysis we consider a load which is the superposition of $L$ different load cases, so the equilibrium equation is written as

$$M\ddot u+C\dot u+Ku=\sum_{l=1}^{L}f^{\,l}(t) \qquad (1012)$$

where $f^{\,l}(t)=\tilde f^{\,l}f^{\,l}(t)$, with $\tilde f^{\,l}$ being the constant load vector for case $l$ and $f^{\,l}(t)$ being the modulating adimensional time function for such case. Since modal superposition is considered, the system of equations becomes uncoupled

$$\ddot\eta_i+2\zeta_i\omega_i\dot\eta_i+\omega_i^{2}\eta_i=\sum_{l=1}^{L}p^{\,l}_i f^{\,l}(t) \qquad (1013)$$
where, if $\phi_i$ is mode $i$, then $p^{\,l}_i=\phi_i^T\tilde f^{\,l}$ is the participation factor of mode $i$ on load $l$. The Discrete Fourier Transform (DFT) of $f^{\,l}(t)$ is $F^{\,l}(\omega)$, having values $F^{\,l}_k$. Each $f^{\,l}$, considering now the form given in Equation (988), can be written as

$$f^{\,l}(t)=c^{\,l}_0+\sum_{k=1}^{(N-1)/2}c^{\,l}_k\sin\left(\omega_k t_n+\phi^{\,l}_k\right) \qquad (1014)$$
So, in order to apply superposition, we are interested in the response to the equation

$$\ddot\eta_i+2\zeta_i\omega_i\dot\eta_i+\omega_i^{2}\eta_i=p^{\,l}_i c^{\,l}_k\sin\left(\omega_k t+\phi^{\,l}_k\right) \qquad (1015)$$

However, we have already seen in the Section on the response to harmonic excitation that the permanent response is (we leave to the reader to verify this following the steps given in that section)

$$\eta^{\,l}_{ik}(t)=A^{\,l}_{ik}\sin\left(\omega_k t+\phi^{\,l}_k\right)-B^{\,l}_{ik}\cos\left(\omega_k t+\phi^{\,l}_k\right) \qquad (1016)$$
where

$$A^{\,l}_{ik}=\frac{p^{\,l}_i c^{\,l}_k}{\omega_i^{2}}\;\frac{1-\left(\omega_k/\omega_i\right)^{2}}{\left[1-\left(\omega_k/\omega_i\right)^{2}\right]^{2}+\left(2\zeta_i\,\omega_k/\omega_i\right)^{2}} \qquad (1017)$$

and

$$B^{\,l}_{ik}=\frac{p^{\,l}_i c^{\,l}_k}{\omega_i^{2}}\;\frac{2\zeta_i\,\omega_k/\omega_i}{\left[1-\left(\omega_k/\omega_i\right)^{2}\right]^{2}+\left(2\zeta_i\,\omega_k/\omega_i\right)^{2}} \qquad (1018)$$
Then, if $M$ modes are considered (usually those up to about 4 times the cutoff frequency of the loading) and $\tilde N$ frequencies are considered, for example $(N-1)/2$, the permanent solution for one load component $l$ is

$$u^{\,l}(t)=\sum_{i=1}^{M}\phi_i\left(\sum_{k=1}^{\tilde N}\eta^{\,l}_{ik}(t)\right) \qquad (1019)$$

Hence, for all load cases

$$u(t)=\sum_{i=1}^{M}\phi_i\left(\sum_{k=1}^{\tilde N}\sum_{l=1}^{L}\eta^{\,l}_{ik}(t)\right) \qquad (1020)$$
If the loads have only one frequency $\omega_1$ (so the remaining $c^{\,l}_k=0$ for $k\neq 1$), then

$$u(t)=\sum_{i=1}^{M}\phi_i\left(\sum_{l=1}^{L}\eta^{\,l}_{i1}(t)\right)=\sum_{i=1}^{M}\phi_i\left(\sum_{l=1}^{L}C^{\,l}_{i1}\right)\sin\omega_1 t-\sum_{i=1}^{M}\phi_i\left(\sum_{l=1}^{L}D^{\,l}_{i1}\right)\cos\omega_1 t \qquad (1021)$$

where

$$C^{\,l}_{i1}=A^{\,l}_{i1}\cos\phi^{\,l}_1+B^{\,l}_{i1}\sin\phi^{\,l}_1\,;\qquad D^{\,l}_{i1}=B^{\,l}_{i1}\cos\phi^{\,l}_1-A^{\,l}_{i1}\sin\phi^{\,l}_1 \qquad (1022)$$
The displacement of one degree of freedom, say $q$, is then written as

$$u_q(t)=U_q\sin\left(\omega_1 t-\theta_q\right) \qquad (1023)$$

where the amplitude is

$$U_q=\sqrt{\left(\sum_{i=1}^{M}\phi_{iq}\sum_{l=1}^{L}C^{\,l}_{i1}\right)^{2}+\left(\sum_{i=1}^{M}\phi_{iq}\sum_{l=1}^{L}D^{\,l}_{i1}\right)^{2}} \qquad (1024)$$

and the phase is

$$\theta_q=\arctan\frac{\displaystyle\sum_{i=1}^{M}\phi_{iq}\sum_{l=1}^{L}D^{\,l}_{i1}}{\displaystyle\sum_{i=1}^{M}\phi_{iq}\sum_{l=1}^{L}C^{\,l}_{i1}} \qquad (1025)$$
where $\phi_{iq}$ is the component $q$ of mode $i$. The velocity and acceleration are then

$$\dot u_q(t)=\underbrace{\omega_1 U_q}_{\dot U_q}\,\sin\left(\omega_1 t-\left(\theta_q-\tfrac{\pi}{2}\right)\right) \qquad (1026)$$

$$\ddot u_q(t)=\underbrace{\omega_1^{2}U_q}_{\ddot U_q}\,\sin\left(\omega_1 t-\left(\theta_q-\pi\right)\right) \qquad (1027)$$

so the velocity leads the displacement by $\pi/2$ and the acceleration leads it by $\pi$ radians.
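The following fragment (an added sketch with assumed example data, not part of the original text) evaluates Equations (1017)-(1025) for a single load case and a single loading frequency:

Phi  = [0.8 0.6; 0.6 -0.8];     % assumed mass-normalized modes (columns)
wi   = [5; 15];                 % natural frequencies [rad/s]
zi   = [0.02; 0.02];            % modal damping ratios
ftil = [1; 0];                  % constant load vector of the case
c1   = 1; ph1 = 0;              % Fourier amplitude and phase at wbar
wbar = 4;                       % loading frequency [rad/s]
p   = Phi.'*ftil;               % participation factors p_i
r   = wbar./wi;                 % frequency ratios
den = (1-r.^2).^2 + (2*zi.*r).^2;
A   = (p*c1)./(wi.^2).*(1-r.^2)./den;     % Eq. (1017)
B   = (p*c1)./(wi.^2).*(2*zi.*r)./den;    % Eq. (1018)
C   = A*cos(ph1) + B*sin(ph1);            % Eq. (1022)
D   = B*cos(ph1) - A*sin(ph1);
Uq  = sqrt((Phi*C).^2 + (Phi*D).^2);      % amplitudes per DOF, Eq. (1024)
th  = atan2(Phi*D, Phi*C);                % phases per DOF, Eq. (1025)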
Exercise 76 Repeat Example 75 using modal decomposition.
Solution: We leave this task to the reader. The result is the same as that obtained using the full (coupled) matrices of the model.
10 Spectral and seismic analyses
10.1 Accelerograms and ground excitation
The spectral analysis is typical of earthquake engineering, although it is also used in some other applications, as in blast analyses. Besides, any of the previous types of analysis can be (and are) performed in earthquake engineering. We present the spectral analysis focused on earthquake engineering because it is the most extended usage of the method.
There are many places in the world in which earthquakes are frequent and, hence, building structures have to be prepared to sustain a given magnitude of earthquake typical of the place. At different places, earthquakes have different characteristics in terms of expected amplitudes, time duration and frequency content. Those characteristics depend on the location, orientation and number of faults, the soil properties that the waves find along their travel path, etc. Most seismic zones have historical earthquake registers in which the ground acceleration is measured. Figure 79 shows a typical accelerogram. Where no historical accelerograms are available, they can be generated artificially according to the characteristics of the site.
We leave to the reader to obtain, for a given accelerogram (which the reader may find on the internet), the displacement and velocity histories using a proper integration method, as for example Newmark-$\beta$.
Example 77 Obtain the Discrete Fourier Transform of the El Centro earthquake.
Solution: The accelerogram of the El Centro earthquake may be obtained freely from the internet. We can perform the DFT using the FFT algorithm of Matlab (or any other program). The solution is shown in Figure 80, where the frequency content of the accelerogram can be seen.
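A minimal sketch of the computation follows (the file name and the sampling step are assumptions; the record itself must be downloaded separately):

ag = load('elcentro_ns.dat');        % assumed: one column with the accelerations
dt = 0.02;                           % assumed sampling step of the record [s]
N  = length(ag);
Fk = fft(ag)/N;                      % DFT coefficients, Eq. (994)
wk = 2*pi*(0:N-1)/(N*dt);            % circular frequencies [rad/s]
kk = 1:floor(N/2);                   % keep the frequencies below the Nyquist limit
plot(wk(kk),abs(Fk(kk)));
xlabel('Circular frequency [rad/s]'); ylabel('|F_k| (DFT component)');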
10.2 The equation of motion for ground excitation. Accelerometers
and vibrometers.
The equation of motion for a ground motion in one given direction is written as

$$M\ddot u+C\dot u+Ku=\underbrace{-MJ_g\,a_g(t)}_{f(t)} \qquad (1028)$$

where $a_g(t)$ is the ground acceleration given by the accelerogram and $J_g$ is the influence vector, which contains a one in those degrees of freedom in the direction of the earthquake and a zero in the rest of the degrees of freedom. The displacement vector $u$ is the displacement of the structure relative to that of the ground (which is the quantity of interest, because if the whole structure moves like a rigid solid no deformation is induced). For example, in 2D structures without rotation degrees of freedom, $J_g=[1,\,0,\,1,\,0,\,...]^T$. If we consider the behavior of the structure to be elastic, then we can apply modal decomposition to obtain (please, verify!)

$$\ddot\eta_i+2\zeta_i\omega_i\dot\eta_i+\omega_i^{2}\eta_i=-p_i\,a_g(t) \qquad (1029)$$
Figure 79: N-S component of the acceleration of the well-known El Centro (Imperial Valley, San Andreas fault system) earthquake of May 18, 1940, of magnitude 7.1. Nine people died and about 80% of the buildings suffered damage. Losses amounted to 6 million U.S. dollars. Many accelerograms are freely distributed over the internet.
Figure 80: Discrete Fourier Transform (DFT) of El Centro 1940 Earthquake, N-S
component.
where $\eta_i$ are the modal coordinates of the modes $\phi_i$ and

$$p_i=\phi_i^T M J_g \qquad (1030)$$

is the modal participation factor of mode $i$.
Of course this equation may be particularized for a system with only one degree of freedom, in which case $p_i=1$:

$$\ddot u+2\zeta\omega\dot u+\omega^{2}u=-a_g(t) \qquad (1031)$$

If the ground motion is of the type

$$u_g(t)=U_g\sin\omega_g t \quad\Longrightarrow\quad a_g(t)=-\underbrace{\omega_g^{2}U_g}_{\ddot U_g}\sin\omega_g t \qquad (1032)$$

i.e.

$$\ddot u+2\zeta\omega\dot u+\omega^{2}u=\omega_g^{2}U_g\sin\omega_g t \qquad (1033)$$

whose permanent solution we already know well from the first part of these notes,

$$u(t)=U\sin\left(\omega_g t-\phi\right) \qquad (1034)$$

with

$$U=\frac{U_g\left(\omega_g/\omega\right)^{2}}{\sqrt{\left[1-\left(\omega_g/\omega\right)^{2}\right]^{2}+\left[2\zeta\left(\omega_g/\omega\right)\right]^{2}}}\,;\qquad \phi=\arctan\frac{2\zeta\left(\omega_g/\omega\right)}{1-\left(\omega_g/\omega\right)^{2}} \qquad (1035)$$
The first equation may also be written in terms of the ground acceleration amplitude, using $\ddot U_g=\omega_g^{2}U_g$:

$$U=\frac{\ddot U_g/\omega^{2}}{\sqrt{\left[1-\left(\omega_g/\omega\right)^{2}\right]^{2}+\left[2\zeta\left(\omega_g/\omega\right)\right]^{2}}} \qquad (1036)$$
A vibrometer is an instrument which contains a single-degree-of-freedom structure (SDOF) to measure ground motions for which $\omega_g/\omega\gg 1$; using the previous formulae, $U\simeq U_g$, i.e. the relative displacement of the SDOF is that of the ground. The vibrometer measures displacements as the primary variable (velocities and accelerations may then be obtained by differentiation). Because of the large masses they employ, vibrometers are bulky and seldom used.
An accelerometer is an instrument which contains a SDOF such that $\omega_g/\omega\to 0$; using the previous formula, $U\simeq\ddot U_g/\omega^{2}$, i.e. the displacement is proportional to the acceleration of the ground (hence the name), scaled by $1/\omega^{2}$. The displacements and velocities of the ground are then obtained by integration. Accelerometers need a very low mass, so they are the most used vibration measurement instrument. However, nowadays laser and digital measurements are beginning to replace them.
10.3 Elastic Response Spectra: SD, SV, SA, PSV, PSA.
In 1941 Housner tried to understand the behavior of structures under earthquake loading. By that time there were no computers, so engineers needed a simple way to understand the behavior of structures and to be able to design them to sustain some earthquake loading. The method Housner developed not only served that objective but also gave ground to a method of analysis that is nowadays standard in earthquake engineering, in such a way that the characteristics of earthquakes for a given site are given in terms of what he defined as the Elastic Response Spectra.
The idea of Housner is to reduce all systems to SDOF systems (in the idea that one could characterize the structure by its first, fundamental mode). Then only two parameters characterize each SDOF $i$: the frequency $\omega_i=\sqrt{k_i/m_i}$ and the damping ratio $\zeta_i=c_i/(2m_i\omega_i)$. He then assumed sets of constant $\zeta$ for all the SDOF considered. We know that the response (transitory and permanent) of such a SDOF, characterized by Equation (1031), to a ground acceleration is given by Duhamel's integral (please, verify!)

$$u_i=-\frac{1}{\omega_i\sqrt{1-\zeta^{2}}}\int_0^t e^{-\zeta\omega_i(t-\tau)}\sin\left(\omega_i\sqrt{1-\zeta^{2}}\,(t-\tau)\right)a_g(\tau)\,d\tau \qquad (1037)$$

Housner took the maxima of these responses to build the Displacement Response Spectrum SD

$$SD(\omega,\zeta)\equiv S_d(\omega,\zeta)=\max_{t\ge 0}\left|u_i(t)\right|=\max_{t\ge 0}\left|u(\omega_i,\zeta,t)\right| \qquad (1038)$$
In a similar way, he defined the Velocity Response Spectrum SV

$$\dot u_i=-\int_0^t e^{-\zeta\omega_i(t-\tau)}\cos\left(\omega_i\sqrt{1-\zeta^{2}}\,(t-\tau)\right)a_g(\tau)\,d\tau+\frac{\zeta}{\sqrt{1-\zeta^{2}}}\int_0^t e^{-\zeta\omega_i(t-\tau)}\sin\left(\omega_i\sqrt{1-\zeta^{2}}\,(t-\tau)\right)a_g(\tau)\,d\tau \qquad (1039)$$

$$SV(\omega,\zeta)\equiv S_v(\omega,\zeta)=\max_{t\ge 0}\left|\dot u_i(t)\right|=\max_{t\ge 0}\left|\dot u(\omega_i,\zeta,t)\right| \qquad (1040)$$

and the Absolute Acceleration Response Spectrum SA

$$\ddot u_i+a_g=\frac{\omega_i\left(1-2\zeta^{2}\right)}{\sqrt{1-\zeta^{2}}}\int_0^t e^{-\zeta\omega_i(t-\tau)}\sin\left(\omega_i\sqrt{1-\zeta^{2}}\,(t-\tau)\right)a_g(\tau)\,d\tau+2\zeta\omega_i\int_0^t e^{-\zeta\omega_i(t-\tau)}\cos\left(\omega_i\sqrt{1-\zeta^{2}}\,(t-\tau)\right)a_g(\tau)\,d\tau \qquad (1041)$$

$$SA(\omega,\zeta)\equiv S_a(\omega,\zeta)=\max_{t\ge 0}\left|\ddot u_i(t)+a_g(t)\right|=\max_{t\ge 0}\left|\ddot u(\omega_i,\zeta,t)+a_g(t)\right| \qquad (1042)$$

To avoid evaluating these integrals, the Pseudo Relative Velocity Spectrum and the Pseudo Absolute Acceleration Spectrum are defined as

$$PSV=\omega_i\,SD \qquad (1043)$$

$$PSA=\omega_i^{2}\,SD \qquad (1044)$$

We leave to the reader to verify that, for small damping ratios, $PSV\simeq SV$ and $PSA\simeq SA$. The $PSV$ measures the maximum elastic energy in the system

$$W_{\max}=\frac{1}{2}k\,u_{\max}^{2}=\frac{1}{2}k\left(\frac{PSV}{\omega}\right)^{2}=\frac{1}{2}m\left(PSV\right)^{2} \qquad (1045)$$

Typical building codes consider earthquake loads defined in terms of a normalized $S_a(\omega)$. We remark that, obviously, the response spectra for a given $\omega$ may be obtained by numerical integration of the response of the single-DOF problem $\ddot u+2\zeta\omega\dot u+\omega^{2}u=-a_g$, taking the maximum of the required response.
Example 78 Use your favorite computer language to program a code to obtain the 2% damping response spectra for the El Centro earthquake and plot the results.
Solution: The source code follows. In this code we use the trapezoidal rule; the Newmark function is given in Example 58. In Figure 81 all the response spectra are shown. Several observations can be made. Regarding $S_a$, for low periods (stiff structures) the acceleration is that of the ground. For long periods (flexible structures), the acceleration is even less than that of the ground, the structure acting as an isolator; however, as seen in $S_d$, the displacements may then be large. The $S_a$ spectrum may be normalized by the maximum ground acceleration, so the spectrum can be used for other earthquakes with the same frequency content but with a different maximum acceleration. It has been observed that the frequency content depends largely on the specific site and is roughly valid for earthquakes of different duration and maximum acceleration. In the second plot of Figure 81 a typical design spectrum is shown. These design spectra can be found in most building codes for seismic sites.
function [Sa,Sv,Sd,PSV,PSA] = RSpectra(ag,dt,Ts,seta)
%*** function [Sa,Sv,Sd,PSV,PSA] = RSpectra(ag,dt,Ts,seta)
% program to obtain the response spectra of an accelerogram
% ag = ground acceleration
% dt = time increment in ground acceleration plot
% Ts = periods for which to compute the response spectra
% seta = damping ratio
% Sa = Acceleration absolute response spectra
% Sv = Velocity response spectra
% Sd = Displacement response spectra
% PSV = Pseudo Spectra of Velocities
% PSA = Pseudo Spectra of Accelerations
f = -ag; % load is minus mass time acceleration
N = length(ag); % number of points equals that of accel.
u0 = [0]; v0 = [0]; % initial conditions
be = 1/4; ga = 1/2; % Newmark-beta parameters = trapezoidal rule
agm= max(abs(ag)); % maximum ground acceleration
for i=1:length(Ts), % loop on all frequencies/periods
if (dt > Ts(i)/10),
disp(['** Warning: inaccurate response spectra for T=',...
num2str(Ts(i))]);
end,
w = 2*pi/Ts(i); % circular frequency
K = [w^2]; % stiffness of the SDOF problem
M = [1]; % mass of the SDOF problem
C = [2*w*seta]; % damping of the SDOF problem & response
[t,u,v,a] = Newmark(K,M,C,f,dt,N,u0,v0,be,ga); % change as desired
Sv(i) = max(abs(v)); % Velocity response spectra
Sd(i) = max(abs(u)); % Displacement response spectra
aa1 = a + ag; % Adds ground acceleration
Sa(i) = max(abs(aa1)); % Absolute Acceleration Response Spectra
PSV(i)= Sd(i)*w; % Pseudo Velocity Spectra
PSA(i)= Sd(i)*w*w; % Pseudo Acceleration Spectra
end,
t=['Resp. Spectra for ',num2str(seta*100),'% damping'];
subplot(3,2,1); plot(Ts,Sa); ylabel('Sa'); title(t);
subplot(3,2,3); plot(Ts,Sv); ylabel('Sv');
subplot(3,2,4); plot(Ts,Sd); ylabel('Sd');
subplot(3,2,2); plot(Ts,Sa./agm);
ylabel('Amplification Sa [Sa/max(a_g)]'); title(t);
subplot(3,2,5); plot(Ts,PSV); xlabel('Period T'); ylabel('PSV'); title(t);
subplot(3,2,6); plot(Ts,PSA); xlabel('Period T'); ylabel('PSA'); title(t);
return,
Figure 81: Response spectra for the El Centro 1940 earthquake (N-S component), for 2% damping. From upper left to lower right: $S_a(T)$, amplification $S_a(T)/\max|a_g|$ (with a typical design spectrum indicated), $S_v(T)$, $S_d(T)$, $PSV(T)$ and $PSA(T)$.
10.4 Modal superposition methods for spectral analysis. Modal
mass
Recovering Equation (1029), we obtain the response by multiplying (1037) by the participation factor, i.e.

$$\eta_i=p_i\,\underbrace{\left[-\frac{1}{\omega_i\sqrt{1-\zeta^{2}}}\int_0^t e^{-\zeta\omega_i(t-\tau)}\sin\left(\omega_i\sqrt{1-\zeta^{2}}\,(t-\tau)\right)a_g(\tau)\,d\tau\right]}_{\bar\eta_i} \qquad (1046)$$

The complete response for the $M$ modes considered is

$$u=\sum_{i=1}^{M}\phi_i\,p_i\,\bar\eta_i+u_s \qquad (1047)$$

where $u_s$ is the static response of the remaining modes, which we will address later. The maximum response of the $M$ modes, without $u_s$, is

$$\eta_{i\,\max}=p_i\,S_d(\omega_i,\zeta)=p_i\,\frac{PSA(\omega_i,\zeta)}{\omega_i^{2}}\simeq p_i\,\frac{S_a(\omega_i,\zeta)}{\omega_i^{2}} \qquad (1048)$$

and

$$\ddot\eta_{i\,\max}=p_i\,S_a(\omega_i,\zeta)\simeq p_i\,PSA(\omega_i,\zeta) \qquad (1049)$$
Now, the problem is that for each mode the maximum shows up at a different time, so the total maximum response for the combination of modes is not determined. Hence, there are several methods to combine them; we just mention the simplest ones. Let $\rho_i$ be any maximum quantity for one mode ($\eta_i$, $\ddot\eta_i$, a displacement at a given location, a component of the stresses at a point, etc.). Then we can combine the maxima in the following ways:
SRSS method (Square Root of the Sum of Squares)

$$\rho=\sqrt{\sum_{i=1}^{M}\rho_i^{2}} \qquad (1050)$$

ASM method (Absolute Sum Method, too conservative)

$$\rho=\sum_{i=1}^{M}\left|\rho_i\right| \qquad (1051)$$

TPM (Ten Percent Method)

$$\rho=\sqrt{\sum_{i=1}^{M}\rho_i^{2}+2\sum_{i=1}^{M}\sum_{j>i}\left|\rho_i\,\rho_j\right|} \qquad (1052)$$

where, in the TPM, the double sum extends over the pairs of modes whose frequencies are closely spaced (within 10% of each other).
There are of course more sophisticated combination rules which take into account the damping ratio and the proximity of the modes (which may amplify the effect). However, the topic is outside the scope of these notes.
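For instance, given the modal maxima stored by rows (one row per mode, one column per response quantity), the SRSS and ASM rules are evaluated in one line each (an added sketch; the numbers are arbitrary example values):

rho   = [0.8 -1.2  0.3;  2.5  1.9 -0.7;  0.3  4.1  5.2];  % rho(i,q), example values
rSRSS = sqrt(sum(rho.^2,1));        % Eq. (1050): combine over the modes
rASM  = sum(abs(rho),1);            % Eq. (1051): absolute sum (conservative)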
One interesting property of the modal participation factors in seismic engineering is that the sum of their squares totalizes the mass of the structure mobilized in the direction of $J_g$, as is straightforward to see using the modal projection of $J_g$,

$$J_g=\sum_{i=1}^{N}\underbrace{\left(\phi_i^T M J_g\right)}_{p_i}\phi_i=\sum_{i=1}^{N}p_i\,\phi_i \qquad (1053)$$

and the $M$-orthogonality of the mass-normalized modes ($\phi_i^T M\phi_j=\delta_{ij}$),

$$M_T=J_g^T M J_g=\sum_{i=1}^{N}p_i^{2} \qquad (1054)$$

Hence $p_i^{2}$ is known as the modal mass or mobilized modal mass. In large structures, this quantity is employed to assess whether the number of modes employed in the analysis is adequate (for example, whether they mobilize at least 80% of the mass).
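In practice this check is immediate once the modes are available (an added sketch with small assumed data; Phi contains mass-normalized modes as columns):

M    = eye(2); J = [1; 1];              % assumed example mass matrix and influence vector
Phi  = [0.8 0.6; 0.6 -0.8];             % assumed mass-normalized modes (columns)
p    = Phi.'*(M*J);                     % participation factors, Eq. (1030)
Mtot = J.'*M*J;                         % total mass in the direction of J, Eq. (1054)
frac = cumsum(p.^2)/Mtot;               % cumulative mobilized-mass fraction
nmod = find(frac >= 0.80, 1);           % smallest number of modes mobilizing 80%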
10.5 Static correction or mode acceleration method
If, as usual, only a few modes $n<N$ are considered (those with lower frequencies which mobilize enough mass), then some of the response is missing. To compute the missing part of the response in an approximate manner, we again use the modal projection of $u$,

$$u=\sum_{i=1}^{n}\phi_i\eta_i+\sum_{i=n+1}^{N}\phi_i\eta_i=\bar u+u_s \qquad (1055)$$

The quantity $u_s$ is the remaining part to be determined and the quantity $\bar u$ is the one computed with the $n$ considered modes. In this correction we assume that the neglected modes are very stiff (high $\omega_i$), so the dynamic equation for these modes can be approximated by its static counterpart

$$\ddot\eta_i+2\zeta\omega_i\dot\eta_i+\omega_i^{2}\eta_i=-p_i a_g \quad\leadsto\quad \omega_i^{2}\eta_i\simeq-p_i a_g \qquad (1056)$$

If the problem were considered a static problem,

$$K\left(\bar u+u_s\right)=-MJ_g\,a_g \qquad (1057)$$

i.e.

$$u_s=-\left(K^{-1}MJ_g\right)a_g-\bar u \qquad (1058)$$

where here $a_g$ is that of the ground (an infinitely rigid structure), and $\bar u$ is to be understood as the static contribution of the $n$ retained modes. This correction may be employed in different formats in many computational procedures. We leave to the reader the task of expressing the static correction in terms of the modal participation factors.
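One common way of evaluating this correction at a given time instant is the following sketch (the variable names are assumptions; Phi_n, pn and wn refer to the n modes retained in the truncated dynamic solution ubar, and ag is the current ground acceleration):

ustat_all  = -(K\(M*J))*ag;            % full quasi-static response to the ground acceleration
ustat_kept = -Phi_n*((pn./wn.^2)*ag);  % static part already represented by the kept modes
us = ustat_all - ustat_kept;           % quasi-static contribution of the truncated modes
u  = ubar + us;                        % corrected response, Eq. (1055)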
Example 79 Compute the maximum acceleration and displacement response of the three-storey building of Examples 11 (page 98), 15 (page 103) and 20 (page 116) to the El Centro 1940 earthquake (N-S component): (a) with the full original model and Rayleigh damping of 2% for the extreme modes, (b) using mode superposition with the modes accounting for 90% of the mass of the structure and 2% modal damping, and (c) using a response spectrum analysis with 2% damping. Comment on the results.
Solution: (a) To obtain the response using the matrices without modal decomposition we may use the trapezoidal rule, which is Newmark's method with $\beta=1/4$ and $\gamma=1/2$. The result for the three floors is shown in Figure 82.
The equation of motion is

$$M\ddot u+C\dot u+Ku=-MJ\,a_g$$

which for this particular problem, using Rayleigh damping $C=\alpha K+\beta M$ and given the shape of $M=\mathrm{diag}(m,m,m)$, may be written as

$$\bar M\ddot u+\left(\alpha\bar K+\beta\bar M\right)\dot u+\bar K u=-\bar M J\,a_g$$

where $\bar M=I$ is the identity matrix and

$$\bar K=\frac{1}{m}K=k\begin{bmatrix}4&-2&0\\-2&4&-2\\0&-2&2\end{bmatrix}$$

with $k=12EI/(mL^{3})=100$ (problem data). The parameters $\alpha$, $\beta$ of the Rayleigh damping may be obtained using the procedure given in Example 26, page 127. The frequencies are given in Example 20, page 116, which we repeat for the reader's convenience:

$$\omega_1=\sqrt{\lambda_1}=2.5483\sqrt{k}\,,\qquad \omega_2=1.7635\sqrt{k}\,,\qquad \omega_3=0.6294\sqrt{k}$$

These frequencies are sorted from higher to lower. Then, using $\alpha\omega_i^{2}+\beta=2\zeta\omega_i$, the following system of equations is formed

$$\begin{cases}\alpha\omega_1^{2}+\beta=2\zeta\omega_1\\ \alpha\omega_3^{2}+\beta=2\zeta\omega_3\end{cases}$$

i.e.

$$\alpha=\frac{2\zeta}{\omega_1+\omega_3}=\frac{2\times 0.02}{25.483+6.294}=1.259\times 10^{-3}$$

$$\beta=\frac{2\zeta\,\omega_1\omega_3}{\omega_1+\omega_3}=\frac{2\times 0.02\times 25.483\times 6.294}{25.483+6.294}=0.20189$$
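These values can be verified with a few Matlab lines (an added check, using the per-unit-mass matrices of this example):

ze = 0.02; k = 100;
w1 = 2.5483*sqrt(k); w3 = 0.6294*sqrt(k);        % extreme frequencies
al = 2*ze/(w1+w3)                                % alpha = 1.259e-3
be = 2*ze*w1*w3/(w1+w3)                          % beta  = 0.2019
Cb = al*k*[4 -2 0; -2 4 -2; 0 -2 2] + be*eye(3); % damping matrix (per unit mass)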
The response is given in Figure 82. (b) For the case of modal superposition, we integrate each mode separately. The modes are given in Example 20 and are independent of the actual value of $k$. We reproduce them for the reader's convenience:

$$\Phi=\left[\phi_1,\,\phi_2,\,\phi_3\right]=\begin{bmatrix}0.5910&0.7370&0.3280\\-0.7370&0.3280&0.5910\\0.3280&-0.5910&0.7370\end{bmatrix}$$
Figure 82: Response of the three-storey building of Example 20 to the El Centro 1940 earthquake (N-S component), computed using the trapezoidal rule, the original matrices and Rayleigh damping of 2% for the first and last modes. Accelerations are absolute and displacements are relative to those of the ground.
The participation factors are

$$p_1=\phi_1^T\bar M J=\begin{Bmatrix}0.5910\\-0.7370\\0.3280\end{Bmatrix}^T\begin{bmatrix}1&&\\&1&\\&&1\end{bmatrix}\begin{Bmatrix}1\\1\\1\end{Bmatrix}=0.182$$

$$p_2=\phi_2^T\bar M J=0.474\,;\qquad p_3=\phi_3^T\bar M J=1.656$$

The mobilized mass must equal the total mass, since we are using all the modes:

$$\frac{M_T}{m}=p_1^{2}+p_2^{2}+p_3^{2}=0.182^{2}+0.474^{2}+1.656^{2}=0.0331+0.2247+2.7423=3$$

i.e. $M_T=3m$. It is seen that the first (higher-frequency) mode mobilizes only $0.0331/3\times 100=1.1\%$ of the mass of the structure, the second (middle) mode mobilizes $0.2247/3\times 100\simeq 7.5\%$, whereas the third (lower-frequency) mode mobilizes $2.7423/3\times 100=91.4\%$ of the mass of the structure. Hence, the response of this last mode is the most important one and accounts for most of the response of the structure. We integrate this mode only. The equation of motion is

$$\ddot\eta_3+2\zeta\omega_3\dot\eta_3+\omega_3^{2}\eta_3=-p_3\,a_g$$

and the displacements and accelerations are recovered from the mode shape

$$u=\phi_3\,\eta_3\,;\qquad \ddot u=\phi_3\,\ddot\eta_3$$
The results for the three floors are shown in Figure 83. Comparing the result with that of Figure 82, it is seen that using only the fundamental mode (the lowest frequency), the result is rather accurate, especially in terms of relative displacements. (c) For the case of the spectral analysis, we note that the periods of natural vibration are

$$\omega_1=25.483\ \Rightarrow\ T_1=\frac{2\pi}{\omega_1}=0.2466\ \mathrm{s}\,;\qquad \omega_2=17.635\ \Rightarrow\ T_2=\frac{2\pi}{\omega_2}=0.3563\ \mathrm{s}\,;\qquad \omega_3=6.294\ \Rightarrow\ T_3=\frac{2\pi}{\omega_3}=0.9983\ \mathrm{s}$$
The maximum accelerations may be obtained from the acceleration spectrum $S_a$, which is given again in Figure 84 in greater detail. A typical design spectrum inferred from the earthquake for this site (although design spectra in building codes are typically proposed from a large earthquake database) is also shown in the figure. The values of $S_a$ from the design spectrum for the three modes are

$$T_1=0.2466\ \mathrm{s}\ \Rightarrow\ S_a(T_1)=10\ \mathrm{m/s^2}\,;\qquad T_2=0.3563\ \mathrm{s}\ \Rightarrow\ S_a(T_2)=10\ \mathrm{m/s^2}\,;\qquad T_3=0.9983\ \mathrm{s}\ \Rightarrow\ S_a(T_3)=6\ \mathrm{m/s^2}$$
Figure 83: Response of the three storey building of Example 20 to El Centro 1940
earthquake (N-S component). Response computed using the trapezoidal rule, the
third (dominant) mode and modal damping of 2%. Accelerations are absolute and
displacements are relative to that of the ground.
Figure 84: Acceleration response spectrum for El Centro Earthquake of 1940 (N-S
component). Bold line is a "design response spectrum" for the computations.
so the maximum accelerations for the three modes are

$$\ddot u^{1}_{\max}=p_1\,S_a(\omega_1,\zeta)\,\phi_1=0.182\times 10\begin{Bmatrix}0.5910\\-0.7370\\0.3280\end{Bmatrix}=\begin{Bmatrix}1.0756\\-1.3413\\0.5970\end{Bmatrix}\ \mathrm{m/s^2}$$

$$\ddot u^{2}_{\max}=p_2\,S_a(\omega_2,\zeta)\,\phi_2=0.474\times 10\begin{Bmatrix}0.7370\\0.3280\\-0.5910\end{Bmatrix}=\begin{Bmatrix}3.4934\\1.5547\\-2.8013\end{Bmatrix}\ \mathrm{m/s^2}$$

$$\ddot u^{3}_{\max}=p_3\,S_a(\omega_3,\zeta)\,\phi_3=1.656\times 6\begin{Bmatrix}0.3280\\0.5910\\0.7370\end{Bmatrix}=\begin{Bmatrix}3.2590\\5.8722\\7.3228\end{Bmatrix}\ \mathrm{m/s^2}$$
Since the modes are far enough apart for the given small damping, the SRSS combination method is adequate. Then the acceleration for storey $i$ is

$$\ddot u_{i\,\max}=\sqrt{\left(\ddot u^{1}_{i}\right)^{2}+\left(\ddot u^{2}_{i}\right)^{2}+\left(\ddot u^{3}_{i}\right)^{2}}$$

so

$$\ddot u_{\max}=\begin{Bmatrix}4.897\\6.221\\7.863\end{Bmatrix}\ \mathrm{m/s^2}$$

The reader can verify with Figure 82 that the maximum response has been obtained rather accurately with this simplified method. The difference is probably smaller than that obtained by considering different earthquakes with the same characteristics. Moreover, only a few modes of the structure need to be computed and no time integration is necessary.
For the maximum displacements we can use the $S_d$ of Figure 81 or, in an approximate way, the $S_d$ generated from the $S_a$ as $PSA/\omega^{2}$, i.e.

$$u^{1}_{\max}=p_1\,\frac{S_a(\omega_1,\zeta)}{\omega_1^{2}}\,\phi_1=\frac{1}{\omega_1^{2}}\,\ddot u^{1}_{\max}=\begin{Bmatrix}1.656\times 10^{-3}\\-2.066\times 10^{-3}\\0.919\times 10^{-3}\end{Bmatrix}\ \mathrm{m}$$

$$u^{2}_{\max}=p_2\,\frac{S_a(\omega_2,\zeta)}{\omega_2^{2}}\,\phi_2=\frac{1}{\omega_2^{2}}\,\ddot u^{2}_{\max}=\begin{Bmatrix}11.233\times 10^{-3}\\5.000\times 10^{-3}\\-9.008\times 10^{-3}\end{Bmatrix}\ \mathrm{m}$$

$$u^{3}_{\max}=p_3\,\frac{S_a(\omega_3,\zeta)}{\omega_3^{2}}\,\phi_3=\frac{1}{\omega_3^{2}}\,\ddot u^{3}_{\max}=\begin{Bmatrix}0.08227\\0.14823\\0.18485\end{Bmatrix}\ \mathrm{m}$$
The combination using the same rule is

$$u_{\max}=\begin{Bmatrix}0.08305\\0.14833\\0.18507\end{Bmatrix}\ \mathrm{m}$$

In the case of the displacements, it can be seen that the approximation given by the third mode alone is even better than in the case of the accelerations. Comparing with the maxima of Figure 82, it can be deduced that the method yields good results.
11 Bibliography
In this section we include a dozen selected books for the reader's convenience.
1. K.J. Bathe. Finite Element Procedures. Prentice Hall 1996.
2. A.K. Chopra. Dynamics of Structures. Prentice Hall 1995.
3. R.W. Clough, J. Penzien. Dynamics of Structures. McGraw-Hill 1975.
4. J.P. Den Hartog. Mechanical Vibrations. Dover 1985.
5. J.L. Humar. Dynamics of Structures. Balkema 2002.
6. T.J.R. Hughes. The Finite Element Method: Linear Static and Dynamic Fi-
nite Element Analysis. Dover 2000.
7. L. Meirovitch. Fundamentals of Vibrations. Waveland 2010.
8. D.E. Newland. Mechanical Vibration Analysis and Computation. Dover 2006.
9. W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery. Numerical Recipes in Fortran 77: The Art of Scientific Computing, 2nd Ed. Cambridge 1992.
10. S Rao. Vibration of Continuous Systems. Wiley 2007.
11. S Rao. Mechanical Vibrations. 5th edition of Prentice-Hall 2010.
12. O Zienkiewicz, R.L. Taylor. The Finite Element Method (3 volumes). 6th
edition of Elsevier 2005.