INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, VOL. 41, 65–93 (1998)
COMPLEX-TIME-STEP NEWMARK METHODS WITH
CONTROLLABLE NUMERICAL DISSIPATION
T. C. FUNG*
School of Civil and Structural Engineering, Nanyang Technological University, Nanyang Avenue, Singapore 639798
ABSTRACT
In this paper, unconditionally stable higher-order accurate time-step integration algorithms with controllable numerical dissipation are presented. The algorithms are based on the Newmark method with complex time steps. The ultimate spectral radius (μ), the sub-step locations (β_j) and the weighting factors (a_j) are the algorithmic parameters. For an algorithm that is (2n−1)th order accurate, the sub-step locations, which may be complex, are shown to be the roots of an nth degree polynomial. The polynomial is given explicitly in terms of n and μ. The weighting factors are then obtained by solving a system of n simultaneous equations. It is further shown that the order of accuracy is increased by one for the non-dissipative algorithms with μ = 1. The stability properties of the present algorithms are studied. It is shown that if the ultimate spectral radius is set between −1 and 1, the eigenvalues of the numerical amplification matrix are complex with magnitude less than or equal to unity. The algorithms are therefore unconditionally C-stable. When the ultimate spectral radius is set to 0 or 1, the algorithms are found to be equivalent to the first sub-diagonal and diagonal Padé approximations, respectively. The present algorithms are more general as the numerical dissipation is controllable, and they are very suitable for parallel computers. The accuracy of the excitation responses is found to be enhanced by the present complex-time-step procedure. To maintain high-order accuracy, the excitation may need some modifications. © 1998 John Wiley & Sons, Ltd.

Int. J. Numer. Meth. Engng., 41, 65–93 (1998)

KEY WORDS: Newmark method; complex-time-steps; time-step integration method; higher-order algorithms; excitation modification; parallel algorithms
1. INTRODUCTION
The equations of motion for linear structural dynamic problems after spatial discretization using
the finite element method can be written as
[M]{ü(t)} + [C]{u̇(t)} + [K]{u(t)} = {F(t)}   (1)

where [M], [C] and [K] are the mass, damping and stiffness matrices, respectively, {F(t)} is the applied load vector, {u(t)} is the unknown displacement vector, which in general is a function of time t, and dots denote differentiation with respect to time t. The initial conditions at t = 0 are {u(0)} = {u_0} and {u̇(0)} = {v_0}.
Step-by-step time integration methods have been widely used to obtain numerical solutions. For structural dynamic problems, it is advantageous for a step-by-step time integration algorithm to possess algorithmic damping, as the spatial resolution of the high-frequency modes
* Correspondence to: T. C. Fung, School of Civil and Structural Engineering, Nanyang Technological University,
Nanyang Avenue, Singapore 639798
CCC 0029–5981/98/010065–29$17.50
© 1998 John Wiley & Sons, Ltd.
Received 10 May 1996
Revised 5 February 1997
is poor when using standard finite elements to discretize the spatial domain. The algorithmic
damping (or numerical dissipation) in the high-frequency range could help to damp out the
spurious high-frequency responses.
It is desirable for an algorithm to have controllable numerical dissipation in the high-frequency
range. The low-frequency responses should be preserved and the high-frequency responses should
be damped in a controllable way. The dissipation can be measured by the spectral radius which is
defined as the largest magnitude of the eigenvalues of a numerical amplification matrix. Wood1
suggested that the curve of the spectral radius against Δt/T (where T is the undamped natural period and Δt is the time step) should stay close to unity for as long as possible and decrease to about 0.5 to 0.8 as Δt/T tends to infinity. The corresponding spectral radius as Δt/T → ∞ is defined as the ultimate spectral radius and is denoted by μ in this paper. In the extreme case, the ultimate spectral radius approaches zero so that the high-frequency responses are eliminated in one time step. Algorithms with this property are asymptotic annihilating.
Apart from algorithmic damping, the algorithms should be unconditionally stable so that there
is no time step restriction, i.e. time steps of any size can be used without introducing numerical
instability. The ability to use a large time step is advantageous for structural dynamic problems where the responses are contributed mainly by the low-frequency modes. The time-step sizes need
not be too small to resolve the very high-frequency modes accurately, as long as the algorithm
remains numerically stable.
The time integration algorithms with desirable characteristics can be constructed via finite
difference methods, weighted residual methods, Taylor series collocation methods, Hamilton's principle, Hamilton's law or least-squares methods (see, for example, books by Wood,1 Hughes,2 Zienkiewicz and Taylor,3 Hairer et al.,4 Hairer and Wanner5). The interpolation functions over
a time interval may be locally defined as in the finite element method3 or globally defined as in the
Ritz method.6,7 Furthermore, the interpolated solutions may be discontinuous across two time
intervals.8–10 Very general p-step and p-stage integration algorithms, such as the generalized Newmark algorithms (GN_pj) and the generalized single-step algorithms (SS_pj), can be used to derive higher-order algorithms systematically using the weighted residual approach.1,3 The
procedure re-derives many well-known algorithms as special cases.
1.1. First-order algorithms
The Newmark method is the most popular algorithm for numerical solutions of structural
dynamic problems. The numerical solutions {u_{n+1}} and {v_{n+1}}, approximating {u(t)} and {u̇(t)} respectively at t = t_{n+1}, can be obtained from {u_n} and {v_n} at t = t_n by

{u_{n+1}} = {u_n} + Δt{v_n} + Δt²(1−2β)/2 {a_n} + Δt²β{a_{n+1}}   (2a)

{v_{n+1}} = {v_n} + Δt(1−γ){a_n} + Δtγ{a_{n+1}}   (2b)

[M]{a_n} + [C]{v_n} + [K]{u_n} = {F(t_n)}   (2c)

[M]{a_{n+1}} + [C]{v_{n+1}} + [K]{u_{n+1}} = {F(t_{n+1})}   (2d)

where Δt = t_{n+1} − t_n, and β and γ are the algorithmic parameters. Starting from the given initial conditions {u_0} and {v_0} at t = 0, the approximate numerical solutions at the other time points t = t_n can be found by applying equations (2a)–(2d) repeatedly.
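The update (2a)–(2d) can be sketched for the scalar undamped test problem ü + ω²u = 0 (unit mass, no damping, no load); the function names are illustrative, not from the paper:

```python
import math

def newmark_step(u, v, a, w, dt, beta=0.25, gamma=0.5):
    """One step of the Newmark method (2a)-(2d) for u'' + w^2 u = 0.
    Equation (2d) reduces to a1 = -w^2 * u1, which is solved first."""
    # predictor part of (2a): everything except the dt^2*beta*a1 term
    u_pred = u + dt * v + 0.5 * dt**2 * (1 - 2 * beta) * a
    # (2d): a1 = -w^2 * (u_pred + dt^2*beta*a1), solved for a1
    a1 = -w**2 * u_pred / (1 + beta * w**2 * dt**2)
    u1 = u_pred + dt**2 * beta * a1               # (2a)
    v1 = v + dt * ((1 - gamma) * a + gamma * a1)  # (2b)
    return u1, v1, a1

# usage: free vibration with u(0) = 1, v(0) = 0
w, dt = 2.0, 0.05
u, v = 1.0, 0.0
a = -w**2 * u                                     # (2c)
for _ in range(200):
    u, v, a = newmark_step(u, v, a, w, dt)
```

With β = 1/4 and γ = 1/2 (the trapezoidal, or average-acceleration, rule discussed below) the discrete quantity v² + ω²u² is conserved exactly for this linear problem, reflecting the non-dissipative character of that parameter choice.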
The algorithm is unconditionally stable and dissipative if the two algorithmic parameters β and γ satisfy 2β ≥ γ > 1/2. To achieve maximal high-frequency numerical dissipation to filter out the spurious high-frequency responses, β = (γ + 1/2)²/4 is recommended. With this β value, the eigenvalues of the numerical amplification matrix are complex conjugates. It can also be shown that the ultimate spectral radius is then given by |1 − 2/(γ + 1/2)|. However, the algorithm is only first-order accurate.
For the algorithm to be second-order accurate and unconditionally stable, γ = 1/2 and β ≥ 1/4 are required. The algorithm is, however, non-dissipative, i.e. the ultimate spectral radius is unity for all β. As a result, high-frequency dissipation in the Newmark method incurs a loss of accuracy and introduces excessive algorithmic damping in the low-frequency range. It can be shown that the algorithm is asymptotic annihilating if β = 1 and γ = 3/2. The algorithm, however, achieves very poor accuracy in the low-frequency range.
1.2. Second-order algorithms
A lot of research work has been done to construct second-order accurate algorithms with
high-frequency numerical dissipation. Some of the well-known algorithms include the Wilson-θ method,11 the collocation method,12 the Houbolt method,13 the Park method,14 the HHT-α method,15,16 the WBZ-α method,17 the generalized-α method18 and the Bazzi–Anderheggen ρ-method.19
The Wilson-θ method11 is a one-parameter algorithm with cubic variation in displacement and collocation of the differential equation at time (n + θ)Δt. The algorithm is second-order accurate, unconditionally stable and dissipative if θ ≥ 1.37. The curve of the spectral radius against Δt/T for θ = 1.4 decreases initially but increases again after Δt/T = 2.4 due to the emergence of real eigenvalues. The spectral radius plot has a cusp and does not decrease progressively. Hence, the mid-frequency responses are subjected to higher algorithmic damping than the high-frequency responses.
The collocation method12 combines aspects of the Newmark method and the Wilson-θ method. There are three parameters, and the algorithms are second-order accurate and unconditionally stable if γ = 1/2, θ ≥ 1 and θ/(2θ + 2) ≥ β ≥ (2θ² − 1)/(8θ³ − 4). Optimal θ values for a given β have been derived in Reference 12. Various levels of dissipation can be achieved by varying β.
The Houbolt method13 uses the idea of making a cubic curve passing through four successive
values of the displacement. It is essentially a two-step backward-difference method. The Houbolt
method is asymptotic annihilating and is known to be too dissipative in the low-frequency range
even though it is second-order accurate. Hulbert20 showed that a 3-step linear multi-step
asymptotic annihilating algorithm must have the same spectral characteristics as the Houbolt
method. Chung and Hulbert21 derived a family of unconditionally stable second-order accurate
asymptotic annihilating Houbolt algorithms for structural dynamic problems.
Asymptotic annihilating algorithms with smaller low-frequency dissipation can be constructed
by using 4 or more steps. The Park method14 is second-order accurate, unconditionally stable and asymptotic annihilating. It combines Gear's two-step and three-step stiffly stable algorithms. It is a 3-step 2-stage (or 6-step) algorithm for structural dynamic problems. However, the method is not self-starting and requires a special starting procedure.
Hilber et al.15,16 proposed the HHT-α method. The method makes use of the Newmark update equations but modifies the collocation equation. It is second-order accurate and unconditionally stable if 0 ≥ α > −1/2, β ≥ 1/4 − α/2 and γ = 1/2 − α. It has been shown that there is no point in making the spectral radius less than 1/2, as the spurious root then takes over. To prevent the spurious
real eigenvalue from dominating, α should be greater than −1/3. For a given ultimate spectral radius μ between 1 and 1/2, the algorithmic parameters are suggested to be α = (μ − 1)/(μ + 1), γ = 1/2 − α and β = (1 − α)²/4.
The WBZ-α method17 is very similar to the HHT-α method. The α parameter is related to the mass matrix, instead of the damping and stiffness matrices as in the HHT-α method. However, the second-order accurate WBZ-α algorithms are only conditionally stable.
Chung and Hulbert18 combined the HHT-α method and the WBZ-α method to construct the generalized-α method. The method is unconditionally stable and second-order accurate with controllable high-frequency dissipation. In their formulation, the four algorithmic parameters can be expressed in terms of the ultimate spectral radius (0 ≤ μ ≤ 1) directly as α_f = μ/(μ + 1), α_m = (2μ − 1)/(μ + 1), β = (1 − α_m + α_f)²/4 and γ = 1/2 − α_m + α_f.
Bazzi and Anderheggen19 proposed the ρ-method with the ultimate spectral radius ρ (between 0 and 1) as an algorithmic parameter. The algorithm is unconditionally stable if 0 ≤ ρ ≤ 1. The algorithm is second-order accurate if there is no physical damping and is only first-order accurate otherwise.
All these dissipative algorithms can be generated from the generalized single-step algorithm SS321,3 or the θ_1-method of Hoff and Pahl.22,23 Besides, all these dissipative algorithms are linear multi-step algorithms.1,2 It is well known that at most second-order accuracy can be achieved by linear multi-step algorithms if the algorithms are to be unconditionally stable.24 Third- and higher-order accurate linear multi-step algorithms are therefore only conditionally stable.
1.3. Higher-order algorithms
Higher-order algorithms give very accurate numerical results and are good for long-term
prediction of system responses and preservation of system invariants (such as energy and
momentum). Fast and accurate algorithms are useful for dynamical design, analysis and control
of mechanical and structural systems.
Higher-order algorithms based on the Padé approximations to the exponential function have been widely used in solving differential equations.25,26 However, powers of the matrices appear in the higher-order formulations. This makes the computational effort very high. It is well known that the diagonal and first sub-diagonal Padé approximations are non-dissipative and asymptotic annihilating, respectively. Most of the higher-order algorithms derived from various methods are equivalent to these non-dissipative A-stable or asymptotic annihilating L-stable algorithms.
The higher-order A-stable non-dissipative algorithms corresponding to the diagonal Padé approximation can be obtained by using Runge–Kutta methods,4,5 weighted residual methods,27,28 the Petrov–Galerkin method29 or the bi-discontinuous time Galerkin method.10
The higher-order L-stable asymptotic annihilating algorithms corresponding to the first sub-diagonal Padé approximation can be obtained by using the time-discontinuous Galerkin method,8–10 the Petrov–Galerkin method29 and weighted residual methods.28
Other higher-order dissipative algorithms can be obtained from Runge–Kutta methods,4,5 the bi-discontinuous time Galerkin method10 and weighted residual methods.1,3 However, the
high-frequency dissipation is not a directly controllable parameter in these algorithms. Sometimes, there are cusps in the curves of the spectral radii of the algorithms so that the responses in
the mid-frequency range would have more algorithmic damping than those in the high-frequency
range.
Higher-order algorithms can be constructed by using the Romberg extrapolation technique.30
The accuracy of the solutions obtained from the second-order accurate Newmark method is
improved by evaluating the results at the end of the time step a few times with different sub-step
sizes. The fourth-, sixth- and eighth-order algorithms can be constructed with 3, 7 and 15
evaluations. The numerical results are very accurate and are comparable to the Padé approximations. However, these extrapolated Newmark methods are unconditionally unstable, i.e. the
spectral radius is greater than unity even for very small time steps. Very small time-step sizes are
required to maintain numerical stability.
1.4. Tarnow and Simo's sub-marching procedure
Recently, Tarnow and Simo31 presented a sub-marching procedure to construct fourth-order
accurate algorithms from a fairly general class of second-order accurate algorithms. In their
procedure, a sequential sub-marching (forward and backward) of three evaluations is required to
advance one time step. The stability, conservative properties and implementation of the underlying second-order algorithm are retained. When the procedure is applied to the second-order
Newmark method, the resultant algorithm is non-dissipative. To introduce high-frequency
dissipation, the post-process filtering of the solution is suggested.
In this paper, a sub-stepping procedure is proposed. The numerical results at different sub-step
locations are evaluated independently and then combined linearly to give higher-order accurate
results at the end of the time step. As in Tarnow and Simo's procedure, there are no changes in the
implementation of the underlying Newmark method except the use of complex time steps.
However, the high-frequency dissipation is controllable for the present algorithms. Besides, as the
numerical results at the sub-step locations are independent of each other, the evaluations can be
computed in parallel. Furthermore, the present procedure is more general as it can construct
other higher-order algorithms systematically from the underlying second-order accurate
Newmark algorithm.
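As a concrete illustration of this sub-stepping idea, the sketch below advances the scalar undamped test problem ü + ω²u = 0 one step by running the trapezoidal Newmark rule (β = 1/4, γ = 1/2) independently with two complex sub-steps and combining the results linearly. The parameter values β_{1,2} = (1 ± i/√3)/2, weights a_{1,2} = ∓i√3 and a_0 = 1 are the n = 2, μ = 1 values produced later in the paper (Sections 5 and 6); the helper names and the scalar specialization are mine, so treat this as a sketch rather than the paper's implementation:

```python
import cmath, math

# n = 2, mu = 1 sub-step parameters (roots of x^2 - x + 1/3; Table I with mu = 1)
B1 = (1 + 1j / math.sqrt(3)) / 2
B2 = B1.conjugate()
A0, A1, A2 = 1.0, -1j * math.sqrt(3), 1j * math.sqrt(3)

def trapezoidal(u, v, w, h):
    """Newmark with beta = 1/4, gamma = 1/2 on u'' + w^2 u = 0; h may be complex."""
    d = 1 + (w * h)**2 / 4
    return (((1 - (w * h)**2 / 4) * u + h * v) / d,
            (-w**2 * h * u + (1 - (w * h)**2 / 4) * v) / d)

def complex_step(u, v, w, dt):
    """One higher-order step: the two sub-step results are independent,
    so they could be evaluated in parallel, then combined linearly."""
    u1, v1 = trapezoidal(u, v, w, B1 * dt)
    u2, v2 = trapezoidal(u, v, w, B2 * dt)
    # conjugate pairs: the imaginary parts cancel, keep the real part
    return (A0 * u + A1 * u1 + A2 * u2).real, (A0 * v + A1 * v1 + A2 * v2).real

def period_error(w, steps):
    """Error after integrating exactly one period (exact: u = 1, v = 0)."""
    u, v = 1.0, 0.0
    dt = 2 * math.pi / (w * steps)
    for _ in range(steps):
        u, v = complex_step(u, v, w, dt)
    return math.hypot(u - 1.0, v)

err64, err128 = period_error(1.0, 64), period_error(1.0, 128)
```

Halving the step size should reduce the error by roughly 2⁴ = 16, consistent with the fourth-order accuracy claimed for the non-dissipative (μ = 1) member of the family.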
1.5. Outline of the paper
In this paper, unconditionally stable higher-order accurate step-by-step time integration
algorithms with controllable numerical dissipation for linear structural dynamic problems are
constructed. The Newmark method with complex time steps is used. The numerical results at the
end of a time step are obtained by combining results at several sub-step locations. The general
formulation for a (2n − 1)th order accurate algorithm is studied. The ultimate spectral radius μ is a controllable algorithmic parameter. There are in general n sub-step locations and the corresponding n weighting factors to be determined. It is shown that the sub-step locations, which may be complex, are given by the roots of an nth degree polynomial. The coefficients of the polynomial are given explicitly in terms of n and μ in this paper. The weighting factors can then be obtained by solving a system of n simultaneous equations.
It is further shown that when μ = 1, the algorithms for various n are non-dissipative with an order of accuracy increased from 2n − 1 to 2n. The leading truncation error terms for the relative period error and algorithmic damping ratio are given explicitly in terms of n and μ.
The stability properties of the proposed complex-time-step Newmark methods are investigated. It is shown that if −1 < μ ≤ 1, the algorithms are unconditionally stable for all n. The eigenvalues of the numerical amplification matrix are complex conjugates. Besides, it is shown that the spectral radius decreases monotonically from unity to the ultimate spectral radius μ as the time-step size increases. The present algorithms therefore show good spectral properties and are unconditionally C-stable.
It can be verified that the present algorithms for μ = 0 and 1 correspond to the first sub-diagonal and diagonal Padé approximations, respectively. The present algorithms are more general as the numerical dissipation in the algorithms is controllable.
The accuracy of the forcing excitation responses is studied. It is shown that the accuracy of the
excitation responses is improved by the present complex-time-step procedure. However, to
achieve the order of accuracy of the numerical amplification matrix, the excitation may need some
modifications.
The computational procedure is discussed. Since the numerical results at the sub-step locations
are independent of each other, the evaluations can be computed in parallel. The computational
efforts, the spectral radii, the algorithmic damping ratios and the relative period errors of the
present higher-order algorithms are compared with other higher-order algorithms.
2. SINGLE-DEGREE-OF-FREEDOM SYSTEM
It would be cumbersome and difficult to study the characteristics of an algorithm by applying it
to equation (1) directly. Instead, the modal decomposition method can be used to uncouple equation (1) by invoking the orthogonality properties of the free-vibration mode shapes of the undamped system. In this case, modal damping is assumed.
It has been rigorously established that the integration of the uncoupled equations is equivalent to the integration of the original system.1–4 It is therefore more convenient, and sufficient for the purpose of investigating the characteristics of a proposed algorithm, to consider the equation of motion of a single-degree-of-freedom system in the form

ü(t) + 2ξω u̇(t) + ω² u(t) = f(t)   (3)

where ξ, ω and f(t) are the damping ratio, the undamped natural frequency of the system and the forcing excitation, respectively. To analyse a time-step integration algorithm, t_n and t_{n+1} can be conveniently chosen as 0 and Δt, respectively.
Equation (3) is used to study the characteristics of the Newmark algorithm and the constructed higher-order algorithms. There are many ways to express the Newmark algorithm. Equation (2) is the single-step three-stage form. For comparison with the analytical solutions, the Newmark algorithm is cast in the equivalent single-step two-stage form for the single-degree-of-freedom system as

\[
\begin{Bmatrix}u_{n+1}\\ v_{n+1}\end{Bmatrix}=[A_{NM}(\Delta t)]\begin{Bmatrix}u_{n}\\ v_{n}\end{Bmatrix}+[L_{NM}(\Delta t)]\begin{Bmatrix}f(t_{n})\\ f(t_{n+1})\end{Bmatrix}
\]

where [A_NM(Δt)] is the numerical amplification matrix, given by

\[
[A_{NM}(\Delta t)]=\frac{1}{2+4\gamma\xi\omega\Delta t+2\beta\omega^{2}\Delta t^{2}}
\begin{bmatrix}
2(2\beta-\gamma)\xi\omega^{3}\Delta t^{3}+(2\beta-1)\omega^{2}\Delta t^{2}+4\gamma\xi\omega\Delta t+2 &
2\,[\,2(2\beta-\gamma)\xi^{2}\omega^{2}\Delta t^{3}+(2\gamma-1)\xi\omega\Delta t^{2}+\Delta t\,]\\[4pt]
-(2\beta-\gamma)\omega^{4}\Delta t^{3}-2\omega^{2}\Delta t &
2\,[\,-(2\beta-\gamma)\xi\omega^{3}\Delta t^{3}+(\beta-\gamma)\omega^{2}\Delta t^{2}+2(\gamma-1)\xi\omega\Delta t+1\,]
\end{bmatrix}\tag{4}
\]

and

\[
[L_{NM}(\Delta t)]=\frac{1}{2+4\gamma\xi\omega\Delta t+2\beta\omega^{2}\Delta t^{2}}
\begin{bmatrix}
(1-2\beta)\Delta t^{2}+2(\gamma-2\beta)\xi\omega\Delta t^{3} & 2\beta\Delta t^{2}\\[4pt]
2(1-\gamma)\Delta t-(\gamma-2\beta)\omega^{2}\Delta t^{3} & 2\gamma\Delta t
\end{bmatrix}
\]
The convergence of a numerical algorithm requires consistency and stability, according to the Lax equivalence theorem. The consistency property can be determined from the truncation error by comparing the numerical amplification matrix with the analytical amplification matrix. The stability requirements and other measures of accuracy are considered in the next few sections.
The analytical amplification matrix for equation (3) is given by

\[
[A(\Delta t)]=e^{-\xi\omega\Delta t}
\begin{bmatrix}
\cos(\omega_{d}\Delta t)+\dfrac{\xi\omega}{\omega_{d}}\sin(\omega_{d}\Delta t) & \dfrac{1}{\omega_{d}}\sin(\omega_{d}\Delta t)\\[6pt]
-\dfrac{\omega^{2}}{\omega_{d}}\sin(\omega_{d}\Delta t) & \cos(\omega_{d}\Delta t)-\dfrac{\xi\omega}{\omega_{d}}\sin(\omega_{d}\Delta t)
\end{bmatrix}\tag{5}
\]

where ω_d = √(1−ξ²) ω is the damped vibration frequency.

By comparing the Taylor series expansions of [A_NM] in equation (4) and [A] in equation (5), it can be seen that the truncation errors are O(Δt³) (i.e. second-order accurate) if γ = 1/2 and O(Δt²) (i.e. first-order accurate) otherwise. The leading truncation errors can be eliminated by linearly combining the numerical amplification matrices at various sub-step locations (β_j Δt) with weighting factors a_j, as shown in the next section.
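The entries of equation (4) and the spectral-radius measures of Section 1 are easy to check numerically. The sketch below evaluates [A_NM] and its spectral radius via the trace and determinant of the 2 × 2 matrix; the function names are mine, not the paper's:

```python
import cmath

def newmark_amp(w, dt, beta, gamma, xi=0.0):
    """Entries of the 2x2 Newmark amplification matrix of equation (4)
    for u'' + 2*xi*w*u' + w**2*u = 0."""
    D = 2 + 4*gamma*xi*w*dt + 2*beta*(w*dt)**2
    a11 = (2*(2*beta - gamma)*xi*w**3*dt**3 + (2*beta - 1)*(w*dt)**2
           + 4*gamma*xi*w*dt + 2) / D
    a12 = 2*(2*(2*beta - gamma)*(xi*w*dt)**2*dt
             + (2*gamma - 1)*xi*w*dt**2 + dt) / D
    a21 = (-(2*beta - gamma)*w**4*dt**3 - 2*w**2*dt) / D
    a22 = 2*(-(2*beta - gamma)*xi*w**3*dt**3 + (beta - gamma)*(w*dt)**2
             + 2*(gamma - 1)*xi*w*dt + 1) / D
    return a11, a12, a21, a22

def spectral_radius(A):
    """Largest eigenvalue magnitude of a 2x2 matrix, via trace/determinant."""
    a11, a12, a21, a22 = A
    tr, det = a11 + a22, a11*a22 - a12*a21
    disc = cmath.sqrt(tr*tr - 4*det)
    return max(abs((tr + disc) / 2), abs((tr - disc) / 2))
```

For γ = 1/2, β = 1/4 (undamped) the spectral radius is unity at every step size, while for the dissipative choice β = (γ + 1/2)²/4 the spectral radius at very large ωΔt approaches the ultimate value |1 − 2/(γ + 1/2)| quoted in Section 1.1.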
3. TRUNCATION ERROR ELIMINATION PROCEDURE

The sth order accurate numerical amplification matrix [A_s(Δt)] with truncation error O(Δt^{s+1}) is to be constructed from

\[
[A_{s}(\Delta t)]=\sum_{j} a_{j}\,[A_{NM}(\beta_{j}\Delta t)]\tag{6}
\]

where a_j and β_j are algorithmic parameters to be determined. The range of the index j depends on the number of parameters used and is left unspecified for the time being. The algorithmic parameters are chosen so that the Taylor series expansions of equations (5) and (6) match for the first (s + 1) terms, i.e. from Δt⁰ to Δt^s.

Comparing the Taylor series expansions of the entries in [A_s] with those in [A], it can be shown that the conditions to match the sth terms are as follows:

for s = 0:  Σ_j a_j = 1   (7a)
for s = 1:  Σ_j a_j β_j = 1   (7b)
for s = 2:  Σ_j a_j β_j² = 1, and γ = 1/2   (7c)
for s = 3:  Σ_j a_j β_j³ = 2/3, and β = 1/4   (7d)
for s = k > 3:  Σ_j a_j β_j^k = 2^{k−1}/k!   (7e)
For example, to obtain a second-order algorithm, γ = 1/2 and the a_j and β_j must satisfy equations (7a)–(7c). This can be done simply by choosing a_1 = 1 and β_1 = 1, which is of course the original Newmark algorithm. Higher-order algorithms can be constructed systematically by solving more equations with more undetermined algorithmic parameters. It can be seen from equation (7) that for third- or higher-order algorithms, β = 1/4 and γ = 1/2 are required.
Note that for β_0 = 0, [A_NM(0)] = [I] is the identity matrix, so no evaluation is required. Therefore, it costs nothing to place one of the sub-step locations at the beginning of the time interval, i.e. β_0 = 0. The corresponding weighting factor a_0 is determined together with the other algorithmic parameters.
It is shown in the next section that a_0 can be related to the ultimate spectral radius μ. By using an additional n pairs of a_j and β_j, 2n equations from equation (7) can be solved. The resultant algorithm is then (2n − 1)th order accurate.
4. ULTIMATE SPECTRAL RADIUS

The spectral radius is defined as the largest magnitude of the eigenvalues of a numerical amplification matrix. It is a function of ωΔt (or Δt/T) and the other algorithmic parameters. For unconditionally stable algorithms, the spectral radius must not exceed unity for all ωΔt. Consider the numerical amplification matrix in equation (6) as Δt approaches infinity:

\[
\lim_{\Delta t\to\infty}[A_{2n-1}(\Delta t)]=\lim_{\Delta t\to\infty}\sum_{k=0}^{n}a_{k}[A_{NM}(\beta_{k}\Delta t)]
= a_{0}\begin{bmatrix}1&0\\0&1\end{bmatrix}+\sum_{k=1}^{n}a_{k}\begin{bmatrix}-1&0\\0&-1\end{bmatrix}\tag{8}
\]

The square of the magnitude of the eigenvalues |λ_∞| is equal to the determinant of the matrix in equation (8). Since μ is the ultimate spectral radius, one has

\[
\mu^{2}=|\lambda_{\infty}|^{2}=\det\Bigl(\lim_{\Delta t\to\infty}[A_{s}(\Delta t)]\Bigr)
=\Bigl(a_{0}-\sum_{k=1}^{n}a_{k}\Bigr)^{2}=(2a_{0}-1)^{2}\tag{9}
\]

by using equation (7a). Hence the weighting factor a_0 for the initial condition is related to the ultimate spectral radius directly. In the following, a_0 is defined as

a_0 = ½ (1 + (−1)ⁿ μ)   (10)

The advantage of defining a_0 this way is that the algorithms are undefined when μ = −1 and are non-dissipative when μ = 1, as shown in the following sections.
5. SOLUTION PROCEDURE

The remaining 2n algorithmic parameters (a_1, a_2, …, a_n, β_1, β_2, …, β_n) are to be obtained by solving the first 2n non-linear equations in equation (7). The equations can be written as

\[
\begin{bmatrix}
1 & 1 & 1 & \cdots & 1\\
\beta_{1} & \beta_{2} & \beta_{3} & \cdots & \beta_{n}\\
\beta_{1}^{2} & \beta_{2}^{2} & \beta_{3}^{2} & \cdots & \beta_{n}^{2}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
\beta_{1}^{n-1} & \beta_{2}^{n-1} & \beta_{3}^{n-1} & \cdots & \beta_{n}^{n-1}
\end{bmatrix}
\begin{Bmatrix}a_{1}\\a_{2}\\a_{3}\\ \vdots\\ a_{n}\end{Bmatrix}
=\begin{Bmatrix}d_{0}\\d_{1}\\d_{2}\\ \vdots\\ d_{n-1}\end{Bmatrix}\tag{11a}
\]
\[
\begin{bmatrix}
\beta_{1}^{n} & \beta_{2}^{n} & \beta_{3}^{n} & \cdots & \beta_{n}^{n}\\
\beta_{1}^{n+1} & \beta_{2}^{n+1} & \beta_{3}^{n+1} & \cdots & \beta_{n}^{n+1}\\
\beta_{1}^{n+2} & \beta_{2}^{n+2} & \beta_{3}^{n+2} & \cdots & \beta_{n}^{n+2}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
\beta_{1}^{2n-1} & \beta_{2}^{2n-1} & \beta_{3}^{2n-1} & \cdots & \beta_{n}^{2n-1}
\end{bmatrix}
\begin{Bmatrix}a_{1}\\a_{2}\\a_{3}\\ \vdots\\ a_{n}\end{Bmatrix}
=\begin{Bmatrix}d_{n}\\d_{n+1}\\d_{n+2}\\ \vdots\\ d_{2n-1}\end{Bmatrix}\tag{11b}
\]

where

\[
d_{0}=\tfrac{1}{2}\bigl(1-(-1)^{n}\mu\bigr)\quad\text{and}\quad d_{k}=\frac{2^{k-1}}{k!}\ \ \text{for } k\ge 1\tag{11c}
\]

The a_j can be eliminated from equations (11a) and (11b), i.e.

\[
\begin{bmatrix}
\beta_{1}^{n} & \cdots & \beta_{n}^{n}\\
\vdots & & \vdots\\
\beta_{1}^{2n-1} & \cdots & \beta_{n}^{2n-1}
\end{bmatrix}
\begin{bmatrix}
1 & \cdots & 1\\
\beta_{1} & \cdots & \beta_{n}\\
\vdots & & \vdots\\
\beta_{1}^{n-1} & \cdots & \beta_{n}^{n-1}
\end{bmatrix}^{-1}
\begin{Bmatrix}d_{0}\\ \vdots\\ d_{n-1}\end{Bmatrix}
=\begin{Bmatrix}d_{n}\\ \vdots\\ d_{2n-1}\end{Bmatrix}\tag{12a}
\]

or

\[
[B]\begin{Bmatrix}d_{0}\\ \vdots\\ d_{n-1}\end{Bmatrix}=\begin{Bmatrix}d_{n}\\ \vdots\\ d_{2n-1}\end{Bmatrix}\tag{12b}
\]

Note that in this equation the d_k are known, while the β_k in [B] are the unknowns to be determined. The elements of the matrix [B] can be expressed in terms of the β_k using Cramer's rule.32
Let D(k_1, k_2, …, k_n) denote the determinant of the matrix with element β_q^{k_p} in the pth row and qth column, i.e.

\[
D(k_{1},k_{2},\ldots,k_{n})=
\begin{vmatrix}
\beta_{1}^{k_{1}} & \beta_{2}^{k_{1}} & \cdots & \beta_{n}^{k_{1}}\\
\beta_{1}^{k_{2}} & \beta_{2}^{k_{2}} & \cdots & \beta_{n}^{k_{2}}\\
\vdots & \vdots & \ddots & \vdots\\
\beta_{1}^{k_{n}} & \beta_{2}^{k_{n}} & \cdots & \beta_{n}^{k_{n}}
\end{vmatrix}\tag{13}
\]

Obviously, the determinants of the coefficient matrices in equations (11a) and (11b) are D(0, 1, 2, …, n−1) and D(n, n+1, …, 2n−1), respectively.

It can be seen that the pth-row, qth-column element of [B] is given by

\[
B_{pq}=\frac{D(k_{1},k_{2},\ldots,k_{n})}{D(0,1,2,\ldots,n-1)}\quad\text{where } k_{j}=j-1 \text{ except } k_{q}=n+p-1\tag{14}
\]
For example, the [B] matrix for n = 4 is given by

\[
[B]=\frac{1}{D(0,1,2,3)}
\begin{bmatrix}
D(4,1,2,3) & D(0,4,2,3) & D(0,1,4,3) & D(0,1,2,4)\\
D(5,1,2,3) & D(0,5,2,3) & D(0,1,5,3) & D(0,1,2,5)\\
D(6,1,2,3) & D(0,6,2,3) & D(0,1,6,3) & D(0,1,2,6)\\
D(7,1,2,3) & D(0,7,2,3) & D(0,1,7,3) & D(0,1,2,7)
\end{bmatrix}\tag{15}
\]

It can be shown that

\[
D(0,1,2,\ldots,m)=\prod_{p=2}^{m+1}\ \prod_{q=1}^{p-1}(\beta_{p}-\beta_{q})
=(\beta_{m+1}-\beta_{m})(\beta_{m+1}-\beta_{m-1})\cdots(\beta_{m+1}-\beta_{1})\,D(0,1,2,\ldots,m-1)\tag{16}
\]

Furthermore, D(k_1, k_2, …, k_n) in equation (13) can be considered as the determinant of a minor of an extended matrix with additional β_k. As a result, by making use of equation (16) and the Laplace expansion32 of the determinant, D(k_1, k_2, …, k_n) can be found. For example, D(0, 2, 3, 5) can be obtained from the coefficient of β_5 β_6⁴ in D(0, 1, 2, 3, 4, 5). In the same way, D(0, 5, 2, 3) in equation (15) can be obtained. However, the elements of [B] are still non-linear functions of the β_k and it is still very difficult to solve for the β_k directly.
6. POLYNOMIAL FOR β_j

On the other hand, it can be seen that D(k_1, k_2, …, k_n)/D(0, 1, 2, …, n−1) is a homogeneous (i.e. all terms are of the same degree) and cyclically symmetric (i.e. its value is unaltered by the interchange of β_p and β_q) function of the β_k.

It is well known that any symmetric function of the roots of a polynomial can be expressed in terms of the coefficients of the polynomial.33 An nth degree polynomial P_n(x) with the β_j as roots can be constructed as

\[
P_{n}(x)=\prod_{j=1}^{n}(x-\beta_{j})=(x-\beta_{1})(x-\beta_{2})\cdots(x-\beta_{n-1})(x-\beta_{n})
= x^{n}+\sigma_{1}x^{n-1}+\sigma_{2}x^{n-2}+\cdots+\sigma_{n-1}x+\sigma_{n}\tag{17}
\]

where σ_k is the signed sum of the products of k distinct β_j, i.e.

\[
\sigma_{k}=(-1)^{k}\sum_{j_{1}<j_{2}<\cdots<j_{k}}\beta_{j_{1}}\beta_{j_{2}}\cdots\beta_{j_{k}}\tag{18}
\]

As a result, the elements of [B] can be expressed directly in terms of the σ_k. In fact, by manipulating the equations in equation (12a), the system of equations can be simplified to

\[
\begin{bmatrix}
-\sigma_{n} & -\sigma_{n-1} & -\sigma_{n-2} & \cdots & -\sigma_{1}\\
0 & -\sigma_{n} & -\sigma_{n-1} & \cdots & -\sigma_{2}\\
0 & 0 & -\sigma_{n} & \cdots & -\sigma_{3}\\
\vdots & \vdots & & \ddots & \vdots\\
0 & 0 & 0 & \cdots & -\sigma_{n}
\end{bmatrix}
\begin{Bmatrix}d_{0}\\d_{1}\\d_{2}\\ \vdots\\ d_{n-1}\end{Bmatrix}
=\begin{Bmatrix}
d_{n}\\
d_{n+1}+\sigma_{1}d_{n}\\
d_{n+2}+\sigma_{1}d_{n+1}+\sigma_{2}d_{n}\\
\vdots\\
d_{2n-1}+\sigma_{1}d_{2n-2}+\cdots+\sigma_{n-1}d_{n}
\end{Bmatrix}\tag{19}
\]
The system of equations in equation (19) can be rearranged into the following form:

\[
\begin{bmatrix}
d_{0} & d_{1} & d_{2} & \cdots & d_{n-1}\\
d_{1} & d_{2} & d_{3} & \cdots & d_{n}\\
d_{2} & d_{3} & d_{4} & \cdots & d_{n+1}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
d_{n-1} & d_{n} & d_{n+1} & \cdots & d_{2n-2}
\end{bmatrix}
\begin{Bmatrix}\sigma_{n}\\ \sigma_{n-1}\\ \sigma_{n-2}\\ \vdots\\ \sigma_{1}\end{Bmatrix}
=-\begin{Bmatrix}d_{n}\\ d_{n+1}\\ d_{n+2}\\ \vdots\\ d_{2n-1}\end{Bmatrix}\tag{20}
\]

In other words, the required β_k values are the roots of the polynomial with coefficients σ_k, which in turn can be obtained by solving the simultaneous equations in equation (20).
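Since the d_k are rational (for rational μ), the small system of equation (20) can be solved exactly. The sketch below builds and solves it with rational arithmetic (the Gaussian-elimination helper is mine) and cross-checks the result against the closed form stated in equation (21):

```python
from fractions import Fraction
from math import comb, factorial

def d(k, n, mu):
    """Moments of equation (11c): d_0 = (1-(-1)^n mu)/2, d_k = 2^(k-1)/k!."""
    if k == 0:
        return (1 - (-1)**n * mu) / 2
    return Fraction(2**(k - 1), factorial(k))

def sigma_from_hankel(n, mu):
    """Solve the Hankel system of equation (20) exactly.
    Returns [sigma_1, ..., sigma_n]."""
    A = [[d(i + j, n, mu) for j in range(n)] for i in range(n)]
    b = [-d(n + i, n, mu) for i in range(n)]
    for col in range(n):                 # Gaussian elimination with pivoting
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [Fraction(0)] * n                # back substitution
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x[::-1]                       # (20) orders the unknowns sigma_n..sigma_1

def sigma_closed_form(n, mu):
    """The explicit coefficients of equation (21)."""
    out = []
    for k in range(1, n + 1):
        coef = Fraction((-1)**k * 2**k * comb(n, k) * factorial(2*n - 1 - k),
                        factorial(2*n - 1))
        out.append(coef * (n + (n - k) * mu) / (n * (1 + mu)))
    return out
```

For example, with n = 2 and μ = 1 both routes give σ_1 = −1 and σ_2 = 1/3, i.e. the polynomial x² − x + 1/3 of Table I.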
Since d_0 = (1 − (−1)ⁿμ)/2 and d_k = 2^{k−1}/k!, the σ_k can be solved explicitly as

\[
\sigma_{k}=(-1)^{k}2^{k}C_{k}^{n}\,\frac{(2n-1-k)!}{(2n-1)!}\,\frac{n+(n-k)\mu}{n+n\mu}
\quad\text{where }C_{k}^{n}=\frac{n!}{(n-k)!\,k!}\tag{21}
\]

As a result, the n (complex) β values are given by the roots of the following polynomial:

\[
\sum_{k=0}^{n}(-1)^{k}2^{k}C_{k}^{n}\,\frac{(2n-1-k)!}{(2n-1)!}\,\frac{n+(n-k)\mu}{n+n\mu}\,x^{n-k}=0\tag{22}
\]

The explicit forms of the polynomials corresponding to n from 2 to 10 are listed in Table I. It can be seen that when μ = −1, the formulation is invalid.
Once the β_j values are determined, the corresponding a_j values can be obtained from equation (11a).
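For n = 2 the whole construction fits in a few lines: take the two roots of the quadratic from Table I, solve the 2 × 2 case of equation (11a) for a_1 and a_2, and check the matching conditions of equation (7). A sketch (helper names are mine):

```python
import cmath

def n2_parameters(mu):
    """Sub-step locations and weights for n = 2
    (third-order accurate; fourth order when mu = 1)."""
    # Table I, n = 2:  x^2 - (2/3)(2+mu)/(1+mu) x + (2/3)/(1+mu) = 0
    c1 = -(2.0 / 3.0) * (2 + mu) / (1 + mu)
    c0 = (2.0 / 3.0) / (1 + mu)
    disc = cmath.sqrt(c1 * c1 - 4 * c0)
    b1, b2 = (-c1 + disc) / 2, (-c1 - disc) / 2
    # equation (11a):  a1 + a2 = d0,  a1*b1 + a2*b2 = d1,
    # with d0 = (1 - mu)/2 since (-1)^n = +1 for n = 2
    d0, d1 = (1 - mu) / 2.0, 1.0
    a1 = (d1 - d0 * b2) / (b1 - b2)
    a2 = d0 - a1
    return (b1, b2), (a1, a2)

def moment(bs, weights, s):
    """Left-hand side of the order conditions (7): sum of a_j * beta_j^s."""
    return sum(a * b**s for a, b in zip(weights, bs))
```

For μ = 1 the roots are (1 ± i/√3)/2 with weights ∓i√3, and the moments match d_s = 2^{s−1}/s! up to s = 4, one order beyond the generic case, which is the order gain discussed in the next section.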
7. ORDER OF ACCURACY WHEN μ = 1

Since μ is an algorithmic parameter, it is of interest to know whether there are any values of μ that could increase the order of accuracy. By including the next equation, corresponding to k = 2n in equation (7e), into equation (11b), the n + 1 equations can be written as

\[
\begin{bmatrix}
\beta_{1}^{n} & \beta_{2}^{n} & \beta_{3}^{n} & \cdots & \beta_{n}^{n}\\
\beta_{1}^{n+1} & \beta_{2}^{n+1} & \beta_{3}^{n+1} & \cdots & \beta_{n}^{n+1}\\
\beta_{1}^{n+2} & \beta_{2}^{n+2} & \beta_{3}^{n+2} & \cdots & \beta_{n}^{n+2}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
\beta_{1}^{2n-1} & \beta_{2}^{2n-1} & \beta_{3}^{2n-1} & \cdots & \beta_{n}^{2n-1}\\
\beta_{1}^{2n} & \beta_{2}^{2n} & \beta_{3}^{2n} & \cdots & \beta_{n}^{2n}
\end{bmatrix}
\begin{Bmatrix}a_{1}\\a_{2}\\a_{3}\\ \vdots\\ a_{n}\end{Bmatrix}
=\begin{Bmatrix}d_{n}\\d_{n+1}\\d_{n+2}\\ \vdots\\ d_{2n-1}\\ d_{2n}\end{Bmatrix}\tag{23}
\]
Table I. Polynomials for the β values for various n

n   Polynomial in equation (22)

2   x² − (2/3)(2+μ)/(1+μ)·x + (2/3)·1/(1+μ)

3   x³ − (2/5)(3+2μ)/(1+μ)·x² + (1/5)(3+μ)/(1+μ)·x − (2/15)·1/(1+μ)

4   x⁴ − (2/7)(4+3μ)/(1+μ)·x³ + (1/7)(4+2μ)/(1+μ)·x² − (4/105)(4+μ)/(1+μ)·x + (2/105)·1/(1+μ)

5   x⁵ − (2/9)(5+4μ)/(1+μ)·x⁴ + (1/9)(5+3μ)/(1+μ)·x³ − (2/63)(5+2μ)/(1+μ)·x² + (1/189)(5+μ)/(1+μ)·x − (2/945)·1/(1+μ)

6   x⁶ − (2/11)(6+5μ)/(1+μ)·x⁵ + (1/11)(6+4μ)/(1+μ)·x⁴ − (8/297)(6+3μ)/(1+μ)·x³ + (1/198)(6+2μ)/(1+μ)·x² − (2/3465)(6+μ)/(1+μ)·x + (2/10 395)·1/(1+μ)

7   x⁷ − (2/13)(7+6μ)/(1+μ)·x⁶ + (1/13)(7+5μ)/(1+μ)·x⁵ − (10/429)(7+4μ)/(1+μ)·x⁴ + (2/429)(7+3μ)/(1+μ)·x³ − (4/6435)(7+2μ)/(1+μ)·x² + (1/19 305)(7+μ)/(1+μ)·x − (2/135 135)·1/(1+μ)

8   x⁸ − (2/15)(8+7μ)/(1+μ)·x⁷ + (1/15)(8+6μ)/(1+μ)·x⁶ − (4/195)(8+5μ)/(1+μ)·x⁵ + (1/234)(8+4μ)/(1+μ)·x⁴ − (4/6435)(8+3μ)/(1+μ)·x³ + (2/32 175)(8+2μ)/(1+μ)·x² − (8/2 027 025)(8+μ)/(1+μ)·x + (2/2 027 025)·1/(1+μ)

9   x⁹ − (2/17)(9+8μ)/(1+μ)·x⁸ + (1/17)(9+7μ)/(1+μ)·x⁷ − (14/765)(9+6μ)/(1+μ)·x⁶ + (1/255)(9+5μ)/(1+μ)·x⁵ − (2/3315)(9+4μ)/(1+μ)·x⁴ + (2/29 835)(9+3μ)/(1+μ)·x³ − (4/765 765)(9+2μ)/(1+μ)·x² + (1/3 828 825)(9+μ)/(1+μ)·x − (2/34 459 425)·1/(1+μ)

10  x¹⁰ − (2/19)(10+9μ)/(1+μ)·x⁹ + (1/19)(10+8μ)/(1+μ)·x⁸ − (16/969)(10+7μ)/(1+μ)·x⁷ + (7/1938)(10+6μ)/(1+μ)·x⁶ − (14/24 225)(10+5μ)/(1+μ)·x⁵ + (1/14 535)(10+4μ)/(1+μ)·x⁴ − (8/1 322 685)(10+3μ)/(1+μ)·x³ + (1/2 645 370)(10+2μ)/(1+μ)·x² − (2/130 945 815)(10+μ)/(1+μ)·x + (2/654 729 075)·1/(1+μ)
77
COMPLEX-TIME-STEP NEWMARK METHODS
Following the same procedure as in the previous sections, equation (23) can be simplified to

$$\begin{bmatrix}
d_0 & d_1 & d_2 & \cdots & d_{n-1}\\
d_1 & d_2 & d_3 & \cdots & d_{n}\\
d_2 & d_3 & d_4 & \cdots & d_{n+1}\\
\vdots & \vdots & \vdots & & \vdots\\
d_{n-1} & d_{n} & d_{n+1} & \cdots & d_{2n-2}\\
d_{n} & d_{n+1} & d_{n+2} & \cdots & d_{2n-1}
\end{bmatrix}
\begin{Bmatrix} \sigma_n\\ \sigma_{n-1}\\ \sigma_{n-2}\\ \vdots\\ \sigma_1 \end{Bmatrix}
= -\begin{Bmatrix} d_n\\ d_{n+1}\\ d_{n+2}\\ \vdots\\ d_{2n-1}\\ d_{2n} \end{Bmatrix} \qquad(24)$$
Substituting d_0 and the other d_k values and multiplying the jth equation by (n-1+j)!/2^{j-2}, one has

$$\begin{bmatrix}
\big(1-(-1)^n k\big)\,n! & 2\,\dfrac{n!}{1!} & \cdots & 2^{n-1}\dfrac{n!}{(n-1)!}\\[6pt]
\dfrac{(n+1)!}{1!} & 2\,\dfrac{(n+1)!}{2!} & \cdots & 2^{n-1}\dfrac{(n+1)!}{n!}\\[6pt]
\dfrac{(n+2)!}{2!} & 2\,\dfrac{(n+2)!}{3!} & \cdots & 2^{n-1}\dfrac{(n+2)!}{(n+1)!}\\
\vdots & \vdots & & \vdots\\
\dfrac{(2n-1)!}{(n-1)!} & 2\,\dfrac{(2n-1)!}{n!} & \cdots & 2^{n-1}\dfrac{(2n-1)!}{(2n-2)!}\\[6pt]
\dfrac{(2n)!}{n!} & 2\,\dfrac{(2n)!}{(n+1)!} & \cdots & 2^{n-1}\dfrac{(2n)!}{(2n-1)!}
\end{bmatrix}
\begin{Bmatrix} \sigma_n\\ \sigma_{n-1}\\ \vdots\\ \sigma_1 \end{Bmatrix}
= -2^n\begin{Bmatrix} 1\\ 1\\ \vdots\\ 1\\ 1 \end{Bmatrix} \qquad(25)$$
Recall that the σ_i are solved using the first n equations. If the (n+1)th equation is satisfied as well, the order of accuracy is increased from 2n-1 to 2n. This requires the n+1 equations in n unknowns to be consistent; in other words, the n+1 equations should be linearly dependent. In the following, the condition for linear dependence is established.

Before proceeding, some identities are needed. Consider the following identity:

$$x^n(1-x)^n = x^n\sum_{k=0}^{n} C^n_k\,(-1)^{n-k}x^{n-k} = \sum_{k=0}^{n} C^n_k\,(-1)^{n-k}x^{2n-k} \qquad(26)$$

Differentiating equation (26) with respect to x m times, one has

$$\frac{d^m}{dx^m}\,x^n(1-x)^n = \sum_{k=0}^{n} C^n_k\,(-1)^{n-k}\,\frac{(2n-k)!}{(2n-k-m)!}\,x^{2n-k-m} \qquad(27)$$

It can be seen that if m < n, by Leibniz's theorem,33 the left-hand side of equation (27) still contains (1-x) as a factor. Hence, setting x = 1 gives a zero value. In other words,

$$\sum_{k=0}^{n} C^n_k\,(-1)^{n-k}\,\frac{(2n-k)!}{(2n-k-m)!} = 0 \quad\text{if } m = 0, 1, 2, \ldots, n-1 \qquad(28)$$
On the other hand, if m = n and x = 1, the left-hand side of equation (27) gives (-1)^n n!. As a result,

$$\sum_{k=0}^{n} C^n_k\,(-1)^{n-k}\,\frac{(2n-k)!}{(n-k)!} = (-1)^n\,n! \qquad(29)$$

Now, pre-multiplying equation (25) by the following row vector:

$$\big[\,C^n_0\;\; -C^n_1\;\; \cdots\;\; (-1)^k C^n_k\;\; \cdots\;\; (-1)^{n-1}C^n_{n-1}\;\; (-1)^n C^n_n\,\big] \qquad(30)$$

and making use of equations (28) and (29), one obtains

$$(-1)^n\,(1-k)\,n!\;\sigma_n = 0 \qquad(31)$$

The system of equations is consistent if equation (31) is satisfied. Since σ_n ≠ 0 in general, the equations are consistent if and only if k = 1. In other words, if k = 1, the a_j and b_j values obtained by solving the first 2n equations satisfy the (2n+1)th equation as well. In conclusion, the order of accuracy is increased by one when k is set to one.
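This order increase can be checked numerically. The sketch below (my own helpers, not the paper's code) solves for the weights a_j of the n = 2 algorithm from the first n moment conditions Σ_j a_j b_j^m = d_m with d_m = 2^(m-1)/m!, and then evaluates the residual of the extra condition at m = 2n. The residual vanishes for k = 1 but not for k = 0, where it equals -1/9.

```python
# Numerical check (a sketch under the stated assumptions): for n = 2, the
# sub-step locations b_j are the roots of the Table I quadratic
# 3(1+k)x^2 - (4+2k)x + 2 = 0, and the weights a_j follow from the first
# n moment conditions.  The residual at m = 2n is zero only when k = 1.
import numpy as np
from math import factorial

def parameters_n2(k):
    b = np.roots([3*(1 + k), -(4 + 2*k), 2])
    # Solve sum_j a_j b_j^m = 2^(m-1)/m!  for m = 1, 2.
    M = np.vstack([b, b**2])
    d = np.array([2**(m - 1)/factorial(m) for m in (1, 2)])
    a = np.linalg.solve(M, d)
    return a, b

def residual(k, n=2):
    a, b = parameters_n2(k)
    d2n = 2**(2*n - 1)/factorial(2*n)
    return (a @ b**(2*n) - d2n).real

if __name__ == "__main__":
    print(residual(1.0))   # ~0: order raised to 2n = 4
    print(residual(0.0))   # ~-1/9: order stays at 2n - 1 = 3
```

The residual is proportional to (1-k), so any 0 ≤ k < 1 leaves the order at 2n-1.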
8. LEADING TRUNCATION ERROR TERMS

The leading truncation error terms for the elements in [A_s] - [A] can be shown to be proportional to the next unused equation in equation (7). It can be shown that the residual in the equation for the power 2n in equation (7e) is

$$\sum_{j=1}^{n} a_j b_j^{2n} - d_{2n} = \frac{(-1)^{n-1}\,(1-k)}{[(2n-1)!!]^2\,(1+k)} \qquad(32)$$

where (2n-1)!! = (2n-1)(2n-3) ⋯ 5 × 3 × 1. When k = 1, as shown in the previous section, the residual is zero and the order of accuracy is increased by one.
The errors of an algorithm can be measured by the algorithmic damping ratio and the relative period error. They are related to the numerical dissipation and dispersion (or the amplitude and phase errors). If the complex eigenvalues λ_1 and λ_2 of the undamped numerical amplification matrix (i.e. ζ = 0) are expressed in the form

$$\lambda_{1,2} = \exp\big(\bar\omega\,\Delta t\,(-\bar\zeta \pm i)\big) \qquad(33)$$

where i = √-1, then ω̄ and ζ̄ are defined as the algorithmic frequency and the algorithmic damping ratio, respectively. The relative period error is defined as (T̄-T)/T, where T = 2π/ω and T̄ = 2π/ω̄.

It can be shown that the relative period error and the algorithmic damping ratio can in general be written as

$$\text{Relative period error} = \frac{(n-k+nk^2)\,\omega^{2n}\Delta t^{2n}}{(2n+1)(2n-1)\,[(2n-1)!!]^2\,2^{2n-2}\,(1+k)^2} + O(\Delta t^{2n+2}) \qquad(34a)$$

$$\text{Algorithmic damping ratio} = \frac{(1-k)\,\omega^{2n-1}\Delta t^{2n-1}}{[(2n-1)!!]^2\,2^{2n-1}\,(1+k)} + O(\Delta t^{2n+1}) \qquad(34b)$$
From equation (34a), it can be seen that no real value of k eliminates the leading error term entirely. However, it can be shown that the leading error term is minimized when k = 1. In the later sections, it is shown that the algorithms are also non-dissipative when k = 1.
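Equation (34b) can be checked by extracting ζ̄ directly from the computed eigenvalue. In the sketch below (my own helpers; the modal form of the sub-step amplification, (2 + i b ρ)/(2 - i b ρ) with ρ = ωΔt, is a standard reduction of the average-acceleration Newmark step and is assumed here), the measured ratio for n = 2 is compared with the leading term of (34b), for which [(2n-1)!!]² = 9.

```python
# Sketch: measure the algorithmic damping ratio of the third-order scheme
# (n = 2) from the eigenvalue lambda_1 = a_0 + sum_j a_j (2+i b_j rho)/(2-i b_j rho)
# and compare it with the leading term of equation (34b).
import numpy as np

def eigenvalue(rho, k):
    b = np.roots([3*(1 + k), -(4 + 2*k), 2])   # Table I quadratic, n = 2
    M = np.vstack([b, b**2])
    a = np.linalg.solve(M, np.array([1.0, 1.0]))  # d_1 = d_2 = 1
    a0 = (1 + k)/2
    return a0 + np.sum(a*(2 + 1j*b*rho)/(2 - 1j*b*rho))

def measured_zeta(rho, k):
    lam = eigenvalue(rho, k)
    return -np.log(abs(lam))/np.angle(lam)   # from lambda = exp(wbar*dt*(-zeta + i))

if __name__ == "__main__":
    rho, k = 0.01, 0.0
    predicted = (1 - k)*rho**3/(9*8*(1 + k))   # equation (34b) with n = 2
    print(measured_zeta(rho, k), predicted)
```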
9. STABILITY CHARACTERISTICS

To study the stability characteristics of the proposed algorithms, the undamped numerical amplification matrix (i.e. with ζ = 0) is considered. The numerical amplification matrix in equation (6) has the form

$$[A_{2n-1}] = a_0\begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix} + \sum_{j=1}^{n}\frac{a_j}{4+b_j^2\rho^2}\begin{bmatrix}4-b_j^2\rho^2 & 4b_j\rho/\omega \\ -4b_j\omega\rho & 4-b_j^2\rho^2\end{bmatrix} = \begin{bmatrix} S & T/\omega \\ -\omega T & S \end{bmatrix} \qquad(35)$$
where ρ = ωΔt. Since the a_j and b_j appear in complex conjugate pairs, the resultant expressions S and T are real. Therefore, the eigenvalues of the numerical amplification matrix [A_{2n-1}] are complex conjugates. The square of the magnitude of the eigenvalues is equal to the determinant of [A_{2n-1}]. Obviously, |A_{2n-1}| ≥ 0. The determinant of [A_{2n-1}] can be written as

$$|A_{2n-1}| = S^2 + T^2 = \Big(a_0 + \sum_{j=1}^{n} a_j\,\frac{4-b_j^2\rho^2}{4+b_j^2\rho^2}\Big)^2 + \Big(\sum_{j=1}^{n} a_j\,\frac{4 b_j \rho}{4+b_j^2\rho^2}\Big)^2 \qquad(36)$$

After some algebraic manipulations, it can be simplified to
$$|A_{2n-1}| = 1 - \frac{(1-k^2)\,\rho^{2n}\,\sigma_n^2}{(4+b_1^2\rho^2)(4+b_2^2\rho^2)\cdots(4+b_n^2\rho^2)} \qquad(37)$$
Making use of the special form of the d_k, it can be further simplified to

$$|A_{2n-1}| = 1 - \frac{(1-k^2)\,\rho^{2n}}{\rho^{2n} + \displaystyle\sum_{m=1}^{n}\frac{1}{2}\,\frac{(2m)!\,(n+m-1)!}{m!\,m!\,(n-m)!}\,\big(nk^2+2mk+n\big)\,\rho^{2(n-m)}} \qquad(38)$$
It can be seen that as ρ (= ωΔt) approaches infinity, the magnitude of the eigenvalues |λ_∞| is, as expected, |k|, i.e.

$$\lim_{\Delta t\to\infty}|\lambda_1|^2 = \lim_{\Delta t\to\infty}|\lambda_2|^2 = |\lambda_\infty|^2 = \lim_{\rho\to\infty}|A_{2n-1}| = k^2 \qquad(39)$$

Hence, for a desired |λ_∞|, two values of k (= ±|λ_∞|) are possible. However, the magnitudes of the leading truncation error terms in equations (32), (34a) and (34b) increase as k decreases from 1. As a result, for a desired ultimate spectral radius |λ_∞|, k = |λ_∞| gives better results than k = -|λ_∞|. In conclusion, only algorithms with 0 ≤ k ≤ 1 are useful.
Note that the denominator in equation (38) is greater than zero for all ρ, n and k. As a result, |A_{2n-1}| is less than unity for -1 < k ≤ 1. In other words, the algorithms are unconditionally stable if -1 < k ≤ 1, for all n. Furthermore, as all the coefficients in the denominator polynomial are greater than zero, the spectral radius decreases monotonically from unity toward the ultimate spectral radius as ρ increases. This is a good numerical dissipation property, as the high-frequency responses are damped progressively. Figure 1 shows the spectral radius for k = 0.5 for various n. It can be seen that for larger n the spectral radius stays close to unity longer, and it decreases to 0.5 as Δt/T tends to infinity.

Figure 1. Comparison of spectral radii for various n (ζ = 0, k = 0.5)
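The behaviour shown in Figure 1 can be reproduced directly from the eigenvalue expression. The sketch below (my own helpers; n = 2 with modal sub-step amplification assumed, as in the earlier checks) sweeps ρ over several decades and confirms the monotone decay from unity toward the ultimate value k.

```python
# Sketch: spectral radius of the third-order scheme (n = 2) as a function
# of rho = w*dt.  It should fall monotonically from ~1 to the ultimate
# spectral radius k, as equation (38) predicts.
import numpy as np

def spectral_radius(rho, k):
    b = np.roots([3*(1 + k), -(4 + 2*k), 2])
    M = np.vstack([b, b**2])
    a = np.linalg.solve(M, np.array([1.0, 1.0]))
    lam = (1 + k)/2 + np.sum(a*(2 + 1j*b*rho)/(2 - 1j*b*rho))
    return abs(lam)

if __name__ == "__main__":
    k = 0.5
    rhos = np.logspace(-2, 4, 200)
    radii = [spectral_radius(r, k) for r in rhos]
    print(radii[0], radii[-1])   # ~1 at small rho, ~k at large rho
```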
10. PADÉ APPROXIMATIONS

It can be seen that the elements in the numerical amplification matrix are rational functions of ρ; in other words, the elements in the exact amplification matrix are approximated by rational functions. When k = 1, the algorithms are shown to be non-dissipative and of order of accuracy 2n, and both the numerator and the denominator of the elements in the numerical amplification matrix are 2nth degree polynomials in ρ. It is well known that the diagonal (n, n) Padé approximations give order-2n approximations with rational functions whose numerator and denominator are 2nth degree polynomials in ρ for structural dynamic problems. Since the Padé approximation is unique for a given n, the present algorithms with k = 1 re-derive the diagonal Padé approximations. Similarly, it can be shown that the present algorithms with k = 0 correspond to the first sub-diagonal (n-1, n) Padé approximations, which give order-(2n-1) asymptotically annihilating approximations.
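The Padé equivalences can be verified numerically. In the sketch below (my own helper names; the modal sub-step factor is assumed as before), the n = 2 scheme with k = 1 is compared against the (2, 2) Padé approximant of exp(z) and the k = 0 scheme against the (1, 2) approximant, evaluated at z = iρ; by uniqueness of Padé approximants the agreement should be exact to rounding.

```python
# Sketch: the k = 1, n = 2 scheme reproduces the diagonal (2,2) Pade
# approximation of exp(i*rho); the k = 0 scheme reproduces the first
# sub-diagonal (1,2) Pade approximation.
import numpy as np

def scheme_factor(rho, k):
    b = np.roots([3*(1 + k), -(4 + 2*k), 2])
    M = np.vstack([b, b**2])
    a = np.linalg.solve(M, np.array([1.0, 1.0]))
    return (1 + k)/2 + np.sum(a*(2 + 1j*b*rho)/(2 - 1j*b*rho))

def pade22(z):
    return (1 + z/2 + z**2/12)/(1 - z/2 + z**2/12)

def pade12(z):
    return (1 + z/3)/(1 - 2*z/3 + z**2/6)

if __name__ == "__main__":
    for rho in (0.3, 1.7, 5.0):
        print(abs(scheme_factor(rho, 1.0) - pade22(1j*rho)),
              abs(scheme_factor(rho, 0.0) - pade12(1j*rho)))
```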
11. A FEW SIMPLE ALGORITHMS

In the following, some simple complex-time-step algorithms with small n are considered. The third-order algorithm can be obtained by using three pairs of a_j and b_j values with b_0 = 0. It can be shown that a_0, a_1, a_2, b_1 and b_2 can be expressed in terms of k as

$$a_0 = \tfrac{1}{2}(1+k),\qquad a_{1,2} = \frac{1-k}{4} \pm i\,\frac{4+7k+k^2}{4\sqrt{2+2k-k^2}} \qquad(40a)$$

$$b_0 = 0,\qquad b_{1,2} = \frac{2+k}{3(1+k)} \mp i\,\frac{\sqrt{2+2k-k^2}}{3(1+k)} \qquad(40b)$$
Figure 2. Variations of a_j and b_j with k for the third-order algorithms

In fact, b_1 and b_2 are the roots of the quadratic equation 3(1+k)x² - (4+2k)x + 2 = 0. It can be seen that the algorithmic parameters are complex if 1-√3 < k < 1+√3. Figure 2 shows the variations of the algorithmic parameters with k. It can be seen that a_1 and a_2 (and b_1 and b_2) are complex conjugates.
When k = 0, the algorithm is asymptotically annihilating and third-order accurate. The algorithmic parameters are given by

$$a_0 = \tfrac{1}{2},\quad a_{1,2} = \tfrac{1}{4} \pm \frac{i}{\sqrt{2}},\quad b_0 = 0,\quad b_{1,2} = \tfrac{2}{3} \mp i\,\frac{\sqrt{2}}{3} \qquad(41)$$
When k = 1, the algorithm is non-dissipative and fourth-order accurate. The algorithmic parameters are given by

$$a_0 = 1,\quad a_{1,2} = \pm i\sqrt{3},\quad b_0 = 0,\quad b_{1,2} = \tfrac{1}{2} \mp i\,\frac{\sqrt{3}}{6} \qquad(42)$$
Figure 3(a) shows the spectral radii for various algorithms; the notation used is defined in Table II. It can be seen that both the Houbolt and Park methods are asymptotically annihilating. However, the dissipation in the low-frequency range is quite severe for the Houbolt method. The (1, 2) Padé approximation is asymptotically annihilating with much smaller low-frequency dissipation; this algorithm is equivalent to the present third-order complex-time-step algorithm with k = 0. For other values of k, the spectral radii decrease smoothly to the respective ultimate spectral radii. The present complex-time-step algorithm is non-dissipative when k = 1.

Figure 3(b) shows the algorithmic damping ratios for various algorithms. The present third-order complex-time-step algorithms are comparable to the second-order algorithms. Figure 3(c) shows the relative period errors for various algorithms. It can be seen that the relative period errors for the present third-order complex-time-step algorithms are smaller than those for the second-order algorithms and for Tarnow and Simo's fourth-order algorithm (TSNM).
Figure 3(a). Comparison of spectral radii with third-order complex-time-step algorithms (ζ = 0)

Figure 3(b). Comparison of algorithmic damping ratios with third-order complex-time-step algorithms (ζ = 0)

In Figures 3(a)-3(c), the spectral radii, algorithmic damping ratios and relative period errors for the third-order complex-time-step algorithm with k = -0.5 are also shown. This algorithm has the same ultimate spectral radius as the one with k = 0.5, but it can be seen that it is not as accurate. The same is true for other values of |k| and for the other higher-order complex-time-step algorithms. As a result, only 0 ≤ k ≤ 1 gives useful algorithms.

Other higher-order accurate complex-time-step algorithms can be constructed similarly. For example, the fifth- and seventh-order accurate algorithms can be constructed by solving for the b_j from 15(1+k)x³ - 6(3+2k)x² + 3(3+k)x - 2 = 0 and 105(1+k)x⁴ - 30(4+3k)x³ + 30(2+k)x² - 4(4+k)x + 2 = 0, respectively. The fifth-order accurate algorithm has one real and one pair of complex time steps, while the seventh-order accurate algorithm has two pairs of complex time steps.
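The root structure quoted above can be confirmed by solving the two polynomials directly. The sketch below (my own helper names) checks, for a few representative values of k in [0, 1], that the cubic yields one real root and one complex-conjugate pair while the quartic yields two complex-conjugate pairs.

```python
# Sketch: sub-step locations of the fifth-order (n = 3) and seventh-order
# (n = 4) algorithms, from the polynomials quoted in the text.
import numpy as np

def substeps_n3(k):
    return np.roots([15*(1 + k), -6*(3 + 2*k), 3*(3 + k), -2])

def substeps_n4(k):
    return np.roots([105*(1 + k), -30*(4 + 3*k), 30*(2 + k), -4*(4 + k), 2])

if __name__ == "__main__":
    for k in (0.0, 0.5, 1.0):
        r3, r4 = substeps_n3(k), substeps_n4(k)
        print(k, sum(abs(z.imag) < 1e-9 for z in r3),
              sum(abs(z.imag) < 1e-9 for z in r4))
```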
Figure 3(c). Comparison of relative period errors with third-order complex-time-step algorithms (ζ = 0)
Table II. Relative computational effort and notation for various algorithms

Method | Notation | Effort+ | Effort* | Order | Remark
Houbolt | Houbolt | - | - | 2 | US, AA
Park | Park | - | - | 2 | US, AA
HHT (α = -0.3) | HHT | - | - | 2 | US
Newmark (β = 1/4, γ = 1/2) | NM | 1 | 1 | 2 | US, ND
Central difference (β = 0, γ = 1/2) | CD | - | - | 2 | CS, ND
Fox-Goodwin (β = 1/12, γ = 1/2) | FG | 1 | 1 | 2 | CS, ND
Extrapolated Newmark | ENM4 | 2 | 3 | 4 | UU
Extrapolated Newmark | ENM6 | 3 | 7 | 6 | UU
Extrapolated Newmark | ENM8 | 4 | 15 | 8 | UU
Tarnow and Simo's Newmark | TSNM | 2 | 3 | 4 | US, ND
(1, 2) Padé approximation | P12 | 8 | 4 | 3 | US, AA
(2, 2) Padé approximation | P22 | 8 | 4 | 4 | US, ND
(2, 3) Padé approximation | P23 | 216 | 36 | 5 | US, AA
(3, 3) Padé approximation | P33 | 216 | 36 | 6 | US, ND
(3, 4) Padé approximation | P34 | 512 | 64 | 7 | US, AA
(4, 4) Padé approximation | P44 | 512 | 64 | 8 | US, ND
Present: third order | CTS3 | 4 | 4 | 3 | US, AD
Present: fourth order | CTS4 | 4 | 4 | 4 | US, ND
Present: fifth order | CTS5 | 5 | 5 | 5 | US, AD
Present: sixth order | CTS6 | 5 | 5 | 6 | US, ND
Present: seventh order | CTS7 | 8 | 8 | 7 | US, AD
Present: eighth order | CTS8 | 8 | 8 | 8 | US, ND

Note: US: unconditionally stable; CS: conditionally stable; UU: unconditionally unstable; ND: non-dissipative; AD: arbitrary dissipation; AA: asymptotically annihilating; +: effort for coefficient matrix decomposition, compared with the Newmark method (see text); *: effort for forward and backward substitutions, compared with the Newmark method (see text).
Figure 4(a) shows the spectral radii for various higher-order complex-time-step algorithms. The same trend as for the third-order algorithms can be observed: the spectral radii decrease smoothly to the respective ultimate spectral radii, and the higher-order algorithms exhibit smaller low-frequency dissipation, as expected. Figures 4(b) and 4(c) show the algorithmic damping ratios and the relative period errors for various higher-order complex-time-step algorithms, respectively. It can be seen that the algorithms separate into three groups of increasing accuracy.

Figure 4(a). Comparison of spectral radii with various higher-order complex-time-step algorithms (ζ = 0)

Figure 4(b). Comparison of algorithmic damping ratios with various higher-order complex-time-step algorithms (ζ = 0)
12. FORCING EXCITATION RESPONSES

So far, only the leading truncation error terms in the numerical amplification matrix have been shown to be reduced by choosing the algorithmic parameters properly. It is of interest to check whether the truncation errors corresponding to the excitation responses are reduced as well by linearly combining the responses at the specific sub-step locations.

Figure 4(c). Comparison of relative period errors with various higher-order complex-time-step algorithms (ζ = 0)

The excitation force f(t) in general can be approximated by a polynomial function or can be expanded into a Taylor series about t = 0 over a small time interval Δt as

$$f(t) = f(0) + f'(0)\,t + \tfrac{1}{2!}f''(0)\,t^2 + \tfrac{1}{3!}f'''(0)\,t^3 + \cdots + \tfrac{1}{n!}f^{(n)}(0)\,t^n + \cdots \quad\text{for } 0 \le t \le \Delta t \qquad(43)$$
If f(t) is a special function such as a step function or a delta function, special treatment is required. For example, if f(t) is a delta function at t = 0 representing an impulsive force, the velocity after the impact at t = 0⁺ should be calculated from the momentum equations; the subsequent response is an unforced vibration with f(t) = 0. For an excitation described by a piecewise continuous function, the time step Δt should be chosen so that the excitation is continuous within each time step.

If a truncated Taylor series is used to represent the excitation, the terms retained should conform to the required accuracy. Penry and Wood34 noted that the accuracy of the numerical solution depends on an adequate approximation of the forcing function as well. In the following, the connection between the response and the excitation is studied.
Since the differential equation is linear, without loss of generality only one term needs to be considered. Assume f(t) = a tⁿ, where a is the amplitude coefficient and n is an integer. The differential equation under consideration becomes

$$\ddot u(t) + 2\zeta\omega\,\dot u(t) + \omega^2 u(t) = a\,t^n \qquad(44)$$

The exact solution of equation (44) can be divided into two parts: the homogeneous solution and the particular solution. The homogeneous solution corresponds to the free vibration with no excitation (i.e. a = 0). The solution is related to the analytical amplification matrix as

$$\begin{Bmatrix} u(\Delta t)\\ v(\Delta t)\end{Bmatrix} = [A(\Delta t)]\begin{Bmatrix} u_0\\ v_0\end{Bmatrix} \qquad(45)$$
where v(t) = u̇(t), and u_0 and v_0 are the given initial displacement and velocity at t = 0, respectively. The expressions for the truncation errors and the procedure to reduce them have been studied in the previous sections.

The particular solution corresponding to the excitation response of equation (44) with initial conditions u(0) = 0 and u̇(0) = 0 can be written as

$$\bar u(t) = a\Big(\frac{c_0\,n!\,t^{n+2}}{(n+2)!} + \frac{c_1\,n!\,t^{n+3}}{(n+3)!} + \frac{c_2\,n!\,t^{n+4}}{(n+4)!} + \frac{c_3\,n!\,t^{n+5}}{(n+5)!} + \cdots\Big) = a\sum_{k=0}^{\infty}\frac{c_k\,n!\,t^{n+2+k}}{(n+2+k)!} \qquad(46)$$

where c_0 = 1, c_1 = -2ζω and c_k = -2ζω c_{k-1} - ω² c_{k-2} for k ≥ 2. It can be seen that the leading term of the displacement response for the excitation a tⁿ is proportional to t^{n+2}. However, the velocity response is determined from the time derivative of equation (46) as well, and its leading term is proportional to t^{n+1}. To maintain the accuracy of the numerical solutions, the particular solutions should have the same truncation errors as the homogeneous solutions, if not better. As a result, if an algorithm is mth-order accurate, i.e. its leading error term is proportional to t^{m+1}, the excitation terms proportional to t^m, t^{m+1}, etc., need not be considered. Only the terms from t⁰ to t^{m-1} are required.
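The recurrence for the c_k is easy to exercise. In the sketch below (my own helper; the truncation length is an arbitrary choice), the series (46) is summed and, for ζ = 0 and n = 0, checked against the closed-form particular solution (1 - cos ωt)/ω², which the series must reproduce since c_{2j} = (-ω²)^j and the odd coefficients vanish.

```python
# Sketch: evaluate the particular solution series of equation (46) using the
# recurrence c_0 = 1, c_1 = -2*zeta*w, c_k = -2*zeta*w*c_{k-1} - w^2*c_{k-2}.
from math import factorial, cos

def particular(t, amp, n, w, zeta, terms=40):
    c = [1.0, -2*zeta*w]
    for k in range(2, terms):
        c.append(-2*zeta*w*c[k - 1] - w**2*c[k - 2])
    return amp*sum(c[k]*factorial(n)*t**(n + 2 + k)/factorial(n + 2 + k)
                   for k in range(terms))

if __name__ == "__main__":
    w, t = 2.0, 0.8
    print(particular(t, 1.0, 0, w, 0.0), (1 - cos(w*t))/w**2)
```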
It can be shown that the responses evaluated by linearly combining the responses at the sub-step locations b_jΔt, with β = 1/4, γ = 1/2, u_0 = 0, v_0 = 0, n = 0 and f(t) = a, are given by

$$u(\Delta t) = \tfrac{1}{2}ac_0\sum_j a_j b_j^2\,\Delta t^2 + \tfrac{1}{4}ac_1\sum_j a_j b_j^3\,\Delta t^3 + \cdots = a\sum_{k=0}^{\infty}\frac{c_k}{2^{k+1}}\sum_j a_j b_j^{k+2}\,\Delta t^{k+2} \qquad(47a)$$

$$v(\Delta t) = ac_0\sum_j a_j b_j\,\Delta t + \tfrac{1}{2}ac_1\sum_j a_j b_j^2\,\Delta t^2 + \cdots = a\sum_{k=0}^{\infty}\frac{c_k}{2^{k}}\sum_j a_j b_j^{k+1}\,\Delta t^{k+1} \qquad(47b)$$
It can be seen that if the a_j and b_j satisfy equation (7), the accuracy of the excitation responses is increased to the same level as that of the numerical amplification matrix for n = 0 by the present complex-time-step procedure. In other words, the response accuracy is also enhanced by the proposed procedure for f(t) = a.

It can be shown that the responses evaluated by linearly combining the responses at the sub-step locations b_jΔt, with β = 1/4, γ = 1/2, u_0 = 0, v_0 = 0, n ≥ 1 and f(t) = ā tⁿ, are given by

$$u(\Delta t) = \tfrac{1}{4}\bar a c_0\sum_j a_j b_j^{n+2}\,\Delta t^{n+2} + \tfrac{1}{8}\bar a c_1\sum_j a_j b_j^{n+3}\,\Delta t^{n+3} + \cdots = \bar a\sum_{k=0}^{\infty}\frac{c_k}{2^{k+2}}\sum_j a_j b_j^{n+k+2}\,\Delta t^{n+k+2} \qquad(48a)$$

$$v(\Delta t) = \tfrac{1}{2}\bar a c_0\sum_j a_j b_j^{n+1}\,\Delta t^{n+1} + \tfrac{1}{4}\bar a c_1\sum_j a_j b_j^{n+2}\,\Delta t^{n+2} + \cdots = \bar a\sum_{k=0}^{\infty}\frac{c_k}{2^{k+1}}\sum_j a_j b_j^{n+k+1}\,\Delta t^{n+k+1} \qquad(48b)$$
Hence, for n ≥ 1, it is found that the approximate solutions match the exact solutions if

$$a\,\frac{n!}{(n+k+2)!} = \bar a\,\frac{1}{2^{k+2}}\sum_j a_j b_j^{n+k+2} \qquad(49)$$

Making use of equation (7), that is Σ_j a_j b_jⁿ = 2^{n-1}/n!, this requires

$$\bar a = \frac{n!}{2^{n-1}}\,a = \frac{a}{d_n} \qquad(50)$$
For n = 1 and 2, ā = a. However, for n ≥ 3, ā differs from a by a factor of n!/2^{n-1}. In other words, the given excitation in the form

$$f(t) = f_0 + f_1 t + f_2 t^2 + f_3 t^3 + f_4 t^4 + f_5 t^5 + f_6 t^6 + f_7 t^7 + \cdots = f_0 + \sum_{k=1}^{\infty} f_k t^k \qquad(51)$$

should be modified to the form

$$f(t) = f_0 + f_1 t + f_2 t^2 + \tfrac{3}{2}f_3 t^3 + 3 f_4 t^4 + \tfrac{15}{2}f_5 t^5 + \tfrac{45}{2}f_6 t^6 + \tfrac{315}{4}f_7 t^7 + \cdots = f_0 + \sum_{k=1}^{\infty}\frac{f_k}{d_k}\,t^k \qquad(52)$$

in order to maintain higher-order accuracy in the response calculation using the present higher-order algorithms. The following numerical example confirms the importance of using the modified excitation in response calculations.
12.1. Numerical example

Consider the single-degree-of-freedom system in equation (3) with ζ = 0 and ω = 1. The system is subjected to a periodic excitation as shown in Figure 5. The excitation is piecewise continuous with period T = 1; within one period, f(t) is described by exp(2t) - 1. The periodic excitation shown in Figure 5 is not smooth at all. The effect of using the modified excitations rather than the actual excitation is studied.

Figure 5. Periodic forcing excitation f(t)
The Taylor series expansion of f(t) about t = 0 for 0 ≤ t ≤ 1 is

$$f(t) = 2t + 2t^2 + \tfrac{4}{3}t^3 + \tfrac{2}{3}t^4 + \tfrac{4}{15}t^5 + \tfrac{4}{45}t^6 + \tfrac{8}{315}t^7 + \cdots + \tfrac{2^n}{n!}t^n + \cdots \qquad(53)$$

The modified excitations, also shown in Figure 5, for the various higher-order complex-time-step algorithms at t = 0 are

$$f(t) = 2t + 2t^2 + 2t^3 \quad\text{for the fourth-order algorithms} \qquad(54a)$$

$$f(t) = 2t + 2t^2 + 2t^3 + 2t^4 + 2t^5 \quad\text{for the sixth-order algorithms} \qquad(54b)$$

$$f(t) = 2t + 2t^2 + 2t^3 + 2t^4 + 2t^5 + 2t^6 + 2t^7 \quad\text{for the eighth-order algorithms} \qquad(54c)$$
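The modification rule of equation (52) multiplies the kth Taylor coefficient by 1/d_k = k!/2^{k-1}. The sketch below (my own helper name) applies it to the coefficients 2^k/k! of exp(2t) - 1 and recovers the constant coefficients of equations (54a)-(54c).

```python
# Sketch: excitation modification of equation (52).  For exp(2t) - 1 every
# modified Taylor coefficient becomes 2, since (2^k/k!) * (k!/2^(k-1)) = 2.
from math import factorial

def modified_coefficients(coeffs):
    """coeffs[k-1] is the t^k Taylor coefficient, k = 1, 2, ..."""
    return [f*factorial(k)/2**(k - 1) for k, f in enumerate(coeffs, start=1)]

if __name__ == "__main__":
    taylor = [2**k/factorial(k) for k in range(1, 8)]   # exp(2t) - 1
    print(modified_coefficients(taylor))   # each entry ~2
```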
Table III shows the numerical results calculated using the present fourth-order complex-time-step method. It can be seen that the results are not very accurate with Δt = 1. The accuracy can be increased by using a smaller time step, Δt = 0.5. However, the accuracy is also improved by using the modified excitation. The improvement is not very significant, as the higher-order terms in the excitation are not yet incorporated in the calculation; the fourth-order algorithms cannot capture the higher-order characteristics of the excitation when Δt = 1.

In the same table, it can also be seen that the numerical results using the present sixth-order complex-time-step method are not very accurate with Δt = 1. The accuracy can be increased by using a smaller time step, Δt = 0.5. However, the inaccuracy is not due to the time-step size being too large but mainly due to the inappropriate approximation of the excitation. It can be seen that the accuracy is improved significantly by using the modified excitations with Δt kept at 1. The same conclusion can be drawn for the present eighth-order complex-time-step method. By using the modified excitations, the present higher-order algorithms can predict accurate responses with a large time step.
Table III. Comparison of numerical results for various algorithms using the modified excitation

Method | Δt | u(t=1) | % diff. | u(t=2) | % diff. | u(t=10) | % diff.
Exact solution | - | 0.57346 | - | 2.62206 | - | 4.28188 | -
Fourth-order CTS method | 1.0 | 0.52212 | -8.95% | 2.32979 | -11.15% | 3.77968 | -11.73%
Fourth-order CTS method | 0.5 | 0.56427 | -1.60% | 2.57666 | -1.73% | 4.20651 | -1.76%
Fourth-order CTS method with modified excitation | 1.0 | 0.54352 | -5.22% | 2.42292 | -7.59% | 3.92911 | -8.24%
Sixth-order CTS method | 1.0 | 0.52988 | -7.60% | 2.34621 | -10.52% | 3.77866 | -11.75%
Sixth-order CTS method | 0.5 | 0.56501 | -1.47% | 2.57810 | -1.68% | 4.20643 | -1.76%
Sixth-order CTS method with modified excitation | 1.0 | 0.57150 | -0.34% | 2.60489 | -0.66% | 4.24831 | -0.78%
Eighth-order CTS method | 1.0 | 0.52985 | -7.60% | 2.34616 | -10.52% | 3.77851 | -11.76%
Eighth-order CTS method | 0.5 | 0.56501 | -1.47% | 2.57810 | -1.68% | 4.20642 | -1.76%
Eighth-order CTS method with modified excitation | 1.0 | 0.57338 | -0.01% | 2.62120 | -0.03% | 4.28016 | -0.04%
Second-order Newmark method | 0.25 | 0.61947 | 8.02% | 2.72308 | 3.85% | 4.43810 | 3.65%
Second-order Newmark method | 0.10 | 0.58084 | 1.29% | 2.63831 | 0.62% | 4.30706 | 0.59%
The results obtained from the second-order accurate Newmark method with Δt = 0.25 and Δt = 0.1 are shown as well. It can be seen that the numerical results with Δt = 0.25 are not very accurate, while those with Δt = 0.1 are acceptable.

In conclusion, by using the modified excitations, the present complex-time-step algorithms can give accurate numerical results with a larger time step. The importance of using the modified excitations and the advantage of using higher-order algorithms are clearly demonstrated.
13. COMPUTATIONAL PROCEDURE

In general, an sth-order accurate numerical amplification matrix [A_s(Δt)] is given by equation (6). In actual computation, the matrices are not formed and combined explicitly. Instead, the results at the sub-step locations are evaluated individually using equation (2) and then combined with the corresponding weighting factors.

The computational procedure can be briefly summarized as follows:

(1) With the given initial conditions {u_0} and {v_0}, evaluate {U_j} and {V_j} at t_j = b_jΔt using equation (2). Note that when j = 0, b_0 = 0, {U_0} = {u_0} and {V_0} = {v_0}, and no evaluation is required. Besides, if b_1 and b_2 (= b̄_1) are complex conjugates, the {U_2} and {V_2} corresponding to b_2 can be obtained directly from the {U_1} and {V_1} corresponding to b_1 without re-computation, since {U_2} = {Ū_1} and {V_2} = {V̄_1}.

(2) {u_1} and {v_1} at the end of the time step are then obtained by combining the {U_j} and {V_j} from step (1) with the corresponding weighting factors a_j, i.e.

$$\{u_1\} = \sum_j a_j\{U_j\} \quad\text{and}\quad \{v_1\} = \sum_j a_j\{V_j\} \qquad(55)$$

(3) The {u_1} and {v_1} obtained in the current time step become the initial conditions {u_0} and {v_0} for the next time step.

(4) Steps (1)-(3) are repeated until the whole time span of interest is covered.
It can be seen that the implementation of the underlying Newmark method in step (1) remains unchanged, except that complex time steps are used. After the numerical results at the sub-step locations are evaluated, they are combined linearly with the corresponding weighting factors.

In step (1), the evaluations at the sub-step locations are independent of each other and can therefore be computed in parallel. The algorithms are thus very suitable for parallel computers. Ordinary time-step integration algorithms are sequential in nature and not very suitable for parallel computers, whereas the present algorithms are parallel in nature and can exploit parallel computers easily.
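The procedure above can be sketched for a single undamped degree of freedom. In the code below (my own helper names; the fourth-order parameters of equation (42) with n = 2, k = 1 are used, and forcing is omitted), each sub-step is one average-acceleration Newmark step taken with a complex step b_jΔt, and the end-of-step state is the weighted combination of equation (55). Since the pairs are complex conjugates, in practice only one member of each pair would be evaluated, as noted in step (1).

```python
# Sketch of steps (1)-(4) for u'' + w^2 u = 0 with the fourth-order
# parameters of equation (42): a_0 = 1, a_{1,2} = +-i*sqrt(3),
# b_{1,2} = 1/2 -+ i*sqrt(3)/6.
import numpy as np

A0 = 1.0
A = [1j*np.sqrt(3), -1j*np.sqrt(3)]                  # a_1, a_2
B = [0.5 - 1j*np.sqrt(3)/6, 0.5 + 1j*np.sqrt(3)/6]   # b_1, b_2

def newmark_substep(u, v, w, h):
    """One average-acceleration (beta = 1/4, gamma = 1/2) step of
    (possibly complex) size h on u'' + w^2 u = 0."""
    den = 1 + (w*h)**2/4
    u1 = ((1 - (w*h)**2/4)*u + h*v)/den
    v1 = v - w**2*h*(u + u1)/2
    return u1, v1

def step(u, v, w, dt):
    u1, v1 = A0*u, A0*v
    for aj, bj in zip(A, B):
        uj, vj = newmark_substep(u, v, w, bj*dt)
        u1 += aj*uj
        v1 += aj*vj
    return u1.real, v1.real   # conjugate pairs -> the combination is real

if __name__ == "__main__":
    w, N = 1.0, 10
    dt = 2*np.pi/N
    u, v = 1.0, 0.0
    for _ in range(N):
        u, v = step(u, v, w, dt)
    print(u, v)   # after one period: u ~ 1, v ~ 0 (non-dissipative)
```

With only ten steps per period the scheme returns essentially to the initial state, illustrating the fourth-order accuracy and the absence of amplitude decay at k = 1.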
14. COMPUTATIONAL EFFORT

The third-order accurate L-stable (1, 2) Padé approximation and the fourth-order accurate A-stable (2, 2) Padé approximation can be implemented by using the collocation method, the time discontinuous Galerkin method, the bi-discontinuous Galerkin method or the weighted residual method. For example, the resultant equation for the third-order accurate L-stable (1, 2) Padé approximation can be written as9,28

$$\begin{bmatrix} M+\tfrac{2}{3}C\Delta t+\tfrac{5}{18}K\Delta t^2 & -M+\tfrac{1}{3}C\Delta t+\tfrac{7}{18}K\Delta t^2 \\[4pt] -\tfrac{1}{6}C\Delta t-\tfrac{1}{9}K\Delta t^2 & M+\tfrac{1}{6}C\Delta t-\tfrac{1}{18}K\Delta t^2 \end{bmatrix}\begin{Bmatrix} u_1\\ v_1\Delta t\end{Bmatrix} = \begin{bmatrix} M+\tfrac{2}{3}C\Delta t-\tfrac{2}{9}K\Delta t^2 & -M+\tfrac{1}{3}C\Delta t-\tfrac{1}{9}K\Delta t^2 \\[4pt] M & 0 \end{bmatrix}\begin{Bmatrix} u_0\\ v_0\Delta t\end{Bmatrix} \qquad(56)$$
Similarly, the resultant equation for the fourth-order accurate A-stable (2, 2) Padé approximation can be written as27,28

$$\begin{bmatrix} M+C\Delta t+\tfrac{5}{12}K\Delta t^2 & \tfrac{1}{2}M-\tfrac{1}{12}C\Delta t-\tfrac{1}{12}K\Delta t^2 \\[4pt] C\Delta t+\tfrac{1}{2}K\Delta t^2 & M-\tfrac{1}{12}K\Delta t^2 \end{bmatrix}\begin{Bmatrix} u_1\\ v_1\Delta t\end{Bmatrix} = \begin{bmatrix} M+C\Delta t-\tfrac{7}{12}K\Delta t^2 & \tfrac{3}{2}M-\tfrac{1}{12}C\Delta t-\tfrac{1}{12}K\Delta t^2 \\[4pt] C\Delta t-\tfrac{1}{2}K\Delta t^2 & M-\tfrac{1}{12}K\Delta t^2 \end{bmatrix}\begin{Bmatrix} u_0\\ v_0\Delta t\end{Bmatrix} \qquad(57)$$
The number of equations to be solved simultaneously is twice the number of degrees of freedom, and the bandwidth is also doubled. The computational effort for the coefficient matrix decomposition is proportional to the number of unknowns and the square of the bandwidth, while the computational effort for the forward and backward substitutions is proportional to the number of unknowns and the bandwidth only. As a result, compared with the ordinary Newmark method, the computational effort to advance one time step is roughly increased by eight times for the coefficient matrix decomposition and by four times for the forward and backward substitutions.
The extrapolated Newmark method evaluates the results at the end of the time step several times with different sub-step sizes. For the fourth-, sixth- and eighth-order algorithms, there are 2 (Δt, Δt/2), 3 (Δt, Δt/2, Δt/4) and 4 (Δt, Δt/2, Δt/4, Δt/8) different sub-step sizes, respectively. Hence the computational effort for coefficient matrix decomposition is only 2, 3 and 4 times that of the ordinary Newmark method to advance one time step. However, there are more forward and backward substitutions: for the fourth-, sixth- and eighth-order algorithms, there are 3, 7 and 15 forward and backward substitutions, respectively. The numerical results given by the extrapolated Newmark methods are very accurate. Unfortunately, it can be shown that the extrapolated Newmark methods are unconditionally unstable.
When Tarnow and Simo's procedure is applied to the second-order accurate Newmark method, the resulting fourth-order algorithm is unconditionally stable and non-dissipative. It needs three evaluations to advance one time step, so the computational effort is three times that of the ordinary Newmark method for the forward and backward substitutions. As there are only two different time steps (αΔt and (1-2α)Δt, where α = (2+2^{1/3}+2^{-1/3})/3 ≈ 1.3512), only two coefficient matrices need to be decomposed; the effort for coefficient matrix decomposition is only twice that of the ordinary Newmark method. In Tarnow and Simo's algorithm, the sub-marching results have to be computed sequentially. In the present algorithms, the results at all sub-step locations can be computed simultaneously in parallel, so the computation time could be reduced by using parallel computers.
The computational effort for the present third- and fourth-order complex-time-step algorithms is higher than that of the ordinary Newmark method, since complex numbers are used. In general, a complex multiplication costs four times as much as a real multiplication. However, as b_1 and b_2 are complex conjugates, the numerical results corresponding to b_2 can be obtained from those for b_1 directly. As a result, the present third-order complex-time-step algorithms require four times the effort of an ordinary Newmark method to advance one time step. In return, the present algorithms have the advantages of being third-order accurate and unconditionally stable with controllable numerical dissipation, while the Newmark method is only second-order accurate, unconditionally stable and non-dissipative if 2β ≥ γ = 1/2, or first-order accurate, unconditionally stable and dissipative if 2β ≥ γ > 1/2.
For even higher-order algorithms, the present algorithms and the extrapolated Newmark method require more evaluations. Tarnow and Simo's Newmark method is limited to fourth-order accuracy. The Padé approximations derived from the time discontinuous Galerkin method, bi-discontinuous Galerkin method, Petrov-Galerkin method or weighted residual method require even more simultaneous equations to be solved. For example, for an eighth-order algorithm derived from the Petrov-Galerkin method,29 the number of unknowns and the bandwidth would be 8 times those of the original problem; the computational efforts for coefficient matrix decomposition and for forward and backward substitutions are therefore 8³ = 512 times and 8² = 64 times those of the ordinary Newmark method. Similarly, for the sixth-order algorithm derived from the Petrov-Galerkin method,29 the number of unknowns and the bandwidth would be 6 times those of the original problem, and the computational efforts for coefficient matrix decomposition and for forward and backward substitutions are therefore 6³ = 216 times and 6² = 36 times those of the ordinary Newmark method.

The direct application of the Padé approximation would require evaluation of powers of the coefficient matrix. The computational effort is very intensive as well.
For the present complex-time-step algorithms, the fifth-order algorithms require one complex
and one real time-step evaluation. Hence, the computational effort is only 5 times that of the
ordinary Newmark method. For the seventh-order algorithms, there are two complex time-step
evaluations. Hence, the computational effort is only 8 times that of the ordinary Newmark
method. It can be seen that the computational effort does not increase as fast as for the weighted
residual method or the Petrov-Galerkin method. Besides, the present algorithms can be
implemented on parallel computers readily.
Table II summarizes the relative computational effort of these higher-order algorithms. The
second-order accurate Newmark method is used as the reference. From this table, it can be
concluded that the present complex-time-step algorithms are efficient, with smaller computational
effort for a given order of accuracy, and are suitable for parallel implementation. In addition, the
numerical dissipation in the high-frequency range is controllable.
15. CONCLUSIONS
In this paper, the general formulation of the complex-time-step Newmark method is presented. It
is shown that unconditionally stable higher-order accurate time-step integration algorithms with
controllable numerical dissipation can be constructed systematically for linear structural dynamic problems. The ultimate spectral radius is an algorithmic parameter. An algorithm of
(2n-1)th order accuracy can be constructed by choosing n sub-step locations and their
weighting factors properly. The sub-step locations, which may be complex, are shown to be the
roots of an nth degree polynomial. The polynomial is given explicitly in terms of k and n. Once the
sub-step locations are determined, the corresponding weighting factors can be obtained by
solving n simultaneous equations.
It is shown that when the algorithms are set to be non-dissipative by choosing k = 1, the order of
accuracy is increased from 2n-1 to 2n. The leading truncation error terms for the relative period
error and the algorithmic damping ratio are given explicitly.
The stability characteristics of the proposed algorithms are studied. It is shown that the
eigenvalues of the numerical amplification matrix are complex conjugates. The spectral radius is
less than or equal to unity if -1 < k ≤ 1. The algorithms are therefore unconditionally C-stable. In
addition, it is shown that the spectral radius decreases monotonically from unity to the ultimate
spectral radius k as the time-step size increases. As a result, the present algorithms show good
spectral properties.
It is found that when k = 0 and k = 1, the present complex-time-step Newmark methods are
equivalent to the first sub-diagonal (n-1, n) and diagonal (n, n) Padé approximations, respectively.
The present algorithms are of course more general, with controllable numerical dissipation.
The effect of the complex-time-step procedure on the accuracy of the forced excitation response is
investigated. It is shown that the accuracy of the excitation response is improved by the present
procedure. To achieve higher-order accuracy, the Taylor series expansion of the excitation may
need some modifications.
The computational procedure is discussed. It is found that it is more efficient to carry out the
calculations using complex numbers. For complex conjugate parameters, the results are
complex conjugates as well, so that only one of them needs to be evaluated. As the results at
different sub-step locations are independent of each other, the calculations can be evaluated
simultaneously in parallel.
The spectral radii, the algorithmic damping ratios and the relative period errors of the present
higher-order complex-time-step algorithms and of other higher-order algorithms are compared.
The present algorithms have the advantages of low computational effort and controllable
dissipation, and they are suitable for parallel implementation.
REFERENCES
1. W. L. Wood, Practical Time-Stepping Schemes, Clarendon Press, Oxford, 1990.
2. T. J. R. Hughes, The Finite Element Method: Linear Static and Dynamic Finite Element Analysis, Prentice-Hall, Englewood Cliffs, N.J., 1987.
3. O. C. Zienkiewicz and R. L. Taylor, The Finite Element Method, 4th edn, McGraw-Hill, New York, 1991.
4. E. Hairer, S. P. Nørsett and G. Wanner, Solving Ordinary Differential Equations, Vol. I, Springer, Berlin, 1987.
5. E. Hairer and G. Wanner, Solving Ordinary Differential Equations, Vol. II, Springer, Berlin, 1991.
6. C. D. Bailey, 'Application of Hamilton's law of varying action', AIAA J., 13, 1154-1157 (1975).
7. C. D. Bailey, 'Hamilton, Ritz and elastodynamics', J. Appl. Mech., 43, 684-688 (1976).
8. G. M. Hulbert, 'Time finite element methods for structural dynamics', Int. J. Numer. Meth. Engng., 33, 307-331 (1992).
9. G. M. Hulbert, 'A unified set of single-step asymptotic annihilation algorithms for structural dynamics', Comput. Methods Appl. Mech. Engng., 113, 1-9 (1994).
10. M. Borri and C. Bottasso, 'A general framework for interpreting time finite element formulations', Comput. Mech., 13, 133-142 (1993).
11. E. L. Wilson, 'A computer program for the dynamic stress analysis of underground structures', SESM Report No. 68-1, Division of Structural Engineering and Structural Mechanics, University of California, Berkeley, 1968.
12. H. M. Hilber and T. J. R. Hughes, 'Collocation, dissipation and overshoot for time integration schemes in structural dynamics', Earthquake Engng. Struct. Dyn., 6, 99-118 (1978).
13. J. C. Houbolt, 'A recurrence matrix solution for the dynamic response of elastic aircraft', J. Aeronaut. Sci., 17, 540-550 (1950).
14. K. C. Park, 'Evaluating time integration methods for nonlinear dynamic analysis', in T. Belytschko et al. (eds.), Finite Element Analysis of Transient Nonlinear Behaviour, AMD 14, ASME, New York, 1975, pp. 35-38.
15. H. M. Hilber, T. J. R. Hughes and R. L. Taylor, 'Improved numerical dissipation for time integration algorithms in structural dynamics', Earthquake Engng. Struct. Dyn., 5, 283-292 (1977).
16. T. J. R. Hughes, 'Transient algorithms and stability', in T. Belytschko and T. J. R. Hughes (eds.), Computational Methods for Transient Analysis, Elsevier, New York, 1983.
17. W. L. Wood, M. Bossak and O. C. Zienkiewicz, 'An alpha modification of Newmark's method', Int. J. Numer. Meth. Engng., 15, 1562-1566 (1980).
18. J. Chung and G. M. Hulbert, 'A time integration algorithm for structural dynamics with improved numerical dissipation: the generalized-α method', J. Appl. Mech., 60, 371-375 (1993).
19. G. Bazzi and E. Anderheggen, 'The ρ-family of algorithms for time-step integration with improved numerical dissipation', Earthquake Engng. Struct. Dyn., 10, 537-550 (1982).
20. G. M. Hulbert, 'Limitation on linear multistep methods for structural dynamics', Earthquake Engng. Struct. Dyn., 20, 191-196 (1991).
21. J. Chung and G. M. Hulbert, 'A family of single-step Houbolt time integration algorithms for structural dynamics', Comput. Methods Appl. Mech. Engng., 118, 1-11 (1994).
22. C. Hoff and P. J. Pahl, 'Development of an implicit method with numerical dissipation from a generalized single-step algorithm for structural dynamics', Comput. Methods Appl. Mech. Engng., 67, 367-385 (1988).
23. C. Hoff and P. J. Pahl, 'Practical performance of the θ1-method and comparison with other dissipative algorithms in structural dynamics', Comput. Methods Appl. Mech. Engng., 67, 87-110 (1988).
24. G. G. Dahlquist, 'A special stability problem for linear multistep methods', BIT, 3, 27-43 (1963).
25. R. S. Varga, Matrix Iterative Analysis, Prentice-Hall, Englewood Cliffs, N.J., 1962.
26. J. D. Lambert, Computational Methods in Ordinary Differential Equations, Wiley, New York, 1973.
27. M. Gellert, 'A new algorithm for integration of dynamic systems', Comput. Struct., 9, 401-408 (1978).
28. T. C. Fung, 'Unconditionally stable higher order accurate Hermitian time finite elements', Int. J. Numer. Meth. Engng., 39, 3475-3495 (1996).
29. P. W. Möller, 'High-order hierarchical A- and L-stable integration methods', Int. J. Numer. Meth. Engng., 36, 2607-2624 (1993).
30. M. Austin, 'Higher order integration of smooth dynamical systems: theory and numerical experiments', Int. J. Numer. Meth. Engng., 36, 2107-2122 (1993).
31. N. Tarnow and J. C. Simo, 'How to render second order accurate time-stepping algorithms fourth order accurate while retaining the stability and conservation properties', Comput. Methods Appl. Mech. Engng., 115, 233-252 (1994).
32. F. Ayres, Jr., Theory and Problems of Matrices, McGraw-Hill, New York, 1962.
33. M. M. Gow, A Course in Pure Mathematics, Hodder and Stoughton, London, 1960.
34. S. N. Perry and W. L. Wood, 'Comparison of some single-step methods for the numerical solution of the structural dynamics equation', Int. J. Numer. Meth. Engng., 21, 1941-1955 (1985).
algorithm is second-order accurate,
unconditionally stable and dissipative if θ ≥ 1.37. The curve of the spectral radius against Δt/T
for θ = 1.4 decreases initially but increases again after Δt/T = 2.4 due to the emergence of real
eigenvalues. The spectral radius plot has a cusp and is not monotonically decreasing. Hence, the
mid-frequency responses are subjected to higher algorithmic damping than the high-frequency
responses.
The collocation method12 combines aspects of the Newmark method and the Wilson-θ
method. There are three parameters, and the algorithms are second-order accurate and unconditionally
stable if γ = 1/2, θ ≥ 1 and θ/(2θ+2) ≥ β ≥ (2θ²-1)/(8θ³-4). Optimal θ values for given
β have been derived in Reference 12. Various levels of dissipation can be achieved by varying β.
The Houbolt method13 uses the idea of passing a cubic curve through four successive
values of the displacement. It is essentially a two-step backward-difference method. The Houbolt
method is asymptotic annihilating and is known to be too dissipative in the low-frequency range
even though it is second-order accurate. Hulbert20 showed that a 3-step linear multi-step
asymptotic annihilating algorithm must have the same spectral characteristics as the Houbolt
method. Chung and Hulbert21 derived a family of unconditionally stable, second-order accurate,
asymptotic annihilating Houbolt algorithms for structural dynamic problems.
Asymptotic annihilating algorithms with smaller low-frequency dissipation can be constructed
by using 4 or more steps. The Park method14 is second-order accurate, unconditionally stable
and asymptotic annihilating. It combines Gear's two-step and three-step stiffly stable algorithms.
It is a 3-step 2-stage (or 6-step) algorithm for structural dynamic problems. However, the method
is not self-starting and requires a special starting procedure.
Hilber et al.15,16 proposed the HHT-α method. The method makes use of the Newmark update
equations but modifies the collocation equation. It is second-order accurate and unconditionally
stable if 0 ≥ α > -1/2, β ≥ 1/4 - α/2 and γ = 1/2 - α. It is shown that there is little to be gained by
making the spectral radius less than 1/2, as the spurious root takes over. To prevent the spurious
real eigenvalue from dominating, α should be greater than -1/3. For a given ultimate spectral
radius k between 1/2 and 1, the algorithmic parameters are suggested to be α = (k-1)/(k+1),
γ = 1/2 - α and β = (1-α)²/4.
The WBZ-α method17 is very similar to the HHT-α method. The α parameter is related to the
mass matrix, instead of the damping and stiffness matrices as in the HHT-α method. However, the
second-order accurate WBZ-α algorithms are only conditionally stable.
Chung and Hulbert18 combined the HHT-α method and the WBZ-α method to construct the
generalized-α method. The method is unconditionally stable and second-order accurate with
controllable high-frequency dissipation. In their formulation, the four algorithmic parameters can be
expressed in terms of the ultimate spectral radius (0 ≤ k ≤ 1) directly as α_f = k/(k+1),
α_m = (2k-1)/(k+1), β = (1 - α_m + α_f)²/4 and γ = 1/2 - α_m + α_f.
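These parameter relations are easy to evaluate directly. The following sketch (an illustrative helper, not part of the paper) computes the generalized-α parameters from a target ultimate spectral radius:

```python
def generalized_alpha_parameters(k):
    """Generalized-alpha parameters (Chung and Hulbert, Reference 18)
    for a prescribed ultimate spectral radius 0 <= k <= 1."""
    alpha_f = k / (k + 1.0)
    alpha_m = (2.0 * k - 1.0) / (k + 1.0)
    gamma = 0.5 - alpha_m + alpha_f
    beta = 0.25 * (1.0 - alpha_m + alpha_f) ** 2
    return alpha_f, alpha_m, beta, gamma
```

As a sanity check, k = 1 reproduces the non-dissipative trapezoidal pair β = 1/4, γ = 1/2, while k = 0 gives the asymptotically annihilating pair β = 1, γ = 3/2.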
Bazzi and Anderheggen19 proposed the ρ-method, with the ultimate spectral radius ρ (between
0 and 1) as an algorithmic parameter. The algorithm is unconditionally stable if 0 ≤ ρ ≤ 1. The
algorithm is second-order accurate if there is no physical damping and is only first-order accurate
otherwise.
All these dissipative algorithms can be generated from the generalized single-step algorithm
SS321,3 or the θ1-method by Hoff and Pahl.22,23 Besides, all these dissipative algorithms
are linear multi-step algorithms.1,2 It is well known that only second-order accuracy can be
achieved by linear multi-step algorithms if the algorithms are to be unconditionally stable.24
Third- and higher-order accurate linear multi-step algorithms are therefore only conditionally
stable.
1.3. Higher-order algorithms
Higher-order algorithms give very accurate numerical results and are good for the long-term
prediction of system responses and the preservation of system invariants (such as energy and
momentum). Fast and accurate algorithms are useful for the dynamical design, analysis and
control of mechanical and structural systems.
Higher-order algorithms based on the Padé approximations to the exponential function have
been widely used in solving differential equations.25,26 However, powers of the matrices appear in
the higher-order formulations. This makes the computational effort very high. It is well known
that the diagonal and first sub-diagonal Padé approximations are non-dissipative and asymptotic
annihilating, respectively. Most of the higher-order algorithms derived from various methods are
equivalent to these non-dissipative A-stable or asymptotic annihilating L-stable algorithms.
The higher-order A-stable non-dissipative algorithms corresponding to the diagonal Padé
approximation can be obtained by using Runge-Kutta methods,4,5 weighted residual
methods,27,28 the Petrov-Galerkin method29 or the bi-discontinuous time Galerkin method.10
The higher-order L-stable asymptotic annihilating algorithms corresponding to the first
sub-diagonal Padé approximation can be obtained by using the time-discontinuous Galerkin
method,8-10 the Petrov-Galerkin method29 and weighted residual methods.28
Other higher-order dissipative algorithms can be obtained from Runge-Kutta methods,4,5
the bi-discontinuous time Galerkin method10 and weighted residual methods.1,3 However, the
high-frequency dissipation is not a directly controllable parameter in these algorithms. Sometimes,
there are cusps in the curves of the spectral radii of the algorithms, so that the responses in
the mid-frequency range would have more algorithmic damping than those in the high-frequency
range.
Higher-order algorithms can also be constructed by using the Romberg extrapolation technique.30
The accuracy of the solutions obtained from the second-order accurate Newmark method is
improved by evaluating the results at the end of the time step a few times with different sub-step
sizes. Fourth-, sixth- and eighth-order algorithms can be constructed with 3, 7 and 15
evaluations, respectively. The numerical results are very accurate and are comparable to the Padé
approximations. However, these extrapolated Newmark methods are unconditionally unstable, i.e. the
spectral radius is greater than unity even for very small time steps. Very small time-step sizes are
required to maintain numerical stability.
1.4. Tarnow and Simo's sub-marching procedure
Recently, Tarnow and Simo31 presented a sub-marching procedure to construct fourth-order
accurate algorithms from a fairly general class of second-order accurate algorithms. In their
procedure, a sequential sub-marching (forward and backward) of three evaluations is required to
advance one time step. The stability, conservation properties and implementation of the underlying
second-order algorithm are retained. When the procedure is applied to the second-order
Newmark method, the resultant algorithm is non-dissipative. To introduce high-frequency
dissipation, post-process filtering of the solution is suggested.
In this paper, a sub-stepping procedure is proposed. The numerical results at different sub-step
locations are evaluated independently and then combined linearly to give higher-order accurate
results at the end of the time step. As in Tarnow and Simo's procedure, there are no changes in the
implementation of the underlying Newmark method except for the use of complex time steps.
However, the high-frequency dissipation is controllable for the present algorithms. Besides, as the
numerical results at the sub-step locations are independent of each other, the evaluations can be
computed in parallel. Furthermore, the present procedure is more general, as it can construct
other higher-order algorithms systematically from the underlying second-order accurate
Newmark algorithm.
1.5. Outline of the paper
In this paper, unconditionally stable higher-order accurate step-by-step time integration
algorithms with controllable numerical dissipation for linear structural dynamic problems are
constructed. The Newmark method with complex time steps is used. The numerical results at the
end of a time step are obtained by combining results at several sub-step locations. The general
formulation for a (2n-1)th order accurate algorithm is studied. The ultimate spectral radius k is
a controllable algorithmic parameter. There are in general n sub-step locations and the corresponding
n weighting factors to be determined. It is shown that the sub-step locations, which may
be complex, are given by the roots of an nth degree polynomial. The coefficients of the polynomial
are given explicitly in terms of n and k in this paper. The weighting factors can then be obtained
by solving a system of n simultaneous equations.
It is further shown that when k = 1, the algorithms for various n are non-dissipative, with the
order of accuracy increased from 2n-1 to 2n. The leading truncation error terms for the relative
period error and the algorithmic damping ratio are given explicitly in terms of n and k.
The stability properties of the proposed complex-time-step Newmark methods are investigated.
It is shown that if -1 < k ≤ 1, the algorithms are unconditionally stable for all n. The
eigenvalues of the numerical amplification matrix are complex conjugates. Besides, it is shown
that the spectral radius decreases monotonically from unity to the ultimate spectral radius k as
the time-step size increases. The present algorithms therefore show good spectral properties and
are unconditionally C-stable.
It can be verified that the present algorithms for k = 0 and 1 correspond to the first
sub-diagonal and diagonal Padé approximations, respectively. The present algorithms are more
general, as the numerical dissipation in the algorithms is controllable.
The accuracy of the forced excitation responses is studied. It is shown that the accuracy of the
excitation responses is improved by the present complex-time-step procedure. However, to
achieve the order of accuracy of the numerical amplification matrix, the excitation may need some
modifications.
The computational procedure is discussed. Since the numerical results at the sub-step locations
are independent of each other, the evaluations can be computed in parallel. The computational
efforts, the spectral radii, the algorithmic damping ratios and the relative period errors of the
present higher-order algorithms are compared with those of other higher-order algorithms.
2. SINGLE-DEGREE-OF-FREEDOM SYSTEM
It would be cumbersome and difficult to study the characteristics of an algorithm by applying it
to equation (1) directly. Instead, the modal decomposition method can be used to uncouple
equation (1) by invoking the orthogonality properties of the free vibration mode shapes of the
undamped system. In this case, modal damping is assumed.
It has been rigorously established that the integration of the uncoupled equations is equivalent
to the integration of the original system.1-4 It is therefore more convenient, and sufficient for the
purpose of investigating the characteristics of a proposed algorithm, to consider the equation
of motion of a single-degree-of-freedom system in the form
\[ \ddot{u}(t) + 2\xi\omega\dot{u}(t) + \omega^2 u(t) = f(t) \qquad (3) \]
where ξ, ω and f(t) are the damping ratio, the undamped natural frequency of the system and the
forcing excitation, respectively. To analyse a time-step integration algorithm, t_n and t_{n+1} can be
conveniently chosen as 0 and Δt, respectively.
Equation (3) is used to study the characteristics of the Newmark algorithm and the constructed
higher-order algorithms. There are many ways to express the Newmark algorithm. Equation (2) is the
single-step three-stage form. For comparison with the analytical solutions, the Newmark algorithm is
cast in the equivalent single-step two-stage form for the single-degree-of-freedom system as
\[ \begin{Bmatrix} u_{n+1} \\ v_{n+1} \end{Bmatrix} = [A_{NM}(\Delta t)] \begin{Bmatrix} u_n \\ v_n \end{Bmatrix} + [L_{NM}(\Delta t)] \begin{Bmatrix} f(t_n) \\ f(t_{n+1}) \end{Bmatrix} \]
where [A_{NM}(Δt)] is the numerical amplification matrix and is given by
\[ [A_{NM}(\Delta t)] = \frac{1}{D} \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}, \qquad D = 2 + 4\gamma\xi\omega\Delta t + 2\beta\omega^2\Delta t^2 \qquad (4) \]
with
\[ \begin{aligned}
A_{11} &= 2(2\beta-\gamma)\xi\omega^3\Delta t^3 + (2\beta-1)\omega^2\Delta t^2 + 4\gamma\xi\omega\Delta t + 2 \\
A_{12} &= 4(2\beta-\gamma)\xi^2\omega^2\Delta t^3 + 2(2\gamma-1)\xi\omega\Delta t^2 + 2\Delta t \\
A_{21} &= -(2\beta-\gamma)\omega^4\Delta t^3 - 2\omega^2\Delta t \\
A_{22} &= -2(2\beta-\gamma)\xi\omega^3\Delta t^3 + 2(\beta-\gamma)\omega^2\Delta t^2 + 4(\gamma-1)\xi\omega\Delta t + 2
\end{aligned} \]
and
\[ [L_{NM}(\Delta t)] = \frac{1}{D} \begin{bmatrix} (1-2\beta)\Delta t^2 - 2(2\beta-\gamma)\xi\omega\Delta t^3 & 2\beta\Delta t^2 \\ 2(1-\gamma)\Delta t + (2\beta-\gamma)\omega^2\Delta t^3 & 2\gamma\Delta t \end{bmatrix} \]
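As a cross-check on equation (4), the same 2×2 map can be assembled numerically from the Newmark update equations by eliminating the accelerations through the equilibrium equation. The sketch below is illustrative (function name is not from the paper) and treats the free-vibration case f = 0:

```python
def newmark_amplification(beta, gamma, xi, omega, dt):
    """One Newmark step for u'' + 2*xi*omega*u' + omega**2*u = 0: returns the
    2x2 matrix mapping (u_n, v_n) to (u_{n+1}, v_{n+1}), with the accelerations
    eliminated via the equilibrium equation."""
    cols = []
    for (u0, v0) in ((1.0, 0.0), (0.0, 1.0)):
        a0 = -2.0 * xi * omega * v0 - omega ** 2 * u0   # a_n from equilibrium
        # After eliminating a_{n+1} = -2*xi*omega*v1 - omega^2*u1:
        #   (1 + beta*x^2) u1 + 2*beta*xi*omega*dt^2 v1 = u0 + dt*v0 + dt^2*(1/2-beta)*a0
        #   gamma*omega^2*dt u1 + (1 + 2*gamma*xi*omega*dt) v1 = v0 + dt*(1-gamma)*a0
        m11 = 1.0 + beta * omega ** 2 * dt ** 2
        m12 = 2.0 * beta * xi * omega * dt ** 2
        m21 = gamma * omega ** 2 * dt
        m22 = 1.0 + 2.0 * gamma * xi * omega * dt
        r1 = u0 + dt * v0 + dt ** 2 * (0.5 - beta) * a0
        r2 = v0 + dt * (1.0 - gamma) * a0
        det = m11 * m22 - m12 * m21
        cols.append(((m22 * r1 - m12 * r2) / det, (m11 * r2 - m21 * r1) / det))
    return [[cols[0][0], cols[1][0]],
            [cols[0][1], cols[1][1]]]
```

Comparing its entries with the closed-form expressions of equation (4) for arbitrary β, γ, ξ and ω gives a quick consistency test of the algebra.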
The convergence of a numerical algorithm requires consistency and stability, according to the Lax
equivalence theorem. The consistency property can be determined from the truncation error by
comparing the numerical amplification matrix with the analytical amplification matrix. The
stability requirements and other measures of accuracy are considered in the next few sections.
The analytical amplification matrix for equation (3) is given by
\[ [A(\Delta t)] = e^{-\xi\omega\Delta t} \begin{bmatrix} \cos(\omega_d\Delta t) + \dfrac{\xi\omega}{\omega_d}\sin(\omega_d\Delta t) & \dfrac{1}{\omega_d}\sin(\omega_d\Delta t) \\[2mm] -\dfrac{\omega^2}{\omega_d}\sin(\omega_d\Delta t) & \cos(\omega_d\Delta t) - \dfrac{\xi\omega}{\omega_d}\sin(\omega_d\Delta t) \end{bmatrix} \qquad (5) \]
where \( \omega_d = \sqrt{1-\xi^2}\,\omega \) is the damped vibration frequency.
By comparing the Taylor series expansions of [A_{NM}] in equation (4) and [A] in equation (5), it
can be seen that the truncation errors are O(Δt³) (i.e. second-order accurate) if γ = 1/2 and O(Δt²)
(i.e. first-order accurate) otherwise. The leading truncation errors can be eliminated by linearly
combining the numerical amplification matrices at various sub-step locations (b_j Δt) with
weighting factors a_j, as shown in the next section.
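The O(Δt³) truncation error of the undamped Newmark method with β = 1/4, γ = 1/2 can be observed directly: halving the time step should shrink the difference between the numerical and analytical amplification matrices by roughly 2³ = 8. A minimal sketch (illustrative, assuming ξ = 0):

```python
from math import cos, sin

def trapezoidal_newmark_matrix(omega, dt):
    # Equation (4) specialized to beta = 1/4, gamma = 1/2 and no damping.
    x = omega * dt
    d = 1.0 + x * x / 4.0
    return [[(1.0 - x * x / 4.0) / d, dt / d],
            [-omega * omega * dt / d, (1.0 - x * x / 4.0) / d]]

def exact_matrix(omega, dt):
    # Equation (5) with xi = 0: a pure rotation in the (u, v/omega) plane.
    x = omega * dt
    return [[cos(x), sin(x) / omega],
            [-omega * sin(x), cos(x)]]

def max_error(omega, dt):
    A = trapezoidal_newmark_matrix(omega, dt)
    E = exact_matrix(omega, dt)
    return max(abs(A[i][j] - E[i][j]) for i in range(2) for j in range(2))

# Halving the step reduces the truncation error by about a factor of 8.
ratio = max_error(1.0, 0.1) / max_error(1.0, 0.05)
```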
3. TRUNCATION ERRORS ELIMINATION PROCEDURE
The sth order accurate numerical amplification matrix [A_s(Δt)], with truncation error O(Δt^{s+1}), is
to be constructed from
\[ [A_s(\Delta t)] = \sum_j a_j\, [A_{NM}(b_j \Delta t)] \qquad (6) \]
where a_j and b_j are algorithmic parameters to be determined. The range of the index j depends
on the number of parameters used and is not specified for the time being. The algorithmic
parameters are chosen so that the Taylor series expansions of equations (5) and (6) match for the
first (s+1) terms, i.e. from Δt⁰ to Δt^s.
Comparing the Taylor series expansions of the entries in [A_s] with those in [A], it can be
shown that the conditions to match the sth terms are as follows:
\[ \begin{aligned}
&\text{for } s=0: && \textstyle\sum_j a_j = 1 && (7a) \\
&\text{for } s=1: && \textstyle\sum_j a_j b_j = 1 && (7b) \\
&\text{for } s=2: && \textstyle\sum_j a_j b_j^2 = 1, \ \text{and } \gamma = 1/2 && (7c) \\
&\text{for } s=3: && \textstyle\sum_j a_j b_j^3 = 2/3, \ \text{and } \beta = 1/4 && (7d) \\
&\text{for } s=k>3: && \textstyle\sum_j a_j b_j^k = 2^{k-1}/k! && (7e)
\end{aligned} \]
For example, to obtain a second-order algorithm, γ = 1/2 and the a_j and b_j must satisfy equations
(7a)-(7c). This can be done easily by choosing a_1 = 1 and b_1 = 1. This is of course the original
Newmark algorithm. Higher-order algorithms can be constructed systematically by solving more
equations with more undetermined algorithmic parameters. It can be seen from equation (7) that
for third- or higher-order algorithms, it is required that β = 1/4 and γ = 1/2.
It can be seen that for b_0 = 0, [A_{NM}(0)] = [I] is an identity matrix. No evaluation is required.
Therefore, it costs nothing to assume one of the sub-step locations to be at the beginning of the time
interval, i.e. b_0 = 0. The corresponding weighting factor a_0 will be determined with the other
algorithmic parameters.
It is shown in the next section that a_0 can be related to the ultimate spectral radius k. By using
an additional n pairs of a_j and b_j, 2n equations from equation (7) can be solved. The resultant
algorithm would be (2n-1)th order accurate.
4. ULTIMATE SPECTRAL RADIUS
The spectral radius is defined as the largest magnitude of the eigenvalues of a numerical
amplification matrix. It is a function of ωΔt (or Δt/T) and the other algorithmic parameters. For
unconditionally stable algorithms, the spectral radius must not exceed unity for all ωΔt. Consider
the numerical amplification matrix in equation (6) as Δt approaches infinity:
\[ \lim_{\Delta t\to\infty} [A_{2n-1}(\Delta t)] = \lim_{\Delta t\to\infty} \sum_{j=0}^{n} a_j\, [A_{NM}(b_j \Delta t)] = a_0 \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \sum_{j=1}^{n} a_j \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix} \qquad (8) \]
The square of the magnitude of the eigenvalues |λ_∞| is equal to the determinant of the matrix in
equation (8). Since k is the ultimate spectral radius, one has
\[ k^2 = |\lambda_\infty|^2 = \det\Big(\lim_{\Delta t\to\infty}[A_s(\Delta t)]\Big) = \Big(a_0 - \sum_{j=1}^{n} a_j\Big)^2 = (2a_0 - 1)^2 \qquad (9) \]
by using equation (7a). Hence the weighting factor a_0 for the initial condition is related to the
ultimate spectral radius directly. In the following, a_0 is defined as
\[ a_0 = \tfrac{1}{2}\big(1 + (-1)^n k\big) \qquad (10) \]
The advantage of defining a_0 this way is that the algorithms are undefined when k = -1 and are
non-dissipative when k = 1, as shown in the following sections.
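The building block behind equation (8) — each Newmark evaluation with β = 1/4, γ = 1/2 tends to −[I] as Δt → ∞ — can be confirmed numerically. A small illustrative sketch (undamped case):

```python
def trapezoidal_newmark_matrix(omega, dt):
    # Equation (4) specialized to beta = 1/4, gamma = 1/2 and no damping.
    x = omega * dt
    d = 1.0 + x * x / 4.0
    return [[(1.0 - x * x / 4.0) / d, dt / d],
            [-omega * omega * dt / d, (1.0 - x * x / 4.0) / d]]

# For a very large time step the amplification matrix approaches -[I],
# which is why each term of the sum in (8) contributes -a_j on the diagonal.
A = trapezoidal_newmark_matrix(1.0, 1.0e8)
```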
5. SOLUTION PROCEDURE
The remaining 2n algorithmic parameters (a_1, a_2, …, a_n, b_1, b_2, …, b_n) are to be obtained by
solving the first 2n non-linear equations in equation (7). The equations can be written as
\[ \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 \\ b_1 & b_2 & b_3 & \cdots & b_n \\ b_1^2 & b_2^2 & b_3^2 & \cdots & b_n^2 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ b_1^{n-1} & b_2^{n-1} & b_3^{n-1} & \cdots & b_n^{n-1} \end{bmatrix} \begin{Bmatrix} a_1 \\ a_2 \\ a_3 \\ \vdots \\ a_n \end{Bmatrix} = \begin{Bmatrix} d_0 \\ d_1 \\ d_2 \\ \vdots \\ d_{n-1} \end{Bmatrix} \qquad (11a) \]
\[ \begin{bmatrix} b_1^n & b_2^n & b_3^n & \cdots & b_n^n \\ b_1^{n+1} & b_2^{n+1} & b_3^{n+1} & \cdots & b_n^{n+1} \\ b_1^{n+2} & b_2^{n+2} & b_3^{n+2} & \cdots & b_n^{n+2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ b_1^{2n-1} & b_2^{2n-1} & b_3^{2n-1} & \cdots & b_n^{2n-1} \end{bmatrix} \begin{Bmatrix} a_1 \\ a_2 \\ a_3 \\ \vdots \\ a_n \end{Bmatrix} = \begin{Bmatrix} d_n \\ d_{n+1} \\ d_{n+2} \\ \vdots \\ d_{2n-1} \end{Bmatrix} \qquad (11b) \]
where
\[ d_0 = \tfrac{1}{2}\big(1 - (-1)^n k\big) \quad \text{and} \quad d_k = 2^{k-1}/k! \ \text{ for } k \geq 1 \qquad (11c) \]
The a_j can be eliminated from equations (11a) and (11b), i.e.
\[ \begin{bmatrix} b_1^n & \cdots & b_n^n \\ b_1^{n+1} & \cdots & b_n^{n+1} \\ \vdots & \ddots & \vdots \\ b_1^{2n-1} & \cdots & b_n^{2n-1} \end{bmatrix} \begin{bmatrix} 1 & \cdots & 1 \\ b_1 & \cdots & b_n \\ \vdots & \ddots & \vdots \\ b_1^{n-1} & \cdots & b_n^{n-1} \end{bmatrix}^{-1} \begin{Bmatrix} d_0 \\ d_1 \\ \vdots \\ d_{n-1} \end{Bmatrix} = \begin{Bmatrix} d_n \\ d_{n+1} \\ \vdots \\ d_{2n-1} \end{Bmatrix} \qquad (12a) \]
or
\[ [B] \begin{Bmatrix} d_0 \\ \vdots \\ d_{n-1} \end{Bmatrix} = \begin{Bmatrix} d_n \\ \vdots \\ d_{2n-1} \end{Bmatrix} \qquad (12b) \]
Note that in this equation, the d_k are known while the b_k in [B] are the unknowns to be determined. The
elements in the matrix [B] can be expressed in terms of the b_k using Cramer's rule.32
Let D(k_1, k_2, …, k_n) denote the determinant of the matrix with element b_q^{k_p} in the pth row and
qth column, i.e.
\[ D(k_1, k_2, \ldots, k_n) = \begin{vmatrix} b_1^{k_1} & b_2^{k_1} & \cdots & b_n^{k_1} \\ b_1^{k_2} & b_2^{k_2} & \cdots & b_n^{k_2} \\ \vdots & \vdots & \ddots & \vdots \\ b_1^{k_n} & b_2^{k_n} & \cdots & b_n^{k_n} \end{vmatrix} \qquad (13) \]
Obviously, the determinants of the coefficient matrices in equations (11a) and (11b) are
D(0, 1, 2, …, n-1) and D(n, n+1, …, 2n-1), respectively.
It can be seen that the pth row and qth column element of [B] is given by
\[ B_{pq} = \frac{D(k_1, k_2, \ldots, k_n)}{D(0, 1, 2, \ldots, n-1)} \quad \text{where } k_j = j-1 \text{ except } k_q = n+p-1 \qquad (14) \]
For example, the [B] matrix for n = 4 is given by
\[ [B] = \frac{1}{D(0,1,2,3)} \begin{bmatrix} D(4,1,2,3) & D(0,4,2,3) & D(0,1,4,3) & D(0,1,2,4) \\ D(5,1,2,3) & D(0,5,2,3) & D(0,1,5,3) & D(0,1,2,5) \\ D(6,1,2,3) & D(0,6,2,3) & D(0,1,6,3) & D(0,1,2,6) \\ D(7,1,2,3) & D(0,7,2,3) & D(0,1,7,3) & D(0,1,2,7) \end{bmatrix} \qquad (15) \]
It can be shown that
\[ D(0,1,2,\ldots,m-1) = \prod_{p=2}^{m}\ \prod_{q=1}^{p-1} (b_p - b_q) = (b_m - b_{m-1})(b_m - b_{m-2}) \cdots (b_m - b_1)\, D(0,1,2,\ldots,m-2) \qquad (16) \]
Furthermore, D(k_1, k_2, …, k_n) in equation (13) can be considered as the determinant of a minor
of an extended matrix with additional b_k. As a result, by making use of equation (16) and the
Laplace expansion32 of the determinant, D(k_1, k_2, …, k_n) can be found. For example, D(0, 2, 3, 5)
can be obtained from the coefficient of b_5 b_6^4 in D(0, 1, 2, 3, 4, 5). As a result, D(0, 5, 2, 3) in equation
(15) can be obtained. However, the elements in [B] are still non-linear functions of the b_k and it is still
very difficult to solve for the b_k directly.
6. POLYNOMIAL FOR b_j
On the other hand, it can be seen that D(k_1, k_2, …, k_n)/D(0, 1, 2, …, n-1) is a homogeneous
(i.e. all terms are of the same degree) and symmetric (i.e. its value is unaltered by the
interchange of b_p and b_q) function of the b_k.
It is well known that any symmetric function of the roots of a polynomial can be expressed in
terms of the coefficients of the polynomial.33 An nth degree polynomial P_n(x) with the b_j as roots
can be constructed as
\[ P_n(x) = \prod_{j=1}^{n} (x - b_j) = (x-b_1)(x-b_2)\cdots(x-b_{n-1})(x-b_n) = x^n + \sigma_1 x^{n-1} + \sigma_2 x^{n-2} + \cdots + \sigma_{n-1} x + \sigma_n \qquad (17) \]
where σ_k is the signed sum of the products of k distinct b_j, i.e.
\[ \sigma_k = (-1)^k \sum_{j_1 < j_2 < \cdots < j_k} b_{j_1} b_{j_2} \cdots b_{j_k} \qquad (18) \]
As a result, the elements in [B] can be expressed in terms of the σ_k directly. In fact, by manipulating
the equations in equation (12a), the system of equations can be simplified to
\[ \begin{bmatrix} -\sigma_n & -\sigma_{n-1} & -\sigma_{n-2} & \cdots & -\sigma_1 \\ 0 & -\sigma_n & -\sigma_{n-1} & \cdots & -\sigma_2 \\ 0 & 0 & -\sigma_n & \cdots & -\sigma_3 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & -\sigma_n \end{bmatrix} \begin{Bmatrix} d_0 \\ d_1 \\ d_2 \\ \vdots \\ d_{n-1} \end{Bmatrix} = \begin{Bmatrix} d_n \\ d_{n+1} + \sigma_1 d_n \\ d_{n+2} + \sigma_1 d_{n+1} + \sigma_2 d_n \\ \vdots \\ d_{2n-1} + \sigma_1 d_{2n-2} + \cdots + \sigma_{n-1} d_n \end{Bmatrix} \qquad (19) \]
The system of equations in equation (19) can be rearranged into the following form:
\[ \begin{bmatrix} d_0 & d_1 & d_2 & \cdots & d_{n-1} \\ d_1 & d_2 & d_3 & \cdots & d_n \\ d_2 & d_3 & d_4 & \cdots & d_{n+1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ d_{n-1} & d_n & d_{n+1} & \cdots & d_{2n-2} \end{bmatrix} \begin{Bmatrix} \sigma_n \\ \sigma_{n-1} \\ \sigma_{n-2} \\ \vdots \\ \sigma_1 \end{Bmatrix} = - \begin{Bmatrix} d_n \\ d_{n+1} \\ d_{n+2} \\ \vdots \\ d_{2n-1} \end{Bmatrix} \qquad (20) \]
In other words, the required b_k values are the roots of the polynomial with coefficients σ_k, which in
turn can be obtained by solving the simultaneous equations in equation (20).
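Equation (20) is a small Hankel-type linear system, so the polynomial coefficients σ_k are cheap to generate. The sketch below (pure Python, illustrative function names) builds and solves it for given n and k:

```python
from math import factorial

def d_coeff(s, n, k):
    # d_0 depends on n and the ultimate spectral radius k; see equation (11c).
    if s == 0:
        return 0.5 * (1.0 - (-1.0) ** n * k)
    return 2.0 ** (s - 1) / factorial(s)

def substep_polynomial(n, k):
    """Coefficients [1, sigma_1, ..., sigma_n] of P_n(x) from system (20)."""
    H = [[d_coeff(i + j, n, k) for j in range(n)] for i in range(n)]
    rhs = [-d_coeff(n + i, n, k) for i in range(n)]
    # plain Gaussian elimination with partial pivoting
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(H[r][c]))
        H[c], H[p] = H[p], H[c]
        rhs[c], rhs[p] = rhs[p], rhs[c]
        for r in range(c + 1, n):
            f = H[r][c] / H[c][c]
            for cc in range(c, n):
                H[r][cc] -= f * H[c][cc]
            rhs[r] -= f * rhs[c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (rhs[r] - sum(H[r][c] * x[c] for c in range(r + 1, n))) / H[r][r]
    # x = (sigma_n, ..., sigma_1); return descending-power coefficients of P_n
    return [1.0] + x[::-1]
```

For n = 2 and k = 1 this returns [1, -1, 1/3], i.e. the polynomial x² - x + 1/3 of Table I; the (generally complex) sub-step locations b_j then follow from any standard polynomial root finder.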
Since d_0 = (1 - (-1)^n k)/2 and d_m = 2^{m-1}/m!, the σ_m can be solved explicitly as
\[ \sigma_m = (-1)^m\, 2^m\, C_m^n\, \frac{(2n-1-m)!}{(2n-1)!}\, \frac{n + (n-m)k}{n + nk} \quad \text{where } C_m^n = \frac{n!}{(n-m)!\, m!} \qquad (21) \]
As a result, the n (complex) b values are given by the roots of the following polynomial:
\[ \sum_{m=0}^{n} (-1)^m\, 2^m\, C_m^n\, \frac{(2n-1-m)!}{(2n-1)!}\, \frac{n + (n-m)k}{n + nk}\, x^{n-m} = 0 \qquad (22) \]
The explicit forms of the polynomials corresponding to n from 2 to 10 are listed in Table I. It can
be seen that when k = -1, the formulation is invalid.
Once the b_j values are determined, the corresponding a_j values can be obtained from equation
(11a).
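The whole construction can be verified end-to-end: generate the σ from the closed form (21), root the polynomial, solve the Vandermonde system (11a) for the weights, and confirm that the conditions up to s = 2n-1 then hold. A self-contained sketch (illustrative helper names; a simple Durand-Kerner iteration stands in for a library root finder):

```python
from math import factorial

def d_coeff(s, n, k):
    return 0.5 * (1.0 - (-1.0) ** n * k) if s == 0 else 2.0 ** (s - 1) / factorial(s)

def sigma(m, n, k):
    comb = factorial(n) // (factorial(n - m) * factorial(m))
    return ((-1) ** m * 2 ** m * comb * factorial(2 * n - 1 - m) / factorial(2 * n - 1)
            * (n + (n - m) * k) / (n * (1.0 + k)))   # equation (21)

def poly_roots(coeffs, iters=200):
    # Durand-Kerner iteration for a monic polynomial (descending coefficients)
    deg = len(coeffs) - 1
    roots = [(0.4 + 0.9j) ** i for i in range(deg)]
    for _ in range(iters):
        for i in range(deg):
            r = roots[i]
            p = sum(c * r ** (deg - j) for j, c in enumerate(coeffs))
            q = 1.0 + 0j
            for j in range(deg):
                if j != i:
                    q *= r - roots[j]
            roots[i] = r - p / q
    return roots

def solve(M, rhs):
    # Gaussian elimination with partial pivoting (complex entries)
    size = len(rhs)
    M = [row[:] for row in M]
    rhs = rhs[:]
    for c in range(size):
        p = max(range(c, size), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        rhs[c], rhs[p] = rhs[p], rhs[c]
        for r in range(c + 1, size):
            f = M[r][c] / M[c][c]
            for cc in range(c, size):
                M[r][cc] -= f * M[c][cc]
            rhs[r] -= f * rhs[c]
    x = [0j] * size
    for r in range(size - 1, -1, -1):
        x[r] = (rhs[r] - sum(M[r][c] * x[c] for c in range(r + 1, size))) / M[r][r]
    return x

def build_substeps_and_weights(n, k):
    coeffs = [1.0] + [sigma(m, n, k) for m in range(1, n + 1)]
    b = poly_roots(coeffs)                       # sub-step locations (complex)
    V = [[bj ** s for bj in b] for s in range(n)]
    a = solve(V, [complex(d_coeff(s, n, k)) for s in range(n)])  # system (11a)
    return a, b
```

Although only the first n moment conditions are imposed when solving for the weights, the residuals of all 2n conditions come out at round-off level, as the theory predicts for a (2n-1)th order algorithm.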
7. ORDER OF ACCURACY WHEN k = 1
Since k is an algorithmic parameter, it is of interest to know if there are any values of k that could
increase the order of accuracy. By appending the next equation of equation (7e), corresponding to
the exponent 2n, to equation (11b), the n+1 equations can be written as
\[ \begin{bmatrix} b_1^n & b_2^n & b_3^n & \cdots & b_n^n \\ b_1^{n+1} & b_2^{n+1} & b_3^{n+1} & \cdots & b_n^{n+1} \\ b_1^{n+2} & b_2^{n+2} & b_3^{n+2} & \cdots & b_n^{n+2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ b_1^{2n-1} & b_2^{2n-1} & b_3^{2n-1} & \cdots & b_n^{2n-1} \\ b_1^{2n} & b_2^{2n} & b_3^{2n} & \cdots & b_n^{2n} \end{bmatrix} \begin{Bmatrix} a_1 \\ a_2 \\ a_3 \\ \vdots \\ a_n \end{Bmatrix} = \begin{Bmatrix} d_n \\ d_{n+1} \\ d_{n+2} \\ \vdots \\ d_{2n-1} \\ d_{2n} \end{Bmatrix} \qquad (23) \]
Table I. Polynomials (equation (22)) for the b values for various n

n = 2:  x^2 - (2/3)[(2+k)/(1+k)] x + (2/3)[1/(1+k)]

n = 3:  x^3 - (2/5)[(3+2k)/(1+k)] x^2 + (1/5)[(3+k)/(1+k)] x - (2/15)[1/(1+k)]

n = 4:  x^4 - (2/7)[(4+3k)/(1+k)] x^3 + (1/7)[(4+2k)/(1+k)] x^2 - (4/105)[(4+k)/(1+k)] x + (2/105)[1/(1+k)]

n = 5:  x^5 - (2/9)[(5+4k)/(1+k)] x^4 + (1/9)[(5+3k)/(1+k)] x^3 - (2/63)[(5+2k)/(1+k)] x^2 + (1/189)[(5+k)/(1+k)] x - (2/945)[1/(1+k)]

n = 6:  x^6 - (2/11)[(6+5k)/(1+k)] x^5 + (1/11)[(6+4k)/(1+k)] x^4 - (8/297)[(6+3k)/(1+k)] x^3 + (1/198)[(6+2k)/(1+k)] x^2 - (2/3465)[(6+k)/(1+k)] x + (2/10395)[1/(1+k)]

n = 7:  x^7 - (2/13)[(7+6k)/(1+k)] x^6 + (1/13)[(7+5k)/(1+k)] x^5 - (10/429)[(7+4k)/(1+k)] x^4 + (2/429)[(7+3k)/(1+k)] x^3 - (4/6435)[(7+2k)/(1+k)] x^2 + (1/19305)[(7+k)/(1+k)] x - (2/135135)[1/(1+k)]

n = 8:  x^8 - (2/15)[(8+7k)/(1+k)] x^7 + (1/15)[(8+6k)/(1+k)] x^6 - (4/195)[(8+5k)/(1+k)] x^5 + (1/234)[(8+4k)/(1+k)] x^4 - (4/6435)[(8+3k)/(1+k)] x^3 + (2/32175)[(8+2k)/(1+k)] x^2 - (8/2027025)[(8+k)/(1+k)] x + (2/2027025)[1/(1+k)]

n = 9:  x^9 - (2/17)[(9+8k)/(1+k)] x^8 + (1/17)[(9+7k)/(1+k)] x^7 - (14/765)[(9+6k)/(1+k)] x^6 + (1/255)[(9+5k)/(1+k)] x^5 - (2/3315)[(9+4k)/(1+k)] x^4 + (2/29835)[(9+3k)/(1+k)] x^3 - (4/765765)[(9+2k)/(1+k)] x^2 + (1/3828825)[(9+k)/(1+k)] x - (2/34459425)[1/(1+k)]

n = 10:  x^10 - (2/19)[(10+9k)/(1+k)] x^9 + (1/19)[(10+8k)/(1+k)] x^8 - (16/969)[(10+7k)/(1+k)] x^7 + (7/1938)[(10+6k)/(1+k)] x^6 - (14/24225)[(10+5k)/(1+k)] x^5 + (1/14535)[(10+4k)/(1+k)] x^4 - (8/1322685)[(10+3k)/(1+k)] x^3 + (1/2645370)[(10+2k)/(1+k)] x^2 - (2/130945815)[(10+k)/(1+k)] x + (2/654729075)[1/(1+k)]
Int. J. Numer. Meth. Engng., 41, 65?93 (1998)
( 1998 John Wiley & Sons, Ltd.
77
COMPLEX-TIME-STEP NEWMARK METHODS
Following the same procedure as in the previous sections, equation (23) can be simplified to
$$\begin{bmatrix}
d_0 & d_1 & d_2 & \cdots & d_{n-1}\\
d_1 & d_2 & d_3 & \cdots & d_n\\
d_2 & d_3 & d_4 & \cdots & d_{n+1}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
d_{n-1} & d_n & d_{n+1} & \cdots & d_{2n-2}\\
d_n & d_{n+1} & d_{n+2} & \cdots & d_{2n-1}
\end{bmatrix}
\begin{Bmatrix}\Sigma_n\\ \Sigma_{n-1}\\ \Sigma_{n-2}\\ \vdots\\ \Sigma_1\end{Bmatrix}
= -\begin{Bmatrix}d_n\\ d_{n+1}\\ d_{n+2}\\ \vdots\\ d_{2n-1}\\ d_{2n}\end{Bmatrix} \tag{24}$$
Putting in d_0 and the other d_k values, and multiplying the jth equation by (n−1+j)!/2^(j−2), one has

$$\begin{bmatrix}
(1-(-1)^n\kappa)\,n! & 2\,\dfrac{n!}{1!} & 4\,\dfrac{n!}{2!} & \cdots & 2^{n-1}\dfrac{n!}{(n-1)!}\\[6pt]
\dfrac{(n+1)!}{1!} & 2\,\dfrac{(n+1)!}{2!} & 4\,\dfrac{(n+1)!}{3!} & \cdots & 2^{n-1}\dfrac{(n+1)!}{n!}\\[6pt]
\dfrac{(n+2)!}{2!} & 2\,\dfrac{(n+2)!}{3!} & 4\,\dfrac{(n+2)!}{4!} & \cdots & 2^{n-1}\dfrac{(n+2)!}{(n+1)!}\\[6pt]
\vdots & \vdots & \vdots & \ddots & \vdots\\[6pt]
\dfrac{(2n-1)!}{(n-1)!} & 2\,\dfrac{(2n-1)!}{n!} & 4\,\dfrac{(2n-1)!}{(n+1)!} & \cdots & 2^{n-1}\dfrac{(2n-1)!}{(2n-2)!}\\[6pt]
\dfrac{(2n)!}{n!} & 2\,\dfrac{(2n)!}{(n+1)!} & 4\,\dfrac{(2n)!}{(n+2)!} & \cdots & 2^{n-1}\dfrac{(2n)!}{(2n-1)!}
\end{bmatrix}
\begin{Bmatrix}\Sigma_n\\ \Sigma_{n-1}\\ \Sigma_{n-2}\\ \vdots\\ \Sigma_1\end{Bmatrix}
= -2^n\begin{Bmatrix}1\\ 1\\ 1\\ \vdots\\ 1\\ 1\end{Bmatrix} \tag{25}$$
Recall that the Σ_k are solved by using the first n equations. If the (n+1)th equation is satisfied as well, the order of accuracy is increased from 2n−1 to 2n. This requires the n+1 equations in the n unknowns to be consistent; in other words, the n+1 equations should be linearly dependent. In the following, the condition for linear dependence is established.
Before we proceed, we need some identities. Consider the following identity:

$$x^n(1-x)^n = x^n\sum_{k=0}^{n} C^n_k(-1)^{n-k}x^{\,n-k} = \sum_{k=0}^{n} C^n_k(-1)^{n-k}x^{\,2n-k} \tag{26}$$

Differentiating equation (26) with respect to x m times, one has
$$\frac{d^m}{dx^m}\,x^n(1-x)^n = \sum_{k=0}^{n} C^n_k(-1)^{n-k}\frac{(2n-k)!}{(2n-k-m)!}\,x^{\,2n-k-m} \tag{27}$$

It can be seen that if m < n, by Leibniz's theorem,33 the left-hand side of equation (27) would still contain (1−x) as a factor. Hence, letting x = 1 would give zero value. In other words,

$$\sum_{k=0}^{n} C^n_k(-1)^{n-k}\frac{(2n-k)!}{(2n-k-m)!} = 0 \qquad \text{if } m = 0, 1, 2, \ldots, n-1 \tag{28}$$
On the other hand, if m = n and x = 1, the left-hand side of equation (27) would give (−1)^n n!. As a result,

$$\sum_{k=0}^{n} C^n_k(-1)^{n-k}\frac{(2n-k)!}{(n-k)!} = (-1)^n\, n! \tag{29}$$
Now, pre-multiplying equation (25) by the following row vector:

$$[\,C^n_0 \;\; -C^n_1 \;\; \cdots \;\; (-1)^k C^n_k \;\; \cdots \;\; (-1)^{n-1} C^n_{n-1} \;\; (-1)^n C^n_n\,] \tag{30}$$

and making use of equations (28) and (29), one has the following equation:

$$(-1)^n (1-\kappa)\, n!\, \Sigma_n = 0 \tag{31}$$
The system of equations is consistent if equation (31) is satisfied. Since Σ_n ≠ 0 in general, the equations are consistent if and only if κ = 1. In other words, if κ = 1, the a_k and b_k values obtained by solving the first 2n equations satisfy the (2n+1)th equation as well. In conclusion, the order of accuracy is increased by one if κ is set to one.
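The consistency argument above can be spot-checked numerically. The following sketch (my own illustration; it hard-codes the n = 2 case) recovers b_1, b_2 from Table I, solves the moment equations Σ_j a_j b_j^k = d_k for k = 2, 3, and evaluates the residual of the extra k = 4 equation, which vanishes only for κ = 1:

```python
import cmath

def residual_2n(kappa):
    """For n = 2: solve for b_1, b_2 from the Table I quadratic, for a_1, a_2
    from the moment equations sum_j a_j b_j^k = d_k (k = 2, 3), then return
    the residual of the next (k = 4) equation, with d_k = 2^(k-1)/k!."""
    # roots of 3(1+kappa)x^2 - (4+2kappa)x + 2 = 0
    A, B, C = 3 * (1 + kappa), -(4 + 2 * kappa), 2
    disc = cmath.sqrt(B * B - 4 * A * C)
    b1, b2 = (-B - disc) / (2 * A), (-B + disc) / (2 * A)
    d2, d3, d4 = 2 / 2, 4 / 6, 8 / 24
    # 2x2 linear system a1*b1^k + a2*b2^k = d_k for k = 2, 3 (Cramer's rule)
    det = b1**2 * b2**3 - b2**2 * b1**3
    a1 = (d2 * b2**3 - d3 * b2**2) / det
    a2 = (d3 * b1**2 - d2 * b1**3) / det
    return (a1 * b1**4 + a2 * b2**4 - d4).real

# kappa = 1: the extra equation is satisfied, so the order rises from 3 to 4
assert abs(residual_2n(1.0)) < 1e-12
# kappa = 0: non-zero residual, here equal to -(1-kappa)/(9(1+kappa)) = -1/9
assert abs(residual_2n(0.0) + 1 / 9) < 1e-12
```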
8. LEADING TRUNCATION ERROR TERMS

The leading truncation error terms for the elements in [A_s] − [A] can be shown to be proportional to the next unused equation in equation (7). It can be shown that the residual in the equation for k = 2n in equation (7e) is

$$\sum_{j=1}^{n} a_j b_j^{2n} - d_{2n} = \frac{(-1)^{n+1}(1-\kappa)}{[(2n-1)!!]^2\,(1+\kappa)} \tag{32}$$

where (2n−1)!! = (2n−1)(2n−3) ⋯ 5 × 3 × 1. When κ = 1, as shown in the previous section, the residual is zero and the order of accuracy is increased by one.
The errors of an algorithm can be measured by the algorithmic damping ratio and the relative period error. They are related to the numerical dissipation and dispersion (or the amplitude and phase errors). If the complex eigenvalues λ_1 and λ_2 of the undamped numerical amplification matrix (i.e. ξ = 0) are expressed in the following form:

$$\lambda_{1,2} = \exp\bigl(\bar{\omega}\,\Delta t\,(-\bar{\xi} \pm i)\bigr) \tag{33}$$

where i = √−1, then ω̄ and ξ̄ are defined as the algorithmic frequency and algorithmic damping ratio, respectively. The relative period error is defined as (T̄ − T)/T, where T = 2π/ω and T̄ = 2π/ω̄.

It can be shown that the relative period error and the algorithmic damping ratio can in general be written as

$$\text{Relative period error} = \frac{(n-\kappa+n\kappa^2)\,\omega^{2n}\Delta t^{2n}}{(2n+1)(2n-1)\,[(2n-1)!!]^2\, 2^{2n-2}\,(1+\kappa)^2} + O(\Delta t^{2n+2}) \tag{34a}$$

$$\text{Algorithmic damping ratio} = \frac{(1-\kappa)\,\omega^{2n-1}\Delta t^{2n-1}}{[(2n-1)!!]^2\, 2^{2n-1}\,(1+\kappa)} + O(\Delta t^{2n+1}) \tag{34b}$$
From equation (34a), it can be seen that no real value of κ can eliminate the leading error term. However, it can be shown that the leading error term is minimized when κ = 1. In later sections, it is shown that the algorithms are also non-dissipative when κ = 1.
9. STABILITY CHARACTERISTICS

To study the stability characteristics of the proposed algorithms, the undamped numerical amplification matrix (i.e. with ξ = 0) is considered. The numerical amplification matrix in equation (6) is of the following form:

$$[A_{2n-1}] = a_0\begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix} + \sum_{k=1}^{n}\frac{a_k}{4+b_k^2\Omega^2}\begin{bmatrix}4-b_k^2\Omega^2 & 4b_k\Omega/\omega\\ -4b_k\omega\Omega & 4-b_k^2\Omega^2\end{bmatrix} = \begin{bmatrix}S & T/\omega\\ -\omega T & S\end{bmatrix} \tag{35}$$
where Ω = ωΔt. Since the a_k and b_k appear in complex conjugate pairs, the resultant expressions S and T are real. Therefore, the eigenvalues of the numerical amplification matrix [A_{2n−1}] are complex conjugates. The square of the magnitude of the eigenvalues is equal to the determinant of [A_{2n−1}]; obviously, |A_{2n−1}| ≥ 0. The determinant of [A_{2n−1}] can be written as
$$|A_{2n-1}| = S^2 + T^2 = \left(a_0 + \sum_{k=1}^{n} a_k\,\frac{4-b_k^2\Omega^2}{4+b_k^2\Omega^2}\right)^{\!2} + \left(\sum_{k=1}^{n} a_k\,\frac{4b_k\Omega}{4+b_k^2\Omega^2}\right)^{\!2} \tag{36}$$

After some algebraic manipulations, it can be simplified to

$$|A_{2n-1}| = 1 - \frac{(1-\kappa^2)\,\Omega^{2n}\,\Sigma_n^2}{(4+b_1^2\Omega^2)(4+b_2^2\Omega^2)\cdots(4+b_n^2\Omega^2)} \tag{37}$$
Making use of the special form of the d_k, it can be further simplified to

$$|A_{2n-1}| = 1 - \frac{(1-\kappa^2)\,\Omega^{2n}}{\displaystyle \Omega^{2n} + \sum_{k=1}^{n}\frac{1}{2}\,\frac{(2k)!\,(n+k-1)!}{k!\,k!\,(n-k)!}\,(n\kappa^2+2k\kappa+n)\,\Omega^{2(n-k)}} \tag{38}$$
It can be seen that as Ω (= ωΔt) approaches infinity, the magnitude of the eigenvalues |λ_∞| is, as expected, κ, i.e.

$$\lim_{\Delta t\to\infty}|\lambda_1|^2 = \lim_{\Delta t\to\infty}|\lambda_2|^2 = |\lambda_\infty|^2 = \lim_{\Omega\to\infty}|A_{2n-1}| = \kappa^2 \tag{39}$$

Hence, for a desired |λ_∞|, two values of κ (= ±|λ_∞|) are possible. However, the magnitude of the leading truncation error terms in equations (32), (34a) and (34b) increases as κ decreases from 1. As a result, for a desired ultimate spectral radius |λ_∞|, κ = |λ_∞| gives better results than κ = −|λ_∞|. In conclusion, only algorithms with 0 ≤ κ ≤ 1 are useful.
Note that the denominator in equation (38) is greater than zero for all Ω, n and κ. As a result, |A_{2n−1}| is less than unity for −1 < κ ≤ 1. In other words, the algorithms are unconditionally stable if −1 < κ ≤ 1, for all n. Furthermore, as all the coefficients in the denominator polynomial are greater than zero, the spectral radius decreases monotonically from unity toward the ultimate spectral radius as Ω increases. This is a good numerical dissipation property, as the high-frequency responses are damped progressively. Figure 1 shows the spectral radius for κ = 0.5 for various n. It can be seen that for larger n the spectral radius stays close to unity longer, and it decreases to 0.5 as Δt/T tends to infinity.

Figure 1. Comparison of spectral radii for various n (ξ = 0, κ = 0.5)
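Equation (38) is convenient for computing the spectral radius directly. A minimal sketch (my own illustration, for the undamped case) that also confirms the limits and the monotonic decay:

```python
import math

def det_A(n, kappa, Omega):
    """|A_{2n-1}| from equation (38); the spectral radius is its square root."""
    denom = Omega ** (2 * n)
    for k in range(1, n + 1):
        coef = (0.5 * math.factorial(2 * k) * math.factorial(n + k - 1)
                / (math.factorial(k) ** 2 * math.factorial(n - k)))
        denom += coef * (n * kappa**2 + 2 * k * kappa + n) * Omega ** (2 * (n - k))
    return 1 - (1 - kappa**2) * Omega ** (2 * n) / denom

n, kappa = 2, 0.5
rho = [math.sqrt(det_A(n, kappa, Om)) for Om in (0.0, 1.0, 5.0, 50.0, 1e6)]
assert rho[0] == 1.0                             # no dissipation at Omega = 0
assert all(x > y for x, y in zip(rho, rho[1:]))  # monotonic decrease
assert abs(rho[-1] - kappa) < 1e-9               # ultimate spectral radius
```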
10. PADÉ APPROXIMATIONS

It can be seen that the elements in the numerical amplification matrix are rational functions of Ω. In other words, the elements in the exact amplification matrix are approximated by rational functions. When κ = 1, the algorithms are shown to be non-dissipative and of order of accuracy 2n. Both the numerator and the denominator of the elements in the numerical amplification matrix are 2nth-degree polynomials in Ω. It is well known that the diagonal (n, n) Padé approximations give order-2n approximations with rational functions whose numerators and denominators are 2nth-degree polynomials in Ω for structural dynamics problems. Since the Padé approximation is unique for a given n, the present algorithms with κ = 1 re-derive the diagonal Padé approximations. Similarly, it can be shown that the present algorithms with κ = 0 correspond to the first sub-diagonal (n−1, n) Padé approximations, which give order-(2n−1) asymptotically annihilating approximations.
11. A FEW SIMPLE ALGORITHMS

In the following, some simple complex-time-step algorithms with small n are considered. The third-order algorithm can be obtained by using three pairs of a_j and b_j with b_0 = 0. It can be shown that a_0, a_1, a_2, b_1 and b_2 can be expressed in terms of κ as

$$a_0 = \tfrac{1}{2}(1+\kappa), \qquad a_1 = \frac{1-\kappa}{4} + i\,\frac{4+7\kappa+\kappa^2}{4\sqrt{2+2\kappa-\kappa^2}}, \qquad a_2 = \frac{1-\kappa}{4} - i\,\frac{4+7\kappa+\kappa^2}{4\sqrt{2+2\kappa-\kappa^2}} \tag{40a}$$

$$b_0 = 0, \qquad b_1 = \frac{2+\kappa}{3(1+\kappa)} - i\,\frac{\sqrt{2+2\kappa-\kappa^2}}{3(1+\kappa)}, \qquad b_2 = \frac{2+\kappa}{3(1+\kappa)} + i\,\frac{\sqrt{2+2\kappa-\kappa^2}}{3(1+\kappa)} \tag{40b}$$
Figure 2. Variations of a_j and b_j with κ for the third-order algorithms

In fact, b_1 and b_2 are the roots of the quadratic equation 3(1+κ)x² − (4+2κ)x + 2 = 0. It can be seen that the algorithmic parameters are complex if 1−√3 < κ < 1+√3. Figure 2 shows the variations of the algorithmic parameters with κ. It can be seen that a_1 and a_2 (b_1 and b_2) are complex conjugates.
When κ = 0, the algorithm is asymptotically annihilating and third-order accurate. The algorithmic parameters are given by

$$a_0 = \tfrac{1}{2}, \quad a_1 = \tfrac{1}{4} + \frac{i}{\sqrt{2}}, \quad a_2 = \tfrac{1}{4} - \frac{i}{\sqrt{2}}, \quad b_0 = 0, \quad b_1 = \tfrac{2}{3} - i\,\frac{\sqrt{2}}{3}, \quad b_2 = \tfrac{2}{3} + i\,\frac{\sqrt{2}}{3} \tag{41}$$
When κ = 1, the algorithm is non-dissipative and fourth-order accurate. The algorithmic parameters are given by

$$a_0 = 1, \quad a_1 = i\sqrt{3}, \quad a_2 = -i\sqrt{3}, \quad b_0 = 0, \quad b_1 = \tfrac{1}{2} - i\,\frac{\sqrt{3}}{6}, \quad b_2 = \tfrac{1}{2} + i\,\frac{\sqrt{3}}{6} \tag{42}$$
Figure 3(a) shows the spectral radii for various algorithms. The notation used is given in Table II. It can be seen that both the Houbolt and Park methods are asymptotically annihilating. However, the dissipation in the low-frequency range is quite severe for the Houbolt method. The (1, 2) Padé approximation is asymptotically annihilating with much smaller low-frequency dissipation. This algorithm is equivalent to the present third-order complex-time-step algorithm with κ = 0. For other values of κ, the spectral radii decrease smoothly to the respective ultimate spectral radii. The present complex-time-step algorithm is non-dissipative when κ = 1.

Figure 3(b) shows the algorithmic damping ratios for various algorithms. The present third-order complex-time-step algorithms are comparable to the second-order algorithms. Figure 3(c) shows the relative period errors for various algorithms. It can be seen that the relative period errors for the present third-order complex-time-step algorithms are smaller than those for the second-order algorithms and for Tarnow and Simo's fourth-order algorithm (TSNM).

Figure 3(a). Comparison of spectral radii with third-order complex-time-step algorithms (ξ = 0)

Figure 3(b). Comparison of algorithmic damping ratios with third-order complex-time-step algorithms (ξ = 0)
In Figures 3(a)-3(c), the spectral radii, the algorithmic damping ratios and the relative period errors for the third-order complex-time-step algorithm with κ = −0.5 are also shown. This algorithm has the same ultimate spectral radius as the one with κ = 0.5, but it can be seen that it is not as accurate. The same is true for other values of |κ| and for the other higher-order complex-time-step algorithms. As a result, only 0 ≤ κ ≤ 1 gives useful algorithms.

Other higher-order accurate complex-time-step algorithms can be constructed similarly. For example, the fifth- and seventh-order accurate algorithms can be constructed by solving for the b_j from 15(1+κ)x³ − 6(3+2κ)x² + 3(3+κ)x − 2 = 0 and 105(1+κ)x⁴ − 30(4+3κ)x³ + 30(2+κ)x² − 4(4+κ)x + 2 = 0, respectively. The fifth-order accurate algorithm has one real and one pair of complex time steps, while the seventh-order accurate algorithm has two pairs of complex time steps.
Figure 3(c). Comparison of relative period errors with third-order complex-time-step algorithms (ξ = 0)
Table II. Relative computational effort and notation for various algorithms

Method                               Notation   Effort#   Effort*   Order   Remark
Houbolt                              Houbolt    –         –         2       US, AA
Park                                 Park       –         –         2       US, AA
HHT (α = −0.3)                       HHT        –         –         2       US
Newmark (β = 1/4, γ = 1/2)           NM         1         1         2       US, ND
Central difference (β = 0, γ = 1/2)  CD         –         –         2       CS, ND
Fox–Goodwin (β = 1/12, γ = 1/2)      FG         1         1         2       CS, ND
Extrapolated Newmark                 ENM4       2         3         4       UU
Extrapolated Newmark                 ENM6       3         7         6       UU
Extrapolated Newmark                 ENM8       4         15        8       UU
Tarnow and Simo's Newmark            TSNM       2         3         4       US, ND
(1, 2) Padé approximation            P12        8         4         3       US, AA
(2, 2) Padé approximation            P22        8         4         4       US, ND
(2, 3) Padé approximation            P23        216       36        5       US, AA
(3, 3) Padé approximation            P33        216       36        6       US, ND
(3, 4) Padé approximation            P34        512       64        7       US, AA
(4, 4) Padé approximation            P44        512       64        8       US, ND
Present: third order                 CTS3       4         4         3       US, AD
Present: fourth order                CTS4       4         4         4       US, ND
Present: fifth order                 CTS5       5         5         5       US, AD
Present: sixth order                 CTS6       5         5         6       US, ND
Present: seventh order               CTS7       8         8         7       US, AD
Present: eighth order                CTS8       8         8         8       US, ND

Note: US: unconditionally stable; CS: conditionally stable; UU: unconditionally unstable; ND: non-dissipative; AD: arbitrary dissipation; AA: asymptotically annihilating; #: effort for coefficient matrix decomposition, compared with the Newmark method (see text); *: effort for forward and backward substitutions, compared with the Newmark method (see text)
Figure 4(a) shows the spectral radii for various higher-order complex-time-step algorithms. The same trend as for the third-order algorithms can be observed: the spectral radii decrease smoothly to the respective ultimate spectral radii, and the higher-order algorithms exhibit smaller low-frequency dissipation, as expected. Figures 4(b) and 4(c) show the algorithmic damping ratios and the relative period errors for the various higher-order complex-time-step algorithms, respectively. It can be seen that the algorithms are separated into three groups of increasing accuracy.

Figure 4(a). Comparison of spectral radii with various higher-order complex-time-step algorithms (ξ = 0)

Figure 4(b). Comparison of algorithmic damping ratios with various higher-order complex-time-step algorithms (ξ = 0)
12. FORCING EXCITATION RESPONSES

So far, only the leading truncation error terms in the numerical amplification matrix have been shown to be reduced by choosing the algorithmic parameters properly. It is of interest to check whether the truncation errors corresponding to the excitation responses are reduced as well by linearly combining the responses at the specific sub-step locations.

Figure 4(c). Comparison of relative period errors with various higher-order complex-time-step algorithms (ξ = 0)

The excitation force f(t) can in general be approximated by a polynomial function, or can be expanded into a Taylor series about t = 0 over a small time interval Δt as

$$f(t) = f(0) + f'(0)\,t + \tfrac{1}{2!}f''(0)\,t^2 + \tfrac{1}{3!}f'''(0)\,t^3 + \cdots + \tfrac{1}{n!}f^{(n)}(0)\,t^n + \cdots \qquad \text{for } 0 \leq t \leq \Delta t \tag{43}$$
If f(t) is a special function, such as a step function or a delta function, special treatment is required. For example, if f(t) is a delta function at t = 0 representing an impulsive force, the velocity after the impact at t = 0⁺ should be calculated from the momentum equations; the subsequent response is an unforced vibration with f(t) = 0. For an excitation described by a piecewise continuous function, the time step Δt should be chosen so that the excitation is continuous within the time step.

If a truncated Taylor series is used to represent the excitation, the terms retained should conform to the required accuracy. Penry and Wood34 noted that the accuracy of the numerical solution depends on an adequate approximation of the forcing function as well. In the following, the connection between the response and the excitation is studied.
Since the differential equation is linear, without loss of generality only one term needs to be considered. Assume f(t) = a t^n, where a is the amplitude coefficient and n is an integer. The differential equation under consideration becomes

$$\ddot{u}(t) + 2\xi\omega\,\dot{u}(t) + \omega^2 u(t) = a\,t^n \tag{44}$$

The exact solution of equation (44) can be divided into two parts: the homogeneous solution and the particular solution. The homogeneous solution corresponds to the free vibration with no excitation (i.e. a = 0). The solution is related to the analytical amplification matrix as

$$\begin{Bmatrix}u(\Delta t)\\ v(\Delta t)\end{Bmatrix} = [A(\Delta t)]\begin{Bmatrix}u_0\\ v_0\end{Bmatrix} \tag{45}$$
where v(t) = u̇(t), and u_0 and v_0 are the given initial displacement and velocity at t = 0, respectively. The expressions for the truncation errors and the procedure to reduce them have been studied in the previous sections.

The particular solution, corresponding to the excitation response of equation (44) with initial conditions u(0) = 0 and u̇(0) = 0 at t = 0, can be written as

$$\bar{u}(t) = a\left(\frac{c_0\, n!\, t^{n+2}}{(n+2)!} + \frac{c_1\, n!\, t^{n+3}}{(n+3)!} + \frac{c_2\, n!\, t^{n+4}}{(n+4)!} + \frac{c_3\, n!\, t^{n+5}}{(n+5)!} + \cdots\right) = a\sum_{k=0}^{\infty}\frac{c_k\, n!\, t^{n+2+k}}{(n+2+k)!} \tag{46}$$

where c_0 = 1, c_1 = −2ξω and c_k = −2ξω c_{k−1} − ω² c_{k−2} for k ≥ 2. It can be seen that the leading term of the displacement response for the excitation a t^n is proportional to t^(n+2). The velocity response, determined from the time derivative of equation (46), has a leading term proportional to t^(n+1). To maintain the accuracy of the numerical solutions, the particular solutions should have the same truncation errors as the homogeneous solutions, if not better. As a result, if an algorithm is mth-order accurate, i.e. the leading error term is proportional to Δt^(m+1), the excitation terms proportional to t^m, t^(m+1), etc. need not be considered. Only the terms from t⁰ to t^(m−1) are required.
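The recurrence for the c_k and the series in equation (46) can be verified against a case with a known closed form: for ξ = 0, ω = 1 and n = 0, the particular solution of equation (44) with zero initial conditions is a(1 − cos t). A small sketch (my own illustration):

```python
import math

def particular_series(t, n, xi, omega, a, terms=60):
    """Partial sum of the particular solution, equation (46)."""
    c = [1.0, -2 * xi * omega]
    for k in range(2, terms):
        c.append(-2 * xi * omega * c[k - 1] - omega**2 * c[k - 2])
    return a * sum(c[k] * math.factorial(n) * t ** (n + 2 + k)
                   / math.factorial(n + 2 + k) for k in range(terms))

# Undamped case, n = 0, omega = 1: equation (44) is u'' + u = a with zero
# initial conditions, whose exact particular solution is a(1 - cos t)
for t in (0.5, 1.0, 2.0):
    exact = 3.0 * (1 - math.cos(t))
    assert abs(particular_series(t, 0, 0.0, 1.0, 3.0) - exact) < 1e-9
```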
It can be shown that the responses evaluated by linearly combining the responses at the sub-step locations b_j Δt, with β = 1/4, γ = 1/2, u_0 = 0, v_0 = 0, n = 0 and f(t) = a, are given by

$$u(\Delta t) = \tfrac{1}{2}a c_0 \sum_j a_j b_j^2\,\Delta t^2 + \tfrac{1}{4}a c_1 \sum_j a_j b_j^3\,\Delta t^3 + \cdots = a\sum_{k=0}^{\infty}\frac{c_k}{2^{k+1}}\sum_j a_j b_j^{k+2}\,\Delta t^{k+2} \tag{47a}$$

$$v(\Delta t) = a c_0 \sum_j a_j b_j\,\Delta t + \tfrac{1}{2}a c_1 \sum_j a_j b_j^2\,\Delta t^2 + \cdots = a\sum_{k=0}^{\infty}\frac{c_k}{2^{k}}\sum_j a_j b_j^{k+1}\,\Delta t^{k+1} \tag{47b}$$

It can be seen that if the a_j and b_j satisfy equation (7), the corresponding accuracy of the excitation responses is increased to the same level of accuracy as in the numerical amplification matrix for n = 0 by the present complex-time-step procedure. In other words, the response accuracy is also enhanced by the proposed procedure for f(t) = a.
It can be shown that the responses evaluated by linearly combining the responses at the sub-step locations b_j Δt, with β = 1/4, γ = 1/2, u_0 = 0, v_0 = 0, n ≥ 1 and f(t) = ā t^n, are given by

$$u(\Delta t) = \tfrac{1}{4}\bar{a} c_0 \sum_j a_j b_j^{n+2}\,\Delta t^{n+2} + \tfrac{1}{8}\bar{a} c_1 \sum_j a_j b_j^{n+3}\,\Delta t^{n+3} + \cdots = \bar{a}\sum_{k=0}^{\infty}\frac{c_k}{2^{k+2}}\sum_j a_j b_j^{n+k+2}\,\Delta t^{n+k+2} \tag{48a}$$

$$v(\Delta t) = \tfrac{1}{2}\bar{a} c_0 \sum_j a_j b_j^{n+1}\,\Delta t^{n+1} + \tfrac{1}{4}\bar{a} c_1 \sum_j a_j b_j^{n+2}\,\Delta t^{n+2} + \cdots = \bar{a}\sum_{k=0}^{\infty}\frac{c_k}{2^{k+1}}\sum_j a_j b_j^{n+k+1}\,\Delta t^{n+k+1} \tag{48b}$$
Hence, for n ≥ 1, it is found that the approximate solutions match the exact solutions if

$$a\,\frac{n!}{(n+k+2)!} = \bar{a}\,\frac{1}{2^{k+2}}\sum_j a_j b_j^{n+k+2} \tag{49}$$

Making use of equation (7), that is Σ_j a_j b_j^n = 2^(n−1)/n!, this requires

$$\bar{a} = \frac{n!}{2^{n-1}}\,a = \frac{a}{d_n} \tag{50}$$
For n = 1 and 2, ā = a. However, for n ≥ 3, ā differs from a by a factor of n!/2^(n−1). In other words, the given excitation of the form

$$f(t) = f_0 + f_1 t + f_2 t^2 + f_3 t^3 + f_4 t^4 + f_5 t^5 + f_6 t^6 + f_7 t^7 + \cdots = f_0 + \sum_{k=1}^{\infty} f_k t^k \tag{51}$$

should be modified to the form

$$f(t) = f_0 + f_1 t + f_2 t^2 + \tfrac{3}{2} f_3 t^3 + 3 f_4 t^4 + \tfrac{15}{2} f_5 t^5 + \tfrac{45}{2} f_6 t^6 + \tfrac{315}{4} f_7 t^7 + \cdots = f_0 + \sum_{k=1}^{\infty} \frac{f_k}{d_k}\, t^k \tag{52}$$

in order to maintain higher-order accuracy in the response calculation using the present higher-order algorithms. The following numerical example confirms the importance of using the modified excitation in response calculations.
12.1. Numerical example

Consider the single-degree-of-freedom system in equation (3) with ξ = 0 and ω = 1. The system is subjected to a periodic excitation as shown in Figure 5. The excitation is piecewise continuous with period T = 1; within one period, f(t) is given by exp(2t) − 1. The periodic excitation shown in Figure 5 is not smooth at all. The effect of using the modified excitations rather than the actual excitation is studied.

Figure 5. Periodic forcing excitation f(t)

The Taylor series expansion of f(t) about t = 0 for 0 ≤ t ≤ 1 is

$$f(t) = 2t + 2t^2 + \tfrac{4}{3}t^3 + \tfrac{2}{3}t^4 + \tfrac{4}{15}t^5 + \tfrac{4}{45}t^6 + \tfrac{8}{315}t^7 + \cdots + \frac{2^n}{n!}\,t^n + \cdots \tag{53}$$
The modified excitations, also shown in Figure 5, for the various higher-order complex-time-step algorithms at t = 0 are

$$f(t) = 2t + 2t^2 + 2t^3 \quad \text{for the fourth-order algorithms} \tag{54a}$$

$$f(t) = 2t + 2t^2 + 2t^3 + 2t^4 + 2t^5 \quad \text{for the sixth-order algorithms} \tag{54b}$$

$$f(t) = 2t + 2t^2 + 2t^3 + 2t^4 + 2t^5 + 2t^6 + 2t^7 \quad \text{for the eighth-order algorithms} \tag{54c}$$
Table III shows the numerical results calculated using the present fourth-order complex-time-step method. It can be seen that the results are not very accurate with Δt = 1. The accuracy can be increased by using a smaller time step, Δt = 0.5. However, the accuracy is also improved by using the modified excitation. The improvement is not very significant, because the higher-order terms in the excitation are not yet incorporated in the calculation: the fourth-order algorithms cannot capture the higher-order characteristics of the excitation when Δt = 1.

In the same table, it can also be seen that the numerical results using the present sixth-order complex-time-step method are not very accurate with Δt = 1. The accuracy can be increased by using a smaller time step, Δt = 0.5. However, the inaccuracy is not due to the time-step size being too large, but mainly to the inappropriate approximation of the excitation. It can be seen that the accuracy is improved significantly by using the modified excitation with Δt kept at 1. The same conclusion can be drawn for the present eighth-order complex-time-step method. By using the modified excitations, the present higher-order algorithms can predict accurate responses with a large time step.
Table III. Comparison of numerical results for various algorithms using the modified excitation

Method                         Δt     u(t=1)    % diff.   u(t=2)    % diff.   u(t=10)   % diff.
Exact solution                 –      0.57346   –         2.62206   –         4.28188   –
Fourth-order CTS method        1.0    0.52212   −8.95%    2.32979   −11.15%   3.77968   −11.73%
Fourth-order CTS method        0.5    0.56427   −1.60%    2.57666   −1.73%    4.20651   −1.76%
Fourth-order CTS method
  with modified excitation     1.0    0.54352   −5.22%    2.42292   −7.59%    3.92911   −8.24%
Sixth-order CTS method         1.0    0.52988   −7.60%    2.34621   −10.52%   3.77866   −11.75%
Sixth-order CTS method         0.5    0.56501   −1.47%    2.57810   −1.68%    4.20643   −1.76%
Sixth-order CTS method
  with modified excitation     1.0    0.57150   −0.34%    2.60489   −0.66%    4.24831   −0.78%
Eighth-order CTS method        1.0    0.52985   −7.60%    2.34616   −10.52%   3.77851   −11.76%
Eighth-order CTS method        0.5    0.56501   −1.47%    2.57810   −1.68%    4.20642   −1.76%
Eighth-order CTS method
  with modified excitation     1.0    0.57338   −0.01%    2.62120   −0.03%    4.28016   −0.04%
Second-order Newmark method    0.25   0.61947   8.02%     2.72308   3.85%     4.43810   3.65%
Second-order Newmark method    0.10   0.58084   1.29%     2.63831   0.62%     4.30706   0.59%
The results obtained from the second-order accurate Newmark method with Δt = 0.25 and Δt = 0.1 are shown as well. It can be seen that the numerical results with Δt = 0.25 are not very accurate, while those with Δt = 0.1 are acceptable.

In conclusion, by using the modified excitations, the present complex-time-step algorithms can give accurate numerical results with a larger time step. The importance of using the modified excitations and the advantage of using higher-order algorithms are clearly demonstrated.
13. COMPUTATIONAL PROCEDURE

In general, an sth-order accurate numerical amplification matrix [A_s(Δt)] is given by equation (6). In actual computation, the matrices are not computed and combined. Instead, the results at the sub-step locations are evaluated individually using equation (2) and then combined with the corresponding weighting factors.

The computational procedure can be briefly summarized as follows:

(1) With the given initial conditions {u_0} and {v_0}, evaluate {U_j} and {V_j} at t_j = b_j Δt using equation (2). Note that when j = 0, b_0 = 0, {U_0} = {u_0} and {V_0} = {v_0}, and no evaluation is required. Besides, if b_1 and b_2 (= b̄_1) are complex conjugates, the {U_2} and {V_2} corresponding to b_2 can be obtained directly from the {U_1} and {V_1} corresponding to b_1, without re-computation, since {U_2} = {Ū_1} and {V_2} = {V̄_1}.

(2) {u_1} and {v_1} at the end of the time step are then obtained by combining the {U_j} and {V_j} from step (1) with the corresponding weighting factors a_j, i.e.

$$\{u_1\} = \sum_j a_j\{U_j\} \qquad \text{and} \qquad \{v_1\} = \sum_j a_j\{V_j\} \tag{55}$$

(3) The {u_1} and {v_1} obtained in the current time step become the initial conditions {u_0} and {v_0} for the next time step.

(4) Steps (1)-(3) are repeated until the whole time span of interest is covered.

It can be seen that the implementation of the underlying Newmark method in step (1) remains unchanged, except that complex time steps are used. After the numerical results at the sub-step locations are evaluated, they are combined linearly with the corresponding weighting factors.

Note that in step (1) the evaluations at the sub-step locations are independent of each other and can therefore be computed in parallel; the algorithms are thus very suitable for parallel computers. Ordinary time-step integration algorithms are sequential in nature and are not well suited to parallel computers. The present algorithms, on the other hand, are parallel in nature and can make use of parallel computers easily.
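Steps (1)-(4) can be sketched for a free undamped SDOF system as follows (my own illustration, using the κ = 0 third-order parameters of equation (41); for this scalar case the Newmark sub-step reduces to the closed-form update implied by equation (35)):

```python
import math

SQRT2 = math.sqrt(2)
# kappa = 0 third-order parameters from equation (41)
a0, a1 = 0.5, 0.25 + 1j / SQRT2
b1 = 2 / 3 - 1j * SQRT2 / 3

def march(u0, v0, omega, dt, nsteps):
    """Repeated steps (1)-(4): evaluate the Newmark (beta = 1/4, gamma = 1/2)
    result at the complex sub-step b1*dt, reuse its conjugate for b2, and
    combine with the weighting factors before starting the next step."""
    for _ in range(nsteps):
        q = (b1 * omega * dt) ** 2
        U1 = ((4 - q) * u0 + 4 * b1 * dt * v0) / (4 + q)          # step (1)
        V1 = (-4 * b1 * omega**2 * dt * u0 + (4 - q) * v0) / (4 + q)
        u0 = a0 * u0 + 2 * (a1 * U1).real                         # step (2)
        v0 = a0 * v0 + 2 * (a1 * V1).real                         # step (3)
    return u0, v0                                                 # step (4)

omega, dt, T = 1.0, 0.1, 10.0
u, v = march(1.0, 0.0, omega, dt, round(T / dt))
assert abs(u - math.cos(T)) < 1e-3 and abs(v + math.sin(T)) < 1e-3
```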
14. COMPUTATIONAL EFFORT

The third-order accurate L-stable (1, 2) Padé approximation and the fourth-order accurate A-stable (2, 2) Padé approximation can be implemented by using the collocation method, the time-discontinuous Galerkin method, the bi-discontinuous Galerkin method or the weighted residual method. For example, the resultant equation for the third-order accurate L-stable (1, 2) Padé approximation can be written as9,28

$$\begin{bmatrix} M + \tfrac{2}{3}C\Delta t + \tfrac{5}{18}K\Delta t^2 & -\tfrac{1}{6}C\Delta t - \tfrac{1}{9}K\Delta t^2\\[4pt] -M + \tfrac{1}{3}C\Delta t + \tfrac{7}{18}K\Delta t^2 & M + \tfrac{1}{6}C\Delta t - \tfrac{1}{18}K\Delta t^2 \end{bmatrix} \begin{Bmatrix}u_1\\ v_1\Delta t\end{Bmatrix} = \begin{bmatrix} M + \tfrac{2}{3}C\Delta t - \tfrac{2}{9}K\Delta t^2 & M\\[4pt] -M + \tfrac{1}{3}C\Delta t - \tfrac{1}{9}K\Delta t^2 & 0 \end{bmatrix} \begin{Bmatrix}u_0\\ v_0\Delta t\end{Bmatrix} \tag{56}$$
Similarly, the resultant equation for the fourth-order accurate A-stable (2, 2) Padé approximation can be written as27,28

$$\begin{bmatrix} M + C\Delta t + \tfrac{5}{12}K\Delta t^2 & \tfrac{1}{2}M - \tfrac{1}{12}C\Delta t - \tfrac{1}{12}K\Delta t^2\\[4pt] C\Delta t + \tfrac{1}{2}K\Delta t^2 & M - \tfrac{1}{12}K\Delta t^2 \end{bmatrix} \begin{Bmatrix}u_1\\ v_1\Delta t\end{Bmatrix} = \begin{bmatrix} M + C\Delta t - \tfrac{7}{12}K\Delta t^2 & \tfrac{3}{2}M - \tfrac{1}{12}C\Delta t - \tfrac{1}{12}K\Delta t^2\\[4pt] C\Delta t - \tfrac{1}{2}K\Delta t^2 & M - \tfrac{1}{12}K\Delta t^2 \end{bmatrix} \begin{Bmatrix}u_0\\ v_0\Delta t\end{Bmatrix} \tag{57}$$
The number of equations to be solved simultaneously is twice the number of degrees of freedom, and the bandwidth is also doubled. The computational effort for the coefficient matrix decomposition is proportional to the number of unknowns and to the square of the bandwidth, while the computational effort for the forward and backward substitutions is proportional to the number of unknowns and to the bandwidth only. As a result, when compared with the ordinary Newmark method, the computational effort to advance one time step is roughly increased by eight times for the coefficient matrix decomposition and by four times for the forward and backward substitutions.

The extrapolated Newmark method evaluates the results at the end of the time step several times with different sub-step sizes. For the fourth-, sixth- and eighth-order algorithms, there are 2 (Δt, Δt/2), 3 (Δt, Δt/2, Δt/4) and 4 (Δt, Δt/2, Δt/4, Δt/8) different sub-step sizes, respectively. Hence the computational effort for coefficient matrix decomposition is only 2, 3 and 4 times higher than for the ordinary Newmark method to advance one time step. However, there are more forward and backward substitutions: for the fourth-, sixth- and eighth-order algorithms, there are 3, 7 and 15 forward and backward substitutions, respectively. The numerical results given by the extrapolated Newmark methods are very accurate. Unfortunately, it can be shown that the extrapolated Newmark methods are unconditionally unstable.
When Tarnow and Simo's procedure is applied to the second-order accurate Newmark method, the resulting fourth-order algorithm is unconditionally stable and non-dissipative. It needs three evaluations to advance one time step, so the computational effort for the forward and backward substitutions is three times that of the ordinary Newmark method. As there are only two different time steps (αΔt and (1−2α)Δt, where α = (2 + 2^(1/3) + 2^(−1/3))/3 ≈ 1.3512), only two coefficient matrices need to be decomposed; the effort for coefficient matrix decomposition is only twice that of the ordinary Newmark method. In Tarnow and Simo's algorithm, the sub-marching results have to be computed sequentially. In the present algorithms, the results at all sub-step locations can be computed simultaneously in parallel, so the computation time can be reduced on parallel computers.
The computational effort for the present third- and fourth-order complex-time-step algorithms is higher than for the ordinary Newmark method, since complex numbers are used. In general, a complex multiplication costs four times as much as a real multiplication. However, as b_1 and b_2 are complex conjugates, the numerical results corresponding to b_2 can be obtained directly from those for b_1. As a result, the present third-order complex-time-step algorithms require 4 times the effort of the ordinary Newmark method to advance one time step. In return, the present algorithms are third-order accurate and unconditionally stable with controllable numerical dissipation, while the Newmark method is only second-order accurate, unconditionally stable and non-dissipative if 2β ≥ γ = 1/2, or first-order accurate, unconditionally stable and dissipative if 2β ≥ γ > 1/2.

For even higher-order algorithms, the present algorithms and the extrapolated Newmark method require more evaluations, while Tarnow and Simo's Newmark method is limited to fourth-order accuracy. The Padé approximations derived from the time-discontinuous Galerkin method, the bi-discontinuous Galerkin method, the Petrov–Galerkin method or the weighted residual method require the solution of even more simultaneous equations. For example, for an eighth-order algorithm derived from the Petrov–Galerkin method,29 the number of unknowns and the bandwidth are both 8 times those of the original problem. The computational effort for coefficient matrix decomposition and for forward and backward substitutions is therefore 8³ = 512 times and 8² = 64 times that of the ordinary Newmark method, respectively. Similarly, for the sixth-order algorithm derived from the Petrov–Galerkin method,29 the number of unknowns and the bandwidth are 6 times those of the original problem, and the corresponding efforts are 6³ = 216 times and 6² = 36 times those of the ordinary Newmark method.

The direct application of the Padé approximation would require the evaluation of powers of the coefficient matrix; the computational effort is very intensive as well.
For the present complex-time-step algorithms, the fifth-order algorithms require one complex and one real time-step evaluation; hence the computational effort is only 5 times that of the ordinary Newmark method. The seventh-order algorithms require two complex time-step evaluations, so the computational effort is only 8 times that of the ordinary Newmark method. It can be seen that the computational effort does not increase as fast as for the weighted residual method or the Petrov–Galerkin method. Besides, the present algorithms can be implemented readily on parallel computers.

Table II summarizes the relative computational effort of these higher-order algorithms, with the second-order accurate Newmark method used as the reference. From this table, it can be concluded that the present complex-time-step algorithms are efficient, with smaller computational effort for a given order of accuracy, and are suitable for parallel implementation. In addition, the numerical dissipation in the high-frequency range is controllable.
15. CONCLUSIONS
In this paper, the general formulation of the complex-ti