ISME-2003
XIII National Conference of Indian Society of Mechanical Engineers
December 30–31, 2003, IIT Roorkee
Paper number: TH-122
A FICTITIOUS DOMAIN MULTIGRID PRECONDITIONER FOR THE SOLUTION
OF POISSON EQUATION IN COMPLEX GEOMETRY
Krishna M. Singh∗, Assistant Professor, Department of Mechanical & Industrial Engineering,
Indian Institute of Technology, Roorkee 247 667. e-mail: singhfme@[Link]
J. J. R. Williams, Professor, Department of Engineering, Queen Mary, University of London,
London E1 4NS, UK. e-mail: [Link]@[Link]
∗ Corresponding author
Abstract
This paper presents a fictitious domain multigrid (FDMG) preconditioner for the solution of the
Poisson equation in complicated geometry. For discretization, a seven-point stencil based on the
central difference scheme on a structured rectangular mesh has been used. Sloping sides or curved
boundary surfaces are approximated using stepwise surfaces formed by the grid cells. The result-
ing linear algebraic system is symmetric for any type of boundary conditions. The preconditioned
conjugate gradient method has been used for the solution of this symmetric system. For parallel
implementation, domain decomposition in rectangular blocks with matching grids on interfaces has
been employed. The multilevel preconditioner for the CG is based on a V-cycle multigrid algorithm
applied to the Poisson equation on a fictitious domain formed by the union of the rectangular blocks
used for the domain decomposition. Numerical results are presented for two typical Poisson prob-
lems in complicated geometry — one representing heat conduction in a sphere, and the other one
arising from the LES/DNS of incompressible turbulent flow over a packed array of spheres. The nu-
merical results clearly show the efficiency and robustness of the FDMG preconditioner for Poisson
problems in complex geometry.
1 INTRODUCTION
Let Ω ⊂ R3 be the problem domain with a piecewise smooth boundary ∂Ω. The Poisson equation
governing the distribution of a scalar field u under the influence of the source field f is given by
−∇2 u = f in Ω (1)
with appropriate boundary conditions — Dirichlet, Neumann, Robin, and/or periodic — prescribed on
∂Ω. The preceding equation is encountered in many fields of science and engineering, typical examples
being heat conduction, diffusion, electrostatics, and numerical simulation of the Navier-Stokes equations.
In most practical applications, an analytical solution of the Poisson equation is not feasible, and an
approximate numerical solution is the only option. The latter generally requires the solution of a large
linear algebraic system that can be of the order of millions for a direct numerical solution (DNS) or a
large eddy simulation (LES) of turbulent flows, simulation of VLSI systems and other practical problems.
Solution of such large systems requires the use of robust and efficient iterative algorithms on parallel
computers. Domain decomposition based multilevel methods belong to this class of iterative algorithms.
This paper addresses the extension and application of one such method to the solution of the Poisson
equation in a complicated geometry.
A variety of discretization techniques, viz. finite element, boundary element, finite difference, finite
volume methods, etc., have been used for the numerical solution of the Poisson equation [1, 2, 3]. The
choice of a particular discretization method largely depends on the personal preference of the analyst,
prevalent trends in a particular application area, problem geometry and the context in which the Pois-
son equation arises. With our main interest in LES/DNS of turbulent flows, we prefer a cell-centered
finite difference discretization on a uniform structured rectangular grid. This choice makes the parallel
implementation of the discretization scheme, domain decomposition and multilevel/multigrid algorithms
much simpler than on unstructured grids. To account for the complicated geometric features, we employ
a stepwise approximation of the curved surfaces [4]. This finite difference (or a finite element) discretiza-
tion of the Poisson equation results in a symmetric linear algebraic system. Hence, we can use the conjugate
gradient method for its solution instead of the more expensive GMRES or BiCGStab methods.
Domain decomposition and multilevel methods provide a natural framework for the solution of PDEs
on parallel computers due to (a) the ease of parallelization and good parallel performance, (b) simplifi-
cation of the problem on complicated geometry, and (c) superior convergence properties [5]. There is a
vast amount of literature on domain decomposition methods for partial differential equations [5, 6, 7, 8].
With our choice of a structured rectangular grid, it is natural to employ a decomposition of the do-
main into rectangular blocks with matching interfaces aligned with the grid lines. Further, we consider
a symmetrized multiplicative multilevel Schwarz method as a preconditioner to the conjugate gradient
method owing to its superior convergence properties [5]. This method is essentially the V-cycle multi-
grid method which is traditionally used without a Krylov subspace accelerator [5, 9]. However, we use
it as a preconditioner in the context of the fictitious domain method for problems involving complicated
geometry.
The fictitious domain method is based on the embedding of the problem domain into a domain of
geometrically simpler form (e.g., a rectangular domain) on which the PDE can be solved using a fast
solver. This technique, more commonly referred to as the capacitance matrix method, was proposed by
Buzbee et al. [10] as a direct solution procedure, and studied later on as an iterative solution method
by Proskurowski and Widlund [11], Dryja [12], and others. This approach has also been used to derive
efficient preconditioners based on fast solvers for elliptic problems [13, 14, 15], and has been used in
solution of Navier-Stokes equations by Garbey and Vassilevski [16]. However, no one has attempted
a fictitious domain preconditioner employing domain decomposition based multilevel methods. The
present work is an attempt in this direction.
The rest of the paper is organized as follows. Section 2 presents an outline of the finite difference
discretization and the conjugate gradient method. Construction of the fictitious domain multilevel pre-
conditioner is described in Section 3. Parallel implementation issues are covered in Section 4 followed
by details of the numerical experiments and the conclusions in the subsequent sections.
2 DISCRETIZATION AND CONJUGATE GRADIENT METHOD
For the numerical solution of eq. (1), we use a cell-centered finite difference discretization on a uniform
Cartesian grid Ωh covering the solution domain Ω. Curved or slanted surfaces are approximated by
stepwise surfaces formed by the grid cells. Use of the standard seven-point stencil for discretization of
the Laplacian yields:
$$2\left(\frac{1}{h_x^2}+\frac{1}{h_y^2}+\frac{1}{h_z^2}\right)u_{i,j,k}
-\frac{1}{h_x^2}\left(u_{i-1,j,k}+u_{i+1,j,k}\right)
-\frac{1}{h_y^2}\left(u_{i,j-1,k}+u_{i,j+1,k}\right)
-\frac{1}{h_z^2}\left(u_{i,j,k-1}+u_{i,j,k+1}\right)=f_{i,j,k} \qquad (2)$$
where h_x, h_y and h_z represent the grid spacing in the respective coordinate directions. The preceding equa-
tion is modified for boundary cells to incorporate the prescribed boundary conditions.
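As an illustration, the action of this stencil on the interior cells of a three-dimensional array can be sketched as follows (Python/NumPy; a minimal sketch with hypothetical names, not the code of [19]; boundary cells would use the modified stencils just mentioned):

import numpy as np

def apply_laplacian(u, hx, hy, hz):
    # Action of the seven-point operator of eq. (2) on the interior cells of a
    # cell-centred field u; boundary cells keep the value zero here and would
    # need the modified coefficients described in the text.
    Au = np.zeros_like(u)
    diag = 2.0 / hx**2 + 2.0 / hy**2 + 2.0 / hz**2
    Au[1:-1, 1:-1, 1:-1] = (
        diag * u[1:-1, 1:-1, 1:-1]
        - (u[:-2, 1:-1, 1:-1] + u[2:, 1:-1, 1:-1]) / hx**2
        - (u[1:-1, :-2, 1:-1] + u[1:-1, 2:, 1:-1]) / hy**2
        - (u[1:-1, 1:-1, :-2] + u[1:-1, 1:-1, 2:]) / hz**2
    )
    return Au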
Figure 1: Decomposition of a complex domain on a Cartesian grid into rectangular blocks. The thick outline
denotes the fictitious domain Π_b.
Let N denote the number of cells in the solution domain Ωh . Assembling the discrete equation (2)
(or its modified form for boundary cells) for all the cells results in the linear algebraic system
Ax = b (3)
where A is a square matrix of order N, x is the vector of unknowns u_{i,j,k}, and the vector b contains the
source terms f_{i,j,k} and the contribution of the prescribed boundary conditions. It can be shown that A is an
irreducible diagonally dominant symmetric matrix, and hence we can use the preconditioned conjugate
gradient (PCG) method for its solution.
Let x^0 be an initial guess, and define r^0 = b − A x^0. Also, let M^{-1} denote the preconditioner. The
PCG algorithm [7] consists of the following set of computations for k = 0, 1, . . .:
$$\begin{aligned}
z^k &= M^{-1} r^k, &\quad \beta_k &= (z^k, r^k)/(z^{k-1}, r^{k-1}), \quad \beta_0 = 0\\
p^k &= z^k + \beta_k\, p^{k-1}, &\quad \gamma_k &= (z^k, r^k)/(p^k, A p^k)\\
x^{k+1} &= x^k + \gamma_k\, p^k, &\quad r^{k+1} &= r^k - \gamma_k\, A p^k
\end{aligned} \qquad (4)$$
Iterations are stopped at iteration k if
$$\|r^k\| \le \varepsilon\, \|r^0\| \qquad (5)$$
where ε is the required relative accuracy. In the preceding equations, (u, v) and ‖·‖ denote the Euclidean
scalar product and norm given by
$$(u, v) = \sum_{i=1}^{n} u_i v_i, \qquad \|v\| = (v, v)^{1/2}, \qquad u, v \in \mathbb{R}^n. \qquad (6)$$
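A minimal serial sketch of these recursions is given below (Python/NumPy; apply_A and apply_Minv are hypothetical callables standing for the matrix-vector product and the action of the preconditioner; the parallel realisation is discussed in Section 4):

import numpy as np

def pcg(apply_A, apply_Minv, b, x0, eps=1e-6, max_iter=500):
    # Preconditioned conjugate gradient following eqs. (4)-(5).
    x = x0.copy()
    r = b - apply_A(x)
    r0_norm = np.linalg.norm(r)
    z = apply_Minv(r)
    p = z.copy()
    zr_old = np.dot(z, r)
    for k in range(max_iter):
        if np.linalg.norm(r) <= eps * r0_norm:   # stopping criterion (5)
            break
        Ap = apply_A(p)
        gamma = zr_old / np.dot(p, Ap)
        x = x + gamma * p
        r = r - gamma * Ap
        z = apply_Minv(r)                        # preconditioning step
        zr_new = np.dot(z, r)
        beta = zr_new / zr_old
        p = z + beta * p
        zr_old = zr_new
    return x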
3 FICTITIOUS DOMAIN MULTIGRID PRECONDITIONER
Multigrid methods are widely accepted as the fastest methods for solution of elliptic PDEs on domains
for which one can easily construct a sequence of nested grids and related operators [9]. However, for
problems involving complex geometry, it becomes very difficult (if not impossible) to generate the grids
and multigrid components. For such problems, one simple approach is to employ the concept of the
fictitious domain method, which introduces a simple imbedding domain. The multigrid method can then be
used for the solution of the PDE on the geometrically simple imbedding domain, thereby yielding an efficient
preconditioner for a Krylov subspace solver.
Let Π denote a rectangular domain imbedding Ω. We can decompose Π into N_α non-overlapping
rectangular blocks Π_b^α which have their surfaces parallel to the global Cartesian grid surfaces. A minimal
covering Π_b of Ω can be constructed using N_b (≤ N_α) rectangular blocks. Thus, Π_b = ∪_α Π_b^α (α =
1, . . . , N_b). Clearly, Π_b ⊆ Π; in many situations, Π_b would be a proper subset of Π. Each interface
of a block in Π_b is either linked to a neighbouring block or represents a part (or approximation) of the
boundary ∂Ω (see Figure 1). We define Π_b to be the fictitious domain imbedding Ω for the construction of
the multigrid preconditioner.
Let B represent the system matrix obtained from the discretization of a PDE, e.g. the Poisson equa-
tion (1), on Π_b. The traditional fictitious domain method constructs the solution vector x using the solutions
of a set of linear problems B q_i = y_i obtained with a fast solver [7, 10, 11]. We can instead employ this
approach as a preconditioner for an iterative solver, wherein we solve only one equation B z̄ = r̄ at each
iteration. Suppose that the iterative solver requires the solution of M z^k = r^k at iteration k as the
effect of the preconditioner. To obtain z^k, we employ a multiplicative multilevel Schwarz method (which
is essentially the standard V-cycle multigrid) due to its superior convergence properties [5].
Let Π_b^{α(j)}, j = 0, . . . , l, denote a family of (l+1) nested rectangular grids of the block Π_b^α, and let
Π_b^{(j)} = ∪_α Π_b^{α(j)} represent the j-th level grid on the fictitious domain Π_b. Let the superscript (j) denote a
quantity associated with the j-th grid level. Thus, B^{(j)} is the matrix obtained by discretizing the Poisson
equation on the j-th grid. Let R^{(j)} and P^{(j)} be the restriction and prolongation operators, respectively, on
the j-th grid level. Further, let us define a smoothing procedure denoted as
$$\bar{w}^{(j)} = \mathrm{SMOOTH}_\nu\bigl(w^{(j)}, B^{(j)}, r^{(j)}\bigr) \qquad (7)$$
which represents the use of ν iterations of a classical iterative method, such as the Gauss-Seidel or Jacobi
method, applied to the linear system B^{(j)} x^{(j)} = r^{(j)} starting with the initial approximation w^{(j)}.
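As an illustration, one possible realisation of SMOOTH_ν for the seven-point stencil is a red-black Gauss-Seidel sweep, the smoother adopted later for the parallel implementation (Python/NumPy; a minimal single-block sketch with hypothetical names):

import numpy as np

def smooth_gs_rb(u, f, hx, hy, hz, nu=1):
    # nu sweeps of red-black Gauss-Seidel for the seven-point stencil of eq. (2),
    # updating interior cells only; cells are coloured by the parity of (i+j+k)
    # so that all cells of one colour can be updated simultaneously.
    diag = 2.0 / hx**2 + 2.0 / hy**2 + 2.0 / hz**2
    i, j, k = np.indices(u.shape)
    interior = np.zeros(u.shape, dtype=bool)
    interior[1:-1, 1:-1, 1:-1] = True
    for _ in range(nu):
        for colour in (0, 1):                      # red sweep, then black sweep
            rhs = np.zeros_like(u)
            rhs[1:-1, 1:-1, 1:-1] = (
                f[1:-1, 1:-1, 1:-1]
                + (u[:-2, 1:-1, 1:-1] + u[2:, 1:-1, 1:-1]) / hx**2
                + (u[1:-1, :-2, 1:-1] + u[1:-1, 2:, 1:-1]) / hy**2
                + (u[1:-1, 1:-1, :-2] + u[1:-1, 1:-1, 2:]) / hz**2
            )
            mask = interior & ((i + j + k) % 2 == colour)
            u[mask] = rhs[mask] / diag
    return u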
We can now define a symmetric multiplicative multilevel algorithm (which is identical to the standard
V-cycle multigrid) as follows. Starting with an approximation x_m^{(j)}, the algorithm obtains the next iterate
x_{m+1}^{(j)} by
$$x_{m+1}^{(j)} = \mathrm{VCYC}\bigl(r_m^{(j)},\, j,\, x_m^{(j)}\bigr) \qquad (8)$$
where the algorithm VCYC consists of the following recursive procedure [5, 9]:
• If j = 0, then use a direct or fast iterative solver to solve B^{(0)} x^{(0)} = r^{(0)}.
• Else
  - Pre-smoothing: x̄_m^{(j)} = SMOOTH_ν(x_m^{(j)}, B^{(j)}, r_m^{(j)})
  - Defect computation: r̄_m^{(j)} = r_m^{(j)} − B^{(j)} x̄_m^{(j)}
  - Restriction: r̄_m^{(j−1)} = R^{(j−1)} r̄_m^{(j)}
  - Coarse-grid correction: x_{m'}^{(j)} = x̄_m^{(j)} + P^{(j)} VCYC(r̄_m^{(j−1)}, j − 1, 0)
  - Post-smoothing: x_{m+1}^{(j)} = SMOOTH_ν(x_{m'}^{(j)}, B^{(j)}, r_m^{(j)})
• End If
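A recursive sketch of VCYC in matrix form is given below (Python/NumPy; B, R and P are assumed to be lists of level matrices/operators, and smooth and coarse_solve are placeholder callables, all hypothetical names):

import numpy as np

def vcyc(r, j, x, B, R, P, smooth, coarse_solve):
    # One V-cycle on the level hierarchy j = 0 (coarsest) ... l (finest).
    # B[j]         : system matrix on level j
    # R[j], P[j]   : restriction to level j and prolongation from level j
    # smooth(x,B,r): nu sweeps of Jacobi / Gauss-Seidel smoothing
    # coarse_solve : direct or fast iterative solver for the coarsest level
    if j == 0:
        return coarse_solve(r)
    x_bar = smooth(x, B[j], r)                 # pre-smoothing
    defect = r - B[j] @ x_bar                  # defect computation
    defect_c = R[j - 1] @ defect               # restriction to level j-1
    corr = vcyc(defect_c, j - 1, np.zeros_like(defect_c),
                B, R, P, smooth, coarse_solve)
    x_corr = x_bar + P[j] @ corr               # coarse-grid correction
    return smooth(x_corr, B[j], r)             # post-smoothing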
Based on the algorithm VCYC, we can define the fictitious domain multigrid (FDMG) precondi-
tioner as:
Fictitious Domain Multigrid Preconditioner
• Set r̄^l = (r^k, 0)^T and z̄_0^l = (z_0^k, 0)^T.
• n multigrid cycles: z̄_m^l = VCYC(r̄^l, l, z̄_{m−1}^l), m = 1, . . . , n.
• Correction term: z^k = z̄_n^l |_{Ω_h}
In the preceding algorithm, z_0^k denotes the initial guess for the correction term. In VCYC, we can
opt for either Jacobi or Gauss-Seidel iteration for smoothing. The latter choice normally gives a faster
numerical convergence [5], hence, we opt for it. For parallel implementation, we choose the red-black
Gauss-Seidel (GS-RB) smoother. We have opted for ν = 1 in all numerical experiments in this work.
Further, standard multigrid restriction and prolongation operators [17] have been used together with a
preconditioned conjugate gradient method for the solution of the coarsest grid problem (in which the
diagonal preconditioner has been used).
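Combining the pieces, the preconditioning step z^k = M^{-1} r^k of the PCG iteration could be realised as sketched below (reusing the hypothetical vcyc function above; in_omega is an assumed boolean mask selecting the cells of Ω_h among the cells of Π_b):

import numpy as np

def fdmg_precondition(r_k, in_omega, n_cycles, l, B, R, P, smooth, coarse_solve):
    # Fictitious domain multigrid preconditioner: extend the residual by zero
    # onto the fictitious domain Pi_b, run n V-cycles there, and restrict the
    # result back to the solution domain Omega_h.
    mask = in_omega.ravel()
    r_bar = np.zeros(mask.size)
    r_bar[mask] = r_k                        # r_bar = (r^k, 0)^T
    z_bar = np.zeros_like(r_bar)             # initial guess z_0^k taken as zero
    for _ in range(n_cycles):                # n multigrid cycles on Pi_b
        z_bar = vcyc(r_bar, l, z_bar, B, R, P, smooth, coarse_solve)
    return z_bar[mask]                       # correction term z^k = z_bar restricted to Omega_h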
Let us note that the fictitious domain multigrid preconditioner defined above is not restricted to a par-
ticular PDE or discretization scheme. It is completely general and can be used for a wide class of
PDEs in conjunction with an appropriate Krylov subspace solver, multigrid components and discretization
technique.
4 PARALLEL IMPLEMENTATION
The core issues in parallel implementation of the conjugate gradient method with a multilevel precondi-
tioner are: (a) incorporation of overlapping domain decomposition, nested grids and related data struc-
tures preserving data locality, and (b) the implementation of the PCG and the multilevel preconditioner.
Let us take a closer look at each of these aspects.
We consider each block Π_b^α used in the domain decomposition as a derived data type or object con-
taining its complete geometric information, a graph of links describing its interconnection to neighbouring
blocks, discretization data, and partitions of all the matrices or data fields required in the solution pro-
cess. These blocks together with their overlap regions define the partition of the domain into overlapping
subdomains. We have opted for an overlap of one grid layer at all the grid levels. This is sufficient for
localized (i.e. the block-level) computations in the conjugate gradient method, as well as for the restric-
tion and prolongation operations in the multigrid preconditioner. Any data pertaining to the overlapping
regions are transmitted to neighbouring subdomains via message passing. These communications have
been implemented using standard message passing libraries — MPI and PVM.
With the preceding domain decomposition, parallel implementation of the PCG method becomes
very simple. Each processor takes care of the computation or updating of the fields p^k, z^k, r^k, x^k and
the matrix-vector product Ap^k for the blocks assigned to it. Similarly, each processor evaluates the inner
product of the vector elements for the blocks residing on it. These partial inner products are summed
using global communication to form the full inner product. Global exchange of the data field pk is also
required in the overlap region for the local evaluation of the matrix-vector product Ap^k.
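As a sketch of how the global inner products are formed (using mpi4py purely for illustration; the actual code communicates through MPI/PVM in its own layer):

import numpy as np
from mpi4py import MPI

def global_dot(u_local, v_local, comm=MPI.COMM_WORLD):
    # Block-level partial inner product over the cells owned by this processor
    # (overlap/ghost cells excluded so that no entry is counted twice),
    # followed by a global sum across all processors.
    partial = float(np.dot(u_local, v_local))
    return comm.allreduce(partial, op=MPI.SUM)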
Our parallel implementation of multigrid is very similar to that of Degani and Fox [18]. We construct
the coarse grids and related data structures automatically and recursively based on the information on a
given fine grid. With the choice of the above mentioned domain decomposition at all the grid levels,
parallel implementation of the restriction and prolongation operations becomes straightforward provided
we update the data in the overlap regions at each grid level after each operation. With the use of red-black
ordering of the grids, implementation of the smoother also becomes fully parallel. The only component
which requires special attention is the solution at the coarsest grid level. To reduce the communication
and simplify the parallel implementation, we solve the problem at the coarsest grid on all the processors.
Although this approach involves redundant computations (since solution of the problem at one processor
is all that we require), it requires only one global broadcast of the residuals at each iteration since the
solution data at the coarsest grid level would be available on all the processors. Let us note that the matrix
data for the coarsest grid level is communicated only once (in the beginning of all the computations), and
hence, it is available on all the processors for all future computations.
Table 1: Effect of the number of multigrid cycles on the performance of the fictitious domain multigrid
preconditioner for heat conduction in the sphere.
MG Cycles Iterations CPU Time
1 42 44.92 s
2 36 66.34 s
3 35 93.70 s
5 NUMERICAL EXPERIMENTS
To explore the performance of the fictitious domain multigrid preconditioner for the solution of Poisson
problems in complex domains, we have applied it to two test problems, viz. (a) heat conduction in a
sphere, and (b) Poisson problem arising from the LES/DNS of incompressible turbulent flow over a
packed array of spheres (representing a rough bed).
All the experiments have been performed on either an SGI Origin 2000 or a Cray T3E. The multi-
grid preconditioner has been implemented as a part of an existing parallel complex geometry LES code
(CgLes) based on domain decomposition [19]. The relative stopping criterion (5) with ε = 10^-6 has been
used for the PCG iterations.
5.1 Heat conduction in a sphere
Let us consider the Poisson problem representing conduction of heat in a solid sphere of radius a with
heat produced in it at a constant rate q_0 per unit time per unit volume. The surface of the sphere is main-
tained at zero temperature. Analytical solution of this problem is given by
$$u^e(r) = \frac{q_0}{6\kappa}\left(a^2 - r^2\right) \qquad (9)$$
where r is the distance from the center of the sphere and κ is the thermal conductivity. In the context of
structured Cartesian grids, the sphere does represent a complex geometry whose surface is approximated
by steps in the grid. The fictitious domain in this case is the imbedding cube of side 2a.
To assess the overall accuracy of the numerical solution, we use the discrete L2 norm. The global L2
error and relative error are given by
$$\|e_u\|_2 = \Bigl(\sum_{i,j,k} \bigl(u^e_{i,j,k} - u_{i,j,k}\bigr)^2\Bigr)^{1/2}, \qquad \eta_2 = 100 \times \|e_u\|_2 / \|u\|_2 \;\; \text{(in \%)} \qquad (10)$$
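For reference, a minimal sketch of this error measure (Python/NumPy; u_num and u_exact are assumed to hold the numerical and analytical cell values on Ω_h):

import numpy as np

def relative_l2_error(u_num, u_exact):
    # Discrete L2 error and relative error eta_2 of eq. (10), returned in percent.
    err = np.sqrt(np.sum((u_exact - u_num) ** 2))
    return 100.0 * err / np.sqrt(np.sum(u_num ** 2))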
First, let us look at the effect of the number of multigrid cycles on the convergence of the PCG iterations
and computing efficiency. Table 1 presents results obtained with different numbers of cycles for a 128³
discretization using 4 processors on the Origin 2000. We can see that although increasing the number of
cycles slightly improves the convergence behaviour (by reducing the number of PCG iterations), overall
CPU time increases. Hence, the optimal choice seems to be a single multigrid cycle in the FDMG
preconditioner, which is used for the subsequent experiments.
Table 2 presents the error in the solution obtained with the fictitious domain multigrid preconditioner
(FDMG) with different discretizations on a single processor on SGI Origin 2000. We can clearly see that
the results are very accurate for single domain as well as domain decomposition implementation of the
preconditioner. Single domain implementation requires slightly less CPU time than the DD implemen-
tation of the preconditioner as expected.
Figure 2 shows the convergence history of the PCG with FDMG preconditioner vis-à-vis simple di-
agonal preconditioner. We can observe excellent convergence behaviour of the PCG iterations with the
fictitious domain MG preconditioner. In fact, far fewer iterations are required with the FDMG precondi-
tioner than with the diagonal preconditioner for any level of solution accuracy.
Table 2: Accuracy of numerical solution with the fictitious domain multigrid preconditioner for heat
conduction in the sphere.
         Single block (no domain decomposition)       2 × 2 × 2 decomposition of the domain
Grid     Iterations   CPU Time   Rel. Error η2        Iterations   CPU Time   Rel. Error η2
32³      20           1.04 s     1.36 %               21           1.46 s     1.36 %
64³      27           11.9 s     0.84 %               30           15.5 s     0.84 %
128³     40           172 s      0.33 %               42           175 s      0.33 %
Figure 2: Heat conduction in a sphere: convergence of PCG iterations (64³ grid).
5.2 Poisson problem arising in LES/DNS of flow over a rough bed
To explore the performance of the fictitious domain multigrid preconditioner for the solution of large
scale Poisson problems in complex domains, let us consider the Poisson problem for pressure arising
from the LES/DNS of the incompressible turbulent flow over a rough bed consisting of a packed array of
spheres. The computational box has dimensions 16√3 d × 16d × 4d, where d is the diameter of the spheres
arranged in a hexagonal array on the bed. For parallel solution, the domain has been decomposed into
16 × 4 × 16 blocks. Figure 3 gives a schematic representation of the cross-sectional and top view of
the domain decomposition for a quarter of the domain. We have used a global grid of 512 × 64 × 256
(N = 7,122,944) for LES, and 1024 × 128 × 512 (N = 56,938,500) for DNS. A stopping tolerance of
ε = 10^-6 has been used for the PCG iterations (unless indicated otherwise). Further, to judge the overall
parallel performance of the algorithm, we examine the traditionally used measures, viz. speedup, and
scalability.
5.2.1 Convergence behaviour
Figure 4 shows the convergence history of the PCG with FDMG preconditioner for the pressure Poisson
equation arising from the LES on the Origin 2000 (the same pattern is observed for the DNS runs on the
Cray T3E and hence is not shown here). We can observe excellent convergence behaviour of the PCG
iterations with the fictitious domain multigrid preconditioner. We can again observe (Figure 4 and Table 3)
that far fewer iterations and much less CPU time are required with the FDMG preconditioner than with the
diagonal preconditioner for any level of solution accuracy. The excellent convergence behaviour together
with the vastly superior computing efficiency of the FDMG preconditioner for this large-scale problem in
complicated geometry clearly establish its suitability for Poisson problems in complex geometry.
5.2.2 Speedup
The speedup S_p is defined as [5]
$$S_p = \frac{T_1}{T_p} \qquad (11)$$
Figure 3: Poisson problem arising in LES/DNS of flow over a rough bed: domain decomposition (quarter
of the domain); (a) cross-section, (b) top view.
Figure 4: Poisson problem arising in LES of flow over rough bed: convergence of PCG iterations.
Table 3: Poisson problem arising in LES/DNS of flow over rough bed: CPU time required for different
levels of solution accuracy.
         Fictitious domain MG preconditioner     Diagonal preconditioner         Rel. efficiency
ε        Iterations   Time (T_FDMG)              Iterations   Time (T_DIAG)      (T_DIAG / T_FDMG)
LES runs on Origin 2000 (No. of processors = 8)
10^-4    6            26.70 s                    174          174.22 s           6.52
10^-6    18           79.68 s                    647          647.44 s           8.12
10^-8    29           126.90 s                   855          855.54 s           6.74
DNS runs on Cray T3E (No. of processors = 256)
10^-4    5            4.98 s                     856          253.06 s           50.8
10^-6    18           17.90 s                    3042         909.46 s           50.8
10^-8    30           29.79 s                    4045         1205.7 s           40.5
where T_1 is the time required to run a problem of size N on one processor (possibly using an optimal
sequential algorithm), and T_p is the time required to run it using p processors. E_p = S_p / p is called the
parallel efficiency. In practice, it is very difficult to use this quantitative measure to assess the parallel
performance of an algorithm for a problem too large to be solved on a single processor. In this case, we
can instead use a relative speedup
$$S_p^r = \frac{T_{p^*}}{T_p} \qquad (12)$$
to judge if the use of a larger number of processors would result in lower computing time. Here, p∗
denotes the minimum number of processors required for the given size of the problem on a particular
parallel computer. Similarly, we can define a modified parallel efficiency E_p^r = S_p^r / (p / p^*).
Table 4 gives the results for the LES runs on Cray T3E and SGI Origin 2000 machines. Excellent
speedup and parallel efficiency can be seen for the FDMG preconditioner with the increasing number of
processors on both the machines.
5.2.3 Scalability
An algorithm is said to be scalable if its parallel efficiency is constant (or at least bounded away from
zero) when the number of processors is increased while keeping the local problem size fixed in each
processor [5].
To study the scalability of the fictitious domain multigrid preconditioner for this problem, let us
consider the requirements of LES and DNS runs. The DNS represents an increase in the problem size by
Table 4: Poisson problem for LES of rough-bed flow: speedup of the FDMG preconditioner for a fixed
problem size with increasing number of processors.
LES runs on Cray T3E (Iterations = 18)                LES runs on Origin 2000 (Iterations = 18)
Processors   Time      Speedup (S_p^r)   E_p^r        Processors   Time      Speedup (S_p)   E_p
32           25.00 s   1.00              1.00         1            460.5 s   1.00            1.00
64           16.26 s   1.54              0.77         2            259.4 s   1.77            0.88
128          9.02 s    2.77              0.69         4            149.2 s   3.09            0.77
Table 5: Poisson problem arising in LES/DNS of flow over rough bed: Scalability of the FDMG precon-
ditioner keeping fixed size subproblems per processor.
Processors   Rectangular grid     N            Iterations   Time
32           512 × 64 × 256       7,122,944    18           24.20 s
256          1024 × 128 × 512     56,938,500   18           17.90 s
a factor of 8 as compared to the LES. Although these two runs represent two different physical problems
(as the source term would be different), execution time requirement would still provide a very good
indication of the scalability. Results in Table 5 for the runs on Cray T3E indicate perfect scalability
when increasing the problem and the machine size.
Results for the preceding problems clearly indicate that the fictitious domain multigrid preconditioner
is very well suited for the solution of large scale Poisson problems arising from the LES/DNS of turbulent
flows in complicated domains on parallel computers.
6 CONCLUSIONS
We have presented a parallel multigrid preconditioner for the solution of the Poisson equation in com-
plicated geometry. We have used a decomposition of the problem domain into rectangular blocks with
matching interfaces on a uniform structured rectangular grid. A stepwise approximation has been used
to approximate the sloping sides or complicated geometric features in the problem domain. The central
difference discretization of the Laplacian results in a symmetric linear algebraic system, which is solved
using the preconditioned conjugate gradient method. For the construction of the parallel multilevel precondi-
tioner, we have used the fictitious domain approach in conjunction with the domain decomposition. The
union of the rectangular blocks employed in domain decomposition has been used as the fictitious do-
main. Numerical results are presented for two typical Poisson problems in complicated geometry — one
representing heat conduction in a sphere, and the other one arising from the LES/DNS of incompressible
turbulent flow over a packed array of spheres. The first problem has been used to verify the accuracy,
whereas the second one tests the suitability of the algorithm for large scale problems in complex ge-
ometry on two different parallel machines. Results on convergence, computational efficiency vis-à-vis
the simple diagonal preconditioner, speed-up and scalability for these problems clearly establish the
efficiency, suitability and robustness of the proposed approach for problems in complex geometry.
Let us note that although we have constructed and used the fictitious domain multigrid preconditioner
for the solution of Poisson problems in the context of a cell-centered finite difference scheme, the approach
is very general and can be used with any discretization method (FEM, FVM, etc.) by an appropriate choice
of the Krylov subspace method and multigrid components for a wider class of problems.
References
[1] O. C. Zienkiewicz, R. Taylor, The finite element method, McGraw-Hill, 1989.
[2] C. A. Brebbia, J. C. F. Telles, L. C. Wrobel, Boundary Element Techniques, Springer-Verlag, Berlin, 1984.
[3] G. Smith, Numerical Solution of Partial Differential Equations: Finite Difference Methods, 2nd Edition,
Oxford Univ. Press, 1985.
[4] J. H. Ferziger, M. Perić, Computational Methods for Fluid Dynamics, 2nd Edition, Springer-Verlag, Berlin,
1999.
[5] B. F. Smith, P. E. Bjørstad, W. Gropp, Domain Decomposition: Parallel Multilevel Methods for Elliptic
Partial Differential Equations, Cambridge University Press, 1996.
[6] A. Quarteroni, A. Valli, Domain Decomposition Methods for Partial Differential Equations, Oxford Univer-
sity Press, Oxford, 1999.
[7] G. Meurant, Computer Solution of Large Linear Systems, Elsevier Science B.V., Amsterdam, 1999.
[8] C. C. Douglas, M. B. Douglas, MGNet Bibliography, URL [Link] Depart-
ment of Computer Science, Yale University, New Haven, CT, USA, 1991–2002.
[9] U. Trottenberg, C. W. Oosterlee, A. Schüller, Multigrid, Academic Press, London, 2001.
[10] B. L. Buzbee, F. W. Dorr, J. A. George, G. H. Golub, The direct solution of the discrete Poisson equation on
irregular regions, SIAM Journal on Numerical Analysis 8 (4) (1971) 722–736.
[11] W. Proskurowski, O. B. Widlund, On the numerical solution of Helmholtz’s equation by the capacitance
matrix method, Math. Comp. 30 (1976) 433–468.
[12] M. Dryja, A capacitance matrix method for Dirichlet problems on polygonal domains, Numer. Math. 39
(1982) 51–64.
[13] S. F. McCormick, J. W. Thomas, The fast adaptive composite grid (FAC) method for elliptic equations, Math.
Comp. 46 (1986) 439–456.
[14] G. I. Marchuk, Y. A. Kuznetsov, A. M. Matsokin, Fictitious domain and domain decomposition methods,
Sov. J. Numer. Anal. Math. Modeling 1 (1986) 3–35.
[15] J. H. Bramble, R. E. Ewing, J. E. Pasciak, A. H. Schatz, A preconditioning technique for the efficient solution
of problems with local grid refinement, Comp. Meth. Appl. Mech. Engng. 67 (1988) 149–159.
[16] M. Garbey, Y. V. Vassilevski, A parallel solver for unsteady incompressible 3D Navier-Stokes equations,
Parallel Computing 27 (2001) 363–389.
[17] P. Wesseling, An Introduction To Multigrid Methods, Wiley, New York, 1992.
[18] A. T. Degani, G. C. Fox, Parallel multigrid computation of the unsteady incompressible Navier-Stokes equa-
tions, Journal of Computational Physics 128 (1996) 223–236.
[19] T. G. Thomas, J. J. R. Williams, Development of a parallel code to simulate skewed flow over a bluff body,
Journal of Wind Engineering and Industrial Aerodynamics 67&68 (1997) 155–167.