Chapter 3
Orthogonal Polynomials and Related
Approximation Results
The Fourier spectral method is only appropriate for problems with periodic
boundary conditions. If a Fourier method is applied to a non-periodic problem,
it inevitably induces the so-called Gibbs phenomenon, and reduces the global
convergence rate to $O(N^{-1})$ (cf. Gottlieb and Orszag (1977)). Consequently,
one should not apply a Fourier method to problems with non-periodic boundary
conditions. Instead, one should use orthogonal polynomials which are eigenfunc-
tions of some singular Sturm-Liouville problems. The commonly used orthogonal
polynomials include the Legendre, Chebyshev, Hermite and Laguerre polynomials.
The aim of this chapter is to present essential properties and fundamental
approximation results related to orthogonal polynomials. These results serve as
preparations for polynomial-based spectral methods in the forthcoming chapters.
This chapter is organized as follows. In the first section, we present relevant prop-
erties of general orthogonal polynomials, and set up a general framework for the
study of orthogonal polynomials. We then study the Jacobi polynomials in Sect. 3.2,
Legendre polynomials in Sect. 3.3 and Chebyshev polynomials in Sect. 3.4. In
Sect. 3.5, we present some general approximation results related to these families
of orthogonal polynomials. We refer to Szegő (1975), Davis and Rabinowitz (1984)
and Gautschi (2004) for other aspects of orthogonal polynomials.
3.1 Orthogonal Polynomials
Orthogonal polynomials play the most important role in spectral methods, so it is
necessary to have a thorough study of their relevant properties. Our starting point is
the generation of orthogonal polynomials by a three-term recurrence relation, which
leads to some very useful formulas such as the Christoffel-Darboux formula. We
then review some results on zeros of orthogonal polynomials, and present efficient
algorithms for their computations. We also devote several sections to discussing
some important topics such as Gauss-type quadrature formulas, polynomial inter-
polations, discrete transforms, and spectral differentiation techniques.
J. Shen et al., Spectral Methods: Algorithms, Analysis and Applications,
Springer Series in Computational Mathematics 41, DOI 10.1007/978-3-540-71041-7_3,
© Springer-Verlag Berlin Heidelberg 2011
3.1.1 Existence and Uniqueness
Given an open interval $I := (a,b)$ ($-\infty \le a < b \le +\infty$), and a generic weight
function $\omega$ such that

$\omega(x) > 0, \ \forall x \in I \quad\text{and}\quad \omega \in L^1(I),$  (3.1)

two functions $f$ and $g$ are said to be orthogonal to each other in $L^2_\omega(a,b)$, or orthogonal
with respect to $\omega$, if

$(f,g)_\omega := \int_a^b f(x)\,g(x)\,\omega(x)\,dx = 0.$
An algebraic polynomial of degree $n$ is denoted by

$p_n(x) = k_n x^n + k_{n-1} x^{n-1} + \cdots + k_1 x + k_0, \quad k_n \neq 0,$  (3.2)

where $\{k_i\}$ are real constants, and $k_n$ is the leading coefficient of $p_n$.
A sequence of polynomials $\{p_n\}_{n=0}^\infty$ with $\deg(p_n) = n$ is said to be orthogonal
in $L^2_\omega(a,b)$ if

$(p_n, p_m)_\omega = \int_a^b p_n(x)\,p_m(x)\,\omega(x)\,dx = \gamma_n \delta_{mn},$  (3.3)

where the constant $\gamma_n = \|p_n\|^2_\omega$ is nonzero, and $\delta_{mn}$ is the Kronecker delta.
Throughout this section, $\{p_n\}$ denotes a sequence of polynomials orthogonal
with respect to the weight function $\omega$, and $p_n$ is of degree $n$.
Denote by $P_n$ the set of all algebraic polynomials of degree $\le n$, namely,

$P_n := \mathrm{span}\{1, x, x^2, \ldots, x^n\}.$  (3.4)

By a dimension argument,

$P_n = \mathrm{span}\{p_0, p_1, \ldots, p_n\}.$  (3.5)
A direct consequence is the following.
Lemma 3.1. $p_{n+1}$ is orthogonal to any polynomial $q \in P_n$.
Proof. Thanks to (3.5), for any $q \in P_n$, we can write

$q = b_n p_n + b_{n-1} p_{n-1} + \cdots + b_0 p_0.$

Hence,

$(p_{n+1}, q)_\omega = b_n (p_{n+1}, p_n)_\omega + b_{n-1}(p_{n+1}, p_{n-1})_\omega + \cdots + b_0 (p_{n+1}, p_0)_\omega = 0,$

where we have used the orthogonality (3.3).
Hereafter, we denote the monic polynomial corresponding to $p_n$ by

$\tilde p_n(x) := p_n(x)/k_n = x^n + a^{(n)}_{n-1} x^{n-1} + \cdots + a^{(n)}_0.$  (3.6)
We show below that for any given weight function $\omega(x)$ defined in $(a,b)$, there
exists a unique family of monic orthogonal polynomials generated by a three-term
recurrence formula.
Theorem 3.1. For any given positive weight function $\omega \in L^1(I)$, there exists a
unique sequence of monic orthogonal polynomials $\{\tilde p_n\}$ with $\deg(\tilde p_n) = n$, which
can be constructed as follows:

$\tilde p_0 = 1, \quad \tilde p_1 = x - \alpha_0,$
$\tilde p_{n+1} = (x - \alpha_n)\,\tilde p_n - \beta_n\,\tilde p_{n-1}, \quad n \ge 1,$  (3.7)

where

$\alpha_n = \frac{(x\tilde p_n, \tilde p_n)_\omega}{\|\tilde p_n\|^2_\omega}, \quad n \ge 0,$  (3.8a)

$\beta_n = \frac{\|\tilde p_n\|^2_\omega}{\|\tilde p_{n-1}\|^2_\omega}, \quad n \ge 1.$  (3.8b)
Proof. It is clear that the first two polynomials are

$\tilde p_0(x) \equiv 1, \quad \tilde p_1(x) = x - \alpha_0.$

To determine $\alpha_0$, we see that $(\tilde p_0, \tilde p_1)_\omega = 0$ if and only if

$\alpha_0 = \int_a^b \omega(x)\,x\,dx \Big/ \int_a^b \omega(x)\,dx = \frac{(x\tilde p_0, \tilde p_0)_\omega}{\|\tilde p_0\|^2_\omega},$
where the denominator is positive due to (3.1).
We proceed with the proof by using an induction argument. Assume that by a
similar construction, we have derived a sequence of monic orthogonal polynomials
$\{\tilde p_k\}_{k=0}^n$. Next, we seek $\tilde p_{n+1}$ of the form

$\tilde p_{n+1} = x\tilde p_n - \alpha_n \tilde p_n - \beta_n \tilde p_{n-1} - \sum_{k=0}^{n-2}\gamma^{(n)}_k \tilde p_k,$  (3.9)

with $\alpha_n$ and $\beta_n$ given by (3.8), and we require

$(\tilde p_{n+1}, \tilde p_k)_\omega = 0, \quad 0 \le k \le n.$  (3.10)

Taking the inner product with $\tilde p_k$ on both sides of (3.9), and using the orthogo-
nality of $\{\tilde p_k\}_{k=0}^n$, we find that (3.10) is fulfilled if and only if

$(\tilde p_{n+1}, \tilde p_n)_\omega = (x\tilde p_n, \tilde p_n)_\omega - \alpha_n(\tilde p_n, \tilde p_n)_\omega = 0,$
$(\tilde p_{n+1}, \tilde p_{n-1})_\omega = (x\tilde p_n, \tilde p_{n-1})_\omega - \beta_n(\tilde p_{n-1}, \tilde p_{n-1})_\omega = 0,$
$(\tilde p_{n+1}, \tilde p_j)_\omega = (x\tilde p_n, \tilde p_j)_\omega - \sum_{k=0}^{n-2}\gamma^{(n)}_k(\tilde p_k, \tilde p_j)_\omega \overset{(3.3)}{=} (x\tilde p_n, \tilde p_j)_\omega - \gamma^{(n)}_j\|\tilde p_j\|^2_\omega = 0, \quad 0 \le j \le n-2.$  (3.11)
Hence, the first equality implies (3.8a), and by the second one,

$\beta_n = \frac{(x\tilde p_n, \tilde p_{n-1})_\omega}{\|\tilde p_{n-1}\|^2_\omega} = \frac{(\tilde p_n, x\tilde p_{n-1})_\omega}{\|\tilde p_{n-1}\|^2_\omega} = \frac{\|\tilde p_n\|^2_\omega}{\|\tilde p_{n-1}\|^2_\omega},$

where we have used the fact that, for some constants $\{c_k\}$ (expressing $x\tilde p_{n-1} \in P_n$ in the orthogonal basis),

$x\tilde p_{n-1} = \tilde p_n + \sum_{k=0}^{n-1} c_k \tilde p_k,$

and the orthogonality of $\{\tilde p_k\}_{k=0}^n$ to deduce the last identity. It remains to show that
the coefficients $\{\gamma^{(n)}_k\}_{k=0}^{n-2}$ in (3.9) are all zero. Indeed, we derive from Lemma 3.1
that

$(x\tilde p_n, \tilde p_j)_\omega = (\tilde p_n, x\tilde p_j)_\omega = 0, \quad 0 \le j \le n-2,$

which, together with the last equation of (3.11), implies $\gamma^{(n)}_k \equiv 0$ for $0 \le k \le n-2$
in (3.9). This completes the induction.
Next, we show that the polynomial sequence generated by (3.7)-(3.8) is unique.
For this purpose, we assume that $\{\tilde q_n\}_{n=0}^\infty$ is another sequence of monic orthogonal
polynomials. Since $\tilde p_{n+1}$, given by (3.7), is also monic, we have $\deg(\tilde p_{n+1} - \tilde q_{n+1}) \le n$.
By Lemma 3.1,

$(\tilde p_{n+1}, \tilde p_{n+1} - \tilde q_{n+1})_\omega = 0, \quad (\tilde q_{n+1}, \tilde p_{n+1} - \tilde q_{n+1})_\omega = 0,$

which implies

$(\tilde p_{n+1} - \tilde q_{n+1}, \tilde p_{n+1} - \tilde q_{n+1})_\omega = 0 \;\Longrightarrow\; \tilde p_{n+1}(x) - \tilde q_{n+1}(x) \equiv 0.$

This proves the uniqueness.
The above theorem provides a general three-term recurrence relation to construct
orthogonal polynomials, and the constants $\alpha_n$ and $\beta_n$ can be evaluated explicitly for
the commonly used families. The three-term recurrence relation (3.7) is essential
for deriving other properties of orthogonal polynomials. For convenience, we first
extend it to the orthogonal polynomials $\{p_n\}$, which are not necessarily monic.
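The construction (3.7)-(3.8) can be carried out numerically. The sketch below builds the recurrence coefficients for the Legendre weight $\omega(x) \equiv 1$ on $(-1,1)$, with the inner products approximated by a fine trapezoidal rule; both the choice of weight and the quadrature rule are illustrative assumptions, not part of the theorem.

```python
# A minimal numerical sketch of Theorem 3.1: generate the coefficients
# alpha_n, beta_n of (3.8) and the monic polynomials of (3.7).
# Assumptions: Legendre weight w(x) = 1 on (-1, 1); inner products
# approximated by a composite trapezoidal rule on a fine grid.

def monic_recurrence_coeffs(n_max, weight, a=-1.0, b=1.0, m=20001):
    """Return (alpha, beta) of (3.8); beta[0] is an unused placeholder."""
    h = (b - a) / (m - 1)
    xs = [a + i * h for i in range(m)]
    ws = [weight(x) for x in xs]

    def integrate(vals):                  # composite trapezoidal rule
        return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

    # grid values of p_{n-1}, p_n; start from p_{-1} = 0, p_0 = 1
    p_prev, p_cur, norm_prev = [0.0] * m, [1.0] * m, None
    alpha, beta = [], []
    for n in range(n_max + 1):
        norm2 = integrate([p * p * w for p, w in zip(p_cur, ws)])
        alpha.append(integrate([x * p * p * w
                                for x, p, w in zip(xs, p_cur, ws)]) / norm2)
        beta.append(0.0 if n == 0 else norm2 / norm_prev)
        # recurrence (3.7): p_{n+1} = (x - alpha_n) p_n - beta_n p_{n-1}
        p_next = [(x - alpha[n]) * pc - beta[n] * pp
                  for x, pc, pp in zip(xs, p_cur, p_prev)]
        p_prev, p_cur, norm_prev = p_cur, p_next, norm2
    return alpha, beta

alpha, beta = monic_recurrence_coeffs(3, lambda x: 1.0)
# For the Legendre weight, symmetry gives alpha_n = 0, and the classical
# value is beta_n = n^2 / (4n^2 - 1), e.g. beta_1 = 1/3, beta_2 = 4/15.
print(alpha)
print(beta)
```

For a symmetric weight on a symmetric interval, every $\alpha_n$ vanishes, which the computed values reproduce to quadrature accuracy.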
Corollary 3.1. Let $\{p_n\}$ be a sequence of orthogonal polynomials with the leading
coefficient $k_n \neq 0$. Then we have

$p_{n+1} = (a_n x - b_n)\,p_n - c_n\,p_{n-1}, \quad n \ge 0,$  (3.12)

with $p_{-1} := 0$, $p_0 = k_0$ and

$a_n = \frac{k_{n+1}}{k_n},$  (3.13a)

$b_n = \frac{k_{n+1}}{k_n}\,\frac{(x p_n, p_n)_\omega}{\|p_n\|^2_\omega},$  (3.13b)

$c_n = \frac{k_{n-1} k_{n+1}}{k_n^2}\,\frac{\|p_n\|^2_\omega}{\|p_{n-1}\|^2_\omega}.$  (3.13c)
Proof. This result follows directly from Theorem 3.1 by inserting $\tilde p_l = p_l/k_l$ with
$l = n-1, n, n+1$ into (3.7) and (3.8).
The orthogonal polynomials $\{p_n\}$ with leading coefficients $\{k_n\}$ are uniquely de-
termined by (3.12)-(3.13). Interestingly, the following result, which can be viewed
as the converse of Corollary 3.1, also holds. We leave its proof as an exercise (see
Problem 3.1).
Corollary 3.2. Let $\{k_n \neq 0\}$ be a sequence of real numbers. The three-term re-
currence relation (3.12)-(3.13) generates a sequence of polynomials satisfying the
properties:
- the leading coefficient of $p_n$ is $k_n$ and $\deg(p_n) = n$;
- $\{p_n\}$ are orthogonal with respect to the weight function $\omega(x)$;
- the $L^2_\omega$-norm of $p_n$ is given by

$\gamma_n = \|p_n\|^2_\omega = (a_0/a_n)\,c_1 c_2 \cdots c_n\,\gamma_0, \quad n \ge 0,$  (3.14)

where $\gamma_0 = k_0^2 \int_a^b \omega(x)\,dx$.
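As an illustration of (3.14), one can check it against the classical Legendre coefficients $a_n = (2n+1)/(n+1)$, $b_n = 0$, $c_n = n/(n+1)$ (taken here as given; they follow from the standard Legendre recurrence), for which $k_0 = 1$, $\gamma_0 = 2$ and $\gamma_n = \|P_n\|^2_\omega = 2/(2n+1)$:

```python
# A quick consistency check of (3.14) for the Legendre case.
# Assumed classical values: a_n = (2n+1)/(n+1), c_n = n/(n+1),
# gamma_0 = k_0^2 * integral of w over (-1, 1) = 2.

def gamma_via_314(n):
    """Evaluate gamma_n = (a_0/a_n) c_1 ... c_n gamma_0 of (3.14)."""
    a = lambda m: (2 * m + 1) / (m + 1)
    prod = 1.0
    for m in range(1, n + 1):
        prod *= m / (m + 1)          # c_m for Legendre
    return (a(0) / a(n)) * prod * 2.0

for n in range(6):
    assert abs(gamma_via_314(n) - 2.0 / (2 * n + 1)) < 1e-12
print("(3.14) agrees with ||P_n||^2 = 2/(2n+1)")
```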
An important consequence of the three-term recurrence formula (3.12)-(3.13) is
the well-known Christoffel-Darboux formula.
Corollary 3.3. Let $\{p_n\}$ be a sequence of orthogonal polynomials with $\deg(p_n) = n$.
Then,

$\frac{p_{n+1}(x)p_n(y) - p_n(x)p_{n+1}(y)}{x - y} = \frac{k_{n+1}}{k_n}\sum_{j=0}^n \frac{\|p_n\|^2_\omega}{\|p_j\|^2_\omega}\,p_j(x)p_j(y),$  (3.15)

and

$p'_{n+1}(x)p_n(x) - p'_n(x)p_{n+1}(x) = \frac{k_{n+1}}{k_n}\sum_{j=0}^n \frac{\|p_n\|^2_\omega}{\|p_j\|^2_\omega}\,p_j^2(x).$  (3.16)
Proof. We first prove (3.15). By Corollary 3.1,

$p_{j+1}(x)p_j(y) - p_j(x)p_{j+1}(y)$
$= \big[(a_j x - b_j)p_j(x) - c_j p_{j-1}(x)\big]p_j(y) - p_j(x)\big[(a_j y - b_j)p_j(y) - c_j p_{j-1}(y)\big]$
$= a_j(x - y)\,p_j(x)p_j(y) + c_j\big[p_j(x)p_{j-1}(y) - p_{j-1}(x)p_j(y)\big].$

Thus, by (3.13),

$\frac{k_j}{k_{j+1}\|p_j\|^2_\omega}\,\frac{p_{j+1}(x)p_j(y) - p_j(x)p_{j+1}(y)}{x - y} - \frac{k_{j-1}}{k_j\|p_{j-1}\|^2_\omega}\,\frac{p_j(x)p_{j-1}(y) - p_{j-1}(x)p_j(y)}{x - y} = \frac{1}{\|p_j\|^2_\omega}\,p_j(x)p_j(y).$

This relation also holds for $j = 0$ by defining $p_{-1} := 0$. Summing the above identities
for $0 \le j \le n$ leads to (3.15).
To prove (3.16), we observe that

$\frac{p_{n+1}(x)p_n(y) - p_n(x)p_{n+1}(y)}{x - y} = \frac{p_{n+1}(x) - p_{n+1}(y)}{x - y}\,p_n(y) - \frac{p_n(x) - p_n(y)}{x - y}\,p_{n+1}(y).$

Consequently, letting $y \to x$, we obtain (3.16) from (3.15) and the definition of the
derivative.
Define the kernel

$K_n(x,y) = \sum_{j=0}^n \frac{p_j(x)p_j(y)}{\|p_j\|^2_\omega}.$  (3.17)

The Christoffel-Darboux formula (3.15) can be rewritten as

$K_n(x,y) = \frac{k_n}{k_{n+1}\|p_n\|^2_\omega}\,\frac{p_{n+1}(x)p_n(y) - p_n(x)p_{n+1}(y)}{x - y}.$  (3.18)
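The two representations (3.17) and (3.18) of $K_n$ can be compared numerically. The sketch below does so for Legendre polynomials, assuming the classical facts $\|P_j\|^2_\omega = 2/(2j+1)$ and $k_{n+1}/k_n = (2n+1)/(n+1)$ (both established for Legendre polynomials in Sect. 3.3):

```python
# Compare the kernel K_n(x, y) computed from definition (3.17) with the
# Christoffel-Darboux form (3.18), for the Legendre family.
# Assumed Legendre facts: ||P_j||^2 = 2/(2j+1), k_{n+1}/k_n = (2n+1)/(n+1).

def legendre_values(n, x):
    """P_0(x), ..., P_n(x) via the standard three-term recurrence."""
    vals = [1.0, x]
    for m in range(1, n):
        vals.append(((2 * m + 1) * x * vals[m] - m * vals[m - 1]) / (m + 1))
    return vals[: n + 1]

def kernel_sum(n, x, y):                 # definition (3.17)
    px, py = legendre_values(n, x), legendre_values(n, y)
    return sum(px[j] * py[j] * (2 * j + 1) / 2 for j in range(n + 1))

def kernel_cd(n, x, y):                  # Christoffel-Darboux form (3.18)
    px, py = legendre_values(n + 1, x), legendre_values(n + 1, y)
    hn = 2 / (2 * n + 1)                 # ||P_n||^2
    ratio = (2 * n + 1) / (n + 1)        # k_{n+1}/k_n
    return (px[n + 1] * py[n] - px[n] * py[n + 1]) / ((x - y) * ratio * hn)

print(kernel_sum(5, 0.3, -0.7), kernel_cd(5, 0.3, -0.7))
```

The two evaluations agree to rounding error, as (3.15) predicts; the closed form (3.18) is the cheaper of the two for large $n$.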
A remarkable property of {Kn} is stated in the following lemma.
Lemma 3.2. There holds the integral equation:

$q(x) = \int_a^b q(t)\,K_n(x,t)\,\omega(t)\,dt, \quad \forall q \in P_n.$  (3.19)

Moreover, the polynomial sequence $\{K_n(x,a)\}$ (resp. $\{K_n(x,b)\}$) is orthogonal with
respect to the weight function $(x-a)\,\omega(x)$ (resp. $(b-x)\,\omega(x)$).
Proof. Thanks to (3.5), for any $q \in P_n$, we can write

$q(x) = \sum_{j=0}^n \hat q_j\,p_j(x) \quad\text{with}\quad \hat q_j = \frac{1}{\|p_j\|^2_\omega}\int_a^b q(t)\,p_j(t)\,\omega(t)\,dt.$

Thus, by definition (3.17),

$q(x) = \sum_{j=0}^n \frac{1}{\|p_j\|^2_\omega}\int_a^b q(t)\,p_j(x)\,p_j(t)\,\omega(t)\,dt = \int_a^b q(t)\,K_n(x,t)\,\omega(t)\,dt,$

which gives (3.19).
Next, taking $x = a$ and $q(t) = (t-a)\,r(t)$ for any $r \in P_{n-1}$ in (3.19) yields

$0 = q(a) = \int_a^b K_n(t,a)\,r(t)\,(t-a)\,\omega(t)\,dt, \quad \forall r \in P_{n-1},$

which implies $\{K_n(x,a)\}$ is orthogonal with respect to $(x-a)\,\omega$.
Similarly, taking $x = b$ and $q(t) = (b-t)\,r(t)$ for any $r \in P_{n-1}$ in (3.19), we can
show that $\{K_n(x,b)\}$ is orthogonal with respect to $(b-x)\,\omega$.
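The reproducing property (3.19) can also be checked numerically. In the sketch below, the weight is the Legendre weight $\omega \equiv 1$ on $(-1,1)$ and the integral is approximated by a trapezoidal rule (both illustrative assumptions):

```python
# Verify the reproducing property (3.19): integrating q(t) K_n(x, t)
# against the weight recovers q(x) for q in P_n.  Assumptions: Legendre
# weight w = 1 on (-1, 1); trapezoidal rule for the integral.

def legendre_values(n, x):
    """P_0(x), ..., P_n(x) via the standard three-term recurrence."""
    vals = [1.0, x]
    for m in range(1, n):
        vals.append(((2 * m + 1) * x * vals[m] - m * vals[m - 1]) / (m + 1))
    return vals[: n + 1]

def kernel(n, x, y):                     # K_n(x, y) of (3.17), ||P_j||^2 = 2/(2j+1)
    px, py = legendre_values(n, x), legendre_values(n, y)
    return sum(px[j] * py[j] * (2 * j + 1) / 2 for j in range(n + 1))

def reproduce(q, n, x, m=4001):
    """Approximate the right-hand side of (3.19) by the trapezoidal rule."""
    h = 2.0 / (m - 1)
    ts = [-1.0 + i * h for i in range(m)]
    vals = [q(t) * kernel(n, x, t) for t in ts]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

q = lambda t: t**3 - 0.5 * t + 0.25      # a polynomial in P_3
print(q(0.4), reproduce(q, 3, 0.4))      # the two values agree closely
```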
3.1.2 Zeros of Orthogonal Polynomials
Zeros of orthogonal polynomials play a fundamental role in spectral methods. For
example, they are chosen as the nodes of Gauss-type quadratures, and used to gen-
erate computational grids for spectral methods. Therefore, it is useful to derive their
essential properties.
Again, let $\{p_n\}$ (with $\deg(p_n) = n$) be a sequence of polynomials orthogonal
with respect to the weight function $\omega(x)$ in $(a,b)$. The first important result about
the zeros of orthogonal polynomials is the following:
Theorem 3.2. The zeros of pn+1 are all real, simple, and lie in the interval (a,b).
Proof. We first show that the zeros of $p_{n+1}$ are all real. Assume that $\alpha \pm i\beta$ is a pair
of complex conjugate roots of $p_{n+1}$. Then $p_{n+1}/((x-\alpha)^2 + \beta^2) \in P_{n-1}$, and by Lemma 3.1,

$0 = \int_a^b p_{n+1}\,\frac{p_{n+1}}{(x-\alpha)^2 + \beta^2}\,\omega\,dx = \int_a^b \big((x-\alpha)^2 + \beta^2\big)\,\Big|\frac{p_{n+1}}{(x-\alpha)^2 + \beta^2}\Big|^2\,\omega\,dx,$

which implies that $\beta = 0$. Hence, all zeros of $p_{n+1}$ must be real.
Next, we prove that the $n+1$ zeros of $p_{n+1}$ are simple, and lie in the interval
$(a,b)$. By the orthogonality,

$\int_a^b p_{n+1}(x)\,\omega(x)\,dx = 0, \quad n \ge 0,$

there exists at least one zero of $p_{n+1}$ in $(a,b)$. In other words, $p_{n+1}(x)$ must change
sign in $(a,b)$. Let $x_0, x_1, \ldots, x_k$ be all such points in $(a,b)$ at which $p_{n+1}(x)$ changes
sign. If $k = n$, we are done, since $\{x_i\}_{i=0}^n$ are the $n+1$ simple real zeros of $p_{n+1}$. If
$k < n$, we consider the polynomial

$q(x) = (x - x_0)(x - x_1)\cdots(x - x_k).$

Since $\deg(q) = k+1 < n+1$, by orthogonality, we derive $(p_{n+1}, q)_\omega = 0$. However,
$p_{n+1}(x)q(x)$ cannot change sign on $(a,b)$, since each sign change in $p_{n+1}(x)$ is canceled
by a corresponding sign change in $q(x)$. It follows that $(p_{n+1}, q)_\omega \neq 0$, which leads to
a contradiction.

Another important property is known as the separation theorem.

Theorem 3.3. Let $x_0 = a$, $x_{n+1} = b$ and $x_1 < x_2 < \cdots < x_n$ be the zeros of $p_n$. Then
there exists exactly one zero of $p_{n+1}$ in each subinterval $(x_j, x_{j+1})$, $j = 0, 1, \ldots, n$.

Proof. It is obvious that the location of zeros is invariant under multiplication of $p_n$
and $p_{n+1}$ by nonzero constants, so we assume that the leading coefficients satisfy $k_n, k_{n+1} > 0$.
We first show that each of the interior subintervals $(x_j, x_{j+1})$, $j = 1, 2, \ldots, n-1$,
contains at least one zero of $p_{n+1}$, which is equivalent to proving

$p_{n+1}(x_j)\,p_{n+1}(x_{j+1}) < 0, \quad 1 \le j \le n-1.$  (3.20)

Since $p_n$ can be written as

$p_n(x) = k_n(x - x_1)(x - x_2)\cdots(x - x_n),$

a direct calculation leads to

$p'_n(x_j) = k_n \prod_{l=1}^{j-1}(x_j - x_l)\,\prod_{l=j+1}^{n}(x_j - x_l).$  (3.21)

This implies

$p'_n(x_j)\,p'_n(x_{j+1}) = D_{n,j}\,(-1)^{n-j}(-1)^{n-j-1} < 0,$  (3.22)

where $D_{n,j}$ is a positive constant. On the other hand, using the facts that $p_n(x_j) = p_n(x_{j+1}) = 0$ and $k_n, k_{n+1} > 0$, we find from (3.16) that

$p'_n(x_j)\,p_{n+1}(x_j) < 0, \quad p'_n(x_{j+1})\,p_{n+1}(x_{j+1}) < 0.$  (3.23)
Consequently,

$\big[p'_n(x_j)\,p'_n(x_{j+1})\big]\big[p_{n+1}(x_j)\,p_{n+1}(x_{j+1})\big] > 0 \;\overset{(3.22)}{\Longrightarrow}\; p_{n+1}(x_j)\,p_{n+1}(x_{j+1}) < 0.$

Next, we prove that there exists at least one zero of $p_{n+1}$ in each of the boundary
subintervals $(x_n, b)$ and $(a, x_1)$. Since $p_n(x_n) = 0$ and $p'_n(x_n) > 0$ (cf. (3.21)), the
use of (3.16) again gives $p_{n+1}(x_n) < 0$. On the other hand, due to $k_{n+1} > 0$, we
have $p_{n+1}(b) > 0$. Therefore, $p_{n+1}(x_n)\,p_{n+1}(b) < 0$, which implies $(x_n, b)$ contains
at least one zero of $p_{n+1}$. The existence of at least one zero of $p_{n+1}$ in $(a, x_1)$ can be
justified in a similar fashion.

In summary, we have shown that each of the $n+1$ subintervals $(x_j, x_{j+1})$, $0 \le j \le n$, contains at least one zero of $p_{n+1}$. By Theorem 3.2, $p_{n+1}$ has exactly $n+1$
real zeros, so each subinterval contains exactly one zero of $p_{n+1}$.

A direct consequence of (3.22) is the following.

Corollary 3.4. Let $\{p_n\}$ be a sequence of orthogonal polynomials with $\deg(p_n) = n$.
Then the zeros of $p'_n$ are real and simple, and there exists exactly one zero of $p'_n$
between two consecutive zeros of $p_n$.

3.1.3 Computation of Zeros of Orthogonal Polynomials

We present below two efficient algorithms for computing zeros of orthogonal poly-
nomials. The first approach is the so-called Eigenvalue Method.

Theorem 3.4. The zeros $\{x_j\}_{j=0}^n$ of the orthogonal polynomial $p_{n+1}(x)$ are the eigen-
values of the following symmetric tridiagonal matrix:

$A_{n+1} = \begin{pmatrix} \alpha_0 & \beta_1 & & & \\ \beta_1 & \alpha_1 & \beta_2 & & \\ & \ddots & \ddots & \ddots & \\ & & \beta_{n-1} & \alpha_{n-1} & \beta_n \\ & & & \beta_n & \alpha_n \end{pmatrix},$  (3.24)

where

$\alpha_j = \frac{b_j}{a_j}, \; j \ge 0; \qquad \beta_j = \frac{1}{a_{j-1}}\sqrt{\frac{a_{j-1}\,c_j}{a_j}}, \; j \ge 1,$  (3.25)

with $\{a_j, b_j, c_j\}$ being the coefficients of the three-term recurrence relation (3.12),
namely,

$p_{j+1}(x) = (a_j x - b_j)\,p_j(x) - c_j\,p_{j-1}(x), \quad j \ge 0,$  (3.26)

with $p_{-1} := 0$.

Proof. We first normalize the orthogonal polynomials $\{p_j\}$ by defining

$\bar p_j(x) = \frac{1}{\sqrt{\gamma_j}}\,p_j(x) \quad\text{with}\quad \gamma_j = \|p_j\|^2_\omega \;\Longrightarrow\; \|\bar p_j\|_\omega = 1.$

Thus, we have

$x\bar p_j \overset{(3.26)}{=} \frac{c_j}{a_j}\frac{\sqrt{\gamma_{j-1}}}{\sqrt{\gamma_j}}\,\bar p_{j-1} + \frac{b_j}{a_j}\,\bar p_j + \frac{1}{a_j}\frac{\sqrt{\gamma_{j+1}}}{\sqrt{\gamma_j}}\,\bar p_{j+1} \overset{(3.13)}{=} \beta_j\,\bar p_{j-1}(x) + \alpha_j\,\bar p_j(x) + \beta_{j+1}\,\bar p_{j+1}(x), \quad j \ge 0,$  (3.27)

where we denote

$\alpha_j := \frac{b_j}{a_j}, \qquad \beta_j := \frac{1}{a_{j-1}}\sqrt{\frac{\gamma_j}{\gamma_{j-1}}}.$

By (3.13),

$\frac{\gamma_j}{\gamma_{j-1}} = \frac{a_{j-1}\,c_j}{a_j} > 0 \;\Longrightarrow\; \beta_j = \frac{1}{a_{j-1}}\sqrt{\frac{a_{j-1}\,c_j}{a_j}}.$
Then we rewrite the recurrence relation (3.27) as

$x\bar p_j(x) = \beta_j\,\bar p_{j-1}(x) + \alpha_j\,\bar p_j(x) + \beta_{j+1}\,\bar p_{j+1}(x), \quad j \ge 0.$

We now take $j = 0, 1, \ldots, n$ to form a system with the matrix form

$x\,P(x) = A_{n+1}\,P(x) + \beta_{n+1}\,\bar p_{n+1}(x)\,E_{n+1},$  (3.28)

where $A_{n+1}$ is given by (3.24), and

$P(x) = \big(\bar p_0(x), \bar p_1(x), \ldots, \bar p_n(x)\big)^T, \quad E_{n+1} = (0, 0, \ldots, 0, 1)^T.$

Since $\bar p_{n+1}(x_j) = 0$, $0 \le j \le n$, the system (3.28) at $x = x_j$ becomes

$x_j\,P(x_j) = A_{n+1}\,P(x_j), \quad 0 \le j \le n.$  (3.29)

Hence, $\{x_j\}_{j=0}^n$ are the eigenvalues of the symmetric tridiagonal matrix $A_{n+1}$.
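For a concrete instance of the eigenvalue method, the sketch below assembles $A_{n+1}$ for the Legendre case, where $\alpha_j = 0$ and $\beta_j = j/\sqrt{4j^2 - 1}$ (values obtained by inserting the classical Legendre recurrence coefficients into (3.25)); numpy's symmetric eigensolver is an implementation choice, not part of the theorem:

```python
# The eigenvalue method of Theorem 3.4 for the Legendre case.
# Assumed Legendre entries of (3.24)-(3.25): alpha_j = 0 and
# beta_j = j / sqrt(4 j^2 - 1).
import math
import numpy as np

def jacobi_matrix_legendre(n_plus_1):
    """Assemble A_{n+1} of (3.24); its eigenvalues are the zeros of P_{n+1}."""
    A = np.zeros((n_plus_1, n_plus_1))
    for j in range(1, n_plus_1):
        A[j, j - 1] = A[j - 1, j] = j / math.sqrt(4 * j * j - 1)
    return A

nodes = np.sort(np.linalg.eigvalsh(jacobi_matrix_legendre(2)))
# The zeros of P_2(x) = (3x^2 - 1)/2 are -1/sqrt(3) and +1/sqrt(3).
print(nodes)
```

Since $A_{n+1}$ is symmetric, the eigenvalue computation is well conditioned, which is one reason this route is preferred over root-finding on the polynomial coefficients.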
An alternative approach for finding zeros of orthogonal polynomials is to use an
iterative procedure. More precisely, let $x_j^0$ be an initial approximation to the zero $x_j$
of $p_{n+1}(x)$. Then, one can construct an iterative scheme in the general form:

$x_j^{k+1} = x_j^k + D(x_j^k), \quad 0 \le j \le n, \; k \ge 0,$
given $\{x_j^0\}_{j=0}^n$ and a termination rule.  (3.30)
The deviation $D(\cdot)$ classifies different types of iterative schemes. For instance, the
Newton method is of second order with

$D(x) = -\frac{p_{n+1}(x)}{p'_{n+1}(x)},$  (3.31)

while the Laguerre method is a third-order scheme with

$D(x) = -\frac{p_{n+1}(x)}{p'_{n+1}(x)}\left[1 - \frac{p_{n+1}(x)\,p''_{n+1}(x)}{2\big(p'_{n+1}(x)\big)^2}\right]^{-1}.$  (3.32)
Higher-order schemes can be constructed by using higher-order derivatives of $p_{n+1}$.
The success of an iterative method often depends on how good the initial guess is.
If the initial approximation is not sufficiently close, the algorithm may converge to
other unwanted values or even diverge. For a thorough discussion on how to find
zeros of polynomials, we refer to Pan (1997) and the references therein.
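A minimal sketch of the Newton scheme (3.30)-(3.31) for Legendre zeros is given below; the evaluation of $p_{n+1}$ and $p'_{n+1}$ uses the classical Legendre recurrence and derivative identity $(x^2-1)P'_n = n(xP_n - P_{n-1})$, and the cosine-spaced initial guesses are a common heuristic rather than part of the scheme:

```python
# Newton iteration (3.30)-(3.31) for the zeros of a Legendre polynomial.
# Assumptions: classical Legendre recurrence and derivative identity;
# cosine-spaced initial guesses (a standard heuristic).
import math

def legendre_and_deriv(n, x):
    """Return P_n(x) and P_n'(x) for |x| < 1."""
    p_prev, p = 1.0, x
    for m in range(1, n):
        p_prev, p = p, ((2 * m + 1) * x * p - m * p_prev) / (m + 1)
    dp = n * (x * p - p_prev) / (x * x - 1.0)   # (x^2-1)P_n' = n(xP_n - P_{n-1})
    return p, dp

def newton_zeros(n, iters=50):
    zeros = []
    for j in range(n):
        x = math.cos(math.pi * (j + 0.75) / (n + 0.5))   # initial guess
        for _ in range(iters):
            p, dp = legendre_and_deriv(n, x)
            x -= p / dp                                   # x + D(x), (3.31)
        zeros.append(x)
    return sorted(zeros)

print(newton_zeros(3))   # zeros of P_3 are -sqrt(3/5), 0, sqrt(3/5)
```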
3.1.4 Gauss-Type Quadratures
We now discuss the relations between orthogonal polynomials and Gauss-type
integration formulas. The mechanism of a Gauss-type quadrature is to seek the best
numerical approximation of an integral by selecting optimal nodes at which the
integrand is evaluated. It belongs to the family of the numerical quadratures:

$\int_a^b f(x)\,\omega(x)\,dx = \sum_{j=0}^N f(x_j)\,\omega_j + E_N[f],$  (3.33)

where $\{x_j, \omega_j\}_{j=0}^N$ are the quadrature nodes and weights, and $E_N[f]$ is the quadrature
error. If $E_N[f] \equiv 0$, we say the quadrature formula (3.33) is exact for $f$.
Hereafter, we assume that the nodes $\{x_j\}_{j=0}^N$ are distinct. If $f \in C^{N+1}[a,b]$,
we have (see, e.g., Davis and Rabinowitz (1984)):

$E_N[f] = \frac{1}{(N+1)!}\int_a^b f^{(N+1)}(\xi(x))\,\prod_{i=0}^N (x - x_i)\,dx,$  (3.34)

where $\xi(x) \in [a,b]$. The Lagrange basis polynomials associated with $\{x_j\}_{j=0}^N$ are
given by

$h_j(x) = \prod_{i=0,\, i\neq j}^N \frac{x - x_i}{x_j - x_i}, \quad 0 \le j \le N,$  (3.35)

so taking $f = h_j$ in (3.33) and using (3.34), we find the quadrature weights:

$\omega_j = \int_a^b h_j(x)\,\omega(x)\,dx, \quad 0 \le j \le N.$  (3.36)
We say that the integration formula (3.33)-(3.36) has a degree of precision (DOP)
$m$, if there holds

$E_N[p] = 0, \ \forall p \in P_m, \quad\text{but}\quad \exists\, q \in P_{m+1}\ \text{such that}\ E_N[q] \neq 0.$  (3.37)

In general, for any $N+1$ distinct nodes $\{x_j\}_{j=0}^N \subseteq (a,b)$, the DOP of (3.33)-(3.36)
is between $N$ and $2N+1$. Moreover, if the nodes $\{x_j\}_{j=0}^N$ are chosen as the zeros of
the polynomial $p_{N+1}$ orthogonal with respect to $\omega$, this rule enjoys the maximum
degree of precision $2N+1$. Such a rule is known as the Gauss quadrature.
Theorem 3.5. (Gauss quadrature) Let $\{x_j\}_{j=0}^N$ be the set of zeros of the orthogonal
polynomial $p_{N+1}$. Then there exists a unique set of quadrature weights $\{\omega_j\}_{j=0}^N$,
defined by (3.36), such that

$\int_a^b p(x)\,\omega(x)\,dx = \sum_{j=0}^N p(x_j)\,\omega_j, \quad \forall p \in P_{2N+1},$  (3.38)

where the quadrature weights are all positive and given by

$\omega_j = \frac{k_{N+1}}{k_N}\,\frac{\|p_N\|^2_\omega}{p_N(x_j)\,p'_{N+1}(x_j)}, \quad 0 \le j \le N,$  (3.39)

where $k_j$ is the leading coefficient of the polynomial $p_j$.
Proof. Let $\{h_j\}_{j=0}^N$ be the Lagrange basis polynomials defined in (3.35). It is clear
that

$P_N = \mathrm{span}\{h_j : 0 \le j \le N\} \;\Longrightarrow\; p(x) = \sum_{j=0}^N p(x_j)\,h_j(x), \quad \forall p \in P_N.$

Hence,

$\int_a^b p(x)\,\omega(x)\,dx = \sum_{j=0}^N p(x_j)\int_a^b h_j(x)\,\omega(x)\,dx \overset{(3.36)}{=} \sum_{j=0}^N p(x_j)\,\omega_j,$  (3.40)

which implies (3.38) is exact for any $p \in P_N$. In other words, the DOP of this rule is
not less than $N$.
Next, for any $p \in P_{2N+1}$, we can write $p = r\,p_{N+1} + s$ where $r, s \in P_N$. In view of
$p_{N+1}(x_j) = 0$, we have $p(x_j) = s(x_j)$ for $0 \le j \le N$. Since $p_{N+1}$ is orthogonal to $r$
(cf. Lemma 3.1) and $s \in P_N$, we find

$\int_a^b p(x)\,\omega(x)\,dx = \int_a^b s(x)\,\omega(x)\,dx = \sum_{j=0}^N s(x_j)\,\omega_j \overset{(3.40)}{=} \sum_{j=0}^N p(x_j)\,\omega_j, \quad \forall p \in P_{2N+1},$  (3.41)

which leads to (3.38).
Now, we show that $\omega_j > 0$ for $0 \le j \le N$. Taking $p = h_j^2 \in P_{2N}$ in (3.41)
leads to

$0 < \int_a^b h_j^2(x)\,\omega(x)\,dx = \sum_{k=0}^N h_j^2(x_k)\,\omega_k = \omega_j, \quad 0 \le j \le N.$

It remains to establish (3.39). Since $p_{N+1}(x_j) = 0$, taking $y = x_j$ and $n = N$ in the
Christoffel-Darboux formula (3.15) yields

$p_N(x_j)\,\frac{p_{N+1}(x)}{x - x_j} = \frac{k_{N+1}}{k_N}\sum_{i=0}^N \frac{\|p_N\|^2_\omega}{\|p_i\|^2_\omega}\,p_i(x_j)\,p_i(x).$

Multiplying the above formula by $\omega(x)$ and integrating the resulting identity over
$(a,b)$, we deduce from the orthogonality $(p_i, 1)_\omega = 0$, $i \ge 1$, that

$p_N(x_j)\int_a^b \frac{p_{N+1}(x)}{x - x_j}\,\omega(x)\,dx = \frac{k_{N+1}}{k_N}\,\|p_N\|^2_\omega\,\frac{p_0(x_j)\,(p_0, 1)_\omega}{\|p_0\|^2_\omega} = \frac{k_{N+1}}{k_N}\,\|p_N\|^2_\omega.$  (3.42)

Note that the Lagrange basis polynomial $h_j(x)$ in (3.35) can be expressed as

$h_j(x) = \frac{p_{N+1}(x)}{p'_{N+1}(x_j)\,(x - x_j)}.$  (3.43)

Plugging it into (3.42) gives

$p_N(x_j)\int_a^b \frac{p_{N+1}(x)}{x - x_j}\,\omega(x)\,dx = p_N(x_j)\,p'_{N+1}(x_j)\int_a^b h_j(x)\,\omega(x)\,dx = p_N(x_j)\,p'_{N+1}(x_j)\,\omega_j = \frac{k_{N+1}}{k_N}\,\|p_N\|^2_\omega,$  (3.44)

which implies (3.39).

The above fundamental theorem reveals that the optimal abscissas of the Gauss
quadrature formula are precisely the zeros of the orthogonal polynomial for the
same interval and weight function. The Gauss quadrature is optimal in the sense that it fits
all polynomials up to degree $2N+1$ exactly, and it is impossible to find any set of
$\{x_j, \omega_j\}_{j=0}^N$ such that (3.38) holds for all $p \in P_{2N+2}$ (see Problem 3.3).

With the exception of a few special cases, like the Chebyshev polynomials, no
explicit expressions of the quadrature nodes and weights are available. Theorem 3.4
provides an efficient approach to compute the nodes $\{x_j\}_{j=0}^N$ through finding the
eigenvalues of the symmetric tridiagonal matrix $A_{N+1}$ defined in (3.24). The fol-
lowing result indicates that the weights $\{\omega_j\}_{j=0}^N$ can be computed from the first
components of the eigenvectors of $A_{N+1}$.

Theorem 3.6. Let

$Q(x_j) = \big(Q_0(x_j), Q_1(x_j), \ldots, Q_N(x_j)\big)^T$

be the orthonormal eigenvector of $A_{N+1}$ corresponding to the eigenvalue $x_j$, i.e.,

$A_{N+1}\,Q(x_j) = x_j\,Q(x_j) \quad\text{with}\quad Q(x_j)^T Q(x_j) = 1.$

Then the weights $\{\omega_j\}_{j=0}^N$ can be computed from the first component of the eigen-
vector $Q(x_j)$ by using the formula:

$\omega_j = \big[Q_0(x_j)\big]^2 \int_a^b \omega(x)\,dx, \quad 0 \le j \le N.$  (3.45)

Proof.
Using the Christoffel-Darboux formula (3.16) and the fact that $p_{N+1}(x_j) = 0$,
we derive from (3.39) the following alternative expression of the weights:

$\frac{1}{\omega_j} \overset{(3.39)}{=} \frac{k_N}{k_{N+1}}\,\frac{p_N(x_j)\,p'_{N+1}(x_j)}{\|p_N\|^2_\omega} \overset{(3.16)}{=} \sum_{n=0}^N \frac{p_n^2(x_j)}{\|p_n\|^2_\omega} = P(x_j)^T P(x_j), \quad 0 \le j \le N,$  (3.46)

where

$P(x_j) = \big(\bar p_0(x_j), \bar p_1(x_j), \ldots, \bar p_N(x_j)\big)^T \quad\text{with}\quad \bar p_n = \frac{p_n}{\|p_n\|_\omega}.$

The identity (3.46) can be rewritten as

$\omega_j\,P(x_j)^T P(x_j) = 1, \quad 0 \le j \le N.$

On the other hand, we deduce from (3.29) that $P(x_j)$ is an eigenvector corresponding
to the eigenvalue $x_j$. Therefore,

$Q(x_j) = \sqrt{\omega_j}\,P(x_j), \quad 0 \le j \le N,$  (3.47)

is the unit eigenvector corresponding to the eigenvalue $x_j$. Equating the first com-
ponents of (3.47) yields

$\omega_j = \Big[\frac{Q_0(x_j)}{\bar p_0(x_j)}\Big]^2 = \frac{\|p_0\|^2_\omega}{[p_0(x_j)]^2}\,\big[Q_0(x_j)\big]^2 = \big[Q_0(x_j)\big]^2 \int_a^b \omega(x)\,dx, \quad 0 \le j \le N.$

This completes the proof.

Notice that all the nodes of the Gauss formula lie in the interior of the interval
$(a,b)$. This makes it difficult to impose boundary conditions. Below, we consider
the Gauss-Radau and Gauss-Lobatto quadratures, which include either one or both
endpoints as nodes.

We start with the Gauss-Radau quadrature. Assuming we would like to include
the left endpoint $x = a$ in the quadrature, we define

$q_N(x) = \frac{p_{N+1}(x) + \alpha_N\,p_N(x)}{x - a} \quad\text{with}\quad \alpha_N = -\frac{p_{N+1}(a)}{p_N(a)}.$  (3.48)

It is obvious that $q_N \in P_N$, and for any $r_{N-1} \in P_{N-1}$, we derive from Lemma 3.1 that

$\int_a^b q_N(x)\,r_{N-1}(x)\,\omega(x)\,(x-a)\,dx = \int_a^b \big(p_{N+1}(x) + \alpha_N\,p_N(x)\big)\,r_{N-1}(x)\,\omega(x)\,dx = 0.$  (3.49)

Hence, $\{q_N : N \ge 0\}$ defines a sequence of polynomials orthogonal with respect to
the weight function $\hat\omega(x) := (x-a)\,\omega(x)$, and the leading coefficient of $q_N$ is $k_{N+1}$.

Theorem 3.7. (Gauss-Radau quadrature) Let $x_0 = a$ and $\{x_j\}_{j=1}^N$ be the zeros of
$q_N$ defined in (3.48). Then there exists a unique set of quadrature weights $\{\omega_j\}_{j=0}^N$,
defined by (3.36), such that

$\int_a^b p(x)\,\omega(x)\,dx = \sum_{j=0}^N p(x_j)\,\omega_j, \quad \forall p \in P_{2N}.$  (3.50)

Moreover, the quadrature weights are all positive and can be expressed as

$\omega_0 = \frac{1}{q_N(a)}\int_a^b q_N(x)\,\omega(x)\,dx,$  (3.51a)

$\omega_j = \frac{1}{x_j - a}\,\frac{k_{N+1}}{k_N}\,\frac{\|q_{N-1}\|^2_{\hat\omega}}{q_{N-1}(x_j)\,q'_N(x_j)}, \quad 1 \le j \le N.$  (3.51b)

Proof.
The proof is similar to that of Theorem 3.5, so we shall only sketch it below.
Obviously, for any $p \in P_N$,

$\int_a^b p(x)\,\omega(x)\,dx = \sum_{j=0}^N p(x_j)\int_a^b h_j(x)\,\omega(x)\,dx \overset{(3.36)}{=} \sum_{j=0}^N p(x_j)\,\omega_j.$  (3.52)

Hence, the DOP is at least $N$.
Next, for any $p \in P_{2N}$, we write

$p = (x - a)\,r\,q_N + s, \quad r \in P_{N-1}, \; s \in P_N.$

Since $\big[(x-a)\,q_N(x)\big]\big|_{x=x_j} = 0$, we have $p(x_j) = s(x_j)$ for $0 \le j \le N$. Therefore, we
deduce from (3.49) that

$\int_a^b p(x)\,\omega(x)\,dx = \int_a^b s(x)\,\omega(x)\,dx = \sum_{j=0}^N s(x_j)\,\omega_j = \sum_{j=0}^N p(x_j)\,\omega_j, \quad \forall p \in P_{2N}.$

Taking $p = h_k^2 \in P_{2N}$ in the above identities, we conclude that $\omega_k > 0$ for
$0 \le k \le N$.
Note that the Lagrange basis polynomials take the form

$h_j(x) = \frac{(x-a)\,q_N(x)}{\big[\big((x-a)\,q_N(x)\big)'\big]\big|_{x=x_j}\,(x - x_j)} = \frac{(x-a)\,q_N(x)}{\big(q_N(x_j) + (x_j - a)\,q'_N(x_j)\big)\,(x - x_j)}, \quad 0 \le j \le N.$

Hence, letting $j = 0$, we derive (3.51a) from the definition of $\omega_0$, and for $1 \le j \le N$,

$\omega_j = \int_a^b h_j(x)\,\omega(x)\,dx = \frac{1}{x_j - a}\int_a^b \frac{q_N(x)}{q'_N(x_j)\,(x - x_j)}\,\hat\omega(x)\,dx.$

Recall that $\{q_n\}$ are orthogonal with respect to $\hat\omega$, so the integral part turns out to
be the weight of the Gauss quadrature associated with $\hat\omega$, whose $N$ nodes are the zeros of
$q_N(x)$. Hence, (3.51b) follows from the formula (3.39).
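Combining Theorems 3.4 and 3.6 gives the standard eigenvalue-based recipe for Gauss rules. The sketch below computes Legendre-Gauss nodes and weights, using $\alpha_j = 0$, $\beta_j = j/\sqrt{4j^2-1}$ for the Legendre matrix and $\int_{-1}^1 \omega(x)\,dx = 2$ in (3.45); numpy supplies the symmetric eigendecomposition (an implementation choice):

```python
# Nodes and weights of the (N+1)-point Legendre-Gauss rule via
# Theorems 3.4 and 3.6.  Assumed Legendre matrix entries:
# alpha_j = 0, beta_j = j / sqrt(4 j^2 - 1); integral of w is 2.
import math
import numpy as np

def legendre_gauss(N):
    """Return nodes x and weights w of the (N+1)-point Gauss rule."""
    n = N + 1
    A = np.zeros((n, n))
    for j in range(1, n):
        A[j, j - 1] = A[j - 1, j] = j / math.sqrt(4 * j * j - 1)
    x, Q = np.linalg.eigh(A)            # orthonormal eigenvectors in columns
    w = 2.0 * Q[0, :] ** 2              # first components, formula (3.45)
    return x, w

x, w = legendre_gauss(4)                # 5-point rule, DOP 2N+1 = 9
approx = sum(wj * xj**8 for xj, wj in zip(x, w))
print(approx, 2.0 / 9.0)                # exact: integral of x^8 over (-1,1)
```

The 5-point rule integrates $x^8$ exactly (up to rounding), consistent with the maximal degree of precision $2N+1$ of Theorem 3.5.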
Remark 3.1. Similarly, a second Gauss-Radau quadrature can be constructed if we
want to include the right endpoint x = b instead of the left endpoint x = a.
We now turn to the Gauss-Lobatto quadrature, whose nodes include the two end-
points $x = a, b$. In this case, we choose $\alpha_N$ and $\beta_N$ such that

$p_{N+1}(x) + \alpha_N\,p_N(x) + \beta_N\,p_{N-1}(x) = 0 \quad\text{for}\quad x = a, b,$  (3.53)

and set

$z_{N-1}(x) = \frac{p_{N+1}(x) + \alpha_N\,p_N(x) + \beta_N\,p_{N-1}(x)}{(x-a)(b-x)}.$  (3.54)

It is clear that $z_{N-1} \in P_{N-1}$, and for any $r_{N-2} \in P_{N-2}$, we derive from Lemma 3.1
that

$\int_a^b z_{N-1}\,r_{N-2}\,(x-a)(b-x)\,\omega\,dx = \int_a^b \big(p_{N+1} + \alpha_N\,p_N + \beta_N\,p_{N-1}\big)\,r_{N-2}\,\omega\,dx = 0.$  (3.55)

Hence, $\{z_{N-1} : N \ge 1\}$ defines a sequence of polynomials orthogonal with respect
to the weight function $\tilde\omega(x) := (x-a)(b-x)\,\omega(x)$, and the leading coefficient of
$z_{N-1}$ is $-k_{N+1}$.
Theorem 3.8. (Gauss-Lobatto quadrature) Let $x_0 = a$, $x_N = b$ and $\{x_j\}_{j=1}^{N-1}$ be the
zeros of $z_{N-1}$ in (3.53)-(3.54). Then there exists a unique set of quadrature weights
$\{\omega_j\}_{j=0}^N$, defined by (3.36), such that

$\int_a^b p(x)\,\omega(x)\,dx = \sum_{j=0}^N p(x_j)\,\omega_j, \quad \forall p \in P_{2N-1},$  (3.56)

where the quadrature weights are expressed as

$\omega_0 = \frac{1}{(b-a)\,z_{N-1}(a)}\int_a^b (b-x)\,z_{N-1}(x)\,\omega(x)\,dx,$  (3.57a)

$\omega_j = \frac{1}{(x_j - a)(b - x_j)}\,\frac{k_{N+1}}{k_N}\,\frac{\|z_{N-2}\|^2_{\tilde\omega}}{z_{N-2}(x_j)\,z'_{N-1}(x_j)}, \quad 1 \le j \le N-1,$  (3.57b)

$\omega_N = \frac{1}{(b-a)\,z_{N-1}(b)}\int_a^b (x-a)\,z_{N-1}(x)\,\omega(x)\,dx.$  (3.57c)

Moreover, we have $\omega_j > 0$ for $1 \le j \le N-1$.
Proof. The exactness (3.56) and the formulas for the weights can be derived in a
similar fashion as in Theorem 3.7, so we skip the details. Here, we just verify $\omega_j > 0$
for $1 \le j \le N-1$ by using a different approach. Since $\{z_{N-1}\}$ are orthogonal with
respect to the weight function $\tilde\omega$, and $z_{N-1}(x_j) = 0$ for $1 \le j \le N-1$, we obtain
from the Christoffel-Darboux formula (3.16) that

$\frac{k_N}{k_{N+1}}\,z_{N-2}(x_j)\,z'_{N-1}(x_j) = \sum_{i=0}^{N-2}\frac{\|z_{N-2}\|^2_{\tilde\omega}}{\|z_i\|^2_{\tilde\omega}}\,z_i^2(x_j) > 0, \quad 1 \le j \le N-1.$

Inserting it into the formula (3.57b) leads to $\omega_j > 0$ for $1 \le j \le N-1$.
The Gauss-type quadrature formulas provide powerful tools for evaluating
integrals and inner products in a spectral method. They also play an important role
in spectral differentiation, as will be shown later.
3.1.5 Interpolation and Discrete Transforms
Let $\{x_j, \omega_j\}_{j=0}^N$ be a set of Gauss, Gauss-Radau or Gauss-Lobatto quadrature nodes
and weights. We define the corresponding discrete inner product and norm as

$\langle u, v\rangle_{N,\omega} := \sum_{j=0}^N u(x_j)\,v(x_j)\,\omega_j, \qquad \|u\|_{N,\omega} := \sqrt{\langle u, u\rangle_{N,\omega}}.$  (3.58)
Note that $\langle\cdot,\cdot\rangle_{N,\omega}$ is an approximation to the continuous inner product $(\cdot,\cdot)_\omega$, and
the exactness of Gauss-type quadrature formulas implies

$\langle u, v\rangle_{N,\omega} = (u, v)_\omega, \quad \forall\, u\cdot v \in P_{2N+\delta},$  (3.59)

where $\delta = 1, 0, -1$ for the Gauss, Gauss-Radau and Gauss-Lobatto quadrature,
respectively.
Definition 3.1. For any $u \in C(\Lambda)$, we define the interpolation operator $I_N : C(\Lambda) \to P_N$ such that

$(I_N u)(x_j) = u(x_j), \quad 0 \le j \le N,$  (3.60)

where $\Lambda = (a,b),\ [a,b),\ [a,b]$ for the Gauss, Gauss-Radau and Gauss-Lobatto
quadrature, respectively.
The interpolation condition (3.60) implies that $I_N p = p$ for all $p \in P_N$. On the other
hand, since $I_N u \in P_N$, we can write

$(I_N u)(x) = \sum_{n=0}^N \tilde u_n\,p_n(x),$  (3.61)

which is the counterpart of the discrete Fourier series (2.20) and may be referred to
as the discrete polynomial series. By taking the discrete inner product of (3.61) with
$\{p_k\}_{k=0}^N$, we can determine the coefficients $\{\tilde u_n\}$ by using (3.60) and (3.59). More
precisely, we have
Theorem 3.9.

$\tilde u_n = \frac{1}{\gamma_n}\sum_{j=0}^N u(x_j)\,p_n(x_j)\,\omega_j, \quad 0 \le n \le N,$  (3.62)

where $\gamma_n = \|p_n\|^2_\omega$ for $0 \le n \le N-1$, and

$\gamma_N = \begin{cases} \|p_N\|^2_\omega, & \text{for Gauss and Gauss-Radau},\\[2pt] \langle p_N, p_N\rangle_{N,\omega}, & \text{for Gauss-Lobatto}.\end{cases}$  (3.63)
The formulas (3.62)-(3.63) define the forward discrete polynomial transform as in
the Fourier case, which transforms the physical values $\{u(x_j)\}_{j=0}^N$ to the expansion
coefficients $\{\tilde u_n\}_{n=0}^N$. Conversely, the backward (or inverse) discrete polynomial
transform is formulated as

$u(x_j) = (I_N u)(x_j) = \sum_{n=0}^N \tilde u_n\,p_n(x_j), \quad 0 \le j \le N,$  (3.64)

which takes the expansion coefficients $\{\tilde u_n\}_{n=0}^N$ to the physical values $\{u(x_j)\}_{j=0}^N$.
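Once $p_n(x_j)$ is tabulated, the forward and backward transforms (3.62) and (3.64) amount to two matrix-vector products. The sketch below illustrates this for the Legendre-Gauss case, where $\gamma_n = \|P_n\|^2_\omega = 2/(2n+1)$; the eigenvalue method supplies the nodes and weights, and numpy is an implementation choice:

```python
# Forward transform (3.62) and backward transform (3.64) at
# Legendre-Gauss points.  Assumptions: Legendre matrix entries
# beta_j = j/sqrt(4j^2-1); gamma_n = 2/(2n+1) (Gauss case of (3.63)).
import math
import numpy as np

def legendre_gauss(N):
    n = N + 1
    A = np.zeros((n, n))
    for j in range(1, n):
        A[j, j - 1] = A[j - 1, j] = j / math.sqrt(4 * j * j - 1)
    x, Q = np.linalg.eigh(A)
    return x, 2.0 * Q[0, :] ** 2          # nodes and weights (3.45)

def legendre_vandermonde(N, x):
    """Matrix with entries P_n(x_j), n, j = 0..N."""
    P = np.ones((N + 1, len(x)))
    P[1] = x
    for m in range(1, N):
        P[m + 1] = ((2 * m + 1) * x * P[m] - m * P[m - 1]) / (m + 1)
    return P

N = 8
x, w = legendre_gauss(N)
P = legendre_vandermonde(N, x)
gamma = np.array([2.0 / (2 * n + 1) for n in range(N + 1)])

u = np.cos(x)                             # physical values u(x_j)
u_hat = (P * w) @ u / gamma               # forward transform (3.62)
u_back = P.T @ u_hat                      # backward transform (3.64)
print(np.max(np.abs(u_back - u)))         # round-trip error ~ machine eps
```

The forward/backward round trip returns the nodal values exactly (to rounding), reflecting the interpolation condition (3.60).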
We see that if the matrices $\big(p_n(x_j)\big)_{0\le n,j\le N}$ and/or $\big(\gamma_n^{-1}\,p_n(x_j)\,\omega_j\big)_{0\le n,j\le N}$ are
precomputed, then the discrete transforms (3.62) and (3.64) can be carried out
directly by a standard matrix-vector multiplication routine in about $N^2$ flops. Since
discrete t