
Numerical Calculation for Gaussian Quadrature #

1. Overview of Gaussian Quadrature #

Consider integrating the product of a polynomial $f(x)$ of degree at most $2N-1$ and an appropriate weight function $w(x)$ over the interval $x=[a,b]$. Such an integral can be evaluated exactly from the values of $f$ at $N$ specific discrete points.

$$ \begin{align} \int_{a}^{b}w(x)f(x)dx=\sum_{i=1}^N \omega_i f(x_i). \end{align} $$

Here, $x=[a,b]$ is the domain of the orthogonal polynomials, and $x_i, \omega_i$ are the abscissae and weights. The abscissae are the points $x=x_i,~~(i=1,2,\cdots, N)$ satisfying $p_N(x)=0$ for the $N$th-degree orthogonal polynomial $p_N(x)$, and the weights are given by the following equations

$$ \begin{gather} p_N(x)=0 \to x=x_i,\\ \omega_i= \frac{k_{N}h_{N-1}}{k_{N-1} p'_N(x_i)p_{N-1}(x_i)}. \label{wdef} \end{gather} $$

Here, $p'_n(x)$ represents the derivative of $p_n(x)$, $p'_n(x)=\frac{dp_n(x)}{dx}$, and $h_n, k_n$ are quantities determined from the orthogonality relation and the leading coefficient of $p_n(x)$ as follows,

$$ \begin{gather} \int_a^b dx w(x) p_n(x) p_m(x)=h_n \delta_{n,m}, \\ p_n(x)=k_n x^n+\cdots. \end{gather} $$
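For example, take Gauss-Legendre with $N=2$: $w(x)=1$ and $P_2(x)=\frac{1}{2}(3x^2-1)$, so the abscissae are $x_{1,2}=\mp 1/\sqrt{3}$. With $k_2=3/2$, $k_1=1$, $h_1=2/3$, $P'_2(x)=3x$ and $P_1(x)=x$, equation \eqref{wdef} gives

$$ \omega_i=\frac{(3/2)(2/3)}{1\cdot 3x_i\cdot x_i}=\frac{1}{3x_i^2}=1, $$

so $\int_{-1}^{1}f(x)dx=f(-1/\sqrt{3})+f(1/\sqrt{3})$ holds exactly for any polynomial $f$ of degree at most $3$.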

2. Numerical Calculation Method #

2.1. Root Finding + Definition #

This is a commonly used method.

First, prepare appropriate initial values and determine the zeros of the polynomial numerically. Then, calculate the weights using the equation \eqref{wdef} provided above.

2.1.1. Calculation of Zeros #

Calculate the zeros of the polynomial.

In many cases, initial values are obtained by analytically solving the asymptotic expansion formula for degree $N$, truncated after its first one or two terms; these are then refined with the Newton-Raphson method.

The details of the Newton-Raphson method are omitted here; it finds $x$ satisfying $f(x)=0$ by repeatedly updating the value according to the following equation

$$ \begin{align} x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}. \end{align} $$

For example, consider finding the $n$ zeros of the Legendre polynomial $P_n(x)$ on $x=[-1, 1]$. Denote the zeros of $P_n(x)$ by $x_m^{(n)}$ and define the following $\theta_m^{(n)}$:

$$ x_{n-m+1}^{(n)}=\cos\theta_m^{(n)},~~(0<\theta_1^{(n)}<\theta_2^{(n)}<\cdots<\theta_n^{(n)}<\pi) $$

In this case, to find $\theta_m^{(n)}$, the following formula can be used as the initial value for the Newton-Raphson method [1]

$$ \begin{align} \theta_m^{(n)}=\frac{4m-1}{4n+2}\pi+\frac{1}{8n^2}\cot\frac{4m-1}{4n+2}\pi+O(n^{-3}) \end{align} $$

Alternatively, lower and upper bounds for each solution can be given [1]

$$ \begin{align} \frac{2m-1}{2n+1}\pi \leq \theta_m^{(n)}\leq \frac{2m}{2n+1}\pi \end{align} $$

Using these brackets, the zeros can be found with methods such as bisection or Brent's method.
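For instance, each bracket above contains exactly one simple zero of $P_n(\cos\theta)$, so the sign changes across the interval and Brent's method applies directly. A minimal sketch, assuming NumPy and SciPy are available:

import numpy as np
from numpy.polynomial import legendre as leg
from scipy.optimize import brentq

def legendre_zeros_brent(n):
    c = np.zeros(n + 1)
    c[-1] = 1.0                             # P_n in the Legendre basis
    f = lambda t: leg.legval(np.cos(t), c)  # P_n(cos(theta))
    theta = [brentq(f, (2 * m - 1) * np.pi / (2 * n + 1),
                    2 * m * np.pi / (2 * n + 1)) for m in range(1, n + 1)]
    return np.sort(np.cos(theta))           # zeros in increasing order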

For other orthogonal polynomials, see reference [1].

As this suggests, the combination of initial values and the Newton method can be delicate: the initial guess is not always close enough, and the iteration does not always converge to the intended zero. If rigorous lower and upper bounds are known, a bracketing method still works; but for polynomials where only approximate zero locations are known and their accuracy is insufficient, extra code must be added to check that no zeros have been missed.
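For reference, here is a minimal sketch of the initial-guess-plus-Newton approach for the Gauss-Legendre zeros, assuming NumPy (this is independent of the program distributed in section 3):

import numpy as np

def legendre_p_dp(n, x):
    # P_n(x) and P'_n(x) via (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}
    p_prev, p = np.ones_like(x), x
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    # P'_n(x) = n (x P_n(x) - P_{n-1}(x)) / (x^2 - 1), valid for |x| < 1
    return p, n * (x * p - p_prev) / (x * x - 1.0)

def legendre_zeros_newton(n, tol=1e-14, max_iter=50):
    m = np.arange(1, n + 1)
    theta = (4 * m - 1) / (4 * n + 2) * np.pi
    theta = theta + 1.0 / (8 * n * n * np.tan(theta))  # asymptotic guess
    x = np.cos(theta)
    for _ in range(max_iter):
        p, dp = legendre_p_dp(n, x)
        dx = p / dp
        x = x - dx
        if np.max(np.abs(dx)) < tol:
            break
    return np.sort(x)

print(legendre_zeros_newton(5))  # 0, +/-0.53847..., +/-0.90618...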

2.1.2. Calculation of Weights #

Once the positions of the abscissae have been determined, the next step is to calculate the weights \eqref{wdef}. The necessary coefficients are as follows [2][3].

| | Sym. | $a$ | $b$ | $w(x)$ | $h_n$ | $k_n$ |
|---|---|---|---|---|---|---|
| Jacobi | $P_n^{(\alpha,\beta)}(x)$ | $-1$ | $1$ | $(1-x)^\alpha(1+x)^\beta$ | $\frac{2^{\alpha+\beta+1}}{2n+\alpha+\beta+1}\frac{\Gamma(n+\alpha+1)\Gamma(n+\beta+1)}{n!\Gamma(n+\alpha+\beta+1)}$ | $\frac{1}{2^n}\binom{2n+\alpha+\beta}{n}$ |
| Legendre | $P_n(x)$ | $-1$ | $1$ | $1$ | $\frac{2}{2n+1}$ | $\frac{(2n)!}{2^n(n!)^2}$ |
| Associated Laguerre | $L_n^{(\alpha)}(x)$ | $0$ | $\infty$ | $x^\alpha e^{-x}$ | $\frac{\Gamma(\alpha+n+1)}{n!}$ | $\frac{(-1)^n}{n!}$ |
| Laguerre | $L_n(x)$ | $0$ | $\infty$ | $e^{-x}$ | $1$ | $\frac{(-1)^n}{n!}$ |
| Hermite | $H_n(x)$ | $-\infty$ | $\infty$ | $e^{-x^2}$ | $\sqrt{\pi}2^nn!$ | $2^n$ |

where $\alpha>-1,\beta>-1$.

In addition to the Gauss-Legendre family, we also introduce the Gauss-Radau and Gauss-Lobatto quadrature methods, which include the interval endpoints among the evaluation points.

| | Sym. | $a$ | $b$ | $w(x)$ | $x_i$ | $w_i$ | Error |
|---|---|---|---|---|---|---|---|
| Legendre | $P_n(x)$ | $-1$ | $1$ | $1$ | $P_N(x)=0$ | $\frac{2}{N P_{N-1}(x_i)P'_N(x_i)}$ | $O(x^{2N})$ |
| Radau [4] | - | $-1$ | $1$ | $1$ | $i=1{:}~x=-1$; $i=2,\cdots,N{:}~\frac{P_{N-1}(x)+P_N(x)}{1+x}=0$ | $\frac{1-x_i}{N^2[P'_{N-1}(x_i)]^2}$ | $O(x^{2N-1})$ |
| Lobatto [5] | - | $-1$ | $1$ | $1$ | $i=1{:}~x=-1$; $i=2,\cdots,N-1{:}~P'_{N-1}(x)=0$; $i=N{:}~x=+1$ | $\frac{2}{N(N-1) [P_{N-1}(x_i)]^2}$ | $O(x^{2N-2})$ |

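As a small sketch of the Lobatto row (assuming NumPy; this is separate from the program distributed in section 3), the interior nodes are the zeros of $P'_{N-1}(x)$ and the weights follow directly from the table:

import numpy as np
from numpy.polynomial import legendre as leg

def gauss_lobatto(n):
    c = np.zeros(n)
    c[-1] = 1.0                   # P_{n-1} in the Legendre basis
    dc = leg.legder(c)            # coefficients of P'_{n-1}
    x = np.concatenate(([-1.0], leg.legroots(dc), [1.0]))
    w = 2.0 / (n * (n - 1) * leg.legval(x, c) ** 2)
    return x, w

x, w = gauss_lobatto(5)
print(np.sum(w * x**4))           # 0.4, exact since 4 <= 2*5-3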

2.2. Golub-Welsch algorithm #

The zeros and weights of orthogonal polynomials can also be calculated using the three-term recurrence relation [6] and diagonalization. This is known as the Golub-Welsch algorithm (original paper [7], explanation [8]), which converts the three-term recurrence relation into an eigenvalue problem, allowing for highly accurate results. The method is described in detail in the literature, including explanations in Japanese [9][10].

Let’s describe the algorithm in detail.

2.2.1. Calculation of Zeros #

Orthogonal polynomials $\{p_n(x)\}$ follow a three-term recurrence relation, which is expressed as follows

$$ \begin{gather} a_n p_{n+1}(x)=(b_n+c_n x)p_{n}(x)-d_n p_{n-1}(x),\\ (n=0,1,2,\cdots,~~p_{-1}(x)=0) \end{gather} $$

The specific coefficients $a_n, b_n, c_n, d_n$ can be found in sources such as reference [6]. By rearranging the equation slightly, we obtain

$$ \begin{gather} xp_{n}(x)=\frac{d_n}{c_n}p_{n-1}(x)-\frac{b_n}{c_n}p_{n}(x)+\frac{a_n}{c_n}p_{n+1}(x) \end{gather} $$

When expressed in matrix form, it can be written as follows:

$$ \begin{align} \label{eqxJ} x\left( \begin{array}{c} p_0(x)\\ p_1(x)\\ \vdots\\ p_{N-1}(x) \end{array} \right)= \left( \begin{array}{ccccc} -\frac{b_0}{c_0} & \frac{a_0}{c_0} & 0 & \cdots & 0\\ \frac{d_1}{c_1} & -\frac{b_1}{c_1} & \frac{a_1}{c_1} & \cdots & 0\\ \vdots & \vdots & \ddots & \ddots & \vdots\\ 0 & 0 & \cdots & -\frac{b_{N-2}}{c_{N-2}} & \frac{a_{N-2}}{c_{N-2}}\\ 0 & 0 & \cdots & \frac{d_{N-1}}{c_{N-1}} & -\frac{b_{N-1}}{c_{N-1}}\\ \end{array} \right) \left( \begin{array}{c} p_0(x)\\ p_1(x)\\ \vdots\\ p_{N-1}(x) \end{array} \right)+ \frac{a_{N-1}}{c_{N-1}} \left( \begin{array}{c} 0\\ 0\\ \vdots\\ 0\\ p_{N}(x) \end{array} \right) \end{align} $$

Further rewriting it as

$$ \begin{align}\label{eqTN} x\mathbf{p}(x)=T\mathbf{p}(x)+\frac{a_{N-1}}{c_{N-1}}p_N(x)\,\mathbf{e}_N \end{align} $$

where,

  • $\mathbf{p}(x)=(p_0(x), p_1(x), \cdots, p_{N-1}(x))^{T}$ is an $N\times 1$ vector,
  • $\mathbf{e}_N=(0, 0, \cdots, 0, 1)^{T}$ is the $N\times 1$ unit vector with a 1 in the last entry,
  • $T$ is the $N\times N$ tridiagonal matrix appearing in equation \eqref{eqxJ}.

If $x$ satisfies $p_N(x)=0$, that is, $x=x_j$, then the last term on the right side of equation \eqref{eqTN} becomes zero, leading to the equation being rewritten as

$$ \begin{align}\label{eqTjN} T\mathbf{p}(x_j)=x_j\mathbf{p}(x_j) \end{align} $$

This means we have an eigenvalue problem for the tridiagonal matrix $T$: the $N$ zeros $x_j$ are the eigenvalues of $T$, and the corresponding eigenvector for each eigenvalue $x_j$ is $\mathbf{p}(x_j)=(p_0(x_j), p_1(x_j), \cdots, p_{N-1}(x_j))^T$.

Solving the eigenvalue problem \eqref{eqTjN} directly is feasible, but symmetrizing it first yields even better results.

By multiplying the three-term recurrence formula by $\kappa_n$ and introducing the rescaled polynomials $\tilde{p}_n(x)=\kappa_n p_n(x)$, we obtain a three-term recurrence formula for $\{\tilde{p}_n\}$

$$ \begin{gather} x\tilde{p}_{n}(x)=\beta_{n-1} \tilde{p}_{n-1}(x)+\alpha_n \tilde{p}_{n}(x)+\beta_n \tilde{p}_{n+1}(x) \\ \alpha_n \equiv -\frac{b_n}{c_n},~~~\beta_n \equiv \left(\frac{a_n d_{n+1}}{c_n c_{n+1}}\right)^{1/2} \end{gather} $$

Consequently, the eigenvalue problem \eqref{eqTjN} becomes:

$$ \begin{align}\label{eqJjNe} J\tilde{\mathbf{p}}(x_j)=x_j\tilde{\mathbf{p}}(x_j) \end{align} $$

Writing $J$ explicitly, with $D=\mathrm{diag}(\kappa_0,\kappa_1,\cdots,\kappa_{N-1})$ (so that $\tilde{\mathbf{p}}(x)=D\mathbf{p}(x)$):

$$ \begin{align} \label{eqJjN} DTD^{-1}=J= \left( \begin{array}{ccccc} \alpha_0 & \beta_0 & 0 & \cdots & 0\\ \beta_0 & \alpha_1 & \beta_1 & \cdots & 0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0 & 0 & \cdots & \alpha_{N-2} & \beta_{N-2}\\ 0 & 0 & \cdots & \beta_{N-2} & \alpha_{N-1}\\ \end{array} \right) \end{align} $$

The $\kappa$ values are given by the recurrence formula as

$$ \begin{align} \kappa_{n+1}=\left(\frac{a_n c_{n+1}}{c_n d_{n+1}}\right)^{1/2} \kappa_{n},~~\kappa_0=1. \end{align} $$
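In code, building $J$ from the generic recurrence coefficients takes only a few lines. A sketch assuming NumPy, with $a_n, b_n, c_n, d_n$ supplied as length-$N$ arrays (the function name is illustrative):

import numpy as np

def jacobi_matrix(a, b, c, d):
    # alpha_n = -b_n/c_n (diagonal), beta_n = sqrt(a_n d_{n+1}/(c_n c_{n+1}))
    a, b, c, d = (np.asarray(v, dtype=float) for v in (a, b, c, d))
    alpha = -b / c
    beta = np.sqrt(a[:-1] * d[1:] / (c[:-1] * c[1:]))
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)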


Diagonalizing $J$ yields eigenvectors, denoted here as $\tilde{\mathbf{q}}(x_j)$. These $\tilde{\mathbf{q}}(x_j)$ differ from the desired polynomial values $\mathbf{p}(x_j)$ in two main ways:

  1. Each element of $\tilde{\mathbf{q}}(x_j)$ is scaled by $\kappa_n$.
  2. The overall normalization of $\tilde{\mathbf{q}}(x_j)$ is undetermined, since equation \eqref{eqJjNe} fixes an eigenvector only up to a constant.

The first issue is resolved by dividing each element by $\kappa_n$, i.e.,

$$ q_n(x_j)\equiv \frac{\tilde{q}_n(x_j)}{\kappa_n} $$

This yields $\mathbf{q}(x_j)$. Regarding $\mathbf{q}(x_j)$ as the eigenvector, the only difference between $\mathbf{q}(x_j)$ and $\mathbf{p}(x_j)$ is an overall constant.

As for the second issue: since $\mathbf{q}(x_j)$ and $\mathbf{p}(x_j)$ differ only by a constant factor, it suffices to match the zeroth-order components of the two. Thus, $\mathbf{p}(x_j)$ is given by

$$ \begin{align}\label{pqrel} \mathbf{p}(x_j)=\frac{k_0}{q_0(x_j)}\mathbf{q}(x_j) \end{align} $$

Or, using $\tilde{\mathbf{q}}(x_j)$ directly,

$$ \begin{align}\label{pqrela} p_n(x_j)=\frac{k_0}{\tilde{q}_0(x_j)} \kappa_0\frac{\tilde{q}_n(x_j)}{\kappa_n},~~(j=0,1,\cdots,N-1) \end{align} $$

Here, $k_0$ is the coefficient for $n=0$ in the expansion [3]

$$ p_n(x)=k_nx^n+\cdots $$

Although the zeros are obtained from the eigenvalues, computing the weights directly from equation \eqref{wdef} would waste the eigenvector information that is already available. Let us transform the weight formula so that it uses this eigenvector information.

2.2.2. Calculation of Weights #

We will transform the equation for calculating weights using the Christoffel–Darboux theorem as follows

$$ \begin{align}\label{CDt} \sum_{\nu=0}^{N-1} \frac{p_\nu(x)p_\nu(y)}{h_\nu} = \frac{k_{N-1}}{h_{N-1} k_{N}}\frac{p_{N}(x)p_{N-1}(y)-p_{N-1}(x)p_{N}(y)}{x-y} \end{align} $$

Considering the limit $y\to x$ in equation \eqref{CDt}, the right-hand side becomes

$$ \begin{gather}\label{CDtlim} \frac{k_{N-1}}{h_{N-1} k_{N}}\lim_{y\to x}\left[ p_{N}(x)\frac{p_{N-1}(y)-p_{N-1}(x)}{x-y}+p_{N-1}(x)\frac{p_{N}(x)-p_{N}(y)}{x-y}\right] \\ =\frac{k_{N-1}}{h_{N-1} k_{N}}\left[-p_N(x)p'_{N-1}(x)+p_{N-1}(x)p'_N(x)\right] \end{gather} $$

Thus, we derive the relation

$$ \begin{align} \sum_{\nu=0}^{N-1} \frac{[p_\nu(x)]^2}{h_\nu} = \frac{k_{N-1}}{h_{N-1} k_{N}}\left[-p_N(x)p'_{N-1}(x)+p_{N-1}(x)p'_N(x)\right] \end{align} $$

Specifically, if $x$ is a zero of $p_N(x)$, i.e., $x=x_j$, then

$$ \begin{align}\label{CDnu} \sum_{\nu=0}^{N-1} \frac{[p_\nu(x_j)]^2}{h_\nu} = \frac{k_{N-1}}{h_{N-1} k_{N}}p_{N-1}(x_j)p'_N(x_j) \end{align} $$

The right-hand side of equation \eqref{CDnu} is exactly the reciprocal of the weight in equation \eqref{wdef}, yielding

$$ \begin{align} \omega_j&= \frac{k_{N}h_{N-1}}{k_{N-1} p'_N(x_j)p_{N-1}(x_j)}=\left[\sum_{\nu=0}^{N-1} \frac{p_\nu^2(x_j)}{h_\nu}\right]^{-1}\label{wdef2} \end{align} $$

When working with eigenvectors, it is convenient to use the eigenvectors rescaled via equation \eqref{pqrel} in equation \eqref{wdef2}. The calculation is therefore performed as

$$ \begin{align} \omega_j &=\left[\sum_{\nu=0}^{N-1} \frac{p_\nu^2(x_j)}{h_\nu}\right]^{-1}\\ &=\left(\frac{q_0(x_j)}{k_0}\right)^2 \left[\sum_{\nu=0}^{N-1} \frac{q_\nu^2(x_j)}{ h_\nu}\right]^{-1},\\ &(j=0, 1, \cdots, N-1)\nonumber \end{align} $$
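Putting this together for Gauss-Legendre ($\alpha_n=0$, $\beta_n=\frac{n+1}{\sqrt{(2n+1)(2n+3)}}$), here is a minimal sketch assuming NumPy. It uses the equivalent shortcut $\omega_j=\mu_0 v_{0j}^2$, where $v_{0j}$ is the first component of the $j$th normalized eigenvector and $\mu_0=\int_{-1}^{1}w(x)dx=2$; this follows from equation \eqref{wdef2} because the components of the normalized eigenvectors are proportional to $p_\nu(x_j)/\sqrt{h_\nu}$:

import numpy as np

def gauss_legendre_golub_welsch(n):
    # Off-diagonal beta_k = k/sqrt(4k^2 - 1), k = 1..n-1; diagonal alpha_n = 0
    k = np.arange(1.0, n)
    beta = k / np.sqrt(4.0 * k * k - 1.0)
    J = np.diag(beta, 1) + np.diag(beta, -1)
    x, V = np.linalg.eigh(J)          # eigenvalues = zeros of P_n
    w = 2.0 * V[0, :] ** 2            # mu_0 * (first eigenvector component)^2
    return x, w

x, w = gauss_legendre_golub_welsch(8)
print(np.sum(w * x**4))               # 0.4 up to rounding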

3. Program #

I have implemented the Golub-Welsch algorithm in Python.

Please download and unzip it from the following URL: https://slpr.sakura.ne.jp/sikinote/supplement/python/numeric/Integration/gauss_quadrature_abscissa_weight.zip (approx. 4 kB)

In general, I have aimed to follow “Handbook of Mathematical Functions” by Abramowitz and Stegun (hereinafter referred to as A&S) as closely as possible, with some minor deviations for numerical reasons.

Regarding the Gauss-Lobatto quadrature, I have derived it in Derivation of the Gaussian Quadrature, and the program is implemented based on this information.

Currently, I have implemented six different quadrature methods. When you call the corresponding function with the g_kind parameter, it will return the quadrature points and weights for the specified quadrature method.

x, w = gauss_quadrature_abscissa_weight(n_points, g_kind="Chebyshev1")

| g_kind | $a$ | $b$ | $w(x)$ | After Scaling |
|---|---|---|---|---|
| Chebyshev1 | $-1$ | $1$ | $(1-x^2)^{-1/2}$ | $\int_{a}^{b} [(x-a)(b-x)]^{-1/2} f(x) dx$ |
| Chebyshev2 | $-1$ | $1$ | $(1-x^2)^{1/2}$ | $\int_{a}^{b} [(x-a)(b-x)]^{1/2} f(x) dx$ |
| Legendre | $-1$ | $1$ | $1$ | $\int_{a}^{b} f(x) dx$ |
| Lobatto | $-1$ | $1$ | $1$ | $\int_{a}^{b} f(x) dx$ |
| Associated Laguerre | $0$ | $\infty$ | $x^\alpha e^{-x}$ | $\int_{0}^{\infty} x^{\alpha} e^{-\kappa x}f(x) dx$ |
| Hermite | $-\infty$ | $\infty$ | $e^{-x^2}$ | $\int_{-\infty}^{\infty} e^{-\kappa x^2}f(x) dx$ |

By applying an appropriate scaling, you can perform integrals corresponding to the “After Scaling” column in the table above.
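For the Legendre case, for instance, the scaling is the affine map $x \to \frac{b-a}{2}x + \frac{a+b}{2}$ together with the factor $\frac{b-a}{2}$ on the weights. A minimal sketch (using NumPy's leggauss here so it runs standalone; the zip's gauss_quadrature_abscissa_weight returns the same kind of (x, w) pair on $[-1,1]$):

import numpy as np

def integrate_on_interval(f, a, b, n):
    x, w = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (a + b)      # affine map to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(t))    # dt = (b-a)/2 dx

print(integrate_on_interval(lambda t: t**4, 0.0, 2.0, 6))  # exact: 32/5 = 6.4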

With the zip extracted and $f(x)=x^4$ specified, running the program produces the following results:

> python main.py
Chebyshev1 1.1780972450961693
Chebyshev2 0.1963495408493613
Legendre 0.39999999999999875
Lobatto 0.39999999999999997
Laguerre 0.6163885883547535
Hermite 0.2349964007466552
>

These results correspond to the calculation of the following integrals,

$$ \begin{align} \text{Chebyshev1} &: \int_{-1}^{1} (1-x^2)^{-1/2} x^4 dx = 3\pi/8 \approx 1.17809724509617\cdots \\ \text{Chebyshev2} &: \int_{-1}^{1} (1-x^2)^{1/2} x^4 dx =\pi/16 \approx 0.196349540849362\cdots \\ \text{Legendre} &: \int_{-1}^{1} x^4 dx = 2/5 = 0.4 \\ \text{Lobatto} &: \int_{-1}^{1} x^4 dx = 2/5 = 0.4 \\ \text{Laguerre} &: \int_{0}^{\infty} x^{-0.25} e^{-2 x} x^4 dx \approx 0.61638858835475457\cdots \\ \text{Hermite} &: \int_{-\infty}^{\infty} e^{-2 x^2}x^4 dx = \frac{3}{16}\sqrt{\pi/2} \approx 0.2349964007466563\cdots \end{align} $$

4. Appendix #

The following identities of the Gamma function are useful here:

$$ \begin{align} \Gamma(z+1)=z\Gamma(z) \end{align} $$

$$ \begin{align} \frac{\Gamma(n+b)}{\Gamma(n+a)}=\frac{\Gamma(b)}{\Gamma(a)}\cdot\prod_{k=1}^{n}\frac{n-k+b}{n-k+a} \end{align} $$

For the associated Laguerre polynomials, $h_n=\frac{\Gamma(\alpha+n+1)}{n!}$, and applying the identities above gives the recurrence

$$ \begin{gather} h_n=\frac{\alpha+n}{n}h_{n-1},~~h_0=\Gamma(\alpha+1) \end{gather} $$
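This recurrence avoids the overflow that evaluating $\Gamma(\alpha+n+1)$ and $n!$ separately would cause for large $n$. A minimal sketch (the function name is illustrative):

import math

def h_assoc_laguerre(n, alpha):
    # h_n = Gamma(alpha+n+1)/n!, computed as h_n = (alpha+n)/n * h_{n-1}
    h = math.gamma(alpha + 1.0)   # h_0 = Gamma(alpha+1)
    for k in range(1, n + 1):
        h *= (alpha + k) / k
    return h

print(h_assoc_laguerre(5, -0.25))  # equals Gamma(5.75)/5!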

5. References #