Numerical Integration
Numerical integration, also known as numerical quadrature, is a fundamental technique for approximating definite integrals. It has wide-ranging applications in various scientific and engineering fields. For example, in statistics, the expected value of a random variable is calculated using an integral, and numerical integration can be employed to approximate this expected value.
The Numerics library provides several methods for performing numerical integration on one-dimensional integrands. Each algorithm computes an approximation to a definite integral of the form:

$$I = \int_a^b f(x) \, dx$$
For the first example, let's consider a simple function of a single variable:

$$f(x) = x^3$$

Integrating from $a = 0$ to $b = 1$ gives the exact result:

$$\int_0^1 x^3 \, dx = \frac{1}{4}$$

Definite integrals can be numerically approximated using Riemann sums, such as the trapezoidal rule. This method works by approximating the region under the function as a trapezoid and computing its area. The approximation can be improved by partitioning (or binning) the integration interval $[a, b]$ into smaller subintervals and summing the areas of the resulting trapezoids.
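As an illustrative sketch (in Python, not the library's code), the binned trapezoidal approximation can be written in a few lines and reproduces the values used in the example below:

```python
def trapezoidal_rule(f, a, b, n):
    """Approximate the integral of f over [a, b] using n equal-width bins."""
    h = (b - a) / n
    # Endpoints are weighted by 1/2; interior points are weighted by 1.
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# f(x) = x^3 integrated over [0, 1]; the exact answer is 0.25.
print(trapezoidal_rule(lambda x: x**3, 0.0, 1.0, 10))    # ~0.2525
print(trapezoidal_rule(lambda x: x**3, 0.0, 1.0, 1000))  # ~0.25000025
```

The error of the composite trapezoidal rule shrinks as the square of the bin width, which is why going from 10 to 1,000 bins improves the result so dramatically.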
Now, let's implement this in Numerics. First, we need to reference the Integration namespace:

```csharp
using Numerics.Mathematics.Integration;
```

Next, create the test function:
```csharp
/// <summary>
/// Test function: f(x) = x^3
/// </summary>
public double FX(double x)
{
    return Math.Pow(x, 3);
}
```

The Integration class is a static class that contains the Midpoint Rule, Trapezoidal Rule, Simpson's Rule, and 10-point Gauss-Legendre integration methods. Let's first compute the integral using the Trapezoidal Rule with 10 bins (or steps):

```csharp
double result = Integration.TrapezoidalRule(FX, 0, 1, 10); // 0.25249999999999995
```

Increasing the number of steps increases the accuracy. Let's compute it again using 1,000 steps:

```csharp
double result = Integration.TrapezoidalRule(FX, 0, 1, 1000); // 0.25000025000000053
```

This result is much more precise.
The challenge with static numerical integration methods, such as the trapezoidal rule mentioned above, is that the user must specify both the limits of integration and the number of integration bins. If the integrand function has subregions with high variance, this approach can lead to large approximation errors. Many real-world integrand functions have substantial weight concentrated in narrow subregions, resulting in wasted integration bins in areas that contribute little to the total weight.
Adaptive integration, a more refined numerical integration method, adjusts subintervals within the integration bounds based on the behavior of the function. These methods concentrate subintervals in regions that contribute the most to the integral, overcoming the limitations of static approaches.
The Numerics library provides two adaptive integration routines: the Adaptive Simpson's Rule and the Adaptive Gauss-Lobatto method.
The Adaptive Simpson's Rule (ASR) algorithm subdivides the integration interval recursively until a user-defined tolerance is achieved. In each subinterval, Simpson's Rule is used to approximate the region under the function. The criterion for determining when to stop subdividing an interval $[a, b]$ with midpoint $m$ is:

$$\left| S(a, m) + S(m, b) - S(a, b) \right| < 15\,\varepsilon$$

where $S(a, b)$ denotes the Simpson's Rule estimate over the interval $[a, b]$ and $\varepsilon$ is the user-defined error tolerance.
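The recursive subdivision and the stopping criterion can be sketched in Python (an illustrative implementation of the standard adaptive Simpson scheme, not the library's code):

```python
def simpson(f, a, b):
    """Simpson's Rule on a single interval [a, b]."""
    m = 0.5 * (a + b)
    return (b - a) / 6.0 * (f(a) + 4.0 * f(m) + f(b))

def adaptive_simpson(f, a, b, tol=1e-10):
    """Recursively subdivide until |S(a,m) + S(m,b) - S(a,b)| <= 15 * tol."""
    m = 0.5 * (a + b)
    whole = simpson(f, a, b)
    left, right = simpson(f, a, m), simpson(f, m, b)
    if abs(left + right - whole) <= 15.0 * tol:
        # Adding the difference / 15 (Richardson extrapolation) gains
        # one extra order of accuracy at no additional cost.
        return left + right + (left + right - whole) / 15.0
    # Split the tolerance between the two halves and recurse.
    return (adaptive_simpson(f, a, m, tol / 2.0) +
            adaptive_simpson(f, m, b, tol / 2.0))

print(adaptive_simpson(lambda x: x**3, 0.0, 1.0))  # 0.25 (Simpson's Rule is exact for cubics)
```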
More details on the ASR and Adaptive Gauss-Lobatto (AGL) methods can be found in [1] and [2], respectively.
To use the ASR method, follow these steps:

```csharp
var asr = new AdaptiveSimpsonsRule(FX, 0, 1);
asr.Integrate();
double result = asr.Result; // 0.25
```

For this simple test function, the ASR method requires only a few function evaluations to converge.
Alternatively, we can use the AGL method as follows:

```csharp
var agl = new AdaptiveGaussLobatto(FX, 0, 1);
agl.Integrate();
double result = agl.Result; // 0.24999999999999997
```

The AGL method requires 18 function evaluations to converge at the specified absolute and relative tolerances.
For a more challenging test problem, let's compute the mean of a Gamma distribution. The mean of a probability distribution is computed as:

$$\mu = \int_{-\infty}^{\infty} x \, f(x) \, dx$$

where $f(x)$ is the probability density function.
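As an illustrative cross-check in Python, the same mean can be recovered with a basic quadrature over $x \, f(x)$. The shape-scale pair (5, 10) is an assumption chosen so the mean equals 50, matching the result in the C# example; no claim is made about the argument order of the library's `GammaDistribution` constructor:

```python
import math

def gamma_pdf(x, shape, scale):
    """Gamma density: f(x) = x^(k-1) e^(-x/theta) / (Gamma(k) theta^k)."""
    return (x ** (shape - 1) * math.exp(-x / scale)
            / (math.gamma(shape) * scale ** shape))

def mean_by_quadrature(shape, scale, n=200000):
    # Integrate x * f(x) with the trapezoidal rule over a wide interval
    # that reaches far into the right tail of the distribution.
    a, b = 0.0, shape * scale + 40.0 * scale
    h = (b - a) / n
    total = 0.5 * (a * gamma_pdf(a, shape, scale) + b * gamma_pdf(b, shape, scale))
    for i in range(1, n):
        x = a + i * h
        total += x * gamma_pdf(x, shape, scale)
    return total * h

print(mean_by_quadrature(shape=5, scale=10))  # ~50 = shape * scale
```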
Now, let's implement this in Numerics. First, we need to reference the Integration and Distributions namespaces:

```csharp
using Numerics.Mathematics.Integration;
using Numerics.Distributions;
```

Then, using the ASR method, follow these steps:
```csharp
// Create the Gamma distribution and set the integration limits
var gamma = new GammaDistribution(10, 5);
double a = gamma.InverseCDF(1E-16);
double b = gamma.InverseCDF(1 - 1E-16);

// Create the integrand function
double I(double x)
{
    return x * gamma.PDF(x);
}

// Perform integration
var asr = new AdaptiveSimpsonsRule(I, a, b);
asr.Integrate();
double result = asr.Result; // 50.000000004866415
```

The ASR method converges to the exact mean of 50 with high precision.
Multidimensional integration, also known as multiple or multivariate integration, involves evaluating integrals over functions of more than one variable. Instead of integrating over a single interval, as in one-dimensional integration, you integrate over a region in a multidimensional space. This is commonly used in fields like physics, engineering, and statistics where systems often depend on multiple variables.
Solving multidimensional integrals is computationally demanding. If traditional, nonadaptive numerical integration techniques were used, the solution would require $N^D$ function evaluations, where $N$ is the number of evaluation points per dimension and $D$ is the number of dimensions. This exponential growth in cost, often called the curse of dimensionality, quickly makes such methods impractical.
To avoid these computational limitations, the Numerics library provides three multidimensional integration routines: Monte Carlo, Miser, and VEGAS. Each algorithm computes an approximation to a definite integral of the form:

$$I = \int_{a_1}^{b_1} \cdots \int_{a_D}^{b_D} f(x_1, \ldots, x_D) \, dx_D \cdots dx_1$$
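A minimal, nonadaptive Monte Carlo sketch in Python shows the basic approach that all three routines build on: sample uniformly over the hyper-rectangle, average the function values, and scale by the volume.

```python
import random

def monte_carlo_integrate(f, lower, upper, n=100000, seed=1):
    """Estimate the integral of f over a hyper-rectangle by averaging
    f at n uniformly sampled points and multiplying by the volume."""
    rng = random.Random(seed)
    volume = 1.0
    for lo, hi in zip(lower, upper):
        volume *= (hi - lo)
    total = 0.0
    for _ in range(n):
        x = [rng.uniform(lo, hi) for lo, hi in zip(lower, upper)]
        total += f(x)
    return volume * total / n

# Integral of (x^2 + y^2) over the unit square; the exact answer is 2/3.
est = monte_carlo_integrate(lambda p: p[0]**2 + p[1]**2, [0, 0], [1, 1])
print(est)  # ~0.667, with O(1/sqrt(n)) statistical error
```

The statistical error decreases only as $1/\sqrt{n}$ regardless of dimension, which is why the adaptive variants below concentrate samples where they matter most.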
The Miser integration algorithm is a type of adaptive Monte Carlo method designed for efficient evaluation of multidimensional integrals. It is particularly well-suited for integrands that exhibit regions of high variance, as it allocates more samples to areas where the integrand contributes more to the total integral. The algorithm combines the flexibility of Monte Carlo integration with adaptive subdivision techniques to enhance accuracy and efficiency in complex, high-dimensional problems.
Key Concepts of the Miser Algorithm:

- **Monte Carlo Integration:** The basic idea of Monte Carlo integration is to approximate the value of a multidimensional integral by randomly sampling points in the domain and averaging the function values at those points. However, this approach can be inefficient if the function has regions where the integrand varies significantly, leading to poor convergence.
- **Adaptive Subdivision:** Miser improves upon basic Monte Carlo integration by recursively subdividing the integration domain into smaller regions. The algorithm then allocates more samples to the subregions where the integrand has higher variance, focusing computational resources where they are most needed.
- **Variance-Based Sampling:** The Miser algorithm estimates the variance of the integrand in different subregions. Subregions with higher variance are given a greater proportion of the total samples. This reduces the error by refining the integral in the parts of the domain that contribute the most to the integral's value.
For more details on stratified sampling and the Miser algorithm, see [2] and [3].
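The variance-based allocation at the heart of Miser can be illustrated with a one-level stratified sketch in Python. The real algorithm recurses and bisects along coordinate axes; this sketch only shows the core idea of spending samples where the integrand varies most:

```python
import random

def stratified_estimate(f, a, b, n, rng):
    """Split [a, b] at the midpoint and allocate samples by estimated spread."""
    m = 0.5 * (a + b)
    pilot = n // 4  # samples spent probing each half

    def spread(lo, hi, k):
        # Pilot pass: estimate the standard deviation of f on [lo, hi].
        vals = [f(rng.uniform(lo, hi)) for _ in range(k)]
        mean = sum(vals) / k
        return (sum((v - mean) ** 2 for v in vals) / k) ** 0.5

    sl, sr = spread(a, m, pilot), spread(m, b, pilot)
    # Allocate the remaining budget in proportion to each half's variability.
    rest = n - 2 * pilot
    nl = max(1, int(rest * sl / (sl + sr + 1e-30)))
    nr = max(1, rest - nl)
    left = sum(f(rng.uniform(a, m)) for _ in range(nl)) / nl * (m - a)
    right = sum(f(rng.uniform(m, b)) for _ in range(nr)) / nr * (b - m)
    return left + right

rng = random.Random(7)
# f(x) = x^10 varies sharply near x = 1; the exact integral over [0, 1] is 1/11.
print(stratified_estimate(lambda x: x**10, 0.0, 1.0, 20000, rng))  # ~0.0909
```

Because nearly all of the variance lives in the right half, almost the entire sample budget lands there, which is exactly the behavior Miser generalizes to recursive, multidimensional bisection.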
The VEGAS integration method is a Monte Carlo-based numerical integration technique designed for efficiently evaluating high-dimensional integrals, particularly when dealing with functions that have significant variability in certain regions of the integration space [4][5]. It is widely used in computational physics and other fields requiring the evaluation of complex integrals.
Key Features of the VEGAS Algorithm:

- **Importance Sampling:** VEGAS employs importance sampling to focus the integration effort on regions where the integrand contributes most significantly to the integral. This helps to improve the accuracy of the integral estimate while reducing variance.
- **Adaptive Grid:** The algorithm adapts the sampling grid based on the characteristics of the integrand. It divides the integration domain into smaller subregions, and the sampling density is adjusted according to the estimated contribution of each region to the overall integral.
- **Iterative Approach:** VEGAS works in iterations, refining the sampling strategy with each pass. In the first iteration, a uniform grid is typically used. After evaluating the integrand, the method estimates the probability distribution of the function values, allowing the grid to be adjusted in subsequent iterations to better capture areas with higher contributions.
For more details on importance sampling and the VEGAS algorithm, see [4] and [5].
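The importance-sampling idea underlying VEGAS can be sketched in Python with a fixed (nonadaptive) proposal density; VEGAS itself learns a good proposal iteratively via its grid. Drawing samples from a density $g$ that roughly follows the integrand and averaging $f(x)/g(x)$ gives an unbiased estimate with much lower variance than uniform sampling:

```python
import random

def importance_sample(f, g_pdf, g_sample, n, rng):
    """Estimate the integral of f by averaging f(x)/g(x) for x drawn from g."""
    total = 0.0
    for _ in range(n):
        x = g_sample(rng)
        total += f(x) / g_pdf(x)
    return total / n

rng = random.Random(3)
# Target: integral of 5x^4 over [0, 1], which equals 1. The proposal
# g(x) = 2x concentrates samples near x = 1, where the integrand is largest.
f = lambda x: 5.0 * x**4
g_pdf = lambda x: 2.0 * x
# Sampling via sqrt(1 - u) keeps x in (0, 1], avoiding division by zero.
g_sample = lambda rng: (1.0 - rng.random()) ** 0.5
print(importance_sample(f, g_pdf, g_sample, 50000, rng))  # ~1.0
```

The closer $g$ tracks the shape of $f$, the smaller the variance of the weights $f/g$; VEGAS automates this by reshaping its grid after each iteration.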
[1] P. J. Davis and P. Rabinowitz, Methods of Numerical Integration, 2nd ed., Mineola, New York: Dover Publications, Inc., 2007.
[2] W. H. Press, S. A. Teukolsky, W. T. Vetterling and B. P. Flannery, Numerical Recipes: The Art of Scientific Computing., 3rd ed., Cambridge, UK: Cambridge University Press, 2017.
[3] A. Ciric, A Guide to Monte Carlo & Quantum Monte Carlo Methods, Createspace Independent Publishing Platform, 2016.
[4] G. Lepage, "A New Algorithm for Adaptive Multidimensional Integration," Journal of Computational Physics, vol. 27, no. 1, pp. 192-203, 1978.
[5] G. Lepage, "VEGAS: An Adaptive Multidimensional Integration Program," Cornell University, 1980.