MA2213: Numerical Analysis I
AY2014/2015, Semester 2, Lecturer: Tan Hwee Huat
Course Coverage:
1. Round-Off Errors & Computer Arithmetic
2. Algorithm & Convergence
3. Iterative Methods & Error Analysis
4. Accelerating Convergence
5. Interpolations
6. Numerical Integration
7. Linear System of Equations
The first course in numerical analysis focuses on the computability of theoretical results. The module begins with an introduction to computer arithmetic. Computers operate in binary, so numerical solutions are approximations subject to round-off error. Thus, in this module, the computation methods and the error analysis are of equal importance.
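For instance, a quick Python illustration of round-off (my own example, not from the module materials):

```python
# Binary floating point cannot represent 0.1 exactly, so small
# round-off errors appear even in the simplest arithmetic.
a = 0.1 + 0.2
print(a)                      # 0.30000000000000004, not 0.3
print(a == 0.3)               # False
print(abs(a - 0.3) < 1e-12)   # True: compare with a tolerance instead
```

This is why error analysis accompanies every method in the module: exact equality tests on computed results are unreliable, and one reasons about error bounds instead.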
The iterative methods covered in this module include the bisection method, fixed-point iteration and Newton's method. The bisection method is the easiest, and its error is halved (reduced geometrically) with each iteration. However, applying it to complicated equations can be inefficient because it may require an unnecessarily large number of iterations. Under certain conditions, fixed-point iteration can speed up the process, but in most cases the solution converges only linearly. A special case known as Newton's method, however, converges quadratically. The last part of this chapter also introduces methods that accelerate convergence to the true value. Two such methods are Aitken's method and Steffensen's method, the latter being an improvement built on the former.
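A minimal sketch of these root-finding ideas in Python (my own illustration; the module itself used textbook pseudocode):

```python
import math

def bisection(f, a, b, tol=1e-10):
    """Halve a sign-changing bracket [a, b]; the error shrinks by 2 per step."""
    fa = f(a)
    while (b - a) / 2 > tol:
        m = (a + b) / 2
        if fa * f(m) <= 0:
            b = m                # root lies in the left half
        else:
            a, fa = m, f(m)      # root lies in the right half
    return (a + b) / 2

def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton's method: quadratic convergence near a simple root."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def aitken(seq):
    """Aitken's delta-squared acceleration of a linearly convergent sequence."""
    return [x0 - (x1 - x0) ** 2 / (x2 - 2 * x1 + x0)
            for x0, x1, x2 in zip(seq, seq[1:], seq[2:])]

f = lambda x: x * x - 2.0            # root is sqrt(2)
root_b = bisection(f, 1.0, 2.0)
root_n = newton(f, lambda x: 2.0 * x, 1.5)

# Fixed-point iteration x -> cos(x) converges only linearly;
# Aitken's method produces a faster-converging sequence from it.
seq = [1.0]
for _ in range(9):
    seq.append(math.cos(seq[-1]))
acc = aitken(seq)
```

On smooth problems like this, Newton's method reaches full precision in a handful of iterations, while bisection needs roughly 33 halvings for the same tolerance, which is the efficiency gap described above.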
Interpolation begins with the basic Lagrange interpolating polynomial, which forms the theoretical foundation for interpolation from data points, followed by Newton's divided-difference method, which speeds up the construction of the interpolant. More advanced methods such as Hermite interpolation and cubic spline interpolation were taught but not tested in the finals (they still appeared in graded assignments, though).
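The divided-difference idea can be sketched in a few lines of Python (a hand-rolled illustration, assuming distinct nodes):

```python
def divided_differences(xs, ys):
    """Coefficients of Newton's divided-difference form of the interpolant."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        # Update in place from the bottom so lower-order entries survive.
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the Newton-form polynomial by nested (Horner-like) multiplication."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

xs = [0.0, 1.0, 2.0]
ys = [0.0, 1.0, 4.0]          # samples of f(x) = x**2
coef = divided_differences(xs, ys)
val = newton_eval(xs, coef, 1.5)   # interpolant reproduces x**2, so 2.25
```

The "acceleration" mentioned above is visible here: adding a new data point only appends one coefficient, whereas the Lagrange form must be rebuilt from scratch.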
Numerical integration is derived from interpolating polynomials. A numerical quadrature rule is obtained by integrating the interpolating polynomial over the given interval, and it approximates the integral of the objective function just as the polynomial approximates the function. Two advanced methods of numerical integration covered are (1) Romberg integration and (2) Gaussian quadrature.
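Romberg integration can be sketched as Richardson extrapolation applied to trapezoidal estimates (again my own illustration, not the module's code):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule on n equal subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def romberg(f, a, b, levels=5):
    """Romberg integration: extrapolate trapezoid values to kill error terms."""
    R = [[trapezoid(f, a, b, 1)]]
    for k in range(1, levels):
        row = [trapezoid(f, a, b, 2 ** k)]
        for j in range(1, k + 1):
            # Each extrapolation cancels the next even power of h in the error.
            row.append(row[j - 1] + (row[j - 1] - R[k - 1][j - 1]) / (4 ** j - 1))
        R.append(row)
    return R[-1][-1]

approx = romberg(math.sin, 0.0, math.pi)   # exact value is 2
```

With only 16 function panels at the finest level, the extrapolated value is accurate to many digits, while the raw trapezoid estimate at the same cost is off in the third decimal place.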
The last topic of the course is the study of linear systems, in particular how to perform matrix computations efficiently. The focus is on LU factorization and its application to special matrices, e.g. Cholesky factorization for symmetric positive definite matrices and Crout factorization for tridiagonal matrices.
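The basic LU idea, in a Doolittle-style sketch without pivoting (assumes nonzero pivots; pivoting and the special-matrix variants are the refinements the module covers):

```python
def lu_decompose(A):
    """Doolittle LU factorization without pivoting: A = L @ U,
    with L unit lower triangular and U upper triangular."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]          # work on a copy of A
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]      # elimination multiplier
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

A = [[4.0, 3.0], [6.0, 3.0]]
L, U = lu_decompose(A)
# L = [[1.0, 0.0], [1.5, 1.0]], U = [[4.0, 3.0], [0.0, -1.5]]
```

Once L and U are in hand, each new right-hand side costs only two triangular solves, which is the efficiency the special factorizations (Cholesky, Crout) push even further by exploiting matrix structure.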
This module is a core module for applied mathematics majors, and it has a wide range of computing applications. Personally, it has furthered my understanding of financial mathematics and some financial-economic models, and I would expect the same for many computation-based majors.
Workload: Moderate
Difficulty: Moderate
Grade: B+