Given Triangular Form of Matrix Find Solution
Linear Equations
William Ford , in Numerical Linear Algebra with Applications, 2015
2.3.1 Upper-Triangular Form
In upper-triangular form, a simple procedure known as back substitution determines the solution. Since the linear algebraic systems corresponding to the original and final augmented matrices have the same solution, the solution to the upper-triangular system
begins with x n = b n / a nn,
followed by x n−1 = (b n−1 − a n−1,n x n) / a n−1,n−1.
In general, x i = (b i − Σ j=i+1..n a ij x j) / a ii, for i = n − 1, n − 2, …, 1.
We now formally describe the Gaussian elimination procedure. Start with matrix A and produce matrix B in upper-triangular form which is row-equivalent to A. If A is the augmented matrix of a system of linear equations, then applying back substitution to B determines the solution to the system. It is also possible that there is no solution to the system, and the row-reduction process will make this evident.
Begin at element a 11. If a 11 = 0, exchange rows so that a 11 ≠ 0. Now make all the elements below a 11 zero by subtracting a multiple of row 1 from row i, 2 ≤ i ≤ n. The multiplier used for row i is m i1 = a i1 / a 11.
The matrix now has zeros below the pivot in column 1.
Now perform the same process of elimination using pivot a 22 and multipliers m i2 = a i2 / a 22, making a row exchange if necessary, so that all the elements below a 22 are 0.
Repeat this process until the matrix is in upper-triangular form, and then execute back substitution to compute the solution.
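As a concrete sketch of the procedure just described, the following Python function performs the elimination and the back substitution (an illustrative implementation, not the author's code; the function name and the list-of-rows representation are assumptions, and it presumes a nonzero pivot can always be obtained by a row exchange):

```python
def gaussian_solve(A, b):
    """Solve Ax = b by Gaussian elimination followed by back substitution.

    A is a list of n row-lists and b a list of n values; both are copied
    into an augmented matrix so the inputs are not modified.
    """
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix [A | b]
    for k in range(n - 1):
        if M[k][k] == 0:                       # exchange rows so the pivot is nonzero
            for r in range(k + 1, n):
                if M[r][k] != 0:
                    M[k], M[r] = M[r], M[k]
                    break
        for i in range(k + 1, n):              # zero out the entries below the pivot
            m = M[i][k] / M[k][k]              # multiplier for row i
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n                              # back substitution, last equation first
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x
```

For instance, `gaussian_solve([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]], [8, -11, -3])` returns [2.0, 3.0, -1.0] up to roundoff.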
Example 2.4
Solve the system
Row reduce the augmented matrix to upper-triangular form.
Execute back substitution.
Final solution: x 1 = 9, x 2 = − 13, x 3 = − 5
When computing a solution "by hand," it is a good idea to verify that the solution is correct.
URL: https://www.sciencedirect.com/science/article/pii/B9780123944351000028
The inverse
Richard Bronson , Gabriel B. Costa , in Matrix Methods (Fourth Edition), 2021
3.5 LU decomposition
Matrix inversion of elementary matrices (see Section 3.1) can be combined with the third elementary row operation (see Section 2.3) to generate a good numerical technique for solving simultaneous equations. It rests on being able to decompose a nonsingular square matrix A into the product of a lower triangular matrix L with an upper triangular matrix U. Generally, there are many such factorizations. If, however, we add the condition that all diagonal elements of L be unity, then the decomposition, when it exists, is unique, and we may write
(7)
with
and
To decompose A into the form (7), we first reduce A to upper triangular form using just the third elementary row operation: namely, add to one row of a matrix a scalar times another row of that same matrix. This is completely analogous to transforming a matrix to row-reduced form, except that we no longer use the first two elementary row operations. We do not interchange rows, and we do not multiply a row by a nonzero constant. Consequently, we no longer require the first nonzero element of each nonzero row to be unity, and if any of the pivots are zero—which in the row-reduction scheme would require a row interchange operation—then the decomposition scheme we seek cannot be done.
Example 1
Use the third elementary row operation to transform the matrix
into upper triangular form.
Solution
If a square matrix A can be reduced to upper triangular form U by a sequence of elementary row operations of the third type, then there exists a sequence of elementary matrices E 21, E 31, E 41, …, E n, n−1 such that
(8)
where E 21 denotes the elementary matrix that places a zero in the 2−1 position, E 31 denotes the elementary matrix that places a zero in the 3−1 position, E 41 denotes the elementary matrix that places a zero in the 4−1 position, and so on. Since elementary matrices have inverses, we can write (8) as
(9)
Each elementary matrix in (8) is lower triangular. It follows from Property 7 of Section 3.4 that each of the inverses in (9) is lower triangular, and then from Theorem 1 of Section 1.4 that the product of these lower triangular matrices is itself lower triangular. Setting
we see that (9) is identical to (7), and we have the decomposition we seek.
Example 2
Construct an LU decomposition for the matrix given in Example 1.
Solution The elementary matrices associated with the elementary row operations described in Example 1 are
with inverses given respectively by
Then,
or, upon multiplying together the inverses of the elementary matrices,
Example 2 suggests an important simplification of the decomposition process. Note that the elements in L below the main diagonal are the negatives of the scalars used in the elementary row operations to reduce the original matrix to upper triangular form! This is no coincidence. In general:
Observation 1
If an elementary row operation is used to put a zero in the i−j position of A (i > j) by adding to row i a scalar k times row j, then the i−j element of L in the LU decomposition of A is −k.
We summarize the decomposition process as follows: Use only the third elementary row operation to transform a given square matrix A to upper triangular form. If this is not possible, because of a zero pivot, then stop; otherwise, the LU decomposition is found by defining the resulting upper triangular matrix as U and constructing the lower triangular matrix L utilizing Observation 1.
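This summarized process can be sketched in Python as follows (illustrative only; the name `lu_decompose` is an assumption, and the routine stops on a zero pivot rather than interchanging rows, exactly as the text requires):

```python
def lu_decompose(A):
    """LU decomposition using only the third elementary row operation.

    Returns (L, U) with unit diagonal on L; raises ValueError on a zero
    pivot, since no row interchanges are allowed in this scheme.
    """
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for j in range(n - 1):
        if U[j][j] == 0:
            raise ValueError("zero pivot: this LU decomposition does not exist")
        for i in range(j + 1, n):
            k = -U[i][j] / U[j][j]        # scalar used in the row operation
            for c in range(j, n):
                U[i][c] += k * U[j][c]    # add k times row j to row i
            L[i][j] = -k                  # Observation 1: the i-j element of L is -k
    return L, U
```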
Example 3
Construct an LU decomposition for the matrix
Solution Transforming A to upper triangular form, we get
We now have an upper triangular matrix U. To get the lower triangular matrix L in the decomposition, we note that we used the scalar −3 to place a zero in the 2−1 position, so its negative −(−3) = 3 goes into the 2−1 position of L. We used the scalar −1/2 to place a zero in the 3−1 position in the second step of the above triangularization process, so its negative, 1/2, becomes the 3−1 element in L; we used the scalar 5/2 to place a zero in the 4−3 position during the last step of the triangularization process, so its negative, −5/2, becomes the 4−3 element in L. Continuing in this manner, we generate the decomposition
LU decompositions, when they exist, can be used to solve systems of simultaneous linear equations. If a square matrix A can be factored into A = LU, then the system of equations Ax = b can be written as L(Ux) = b. To find x, we first solve the system
(10)
for y, and then, once y is determined, we solve the system
(11)
for x. Both systems (10) and (11) are easy to solve, the first by forward substitution and the second by backward substitution.
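The two substitution passes for systems (10) and (11) can be sketched in Python as follows (names are illustrative; L and U are stored as lists of rows):

```python
def forward_sub(L, b):
    """Solve Ly = b for y when L is lower triangular (top to bottom)."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        s = sum(L[i][j] * y[j] for j in range(i))
        y[i] = (b[i] - s) / L[i][i]
    return y

def back_sub(U, y):
    """Solve Ux = y for x when U is upper triangular (bottom to top)."""
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x
```

Forward substitution proceeds from the top because each successive equation introduces exactly one new unknown; backward substitution proceeds from the bottom for the same reason.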
Example 4
Solve the system of equations
Solution This system has the matrix form
The LU decomposition for the coefficient matrix A is given in Example 2. If we define the components of y by α, β, and γ, respectively, the matrix system Ly = b is
which is equivalent to the system of equations
Solving this system from top to bottom, we get α = 9, β = −9, and γ = 30. Consequently, the matrix system Ux = y is
which is equivalent to the system of equations
Solving this system from bottom to top, we obtain the final solution x = −1, y = 4, and z = 5. ■
Example 5
Solve the system
Solution The matrix representation for this system has as its coefficient matrix the matrix A of Example 3. Define
Then, using the decomposition determined in Example 3, we can write the matrix system Ly = b as the system of equations
which has as its solution α = 5, β = −7, γ = 4, and δ = 0. Thus, the matrix system Ux = y is equivalent to the system of equations
Solving this set from bottom to top, we calculate the final solution a = −1, b = 3, c = 2, and d = 0. ■
LU decomposition and Gaussian elimination are equally efficient for solving Ax = b, when the decomposition exists. LU decomposition is superior when Ax = b must be solved repeatedly for different values of b but the same A, because once the factorization of A is determined, it can be used with all b. (See Problems 17 and 18.) A disadvantage of LU decomposition is that it does not exist for all nonsingular matrices; in particular, it fails whenever a pivot is zero. Fortunately, this occurs rarely, and when it does, the difficulty is usually overcome by simply rearranging the order of the equations. (See Problems 19 and 20.)
URL: https://www.sciencedirect.com/science/article/pii/B9780128184196000034
Determinants and Eigenvalues
Stephen Andrilli , David Hecker , in Elementary Linear Algebra (Fourth Edition), 2010
Calculating the Determinant by Row Reduction
We will now illustrate how to use row operations to calculate the determinant of a given matrix A by finding an upper triangular matrix B that is row equivalent to A.
Example 4
Let
We row reduce A to upper triangular form, as follows, keeping track of the effect on the determinant at each step:
Because the last matrix B is in upper triangular form, we stop. (Notice that we do not target the entries above the main diagonal, as in reduced row echelon form.) From Theorem 3.2, |B| is the product of its main diagonal entries. Since each row operation multiplied the determinant by a known factor, we can work backward from |B| to find |A|.
A more convenient method of calculating |A| is to create a variable P (for "product") with initial value 1, and to update P appropriately as each row operation is performed. That is, when a type (I) operation multiplies a row by the constant c, we replace the current value of P by cP, and when a type (III) operation swaps two rows, we replace P by −P.
Of course, row operations of type (II) do not affect the determinant. Then, using the final value of P, we can solve for |A| using |B| = P|A|, where B is the upper triangular result of the row reduction process. This method is illustrated in the next example.
Example 5
Let us redo the calculation of |A| in Example 4. We create a variable P and initialize P to 1. Listed below are the row operations used in that example to convert A into upper triangular form B. After each operation, we update the value of P accordingly.
| Row Operation | Effect | P |
|---|---|---|
| | Multiply P by −1 | −1 |
| | No change | −1 |
| | Multiply P by | |
| | No change | |
Then |A| equals the reciprocal of the final value of P times |B|; that is, |A| = |B| / P.
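This bookkeeping is easy to automate. The sketch below (illustrative Python, not the authors' code) reduces A using only type (II) operations and type (III) row swaps, so the only updates to P are sign changes:

```python
def det_by_row_reduction(A):
    """Compute |A| by reducing a copy of A to upper triangular form B.

    P starts at 1 and is multiplied by -1 for each type (III) row swap;
    type (II) operations leave P alone. Then |B| = P * |A|, so the
    function returns |A| = |B| / P.
    """
    n = len(A)
    B = [row[:] for row in A]
    P = 1.0
    for k in range(n):
        if B[k][k] == 0:                      # need a type (III) row swap
            for r in range(k + 1, n):
                if B[r][k] != 0:
                    B[k], B[r] = B[r], B[k]
                    P *= -1.0
                    break
            else:
                return 0.0                    # the whole column is zero
        for i in range(k + 1, n):             # type (II): no effect on P
            m = B[i][k] / B[k][k]
            for j in range(k, n):
                B[i][j] -= m * B[k][j]
    det_B = 1.0                               # |B| = product of diagonal entries
    for k in range(n):
        det_B *= B[k][k]
    return det_B / P
```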
URL: https://www.sciencedirect.com/science/article/pii/B9780123747518000226
Conditioning of Problems and Stability of Algorithms
William Ford , in Numerical Linear Algebra with Applications, 2015
Reasons Why the Study of Numerical Linear Algebra Is Necessary
Floating point roundoff and truncation error cause many problems. We have learned how to perform Gaussian elimination in order to row reduce a matrix to upper-triangular form. Unfortunately, if the pivot element is small, this can lead to serious errors in the solution. We will solve this problem in Chapter 11 by using partial pivoting.

Sometimes an algorithm is simply far too slow, and Cramer's Rule is an excellent example. It is useful for theoretical purposes but, as a method of solving a linear system, should not be used for systems greater than 2 × 2.

Solving Ax = b by finding A⁻¹ and then computing x = A⁻¹b is a poor approach. If the solution to a single system is required, one step of Gaussian elimination, properly performed, requires far fewer flops and results in less roundoff error. Even if the solution is required for many right-hand sides, we will show in Chapter 11 that first factoring A into a product of a lower- and an upper-triangular matrix and then performing forward and back substitution is much more effective.

A classical mistake is to compute eigenvalues by finding the roots of the characteristic polynomial. Polynomial root finding can be very sensitive to roundoff error and give extraordinarily poor results. There are excellent algorithms for computing eigenvalues that we will study in Chapters 18 and 19. Similarly, singular values should not be found by computing the eigenvalues of AᵀA; there are excellent algorithms for that purpose that are not subject to as much roundoff error.

Lastly, if m ≠ n, a theoretical linear algebra course deals with the system using a reduction to what is called reduced row echelon form. This will tell you whether the system has infinitely many solutions or no solution. These types of systems occur in least-squares problems, and we want a single meaningful solution. We will find one by requiring that x be such that ‖b − Ax‖2 is minimum.
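To illustrate the point about avoiding explicit inverses, the following NumPy snippet (an illustrative sketch; NumPy and the random test matrix are assumptions, not part of the chapter) compares a factorization-based solve with the inverse-based approach:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
b = rng.standard_normal(200)

# Preferred: a single factorization-based solve (LU under the hood).
x_solve = np.linalg.solve(A, b)

# Discouraged: explicitly forming the inverse costs more flops and
# typically incurs more roundoff error.
x_inv = np.linalg.inv(A) @ b

# For this well-conditioned example the two agree to within roundoff.
print(np.allclose(x_solve, x_inv))  # prints True
```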
URL: https://www.sciencedirect.com/science/article/pii/B9780123944351000107
Matrix Representation of Linear Algebraic Equations
Stormy Attaway , in Matlab (Second Edition), 2012
Gauss elimination
The Gauss elimination method consists of:
- creating the augmented matrix [A|b]
- applying EROs to this augmented matrix to get an upper triangular form (this is called forward elimination)
- back substitution to solve
For example, for a 2 × 2 system, the augmented matrix would be:
Then, elementary row operations are applied to get the augmented matrix into an upper triangular form (i.e., the square part of the matrix on the left is in upper triangular form):
So, the goal is simply to replace a 21 with 0. Here, the primes indicate that the values (may) have been changed.
Putting this back into the equation form yields
Performing this matrix multiplication for each row results in:
- a′11 x1 + a′12 x2 = b′1
- a′22 x2 = b′2

So, the solution is

- x2 = b′2 / a′22
- x1 = (b′1 − a′12 x2) / a′11
Similarly, for a 3 × 3 system, the augmented matrix is reduced to upper triangular form:
(This will be done systematically by first getting a 0 in the a 21 position, then a 31, and finally a 32.) Then, the solution will be:
- x3 = b3′ / a33′
- x2 = (b2′ − a23′x3) / a22′
- x1 = (b1′ − a13′x3 − a12′x2) / a11′
Note that we find the last unknown, x3, first, then the second unknown, and then the first unknown. This is why it is called back substitution.
As an example, consider the following 2 × 2 system of equations:
- x1 + 2x2 = 2
- 2x1 + 2x2 = 6
As a matrix equation Ax = b, this is:
The first step is to augment the coefficient matrix A with b to get an augmented matrix [A|b]:
For forward elimination, we want to get a 0 in the a 21 position. To accomplish this, we can modify the second line in the matrix by subtracting from it 2 * the first row.
The way we would write this ERO follows: r2 = r2 − 2r1.
Now, putting it back in matrix equation form:
says that the second equation is now −2x2 = 2, so x2 = −1. Plugging x2 = −1 into the first equation gives x1 + 2(−1) = 2, so x1 = 4.
This is back substitution.
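The same forward elimination and back substitution can be traced numerically in Python (an illustrative transcript of this 2 × 2 example, not the book's MATLAB session):

```python
# Augmented matrix [A | b] for x1 + 2 x2 = 2 and 2 x1 + 2 x2 = 6.
M = [[1.0, 2.0, 2.0],
     [2.0, 2.0, 6.0]]

# Forward elimination: r2 = r2 - 2 * r1 puts a 0 in the a21 position.
m = M[1][0] / M[0][0]                        # multiplier = 2
M[1] = [M[1][j] - m * M[0][j] for j in range(3)]
print(M[1])                                   # [0.0, -2.0, 2.0], i.e. -2 x2 = 2

# Back substitution.
x2 = M[1][2] / M[1][1]                        # x2 = -1
x1 = (M[0][2] - M[0][1] * x2) / M[0][0]       # x1 = (2 - 2*(-1)) / 1 = 4
print(x1, x2)                                 # 4.0 -1.0
```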
URL: https://www.sciencedirect.com/science/article/pii/B9780123850812000120
Vectors and Matrices
Frank E. Harris , in Mathematics for Physical Science and Engineering, 2014
Exercises
- For each of the following equation sets:
  - (a) Compute the determinant of the coefficients, using Eq. (4.61).
  - (b) Row-reduce the coefficient matrix to upper triangular form and either obtain the most general solution to the equations or explain why no solution exists.
  - (c) Confirm that the existence and/or uniqueness of the solutions you found in part (b) correspond to the zero (or nonzero) value you found for the determinant.
- 4.6.1
- 4.6.2
- 4.6.3
- 4.6.4
- 4.6.5
- 4.6.6
URL: https://www.sciencedirect.com/science/article/pii/B9780128010006000043
Determinants and Eigenvalues
Stephen Andrilli , David Hecker , in Elementary Linear Algebra (Fifth Edition), 2016
Techniques for Finding the Determinant of an n × n Matrix A
- 2×2 case: (Sections 2.4 and 3.1).
- 3×3 case: Basketweaving (Section 3.1).
- Row reduction: Row reduce A to an upper triangular form matrix B, keeping track of the effect of each row operation on the determinant using a variable P. Then |A| = |B|/P, using the final value of P. Advantages: easily computerized; relatively efficient (Section 3.2).
- Cofactor expansion: Multiply each element along any row or column of A by its cofactor and sum the results. Advantage: useful for matrices with many zero entries. Disadvantage: not as fast as row reduction (Sections 3.1 and 3.3).
Also remember that |A| = 0 if A is row equivalent to a matrix with a row or column of zeroes, or with two identical rows, or with two identical columns.
URL: https://www.sciencedirect.com/science/article/pii/B9780128008539000037
Gaussian Elimination and the LU Decomposition
William Ford , in Numerical Linear Algebra with Applications, 2015
In Chapter 2, we presented the process of solving a nonsingular linear system Ax = b using Gaussian elimination. We formed the augmented matrix A |b and applied the elementary row operations
1. multiplying a row by a scalar,
2. subtracting a multiple of one row from another, and
3. exchanging two rows
to reduce A to upper-triangular form. Following this step, back substitution computed the solution. In many applications where linear systems appear, one needs to solve Ax = b for many different vectors b. For instance, suppose a truss must be analyzed under several different loads. The matrix remains the same, but the right-hand side changes with each new load. Most of the work in Gaussian elimination is applying row operations to arrive at the upper-triangular matrix. If we need to solve several different systems with the same A, then we would like to avoid repeating the steps of Gaussian elimination on A for every different b. This can be accomplished by the LU decomposition, which in effect records the steps of Gaussian elimination.
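This record-and-reuse idea is exactly what library LU routines expose. For instance, assuming SciPy is available, `lu_factor` records the elimination once and `lu_solve` replays it for each new right-hand side (the 2 × 2 matrix below is made up for illustration):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve  # assumes SciPy is installed

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])

# Factor A once; lu packs L and U together, piv records the row exchanges.
lu, piv = lu_factor(A)

# Reuse the same factorization for several right-hand sides.
for b in (np.array([10.0, 12.0]), np.array([1.0, -1.0])):
    x = lu_solve((lu, piv), b)
    print(x, np.allclose(A @ x, b))
```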
URL: https://www.sciencedirect.com/science/article/pii/B9780123944351000119
Integer Discrete Cosine/Sine Transforms
Vladimir Britanak , ... K.R. Rao , in Discrete Cosine and Sine Transforms, 2007
5.2.7 QR, LU, LDU and PLUS factorizations
We mentioned that in linear algebra and matrix computations, a real nonsingular matrix is reduced by elementary rotation matrices and elementary transformations into various canonical forms in order to simplify the subsequent computational steps of the problem being solved.
Such procedures lead to various useful factorizations of the matrix into the products of structurally simpler matrices.
There are two basic methods for reducing a real nonsingular matrix of order N to an equivalent upper triangular form. The first method is based on premultiplications of the matrix by elementary Givens–Jacobi rotation matrices. This procedure leads to the well-known QR factorization, where Q is an orthogonal matrix and R is an upper triangular matrix. The QR factorization is also discussed in Appendix A.3 (see equations (A.27)–(A.34)). The following theorem and corollaries state the QR factorization [1].
Theorem 5.1: ([1], Chapter 1, p. 37)
Arbitrary real nonsingular matrix
can be reduced through successive premultiplications by elementary Givens–Jacobi rotation matrices Gij to an upper triangular matrix, all of whose diagonal elements except possibly the last are positive, i.e.,
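The reduction by successive Givens–Jacobi premultiplications can be sketched in Python (illustrative only; this version zeroes each subdiagonal entry in turn but does not enforce the theorem's sign normalization on the diagonal):

```python
import math

def givens_qr(A):
    """Reduce A to upper triangular R by premultiplying with Givens
    rotations; Q is recovered as the transpose of their accumulated
    product, so that A = Q R with Q orthogonal.
    """
    n = len(A)
    R = [row[:] for row in A]
    QT = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for j in range(n - 1):
        for i in range(j + 1, n):
            if R[i][j] == 0.0:
                continue                      # entry is already zero
            r = math.hypot(R[j][j], R[i][j])
            c, s = R[j][j] / r, R[i][j] / r   # cosine and sine of the rotation
            for M in (R, QT):                 # rotate rows j and i of R and of Q^T
                for k in range(n):
                    mj, mi = M[j][k], M[i][k]
                    M[j][k] = c * mj + s * mi
                    M[i][k] = -s * mj + c * mi
    Q = [[QT[j][i] for j in range(n)] for i in range(n)]  # Q = (Q^T)^T
    return Q, R
```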
URL: https://www.sciencedirect.com/science/article/pii/B9780123736246500072
Matrices
Richard Bronson , ... John T. Saccoman , in Linear Algebra (Third Edition), 2014
1.7 LU Decomposition
Matrix inversion of elementary matrices is at the core of still another popular method, known as LU decomposition, for solving simultaneous equations in the matrix form Ax = b. The method rests on factoring a nonsingular coefficient matrix A into the product of a lower triangular matrix L with an upper triangular matrix U. Generally, there are many such factorizations. If L is required to have all diagonal elements equal to 1, then the decomposition, when it exists, is unique and we may write
(1.33)
with
To decompose A into form (1.33), we first transform A to upper triangular form using just the third elementary row operation R 3. This is similar to transforming a matrix to row-reduced form, except we no longer use the first two elementary row operations. We do not interchange rows, and we do not multiply rows by nonzero constants. Consequently, we no longer require that the first nonzero element of each nonzero row be 1, and if any of the pivots are 0—which would indicate a row interchange in the transformation to row-reduced form—then the decomposition scheme we seek cannot be done.
Example 1 Use the third elementary row operation to transform the matrix
into upper triangular form.
Solution:
- by adding to the second row −2 times the first row
- by adding to the third row 3 times the first row
- by adding to the third row 1 times the second row
If a square matrix A can be reduced to upper triangular form U by a sequence of elementary row operations of the third type, then there exists a sequence of elementary matrices E 21, E 31, E 41, … , E n,n− 1 such that
(1.34)
where E 21 denotes the elementary matrix that places a 0 in the 2-1 position, E 31 denotes the elementary matrix that places a 0 in the 3-1 position, E 41 denotes the elementary matrix that places a 0 in the 4-1 position, and so on. Since elementary matrices have inverses, we can write Equation (1.34) as
(1.35)
Each elementary matrix in Equation (1.34) is lower triangular. It follows from Theorem 4 of Section 1.5 that each of the inverses in Equation (1.35) is lower triangular, and then from Theorem 2 of Section 1.3 that the product of these lower triangular inverses is itself lower triangular. If we set
then L is lower triangular and Equation (1.35) may be rewritten as A = LU, which is the decomposition we seek.
A square matrix A has an LU decomposition if A can be transformed to upper triangular form using only the third elementary row operation.
Example 2 Construct an LU decomposition for the matrix given in Example 1.
Solution: The elementary matrices associated with the elementary row operations described in Example 1 are
with inverses given respectively by
Then,
or, upon multiplying together the inverses of the elementary matrices,
Example 2 suggests an important simplification of the decomposition process. Note that the elements in L located below the main diagonal are the negatives of the scalars used in the elementary row operations in Example 1 to reduce A to upper triangular form! This is no coincidence.
▸Observation 1
If, in transforming a square matrix A to upper triangular form, a zero is placed in the i-j position by adding to row i a scalar k times row j, then the i-j element of L in the LU decomposition of A is − k.◂
We summarize the decomposition process as follows: Use only the third elementary row operation to transform a square matrix A to upper triangular form. If this is not possible, because of a zero pivot, then stop. Otherwise, the LU decomposition is found by defining the resulting upper triangular matrix as U and constructing the lower triangular matrix L according to Observation 1.
Example 3 Construct an LU decomposition for the matrix
Solution: Transforming A to upper triangular form, we get
- by adding to the second row −3 times the first row
- by adding to the third row −1/2 times the first row
- by adding to the third row −3/2 times the second row
- by adding to the fourth row 1 times the second row
- by adding to the fourth row 5/2 times the third row
We now have an upper triangular matrix U. To get the lower triangular matrix L in the decomposition, we note that we used the scalar − 3 to place a 0 in the 2-1 position, so its negative −(− 3) = 3 goes into the 2-1 position of L. We used the scalar − 1/2 to place a 0 in the 3-1 position in the second step of the preceding triangularization process, so its negative, 1/2, becomes the 3-1 element in L; we used the scalar 5/2 to place a 0 in the 4-3 position during the last step of the triangularization process, so its negative, − 5/2, becomes the 4-3 element in L. Continuing in this manner, we generate the decomposition
LU decompositions, when they exist, are used to solve systems of simultaneous linear equations. If a square matrix A can be factored into A = LU, then the system of equations Ax = b can be written as L(Ux) = b. To find x, we first solve the system
(1.36)
for y, and then once y is determined, we solve the system
(1.37)
for x. Both systems (1.36) and (1.37) are easy to solve, the first by forward substitution and the second by backward substitution.
If A = LU for a square matrix A, then the equation Ax = b is solved by first solving the equation Ly = b for y and then solving the equation Ux = y for x.
Example 4 Solve the system of equations:
Solution: This system has the matrix form
The LU decomposition for the coefficient matrix A is given in Example 2. If we define the components of y by α, β, and γ, respectively, the matrix system Ly = b is
which is equivalent to the system of equations
Solving this system from top to bottom, we get α = 9, β = − 9, and γ = 30. Consequently, the matrix system Ux = y is
which is equivalent to the system of equations
Solving this system from bottom to top, we obtain the final solution x = − 1, y = 4, and z = 5.
Example 5 Solve the system
Solution: The matrix representation for this system has as its coefficient matrix the matrix A of Example 3. Define
Then, using the decomposition determined in Example 3, we can write the matrix system Ly = b as the system of equations
which has as its solution α = 5, β = − 7, γ = 4, and δ = 0. Thus, the matrix system Ux = y is equivalent to the system of equations
Solving this set from bottom to top, we calculate the final solution as a = − 1, b = 3, c = 2, and d = 0.
Problems 1.7
In Problems 1 through 14, A and b are given. Construct an LU decomposition for the matrix A and then use it to solve the system Ax = b for x.
- (1)
- (2)
- (3)
- (4)
- (5)
- (6)
- (7)
- (8)
- (9)
- (10)
- (11)
- (12)
- (13)
- (14)
- (15)
  - (a) Use LU decomposition to solve the system
  - (b) Use the decomposition to solve the preceding system when the right sides of the equations are replaced by 1 and −1, respectively.
- (16)
  - (a) Use LU decomposition to solve the system
  - (b) Use the decomposition to solve the preceding system when the right side of each equation is replaced by 10, 10, and 10, respectively.
- (17) Solve the system Ax = b for the following vectors b when A is given as in Problem 4:
  - (a)
  - (b)
  - (c)
  - (d)
- (18) Solve the system Ax = b for the following vectors b when A is given as in Problem 13:
  - (a)
  - (b)
  - (c)
  - (d)
- (19) Show that LU decomposition cannot be used to solve the system
  but that the decomposition can be used if the first two equations are interchanged.
- (20) Show that LU decomposition cannot be used to solve the system
  but that the decomposition can be used if the first and third equations are interchanged.
- (21)
  - (a) Show that the LU decomposition procedure given in this section cannot be applied to
  - (b) Verify that A = LU, when
  - (c) Verify that A = LU, when
  - (d) Why do you think the LU decomposition procedure fails for this A? What might explain the fact that A has more than one LU decomposition?
URL: https://www.sciencedirect.com/science/article/pii/B9780123914200000019
Source: https://www.sciencedirect.com/topics/mathematics/upper-triangular-form