SOLVING SYSTEMS OF LINEAR EQUATIONS
Objective #1: Explain and use 3 methods for solving simultaneous linear
equations by hand.
- There are at least 3 general methods for solving sets of simultaneous linear
equations by hand. However, these techniques are often only practical when
the number of equations is three or fewer. A computer should be used when there
are more equations.
- Three methods for solving sets of simultaneous equations:
- The Graphical Method - Since linear equations form straight lines,
it is not difficult to graph them on the Cartesian plane. If you are solving
a set of 2 linear equations and the lines intersect, the solution is the
point of intersection. If there are three equations and the planes that
they represent intersect at a single point in a three-dimensional coordinate
system, that point of intersection is the solution. In either case, if they
do not intersect, then there are no solutions and the system is said
to be singular. If the intersection is a line or a plane, then
there are infinitely many solutions and the system is likewise said to be
singular. However, if the point of intersection is difficult to determine
because two lines or planes are very close to being parallel, then the
system is said to be ill-conditioned. Note that ill-conditioned
systems pose problems for numerical solutions and computer algorithms
due to round-off errors.
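- The intersection idea above can be sketched in Python for the 2 x 2 case.
This is a hypothetical helper (the name intersect and the example systems are
illustrative only); the determinant test detects the singular case.

```python
# Each equation a1*x + a2*y = b is a line; the solution is where the lines cross.
def intersect(a11, a12, b1, a21, a22, b2):
    """Return the intersection of a11*x + a12*y = b1 and a21*x + a22*y = b2,
    or None if the system is singular (parallel or coincident lines)."""
    det = a11 * a22 - a21 * a12
    if det == 0:
        return None  # no unique intersection: singular system
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return (x, y)

print(intersect(3, 2, 18, -1, 2, 2))  # lines cross at (4.0, 3.0)
print(intersect(1, 2, 3, 2, 4, 6))    # coincident lines -> None (singular)
```

A det value that is nonzero but very small corresponds to the ill-conditioned
case described above: the lines are nearly parallel and round-off dominates.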
- Cramer's Rule - The determinant of a 2 x 2 matrix of the form
      a11 a12
      a21 a22
  is D = a11*a22 - a21*a12
  The determinant of a 3 x 3 matrix of the form
      a11 a12 a13
      a21 a22 a23
      a31 a32 a33
  is D = a11*(a22*a33 - a32*a23) - a12*(a21*a33 - a31*a23) +
         a13*(a21*a32 - a31*a22)
Cramer's Rule states that each unknown in a system of linear equations
equals a fraction whose numerator is the determinant of the matrix formed
by replacing the column of that unknown's coefficients with the column of
constants (e.g. the constant b1, where a11x1 + a12x2 + a13x3 = b1 is the
first equation) and whose denominator is the determinant of the matrix
formed by the equations' coefficients.
- Another technique for finding the determinant of a 3 x 3 matrix is called
the Sarrus Rule.
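- Cramer's Rule for a 3 x 3 system can be sketched in Python as follows (det3
and cramer3 are hypothetical helper names; this is an illustration, not a
production solver):

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[2][1] * m[1][2])
          - m[0][1] * (m[1][0] * m[2][2] - m[2][0] * m[1][2])
          + m[0][2] * (m[1][0] * m[2][1] - m[2][0] * m[1][1]))

def cramer3(A, b):
    """Solve a 3x3 system A x = b with Cramer's Rule."""
    D = det3(A)
    if D == 0:
        raise ValueError("singular system: determinant is zero")
    xs = []
    for col in range(3):
        # Replace column `col` of A with the constants b, per Cramer's Rule.
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][col] = b[r]
        xs.append(det3(Ai) / D)
    return xs

A = [[1, 0, 0], [0, 2, 0], [0, 0, 4]]
print(cramer3(A, [3, 4, 8]))  # [3.0, 2.0, 2.0]
```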
- Elimination of Unknowns - Multiply one equation by a constant
so that the two equations can be combined to eliminate one
of the unknowns. The remaining unknown can then be solved for easily
and substituted back into either original equation to determine the
other unknown.
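- As a small worked illustration of elimination of unknowns (the system here
is hypothetical, chosen only for round numbers):

```python
# Eliminate y from the pair:  2x + 3y = 8  and  x - y = -1.
a1, b1, c1 = 2, 3, 8     # coefficients of the first equation
a2, b2, c2 = 1, -1, -1   # coefficients of the second equation

# Multiply eq. 2 by (b1 / b2) so the y coefficients match, then subtract.
m = b1 / b2
x = (c1 - m * c2) / (a1 - m * a2)  # (8 - 3) / (2 + 3) = 1
y = (c2 - a2 * x) / b2             # substitute back into eq. 2
print(x, y)  # 1.0 2.0
```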
Objective #2: Explain and use a computer algorithm for Gauss Elimination.
- Gauss elimination
(sometimes called Gaussian elimination) is a scheme which uses the
Elimination of Unknowns process as described above to solve a set of equations,
however large.
- Two phases:
- forward elimination
- The goal during this phase is to reduce the set of equations to
an upper triangular system where the only nonzero entries in
the matrix form a triangle in the upper-right corner of the matrix
that is augmented to the right with a vector formed by the equations'
constants. This matrix is called an augmented matrix.
- A pivot equation is chosen which has the maximum (in absolute
value) coefficient in the current column. This coefficient is called
the pivot coefficient (or pivot element). If necessary, you can swap
any two rows (i.e. equations) in the augmented matrix in order to place
the pivot equation at the top of the remaining rows.
- The pivot equation is divided by the value equal to its pivot coefficient
as long as this does not cause division by zero. This process is called
normalization.
- The pivot equation is multiplied by a factor chosen so that, when
it is subtracted from the next equation, the term in the pivot
column is eliminated from that equation.
- This process is continued until this term has been eliminated from
all of the equations below the pivot equation.
- These three operations can be and are performed on the augmented
matrix formed by the system of equations:
- You can multiply any row of the augmented matrix by a nonzero
number.
- You can add any row of the augmented matrix to any multiple
of another row.
- You can swap any two rows.
- The following pseudocode could be used to perform forward
elimination:
    For k = 1 to n-1
        For i = k+1 to n
            factor = A(i,k) / A(k,k)
            For j = k+1 to n
                A(i,j) = A(i,j) - factor * A(k,j)
            B(i) = B(i) - factor * B(k)
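- A runnable Python rendering of the forward-elimination step (a minimal
sketch with no pivoting, assuming no zero ever appears on the diagonal):

```python
def forward_eliminate(A, b):
    """Reduce A x = b to upper-triangular form in place (naive, no pivoting)."""
    n = len(A)
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            # Start j at k (not k+1) so the sub-diagonal entry is zeroed too.
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    return A, b

A = [[2.0, 1.0], [4.0, 3.0]]
b = [5.0, 11.0]
forward_eliminate(A, b)
print(A, b)  # [[2.0, 1.0], [0.0, 1.0]] [5.0, 1.0]
```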
- back substitution
- After an upper-triangular matrix has been formed, the last equation
directly gives the value of the last unknown.
- This result can be back-substituted in the next-to-last equation
in order to evaluate one of the other unknowns.
- This process can be continued until all of the unknowns have been
determined.
- The following pseudocode could be used to perform back
substitution:
    X(n) = B(n) / A(n,n)
    For i = n-1 to 1 by -1
        sum = 0
        For j = i+1 to n
            sum = sum + A(i,j) * X(j)
        X(i) = (B(i) - sum) / A(i,i)
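- The back-substitution step rendered as runnable Python (a sketch that
assumes the matrix is already upper triangular with nonzero diagonal):

```python
def back_substitute(A, b):
    """Solve an upper-triangular system A x = b, working from the last row up."""
    n = len(A)
    x = [0.0] * n
    x[n - 1] = b[n - 1] / A[n - 1][n - 1]
    for i in range(n - 2, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# Upper-triangular system: 2x + y = 5, y = 1  ->  x = 2, y = 1
print(back_substitute([[2.0, 1.0], [0.0, 1.0]], [5.0, 1.0]))  # [2.0, 1.0]
```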
- Potential errors
- Division by zero - The method of Gauss Elimination outlined above
is sometimes called Naive Gauss Elimination because it is possible to
divide by zero during the forward elimination or the back substitution
phases when blindly following the algorithm. The process called partial
pivoting is used to swap two rows of coefficients when necessary in
order to place the maximum coefficient of a given term in the proper row
and avoid division by zero. (Complete pivoting also involves swapping
columns but is rarely used because the order of the unknowns must then
be tracked, which complicates the program.)
The following pseudocode for partial pivoting could be
used:
    p = k
    big = abs(A(k,k))
    For ii = k+1 to n
        dummy = abs(A(ii,k))
        If dummy > big Then
            big = dummy
            p = ii
    If p != k Then
        For jj = k to n
            dummy = A(p,jj)
            A(p,jj) = A(k,jj)
            A(k,jj) = dummy
        dummy = B(p)
        B(p) = B(k)
        B(k) = dummy
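- The same partial-pivoting step as a short Python sketch: find the row at or
below row k with the largest magnitude in column k and swap it into the pivot
position.

```python
def partial_pivot(A, b, k):
    """Swap rows so that row k holds the largest |A[i][k]| for i >= k."""
    p = max(range(k, len(A)), key=lambda i: abs(A[i][k]))
    if p != k:
        A[k], A[p] = A[p], A[k]  # swap coefficient rows
        b[k], b[p] = b[p], b[k]  # swap the matching constants

A = [[0.0, 1.0], [3.0, 2.0]]  # a zero pivot that naive elimination would hit
b = [1.0, 5.0]
partial_pivot(A, b, 0)
print(A, b)  # [[3.0, 2.0], [0.0, 1.0]] [5.0, 1.0]
```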
- Round-off errors - As always, a computer must round off values
in order to compute and store them. In systems with large numbers of
equations, these round-off errors can snowball (propagate) into larger
errors as the algorithm proceeds. Round-off errors are also more likely
to affect the answers when some coefficients are extremely large and
others are very small, so scaling the equations so that the maximum
coefficient in each equation is one will decrease round-off error.
Sometimes, scaling is used only to determine whether equations should
be pivoted. Always substitute the computed results back into the
original equations of the system in order to check your answers and
decide whether round-off error is significant.
- Ill-conditioned systems - If a small change in one or more coefficients'
values leads to a great change to the solution of the system, the system
is said to be ill-conditioned. Otherwise, the system is considered to
be well-conditioned. In fact, round-off errors can cause large errors
in the computed solutions in ill-conditioned systems. It is sometimes
possible to scale the equations so that the maximum element in any row
is 1. Using the most precise data type is advisable to avoid having an
ill-conditioned system spoil your attempt at finding a solution.
- Singular systems - If a system is singular, then its determinant
is zero. Furthermore, the determinant of a triangular system can easily
be computed by multiplying the diagonal entries. Therefore, it is worthwhile
to check the determinant of a system with this simple computation after
forward elimination.
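- The cheap determinant check above can be sketched in Python (note that row
swaps during pivoting each flip the sign of the determinant, which does not
matter when testing against zero):

```python
import math

def triangular_det(U):
    """Determinant of a triangular matrix: the product of its diagonal entries."""
    return math.prod(U[i][i] for i in range(len(U)))

print(triangular_det([[2.0, 1.0], [0.0, 1.0]]))  # 2.0
print(triangular_det([[2.0, 1.0], [0.0, 0.0]]))  # 0.0 -> singular system
```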
- Pseudocode for Gauss Elimination - Notice that scaling is not used
to manipulate the equations, only to check whether pivoting should be
performed. Also, notice that the determinant is checked by computing the
diagonal terms in order to detect singular systems. The variable ER indicates
whether or not a singular system has been found. The variable tol is set by
the user to specify a tolerance for near-zero pivot values.
Sub Gauss(A, B, n, X, tol, ER)
    Dim S(n)
    ER = 0
    For i = 1 to n
        S(i) = abs(A(i,1))
        For j = 2 to n
            If abs(A(i,j)) > S(i) Then S(i) = abs(A(i,j))
    Call Eliminate(A, S, n, B, tol, ER)
    If ER != -1 Then Call Substitute(A, n, B, X)

Sub Eliminate(A, S, n, B, tol, ER)
    For k = 1 to n-1
        Call Pivot(A, B, S, n, k)
        If abs(A(k,k)/S(k)) < tol Then
            ER = -1
            break
        For i = k+1 to n
            factor = A(i,k) / A(k,k)
            For j = k+1 to n
                A(i,j) = A(i,j) - factor * A(k,j)
            B(i) = B(i) - factor * B(k)
    If abs(A(n,n)/S(n)) < tol Then ER = -1

Sub Pivot(A, B, S, n, k)
    p = k
    big = abs(A(k,k) / S(k))
    For ii = k+1 to n
        dummy = abs(A(ii,k) / S(ii))
        If dummy > big Then
            big = dummy
            p = ii
    If p != k Then
        For jj = k to n
            dummy = A(p,jj)
            A(p,jj) = A(k,jj)
            A(k,jj) = dummy
        dummy = B(p)
        B(p) = B(k)
        B(k) = dummy
        dummy = S(p)
        S(p) = S(k)
        S(k) = dummy

Sub Substitute(A, n, B, X)
    X(n) = B(n) / A(n,n)
    For i = n-1 to 1 by -1
        sum = 0
        For j = i+1 to n
            sum = sum + A(i,j) * X(j)
        X(i) = (B(i) - sum) / A(i,i)
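- The complete algorithm (scaled partial pivoting, forward elimination, back
substitution) can be combined into one runnable Python sketch. As in the
pseudocode, the scale vector holds each row's largest |coefficient| and is
used only to judge pivots, never to modify the equations; returning None
plays the role of ER = -1.

```python
def gauss(A, b, tol=1e-12):
    """Solve A x = b by Gauss elimination with scaled partial pivoting.
    Returns the solution list, or None if the system is (near-)singular."""
    n = len(A)
    s = [max(abs(v) for v in row) for row in A]  # per-row scale factors
    for k in range(n - 1):
        # Pivot: pick the row with the largest scaled coefficient in column k.
        p = max(range(k, n), key=lambda i: abs(A[i][k]) / s[i])
        if abs(A[p][k]) / s[p] < tol:
            return None  # near-zero pivot: singular or ill-conditioned
        if p != k:
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
            s[k], s[p] = s[p], s[k]
        # Eliminate column k from every row below the pivot row.
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    if abs(A[n - 1][n - 1]) / s[n - 1] < tol:
        return None  # final diagonal entry is (near) zero
    # Back substitution on the upper-triangular result.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        acc = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - acc) / A[i][i]
    return x

print(gauss([[2.0, 1.0], [4.0, 3.0]], [5.0, 11.0]))  # [2.0, 1.0]
print(gauss([[1.0, 2.0], [2.0, 4.0]], [3.0, 6.0]))   # None (singular)
```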
- Another form of Gauss Elimination pseudocode is found on p. 434 in C
Program Design for Engineers:
    p = 0
    For i = p to n-1
        pivot using the maximum pivot strategy
        if a solution is still possible
            scale the pivot row
            eliminate the coefficients beneath the pivot row
            go to the next row
    If last coefficient is 0
        no unique solution exists
    Else if there is a solution
        scale the last row
Objective #3: Explain and use a computer algorithm for the Gauss-Jordan
Method.
- The Gauss-Jordan
Method is a more extensive variation of Gauss Elimination.
- When an unknown is eliminated, it is eliminated from all other equations,
not just the ones below the current pivot equation. Each row is also
normalized by dividing it by its pivot element. As a result, the final
matrix is the identity matrix: all of the diagonal coefficients are 1 and
every other element is 0.
- Back-substitution is not necessary when using the Gauss-Jordan method since
the result is the identity matrix.
- The Gauss-Jordan method requires more work and more operations (such as
divisions and multiplications) than Gauss elimination, so plain Gauss
elimination is usually preferred.
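- The Gauss-Jordan variation can be sketched in Python as follows (a minimal
illustration with no pivoting, assuming no zero pivots are encountered):

```python
def gauss_jordan(A, b):
    """Reduce [A | b] to the identity; b then holds the solution directly."""
    n = len(A)
    for k in range(n):
        # Normalize the pivot row so the diagonal entry becomes 1.
        pivot = A[k][k]
        A[k] = [v / pivot for v in A[k]]
        b[k] /= pivot
        # Eliminate the unknown from every OTHER row, above and below.
        for i in range(n):
            if i != k:
                factor = A[i][k]
                A[i] = [A[i][j] - factor * A[k][j] for j in range(n)]
                b[i] -= factor * b[k]
    return b

print(gauss_jordan([[2.0, 1.0], [4.0, 3.0]], [5.0, 11.0]))  # [2.0, 1.0]
```

Note that no back-substitution pass appears: once A has become the identity,
the constants vector is the solution, as the notes above state.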