FINDING ROOTS
Objective #1: Explain the concept of roots to an equation and determine the
number of roots to an equation.
- A root of an equation is a value for x that makes a given function
f(x) equal to 0.0.
- Two curves, f(x) and g(x), intersect where f(x) - g(x) = 0.0, so a root
to F(x) = f(x) - g(x) is a point of intersection between the two functions.
- A polynomial of degree n (where n is the highest power of
the independent variable x) has exactly n roots, counting multiplicity;
these can include complex roots (with i = square root of -1) and repeated
roots.
- Transcendental functions, which involve trig, log, and/or exponential
functions, can have rational or irrational roots. However, since the
computer's number system is finite (due to the finite range of each data
type, the limited precision of each data type, and the processor's
computational methods), we usually use the same algorithms for finding
roots of transcendental functions as for polynomials.
- Transcendental functions can also have an infinite number of roots. Of
course, a computer cannot find or display an infinite number of roots, so
in such cases it is your responsibility to limit the search (for example,
by restricting the interval) or to terminate your program.
- See p. 672 for examples of the types of functions that are typically encountered
in root solving.
- The quadratic formula is only useful for general quadratic equations,
which of course have a degree of 2. Use the quadratic formula
r = (-b +/- sqrt(pow(b, 2) - 4 * a * c)) / (2 * a)
to solve for r, the roots of a general quadratic equation of the form
a * pow(x, 2) + b * x + c = 0.
(Note that the division is by the quantity (2 * a), so the parentheses
are required in code.)
Objective #2: Use graphical methods to find or predict roots.
- Our root-solving techniques will first require that you either specify an
initial interval that is known to contain a root or that you provide an initial
guess for the root.
- It is wise to initially draw a graph for a function when you wish to find
its roots to determine an initial guess for a root.
- It may take a significant amount of time to obtain a decent initial guess
for an equation's root.
- In fact, you must choose among the following methods (Bisection, False
Position, etc.) wisely and with great care. Otherwise, your algorithm
may be slow or even useless in particular cases.
Objective #3: Explain and use the Bisection Method for finding roots.
- The Bisection Method is a systematic, iterative method for finding
a root to a continuous function if you know that at least one root exists
between two given points in the domain. It is easier to use and more dependable
if you know for sure that exactly one root exists within a given interval.
- The algorithm picks the midpoint x of the interval (a, b) and computes
f(x). If f(a) and f(x) have opposite signs, the root lies in (a, x), so
the midpoint replaces b; otherwise the midpoint replaces a. Either way,
half of the interval is discarded on each iteration. After enough
iterations, the algorithm yields an approximation of the root within a
specified tolerance.
- The only inputs necessary for the Bisection Method would be the function
itself, the tolerance, and the endpoints of the original interval. You may
also wish to have the user specify a maximum number of iterations.
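The steps above can be sketched in C as follows. This is not the textbook's Program 14.1; it is a minimal illustration under the assumptions just listed (the function name bisect and the sample f(x) = x*x - 2 are illustrative):

```c
#include <math.h>

/* Sample function; replace with the f(x) whose root you need. */
static double f(double x)
{
    return x * x - 2.0;    /* has a root at sqrt(2) in (0, 2) */
}

/* Bisection: assumes f is continuous on [a, b] and f(a), f(b) have
   opposite signs, so at least one root lies in the interval. */
double bisect(double a, double b, double tol, int max_iter)
{
    double fa = f(a);
    for (int i = 0; i < max_iter && (b - a) > tol; i++) {
        double mid = (a + b) / 2.0;
        double fmid = f(mid);
        if (fa * fmid <= 0.0) {
            b = mid;           /* sign change in [a, mid]: keep left half */
        } else {
            a = mid;           /* root must be in [mid, b]: keep right half */
            fa = fmid;
        }
    }
    return (a + b) / 2.0;      /* midpoint of the final interval */
}
```

Notice that f is evaluated only once per iteration; the value at the retained endpoint is reused rather than recomputed.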
- The speed of the Bisection Method depends on the size of the interval.
That is, the larger the interval that contains the root, the more
iterations and the more time it will take to find a suitable approximate
root.
- When using the Bisection Method, the number of iterations needed to
reach an approximate root within a specified tolerance can be predicted.
If you are required to find a root to within a tolerance, d, then the
number of iterations, n, satisfies
(b - a) / pow(2, n) < d
where a and b are the original endpoints.
(See p. 676 of the old textbook)
You can also use the equation
log2((b - a) / d) < n
or
log((b - a) / d) / log 2 < n        (since log2 x = log x / log 2)
to determine the smallest whole number n that represents the number of
iterations that will guarantee an acceptable approximated root.
- Study and understand the Bisection Demo program (formerly known as Program 14.1
from p. 757 of the old textbook).
You will be expected to explain this program's algorithm in essay form. You
will also be expected to be familiar enough with this program to fill in missing
sections (variables, expressions, and even whole statements).
- Notice that f(x1), f(x2), and f(x3) are not all computed on each and
every iteration of the for loop. Recomputing all of them would lead to
extra unnecessary computations, slowing down the processor and increasing
the overall program execution time.
- Also, notice that a number of diagnostic messages are included to expose
potential problems such as excessive iterations and no noticeable roots
within the specified interval.
Objective #4: Explain and use the False Position Method (aka Regula Falsi
Method) for finding roots.
- The Bisection Method is not the most sophisticated algorithm for
finding real roots under these conditions (a continuous function with a
root bracketed by an interval). It can be considered a use of brute
force. It can be slightly improved by interpolating the successive
approximated roots between the function's values at the interval
endpoints. Modifying it in this way leads to what is called the False
Position Method, which almost always converges more quickly than the
Bisection Method.
- Basically, the statement
x2 = (x1 + x3) / 2;
in the Bisection Method algorithm is replaced by the statement
x2 = x1 - (x3 - x1) * f1 / (f3 - f1);
to turn it into the False Position Method. The old textbook uses slightly
different code in Program 14.2
due to its particular
form of the algorithm.
- The False Position Method also guarantees that you will find a root within
a specified tolerance if the function is continuous and there is at least
one root in the original interval.
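Putting the interpolation statement into the bisection skeleton gives a sketch like the following (not the textbook's Program 14.2; the function name false_position and the sample f(x) = x*x - 2 are illustrative). It assumes f(x1) and f(x3) have opposite signs, so f3 - f1 is nonzero on the first pass:

```c
#include <math.h>

static double f(double x)
{
    return x * x - 2.0;    /* sample function with a root at sqrt(2) */
}

/* False Position: like bisection, but x2 is where the chord through
   (x1, f1) and (x3, f3) crosses the x-axis, not the midpoint. */
double false_position(double x1, double x3, double tol, int max_iter)
{
    double f1 = f(x1), f3 = f(x3), x2 = x1;
    for (int i = 0; i < max_iter; i++) {
        x2 = x1 - (x3 - x1) * f1 / (f3 - f1);   /* interpolated estimate */
        double f2 = f(x2);
        if (fabs(f2) < tol)
            break;                 /* close enough to a root */
        if (f1 * f2 < 0.0) {
            x3 = x2; f3 = f2;      /* root in [x1, x2] */
        } else {
            x1 = x2; f1 = f2;      /* root in [x2, x3] */
        }
    }
    return x2;
}
```

Note that the stopping test here is on the size of f(x2) rather than the interval width, because with False Position one endpoint can remain fixed, so the bracketing interval may never shrink below the tolerance.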
- Study and understand the Modified Regula Falsi demo (formerly Program 14.2
from p. 765 of the old textbook).
You will be expected to explain this program's algorithm in essay form. You
will also be expected to be familiar enough with this program to fill in missing
sections (variables, expressions, and even whole statements).
- Since the values of f1 and f3 could be very close, their difference
could be so close to zero that it causes round-off errors (even when
using a double data type). This is a potential problem that could lead to
poor results depending on the nature of the function.
- The only inputs necessary for the Method of False Position would be the
function itself, the tolerance, and the endpoints of the original interval.
You may also wish to have the user specify a maximum number of iterations.
- By introducing a relaxation factor when drawing the interpolating
segments, the method converges more quickly. Our textbook calls this
method the "Modified Regula Falsi Method." The relaxation factor
artificially reduces the slope of the interpolating line used to find the
next predicted root value. Eventually, after enough approximations, the
predicted roots alternate between the left and right sides of the
interval. This modified version of False Position is more efficient than
both the unmodified False Position Method and the Bisection Method.
- The textbook suggests using a relaxation factor of 0.9 unless you can perform
the sophisticated analysis to determine a better relaxation factor for the
given function.
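One common way to implement the relaxation (a sketch of the general idea, not necessarily the textbook's exact scheme; the function name and sample f are illustrative) is to watch which endpoint was retained: when the same endpoint is kept twice in a row, the stored function value at that stagnant endpoint is multiplied by the relaxation factor 0.9, flattening the chord so the next estimate moves farther into the interval:

```c
#include <math.h>

static double f(double x)
{
    return x * x - 2.0;    /* sample function with a root at sqrt(2) */
}

/* Modified regula falsi with relaxation factor 0.9; assumes f(x1) and
   f(x3) have opposite signs. */
double modified_false_position(double x1, double x3, double tol,
                               int max_iter)
{
    double f1 = f(x1), f3 = f(x3), x2 = x1;
    int side = 0;                       /* which endpoint moved last time */
    for (int i = 0; i < max_iter; i++) {
        x2 = x1 - (x3 - x1) * f1 / (f3 - f1);
        double f2 = f(x2);
        if (fabs(f2) < tol)
            break;
        if (f1 * f2 < 0.0) {            /* root in [x1, x2]: move x3 */
            x3 = x2; f3 = f2;
            if (side == -1) f1 *= 0.9;  /* x1 stagnant again: relax f1 */
            side = -1;
        } else {                        /* root in [x2, x3]: move x1 */
            x1 = x2; f1 = f2;
            if (side == 1) f3 *= 0.9;   /* x3 stagnant again: relax f3 */
            side = 1;
        }
    }
    return x2;
}
```

Multiplying a stored function value by 0.9 never changes its sign, so the bracketing logic stays valid while the stagnant endpoint's influence shrinks until the predicted roots begin alternating sides.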
- It is NOT possible to predict the number of iterations necessary to
find a suitable root with either form of the False Position Method.
Therefore, the programmer should include a check for an excessive number
of iterations. The speed of the False Position Method depends on the
behavior of the function, not the size of the interval. That is, the
larger the change in the function across the interval (i.e., f(b) -
f(a)), the quicker the method will home in on a suitable approximate
root. However, the speed of the Modified False Position Method depends on
the size of the interval.