The Square Root Function
Sometimes I think I take math.h for granted, so today I'm going to try and create my own square root function.
Since many square roots are irrational, this will be all about trying to get the best approximation that I can.
I'm not really sure where to start, so I'll try to implement the first (and only) approximation I learned back in Calc 1: a linear approximation.
A linear approximation
So we're trying to estimate $\sqrt{x}$ for some input $x$.
To take a linear approximation we can find a perfect square $a^2$ close to $x$, and use the tangent line to $f(x) = \sqrt{x}$ at that point to approximate.
The tangent line can be found with the derivative
$$f'(x) = \frac{1}{2\sqrt{x}}$$
Plugging this into a line equation, the tangent line to the square root function at the input $a^2$ is given by
$$L(x) = a + \frac{1}{2a}\left(x - a^2\right)$$
Now the only thing left to find is a perfect square $a^2$ that is close to $x$.
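As a quick worked example, take $x = 10$ with the nearby perfect square $9$ (so $a = 3$):

```latex
L(10) = 3 + \frac{1}{2 \cdot 3}(10 - 9) = 3 + \frac{1}{6} \approx 3.1667,
\qquad \sqrt{10} \approx 3.1623
```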
I'm sure there's a better way to do this, but for now I'm just going to iterate through numbers until the square is greater than the target number.
Theoretically this should be alright, as the squares grow fairly quickly, so the loop only needs on the order of $\sqrt{x}$ iterations.
Here's how I'm finding the closest perfect square
(note that the function returns the square root of that perfect square, i.e. the integer $a$)
int closestSquare(double x)
{
int previousSquare = 1; // initialized so inputs <= 1 don't read an unset value
int nextSquare = 1;
while (nextSquare * nextSquare < x)
{
previousSquare = nextSquare++;
}
return (nextSquare * nextSquare - x) > (x - previousSquare * previousSquare) ? previousSquare : nextSquare;
}
And now we should have everything we need to implement this approximation.
double linear_sqrt(double x)
{
int cs = closestSquare(x);
return (1.0 / (2 * cs)) * (x - cs * cs) + cs;
}
Ok, so before I try to improve this approximation, let's get a sense of how accurate it is compared to the sqrt() function in math.h.
The error should be smaller for larger numbers, since the square root grows more slowly there and the tangent line becomes a better approximation.
Averaging the error for the first positive integers we get...
| Average Error | |
|---|---|
... which I think is pretty decent for a simple approximation
The second iteration
I'm not really sure of any other types of approximations, so I think the next logical step is to create a quadratic approximation using both the first and second derivatives of the function.
This will use an approximating parabola that looks like this
$$Q(x) = a + \frac{1}{2a}\left(x - a^2\right) - \frac{1}{8a^3}\left(x - a^2\right)^2$$
where this function has the same value, first derivative, and second derivative as the square root function at the input $a^2$.
This is the implementation
double quad_sqrt(double x)
{
int cs = closestSquare(x);
return (-1.0 / (8 * power(cs, 3))) * power((x - cs * cs), 2) + (1.0 / (2 * cs)) * (x - cs * cs) + cs;
}
(I created a power function to simplify things, can't be cheating with math.h)
So let's check the error for this function...
| Average Error | |
|---|---|
...and it seems to be improving
A general form
Now, to improve this to any desired accuracy, it seems that we should be able to keep taking derivatives of the function and create better and better approximations...
this sounds a lot like Taylor's Theorem from Calc 2, which means our approximation should be able to be extended in the following way...
$$\sqrt{x} \approx \sum_{k=0}^{n} \frac{f^{(k)}(a^2)}{k!}\left(x - a^2\right)^k \quad \text{where } f(x) = \sqrt{x}$$
To implement this we need a factorial function and a way to take an arbitrary number of derivatives of the square root function.
The factorial is trivial
int factorial(int n)
{
return n > 1 ? n * factorial(n - 1) : 1; // also handles 0! = 1
}
(I'm just using ints for now because I don't expect to plug in anything too high)
and for the $k$-th derivative of $\sqrt{x}$, I'll implement the power rule directly in the loop: each pass multiplies the running coefficient by the next factor $\left(\tfrac{1}{2} - k\right)$.
And the completed function looks like
double general_sqrt(double x, int n)
{
int cs = closestSquare(x);
double approx = cs;
double coefficient = 0.5;
for (int k = 1; k <= n; k++)
{
approx += power(x - cs * cs, k) * coefficient / (factorial(k) * power(cs, 2 * k - 1)); // divide by cs^(2k-1) rather than using a negative exponent
coefficient *= (1 - 2 * k) / 2.0;
}
return approx;
}
Here's a table showing the average error for different amounts of iterations:
| $n$ | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
I think I'll call that a success!
Wrapping Up
Although using Taylor's theorem is fairly accurate, I think it's actually too slow compared to modern methods. Modern algorithms, as far as I can tell, make a cheap initial approximation and then use Newton's method to improve it. Newton's method is another interesting bit of calculus, but I'll leave that for another time.