Fixed-point calculations (and square roots) in Clojure

I’m currently reading “Structure and Interpretation of Computer Programs” (which, by the way, is a really joyful read), and I felt an urge to try the examples in a more “modern” language than Scheme.

Clojure seemed like a nice fit since it is a Lisp and also runs on the JVM, which makes the learning curve slightly less steep (for me at least).

Square root calculation using Newton’s method

One of the examples in the book covers Newton’s method for calculating the square root of a number. The idea is that given a guess for the square root, we can get a better one by dividing the number by that guess. That is, if y is a guess for the square root of a number x, we can find:

y_{improved} = \frac{x}{y}

Averaging this fraction with the guess is needed for Newton’s method to converge. The theory behind this is beyond me, but a simple calculation shows that the average agrees with the original fraction at the solution (when y really is the square root, y equals x/y), so it only affects the convergence behavior of the algorithm, not what it converges to. The method can then be applied iteratively by using each answer as input to the next iteration. So we’ll use this instead:

y_{improved} = \frac{y + \frac{x}{y}}{2}
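
To get a feel for how quickly this converges, here is a worked example of my own (not from the book): for x = 2 and the initial guess y = 1, the first iteration gives (1 + 2/1)/2 = 1.5, the second gives (1.5 + 2/1.5)/2 ≈ 1.4167 and the third ≈ 1.4142, already very close to the true value of about 1.41421.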

Enough mumbling about stuff I don’t know that much about; let’s write some code!

First, I define the square, average and abs helper functions.

(defn square [x]
  (* x x))
(defn average [x y]
  (/ (+ x y) 2))
(defn abs [x]
  (if (< x 0)
    (- x)
    x))
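
A couple of quick REPL checks of my own, just to see that the helpers behave as expected:

(square 3)
=> 9
(average 3 5)
=> 4
(abs -4)
=> 4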

We also need a stop criterion to tell when we’ve found a good enough guess. For that, we’ll use the good-enough? predicate function. It checks that the difference between the squared guess and the value we’re computing the square root of is within our tolerance level. If it is, we have found an acceptably good guess.

(defn good-enough? [guess x tolerance]
  (< (abs (- (square guess) x))
     tolerance))
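
For example (again, my own REPL checks), a rough guess for the square root of 2 is rejected, while a close one is accepted:

(good-enough? 1.5 2.0 0.0001)
=> false
(good-enough? 1.4142136 2.0 0.0001)
=> true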

Now we can use a helper function to iterate towards a good enough guess. The next-guess function is responsible for calculating our next guess, given the previous guess and the number x as arguments. sqrt-iter is a recursive function that stops when the stop criterion (good-enough?) is satisfied. The sqrt function initiates the calculation and provides the recursive sqrt-iter with an initial guess.

(defn next-guess [guess x] (average guess (/ x guess)))

(defn sqrt-iter [guess x tolerance]
  (if (good-enough? guess x tolerance)
    guess
    (sqrt-iter (next-guess guess x) x tolerance)))

(defn sqrt [x initial-guess tolerance]
  (sqrt-iter initial-guess x tolerance))

We can now approximate the square root of a number to a given tolerance. For example, the square root of 2 with the tolerance 0.0001 can be calculated with:

(sqrt 2.0 1.0 0.0001)
=> 1.4142156862745097

I’m using 1.0 as the initial guess, which I thought was reasonably good. This is all nice and dandy, but it doesn’t really show any huge advantage over a non-functional programming language such as Java. Implementing this solution in Java is left as an exercise.

Refactoring to fixed-point

The power of functional programming and Clojure comes into play when we can find a generic pattern shared by many problems and use functions as first-class citizens. In this case, it turns out that the Newton approximation is a special case of a fixed-point calculation: a fixed point of a function f is a value x for which f(x) = x. We can refactor the solution by breaking out the responsibility of finding a fixed point of a function into a separate function:

(defn fixed-point [func first-guess tolerance]
  (letfn [(good-enough? [guess next-guess tolerance]
            (< (abs (- guess next-guess)) tolerance))
          (eval-guess [guess]
            (let [next-guess (func guess)]
              (if (good-enough? guess next-guess tolerance)
                guess
                (eval-guess next-guess))))]
    (eval-guess first-guess)))

The fixed-point function takes a function as its first argument and evaluates it with the current guess. The result of this evaluation is used to determine whether the iteration has converged to within the tolerance level. If so, the current guess is returned; otherwise fixed-point continues with the new guess as input. This means that we must provide a function that converges to a fixed point under repeated application. As we’ve seen, Newton’s method gives us such a function for calculating the square root of a number (the y_{improved} shown earlier). The fixed-point function also has a local function good-enough? that checks whether the current guess is within our tolerance level (similar to good-enough? in the previous solution, except that it compares two successive guesses instead of the squared guess and x).

I’ve used some more of Clojure’s API here, such as letfn and let, which let me define local functions and bindings. It makes the code a bit more concise, and the local functions aren’t accessible from outside the fixed-point function. Check out the Clojure API for more details on this.
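
As a quick aside (my own example, not one from the book), nothing about fixed-point is specific to square roots. The classic fixed point of cosine, the x where cos(x) = x (roughly 0.739), can be approximated with the very same function:

;; Find an x such that (Math/cos x) is approximately x.
;; Converges to roughly 0.739 at this tolerance.
(fixed-point #(Math/cos %) 1.0 0.0001)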

We can now wrap up the refactoring with the following small function that uses the generic fixed-point to calculate the square root:

(defn sqrt [x tolerance]
  (fixed-point #(average % (/ x %)) 1.0 tolerance))

Here I’m using the #(...) literal, which is a shorthand notation for an anonymous function. The % within the function body represents its first argument. This means I’m actually passing an implementation of y_{improved} to the fixed-point function.
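
To make that concrete, the literal I’m passing above is equivalent to writing the anonymous function out in full, with x captured from the surrounding sqrt:

(fn [guess] (average guess (/ x guess)))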

To clean this up a little bit more, I’m breaking out the responsibility of average damping into a separate function:

(defn average-damp [func]
  #(average % (func %)))
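
A quick REPL check of my own shows what the returned function does; it averages its argument with the result of the original function:

((average-damp #(/ 2.0 %)) 1.0)
=> 1.5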

The average-damp function takes a function and returns a new one with the damping applied: it averages its argument with the result of the original function. This means that we can write our square root calculation function as:

(defn sqrt [x tolerance]
  (fixed-point (average-damp #(/ x %)) 1.0 tolerance))
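
A quick sanity check (assuming the definitions above) gives the same approximation as before:

(sqrt 2.0 0.0001)
=> 1.4142156862745097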

We also get another payoff: we can now reuse the fixed-point function for approximating fixed points of other functions as well. For example, the golden ratio is a value x that satisfies:

x^2 = x + 1

To calculate this using our fixed-point function, we need to reduce it to fixed-point form by dividing both sides by x:

x = 1 + \frac{1}{x}

We can pass this reduced form to our fixed-point function:

(defn golden [tolerance]
  (fixed-point (average-damp #(+ 1 (/ 1 %))) 1.0 tolerance))

To approximate the golden ratio, we can now call our golden function and pass a reasonable tolerance level (0.0001 in this case):

(golden 0.0001)
=> 1.6179380934832117

It turns out that this algorithm actually does not need the average damping to converge. Since we have separated finding the fixed point from average damping, we can try:

(defn golden [tolerance]
  (fixed-point #(+ 1 (/ 1 %)) 1.0 tolerance))

This also converges on the golden ratio, although the undamped iteration follows a different sequence of guesses, so the result is not identical to that of the damped version:

(golden 0.0001)
=> 1.6179775280898876

Hopefully this post shows some examples of how you can use higher-order functions to modularize responsibilities. The original examples (in Scheme) are available in the book, and credit for them should go to Abelson, Sussman and Sussman. The book is also released under a Creative Commons 3.0 Unported license.
