
# SICP 1.1.7 Solutions

July 18, 2012 4:00

### Ex. 1.6

The problem with Alyssa’s definition is that the clauses are now arguments of her procedure. With applicative order, they will both be evaluated regardless of the predicate’s truth value. That’s not what’s expected in most conditional programs.

In the particular case of the square root, the ‘false’ branch will always be evaluated, leading to an infinite recursion. This could possibly be fixed by rewriting the `if` statements into two procedural steps, but this defeats the entire purpose of the `new-if` redefinition.

This one’s easy enough to enter in and test. To make it even easier, I used our friend `(p)` from the previous set of exercises.

Here are three different programs that ostensibly do the same thing. `test1` and `test2`, using built-in forms, will work as expected to evaluate `(test` *n* `0)` properly. But `test3`, using Alyssa’s defined procedure, runs into the interpreter’s applicative-order argument processing, and will not terminate. Of interest is the note in the text about `if` being specially processed to evaluate only based on the predicate. It can be seen that `cond` appears to follow the same rules.
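The three test procedures might be sketched as follows (the names `test1`–`test3` come from the discussion above, but the exact bodies here are my assumption, not the original code):

```scheme
;; (p) never terminates when evaluated -- from the earlier exercises.
(define (p) (p))

;; if is a special form: the untaken branch is never evaluated.
(define (test1 a b)
  (if (= a 0) b (p)))

;; cond follows the same rule.
(define (test2 a b)
  (cond ((= a 0) b)
        (else (p))))

;; new-if is an ordinary procedure, so applicative order evaluates
;; BOTH clauses before the body runs.
(define (new-if predicate then-clause else-clause)
  (cond (predicate then-clause)
        (else else-clause)))

;; (test3 0 5) loops forever: (p) is evaluated as an argument.
(define (test3 a b)
  (new-if (= a 0) b (p)))
```

Evaluating `(test1 0 5)` and `(test2 0 5)` returns `5`; evaluating `(test3 0 5)` never returns.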

### Ex. 1.7

The `good-enough?` procedure is simple enough to write. Using `abs` means only having one comparison, which is good. It then just checks whether the change in the guess, taken as a fraction of the guess, is smaller than the ‘tolerance’.
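A minimal sketch of that fractional test, wired into the square-root iteration (the names `good-enough-new?` and `sqrt-new` follow the discussion below; the exact bodies are my assumption):

```scheme
(define tolerance 0.001)

;; Stop when the change between successive guesses is a small
;; fraction of the guess itself.
(define (good-enough-new? old new)
  (< (abs (/ (- new old) old)) tolerance))

;; Newton's method step: average the guess with x/guess.
(define (improve guess x)
  (/ (+ guess (/ x guess)) 2))

(define (sqrt-iter guess x)
  (let ((next (improve guess x)))
    (if (good-enough-new? guess next)
        next
        (sqrt-iter next x))))

(define (sqrt-new x) (sqrt-iter 1.0 x))
```

Because the comparison is relative, this terminates even for very small inputs like `1e-20`, where an absolute tolerance stops far too soon.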

For testing, it’s worth it to compare to the `sqrt` function that should be already defined. If not, compare to a calculator or a similar program. For several values the differing results (if any) can be seen. This is why the names `sqrt-old` and `sqrt-new` were used.

The ‘old’ method (absolute tolerance) yields values accurate to the tolerance for most reasonably-sized numbers. As the numbers get smaller, however, the answers are increasingly inaccurate: the process stops too soon. Eventually, the arguments will be so small that they fall below the tolerance themselves, and all the answers will be the same (and very, very inaccurate) for arguments below that value.

An even bigger problem is seen at the high end. Once the numbers are too large, the tiny value for the tolerance can never be properly compared to the precision of the large numbers. In this case, no answer will ever be found to be good enough, and the procedure does not even terminate.
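For reference, the absolute-tolerance version looks something like this (a sketch of the book’s approach; `sqrt-old` is the name used above, the bodies are my assumption):

```scheme
(define (square x) (* x x))

;; Absolute test: is guess^2 within 0.001 of x?
(define (good-enough-old? guess x)
  (< (abs (- (square guess) x)) 0.001))

(define (improve guess x)
  (/ (+ guess (/ x guess)) 2))

(define (sqrt-iter guess x)
  (if (good-enough-old? guess x)
      guess
      (sqrt-iter (improve guess x) x)))

(define (sqrt-old x) (sqrt-iter 1.0 x))

;; (sqrt-old 1e-8) stops around 0.03 instead of 1e-4: any guess whose
;; square is under the tolerance passes. (sqrt-old 1e14) never returns:
;; the spacing between adjacent doubles at that magnitude exceeds 0.001,
;; so the test can never succeed.
```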

The ‘improved’ (fractional tolerance) solution is not without its own flaws, however. For a given fixed fractional tolerance and precision, it too may end too soon — in this case, at very large values. Arguably this is less of a flaw in most cases; using the fractional tolerance at least does not have the problem of completely failing for some values, and the tolerance can be improved based on the floating-point precision. It does, however, fail to terminate for one special case (0), where the old version merely gives a wrong answer.

A further problem with the `sqrt-old` approach is the use of the initial guess. For values of `x` very close to 1.0, it will already find the guess to be ‘good enough’ and will not adjust the answer at all, so a range of values near 1.0 all return the same answer. Even for smaller values not near 1.0, the answer is inaccurate, but at least the results for two different inputs will still differ. This is actually something I didn’t realize `sqrt-new` could avoid until I went to post this, even though the fix was simple.

### Ex. 1.8

Following the pattern of the square root, I use `-iter` and `improve-` procedures to solve the cube root. The only pertinent difference is in `improve-cube`; even `good-enough-new?` (the fractional approach) can be used without change. This suggests the abstraction that the text alludes to for a future section.

Testing this one involves just trying some different values. Since we’re using all floating-point numbers, exact results can’t be expected. Values used are some basic numbers, a random floating-point value, and then 0 and 1. Note that `1` actually yields a more precise answer, and `0` (integer or rational) succeeds while `0.0` will fail; this may not hold for all interpreters. Negative values are valid for cube roots, so they must be checked as well, and finally we can see the limit of the floating-point precision causing the procedure to fail to terminate.