
# SICP 1.2.4 Solutions

September 28, 2012 11:30

### Ex. 1.16

The mathematical solution to this one is relatively easy, as it can be based on the existing `fast-expt` procedure. The iterative one simply accumulates the result in the `a` argument instead of building it back out through deferred recursive calls.

To test the results, we compare the answers we get from the original `fast-expt` to those from our iterative version. For most of the tests, the results are printed directly, to make sure that both procedures are producing sensible values. Even though `fast-expt` was already provided, this is necessary to make sure that it wasn't typed in wrong. At the end, I've added a simple equality test, since the number produced would be impossible to check by hand. We only have to verify that the result is true (in DrRacket this is signified with `#t`).

The equality test uses the expression `(= (fast-expt a b) (expo a b))`, which is something a little bit new. Previously we've only compared actual numbers or variables, but the structure of Lisp makes this readily understandable: it's going to evaluate `(fast-expt a b)` and `(expo a b)`, and then test whether the resulting answers are equal.
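For reference, a sketch of what the iterative version could look like (the name `expo` matches the test expression; the internal helper is my own naming):

```scheme
(define (square x) (* x x))

;; Iterative fast exponentiation: computes a^b in a logarithmic
;; number of steps. The invariant (result * a^b) is unchanged from
;; state to state, so when b reaches 0, result holds the answer.
(define (expo a b)
  (define (iter result a b)
    (cond ((= b 0) result)
          ((even? b) (iter result (square a) (/ b 2)))
          (else (iter (* result a) a (- b 1)))))
  (iter 1 a b))
```

For example, `(expo 2 10)` evaluates to 1024.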

### Ex. 1.17 & 1.18

I provided simple forms of `double` and `halve` to use for this exercise. At the time this book was originally written, the exercise may have had slightly more relevance to the computers then in use (though it still applies to many embedded processors). Halving would surely be done as a binary operation, shifting the bits to the right. This is only exact for even integers, whose last binary digit is 0, and in that case bit shifting is both simpler and faster. Using the division operation as I did here is slower, but for writing in Scheme, it makes what is going on a little bit clearer.

The mathematical solution is again fairly easy to understand. If `b` is even, it can be halved, and we double the result of the smaller problem. If `b` is odd, we add `a` once and subtract one from `b`. The iterative version (1.18) is derived similarly to the solution for 1.16.
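A sketch of both versions, using the simple `double` and `halve` described above (the procedure names `fast-mult` and `mult-iter` are my own choices here, not necessarily the ones in the solutions file):

```scheme
(define (double x) (* x 2))
(define (halve x) (/ x 2)) ; in Racket, (arithmetic-shift x -1) would halve via a right shift

;; Recursive version (Ex. 1.17): logarithmic number of steps in b.
(define (fast-mult a b)
  (cond ((= b 0) 0)
        ((even? b) (double (fast-mult a (halve b))))
        (else (+ a (fast-mult a (- b 1))))))

;; Iterative version (Ex. 1.18): the invariant (result + a*b)
;; stays constant, so when b reaches 0, result is the product.
(define (mult-iter a b)
  (define (iter result a b)
    (cond ((= b 0) result)
          ((even? b) (iter result (double a) (halve b)))
          (else (iter (+ result a) a (- b 1)))))
  (iter 0 a b))
```

Both `(fast-mult 3 7)` and `(mult-iter 3 7)` evaluate to 21.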

To test it, this time we save the 'accurate answer' as the value `big-result`. I'm using the built-in `*`, though the book-defined version could also be one to test against. That one is probably slower than the built-in version, which would only reinforce the reason for `big-result`: because it might take a little time to calculate, we compute it once and can compare it to the result of both procedures without having to recalculate it. Again, all we expect is a true value from each equality comparison, with the assumption that the built-in `*` is correct.

### Ex. 1.19

Viewing functions, and eventually computer programs, as transformations will be crucial to understanding some things much later in the book. It’s a good idea to make sure you have a full understanding of what’s going on in this exercise.

We have the transformation **T _{pq}** on `(a, b)`, expressed as

*a* ← *bq* + *aq* + *ap*

*b* ← *bp* + *aq*

Now we want to find the transformation **T _{p'q'}** which is equal to **T _{pq}**^{2}. To do this, we'll look at the results of applying **T _{pq}** twice and then find the equivalent expressions. In the following, the subscripts denote the number of times **T _{pq}** has been applied, i.e. *a _{1}* means **T _{pq}** is applied once for *a*.

*a _{1}* = *bq* + *aq* + *ap*

*b _{1}* = *bp* + *aq*

*a _{2}* = *b _{1}q* + *a _{1}q* + *a _{1}p*

*b _{2}* = *b _{1}p* + *a _{1}q*

Substitute to get these in terms of *a* and *b*:

*a _{2}* = (*bp* + *aq*)*q* + (*bq* + *aq* + *ap*)(*q* + *p*)

*b _{2}* = (*bp* + *aq*)*p* + (*bq* + *aq* + *ap*)*q*

The transformation **T _{p'q'}** is

*a* ← *bq*' + *aq*' + *ap*'

*b* ← *bp*' + *aq*'

And we want to use *a _{2}* and *b _{2}* to make them equivalent. Using just *b _{2}*, we can rearrange it to get

*b _{2}* = *b*(*p*^{2} + *q*^{2}) + *a*(2*pq* + *q*^{2})

*p*' and *q*' then equal the values in parentheses, since we are using *b _{2}* for *b*.

As a check, we can put these values into the expression for *a _{2}*:

*a _{2}* ≟ *b*(2*pq* + *q*^{2}) + *ap*^{2} + 2*aq*^{2} + 2*apq*

*a _{2}* ≟ *bpq* + *aq*^{2} + *bq*^{2} + *bpq* + *aq*^{2} + *apq* + *aqp* + *ap*^{2}

*a _{2}* = (*bp* + *aq*)*q* + (*bq* + *aq* + *ap*)(*q* + *p*)

∴

*p*' = *p*^{2} + *q*^{2}

*q*' = 2*pq* + *q*^{2}
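Filled in to the `fib`/`fib-iter` skeleton that the book provides for this exercise, the result looks like this (a sketch; the comments mark the two expressions the exercise asks you to supply):

```scheme
;; Fast Fibonacci (Ex. 1.19): a logarithmic number of steps,
;; using the squared transformation derived above.
(define (fib n)
  (fib-iter 1 0 0 1 n))

(define (fib-iter a b p q count)
  (cond ((= count 0) b)
        ((even? count)
         (fib-iter a
                   b
                   (+ (* p p) (* q q))   ; p' = p^2 + q^2
                   (+ (* 2 p q) (* q q)) ; q' = 2pq + q^2
                   (/ count 2)))
        (else (fib-iter (+ (* b q) (* a q) (* a p))
                        (+ (* b p) (* a q))
                        p
                        q
                        (- count 1)))))
```

As a spot check, `(fib 10)` evaluates to 55.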

Using those expressions in the procedure gives us our fast Fibonacci finder. To test it out, there are a few sample values given. These can be verified by any outside reference.

The final value is a quick check of the time taken. This line uses a Racket-specific function: `time` (further details at the Racket reference page), which gives us the amount of time that a procedure takes. It reports in a format like this:

`cpu time: 4334 real time: 5059 gc time: 27`

The times are in milliseconds. Usually only the CPU time is relevant. The `gc` time is for garbage collection, which is important to know because it can vary depending on what procedures were run previously. The procedure `void` is used to wrap the expression because `time` returns the value of the expression inside it. In this case we don't want to display that value, because it would be far too large to deal with. All `void` does is toss out the result inside it, which makes it handy when we only care about the time taken. Here, `time` shows that this procedure is indeed quite fast, as the number produced has on the order of a million digits. If you aren't using Racket, you can get rid of this check (or try to find an equivalent timing function and see how long it takes to compute this value).
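As a sketch of the pattern in Racket (using the built-in `expt` as a stand-in for whatever expensive call is being timed, since the exact line isn't shown here):

```scheme
;; `time` prints the cpu/real/gc report and returns the value of the
;; expression inside it; `void` then discards that value, so the huge
;; number is never printed at the REPL.
(void (time (expt 2 1000000)))
```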