
# SICP 3.3.4 Solutions

December 1, 2019 10:36

### Ex. 3.28

By using the `logical-or` procedure and assuming a defined constant `or-gate-delay`, this one is nothing more than copying the `and-gate` procedure and replacing 'and' with 'or'. It doesn't need to be any more complicated than that.
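The `logical-or` helper itself is just a function of two binary signals. Here's a quick sketch in Python (the book's code is Scheme; the error check mirrors the style of the book's `logical-not`):

```python
def logical_or(s1, s2):
    # Reject anything that isn't a valid binary signal, in the
    # style of the book's logical-not error check.
    for s in (s1, s2):
        if s not in (0, 1):
            raise ValueError("Invalid signal: " + str(s))
    return 1 if (s1 == 1 or s2 == 1) else 0
```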

### Ex. 3.29

The solution to this exercise relies on what’s known in Boolean logic as DeMorgan’s Rules. These show how expressions using Boolean AND can be converted to Boolean OR, or vice versa. The rule we’ll use can be expressed this way (using ~ to mean inversion): ~(A AND B) = (~A) OR (~B). In other words, if you invert the output of an AND-gate, you get a value equivalent to an OR-gate, but as if the inputs had been inverted. The other property we’ll rely on is that double inversion is equivalent to the original expression. That this is true should be fairly clear; with only two possible values, if you invert twice, you end up going back to whatever the original value was.

We want our output to be the expression A OR B. Looking at the rule, if we just input A and B on their own and invert, we’d get an OR expression of their inverses. Therefore if we invert the inputs first, then put them into an AND-gate and invert, we’ll get what we want. In expression form, it looks like this:

~( ~A AND ~B) = (~~A) OR (~~B) = A OR B
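As a sanity check, the identity can be verified exhaustively in a few lines (Python here for brevity; the names are illustrative):

```python
def inv(s):
    # Inversion of a binary signal: ~0 = 1, ~1 = 0
    return 1 - s

def or_via_and(a, b):
    # ~(~A AND ~B): invert both inputs, AND them, invert the result
    return inv(inv(a) & inv(b))

# Exhaustive check against plain OR over all four input pairs
for a in (0, 1):
    for b in (0, 1):
        assert or_via_and(a, b) == (a | b)
```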

The procedure for this OR-gate creates the necessary gates, along with some internal wires to hook them up. It resembles the half-adder in that it doesn't actually set anything in the agenda; that will all be handled by the inner gates themselves. There's a potential problem with that, explored in a later exercise; if we do desire a specific initial value, we can set the signals directly in this procedure.

To determine the delay in this gate, we want to find the longest path from any input to the output. Both the signals here are the same, though: they must pass through an inverter at the input, then an AND-gate, and then one more inverter. That amounts to 2 * `inverter-delay` + `and-gate-delay`.

### Ex. 3.30

The total amount of code required here isn’t much, since we can take advantage of the fact that all the inputs are lists, and the inputs and outputs are connected in an identical fashion to each successive bit. By expanding or limiting the lists to include either the inputs or outputs desired, we can use Scheme’s list operations to produce all our components easily. The main thing to pay attention to is that the inputs, outputs, and our other needed wires have to be aligned to each other properly.

By observation of the diagram in the text, we see that internal wires need to be defined to connect the carry bits of the full-adder blocks. By using `map` with A as our input and just calling `make-wire`, we'll have the right number of internal carry-bit wires, no matter the number of inputs. The last carry out needs to be the external wire `cout`, so we have to add it on to form our list of carry outs. However, we also need to trim the carry-in of the LSB, since it's not an output of any internal unit. To easily lop off the last element, we reverse the list, take the cdr, and reverse it again. We also explicitly set the first carry-in to 0. Finally, with all our lists now the same length, a `for-each` will generate the right number of full-adder blocks.
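The list bookkeeping can be hard to visualize, so here is a sketch of the alignment in Python (the actual solution is Scheme; `Wire` and the names here are stand-ins, not the simulator's types):

```python
class Wire:
    """Stand-in for the simulator's wires; identity is all that matters here."""
    pass

def carry_wire_lists(a_wires, cout):
    # One fresh carry wire per input bit, like (map (lambda (w) (make-wire)) A)
    carry_ins = [Wire() for _ in a_wires]
    # The LSB's carry-in is never driven by a block, so drop the last
    # element (in Scheme: reverse, cdr, reverse) and put the external
    # cout at the front to form the carry-out list.
    carry_outs = [cout] + carry_ins[:-1]
    return carry_ins, carry_outs
```

Both lists come out the same length as the input list, so they can be walked together to wire up one full-adder block per bit.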

To compute the delay, I’ll work from the top down. First we get the delay in terms of the components in the ripple adder, then the delay in terms of the components’ parts, and eventually we’ll get down to the gates themselves. We want to consider the worst possible case, so in some cases we have to choose the maximum of two possible paths, and potentially not know the actual worst-case until we examine the lower-level blocks. In this analysis I’m only considering the possible paths a signal might follow. That ignores the details of how the signals might change and whether a particular gate will actually affect the delay in any real-world situation. This might mean our estimated delay is longer than anything that actually occurs, but at least it gives us a decent upper bound on the delay.

So let’s start at the top and find the longest path in the ripple adder. Since the internal carry signals ‘ripple’ from the output of each lower bit to the next higher bit, all the way to the last output, the final sum or carry output (S*n* or C) might depend in time on the very first full-adder. There are *n* – 2 full-adders that can propagate the carry between the first and last full-adder, so we’ll need to include the delay from each of those carry inputs to the carry outputs in those full adders. Additionally, we need to add on whatever the longest delay on the very first full-adder is from one of the A or B inputs, and the longest delay from the carry input to either the sum or carry out in the last full-adder.

So, at this stage, our delay in terms of full-adder (FA) delays is:

(*n* – 2) * (FA delay Cin→Cout) + max[FA delay A→Cout, FA delay B→Cout] + max[FA delay Cin→Cout, FA delay Cin→S]

Now we need to compute the delays inside the full-adders. From Cin to Cout, there are two half adders that the signal might pass through, plus an OR-gate beyond them. We can expect that this delay Cin to Cout is likely longer than from Cin to S, since it passes through that OR-gate; we’ll make that assumption and use it for the last term above.

The carry input only affects input B of the half-adders. That means the worst-case delay is the delay from B to S in the first half-adder, added to the delay of B to C in the second half-adder. For either input of the full adder to Cout, the longest delay is going to be that of input B, since it must pass through both half-adders. That will be the half-adder delay from A to S in the first one, then from B to C in the second one, plus an OR-gate delay. That means in every full-adder block, we're looking at the delay from the second half-adder's B input to its output C plus the OR-gate, and every block except the first will also have the delay from B to S in the first half-adder. In the first, it will be the delay from A to S. This allows us to simplify the expression of the delay for the ripple-carry-adder to this:

*n* * (`or-gate-delay` + HA delay B→C) + (*n* – 1) * HA delay B→S + HA delay A→S

Then we step down into the half-adders. We can see right away that since the inputs here are both hooked up to the same gates at the start, they will have exactly the same delay to the S output. That simplifies away the last odd term in the above expression, and all our full-adder blocks have the same delay! We're now down to this:

*n* * (`or-gate-delay` + HA delay B→C + HA delay (A or B)→S)

The delay to S is one AND-gate delay plus either one OR-gate delay or the sum of an AND-gate delay and an inverter delay, whichever is longer. The delay to the C output is just one AND-gate delay.

That gives us the final form of the delay as this:

*n* * (`or-gate-delay` + `and-gate-delay` + `and-gate-delay` + max[`and-gate-delay` + `inverter-delay`, `or-gate-delay`])

In the simulation in the text, the `or-gate-delay` is given as 5, which turns out to be equal to the sum of `and-gate-delay` and `inverter-delay` (3 + 2). So if we use those as our delays, the final value is (5 + 3 + 3 + 5), or 16 time units per bit.
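Since the final expression is just arithmetic, it's easy to check the numbers (a Python sketch; the function names are mine, and the default delay values are the ones from the text's simulation):

```python
def per_bit_delay(and_d, or_d, inv_d):
    # HA delay to S: one AND-gate, then the longer of (OR-gate)
    # or (AND-gate + inverter)
    ha_s = and_d + max(or_d, and_d + inv_d)
    # HA delay B->C is a single AND-gate
    ha_c = and_d
    # Per bit: or-gate + HA B->C + HA (A or B)->S
    return or_d + ha_c + ha_s

def ripple_carry_delay(n, and_d=3, or_d=5, inv_d=2):
    # Total worst-case delay scales linearly with the number of bits
    return n * per_bit_delay(and_d, or_d, inv_d)
```

With the text's delays this gives 16 per bit, matching the hand calculation above.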

### Ex. 3.31

The problem here is twofold: signals on wires are always assumed to have an initial value of 0, and a signal won’t update unless it is connected to another component whose value changes. Right away we see the issue: what if we expect a signal to be set to some initial value based on its logical connection? Unless it’s connected to an input that sees a change (or we allow initial values to be forced), it won’t ever be set properly. Even if it is connected to an input, it may also depend on the value in a different input than the one we’re testing, one which might not change in a given test. That is the case with this half-adder.

Referring to diagram 3.25 in the text, the wire labeled E is the one that leads to problems. The way it should work is that output S will be 1 when D (the OR-gate output) is 1 and the carry C is not 1. The initial state for C is 0, assuming A and B are both 0. Since E is connected to an inverter, its initial state is expected to be 1. That would allow any input A or B to send S to 1 while C is 0. If E can’t change until its input changes, however, then it cannot be set until C changes. That will only happen if both A and B go to 1 from their initial value of 0.

How does executing the procedure when adding it to the agenda help us? We can comment out the line in `accept-action-procedure` and see what happens when it isn't there. When only `input-1` changes, the sum output does not change and the probe reports nothing. It is only after both inputs change that the internal inverter signal gets updated and the output changes. From then on, the component would operate normally. By executing the action procedure once, it is then guaranteed to set its output state to some initial value. Note that this will not necessarily be the 'desired' initial value (since the starting inputs are assumed 0), and it will also not update until after the component's delay. Any necessary changes will be propagated once the simulation starts, however, so if there is a stable initial state it will be reached after some delay. In fact, simulating the initialization process might well be what we're trying to do. That makes this closer to real-life behavior than a signal not updating at all at the start.

### Ex. 3.32

While a wire may be given multiple signals to change at a given time step, the procedures to do so can only be processed by the agenda one at a time. That means that ‘during’ a time step, there may be inaccurate states that are calculated before all the changes are complete. Only the last calculated state includes all changes and can be trusted as correct. However, when any signal is processed, it may add new items to the agenda for future time steps, since there is no mechanism for knowing whether that change is the final one in the time step or not. That means there could be spurious signal changes, and only the very last action added to the agenda for a wire will induce the correct state.

If the queue is processed in FIFO (First In, First Out) order rather than LIFO (Last In, First Out) order, then the order of processing the signals will be preserved. The last action processed will be the last one to be applied in future time steps, since first in, first out also means that the last in is the last out. If the other order (LIFO) is used, the last one out will be the first one that went in, which is the least likely to be accurate for that time step.

Let’s run through the example of an AND-gate. Let the inputs be (A,B) and the output G. If the inputs are (0,1) and then change to (1,0), there must be two commands issued to change the signals. First A is set to 1. That changes wire A’s value and triggers the AND-gate action. It will see its inputs as (1,1) since the change to B hasn’t been processed yet. It will therefore add an action that sets the output G of the gate to 1 (after a delay). Let’s call this Action 1 [G→1]. Next the agenda item that sets B to 0 is processed. That again triggers the AND-gate actions. This time it sees its inputs as (1,0) and the output will be set to 0. Call that Action 2 [G→0]. When the delay has completed, the agenda will process Action 1 first. The value of G will be set to 1. It will process Action 2 after that (but for the same time step), and the output of G will be set to the (correct) value of 0. If it had been LIFO, then Action 2 would be processed first; Action 1 would be the last one processed, and the time step would finish with the erroneous value of 1 for G.
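A tiny model makes the ordering concrete. This Python sketch (the agenda representation and names are illustrative, not the book's) schedules the two output assignments and then drains them in either order:

```python
from collections import deque

def and_gate_run(order):
    # Inputs start at (0, 1) and change to (1, 0), one agenda item at a time.
    a, b = 0, 1
    pending = deque()
    a = 1
    pending.append(a & b)  # Action 1 [G->1]: gate samples (1, 1)
    b = 0
    pending.append(a & b)  # Action 2 [G->0]: gate samples (1, 0)
    g = None
    while pending:
        # FIFO pops from the front, LIFO from the back; the last value
        # processed is the one the wire is left with for this time step.
        g = pending.popleft() if order == "fifo" else pending.pop()
    return g
```

FIFO leaves G at the correct value 0, while LIFO finishes the time step with the stale value 1.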

It might be thought that, instead of accepting the final reported value as accurate, we could still use the last in, first out processing order but accept the first value that is output at a time step as the correct one. It works for this simple gate example, but this then disrupts how items are added to the agenda, and causes later problems. Consider the case where the output of our AND-gate gets connected to an inverter (call it I). In a LIFO agenda, Action 2 gets processed first, triggering an action for the inverter (Action 3 [I→1]). Then Action 1 is processed, and Action 4 [I→0] would be added to the queue. Once the inverter delay has passed, Action 4 would now be the first one processed; but it’s not the correct value, so we can’t accept the first one out this time. Only by using the FIFO sequence can we be sure that the action that includes all changes for a given time remains the final one in the queue.

In the solutions file, there are a few examples using OR-gates and AND-gates that show what happens when the inputs change at the same time. This isn’t altering the FIFO behavior of the queue; it’s just there to illustrate how spurious values can be generated, and that they are not always easily predictable.