You load sixteen tons, what do you get?


Contents: Ternary Sequences | Hamiltonian Cycles | ElGamal | Traveling Salesman | Earth Falling Into The Sun | Logic Gates | Lambda Functions

All code appearing on this page is mine, and distributed under the GNU GPL.

I recently read an issue of American Scientist with a cool article about ternary numbers. Here's a description of something in it that interested me, along with some Scheme code for playing around with sequences of ternary numbers.

Let's look at sequences of digits in certain bases. For example, in base 10 one could be 0 1 2 3 4 5 6 7 8 9. (Which I will write as 0123456789 to save space - we won't work with base 11 or higher. :->) We'll call a "square" sequence one that has at least one pair of adjacent identical subsequences. Meaning that it looks like:

[beginning of sequence].....[some subsequence of any length][same subsequence]....[end of sequence].

In base 10, here is a square sequence:

012349879874321

Because:

01234[987][987]4321

We'll call a squarefree sequence one which is not square. Here is a squarefree sequence in base 10:

0123498701234

There are indeed two subsequences [01234] there, and they are identical subsequences - but they are not adjacent.

Now, let's look at base 2 squarefree sequences. How long can they be?

Length one: The sequences 0 and 1 are both squarefree (trivial).

Length two: The sequences 01 and 10 are squarefree; 00 and 11 are not.

Length three: 010 and 101 are squarefree, but 000, 001, 011, 100, 110, and 111 are not squarefree (as they contain either [0][0] or [1][1]).

Length four: Let's look at all of them:

0000 through 0011 - start with [0][0]

0100 - ends in [0][0]

0101 - is [01][01]

0110 - has [1][1] in the middle

0111 - has [1][1]

1000 - has [0][0]

1001 - has [0][0] in the middle

1010 - is [10][10]

1011 - ends in [1][1]

1100 through 1111 - start with [1][1]

So there are NO squarefree sequences of length four in base 2! (There aren't any of length 5 or higher, either; any sequence of length 5 must contain as a subsequence one of those sixteen sequences of length 4, and none of those are squarefree, so the whole length 5 sequence can't be squarefree.)
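If you don't trust the case analysis, it's easy to enumerate. A short Python sketch (purely illustrative; the function name is mine) counting the squarefree binary sequences of each length:

```python
from itertools import product

def is_squarefree(seq):
    """True if seq contains no pair of adjacent identical subsequences."""
    n = len(seq)
    for start in range(n):
        for length in range(1, (n - start) // 2 + 1):
            if seq[start:start + length] == seq[start + length:start + 2 * length]:
                return False
    return True

# Count the squarefree binary sequences of each length from 1 to 5.
counts = [sum(is_squarefree(s) for s in product((0, 1), repeat=n))
          for n in range(1, 6)]
print(counts)  # [2, 2, 2, 0, 0]
```

The counts drop to zero at length four and stay there, matching the argument above.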

What about base 3? You might go trying to build a sequence, digit by digit, but then you could get stuck:

0, 01, 010, 0102, 01020, 010201, and 0102010 are all okay. But can we add another digit?

01020100 - ends in [0][0]. Bleah.

01020101 - ends in [01][01]. Bleah.

01020102 - is [0102][0102]. Uh oh.

That is sort of depressing.

However, it turns out that you don't necessarily HAVE to get stuck! The Norwegian mathematician Axel Thue proved nearly 100 years ago that there do exist arbitrarily long ternary squarefree sequences, and gave a method for constructing one. It's simple:

You start with the sequence 0.

To generate the next sequence, perform the following replacements simultaneously:

0 is replaced by 1 2

1 is replaced by 1 0 2

2 is replaced by 0

(So if you were performing this on the sequence 1 0 2 0, you'd get 102 12 0 12. See? Simple.)

Thue proved that you can perform this procedure as many times over as you like, starting with the sequence 0, and you will always generate a (longer) ternary sequence which is squarefree.

We'll leave the proof of that to mathematics majors, or - better yet - someone who knows how to use a library! We can just implement this in Scheme and see what happens, because having a computer generate these Thue sequences is way easier and faster for us.

First, we need a way to find what subsequence of numbers have to be substituted in, given either 0, 1, or 2. This is an easy function:

(define (thue-lists n)
  (cond ((= n 0) (list 1 2))
        ((= n 1) (list 1 0 2))
        ((= n 2) (list 0))))

Examples:

(thue-lists 0) => (1 2)
(thue-lists 1) => (1 0 2)
(thue-lists 2) => (0)

Now, we want some way to take a list - any list of ternary numbers - and perform the Thue replacement on it.

(define (thue-replace lst)
  (apply append (map thue-lists lst)))

This replaces every element of the list with its replacement subsequence, then appends the pieces together into one flat list.

And finally, we don't want to have to type

(thue-replace (thue-replace (thue-replace (list 0))))

to get the third list in this sequence, so let's make a quick wrapper function:

(define (thue-iterate n)
  (if (= n 0)
      (list 0)
      (thue-replace (thue-iterate (- n 1)))))

Now, (thue-iterate 3) => (1 0 2 1 2 0 1 2).

You can check by hand that that is squarefree, or - better yet - we can have a computer do it!

I was investigating how (length (thue-iterate n)) grows as n grows. To my astonishment it proceeds 1, 2, 4, 8, 16, 32, 64.... Wowsa!

Ever wary of the Strong Law of Small Numbers (namely, that patterns don't always go as the first cases might indicate), I decided to see if this was, in fact, true.

(For an example of the Strong Law, draw a circle. Arrange dots on the circumference such that when straight lines between every pair of dots are drawn, the circle is divided into the most number of distinct regions with nonzero area. Two dots produce 2 regions. Three dots produce 4 regions. Four dots produce 8 regions. Five dots produce 16 regions. But six dots produce only 31 regions, not 32 - the pattern DOES NOT HOLD!)
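The maximum region count in that circle puzzle has a known closed form, C(n, 4) + C(n, 2) + 1, which makes the breakdown easy to check. A quick Python sketch (function name mine):

```python
from math import comb

def circle_regions(n):
    """Maximum regions formed by all chords between n dots on a circle."""
    return comb(n, 4) + comb(n, 2) + 1

print([circle_regions(n) for n in range(1, 7)])  # [1, 2, 4, 8, 16, 31]
```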

The reason this surprised me so much is that another "replacement sequence" which I knew of before grows in a much more nasty fashion:

In which you start with 13. Say it out loud: one one, then one three. Giving 1113. Which is: three ones, one three. Giving 3113. Which is: one three, two ones, one three. Giving 132113. And so forth. It turns out that the manner in which its length grows isn't very nice to compute.
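This "say it out loud" rule is just run-length reading, and is easy to generate mechanically. A Python sketch (helper name mine), starting from 13 as above:

```python
from itertools import groupby

def say(digits):
    """Read runs of digits aloud: '13' -> 'one one, one three' -> '1113'."""
    return ''.join(str(len(list(run))) + d for d, run in groupby(digits))

terms = ['13']
for _ in range(3):
    terms.append(say(terms[-1]))
print(terms)  # ['13', '1113', '3113', '132113']
```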

Anyways, back to the "Thue sequences": The proof that its length grows as powers of two is rather nice. Let's do some induction. (Heh).

Now, say that our expression has A number of 0s, B number of 1s, and C number of 2s. The "0 replacement rule" will create A 1s and A 2s in the next expression. The "1 replacement rule" will create B 1s, B 0s, and B 2s in the next expression. And the "2 replacement rule" will create C 0s in the next expression. So, the next expression generated will have: B + C 0s, A + B 1s, and A + B 2s.

Say our expression starts off with N 0s, N-1 1s, and N-1 2s. That sums to 3N-2, the length of the sequence. As noted above, the next sequence will have 2N-2 0s, 2N-1 1s, and 2N-1 2s. That sums to 6N-4, so the length has doubled.

Now, a sequence with 2N-2 0s, 2N-1 1s, and 2N-1 2s can be seen as having M 0s, M+1 1s, and M+1 2s (with M = 2N-2). That sums to 3M+2, the length of this sequence. The next sequence after that will have 2M+2 0s, 2M+1 1s, and 2M+1 2s. That sums to 6M+4, so the length of the sequence has again doubled! Note that a sequence with 2M+2 0s, 2M+1 1s, and 2M+1 2s can be seen as having N 0s, N-1 1s, and N-1 2s (with N = 2M+2), completing the loop. As long as our sequence starts off in either of these forms, its length will double continuously.

For the kicker:

(thue-iterate 0) is (list 0). That has one 0, zero 1s, and zero 2s, fitting the "N 0s, N-1 1s, N-1 2s" form with N = 1. The length of this sequence is 1, and 2^0 = 1. Therefore, (length (thue-iterate n)) is 2^n.
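The bookkeeping in that argument is easy to verify mechanically. A Python sketch (illustrative; it mirrors the Scheme thue-replace above) checking both count forms and the doubling across the first eleven iterates:

```python
RULES = {0: (1, 2), 1: (1, 0, 2), 2: (0,)}

def thue_replace(seq):
    """Apply the substitution 0 -> 1 2, 1 -> 1 0 2, 2 -> 0 across a sequence."""
    return [d for x in seq for d in RULES[x]]

seq = [0]
for i in range(11):
    zeros, ones, twos = seq.count(0), seq.count(1), seq.count(2)
    assert len(seq) == 2 ** i                              # length is a power of two
    assert ones == twos and zeros in (ones - 1, ones + 1)  # one of the two forms
    seq = thue_replace(seq)
print("verified lengths 1 through 1024")
```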

;; A function which drops the first n elements of a list is already builtin.
;; It is called list-tail:
;; (list-tail (list 1 2 3 4 5 6 7 8 9 10) 6) => (7 8 9 10)

;; A function to return the first n elements of a list.
(define (get-first-n-elements lst n)
  (reverse (list-tail (reverse lst) (- (length lst) n))))

;; Are the first n elements of lst equal to the immediately following n elements of lst?
;; If so, then lst is non-squarefree.
;; If not, see if there are any adjacent identical subsequences of length n+1
;; in which the first element of lst is used.
;; Eventually return true if the first element of lst is definitely not included in any
;; adjacent identical subsequences in the list (we will drop that first element in another fxn).
(define (is-squarefree?-iter lst n)
  (cond ((> (* n 2) (length lst)) #t)
        ((equal? (get-first-n-elements lst n)
                 (list-tail (get-first-n-elements lst (* n 2)) n))
         #f)
        (else (is-squarefree?-iter lst (+ n 1)))))

;; A wrapper and initializer.
;; Sees if there are any adjacent identical subsequences in lst involving the first element of lst.
;; If not, then discard the first element and recurse.
;; Eventually tests all adjacent subsequences of equal length in lst.
;; Doesn't grow very nicely.
(define (is-squarefree? lst)
  (cond ((null? lst) #t)
        ((not (is-squarefree?-iter lst 1)) #f)
        (else (is-squarefree? (cdr lst)))))

There was a CS/Ma 6a set with a nasty problem in it. I solved it by using an interesting theorem I had learned of online.

I'll explain my solution here. If you are not versed in elementary graph theory it will be somewhat confusing, but I'll try to define (most) terms I use here.

Basic graph theory: Essentially, a graph G is a set of "vertices" V (a vertex is just some object; often it's convenient to number and re-number them) and a set of "edges" E. Any edge e in E is a set of precisely two vertices in V. (Yeah, yeah, graphs can be infinite, or have one vertex, or be disconnected - for the purposes of this problem we will not consider such things.) Two vertices are adjacent if there is an edge connecting them. A walk is a sequence of vertices such that any two successive vertices in the walk are adjacent; a path is a walk that does not repeat vertices. (A connected graph, such as we are studying here, is such that there is a path from any one vertex to any other vertex.) A Hamiltonian cycle is a walk that includes all vertices of the graph, and whose vertices are all distinct except that the first vertex in the walk is the same as the last vertex in the walk. The degree of a vertex v, deg(v), is the number of edges which contain the vertex (also can be viewed as the number of vertices adjacent to v). |V| is of course the number of vertices in a graph.

Let G = (V, E) be a graph with at least 3 vertices such that:

deg(v) >= (1/2) |V| for all v in V.

Prove that G has a Hamiltonian cycle.

This is a very nasty problem because usually you cannot prove that a graph has a Hamiltonian cycle! I used the lemma of Bondy and Chvatal, which I will cover here and provide the additional work needed to solve this problem.

Lemma: Let u and v be two nonadjacent vertices of a graph H = (W, F) such that deg(u) + deg(v) >= |W|. Then H is Hamiltonian (i.e. has a Hamiltonian cycle) iff H' = (W, F + {u, v}) is Hamiltonian. (H' is H plus the edge uv, in other words.)

Before proving the lemma, let's see how it solves the problem. Pick any two vertices y, z in G which are nonadjacent. By the conditions of the problem, deg(y) + deg(z) >= |V|. By the lemma, G is Hamiltonian iff G' = (V, E + {y, z}) is Hamiltonian. (Adding edges never decreases any degree, so the degree condition continues to hold.) If G' has two nonadjacent vertices y', z', then G' is Hamiltonian iff G'' = (V, E + {y', z'}) is Hamiltonian. We can keep going, joining nonadjacent vertices, until none remain. Eventually we see that G is Hamiltonian iff the complete graph K_{|V|} is Hamiltonian - and it obviously is, since visiting its vertices in any order gives a Hamiltonian cycle. So all that remains is to prove the lemma.

Proof of lemma:

If H is Hamiltonian, then H' obviously is. So we want to show that if H' is Hamiltonian, H must be Hamiltonian.

We are now given that H' is Hamiltonian. Assume, for contradiction, that H is not Hamiltonian; we will construct a Hamiltonian cycle in H anyway, which is the desired contradiction.

H does have a Hamiltonian path (a path that visits every vertex once, but does not return to its original vertex as it is a path)

P = w_{0} w_{1} w_{2} ... w_{n}

where w_{0} = u, w_{n} = v, and n = |W| - 1 (the path visits all |W| vertices). This Hamiltonian path exists because H' has a Hamiltonian cycle, and that cycle must use the edge {u, v} - otherwise H itself would be Hamiltonian. Deleting the edge {u, v} from that cycle leaves a Hamiltonian path in H from u to v.

Let S be the set of vertices adjacent to u = w_{0}; they all lie on P, as P visits every vertex in H. Let X = {x_{0}, x_{1}, ..., x_{k}} be the set of vertices adjacent to v = w_{n}. Let T = {t_{0}, t_{1}, ..., t_{k}} be the set of their successors on P: for each i from 0 to k, if x_{i} = w_{j}, then t_{i} = w_{j+1}. (T is the set of vertices on P which immediately follow neighbors of v on P; every neighbor of v has a successor on P, since the only vertex without one is w_{n} = v itself.)

|S| + |T| is at least |W|, as |S| = deg(u), |T| = |X| = deg(v), and by our conditions deg(u) + deg(v) >= |W|.

We see that u is not a member of S (u is not adjacent to itself). Furthermore, u = w_{0} is not in T, as it follows NO vertex on P. Therefore u is a member of neither S nor T.

Hence, S and T are both subsets of W \ {u}, a set with only |W| - 1 elements. Since |S| + |T| >= |W|, S and T cannot be disjoint: some vertex must lie in both S and T. Call it w_{q+1}.

We need to know four things:

- As just shown, w_{q+1} is not u. Therefore it does have a vertex preceding it in P, which we will call w_{q}.
- w_{q+1} is not v, as it is in S and hence adjacent to u. By the conditions of the lemma, u and v are nonadjacent.
- w_{q} is not v, as it has a vertex following it in P.
- As w_{q+1} is in T, w_{q} is adjacent to v = w_{n}. Hence w_{q} is not u, as by the conditions of the lemma, u and v are nonadjacent.

We also need to know four more things:

- There is a path P_{A} = w_{0} ... w_{q}. (Obvious.)
- There is a path P_{B} = w_{q+1} ... w_{n}. (Obvious.) We can go in the reverse order from w_{n} to w_{q+1} - call that path P_{B'}.
- There is an edge C from w_{q+1} to w_{0}, as w_{q+1} is in S. Edge C is not in P (and hence not in P_{A} or P_{B'}) as w_{q} is not u.
- There is an edge D from w_{q} to w_{n}: as w_{q+1} is in T, w_{q} is adjacent to w_{n}. Edge D is not in P (and hence not in P_{A} or P_{B'}) as w_{q+1} is not v.

Construct a Hamiltonian cycle: Go from w_{0} to w_{q} along P_{A}. Then go to w_{n} on edge D. Go from w_{n} to w_{q+1} along P_{B'}. Return to w_{0} along edge C. We have visited every vertex in H once, using a walk whose vertices are all distinct except for the first and the last. H is Hamiltonian. Hence the lemma.

Here is the definition of the ElGamal signature scheme:

Let p be a prime such that the discrete log problem in Z_{p}^{*} is intractable.

Let a be a primitive root modulo p. Let b ≡ a^{s} (mod p) for some s.

Keep s secret. Make a, b, and p public.

To produce a signature for a message x, choose a random k relatively prime to p − 1.

The signature is then (y, z) where:

y ≡ a^{k} (mod p)

z ≡ (x − sy) k^{−1} (mod p − 1)

To verify a signature (y, z) for a message x, check that:

a^{x} ≡ b^{y} y^{z} (mod p)

How does the signature verification work? Write:

b ≡ a^{s} (mod p)

y ≡ a^{k} (mod p)

z ≡ (x − sy) k^{−1} (mod p − 1)

What does k^{−1} mean? It is the inverse of k modulo p − 1, which exists precisely because k was chosen relatively prime to p − 1:

k k^{−1} ≡ 1 (mod p − 1)

So write:

k k^{−1} = 1 + m(p − 1) for some integer m.

Similarly, z is only determined modulo p − 1, so:

z = (x − sy) k^{−1} + n(p − 1) for some integer n.

Now write b^{y} y^{z}:

b^{y} y^{z} = (a^{s})^{y} (a^{k})^{z} = a^{sy} a^{kz}

Rewrite, substituting for z:

a^{sy} a^{kz} = a^{sy} a^{k(x − sy)k^{−1} + kn(p − 1)}

Rearrange:

a^{sy} a^{kz} = a^{sy} a^{kk^{−1}(x − sy)} (a^{p − 1})^{kn}

Now, we will work with this expression modulo p. By Fermat's Little Theorem, a^{p − 1} ≡ 1 (mod p) (a is a primitive root, so p does not divide a). The last term can therefore be eliminated; the above expression is congruent, modulo p, to:

a^{sy} a^{kk^{−1}(x − sy)}

We know what k k^{−1} is, namely 1 + m(p − 1). Substitute:

a^{sy} a^{(1 + m(p − 1))(x − sy)}

Multiply out the exponent:

a^{sy} a^{(x − sy) + m(p − 1)(x − sy)}

Factor:

a^{sy} a^{x − sy} (a^{p − 1})^{m(x − sy)}

By Fermat's Little Theorem again, the last term is congruent to 1 modulo p, leaving:

a^{sy} a^{x − sy} = a^{sy + x − sy} = a^{x}

Hence b^{y} y^{z} ≡ a^{x} (mod p), which is exactly the condition the verifier checks.

QED.
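Here is the whole scheme as a toy Python sketch, with deliberately tiny and utterly insecure parameters (p = 23, primitive root a = 5) picked purely for illustration:

```python
# Toy ElGamal signatures - tiny, insecure parameters for illustration only.
p = 23            # a small prime
a = 5             # a primitive root modulo 23
s = 7             # the secret exponent
b = pow(a, s, p)  # public

def sign(x, k):
    """k must be relatively prime to p - 1."""
    y = pow(a, k, p)
    z = (x - s * y) * pow(k, -1, p - 1) % (p - 1)
    return y, z

def verify(x, y, z):
    return pow(a, x, p) == pow(b, y, p) * pow(y, z, p) % p

y, z = sign(10, 3)
print(verify(10, y, z), verify(11, y, z))  # True False
```

Note that `pow(k, -1, p - 1)` (Python 3.8+) computes the modular inverse discussed above.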

So, there are many variants of the Traveling Salesman Problem (TSP). The story goes that there are a fixed number of cities which a traveling salesman must visit; he must visit all of them exactly once and return to where he started. He must also do all of this while traveling the shortest total distance possible.

As you probably know, the TSP is NP-Complete, which is a way of saying it's an incredibly hard problem to completely solve. Given N cities in the TSP, there seem to be N! cycles - first you choose one of the N cities to start from, then you visit one of the (N-1) remaining cities, etc. It's not quite THAT bad - you could say that your cycle "starts" on any city you choose, but it doesn't really matter. Also, you could travel a cycle in the forward or reverse direction, so really there are (N-1)!/2 cycles to consider. (If you want to represent a cycle as a list of numbers corresponding to which city you visit - first you visit city #0, then you visit city #5, then you visit city #3, etc, this is equivalent to saying that you can rotate or reverse the list without affecting the meaning.) Since we don't yet know a general way to find the best (i.e. "optimal") TSP cycle without doing a huge number of computations (not necessarily searching all (N-1)!/2 cycles, but it's almost as bad), the TSP has "exponential time complexity". "NP-Complete" has a specific meaning which I won't cover here.

Perhaps we only want to find a "decent" cycle given a bunch of cities on the plane (a TSP instance). Also, we may want to use a deterministic method; we want to be guaranteed to find a decent cycle after a given number of computations, and not depend on a pseudorandom number generator. (This is not part of the TSP itself, I'm just saying we might want to do this.) What are some methods we might want to consider?

The first method is the "nearest neighbor" (NN) method. This is rather obvious; start from a city in the plane. Then visit the closest city. Repeat until you've visited all the cities, and then return to where you started. Computationally, we pick a starting city, and then we iterate through all the cities we haven't visited yet to look for the closest city. Since we will have to look for a new city N times (actually, N - 2; we have no choice once all but one city has been visited, and we have no choice in returning to our original city - from now on, I will use the language of time complexity and ignore such factors) and since we have to search through N cities each time, this is a quadratic-time algorithm. That's great, given that the TSP itself has exponential complexity! NN might, on first glance, appear to give the optimal path, but this is not so (obviously); NN is very good about keeping most edges short, but generally it will miss a few cities in its pursuit of short edges, and at the end must "clean up" by visiting the missed cities, which are scattered throughout the plane. NN still isn't bad, though, as a cheap way to get a starting cycle.
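Here's a minimal Python sketch of NN (illustrative; the coordinates are made up):

```python
from math import dist

def nearest_neighbor(cities, start=0):
    """Greedy NN tour: repeatedly hop to the closest unvisited city."""
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        here = cities[tour[-1]]
        closest = min(unvisited, key=lambda c: dist(here, cities[c]))
        tour.append(closest)
        unvisited.remove(closest)
    return tour

def tour_length(cities, tour):
    return sum(dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

cities = [(0, 0), (1, 0), (4, 1), (2, 3), (0, 2)]  # made-up coordinates
tour = nearest_neighbor(cities)
print(tour, round(tour_length(cities, tour), 2))
```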

NN can be enhanced in one obvious way. The choice of starting city for a TSP cycle does not matter, but the choice of starting city for the NN algorithm

Now, suppose that we are given a cycle and want to improve it. One method is "annealing", but as that involves random computations I don't want to think about it here (read: I don't know how to do it). What's a deterministic way to improve a given cycle?

Think of a cycle not as a list of cities to visit but as a cycle in a graph. It may be that we could shorten the length of the cycle by "swapping" two edges: delete edges {a, b} and {c, d} from the cycle, reconnect with edges {a, c} and {b, d}, and reverse the direction of travel between b and c. (Picture a cycle that crosses itself: deleting the two crossing edges and reconnecting them uncrossed preserves the cycle and, by the triangle inequality, shortens it.)

You can probably convince yourself that given two edges which do not share vertices, there is only one way to delete them and create two new edges so that the cycle is preserved. Imagine a cycle such that no matter what pair of edges we select we cannot improve the length in this manner. Since we can't make the cycle better by swapping any two edges, we'll call such a cycle "two-optimized". Obviously the optimal cycle is two-optimized, but there are plenty of non-optimal cycles which are two-optimized as well.

There are a finite number of edges in a cycle. There are also a finite number of pairs of edges in a cycle. So, what we should do is look through all pairs of edges, and find which pair (if any) can be swapped so as to reduce the total length by the greatest amount. Then we can swap those edges and try again. When we can't swap any more edges (this has to happen eventually: each swap strictly reduces the total length, and there are only finitely many cycles, so we can't keep improving forever), our cycle is two-optimized.
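A hedged Python sketch of this best-improvement two-optimization (mine, for illustration). It relies on the fact that swapping edges {a, b} and {c, d} for {a, c} and {b, d} is the same as reversing the stretch of the tour between b and c:

```python
from math import dist

def two_optimize(cities, tour):
    """Best-improvement 2-opt: keep applying the most profitable edge swap."""
    tour = list(tour)
    n = len(tour)
    while True:
        best_delta, best = 1e-12, None  # require a strictly positive improvement
        for i in range(n - 1):
            # j starts at i + 2 (and avoids the wraparound edge when i == 0)
            # so the two chosen edges never share a vertex
            for j in range(i + 2, n if i > 0 else n - 1):
                a, b = cities[tour[i]], cities[tour[i + 1]]
                c, d = cities[tour[j]], cities[tour[(j + 1) % n]]
                delta = (dist(a, b) + dist(c, d)) - (dist(a, c) + dist(b, d))
                if delta > best_delta:
                    best_delta, best = delta, (i, j)
        if best is None:
            return tour  # two-optimized: no swap helps
        i, j = best
        tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])  # the swap is a segment reversal

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(two_optimize(square, [0, 2, 1, 3]))  # [0, 1, 2, 3] - the crossing is removed
```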

There is one way that two-optimization may be enhanced, but I have not tried it. It is not necessarily so that picking the pair of edges whose swapping reduces the length by the greatest amount is best. Perhaps we should recursively try all possible edge-swaps which lead to an improvement, and select the best two-optimized path which is eventually created. This would lead to a large increase in the computational power needed, but it is still a deterministic algorithm and not horrendously bad. (Since we still retain the constraint that every edge-swap must reduce the total length, things are kept neat. If we recursively tried every edge-swap, whether it helped or not, we'd be back to something like exhaustive search.)

We can also iterate through all triples of edges and see which edge rearrangement leads to the greatest reduction in cycle length, perform that edge rearrangement, and then repeat until it can't be done any more. This process is obviously three-optimization (and the optimal cycle is once again by definition three-optimized), and it has the potential to improve a cycle which is already two-optimized. Of course, it carries a greater cost in both computational power and in algorithm complexity. I believe that if the three original edges are deleted, that there are only two ways to create three new edges such that the cycle is preserved (i.e. it is not split up into separate loops), but I may be incorrect.

There is also four-optimization and higher levels (all of which I have not tried), which are increasingly costly and offer diminishing returns.

Putting it all together: create all N NN cycles. Two-optimize, then three-optimize, all of them. Select the best one.

Suppose that an asteroid with the Earth's mass moving at 30 km/sec hits the Earth tangential to its orbit, zeroing out the Earth's tangential velocity. How long would it take the Earth to fall into the Sun? [We make several approximations here: most notably, the mass of the asteroid and Earth combined are not significant compared to the mass of the Sun.]

This might seem to be an easy problem, but as the Earth falls into the Sun it undergoes increasing acceleration. This leads to a differential equation which can be solved, but not without some work.

Recall Kepler's third law, the Law of Periods. For a circular orbit of radius r around a mass M, gravity supplies the centripetal force, and the period follows:

G * M * m / r^2 = m * v^2 / r

G * M / r = v^2

G * M / r = (2 * π * r / T)^2

G * M / r = 4 * π^2 * r^2 / T^2

G * M * T^2 = 4 * π^2 * r^3

T^2 = 4 * π^2 * r^3 / (G * M)

Note that this makes us happy - it confirms Kepler's empirical law. For an elliptical orbit, the same formula holds with r replaced by the semi-major axis (half the major axis), and it is independent of the eccentricity of the orbit - it holds for both circular orbits and extremely long, narrow ones.

In this problem, we can view the Earth as traveling on an infinitely skinny elliptical orbit, with the Sun at one focus at one end of the orbit and the Earth starting at the other end. The major axis of this orbit is the distance r from the Earth to the Sun, so the semi-major axis is r / 2. Falling into the Sun is half of one period of this degenerate orbit:

T = √[4 * π^2 * (r / 2)^3 / (G * M)]

T = 2 * π * √[r^3 / (8 * G * M)]

T / 2 = π * √[r^3 / (8 * G * M)]

5578760 seconds is about 64.6 days - a bit over two months.
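A quick Python sanity check of that arithmetic, plugging standard SI values into the half-period formula:

```python
from math import pi, sqrt

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # kg
R_ORBIT = 1.496e11  # m, the Earth-Sun distance

fall_time = pi * sqrt(R_ORBIT ** 3 / (8 * G * M_SUN))  # T / 2
print(round(fall_time), "seconds =", round(fall_time / 86400, 1), "days")
```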

#include <stdio.h>

/* SI units */
#define RESOLUTION 1
#define SUNRADIUS 695000000
#define SUNMASS 1.989e30
#define BIGG 6.6739e-11
#define EARTHORBITRADIUS 1.496e11

int main(void) {
    double x = EARTHORBITRADIUS;
    double v = 0;
    double t = 0;

    while (x > SUNRADIUS) { /* Not x > 0, beware */
        t += RESOLUTION;
        v -= BIGG * SUNMASS * RESOLUTION / (x * x);
        x += v * RESOLUTION;
    }

    printf("CRASH! Time: %f days, Velocity: %f m/s\n", t / 86400, -v);
    return 0;
}

The computational method indicates that the Earth achieves a significant velocity at the end, a small fraction of the speed of light.

This problem is found in HAKMEM Item 19. The challenge: given three inputs, produce all three of their complements using as many AND and OR gates as you like, but only two NOT gates. (You can count for yourself below: complementation appears exactly twice.)

So, given the three inputs A, B, C, form the three ANDs:

(1) A B.
(2) A C.
(3) B C.

And the three ORs:

(4) A + B.
(5) A + C.
(6) B + C.

From these, derive:

(7) (1) C = A B C.
(8) (4) + C = A + B + C.
(9) (4) C = A C + B C.
(10) (1) + (9) = A B + A C + B C.
(11) /(10) = /A /B + /B /C + /A /C. [the first NOT gate]
(12) (7) + (11) = A B C + /A /B + /B /C + /A /C.
(13) (12) (8) = A B C + A /B /C + /A B /C + /A /B C.
(14) /(13) = A B /C + A /B C + /A B C + /A /B /C. [the second NOT gate]

(15) (1) + (11) = A B + /A /B + /B /C + /A /C.
(16) (4) (11) = A /B /C + /A B /C.
(17) (2) + (11) = /A /B + /B /C + /A /C + A C.
(18) (5) (11) = A /B /C + /A /B C.
(19) (3) + (11) = /A /B + /B /C + /A /C + B C.
(20) (6) (11) = /A B /C + /A /B C.

(21) (14) + (16) = A /B + /A B + /C.
(22) (14) + (18) = A /C + /A C + /B.
(23) (14) + (20) = B /C + /B C + /A.

(24) (15) (21) = /C.
(25) (17) (22) = /B.
(26) (19) (23) = /A.
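Every step can be verified by brute force over the eight input combinations. A Python sketch (my transcription of the derivation; note that the only complement operations are /(10) and /(13)):

```python
from itertools import product

ok = True
for A, B, C in product((False, True), repeat=3):
    s10 = (A and B) or (A and C) or (B and C)        # the majority function
    s11 = not s10                                    # first NOT gate
    s13 = ((A and B and C) or s11) and (A or B or C) # odd parity
    s14 = not s13                                    # second NOT gate
    not_C = ((A and B) or s11) and (s14 or ((A or B) and s11))
    not_B = ((A and C) or s11) and (s14 or ((A or C) and s11))
    not_A = ((B and C) or s11) and (s14 or ((B or C) and s11))
    ok = ok and (not_A, not_B, not_C) == (not A, not B, not C)
print(ok)  # True
```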

In Scheme, functions are first-class datatypes and lambda functions are unnamed functions. At first glance, it appears impossible for a lambda function to call itself, because it has no name. Yet, we can perform iterative computations with lambda functions if we wish, entirely avoiding the use of define.

First, how might we write an iterative factorial function? (For the time being, I will ignore good style and not provide wrapper functions. A factorial function really should have the interface (factorial n), with the accumulator argument hidden away.)

(define (iterative-factorial n result)
  (if (= n 1)
      result
      (iterative-factorial (- n 1) (* result n))))

(iterative-factorial 10 1) ;; Computes 10!

Now, let's create a proto-factorial function. Instead of iterating itself, it merely answers questions about a single step of the iteration, selected by three boolean flags:

(define (proto-factorial check-if-done update-n update-result n result)
  (cond (check-if-done (= n 1))
        (update-n (- n 1))
        (update-result (* result n))))

> (proto-factorial #f #f #t 10 1) ;; If n is 10 and result is 1, what will be the next result?
10
> (proto-factorial #f #t #f 10 1) ;; If n is 10, what will be the next n?
9
> (proto-factorial #t #f #f 10 1) ;; If n is 10, is it time to stop the iteration?
#f

Of course, we are simply using syntactic sugar in (define (proto-factorial ...) ...); it is shorthand for binding the name to a lambda:

(define proto-factorial
  (lambda (check-if-done update-n update-result n result)
    (cond (check-if-done (= n 1))
          (update-n (- n 1))
          (update-result (* result n)))))

Now, we need some sort of master function which will repeatedly call proto-factorial until it says it is done:

(define (recurse fxn done n result)
  (if done
      result
      (recurse fxn                     ;; Hand the function down
               (fxn #t #f #f n result) ;; Ask whether we are done
               (fxn #f #t #f n result) ;; Ask what the new n should be
               (fxn #f #f #t n result) ;; Ask what the new result should be
               )))

So, calling (recurse proto-factorial #f 10 1) computes 10!. But recurse calls itself by name, so we haven't eliminated named recursion - we've just relocated it. So give recurse an extra parameter, prfxn, holding the function it should recurse through:

(define (proto-recurse fxn done n result prfxn)
  (if done
      result
      (prfxn fxn                     ;; Hand the function down
             (fxn #t #f #f n result) ;; Ask whether we are done
             (fxn #f #t #f n result) ;; Ask what the new n should be
             (fxn #f #f #t n result) ;; Ask what the new result should be
             prfxn                   ;; Hand prfxn down
             )))

Now we can call (proto-recurse proto-factorial #f 10 1 proto-recurse). Notice that the body of proto-recurse never mentions proto-recurse - it recurses through prfxn, the copy of itself that it hands down. The name is only needed to kick things off, and we can get rid of it by using a little lambda that passes proto-recurse's lambda to itself:

( (lambda (thing-to-duplicate)
    ;; Just like:
    ;; (proto-recurse proto-factorial #f 10 1 proto-recurse)
    (thing-to-duplicate proto-factorial #f 10 1 thing-to-duplicate))
  ;; proto-recurse:
  (lambda (fxn done n result prfxn)
    (if done
        result
        (prfxn fxn                     ;; Hand the function down
               (fxn #t #f #f n result) ;; Ask whether we are done
               (fxn #f #t #f n result) ;; Ask what the new n should be
               (fxn #f #f #t n result) ;; Ask what the new result should be
               prfxn                   ;; Hand prfxn down
               ))))

Now we can just substitute in proto-factorial's lambda as well:

( (lambda (thing-to-duplicate)
    (thing-to-duplicate
      ;; proto-factorial:
      (lambda (check-if-done update-n update-result n result)
        (cond (check-if-done (= n 1))
              (update-n (- n 1))
              (update-result (* result n))))
      #f 10 1
      thing-to-duplicate))
  ;; proto-recurse:
  (lambda (fxn done n result prfxn)
    (if done
        result
        (prfxn fxn                     ;; Hand the function down
               (fxn #t #f #f n result) ;; Ask whether we are done
               (fxn #f #t #f n result) ;; Ask what the new n should be
               (fxn #f #f #t n result) ;; Ask what the new result should be
               prfxn                   ;; Hand prfxn down
               ))))

And finally we can wrap up the whole thing in a nice bow:

(define factorial
  (lambda (number)
    ( (lambda (thing-to-duplicate)
        (thing-to-duplicate
          ;; proto-factorial:
          (lambda (check-if-done update-n update-result n result)
            (cond (check-if-done (= n 1))
                  (update-n (- n 1))
                  (update-result (* result n))))
          #f number 1
          thing-to-duplicate))
      ;; proto-recurse:
      (lambda (fxn done n result prfxn)
        (if done
            result
            (prfxn fxn                     ;; Hand the function down
                   (fxn #t #f #f n result) ;; Ask whether we are done
                   (fxn #f #t #f n result) ;; Ask what the new n should be
                   (fxn #f #f #t n result) ;; Ask what the new result should be
                   prfxn                   ;; Hand prfxn down
                   ))))))

Now we can compute (factorial 10) => 3628800 - and no function in sight ever calls itself by name.

Of course, all that complicated machinery really isn't necessary. With these principles in mind, we can develop a general way to allow lambda functions to recurse on themselves. This computes factorials:

( (lambda (f . data)
    (apply f (cons f data)))
  (lambda (fxn n)
    (if (= 0 n)
        1
        (* n (fxn fxn (- n 1)))))
  10 )

And this is a recursive multiply function:

( (lambda (f . data)
    (apply f (cons f data)))
  (lambda (fxn a b)
    (if (= 0 b)
        0
        (+ a (fxn fxn a (- b 1)))))
  10 15 )

This takes advantage of Scheme varargs, cons, and apply.
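The same self-application trick works in any language with first-class functions and varargs; here is the factorial version transcribed into Python, purely for comparison:

```python
# The anonymous function receives itself as its first argument
# and recurses through that argument, never through a name.
factorial_10 = (lambda f, *data: f(f, *data))(
    lambda fxn, n: 1 if n == 0 else n * fxn(fxn, n - 1),
    10,
)
print(factorial_10)  # 3628800
```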

https://nuwen.net/work.html (updated a long time ago)

**Stephan T. Lavavej**

Home: stl@nuwen.net

Work: stl@microsoft.com

This is my personal website. I work for Microsoft, but I don't speak for them.