Don't be discouraged that it is in Ruby, as the concepts are completely general. A great watch even though I've never coded a line of Ruby.
mkagenius 11 hours ago [-]
Is there a technique to remember this? I will understand it today and forget after a few weeks.
pvg 11 hours ago [-]
There's a way to remember smaller and smaller bits of it until you terminate at a piece small enough to remember permanently.
crdrost 7 hours ago [-]
Yes.
Lambda calculus is about template substitution. It is the abstract mathematics of substituting templates into other templates. Alonzo Church got interested in this when he realized that it had numbers,
type NumF = <X>(f: (x: X) => X) => (x: X) => X
const zero: NumF = f => x => x
const one: NumF = f => x => f(x)
const two: NumF = f => x => f(f(x))
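To poke at these numerals in a REPL, a small helper (mine, not part of the encoding) reads one back out by counting how many times f was applied:

```typescript
// repeating `two` from above (any-typed) so this snippet runs standalone
const two = (f: any) => (x: any) => f(f(x))

// instantiate X = number and count applications of f
const toNumber = (n: any): number => n((k: number) => k + 1)(0)

console.log(toNumber(two)) // 2
```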
Addition is not too hard to define, multiplication is actually just function composition, and decrement is an unexpected motherf$@&er. To understand this last point, it may help to know that this encoding of numbers is basically zero as [], one as [null], two as [null, null], etc., and you are trying to use Array.prototype.reduce to compute the array with one fewer element. You can do it with a (last, curr) pair or with a null initial value, but you have to know what those look like in the lambda calculus to translate:
// predecessor and successor
const succ = n => f => x => f(n(f)(x))
// Maybe fns: uncurried here; in λ-calc they'd be curried
type MaybeF<x> = <z>(ifNil: z, ifJust: (x: x) => z) => z
const nil: MaybeF<any> = (ifNil, _ifJust) => ifNil
function just<x>(x: x): MaybeF<x> {
return (_ifNil, ifJust) => ifJust(x);
}
// run the step n times starting from nil (yielding just(n - 1)),
// then unwrap the Maybe, so that pred(zero) = zero
const pred = n =>
  n(maybe_n => maybe_n(just(zero), k => just(succ(k))))(nil)(zero, k => k)
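For completeness, here is one way to write the addition and multiplication mentioned above (my sketch, same encoding, any-typed for brevity): addition runs one numeral's applications on top of the other's, and multiplication really is just composition.

```typescript
// repeated from the comment so this runs standalone
const two = (f: any) => (x: any) => f(f(x))
const three = (f: any) => (x: any) => f(f(f(x)))

// addition: apply f m times, then n more times on top
const add = (n: any, m: any) => (f: any) => (x: any) => n(f)(m(f)(x))
// multiplication is composition: "apply f m times" is itself an X -> X function
const mult = (n: any, m: any) => (f: any) => n(m(f))

// helper to read a numeral back out as a plain number
const toNumber = (n: any): number => n((k: number) => k + 1)(0)

console.log(toNumber(add(two, three)))  // 5
console.log(toNumber(mult(two, three))) // 6
```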
Now you asked for a technique to remember the Y combinator. The basic thing to remember is the infinite template substitution discovered by Haskell Curry,
(f => f(f))(f => f(f))
Forgetting everything you know about programming languages, and just thinking about template substitution, this is a template on the left, being substituted with a value on the right, and the reason it's an infinite loop is that after the substitution you get the exact same expression that you had before the substitution. [On the right we could have also written g => g(g) if that helps, the two variables are defined in separate scopes and only happen to have the same name f to make it clearer that this will loop infinitely if you try to eagerly evaluate it.]
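In a strict language this substitution really does loop forever; a quick check (my snippet, any-typed because, as the next paragraph explains, the term has no simple type):

```typescript
// the self-application has no simple type, so `any` is needed to write it at all
const selfApply = (f: any): any => f(f)

let blewUp = false
try {
  selfApply(selfApply) // substitutes into itself until the stack runs out
} catch (e) {
  blewUp = e instanceof RangeError // "Maximum call stack size exceeded"
}
console.log(blewUp) // true
```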
Terms like this were why Curry wanted a type theory. He immediately understood that this was a weird expression: if you tried to express the idea of f(f), it was secretly holding on to a sort of paradox of self-reference, or an infinity, or whatever you want to call it. It is a function type which takes only the same function type as an argument, and returns God-knows-what. If you just forbid self-reference in the typing judgments, you don't get this sort of weird f(f) pathology.
So if you are remembering f(f) being somehow the key to the Y combinator, then you immediately see that you can kind of step it by not immediately evaluating it,
const Q = (f => () => f(f))(f => () => f(f))
So as a matter of algebra Q() = Q, although a programming language struggles to prove equality between functions, so as a matter of programming they might not compare equal. This is a function you can call as Q()()()(): it's infinitely callable without ever producing any other value.
But why () =>, and why not x => ? Well, you can introduce an argument, but Q will just ignore it; it doesn't know what to do with it. You need to write f(f)(x) to plumb it through so the recursion can actually use it.
Note briefly that there IS a difference between f(f) and x => f(f)(x). The latter delays execution of f(f) until needed and so is still stepped, the former is an infinite loop.
And the last thing is that you need to hand this internal function to someone else, some decider, to decide when to call the recursion. This leads to the full form,
Usage with an example decider:
const factorial = Y(recurse => n => n <= 1 ? 1 : n * recurse(n - 1));
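The comment leaves Y itself implicit. In an eager language like TypeScript the usable fixed point is the call-by-value variant (the Z combinator), which is exactly the x => f(f)(x) eta-expansion trick from above wrapped around f(f); a sketch:

```typescript
// call-by-value fixed point (the Z combinator): the inner y => f(f)(y)
// delays the self-application so evaluation doesn't loop eagerly
const Y = (g: any) =>
  ((f: any) => g((y: any) => f(f)(y)))((f: any) => g((y: any) => f(f)(y)))

const factorial = Y((recurse: any) => (n: number): number =>
  n <= 1 ? 1 : n * recurse(n - 1))

console.log(factorial(5)) // 120
```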
mprast 8 hours ago [-]
personally I don't have as much trouble wrapping my brain around the Omega combinator as I do around the Y combinator, so I just try to remember that the Y combinator is an omega combinator with an extra f injected at the site of replication, if that makes sense
bloppe 11 hours ago [-]
It's easier to remember if you use a more common notation:
(Y(f))(x) = f(f(f(...(x)...)))
Or express it in python, which is still a bit weird but probably still more readable than pure LC to pretty much everybody:
def Y(f):
    return lambda x: f(Y(f)(x))
tromp 10 hours ago [-]
> (Y(f))(x) = f(f(f(...(x)...)))
I think that should be (Y(f))(x) = f(f(f(...)))(x)
chabska 7 hours ago [-]
That misses the whole point, in that you cannot refer to Y inside of Y in an environment that doesn't have recursion.
fritzo 10 hours ago [-]
From scratch it's also easy to dress up a quoted Y combinator as in Löb's theorem. Start with the usual Y combinator
Y = \f. (\x.f(x x)) (\x.f(x x))
: (p -> p) -> p
which can produce a thing of type p from a function f : p -> p.
Now Löb's theorem states that []([]p -> p) -> []p, which can be read, "if you have the source code to transform (the source code of a thing of type p) to a thing of type p, then you can produce the source code for a thing of type p". Let's embellish Y using {} to denote the quoting comonad.
To get there, just add quotes as needed: f must be quoted, f's result must be quoted, and f takes a quoted input.
tromp 11 hours ago [-]
> you technically can't do name = term
But you can do
let name = term in term2
by de-sugaring it to
((λname. term2) term)
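The desugaring is easy to check mechanically; a throwaway TypeScript sketch (note it only works because `name` here is not recursive):

```typescript
// let name = term in term2  ~~>  ((λname. term2) term)
// the sugared (const) and desugared (applied lambda) forms compute the same thing
const sugared = (() => { const name = 6 * 7; return name + 1; })()
const desugared = ((name: number) => name + 1)(6 * 7)

console.log(sugared === desugared) // true
```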
zarakshR 9 hours ago [-]
This is not relevant since this de-sugaring cannot be performed if `name` is recursive (i.e., if `term` refers to `name`), which `Y` is. So the author's derivation is still required in the pure lambda calculus.
programjames 9 hours ago [-]
I'm a constructivist, but I still believe in proof by contradiction. In fact, I don't think you can believe in proof by contradiction without constructivism. How do you know you can take an arbitrary proof P, and show it leads to a contradiction, if you don't even know you can touch P to begin with?
Anyway, how I would construct a proof by contradiction is:
1. Suppose you want to know if there exists a proof P of length less than N.
2. Do the usual "proof by contradiction" with a placeholder P.
3. You can write a very short algorithm that then plugs in all 2^N possible proofs into your proof by contradiction algorithm.
4. And voila! You have a constructive proof.
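The enumeration in steps 1–4 can be sketched concretely (toy names of mine, treating a "proof" as a bitstring and the proof-by-contradiction machinery as a black-box checker):

```typescript
type Proof = string                    // a candidate proof encoded as a bitstring
type Checker = (p: Proof) => boolean   // the contradiction machinery as a black box

// try all 2^0 + 2^1 + ... + 2^(N-1) candidate proofs of length < N
function findProof(check: Checker, N: number): Proof | null {
  for (let len = 0; len < N; len++) {
    for (let bits = 0; bits < 2 ** len; bits++) {
      const candidate = len === 0 ? "" : bits.toString(2).padStart(len, "0")
      if (check(candidate)) return candidate // a constructive witness
    }
  }
  return null // no proof shorter than N exists
}
```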
yayitswei 7 hours ago [-]
At first I thought they were going to implement it in MIT Scratch.