Comments:
Yaaaaaaaaay! (Where's that Kermit gif when you need it... apparently not in my userpic list.) Fake edit:
Congrats! What was the most interesting thing you've learned so far, not necessarily in the lectures?
You ask the best questions. There have been a lot of interesting things; I don't know if I can pick just one. Let's see:
* Sequent calculus is just natural deduction half flipped upside down. Somehow I had never heard anyone explain it this way before Frank's lectures, and it makes so much sense!
* Dependent types subsume GADTs. That's not too surprising, I guess, but seeing it worked out was instructive.
* "If you make all the atoms negative, you get Prolog. If you make all the atoms positive, you get Datalog." I'll have to meditate on this. (Incidentally, the people I'll be talking to at Berkeley next month work on a descendant of Datalog.)
* Iterated identity types have a step-indexy sort of smell.
* "Exploding Kripke models" are a thing.
* "Uniformity" is just parametricity.
* In the category of posets, coproduct is least upper bound. (Maybe I'll eventually understand enough to be able to exploit this fact somehow.)
* Being able to extract programs from Coq is apparently the main reason for the Type/Prop distinction (I don't understand this yet, and really want to).
* At least one of the workshop speakers apparently sees nothing wrong with making gender-essentialist comments that sell men short. In this case, it was something along the lines of "women are better at reasoning about distributed systems because unlike men, they can think about multiple things at once". As icing on the sexist cake, the same speaker who did this then repeatedly used "distributed systems guy" to refer to a generic distributed systems researcher.
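(On the poset bullet, here's my own unpacking, not something from the lectures: in a poset viewed as a category there is at most one arrow $a \to b$, and it exists exactly when $a \le b$, so the coproduct's universal property reads:)

```latex
a \le a + b, \qquad b \le a + b,
\qquad\text{and for any } c:\qquad
(a \le c \;\wedge\; b \le c) \;\Longrightarrow\; a + b \le c,
```

which is word-for-word the definition of the least upper bound, i.e. $a + b = a \vee b$, the join. (Dually, the product is the meet.)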
There have also been some things that aren't new to me, but are still nice to see again (Landin's knot; a stunningly clear explanation of monads from John Hughes; a historical perspective on logical relations from Amal). Hughes even used QuickCheck to fuzz a bunch of monads to make sure they obeyed the monad laws, which was just totally delightful.
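(Hughes's demo was in Haskell with QuickCheck, but the idea fits in a few lines of anything. Here's a sketch of it in Python for the list monad, with plain random inputs standing in for QuickCheck's generators; the names `unit` and `bind` and the particular test functions are mine:)

```python
import random

# List monad: unit wraps a value; bind maps f over each element and flattens.
def unit(x):
    return [x]

def bind(m, f):
    return [y for x in m for y in f(x)]

# Two arbitrary Kleisli arrows to test against.
f = lambda x: [x, x + 1]
g = lambda x: [x * 2]

random.seed(0)
for _ in range(100):
    x = random.randint(-50, 50)
    m = [random.randint(-50, 50) for _ in range(random.randint(0, 5))]
    # Left identity: bind(unit(x), f) == f(x)
    assert bind(unit(x), f) == f(x)
    # Right identity: bind(m, unit) == m
    assert bind(m, unit) == m
    # Associativity: bind(bind(m, f), g) == bind(m, lambda y: bind(f(y), g))
    assert bind(bind(m, f), g) == bind(m, lambda y: bind(f(y), g))

print("all three monad laws held on 100 random inputs")
```

A "monad" whose bind forgot to flatten, say, would fail these within a few iterations, which is exactly the kind of bug the fuzzing caught.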
Ooh the BOOM/Dedalus people? I am really interested in that stuff, and have irresponsibly not made time to work my way carefully through their papers and implementation. (So super jealous.)
But yeah. It was Frank who said that about the atoms, I presume? (Additional, even crazier fact: if you use linear logic and totally flip upside down what you mean when you say "I'm looking for a proof of the sequent X", you get *Datalog* from making all the atoms *negative* and *Prolog* from making all the atoms *positive*.)
WAIT WHAT WHY IS IT BACKWARDS D:
just when I thought I was starting to understand logic programming...
It's backwards because it's also upside down! Don't worry, you probably are starting to understand logic programming. The idea is to forget everything you know about what it means to do proof search.

Paradigm 1, the familiar paradigm (call it the Proof Construction Paradigm, following Andreoli as we are all so wont to do), is "I have a set of sequents I'm trying to prove, and I'll search for a derivation of each one, working backwards from the goal." In the Proof Construction Paradigm, positive polarity = Datalog, negative polarity = Prolog.

The other paradigm, Paradigm 2, is "let's keep a database of all the sequents we already have derivations for, because if we can grow that database until it includes the thing I'm trying to prove, I will, obviously, be done." This is what's called Inverse Method theorem proving, and before you say "that sounds insanely inefficient", remember that it's basically what forward chaining does, just played out at the level of sequents rather than atomic propositions -- and Sean McLaughlin is, I think, putting the finishing touches on the Best Theorem Prover For Intuitionistic Logic In The World (TM) right now, and it's based on inverse method theorem proving.

It was in the context of the inverse method that Kaustuv and company realized that polarity switching made Prolog/Datalog happen. But if your computational model is the inverse method rather than the Proof Construction Paradigm, you're growing your derivation trees from the sky down instead of the ground up, and everything is reversed: negative polarity gives you Datalog (straightforwardly, actually!) and positive polarity gives you Prolog (in kind of a weird way). Kaustuv wrote a summary newsletter article about this upside-down view; I recommend it.
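(If it helps, the two paradigms can be caricatured in a few lines of Python over a tiny propositional Horn program -- drastically simplified, with "sequents" collapsed to atomic facts, and all the names and the rule encoding my own:)

```python
# A tiny propositional Horn program: each rule is (head, [body atoms]).
rules = [
    ("a", []),            # a fact: nothing in the body
    ("b", ["a"]),
    ("c", ["a", "b"]),
    ("d", ["e"]),         # underivable: nothing concludes "e"
]

# Paradigm 1 (Proof Construction, Prolog-style):
# work backwards from the goal, searching for a derivation.
def backward(goal, depth=0):
    if depth > 100:       # crude guard against looping programs
        return False
    return any(all(backward(b, depth + 1) for b in body)
               for head, body in rules if head == goal)

# Paradigm 2 (saturation, Datalog-style and inverse-method-flavored):
# grow a database of everything derivable until it stops changing,
# then check whether the goal landed in it.
def forward():
    db = set()
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in db and all(b in db for b in body):
                db.add(head)
                changed = True
    return db

print(backward("c"))      # goal-directed search succeeds: True
print(backward("d"))      # False: "e" is never derivable
print(forward())          # the saturated database: a, b, and c
```

The real inverse method keeps whole sequents in the database rather than atoms, which is where all the interesting polarity business comes in, but the "grow a database to a fixpoint" shape is the same.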
Wow that was really long BUT I WILL DERAIL ANY CONVERSATION TO TALK ABOUT FOCUSING.
(Also congrats on the awesome funding successes Lindsey!)
I love lengthy asides on PL topics showing up in my blog comments, and will do all I can to encourage them.
(Thanks!)
Derailing To Talk About Focusing For Dummies **(Deleted comment)**
My understanding is that magic templates says "show me a forward-chaining logic programming interpreter, and I'll show you a tabled backward-chaining interpreter for well-moded logic programs." So magic sets is showing that forward chaining can emulate (tabled) backward chaining; Frank's point is that a more general Proof Search For Intuitionistic Sequents can simulate both.
But vice-versa? What does it mean for magic sets to do a vice versa thing?
Nope, but I'd be interested to hear about it!
Yep, that was Frank! (And you can play "guess the CMU faculty member" among most of the remaining ones, too, if you like. (Edited to add: The gender-essentialist remark was *not* from a CMU faculty member.)) I know very little about Datalog or any of its successors, but as it turns out, there's a close analogy to be made between some recent work by the BOOM people and the project that Ryan and I got this grant to do. That really deserves its own post, though -- I'll get around to it soon! *Edited at 2012-07-28 05:29 pm (UTC)*
Congrats! Funding is an unqualified good.
Superb news! Congratulations!