[Jul. 26th, 2012|10:59 pm]
I have some exciting research news to share: the grant proposal that my advisor and I co-wrote and submitted last December has been funded! Barring unforeseen circumstances, this grant will fund my research from now until 2015 or when I graduate, whichever comes first.[1]
We received the "we expect to be able to fund your proposal" email back in May, but it's taken until now for all the paperwork that makes the grant official to go through, and I didn't want to jump the gun by saying anything publicly before now. Suffice it to say it's been a rather squirmy two months of frequent email-checking and NSF-website-page-reloading. For me, at least, the stakes were pretty high, but getting this grant means that the second half of grad school is beginning to shape up in the way that I had hoped it would.
The work that we proposed is already underway, and I'm going to be giving a couple of talks about it in the near future, starting with a five-minute lightning talk tomorrow. I've been at the Oregon Programming Languages Summer School for the last two weeks, learning lots of new things[2] and hanging out with many wonderful people from my research community, one of whom (hi, Gabriel!) graciously organized several sessions' worth of student talks. I've been practicing a bit today, and as it turns out, five minutes is enough time to say approximately nothing whatsoever, but I'm going to give it a shot anyway. Then, three weeks from now, I'll be visiting the BOOM lab at Berkeley and giving a longer talk -- more on that soon.
1. The latter, hopefully.
2. To give you an idea of the information density of the summer school, the 80-sheet (so, 160-page) notebook that I bought on July 16 is now over half filled. It's a good thing that we're almost done, because I've just about reached my limit for absorbing new ideas in a short time.
(Where's that Kermit gif when you need it... apparently not in my userpic list.)
Congrats! What was the most interesting thing you've learned so far, not necessarily in the lectures?
You ask the best questions. There have been a lot of interesting things; I don't know if I can pick just one. Let's see:
* Sequent calculus is just natural deduction half flipped upside down. Somehow I had never heard anyone explain it this way before Frank's lectures, and it makes so much sense!
* Dependent types subsume GADTs. That's not too surprising, I guess, but seeing it worked out was instructive.
* "If you make all the atoms negative, you get Prolog. If you make all the atoms positive, you get Datalog." I'll have to meditate on this. (Incidentally, the people I'll be talking to at Berkeley next month work on a descendant of Datalog.)
* Iterated identity types have a step-indexy sort of smell.
* "Exploding Kripke models" are a thing.
* "Uniformity" is just parametricity.
* In the category of posets, coproduct is least upper bound. (Maybe I'll eventually understand enough to be able to exploit this fact somehow.)
* Being able to extract programs from Coq is apparently the main reason for the Type/Prop distinction (I don't understand this yet, and really want to).
* At least one of the workshop speakers apparently sees nothing wrong with making gender-essentialist comments that sell men short. In this case, it was something along the lines of "women are better at reasoning about distributed systems because unlike men, they can think about multiple things at once". As icing on the sexist cake, the same speaker who did this then repeatedly used "distributed systems guy" to refer to a generic distributed systems researcher.
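To make the poset point concrete: here's a little sketch (mine, not from the lectures) in the poset of positive integers ordered by divisibility, where the coproduct (join) of a and b is lcm(a, b), and the coproduct's universal property collapses to "least upper bound".

```python
# The poset of positive integers under divisibility: a <= b iff a divides b.
# The coproduct (categorical join) of a and b here is lcm(a, b).
from math import gcd

def leq(a, b):
    """a <= b in the divisibility order: a divides b."""
    return b % a == 0

def join(a, b):
    """Coproduct in this poset: the least common multiple."""
    return a * b // gcd(a, b)

a, b = 4, 6
j = join(a, b)

# The universal property of the coproduct, specialized to a poset:
# j is an upper bound of a and b...
assert leq(a, j) and leq(b, j)
# ...and it sits below every other upper bound (checked over a small range).
for c in range(1, 1000):
    if leq(a, c) and leq(b, c):
        assert leq(j, c)
```

In a poset there's at most one arrow between any two objects, so the usual coproduct diagram degenerates into exactly the least-upper-bound condition checked above.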
There have also been some things that aren't new to me, but are still nice to see again (Landin's knot; a stunningly clear explanation of monads from John Hughes; a historical perspective on logical relations from Amal). Hughes even used QuickCheck to fuzz a bunch of monads to make sure they obeyed the monad laws, which was just totally delightful.
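Hughes's demo was in Haskell with QuickCheck, but the idea translates; here's a rough Python analogue (my sketch, with hypothetical Kleisli arrows) that hammers a Maybe-style monad with random inputs and checks the three monad laws, where None stands in for Nothing.

```python
import random

# A Maybe-style monad: None plays Nothing, and a plain value x stands
# for Just x. (This conflates None-the-value with Nothing, which is
# fine for a sketch but wouldn't fly in a real library.)

def unit(x):
    return x  # 'return'/'pure': inject a value into the monad

def bind(m, f):
    return None if m is None else f(m)

# A couple of Kleisli arrows to test with (made up for this example):
def half(x): return x // 2 if x % 2 == 0 else None
def incr(x): return x + 1

random.seed(0)
for _ in range(1000):
    x = random.randint(-100, 100)
    m = random.choice([None, x])
    for f in (half, incr):
        # Left identity: bind(unit(x), f) == f(x)
        assert bind(unit(x), f) == f(x)
        # Right identity: bind(m, unit) == m
        assert bind(m, unit) == m
        # Associativity:
        for g in (half, incr):
            assert bind(bind(m, f), g) == bind(m, lambda y: bind(f(y), g))
```

QuickCheck does the same thing with more machinery (generators, shrinking, minimal counterexamples); the delightful part is just that the monad laws are ordinary testable properties.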
Ooh the BOOM/Dedalus people? I am really interested in that stuff, and have irresponsibly not made time to work my way carefully through their papers and implementation. (So super jealous.)
But yeah. It was Frank that said that about the atoms, I presume? (Additional even crazier fact: if you use linear logic and totally flip upside down the thing that you mean when you say "I'm looking for a proof of the sequent X", you can get Datalog from making all the atoms negative and Prolog from making all the atoms positive.)
WAIT WHAT WHY IS IT BACKWARDS D:
just when I thought I was starting to understand logic programming...
It's backwards because it's also upside down! Don't worry, you probably are starting to understand logic programming.
The idea is to forget everything you know of what it means to do proof search. Paradigm 1, the familiar paradigm (call it the Proof Construction Paradigm, following Andreoli, as we are all so wont to do), is "I have a set of sequents, and I'm trying to construct proofs of these sequents, because if I can construct a proof, then the thing I'm trying to prove has a derivation." In the Proof Construction Paradigm, positive polarity = Datalog, negative polarity = Prolog.
The other paradigm, Paradigm 2, is "let's keep a database of all the sequents we have derivations for, because if we can grow that database until it includes the thing I'm trying to prove, I will, obviously, be done." This is what's called Inverse Method Theorem Proving, and before you say "that sounds insanely inefficient", remember that it's basically what Forward Chaining does, but played out at the level of sequents, not atomic propositions -- and Sean McLaughlin is, I think, putting the finishing touches on the Best Theorem Prover For Intuitionistic Logic In The World (TM) right now, and it's based on inverse method theorem proving.
It was in the context of Inverse Method that Kaustuv and company realized that polarity switching made Prolog/Datalog happen, but if your computational model is the inverse method, not the Proof Construction Paradigm, you're growing your derivation trees from the sky down instead of the ground up, and everything is reversed: negative polarity gives you Datalog (straightforwardly, actually!) and positive polarity gives you Prolog (in kind of a weird way). Kaustuv wrote a summary newsletter article about this upside-down view; I recommend it.
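The "grow the database until the goal shows up" loop can be sketched in a few lines; here's a toy version (mine, heavily simplified: ground atoms and pre-instantiated Horn rules rather than sequents) of that bottom-up saturation.

```python
# A minimal bottom-up evaluator in the spirit of Paradigm 2, but over
# ground atomic facts instead of sequents: keep deriving consequences
# of Horn rules until nothing new appears (a fixpoint), then check
# whether the goal landed in the database. Facts and rules are made up.

def forward_chain(facts, rules):
    """rules: list of (premises, conclusion) pairs over ground atoms."""
    db = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if all(p in db for p in premises) and conclusion not in db:
                db.add(conclusion)
                changed = True
    return db

facts = {("edge", "a", "b"), ("edge", "b", "c")}
# Ground instances of: path(X,Y) :- edge(X,Y).
#                      path(X,Z) :- path(X,Y), edge(Y,Z).
rules = [
    ([("edge", "a", "b")], ("path", "a", "b")),
    ([("edge", "b", "c")], ("path", "b", "c")),
    ([("path", "a", "b"), ("edge", "b", "c")], ("path", "a", "c")),
]
db = forward_chain(facts, rules)
assert ("path", "a", "c") in db   # the "goal" now has a derivation
```

The inverse method plays this same saturation game with derived sequents as the database entries, plus subsumption and focusing to keep the database from exploding.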
Wow that was really long BUT I WILL DERAIL ANY CONVERSATION TO TALK ABOUT FOCUSING.
(Also congrats on the awesome funding successes Lindsey!)
I love lengthy asides on PL topics showing up in my blog comments, and will do all I can to encourage them.
Derailing To Talk About Focusing For Dummies
My understanding is that magic templates says "show me a forward-chaining logic programming interpreter, and I'll show you a tabled backward-chaining interpreter for well-moded logic programs." So magic sets is showing that forward chaining can emulate (tabled) backward chaining; Frank's point is that a more general Proof Search For Intuitionistic Sequents can simulate both.
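For flavor, here's a toy illustration (my own, drastically simplified to ground rules) of the magic-sets trick: run forward chaining, but guard every rule with a "magic" atom recording which subgoals the query actually demands, so the bottom-up run stays as focused as a top-down one would be.

```python
# Magic-sets sketch: forward chaining over rules guarded by "magic"
# atoms. Only facts demanded by the query get derived. All predicate
# names, facts, and rule instances here are invented for illustration.

def forward_chain(facts, rules):
    db = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if all(p in db for p in premises) and conclusion not in db:
                db.add(conclusion)
                changed = True
    return db

edges = {("edge", "a", "b"), ("edge", "b", "c"), ("edge", "x", "y")}
# Query: path("a", Z)?  Seed the magic predicate with the bound argument.
facts = edges | {("m_path", "a")}
# Magic-transformed ground instances of the path rules: each rule fires
# only when its first argument has been demanded.
rules = [
    ([("m_path", "a"), ("edge", "a", "b")], ("path", "a", "b")),
    ([("m_path", "a"), ("path", "a", "b"), ("edge", "b", "c")], ("path", "a", "c")),
    ([("m_path", "x"), ("edge", "x", "y")], ("path", "x", "y")),  # guard never seeded
]
db = forward_chain(facts, rules)
assert ("path", "a", "c") in db        # the query's answers are derived...
assert ("path", "x", "y") not in db    # ...but query-irrelevant facts are not
```

The real transformation also generates rules that propagate demands (deriving new m_path facts from rule bodies); I've pre-seeded the only demand here to keep the example short.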
But vice-versa? What does it mean for magic sets to do a vice versa thing?
Nope, but I'd be interested to hear about it!
Yep, that was Frank! (And you can play "guess the CMU faculty member" among most of the remaining ones, too, if you like. (Edited to add: The gender-essentialist remark was not from a CMU faculty member.))
I know very little about Datalog or any of its successors, but as it turns out, there's a close analogy to be made between some recent work by the BOOM people
and the project that Ryan and I got this grant to do. That really deserves its own post, though -- I'll get around to it soon!
Edited at 2012-07-28 05:29 pm (UTC)
Congrats! Funding is an unqualified good.
Superb news! Congratulations!