Quals countdown: T-3 days - Lindsey Kuper
[ website | composition.al ]

Quals countdown: T-3 days [Dec. 12th, 2010|04:14 pm]
Lindsey Kuper

I've been alternating between ATTaPL and TaPL all weekend, and I finished reading the latter a little while ago, although I skipped over most of the coinduction stuff in chapter 21 and skimmed pretty quickly over a lot of the OO stuff in the chapters near the end. It is probably going to feel odd the next time I look at a printed page that isn't in Lucida Bright.

He's helping.

After reading chapter 1 of ATTaPL, I think it's strange and interesting how the algorithmic[1] typing rules for linear types are actually sort of easier to read than the declarative ones. Could that be because linear types are an inherently procedural notion? Oh, and there's one thing I don't understand. Let's assume, as the first part of the chapter does, that the resource that we want to ensure linear use of is memory on the stack. In that case, what exactly is guaranteed about a program that makes it past the type checker? Does it just guarantee that it's possible to implement the compiler and/or run-time system in such a way that that program will use the stack linearly? In other words, is typability a necessary but not sufficient condition for actual linear use of whatever resource we're talking about? Because, you know, here we are going to all this effort with the typing rules and then the chapter is like "Oh, and also, let me tell you about this thing called...*drumroll*...a tail call!"
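
To show what I mean about the algorithmic rules reading procedurally: the checker consumes a variable when it's used and threads the leftover context through the subterms, which is exactly the kind of thing you'd write as a program anyway. Here's a toy sketch in Python — my own minimal encoding with made-up names (`Var`, `Lam`, `App`, `check`), not the chapter's formal system:

```python
# Toy algorithmic linear typing: check(term, ctx) returns (type, leftover_ctx).
# A variable is removed from the context when used; using it twice, or never,
# is a type error. Linear arrows are encoded as ("lolli", arg_ty, result_ty).

class Var:
    def __init__(self, name):
        self.name = name

class Lam:
    def __init__(self, name, ty, body):
        self.name, self.ty, self.body = name, ty, body

class App:
    def __init__(self, fn, arg):
        self.fn, self.arg = fn, arg

def check(term, ctx):
    """Return (type, leftover_ctx); raise TypeError on a linearity violation."""
    if isinstance(term, Var):
        if term.name not in ctx:
            raise TypeError(f"{term.name} is unbound or already used")
        rest = {k: v for k, v in ctx.items() if k != term.name}  # consume it
        return ctx[term.name], rest
    if isinstance(term, Lam):
        body_ty, rest = check(term.body, {**ctx, term.name: term.ty})
        if term.name in rest:
            raise TypeError(f"{term.name} was never used")
        return ("lolli", term.ty, body_ty), rest
    if isinstance(term, App):
        fn_ty, ctx1 = check(term.fn, ctx)
        arg_ty, ctx2 = check(term.arg, ctx1)  # thread the leftover context
        if not (isinstance(fn_ty, tuple) and fn_ty[0] == "lolli"
                and fn_ty[1] == arg_ty):
            raise TypeError("bad application")
        return fn_ty[2], ctx2

# The linear identity checks; self-application (x used twice) is rejected.
print(check(Lam("x", "A", Var("x")), {}))  # (('lolli', 'A', 'A'), {})
```

The declarative rules instead say "split the context into two pieces, one for each subterm" — and guessing the split is exactly the nondeterminism that the threading makes go away.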

As for chapter 10 of ATTaPL, I sort of gave up after a while. It got off to a wonderful start[2] and then gradually became completely impenetrable, so I just went back and read TaPL 22 again, and that was much better. (Mylk helped, as you can see.) I think I'll be okay if I can sort of hand-wavily talk about constraint generation and unification. There's still a big chunk of the list to plow through. I ought to read the F-to-TAL paper[3] next -- I'm not sure I've ever actually read it start-to-finish -- and I could stand to at least glance over the Findler and Felleisen again. That still leaves about five things that I really have to read before Wednesday, and about two others that need, at a minimum, a quick look.
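
In the spirit of hand-waviness, the unification half fits in a few lines. This is my own toy Python encoding of the TaPL-22-flavored idea (type variables as strings, constructors as tuples like `("arrow", t1, t2)`), not anything from either book verbatim:

```python
# First-order unification: extend a substitution so two types become equal.

def walk(t, subst):
    """Chase a type variable through the substitution."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """Does variable v occur in t? (Prevents infinite types like a = a -> a.)"""
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, arg, subst) for arg in t[1:])
    return False

def unify(t1, t2, subst):
    """Return an extended substitution making t1 and t2 equal, or raise."""
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str):
        if occurs(t1, t2, subst):
            raise TypeError("occurs check failed")
        return {**subst, t1: t2}
    if isinstance(t2, str):
        return unify(t2, t1, subst)
    if t1[0] == t2[0] and len(t1) == len(t2):
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst)
        return subst
    raise TypeError(f"cannot unify {t1} and {t2}")

# Unifying 'a -> 'a with int -> 'b forces both variables to int.
s = unify(("arrow", "a", "a"), ("arrow", ("int",), "b"), {})
print(walk("a", s), walk("b", s))  # ('int',) ('int',)
```

Constraint generation is then just a walk over the term that emits equations like these, and you feed them to `unify` one at a time.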

And then I guess we'll just hope for the best. Late last week I sent Dan a spastic middle-of-night STEP-INDEXING IS FIXPOINT THEORY!!! email that he was pretty pleased about. I think my committee will agree that whether or not I'm good at this stuff, at least I'm enthusiastic.


  [1] I know it's standard usage and all, but it kind of pains me that typing rules apparently have to be syntax-directed and completely deterministic to be considered "algorithmic". That strikes me as too narrow a definition of the word "algorithm". Are you going to tell me that something isn't an algorithm even if I can express it as a miniKanren program?
  [2] I like that the authors distinguish between ML-the-programming-language and ML-the-type-system. That way, we can reconcile the fact that ML-the-type-system is sitting somewhere in between STLC and System F in terms of power with the fact that ML-the-programming-language is more powerful than System F because it gives you general recursion. Obviously we need a lambda hypercube.
  [3] Which isn't in the pile I thought it was in -- and it's kind of snowing a lot. Crap.

Comments:
[User Picture]From: lindseykuper
2010-12-13 01:33 am (UTC)
You know, the first time I tried to read chapter 2 was a year ago (to prepare for this talk (warning: PDF)), and I remember thinking then that it was insanely hard to understand. Reading it again this weekend, I kept thinking, "Wow, they really didn't explain that at all" or "Wow, that's not how I would explain that". So I eventually did learn about dependent types somehow, but it was probably mostly through having conversations with helpful people. Maybe the peanut gallery has some suggestions for good introductions to dependent types?
(Deleted comment)
[User Picture]From: lindseykuper
2010-12-14 04:49 pm (UTC)
Yeah, it's one of those concept-is-simple-but-implications-are-profound things. Nothing could be simpler than untyped LC, yet it's only recently that I actually understood why the usual pred combinator for Church numerals works, and even knowing the trick I probably couldn't reconstruct it from scratch.
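
For the record, here's the trick transcribed into Python lambdas (my transcription of the standard pairing-style pred, so the encoding details are mine): each of the n applications shifts the "previous" value into place, and the `lambda u: x` at the bottom throws away one step.

```python
# Church numerals in Python lambdas.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

# The usual pred combinator: \n.\f.\x. n (\g.\h. h (g f)) (\u.x) (\u.u)
pred = lambda n: lambda f: lambda x: \
    n(lambda g: lambda h: h(g(f)))(lambda u: x)(lambda u: u)

def to_int(n):
    """Decode a Church numeral by counting applications."""
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
print(to_int(pred(three)))  # 2
print(to_int(pred(zero)))   # pred of zero is zero: 0
```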

I think dependent types are a little harder, though, because you can lose track of which universe you're in. "What's the type of...oh, wait, it *is* a type."
[User Picture]From: tim.dreamwidth.org
2010-12-18 03:04 am (UTC)
Hmm, I thought I already responded to this, but I guess I didn't. Anyway, I think I picked up a lot of intuition about dependent types just by working with Coq. You never think "oh, I'm using dependent types", but you are, and they just make sense there, and then suddenly you can understand papers that didn't make any sense to you when you tried to read them two years ago.

Or maybe that's just me. However, ch. 2 of ATTaPL is apparently beyond the level of those papers that did start to make sense to me after spending a year with Coq.