
"A system for testing specifications of CPU semantics" [Aug. 21st, 2010|12:47 am]
Lindsey Kuper

Today was the last day of my internship. It was the other interns' last day, too, and we all gave talks this afternoon about what we worked on this summer. In case you were wondering what I've been up to for the last thirteen weeks, I've posted the slides and notes from mine.

Me, to one of the other interns: Your talk was good!
Him: Thanks! Yours was...complicated!

...I don't think it was that complicated, really! But you can judge for yourself. Also, I did the talk without notes, and I feel good about that. In the past, I've always given talks by writing down almost every word I want to say, and bringing a big ol' stack of paper up to the front with me. I didn't do that this time. I still wrote down almost every word I wanted to say -- I think pretty textually, and writing it all down helps me get my thoughts in order and drives the slide-making process. I stole the idea of writing down talks from danah boyd a long time ago, and I'll probably keep doing it. But instead of bringing the crib notes with me, I went up and did it cold from the slides, and I think that it was a much better talk for having done so. I had no choice but to be thinking about the things I was saying as I was saying them, and that meant that I had the presence of mind to answer spontaneous questions as they came up during the talk. I'm still working on becoming a good speaker, but I think this is the best I've done so far.

All in all, the summer was a success: the project will continue in my absence, I'll be a coauthor if we eventually publish something, I was invited to return next summer, and we all celebrated with pizza and Rock Band this evening. Count this one a win.

Edited to add: I gave an expanded version of the talk to my research group later (slides; notes). It's mostly the same, but with a bit more detail about abstract interpretation.


(Deleted comment)
From: lindseykuper
2010-08-23 03:54 am (UTC)
As part of implementing look-thru memory, I wrote what was essentially a wrapper API for ptrace system calls on Linux. ptrace gives you the ability to attach to a process, single-step it, look at its registers, and so on. At first, I just wrote a function to wrap each kind of ptrace request that we wanted to make, and I had to do a bunch of run-time error checking to make sure that no one would, for instance, request to single-step a process before they had attached to it or after they had detached from it.

It turned out to be a nicer design to have a look-thru object that attaches to the process at the time it is constructed and detaches when it's destructed, and have other kinds of requests, like the single-step request or the get-contents-of-registers request, be methods on that object. Then, it's impossible to make those requests of a process to which you're not attached. (Later, when I told a co-worker about this, he pointed out that it was a good example of RAII.) Not only did this make my code saner, but because the error checking and the "Don't call this function unless..." comments could go away, it made my code shorter, which totally blew my mind, and that's when the moment of "So this is what objects are good for!" happened. When I told this story to you and Alec on the train two weeks ago, you said something like "But that just means that abstraction is good for something, not that objects are good for something," and I'm wondering if I can get you to elaborate on that.
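The design change described above can be sketched in Python (as a rough analog of the C++ RAII pattern; `LookThru` and its methods are hypothetical stand-ins, with a simulated process in place of real ptrace(2) calls):

```python
# Sketch of the before/after design: requests become methods on an
# object that attaches on construction, so the runtime "are we
# attached yet?" checks from the procedural version go away.
# All names here are hypothetical, not the real wrapper API.

class LookThru:
    """Attaches to a process on construction; detach ends its use."""

    def __init__(self, pid):
        self.pid = pid
        self._attached = True   # stands in for PTRACE_ATTACH

    def single_step(self):      # stands in for PTRACE_SINGLESTEP
        assert self._attached
        return f"stepped {self.pid}"

    def get_registers(self):    # stands in for PTRACE_GETREGS
        assert self._attached
        return {"pc": 0x1000}

    def detach(self):           # stands in for PTRACE_DETACH
        self._attached = False


t = LookThru(pid=42)
print(t.single_step())   # requests are only expressible via the object
print(t.get_registers())
t.detach()
```

In Python the idiomatic spelling of this idea would be a context manager (`with LookThru(pid) as t: ...`), which also guarantees the detach; the C++ version gets that guarantee from the destructor.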
From: underwhelm
2010-08-23 08:00 am (UTC)
I cut my teeth on c++ and java.

I don't know much about programming, but it was enough for me to learn that with an oo-oriented language, you get for free a paradigm that your professors will make you learn to implement in non-oo-oriented languages.

Anyway, as wikipedia tells it, abstraction is just one of several features of oo programming. All of which I suppose you can get when using a procedurally-oriented language... but not for free.
From: lindseykuper
2010-08-23 12:59 pm (UTC)

Quiet, lawyer. Go back to your...lawyer...stuff.

Just kidding. Yeah, so, I have several things to say here. One is that I think there's a high threshold for OO to become worth it. If OO is making your code longer, as it so often does, then it's worth thinking pretty hard about whether you really want OO, because more code means more bugs regardless of other factors. Another is that it's important to realize that programming paradigms are not mutually exclusive, and you can write OO-style code in most languages; it's just that some have baked-in support for it. But it's worth considering using a language that lets you move freely among styles. Then, the places in your code that really need to be OO (like my look-thru library) can evolve toward OO, but you don't have to commit immediately. I can't believe I'm actually arguing in favor of C++ right now, but I guess I am.
From: jes5199
2010-08-24 06:44 am (UTC)
It hadn't even occurred to me that RAII was an OO-specific sort of construct. There's a ruby way of doing it with lambdas, but calling foo.method() still feels safer than calling function(foo), since you can't even call the method at all if you don't have the object.

Still, I think the biggest win is code partitioning - you can think of each object as a little program with its own inputs, outputs, and state. You can test each object's implementation separately.

Also, I like polymorphism, but there are other ways of getting there.
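The "do it with lambdas" version mentioned above can also be sketched (in Python rather than Ruby, and with hypothetical names): a higher-order function attaches, runs the caller's block, and guarantees the detach.

```python
# The block/lambda style of resource management: instead of an object
# whose lifetime brackets the resource, a function brackets it around
# the caller's code. All names are hypothetical illustrations.

def with_attached(pid, body):
    resource = {"pid": pid, "attached": True}   # stands in for attach
    try:
        return body(resource)                   # caller's lambda runs here
    finally:
        resource["attached"] = False            # detach is guaranteed

result = with_attached(42, lambda r: f"stepped {r['pid']}")
print(result)  # -> stepped 42
```

The trade-off the comment points at: here nothing stops a caller from passing a bogus `resource` to some other function, whereas with `foo.method()` there is no method to call without the object in hand.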
From: lindseykuper
2010-08-24 07:25 pm (UTC)
There's a ruby way of doing it with lambdas, but calling foo.method() still feels safer than calling function(foo), since you can't even call the method at all if you don't have the object.

Exactly. There's always a way of doing it with lambdas, and function(foo) is what's really happening under the hood in some sense anyway; Python's self arguments are one place where that bleeds through a little bit.
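The "bleeds through" remark can be made concrete: in Python, `foo.method()` is essentially sugar for looking up the function on the class and passing `foo` as its first argument, so the `function(foo)` spelling still works.

```python
# Minimal illustration (hypothetical class) of obj.method() being
# the same call as Class.method(obj), with self as an ordinary
# first parameter.

class Counter:
    def __init__(self):
        self.n = 0

    def bump(self):        # 'self' is just the first parameter
        self.n += 1
        return self.n

c = Counter()
print(c.bump())            # method-call syntax -> 1
print(Counter.bump(c))     # the same call, function(foo)-style -> 2
```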
From: mindstalk
2010-08-21 09:09 pm (UTC)
The writing-down-talks link is borked.
From: lindseykuper
2010-08-23 03:18 am (UTC)
Thanks. That was me being indecisive about which of her talks I wanted to link to.
From: jes5199
2010-08-24 06:34 am (UTC)
It didn't seem all that complicated to me, but it's a clever solution to a problem that I had never even thought about before, so I think that's pretty cool.
From: lindseykuper
2010-08-25 09:50 pm (UTC)
Thanks! Finding easier ways to do static analysis of executables is still an unusual problem to have; not a lot of people are doing that. GrammaTech's whole research program is centered around the idea of "what you see is not what you execute", meaning that no matter what approach you take, no matter how sophisticated your analysis of source code is, you won't really know what your programs are doing unless you analyze machine code. It's a compelling argument.
From: jes5199
2010-08-25 10:47 pm (UTC)
Sure, but you never step into the same processor twice. Even executing the code on metal is insufficient to understand its entire possibility space, because the combinatorics of the machine state are so huge.