As part of implementing look-thru memory, I wrote what was essentially a wrapper API for ptrace system calls on Linux. ptrace gives you the ability to attach to a process, single-step it, look at its registers, and so on. At first, I just wrote a function to wrap each kind of ptrace request that we wanted to make, and I had to do a bunch of run-time error checking to make sure that no one would, for instance, request to single-step a process before they had attached to it or after they had detached from it.
It turned out to be a nicer design to have a look-thru object that attaches to the process at the time it is constructed and detaches when it's destructed, and have other kinds of requests, like the single-step request or the get-contents-of-registers request, be methods on that object. Then, it's impossible to make those requests of a process to which you're not attached. (Later, when I told a co-worker about this, he pointed out that it was a good example of RAII.) Not only did this make my code saner, but because the error checking and the "Don't call this function unless..." comments could go away, it made my code *shorter*, which totally blew my mind, and that's when the moment of "So *this* is what objects are good for!" happened.

When I told this story to you and Alec on the train two weeks ago, you said something like "But that just means that *abstraction* is good for something, not that objects are good for something," and I'm wondering if I can get you to elaborate on that.
I cut my teeth on C++ and Java. I don't know much about programming, but I learned enough to see that with an OO language you get for free a paradigm that your professors will make you implement by hand in non-OO languages.
Anyway, as Wikipedia tells it, abstraction is just one of several features of OO programming, all of which, I suppose, you can get in a procedural language... but not for free.
2010-08-23 12:59 pm (UTC)
Quiet, lawyer. Go back to your...lawyer...stuff.
Just kidding. Yeah, so, I have several things to say here. One is that I think there's a high threshold for OO to become worth it. If OO is making your code longer, as it so often does, then it's worth thinking pretty hard about whether you really want OO, because more code means more bugs regardless of other factors.

Another is that it's important to realize that programming paradigms are not mutually exclusive, and you can write OO-style code in most languages; it's just that some have baked-in support for it. But it's worth considering using a language that lets you move freely among styles. Then, the places in your code that really need to be OO (like my look-thru library) can evolve toward OO, but you don't have to commit immediately. I can't believe I'm actually arguing in favor of C++ right now, but I guess I am.
It hadn't even occurred to me that RAII was an OO-specific sort of construct. There's a Ruby way of doing it with lambdas, but calling foo.method() still feels safer than calling function(foo), since you can't even call the method at all if you don't have the object.
Still, I think the biggest win is code partitioning - you can think of each object as a little program with its own inputs, outputs, and state. You can test each object's implementation separately.
Also, I like polymorphism, but there are other ways of getting there.
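For reference, the Ruby pattern alluded to above is usually written with a block plus ensure rather than an explicit lambda; everything below (FakeProcess, with_attached) is invented purely for illustration:

```ruby
# Invented stand-in for a process you can attach to and detach from.
class FakeProcess
  attr_reader :attached
  def initialize
    @attached = false
  end
  def attach
    @attached = true
  end
  def detach
    @attached = false
  end
end

# Block-based resource management: attach before running the block,
# detach afterwards -- the ensure clause fires even if the block raises.
def with_attached(process)
  process.attach
  yield process
ensure
  process.detach
end

target = FakeProcess.new
with_attached(target) { |t| raise "not attached" unless t.attached }
puts target.attached  # detached again once the block exits
```

This gets you the same can't-forget-to-release guarantee as the destructor, though nothing stops a caller from invoking attach and detach by hand outside the helper.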
There's a Ruby way of doing it with lambdas, but calling foo.method() still feels safer than calling function(foo), since you can't even call the method at all if you don't have the object.
Exactly. There's always a way of doing it with lambdas, and function(foo) is what's really happening under the hood in some sense anyway; Python's self arguments are one place where that bleeds through a little bit.
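That bleed-through is easy to demonstrate: in Python, a bound-method call is sugar for calling the function on the class with the instance passed explicitly as the first argument (Foo and greet here are made up for the demo):

```python
class Foo:
    def __init__(self, name):
        self.name = name

    def greet(self):
        return f"hello, {self.name}"

f = Foo("world")

# The method-call syntax...
a = f.greet()

# ...is equivalent to calling the underlying function with the instance
# supplied explicitly as `self` -- the function(foo) lurking under the hood.
b = Foo.greet(f)

assert a == b == "hello, world"
```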