Lindsey Kuper (lindseykuper) wrote,

A sixty-second talk about my research

(Update: I got the PLMW travel award; hooray! I also changed my talk slightly from the version appearing below, because, as they say, certain things have come to light. Stay tuned.)

I just applied for a scholarship to attend PLMW 2013 in Rome next January, which, if I'm accepted, will cover airfare, hotel, and registration fees for both PLMW itself and POPL for the rest of that week. It's an amazing deal for those of us in the US -- round-trip airfare alone would cost over $1000 for me -- and I hope all the students I know in the PL community here will apply!

As part of the PLMW festivities this time, they're running a one-minute madness event:

One minute madness is a fun event where you get to present your research in 1 minute. You can have 1 slide to help you. After one minute you must leave the floor (escorted if necessary) and the next presenter begins.

The idea of presenting one's research in one minute is, well, madness. But I came up with the following talk, which clocks in at fifty-eight seconds:

Hi, my name is Lindsey Kuper, and I'm a Ph.D. student at Indiana University, where I'm working with Ryan Newton on new models for deterministic parallel programming. We're excited about deterministic parallel models because they offer programmers the promise of freedom from subtle, hard-to-reproduce nondeterministic bugs that are the scourge of traditional parallel programming. We've observed that in existing deterministic parallel systems, from venerable ones like Kahn process networks to modern ones like Intel's Concurrent Collections framework, the determinism of the system is based on some notion of monotonicity. By taking monotonicity as our starting point, then, we can generalize existing models. In particular, we generalize existing single-assignment models to allow multiple assignments that are monotonically increasing with respect to a user-specified partial order. We maintain determinism by allowing only monotonic writes and "threshold" reads to and from shared data. If you're interested in learning more, you can find our draft paper and tech report by doing a web search for "lattice-based deterministic parallelism". Thanks!
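To make the "monotonic writes and threshold reads" idea a bit more concrete, here's a tiny sketch of how such a shared variable might behave. This is a hypothetical, simplified illustration, not our actual implementation or API: writes join the new value with the current one (so the value only moves up a user-specified lattice), and a read blocks until the value reaches a caller-supplied threshold, then returns the threshold itself rather than the current value, so the result can't depend on timing:

```python
import threading

class LVar:
    """Hypothetical sketch of a lattice variable with monotonic writes
    and threshold reads. `bottom` is the least element of the lattice,
    and `join` computes the least upper bound of two elements."""

    def __init__(self, bottom, join):
        self._value = bottom
        self._join = join
        self._cond = threading.Condition()

    def put(self, v):
        """Monotonic write: move the value up the lattice via join."""
        with self._cond:
            self._value = self._join(self._value, v)
            self._cond.notify_all()

    def get(self, threshold, leq):
        """Threshold read: block until threshold <= value in the
        lattice order `leq`, then return the threshold itself, so the
        answer is the same regardless of write interleaving."""
        with self._cond:
            while not leq(threshold, self._value):
                self._cond.wait()
            return threshold


# With the max-of-naturals lattice (bottom 0, join = max), writers can
# race to put 3 and 7 in either order; a read at threshold 5 always
# returns 5 once the value passes it.
counter = LVar(0, max)
counter.put(3)
counter.put(7)
print(counter.get(5, lambda a, b: a <= b))  # prints 5
```

The key determinism-preserving trick in this sketch is that `get` returns the threshold, not the raw current value: two runs with different schedules might observe different intermediate values, but every successful threshold read gives the same answer.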

What do I mean by "traditional parallel programming"? That's a good question, but what I think I mean is that traditional languages only offered programmers one set of abstractions for doing either parallelism (that is, making sequential programs go faster by spreading some parts of the work across multiple available processors) or concurrency (that is, a program-structuring technique that allows multiple threads of control, for times when you have multiple distinct tasks to accomplish). That single set of abstractions (threads and locks, or synchronized) was better suited for concurrency than it was for parallelism. If you wanted to do parallelism, you were forced to use a fairly unsuitable abstraction that allowed nondeterminism in places you didn't want it. A deterministic parallel model, then, is really just one that recognizes that parallelism calls for a different set of abstractions than concurrency does.

I don't find there to be a shortage of deterministic parallel models in the research community -- quite the opposite. Our research isn't just about Yet Another deterministic parallel model; it's about sussing out the essence of deterministic parallelism. Having said that, our monotonic-writes-and-threshold-reads model is a twist that might be especially well suited for certain kinds of problems. We'll see!

Tags: research
