My co-authors Aaron Turon, Neel Krishnaswami, Ryan Newton, and I are very happy to announce that our paper “Freeze After Writing: Quasi-Deterministic Parallel Programming with LVars” has been accepted for publication at POPL 2014!

In this paper, we investigate extending the LVars model with an operation that lets us make exact observations of the contents of an LVar. In the original LVars model, there are two fundamental things to do with an LVar: the put operation, which monotonically increases its contents, and the get operation, which tells us when the contents of the LVar have crossed a certain lower bound. By themselves, though, put and get are a pretty limited interface: they don’t offer us any way to find out the exact contents of an LVar. So, we added a new LVar operation called freeze that, in exchange for forbidding any future writes to the LVar, lets us read its exact contents.
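To make that interface concrete, here is a tiny toy model of a single LVar whose states are natural numbers ordered by ≤, so that the join of two writes is their maximum. It is only an illustrative sketch built on STM; the names and representation here are not the LVish library's API or implementation.

```haskell
-- Toy model of one LVar: a monotonically growing Int (join = max) plus a
-- "frozen" flag.  Illustrative only; not the LVish API or implementation.
import Control.Concurrent.STM

data LVar = LVar
  { state  :: TVar Int   -- current contents; only ever increases
  , frozen :: TVar Bool  -- set by freeze; later puts become errors
  }

newLVar :: IO LVar
newLVar = do
  s <- newTVarIO 0
  f <- newTVarIO False
  return (LVar s f)

-- put: merge a new value into the contents by taking the join (here, max).
-- Writing to a frozen LVar raises a run-time error.
put :: LVar -> Int -> IO ()
put lv n = atomically $ do
  f <- readTVar (frozen lv)
  if f
    then error "put after freeze"
    else modifyTVar' (state lv) (max n)

-- get: a threshold read.  Block until the contents reach at least the given
-- threshold, then return the threshold itself, not the exact contents.
get :: LVar -> Int -> IO Int
get lv thresh = atomically $ do
  n <- readTVar (state lv)
  if n >= thresh then return thresh else retry

-- freeze: forbid all future puts and return the exact current contents.
freeze :: LVar -> IO Int
freeze lv = atomically $ do
  writeTVar (frozen lv) True
  readTVar (state lv)
```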

When we add freeze to our model, we give up on full determinism, because if a put occurs after a freeze, we’ll get a run-time exception. However, we can ensure that such a run-time exception is the only way in which determinism is compromised. We call this determinism-modulo-exceptions property quasi-determinism and show that, in a language where put, get, and freeze are the only effects, programs are at worst quasi-deterministic. Moreover, if we can ensure that the freeze operation happens last in a given program’s series of LVar effects, then the program is guaranteed to behave deterministically – in which case, we can have both determinism and exact reads.
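Reusing the toy definitions from the sketch above (again, purely illustrative, not LVish), the contrast looks roughly like this:

```haskell
import Control.Concurrent (forkIO)

-- Freeze after all writes have happened: deterministic.
-- (In a real parallel program the runtime has to ensure that every
-- outstanding put has completed before the freeze; here the puts are
-- simply sequenced before it.)
freezeLast :: IO Int
freezeLast = do
  lv <- newLVar
  put lv 3
  put lv 4
  freeze lv            -- always returns 4

-- Freeze racing against a put: quasi-deterministic.  Depending on the
-- schedule, the freeze returns 0 or 3, or the forked put raises the
-- put-after-freeze error.
racy :: IO Int
racy = do
  lv <- newLVar
  _ <- forkIO (put lv 3)
  freeze lv

main :: IO ()
main = freezeLast >>= print   -- prints 4 on every run
```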

A draft of the paper is available now, and we’re working on preparing the camera-ready version. Comments and questions welcome!

Update (November 10, 2013): As of yesterday, we’ve turned in the camera-ready version of the paper, and the above link now points to it.

The LVish library

To go along with the announcement of the paper, we’re announcing the initial Hackage release of LVish, a Haskell library for deterministic and quasi-deterministic parallel programming with LVars [1]. Install it with cabal install lvish!

Most of the usual caveats about research code apply. Having said that, we want you – you! – to be able to use this library, and a good “Hello World” place to start is with the examples from this post. Try it out, file bugs, and let us know how it goes.
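As a rough sketch of what a first program might look like, here is a minimal example in the style of the paper's IVar examples. The operations it uses (new, put, and get from Data.LVar.IVar, and runPar and fork from Control.LVish) are assumptions about the released package's interface rather than a copy of it, so check the documentation if they don't line up exactly.

```haskell
-- A "Hello World"-style sketch.  The imports and operations below are
-- assumptions about the released lvish package; consult its documentation
-- for the exact interface.
import Control.LVish       -- runPar, fork (assumed exports)
import Data.LVar.IVar      -- new, put, get (assumed exports)

p = do
  v <- new                 -- an empty IVar, the simplest kind of LVar
  fork (put v (3 :: Int))  -- fill it from a child computation
  get v                    -- block until the IVar has been filled

main :: IO ()
main = print (runPar p)    -- prints 3, and does so deterministically
```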

Could you implement LVars?

When I was at ICFP last week, my friend Josh Dunfield presented the work that he and Neel did recently on a new algorithm for bidirectional typechecking. They claimed in the title of their paper that their approach was “easy”, and the claim was corroborated by Olle Fredriksson implementing the algorithm in one day, working from the paper alone. It was awesome to hear about this, and it prompted me to think about how hard it would be for someone not-us to implement LVars in their language of choice, given only what we’ve provided so far.

What does it mean to “implement LVars”? If you just want to implement some data structures that have LVar semantics, then that’s one thing. But if you want to make things go fast, merely having lattice-based data structures won’t magically cause that to happen (as will be quite obvious if you try playing with the LVish Redex model!). Rather, lattice-based data structures provide (what we hope is a novel kind of) opportunity for a parallel scheduler to safely and effectively do its thing. The LVish library provides both lattice-based data structures and a parallel scheduler [2], and they work together to give you a way to write (quasi-)deterministic parallel programs. So, “implementing LVars” could mean implementing either or both of those things, and I’d love to see someone else’s take on either or both of them.
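As one illustration of how the two pieces can meet, here is a toy sketch in which a threshold read is stored as a continuation on the LVar instead of blocking a thread, so that a later put that crosses the threshold can hand the continuation back to a scheduler. The names (CounterLVar, putCounter, getAtLeast) are made up for this sketch, it is not thread-safe, and it is not how LVish is actually implemented.

```haskell
-- Toy sketch: a "max counter" LVar whose threshold reads are continuations
-- parked on the LVar rather than blocked threads.  A put that crosses a
-- waiter's threshold runs its continuation (in a real library, the
-- work-stealing scheduler would take over here).  Illustrative only.
import Data.IORef
import Data.List (partition)

data CounterLVar = CounterLVar
  { contents :: IORef Int                    -- monotonically growing (join = max)
  , waiters  :: IORef [(Int, Int -> IO ())]  -- (threshold, continuation) pairs
  }

newCounter :: IO CounterLVar
newCounter = do
  c <- newIORef 0
  w <- newIORef []
  return (CounterLVar c w)

-- putCounter: join in a new value, then resume every waiter whose threshold
-- has now been reached.
putCounter :: CounterLVar -> Int -> IO ()
putCounter lv n = do
  modifyIORef' (contents lv) (max n)
  cur <- readIORef (contents lv)
  ws  <- readIORef (waiters lv)
  let (ready, notYet) = partition (\(t, _) -> cur >= t) ws
  writeIORef (waiters lv) notYet
  mapM_ (\(t, k) -> k t) ready

-- getAtLeast: if the threshold is already reached, run the continuation
-- immediately; otherwise park it on the LVar instead of blocking a thread.
getAtLeast :: CounterLVar -> Int -> (Int -> IO ()) -> IO ()
getAtLeast lv thresh k = do
  cur <- readIORef (contents lv)
  if cur >= thresh
    then k thresh
    else modifyIORef' (waiters lv) ((thresh, k) :)

main :: IO ()
main = do
  lv <- newCounter
  getAtLeast lv 4 (\t -> putStrLn ("reached " ++ show t))
  putCounter lv 3     -- threshold not reached yet; continuation stays parked
  putCounter lv 5     -- crosses the threshold; prints "reached 4"
```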

  1. At the moment, there’s a problem with the generated Haddock documentation on Hackage. We’re trying to figure out what’s wrong, but in the meantime, here’s a local mirror of the docs. 

  2. The parallel work-stealing scheduler in LVish is directly inspired by the one in the monad-par package – if you want to implement such a thing yourself, monad-par might be a good place to begin looking. 
