Interpreting LVar effects as resources
Lately I’ve been thinking about an analogy between properties that are and aren’t true of LVar operations, and the presence or absence of structural rules in a logic.
As I’ve written about before, LVars provide an interface of put and get operations for writing and reading shared memory locations deterministically. The trick to preserving determinism is that put always takes the least upper bound of the old value and the new value (with respect to a given partial order), while get performs a threshold read that allows only limited information about the state of the LVar to “leak” out.
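To make the put/get interface concrete, here is a minimal single-threaded sketch of an LVar over the lattice of natural numbers ordered by ≤, where least upper bound is max. The names newLVar, put, and getThreshold are illustrative, not the real API of the Haskell LVars library, and a real threshold read would block rather than report readiness:

```haskell
import Data.IORef

-- Illustrative LVar: state is an Int, ordered by <=, with lub = max.
newtype LVar = LVar (IORef Int)

newLVar :: IO LVar
newLVar = LVar <$> newIORef 0

-- put takes the least upper bound of the old value and the new value.
put :: LVar -> Int -> IO ()
put (LVar ref) v = modifyIORef' ref (max v)

-- A threshold read: succeeds only once the state has reached the
-- threshold, and reveals only the threshold itself, not the exact state.
getThreshold :: LVar -> Int -> IO (Maybe Int)
getThreshold (LVar ref) t = do
  s <- readIORef ref
  pure (if s >= t then Just t else Nothing)

main :: IO ()
main = do
  lv <- newLVar
  put lv 3
  put lv 2                -- lub with the old value: state stays 3
  r1 <- getThreshold lv 2
  r2 <- getThreshold lv 5
  print (r1, r2)          -- (Just 2,Nothing)
```

Note that the second put is not a destructive overwrite: since max 2 3 = 3, the state never moves back down the lattice.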
The determinism guarantee we get from LVars says that, if an otherwise pure program performs a given set of LVar effects – that is, puts – on each LVar, then the value the program evaluates to will be the same every time that program is run. To get the guarantee, we only have to perform the same set of effects on each run, not the same sequence. We don’t care what order the reads and writes happen in; that’s the whole point of LVars.
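The order-independence of a set of puts follows from least upper bound being commutative and associative. A tiny sketch, again using max over Ints as a stand-in lub, checks that every permutation of the same set of puts reaches the same final state:

```haskell
import Data.List (foldl', nub, permutations)

main :: IO ()
main = do
  -- The same multiset of puts, applied in every possible order.
  let puts = [3, 1, 4, 1, 5]
      runs = [foldl' max 0 p | p <- permutations puts]
  -- All 120 orderings reach the same final state.
  print (nub runs)   -- [5]
```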
However, we do care that whatever effects are to occur will all occur at least once. The same set of puts had better be performed on every run, in some order; if we don’t ensure that, all determinism bets are off.1
Do we care about ensuring that LVar effects occur only once? My claim (although I haven’t proven it) is that we don’t care about that! LVar operations are idempotent: running an effectful LVar operation more than once will have the same effect as running it once. In fact, we leverage this idempotence property both in the implementation of our Haskell LVars library, where it saves us synchronization effort, and in the determinism proof for the accompanying LVar calculus, where in several places we make use of the fact that the least upper bound of some LVar’s state p and itself is still p.
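The idempotence claim reduces to the lattice law that the least upper bound of p and p is p. In the max-over-Ints model used above (an illustrative stand-in, not the library's implementation), performing the same put twice leaves the state exactly where one put left it:

```haskell
-- An LVar state in this toy model is just an Int; put is lub (max).
type State = Int

putOnce :: Int -> State -> State
putOnce = max

main :: IO ()
main = do
  let once  = putOnce 4 0
      twice = putOnce 4 once   -- the duplicate effect
  -- p `max` p == p, so the repeated put changes nothing.
  print (once == twice)        -- True
```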
These three properties – that effects can occur in any order, that they need not occur only once, and that they must occur at least once – suggest an interesting correspondence to substructural logics for reasoning about resource management (of, say, memory resources):
- The property that effects can occur in any order corresponds to an exchange structural property, which says that the order in which we use resources is irrelevant.
- The property that effects need not occur only once corresponds to a contraction structural property, which says that if we have duplicate occurrences of a single resource, we can ignore the duplicates. (This interpretation seemed wrong to me at first, because if we’re interpreting effects as resources, it seems we’re saying that it’s okay to not repeat effects, rather than saying that it’s okay to repeat them. But the weirdness is resolved if we read the contraction rule from the bottom up, rather than the top down.)
- The property that effects must occur at least once corresponds to a lack of a weakening structural property, which would say that we could add extra, unneeded resources to a context without upsetting anything. (Again, if we read the weakening rule top-down, it looks like we’re saying that it’s not okay to add in effects, instead of saying that it’s not okay to leave out effects. Again, the weirdness is resolved if we read the rule bottom-up.)
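For reference, the three structural rules in their standard sequent-calculus form (with Γ, Δ ranging over contexts) are:

```latex
% Exchange: the order of resources in the context is irrelevant.
\frac{\Gamma, A, B, \Delta \vdash C}{\Gamma, B, A, \Delta \vdash C}
\;\textsc{exchange}
\qquad
% Contraction: read bottom-up, a resource may be duplicated.
\frac{\Gamma, A, A \vdash C}{\Gamma, A \vdash C}
\;\textsc{contraction}
\qquad
% Weakening (the rule we lack): read bottom-up, an unneeded
% resource may be dropped.
\frac{\Gamma \vdash C}{\Gamma, A \vdash C}
\;\textsc{weakening}
```

Reading contraction and weakening bottom-up, as suggested above, gives exactly the effect-centric reading: contraction licenses repeating an effect, and the absence of weakening forbids omitting one.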
This interpretation leaves us with something like strict logic, a substructural logic where we have exchange and contraction, but not weakening. It’s common to use a substructural logic for reasoning about concurrency, but usually, when we do so, it’s because we want to ensure that, say, a particular thread has exclusive access to a memory location. With LVars, though, we explicitly don’t care about enforcing exclusive access to memory – sharing memory is what LVars are all about! So it’s interesting that we can still reason about LVars with a resource interpretation, but one where the resources are the LVar operations themselves, rather than the memory that LVars use. That’s kind of weird! Can anyone point me to other systems where something like this happens?
Thanks to Carlo Angiuli, Zach Sparks, and Chris Martens for discussing aspects of this post with me.
1. An exception to this would be if we had “no-op” operations. For instance, if x ⊑ y in the ordering of LVar states, and we’re writing both x and y to the same LVar, then we ought to be able to eliminate the write of x. Conversely, if we’re only writing y, it should do no harm to add an additional write of x.
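The footnote’s no-op case is easy to check in the max-over-Ints model used earlier (an illustrative stand-in for an LVar lattice): when x is below y in the ordering, adding or dropping the write of x leaves the final state unchanged.

```haskell
main :: IO ()
main = do
  let x = 2 :: Int
      y = 5                       -- x <= y in this ordering
      withX    = max y (max x 0)  -- write x, then write y
      withoutX = max y 0          -- write y only
  -- The write of x is a no-op either way.
  print (withX == withoutX)       -- True
```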