What’s the difference between LVars and CRDTs?
Greetings from Κεφαλονιά!
This week, I’ve been on a Greek island, attending a WG 2.8 meeting as an observer and having lots of interesting conversations. While here, I also gave a talk that covers my most recent thinking on LVars and CRDTs: their similarities, their differences, and what we stand to gain by combining them.
I’ve discussed LVars and CRDTs individually pretty often here, but in this post I want to focus on one of the questions I tried to answer in this talk, which is a question people seem to ask a lot: “What’s the difference?” It turns out that LVars and CRDTs are similar, but not quite the same. The most obvious difference, of course, is that CRDTs are intended to be distributed and replicated, while LVars are for shared memory. (Sorry.1) And then there are two other important differences – or are there?
Difference #1: threshold reads
Aside from shared memory versus replication, the most important difference between LVars and CRDTs has to do with how their contents can be observed. When you query a CRDT replica at different times, you may see different values, depending on which updates have reached it so far. In fact, that’s the whole point of high availability – you want to be able to read from a replica right now, whether or not it agrees with the others. So, queries of a CRDT are nondeterministic.
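To make that concrete, here’s a tiny illustration in Python – not any particular CRDT library, just plain sets standing in for the states of two grow-only-set replicas:

```python
# Two replicas of a grow-only set (G-Set).  An add("x") update has reached
# replica A but not replica B yet.
replica_a = {"x"}
replica_b = set()

def query(replica):
    # Immediate and highly available, but the answer depends on which
    # updates have arrived at this replica: nondeterministic.
    return frozenset(replica)

print(query(replica_a))  # frozenset({'x'})
print(query(replica_b))  # frozenset() -- the replicas disagree right now

replica_b |= replica_a   # merging (set union = join) brings them back into agreement
```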
When you read from an LVar, on the other hand, it’s a blocking threshold read, and this is (half) the reason why LVars give us determinism. There is a way to read the contents of an LVar immediately and exactly, but you can only do that after it’s in a “frozen” state and can no longer change – and programs that freeze and read in this way are downgraded from determinism to quasi-determinism, or determinism modulo errors, but not all the way to full nondeterminism.
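Here’s a rough sketch in Python of what I mean – my own toy code, not the LVish API, and simplified to single-element thresholds:

```python
import threading

class SetLVar:
    """Toy LVar holding a grow-only set, with blocking threshold reads."""
    def __init__(self):
        self.state = set()
        self.frozen = False
        self.cond = threading.Condition()

    def put(self, x):
        with self.cond:
            if self.frozen and x not in self.state:
                # The quasi-determinism error case: a write that would
                # change a frozen LVar raises an error.
                raise RuntimeError("write after freeze")
            self.state.add(x)
            self.cond.notify_all()

    def get_threshold(self, x):
        # Block until x is present, then return just x -- never the whole
        # set -- so every thread doing this read sees the same answer,
        # regardless of scheduling.
        with self.cond:
            self.cond.wait_for(lambda: x in self.state)
            return x

    def freeze(self):
        # Reading immediately and exactly is only allowed once the LVar is
        # frozen and can no longer change.
        with self.cond:
            self.frozen = True
            return frozenset(self.state)

lv = SetLVar()
threading.Thread(target=lv.put, args=("done",)).start()
print(lv.get_threshold("done"))   # always "done"; blocks until the write lands
```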
Could CRDTs have threshold reads, though? I’d certainly like them to. In our draft paper “Deterministic Threshold Queries of Distributed Data Structures”, Ryan Newton and I investigate adding LVar-style threshold reads to CRDTs. What I find really appealing about the idea is that a blocking threshold read doesn’t have to wait for the value it’s blocking on to appear at all replicas. As soon as that value shows up at one replica, that’s good enough! You don’t need to wait for a quorum, because you know that any update will make it to the other replicas too, assuming eventual delivery. (Eventual delivery is admittedly a big assumption, but traditional CRDTs have to assume it anyway.)
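Again as a toy Python sketch – the paper’s actual formulation is more general than single-element thresholds – the point is that the threshold query only ever looks at the local replica:

```python
import threading

class GSetReplica:
    """Toy grow-only-set replica with an LVar-style threshold query."""
    def __init__(self):
        self.elems = set()
        self.cond = threading.Condition()

    def add(self, x):                   # local update: inflationary, no coordination
        with self.cond:
            self.elems.add(x)
            self.cond.notify_all()

    def merge(self, other_elems):       # join with another replica's state (set union)
        with self.cond:
            self.elems |= other_elems
            self.cond.notify_all()

    def threshold_query(self, x):
        # Block until x is visible at *this* replica, then answer.  No quorum:
        # under eventual delivery, every replica running this same query
        # eventually unblocks with the same answer.
        with self.cond:
            self.cond.wait_for(lambda: x in self.elems)
            return x
```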
Personally, I think this idea that you can make deterministic queries of distributed data structures without having to wait for replicas to agree is quite powerful. What’s missing is a really interesting application for the idea. That’s why the “Deterministic Threshold Queries” paper has been rejected a couple of times – the lack of an implementation or any interesting applications makes it rather boring. I’d like to fix that. So, hey, if anyone has a good idea for an application for threshold queries and wants to try actually implementing what the paper proposes, I’m happy to provide guidance.
Difference #2: every write has to compute a join lol j/k nope
With CRDTs, the only time you have to compute a least upper bound, or join, is when you merge a replica’s state with that of another replica. Regular old updates to a replica, though, just have to be inflationary – that is, they have to move that replica’s state “upward” with respect to the lattice in question or leave it the same. They don’t need to do a join, and in fact, it’s typical for them not to.
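For instance, here’s a toy state-based grow-only counter (G-Counter) in Python – hypothetical names, not taken from any CRDT library:

```python
class GCounterReplica:
    """State-based G-Counter: one slot per replica, value = sum of slots."""
    def __init__(self, replica_id, n_replicas):
        self.i = replica_id
        self.counts = [0] * n_replicas

    def increment(self):
        # Inflationary local update: moves this replica's state up the
        # lattice, but computes no join.
        self.counts[self.i] += 1

    def merge(self, other):
        # The join (pointwise max) only happens here, when replica states
        # are combined.
        self.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]

    def value(self):
        return sum(self.counts)

a, b = GCounterReplica(0, 2), GCounterReplica(1, 2)
a.increment(); a.increment(); b.increment()   # three inflationary updates, zero joins
a.merge(b)                                    # one join, at merge time
print(a.value())                              # 3
```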
With LVars, on the other hand, it was originally the case that when you wrote to an LVar, it was updated to the join of the state that was already there and the new state you were writing. That is, every write computed a join. But what we figured out – largely as a result of studying CRDTs, in fact – was that there’s no reason why LVars should be limited to least-upper-bound writes. Arbitrary writes are fine, as long as they’re inflationary and commute with one another; that’s enough to ensure determinism.
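As a rough Python sketch (my own toy code, not LVish) of the two flavors of write, over the lattice of natural numbers ordered by ≤, where join is max:

```python
import threading

class NatLVar:
    """Toy LVar over the naturals ordered by <=, so join(a, b) = max(a, b)."""
    def __init__(self):
        self.state = 0
        self.lock = threading.Lock()

    def put(self, x):
        # Original LVar write: new state = join(current state, x).
        with self.lock:
            self.state = max(self.state, x)

    def update(self, f):
        # Generalized write: f must be inflationary (f(s) >= s) and must
        # commute with every other update performed on this LVar.
        with self.lock:
            self.state = f(self.state)
```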
In fact, in order to implement an incrementable counter, we need the ability to do writes that are inflationary and commutative, but not idempotent. Incrementable counters are so useful that we’ve had them in the LVish library for some time. In our PLDI paper, we hand-wavily argued that these non-idempotent updates don’t pose any threat to determinism – a claim that made me quite nervous at the time. But we’ve now actually proved that this is the case! (The details are in my dissertation.) So we can no longer say that LVars are distinguished from CRDTs by the fact that LVars do a join with every write, because that doesn’t have to be the case.
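Here’s the counter as a self-contained toy Python sketch: increments are inflationary and commute with each other, so the result is deterministic, even though incrementing twice is not the same as incrementing once.

```python
import threading

class CounterLVar:
    """Toy incrementable counter in the LVar style."""
    def __init__(self):
        self.state = 0
        self.lock = threading.Lock()

    def increment(self):
        # Inflationary (the count only goes up) and commutative (order of
        # increments doesn't matter), but NOT idempotent.
        with self.lock:
            self.state += 1

c = CounterLVar()
threads = [threading.Thread(target=c.increment) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(c.state)   # always 2, no matter how the two threads interleave
```

By contrast, if each thread had performed a join-based write of the same value, the writes would have been idempotent and the counter would have ended up at that value rather than accumulating.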
1. At the meeting here a couple days ago, Phil Wadler, commenting on the state of research on models for concurrent programming, said, “I would like people who are using a shared-memory model to just be a little embarrassed about it!”