Introducing Incremental Consistency Guarantees

This post gives an informal sneak peek at Incremental Consistency Guarantees (ICG), our work on improving the way applications interact with replicated storage systems, which we’ll be presenting at the upcoming OSDI’16 conference. For the full-fledged version of the paper, we have a preprint over at arXiv.

So what does it mean, informally, for consistency guarantees to be incremental?

Let me start, in fact, with what it does not mean.

Traditionally, applications interact with replicated storage systems using a single-shot approach. By this I mean that the application fires an RPC to read some data under a predefined consistency model, then it waits for the response. The response should—hopefully—satisfy the guarantees of the consistency model which the application specified.

Cassandra, Amazon SimpleDB 1, Google AppEngine 2 and many others work this way. They allow applications to “tag” each individual operation with the desired consistency model.
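To make the single-shot interaction concrete, here is a minimal sketch in Python. The ReplicatedStore client and the Consistency enum are hypothetical stand-ins, not the API of any particular system; the point is only that the consistency level is fixed per operation, and the caller blocks until the single reply arrives.

from enum import Enum


class Consistency(Enum):
    EVENTUAL = 1
    CAUSAL = 2
    SEQUENTIAL = 3
    LINEARIZABLE = 4


class ReplicatedStore:
    """Toy stand-in for a client of a replicated storage system."""

    def __init__(self):
        self._data = {"cart:alice": ["book"]}

    def read(self, key, consistency):
        # A real client would contact more replicas (and wait longer)
        # for stronger consistency levels; the toy version just returns
        # whatever the local copy holds.
        return self._data.get(key)


store = ReplicatedStore()
# The application commits to one model, a priori: strong, but slower.
cart = store.read("cart:alice", consistency=Consistency.LINEARIZABLE)
print(cart)

In this style of interface, the choice of model is baked into the call site, which is exactly the limitation discussed next.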

The single-shot approach can be problematic. Consider, for instance, that any protocol implementing a consistency model lies somewhere on the consistency/performance trade-off. This is what the CAP theorem and the expanded PACELC formulation are all about. 3

The following figure illustrates this point. It shows four consistency models (eventual, causal, sequential, and linearizable), and their corresponding positions on the consistency/performance trade-off spectrum.


CAP & PACELC. No consistency model is optimal.

Essentially, there can be no such thing as an optimal protocol for replicating data, one which both achieves consistency (informally, correctness of data) and exhibits good performance (high availability, low latency).

As a protocol leans more towards better guarantees (a higher position on the y-axis), it will tend to provide relatively worse performance (a higher position on the x-axis). Note that most consistency protocols inhabit the area along the secondary diagonal in the above plot. There might be some protocols which drift towards the lower-right corner, which is the lurking ground of the abstract “worst protocol”—worst because it’s both bad in performance and weak in correctness guarantees, an awful combination.

What this means, in the context of the single-shot approach, is that the application’s RPC will be inherently limited, to some degree, either from the consistency standpoint or from the performance standpoint.

The best we can do in such a case is to look at the application, think hard about its specific consistency requirements, and select the weakest model which satisfies them. Weakest means lowest on the y-axis. We need the weakest model because we need its performance benefits. Anything stronger than this bare minimum would be overkill.

This whole process, as I described it, sounds hand-wavy and error-prone. That’s exactly how it is in practice. Though, to be fair, it works decently—decently!—most of the time.

Some applications have it easy. This is the case when specific application semantics allow us to employ a weak model. Such a model lies somewhere towards the lower-left section of the above plot, and it exhibits relatively good performance. The downside with such a model is that sooner or later it will expose inconsistencies to the application. Luckily, this kind of application has the overriding goal of performance, so it is fine to sacrifice consistency to some degree. More bluntly: users care more about getting fast, predictable performance, and they tolerate occasional inconsistencies well. So exposing incorrect data is not harmful. 4

Most applications are not so lucky. This is because it is very difficult to find the appropriate, minimal consistency model for them. It’s like trying to fit a square peg in a round hole. Even when a consistency model does fit, it’s still not a good fit, because it’s too strong and sluggish (the model belongs to the upper-right quadrant in the above figure). Typically, these applications do not have the overriding goal of performance. Or rather: they cannot afford even residual inconsistencies to creep up at the application layer.

In other words, these applications would ideally like to satisfy both consistency and performance. Consequently, such applications cannot employ—or should not employ—weaker guarantees, since doing so puts them at risk of endangering the safety (correctness) of the application in irrevocable ways. 5

The single-shot approach, in this case, forces the application to choose sides and to fall on the lower-performance, stronger-guarantees side of the trade-off.

Now we can wrap up. This is what ICG is not: it’s not the single-shot approach.

ICG is a simple extension to the single-shot approach, building on the naive idea that more is better.

In the plainest example, ICG works as follows. The application still fires an RPC to read data from the storage. But now the data arrives in multiple shots—or views—at the application. Each view has increasingly better consistency guarantees, but higher latency. The application receives these incremental replies and can use them as soon as the storage system provides them. The replies with weak guarantees will be blazingly fast; those with stronger guarantees take relatively longer.

Incremental Consistency Guarantees (ICG)
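To make this more concrete, here is a rough sketch of what an incremental read could look like, reusing the toy Consistency enum and ReplicatedStore class from the earlier snippet. The read_incremental method and its one-callback-per-view interface are my own illustrative assumptions, not the actual interface from the paper.

import threading
import time


class IncrementalStore(ReplicatedStore):
    def read_incremental(self, key, on_view, levels=None):
        """Deliver one read under increasingly strong consistency models.

        `on_view(level, value, final)` is called once per view, ordered
        from weakest (fastest) to strongest (slowest).
        """
        levels = levels or [Consistency.EVENTUAL, Consistency.CAUSAL,
                            Consistency.SEQUENTIAL, Consistency.LINEARIZABLE]

        def worker():
            for i, level in enumerate(levels):
                # Simulate the extra coordination cost of stronger models.
                time.sleep(0.05 * i)
                value = self.read(key, consistency=level)
                on_view(level, value, i == len(levels) - 1)

        threading.Thread(target=worker, daemon=True).start()


store = IncrementalStore()
store.read_incremental(
    "cart:alice",
    lambda level, value, final:
        print(f"{level.name}: {value}" + (" (final)" if final else "")))
time.sleep(0.5)  # give the demo thread time to deliver all the views

In this toy run the eventual view prints almost immediately, and the linearizable one arrives last; a real system would differ in the details, but the shape of the interaction is the same.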

You might ask now: given that ICG exposes multiple consistency views to the application, how exactly does this improve the application’s interaction with the replicated storage system? Good question.

Before I go on, however, I want to point out that an application would most likely use two views, not four as the figure above shows. Two are sufficient to get the benefits, and this avoids overloading the application with too many models. Beyond two, any additional view brings only marginal benefits, except for some classes of applications which I’ll ignore for the moment. For instance, a combo of eventual consistency plus linearizability might be ideal for most applications. Now let’s go back to the ramble on the improvement argument.

When applications use ICG instead of the single-shot approach, this change in the interaction lifts a significant burden off the shoulders of the application: the burden of choosing—and settling on—a single consistency model out of a dazzlingly wide range of options. 6

In the single-shot paradigm, each operation in the application is bound, a priori, to a specific consistency model. This static choice, as I mentioned above, even if it works well most of the time, is bound to rear its ugly head: by virtue of the CAP theorem, it will either expose inconsistent data or manifest high latencies (to the point of making the application unavailable).

In the incremental approach, the application builds on the individual benefits of each consistency model offered by the storage stack, one at a time. The advantages of one model will serve to hide, or complement, the disadvantages of another model.

For instance, if inconsistent data arrives at the application, then the application becomes aware of this inconsistency, being able to remediate and “save face”. Similarly, if high latencies ensue as a consequence of network issues, the application has the choice to maintain availability and enforce latency SLAs by resorting to the weakly-consistent data. This data is possibly incorrect—stale or prone to being overwritten or lost; but hey, at least the system is available and responsive. In fact, in many scenarios, an inconsistent response is better than no response.
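Here is one way the latency-SLA fallback could look on top of the toy IncrementalStore from above, with the assumed two-view combo of eventual plus linearizable: wait briefly for the strong view, and fall back to the weak one if the latency budget expires.

import queue


def read_with_sla(store, key, sla_seconds=0.1):
    views = queue.Queue()
    store.read_incremental(
        key,
        lambda level, value, final: views.put((level, value, final)),
        levels=[Consistency.EVENTUAL, Consistency.LINEARIZABLE])

    level, value, final = views.get()    # the weak view, arriving fast
    if final:
        return value                     # the strong view somehow came first
    try:
        _, strong_value, _ = views.get(timeout=sla_seconds)
        return strong_value              # correct, and within the budget
    except queue.Empty:
        return value                     # possibly stale, but on time


print(read_with_sla(store, "cart:alice"))

The SLA value of 0.1 seconds is, of course, just a placeholder for whatever latency budget the application actually has.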

In practice, what we advocate through ICG is that applications should no longer regard consistency models as mutually exclusive. Rather, consistency models are perfectly complementary. We can exploit the low-latency and responsiveness of weakly-consistent protocols to ensure performance, and we can subsequently ensure correctness by relying on the guarantees of strongly-consistent protocols: we use these two in an incremental, complementary approach.

To conclude, our goal with incremental consistency guarantees is not to provide a quantitative improvement over other protocols or techniques. We’d like to see a qualitative amelioration in programming with replicated objects. This is not to say quantitative improvements are unworkable in the incremental consistency model. Far from it: the application still retains the performance benefits of using weaker models. Moreover, ICG enables speculation and other neat tricks, which can lead to significant latency improvements (compared to using a single, stronger model), as we show in the paper. This is also a topic which I might discuss in a future post.
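For a quick flavor of what speculation could look like on top of the same toy two-view read (this is a rough sketch under my own assumptions, not the mechanism from the paper): start the expensive work on the weak view right away, and throw it away only if the strong view turns out to disagree.

speculative = {}


def expensive_computation(cart):
    # Stand-in for real work (rendering, pricing, recommendations, ...).
    return sorted(cart or [])


def commit(result):
    print("committed:", result)


def speculate(level, value, final):
    if not final:
        # Weak view: start working immediately, remember the input we used.
        speculative["input"] = value
        speculative["work"] = expensive_computation(value)
    elif value == speculative.get("input"):
        commit(speculative["work"])           # speculation paid off
    else:
        commit(expensive_computation(value))  # redo with the correct input


store.read_incremental("cart:alice", speculate,
                       levels=[Consistency.EVENTUAL,
                               Consistency.LINEARIZABLE])
time.sleep(0.3)  # let both demo views arrive before the script exits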

Notes:

  1. ref: Consistency in Amazon SimpleDB
  2. ref: Data Consistency in Datastore Queries
  3. refs: CAP twelve years later and CAP is only part of the story. I also mentioned this in a previous post, A Fistful of Ideas.
  4. I don’t want to go into too many details here, but suffice it to say that the occasional hiccups, when the consistency protocol exposes inconsistencies and the application then leaks these anomalies to its users, are pretty much harmless. Amazon’s shopping cart is perhaps the most notable example. The problem there manifests as follows: it can happen that a user removes items from the shopping cart, but then later those items re-appear in the cart. No biggie!
  5. Going back to the comment on Amazon’s cart: if an item re-appears in the cart after being deleted, the user can simply remove it again. It’s an annoying occurrence, but not a critically important misbehaviour. On the other hand, consider what happens if the user were unable to add something to the cart due to an inconsistency; then it’s not just annoying, but it could drive off customers and impact revenue negatively. You need to be able to always add items. This is an example of an operation that should execute both fast and correctly.
  6. ref: Cassandra 2.0 consistency levels. I don’t mean to be unfair towards Cassandra: I think they’re doing a terrific job and few systems offer such a solid and thorough foundation in terms of storage system properties.
