What Every Developer Should Know About CouchDB
A little more than five years ago, I started working with CouchDB. I had recently joined Dimagi, and before I arrived on the scene, Dimagi had found some success using CouchDB in one of our largest projects at the time, an offline EMR in Zambia. We chose CouchDB because of its replication capabilities, which let us keep data from several locations reliably in sync over an unreliable network. When I started working on what became our flagship product, CommCare, the concept was to reuse a lot of the code we had already written. At this point, the experience we had with CouchDB was pretty positive: it didn’t enforce schemas, so it was easy to change schemas in our code without worrying about a data migration, and overall it was relatively intuitive to use. As a result of these positive experiences and without much additional ceremony, CouchDB became our de facto default database.
It took us some time to realize how much of a bottleneck CouchDB was to become for us. By the time it became clear, we had no choice but to double down and expand our cluster. At this point we entered into a contract with Cloudant, who provided the enterprise-level support and scalability for CouchDB that became necessary. During this period, we could tell we were abusing CouchDB, but there weren’t a lot of good resources out there on tips for using CouchDB as an application database — probably because using CouchDB as an application database is not particularly common. So everything we learned, we had to learn ourselves or in collaboration with Cloudant. It’s been an unintentional path, but we’ve picked up a thing or two worth sharing.
For the sake of this blog post, I’ll assume you know what CouchDB is, and have used it for a bit. Maybe you’re chugging along without any problems, or maybe you’ve started to scale and reached a point in which your current setup has become unmanageably slow. Either way, this blog post is for you. It’s what I wish I’d known five years ago when I first started using CouchDB.
CouchDB has few enough features that you can cover most of them in a short blog post. Since most of what I’ve learned falls into the usage of a feature, I’ll start with a breakdown of what I consider to be CouchDB’s main features.
The main division in CouchDB within a single instance is the database:
| Concept | Example URI | How many? |
|---|---|---|
| Database | `/$db` | Many per instance |
Databases further contain the following concrete objects:
| Concept | Example URI | How many? |
|---|---|---|
| Document | `/$db/$doc_id` | Many per database |
| Attachment | `/$db/$doc_id/$attachment` | Many per document |
| Design doc | `/$db/_design/$ddoc` | Many per database |
| View | `/$db/_design/$ddoc/_view/$view` | Many per design doc |
| Changes feed | `/$db/_changes` | One per database |
- Document revisions and write conflicts
- Replication, which I will not cover here
There are some other less common features, but in terms of what I use, that’s pretty much it. These are the concepts I’ll be covering in this post, with a section per concept that provides a quick overview of the feature as well as the most important performance and usage “gotchas” to keep in mind. And while much of this blog post is dedicated to things that can go poorly, one thing I’ve always loved about CouchDB is that it stakes out a small space and does a few things well.
Each document is a JSON blob (a JSON “object” to be precise). It can theoretically be infinitely large, and infinitely nested, but keep in mind:
- Large documents degrade performance
Documents can be created, updated, and deleted, but keep in mind:
- Each create, edit, and delete creates an entry in the changes feed that must be processed by all views
- Write conflicts degrade performance
Each document can have any number of attachments, which are conceptually “files”: they have (binary) content, a file name, and a content type. Theoretically, you can have as many as you want, and they can be as large as you want, but keep in mind:
- Large attachments degrade performance
- Adding, updating, or removing an attachment requires an edit to the document, which must then be processed by all views
- Attachments are an inefficient feature, and an external object store should almost always be used instead
As a further simple dollar cost consideration:
- Since Cloudant is not trying to serve as a block storage provider, it should not be held to the same standards in terms of cost effectiveness. 1 GB of storage on a Cloudant cluster costs many times more than 1 GB of storage on any major VM host (Rackspace, Softlayer, AWS, etc), so it is an expensive choice for file storage. On a simple per GB storage basis we found Cloudant to be 10x as expensive per GB as block storage on Rackspace (something like $1.2/GB/mo vs. $0.12/GB/mo). CouchDB/Cloudant does a lot of stuff for you, but if all you’re interested in is block storage you should use a cheaper alternative.
The Map-Reduce view is CouchDB’s primary feature. For each view, CouchDB maintains a b-tree that can be queried by key or by key range. In addition to this sort key, each node in the b-tree is associated with:
- A value
- The document that emitted it
Views are organized into design docs. Theoretically, you can have as many design docs as you want in a database, and as many views as you want in a single design doc. Theoretically, each view can emit arbitrarily many b-tree nodes per document, and your map/reduce code can be arbitrarily complex. But keep in mind:
- Having many views degrades performance, because each view must be run on every document change
- All views in the same design doc are indexed together; changing, adding, or removing any view requires all of them to be reindexed
- Having many emits per document in a view can degrade performance (though it is still more performant than putting each emit in its own view)
- Complex map and reduce code degrades performance
- Emitting values other than `null` degrades performance
- Using reduce code other than the `_count`, `_sum`, and `_stats` built-ins degrades performance
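To make the guidelines above concrete, here is a minimal sketch of a design doc that follows them: a single view, emitting `null` values, with a built-in reduce. The database and field names (`users`, `domain`, `username`) are hypothetical, not from the original post.

```python
# A design doc is just a JSON document whose _id starts with "_design/".
# This one holds a single view that indexes users by (domain, username),
# emits null (query with include_docs=true instead of copying fields into
# the index), and uses the built-in _count reduce rather than custom JS.
users_by_domain_ddoc = {
    "_id": "_design/users_by_domain",
    "language": "javascript",
    "views": {
        "users_by_domain": {
            "map": (
                "function (doc) {"
                "  if (doc.doc_type === 'User') {"
                "    emit([doc.domain, doc.username], null);"
                "  }"
                "}"
            ),
            # Built-in reduce; custom JavaScript reduces are much slower.
            "reduce": "_count",
        }
    },
}
```

You would `PUT` this to `/$db/_design/users_by_domain` and query it at `/$db/_design/users_by_domain/_view/users_by_domain?reduce=false&include_docs=true`.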
As a side note, CouchDB and Cloudant differ on exactly when views are updated:
- CouchDB updates views lazily, that is, when they are queried. This can lead to long wait times for infrequently accessed views.
- Cloudant updates views asynchronously in the background. This means that views that are no longer being accessed are still consuming system resources.
Every write, edit, or delete on a document (including to design docs) is logged by CouchDB and can be accessed through the changes feed. Each change is associated with a sequence id (abbreviated `seq`), which can be used to query all changes from that point on. Full document bodies can optionally be included with each change stub. The changes feed is useful for:
- Syncing data to another database for reporting or analytics
- Asynchronously performing any action in your application layer based on changes to your data
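A changes-feed consumer can be sketched as a small loop. This is an assumption-laden sketch, not a real client API: `fetch_page(since)` stands in for a `GET /$db/_changes?since=<seq>&limit=<n>` that returns the parsed JSON, and `process` is whatever your application does with each change.

```python
def follow_changes(fetch_page, process, since=0):
    """Drain the changes feed starting from `since`, handing each change
    to `process`, and return the last seq seen (persist it so the next
    run can resume from there).

    fetch_page(since) is a hypothetical wrapper around
    GET /$db/_changes?since=<since>&limit=<n> returning a dict like
    {"results": [...], "last_seq": ...}.
    """
    while True:
        page = fetch_page(since)
        for change in page["results"]:
            process(change)  # e.g. sync the doc to a reporting database
        if page["last_seq"] == since:  # no new changes; we are caught up
            return since
        since = page["last_seq"]
```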
CouchDB and Cloudant differ both in the format of the `seq` and in the guarantees they make about ordering:
- CouchDB uses integer `seq`s, and changes will always come back in the same order
- Cloudant, because of its cluster model, uses (quite long) string ids that begin with (decimal representations of) integers, but no specific ordering is guaranteed, though very roughly the integer prefix goes from small to large
In both implementations:
- The feed may contain only the last relevant change to a document
- You can ask for all changes since a particular `seq` and are guaranteed to get (at least) everything from that point on
On Cloudant, you can also get a lot more than you asked for in certain failure cases, such as when a node restarts:
- A `seq` may not be recognized, in which case Cloudant will respond with the changes feed from the beginning of time
- If you are daisy-chaining calls to the changes feed (using the last `seq` of each response to request the next one), this can result in silently reprocessing all changes from the beginning of time. This has happened to us often enough that we've started calling it a "rewind," and we've built a little tooling around trying to detect the situation and abort before we go down that path.
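The "rewind" detection mentioned above can be sketched with a simple heuristic based on the integer prefix of Cloudant seqs. This is not Dimagi's actual tooling, just an illustration; the `tolerance` knob is made up.

```python
def seq_prefix(seq):
    """Extract the integer prefix of a seq. CouchDB seqs are plain
    integers; Cloudant seqs are long strings like "1234-g1AAAA..."."""
    if isinstance(seq, int):
        return seq
    return int(str(seq).split("-", 1)[0])

def looks_like_rewind(last_seq, new_seq, tolerance=1000):
    """Heuristic: Cloudant's integer prefix only grows *roughly*
    monotonically, so a new seq far below the last one we processed
    suggests the feed has been rewound to the beginning of time.
    `tolerance` absorbs normal out-of-order wobble."""
    return seq_prefix(new_seq) < seq_prefix(last_seq) - tolerance
```

A consumer would check `looks_like_rewind(checkpoint, page_seq)` before processing each batch and abort (or alert) instead of silently reprocessing everything.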
CouchDB theoretically provides a way to filter the changes feed based on an arbitrary piece of code (for example, only showing changes to a certain type of document), but:
- In practice (at least on a taxed Cloudant cluster) this has proven to be extremely slow and inefficient
- Consider including the full document body of all changes and filtering in your application (as inefficient as that sounds)
In addition to an id (`_id`), each document also has a magic `_rev` property, which is automatically set by the system. To update a document, the revision of the JSON you post must match the current revision; if it doesn't, the write is rejected and you get back an HTTP 409 with the message "Document update conflict." If your application is built to handle this, it might then do a fresh fetch of the document, apply its change again, and save again, perhaps giving up after a certain number of attempts.
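The retry loop described above can be sketched as follows. The three callables are injected stand-ins for whatever CouchDB client you use: `fetch()` returns the current document (including `_rev`), `save(doc)` raises on an HTTP 409, and `apply_change(doc)` mutates the document in place.

```python
class WriteConflict(Exception):
    """Stand-in for your client's HTTP 409 "Document update conflict"."""

def save_with_retry(fetch, save, apply_change, max_attempts=3):
    """Optimistic-concurrency loop: fetch the current doc, apply the
    change, and try to save; on a 409, re-fetch and try again against
    the fresh _rev, giving up after max_attempts."""
    for _ in range(max_attempts):
        doc = fetch()
        apply_change(doc)
        try:
            return save(doc)
        except WriteConflict:
            continue  # someone else won the race; retry on a fresh copy
    raise WriteConflict("gave up after %d attempts" % max_attempts)
```

Note that `apply_change` must be safe to run more than once, since it is re-applied to each freshly fetched copy.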
Things become more complicated if:
- You're using "master-master"-style replication, in which more than one node accepts writes
- You're using Cloudant (since it uses this architecture internally)
When two separate CouchDB instances that have accepted updates to the "same document" (documents with the same `_id`) replicate (in either or both directions), one version wins (the same version on both nodes) and the other becomes a conflicted revision.
Notably, in the case of Cloudant, this can actually happen in exactly the same situation as getting a 409: if you submit two edits to the same doc (quoting the same rev) in quick succession it will either 409 or silently create a conflicted revision, depending on (1) whether the two requests wrote to different nodes internally and, if so (2) whether the nodes have had time to sync with each other.
A document can theoretically have an unlimited number of conflicted revisions,* but keep in mind:
- Conflicted revisions degrade performance.
- Even when these conflicted revisions are deleted, they leave behind a record that a revision used to exist but has been deleted, which continues to degrade performance.
- In the case of Cloudant, conflicted revisions represent times that Cloudant told your application, "Yes, I accept your write," and then threw it away into the dark dustbin of conflicts. If you don't monitor your conflicts (by writing a view over the `_conflicts` field of all documents), then writes to your database are failing silently, which is by design due to the cluster model.
Having conflicted revisions can also lead to other kinds of unintuitive behavior. When deleting a document, if it has any conflicted revisions, one of them will arbitrarily replace the document, resulting in something closer to a “blast from the past” than a deletion.
- To effectively delete a document, you must first delete all of its conflicted revisions
- This is particularly important for design docs; if you’re not careful, deleting a design document without deleting its conflicted revisions could result in an expensive reindexing process that must be killed at the OS process level manually
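A "delete it and all of its conflicts" routine can be sketched like this. The two callables are hypothetical stand-ins: `get_doc()` wraps `GET /$db/$doc_id?conflicts=true`, and `delete_rev(rev)` wraps `DELETE /$db/$doc_id?rev=<rev>`.

```python
def delete_with_conflicts(get_doc, delete_rev):
    """Delete a document along with all of its conflicted revisions, so
    the delete actually sticks instead of resurrecting a conflict.

    get_doc() must fetch with ?conflicts=true so the losing revisions
    show up in the _conflicts field.
    """
    doc = get_doc()
    # Delete the losing revisions first, then the winning one.
    for rev in doc.get("_conflicts", []):
        delete_rev(rev)
    delete_rev(doc["_rev"])
```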
*in reality this number is capped by the implementation
Views vs. SQL indexes
I wanted to take a moment in this post to compare CouchDB and relational or SQL databases. For the most part, there’s not that much in common, but I’ve found an analogy—their respective approach to maintaining indexes on your data—to be useful to keep in mind.
CouchDB views are analogous to SQL indexes. Sort of. They’re both b-tree indexes of your data that are automatically kept up to date.
One big user-facing difference between CouchDB and SQL databases is that whereas a SQL database lets you use its Structured Query Language to specify what data you want to retrieve, and then decides how to execute your query given your indexes, CouchDB forces you to query indexes directly. When you make a SQL query, your database might decide to use an index to quickly whittle a table down to a manageably small set and then finish the query with a sequential scan; if there are two indexes that could be useful, it will decide for you whether to index by this and filter by that, or index by that and filter by this. CouchDB, on the other hand, offers no such help; it simply maintains an index and exposes it to you through the view API. This means that you must either make your views return exactly the data you are looking for or be willing to do the extra filtering in your application. Either way, you are responsible for query planning.
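Querying an index directly means building key-range queries yourself. A small sketch, assuming a hypothetical view keyed by `[domain, username]` (CouchDB expects the start and end keys to be JSON-encoded in the query string):

```python
import json
from urllib.parse import urlencode

def view_url(db, ddoc, view, startkey, endkey):
    """Build a key-range query URL against a view. The trailing {} in an
    endkey is the conventional "high" sentinel that matches every key
    beginning with the startkey's prefix."""
    params = urlencode({
        "startkey": json.dumps(startkey),
        "endkey": json.dumps(endkey),
        "reduce": "false",
        "include_docs": "true",
    })
    return "/%s/_design/%s/_view/%s?%s" % (db, ddoc, view, params)

# "Query planning" is on you: pull the key range the index gives you...
url = view_url("users", "users_by_domain", "users_by_domain",
               ["acme"], ["acme", {}])
# ...then do any residual filtering in the application layer, e.g.:
#   active = [row["doc"] for row in rows if row["doc"].get("is_active")]
```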
Another difference is either in the terminology of “database” vs. “table,” or in behavior of “views” vs. “indexes,” depending on how you look at it. Whereas a SQL index maintains a b-tree over a (SQL) table, a CouchDB view maintains a b-tree over a (CouchDB) database. Another way to look at it is that a database in CouchDB is like a table in SQL. While you can have as many document types in a single CouchDB database as you want (indeed, CouchDB doesn’t have any built-in concept of document types), keep in mind that:
- Having multiple document types in the same database degrades performance because each view must be run on every document, even if that document is irrelevant and filtered out in the map code.
Finally, while this analogy holds well for very simple views (such as one that indexes users by
(domain, username)), it breaks down in that CouchDB views can do much more complicated things (and thus start to more resemble SQL stored procedures and/or materialized views); this is one of the areas in which CouchDB supposedly shines, though keep in mind as we’ve said before:
- Complex map and reduce code degrades performance
While most blog posts about a piece of software have the primary purpose of either gushing or ranting, my intention is to do neither. CouchDB is a fine piece of software, and like all software, it has its limits. There are many things I wish I’d done differently from the get-go, which all come directly from the points I’ve bulleted above:
- Try and have each view in a database with only the documents relevant to it:
- Put every doc type in its own db—or if that seems too extreme, at least separate the highest-volume documents out into their own databases. (Since I haven’t tried the extreme version, I don’t know what the performance limits are to having hundreds of databases on the same instance.)
- If views should only run for a small subset of documents (such as a particular client’s data), consider having these views in their own db and asynchronously syncing the relevant data to that db
- Never use attachments, and store all files in an object store (like a shared filesystem or Riak)
- Never, ever save a document twice when you could save it once
- Avoid update conflicts like the plague.
- Put every view in its own design document so that each view can be reindexed separately. (Remember, even deleting a view causes a major reindex if it’s in a design doc with other views!)
- Don’t rely on the filtered changes feed
- Keep the number of views to a minimum:
- If half of your views are to index doc type X by client, delete them and write a single view that indexes all docs by doc type and client
- If you can get rid of a view and do some small amount of sorting or filtering in your application, it’s probably worth it.
- Encapsulate Couch-related logic in accessor functions that are tested. This lets you easily rewrite the "plan" you use to achieve a particular query if you need to change, refactor, remove, or optimize your views later on; in the comparison to SQL, if you're using CouchDB, remember that you're in charge of query planning, and sometimes a sequential scan on partially filtered data is more efficient than using (and maintaining!) a special index
And of course…
- Always consider whether SQL (MySQL, Postgres, etc.—doesn’t matter, whatever you’re using) would be better suited to your problem
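The view-consolidation advice above can be sketched as a single design doc that indexes every document by doc type and client. The field names (`doc_type`, `client_id`) are made up for illustration.

```python
# One view indexing all documents by (doc_type, client_id) can replace a
# pile of per-type "X by client" views.
by_type_and_client = {
    "_id": "_design/by_type_and_client",
    "views": {
        "by_type_and_client": {
            "map": (
                "function (doc) {"
                "  if (doc.doc_type && doc.client_id) {"
                "    emit([doc.doc_type, doc.client_id], null);"
                "  }"
                "}"
            ),
        }
    },
}
# Query one doc type for one client with an exact composite key:
#   ?key=["SomeDocType", "client-123"]&include_docs=true
```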
At Dimagi, we’ve come a long way in applying these hard-learned tips, and I’m happy to say it has significantly improved the performance and reliability of our CouchDB (Cloudant) database, and having teased out these tips, we’ve matured as a team from simply venting about CouchDB to understanding what we can do with it to get the best performance it’ll give us. But I have to say, I sure do miss the days when I thought there was nothing funny about CouchDB’s tagline. Relax.
Also, We’re Hiring! Please apply if any position seems right for you. And if you enjoyed this article, you might also like our article on Scaling Code Review.