The backstory on data storage in our CMS

In 2009 we started migrating features from our legacy CMS to a new one, which we refer to as SkyPad.  In previous posts, I've talked about our SOA approach and our service bus.  Here I'll be talking just about data storage.

We use SQL Server a lot.  It's been supporting most of our systems and running well for 15 years.  It was natural for us to continue using SQL Server as we started to create SkyPad.

As we started developing features in SkyPad, we found that almost all of our tables were identical.  We deal with documents -- stories, videos, slideshows, images, etc. -- and the table structure we were creating over and over again consisted of a document ID, an XML blob with the document contents, and some timestamps for auditing and concurrency control.
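
To make that shape concrete, here is a minimal sketch of the row as a plain class (Java purely for illustration; the field names are made up, but the pieces are the same: an ID, an XML blob, and a pair of timestamps):

    import java.time.Instant;

    // Illustrative model of the common table shape: a document ID, the
    // document contents as an XML blob, and timestamps used for auditing
    // and concurrency control.  Field names are assumptions, not our
    // actual schema.
    public final class DocumentRecord {
        private final String documentId;  // e.g. a GUID or slug
        private final String xml;         // full document contents as XML
        private final Instant createdAt;  // audit timestamp
        private final Instant updatedAt;  // also used for concurrency checks

        public DocumentRecord(String documentId, String xml,
                              Instant createdAt, Instant updatedAt) {
            this.documentId = documentId;
            this.xml = xml;
            this.createdAt = createdAt;
            this.updatedAt = updatedAt;
        }

        public String documentId() { return documentId; }
        public String xml()        { return xml; }
        public Instant createdAt() { return createdAt; }
        public Instant updatedAt() { return updatedAt; }
    }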

That common table structure led us to develop a simple CRUD repository.  Create, Update and Delete were trivial, as was retrieving by ID.  Querying was harder, since it required either storing the queryable properties separately, or digging into the XML blob.  All doable, but not as simple as we wanted.
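
The repository amounted to something like the interface below -- a sketch building on the DocumentRecord class above; the method names are illustrative rather than our actual API, and the last method stands in for whichever querying approach a team chose:

    import java.util.List;
    import java.util.Optional;

    // A sketch of the generic CRUD repository the common table shape made
    // possible.  Uses the DocumentRecord class from the previous sketch.
    public interface DocumentRepository {
        void create(DocumentRecord document);
        void update(DocumentRecord document);
        void delete(String documentId);

        // Retrieval by ID is trivial against the common table shape.
        Optional<DocumentRecord> getById(String documentId);

        // Querying is the hard part: either the queryable properties are
        // stored separately, or the XML blob has to be dug into at query
        // time.  This method is a hypothetical stand-in for either approach.
        List<DocumentRecord> findByProperty(String propertyName, String value);
    }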

Because we are a major news organization, it is, alas, a newsworthy event if one of our sites goes down even for a few minutes.  So our sites run in multiple data centers, and our ops team requires complete redundancy within each data center.  We have sets of servers called "pods".  Each is a small, but complete, production environment.  During normal maintenance and deployments (a couple of times a day), we can take a pod offline and still be running live out of all the data centers.  That requires our data to be redundant as well: each pod has a copy of all the live data.

SQL Server provides excellent replication and high-availability features.  However, we were trying hard to build a system that could run on commodity hardware and scale out, not up.  We wanted to avoid clustering, SANs and other solutions that require extensive setup, care and feeding by our ops team.  We wanted teams to be able to quickly spin up a test environment that looked a lot like production.

For those reasons, we decided to use our service bus to synchronize data across the various databases in multiple data centers.  When a pod is taken offline, data synchronization messages start queuing up.  When the pod is brought back online, the messages are handled and the pod's data is up to date before it starts receiving live requests.
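
Conceptually, each synchronization message carries a changed document, and a handler on each pod applies it to the local database.  The sketch below (building on the earlier repository sketch) shows one plausible shape for that, with illustrative names and a simplified newest-timestamp-wins check so that replaying a backlog of queued messages is safe:

    import java.time.Instant;

    // Hypothetical shape of a data synchronization message.  The real
    // messages flow over our service bus; the fields here are assumptions.
    final class DocumentChangedMessage {
        final String documentId;
        final String xml;
        final Instant updatedAt;  // used to decide whether the change is newer

        DocumentChangedMessage(String documentId, String xml, Instant updatedAt) {
            this.documentId = documentId;
            this.xml = xml;
            this.updatedAt = updatedAt;
        }
    }

    // Applies queued changes to this pod's local database.  Changes that are
    // older than the local copy are skipped, so draining a backlog after a
    // pod has been offline is safe to repeat.
    final class DocumentSyncHandler {
        private final DocumentRepository repository;

        DocumentSyncHandler(DocumentRepository repository) {
            this.repository = repository;
        }

        void handle(DocumentChangedMessage message) {
            var existing = repository.getById(message.documentId);
            if (existing.isPresent()
                    && !existing.get().updatedAt().isBefore(message.updatedAt)) {
                return;  // local copy is the same age or newer; skip
            }
            var record = new DocumentRecord(
                    message.documentId,
                    message.xml,
                    existing.map(DocumentRecord::createdAt).orElse(message.updatedAt),
                    message.updatedAt);
            if (existing.isPresent()) {
                repository.update(record);
            } else {
                repository.create(record);
            }
        }
    }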

That works fine, though it has some issues.  To reduce concurrency conflicts, at any given time, one pod is designated as the write-master for each application that is doing writes.  We deploy multiple times a day, and we need to change the write-masters each time.  We need a tool to re-sync pods when the inevitable problems occur or when we bring a new pod online.  These were the tradeoffs we made when deciding to roll our own replication.
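
At its simplest, the write-master designation is just a lookup of which pod currently accepts writes for a given application, consulted before any write is performed.  A minimal sketch of that idea, with entirely made-up names:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical registry of which pod is currently the write-master for
    // each application.  In practice the designation changes with every
    // deployment.
    final class WriteMasterRegistry {
        private final Map<String, String> writeMasters = new ConcurrentHashMap<>();

        void designate(String application, String podName) {
            writeMasters.put(application, podName);
        }

        boolean isWriteMaster(String application, String podName) {
            return podName.equals(writeMasters.get(application));
        }
    }

    // Guards writes on a pod: anything arriving at a pod that is not the
    // current write-master for the application is rejected.
    final class WriteGuard {
        private final WriteMasterRegistry registry;
        private final String thisPod;

        WriteGuard(WriteMasterRegistry registry, String thisPod) {
            this.registry = registry;
            this.thisPod = thisPod;
        }

        void assertCanWrite(String application) {
            if (!registry.isWriteMaster(application, thisPod)) {
                throw new IllegalStateException(
                        "Pod " + thisPod + " is not the write-master for " + application);
            }
        }
    }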
