In most fields, there's a special kind of shame associated with having to start a project over from scratch. As an architect, for example, the last thing you want to hear is that one of your buildings will be torn down and rebuilt from the ground up because it can no longer support the weight of its tenants.
According to computer scientist and entrepreneur Michael Stonebraker, however, that's more or less the situation confronting Facebook right now. Only in Facebook's case, the "building" is a Web application, and the problem isn't concrete or steel girders; it's MySQL.
In 2008, Facebook famously disclosed that it had deployed a whopping 1,800 production MySQL servers, and the social networking giant's growth has only accelerated since then. As of now, Stonebraker says, Facebook has split its MySQL data store into some 4,000 shards, with 9,000 caching servers running 24/7 just to keep up with the load.
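Facebook has not published the details of its sharding scheme, but the architecture the numbers above describe — many database shards fronted by a fleet of caching servers — typically works by hashing a record's key to pick a shard, and checking a cache before touching the database at all. A minimal sketch of that pattern, using the shard and cache counts quoted above purely for illustration:

```python
import hashlib

# Illustrative only: Facebook's actual routing logic is not public.
# The counts below simply echo the figures cited in the article.
NUM_SHARDS = 4000
NUM_CACHES = 9000

def _stable_hash(key: str) -> int:
    """Hash a key deterministically, so the same key always
    maps to the same shard and cache server."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

def shard_for(user_id: str) -> int:
    """Map a user ID to one of the database shards."""
    return _stable_hash(user_id) % NUM_SHARDS

def cache_for(user_id: str) -> int:
    """Map a user ID to one of the caching servers."""
    return _stable_hash(user_id) % NUM_CACHES

def lookup(user_id: str, cache: dict, db_shards: list) -> object:
    """Cache-aside read: try the cache first, fall back to the
    shard on a miss, then populate the cache for next time."""
    if user_id in cache:
        return cache[user_id]
    value = db_shards[shard_for(user_id)].get(user_id)
    cache[user_id] = value
    return value
```

The catch with this scheme, and part of why Stonebraker calls the setup a trap, is that the shard count is baked into every key's location: changing it remaps nearly every record, so the architecture gets harder to escape the bigger it grows.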
Facebook's struggles with MySQL are far from secret. In fact, the company maintains a MySQL at Facebook profile page with updates on its continuous quest to keep the open source database running efficiently at such a massive scale.
But to hear Stonebraker tell it, that quixotic journey should have ended long ago. He describes being saddled with Facebook's complex MySQL installation as "a fate worse than death." The only way out of this purgatory, he says, is for Facebook to "bite the bullet and rewrite everything." In other words: Tear this building down.
Naturally, Stonebraker's comments have ruffled a lot of feathers in the Facebook camp. But for the sake of argument, let's assume he's right. Let's assume Facebook really is nearing the limits of what MySQL can possibly do, and that the most effective solution at this point would be a total rewrite.
So what's the big deal?