SQL Server 2014's biggest new feature is in-memory transaction processing, or in-memory OLTP, which Microsoft claims can make database operations as much as 30 times faster. In-memory database technology for SQL Server has long been in the works under the code name "Hekaton." To use it, a database must contain a filegroup designated for memory-optimized data, and individual tables must be declared as memory-optimized. The resulting table can be used as a conventional database table or as a substitute for a temporary table.
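A minimal T-SQL sketch of what that declaration looks like; the database, filegroup, file path, and table names here are placeholders:

```sql
-- Add a filegroup for memory-optimized data to an existing database (names illustrative).
ALTER DATABASE SalesDB
    ADD FILEGROUP SalesDB_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE SalesDB
    ADD FILE (NAME = 'SalesDB_mod', FILENAME = 'C:\Data\SalesDB_mod')
    TO FILEGROUP SalesDB_mod;

-- Declare a memory-optimized table; SCHEMA_AND_DATA makes it fully durable.
CREATE TABLE dbo.SessionState (
    SessionId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload   VARBINARY(8000),
    LastTouch DATETIME2 NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

A table declared with DURABILITY = SCHEMA_ONLY instead keeps only its structure across restarts, which is what makes memory-optimized tables a viable substitute for temporary tables.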
Microsoft claims it's made other speed improvements apart from the gains realized by keeping a table in memory. Reads and writes to an in-memory table don't take conventional table- or page-level locks; instead, an optimistic, row-versioned concurrency control scheme keeps readers and writers from blocking one another. Stored procedures can also be compiled to native code for a further boost.
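A natively compiled stored procedure against the kind of memory-optimized table described above might look like this sketch (procedure and table names are illustrative):

```sql
-- Natively compiled procedure: SCHEMABINDING, EXECUTE AS, and an ATOMIC block
-- with explicit isolation level and language are required by the syntax.
CREATE PROCEDURE dbo.usp_TouchSession @SessionId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    UPDATE dbo.SessionState
    SET LastTouch = SYSUTCDATETIME()
    WHERE SessionId = @SessionId;
END;
```

Such procedures are compiled to a native DLL when created, rather than interpreted at execution time, which is where the additional speedup comes from.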
SQL Server long had a feature called table pinning (the DBCC PINTABLE command), which kept certain tables from being flushed from memory once they were read. This feature has been misconstrued in the past as a kind of in-memory database technology, but it doesn't provide any of the other in-memory or behavioral optimizations Microsoft claims to have made in SQL Server 2014 -- and it was deprecated long ago, having no effect since SQL Server 2005.
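For contrast, the legacy pinning syntax looked like the following; it is a no-op on modern versions and is shown only for illustration, with a placeholder table name:

```sql
-- Legacy table pinning (SQL Server 2000 and earlier; a no-op since SQL Server 2005).
DECLARE @db INT, @tbl INT;
SET @db  = DB_ID();
SET @tbl = OBJECT_ID('dbo.LookupCodes');  -- placeholder table name
DBCC PINTABLE (@db, @tbl);  -- kept the table's pages in the buffer pool once read
```

Note that pinning only affected buffer-pool eviction; it changed nothing about locking, logging, or query execution, which is what separates it from true in-memory OLTP.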
In-memory database technology is far from new, but until recently it was only available in fairly obscure or bespoke implementations, such as Sybase's products or IBM's solidDB and BLU. Now that in-memory processing is showing up in the feature rosters of big-name commercial databases (Oracle, Microsoft SQL Server) and open source products (VoltDB, MariaDB, PostgreSQL), it's the commercial offerings where the technology shows the greatest advantage, by dint of being that much more thoroughly integrated into the finished product.
According to the materials released so far about SQL Server 2014, front-end applications won't have to be rewritten to take advantage of the in-memory functionality. But existing databases will need to be modified, and the exact benefit any given database will enjoy after moving to an in-memory model will vary depending on its workload. (Microsoft has sample code to demonstrate this.)
Microsoft's promise here is that relatively little work will need to be done to move a database to an in-memory incarnation, albeit at the cost of an upgrade to an entirely new edition of SQL Server. The company's been positioning SQL Server more as an upmarket solution for big data, where features like in-memory tables and Hadoop integration would be welcomed with open arms, and less as the sort of generic database solution long since eclipsed by the likes of MySQL.
Cloud computing also figures prominently in SQL Server 2014, as it features a slew of what Microsoft described as "hybrid cloud enhancements that make it easy to extend your on-premises database environment to the cloud." SQL Server 2014 instances can be backed up to Windows Azure storage, databases can be automatically deployed to Windows Azure VMs by way of a wizard, and a Windows Azure VM can be designated as a high-availability node for a SQL Server 2014 instance.
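As a sketch, backing a SQL Server 2014 database up to Windows Azure blob storage uses the BACKUP ... TO URL syntax; the storage account, container, credential, and database names below are placeholders:

```sql
-- Credential holding the storage account name and its access key (placeholders).
CREATE CREDENTIAL AzureBackupCred
WITH IDENTITY = 'mystorageaccount',
     SECRET   = '<storage-account-access-key>';

-- Back the database up directly to a blob in Windows Azure storage.
BACKUP DATABASE SalesDB
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDB.bak'
WITH CREDENTIAL = 'AzureBackupCred', COMPRESSION, STATS = 10;
```

Restores work the same way in reverse (RESTORE DATABASE ... FROM URL), which is what makes the Azure blob a drop-in substitute for a local backup target.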
Clearly, Microsoft doesn't want the idea of an "Azure-enabled" SQL Server to be one that connects to Azure for the sake of static storage, but rather one that allows SQL Server to be run more within Azure itself -- and allows existing on-premises deployments to be migrated gradually into Azure as part of day-to-day operations, not as a nebulous future project.
This story, "SQL Server 2014 supercharged with in-memory tables, Azure connectivity," was originally published at InfoWorld.com.