PASS Summit 2013—I’m Speaking

Last week I received a really exciting email—I’ll be speaking at this year’s PASS Summit in Charlotte, NC. I had the good fortune of being selected to present on two topics. Without further ado, here’s what I’ll be presenting:

Accelerate Database Performance through Data Compression

I feel like Data Compression is one of the most underutilized features of SQL Server Enterprise Edition. In this session, we will build an understanding of its costs and benefits, and the best use cases for it in your environment. Additionally, we will discuss the new ColumnStore index feature in SQL Server 2012. There will be a lot of demos, and hopefully I won’t have to go to my DR laptop like I did in Atlanta.

Into the Blue—Extending AlwaysOn Availability Groups

I know a lot of us (my friends in the SQL Community) bash the cloud, and I completely agree that “cloud”, “big data” and “agile” are the three buzzwords that tend to make me want to punch a wall. However, DR is an interesting use case for off-premises resources: a lot of smaller companies may have aggressive RPO/RTO requirements, but may not have the in-house capacity (multiple geographically separated data centers) to support them. In this session we will show how to use Windows Azure Virtual Machines to extend your AlwaysOn Availability Group into Azure.

I hope to see many of you in Charlotte.

Disaster Recovery Starts with Backup and Recovery

I present and blog a lot about high availability and disaster recovery solutions, and in doing so I get to talk to a lot of folks about different strategies. Depending on your business needs and regulatory requirements, these can vary greatly in cost and complexity. However, no matter your DR solution, it is imperative that you have a sound backup strategy and that you test those backups on a regular basis.

I recently took part in an architectural review of several important applications. The reason for the review was that customer teams were asking for a real-time DR solution for systems that were producing multiple terabytes of transaction volume daily. This is possible, but only at great cost, so in order to craft a better solution we started asking for details about the current strategy and worked to get a better understanding of the business requirements around their data. In doing so, it came out that multiple application teams had never tested a restore of their backups. Excuse my fonts here…

If you aren’t regularly testing your restores, save the drive space and don’t back anything up.

Ok, rant over—sort of. At a conference I had the pleasure of attending this spring, an infrastructure executive from Goldman Sachs presented on how they had zero downtime during Hurricane Sandy. Granted, it is Goldman Sachs, and their IT budget is some-huge-number billion dollars, but several things she said really stood out. In addition to the extensive risk analysis work GS had done, they regularly tested failovers and restores across all systems (compute, power, cooling, and generation). That’s by far the most important thing you can do from a DR perspective. Even if you can’t physically run a test (you really should), get all of the teams (Database, Sys Admin, Network, Application) into one room and work out all of the moving parts and what needs to happen in the event of a disaster.

Lastly, I know it’s hard to get resources to do test restores; enterprise storage isn’t cheap. However, there are many options you can leverage if you don’t have space in your primary location for testing:

  • Amazon Web Services—Take advantage of the cloud; DR, in my opinion, is one of the best use cases for it. You can fire up a server, use it for testing, and then blow it away. You have to get your data there, and that can be painful for large data sets, but it’s not a bad solution.
  • Real Hardware—I know enterprise storage is pricey, but a lab server with some relatively slow but dense storage isn’t. You can build a really effective restore testing environment for less than $10,000, and well under that if you are willing to buy older, unsupported hardware on eBay.
  • Offsite Recovery Services—There are a number of firms that provide offsite recovery services (for example, Sungard). However, this option tends to be extremely expensive, as they guarantee hardware for you in the event of a disaster, and as part of that guarantee you are granted testing time.

I can’t express my feelings on this strongly enough—particularly if you are working with large data sets, with lots of opportunities for failure, it is absolutely critical to test those systems. Otherwise, don’t plan on being able to restore when you need to.
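For the database piece, a restore test doesn’t have to be elaborate. Here is a minimal sketch of what a scripted test might look like; the database name, logical file names, and paths are hypothetical and would need to match your own environment:

-- Quick check that the backup media is readable (useful, but not a substitute for an actual restore)
RESTORE VERIFYONLY FROM DISK = N'\\backupshare\Sales\Sales_Full.bak'
GO
-- Restore a throwaway copy on the test server, moving the files to local storage
RESTORE DATABASE [Sales_RestoreTest]
FROM DISK = N'\\backupshare\Sales\Sales_Full.bak'
WITH MOVE N'Sales_Data' TO N'D:\RestoreTest\Sales_Data.mdf',
     MOVE N'Sales_Log' TO N'D:\RestoreTest\Sales_Log.ldf',
     RECOVERY, STATS = 10
GO
-- Run a consistency check against the restored copy, then drop it
DBCC CHECKDB ([Sales_RestoreTest]) WITH NO_INFOMSGS
GO
DROP DATABASE [Sales_RestoreTest]
GO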

Compression Presentation Resources

I’ve been fortunate enough to give a presentation on Data Compression in SQL Server several times over the past and coming months. I put this post together to showcase the slide deck, answer some questions, and list some resources around the feature.

Data Compression is an Enterprise Edition-only feature that was introduced in SQL Server 2008. Starting with SQL Server 2008 R2, Backup Compression became available in Standard Edition and higher.
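If you want to see what data compression would buy you before committing, a quick sketch looks something like this (the table and index names are illustrative, not from a specific demo):

-- Estimate the space savings before changing anything
EXEC sp_estimate_data_compression_savings
    @schema_name = N'dbo',
    @object_name = N'FactInternetSales',
    @index_id = NULL,
    @partition_number = NULL,
    @data_compression = N'PAGE'
GO
-- Apply PAGE compression to the table
ALTER TABLE dbo.FactInternetSales
REBUILD WITH (DATA_COMPRESSION = PAGE)
GO
-- Or compress a single nonclustered index with the lighter-weight ROW compression
ALTER INDEX IX_FactInternetSales_OrderDateKey ON dbo.FactInternetSales
REBUILD WITH (DATA_COMPRESSION = ROW)
GO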

The SQLCAT blog post about compression (and specifically answering the question of whether there are benefits to using backup compression with a row- and/or page-compressed database–the answer is yes) is here.

Hugo Kornelis answers some questions around LOBs and why SQL Server doesn’t compress them at SQL Blog here.

My blog post on Columnstore Archival Compression in SQL 2014.

The full SQLCAT white paper on Data Compression is here.

To answer the question about backup compression affecting the transaction log chain: basically, you can’t mix compression modes in a given media set. So if you are not using backup compression and you turn it on, you need to start backing up to a new media set (a new backup file). For example, the code below is correct:

-- Full backup without compression
BACKUP DATABASE [AdventureWorksDW2012] TO DISK = N'c:\temp\1.bak'
GO
-- Log backup with compression is fine, because it goes to a new media set (a new file)
BACKUP LOG [AdventureWorksDW2012] TO DISK = N'c:\temp\2.bak'
WITH COMPRESSION
GO
-- Another log backup, again to its own file
BACKUP LOG [AdventureWorksDW2012] TO DISK = N'c:\temp\3.bak'
GO
-- Tail-log backup, leaving the database in a restoring state
BACKUP LOG [AdventureWorksDW2012] TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Backup\AdventureWorksDW2012_LogBackup_2013-05-09_14-22-10.bak' WITH NOFORMAT, NOINIT, NAME = N'AdventureWorksDW2012_LogBackup_2013-05-09_14-22-10', NOSKIP, NOREWIND, NOUNLOAD, NORECOVERY, STATS = 5
GO
-- Restore the full backup, then the log backups in sequence, finishing with the tail-log backup
RESTORE DATABASE [AdventureWorksDW2012] FROM DISK = N'c:\temp\1.bak' WITH FILE = 2, NORECOVERY, NOUNLOAD, STATS = 5
RESTORE LOG [AdventureWorksDW2012] FROM DISK = N'c:\temp\2.bak' WITH FILE = 2, NORECOVERY, NOUNLOAD, STATS = 5
RESTORE LOG [AdventureWorksDW2012] FROM DISK = N'c:\temp\3.bak' WITH FILE = 1, NORECOVERY, NOUNLOAD, STATS = 5
RESTORE LOG [AdventureWorksDW2012] FROM DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Backup\AdventureWorksDW2012_LogBackup_2013-05-09_14-22-10.bak' WITH FILE = 1, NOUNLOAD, STATS = 5
GO

Also, the following rules apply when using compression in conjunction with partitioned tables (a short example follows the list):

When you use data compression with partitioned tables and indexes, be aware of the following considerations:

  • When partitions are split by using the ALTER PARTITION statement, both partitions inherit the data compression attribute of the original partition.
  • When two partitions are merged, the resultant partition inherits the data compression attribute of the destination partition.
  • To switch a partition, the data compression property of the partition must match the compression property of the table.
  • When a clustered index is dropped, the corresponding heap partitions retain their data compression setting unless the partitioning scheme is modified. If the partitioning scheme is changed, all partitions are rebuilt to an uncompressed state.

Credit: Books Online
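As an illustration of the switch rule in particular, here is a rough sketch; the table names and partition numbers are made up for the example:

-- Compress only the older partitions of a (hypothetical) partitioned fact table
ALTER TABLE dbo.FactSales
REBUILD PARTITION = ALL
WITH (DATA_COMPRESSION = PAGE ON PARTITIONS (1 TO 3),
      DATA_COMPRESSION = NONE ON PARTITIONS (4))
GO
-- Before switching a staging table into partition 1, its compression setting
-- must match the target partition, so rebuild it with PAGE compression first
ALTER TABLE dbo.FactSales_Staging
REBUILD WITH (DATA_COMPRESSION = PAGE)
GO
ALTER TABLE dbo.FactSales_Staging
SWITCH TO dbo.FactSales PARTITION 1
GO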
