Have You Patched For Spectre/Meltdown Yet? (And more on patches)

It’s security week here at DCAC (you can join us on Friday, January 19th, 2018 at 2 PM in a webcast to talk more about security) and I wanted to focus on patches. I wrote a couple of weeks ago about the impact of Spectre and Meltdown on SQL Server (and just about every other thing that runs on silicon chips). In the interim, Microsoft has patched all currently supported versions of SQL Server—the patches can be hard to find, but they are all summarized in this KB article. I can’t emphasize enough the need to patch all of your infrastructure for this—the vulnerabilities are big and they are really bad. While you may have physically isolated servers (though these are a rarity in modern IT), an attacker may have gained access to your network via credentials taken from an unpatched machine.
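If you’re not sure whether a given instance is already on a fixed build, a quick check is to query the version properties and compare the build number against the list in the KB article (note that `ProductUpdateLevel` returns NULL on older versions):

```sql
-- Compare ProductVersion against the fixed builds listed in the KB article
SELECT SERVERPROPERTY('ProductVersion')     AS ProductVersion,     -- build number
       SERVERPROPERTY('ProductLevel')       AS ProductLevel,       -- RTM or SPn
       SERVERPROPERTY('ProductUpdateLevel') AS ProductUpdateLevel; -- CUn (NULL on older versions)
```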

So to summarize, you need to patch the following:

  • System BIOS
  • Hypervisor
  • Guest Operating System
  • Browser
  • Your Mouse (probably)

That’s a lot of patching, and a lot of downtime and testing. It sucks, and yeah, it’s probably also going to impact server performance. You still need to do it—unless you want to be the next guy blamed by the CEO of Equifax.

Which brings me to my next topic.


What is your patching strategy?

In my career I found enterprise IT to be stodgy and not always open to new ideas. We were also generally slow to move, and operated a couple of years and versions behind modern IT. However, all of the large enterprises where I worked (five different Fortune 100s) were really good at centralized management of systems, which made things like patching much easier. At the telecom company where I worked, I remember having to patch all of our Windows Servers to fix a remote desktop vulnerability—it was one of my first tasks there. We had System Center Configuration Manager to patch (and inventory the patch status of) all of those servers. We had a defined maintenance window, and good executive support to say we are going to apply system updates, and you should build customer-facing applications to be fault tolerant.

Smaller organizations have challenges with patching—Cabletown had a team of two people whose job was to manage SCCM. Many smaller orgs are lucky if they have a sysadmin and Windows Server Update Services. So how do you manage updates in a small org? My first recommendation would be to get WSUS—we have it in our organization, and we’re tiny. However, you still need to manage rebooting boxes, and applying SQL Server CUs (and testing, maybe). So what can you do?

  • Use the cloud for testing patches
  • Get a regular patching window
  • Use WSUS to check status of updates
  • When in doubt, apply the patch. I’d rather have to restore a system than be on the news
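On the SQL Server side, one low-effort check that supports the regular-patching-window item: see how long the instance has been up. Most updates require a restart, so a very old start time is a sign the window isn’t actually being used:

```sql
-- Instance start time as a rough proxy for when patches were last applied
SELECT sqlserver_start_time,
       DATEDIFF(DAY, sqlserver_start_time, SYSDATETIME()) AS days_since_restart
FROM sys.dm_os_sys_info;
```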

I mentioned the cloud above—one thing you may want to consider for customer-facing applications is platform-as-a-service offerings like Amazon RDS, Microsoft Azure SQL Database, and Azure Web Apps. These services are managed for you, and have been architected to minimize downtime for updates. For example, if you were using Azure SQL Database, your databases were already protected when you woke up to the Meltdown/Spectre news, without significant downtime.

Exporting Masked Data with Dynamic Data Masking


Dynamic Data Masking is a presentation-layer feature added in Azure SQL Database and SQL Server 2016. In a nutshell, it prevents end users from seeing sensitive data, while letting administrators expose some of it (e.g. the last four digits of a social security number) for verification purposes. I’m not going to focus too much on the specifics of data masking in this post—that’s a different topic. This post is about how, once you have a masking strategy, you can protect your sensitive data as it goes to other environments.
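For context, a mask can be declared when a table is created or added to an existing column after the fact. A minimal sketch, assuming a hypothetical Customers table with an SSN column:

```sql
-- Hypothetical table and column names; the partial() function here
-- exposes only the last four digits of the SSN to non-privileged users
ALTER TABLE Customers
ALTER COLUMN SSN ADD MASKED WITH (FUNCTION = 'partial(0,"XXX-XX-",4)');
```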

Well at PASS Summit, both in our booth and during my presentation on security in Azure SQL Database, another idea came up—exporting data from production to development, while not releasing any sensitive data. This is a very common scenario—many DBAs have to move data from prod to dev, and frequently it is done in an insecure fashion.

Doing this requires a little bit of trickery, as dynamic data masking does not apply to administrative users, so you will need a second, low-privilege user.

First step—let’s create a database and a masked table.


CREATE DATABASE DDM_Demo;  -- demo database name is an assumption
GO
USE DDM_Demo;
GO
CREATE TABLE Membership
  (MemberID int IDENTITY PRIMARY KEY,
   FirstName varchar(100) MASKED WITH (FUNCTION = 'partial(1,"XXXXXXX",0)') NULL,
   LastName varchar(100) NOT NULL,
   Phone# varchar(12) MASKED WITH (FUNCTION = 'default()') NULL,
   Email varchar(100) MASKED WITH (FUNCTION = 'email()') NULL);

INSERT Membership (FirstName, LastName, Phone#, Email) VALUES
('Roberto', 'Tamburello', '555.123.4567', 'RTamburello@contoso.com'),
('Janice', 'Galvin', '555.123.4568', 'JGalvin@contoso.com.co'),
('Zheng', 'Mu', '555.123.4569', 'ZMu@contoso.net');


USE DDM_Demo;  -- switch to the demo database (name assumed), not msdb
GO
CREATE LOGIN demoexport WITH PASSWORD = 'Use_A_Strong_Password_Here!';  -- example password only
CREATE USER demoexport FOR LOGIN demoexport;
ALTER ROLE db_datareader ADD MEMBER demoexport;
ALTER ROLE db_datawriter ADD MEMBER demoexport;

Next I’ll log in as this user and select from the Membership table.
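If you’d rather not open a second connection, impersonating the user from the same session shows the same thing (this assumes the demoexport user created above):

```sql
EXECUTE AS USER = 'demoexport';
SELECT FirstName, LastName, Phone#, Email FROM Membership;
-- Masked results come back along the lines of:
-- RXXXXXXX | Tamburello | xxxx | RXXX@XXXX.com
REVERT;
```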


From here, I’m going to (as the demoexport user) take an export of the database. You can do this by selecting the Export Data-tier Application option from the Tasks menu in Management Studio.


I won’t bore you with clicking through the process, but this will give you an export of your database, with the data masked. Your next step is to import the .bacpac file you created. In this case I’m going to the same instance, so I changed the database name.

Right click on “Databases” in SSMS and select “Import Data-tier application”. Import the file you created in the previous step.


Now try selecting as your admin user.
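With the screenshot omitted, here is the idea in T-SQL. The imported database name below is whatever you chose during the import; DDM_Demo_Copy is just an assumption:

```sql
USE DDM_Demo_Copy;  -- the name you gave the imported database (assumed)
SELECT FirstName, Phone#, Email FROM Membership;
-- Even as an administrator you now see the masked strings (RXXXXXXX, xxxx,
-- RXXX@XXXX.com): the export wrote the masks into the .bacpac as actual data.
```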


Boom, you’ve exported and imported masked data in your lower environments.
