Dear Colos: Up Your Game: aka How the #$%^ do you not have fibre in stock?

Many companies use co-located data centers to host their hardware. In some cases (like the colo we at DCAC use) you pay for power, cooling, and a connection to the internet; there is no expectation of added services beyond those three things. In other cases, companies like Rackspace or Level 3 act as what are known as managed service providers (note: I'm not talking about Rackspace or Level 3 in this post, and I'm not going to name the guilty party, but if you want to know, you can reach me privately). Managed service providers offer solutions like shared storage, network management, and other value-added services beyond just a space for your servers.

Enter the Cloud

So Microsoft and Amazon are effectively playing in this space with their IaaS offerings. There's a big difference, however: the cloud providers have invested a great deal of money in automation. The same customer I'm talking about in today's post has some Azure VMs that we are deploying. I built a VM and allocated 3 TB of SSD storage in about 10 minutes this morning. Pretty slick operation; I'm fairly certain that when I ran the PowerShell to deploy the VM, no person got up to do anything. When I added the storage, I'm pretty sure no SAN zoning took place, and if it did, it was a few lines of code. We had previously stayed away from Azure because it's not the most cost-effective solution for very large workloads (colos tend to have slightly better pricing on big boxes, but you get nickel-and-dimed on other things).

When Your Colo Sucks

So I have two different work streams going with the colo right now. One of them is configuring a site-to-site VPN to Azure. This should be a simple operation; however, it took over a week to get in place, and only after I sent the colo the Cisco instructions on how to configure the VPN were they able to tell me that the Cisco device they had didn't support the latest route-based VPN in Azure. So we finally get up and running, and then we discover that we can't reach the Azure VMs from certain on-prem subnets. We ask them to make a change to add those subnets, and they completely break our connection. Awesome.

The other work stream is a cluster upgrade. I wanted a new cluster node and new storage so we didn't have to do an in-place upgrade. We started this process about three weeks ago, hoping to do the migration on Black Friday. We had a call today to review the configuration. Turns out they had nothing in place, and aren't even sure they can get a server deployed by NEXT FRIDAY (YES, 10 days to deploy a server, and your job is deploying servers). I heard lots of excuses: we aren't working Thursday/Friday, we have to connect to two different SANs, we might not have that fibre in stock. It wasn't my place to yell WHAT THE EVERLOVING $%^^ on the call, so I started live tweeting. Because that's ridiculous. Managing and deploying infrastructure is what I used to do for a living, and I wouldn't have had a job if it took 10 days to deploy a server, and that wasn't even my only job. It really is the colo's only job. How the #$%^ do you not have fibre in stock? Seriously? My lab at Comcast had all the fibre I could possibly need.

Edited to add this:

This is after last month when they confused SAN snapshots with SAN clones (when it takes 4 hours to recover from a “snapshot” it’s a clone) and presented production cluster storage (that was in use) to a new node. Awesome!!!

Why the Cloud will Ultimately Win

Basically, when it comes to repetitive tasks like deploying OSs and setting up storage, software is way better than humans. Yes, you need smart engineers and good design, but Azure and AWS are already 90% of the way there. Also, their service levels and response times are much better, because everything is standardized, which makes troubleshooting and automation much easier.

SQL Server v.Next—Linux Preview and Ola Hallengren’s Jobs

If you watched Scott Guthrie's keynote at Microsoft Connect() this morning, your mouth is probably still on the floor. There was a lot of big news:

  • Nearly all enterprise edition features in SQL Server 2016 SP1 are in standard edition
  • There is going to be a v.Next of SQL Server and you can play with CTP1 of it today
  • SQL Server on Linux CTP1 can now be downloaded and installed

The biggest news of the day is the standard edition news—this is going to be huge for independent software vendors who build their applications on top of SQL Server. While this is amazing news, I wanted to talk about something a little more near and dear to my heart—SQL Server on Linux. You can learn how to install SQL on Linux here.

So SQL on Linux

I’ve had the good fortune of being involved in the private preview for a good while now. Here’s the requisite screenshot of @@version you’ve seen in so many demos.

[Screenshot: SELECT @@VERSION output from SQL Server v.Next running on Linux]

In this case I'm running a VM on my Mac running CentOS. I also have a VM running Ubuntu and SQL Server running in a Docker container. If you want to go fully native, you can use Visual Studio Code and run without Windows at all. Aaron Bertrand has a great post on how to make this work.

Management of SQL Server on Linux

Aside from a couple of DMVs that show you Linux-specific performance information, everything in SQL Server on Linux is the same. Some of the HA and DR functionality is not complete, and the SQL Server Agent is not done yet; however, you can use cron instead (and if you're new to Linux, cron is worth learning about; I'll have another post on that next week).
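In the meantime, here is a minimal sketch of what a cron-scheduled job might look like. The schedule, the script path, and the /opt/mssql-tools location are assumptions for illustration only, not anything specific to the preview:

# crontab entry: run a T-SQL maintenance script every night at 2:00 AM
# (paths are placeholders; adjust them to wherever sqlcmd and your script actually live)
0 2 * * * /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P 'p@ssw0rd!' -i /home/sqladmin/nightly_maintenance.sql >> /home/sqladmin/nightly_maintenance.log 2>&1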

Ola Hallengren’s Jobs on Linux

Many DBAs use Ola Hallengren's jobs to manage backups and maintenance on their servers. The first thing you'll want to do is download Ola's scripts to your machine. You can do this using the curl command in Linux. In this scenario I'm redirecting the output of curl to a file called ola.sql:

curl https://ola.hallengren.com/scripts/MaintenanceSolution.sql > ola.sql

Because of the behavior of the SQL Server Agent (currently), you will need to change the CreateJobs parameter in Ola's script from Y to N. I used vi to do this; you can read a primer on vi here.
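If you would rather script that change than edit the file by hand, a sed one-liner can do it. This is only a sketch; it assumes the setting appears in the downloaded file on a line beginning with SET or DECLARE for the @CreateJobs variable, so check the script before running it:

# Flip @CreateJobs from 'Y' to 'N' only on the line that sets it
sed -i "/^\(SET\|DECLARE\) @CreateJobs/s/'Y'/'N'/" ola.sql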

After that, you'll want to install Ola's scripts on your SQL Server instance. You already have sqlcmd on your Linux install, and here you will use the input file flag (-i).

sqlcmd -S localhost -U sa -P 'p@ssw0rd!' -i ola.sql

You can do this from Management Studio on your Windows machine, or you can just do this from the command line—the nice part about the command line is that automation should be easy and straightforward.

The next thing we want to do is put the backup command in a shell script. In this case I'm just going to grab the backup example from Ola's site. Use your favorite text editor (mine is vi, because I hate myself) and create a file called userbackup.sh:

sqlcmd -S localhost -U SA -P 'p@ssw0rd!!' -d master -Q "EXECUTE [dbo].[DatabaseBackup] @Databases='USER_DATABASES', @BackupType='FULL', @Verify='Y', @CleanupTime=48, @CheckSum='Y', @LogToTable='Y'" -b

After you save this file, you’ll want to make it executable. I’m going to use the chmod command to do that.

chmod 770 userbackup.sh

Now the file can be executed using the ./ syntax. The output will be returned to the screen; if you are automating the process, you can redirect the output to a file which you can check for errors (there's a sketch of that after the sample output below).

Date and time: 2016-11-16 10:22:47

Command: BACKUP DATABASE [TestingQuerystore] TO DISK = N'C:\Data\helsinki\TestingQuerystore\FULL\helsinki_TestingQuerystore_FULL_20161116_102247.bak' WITH CHECKSUM, NO_COMPRESSION

Processed 2032 pages for database 'TestingQuerystore', file 'TestingQuerystore' on file 1.

Processed 2 pages for database 'TestingQuerystore', file 'TestingQuerystore_log' on file 1.

BACKUP DATABASE successfully processed 2034 pages in 0.090 seconds (176.562 MB/sec).

Outcome: Succeeded

Duration: 00:00:00

Date and time: 2016-11-16 10:22:47
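If you are running this from cron rather than interactively, one simple pattern is to capture everything to a dated log file and scan it for failures. A sketch, with placeholder file locations:

# Run the backup and capture stdout and stderr to a dated log file
./userbackup.sh > "$HOME/userbackup_$(date +%Y%m%d).log" 2>&1

# sqlcmd -b makes the script exit non-zero on a T-SQL error; grepping the log is a second check
grep -i "error" "$HOME/userbackup_$(date +%Y%m%d).log"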

This is a quick primer on getting started with Linux. In the coming months, you'll learn more about being a DBA on Linux.


Exporting Masked Data with Dynamic Data Masking


Dynamic Data Masking is a presentation-layer feature that was added to Azure SQL Database and SQL Server 2016. In a nutshell, it prevents end users from seeing sensitive data, while letting administrators expose some of it (e.g., the last four digits of a social security number) for verification purposes. I'm not going to focus too much on the specifics of data masking in this post; that's a different topic. This post is about how, once you have a masking strategy, you can protect your sensitive data as it goes to other environments.

Well, at PASS Summit, both in our booth and during my presentation on security in Azure SQL Database, another idea came up: exporting data from production to development without releasing any sensitive data. This is a very common scenario; many DBAs have to export data from prod to dev, and frequently it is done in an insecure fashion.

Doing this requires a little bit of trickery, as dynamic data masking does not apply to administrative users, so you will need a second, non-privileged user.

First step—let’s create a database and a masked table.

CREATE DATABASE DDM_Demo
GO

USE DDM_Demo
GO
CREATE TABLE Membership
(MemberID int IDENTITY PRIMARY KEY,
FirstName varchar(100) MASKED WITH (FUNCTION = 'partial(1,"XXXXXXX",0)') NULL,
LastName varchar(100) NOT NULL,
Phone# varchar(12) MASKED WITH (FUNCTION = 'default()') NULL,
Email varchar(100) MASKED WITH (FUNCTION = 'email()') NULL);

INSERT Membership (FirstName, LastName, Phone#, Email) VALUES
('Roberto', 'Tamburello', '555.123.4567', 'RTamburello@contoso.com'),
('Janice', 'Galvin', '555.123.4568', 'JGalvin@contoso.com.co'),
('Zheng', 'Mu', '555.123.4569', 'ZMu@contoso.net');

-- Run this in the DDM_Demo database so TestUser can read the table and export the schema
CREATE LOGIN TestUser WITH PASSWORD = 'P@ssw0rd!';
CREATE USER TestUser FOR LOGIN TestUser;
GO

GRANT VIEW DEFINITION TO TestUser;
GO
ALTER ROLE db_datareader ADD MEMBER TestUser;
GO
ALTER ROLE db_datawriter ADD MEMBER TestUser;
GO
Next, I'll log in as this user and select from the Membership table.

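If you don't want to open a separate connection just to test the mask, you can also impersonate the user from your admin session. A minimal sketch using EXECUTE AS, which returns FirstName, Phone#, and Email masked:

-- Impersonate TestUser so the masking functions are applied to the result set
EXECUTE AS USER = 'TestUser';
SELECT MemberID, FirstName, LastName, Phone#, Email
FROM Membership;
REVERT;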

From here, I'm going to take an export of the database as TestUser. You can do this by selecting the Export Data-tier Application option from the Tasks menu in Management Studio.


I won’t bore you with clicking through the process, but this will give you an export of your database, with the data masked. Your next step is to import the .bacpac file you created. In this case I’m going to the same instance, so I changed the database name.

Right click on “Databases” in SSMS and select “Import Data-tier application”. Import the file you created in the previous step.
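If you would rather script this than click through the wizards, sqlpackage can do the same export and import from the command line. This is only a sketch; the server name, file path, and target database name are placeholders, and the important part is that the export connects as TestUser so the masked values are what end up in the .bacpac:

sqlpackage /Action:Export /SourceServerName:localhost /SourceDatabaseName:DDM_Demo /SourceUser:TestUser /SourcePassword:P@ssw0rd! /TargetFile:DDM_Demo.bacpac

Then import the file under a new database name:

sqlpackage /Action:Import /SourceFile:DDM_Demo.bacpac /TargetServerName:localhost /TargetDatabaseName:DDM_Demo_Masked /TargetUser:sa /TargetPassword:P@ssw0rd!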

Now try selecting as your admin user.


Boom, you’ve exported and imported masked data in your lower environments.

Circles and Squares–What Do They Mean in the Query Store?

Denny Cherry and I are on a tuning mission this week, and fortunately the customer has SQL Server 2016, so we don't have to waste a lot of time finding the problematic queries in the system. However, we were a little bit confused by a graphic we saw in the Query Store today.

[Screenshot: Query Store plan summary showing plan 44 represented by both a circle and a square]

You will note that plan 44 has two durations, and even though it has a single plan id, the table to the right of the plan summary window indicates two plans. You will also note that one of the plans is represented by a circle, and the other by a square. So what does this mean?

If the icon is a circle, that means the query completed. If the icon is a square, that represents a query that was cancelled but still had a plan generated for it. In this case, I cancelled the query in SSMS.

We can confirm by looking in the Query Store catalog views.
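For example, joining sys.query_store_plan to sys.query_store_runtime_stats shows the execution type for each run (the plan_id filter of 44 just matches the plan from the screenshot above):

SELECT p.plan_id,
       rs.execution_type,
       rs.execution_type_desc,
       rs.count_executions
FROM sys.query_store_plan AS p
JOIN sys.query_store_runtime_stats AS rs
    ON rs.plan_id = p.plan_id
WHERE p.plan_id = 44;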


The second run of the query shows as aborted and is represented as such in the report.
