Query Store and Availability Groups—Force Plan on Secondary Replicas

I'm still working through some challenges with inconsistent performance between a primary and secondary replica, so I've been waist deep in undocumented system views looking at temporary statistics. One of the things I thought about doing was taking advantage of the Force Plan option in the Query Store in SQL Server 2016. If you are not familiar with this feature, it allows you to force a "preferred" execution plan. In this scenario, our query was running in about 20-30 seconds on the primary, and 20-30 minutes on the secondary. The plans were reasonably close, but I wanted to see what would happen if I forced a plan on the primary.

Primer about the Query Store and Availability Groups

Since readable secondary replicas are read-only, the query store on those secondary replicas is also read-only. This means runtime statistics for queries executed on those replicas are not recorded into the query store; all the stats you see there come from the primary replica. However, I wasn't sure what would happen if I forced a plan on the primary—would the secondary replica honor that plan?

Let’s Find Out

The first thing I did was to query the query store catalog views to verify that the plan was forced.
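For reference, forcing a plan and then verifying it through the catalog views looks roughly like this; the query and plan IDs below are hypothetical placeholders, not the ones from my environment.

-- force the preferred plan (hypothetical IDs)
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 113;

-- confirm the plan is marked as forced in the Query Store catalog views
SELECT qsq.query_id, qsp.plan_id, qsp.is_forced_plan, qsp.force_failure_count, qsp.last_force_failure_reason_desc
FROM sys.query_store_query qsq
    INNER JOIN sys.query_store_plan qsp ON qsq.query_id = qsp.query_id
WHERE qsp.is_forced_plan = 1;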


I have two copies of the forced plan. If I generate an estimated query plan on the primary, I see that the plan is forced. You can see this by looking for UsePlan in the XML of the plan.


I did the same thing on the secondary (in the case of the secondary, we are looking at the actual plan, but it doesn’t matter).


You will note that there is no UsePlan. There are extended events and a catalog view that reflect plan forcing failures (Grant Fritchey wrote about this behavior here). While I wouldn't expect the catalog view to get updated, I was hoping that the extended event might fire. It did not.
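For anyone who wants to watch for this themselves, a minimal sketch of that event session is below; I believe the event is named query_store_plan_forcing_failed, but verify it against sys.dm_xe_objects on your build.

-- sketch: capture plan forcing failures to an event file
CREATE EVENT SESSION [QS_PlanForcingFailures] ON SERVER
ADD EVENT sqlserver.query_store_plan_forcing_failed
ADD TARGET package0.event_file (SET filename = N'QS_PlanForcingFailures');

ALTER EVENT SESSION [QS_PlanForcingFailures] ON SERVER STATE = START;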

Conclusion

The query store, as awesome as it is, doesn't really do much for you on a readable secondary replica. It does not force plans there, nor does it record any of your runtime statistics.

Thanks to Grant Fritchey and Erin Stellato for helping with this post!

The Curious Case of the HTDELETE Wait Type

I was working with a client this week and we encountered very long HTDELETE waits on "insert as select" queries that were part of their data delivery process. This wait type isn't documented very well; SQLskills has it documented here and mentions this:

“Typically these waits occur when queries involve columnstore indexes, but they can also occur without columnstore indexes being involved if a hash operator runs in batch mode.”

Nacho from Microsoft also has a blog post on it here. My theory was that a bad hash join was taking place and causing the wait.

Isolating the Plan

The thing that was very curious about the situation is that the waits were only occurring on the readable secondary replica. At first, I tried to examine the query store to understand whether there were multiple execution plans for a given query. The one problem with that is that the readable secondary copy of the data is read-only, which means on the secondary replica you only see the query store data from the primary replica. If there was a plan specific to the secondary, I'd have to gather it from the plan cache on the secondary. (Thanks to Erin Stellato (b|t) for this idea.) There was one other problem—the code in question was executing as dynamic SQL from a stored procedure, which meant it was always getting a new execution plan.

Ultimately I think any thought of the readable secondary having a vastly different plan was a red herring. Statistics are going to be the same on both instances, and if there were a missing statistic on the secondary, SQL Server would create it in tempdb. Anyway, columnstore indexes don't use statistics in the traditional sense.
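As a side note, those secondary-created statistics are flagged in the catalog, so something along these lines (run on the secondary) will show them:

-- temporary statistics created on a readable secondary live in tempdb and are flagged as such
SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name AS stats_name,
       s.is_temporary
FROM sys.stats s
WHERE s.is_temporary = 1;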

Fortunately, I was able to catch a query in the process of waiting on HTDELETE, so I no longer had to look for the needle in the haystack and could get to tuning the plans. I was able to grab the SELECT part of the query and generate an estimated plan on both the primary and secondary nodes. The plans were virtually the same on both nodes, with just a minor difference in memory grant between them.
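If you need to catch one of these in flight yourself, a sketch along these lines will pull the statement text for any request currently waiting on HTDELETE:

-- find requests currently waiting on HTDELETE and grab their statement text
SELECT r.session_id,
       r.wait_type,
       r.wait_time,
       t.text
FROM sys.dm_exec_requests r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.wait_type = N'HTDELETE';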

 


The anonymized query is below.

SELECT    Object4.Column1
FROM    Database1.Schema1.Object5 Object4
    INNER JOIN Database2.Schema1.Object6(?, ?) Object7 ON Object4.Column2 = Object7.Column3
    INNER JOIN Database2.Schema1.Object8(?) Object9 ON Object4.Column4 = Object9.Column4 
    INNER JOIN Database1.Schema1.Object10 Object11 ON Object4.Column5 = Object11.Column6
    INNER JOIN Database2.Schema1.Object12(?) Object13 ON Object11.Column7 = Object13.Column7
WHERE    1 = 1
    AND Object4.Column8 >= '01-Jan-2017'

The pattern here was that we were taking all of the rows of an ID field in a columnstore index with about 350MM rows and joining them to a function that has 3,500 rows. My gut instinct was that this was a bad match for batch mode hashing. Additionally, SQL Server was recommending I create a b-tree index on the large columnstore table. There was a key lookup in the plan that I wanted to eliminate, but my hunch was that this join was causing the waits.
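The recommended index looked something like the sketch below; the table and column names are made up here since the real ones are anonymized, but the shape is an ordinary nonclustered b-tree on the join key of the columnstore table.

-- hypothetical names; a nonclustered b-tree index alongside the clustered columnstore
CREATE NONCLUSTERED INDEX IX_BigFactTable_JoinKey
    ON dbo.BigFactTable (JoinKeyColumn)
    INCLUDE (DateColumn);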


 

The Solution

Before I created the index, the query was taking at least 2-4 minutes, when it wasn't getting hung up on the HTDELETE wait. After I created the first index, we got down to about 15 seconds. SQL Server then recommended that I create another index on one of the join tables, which brought my query time down to sub-second. The plan looked a lot more traditional and had lots of my favorite operator, the index seek.


 

The Moral of the Story

Sometimes you need nonclustered indexes on columnstore indexes. It stinks, because they do add space, but it's hard to argue with a performance gain like this. I need to email some friends on the product team to ask, but I'm theorizing that the hash join was super expensive and causing the query to hang. Anyway, the real answer is to never stop tuning and to trust your instincts.

Thanks to SentryOne for making Plan Explorer free. I used it for the screenshots and anonymization in this post.

SQL Bits Precon: SQL Server on Linux—A Brave New World

My first European trip this year includes two of my favorite bike races, the Tour of Flanders (or as my Belgian friends call it, the Ronde van Vlaanderen) and Paris-Roubaix. On the Wednesday in between, I will be doing a full day of training at SQLBits in Telford. I've had the good luck to be working with SQL Server on Linux since the very early days of the development process, and I am looking forward to sharing that knowledge with attendees.

 


In this full-day training session you will learn about the Linux operating system. Some of the topics we'll talk about include:

  • Operating system architecture
  • Security model
  • File manipulation
  • Common commands and using them together
  • Bash and Korn shells
  • Shell scripting
  • File systems and volume managers
  • Clustering in Linux

You will also learn about SQL Server-specific technologies:

  • Monitoring O/S performance
  • Proper SQL Server configuration
  • Automating deployment
  • Deploying high availability and disaster recovery

Whether you are new to Linux and UNIX and want to get started, or you can awk and sed your way through a maze, this precon will have something for you.

DBCC Clonedatabase and Very Large Databases

One of the recent feature introductions to SQL Server is DBCC CLONEDATABASE, a feature that lets you create a "data-less" clone of your database. All of the statistics and objects come into your cloned database; however, none of the data does. This is perfect for development or performance tuning exercises, where you want all the metadata but do not want the security risk of dealing with production data.

Recently I had the opportunity to use clonedatabase on a very large database. I was concerned about the size of the data files and how this would impact space on my volumes. Books Online is fairly clear, but I wanted to see for myself.

Note: All files in the target database will inherit the size and growth settings from the model database. File name convention: The file names for the destination database will follow the source_file_name_underscore_random number convention. If the generated file name already exists in the destination folder, DBCC CLONEDATABASE will fail.

My thought, in reading that, is that the same number of data files will be created in the clone, just with the size settings from model. Let's test that out.

The first thing I did was create a new database, and then add a few data files to it. I made them 20 MB, which is a different size than model—just for testing purposes.
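The setup looked roughly like this; the database name, file names, and paths below are illustrative rather than the real ones.

CREATE DATABASE CloneSource
ON PRIMARY
    (NAME = CloneSource, FILENAME = 'C:\Data\CloneSource.mdf', SIZE = 20MB),
    (NAME = CloneSource_2, FILENAME = 'C:\Data\CloneSource_2.ndf', SIZE = 20MB),
    (NAME = CloneSource_3, FILENAME = 'C:\Data\CloneSource_3.ndf', SIZE = 20MB)
LOG ON
    (NAME = CloneSource_log, FILENAME = 'C:\Data\CloneSource_log.ldf', SIZE = 20MB);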


Next, I ran the clone database command.
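The command itself just takes the source and target database names (continuing with my illustrative names):

DBCC CLONEDATABASE (CloneSource, CloneSource_Clone);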


Then I connected to the clone and looked at the data files.
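A query against sys.database_files in the clone is a quick way to check; something like this:

USE CloneSource_Clone;
GO
-- size is stored in 8 KB pages, so divide by 128 to get MB
SELECT name, physical_name, size / 128 AS size_mb
FROM sys.database_files;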


I can see that all of the files were created, in the same location as the files on the source database, but with the size settings of model. While this shouldn't be a big deal for most, if you do like I recommend and make model a reasonable size for your environment, and you happen to be tight on drive space, you could fill up a volume. So just be aware when using CLONEDATABASE, particularly with databases that have a lot of data files in them.

An “Ask” for Microsoft—A Global Price List

And yes, I just used ask as a noun (I feel dirty); I wouldn't do that in any other context but this one. In reviewing my end of year blog metrics, my number one post from last year was one that listed the list price of SQL Server. I wrote that post because a) I wanted clicks and b) I knew what a pain it was to find the pricing in Microsoft documents. However, the bigger issue is that to really figure out what SQL Server costs, you need to go to another site to get Windows pricing, and probably another site to find out what adding System Center to your server might cost.

This post came up because Denny and I were talking the other night, as someone had posted to the Data Platform MVP list asking how much the standalone R Server product cost. We found a table on some Microsoft site:

[Screenshot: a Microsoft pricing table that lists R Server simply as "Commercial Software"]

I’m not sure what math is required to translate “Commercial Software” into a numeric value, but it is definitely a type conversion and those perform terribly. Eventually I found this on an Azure page:

This image is charged exactly like SQL Server 2016 Enterprise image, but it contains no Database elements and has the core ScaleR and DeployR functionality optimized for Windows environments. For production workloads we recommend that you use a virtual machine size of DS4 or higher.

This leads me to believe that R Server has the same pricing as SQL Server, but with the documents I have I am not certain of that fact.

What Do I Want?

What I want is pricing.microsoft.com, a one-stop shop where I can find pricing for all things Microsoft, whether they be Azure, on-premises, or software as a service. At worst, it should be one click from the product name to its pricing page. Ideally, I'd like it all in a single table, but let's face it, software pricing can be complex and each product probably needs its own page with pricing details.

The other thing that would be really cool, and this is more of an Azure thing, is to have pricing data built into the API for deploying solutions. That way I can build pricing-based intelligence into my automation code, to roll out cost-optimized solutions for Azure.

Anyone else have feature suggestions?

Updated: Jason Hall has a great comment below that I totally forgot about. Oracle has a very good price list (it definitely wins the number-of-commas award) that is very easy to access. So, dear readers in Redmond: Oracle does it, so you should too!

Updated: There is some of this available in Azure. It’s not perfect though. https://msdn.microsoft.com/en-us/library/azure/mt219004?f=255&MSPPError=-2147217396. Amazon just announced enhancements to their version of this service. https://awsinsider.net/articles/2017/01/09/pricing-notifications.aspx

SQL Server on Linux Clustering—A Few Other Notes

So I was chatting with fellow MVP Allan Hirt (b|t) about the cluster build that I wrote about yesterday, and I had a few more realizations about the Linux HA process as it stands right now. I haven't talked to the Linux product team at Microsoft about this, but I hope to in the near future to get a better idea of where things are headed. So these are my notes as of now, strictly relating to failover cluster instances (FCIs); AlwaysOn Availability Groups are coming, but are not in the latest CTPs of SQL Server on Linux.

It was faster than building a Windows cluster

It took me a while, I laughed, I cried, I cursed a lot, but if I look at the time it took for me to actually build the cluster and install SQL Server, it was a much faster process. Much of this comes down to the efficiency of the SQL Server installation process on Linux, which is as simple as running yum install mssql-server (mostly). Which leads me to my next point…

Installation options would be nice

The cluster building process is a little kludgy. Basically, you install two standalone instances of SQL Server, then remove the data files from one of them and copy them into your NFS share. Having the option to do the equivalent of an "Add Node" install would mean you wouldn't need to worry about cleaning up your second node.

There’s no cluster validation, explicitly

This is a bit scarier, or easier, depending on your viewpoint. There are tests at various parts of the process to make sure things are working. For example, the first step of building your Linux cluster is to authorize the nodes to take part in the cluster, which validates certain security and network settings. However, the storage validation consists of starting and stopping SQL Server on each node to make sure it can talk to the storage and start up. Given that Microsoft doesn't own the clusterware for this solution, I'm not sure how much they can enhance that, or if they will. This is a good open question.

There's no DNS

(Happy Late Birthday, Kris!) One interesting thing I realized after talking to Allan was that I did all of my networking setup through the /etc/hosts file on each individual node. I remember doing this for RAC, and I think it may be a requirement of Pacemaker, but you will still want to make a DNS entry for your cluster identifier. When you do this on Windows, if you are using Active Directory for DNS, the installation does this for you. Not so on Linux; you will need to do this yourself.


Get comfortable with command line and scripting

There's no cluster wizard to get you through the process. I don't think this is a huge deal—Denny and I were talking yesterday about how relatively easy it would be to script the whole process in bash (I'm holding off until I find out if Microsoft is doing this), and most Linux sysadmins are really comfortable writing bash scripts. But if you aren't comfortable with Linux and the command line, now is the time to brush up, before things go to prod.

Summary

We are in the very early days of this process, and there is much that will likely change. From a functional and conceptual perspective, this is very similar to the way a SQL Server failover cluster works on Windows, but the implementation is quite different. I'd like to see things resemble Windows a bit more, at least from a SQL Server perspective, but we'll see where the product heads.

SQL Server on Linux–Clustering

First, a word of warning on this post—if you are reading it and it isn't January of 2017, I suspect things may have changed significantly in the months going forward.


So I did it, I built a SQL Server cluster on Linux. The process is documented here on BOL; I'm not going to walk you through it (I'll probably do that in a later post), I just wanted to mention some things I ran into during the build process. First, I did this using VMware Fusion on my Mac, but I think any virtualization platform that allows virtual networks should work. Secondly, even though BOL says you need Red Hat Enterprise Linux (and you do if you are doing this in prod and require support), I was able to do all of this on CentOS, which is the free-as-in-beer version of RHEL.

In my scenario, I built three VMs: one to serve as an NFS server, and the other two to be my SQL Servers. Currently there is no clustered version of the install; it's the standard installation for standalone SQL Server on Linux, and you then point SQL Server at the NFS mount you created, which serves as your shared storage. I had an initial permissions problem writing my data files there—I did a bad thing on the NFS server and opened up the directory to the world (777), and was then able to copy files there. I'll follow up on that.

One other thing that wasn't in BOL, and that I had to troubleshoot my way through, is that just like a cluster on Windows, you have a cluster identifier and a floating IP address. I had to add those to /etc/hosts on each of my nodes to get them to resolve. The article mentions turning off fencing for non-prod environments—I had to do that in order to get failover working correctly in my environment.
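If it helps, the /etc/hosts entries I mentioned look conceptually like this; the addresses and names here are made up for illustration.

# /etc/hosts on each node (illustrative addresses and names)
192.168.10.11   sqlnode1
192.168.10.12   sqlnode2
192.168.10.20   sqlfci      # floating IP / cluster identifier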

Finally, failover was a bit wonky at first, and I had to spend too much time troubleshooting an odd problem. I wrote a Connect item for it: select @@servername and select name from sys.servers return the name of the host, and not the cluster name. I'm sure the team will fix this in the near future.

Why Are You Still Running Your Own Email Server?

One of the things I tell customers when doing any sort of architectural consulting is to identify their most important business systems. Invariably, something that gets left off that list is email. Your email is your most critical system. ERP may run your profit centers, but email keeps the business moving.

With that in mind, and given all the security risks that exist in the world (see: the Russian hacking scandal and the other email leaks of the week), it doesn't make a lot of sense for most organizations to run their own Exchange environments when Microsoft is really good at it.

I had a discussion with an attorney at a company in a heavily regulated industry recently. The attorney mentioned that, after investigating, she determined the company didn't have journaling turned on for their Exchange servers. (For you DBAs, journaling is effectively full recovery mode for Exchange—it's more complicated than that, but it's a nice analogy.) Given that we are Office 365 customers, I wanted to check the difficulty of enabling this in our environment. I found out that full e-discovery capabilities that integrate with e-discovery systems are as easy as one click of a mouse (and a credit card, to make sure you are on the right service level).

Another great security feature that has historically been painful to integrate with email logins is multi-factor authentication. Once again, this requires a mouse click or two, and your credit card. You can even quickly do things like whitelisting your office's IP address so that your users don't have to use MFA when in the office.

These features are great, but they don't even cover all the threat protection that Microsoft has built into Office 365 and Azure. You can read about that here, but Microsoft can even protect you from threats like spearphishing. (Hi Vlad!) The same goes for encryption. Don't be a news story—just be secure.

Dear Colos: Up Your Game: aka How the #$%^ do you not have fibre in stock?

Many companies use co-located data centers to house their hardware. In some cases (like the colo we at DCAC use), you pay for power, cooling, and a connection to the internet; there is no expectation of added services beyond those three things. In other cases, companies like Rackspace or Level 3 act as what are known as managed service providers (note: I'm not talking about Rackspace or Level 3 in this post, and I'm not going to name the guilty party, but if you want to know, you can reach me privately). Managed service providers offer solutions like shared storage, network management, and other value-added services beyond just a space for your servers.

Enter the Cloud

So Microsoft and Amazon are effectively playing in this space with their IaaS offerings. There's a big difference, however, as the cloud providers have invested a great deal of money in automation. The same customer I'm talking about in today's post has some Azure VMs that we are deploying. I built a VM and allocated 3 TB of SSD storage in about 10 minutes this morning. Pretty slick operation–I'm fairly certain that when I ran the PowerShell to deploy the VM, no person had to get up and do anything. When I added the storage, I'm pretty sure no SAN zoning took place, and if it did, it was a few lines of code. We had previously stayed away from Azure because it's not the most cost-effective solution for very large workloads (colos tend to have slightly better pricing on big boxes, but you get nickeled and dimed on other things).

When Your Colo Sucks

So I have two different work streams going with the colo right now. One of them is to configure a site-to-site VPN to Azure. This should be a simple operation; however, it took over a week to get in place, and only after I sent the colo the Cisco instructions on how to configure the VPN were they able to tell me that the Cisco device they had didn't support the latest route-based VPNs in Azure. So we finally get up and running, and then we discover that we can't reach the Azure VMs from certain on-prem subnets. We ask them to make a change to add those subnets, and they completely break our connection. Awesome.

The other workstream is a cluster upgrade. I wanted a new cluster node and storage, so we didn't have to do an in-place upgrade. We started this process about 3 weeks ago, hoping to do the migration on Black Friday. We had a call today to review the configuration. Turns out they had nothing in place, and aren't even sure they can get a server deployed by NEXT FRIDAY (YES–10 days to deploy a server; your job is deploying servers). I heard lots of excuses: we aren't working Thursday or Friday, we have to connect to two different SANs, and we might not have that fibre in stock. It wasn't my place to yell WHAT THE EVERLOVING $%^^ on the call, so I started live tweeting. Because that's ridiculous. Managing and deploying infrastructure was what I did for a living, and I wouldn't have had a job if it took me 10 days to deploy a server, and that wasn't even my only job. That really is the colo's only job. How the #$%^ do you not have fibre in stock? Seriously? My lab at Comcast had all the fibre I could possibly need.

Edited to add this:

This is after last month when they confused SAN snapshots with SAN clones (when it takes 4 hours to recover from a “snapshot” it’s a clone) and presented production cluster storage (that was in use) to a new node. Awesome!!!

Why the Cloud will Ultimately Win

Basically, when it comes to repetitive tasks like deploying OSes and setting up storage, software is way better than humans. Yes, you need smart engineers and good design, but Azure and AWS are already 90% of the way there. Also, their service levels and response times are much better, because everything is standardized, which makes troubleshooting and automation much easier.


SQL Server v.Next—Linux Preview and Ola Hallengren’s Jobs

If you watched Scott Guthrie's keynote at Microsoft Connect() this morning, your mouth is probably still on the floor. There was a lot of big news:

  • Nearly all enterprise edition features in SQL Server 2016 SP1 are in standard edition
  • There is going to be a v.Next of SQL Server and you can play with CTP1 of it today
  • SQL Server on Linux CTP1 can now be downloaded and installed

The biggest news of the day is the standard edition news—this is going to be huge for independent software vendors who build their applications on top of SQL Server. While this is amazing news, I wanted to talk about something a little more near and dear to my heart—SQL Server on Linux. You can learn how to install SQL on Linux here.
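For the impatient, the short version of the install, once the Microsoft repository is registered for your distro, is roughly the two commands below. The repo URL and the name of the setup script have shifted between CTPs, so treat this as a sketch and follow the current documentation.

# install the engine (after registering the Microsoft yum repo for your distro)
sudo yum install -y mssql-server

# accept the EULA and set the sa password; the setup script name has varied across CTPs
sudo /opt/mssql/bin/mssql-conf setup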

So SQL on Linux

I’ve had the good fortune of being involved in the private preview for a good while now. Here’s the requisite screenshot of @@version you’ve seen in so many demos.

[Screenshot: @@version output from SQL Server v.Next CTP1 running on Linux]

In this case I'm running a VM on my Mac running CentOS. I also have a VM running Ubuntu and SQL Server running in a Docker container. If you want to go full native, you can use Visual Studio Code and run without Windows at all. Aaron Bertrand has a great post on how to make this work.

Management of SQL Server on Linux

Aside from a couple of DMVs that show you Linux-specific performance information, everything in SQL Server on Linux is the same. Some of the HA and DR functionality is not complete, and the SQL Agent is not done; however, you can use cron (and if you're familiar with Linux, you should learn about cron—I'll have another post on that next week).

Ola Hallengren’s Jobs on Linux

Many DBAs use Ola Hallengren's jobs to manage backups and maintenance on their servers. The first thing you'll want to do is download Ola's scripts to your machine. You can do this using the curl command in Linux. In this scenario, I'm redirecting the output of the curl command to a file called ola.sql.

curl https://ola.hallengren.com/scripts/MaintenanceSolution.sql > ola.sql

Because of the (current) behavior of the SQL Agent, you will need to set the CREATE_JOBS parameter in Ola's script from Y to N. I used vi to do this—you can read a primer on vi here.
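If you'd rather not hand-edit the file, a sed one-liner will do it too. This assumes the parameter in the current version of Ola's script is named @CreateJobs, so double check the variable name before you run it.

# flip the job creation flag from Y to N in place (assumes the @CreateJobs parameter name)
sed -i "s/@CreateJobs[[:space:]]*= 'Y'/@CreateJobs = 'N'/" ola.sql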

After that—you’ll want to install Ola’s scripts on your SQL Server instance. You have sqlcmd on your Linux install and here you will use the input file flag.

sqlcmd -S . -Usa -Pp@ssw0rd! -iola.sql

You can do this from Management Studio on your Windows machine, or you can just do this from the command line—the nice part about the command line is that automation should be easy and straightforward.

The next thing we want to do is put a backup command in a shell script. In this case, I'm just going to grab the backup example from Ola's site. Use your favorite text editor (mine is vi, because I hate myself) and create a file called userbackup.sh.

sqlcmd -S localhost -U SA -P p@ssw0rd!! -d master -Q "EXECUTE [dbo].[DatabaseBackup] @Databases = 'USER_DATABASES', @BackupType = 'FULL', @Verify = 'Y', @CleanupTime = 48, @CheckSum = 'Y', @LogToTable = 'Y'" -b

After you save this file, you’ll want to make it executable. I’m going to use the chmod command to do that.

chmod 770 userbackup.sh

Now this file can be executed. You can do this using the ./ syntax. The output will be returned to the screen; if you are automating the process, you can redirect the output to a file which you can check for errors.

Date and time: 2016-11-16 10:22:47

Command: BACKUP DATABASE [TestingQuerystore] TO DISK = N'C:\Data\helsinki\TestingQuerystore\FULL\helsinki_TestingQuerystore_FULL_20161116_102247.bak' WITH CHECKSUM, NO_COMPRESSION

Processed 2032 pages for database 'TestingQuerystore', file 'TestingQuerystore' on file 1.

Processed 2 pages for database 'TestingQuerystore', file 'TestingQuerystore_log' on file 1.

BACKUP DATABASE successfully processed 2034 pages in 0.090 seconds (176.562 MB/sec).

Outcome: Succeeded

Duration: 00:00:00

Date and time: 2016-11-16 10:22:47
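If you want to automate this rather than watch the output scroll by, a reasonable sketch is to redirect the script's output to a log file and schedule it with cron; the paths and schedule below are placeholders.

# run the script and keep the output for review
./userbackup.sh > ./userbackup.log 2>&1

# or schedule it nightly at 2 AM via crontab -e (placeholder path)
0 2 * * * /home/mssql/scripts/userbackup.sh >> /home/mssql/scripts/userbackup.log 2>&1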

This is a quick primer on getting started with Linux—in the coming months, you'll learn more about being a DBA on Linux.

