SQL Server on Linux Licensing

Now that SQL Server 2017 has gone GA and SQL Server on Linux is a reality, you may wonder how it affects your licensing bill. Well, there’s good news—SQL Server on Linux has exactly the same licensing model as on Windows. And Docker, if you are using it for non-development purposes (pro tip: don’t use Docker for production, yet), is licensed just like SQL Server in a virtual environment: you can either license all of the cores on the host, or just the cores that your container is using.

But What About the OS?

So let’s compare server pricing:

  • Windows Server 2016 Standard Edition: $882
  • Red Hat Enterprise Linux with Standard Support: $799
  • SUSE Linux Enterprise Server with Standard Support: $799
  • Ubuntu: free as in beer

As you can see, the costs are broadly similar whether you are on Windows or Linux. I’m not going to get into the religious war over which distro is best, or whether you need paid support; I’m simply putting the numbers out there.

HA and DR

There’s still more to come on the HA and DR story, so stay tuned. In the meantime, assume the standard free license (with Software Assurance) for an idle secondary still applies.

Ignite—Important Data Announcements You Probably Missed

So, if you have been living under a rock for the last year (and yes, he really is president), you may not have heard that SQL Server 2017 was launched yesterday, and it runs on Linux! If you want to know more about that, you can read my article over at Redmond mag. SQL Server 2017 will be generally available next Monday, October 2nd. But there were a lot of other Microsoft data platform announcements that you might have missed; I know I did, and I watched the keynote and stood in the SQL Server booth all day.


  • Azure SQL Database Machine Learning Support—If you are using a premium Azure SQL Database, you can now take advantage of R and most of the goodies that are available in SQL Server. You can learn more about that here. Python isn’t there yet, and some options are limited, but it’s a nice start.
  • SSIS as a Service Public Preview—Anyone who has tried to use Azure Data Factory… well, hi, Meagan! There’s a new release of ADF that will support running SSIS packages, which should make the service much friendlier to use. You can learn more about that here.
  • Public Preview of Managed Instances—We’ve been hearing a little bit about this for a while, and I’ve been fortunate enough to be part of the private preview. This is a hybrid PaaS/IaaS product that will be easy to migrate to. It will also have the added bonus of allowing you to bring your own license to the party. This isn’t quite available yet, but based on yesterday’s announcement, it should be soon. There’s a little bit of info in this post from Microsoft.
  • SQL Data Warehouse BIG SCALE—Do you have a Netezza? Are you sick of paying them $500k/yr for support? Azure SQL Data Warehouse has a solution for you (bring a black card only, please). You can go up to 18,000 DWUs using NVMe hardware in the cloud. This offering is aimed at large customers who want to retire expensive on-premises appliances and get the flexibility that the cloud offers.
  • Azure SQL Database vNet Support—This is something that a lot of customers have been asking for in Azure SQL DB: the ability to have no public-facing endpoint (even though you could already lock it down), giving you total isolation. The bigger benefit is that you get more granularity over what connects to your SQL DBs from within Azure. You can learn more about the preview here.
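On the “lock it down” point above: even without vNet support, Azure SQL DB firewall rules can already be managed in T-SQL. A minimal sketch (the rule name and IP addresses are hypothetical):

```sql
-- Run inside the target Azure SQL database.
-- Allow a single office IP range and nothing else (values are hypothetical).
EXECUTE sp_set_database_firewall_rule
    @name = N'OfficeOnly',
    @start_ip_address = '203.0.113.10',
    @end_ip_address   = '203.0.113.20';

-- Review the current rules, and remove one when it is no longer needed.
SELECT * FROM sys.database_firewall_rules;
EXECUTE sp_delete_database_firewall_rule @name = N'OfficeOnly';
```

Database-level rules like these travel with the database, which makes them friendlier for geo-replication than server-level rules.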

There are a lot of announcements this week, and not a lot of time in the keynotes to get to them all. So I thought this might be helpful.

SQL Server Management Studio—Presenter Mode

The tools team at Microsoft has been doing an awesome job of delivering new features into SQL Server Management Studio, Visual Studio Code, and even the command line tools and drivers. And stay tuned—there’s more news coming soon! Anyway, one of my favorite new additions to SSMS is presenter mode. Yes, you no longer have to worry about custom configuring 12 different parts of SSMS and using ZoomIt (you should still use ZoomIt). So let’s see how this works. Here we have normal SSMS 17.2:


In the quick launch box type “PresentOn”


After typing that, we can see that our results, object explorer, and our query are all in a much larger font. To turn this off, you can type “RestoreDefaultFonts” into the quick launch bar. In my experience with SSMS 17.2, you may also need to restart once or twice to get everything back to normal. This part of SSMS is still pretty new, so I’m thinking it will improve over time. I would also expect a “PresentOff” command at some point.

SQL Server 2017—SELECT INTO With Filegroup Support

One of the things I really appreciate Microsoft doing in recent releases of SQL Server is fixing some of the longstanding Connect items. These aren’t necessarily bugs, but design gaps—a good example was SQL Server 2016 adding the ability to truncate an individual partition. Some of these are minor, but they have a real impact on the usability and functionality of the RDBMS.
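For reference, that SQL Server 2016 partition-truncation fix looks like this (the table name and partition numbers are hypothetical):

```sql
-- SQL Server 2016+: truncate individual partitions instead of the whole table.
-- dbo.Sales and the partition numbers here are hypothetical.
TRUNCATE TABLE dbo.Sales WITH (PARTITIONS (2));

-- Lists and ranges of partitions are also supported:
TRUNCATE TABLE dbo.Sales WITH (PARTITIONS (1, 3 TO 5));
```

Before this existed, emptying one partition meant switching it out to a staging table and truncating that, so this is a nice quality-of-life fix.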


The feature I am highlighting here is the SELECT INTO syntax for SQL Server. In the past, this syntax could only create the new table on the user’s default filegroup. There were workarounds, like changing the user’s default filegroup, but that was an extra step that shouldn’t have been needed. Starting with SQL Server 2017, T-SQL supports the ON syntax in this command:

SELECT * INTO dbo.NewProducts ON SecondaryFG FROM Production.Product;

This isn’t huge, but it is a very nice thing that Microsoft has fixed.
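Putting it together, here’s a minimal sketch (the database name, file path, and table names are hypothetical; the filegroup must exist and contain at least one file before SELECT INTO can target it):

```sql
-- Create the target filegroup and give it a file (names/paths are hypothetical).
ALTER DATABASE Sandbox ADD FILEGROUP SecondaryFG;
ALTER DATABASE Sandbox ADD FILE
    (NAME = 'SecondaryFG_1', FILENAME = 'D:\Data\SecondaryFG_1.ndf')
    TO FILEGROUP SecondaryFG;

-- SQL Server 2017+: create the new table directly on that filegroup.
SELECT * INTO dbo.NewProducts ON SecondaryFG
FROM Production.Product;
```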

Speaking in Foreign Countries—A Few Tips

I recently finished up a big trip where I spoke in three countries—India, Singapore, and Australia. Travel is wonderful: you get to meet people from around the world, see new places, and gain different perspectives. However, there are some things to be aware of when speaking internationally. A few of these are tips you’ll find on travel sites; others are just lessons learned from experience.

  • Always know what power adapter you need and have multiples. Amazon is a great source for these, and in some countries it helps to ask a native what type of adapter you need. You may need more than your destination’s adapter—be aware of what airport you are connecting through.
  • When giving international talks, try to remove local humor from your slides; pick more universal humor, or humor specific to the region you are visiting. The world is smaller than it used to be, but trust me, few people in the rest of the world know about things like American football.
  • Be ready for any sort of projector. Have two of every dongle, and be prepared to present at a limited resolution. This really applies anywhere, but you truly never know what you are going to get.
  • Have fun—take in the local culture, food, and drink.
  • My one crutch for international travel is that I like to stay in an American hotel whenever I can. It’s nice knowing what I’m going to get when I go there.

Speaking internationally is incredibly rewarding, but it’s a lot easier when you prep.

Rant: Sign Your God &#$%^ Software

Not that I normally like to rant, but security is really important, and people are bad at security. One of the ways we secure things is with trusted certificates: as a software developer who has a company, something you need to do is buy a certificate to “sign” your software. Your certificate is issued by a third party who performs some level of verification, which indicates that you are a “real” software developer. It is a key part of the trust relationship when installing software. In fact, experts use unsigned software as an indication of malware.


So who is the victim of my rant this week? Power Tap, the company that makes the power meter on my bicycle. They also make software that lets me download my cycling data. This software has two problems:

  1. It requires Java. Boo, hiss.
  2. It is unsigned. More booing, and hissing.

So, sign your software, so people don’t think it’s a virus. On my Mac I have to go into security settings and do bad things to even install this software. And then I have to do even worse things, like installing Java, in order to use it.

An Unusual R Services Problem

I have had the good luck of having a customer that was on board with SQL Server 2016 very early—we started testing in March of 2015, and went live in August of 2015. In fact, their home directory refers to vNext instead of 2016. This customer also adopted what felt like most of the new feature list: temporal tables, columnstore, PolyBase, and R Services, among others. You can read more about this great use of SQL Server 2016 here. Anyway, we had R up and running, and it ran for a while.
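For context, R Services is invoked through sp_execute_external_script; a minimal smoke test looks something like this sketch, and once the problem below appeared, even a call this trivial would fail:

```sql
-- Minimal R Services smoke test: echo the input data set back out.
EXEC sp_execute_external_script
    @language = N'R',
    @script = N'OutputDataSet <- InputDataSet;',
    @input_data_1 = N'SELECT 1 AS hello_r'
WITH RESULT SETS ((hello_r INT));
```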

Recently (and unfortunately I don’t have an exact date on when this started, though it was around the time Service Pack 1 was installed), R scripts began failing with the following error:

Msg 39012, Level 16, State 1, Line 10
Unable to communicate with the runtime for ‘R’ script. Please check the requirements of ‘R’ runtime.
STDERR message(s) from external script:

DLL ‘C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER1601\MSSQL\Binn\sqlsatellite.dll’ cannot be loaded.
Error in eval(expr, envir, enclos) :
DLL ‘C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER1601\MSSQL\Binn\sqlsatellite.dll’ cannot be loaded.
Calls: source -> withVisible -> eval -> eval -> .Call
Execution halted
STDOUT message(s) from external script:

Failed to load dll ‘C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER1601\MSSQL\Binn\sqlsatellite.dll’ with 182 error.

I troubleshot this with some colleagues at Microsoft, and we weren’t able to resolve it. We tried a couple of different approaches, including reinstalling CU1, but all to no avail. Yesterday, I got on a call with a couple of folks on the product team to try and isolate the problem. We looked at binaries and timestamps, and it looked like everything matched up. Then, my friend Arvind Shyamsundar (b|t) suggested we run procmon on the server.


There, Arvind noticed some odd calls to sqlos.dll in the shared directory. We then looked at add/remove programs and found the following item installed:

[Screenshot: the legacy T-SQL compiler service listed in add/remove programs]

The T-SQL compiler service, a legacy of CTP 2.3, was still there, and as soon as we uninstalled it, our problems went away. So, if you happen to be running on a server that you’ve upgraded since the very early versions of SQL Server 2016, you may see this issue.

Just to give credit to Microsoft and the current SQL Server install process, this server has had nearly every release of SQL Server 2016 on it (we’re behind a couple of CUs), and this is the only issue we’ve had. Thanks again to Arvind and UC for solving this tough issue.

Analyzing Your Dump Files

I’m blogging about this because a) it’s something really awesome that the SQL Server team built, and b) it seems to have terrible SEO; it took me like three Google searches to find the page. With the introduction of SQL Server Management Studio 17, the Tiger team at Microsoft built a plugin that allows you to debug and resolve memory dumps you may have encountered during otherwise normal operations. This is really awesome, as it is something that usually requires a support case with CSS.

For those of you wearing aluminum hats, this does require you to upload the dump file to Azure, where it is analyzed for free (as in beer) on Microsoft’s software. You can even choose your region if you have data provenance concerns. And according to this blog post, the memory dumps are stored in accordance with Microsoft’s Privacy Policy.

You will need SSMS 17 for this, as well as to install the plug-in, which you can get here.


After that, you can quickly get feedback on your dumps. Microsoft has even built an API, so if you want to build something automated to upload your dump files using Python or PowerShell, you can.

The Self-Tuning Database?

There was a lot of talk on Twitter over the weekend about automation and the future of the DBA role. I’ve spoken frequently on the topic, and even though the PASS Summit program committee has had limited interest in my automation sessions, they have been amongst my most popular talks at other large conferences. Automating mundane tasks makes you a better DBA, and escalating the level of those tasks allows you to focus on activities that really help your business, like better data quality, and watching cat videos on YouTube.

But Tuning is Hard

Performance tuning has always been something of a black art. I remember that Oracle called 9i the self-tuning database, because we no longer had to manually calculate how many blocks (pages) of memory were allocated to each pool. That was a joke—those databases still required a lot of manual tuning effort. However, that was then, and we’re in the future now. We’ll call the future August. Stay with me here; I’m going to talk about some theories I have.

So It’s August 2017

Let’s say you were a vendor of a major RDBMS, who also happened to own a major hyperscale cloud, and you had invested heavily in collecting query metadata in the last two releases of your RDBMS. Let us also accept the theory that the best way to get an optimal execution plan is to generate as many potential execution plans as possible. Most databases attempt a handful of plans before picking the best available one—this is always a compromise, as generating execution plans involves a lot of math and is very expensive from a CPU perspective. Let us also posit that, as owner of the hyperscale cloud, you have a lot of available processing power, and your users have opted in to letting you review their metadata for performance purposes. Additionally, you’ve taken care of all of the really mundane tasks like automating backups, high availability, consistency checks, etc. So now it’s on to bigger fish.


Still With Me?

Ok, so we have all the tools in place to build our self-tuning database, so let’s think about what we would need to do. Let’s take a somewhat standard heuristic I like to use in query tuning: if a query takes more than 30 ms to execute, or is executed more than 1,000 times in a day, we should pay attention to it for tuning purposes. That’s a really big filter—we’ve already narrowed down the focus of our tuning engine (and we have this information in our runtime engine, which we’ll call the Query Boutique). We have also had our users opt in to using their metadata to help improve performance.
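Against SQL Server 2016’s Query Store, a rough sketch of that heuristic might look like the query below (the thresholds are approximations of the 30 ms / 1,000-executions rule: avg_duration is stored in microseconds, and count_executions is per collection interval, not per day):

```sql
-- Flag queries averaging over 30 ms, or executed heavily, per Query Store interval.
SELECT qt.query_sql_text,
       rs.avg_duration / 1000.0 AS avg_duration_ms,
       rs.count_executions
FROM sys.query_store_runtime_stats AS rs
JOIN sys.query_store_plan AS p
    ON rs.plan_id = p.plan_id
JOIN sys.query_store_query AS q
    ON p.query_id = q.query_id
JOIN sys.query_store_query_text AS qt
    ON q.query_text_id = qt.query_text_id
WHERE rs.avg_duration > 30000   -- microseconds, i.e. 30 ms
   OR rs.count_executions > 1000
ORDER BY rs.avg_duration DESC;
```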

So we identify the problem queries in your database. We then export the statistics from your database into our back-end tuning system. We look for (and attempt to apply) any missing indexes, to evaluate the benefit they would provide. We then attempt to generate all of the execution plans (yes, all of them—this processing is asynchronous, and doesn’t need to be real time). We could even collect read/write statistics on given objects and apply a weighted value to a given index. We could then take all of this data and run it through our back-end machine learning service, to ensure that our query self-tuning algorithm was accurate, and to help improve our overall process.

We could then feed this data back into the production system as a pinned execution plan. Since we are tracking the runtime statistics, if the performance of the plan drops off, or we noticed that the statistics changed, we could force out our new execution plan, and start the whole process over again.
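The plan-pinning step already has a manual analog in SQL Server 2016’s Query Store; a sketch with hypothetical query and plan ids:

```sql
-- Pin a known-good plan for a query (ids are hypothetical; look them up in
-- sys.query_store_query and sys.query_store_plan).
EXEC sys.sp_query_store_force_plan @query_id = 17, @plan_id = 42;

-- If the forced plan later regresses, release it and let the optimizer retry.
EXEC sys.sp_query_store_unforce_plan @query_id = 17, @plan_id = 42;
```

The self-tuning engine described above would effectively automate this loop: force, monitor, unforce, re-evaluate.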

So there, I didn’t write a line of code, but I laid out the architecture for a self-tuning database. (I thought about all of this on a bike ride; I’m sure Conor could do a lot better than me.) I’m sure this would take years and many versions to come into effect. Or not at all. To be successful in this changing world of data, you need to stay ahead of the curve: learn how clouds work, how to script, how to automate, and how to add value.

Drive By #$%^ing—er, Tweeting.

Note: This post may contain some vulgarities, but no obscenity, at least as defined by the Supreme Court in Miller v. California (1973)

So, my morning started off early, with a lovely bike ride. It was nice and cool in Southern California, where I am this week, so I had a lovely 20 miles. Then I took my phone out of my pocket and was confronted with two really shitty posts. The first was on Twitter, and the second was so shallow that it may as well have been a tweet.

I love Twitter—I’ve been on for almost ten years, and as a data nerd and sports fan, it is like the end-all, be-all of services. However, for discourse, 140 characters leaves no room for any nuance or subtlety. (See the President of the United States, not that he had any nuance or subtlety to begin with.) When you legitimately want to critique something, prose works far better than 140 characters.

The First #$%—er, Tweeter

So I got back from my ride, and the first thing I saw was:

[Screenshot of the tweet]

Richie, that tweet was school on Sunday, dude. Not cool—I get that you may not like the sessions picked, or the general direction of the organization (I certainly disagree with a ton of what PASS does, and am moderately vocal about it). But when you write a tweet like that, you are basically implying that a bunch of shitty speakers submitted a bunch of shitty sessions, and the program committee organized a total shit show. You probably didn’t mean that—I personally think the new emphasis on development isn’t the right approach for a database conference. However, that’s a) not my decision, and b) a more nuanced thought than “Summit sucks, haha.”

The old saying, “if you don’t have anything nice to say, don’t say anything,” is wrong. However, if you don’t have anything constructive to say, don’t insult the 150+ speakers, volunteers, and potential sponsors who might be reading your stream.

The Second #$%—er, Blogger

I’m not linking to this guy’s shit. Because it’s shit.

[Screenshot of the blog post headline]

Here’s the gist of this post: Azure SQL Database and Azure Blob Storage are completely $%^&ing insecure because they have public IP addresses. Never mind that you can completely lock them down to specific IP addresses, and even to Azure itself. (Granted, a database or storage account that no one can access is probably of limited value.) These services are fully access controlled and have built-in security. Additionally, in the case of SQL DB, you have the option of built-in threat detection that will detect anomalous behavior like SQL injection or rogue logins.

Currently, Azure doesn’t have the ability to put your database on a VNet. I’d be shocked if the Azure team were not working on that feature. In the article, the author makes a point of Amazon having this ability for RDS. That’s cool, and probably why Microsoft is working on it. But instead of focusing on how to secure your app in a hybrid platform, he just shits on the vendor with a clickbait headline.

Wheaton’s Law

Wheaton’s Law, which also happens to be the core of our HR policy at DCAC, is simply “Don’t be a dick”. Think before you write and tweet—don’t write clickbait crap, and if you want to criticize an org, do it carefully, and write a blog post. Or send someone a DM.
