SQL Server 2017—SELECT INTO With Filegroup Support

One of the things I really appreciate Microsoft doing in recent releases of SQL Server is fixing some of the longstanding Connect items. These aren’t necessarily bugs, but design gaps—a good example came in SQL Server 2016, when the ability to truncate an individual partition arrived. Some of these changes are minor, but they have a real impact on the usability and functionality of the RDBMS.
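
As a quick illustration, that partition truncation syntax looks like this (dbo.Sales and the partition numbers here are placeholders):

-- Truncate only partition 3 of a partitioned table (SQL Server 2016 and later)
TRUNCATE TABLE dbo.Sales WITH (PARTITIONS (3));

-- Lists and ranges of partitions are also supported
TRUNCATE TABLE dbo.Sales WITH (PARTITIONS (1, 2, 4 TO 6));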

The feature I am highlighting here is the SELECT INTO syntax in SQL Server. In the past, this syntax could only create the new table on the user’s default filegroup. There were workarounds, like changing the user’s default filegroup, but that was an extra step that shouldn’t have been needed. Starting with SQL Server 2017, T-SQL supports the ON syntax in this command:

SELECT * INTO dbo.NewProducts ON SecondaryFG FROM Production.Product;
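
Here’s a fuller sketch, assuming a hypothetical Sandbox database that contains the AdventureWorks Production.Product table, and a file path that exists on your server:

-- Add a secondary filegroup and a file for it (names and path are placeholders)
ALTER DATABASE Sandbox ADD FILEGROUP SecondaryFG;
ALTER DATABASE Sandbox ADD FILE
    (NAME = SecondaryFG_1, FILENAME = 'D:\Data\SecondaryFG_1.ndf')
TO FILEGROUP SecondaryFG;

-- Create the new table directly on the secondary filegroup
SELECT * INTO dbo.NewProducts ON SecondaryFG FROM Production.Product;

-- Verify where the table landed
SELECT ds.name AS filegroup_name
FROM sys.indexes i
JOIN sys.data_spaces ds ON i.data_space_id = ds.data_space_id
WHERE i.object_id = OBJECT_ID('dbo.NewProducts');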

This isn’t huge, but it is a very nice thing that Microsoft has fixed.

Speaking in Foreign Countries—A Few Tips

I recently finished up a big trip where I spoke in three countries—India, Singapore, and Australia. Travel is wonderful: you get to meet people from around the world, see new places, and gain different perspectives. However, there are some things to be aware of when speaking internationally. A few of these tips you’ll find on travel sites; others are just lessons learned from experience.

  • Always know what power adapter you need, and have multiples. Amazon is a great source for these, and in some countries it helps to ask a local what type of adapter you need. You may need more than your destination’s adapter—be aware of what airport you are connecting through.
  • When giving international talks, remove any local humor from your slides and pick more universal (or destination-specific) references instead. The world is smaller than it used to be, but trust me, few people in the rest of the world know about things like American football.
  • Be ready for any sort of projector. Have two of every dongle, and be prepared to present at a limited resolution. This really applies anywhere, but on the road you never know what you are going to get.
  • Have fun—take in the local culture, food, and drink.
  • My one crutch for international travel is that I like to stay in an American hotel chain whenever I can. It’s nice knowing what I’m going to get when I walk in the door.

Speaking internationally is incredibly rewarding, but it’s a lot easier when you prep.

Rant: Sign Your God &#$%^ Software

Not that I normally like to rant, but security is really important, and people are bad at security. One of the ways we secure things is with trusted certificates—as a software developer with a company, one thing you need to do is buy a certificate to “sign” your software. The certificate is issued by a third party that performs some level of verification, which indicates that you are a “real” software developer. It is a key part of the trust relationship when installing software. In fact, experts use unsigned software as one indicator when identifying malware.

So who is the victim of my rant this week? Power Tap, the company that makes the power meter on my bicycle. They also make software that lets me download my cycling data. This software has two problems:

  1. It requires Java. Boo, hiss.
  2. It is unsigned. More booing, and hissing.

So sign your software, so people don’t think it’s a virus. On my Mac I have to go into the security settings and do bad things just to install this software, and then I have to do even worse things, like installing Java, in order to use it.

An Unusual R Services Problem

I have had the good luck of having a customer that was on board with SQL Server 2016 very early—we started testing in March of 2015 and went live in August of 2015. In fact, their home directory refers to vNext instead of 2016. This customer also adopted what felt like most of the new feature list: temporal tables, columnstore, PolyBase, and R Services, among other features. You can read more about this great use of SQL Server 2016 here. Anyway, we had R up and running, and it ran for a while.
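
(If you are standing R Services up yourself, a minimal smoke test looks something like the sketch below; note that enabling external scripts requires a reconfigure, and on early 2016 builds a restart of the Launchpad service.)

-- Enable external scripts on the instance
EXEC sp_configure 'external scripts enabled', 1;
RECONFIGURE;

-- Minimal R round trip: if this returns a row, the R runtime is healthy
EXEC sp_execute_external_script
    @language = N'R',
    @script = N'OutputDataSet <- InputDataSet;',
    @input_data_1 = N'SELECT 1 AS ok;';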

Recently, though, R scripts began failing with the following error. Unfortunately, I don’t have an exact date on when this started, but it was around the time Service Pack 1 was installed.

Error
Msg 39012, Level 16, State 1, Line 10
Unable to communicate with the runtime for ‘R’ script. Please check the requirements of ‘R’ runtime.
STDERR message(s) from external script:

DLL ‘C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER1601\MSSQL\Binn\sqlsatellite.dll’ cannot be loaded.
Error in eval(expr, envir, enclos) :
DLL ‘C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER1601\MSSQL\Binn\sqlsatellite.dll’ cannot be loaded.
Calls: source -> withVisible -> eval -> eval -> .Call
Execution halted
STDOUT message(s) from external script:

Failed to load dll ‘C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER1601\MSSQL\Binn\sqlsatellite.dll’ with 182 error.

I troubleshot this with some colleagues at Microsoft, and we weren’t able to resolve it. We tried a couple of different approaches, including reinstalling CU1, but all to no avail. Yesterday, I got on a call with a couple of folks from the product team to try to isolate the problem. We looked at binaries and timestamps, and everything seemed to match up. Then my friend Arvind Shyamsundar (b|t) suggested we run procmon on the server.

In the procmon trace, Arvind noticed some odd calls to sqlos.dll in the shared directory. We then looked at Add/Remove Programs and found a suspicious leftover item installed:

The T-SQL compiler service, a legacy of CTP 2.3, was still there, and as soon as we uninstalled it, our problems went away. So if you happen to be running on a server that you’ve upgraded since the very early versions of SQL Server 2016, you may see this issue.

To give credit to Microsoft and the current SQL Server install process, this server has had nearly every release of SQL Server 2016 on it (we’re behind a couple of CUs), and this is the only issue we’ve had. Thanks again to Arvind and UC for solving this tough issue.

Analyzing Your Dump Files

I’m blogging about this because A) it’s something really awesome that the SQL Server team built, and B) it seems to have terrible SEO—it took me about three Google searches to find the page. With the introduction of SQL Server Management Studio 17, the Tiger team at Microsoft built a plugin that allows you to debug and resolve memory dumps you may have encountered during otherwise normal operations. This is really awesome, as it is something that usually requires a support case with CSS.

For those of you wearing aluminum hats, this does require you to upload the dump file to Azure, where it is analyzed for free (as in beer) on Microsoft’s software. You can even choose your region if you have data provenance concerns. And according to this blog post, the memory dumps are stored in accordance with Microsoft’s Privacy Policy.

You will need SSMS 17 for this, as well as to install the plugin, which you can get here.

After that, you can quickly get feedback on your dumps. Microsoft has even built an API, so if you want to build something automated to upload your dump files using Python or PowerShell, you can.

The Self-Tuning Database?

There was a lot of talk on Twitter over the weekend about automation and the future of the DBA role. I’ve spoken frequently on the topic, and even though the PASS Summit program committee has had limited interest in my automation sessions, they have been amongst my most popular talks at other large conferences. Automating mundane tasks makes you a better DBA, and escalating the level of those tasks allows you to focus on activities that really help your business, like better data quality (and watching cat videos on YouTube).

But Tuning is Hard

Performance tuning has always been something of a black art. I remember Oracle calling 9i the self-tuning database, because we no longer had to manually calculate how many blocks (pages) of memory were allocated to each pool. That was a joke—those databases still required a lot of manual tuning effort. However, that was then, and we’re in the future now. We’ll call the future August. Stay with me here; I’m going to talk about some theories I have.

So It’s August 2017

Let’s say you were the vendor of a major RDBMS, you also happened to own a major hyperscale cloud, and you had invested heavily in collecting query metadata in the last two releases of your RDBMS. Let us also accept the theory that the best way to get an optimal execution plan is to generate as many potential execution plans as possible. Most databases attempt a handful of plans before picking the best available one—this is always a compromise, as generating execution plans involves a lot of math and is very expensive from a CPU perspective. Let us also posit that, as the owner of the hyperscale cloud, you have a lot of available processing power, and you’ve had your users opt in to reviewing their metadata for performance purposes. Additionally, you’ve taken care of all of the really mundane tasks like automating backups, high availability, and consistency checks. So now it’s on to bigger fish.

Still With Me?

OK, so we have all the tools in place to build our self-tuning database; let’s think about what we would need to do. Take a somewhat standard heuristic I like to use in query tuning: if a query takes more than 30 ms to execute, or is executed more than 1,000 times in a day, we should pay attention to it for tuning purposes. That’s a really big filter—so we’ve already narrowed down the focus of our tuning engine (and we have this information in our runtime engine, which we’ll call the Query Boutique). We have also had our users opt in to using their metadata to help improve performance.
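
(You can already approximate that filter with the Query Store views today. A rough sketch; note that avg_duration is stored in microseconds, and this simplifies “per day” to whatever stats intervals Query Store is currently retaining:)

-- Queries averaging over 30 ms, or executed over 1,000 times,
-- across the runtime stats Query Store has retained
SELECT qt.query_sql_text,
       SUM(rs.count_executions)    AS executions,
       AVG(rs.avg_duration) / 1000 AS avg_duration_ms
FROM sys.query_store_query_text qt
JOIN sys.query_store_query q          ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan p           ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats rs ON p.plan_id = rs.plan_id
GROUP BY qt.query_sql_text
HAVING AVG(rs.avg_duration) > 30000      -- 30 ms, in microseconds
    OR SUM(rs.count_executions) > 1000;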

So we identify the problem queries in your database. We then export the statistics from your database into our back-end tuning system. We look for (and attempt to apply) any missing indexes to the structure, to evaluate the benefit of each missing index. We then attempt to generate all of the execution plans (yes, all of them—this processing is asynchronous and doesn’t need to be real time). We could even collect read/write statistics on given objects and apply a weighted value to a given index. We could then take all of this data and run it through our back-end machine learning service, to ensure that our self-tuning algorithm was accurate and to help improve the overall process.

We could then feed this data back into the production system as a pinned execution plan. Since we are tracking the runtime statistics, if the performance of the plan drops off, or we notice that the statistics have changed, we could force out our new execution plan and start the whole process over again.
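
(The pinning primitive already exists in SQL Server 2016 and later as Query Store plan forcing; the IDs below are placeholders you would pull from the Query Store views:)

-- Pin a known-good plan for a query (IDs are placeholders)
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 73;

-- If the plan stops performing, release it and let the optimizer try again
EXEC sp_query_store_unforce_plan @query_id = 42, @plan_id = 73;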

So there: I didn’t write a line of code, but I laid out the architecture for a self-tuning database. (I thought about all of this on a bike ride; I’m sure Conor could do a lot better than me.) I’m sure this would take years and many versions to come into effect. Or not at all. To be successful in this changing world of data, you need to stay ahead of the curve: learn how clouds work, how to script, how to automate, and how to add value.

Drive By #$%^ing—er, Tweeting.

Note: This post may contain some vulgarities, but no obscenity, at least as defined by the Supreme Court in Miller v. California (1973).

So, my morning started off early with a lovely bike ride. It was nice and cool in Southern California, where I am this week, so I had a lovely 20 miles. Then I took my phone out of my pocket and was confronted with two really shitty posts. The first was on Twitter, and the second was so shallow that it may as well have been a tweet.

I love Twitter. I’ve been on it for almost ten years, and as a data nerd and sports fan, it is the be-all and end-all of services. However, for discourse, 140 characters leaves no room for nuance or subtlety. (See the President of the United States, not that he had any nuance or subtlety to begin with.) When you legitimately want to critique something, prose works far better than 140 characters.

The First #$%, er, Tweeter

So I got back from my ride, and the first thing I saw was a tweet taking a shot at the PASS Summit program.

Richie, that tweet was school on Sunday, dude. Not cool. I get that you may not like the sessions picked, or the general direction of the organization (I certainly disagree with a ton of what PASS does, and am moderately vocal about it). But when you write a tweet like that, you are basically implying that a bunch of shitty speakers submitted a bunch of shitty sessions, and the program committee organized a total shit show. You probably didn’t mean that—I personally think the new emphasis on development isn’t the right approach for a database conference. However, that’s a) not my decision, and b) a more nuanced thought than “Summit sucks, haha.”

The old saying, “if you don’t have anything nice to say, don’t say anything,” is wrong. However, if you don’t have anything constructive to say, don’t insult the 150+ speakers, volunteers, and potential sponsors who might be reading your stream.

The Second #$%, er, Blogger

I’m not linking to this guy’s shit. Because it’s shit.

Here’s the gist of the post: Azure SQL Database and Azure Blob Storage are completely $%^&ing insecure because they have public IP addresses. Never mind that you can completely lock them down by IP address, or restrict them to Azure traffic only. (Granted, a database or storage account that no one can access is probably of limited value.) These services are fully access controlled and have security built in. Additionally, in the case of SQL DB, you have the option of built-in threat detection that will flag anomalous behavior like SQL injection or rogue logins.
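
(Locking down an Azure SQL Database server is a couple of stored procedure calls. A sketch using the server-level firewall procedure, with placeholder rule names and addresses:)

-- Run in the master database of the Azure SQL Database server
-- Allow only a specific address range (placeholder values)
EXEC sp_set_firewall_rule
    @name = N'OfficeOnly',
    @start_ip_address = '203.0.113.10',
    @end_ip_address = '203.0.113.10';

-- The special 0.0.0.0 rule allows Azure services only, not the public internet
EXEC sp_set_firewall_rule
    @name = N'AllowAzureServices',
    @start_ip_address = '0.0.0.0',
    @end_ip_address = '0.0.0.0';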

Currently, Azure doesn’t have the ability to put your database on a VNet. I’d be shocked if the Azure team were not working on said feature. In the article, the author makes a point of Amazon having the ability to do this for RDS. That’s cool, and probably why Microsoft is working on it. But instead of focusing on how to secure your app on a hybrid platform, he just shits on the vendor with a clickbait headline.

Wheaton’s Law

Wheaton’s Law, which also happens to be the core of our HR policy at DCAC, is simply “Don’t be a dick.” Think before you write and tweet. Don’t write clickbait crap, and if you want to criticize an org, do it carefully and write a blog post. Or send someone a DM.

24 Hours of PASS Summit Preview—Getting Started with Linux

Next week is the 24 Hours of PASS, a free event PASS puts on to showcase some of the sessions you will get to see at November’s PASS Summit. I’m going to be giving a preview of my session on Linux—I’ll be talking about how to get started learning Linux, installing a SQL Server test environment in a Docker container or a VM, and a little bit about some of the ancillary configuration of SQL Server on the Linux platform.

The Porsche 919 that won the recent 24 Hours of Le Mans

While SQL Server on Linux may not immediately impact you, it is definitely something you want to be ready for. Just this week I talked to some executives from a major Linux vendor, who have been amazed at the amount of interest it has generated amongst their customers. A lot of folks in IT strongly dislike working with “the other major database vendor” (name withheld so I don’t get sued; bingle sailboats and databases), and a lot of those organizations like to run their database servers on Linux.

So grab a beer, or a whisky if you’re in the US, as I’ll be presenting at 9 PM (2100) EDT, which is 0100 GMT. You can sign up here. If you can’t make it (because you live in the EU and would rather sleep than listen to me ramble about shell commands), there will be a recording, but you will still need to register.

Would You Fly a Plane with One Engine? Or Run Your Airline with One Data Center(re)?

For those of you who may have been in the US or outside of Europe this past weekend, you may not have heard about the major British Airways IT outage that took down their entire operations for most of Saturday and into Sunday. Rumors, which were later confirmed, were that a switch from primary to backup power at their primary data centre (they’re a UK company, so I’ll spell it in the Queen’s English) led to a complete operations failure. I have a bit of inside information, since my darling wife was stuck inside Terminal 5 at Heathrow.

There’s a set of requirements for planes that travel across oceans called ETOPS, which stands for Extended Range Operation with Two-Engine Airplanes, though in common parlance it is known as Engines Turn or Passengers Swim. ETOPS is a set of rules that ensure that if a plane has a problem over a body of water, it can make it back to shore for a safe landing. As someone who flies across oceans a decent amount, I am very happy the regulatory bodies have these rules in place.

However, there are no such rules for the data centers that run airline operations. In fact, in January, Delta Air Lines had a major failure that took down most of its operations for a couple of days. Most IT experts have surmised that Delta was running a single data center for its operations. Based on the evidence from Saturday’s incident with BA, I have to assume they were doing the same. One key bit of evidence was that BA employees were unable to access email. BA is an Office 365 customer, so theoretically, even if on-premises systems were down, email should have worked. However, if they were using Active Directory Federation Services, so that all authentication happened on-premises, then the data center being down would mean they couldn’t authenticate, and therefore would not have email.

That email didn’t work was my biggest clue that BA was running with a single data center. While some systems, particularly the mainframe systems that may handle flight operations, tend not to fail over well across sites, Active Directory is one of the best distributed systems there is, and it is extremely resilient to failures. In fact, given BA’s global business, I’m really surprised they didn’t have ADFS servers in locations around the world.

Enter the Cloud

Denny and I sat down yesterday and ran some numbers on what we thought a second data center would cost a company like BA. Our rough estimate (and this is very rough) was around $30–40 million USD. While that is a ton of money, it is estimated that the weekend’s mess may cost BA up to £150 million (~$192MM USD). However, companies no longer have to build multiple data centers in order to have redundancy, as Microsoft (and Amazon, and Google) have data centers throughout the world. The cloud gives you the flexibility to protect critical systems at a much lower cost. I’ve designed DR strategies for small firms that cost under $100/month, with real-time failover that supported 99.99% uptime. With the resources of a firm like BA, this should be a no-brainer given the risk profile.

What About Outsourcing?

Much has been made of the fact that BA has outsourced many of its IT functions to TCS and various other providers. Some have even tried to place the blame for this outage on the providers. Frankly, I don’t have enough detail to blame anyone, and it seems more like a data center operations issue. However, I do think it speaks to the lack of attention and resources paid to technology at a company that clearly depends on it heavily. Computers and data are more important to business now than ever, and if your firm doesn’t value that, you are going to have problems down the road.

Conclusions

In the cloud era, I’m convinced that no business, no matter how big or small, should run with a single data center. It is far too cheap and easy to ship your backups to multiple sites and be back online in a matter of hours with a cloud provider. Given the importance and consolidation of airlines in our world economy, it probably wouldn’t be a terrible idea if their regulators created rules requiring failover and failover testing. Don’t let this happen to your stock price.
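
(For SQL Server shops, the simplest version of shipping backups offsite is backup to URL. A minimal sketch, assuming a hypothetical storage account and container, plus a SAS token you have already generated; the credential name must match the container URL, and the SAS token should omit the leading question mark:)

-- One-time setup: store a SAS token for the container (names are placeholders)
CREATE CREDENTIAL [https://mystorageacct.blob.core.windows.net/backups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<SAS token>';

-- Back up straight to Azure Blob Storage, offsite by definition
BACKUP DATABASE AdventureWorks
TO URL = 'https://mystorageacct.blob.core.windows.net/backups/AdventureWorks.bak'
WITH COMPRESSION;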
