The Self-Tuning Database?

There was a lot of talk on Twitter over the weekend about automation and the future of the DBA role. I’ve spoken frequently on the topic, and even though the PASS Summit program committee has shown limited interest in my automation sessions, they have been among my most popular talks at other large conferences. Automating mundane tasks makes you a better DBA, and raising the level of those tasks frees you up to focus on activities that really help your business, like better data quality, and watching cat videos on YouTube.

But Tuning is Hard

Performance tuning has always been something of a black art. I remember Oracle calling 9i the self-tuning database, because we no longer had to manually calculate how many blocks (pages) of memory were allocated to each pool. That was a joke; those databases still required a lot of manual tuning effort. However, that was then, and we’re in the future now. We’ll call the future August. Stay with me here, because I’m going to talk about some theories I have.

So It’s August 2017

Let’s say you were the vendor of a major RDBMS, who also happened to own a major hyperscale cloud, and you had invested heavily in collecting query metadata in the last two releases of your RDBMS. Let us also accept the theory that the best way to get an optimal execution plan is to generate as many potential execution plans as possible. Most databases attempt a handful of plans before picking the best available one; this is always a compromise, because generating execution plans involves a lot of math and is very expensive from a CPU perspective. Let us also posit that, as the owner of the hyperscale cloud, you have a lot of available processing power, and you’ve had your users opt in to reviewing their metadata for performance purposes. Additionally, you’ve taken care of all of the really mundane tasks like automating backups, high availability, and consistency checks. So now it’s on to bigger fish.
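To see why generating every possible plan is so expensive, consider join ordering alone: the number of candidate plans explodes combinatorially with the number of tables. A quick illustration using the standard counting formulas (this is textbook query-optimization math, not tied to any particular engine):

```python
from math import factorial

def left_deep_plans(n):
    """Number of left-deep join orders for n tables: n!."""
    return factorial(n)

def bushy_plans(n):
    """Number of bushy join trees for n tables: (2n-2)! / (n-1)!."""
    return factorial(2 * n - 2) // factorial(n - 1)

# Even a modest 8-table join has tens of thousands of left-deep
# orderings, and millions of bushy trees, before you even consider
# join algorithms and index choices.
for n in (3, 5, 8):
    print(n, left_deep_plans(n), bushy_plans(n))
```

That explosion is exactly why a production optimizer prunes aggressively in real time, and why an asynchronous, cloud-scale tuning service could afford to explore far more of the space.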


Still With Me?

Ok, so we have all the tools in place to build our self-tuning database, so let’s think about what we would need to do. Take a somewhat standard heuristic I like to use in query tuning: if a query takes more than 30 ms to execute, or is executed more than 1,000 times in a day, we should pay attention to it for tuning purposes. That’s a really big filter, so we’ve already narrowed down the focus of our tuning engine (and we have this information in our runtime engine, which we’ll call the Query Boutique). We have also had our users opt in to using their metadata to help improve performance.
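That triage heuristic is simple enough to sketch in a few lines. The field names below are hypothetical stand-ins for whatever the runtime engine actually records, not a real Query Store schema:

```python
# Flag any query that averages over 30 ms per execution, or that runs
# more than 1,000 times a day, as a candidate for the tuning engine.
DURATION_THRESHOLD_MS = 30
DAILY_EXECUTION_THRESHOLD = 1000

def needs_tuning(stats):
    return (stats["avg_duration_ms"] > DURATION_THRESHOLD_MS
            or stats["executions_per_day"] > DAILY_EXECUTION_THRESHOLD)

# A toy workload: one cheap query, one slow query, one chatty query.
workload = [
    {"query_id": 1, "avg_duration_ms": 4,   "executions_per_day": 50},
    {"query_id": 2, "avg_duration_ms": 180, "executions_per_day": 12},
    {"query_id": 3, "avg_duration_ms": 2,   "executions_per_day": 40000},
]

candidates = [q for q in workload if needs_tuning(q)]
print([q["query_id"] for q in candidates])  # → [2, 3]
```

Note that the slow-but-rare query and the fast-but-constant query both get flagged; either one can dominate total resource consumption.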

So we identify the problem queries in your database. We then export the statistics from your database into our back-end tuning system. We look for (and attempt to apply) any missing indexes to the structure, to evaluate the benefit of each one. We then attempt to generate all of the execution plans (yes, all of them; this processing is asynchronous, and doesn’t need to be real time). We could even collect read/write statistics on given objects and apply a weighted value to a given index. We could then take all of this data and run it through our back-end machine learning service, to verify that our query self-tuning algorithm was accurate, and to help improve our overall process.
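The read/write weighting step could look something like this. The scoring function and the write penalty factor are entirely hypothetical, just to make the idea concrete: an index that speeds up reads is worth less on a write-heavy table, because every write has to maintain it.

```python
# Hypothetical sketch: weight a candidate index's estimated benefit by
# the read/write mix on its underlying table.
def index_score(estimated_read_savings, reads, writes, write_penalty=1.5):
    total = reads + writes
    if total == 0:
        return 0.0
    read_fraction = reads / total
    write_fraction = writes / total
    # Benefit scales with how often reads happen; maintenance cost
    # scales with how often writes happen.
    return (estimated_read_savings * read_fraction
            - write_penalty * write_fraction)

# The same candidate index on a read-heavy table...
print(index_score(10.0, reads=9000, writes=1000))   # scores high
# ...versus a write-heavy table, where it can score negative.
print(index_score(10.0, reads=1000, writes=9000))
```

A negative score would tell the tuning engine the index isn’t worth its maintenance cost, which is exactly the kind of judgment a human DBA makes today from sys.dm_db_index_usage_stats-style data.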

We could then feed this data back into the production system as a pinned execution plan. Since we are tracking the runtime statistics, if the performance of the plan drops off, or we notice that the statistics have changed, we could force out our new execution plan and start the whole process over again.
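The feedback loop above can be sketched as a small state machine. The class, names, and tolerance value are illustrative only, not any real engine’s API:

```python
# Toy sketch of the feedback loop: pin the best known plan, watch its
# runtime statistics, and un-pin it (restarting the tuning cycle) if
# performance regresses past a tolerance factor.
class PlanPinner:
    def __init__(self, plan_id, baseline_ms, tolerance=1.5):
        self.plan_id = plan_id
        self.baseline_ms = baseline_ms  # runtime when the plan was pinned
        self.tolerance = tolerance      # allowed regression factor
        self.pinned = True

    def observe(self, runtime_ms):
        """Un-pin the plan if runtime regresses past tolerance."""
        if runtime_ms > self.baseline_ms * self.tolerance:
            self.pinned = False  # force out the plan; re-tune from scratch
        return self.pinned

pinner = PlanPinner(plan_id=42, baseline_ms=20.0)
print(pinner.observe(25.0))  # within tolerance → True (still pinned)
print(pinner.observe(90.0))  # regression → False (plan forced out)
```

The key design point is that pinning is never permanent: a change in data distribution can turn yesterday’s best plan into today’s regression, so the loop has to be able to evict its own decisions.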

So there, I didn’t write a line of code, but I laid out the architecture for a self-tuning database. (I thought about all of this on a bike ride; I’m sure Conor could do a lot better than me.) I’m sure this would take years and many versions to come to fruition. Or it may not happen at all. To be successful in this changing world of data, you need to stay ahead of the curve: learn how clouds work, how to script, how to automate, and how to add value.

Drive By #$%^ing—er, Tweeting.

Note: This post may contain some vulgarities, but no obscenity, at least as defined by the Supreme Court in Miller v. California (1973)

So, my morning started off early, with a lovely bike ride. It was nice and cool in Southern California, where I am this week, so I had a lovely 20 miles. Then I took my phone out of my pocket and was confronted with two really shitty posts. The first was on Twitter, and the second was so shallow that it may as well have been a tweet.

I love Twitter. I’ve been on it for almost ten years, and as a data nerd and sports fan, it is like the be-all and end-all of services. However, for discourse, 140 characters leaves no room for any nuance or subtlety. (See the President of the United States, not that he had any nuance or subtlety to begin with.) When you legitimately want to critique something, prose works far better than 140 characters.

The First #$%^, er, Tweeter

So I got back from my ride, and the first thing I saw was:

[Screenshot of the tweet in question]

Richie, that tweet was school on Sunday, dude. Not cool. I get that you may not like the sessions picked, or the general direction of the organization (I certainly disagree with a ton of what PASS does, and am moderately vocal about it). But when you write a tweet like that, you are basically implying that a bunch of shitty speakers submitted a bunch of shitty sessions, and the program committee organized a total shit show. You probably didn’t mean that. I personally think the new emphasis on development isn’t the right approach for a database conference. However, that’s a) not my decision, and b) a more nuanced thought than “Summit sucks, haha.”

The old saying, “if you don’t have anything nice to say, don’t say anything,” is wrong. However, if you don’t have anything constructive to say, don’t insult the 150+ speakers, volunteers, and potential sponsors who might be reading your stream.

The Second #$%^, er, Blogger

I’m not linking to this guy’s shit. Because it’s shit.

[Screenshot of the blog post in question]

Here’s the gist of the post: Azure SQL Database and Azure Blob Storage are completely $%^&ing insecure because they have public IP addresses. Never mind that you can completely lock them down to all IP addresses, and to Azure itself. (Granted, a database or storage account that no one can access is probably of limited value.) These services are fully access controlled and have built-in security. Additionally, in the case of SQL DB, you have the option of built-in threat detection that will catch anomalous behavior like SQL injection or rogue logins.

Currently, Azure doesn’t have the ability to put your database on a VNet. I’d be shocked if the Azure team were not working on that feature. In the article, the author makes a point of Amazon having this ability for RDS. That’s cool, and probably why Microsoft is working on it. But instead of focusing on how to secure your app on a hybrid platform, he just shits on the vendor with a clickbait headline.

Wheaton’s Law

Wheaton’s Law, which also happens to be the core of our HR policy at DCAC, is simply “Don’t be a dick.” Think before you write and tweet: don’t write clickbait crap, and if you want to criticize an org, do it carefully and write a blog post. Or send someone a DM.

24 Hours of PASS Summit Preview—Getting Started with Linux

Next week is the 24 Hours of PASS, a free event PASS puts on to showcase some of the sessions you will get to see at November’s PASS Summit. I’m going to be giving a preview of my session on Linux. I’ll be talking about how to get started learning Linux, installing a SQL Server test environment in a Docker container or a VM, and a little bit about some of the ancillary configuration of SQL Server running on the Linux platform.

The Porsche 919 that won the recent 24 Hours of Le Mans

While SQL Server on Linux is something that may not immediately impact you, it is definitely something you want to be ready for. Just this week I talked to some executives from a major Linux vendor, who have been amazed at the amount of interest this has generated among their customers. A lot of folks in IT strongly dislike working with “the other major database vendor” (name withheld so I don’t get sued; bingle sailboats and databases), and a lot of those organizations like to run their database servers on Linux.

So grab a beer or a whisky (if you’re in the US), as I’ll be presenting at 9 PM (2100) EDT / 0100 GMT. You can sign up here. If you can’t make it (because you live in the EU and would rather sleep than listen to me ramble about shell commands), there will be a recording, but you will still need to register.

