Getting Started with SQL Server and Kubernetes

If you find yourself asking, “what the heck is Kubernetes, and is it the next Hekaton?” you are in the right place. If you already know all about K8s (that’s open source shorthand for Kubernetes: K, then eight letters, then s; don’t shoot me, open source projects don’t have marketing people), then hopefully you’ll learn something about how SQL Server interacts with Kubernetes.

No really, what the hell is this?

One of the cool introductions in SQL Server 2017 was support for Docker containers. Feature support was pretty limited (no AD, limited HA/DR options, and some of the other standard SQL Server on Linux limitations), but especially for those of us who work on Macs, Docker support gave us a very easy way to get SQL Server running on our computers without spinning up a virtual machine (VM). Docker is a basic containerization system. So what are containers? Containers are similar to VMs, but they run a little closer to bare metal, don’t contain a whole copy of the operating system (just the runtimes you need), and most importantly are fully software defined.
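To give you a flavor of how easy that is, here is a minimal sketch of pulling and running SQL Server 2017 in a Docker container; the image tag and the SA password below are just placeholders, so check Microsoft’s documentation for the current image name before you copy and paste.

# Pull the SQL Server 2017 Linux image
docker pull mcr.microsoft.com/mssql/server:2017-latest

# Accept the EULA, set an SA password, and publish port 1433 to the host
docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=YourStrong!Passw0rd" \
  -p 1433:1433 --name sql2017 -d mcr.microsoft.com/mssql/server:2017-latest

# Confirm the container is running
docker ps

From there you can connect to localhost,1433 with sqlcmd or SQL Operations Studio just like any other instance.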

I recommend two books on the topic.

The Kubernetes Book — This is a good overview and will give you a decent understanding of what you need to know.

Kubernetes Up and Running — This is a much more comprehensive, deeper dive that you will want before you try to operationalize K8s.

I’d really like a book on Kubernetes Infrastructure and Stuff for VMware Admins (no one steal that title, in case I decide to write it), but from my research no such tome exists.

Software Defined? Isn’t that just a buzzword?

In recent years, you have likely heard the terms software defined networking, software defined storage, and software defined infrastructure, and if you’ve done anything on a public cloud provider, you have taken advantage of the technology. In a nutshell, with software defined infrastructure you have a file (most likely YAML or JSON) that defines all of your infrastructure: network connections, storage, the image to build your container from, and so on.
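To make that concrete, here is a rough sketch of what one of those files looks like: a tiny Kubernetes deployment manifest handed to the cluster with kubectl. The names and the nginx image are purely illustrative.

# Everything about this deployment (name, replica count, image, port)
# lives in one YAML document that you can keep in source control.
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
      - name: web
        image: nginx:1.15
        ports:
        - containerPort: 80
EOF

Run that same command against any cluster and you get the same infrastructure, which is the whole point of defining it in software.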

There’s a rationale behind all this: at massive enterprises (and what are cloud providers but massive enterprises with amazing programmers?), system administrators and enterprise hardware don’t scale. Imagine if every time someone wanted to build a VM in Azure, someone at Microsoft had to do something. Anything, even if it was just clicking OK to approve, would mean the process could never scale, since there are thousands of requests per minute.

Enterprise software and hardware don’t scale to public cloud volumes either: if Microsoft or Amazon were running enterprise-class storage, there is no way they could offer storage at the low costs they do. And if you are building your own hardware, you may as well build it so that you can easily automate nearly all of the normal tasks through a common set of APIs. Remind me to write another blog post about object-based storage, which is also a common theme in these architectures.

Weren’t you going to show us how to do something?

This post ended up being a lot longer than I expected–so in order to get you started, you are going to install and configure K8s. If you are on a Mac, this is very easy:

Mac Instructions: Download the Edge Version (think beta) of Docker for Mac here. Then follow these instructions.

If you are on Windows, it’s a little more complex. This effort is going to use VirtualBox, because there needs to be a Linux virtual machine to host the containers you build. You’ll then need to ensure Hyper-V is not turned on; it’s off by default, and you can check in the Turn Windows Features On or Off section of Control Panel.

From there, you can install Docker and VirtualBox using the Docker Toolbox available here. This post does a good job of illustrating the steps you’ll need to configure minikube and kubectl after you’ve got Docker installed.
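Whichever platform you are on, once minikube and kubectl are installed, a quick sanity check looks roughly like this (output will vary with your versions):

# Start a local, single-node Kubernetes cluster in a VM
minikube start

# Confirm kubectl can talk to the cluster
kubectl cluster-info
kubectl get nodes

# See the system pods that make up the cluster itself
kubectl get pods --namespace kube-system

If kubectl get nodes shows your minikube node in a Ready state, you are in business.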

Conclusion

You now know why containers are a thing, and how software defined everything has brought the computing world to this point. In the next post, you will learn how to configure SQL Server in a high availability configuration using K8s. I promise it will be far less painful than clustering on Linux.

 

Resources from Philly Code Camp Precon

I did a precon at Philly Dot Net’s Code Camp, and I promised that I would share the resources I talked about during the event. I’d like to thank everyone who attended; the audience was attentive and had lots of great questions. So, anyway, here are the resources we talked about during the session:

1) Automated build scripts for installing SQL Server.

2) SolarWinds’ free Database Performance Analyzer

3) SentryOne Plan Explorer

4) Glenn Berry’s Diagnostic Scripts (b|t) — These were the queries I was using to look at things like Page Life Expectancy, disk latency, and the plan cache. Glenn’s scripts are fantastic, and he does an excellent job of keeping them up to date.

5) sp_WhoIsActive — this script is from Adam Machanic (b|t) and will show you what’s going on on a server at any point in time. You also have the option of logging it to a table as part of an automated process; see the sketch after this list.

6) Here’s the link to the Jim Gray white paper about storing files in a database. It’s a long read, but from one of the best.
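Since a few people asked about item 5, here is a rough sketch of logging sp_WhoIsActive output to a table from the command line with sqlcmd; the server, database, and table names are placeholders, and in real life you would schedule this from a SQL Agent job. (sp_WhoIsActive’s @return_schema parameter can generate the destination table definition for you.)

# Capture current activity into a logging table; run this on a schedule
sqlcmd -S YourServer -d DBAUtility \
  -Q "EXEC dbo.sp_WhoIsActive @destination_table = 'dbo.WhoIsActiveLog';"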

Finally, my slides are here. Thanks everyone for attending and thanks to Philly Dot Net for a great conference.

How the Cloud Democratizes Solutions

The grey hairs I see every now and again remind me of how long I’ve been working in IT. I’m now in my 22nd year of having a job (2 years as an intern, but I did have the ‘sys’ password), and I’ve seen a lot of things come and go. When I started working, servers cost at least tens of thousands of dollars, were the size of refrigerators, and had less processing power than the MacBook Pro I’m typing this post on. More importantly, they had service contracts that cost a rather large amount of cash per year. This bought you, well, I’m not sure what, but your hardware reps surely took you out for at least a steak dinner or two. Eventually, we moved to commodity hardware (those boxes from HP and Dell that cost a tenth of what you paid 20 years ago), and service contracts dropped to a few hundred dollars per server per year. (And then those sales reps started selling expensive storage.)

Most of my career has been spent working in large Fortune 500 enterprises, so I think about things that at the time were only available to organizations of that size and budget but are now available for a few clicks and a few dollars per hour. I’m going to focus on three specific technologies that I think are cool and interesting, but as I’m writing this, I’ve already thought of three or four more.


Massively Parallel Processing

MPP is a data warehouse design pattern that allows for massive scale-out solutions to quickly process very large amounts of data. In the past it required buying an expensive appliance from Teradata, Oracle, Netezza, or Microsoft. I’m going to focus on Microsoft here, but there are several other cloud options for this model, like Amazon Redshift or Snowflake, amongst others. In terms of the on-premises investment you had to make to get your foot in the door with one of these solutions, you were looking at at least $250,000 USD, and that is a fairly conservative estimate. In a cloud world? Azure SQL Data Warehouse can cost as little as $6/hour, and while that can add up to a tidy sum over the course of a month, you can pause the service when you aren’t using it and only pay for the storage. This allows you to do quick proof of concept work and, more importantly, compare solutions to see which one best meets your needs. It also allows a smaller organization to get into the MPP game without a major investment.
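As an example of just how pay-as-you-go that is, here is a hedged sketch of pausing and resuming a SQL Data Warehouse with the Azure CLI; the resource group, server, and warehouse names are placeholders.

# Pause compute; while paused you pay only for storage
az sql dw pause --resource-group my-rg --server my-sqlserver --name my-dw

# Resume it when you are ready for the next workload
az sql dw resume --resource-group my-rg --server my-sqlserver --name my-dw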

Secondary and Tertiary Data Centers

Many organizations have two data centers. I’ve only worked for one that had a third data center. You may ask why this is important. A common question I get when teaching Always On Availability Groups is: if we are split across multiple sites, where do we put the quorum file share? The correct answer is that it should be in a third, independent data center (which virtually no organizations have). However, Windows Server 2016 offers a great solution for mere pennies a month: a cloud witness, a “disk” stored in Azure Blob Storage. If you aren’t on Windows Server 2016, it may be possible to implement a similar design using Azure File Storage, but it is not natively supported. Additionally, cloud computing greatly simplifies the process of having multiple data centers. There are no worries about having staff in two locales, or getting network connectivity between the two sites; that’s all done by your cloud vendor. Just build stuff, and make it reliable. And freaking test your DR (that doesn’t change in the cloud).
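If you want a sense of how little there is to it, here is a rough sketch of creating the storage account that backs a cloud witness with the Azure CLI; the account name, resource group, and region are placeholders. You then point the cluster at the account (in Failover Cluster Manager or via the Set-ClusterQuorum PowerShell cmdlet) using the account name and one of its keys.

# A cheap, locally redundant storage account is all the witness needs
az storage account create --name mywitnessstorage --resource-group my-rg \
  --location eastus2 --sku Standard_LRS --kind StorageV2

# Grab an access key to hand to the cluster
az storage account keys list --account-name mywitnessstorage --resource-group my-rg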

Multi-Factor Authentication

If you aren’t using multi-factor authentication for just about everything, you are doing it wrong. (This was a tell that Joey wrote this post and not the Russian mafia.) Anyway, security is more important than ever (hackers use the cloud too), and multi-factor authentication offers additional levels of security that go far beyond passwords. MFA is not a new thing, and I have a collection of dead SecurID tokens dating back to the 90s to prove it. However, implementing MFA used to require you to buy an appliance (later it was some software), possibly have ADFS, and do a lot of complicated configuration work. Now? It’s simply a setting in the Office 365 portal (if you are using AAD authentication; if you are an MS shop, learn AAD, it’s a good thing). While I complain about the portal, and how buried this setting is, it’s still far easier and cheaper (a few dollars a month) than buying stuff and configuring ADFS.

These are but a few examples of how cloud computing makes things that used to be available only to the largest enterprises available to organizations of any size. Cloud is cool, and it makes things better.
