Kubernetes, Part III–The Failover and The Rolling Upgrade

If you have been following this series, you’ve learned how to install and configure minikube and get started with running SQL Server within Kubernetes. In the final part of this series, you will learn about how failover works, and how you can do a rolling upgrade of SQL Server.

How This Works

Kubernetes works kind of like Windows Failover Clustering does, in that, since we have defined our service, it runs a health check on our pod running SQL Server. There is also a notion of auto-healing: the running state of the cluster should always match our deployment manifest, which is called the desired state. That means if our pod (the unit of deployment that holds our container) goes away, the Kubernetes control plane will bring our pod, and therefore our database, back. In the event that a node were to go away, since our deployment definition is stored in the Kubernetes master, SQL Server will still come back online. The timing of this failover is very similar to a SQL Server Failover Cluster restart, as crash recovery will need to be performed. Let’s see how this works. In this first shot, we have a working instance–with a database called demo.

[Screenshot: the working instance, with the demo database]
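If you want to check that desired state yourself, the deployment and its pod are both visible from kubectl (the deployment name here comes from the manifest in Part II):

# The deployment reports how many replicas are desired versus actually running.
kubectl get deployment mssql-deployment
# And the pod that Kubernetes created to satisfy that desired state.
kubectl get pods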

Next, I’m going to use the kubectl delete pod command to remove the pod that is running our container.
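Roughly, the commands look like this–the pod name is generated by Kubernetes, so the one below is hypothetical and yours will differ:

# Grab the generated pod name first, then delete that pod.
kubectl get pods
kubectl delete pod mssql-deployment-3813694735-4xpzv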

[Screenshot: kubectl delete pod, followed by kubectl get pod showing one pod terminating and another creating]

As you can see in the kubectl get pod output, we have one pod terminating and another creating. Next, I will run the kubectl logs command to get the log from our container–this is just the SQL Server error log, because it is written to stdout (standard output) in Linux.
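For reference, this is all the logs command is–again, substitute your own generated pod name for the hypothetical one here:

# Dumps whatever the container wrote to stdout, which for this image is the SQL Server error log.
kubectl logs mssql-deployment-3813694735-4xpzv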

[Screenshot: kubectl logs output, showing the SQL Server error log]

Now, I can log back into my container and see my database.
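One way to do that is to exec into the pod and use the sqlcmd binary that ships inside the SQL Server image–a sketch, with a hypothetical pod name and the sa password from Part II:

# Runs sqlcmd inside the container and lists the databases, which should include demo.
kubectl exec -it mssql-deployment-3813694735-4xpzv -- /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P 'MyC0m9l&xP@ssw0rd' -Q "SELECT name FROM sys.databases"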

[Screenshot: connected to the container, with the demo database visible]

Performing a Rolling Upgrade

One of the other cool things we can do is a rolling upgrade. In the manifest I supplied, when you build your container, you will always get the latest release of SQL Server. However, you can specify a specific CU–in this next example, I will change the image line in my manifest to pull down CU4: image: microsoft/mssql-server-linux:2017-CU4. I’ve deployed that here, as you can see.

[Screenshot: the deployment running the 2017-CU4 image]
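For reference, the relevant fragment of the deployment manifest (from Part II) looks like this after the change:

      containers:
      - name: mssql
        # Pinning to a specific cumulative update instead of the latest release
        image: microsoft/mssql-server-linux:2017-CU4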

So, I’m simply going to change the image line in my manifest from CU4 to CU5 and then redeploy the manifest using the kubectl apply -f command. To reduce your downtime, you will want to have already deployed that version of SQL Server to a test environment in your cluster–that way the image is already local to your cluster, which means the time to spin up your new container will be much lower.
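The redeploy itself is just the same apply–a sketch, assuming the manifest file is named sqldeployment.yaml as in Part II:

# After editing the image tag from 2017-CU4 to 2017-CU5 in the manifest,
# re-applying it makes Kubernetes replace the pod with one running the new image.
kubectl apply -f sqldeployment.yaml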

You can see how that process worked in this screenshot.

[Screenshot: the manifest re-applied, the pod replaced, and the upgraded instance accepting connections]

We’ve redeployed the template (which you can see by the “deployment configured” response from Kubernetes), our pod terminated and restarted, SQL Server upgraded to the new CU, and we were able to connect.
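If you would rather watch that from the command line than from a screenshot, kubectl can report on the rollout as it happens:

# Blocks until the deployment's new pod is up and ready.
kubectl rollout status deployment/mssql-deployment
# Or watch the old pod terminate and the new one start.
kubectl get pods -w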

Summary

I hope you enjoyed this series of posts–I really feel like Kubernetes represents the future of infrastructure, whether in the cloud or on-premises. Software-defined infrastructure and persisted storage make it easy to deploy and manage. When SQL Server gets a few more features–the main ones I’m looking for are Active Directory authentication and Always On Availability Groups–I think you will see far more widespread use of SQL Server in containers.

Getting Started with SQL Server and Kubernetes, Part II

You might have noticed that I titled this post Part II, but with no “of N”. This will likely be a periodically recurring series of posts as I take things a bit further with Kubernetes. I will also likely be correcting things I screwed up along the way. The first of those is that I said if you had a Mac, this was easy to configure using the Docker Edge build with Kubernetes. Well, it was easy to configure K8s with Docker–however, when I got deep into playing with SQL Server, I kept running into weird issues with storage persistence and instance startup. I was chatting with a colleague on the product team (thanks Mihaela!!) and she told me about this bug:

https://github.com/Microsoft/mssql-docker/issues/12

It’s complicated–but basically the Docker filesystem mapper isn’t consistent with what SQL Server’s I/O code expects. So instead, I installed Minikube on an Ubuntu VM on my Mac. You can find the instructions here; you will have to install VirtualBox to host the VM for your Kubernetes install. It was really straightforward–if you want to build a “real” Kubernetes cluster, the process is a little bit more complicated, and outside of the scope of today’s post.
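For reference, once minikube, kubectl, and VirtualBox are installed, bringing the cluster up is roughly this–the vm-driver flag is an assumption based on a VirtualBox setup:

# Creates and starts a single-node Kubernetes cluster inside a VirtualBox VM.
minikube start --vm-driver=virtualbox
# Confirms kubectl is talking to the new cluster.
kubectl cluster-info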

What are We Building?

In January, the SQL Server Product Group posted this blog post on deploying SQL Server in a high-availability configuration on Kubernetes. It went by without much fanfare, but I dug into the post, built the solution, and realized it offers nearly the same functionality as a Failover Cluster Instance, with minimal configuration effort. While building an FCI on Windows is quite approachable, building an FCI on Linux is somewhere between painful and awful, depending on the distribution you are using, the day of the week, and the position of the stars. The post assumes you are building on top of the Azure Kubernetes Service–which is a great way to get started, but it costs money, and I wanted to do something in a VM. So we’re building the same basic architecture that is in that PG post; however, we are going to build it on minikube.

Components

There are a couple of things we are building here:

Persisted Disk: Inherently, containers have no persisted storage. Obviously, this is a problem for database containers. We are going to define a persistent volume claim to map our storage. This is probably the biggest difference between my code and the PG blog post, as I’m using local storage as opposed to Azure disks.

Deployment: This refers to our container and volume. You will see this defined in the code below.

Service: We are defining a service and creating a load balancer for our deployment. The load balancer is the key to this deployment, as it maintains a persistent IP address for our deployment even when our container goes away.

By defining the deployment in this way, if we have a failure on the host that is running our container, Kubernetes’ auto-healing process will automatically deploy a new pod (in this context, a pod just holds our single container; a web server, for example, might have multiple containers in a pod).

Let’s Build Something

This assumes that you have minikube up and running. The first thing you’re going to do is build a secret to pass into your deployment, for your SA password.

 

kubectl create secret generic mssql --from-literal=SA_PASSWORD="MyC0m9l&xP@ssw0rd"
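You can confirm the secret is there (without echoing the password back) before moving on:

# describe shows the key names and sizes, not the values.
kubectl get secrets
kubectl describe secret mssql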

 

The next thing you are going to do is build your persistent volume claim.

 

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mssql-data-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

You will save this text in a file. For the purposes of this post, we will call it pv-claim.yaml. You will then run the kubectl apply -f pv-claim.yaml command. You will see the message persistentvolumeclaim "mssql-data-claim" created.
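You can check on the claim–and, on minikube, the volume that its default storage class typically provisions behind it–with:

# The claim should show a status of Bound once a volume backs it.
kubectl get pvc mssql-data-claim
kubectl get pv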

Next we are going to build our deployment and our load balancer.

apiVersion: v1
kind: Service
metadata:
  name: mssql-deployment
spec:
  selector:
    app: mssql
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  type: LoadBalancer
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mssql
        image: microsoft/mssql-server-linux
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql
              key: SA_PASSWORD
        volumeMounts:
        - name: mssql-persistent-storage
          mountPath: /var/opt/mssql
      volumes:
      - name: mssql-persistent-storage
        persistentVolumeClaim:
          claimName: mssql-data-claim

There’s a lot of stuff here. Let’s walk through the key elements of this file:

We’re defining our service and load balancer at the beginning of the code. Next, we are defining our deployment, which specifies the container image we’re going to use–in this case the latest release of SQL Server 2017–and picks up our predefined SA password from the secret. Finally, we are defining our volume mount and the path where it will be mounted inside the container. Save this off to a file called sqldeployment.yaml. You will run the same kubectl apply -f command, this time with sqldeployment.yaml, to deploy it. You will see service "mssql-deployment" created and deployment "mssql-deployment" created. You can verify the deployments by running the below commands:

[Screenshot: verifying the service and deployment]
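If you are following along without the screenshot, these are the sorts of commands I mean:

# The service shows the load balancer and the port mapping for 1433.
kubectl get service mssql-deployment
# The deployment and pod show the status of the container.
kubectl get deployment mssql-deployment
kubectl get pods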

Make note of the port for your SQL deployment–however, the IP address shown there is not routable from outside the cluster, which is some of the weirdness of running minikube. I’d like to thank the Database Avenger for this post, which showed me how to connect. Run the following command, which will give you the IP address to connect to:

minikube service mssql-deployment --url

Your output will give you the IP address and port you can connect to.

Can We Log In Yet?

So, I have SQLCMD installed on my Linux VM (instructions for SQL tools for Linux). If you have made it this far, this is just a standard SQL connection (albeit to a non-standard port, which is denoted by a comma after the IP address).
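A sketch of that connection, with a hypothetical IP address and port substituted from the minikube service output above, and the sa password created earlier:

# The comma separates the IP address from the non-standard port.
sqlcmd -S 192.168.99.100,31433 -U sa -P 'MyC0m9l&xP@ssw0rd' -Q "SELECT @@SERVERNAME"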

[Screenshot: sqlcmd connected to the deployment]

You can see my server name is mssql-deployment. I’m already out of space, so come back next week to talk about persisting data, and how failover works.
