Kubernetes, Part III–The Failover and The Rolling Upgrade

If you have been following this series, you’ve learned how to install and configure minikube and get started with running SQL Server within Kubernetes. In the final part of this series, you will learn how failover works, and how you can do a rolling upgrade of SQL Server.

How This Works

Kubernetes works somewhat like Windows Failover Clustering does: since we have defined a service, it runs a health check on the pod running SQL Server. There is also a notion of auto-healing–the running state of the cluster should always match our deployment manifest, which is called desired state. If our pod (the unit of deployment that contains our container) goes away, the Kubernetes control plane will bring the pod, and therefore our database, back. Even if a node were to go away, since our deployment definition is stored in the Kubernetes master, SQL Server will come back online on another node. The timing of this failover is very similar to a SQL Server Failover Cluster restart, as crash recovery will need to be performed. Let’s see how this works. In this first shot, we have a working instance, with a database called demo.


Next–I’m going to use the kubectl delete pod command to remove our pod.


As you can see in the kubectl get pod output, we have one pod terminating and another creating. Next, I will run the kubectl logs command to get the log from our container–this is just the SQL Server error log, because it is written to stdout (standard output) on Linux.
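The whole failover demo can be driven with a handful of kubectl commands; here is a sketch (the pod name is a placeholder for whatever kubectl get pods reports in your cluster):

```shell
# find the generated name of the SQL Server pod
kubectl get pods

# delete the pod; desired state (replicas: 1) forces Kubernetes to replace it
kubectl delete pod mssql-deployment-3813464711-h14s7

# watch the old pod terminate while the replacement is created
kubectl get pods

# the SQL Server error log goes to stdout, so kubectl logs shows crash recovery
# (use the new pod's name here)
kubectl logs mssql-deployment-3813464711-h14s7
```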


Now, I can log back into my container and see my database.


Performing a Rolling Upgrade

One of the other cool things we can do is a rolling upgrade. With the manifest I supplied, when you build your container you will always get the latest release of SQL Server. However, you can specify a specific CU–in this next example, I will change the image line in my manifest to pull down CU4: image: microsoft/mssql-server-linux:2017-CU4. I’ve deployed that here, as you can see.


So, I’m simply going to change the image line in my manifest from CU4 to CU5 and then redeploy the manifest using the kubectl apply -f command. To reduce your downtime, you will want to have already deployed that version of SQL Server to a test environment in your cluster–that way the image is already local to your cluster, which means the time to spin up your new container will be much lower.
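Under the covers this is a standard Kubernetes rolling update; a sketch of two equivalent ways to trigger it (the manifest name and CU tag match this example):

```shell
# option 1: edit the image line in sqldeployment.yaml to the new CU, then re-apply
kubectl apply -f sqldeployment.yaml

# option 2: patch the image on the deployment directly
kubectl set image deployment/mssql-deployment mssql=microsoft/mssql-server-linux:2017-CU5

# watch the rollout: the old pod terminates and the upgraded pod starts
kubectl rollout status deployment/mssql-deployment
kubectl get pods
```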

You can see how that process worked in this screenshot.


We’ve redeployed the template (which you can see by the “deployment configured” response from K8s), our pod terminated and restarted, SQL Server upgraded, and we were able to connect.


I hope you enjoyed this series of posts–I really feel like Kubernetes represents the future of infrastructure, whether in the cloud or on-premises. Software defined infrastructure and persisted storage make it easy to deploy and manage. When SQL Server gets a few more features–the main ones I’m looking for are AD authentication and Always On Availability Groups–I think you will see far more widespread use of SQL Server on containers.

Getting Started with SQL Server and Kubernetes, Part II

You might have noticed that I titled this post Part II, but with no “of N”. This will likely be a periodically recurring series of posts as I take things a bit further with Kubernetes. Likely, I will also be correcting things I screwed up along the way. The first of those: I said that if you had a Mac, this was easy to configure using the Edge version of Docker with Kubernetes. Well, it was easy to configure K8s with Docker–however, when I got deep into playing with SQL Server, I kept running into weird issues with storage persistence and instance startup. I was chatting with a colleague on the product team (thanks Mihaela!!) and she told me about this bug:


It’s complicated–but basically the Docker filesystem mapper isn’t consistent with SQL Server’s I/O code. So instead, I installed Minikube on an Ubuntu VM on my Mac. You can find the instructions here; you will have to install VirtualBox to host the VM for your Kubernetes install. It was really straightforward–if you want to build a “real” Kubernetes cluster, the process is a little bit more complicated, and outside of the scope of today’s post.
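For reference, the Ubuntu install boils down to dropping two binaries on the PATH; a sketch (the download URLs are the ones documented in the minikube and Kubernetes release notes at the time of writing, so verify them before running):

```shell
# grab the minikube binary and put it on the PATH
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube
sudo mv minikube /usr/local/bin/

# kubectl comes from the Kubernetes release bucket; fetch the current stable build
VERSION=$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)
curl -Lo kubectl "https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
```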

What are We Building?

In January, the SQL Server Product Group published this blog post on deploying SQL Server in a high-availability configuration on Kubernetes. It passed without much fanfare, but I dug into the post, built the solution, and realized it offers nearly the same functionality as a Failover Cluster Instance (FCI), with minimal configuration effort. While building an FCI on Windows is quite approachable, building an FCI on Linux is somewhere between painful and awful, depending on the distribution you are using, the day of the week, and the position of the stars. The post assumes you are building on top of the Azure Kubernetes Service–which is a great way to get started, but it costs money, and I wanted to do something in a VM. So we’re building the same basic architecture that is in that PG post, except we are going to build it on minikube.


There are a couple of things we are building here:

Persisted Disk: Inherently, containers have no persisted storage. Obviously, this is a problem for database containers. We are going to define a persistent volume claim to request storage for our data files. This is probably the biggest difference between my code and the PG blog post, as I’m referring to local storage as opposed to Azure disks.

Deployment: This refers to our container and volume. You will see this defined in the code below.

Service: We are defining a service and creating a load balancer for our deployment. The load balancer is the key to this deployment, as it will maintain a persistent IP for our deployment when our container goes away.

By defining the deployment in this way, if we have a failure on a host that is running our container, the Kubernetes auto-healing process will automatically deploy a new pod. (In this context, a pod just holds our single container; a web server, for example, might have multiple containers in a pod.)

Let’s Build Something

This assumes that you have minikube up and running. The first thing you’re going to do is build a secret to pass your SA password into your deployment.


kubectl create secret generic mssql --from-literal=SA_PASSWORD="MyC0m9l&xP@ssw0rd"
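You can confirm the secret landed without echoing the password; a quick sketch (note the stored value is only base64-encoded, not encrypted, which is why access to secrets should be restricted):

```shell
# shows the secret's metadata, but not its value
kubectl get secret mssql

# the stored value is base64-encoded; decode it only if you really need to
kubectl get secret mssql -o jsonpath='{.data.SA_PASSWORD}' | base64 --decode
```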


The next thing you are going to do is build your persistent volume claim.


kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mssql-data-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

You will save this text in a file. For the purposes of this post, we will call it pv-claim.yaml. You will then run the kubectl apply -f pv-claim.yaml command, and you will see the message: persistentvolumeclaim “mssql-data-claim” created.
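You can check that the claim was actually satisfied before moving on; a sketch (on minikube, the default storage class provisions the volume for you, which is why the claim should show as Bound):

```shell
# STATUS should read Bound once minikube's default storage class provisions the volume
kubectl get pvc mssql-data-claim

# describe shows the capacity, access mode, and the backing volume
kubectl describe pvc mssql-data-claim
```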

Next we are going to build our deployment and our load balancer.

apiVersion: v1
kind: Service
metadata:
  name: mssql-deployment
spec:
  selector:
    app: mssql
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  type: LoadBalancer
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mssql
        image: microsoft/mssql-server-linux
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql
              key: SA_PASSWORD
        volumeMounts:
        - name: mssql-persistent-storage
          mountPath: /var/opt/mssql
      volumes:
      - name: mssql-persistent-storage
        persistentVolumeClaim:
          claimName: mssql-data-claim

There’s a lot of stuff here. Let’s walk through the key elements of this file:

We’re defining our service and load balancer at the beginning of the code. Next, we define our deployment, which specifies the container we’re going to use–in this case, the latest release of SQL Server 2017–and picks up our predefined SA password. Finally, we define our volume mount and the path where it will be mounted in the VM. Save this off to a file called sqldeployment.yaml, and run the same kubectl apply -f sqldeployment.yaml to deploy it. You will see service “mssql-deployment” created and deployment “mssql-deployment” created. You can verify the deployments by running the below commands:

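The verification commands look roughly like this (the names match the manifests above):

```shell
# the service shows the cluster IP and the node port mapped to 1433
kubectl get service mssql-deployment

# the deployment and its single pod should show as available/Running
kubectl get deployment mssql-deployment
kubectl get pods
```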

You’ll make note of that IP address and port for your SQL deployment. You do need to make note of the port; however, that IP address is not routable from outside the cluster. There is some weirdness here in running minikube. I’d like to thank the Database Avenger for this post, which showed me how to connect to the instance. Run the following command, which will give you the IP address to connect to:

minikube service mssql-deployment --url

Your output will give you the IP address and port you can connect to.

Can We Login Yet?

So, I have SQLCMD installed on my Linux VM (instructions for SQL tools for Linux). If you have made it this far, this is just a standard SQL connection (albeit to a non-standard port, which is denoted by a comma after the IP address).

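The connection looks roughly like this (the IP and port are placeholders; substitute whatever minikube service mssql-deployment --url returned):

```shell
# sqlcmd uses a comma, not a colon, to separate host and port
sqlcmd -S 192.168.99.100,31433 -U sa -P 'MyC0m9l&xP@ssw0rd' -Q "SELECT @@SERVERNAME"
```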

You can see my server name is mssql-deployment. I’m already out of space, so come back next week to talk about persisting data, and how failover works.

Getting Started with SQL Server and Kubernetes

If you find yourself asking “what the heck is Kubernetes, and is it the next Hekaton?” you are in the right place. If you know all about K8s (that’s open source shorthand for Kubernetes: K, eight letters, s–don’t shoot me, open source projects don’t have marketing people), then hopefully you’ll learn something about how SQL Server interacts with Kubernetes.

No really, What the hell is this?

One of the cool introductions in SQL Server 2017 was support for Docker containers. Feature support was pretty limited (no AD, limited HA/DR options, and some of the other standard SQL Server on Linux limitations); however, especially for those of us who work on Macs, Docker support gave us a very easy way to get SQL Server running on our computers without spinning up a virtual machine (VM). Docker is a basic containerization system–so what are containers? Containers are similar to VMs; however, they run a little closer to bare metal, don’t contain a whole copy of the operating system (just the runtimes you need), and most importantly are fully software defined.

I recommend two books on the topic.

The Kubernetes Book — This is a good overview and will give you a decent understanding of what you need to know.

Kubernetes Up and Running — This is a much more comprehensive, deeper dive that you will want before you try to operationalize K8s.

I’d really like a book on Kubernetes Infrastructure and Stuff for VMWare Admins (no one steal that title, in case I decide to write it), but from my research no such tome exists.

Software Defined? Isn’t that just a buzzword?

In recent years, you have likely heard the terms software defined networking, software defined storage, and software defined infrastructure, and if you’ve done anything on a public cloud provider, you have taken advantage of the technology. In a nutshell, with software defined infrastructure you have a file (most likely YAML or JSON) that defines all of your infrastructure: network connections, storage, the image to build your container from, etc.

There’s a rationale behind all this–at massive enterprises (and what are cloud providers but massive enterprises with amazing programmers?), system administrators and enterprise hardware don’t scale. Imagine if every time someone wanted to build a VM in Azure, someone at Microsoft had to do something–anything, even if it was clicking OK to approve. That process would never scale, as there are thousands of requests per minute.

Enterprise software and hardware don’t scale to public cloud volumes either–if Microsoft or Amazon were running enterprise class storage, there is no way they could offer storage at the low costs they do. And if you are building your own hardware, you may as well build it so that you can easily automate nearly all of the normal tasks, through the use of a common set of APIs. Remind me to write another blog post about object-based storage which is also a common theme in these architectures.

Weren’t you going to show us how to do something?

This post ended up being a lot longer than I expected–so in order to get you started, you are going to install and configure K8s. If you are on a Mac, this is very easy:

Mac Instructions: Download the Edge Version (think beta) of Docker for Mac here. Then follow these instructions.

If you are on Windows, it’s a little more complex. This effort is going to use VirtualBox–there needs to be a Linux virtual host to run the VM where you build your containers. You’ll then need to ensure Hyper-V is not turned on–it’s off by default, and you can check in the Turn Windows Features On or Off section of Control Panel.

From there, you can install Docker and VirtualBox using the Docker toolkit available here. This post does a good job of illustrating the steps you’ll need to configure minikube and kubectl after you’ve got Docker installed.
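Whichever OS you are on, once minikube and kubectl are installed, the smoke test is the same; a quick sketch:

```shell
# bring the single-node cluster up on the VirtualBox driver
minikube start --vm-driver=virtualbox

# the lone minikube node should report Ready, and the components should be Running
kubectl get nodes
minikube status
```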


Now you know why containers are a thing, and how software defined everything has brought the computing world to this point. In the next post, you will learn how to configure SQL Server in a high availability configuration using K8s. I promise it will be far less painful than clustering on Linux.


Resources from Philly Code Camp Precon

I did a precon at Philly Dot Net’s Code Camp, and I promised that I would share the resources I talked about during the event. I’d like to thank everyone who attended–the audience was attentive and had lots of great questions. So, anyway, here are the resources we talked about during the session:

1) Automated build scripts for installing SQL Server.

2) Solarwinds’ free Database Performance Analyzer

3) SentryOne Plan Explorer

4) Glenn Berry’s Diagnostic Scripts (B|T)–These were the queries I was using to look at things like Page Life Expectancy, disk latency, and the plan cache. Glenn’s scripts are fantastic, and he does an excellent job of keeping them up to date.

5) sp_WhoIsActive–This script is from Adam Machanic (b|t) and will show you what’s happening on a server at any point in time. You have the option of logging this to a table as part of an automated process.

6) Here’s the link to the Jim Gray white paper about storing files in a database. It’s a long read, but from one of the best.

Finally, my slides are here. Thanks everyone for attending and thanks to Philly Dot Net for a great conference.

How the Cloud Democratizes Solutions

The grey hairs I see every now and again remind me of how long I’ve been working in IT. I’m now in my 22nd year of having a job (2 of those years as an intern, but I did have the ‘sys’ password), and I’ve seen a lot of things come and go. When I started working, servers cost at least tens of thousands of dollars, were the size of refrigerators, and had less processing power than the MacBook Pro I’m typing this post on. More importantly, they had service contracts that cost a rather large amount of cash per year. This bought you, well, I’m not sure what, but your hardware reps surely took you out for at least a steak dinner or two. Eventually, we moved to commodity hardware (those boxes from HP and Dell that cost a tenth of what you paid 20 years ago) and service contracts dropped to a few hundred dollars per server per year. (And then those sales reps started selling expensive storage.)

Most of my career has been spent working in large Fortune 500 enterprises—I think about things that at the time were only available to organizations of that size and budget, and are now available for a few clicks and a few dollars per hour. I’m going to focus on three specific technologies that I think are cool, and interesting, but as I’m writing this, I’ve already thought of three or four more.


Massively Parallel Processing

MPP is a data warehouse design pattern that allows for massive scale-out solutions to quickly process very large amounts of data. In the past, it required buying an expensive appliance from Teradata, Oracle, Netezza, or Microsoft. I’m going to focus on Microsoft here, but there are several other cloud options for this model, like Amazon Redshift or Snowflake, amongst others. In terms of the on-premises investment, to get your foot in the door with one of these solutions you were looking at at least $250K USD, and that is a fairly conservative estimate. In a cloud world? SQL Data Warehouse can cost as little as $6/hour, and while that can add up to a tidy sum over the course of a month, you can pause the service when you aren’t using it and pay only for the storage. This allows you to do quick proof of concept work and, more importantly, compare solutions to see which one best meets your needs. It also allows a smaller organization to get into the MPP game without a major investment.

Secondary and Tertiary Data Centers

Many organizations have two data centers. I’ve only worked for one that had a third. You may ask why this is important. A common question I get when teaching Always On Availability Groups is: if we are split across multiple sites, where do we put the quorum file share? The correct answer is that it should be in a third, independent data center (which virtually no organization has). However, Windows Server 2016 offers a great solution for mere pennies a month—a cloud witness, a “disk” stored in Azure Blob Storage. If you aren’t on Windows Server 2016, it may be possible to implement a similar design using Azure File Storage, but it is not natively supported. Additionally, cloud computing greatly simplifies the process of having multiple data centers. There are no worries about having staff in two locales, or getting network connectivity between the two sites; that’s all done by your cloud vendor. Just build stuff, and make it reliable. And freaking test your DR. (That doesn’t change in the cloud.)

Multi-Factor Authentication

If you aren’t using multi-factor authentication for just about everything, you are doing it wrong. (This is a tell that Joey wrote this post and not the Russian mafia.) Anyway, security is more important than ever (hackers use the cloud too), and multi-factor authentication offers additional levels of security that go far beyond passwords. MFA is not a new thing—I have a collection of dead SecurID tokens dating back to the 90s that tell me that. However, implementing MFA used to require you to buy an appliance (later, some software), possibly have ADFS, and do a lot of complicated configuration work. Now? It’s simply a setting in the Office 365 portal (if you are using AAD authentication—and if you are an MS shop, learn AAD; it’s a good thing). While I complain about the portal, and how buried this setting is, it’s still far easier and cheaper (a few dollars a month) than buying stuff and configuring ADFS.

These are but a few examples of how cloud computing makes things that used to be available only to the largest enterprises available to organizations of any size. Cloud is cool, and it makes things better.

Have You Patched For Spectre/Meltdown Yet? (And more on patches)

It’s security week here at DCAC (you can join us on Friday, January 19th, 2018 at 2 PM in a webcast to talk more about security), and I wanted to focus on patches. I wrote a couple of weeks ago about the impact of Spectre and Meltdown on SQL Server (and just about everything else that runs on silicon chips). In the interim, Microsoft has patched all currently supported versions of SQL Server—the patches can be hard to find, but they are all summarized in this KB article. I can’t emphasize enough the need to patch all of your infrastructure for this—the vulnerabilities are big, and they are really bad. While you may have physically isolated servers (though these are a rarity in modern IT), an attacker may have gained access to your network via other credentials taken from an unpatched server.

So to summarize, you need to patch the following:

  • System BIOS
  • Hypervisor
  • Guest Operating System
  • Browser
  • Your Mouse (probably)

That’s a lot of patching, and a lot of downtime and testing. It sucks, and yeah, it’s probably also going to impact server performance. You still need to do it—unless you want to be the next guy blamed by the CEO of Equifax.

Which brings me to my next topic.


What is your patching strategy?

In my career, I found enterprise IT to be stodgy and not always open to new ideas. We were also slow to move generally, and operated a couple of years and versions behind modern IT. However, all of the large enterprises where I worked (5 different Fortune 100s) were really good at centralized management of systems, which made things like patching much easier. At the telecom company where I worked, I remember having to patch all Windows Servers to fix a remote desktop vulnerability—it was one of my first tasks there. We had System Center Configuration Manager to patch (and inventory the patch status of) all of those servers. We had a defined maintenance window, and good executive support to say we are going to apply system updates, and you should build customer-facing applications to be fault tolerant.

Smaller organizations have challenges with patching—Cabletown had a team of two people whose job was to manage SCCM. Many smaller orgs are lucky if they have a sysadmin and Windows Server Update Services (WSUS). So how do you manage updates in a small org? My first recommendation would be to get WSUS—we have it in our organization, and we’re tiny. However, you still need to manage rebooting boxes and applying SQL Server CUs (and testing, maybe). So what can you do?

  • Use the cloud for testing patches
  • Get a regular patching window
  • Use WSUS to check status of updates
  • When in doubt, apply the patch. I’d rather have to restore a system than be on the news

I mentioned the cloud above—one thing you may want to consider for customer-facing applications is platform as a service offerings like Amazon RDS, Microsoft Azure SQL Database, and Azure Web Apps. These services are managed for you, and have been architected to minimize downtime for updates. For example, if you are using Azure SQL Database, when you woke up to the Meltdown/Spectre news, your databases were already protected—without significant downtime.

Spectre and Meltdown–What does this mean for your SQL Servers?

By now, you have probably heard about a major bug in Intel (and ARM, and maybe AMD) processors. Since this vulnerability affects all processor types, you will probably need to update your phone, your tablet, your PC, and all of your servers in the coming weeks. Why and how does it affect everything?

What is this Bug About?

Modern (since the early 2000s) processors use what’s known as “out-of-order” execution. This is somewhat similar to the way SQL Server does read-ahead—in order to improve performance, the CPU will execute a series of instructions before the first one is necessarily complete. The way this bug works is that an attacker can pass code that will fail, but, using some trickery and the magic of CPU caches, build up and retrieve (at up to 500 KB/s) all of the kernel memory. This means things like passwords, which are normally secured in kernel memory, can be stolen very easily by an attacker.

Should I Patch?


This is a deadly serious bug that is easy to exploit, with a ton of vectors. You should go through your normal test cycle with patch validation, but applying these patches should be a key priority in the coming weeks.

For a full listing of patches, Allan Hirt (b|t) of SQLHA has put together the most comprehensive reference I’ve seen.


The biggest takeaway I’ve gleaned from reading all of this is that if you are running VMware ESX 5.5, the patches VMware has released for that version are not comprehensive, so you will likely need to upgrade to ESX 6 or higher.

Will This Impact My Performance?

Probably–especially if you are running on virtual hardware. For workloads on bare metal, the security risk is much lower, so Microsoft is offering a registry option to not include the microcode fixes. Longer term, especially if you are audited, or allow application code to run on your database servers, you will need to enable the microcode options.

This will likely get better over time as software patches are released that are better optimized to make fewer calls. Ultimately, this will need to be fixed on the hardware side, and we will need a new generation of hardware to completely solve the security issue with minimal impact.

What Do I Need to Patch?

Just about everything. If you are running on a platform as a service (Azure SQL DW/DB), you are lucky—it’s pretty much done for you. If you are running in a cloud provider, the hypervisor will be patched for you (it already has been if you’re on Azure—surprise, hope you had availability sets configured); however, you will still need to patch your guest VMs. You will also need to patch SQL Server—see this document from Microsoft. (Also, note that a whole bunch of new patches were released today.)
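After patching an instance, it’s worth confirming the build actually changed; a quick sketch with sqlcmd (the server name and credentials are placeholders):

```shell
# compare the reported build number against Microsoft's list of fixed builds
sqlcmd -S localhost -U sa -P "$SA_PASSWORD" \
  -Q "SELECT SERVERPROPERTY('ProductVersion'), SERVERPROPERTY('ProductLevel');"
```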

What Other Things Do I Need to Think About?

This is just from a SQL Server perspective, but there are a few things that are a really big deal in terms of security now until you have everything patched:

  • Linked Servers–If you have linked servers, you should assume an attacker can read the memory of a linked server by running code from the original box
  • VMs–until everything is patched, an attacker can do all sorts of bad things from the hypervisor level
  • CLR/Python/R–By using external code from SQL Server an attacker can potentially attack system memory.

Don’t Let Your Infrastructure Team Design Your Data Protection Strategy

In the last two days, I’ve been part of two discussions: one about the need to run CHECKDB on modern storage (yes, the answer is always yes, and twice on Sundays), and another about problems with third party backup utilities. Both of these discussions were born out of infrastructure teams wanting to treat database servers like web, application, and file servers, and manage them all with one tool. Sadly, the world isn’t that simple, and database servers are different (I’m writing this generically, because I’ve seen this issue crop up with both Oracle and SQL Server). Here’s how it happens: invariably, your infrastructure team has moved to a new storage product, or bought an add-on for a virtualization platform, that will meet all of their needs. In one place, with one tool.


So what’s the problem with this? As a DBA, you lose visibility into the solution. Your backups are no longer .bak and .trn files; instead, they are off in some mystical repository. You have to learn a new tool to do the most critical part of your job (recovering data), and maybe you don’t have the control over your backups that you might otherwise have. Want to stripe backups across multiple files for speed? Can’t do that. Want to do more granular recovery? There’s no option in the tool for that. Or my favorite—want to do page level recovery? Good luck getting that from an infrastructure backup tool. I’m not saying all 3rd party backup tools are bad—if you buy one from a database-specific vendor like Redgate, Idera, or Quest, you can get value-added features in conjunction with your native ones.

BTW, just an FYI: if you are using a 3rd party backup tool and something goes terribly sideways, don’t bother calling Microsoft CSS—they will direct you to the vendor of that software, since they don’t have the wherewithal to support solutions they didn’t write.

Most of the bad tools I’m referring to operate at the storage layer, by taking VSS snapshots of the database after quickly freezing the I/O. In the best cases, this is inconsequential—Microsoft lets you do it in Azure (in fact, it’s a way to get instant file initialization on a transaction log file; I’ll write about that next week). However, in some cases these tools can have faults, or take too long to complete a snapshot, and that can do things like cause an availability group to fail over, or in the worst case, corrupt the underlying database while taking a backup, which is pretty ironic.

While snapshot backups can be a good thing for very large databases, in most cases, with a good I/O subsystem and some backup tuning (using multiple files, increasing the transfer size), you can back up very large databases in a normal window. I manage a system that backs up 20 TB every day with Ola Hallengren’s scripts, and no storage magic. Not all of these storage based solutions are bad, but as you move to larger vendors who are further and further removed from what SQL Server or Oracle are, you will likely run into problems. So ask a lot of questions, and ask for plenty of testing time.
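To give a concrete idea of that tuning, here is a sketch of a striped, tuned backup driven through sqlcmd; the database name, file paths, and values are illustrative, and should be tuned against your own I/O subsystem:

```shell
# stripe across four files and raise the transfer size and buffer count;
# BigDB and the /backup paths are placeholders
sqlcmd -S localhost -U sa -P "$SA_PASSWORD" -Q "
BACKUP DATABASE BigDB
TO DISK = '/backup/BigDB_1.bak',
   DISK = '/backup/BigDB_2.bak',
   DISK = '/backup/BigDB_3.bak',
   DISK = '/backup/BigDB_4.bak'
WITH COMPRESSION, MAXTRANSFERSIZE = 4194304, BUFFERCOUNT = 200, STATS = 5;"
```

MAXTRANSFERSIZE caps out at 4 MB, which is why 4194304 shows up so often in backup tuning examples.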

So if you don’t like the answers you get, or the results of your testing, what do you do? The place to make the argument is to the business team for the applications you support. Don’t do this without merit to your argument—you don’t want to unnecessarily burn a bridge with the infrastructure folks—but at the end of the day your backups, and more importantly your recovery, ARE YOUR JOB AS A DBA, and you need a way to get the best outcome for your business. So make the argument to your business unit that “Insert 3rd Party Snapshot Magic Here” isn’t a good data protection solution, and have them raise it to infrastructure management.

Azure VMs with Reduced Core Counts

Something that has come from Microsoft in the last couple of months is the ability to provision an Azure Virtual Machine (VM) with fewer CPUs than the machine size normally has allocated to it. For example, by default a GS5 VM has 32 CPUs and 448 GB of memory. Let’s say you want to migrate your on-premises SQL Server that has 16 cores and 400 GB of RAM. If you wanted to use a normal GS5, you would have to license an additional 16 cores of SQL Server (since you have all that RAM, I’m assuming you are using Enterprise Edition). Now, with this option, you can get a GS5 with only 16 (or 8) CPUs.

I can hear open source database professionals laughing at turning down CPUs, but this is a reality in many Oracle and SQL Server organizations, so Microsoft is doing us a favor. This doesn’t apply to all VM classes, and currently the pricing calculator does not show these options; however, they are in the Azure portal when you select a new VM for creation. The cost of these VMs is the same as the fully allocated ones, though if you are renting your SQL Server licenses through Azure, those costs are lower.

This option is available on the D, E, G, and M series VMs, and mainly on the larger sizes in those series. If you would like to see cores reduced on other VM sizes, send feedback to Microsoft.
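If you script your deployments, the constrained-core sizes show up as their own size names; here is a hedged sketch with the Azure CLI (the size name Standard_GS5-16, the resource group, and the image URN are illustrative, so check what your subscription actually offers):

```shell
# constrained-core sizes are named <base size>-<active cores>, e.g. Standard_GS5-16;
# my-rg and sql-vm are placeholder resource names
az vm create \
  --resource-group my-rg \
  --name sql-vm \
  --size Standard_GS5-16 \
  --image MicrosoftSQLServer:SQL2017-WS2016:Enterprise:latest \
  --admin-username azureuser
```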



Speaker Idol 2017—A Judge’s Tale

Not to sound too much like Juan Antonio Samaranch, but 2017 was truly the best Speaker Idol I’ve ever had the pleasure of judging (and I’ve been judgy at all of them). I would foremost like to thank Tom LaRock (b|t) for stepping in as emcee while Denny Cherry was ailing. Tom did an excellent job of keeping the contest flowing, and his stage presence and sense of humor kept our audience (and judges) entertained. I am also taking the opportunity of this post to document two new rules that we are introducing to Speaker Idol for 2018. (I think the official rules may be in a OneNote somewhere, or it’s just tribal knowledge between Denny and me.)

  1. Contestants, or any representatives of contestants may have no written or internet communications related in any way to the judging of the contest with judges during the period of the competition. (Which is defined as the moment the first speaker idol contestant speaks, until the final decision of the winner is announced). Penalty is disqualification and removal from final.

This means that if you won, you aren’t allowed to ask (or have anyone else ask) the judges what you did right or wrong in your talk. I would extend this rule only to round winners, but since runners-up have the ability to wildcard into the finals if a round winner who does this is disqualified, the rule applies to everyone. You are free to ask judges for their feedback after the competition, but for the most part, what we say to you on stage is our feedback. If it was really bad, we might be a little nice, but you likely know it was really bad. You can still say hi to a judge at the conference, but don’t ask them how you could improve.

2. No gifts of any value may be offered to judges within a 90 day period before and after the competition.

I chose 90 days somewhat arbitrarily, because I don’t think we name contestants 90 days before Summit. And if you care enough to buy the judges 30-year-old scotch 3 months after the competition, more power to you. If you want to give a judge a sticker or a business card outside the competition room, that is acceptable. Nothing more. A sticker or a business card. No free training. Or logins to your site.

That’s enough about rules. Let’s talk about the competition.

The Level of Quality was High

As judges, we’ve never actually had to calculate scores before. For this year’s final round, we actually flipped over the sign in front of the room and objectively scored our top two on the following:

  • Slides
  • Delivery
  • Content

Everyone who made it to the final was good. Really good. Each of them would be a fine speaker at Summit. So what were the differences? When competitions are that close, scoring comes down to very minor factors, like body movement on stage, tics in delivery, and making the most of your time. Another factor is taking feedback from the earlier rounds and incorporating it into your presentation. Almost all of our contestants improved from their preliminary round—if you made it to the final, congratulations, you did an excellent job.

Why the Winner Won

There’s a saying I’ve heard in sports, particularly amongst hitters in baseball, and quarterbacks in football, and I can tell you it also holds true in bike racing, that as you become more experienced, everything around you seems to slow down and lets you observe more of what’s going on in the moment, than someone who is less experienced. The same thing applies to public speaking—when you first do it, you feel nervous, and rushed, and you don’t feel like you can just relax and be yourself. The biggest difference between our winner, and our second place competitor, was that Jeremy was relaxed, delivered his content slowly, so that it could be easily consumed, and conveyed a complex technical concept in a manner that was easily understandable. Both presentations were excellent, Jeremy’s simply rose above.
