Columnstore Indexes and Key Lookups–The Worst

Key Lookups are one of my least favorite SQL Server execution plan operators. For each record returned by an index seek, SQL Server goes back to the clustered index and looks up the rest of the row. Generally, we either live with this condition (for a very small number of rows) or we fix it by adding the missing columns to the key, or as included columns, in our nonclustered index.

However, in the plan below, we have a different scenario. We have a clustered columnstore index with an additional nonclustered B-tree index on the table. This feature was added in SQL Server 2016 to allow point lookups on a column without having to scan many row segments of the columnstore. It works pretty well for some conditions, though it is important to know that it can slow down your inserts significantly.

In the last year or so, working with a large customer who makes fairly heavy use of this pattern, I’ve noticed another concern. Sometimes (and I can’t figure out exactly what triggers it) the execution plan that gets generated will do a seek against the nonclustered index and then do a key lookup against the columnstore, as seen below. This is bad for two reasons: first, the key lookup is super expensive, because columnstores are generally very large; second, the key lookup runs in row execution mode rather than batch mode, which drops the rest of the execution plan into row mode and slows the query down even further.

 

[Execution plan screenshot: a seek on the nonclustered index feeding a key lookup against the clustered columnstore index]

I’ve anonymized the schema, but that key lookup is against my clustered columnstore index. This query never finished. So what’s the resolution? Index hints. I’m not a big fan of using hints, but sometimes you need to kick the query optimizer in the shin. When I changed this query to use an index hint, it completed in around 7 seconds. The ultimate fix is for Microsoft to fix this costing, but that’s hard. You can vote on my User Voice item here:

https://feedback.azure.com/forums/908035-sql-server/suggestions/36015868-key-lookup-against-columnstore-index-causes-slow-q

If you see this pattern pop up at all, you will definitely want to hint your queries.
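For illustration, the hinted version of a query like this might look something like the sketch below. The table and index names are hypothetical (I’ve anonymized the schema), but the shape of the hint is the important part.

-- Hypothetical names: force the clustered columnstore index so the optimizer
-- cannot choose the nonclustered seek + key lookup plan.
SELECT SUM(f.SalesAmount)
FROM dbo.FactSales AS f WITH (INDEX (CCI_FactSales))
WHERE f.OrderDateKey = 20181114;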

 

Not all PaaS Services are the Same

In public cloud computing, platform as a service (PaaS) is an offering that delivers some key computing concept as a service. Since you are reading my blog, you are probably most familiar with Azure SQL Database (which, if you are old like me, you might call SQL Azure). Azure SQL Database, for the uninitiated, is Microsoft’s database as a service offering. You make a few mouse clicks or run a few lines of code, pay some money to Microsoft, and you have a database for your application. Microsoft handles your software installation, your settings, backups and restores, and numerous other operational tasks, and lets you focus on writing your code.


Azure SQL Database’s original pricing model simply referred to a number called database throughput/transaction units (Microsoft really likes to rename things, if you haven’t noticed), or DTUs, which are a blended measure of CPU, memory, and read/write IOPs. A pricing model like this allows the cloud provider to abstract the underlying hardware, which is another simplification of the cloud model. However, especially in the early days of the public cloud, this concept was hard for IT orgs (many of which, at the time, were still buying physical hardware) to grasp.

Different PaaS offerings offer different levels of abstraction, though they do have a few things in common. You are almost never going to install software—patching and installation are nearly always handled by the vendor. However, with some different offerings you may be doing things like configuring storage and networks—a good example of this is with the new SQL Server offering—Managed Instance. This excellent post from the SQLCAT team shows the complexity of sizing data files to get maximum performance out of the underlying storage. Doing this requires a really firm understanding of Azure storage and how it works on the platform. You also need to know a little bit about virtual networking, because you will need to get your managed instance “wired” into a network where it can talk to your application.

Another difference with Managed Instance is that you choose a tier, General Purpose or Business Critical (which affects your storage performance and availability), then the number of cores you would like, and the generation of hardware you would like. While Azure SQL Database is also moving to this model, it is noteworthy to see the difference in abstraction. I like to think of this as a curve that has pure infrastructure as a service (IaaS) on one side (VMs in the cloud) and software as a service (SaaS) like Office 365 on the other side. Managed Instance is definitely closer to the IaaS side of things.

Another thing you want to verify is the backup situation for your chosen service. While most services do backups (and you may not even have the option to do your own backups, as in Azure SQL DB), it is important to know what your data loss SLA is and what your options are if there’s an outage or a disaster.

I was inspired to write this post because, while delivering training last week, I deployed a new Azure Databricks cluster and was totally blown away by how simple the deployment process was—you had two performance options (for advanced deployments you could configure more settings), but it was dead simple from a deployment perspective. I don’t think it’s a good thing or a bad thing that different services have different levels of configuration—in some services you need that, but sometimes it’s nice just to have a box that does a service for you.

Automating TempDB Configuration in Azure

One of the unique things about managing SQL Server on Azure VMs is that we use the local D: drive for TempDB. The D: drive (or /dev/sdb1 for those of you running on Linux) is a direct-attached solid state drive (on nearly all VM tiers) which offers the lowest latency storage in Azure, making it ideal for TempDB’s busy workload. There is only one catch–that temporary volume in Azure is ephemeral, meaning the data gets wiped whenever your VM is rebooted.


You may think this isn’t a big deal–TempDB gets recreated every time your instance restarts, so who cares if the files get wiped? Well, it’s not so much the files as the directory that the files live in. You could just place your files at the root of the D:\ drive, but that would require your SQL Server service to start as admin. Since we like to follow security best practices, we aren’t going to do that. I usually follow the process defined by Jim Donahoe (b|t) in this post.

I was teaching Azure infrastructure last week and decided that it might be a good idea to do this using Desired State Configuration (DSC), which is part of Azure Automation. DSC allows you to use PowerShell scripts and a specific template format to define the configuration of a group of machines (or a single machine). Documentation on DSC is sporadic, and this project is still a work in progress because I had a client deadline.

But before I could even think about DSC, I needed to code the process in PowerShell. I start out by calling dbatools, which greatly simplifies my TempDB config. By using Set-DbaTempDbConfig I just need to pass in the volume size, which I can get from WMI–I’m allocating 80% of the volume to TempDB (I use 80% because it’s below the alert threshold of most monitoring tools) and then the script does the rest. I have a good old TempDB script, but by using dbatools I eliminate the need to figure out how to run that in the context of automation.
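A minimal sketch of the idea looks something like this. It assumes the dbatools module is installed and that D: is the local SSD; the Set-DbaTempDbConfig parameter names should be verified against your version of dbatools, and the folder path is just an example.

# Sketch only: size TempDB to 80% of the ephemeral D: volume using dbatools.
Import-Module dbatools

# Get the capacity of the D: volume from WMI/CIM and take 80% of it (in MB).
$volume   = Get-CimInstance -ClassName Win32_Volume -Filter "DriveLetter = 'D:'"
$tempDbMB = [math]::Floor(($volume.Capacity / 1MB) * 0.8)

# Recreate the folder the ephemeral drive loses, then configure TempDB.
New-Item -ItemType Directory -Path 'D:\TempDB' -Force | Out-Null

Set-DbaTempDbConfig -SqlInstance 'localhost' -DataFileSize $tempDbMB -DataPath 'D:\TempDB' -LogPath 'D:\TempDB'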

You can import PowerShell modules into Azure Automation now–this is a relatively recent change. I don’t have this fully baked into DSC yet, but you can see the PowerShell to create the PowerShell script (yes, read that correctly) and the scheduled task on GitHub.

I’d welcome any feedback and I will add a new post when I finish the DSC piece of this.

Vendors: Tell Me What Your Product Costs

A couple of weeks ago, my friend and colleague Matthew Roche (t) put out a poll about people’s feelings on vendors who don’t publicly advertise the costs associated with their products. In the past, in a disconnected world, when most of your customers had sales reps and finding all of your competitors’ pricing wasn’t possible, abstracting prices to minimize comparisons made business sense. In an interconnected world, especially given the nature of cloud computing pricing (usually straightforward storage+compute costs), abstracting or hiding your prices just annoys potential customers.


In my roles where I’ve had to set budget, I’ve frequently been frustrated, especially with hardware vendors, by their inability to give me a rough idea of their pricing. I’m not asking for a quote–I understand discounts, volume pricing, etc. I just want to know the number of zeroes in your product costs. I understand different customers have different costs, but make your pricing clear, so I can understand it. As much as I’ve bagged on Oracle for their “aggressive” sales tactics, I’ve always been able to go to their website and get a full list of retail pricing for their products. And I applaud them for that.

Why am I writing about this now? I mostly thought the IT industry was beyond this type of pricing, especially since the move toward cloud computing has made pricing much more transparent. However, last week I was at a very large customer, and we were talking to a vendor who had a very promising technology. The vendor couldn’t give the customer any semblance of pricing–this was a two-fold problem. Their licensing model is so abstracted that I had no idea what I was buying (and you thought DTUs were bad), and secondly, the sales rep could barely give us any idea of what our monthly spend would be for the product.

Part of the modern, connected world is that consumers can quickly comparison shop. For example, 80% of retail customers check pricing online when they are in a store (Regalado, Antonio. “It’s All E-Commerce Now.” Technology Review, November 4, 2013. http://www.technologyreview.com/news/520786/its-all-e-commerce-now/). Enterprise customers are doing the same thing–while that may be frustrating to old school salespeople, it’s the way of life now.

Make your pricing as transparent as possible–if your prices may fluctuate due to other factors like cloud pricing, put a footnote in your price list, but still make it clear. Your sales message will be all the better for it.

A Problem with Storage Spaces, Failover Clustering, and Always On Availability Groups in Azure

This is quite possibly my longest blog post title ever, but it is pretty important for anyone who is building SQL Server configurations using Azure virtual machines. If you aren’t familiar with Azure storage, the max size of an individual disk is 4 TB (and, more importantly, 7500 IOPs–most workloads will be better served by the 1 TB/5000 IOPs P30 disks). To get a volume larger than a single disk (and with more IOPs), the admin will use Storage Spaces to create a RAID 0 group (Azure already provides redundancy in the infrastructure), and you get the sum of the disks’ storage and IOPs. This process is well documented and I’ve done it a number of times for customer workloads. I was building a new availability group for a customer last week, and the process behaved a little differently and failed on me.

Windows Clustering is Aggressive with Your Storage

Note: As I was writing this post, about four different parts of the VM creation process changed in the portal. Welcome to cloud computing, boys and girls.

So in this demo, I created a VM, added two disks, and joined the VM to our Active Directory domain. I haven’t added failover clustering yet. From Server Manager, I can see my primordial storage pool with the two disks I’ve added.

[Screenshot: Server Manager showing the primordial storage pool with the two new disks]

So, next I’ll add the Failover Clustering feature and build a one-node cluster. For this demo, you only need one node–the behavior is the same in a single-node or multi-node configuration. I’ve built a cluster, and you will note that there are no disks present in the cluster.

[Screenshot: Failover Cluster Manager showing the one-node cluster with no disks]

However, if I go back to Storage Spaces, I still see my primordial pool, but no physical disks, and now I have an error about “Incomplete Communication with Cluster.”

[Screenshot: Storage Spaces showing the primordial pool with no physical disks and the “Incomplete Communication with Cluster” error]

At this point, I am unable to configure a storage pool for my SQL Server data.

I’ve had inconsistent results here–but in no situation have I been able to create a storage pool.

Workaround

This really sucks, but the workaround is to evict the node from the cluster and then create your storage pool as documented above. That’s kind of gross, so a better approach is to configure your Storage Spaces pool before you add your node to your Windows cluster. I did test the process of adding a new disk to an existing pool after the server has been clustered, and that process works as expected.
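For reference, building the pool ahead of clustering looks roughly like the sketch below. The pool, disk, and drive letter names are made up, and the resiliency and column settings should be adjusted to your disk count and workload.

# Sketch: create the Storage Spaces pool *before* the node joins the cluster.
$disks = Get-PhysicalDisk -CanPool $true

New-StoragePool -FriendlyName 'SQLPool' -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks $disks

# RAID 0 (Simple) because Azure already provides redundancy; one column per disk.
New-VirtualDisk -StoragePoolFriendlyName 'SQLPool' -FriendlyName 'SQLData' -ResiliencySettingName Simple -UseMaximumSize -NumberOfColumns $disks.Count

# Initialize, partition, and format with a 64 KB allocation unit for SQL Server data.
Get-VirtualDisk -FriendlyName 'SQLData' | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter F -UseMaximumSize |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel 'SQLData'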

Root Cause

My thought (and I haven’t tried to debug this, but I have communicated with Microsoft Windows and SQL teams about it) is that clustering is being too aggressive with the available storage. I tried running some PowerShell to prevent clustering from taking the disks, but I still had the same result. I’ll update this post if I hear anything further from Microsoft.

 

Kubernetes, Part III–The Failover and The Rolling Upgrade

If you have been following this series, you’ve learned how to install and configure minikube and get started with running SQL Server within Kubernetes. In the final part of this series, you will learn how failover works and how you can do a rolling upgrade of SQL Server.

How This Works

Kubernetes works kind of like Windows Failover Clustering does–since we have defined our service, it runs a health check on the pod running SQL Server. There is also a notion of auto-healing–the running state of the cluster should always match our deployment manifest (this is called desired state), which means that if our pod (the unit of deployment that holds our container) goes away, the Kubernetes control plane will bring back our pod, and therefore our database. Even if a node were to go away, since our deployment is stored in the Kubernetes master, SQL Server will come back online. The timing of this failover is very similar to a SQL Server failover cluster restart, as crash recovery will need to be performed. Let’s see how this works. In this first shot, we have a working instance, with a database called demo.

[Screenshot: sqlcmd connected to the running pod, showing the demo database]

Next, I’m going to use the kubectl delete pod command to remove our pod (and with it, our container).

[Screenshot: output of kubectl delete pod]

As you can see in the kubectl get pod output, we have one pod terminating and another creating. Next, I will run the kubectl logs command to get the log from our container–this is just the SQL Server error log, because SQL Server on Linux writes it to stdout (the standard output).

[Screenshot: kubectl get pod showing one pod terminating and its replacement creating, followed by kubectl logs showing the SQL Server error log]
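Roughly, the commands for this sequence look like the following; the pod names are placeholders for whatever kubectl get pods returns in your cluster.

kubectl get pods                                   # note the current pod name
kubectl delete pod mssql-deployment-<pod-id>       # delete the running pod
kubectl get pods                                   # one pod Terminating, a replacement ContainerCreating
kubectl logs mssql-deployment-<new-pod-id>         # SQL Server error log from the new pod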

Now, I can log back into my container and see my database.

[Screenshot: sqlcmd reconnected after the failover, with the demo database still present]

Performing a Rolling Upgrade

One of the other cool things we can do is a rolling upgrade. In the manifest I supplied, when your container is built you will always get the latest release of SQL Server. However, you can pin a specific CU–in this next example, I change the image line in my manifest to pull down CU4: image: microsoft/mssql-server-linux:2017-CU4. I’ve deployed that here, as you can see.

[Screenshot: the deployment running the 2017-CU4 image]

So, I’m simply going to change the image line in my manifest from CU4 to CU5 and then redeploy the manifest using the kubectl apply -f command. To reduce your downtime, you will want to have already deployed that version of SQL Server to a test environment in your cluster–when the image is already local to your cluster, the time to spin up your new container will be much lower.
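A minimal sketch of the change, using the sqldeployment.yaml manifest from Part II:

# In sqldeployment.yaml, change the container image tag
#   image: microsoft/mssql-server-linux:2017-CU4
# to
#   image: microsoft/mssql-server-linux:2017-CU5
# then reapply the manifest; Kubernetes replaces the pod with the new image.
kubectl apply -f sqldeployment.yaml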

You can see how that process worked in this screenshot.

[Screenshot: kubectl apply reporting “deployment configured,” the pod restarting, and the upgraded SQL Server instance accepting connections]

We’ve redeployed the template (which you can see by the “deployment configured” response from K8s), our pod terminated and restarted, SQL Server upgraded, and we were able to connect.

Summary

I hope you enjoyed this series of posts–I really feel like Kubernetes represents the future of infrastructure, whether in the cloud or on-premises. Software defined infrastructure and persisted storage make it easy to deploy and manage. When SQL Server gets some more features–the main ones I’m looking for are AD authentication and Always On Availability Groups–I think you will see far more widespread use of SQL Server on containers.

Getting Started with SQL Server and Kubernetes, Part II

You might have noticed that I titled this post Part II, but with no “of N”. This will likely be a periodically recurring series of posts as I take things a bit further with Kubernetes. Also, I will likely be correcting things I screwed up along the way. The first correction: I said that if you had a Mac, this was easy to configure using the Edge version of Docker with Kubernetes. Well, it was easy to configure K8s with Docker–however, when I got deep into playing with SQL Server, I kept running into weird issues with storage persistence and instance startup. I was chatting with a colleague on the product team (thanks Mihaela!!) and she told me about this bug:

https://github.com/Microsoft/mssql-docker/issues/12

It’s complicated–but basically the Docker for Mac filesystem isn’t consistent with what SQL Server’s I/O code expects. So instead, I installed Minikube on an Ubuntu VM on my Mac. You can find the instructions here; you will have to install VirtualBox to host the VM for your Kubernetes install. It was really straightforward–if you want to build a “real” Kubernetes cluster, the process is a little bit more complicated, and outside of the scope of today’s post.

What are We Building?

In January, the SQL Server Product Group posted this blog post on deploying SQL Server in a high-availability configuration on Kubernetes. It went by without much fanfare, but I dug into the post, built the solution, and realized it offers nearly the same functionality as a Failover Cluster Instance with minimal configuration effort. While building an FCI on Windows is quite approachable, building an FCI on Linux is somewhere between painful and awful, depending on the distribution you are using, the day of the week, and the position of the stars. The post assumes you are building on top of the Azure Kubernetes Service–which is a great way to get started, but it costs money, and I wanted to do something in a VM. So we’re building the same basic architecture that is in that PG post; however, we are going to build it on minikube.

Components

There are a couple of things we are building here:

Persisted Disk: Inherently, containers have no persisted storage. Obviously, this is a problem for database containers. We are going to define a persistent volume claim to request storage for our pod. This is probably the biggest difference between my code and the PG blog post, as I’m using local storage as opposed to Azure disks.

Deployment: This refers to our container and volume. You will see this defined in the code below.

Service: We are defining a service and creating a load balancer for our deployment. The load balancer is the key to this deployment, as it will maintain a persistent IP for our deployment when our container goes away.

By defining the deployment this way, if we have a failure on the host that is running our container, Kubernetes’ auto-healing process will automatically deploy a new pod (in this context, a pod just holds our single container; a web server, for example, might have multiple containers in a pod).

Let’s Build Something

This assumes that you have minikube up and running. The first thing you’re going to do is build a secret to pass your SA password into your deployment.

 

kubectl create secret generic mssql --from-literal=SA_PASSWORD="MyC0m9l&xP@ssw0rd"

 

The next thing you are going to do is build your persistent volume claim.

 

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mssql-data-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Save this text in a file; for the purposes of this post, we will call it pv-claim.yaml. You will then run the kubectl apply -f pv-claim.yaml command, and you will see the message persistentvolumeclaim "mssql-data-claim" created.

Next we are going to build our deployment and our load balancer.

apiVersion: v1
kind: Service
metadata:
  name: mssql-deployment
spec:
  selector:
    app: mssql
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  type: LoadBalancer
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mssql
        image: microsoft/mssql-server-linux
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql
              key: SA_PASSWORD
        volumeMounts:
        - name: mssql-persistent-storage
          mountPath: /var/opt/mssql
      volumes:
      - name: mssql-persistent-storage
        persistentVolumeClaim:
          claimName: mssql-data-claim

There’s a lot of stuff here. Let’s walk through the key elements of this file:

We’re defining our service and load balancer at the beginning of the code. Next, we are defining our deployment, which specifies the container we’re going to use (in this case the latest release of SQL Server 2017) and picks up our predefined SA password. Finally, we are defining our volume mount and the path where it will be mounted inside the container. Save this off to a file called sqldeployment.yaml and run kubectl apply -f sqldeployment.yaml to deploy it. You will see service "mssql-deployment" created and deployment "mssql-deployment" created. You can verify the deployments by running the commands below:

[Screenshot: kubectl output showing the mssql-deployment pod and service, with the service’s IP address and port]
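If you are following along, the verification looks roughly like this (output omitted; the names come from the manifest above):

kubectl get pods        # the mssql-deployment pod should show a STATUS of Running
kubectl get services    # note the port exposed by the mssql-deployment service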

You’ll want to make note of the port for your SQL deployment; however, the IP address shown there is not routable from outside the cluster. There is some weirdness here to running minikube. I’d like to thank the Database Avenger for this post, which showed me how to connect to the kube. Run the following command, which will give you the address to connect to:

minikube service mssql-deployment --url

Your output will give you the IP address and port you can connect to.

Can We Login Yet?

So, I have sqlcmd installed on my Linux VM (instructions for SQL tools for Linux). If you have made it this far, this is just a standard SQL connection, albeit to a non-standard port, which is denoted with a comma after the IP address.
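For example, something like this, where the IP address and port are placeholders for whatever minikube returned above (the SA password is the one stored in the secret earlier):

sqlcmd -S 192.168.99.100,31433 -U sa -P 'MyC0m9l&xP@ssw0rd'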

[Screenshot: sqlcmd connected to the minikube address, showing the server name mssql-deployment]

You can see my server name is mssql-deployment. I’m already out of space, so come back next week to talk about persisting data, and how failover works.

Getting Started with SQL Server and Kubernetes

If you find yourself asking, “What the heck is Kubernetes, and is it the next Hekaton?” you are in the right place. If you know all about K8s (that’s open source shorthand for Kubernetes: K, eight letters, S–don’t shoot me, open source projects don’t have marketing people), then hopefully you’ll learn something about how SQL Server interacts with Kubernetes.

No really, What the hell is this?

One of the cool introductions in SQL Server 2017 was support for Docker containers. Feature support was pretty limited (no AD, limited HA/DR options, and some of the other standard SQL Server on Linux limitations); however, especially for those of us who work on Macs, Docker support gave us a very easy way to get SQL Server running on our computers without spinning up a virtual machine (VM). Docker is a basic containerization system–so what are containers? Containers are similar to VMs, but they run a little bit closer to bare metal, don’t contain a whole copy of the operating system (just the runtimes you need), and, most importantly, are fully software defined.

I recommend two books on the topic.

The Kubernetes Book — This is a good overview and will give you a decent understanding of what you need to know.

Kubernetes Up and Running — This is a much more comprehensive, deeper dive that you will want before you try to operationalize K8s.

I’d really like a book on Kubernetes Infrastructure and Stuff for VMware Admins (no one steal that title, in case I decide to write it), but from my research no such tome exists.

Software Defined? Isn’t that just a buzzword?

In recent years, you have likely heard the terms software defined networking, software defined storage, and software defined infrastructure, and if you’ve done anything on a public cloud provider, you have taken advantage of the technology. In a nutshell, with software defined infrastructure you have a file (most likely YAML or JSON) that defines all of your infrastructure: network connections, storage, the image to build your container from, and so on.

There’s a rationale behind all this–at the scale of massive enterprises (and what are cloud providers but massive enterprises with amazing programmers?), system administrators and enterprise hardware don’t scale. Imagine if every time someone wanted to build a VM in Azure, someone at Microsoft had to do something–anything, even if it was just clicking OK to approve. That process would never scale when there are thousands of requests per minute.

Enterprise software and hardware don’t scale to public cloud volumes either–if Microsoft or Amazon were running enterprise-class storage, there is no way they could offer storage at the low costs they do. And if you are building your own hardware, you may as well build it so that you can easily automate nearly all of the normal tasks through a common set of APIs. Remind me to write another blog post about object-based storage, which is also a common theme in these architectures.

Weren’t you going to show us how to do something?

This post ended up being a lot longer than I expected–so in order to get you started, you are going to install and configure K8s. If you are on a Mac, this is very easy:

Mac Instructions: Download the Edge Version (think beta) of Docker for Mac here. Then follow these instructions.

If you are on Windows, it’s a little more complex. This effort is going to use VirtualBox–there needs to be a Linux virtual machine to host the environment where you build your containers. You’ll then need to ensure Hyper-V is not turned on–it’s off by default, and you can check in the Turn Windows Features On or Off section of Control Panel.

From there, you can install Docker and VirtualBox using the Docker toolkit available here. This post does a good job of illustrating the steps you’ll need to configure minikube and kubectl after you’ve got Docker installed.
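Once everything is installed, spinning up the local cluster is roughly this; the driver flag assumes VirtualBox and may differ across minikube versions.

minikube start --vm-driver=virtualbox   # create the single-node Kubernetes VM
kubectl get nodes                       # confirm the minikube node reports Ready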

Conclusion

You now know why containers are a thing, and how software defined everything has brought the computing world to this point. In the next post, you will learn how to configure SQL Server in a high availability configuration using K8s. I promise it will be far less painful than clustering on Linux.

 

Resources from Philly Code Camp Precon

I did a precon at Philly Dot Net’s Code Camp, and I promised that I would share the resources I talked about during the event. I’d like to thank everyone who attended–the audience was attentive and had lots of great questions. So, anyway, here are the resources we talked about during the session:

1) Automated build scripts for installing SQL Server.

2) SolarWinds’ free Database Performance Analyzer

3) SentryOne Plan Explorer

4) Glenn Berry’s Diagnostic Scripts (b|t). These were the queries I was using to look at things like Page Life Expectancy, disk latency, and the plan cache. Glenn’s scripts are fantastic and he does an excellent job of keeping them up to date.

5) sp_WhoIsActive. This script is from Adam Machanic (b|t) and will show you what’s going on on a server at any point in time. You have the option of logging this to a table as part of an automated process.

6) Here’s the link to the Jim Gray white paper about storing files in a database. It’s a long read, but it’s from one of the best.

Finally, my slides are here. Thanks everyone for attending and thanks to Philly Dot Net for a great conference.

How the Cloud Democratizes Solutions

The grey hairs I see every now and again remind me of how long I’ve been working in IT. I’m now in my 22nd year of having a job (two of those years as an intern, but I did have the ‘sys’ password), and I’ve seen a lot of things come and go. When I started working, servers cost at least tens of thousands of dollars, were the size of refrigerators, and had less processing power than the MacBook Pro I’m typing this post on. More importantly, they came with service contracts that cost a rather large amount of cash per year. This bought you, well, I’m not sure what, but your hardware reps surely took you out for at least a steak dinner or two. Eventually, we moved to commodity hardware (those boxes from HP and Dell that cost a tenth of what you paid 20 years ago) and service contracts dropped to a few hundred dollars per server per year. (And then those sales reps started selling expensive storage.)

Most of my career has been spent working in large Fortune 500 enterprises—I think about things that at the time were only available to organizations of that size and budget, and are now available for a few clicks and a few dollars per hour. I’m going to focus on three specific technologies that I think are cool, and interesting, but as I’m writing this, I’ve already thought of three or four more.


Massively Parallel Processing

MPP is a data warehouse design pattern that allows for massive scale-out solutions that quickly process very large amounts of data. In the past it required buying an expensive appliance from Teradata, Oracle, Netezza, or Microsoft. I’m going to focus on Microsoft here, but there are several other cloud options for this model, like Amazon Redshift or Snowflake, amongst others. In terms of the on-premises investment you had to make, to get your foot in the door with one of these solutions you were looking at at least $250K USD, and that is a fairly conservative estimate. In a cloud world? SQL Data Warehouse can cost as little as $6/hour, and while that can add up to a tidy sum over the course of a month, you can pause the service when you aren’t using it and only pay for the storage. This allows you to do quick proof of concept work and, more importantly, compare solutions to see which one best meets your needs. It also allows a smaller organization to get into the MPP game without a major investment.
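Pausing and resuming is scriptable, too; with the AzureRM PowerShell module it looks roughly like this (the resource group, server, and database names are placeholders):

# Pause the SQL Data Warehouse (compute stops billing; you keep paying for storage).
Suspend-AzureRmSqlDatabase -ResourceGroupName 'my-rg' -ServerName 'my-sqldw-server' -DatabaseName 'my-sqldw'

# Resume it when you need to run your workload again.
Resume-AzureRmSqlDatabase -ResourceGroupName 'my-rg' -ServerName 'my-sqldw-server' -DatabaseName 'my-sqldw'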

Secondary and Tertiary Data Centers

Many organizations have two data centers; I’ve only worked for one that had a third. You may ask why this is important. A common question I get when teaching Always On Availability Groups is: if we are split across multiple sites, where do we put the quorum file share? The correct answer is that it should be in a third, independent data center (which virtually no organizations have). However, Windows Server 2016 offers a great solution for mere pennies a month—a cloud witness, a “disk” stored in Azure Blob Storage. If you aren’t on Windows Server 2016, it may be possible to implement a similar design using Azure File Storage, but it is not natively supported. Additionally, cloud computing greatly simplifies the process of having multiple data centers. There are no worries about having staff in two locales, or getting network connectivity between the two sites; that’s all done by your cloud vendor. Just build stuff, and make it reliable. And freaking test your DR. (That doesn’t change in the cloud.)
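For what it’s worth, configuring a cloud witness is a one-liner run on a cluster node; the storage account name and key below are placeholders.

# Windows Server 2016+: point the cluster quorum witness at an Azure storage account.
Set-ClusterQuorum -CloudWitness -AccountName 'mystorageaccount' -AccessKey '<storage-account-access-key>'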

Multi-Factor Authentication

If you aren’t using multi-factor authentication for just about everything, you are doing it wrong. (This was a tell that Joey wrote this post and not the Russian mafia.) Anyway, security is more important than ever (hackers use the cloud too), and multi-factor authentication offers additional levels of security that go far beyond passwords. MFA is not a new thing, and I have a collection of dead SecurID tokens dating back to the 90s to prove it. However, implementing MFA used to require you to buy an appliance (later it was some software), possibly have ADFS, and do a lot of complicated configuration work. Now? It’s simply a setting in the Office 365 portal (if you are using AAD authentication; if you are an MS shop, learn AAD, it’s a good thing). While I complain about the portal, and how buried this setting is, it’s still far easier and cheaper (a few dollars a month) than buying stuff and configuring ADFS.

These are but a few examples of how cloud computing makes things that used to be available only to the largest enterprises available to organizations of any size. Cloud is cool and makes things better.
