How the Cloud Democratizes Solutions

The grey hairs I see every now and again remind me of how long I’ve been working in IT. I’m now in my 22nd year of having a job (2 years as an intern, but I did have the ‘sys’ password), and I’ve seen a lot of things come and go. When I started working, servers cost at least tens of thousands of dollars, were the size of refrigerators, and had less processing power than the MacBook Pro I’m typing this post on. More importantly, they came with service contracts that cost a rather large amount of cash per year. What this bought you, I’m not sure, but your hardware reps surely took you out for at least a steak dinner or two. Eventually, we moved to commodity hardware (those boxes from HP and Dell that cost a tenth of what you paid 20 years ago), and service contracts dropped to a few hundred dollars per server per year. (And then those sales reps started selling expensive storage.)

Most of my career has been spent working in large Fortune 500 enterprises, and I often think about technologies that at the time were available only to organizations of that size and budget, and are now available for a few clicks and a few dollars per hour. I’m going to focus on three specific technologies that I think are cool and interesting, though as I write this, I’ve already thought of three or four more.


Massively Parallel Processing

MPP is a data warehouse design pattern that allows for massive scale-out solutions that can quickly process very large amounts of data. In the past, it required buying an expensive appliance from Teradata, Oracle, Netezza, or Microsoft. I’m going to focus on Microsoft here, but there are several other cloud options for this model, like Amazon Redshift or Snowflake, amongst others. In terms of the on-premises investment you had to make just to get your foot in the door with one of these solutions, you were looking at $250,000 USD at minimum, and that is a fairly conservative estimate. In a cloud world? SQL Data Warehouse can cost as little as $6/hour, and while that can add up to a tidy sum over the course of a month, you can pause the service when you aren’t using it and pay only for the storage. This allows you to do quick proof-of-concept work and, more importantly, compare solutions to see which one best meets your needs. It also allows a smaller organization to get into the MPP game without a major investment.
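To make the pause-and-resume idea concrete, here is a minimal sketch using the AzureRM PowerShell cmdlets of this era; the resource group, server, and database names are placeholders for your own environment.

```powershell
# A minimal sketch: pause an Azure SQL Data Warehouse when you're done with it,
# so you pay only for storage. All names below are placeholders.
Login-AzureRmAccount

# Suspend the warehouse -- compute billing stops, storage billing continues
Suspend-AzureRmSqlDatabase -ResourceGroupName "MyResourceGroup" `
    -ServerName "mysqldwserver" -DatabaseName "MyDataWarehouse"

# Resume it when you're ready to run your proof of concept again
Resume-AzureRmSqlDatabase -ResourceGroupName "MyResourceGroup" `
    -ServerName "mysqldwserver" -DatabaseName "MyDataWarehouse"
```

Wrap the suspend call in a scheduled job at the end of the workday, and a month of proof-of-concept testing costs a fraction of the always-on price.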

Secondary and Tertiary Data Centers

Many organizations have two data centers. I’ve only worked for one that had a third. You may ask why this is important. A common question I get when teaching Always On Availability Groups is: if we are split across multiple sites, where do we put the quorum file share? The correct answer is that it should be in a third, independent data center (which virtually no organizations have). However, Windows Server 2016 offers a great solution for mere pennies a month: a cloud witness, a “disk” stored in Azure Blob Storage. If you aren’t on Windows Server 2016, it may be possible to implement a similar design using Azure File Storage, but it is not natively supported. Additionally, cloud computing greatly simplifies the process of having multiple data centers. There are no worries about having staff in two locales, or getting network connectivity between the two sites; that’s all done by your cloud vendor. Just build stuff, and make it reliable. And freaking test your DR. (That doesn’t change in the cloud.)
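Configuring the cloud witness is a one-liner once you have a storage account. A minimal sketch, run from a cluster node on Windows Server 2016; the storage account name and access key are placeholders.

```powershell
# A minimal sketch: point a Windows Server 2016 failover cluster's quorum
# witness at Azure Blob Storage. Account name and key are placeholders.
Import-Module FailoverClusters

Set-ClusterQuorum -CloudWitness `
    -AccountName "mystorageaccount" `
    -AccessKey "<storage-account-access-key>"
```

That storage account becomes your third, independent "data center" for quorum purposes, for pennies a month.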

Multi-Factor Authentication

If you aren’t using multi-factor authentication for just about everything, you are doing it wrong. (This was a tell that Joey wrote this post and not the Russian mafia.) Anyway, security is more important than ever (hackers use the cloud too), and multi-factor authentication offers additional levels of security that go far beyond passwords. MFA is not a new thing; I have a collection of dead SecurID tokens dating back to the 90s that tell me that. However, implementing MFA used to require buying an appliance (later it was some software), possibly having ADFS, and a lot of complicated configuration work. Now? It’s simply a setting in the Office 365 portal (if you are using AAD authentication; if you are an MS shop, learn AAD, it’s a good thing). While I complain about the portal, and how buried this setting is, it’s still far easier and cheaper (a few dollars a month) than buying stuff and configuring ADFS.
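If clicking through the portal annoys you as much as it does me, the same setting can be flipped from PowerShell. A minimal sketch, assuming the classic MSOnline module; the user principal name is a placeholder.

```powershell
# A minimal sketch: enable MFA for a user via the MSOnline module instead of
# hunting for the buried portal setting. The UPN below is a placeholder.
Import-Module MSOnline
Connect-MsolService

# Build the MFA requirement object and apply it to the user
$mfa = New-Object -TypeName Microsoft.Online.Administration.StrongAuthenticationRequirement
$mfa.RelyingParty = "*"
$mfa.State = "Enabled"

Set-MsolUser -UserPrincipalName "user@contoso.com" `
    -StrongAuthenticationRequirements @($mfa)
```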

These are but a few examples of how cloud computing makes capabilities that used to be available only to the largest enterprises available to organizations of any size. Cloud is cool, and it makes things better.

Always On Availability Groups transport has detected a missing log block…

If you are running SQL Server 2016 (especially before CU3), you have probably received this error:

DATE/TIME:    8/21/2017 11:24:53 AM

DESCRIPTION:    Always On Availability Groups transport has detected a missing log block for availability database "Database". LSN of last applied log block is (99939:95847:0). Log scan will be restarted to fix the issue. This is an informational message only. No user action is required.

COMMENT:    (None)

JOB RUN:    (None)


I’ve talked with the product team about it, and it’s just something that happens; it’s more of an informational message. Based on some discussions I’ve had on #sqlhelp on Twitter, it may be related to not having enough network bandwidth between nodes. But if you see these sporadically, relax, it’s nothing major.
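If you want to sanity-check the network-bandwidth angle, here is a minimal sketch (run against the primary via Invoke-Sqlcmd; the instance name is a placeholder) that looks at the replication queues. A consistently large or growing log send queue suggests the secondaries aren’t keeping up.

```powershell
# A minimal sketch: check whether availability replicas are keeping up.
# Large or growing log send queues can point at network bandwidth problems.
Invoke-Sqlcmd -ServerInstance "MyPrimaryReplica" -Query @"
SELECT ar.replica_server_name,
       DB_NAME(drs.database_id)       AS database_name,
       drs.synchronization_state_desc,
       drs.log_send_queue_size,        -- KB waiting to be sent to the secondary
       drs.log_send_rate,              -- KB/sec currently being sent
       drs.redo_queue_size             -- KB waiting to be redone on the secondary
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar
    ON drs.replica_id = ar.replica_id;
"@
```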

Would You Fly a Plane with One Engine? Or Run Your Airline with One Data Center(re)?

For those of you who may have been in the US or otherwise outside of Europe this past weekend, you may not have heard about the major British Airways IT outage that took down their entire operations for most of Saturday and into Sunday. Rumors, which were later confirmed, were that a switch from primary to backup power at their primary data centre (they’re a UK company, so I’ll spell it in the Queen’s English) led to a complete operations failure. I have a bit of inside information, since my darling wife was stuck inside Terminal 5 at Heathrow.


There’s a requirement for planes that travel across oceans called ETOPS, which stands for Extended Range Operation with Two-Engine Airplanes, but which in parlance is known as Engines Turn or Passengers Swim. These rules ensure that if a plane has a problem over a body of water, it can make it back to shore for a safe landing. As someone who flies across oceans a decent amount, I am very happy the regulatory bodies have these rules in place.

However, there are no such rules for the data centers that run airline operations. In fact, in January, Delta Air Lines had a major failure that took down most of its operations for a couple of days. Most IT experts surmised that Delta was running a single data center for its operations. Based on the evidence from Saturday’s incident with BA, I have to assume that they were as well. One key bit of evidence was that BA employees were unable to access email. BA is an Office 365 customer, so theoretically, even if on-premises systems were down, email should have worked. However, if they were using Active Directory Federation Services, so that all of their passwords stayed on-premises, then the data center being down would mean they couldn’t authenticate, and therefore would not have email.

That was my biggest clue that BA was running with a single data center: email didn’t work. While some systems, particularly the mainframe systems that may handle flight operations, have a tendency to not fail over well across sites, Active Directory is one of the best distributed systems there is, and it is extremely resilient to failures. In fact, given BA’s global business, I’m really surprised they didn’t have ADFS servers in locations around the world.

Enter the Cloud

Denny and I sat down yesterday and ran some numbers on what we thought a second data center would cost a company like BA. Our rough estimate (and this is very rough) was around $30-40 million USD. While that is a ton of money, it is estimated that the weekend’s mess may cost BA up to £150 million (~$192 million USD). However, companies no longer have to build multiple data centers in order to have redundancy, as Microsoft (and Amazon, and Google) have data centers throughout the world. The cloud gives you the flexibility to protect critical systems at a much lower cost. I’ve designed DR strategies for small firms that cost under $100/month, and I’ve had real-time failover that supported 99.99% uptime. With the resources of a firm like BA, this should be a no-brainer given the risk profile.
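For a sense of what the cheap end of that spectrum looks like, here is a minimal sketch using SQL Server 2016’s backup-to-URL feature with a SAS-based credential; the storage account, container, SAS token, and database names are all placeholders.

```powershell
# A minimal sketch: ship backups offsite to Azure Blob Storage using SQL Server
# 2016 backup to URL. Storage account, container, and SAS token are placeholders.
Invoke-Sqlcmd -ServerInstance "MyProdInstance" -Query @"
-- With a SAS token, the credential name must match the container URL
CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/backups]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
    SECRET = '<sas-token-without-leading-question-mark>';

BACKUP DATABASE [MyDatabase]
    TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/MyDatabase.bak'
    WITH COMPRESSION, CHECKSUM;
"@
```

Pair that with geo-redundant storage on the account, and your backups already live in two regions before you’ve spent your first $100.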

What About Outsourcing?

Much has been made of the fact that BA has outsourced many of its IT functions to TCS and various other providers. Some have even tried to place the blame for this outage on those providers. Frankly, I don’t have enough detail to blame anyone, and it looks more like a data center operator’s issue. However, I do think it speaks to the lack of attention and resources paid to technology at a company that clearly depends on it heavily. Computers and data are more important to business now than ever, and if your firm doesn’t value that, you are going to have problems down the road.

Conclusions

In the cloud era, I’m convinced that no business, no matter how big or small, should run with a single data center. It is far too cheap and easy to ship your backups to multiple sites and be back online in a matter of hours with a cloud provider. Given the importance of airlines to the world economy, and how consolidated the industry has become, it probably wouldn’t be a terrible idea for their regulators to require failover and failover testing. Don’t let this happen to your stock price.

