PASS Board Elections 2013

First of all, before I make any recommendations, please vote. You should have received your ballot at your PASS-registered email address by now, and it contains a link to vote in the 2013 board elections. Now on to my purpose for posting: my recommendations.

I’m voting for Allen Kinsel (b|t) and Tim Ford for the open and US seats on the board. I’ve worked with Allen extensively in his prior roles on the board, and he was an excellent asset to PASS, particularly to chapter leadership. Tim has been a good friend, and we’ve had the opportunity to work together on several occasions. He will serve the organization well if elected.

Building a Private Cloud Part 1 of 5

I decided to start writing this series of blog posts months ago, but I never really got around to it until Andy Galbraith (b|t) brought it up again last week on Twitter. The end of this story is how, at my last job, I was part of a team that rolled out a comprehensive private cloud solution. It wasn’t as polished as Amazon’s or Microsoft’s cloud offerings, but it worked. Users could connect to a portal, select the size and type of VM they wanted (Windows, Linux, SQL Server, Oracle, or MySQL), and based on those inputs, usually within an hour or so, have a VM that was functional, accessible, and in the proper network zone. Before I go into all the details, I’d like to take a step back and properly define what cloud computing is. Our friends at the National Institute of Standards and Technology (NIST) have done just that in this document:

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.


I’m going to borrow from NIST again; they list these five essential characteristics:

  • On-demand self-service
  • Broad network access
  • Resource pooling
  • Rapid elasticity
  • Measured service

We managed to deliver all of these in our project. The measured-service piece was a bit of a challenge, since the company didn’t have a chargeback model, but we still kept track of which group requested what.
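To make the measured-service point concrete: even without a chargeback model, "measured" can simply mean recording who requested what so usage can be reported later. Here is a minimal sketch in Python; the class and field names are my own illustration, not anything from the actual project.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class VmRequest:
    group: str    # requesting business group (hypothetical field)
    vm_type: str  # e.g. "SQL Server", "Linux", "Oracle"
    size: str     # e.g. "small", "medium", "large"


class UsageLedger:
    """Records VM requests per group so usage can be reported
    even when no formal chargeback model exists."""

    def __init__(self):
        self._requests = defaultdict(list)

    def record(self, req: VmRequest):
        self._requests[req.group].append(req)

    def usage_by_group(self):
        # Summarize how many VMs each group has requested.
        return {group: len(reqs) for group, reqs in self._requests.items()}


ledger = UsageLedger()
ledger.record(VmRequest("finance", "SQL Server", "medium"))
ledger.record(VmRequest("finance", "Linux", "small"))
ledger.record(VmRequest("marketing", "Windows", "small"))
print(ledger.usage_by_group())  # {'finance': 2, 'marketing': 1}
```

Tracking at this level is enough to answer "who is using the environment" without billing anyone for it, which is exactly the gap we had to bridge.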

Getting Started

In reality, there are several things you need to do before you can even think about a project like this. It’s not something I’d recommend to IT organizations that are not currently operating at a high level of maturity.

  • Robust Virtualization Environment—our early attempts at virtualization were dramatically undersized and didn’t scale well. Before we embarked on this project, we rolled out best-in-class VM hardware along with enterprise-class storage designed around virtualization. If it doesn’t perform well, clients won’t use it.
  • Consistent Installation Processes—if you don’t have scripted processes for installing SQL Server, or images of standard Windows and Linux builds, start on those before you think about a private cloud project. We were fortunate in this respect, as we were a large shop that had to deploy at scale.
  • Good Scripting—before starting at my previous employer, I pretty much thought PowerShell was an optional skill. Managing 20 or so SQL Servers was never so hard that I couldn’t do it manually, but when I was responsible for 750, there was no other option. I wrote a ton of PowerShell (with great guidance from someone I consider an expert) and a lot of T-SQL to fully automate the SQL Server installation process and, more importantly, the post-installation process, which set up all of our standards and jobs.
  • Consulting Help—you probably don’t have the in-house skills to build your own service layer or configure System Center Orchestrator to automate the VM process, so you’re going to need help. Choose wisely.
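The "consistent installation" and "good scripting" points above can be sketched as a two-stage pipeline: an unattended install, then a post-install step that applies shop standards and jobs. The real automation was PowerShell and T-SQL; the Python below is only an illustrative stand-in, and every step name and setting in it is a hypothetical example, not the actual configuration.

```python
# Illustrative sketch of a scripted install + post-install pipeline.
# The actual automation described in the post used PowerShell and T-SQL;
# these step names and settings are hypothetical stand-ins.

STANDARD_CONFIG = {
    "max_server_memory_mb": 28672,   # example standard, not the real value
    "backup_job": "nightly-full",
    "maintenance_job": "weekly-checkdb",
}


def install_sql_server(host: str) -> list:
    """Unattended install step; in practice, a scripted installer run."""
    return [f"{host}: installed SQL Server"]


def post_install(host: str, config: dict) -> list:
    """Apply shop standards: instance settings and standard jobs."""
    return [f"{host}: set {key}={value}" for key, value in config.items()]


def provision(host: str) -> list:
    # The key idea: the post-install step is part of the same automated
    # pipeline, so every server comes out configured identically.
    log = install_sql_server(host)
    log += post_install(host, STANDARD_CONFIG)
    return log


for line in provision("vm-sql-001"):
    print(line)
```

The design point is that standardization lives in data (the config) rather than in someone’s memory, which is what makes 750 servers manageable where 20 were done by hand.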

Next Steps

In my next post in this series, I’ll talk about the function and importance of the services layer to building your private cloud.


The Cloud Just Got Easier

Yesterday Microsoft and AT&T made what I consider to be a major announcement for the future of cloud computing.

One of the trickiest parts of cloud computing is configuring the network connection to the cloud provider. Microsoft has made great strides in recent months toward making the Azure network feel like part of your internal network. This has enabled technologies like Hybrid AlwaysOn Availability Groups (which I’ll be presenting on this weekend at SQL Saturday San Diego, and again at the PASS Summit), where you have one node of your availability group in your data center and another node in Windows Azure. With all that said, it still wasn’t easy to get the VPN working correctly (I’m not exactly a network guru), and a VPN connection typically depends on the public internet, which means latency can vary widely.

So with this partnership with AT&T (an American company, but one that provides enterprise telecom worldwide), your data goes from your network to an AT&T routing center and then directly to Microsoft, without ever touching the public internet. More importantly, in addition to being fast and easier to set up, the latency will be consistent, so you can tune your applications accordingly.

One more hurdle to cloud adoption just got a lot lower.

Resources from SQL Licensing Presentation

The slides from today’s presentation are available here:


The Microsoft SQL Server licensing guide is here.

The Microsoft licensing guide for virtual environments is here.

The Microsoft volume licensing guide is here.

Glenn Berry’s blog posts on CPU performance are here.



