You’re Speaking…and You Don’t Have Slides

I had this dream the other week. I was in the big room at PASS Summit, sitting in the audience. I was relaxed, as I thought I was presenting later in the day, when I quickly realized, due to the lack of a speaker on the stage, that I was the next speaker, and the room was full. I was playing with my laptop, and I didn't have a slide deck. In my dream, this talk was a 300-level session on troubleshooting SQL Server, something I feel like I could do pretty easily, you know, with slides. Or a whiteboard.

I woke up before I started speaking, so I'm not sure how I would have handled it. Interpretive dance? I'm a pretty bad dancer. One thing I will mention, and I saw my friend Allan Hirt (b|t) have to do this last month in Boston: really good (and really well rehearsed) speakers can deliver a very good talk without their slides. Slides can be a crutch; one of the common refrains in Speaker Idol judging is don't read your slides. It is bad form. Do I sometimes read my slides? Yeah, everyone does occasionally. But when you want to deliver a solid technical message, the best way to do that is by telling stories.

I'm doing a talk next month in Belgium (April 10, in Gent), right before SQL Bits. It's going to be about what not to do in DR. My slide deck is mostly going to be pictures, and I'm going to tell stories: stories from throughout my career, and some stories from friends. It's going to be fun, and names will be changed to protect the guilty.

So my question and guidance for you, dear readers, is to think about what you would do if the projector failed and you did not have a whiteboard. I can think of a number of talks I can do without a whiteboard; in India last year, another instructor and I demonstrated Azure networking by using our bodies as props. What would you do in this situation?

Into the Blue—Building DR in Windows Azure

Tomorrow (at 1300 EDT/1800 GMT), I'll be presenting to the PASS High Availability and Disaster Recovery VC on one of my favorite topics of late: building disaster recovery from your data center into Windows Azure Infrastructure as a Service. You can register for the session here. While I'm not screaming "cloud, cloud, cloud" from the rooftops, I do feel like DR is a really great point of entry to the cloud for many smaller organizations, which may lack the resources to have a second data center, or perhaps only need their applications to be highly available during certain parts of the year. Additionally, the ability to back up SQL Server databases directly to Azure Blob Storage can also meet your offsite backup requirements with very little headache.
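For the curious, backup to Azure Blob Storage looks roughly like this in T-SQL (available starting with SQL Server 2012 SP1 CU2; the account name, container, and key below are placeholders, a sketch only):

```sql
-- Store the storage account name and access key in a credential
-- (all names here are hypothetical).
CREATE CREDENTIAL AzureBackupCredential
    WITH IDENTITY = 'mystorageaccount',       -- storage account name
    SECRET = '<storage-account-access-key>';  -- access key placeholder

-- Back the database up directly to a blob in that account.
BACKUP DATABASE AdventureWorks2012
    TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/AdventureWorks2012.bak'
    WITH CREDENTIAL = 'AzureBackupCredential',
         COMPRESSION,   -- shrink the bits going over the wire
         STATS = 10;    -- progress reporting every 10%
```

Note that the backup travels over your internet link, so compression matters a lot more here than it does for local backups.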

The cloud isn't all unicorns and rainbows, however; there are some definite challenges in getting these solutions to work properly. I'll discuss those challenges, along with the following, in this session:

  • Choosing the right DR solution
  • Networking and the cloud
  • Economics of cloud DR
  • Implementation concerns for AlwaysOn Availability Groups

I hope you can join me on Tuesday.

Stupid Interview Questions—Trying to Find the Right Employees

I was driving into the office this morning, and as I do whenever I'm in the car, I was alternating between NPR and Mike and Mike on ESPN Radio (professional sports are an interest of mine). Anyway, Mike and Mike were discussing a recent story about the Cleveland Browns asking their potential draft picks questions such as "Name all of the things you could do with a single brick in one minute." I'm not sure about you, but I don't see how that translates into hiring the most effective person to snap a football and then block a raging defensive lineman, but hey, I don't work in football.


Bad IT Interviews

I do, however, work in IT, and I've interviewed for a lot of roles, as well as interviewed a lot of people for roles. Fortunately, for my current role I was hired through people I had worked with in the past, and there was barely a formal interview process. Even in my previous role, for the internet provider many of you are likely reading this post on, the interview process was straightforward and consisted mostly of conversations about technology and style of design. I actually have to think back many moons to one particularly bad interview, with a director who thought he was the be-all and end-all of IT management. Some of the questions were:

  • How many gas/petrol stations are there in the United States?
  • Why is a manhole cover round?
  • How many pieces of bubble gum are in this jar? (Ok, I made this one up, but you get the idea)

To this list I would like to add the following questions, which I hope are destined for the waste bin of history:

  • What is your biggest weakness?
  • Where do you see yourself in five years?
  • How did you hear about this position?

None of the above questions really helps me (as the hiring manager) determine whether a person is qualified for a role as a data professional (or, frankly, any other job). They are filler, and any halfway prepared candidate is going to have canned answers for them.

Building the Better Interview

I've been trained in a large number of interview techniques, between business school and corporate America. There is no magic set of interview questions; however, my favorite way to interview a candidate is to get them talking about a technology they've worked on and are passionate about. This serves two purposes: it lets me see whether they are really into technology or it's just a job to them, and it lets me pretty quickly gauge their technical level with follow-on questions. Conversations are much better than trivia, e.g., "What is the third line item when you right-click on a database in SQL Server Management Studio?"

One other thing I'll add: make sure you have qualified people doing the interview. If you are trying to hire a DBA and you don't have any on staff, consider bringing in a consultant to conduct the interviews; it's a small investment that could save you a lot of money down the road.

So what stupid interview questions have you heard? Answer in the comments.

Upcoming Presentations

In preparation for the upcoming PASS Summit, I will be presenting each of the sessions I'll be doing at the Summit to local user groups this week. The first will be Wednesday at the Philadelphia SQL Server Users Group, where I will be presenting my session on Hybrid Availability Groups.

Into the Blue: Extending AlwaysOn Availability Groups

For many organizations, having a second data center or co-location center doesn't make sense, financially or logistically. Typically, this would limit options for building out a disaster recovery (DR) solution. However, with Windows Azure virtual machines and SQL Server AlwaysOn Availability Groups, you can now connect your on-premises solution to a real-time secondary replica, providing read scalability and a solid DR solution.

This session will demonstrate how to extend an Availability Group into Windows Azure, discussing the pros and cons as well as the cost of the solution. You will walk away with a solid understanding of AlwaysOn functionality within Windows Azure VMs, the costs and benefits of building a DR solution within Windows Azure, and how Azure-based backup and recovery can work.
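The core of the technique the session covers is adding the Azure VM as another replica of an existing Availability Group. A minimal sketch, assuming the Azure VM has already been joined to the on-premises Windows failover cluster over a site-to-site VPN (all server and domain names below are hypothetical):

```sql
-- Run on the primary replica. The Azure replica is asynchronous because
-- commit latency over the WAN link would otherwise hurt the primary.
ALTER AVAILABILITY GROUP [MyAG]
    ADD REPLICA ON 'AZUREVM1'
    WITH (
        ENDPOINT_URL       = 'TCP://azurevm1.mydomain.com:5022',
        AVAILABILITY_MODE  = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE      = MANUAL  -- failover to DR should be a deliberate act
    );
```

Asynchronous commit means a failover to the Azure replica can lose in-flight transactions, which is exactly the RPO conversation the session's cost/benefit discussion is about.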

I will also be presenting this session at the PASS Summit in Room 217 D from 2:45-4:00 PM (1445-1600) on Friday, 18 October.

On Friday, I will be presenting to the Albuquerque (NM) SQL Server Users Group on my other PASS Summit topic:

Accelerate Database Performance Through Data Compression

Much like the cars of the 1970s sacrificed gas mileage for better performance, database technology has also made its share of sacrifices for efficiency. Fortunately, times have changed significantly since then. Just as adding a turbocharger to a car delivers more power while saving fuel, the addition of compression to a database accelerates read performance while saving disk space.

Come learn how, why, and when compression is the solution to your database performance problems. This session will discuss the basics of how compression and deduplication reduce your data volume. We’ll review the three different types of compression in SQL Server 2012, including the overhead and benefits of each and the situations for which each is appropriate, and examine the special type of compression used for ColumnStore indexes to help your data warehouse queries fly. As with turbo, data compression also has drawbacks, which we’ll cover as well.
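As a taste of the "when" question, SQL Server ships a procedure that estimates the savings before you commit to anything. A sketch (the table name is hypothetical):

```sql
-- Estimate how much space page compression would save on a fact table.
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'FactSales',
    @index_id         = NULL,   -- all indexes
    @partition_number = NULL,   -- all partitions
    @data_compression = 'PAGE';

-- If the estimate looks good, rebuild the table with page compression.
ALTER TABLE dbo.FactSales REBUILD WITH (DATA_COMPRESSION = PAGE);
```

The trade-off, as with the turbocharger, is CPU: compressed pages cost extra cycles on writes, which is why the estimate-first approach beats compressing everything blindly.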

I'll be doing this session at the PASS Summit in Room 203A at 3:00-4:15 PM (1500-1615 for my non-US readers) on Wednesday, 16 October.

I hope to see many of you either at these presentations or at the Summit!

T-SQL Tuesday #44 Second Chances

Bradley Ball (b|t), who also happens to be my moderator for the upcoming 24 Hours of PASS, is the leader of this month's T-SQL Tuesday, which is all about second chances. When I think about things I've screwed up in my career, and there are many, I always fall back to one. It doesn't even involve SQL Server, but another RDBMS, and it offers a couple of lessons on how to be a good DBA.

It was late on a Friday afternoon, 1522 to be exact (here is lesson #1: never do anything involving production on a Friday afternoon unless you have to), and I got a call from a user asking me to refresh the QA database with production data. This system was an environmental monitoring system that monitored the atmosphere and surfaces for the biopharmaceutical manufacturing plant I worked in; it wasn't exactly mission critical, but it was still pretty important. Since the user was a friend of mine, I jumped right on it. The database I was working on, as opposed to what I'd probably do in SQL Server (which would be to restore a backup), has a very easy-to-use import/export feature that allows for easy logical restoration of specific objects, schemas, etc. So this was my standard methodology for doing a refresh with prod data, but I had yet to script it (lesson #2: be lazy; if you have to do something more than once, script and automate it).

Anyway, I went ahead and took my export of production, and started to import it back into the QA environment. Typically, my process would be to log into QA, drop the user that owned the objects, and then run the import. For whatever reason (and in this case it was probably a good thing), I didn't do that. I started my import and noticed I was getting some errors. Once again doing something I'd ordinarily never do, I cancelled the job and reran it with error suppression on (lesson #3: always read the errors, and never turn error suppression on). The job completed without error, I emailed the user back telling him that it was complete, and I went on with my Friday afternoon. About five minutes later, I got a phone call from the same user.

There's this classic moment that happens in IT (and probably happens in each of these blog posts) that I like to call "the bead of sweat moment": it's that moment you realize you #$%ed up badly, but no one else has quite figured out that you are responsible yet. I asked the user to get everyone out of the system. In ordinary companies an error and outage like this would not necessarily be a big deal, but this was a control system in a pharmaceutical plant, so it was heavily regulated. What had happened was that I had (accidentally) done an import from prod onto itself; those errors I suppressed were duplicate record errors.

So now I had a production database filled with duplicate data. I went and told my boss: I screwed up, and we're going to be down for about 10-15 minutes. Fortunately, I knew from the log of the job the exact time the records were inserted. Also, the database was in the equivalent of the full recovery model, so I was able to do a point-in-time restore to the second before I started the import job. This leads us to lesson #4: always have a backup, and it's even better if it's near to the system (in this case it was on local disk, before it was zipped off to tape).
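The incident above was on another RDBMS, but the same rescue in SQL Server terms would be a STOPAT restore (a sketch only; database name, file paths, and the timestamp are all hypothetical, and the database must be in the full recovery model):

```sql
-- Restore the last full backup, leaving the database ready for log restores.
RESTORE DATABASE EnvMonitoring
    FROM DISK = 'D:\Backups\EnvMonitoring_Full.bak'
    WITH NORECOVERY, REPLACE;

-- Roll the log forward only to the second before the bad import started.
RESTORE LOG EnvMonitoring
    FROM DISK = 'D:\Backups\EnvMonitoring_Log.trn'
    WITH STOPAT = '2013-08-09 15:21:59',
         RECOVERY;
```

This only works if the log chain is intact, which is one more reason to regularly test your restores and not just your backups.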

The lessons I learned were several, but the biggest is that if you have good backups (and regularly test your restoration process), you are protected from a lot of evils.

PASS Summit 2013—I’m Speaking

Last week I received a really exciting email—I’ll be speaking at this year’s PASS Summit in Charlotte, NC. I had the good fortune of being selected to present on two topics. Without further ado, here’s what I’ll be presenting:

Accelerate Database Performance through Data Compression

I feel like Data Compression is one of the most underutilized features of SQL Server Enterprise Edition. In this session, we will build an understanding of its costs and benefits, and the best use cases for it in your environment. Additionally, we will discuss the new ColumnStore index feature in SQL Server 2012. There will be a lot of demos, and hopefully I won’t have to go to my DR laptop like I did in Atlanta.

Into the Blue—Extending Always On Availability Groups

I know a lot of us (my friends in the SQL community) bash the cloud, and I completely agree that "cloud," "big data," and "agile" are the three buzzwords that tend to make me want to punch a wall. However, DR solutions provide an interesting use of off-premises resources; a lot of smaller companies may have demanding RPO/RTO requirements but not the in-house capacity to support them (multiple geo-separated data centers). In this session we will show how to use Windows Azure Virtual Machines to extend your AlwaysOn Availability Group into Azure.

I hope to see many of you in Charlotte.

PASS Business Analytics Conference Day 1 Keynote

We start with PASS President Bill Graziano, who leads off by talking about the growth of data analytics over the past few years. Connect. Share. Learn. If you are reading this and don't know, SQL Saturday Chicago is this weekend. I'll be presenting there.

Matt Wolken from Dell/Quest takes the stage to discuss business intelligence. He talks about the rate of change in the BI space, how the shifting demographics of social media are influencing spending, and the great increases in mobile computing. He mentions the tweet from the Hudson River. Analytics used to be backwards looking; now it's forward looking and revenue driving. Companies that implement BI and BA solutions tend to be more profitable (13%). Dell has a new social command center to monitor feedback and support.

Now we welcome Microsoft keynote speakers Amir Netz and Kamal Hathi to the stage. Amir is talking about his Apple II and how he programmed extensively on it; he mentions using VisiCalc and Lotus 1-2-3, and talks about how spreadsheets evolved into OLAP. Kamal is now talking about how OLAP evolved into Hadoop, and how business users can understand the tools in the big data space. Yet it all comes back to Excel; I suspect this will come back to PowerView and Data Explorer. I shall refrain from editorial content, but ZOOMIT. Excel is getting much better at dealing with external data. He compares the evolution of cars from the Model T to a BMW. Amir is talking about how BI is now at the stage of the early slide projector. PowerView is awesome.

Microsoft is doing a nice job of using sentiment analysis from Twitter in this demo, to demonstrate American Idol success predictions. The data shows that positive sentiment is key on American Idol. They explain how Twitter has changed the dynamics of how people watch TV. This means if you are showing something that should be live, SHOW IT LIVE (I'm talking about bike and F1 racing, NBC Sports Network).

Seeing a new Excel plugin called GeoFlow, which lays out spatial data within Excel. This functionality has been getting better, but this is really nice. GeoFlow has the ability to zoom in via touch, and additionally it has replay functionality. Outstanding graphics and functionality.

And that’s a wrap…more tomorrow

PASS Business Analytics Conference—Why Am I Presenting There?

The PASS Business Analytics Conference is a new concept for PASS. We've seen Business Intelligence (BI) user groups and even SQL Saturdays dedicated to this subset of PASS, but a whole conference? What is driving this demand? I can't explain the whole industry, but I can at least provide some perspective from what I see out my window.

I don't intend to start a debate between relational databases and NoSQL datastores; that's a religious war I have no intention of jumping into. I'm also not going to abuse the term "big data," or "data" in combination with some body of water (data pond, data lake, data ocean, etc.; seriously, who comes up with this stuff?). What I will talk about is how a relational database isn't always the right answer for every data set, and how relational databases from major vendors (especially with enough cores to do serious analytic workloads) are REALLY EXPENSIVE. So, especially since a lot of my expertise is in infrastructure-based solutions, how did I end up presenting at BaCON?

My organization sees the changing landscape of data; we generate and save TONS of it. We're not always choosing the best path for our architecture. So, given that I'm on the architecture team, I started investigating some alternative solutions like Hadoop and Hive for less structured, non-transactional data. To make it easier to learn this stuff, it helped to have a use case I could take from start to finish. I'm not by any means an expert in data analysis, but I am fortunate to be presenting with a great friend who is: Stacia Misner (b|t). So what are we going to talk about at BaCON?

Our data set represents about a week's worth of set-top box data from the largest cable provider in the US. We are going to discuss our data source, and how we used Hadoop and then Hive to perform multiple types of analysis on the data in an extremely nimble fashion. From there, using PowerView and some other tools, we see the impacts of various events on metrics such as viewer engagement and channel preferences.

For those of you who are SQL Server and/or Oracle professionals, this is a brave new world, but think of it like learning a new version of something. You are building on an existing skill set; you already do tons of data analysis in your job. This is just another step in the process, and it will be part of the skill set of the 21st-century data professional.

Upcoming Presentations

Just wanted to use this space to plug a couple of presentations I’ll be doing this week.

SQL 2012 All About HA and DR

Central Pennsylvania SQL Server Users Group

Tuesday February 12

In this session I’ll be talking about all of the HA and DR options that are available in SQL Server 2012. We’ll cover the pros and cons of each choice, and talk about some external solutions, such as SAN replication and virtualization.

New Features in Windows Server 2012 Failover Clustering

Philadelphia SQL Server Users Group

Wednesday February 13

Windows Server 2012 is here, and Failover Clustering has some nice improvements. I’ll talk about some of the features that are most useful to DBAs.

Lastly, I will be presenting at the PASS Business Analytics Conference in Chicago in April; more to come on that later this week.

SQL Saturday #200 – Philadelphia 2013

It is with great pride that I announce the 200th SQL Saturday today. It's with even more exuberance that I announce that it will be my user group's event in Philadelphia (well, Malvern, to be specific), on June 1st, 2013, at Microsoft, where we had the event last year.

The webpage is here.

I had a great time running last year's event and was really happy with how it went, and I look forward to putting on another great one. We will do something special, since this is #200 and we are in the bicentennial city.
