Stupid Interview Questions—Trying to Find the Right Employees

I was driving into the office this morning, and as I do whenever I’m in the car, I was alternating between NPR and Mike and Mike on ESPN Radio (professional sports are an interest of mine). Anyway, Mike and Mike were discussing a recent story about the Cleveland Browns asking their potential draft picks questions such as “Name all of the things you could do with a single brick in one minute.” I’m not sure how that translates into hiring the most effective person to snap a football and then block a raging defensive lineman, but hey, I don’t work in football.

What does this have to do with football?

Bad IT Interviews

I do, however, work in IT, and I’ve interviewed for a lot of roles, as well as interviewed a lot of people for roles. Fortunately, for my current role I was hired through people I had worked with in the past, and there was barely a formal interview process. Even in my previous role, at the internet provider many of you are likely reading this post on, the interview process was straightforward and consisted mostly of conversations about technology and design style. I actually have to think back many moons to one particularly bad interview, with a director who thought he was the be-all and end-all of IT management. Some of the questions were:

  • How many gas/petrol stations are there in the United States?
  • Why is a manhole cover round?
  • How many pieces of bubble gum are in this jar? (Ok, I made this one up, but you get the idea)

To this list I would like to add the following questions, which I hope are destined for the waste bin of history:

  • What is your biggest weakness?
  • Where do you see yourself in five years?
  • How did you hear about this position?

None of the above questions really helps me (as the hiring manager) determine whether a person is qualified for a role as a data professional (or, frankly, any other job). They are filler, and any halfway prepared candidate will have canned answers ready for them.

Building the Better Interview

I’ve been trained on a large number of interview techniques, between business school and corporate America. There is no magic set of interview questions; however, my favorite way to interview a candidate is to get them talking about a technology they’ve worked on and are passionate about. This serves two purposes: it lets me see whether they are really into technology or it’s just a job to them, and I can pretty quickly gauge their technical level with follow-up questions. Conversations are much better than trivia, e.g., “What is the third line item when you right-click on a database in SQL Server Management Studio?”

One other thing I’ll add: make sure you have qualified people doing the interview. If you are trying to hire a DBA and you don’t have any on staff, consider bringing in a consultant to conduct the interviews. It’s a small investment that could save you a lot of money down the road.

So what stupid interview questions have you heard? Answer in the comments.

AlwaysOn Availability Groups and Automation

Last week I encountered a unique application requirement: we have a database environment configured with AlwaysOn Availability Groups for high availability (HA) and disaster recovery (DR), and our application was going to be creating and dropping databases on the server on a regular basis. So I had to develop some code to handle this process automatically. I had a couple of things going in my favor: it was our custom-developed application doing this, so I knew the procedures I was writing would be called as part of the process, and our availability group was only two nodes at the moment, so my version 1 code could be relatively simplistic and still work. Aaron Bertrand (b|t) posted on this Stack Exchange thread, and his code is a good start. I’m not going to put all of my code in this post; it’s running in our environment, but I have a few more fixes I’d like to make before releasing the code into the wild.

Dropping a Database

First of all, I need to say it’s very important to secure these procedures, because they can do bad things to your environment if run out of context, particularly this one. I denied execute to everyone except the user who would be calling the procedure; I didn’t want any accidents happening. Dropping a database from an availability group is slightly different than doing it on a standalone server. The process is as follows:

  1. Remove the database from the availability group (on the primary)
  2. Drop the database from the primary
  3. Drop the database from the secondary node(s)

Since we have to initiate this from the primary instance, we need to find out two pieces of data: 1) which availability group we are removing the database from, and 2) whether the instance we are on is the primary. In my code, I didn’t actually have to drop the database from the primary server; that piece was being called from another proc, so I just had to remove the database from the availability group and drop it on the secondary. There are a number of ways to connect to the secondary: running the drop in SQLCMD mode isn’t possible from within a stored procedure, so we could use a linked server, or we could enable xp_cmdshell, run a SQLCMD script, and then disable xp_cmdshell. The latter isn’t my favorite technique from a security perspective, but I was under a time crunch and the procedure is locked down; in the future I will probably rebuild this with linked servers (created within the procedure).
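For what that linked server rebuild might look like, here is a rough sketch of creating a linked server to the secondary on the fly inside the procedure instead of shelling out via xp_cmdshell. The server name and the drop procedure name are placeholders, not my production code:

```sql
-- Sketch only: temporary linked server to the secondary, created and dropped
-- inside the procedure. @secondary and DropSecondaryDB are hypothetical names.
DECLARE @secondary SYSNAME = N'SECONDARYNODE';

-- With only @server specified, sp_addlinkedserver assumes a SQL Server target
EXEC sp_addlinkedserver @server = @secondary;
EXEC sp_serveroption @secondary, 'rpc out', 'true';

-- Call a procedure on the secondary to drop the database there
DECLARE @sql NVARCHAR(MAX) =
    N'EXEC [' + @secondary + N'].master.dbo.DropSecondaryDB @dbname = N''MyDB''';
EXEC sp_executesql @sql;

-- Clean up so no standing linked server is left behind
EXEC sp_dropserver @secondary;
```

This keeps xp_cmdshell disabled entirely, at the cost of needing ‘rpc out’ enabled for the duration of the call.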

SET QUOTED_IDENTIFIER ON
GO

ALTER PROCEDURE [dbo].[RemoveDBfromAG] @dbname VARCHAR(50)
AS
BEGIN
    -- Temporarily enable xp_cmdshell so we can run SQLCMD against the secondary
    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE WITH OVERRIDE

    EXEC sp_configure 'xp_cmdshell', 1
    RECONFIGURE WITH OVERRIDE

    DECLARE @dbid INT
    DECLARE @groupid VARCHAR(50)
    DECLARE @groupname VARCHAR(50)
    DECLARE @server VARCHAR(50)
    DECLARE @primary VARCHAR(50)
    DECLARE @secondary VARCHAR(50)
    DECLARE @sql NVARCHAR(MAX)
    DECLARE @dropsecondary VARCHAR(500)

    -- Identify the database and the availability group it belongs to
    SELECT @dbid = database_id
    FROM sys.databases
    WHERE name = @dbname

    SELECT @groupid = group_id
    FROM sys.availability_databases_cluster
    WHERE database_name = @dbname

    SELECT @groupname = name
    FROM sys.availability_groups
    WHERE group_id = @groupid

    -- Work out which node we are on, which is primary, and which is secondary
    SELECT @server = @@SERVERNAME

    SELECT @primary = primary_replica
    FROM sys.dm_hadr_availability_group_states
    WHERE group_id = @groupid

    SELECT @secondary = node_name
    FROM sys.dm_hadr_availability_replica_cluster_nodes
    WHERE node_name != @primary

    SELECT @sql = 'ALTER AVAILABILITY GROUP ' + @groupname + ' REMOVE DATABASE [' + @dbname + ']'

    SELECT @dropsecondary = 'sqlcmd -S "' + @secondary + '" -E -Q "exec ReturnsTestInstanceDropSecondaryDB [' + @dbname + ']"'

    -- Only proceed if this instance is the primary for the availability group
    IF (@primary IS NULL OR @server != @primary)
    BEGIN
        RETURN
    END
    ELSE
    BEGIN
        EXECUTE sp_executesql @sql

        -- The copy on the secondary sits in recovery briefly after removal
        WAITFOR DELAY '00:00:25'

        EXEC xp_cmdshell @dropsecondary

        -- Turn xp_cmdshell back off when we are done with it
        EXEC sp_configure 'xp_cmdshell', 0
        RECONFIGURE WITH OVERRIDE
    END
END
GO

The one particularly unique thing you will notice in the code is the “WAITFOR DELAY.” What I observed is that after the database is removed from the availability group, the copy on the secondary goes into recovery for about 10 seconds, and we are unable to drop a database while it’s in recovery. By implementing that wait (the T-SQL equivalent of a sleep command), the database could be dropped.
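A fixed 25-second delay works, but it’s a guess. As an alternative, the secondary-side drop procedure could poll sys.databases until the database leaves the recovering state before dropping it. This is a sketch only; the one-second interval and 30-attempt cap are arbitrary values I picked, not tested production settings:

```sql
-- Sketch: wait for the database to come out of RESTORING/RECOVERING
-- instead of sleeping a fixed 25 seconds. @dbname as in the procedure above.
DECLARE @tries INT = 0;

WHILE EXISTS (SELECT 1
              FROM sys.databases
              WHERE name = @dbname
                AND state_desc IN ('RESTORING', 'RECOVERING'))
      AND @tries < 30
BEGIN
    WAITFOR DELAY '00:00:01';  -- re-check once per second, up to 30 seconds
    SET @tries += 1;
END
```

The upside is that the procedure waits only as long as it actually needs to, and the cap keeps it from spinning forever if something is genuinely wrong.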

Adding a New Database

Adding a new database has similar requirements, with a couple of additional factors. We have to verify that the instance we are on is the primary for the availability group we are adding the database to. This is where I really need to fix my code: it assumes that there are only two nodes in our availability group cluster. I need to refactor the code to loop through the other four possible secondaries (or eight if we are talking about SQL Server 2014). Also, I’m using a linked server connection, and the code assumes that the drive letter and path to the data file directory on the secondary are the same as on the primary. To summarize, the process is as follows:

  1. Accept the availability group and database names as input parameters
  2. Verify that the node you are working on is the primary for the availability group
  3. Back up the database and transaction log of the database you’d like to add to the availability group to a file share
  4. Add the database to the availability group on the primary
  5. Restore the database and transaction log to the secondary with norecovery
  6. Alter the database to set the availability group

My code for this isn’t totally fleshed out; it works in my environment, but I don’t think it’s ready for sharing. I’ll share it later, I promise.

Pitfalls to Avoid

This isn’t that different from just building out an availability group, and many of the same caveats apply: you need to ensure that agent jobs and logins affiliated with a database exist on all nodes in the availability group. Additionally, the procedures to add and remove databases from your availability group need to exist on all of the nodes. Also, if you are doing this programmatically, the connection should use the listener, so that your application is always connecting to the primary instance.
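For the listener point, a hypothetical .NET-style connection string might look like the following (the listener name, port, and database are placeholders; MultiSubnetFailover=True is the documented SqlClient option for faster failover connections against multi-subnet listeners):

```text
Server=tcp:MyAGListener,1433;Database=AppDB;Integrated Security=SSPI;MultiSubnetFailover=True
```

Because the listener always points at the current primary, the application never needs to know which physical node it is talking to, even right after a failover.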

On Presenting—Making Bad Days Good

Last weekend I had the pleasure of speaking at a wonderful technology event in the Caribbean. Ok, so it wasn’t the Caribbean, it was Cleveland, but a lot of my friends were there, I had a good time, and the event was awesome. Thanks, Adam (b|t) and crew, for running a really fantastic SQL Saturday.

Anyway, on Friday my good friend Erin Stellato (b|t) asked if I could do an additional presentation that I’ve done frequently in the past (it’s on high availability and disaster recovery in SQL Server, one of my favorite topics). I accepted the offer; the only issue was that I didn’t have a demo environment, and if there’s anything I’ve learned about SQL Server audiences in my years of speaking, it’s that they love demos. So I spent a decent amount of time on Friday and Saturday morning getting a virtualization environment built, and didn’t pay too much attention to my other presentation, on columnstore indexes.

The HA and DR session went great, but I got a little flustered in my columnstore session. I knew the materials, but I hadn’t touched the demos in a while, and I just didn’t like the general flow of the session. Combining the way I felt with the attendee feedback (attendees: always give feedback, speakers love it, even if it’s bad), I decided to refactor both my demos and some visualizations in my slides, in an effort to make things clearer. Fortunately, I get to present the session again this week (thanks, Edmonton SQL Server UG), so I’ll get to test the new slides.

The moral of this story is that sometimes we have bad days; everyone does. However, getting overly upset about it doesn’t do a whole lot of good. Think about where you can improve, and do it.


T-SQL Tuesday—Bad Bets

“The year was 94, in my trunk is raw, and in the rear view mirror was the mother $%^in’ law. Got two choices y’all, pull over the car or bounce on the devil, put the pedal to the floor.” – Jay-Z, “99 Problems”

Ok, so the situation I’m going to be talking about isn’t as extreme as Jay’s, being chased by the cops with a large load of uncut cocaine in the trunk. The year was actually 2010, and things had been going well for me in my job: I had just completed a major SQL consolidation project, built out DR and HA where there was none before, and was speaking on a regular basis at SQL community events. A strange thing happened, though: I got turned down to go to PASS Summit (in retrospect, I should have just paid out of pocket), and I wasn’t happy about it. All of the resources in the firm were being sucked into the forthcoming SAP project. So, back when I thought staying with a company for a long time was a very good thing (here’s a major career hint: focus on your best interest first, because the company will focus on its best interest ahead of yours), I decided to apply for the role of Infrastructure Lead for the SAP project. I’ve blogged about this in great detail here, so if you want to read the gory details you can, but I’ll summarize below.

The project was a nightmare: my manager was quite possibly mental, the CIO was an idiot who had no idea how to set a technical direction, and as an infrastructure team we were terribly under-resourced. The only cool part was that, due to the poor planning of the CIO and my manager, the servers ended up in Switzerland, so I got to spend about a month there and work with some really good, dedicated folks. I was working 60-70 hours a week, gaining weight, and just generally unhappy with life. I had a couple of speaking engagements that quarter, the biggest of which was SQL Rally in Orlando. I caught up with friends there (thank you Karen, Kevin, and Jen) and started talking about career prospects. I realized that what I was doing was useless and wasn’t going to move my career forward in any meaningful sense. So I started applying for jobs, and I ended up in a great position.

I’ve since left that role and am consulting for a great company now, but almost every day I look back fondly at all the experiences I got to have in that job (the day after I met my boss, he offered to pay my travel to PASS Summit 2011) and how much it improved my technical skills and gave me tons of confidence to face any system.

The moral of this story is to think about your life ahead of your firm’s. The job market is great for data pros; if you are unhappy, leave. Seriously, I had 5 recruiters contact me today.
