24 Hours of PASS—Into the Blue

I was really happy to be selected to speak at this year's SQLPASS Summit, and one of my topics has me even more excited. As you may have noticed on this blog, or in some of the many presentations I've done on the topic, Disaster Recovery is a real area of interest for me; I think it comes from spending most of my life in the path of hurricanes and tropical storms. Anyway, Disaster Recovery is almost always expensive, and (much like flood insurance) it's something that is used infrequently, so it's one of the first things that gets cut from the IT budget. Additionally, while large companies may have three or more geographically disparate data centers, many smaller organizations have a room in their main building, or just a cage or two at a local colocation center. So what is my presentation about?

Starting in April 2013, when Windows Azure Virtual Machines went live, we've had the ability to create an AlwaysOn Availability Group that spans from our on-premises data center into Windows Azure. Yes, there is some magic that needs to happen on the network and Active Directory side of the shop to make this work, but it's getting easier. In the presentation, we'll take a look at what you need to tell your AD and network teams to do in order to make this hybrid solution happen. I'll also outline the benefits, from both a cost and a technology perspective, of implementing a hybrid solution. If you look carefully, you'll notice I've avoided the word "cloud" (oops, everyone take a shot). I'm trying to make this presentation less about buzzwords and more about building solid solutions for companies that might not have the ability to do it otherwise.

Currently, there are some limitations to this hybrid Availability Group solution compared to doing it entirely on-premises; however, they are going away rapidly. If you tune in to my session at the 24 Hours of PASS, you might even get to see some surprises and new tech!

Session 23 – August 1, 2013 at 10:00 GMT

Into the Blue: Extending AlwaysOn Availability Groups

Joseph D'Antoni
Cloud Application Development & Deployment 

Odd Security Behavior in TempDB

This is one of those posts that I fell into because someone was doing something horribly wrong. I found some code yesterday that was creating objects in TempDB using the following DDL.

Create table tempdb.dbo.table_name

Please never do that; if you are going to create a temp table, create it using the # or ## prefix so it behaves normally. OK, rant over. Now, on to the issue I'm seeing. My environment is running SQL Server 2012 SP1, and I create a couple of very basic procedures.

/* Don't ever do this -- creates a permanent table in tempdb */

create procedure test1 as
select * into tempdb.dbo.perm from sys.databases;
update tempdb.dbo.perm set name = 'Test';
drop table tempdb.dbo.perm;
go

/* This is the right(er) way -- uses a global temp table instead */

create procedure test2 as
select * into ##perm from sys.databases;
update ##perm set name = 'Test';
drop table ##perm;
go

So I created a user with only public (plus EXECUTE on the two procedures above) in the database that holds the stored procedures. Additionally, the user was granted db_ddladmin in tempdb.
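
For reference, the setup looked roughly like this. This is a minimal sketch; the login, user, and database names (test_login, test_user, TestDB) are hypothetical stand-ins, not the actual ones from my environment.

/* Hypothetical names throughout -- TestDB stands in for the database that holds the procedures */
create login test_login with password = 'StrongP@ssw0rd!';
go

use TestDB;
create user test_user for login test_login;
grant execute on dbo.test1 to test_user;
grant execute on dbo.test2 to test_user;
go

use tempdb;
create user test_user for login test_login;
alter role db_ddladmin add member test_user;
go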

So when I run test1 I get the following:

(5 row(s) affected)

Msg 229, Level 14, State 5, Procedure test1, Line 5

The UPDATE permission was denied on the object 'perm', database 'tempdb', schema 'dbo'.

The user can do the SELECT INTO and the DROP TABLE (it has db_ddladmin; note that a user doesn't need db_ddladmin to work with normal temp tables), but it can't do the UPDATE.

When I run test2, I get the following:

(5 row(s) affected)

(5 row(s) affected)

That does the SELECT INTO, executes the UPDATE, and drops the table. Logically, I would expect the update to fail, as the user doesn't have db_datawriter (or any explicit UPDATE grant) in tempdb.
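
If you want to reproduce the difference without opening a separate connection, impersonation works; this assumes the hypothetical test_login and TestDB from the sketch above.

/* Impersonate the low-privileged login and run both procedures (names are hypothetical) */
use TestDB;
execute as login = 'test_login';
exec dbo.test1;   /* the SELECT INTO works, the UPDATE fails with Msg 229 */
exec dbo.test2;   /* completes */
revert;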

On Twitter, Daniel Taylor (b|t) pointed out some Microsoft documentation that offers some insight (thanks much, btw!).

I suspect that if an object gets created with a # or ## prefix, those special rules apply, but if it gets created as a normal table in tempdb, typical SQL Server security rules apply. I've investigated the sys.database_permissions view and looked in the Resource database, but I haven't found complete confirmation.
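
For anyone who wants to poke at this themselves, this is roughly the kind of check I was running against tempdb; the user name is the hypothetical test_user from earlier.

/* List explicit permissions granted to the (hypothetical) test_user in tempdb */
use tempdb;
select pr.name as principal_name,
       pe.class_desc,
       pe.permission_name,
       pe.state_desc
from sys.database_permissions as pe
join sys.database_principals as pr
    on pe.grantee_principal_id = pr.principal_id
where pr.name = 'test_user';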

The Two Year Release Cycle and the Enterprise

Last month at TechEd, Microsoft committed to what many of us knew already: a two-year (in some cases, more like 18-month) release cycle for its server and systems products (Windows Server, SQL Server, System Center, et al.). While in some ways I think this is really outstanding (it's cool to see problems get fixed and new features added faster), the reality is that it is hard for enterprises to adopt new software versions. Of the bundle, Windows Server should be the easiest, but I've had application teams tell me their new SQL Server couldn't be on Windows version Z just because their old servers were on version Y. Yes, I know this is completely irrational and has no technical merit; it still happens, though. The bigger issue tends to be with SQL Server and Independent Software Vendors (ISVs): ISVs develop their product on a given version of the RDBMS and aren't always quick to regression test when a new version comes out. When I was a DBA, this would frustrate me to no end.

So what is an enterprise to do about this? Throw your hands in the air, scream, and keep running NT 4? No, that would be a bad idea; you couldn't even RDP into NT 4, remember that? Here are some things I've done:

  • Limit support for old versions, and be aggressive with the timelines. At my last employer, we avoided letting versions go into limited-support modes.
  • Keep a set number of versions available for new builds. This will save hassle around your build process and can help push teams to adopt newer versions of software.
  • Sell the new version. Both SQL Server 2012 and 2014 have major enhancements that can really help application performance, but your dev teams probably don't know about them. Give them presentations on the new features that can alleviate their pain points.

One of the other things I attempted at my last job (we built a really slick VM automation process) was to limit the automated VM builds to SQL Server 2012 and Windows Server 2012. Unfortunately, this didn't work as well as we expected; we still had a lot of 2008 and 2008 R2 requests that needed to be handled manually.

My last thought on this is that it's partially an attempt by Microsoft to draw its customers towards Azure and Office 365. The theory is that those platforms are always being upgraded with the latest features and versions, and you don't have to take the time to do the installation and integration testing, since Microsoft has already done it for you.

What are your thoughts on the two-year cycle? How will it impact your company?

T-SQL Tuesday #44: Second Chances

Bradley Ball (b|t), who also happens to be my moderator for the upcoming 24 Hours of PASS, is hosting this month's T-SQL Tuesday, which is all about second chances. When I think about the things I've screwed up in my career (and there are many), I always come back to one. It doesn't even involve SQL Server, but another RDBMS, and it offers a couple of lessons on how to be a good DBA.

It was late on a Friday afternoon, 15:22 to be exact (here is lesson #1: never do anything involving production on a Friday afternoon unless you have to), and I got a call from a user asking me to refresh the QA database with production data. The system was an environmental monitoring system that tracked the atmosphere and surfaces in the biopharmaceutical manufacturing plant where I worked; it wasn't exactly mission critical, but it was still pretty important. Since the user was a friend of mine, I jumped right on it. The database platform I was working on, as opposed to SQL Server (where I'd probably just restore a backup), has a very easy-to-use import/export feature that allows for simple logical restoration of specific objects, schemas, and so on. This was my standard methodology for doing a refresh with production data, but I had yet to script it (lesson #2: be lazy; if you have to do something more than once, script and automate it).
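
For what it's worth, in SQL Server terms the scripted version of that kind of refresh is short enough that there's no excuse not to have it. Here's a minimal sketch, where the database names, backup paths, and logical file names are all hypothetical.

/* A minimal sketch of a scripted prod-to-QA refresh in SQL Server terms.
   Database names, backup paths, and logical file names are hypothetical. */
backup database ProdDB
    to disk = 'D:\Backups\ProdDB_refresh.bak'
    with copy_only, init;

restore database QADB
    from disk = 'D:\Backups\ProdDB_refresh.bak'
    with move 'ProdDB_Data' to 'E:\Data\QADB.mdf',
         move 'ProdDB_Log' to 'F:\Logs\QADB.ldf',
         replace;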

Anyway, I went ahead and took my export of production and started to import it back into the QA environment. Typically, my process would be to log into QA, drop the user that owned the objects, and then run the import. For whatever reason (and in this case it was probably a good thing), I didn't do that. I started my import and noticed I was getting some errors. Once again doing something I wouldn't ordinarily do, I cancelled the job and reran it with error suppression on (lesson #3: always read the errors, and never turn error suppression on). The job completed without error, I emailed the user to tell him it was done, and I went on with my Friday afternoon. About five minutes later, I got a phone call from the same user.

There's a classic moment that happens in IT (and probably in each of these blog posts) that I like to call "the bead of sweat moment": it's the moment you realize you #$%ed up badly, but no one else has quite figured out that you are responsible yet. I asked the user to get everyone out of the system. At an ordinary company, an error and outage like this wouldn't necessarily be a big deal, but this was a control system in a pharmaceutical plant, so it was heavily regulated. What had happened was that I had (accidentally) run the import from production onto production itself; the errors I suppressed were duplicate record errors.

So now I had a production database filled with duplicate data. I went and told my boss: I screwed up, and we're going to be down for about 10 to 15 minutes. Fortunately, I knew from the job's log the exact time the records were inserted. Also, the database was in the equivalent of full recovery mode, so I was able to do a point-in-time restore to the second before I started the import job. This leads us to lesson #4: always have a backup, and it's even better if it's close to the system (in this case it was on local disk before it was zipped off to tape).
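
That system wasn't SQL Server, but the same recovery in SQL Server terms looks roughly like the sketch below; the database name, backup paths, and STOPAT time are hypothetical.

/* A sketch of the equivalent point-in-time restore in SQL Server.
   Database name, backup paths, and the STOPAT time are hypothetical.
   In practice you'd take a tail-log backup first and restore that as well. */
restore database ProdDB
    from disk = 'D:\Backups\ProdDB_full.bak'
    with norecovery, replace;

restore log ProdDB
    from disk = 'D:\Backups\ProdDB_log.trn'
    with stopat = '2013-07-12 15:25:00', recovery;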

The lessons I learned were several, but the biggest is that if you have good backups (and regularly test your restore process), you are protected from a lot of evils.
