Letting Go—SQL Server 2005 Deprecation

SQL Server 2005 goes out of extended support this week, specifically tomorrow. That means if you have a problem, or a new bug is discovered, Microsoft support will require you to upgrade before they can help you. This is a particularly big deal when a new security vulnerability is found, because it will not be patched for the release of SQL Server that your organization is running.

But I Work in Healthcare, Banking, or Government

If you work in government, I don't really have any advice for you—I'm not familiar with how to navigate the government machinations to get off of really old software. However, if you work in healthcare or banking, you want to make sure your boss and your quality assurance people know that you are now running an unsupported version of the software. When I worked in pharmaceuticals and medical devices this worked pretty well—what really helped was getting our RDBMS classified as part of infrastructure, so it didn't have to be qualified like application software. If you work in banking or finance, the better angle to follow up on is security: if there is a new zero-day attack, your servers won't be patched until you upgrade.

SQL Server 2005—A Pretty Awesome Release

As much as I've been working with SQL Server 2016 lately (and it really is awesome), SQL Server 2005 was a groundbreaking release that really gave Microsoft market share in the big enterprise database space. A few of the features introduced in SQL Server 2005 include:

  • Database Mirroring
  • Dynamic Management Views (DMVs)
  • Service Broker
  • SSIS
  • Modern Consistency Checks
  • SQL Server Management Studio

SQL Server 2005 was an excellent release; however, if you are still using it, you are running effectively 11-year-old software to host your applications. That's a bad idea—remember what cell phones were like in 2005?

[image: a 2005-era cell phone]

That's what your database looks like now. Just remember—if you are doing an upgrade, skip 2008, 2008 R2, and 2012, and go straight to SQL Server 2014 (or 2016 if you are reading this after RTM); there's no reason to upgrade to software that's going to be out of support in two years. Also, install it on the latest version of Windows, so you don't have to do this again when Windows Server 2008 R2 goes out of support.

Challenges of Speaking—Being Ready for Anything

I have the good fortune to have a job that allows me to travel around the world speaking about technology. I love this—it gives me a way to see new places, experience other cultures, and meet a lot of wonderful people who have similar interests to me. I always say about half of my job is working on computers and databases, and the other half is talking and writing about them. I've been speaking professionally for almost 10 years now, and have had a lot of unique situations.

One of the biggest challenges is that I speak a lot about cloud computing. Generally speaking, that involves connecting over a myriad of ports, on networks that may or may not block them. Here are some tips I've used to get to the cloud in various bad network configurations:

  • Use a VPN to remote desktop to another machine that can get out
  • When the network is blocking RDP packets, use your phone to tether
  • Set up a VM in Azure with RDP running on port 443 (the HTTPS port) and RDP to that
  • If all else fails, use screenshots
All of these are complicated by foreign countries, different networks, different network policies, etc.

The one that takes the cake for challenges, though, is losing the projector. That happened to me for the first time at a SQL Saturday in Houston—I was doing a session on SQL Server high availability and disaster recovery and the projector locked up. It was a session I could easily have done on a whiteboard; fortunately, the staff was able to get me up and running again.

Recently, my machine locked up during a session at SQL Saturday Cleveland, so I calmly pulled out my second machine and fired it up while I tried to get the first one working. When the first machine failed again, I restarted on my DR machine. I'm not saying you need to have two laptops in your bag, but it is a nice-to-have.

Finally, this week while I was teaching in India, I lost video again—and this time there weren't good whiteboards. My topic was networking in Azure, so I worked with another presenter to do a live session, with each of us representing parts of the network. The session went over quite well.

So what are the tips you need to know?

  • Always have a DR plan (this goes for travel, presenting, IT, and life in general)
  • Stay calm—getting overly nervous does neither you nor the audience any good
  • If things go really sideways, don't be afraid to co-present with someone; it's a good rescue mechanism
  • Your audience wants you to succeed—they aren't against you, so relax
  • When all else fails, scotch

Getting Started with Polybase—Incorrect Syntax near ‘EXTERNAL’ Error

I was playing around with configuring Polybase this morning. Nothing too exotic, just installing the feature and getting started. I got everything installed and was about ready to create my data source. The command to do that is as follows:

CREATE EXTERNAL DATA SOURCE AzureStorage
WITH (
    TYPE = HADOOP,
    LOCATION = 'wasb://polybase@mystorageaccountnamehere.blob.core.windows.net',
    CREDENTIAL = AzureStorageCredential
);

Unfortunately, it returned the following error:

Msg 102, Level 15, State 1, Line 1

Incorrect syntax near 'EXTERNAL'.

So what was the fix? A reboot after installing Polybase. The official documentation refers to restarting SQL Server, which I did; nothing warned me to reboot the machine, but rebooting fixed the error.
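For completeness, the command above assumes a database scoped credential named AzureStorageCredential already exists. Creating one looks roughly like this (a sketch; the master key password, identity, and secret are placeholders, and the secret is your storage account access key):

-- A database master key is required before creating the credential.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPasswordHere>';
GO
-- For wasb:// sources the IDENTITY value is arbitrary;
-- the SECRET is the storage account access key.
CREATE DATABASE SCOPED CREDENTIAL AzureStorageCredential
WITH IDENTITY = 'polybaseuser',
SECRET = '<your storage account access key>';
GO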

Using SQL Sentry Plan Explorer to Shed Light on Estimated Execution Plans

Note: While I do have business relationships with many vendors, I do not have one with SQL Sentry aside from attending their great annual party at the PASS Summit.

SQL Server gives us access to two types of execution plans: estimated and actual. Actual execution plans include the real row counts and statistics gathered when the query executed, while estimated plans show the estimates the SQL Server optimizer made, based on statistics, when it generated the plan. The ideal scenario for performance tuning is to have the actual plan, but when you are dealing with a query that runs for hours and a busy procedure cache, that can be a real challenge to get.
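As an aside, if the estimated plan is all you can get, you can at least generate it without executing the query. A minimal sketch (the SELECT is just a placeholder for your real long-running query):

-- Return the estimated plan as XML instead of executing the query.
SET SHOWPLAN_XML ON;
GO
SELECT * FROM sys.objects;  -- placeholder for the long-running query
GO
SET SHOWPLAN_XML OFF;
GO

Save the XML output with a .sqlplan extension and it opens directly in Plan Explorer.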

So yesterday, I was helping a friend tune a couple of queries—one was running for hours, the other in minutes. I got my hands on the estimated execution plans and quickly loaded them into SQL Sentry Plan Explorer (which has a free version). I can't recommend this tool enough: for simple query plans Management Studio is OK, but the second you have any level of complexity, Plan Explorer lets you quickly break the plan down and identify where the problems are. As in this case:

Figure 1: The Good Query

Figure 2: The Bad Query

I quickly honed in on the fact that the bad query was doing a lazy spool with 6 BILLION rows versus the 229 million in the good query. However, my friend, who was looking at Management Studio, asked where I was getting those numbers. This is what the XML from the original plan showed:

<RelOp AvgRowSize="51" EstimateCPU="0.00889634" EstimateIO="0.01" EstimateRebinds="0" EstimateRewinds="139581" EstimatedExecutionMode="Row" EstimateRows="48868" LogicalOp="Lazy Spool" NodeId="55" Parallel="true" PhysicalOp="Table Spool" EstimatedTotalSubtreeCost="1242.86">

I assumed Plan Explorer was doing some math on the execution plan, so I contacted my good friend Aaron Bertrand (b|t), who works at SQL Sentry, and asked him about it. His response was that "Plan Explorer will sometimes adjust the displayed row counts for certain operators where we know execution count is greater than one." In this plan, that presumably works out to the estimated rows multiplied by the estimated number of executions: 48,868 rows across roughly 139,582 executions (rewinds + rebinds + 1) puts the operator in the billions of rows. This is a really great use case for getting better data out of estimated execution plans when that is all you have to work with.

Are You Still Installing SQL Server from the GUI?

This post was inspired by a tweet from Robert Davis.

A little bit of a story: I once had a client who was insanely worried about renaming SQL Servers for a migration. Note: it's no big deal to rename an offline SQL Server. The ONLY thing you need to change is the internal server name, by running sp_dropserver and sp_addserver. There is a single registry key called "ORIGINAL_SERVER_NAME" that stays the same across multiple renames; every other SQL registry key changes when you rename the server, so there is nothing to worry about in the registry when you do a name change. This rename process was part of our automation strategy for cloud at Comcast, and if I had to guess, it's also part of what Microsoft is doing in Azure.
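For reference, a minimal sketch of the rename itself (the server names are placeholders, and the service needs a restart afterward):

-- Drop the old internal name and register the new one
-- ('local' marks it as the local server).
EXEC sp_dropserver 'OldServerName';
GO
EXEC sp_addserver 'NewServerName', 'local';
GO
-- After restarting the SQL Server service, verify the change:
SELECT @@SERVERNAME;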

Anyway, on to my original topic. I know most shops aren't Comcast, where we were deploying multiple servers a week. However, automating your installation process has a lot of other benefits: you don't have to worry about misconfigurations (unless you put them into your script), and it forces you to adopt a standard drive letter layout across all of your servers, so when something breaks, you know exactly where to go. And more importantly, you save 15 minutes of clicking Next.

Courtesy of XKCD

I love this comic from XKCD. It doesn't take a lot of time to automate this process. So how do you do it?

It's easy: run through the installer, applying all of your best practices and service accounts like you would for any new install. When you get to the end, before you click Install, you'll notice this:

See that file? Setup generates a ConfigurationFile.ini and shows its path on the Ready to Install page. Save it somewhere handy and then cancel out of the install. There are a couple of edits you'll need to make it automation ready; the biggest is that you want your install to run in either quiet or quiet simple mode (I prefer quiet simple: it shows the GUI, but doesn't allow any interaction with it). From there it's as simple as running the following command (this assumes you have the SQL media mounted under D:):

D:\setup.exe /ConfigurationFile=C:\temp\GoodConfigFile.ini
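The edits in question might look something like this (an excerpt sketch; these parameter names come from the documented setup options, but your generated file will have many more entries):

; GoodConfigFile.ini (excerpt)
; Quiet simple mode: shows progress UI, accepts no input.
QUIETSIMPLE="True"
; Required for any unattended install.
IACCEPTSQLSERVERLICENSETERMS="True"
; Also delete the UIMODE line; it cannot be combined with a quiet mode.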

Webinar—HA and DR in the Clouds

This Thursday (21 January 2016 at 1 PM EST / 1800 GMT) I will be presenting a webinar in conjunction with SIOS and MSSQLTips.com on high availability and disaster recovery in the cloud, specifically Microsoft Azure. We will cover techniques and requirements, talk through several different platform options, and look at some of the new features in both Windows Server 2016 and SQL Server 2016 and how they affect your architecture.

You can register here:

Webinar Registration

I look forward to you joining me on Thursday.

How Do I Measure IOPs from SQL Server?

One of the more common challenges DBAs face when requesting new SAN environments from storage administrators is gathering the actual number of I/O operations per second (IOPs) that SQL Server performs. You see, storage admins speak about performance in terms of IOPs, latency, and throughput, whereas database folks like to speak in terms of transactions per second. So in the past it was a challenge for the DBA to communicate IOPs requirements to the SAN admin: it involved carefully capturing a lot of different perfmon counters and making sure they all added up correctly. It was definitely more art than science.

Introducing Resource Governor in SQL Server 2014

One of the key feature enhancements in SQL Server 2014 was the addition of I/O control to Resource Governor. Resource Governor is an Enterprise Edition feature that allows DBAs to manage workloads by classifying connections into pools that have specific amounts of hardware resources allocated to them. When the feature was introduced in SQL Server 2008, it had one missing element: the ability to constrain workloads based on I/O. CPUs have gotten much faster since 2008, so I/O has become the bottleneck for many, many systems. In SQL Server 2014, Resource Governor can manage IOPs per volume (note: it's not in the GUI as of now, so you'll have to use T-SQL to configure it, as sketched below).
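A minimal sketch of that T-SQL (the pool name and limits here are made up, and the pool must already exist):

-- Cap a hypothetical reporting workload at 500 IOPs per volume
-- and guarantee it at least 50.
ALTER RESOURCE POOL [ReportingPool]
WITH (MIN_IOPS_PER_VOLUME = 50,
      MAX_IOPS_PER_VOLUME = 500);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO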

I have Standard Edition—How Does This Help Me?

Well, if you have Standard Edition, you won't be able to use Resource Governor to manage resources. However, the perfmon counters from Resource Governor still come with your install, and that is how you can measure IOPs from SQL Server.

Figure 1: Perfmon Counters

That handy SQL Server:Resource Pool Stats counter, with its Disk Read IO/sec and Disk Write IO/sec values, provides the data you need to give your SAN admin. The screenshot shows an Enterprise Edition instance, and you can see my resource pools on the left side; if you are using Resource Governor, you could use this to classify I/O workload by application for potential chargeback situations.
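If you would rather measure from inside SQL Server, or sanity-check perfmon, similar numbers can be derived by sampling the cumulative counters in sys.dm_io_virtual_file_stats. A rough sketch, not from the original post:

-- Snapshot cumulative I/O counts for the whole instance.
SELECT SUM(num_of_reads) AS reads, SUM(num_of_writes) AS writes
INTO #baseline
FROM sys.dm_io_virtual_file_stats(NULL, NULL);

WAITFOR DELAY '00:01:00';  -- 60-second sample window

-- The delta over the window is your average IOPs.
SELECT (SUM(f.num_of_reads) - MAX(b.reads)) / 60.0 AS read_iops,
       (SUM(f.num_of_writes) - MAX(b.writes)) / 60.0 AS write_iops
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS f
CROSS JOIN #baseline AS b;

DROP TABLE #baseline;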

Good luck and happy SAN tuning.

Using PowerShell to Generate a Transaction Log Restore Script

This is a really basic PowerShell post—those of you who do lots of PowerShell automation can skip it, and I know I could have taken this three steps further and automated the whole process. Thanks, Mike Fal (b|t)! (Seriously, if you want to read some great SQL PowerShell tips, hit Mike's blog.) Anyway, I'm building an Availability Group for a client with a very large database. I had taken a full backup and was going to take a differential, so I didn't have to apply a full day's worth of transaction log backups to the database. Well, it turned out the workload consisted largely of bulk insert operations, which are minimally logged—that meant my differential backup was huge, but my transaction log backups were quite manageable. Except I had about 100 of them, and I didn't want to hand-enter all of the file names.

One of my favorite techniques for automation is to use the concatenation operator in T-SQL (+) to dynamically create SQL statements (note: if you're reading this and you're a database developer, don't do this—write stored procedures. It's a valid technique for running one-off queries, but it doesn't scale well). We can do the same thing in PowerShell.
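If you haven't seen the T-SQL flavor of that trick, it looks something like this (a generic sketch; the index rebuild scenario is purely illustrative and not from the original post):

-- Concatenate one maintenance statement per table;
-- copy the output into a query window and run it.
SELECT 'ALTER INDEX ALL ON ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + ' REBUILD;'
FROM sys.tables AS t
JOIN sys.schemas AS s ON t.schema_id = s.schema_id;

But first things first—let's figure out how to list the files.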


$backupPath = "\\fileshare\instancename\database\LOG\"
Get-ChildItem $backupPath

What we are doing with those two commands is defining a variable ($backupPath) and then listing the files in it. That listing isn't useful to us yet—it has too much information.

What we want to do is parse the name out of that data and concatenate a restore statement.


Get-ChildItem $backupPath | sort -Property LastWriteTime | select {"restore log DatabaseName from disk='" + $backupPath + $_.Name + "' with norecovery"}

We are doing a few things here. We start with the original listing of files, but now we pipe it (the |) to the next command, taking advantage of PowerShell's object model. You'll notice the curly brackets after the select: those define a script block, which you need in order to concatenate text with the piped-in object. Unlike in dynamic SQL, there is no need to escape the single quotes that our T-SQL command will ultimately need; a single quote inside a double-quoted PowerShell string is treated as a literal character and not evaluated, so it prints as a single quote. You'll also note the value $_.Name is the file name; Name is one of the properties on the objects the Get-ChildItem cmdlet returns, and the $_. prefix is needed because we are inside a script block. For example, if you execute the following code:


Get-ChildItem $backupPath | select Name


You will see this:

[screenshot: the output showing just the Name property for each backup file]

So what do we get when we run our statement?

I only had one backup here—on my client system I had over 200. I was able to just copy the output, paste it into Management Studio, and run the restores. Note that this is the bare minimum of what you can do: you could filter on the file timestamps to restore to a specific point in time, and you could definitely automate the restores themselves with the SQL Server PowerShell cmdlets. I was keeping it simple and fast, as this was a simple scenario. One thing I probably should have done is use the Write-Output cmdlet to send the generated commands to a .sql file. Happy Thanksgiving and happy coding!


A Long Day’s Journey to a Saints Game or #joeystravelstories

Introduction

When I noticed that the PASS Summit and MVP Summit were on back-to-back weekends, that there was a New Orleans Saints home game on All Saints' Day (1 November), and that Alaska Airlines had introduced a new nonstop flight to Seattle that would let me watch the Saints game and still make it back in time for the MVP Summit, I jumped at the chance. (Apologies for the absurdly long run-on sentence—it's late.) So after a fantastic PASS Summit, I went to Seattle's airport early Saturday morning. For reasons I don't remember now, I had booked myself on an American Airlines flight through Dallas (I do have status with them) rather than the inbound nonstop Alaska flight.

 

The Saints

In case you were wondering about the significance of All Saints' Day in relation to my favorite American football team, the New Orleans Saints: the franchise was granted to the city 49 years ago today, on November 1, 1966. I've been to Saints games all over the United States, and even one in London. My dad keeps season tickets, and I usually travel in for several games a year. It's a fun bonding experience, and I've seen all sorts of miraculous things happen, like Steve Gleason blocking the punt the night the Superdome reopened, Garrett Hartley kicking the Saints into the Super Bowl, and Tracy Porter closing the deal on that Super Bowl.

Travel

In general, I have pretty good travel luck. I suppose it helps that I live in a hub city (Philadelphia), so most of my flights are nonstop. Unlike my good friend Karen Lopez, who has her own hashtag (#karenstravelstories), I mostly have an occasional delay here or there, and I usually get where I am going without issue.

Today, however, that would not be the case. We were a few minutes late leaving Dallas, and I knew there were some pretty bad thunderstorms around the New Orleans area. As we made our approach into Louis Armstrong airport, the turbulence was incredible, and I was genuinely concerned about our ability to make a safe landing. As it turns out, so was the captain. We aborted the landing, circled briefly, and then headed to scenic Shreveport, Louisiana (famous for Hope Foley having grown up there) to refuel and, hopefully, come back to New Orleans.

Oh, but American…

So we landed in Shreveport, and the pilot mentioned a minor mechanical issue that had to be checked out by a mechanic in addition to our refueling; to his credit, he let everyone off of the plane. The flight crew was absolutely fantastic in general, but there was one problem: we were basically in a complete information blackout. I was chatting with the flight attendants, and they were in the dark as well. After a couple of hours of this, I really just wanted them to cancel the flight so I could get my checked bags (yes, slap me now) and get a rental car. The on-call mechanic (a contractor, not an AA employee) was apparently trick-or-treating with his kid and couldn't make it.

American didn’t officially cancel the flight until shortly after 10 PM (we landed at Shreveport at 6:30).

My DR Plan, or Why I'm Blogging This

As a business traveler, you learn to plan for adversity and always have three fallback plans. It's just like dealing with a major database issue: you need a plan of attack. As soon as I heard we were landing in Shreveport, I knew I was pretty screwed. It's a tiny airport with next to no service to anywhere that's not a hub. In addition to the Saints game, I needed to make my Alaska flight the next night at 7 so I could get to the MVP Summit. So I had a couple of options:

  • Wait out the delay—this was problematic, but a plane to where you are going is always the best bet
  • Get a rental car and drive to New Orleans—it's a 4-5 hour drive, but I couldn't get my baggage
  • Get a rental car, drive to Dallas, and have AA book me on the first flight to New Orleans in the morning
  • Stand in line with the other people at the airport and try to get rebooked (LOL, don't ever do this—you'll get where you're going next Thursday)

After a certain point, I decided to book the rental car. I knew we weren't going to get out on that plane until really late, and I started hearing murmurs of cancellation from the flight crew (pro tip—always stand next to the flight crew). As soon as the flight was officially cancelled, I took off to the rental car counter and had a car reserved before the horde got there. I was in my car with my bags within 30 minutes.

Never Outdrive Your Headlights

This is a quote from "The Art of Racing in the Rain" that I used both metaphorically and literally tonight. It was late, but I had the benefit of being on west coast time and having napped on my flight from Seattle, so I didn't feel tired at all; my fallback plan was to stop and get a hotel if needed. It was also raining the whole way, which made the level of concentration required a little extreme, but I think it helped me stay awake—for example, I was mentally composing this post during the last hour. The same theory applies to solving a database problem: always look at what's in front of you before trying to chase down other problems that might not be real, or that you can't yet see.

It’s Not Always a SAN Problem

Tomorrow I'll be doing a new webinar on how to identify performance problems related to storage in SQL Server. Sometimes what looks like a SAN problem to SQL Server may just be missing indexes, not enough memory, or something else that's forcing SQL Server to do way too much I/O. In this session you'll learn how to:

  • Identify when SQL Server is having storage-related issues
  • Understand what's happening from the perspective of the OS
  • Know which performance counters are useless when you are on a SAN
  • Talk to your SAN admin about performance

Join me and my friends at Embarcadero for this great webcast at 11:00am Pacific / 1:00pm Central / 2:00pm Eastern on Wednesday, September 16.
