Cluster Aware Updating Part II—SQL Server Failover Cluster Instances

I recently wrote here about my experiences testing the new Windows Server 2012 feature, Cluster Aware Updating, with SQL Server AlwaysOn Availability Groups. As you can see in that post, it didn’t go so well. At the time, I didn’t have the opportunity to test the more traditional SQL Server Failover Cluster Instance.

Well, after the Thanksgiving holiday, I was able to get my infrastructure up and running and build a Failover Cluster Instance. Note—I was using SQL Server 2012 SP1 and Windows Server 2012, with StarWind SAN software for the shared storage requirement. I am very happy to report that the Cluster Aware Updating process simply worked, and failed over the instance correctly. So to set this up, you will simply need to configure Cluster Aware Updating and not do anything else to SQL Server. Details on Cluster Aware Updating can be found here.
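For reference, here is a minimal sketch of that configuration using the ClusterAwareUpdating PowerShell module that ships with the Windows Server 2012 failover clustering tools. The cluster name (SQLCLUSTER) and the schedule values are hypothetical; adjust for your environment.

```powershell
# Module ships with the Failover Clustering management tools on Windows Server 2012
Import-Module ClusterAwareUpdating

# Add the CAU clustered role so the cluster can patch itself on a schedule;
# the Microsoft.WindowsUpdatePlugin pulls updates from Windows Update or WSUS
Add-CauClusterRole -ClusterName SQLCLUSTER `
    -CauPluginName Microsoft.WindowsUpdatePlugin `
    -DaysOfWeek Sunday -WeeksOfMonth 2 -Force

# Or trigger a one-time updating run on demand instead of waiting for the schedule
Invoke-CauRun -ClusterName SQLCLUSTER -Force
```

Nothing needs to change on the SQL Server side for a Failover Cluster Instance; CAU drains and pauses each node, and the instance fails over like it would for any planned node outage.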

I suspect the issue with AlwaysOn Availability Groups relates to the way they interact with the cluster service. In the traditional failover cluster model, we do all of our instance control through Failover Cluster Manager; however, with AlwaysOn AGs we are specifically instructed not to:



So I think it’s Cluster Aware Updating attempting to manipulate that service that is causing the errors. Just my thoughts, please share yours!

SQL Saturday #173—Washington DC

My Slides for the presentation are here.

Replication Subscribers and AlwaysOn–from Books Online.

Link to Starwind Documentation

The Best Consulting Gig Ever

Before I knew anything about anything, I signed up for a site called Elance. It’s a reverse auction site for contract/consulting type work. From seeing the prices on the site, I can’t imagine the quality of service is very high, and it probably ends up with questions like “Do you have any MSSQL instances running on Linux?” (oops, that happened at my real job today). Anyway, I hadn’t paid any mind to Elance in years, but I got an email this morning saying that someone had invited me to submit a proposal for a “DBA Performance Architect” role. Since the title looked interesting, I read the email.

DBA Performance Architect

Fixed price: Less than $500  |  Database Development  |  Posted: Apr 19, 2012
  some guy,    United Arab Emirates
XYZ LLC requires all candidates to complete a 1 week test assignment as part of our interview process. You will be paid for the test assignment ONLY if we decide to hire you based on the test assignment results. If we decide not to hire you, there will be no payment made. 

The XYZ LLC DBA Performance Architect role pays $30/hr ($1200 fixed price payment if your trial performance leads to a successful hire). Please note that we require signup or an existing affiliation with an approved staffing and payment platform that charges a percentage fee on your gross earnings that will impact your net payment.

The XYZ LLC DBA Performance Architect position is based around improving application performance by improving the performance of the database. Job includes increasing database performance through, database deployment and configuration, optimizing SQL and stored procedures, indexing and schema based optimization, and database tools optimization.

This role requires experience with Oracle, Postgres, Microsoft SQL Server, SQL, stored procedures, mysql.

In addition, this role has the following non-technical requirements

English – all candidates must be able to speak and write capably in English. English need not be the native first language, but it should be sufficient to enable technical discussion.
Video – candidates must have the computer hardware and networking bandwidth to conduct a seamless video skype conversation for team communication. Ongoing use of webcam for billing and skype video are required.
Full Time – This job is only offered for a Full Time basis (40 hrs/wk).

Qualification for this role begins with an unpaid testing phase where candidates are required to provide verifiable identification, complete online skills testing, and attend a brief Skype video interview. The testing phase lasts for no more than a week, and highly motivated candidates can complete all requirements of the testing phase within a few hours.

Candidates who pass our testing phase will be offered a one week fixed price trial assignment. Candidates who successfully complete our trial assignment within the given time frame will be offered the full time position at 40 hrs/wk at the jobs hourly rate. Those who show progress and dedication during the trial phase but fail to complete the assignment successfully may be offered an alternate position within XYZ LLC.

If interested, simply click “Apply for this Position” below.
Added 18 MAY 2012, 14:55 PM EDT
The pay rate for United States citizens and permanent residents is $24/hr to cover W-2 processing.

There are just so many things wrong with this I don’t know where to start. A one-week free trial phase to make sure I know what I’m doing, and that only for a one-week fixed-price assignment (at < $500). Wow, and then they are only paying $30/hr; oh wait, if you’re in the US, make that $24/hr. I can barely hire someone who can turn on a computer for that in the US, much less someone who knows Oracle, Microsoft SQL Server, and Postgres. Also, I have the distinct feeling that if you were to resolve their performance issues during the first unpaid testing week, then surprise, you don’t pass the test assignment, and you’ve fixed their problems for free.

I’ve seen some really crappy job postings in the past, but this one takes the cake. Good luck hiring for this XYZ.

Cluster Aware Updating and AlwaysOn Availability Groups

One of the features I was most looking forward to in Windows Server 2012 was Cluster Aware Updating. My company has a lot of Windows servers, and therefore a lot of clusters. When a big vulnerability happens and servers all need to be rebooted, we use System Center Configuration Manager to handle the reboots automatically. Unfortunately, clusters must maintain quorum to stay running, so rebooting them has generally been a manual process.

However, with Windows Server 2012, we have a new feature called Cluster Aware Updating that is smart enough to handle this for us. It allows us to define a cluster for patching, so we can tell our automated tools to update and reboot the cluster, or we can even just update and reboot manually. This seems like a big win—it was hard to test in earlier releases of Windows 2012, as updates weren’t available. So my question was how it would work with SQL Server. My first test (I’ll follow up with testing a SQL Server Failover Cluster Instance) was with my demo AlwaysOn Availability Groups environment.

The environment was as follows:

  • One Domain Controller (controlling the updates as well)
  • Two SQL Server 2012 SP1 nodes
  • No Shared Storage
  • File Share and Node Majority Quorum Model (File Share was on DC)
  • Updates downloaded from Windows Update Internet service

I ran into some early issues when I ran out of C: drive space on one of my SQL VMs; it was less than intuitive that the lack of storage was the problem, but I was able to figure it out and work through it. So I started on attempt #2. The Cluster Aware Updating process works as follows:

  • Scans both nodes looking for required updates
  • Chooses node to begin updates on (in my case it was the node that wasn’t the primary for my AG—not sure if that’s intentional)
  • Puts node into maintenance mode, pausing the node in the cluster
  • Applies Updates
  • Reboots
  • Verifies that no additional updates are required
  • Takes node out of maintenance mode
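The steps above can also be previewed and watched from PowerShell. A sketch, again assuming a hypothetical cluster name of SQLCLUSTER:

```powershell
Import-Module ClusterAwareUpdating

# Preview which updates CAU would apply to each node, without installing anything
Invoke-CauScan -ClusterName SQLCLUSTER

# While an updating run is in progress, report per-node status
Get-CauRun -ClusterName SQLCLUSTER

# After the run completes, show a detailed report of what was applied
Get-CauReport -ClusterName SQLCLUSTER -Last -Detailed
```

The scan is a useful sanity check before letting CAU loose on a production cluster, since it shows exactly which patches will trigger the reboot cycle.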

All was well when my node SQLCluster2 went through this process. When SQLCluster1 went into maintenance mode this happened:

When I logged into SQL Server on SQLCluster2 to check the Availability Group dashboard, I found this.

The Availability Group was in Resolving status. Mind you, the cluster still had quorum and was running. I couldn’t connect to the databases that are members of the AG, and while I could connect to the listener, the databases were again inaccessible. The only option to bring these DBs online immediately is to perform a manual forced failover to the other node, which may involve data loss. After the updating is completed, the services do resolve themselves.

I was hoping Cluster Aware Updating would work a little more seamlessly than that. As far as I can tell, to avoid an outage I will need either manual intervention or some intelligent scripting to fail my AGs over ahead of time. Hopefully this will get resolved in forthcoming SPs and/or CUs.
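For the “fail over ahead of time” approach, a minimal sketch using the SQL Server 2012 PowerShell provider. The AG name (MyAG), node, and instance path here are hypothetical placeholders:

```powershell
Import-Module SQLPS -DisableNameChecking

# Run this against the synchronous secondary you want to promote, BEFORE
# CAU pauses the current primary; this is a planned manual failover
# (no data loss, provided the replica is synchronized).
Switch-SqlAvailabilityGroup -Path "SQLSERVER:\SQL\SQLCLUSTER2\DEFAULT\AvailabilityGroups\MyAG"

# The equivalent T-SQL, executed on the secondary replica:
#   ALTER AVAILABILITY GROUP [MyAG] FAILOVER;
```

Wrapping something like this in a CAU pre-update script would move the primary off a node before it enters maintenance mode, rather than leaving the AG to sort itself out mid-patch.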

**Update**–Kendal Van Dyke (b|t) messaged me and proposed that changing the failover and failback settings for the cluster (the number of failures that are allowed in a given time period) could resolve the issue. Unfortunately, I saw the same behavior described above.

SAN Basics for DBAs–Central Pennsylvania Users Group

I will be presenting tonight at the Central Pennsylvania Users Group on SAN Basics for DBAs (and other data pros!). I’ve given this presentation many times, but it’s been updated to reflect new information about automated storage tiering and what it means for the database.

The slides are available here, and I will update this post with any additional resources.

Here is a link to SQLIO, and a best practices document.

Lastly, here is an EMC white paper on the automated storage tiering I mentioned last night.

PASS Summit 2012 Day 1 Keynote

Live here this morning from the PASS Summit 2012 keynote, we are starting with Bill Graziano, PASS president. We have 3894 attendees this year.

57 countries are represented at this year’s Summit. PASS TV is new this year.

The PASS website has been vastly improved to make it easier to find events local to users. There is now a French language SQL Server virtual chapter, which makes me happy. Vive la France!

PASS’s international growth has been fantastic. PASS is launching a new BI conference in Chicago in May. Everyone is focusing more on big data.

The SQL CAT team is here, and Microsoft has hands-on labs this year. Additionally, discounted certification exams are available all week.

Ted Kummert of Microsoft takes the stage. He’s the VP for SQL at Microsoft. SP1 for SQL Server 2012 is live today, which provides a lot of integration functionality with the recently released Office 2013.

The focus of today’s session is clearly big data and business analytics. Talking about patient records and building decision support systems, leveraging the volume of data to make better decisions. Beginning to talk about xVelocity and in-memory systems for analytics.

An in-memory transactional database engine will be in the next major release of SQL Server, which means I hopefully never have to hear about SAP HANA again. We’re seeing a rough version of SSMS, but this is very pre-release. Showing a 10x performance improvement with this product, which is code-named Hekaton. Talking about columnstore indexes. Demoing columnstore alongside Hekaton; columnstore indexes will become updatable and able to be clustered.

There is a video showing Bwin, an online gaming site and one of the biggest users of SQL Server. They have an incredible amount of throughput. Windows Azure cache is going to leverage the Hekaton technology.

Now we shift back to the Hadoop model, in both Windows and Azure. I haven’t seen a lot of buy-in for Hadoop on Windows, because most of the folks I know who use it are really into Linux.

The latest version of Parallel Data Warehouse uses Windows Server 2012 Storage Spaces in an effort to reduce storage costs. The new PDW is really fast; a count(*) on a 6 PB table ran in a matter of seconds. Showing impressive compression values, generally between 5x and 15x, which will save a lot of disk costs.

Now talking about changing the query processor for more modern processing. Introducing PolyBase, which ties non-relational data to relational. PDW continues to be mentioned, so I wonder if this is PDW-only. Also, external tables are finally supported. This is all nice stuff, but I wonder if it will be supported outside of PDW.

Self-service BI continues to be a big push, and it ties nicely to Office 2013; Microsoft worked with the Excel team to fully integrate BI work into Excel. Self-service BI for all.
