SQL Rally — Vote for Me!!

For those of you who don’t know, PASS has a new event this year, in addition to the annual PASS Summit in Seattle. SQL Rally will be May 11-13 in Orlando, Florida, and will be a lower-cost, more community-focused event. In addition to the lower cost, sessions will be voted on by the PASS community. You can register to vote here.

I’ve been presenting on SQL Server for a couple of years now, and I’m heavily involved with the Philadelphia SQL Server Users Group, as well as our SQL Saturday coming up on March 5.

So, that brings me here–to shill for your vote. The session I’m presenting is entitled Deploying Data Tier Applications to the Cloud. This will be similar to sessions I presented at Philly .net Code Camp and the SQL Saturdays in New York and Washington, but with a twist: it will be a start-to-finish demo of building a database application using Data Tier Applications and deploying it to SQL Azure.

My session falls under Azure within the Database and Application Development track at SQL Rally.

Mark Kromer of Microsoft and I have submitted a similar presentation to TechEd, but this one is unique to SQL Rally. I’d love your vote so I can share it with the community.

Adding a new disk to a SQL Server Cluster Instance

In order to do anything involving a SQL Server clustered instance (restore, back up, store or read data), storage must be accessible to the clustered instance. Here we will walk through the process of adding a new LUN to a SQL Server cluster.

First off, have your storage admin add the disk and make it accessible to all of the nodes of the cluster. Every SAN is a little different, so I’m not going to cover that process here.

The next step is a little painful: you need to go to each of your cluster nodes and do a rescan of storage. To do this, go to Server Manager and right-click Disk Management > Rescan Disks.
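If you have more than a node or two, you can script the rescan instead of clicking through each box. Here’s a rough sketch, assuming Windows Server 2012 or later (where the Storage module’s Update-HostStorageCache cmdlet is available), PowerShell remoting enabled, and placeholder node names; on 2008 R2 you’re stuck with the GUI or a diskpart rescan.

    # Node names are placeholders; requires PowerShell remoting and Windows Server 2012+
    $nodes = "SQLCLNODE1", "SQLCLNODE2"
    Invoke-Command -ComputerName $nodes -ScriptBlock {
        Update-HostStorageCache   # same effect as Rescan Disks in Disk Management
    }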

Next, go to the node where the cluster service you would like to add the disk to is currently running (you can find it by connecting to the virtual cluster name in Terminal Services) and bring the disk online from Disk Management.
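If you’d rather not guess at the owning node, the FailoverClusters PowerShell module (available on 2008 R2 and up) will tell you, and on Windows Server 2012 or later the Storage module can bring the disk online without touching Disk Management. The cluster name and disk number below are placeholders:

    Import-Module FailoverClusters
    # Which node currently owns each group?
    Get-ClusterGroup -Cluster "SQLCLUSTER01" | Select-Object Name, OwnerNode, State

    # On the owning node, bring the new disk online (Windows Server 2012+)
    Set-Disk -Number 4 -IsOffline $false
    Set-Disk -Number 4 -IsReadOnly $false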

After the disk is brought online, it must be initialized. This is done from the same Disk Management console. Configure it with either an MBR or a GUID (GPT) partition table; GPT is preferable for very large LUNs.
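This step can also be scripted on Windows Server 2012 or later. Double-check the disk number against Get-Disk first, since initializing the wrong disk is not a mistake you get to undo; disk 4 is just a placeholder here.

    Get-Disk | Format-Table Number, FriendlyName, Size, PartitionStyle
    Initialize-Disk -Number 4 -PartitionStyle GPT   # GPT handles LUNs over 2 TB; use MBR only if you have to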

Next, create the volume. To do this, right-click in the disk area and select “New Simple Volume”.

To mount this volume under an existing drive letter (which I like to do for clustering), select “Mount in the following empty NTFS folder”.

Create a new folder under the drive letter associated with your cluster resource group. I typically name the folder $RESOURCE_GROUP_$PURPOSE, where the resource group would be something like RG1 and the purpose would be backup, data, or logs.

Lastly, you need to label the volume and format it. Once again, for clarity, I usually use $RESOURCE_GROUP_Identifier_$PURPOSE_$LUN# as the label.
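For reference, here’s what those last few steps (new volume, mount point folder, format, and label) look like in PowerShell on Windows Server 2012 or later. The disk number, G: mount root, and label are placeholders, and the 64 KB allocation unit size is just the common recommendation for SQL Server data volumes:

    # Folder the new volume will be mounted under (on the resource group's existing drive letter)
    New-Item -ItemType Directory -Path "G:\RG1_Data" | Out-Null

    # Create the partition (no drive letter), format it, and mount it in the folder
    $part = New-Partition -DiskNumber 4 -UseMaximumSize
    $part | Format-Volume -FileSystem NTFS -NewFileSystemLabel "RG1_Data_LUN4" `
        -AllocationUnitSize 65536 -Confirm:$false
    $part | Add-PartitionAccessPath -AccessPath "G:\RG1_Data"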

Next, launch Failover Cluster Manager (this can run either on your server or on your desktop). Select “Manage a Cluster”, and then enter the name of the cluster you are managing (if it doesn’t auto-populate).
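The PowerShell equivalent of connecting Failover Cluster Manager to the cluster looks something like this (the cluster name is a placeholder):

    Import-Module FailoverClusters
    Get-Cluster -Name "SQLCLUSTER01"          # connect to the cluster by name
    Get-ClusterNode -Cluster "SQLCLUSTER01"   # quick sanity check that all nodes are up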

Right-click the Storage node and select “Add a disk”. If the disk is presented and configured properly, the cluster should see it and add it to available storage.
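From PowerShell, any disk that is visible to all nodes and not yet clustered shows up in Get-ClusterAvailableDisk, and piping it to Add-ClusterDisk does the same thing as the GUI (the cluster name is a placeholder):

    Get-ClusterAvailableDisk -Cluster "SQLCLUSTER01"                    # see what the cluster can add
    Get-ClusterAvailableDisk -Cluster "SQLCLUSTER01" | Add-ClusterDisk  # add it to available storage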

Expand the tree in the left-hand pane and select “Services and Applications”.

Right-click on the instance you would like to add the storage to and select “Add Storage”. A dialog box will pop up with the available disks to add.

This is where you select the appropriate disk and add it to the instance. You should be able to identify it by its disk number and volume information.
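If you prefer to script it, this step corresponds roughly to moving the disk resource into the instance’s group and then adding it as a dependency of the SQL Server resource, so the instance will let you put files on it. The resource, group, and cluster names below are placeholders; check Get-ClusterResource for the real ones.

    Import-Module FailoverClusters
    # Move the new disk resource into the SQL Server instance's group
    Move-ClusterResource -Cluster "SQLCLUSTER01" -Name "Cluster Disk 5" -Group "SQL Server (INST1)"
    # Make the SQL Server resource depend on the new disk
    Add-ClusterResourceDependency -Cluster "SQLCLUSTER01" -Resource "SQL Server (INST1)" -Provider "Cluster Disk 5"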

Finally, go into SSMS and, using the Attach Database dialog (or any dialog that interacts with the file system: backup, restore, etc.), verify that SQL Server can see the disk.
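A quick smoke test I like is to back up a small database to the new mount point. This sketch uses Invoke-Sqlcmd, which needs the SQL Server PowerShell snap-in or module installed; the instance name and path are placeholders.

    # Back up model to the new volume to prove SQL Server can write to it
    Invoke-Sqlcmd -ServerInstance "SQLCLUSTER01\INST1" `
        -Query "BACKUP DATABASE model TO DISK = N'G:\RG1_Backup\model_disk_check.bak' WITH INIT;"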

Server Virtualization–the bottom end of the spectrum

Brent Ozar posted an excellent blog yesterday on the upper limits of server virtualization. In his post, he discussed the constraints you run into at the top end when using VMware for database servers. I mentioned to Brent on Twitter that I had just been talking about the other side of this: what’s too small to virtualize?

I was in a meeting yesterday discussing a recent acquisition for our company and one of our remote manufacturing sites, and the costs involved in converting them to a virtual infrastructure. Each of these sites currently has around 10 physical servers and no shared storage platform. The leading management argument was that it’s cheaper to replace the servers on a regular cycle than to make the investment in a virtual infrastructure.

The hardware costs for the project are as low as about $50-60k, using HP 360s and MSA iSCSI storage; that’s 3 servers and 3-4 TB of storage. The real killer is the VMware licensing: we’re looking at close to $40k a host, which brings the total cost of the project to well over $100k. We’re in an odd spot, because we’re a large company supporting smaller sites that need some enterprise management features. A smaller shop could get away with VMware Essentials Plus, which is a much more affordable $3k a server (all prices are non-discounted).

However, that still brings the total cost of the project to about $70k, which would replace all of the standalone servers on site at least once.

This obviously doesn’t account for reduced management and backup costs, nor does it account for the higher availability of the VMware environment. High availability can still be a hard sell in small shops, believe it or not. But that’s where the value in virtualizing their hardware lies: outstanding uptime and ease of management, at a somewhat higher cost.

I’m a big fan of virtualization, but sometimes it can be a hard sell to the pointy-haired boss.