Remotely Logging Perfmon Files into SQL Server—Security Issues?

This post comes with a strongly worded warning: don't do what I'm about to do here to anything that resembles a production environment. Hopefully some smart person will come along and point out the obvious fix, but for now the only way to make this work is to do something really bad from a security perspective. So if you want to do this, create a test instance, grant minimal rights, and then revoke them after you are done with your data loads.

Last week, our VM team asked if I could get the I/O operations per second (IOPS) for a large number of representative physical servers that are VM candidates. In this case I took a SWAG and picked all of our physical servers that averaged less than 20% CPU utilization and had less than 800 GB of total data. To get the IOPS numbers I needed to run Perfmon (or go through the SAN team, but Perfmon seemed easier). Using Logman from PowerShell, I was able to start log collections on a large number of hosts. Below is the code I used; much of it I borrowed from HappySysAdmin.
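Here's a simplified sketch of it; the server list path, collector name, sample interval, and output folder are placeholders rather than my real values, so adjust the logman options to taste:

```powershell
# Read the list of target hosts (placeholder path)
$servers = Get-Content "C:\temp\servers.txt"

# The physical disk counters I wanted for the IOPS exercise
$counters = "\PhysicalDisk(*)\Disk Reads/sec",
            "\PhysicalDisk(*)\Disk Writes/sec"

foreach ($server in $servers) {
    # logman's -s switch targets a remote computer, so this creates the
    # collector on each host and writes the .blg locally on that host
    logman create counter IOPS_Baseline -s $server -c $counters -si 00:00:15 -o "C:\PerfLogs\IOPS_Baseline"

    # Kick off the collection
    logman start IOPS_Baseline -s $server
}
```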

In case you are wondering, the only two Perfmon counters I was really interested in were "\PhysicalDisk(*)\Disk Reads/sec" and "\PhysicalDisk(*)\Disk Writes/sec". I wasn't trying to tune anything; I just wanted the raw data.

So now that I had all of this data collected, I brought it back to my tools server, where my plan was to use Relog to load it into my SQL Server. This was a flawed plan: it turns out different versions of Windows, Perfmon, and Relog don't play nicely together, and I ran into some issues. So rather than waste a bunch of time troubleshooting, I decided to go back to my old friend PowerShell and its Invoke-Command cmdlet. For those of you who aren't familiar, Invoke-Command runs a script block on the remote host, in that host's context (rather than on the machine launching the script). So in this case, the version of Relog would always match the version of Perfmon that created the *.blg file. Grant Holliday has a great blog post here on how to load *.blg files into SQL Server. I took his approach and wrapped it in some PowerShell.

The Relog-to-SQL process does rely on an ODBC connection to the target SQL Server, so as part of my code I would create that DSN and then clean it up afterwards. The process for this is much better in Windows Server 2012/PowerShell 3.0, but since I'm not on those platforms, I needed to call command-line tools like odbcconf.exe. One thing I noticed immediately was that even though I tried to pass SQL credentials to the ODBC DSN, it would resolve using Windows credentials. I wasn't overly concerned; Invoke-Command runs with the credentials of the executing user, in this case my account. This code is below.
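Stripped down, it looks something like this; the DSN name, target server and database, and file paths are placeholders, and the odbcconf.exe attribute string may need tweaking for your driver version:

```powershell
# Hosts that still have their .blg files sitting in C:\PerfLogs (placeholder path)
$servers = Get-Content "C:\temp\servers.txt"

foreach ($server in $servers) {
    Invoke-Command -ComputerName $server -ScriptBlock {
        # Sanity check: capture which account this is actually running as
        whoami > C:\temp\test.out

        # Create a System DSN pointing at the tools SQL Server (placeholder names)
        odbcconf.exe CONFIGSYSDSN "SQL Server" "DSN=PerfmonLoad|SERVER=MyToolsSQL|Trusted_Connection=Yes|Database=PerfmonData"

        # Relog each .blg into SQL Server; the SQL:<DSN>!<log set> syntax is
        # from Grant Holliday's post
        Get-ChildItem "C:\PerfLogs\*.blg" | ForEach-Object {
            relog $_.FullName -f SQL -o "SQL:PerfmonLoad!$env:COMPUTERNAME"
        }

        # Clean up: system DSNs live under this registry key
        Remove-Item "HKLM:\SOFTWARE\ODBC\ODBC.INI\PerfmonLoad" -ErrorAction SilentlyContinue
        Remove-ItemProperty "HKLM:\SOFTWARE\ODBC\ODBC.INI\ODBC Data Sources" -Name "PerfmonLoad" -ErrorAction SilentlyContinue
    }
}
```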

So I ran that, and it promptly failed with an error.

So, like any good admin, I went to the log.

That's interesting: Relog tries to connect to SQL Server as the Anonymous Logon. If you look carefully at the above PoSH, you'll note that I run a "whoami > test.out", which captures the credentials of the current user, and it shows up as my account. Does anyone have any idea why Relog tries to connect as Anonymous, or know of a workaround?

In my case, I had an instance with no secure data, so I created a login for the anonymous account, gave it just the rights it needed, and then deleted it after I was done. Only do this short term, and only somewhere that holds no sensitive data.
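For reference, the temporary (and ugly) part boiled down to something like this, assuming the login showing up in the error is NT AUTHORITY\ANONYMOUS LOGON. The database name is a placeholder, and db_datareader/db_datawriter/db_ddladmin were roughly the minimum Relog needed to create and populate its tables:

```powershell
Import-Module sqlps -DisableNameChecking   # SQL Server 2012 module; use the snap-in on older versions

$instance = "MyToolsSQL"    # placeholder

# Let the anonymous logon in, scoped to the scratch perfmon database only
Invoke-Sqlcmd -ServerInstance $instance -Query @"
CREATE LOGIN [NT AUTHORITY\ANONYMOUS LOGON] FROM WINDOWS;
USE PerfmonData;
CREATE USER [NT AUTHORITY\ANONYMOUS LOGON] FOR LOGIN [NT AUTHORITY\ANONYMOUS LOGON];
EXEC sp_addrolemember 'db_datareader', 'NT AUTHORITY\ANONYMOUS LOGON';
EXEC sp_addrolemember 'db_datawriter', 'NT AUTHORITY\ANONYMOUS LOGON';
EXEC sp_addrolemember 'db_ddladmin',   'NT AUTHORITY\ANONYMOUS LOGON';
"@

# ...run the Relog loads...

# Then take it right back out
Invoke-Sqlcmd -ServerInstance $instance -Query @"
USE PerfmonData;
DROP USER [NT AUTHORITY\ANONYMOUS LOGON];
DROP LOGIN [NT AUTHORITY\ANONYMOUS LOGON];
"@
```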

Anyway, I thought I had developed something that could be pretty useful, but was hampered by this security piece. Any thoughts in the comments are greatly appreciated.

Upcoming Presentations

Just wanted to use this space to plug a couple of presentations I’ll be doing this week.

SQL 2012 All About HA and DR

Central Pennsylvania SQL Server Users Group

Tuesday February 12

In this session I’ll be talking about all of the HA and DR options that are available in SQL Server 2012. We’ll cover the pros and cons of each choice, and talk about some external solutions, such as SAN replication and virtualization.

New Features in Windows Server 2012 Failover Clustering

Philadelphia SQL Server Users Group

Wednesday February 13

Windows Server 2012 is here, and Failover Clustering has some nice improvements. I’ll talk about some of the features that are most useful to DBAs.

Lastly, I will be presenting at the PASS Business Analytics conference in Chicago in April—more to come on that later this week.

The SQL Virtualization Tax?

I've been working in virtual server environments for a long time, and I'm a big proponent of virtualization. It's a great way to reduce hardware costs and power consumption, and frankly, for smaller shops it's also an easy foray into high availability. The main drivers of that high availability are technologies like VMware's vMotion and Microsoft's Hyper-V Live Migration: if a physical server in a virtualization farm fails or needs maintenance, the underlying virtual servers get moved to other hardware with little to no downtime. This is awesome, and it's one of the best features of a virtual environment. What I don't like is when software vendors feel they are getting the raw end of the deal with virtualization and develop asinine licensing policies around it.

Oracle is my favorite whipping boy in this discussion. Oracle is most typically licensed by the CPU core, and in my opinion, a CPU core should mean a core that the operating system can address. Oracle agrees with me, but only in the case of hard partitions (mostly the old, expensive UNIX hardware that they happen to sell). Basically, if I have a cluster of 64 physical nodes and one virtual machine with one virtual CPU, Oracle expects me to license EVERY CORE in that cluster. One way around this is to physically lock your virtual machine down to a given hardware pool and then license all of those cores (a smaller number, of course). The other option is to dedicate a bunch of hardware to Oracle and virtualize on top of it; while this works, it definitely takes away a lot of the flexibility of virtualization and is a non-starter for many larger IT organizations.

Microsoft, on the other hand, has generally been pretty fair in its virtualization licensing policies. An Enterprise license for Windows Server bought you four VM licenses, and SQL Server (before 2008 R2) had some very favorable VM licensing. However, starting with SQL Server 2012, things got a bit murkier: for Enterprise Edition, you have to buy a minimum of four core licenses per VM, even if the VM only has one or two virtual CPUs. You don't, however, have to license every core in the VM farm. One thing that caught my eye with the SQL Server 2012 licensing is that if you license all of the physical cores in a VM farm, you can run an unlimited number of VMs running SQL Server, but only if you purchase Software Assurance. Software Assurance costs 29% of the license cost and is a recurring annual cost. In the past, Software Assurance was generally only about the right to upgrade the version of your software (e.g., if you had SA, you could upgrade from SQL 2008 R2 to SQL 2012). This rule bothered me, but it didn't really affect me, so I ignored it.

I was talking to Tim Radney (b|t) yesterday, and he mentioned that in order to do vMotion/Live Migration (key features of virtualization) with SQL Server, Software Assurance is required. I hadn't heard this before, but sure enough, it is spelled out in this licensing document from Microsoft.

So, in a nutshell, if you want to run SQL Server in a virtual environment and take advantage of the features you paid for, you have to pay Microsoft an additional 29% per SQL Server license. I think this stinks; please share your thoughts in the comments.

AlwaysOn Availability Groups and SharePoint 2013

We've been in the process of building a team site in SharePoint 2013, and since I'm the one building it, I decided to investigate using AlwaysOn Availability Groups. This is slightly different from building a normal availability group, because the application doesn't use a listener.

First of all, you want to read the official Microsoft guidance here. It can be a little confusing, so here are the basics of what you need to do.

  1. Create a temp database. You need to create the Availability Group before the SharePoint install is kicked off, so you want a database to include in your AG.
  2. Build your AG and add your temp database to it. I'm not going to go into a great deal of detail on this, but if you need help, check parts 1, 2, and 3 of my series on building an AlwaysOn Availability Group (there's also a rough PowerShell sketch of this step right after the list).
  3. After this is complete—go to Failover Cluster Manager and add a Client Access Point resource to the role for your Availability Group.
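If you'd rather script step 2 than click through Management Studio, the SQL Server 2012 sqlps cmdlets can do it. This is only a rough sketch: the instance names, AG name, and backup share are placeholders, default instances are assumed, and the HADR endpoints and permissions are assumed to already be in place (that part is covered in the series linked above).

```powershell
Import-Module sqlps -DisableNameChecking

$primary   = "SQL01"             # placeholder instance names
$secondary = "SQL02"
$agName    = "AG_SharePoint"     # placeholder AG name
$seedDb    = "AGSeed"            # the temp database from step 1

# The database needs a full backup before it can be put in an AG
Invoke-Sqlcmd -ServerInstance $primary -Query "CREATE DATABASE [$seedDb];"
Backup-SqlDatabase -ServerInstance $primary -Database $seedDb -BackupFile "\\fileserver\backups\$seedDb.bak"

# Describe both replicas
$replica1 = New-SqlAvailabilityReplica -Name $primary -EndpointUrl "TCP://$primary.contoso.local:5022" `
    -AvailabilityMode SynchronousCommit -FailoverMode Automatic -AsTemplate -Version 11
$replica2 = New-SqlAvailabilityReplica -Name $secondary -EndpointUrl "TCP://$secondary.contoso.local:5022" `
    -AvailabilityMode SynchronousCommit -FailoverMode Automatic -AsTemplate -Version 11

# Create the AG on the primary and join the secondary replica to it
New-SqlAvailabilityGroup -Name $agName -Path "SQLSERVER:\SQL\$primary\DEFAULT" `
    -AvailabilityReplica @($replica1, $replica2) -Database $seedDb
Join-SqlAvailabilityGroup -Path "SQLSERVER:\SQL\$secondary\DEFAULT" -Name $agName

# Restore the seed database on the secondary and join it to the AG there, using
# the same backup/restore/add pattern shown for the SharePoint databases below.
```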

When you create the client access point, you will need to assign an IP address to it. Additionally, you will need an entry in DNS so that your SharePoint server can talk to the access point. You will see it under "Other Resources" in your cluster role in Failover Cluster Manager. Unlike a listener, this does not require an Active Directory Virtual Computer Object (VCO) to be created.
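If you want to script the client access point instead of clicking through Failover Cluster Manager, the FailoverClusters module can create the same pair of resources. Again, just a sketch: the role name, IP address, subnet, cluster network, and network name below are placeholders for my environment.

```powershell
Import-Module FailoverClusters

$role    = "AG_SharePoint"   # the cluster role (group) that owns the AG -- placeholder
$capName = "SPSQL"           # the name SharePoint will connect to -- placeholder
$capIP   = "192.168.10.50"   # placeholder

# IP Address resource for the client access point
Add-ClusterResource -Name "$capName IP" -ResourceType "IP Address" -Group $role
Get-ClusterResource "$capName IP" |
    Set-ClusterParameter -Multiple @{ Address = $capIP; SubnetMask = "255.255.255.0"; Network = "Cluster Network 1" }

# Network Name resource that rides on that IP
Add-ClusterResource -Name $capName -ResourceType "Network Name" -Group $role
Get-ClusterResource $capName | Set-ClusterParameter -Name DnsName -Value $capName

# The name depends on the IP; bring it online once both exist
Add-ClusterResourceDependency -Resource $capName -Provider "$capName IP"
Start-ClusterResource $capName
```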


When you install SharePoint and get to the option to specify your database settings, use the client access point as the database server.

After the SharePoint installation is complete, add your SharePoint databases to your availability group and synchronize them to your secondary. Once that's done, failovers happen seamlessly.
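The per-database pattern looks roughly like this (sqlps again, placeholder names, default instances assumed; depending on timing you may also need an extra log backup/restore right before the join):

```powershell
Import-Module sqlps -DisableNameChecking

$primary   = "SQL01"                  # placeholders
$secondary = "SQL02"
$agName    = "AG_SharePoint"
$backupDir = "\\fileserver\backups"

# Whatever databases SharePoint created that aren't in the AG yet (placeholder names)
$dbs = "SharePoint_Config", "SharePoint_AdminContent", "WSS_Content_TeamSite"

foreach ($db in $dbs) {
    # Full and log backups on the primary
    Backup-SqlDatabase -ServerInstance $primary -Database $db -BackupFile "$backupDir\$db.bak"
    Backup-SqlDatabase -ServerInstance $primary -Database $db -BackupFile "$backupDir\$db.trn" -BackupAction Log

    # Restore on the secondary, leaving the database in a restoring state
    Restore-SqlDatabase -ServerInstance $secondary -Database $db -BackupFile "$backupDir\$db.bak" -NoRecovery
    Restore-SqlDatabase -ServerInstance $secondary -Database $db -BackupFile "$backupDir\$db.trn" -RestoreAction Log -NoRecovery

    # Add the database on the primary, then join it on the secondary
    Add-SqlAvailabilityDatabase -Path "SQLSERVER:\SQL\$primary\DEFAULT\AvailabilityGroups\$agName" -Database $db
    Add-SqlAvailabilityDatabase -Path "SQLSERVER:\SQL\$secondary\DEFAULT\AvailabilityGroups\$agName" -Database $db
}
```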

I do have one final note of concern: SharePoint creates a lot of databases without notifying anyone (different applications within SharePoint create new DBs for themselves). I'm still working on a good strategy to automatically add new SharePoint databases to my Availability Group. I wouldn't do this for any other application, and I feel guilty about doing it here. My thinking is leaning towards a server-level trigger that fires on database creation and launches a stored procedure to take a full backup and add the new DB to the AG. That feels really ugly to me; does anyone have ideas for a cleaner solution?
