PASS Summit 2014 Speaker Idol—A Judge’s Perspective

One of the most interesting events I got to take part in during last week's PASS Summit was serving on the panel of judges for Speaker Idol. Denny Cherry (b|w) borrowed the idea from TechEd and sold it to PASS—basically, speakers who had never presented at PASS Summit before got the opportunity to give a five minute talk on a topic of their choosing. The top three speakers plus one wildcard advanced to the final round on Friday, with the winner receiving a guaranteed slot to speak at the 2015 PASS Summit. Going into this as a judge, I wasn't quite sure what to expect, but we saw some really amazing stuff.

As IT consultants and professionals, public speaking is only a part of our full-time jobs, but it's something worth working on as part of overall career development. As a speaker, you will have good days and bad days (sometimes on the same day—this happened to me last Wednesday at Summit), but you learn to roll with the punches and recover when things go badly. The five minute lightning talk format of Speaker Idol magnifies this—any mistakes tend to be glaring.

Keep Your Legs Together

As a regular speaker and a judge, I think you immediately tend to home in on the mistakes you find yourself making. In my case, almost every time I speak, within the first 30 or so seconds I find myself either moving just a little bit or rocking back and forth. When I took a public speaking class during my MBA program, it was hammered into me to keep my feet together, which gives your body a more stable platform and prevents rocking. Movement in general isn't bad—but you want to make movements that emphasize your points or engage the audience, not small movements that distract.

Jazz Hands

Having been raised in an Italian family, I'm totally guilty of over-gesticulating when I speak. Your hands are powerful and a fantastic way of emphasizing a key point, but they need to be controlled. Also note that what feels like a very large hand movement to you can look small from the stage to a large audience. Big hand movements emphasize your point—small ones are just distracting.

Why the Winners Won

Our top two speakers (and it was really close), Pieter Vanhove and Rob Volk, were both really amazing. Both of them took a pretty big risk by leaving the stage to engage the audience. This can be a very powerful move, or it can flop—they both nailed it. Additionally, they were both spot on with timing, which, while important in a regular talk, is critical in a lightning talk. Pieter in particular had extremely beautiful slides—while using some of the conference template, he didn't let his slides be constrained by it, adding in some excellent images. Both talks also managed to compress a lot of information into five minutes—I think most of the judges' panel did not know that SSMS had the ability to do regular expression replacement. Pieter gave a great talk on the benefits of using SQL Server Central Management Server to manage a large environment (I think that's a very underutilized feature).

 

Denny’s post is here.

Karen's post is here.

Are There Data Egress Charges for Using Power BI Against AzureDB?

At PASS Summit last week (and if you weren't there, you missed an amazing conference as usual), Stacia Misner (b|t), with whom I co-authored the white paper "Using Power BI in a Hybrid Environment," passed along a question she was asked in her session: if a user is using Microsoft Azure SQL Database as a data source in Power BI, does that data source incur data egress charges? A couple of background notes here—generally speaking, in most cloud computing scenarios data ingress (that is, loading your data) is free, whereas data egress (retrieving your data into, say, a reporting solution) costs money (pricing details are here, and it's not terribly expensive). Another generality in most cloud scenarios is that data transfer within the same data center does not count against your data egress charges.

Another thing to note is that Office 365/Power BI and Microsoft Azure are two completely different services—I tried to be as clear about this as I could in the white paper. It does strike me as odd that two services, quite possibly living in the same data center, are not aware of each other, but that does again seem to be the case. So let's walk through the scenario.

Power BI and Microsoft Azure SQL DB

Data refresh in Power BI directly supports only two data sources, SQL Server and Oracle (either on-premises or running in Azure). If you want to connect to Azure SQL DB (or a vast array of other data sources), the method to use is to bring the data into your Excel data model using Power Query, and then pass the connection string into the data sources within Power BI. (Note—for full details on this, read the above white paper.) To test this, I built a very simple Azure SQL DB using the Adventure Works (AW) 2012 database for Azure (currently, Azure SQL DB is not fully feature-compatible with boxed SQL Server, so a special edition of AW is needed). I then generated a data model against two tables—just enough that data refresh would work normally.

Figure 1 Workbook in Power BI

In case you were wondering why I called the workbook "Bingo": I was having some issues related to the build of Power Query I was on, which may have caused a little bit of frustration (or maybe that was just Drew Brees' performance in Sunday's Saints game). Bingo was the workbook that finally worked correctly. If I go to the data refresh screen, I can execute "refresh report now":

 

Figure 2 Data Refresh Screen

From here you can execute a refresh on demand. There is no concept of an incremental refresh—it appears that all of the data is reloaded whenever a refresh happens. So how do we see whether this incurs data egress?

Azure DB and DMVs

There is a DMV in Azure SQL DB named sys.bandwidth_usage that tracks bandwidth usage for a given database by the hour—an entry is made for each hour in which there is data transfer against a given database. I haven't figured out an easy way to show the space used by a given table in Azure SQL DB, but from looking at my on-premises copy of AW, I can see about 12 MB of data in the tables I am using in this workbook.
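If you want to do that on-premises size check yourself, sp_spaceused reports size per table; here is a quick sketch (which tables actually feed the workbook is my assumption, so adjust the names):

-- Rough per-table size check against an on-premises AdventureWorks2012 copy.
-- The table names are assumptions about which tables feed the workbook.
USE AdventureWorks2012;
GO
EXEC sp_spaceused N'Sales.SalesOrderHeader';
EXEC sp_spaceused N'Sales.SalesOrderDetail';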

So let’s check the DMV.
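A query along these lines returns the hourly usage; the column list follows the documented shape of sys.bandwidth_usage (quantity is reported in KB), so treat it as a sketch rather than gospel:

-- Run while connected to the master database on the Azure SQL Database server.
-- Column names follow the sys.bandwidth_usage documentation; quantity is in KB.
SELECT [time],          -- hourly bucket
       database_name,
       direction,       -- 'Egress' or 'Ingress'
       class,           -- internal vs. external transfer
       time_period,
       quantity         -- bandwidth used, in KB
FROM sys.bandwidth_usage
WHERE direction = 'Egress'
ORDER BY [time] DESC;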

Figure 3 Query Results

As you can see in the results, refreshing my workbook seems to "cost" about 17 MB. From what I can tell, there is no differentiation between traffic headed to another Azure service and traffic headed on-premises (some of that usage, particularly against master, was from an on-premises SSMS session). I was hoping to provide a little more detail, but DMVs and netstat let me down—any help in the comments would be greatly appreciated.

Conclusion

In the grand scheme of things this isn't a big deal (you get 5 GB of egress free each month), but it is something to think about when designing BI solutions in the cloud. A couple of things that would be great from a Power BI perspective: the ability to incrementally refresh data, and some sort of Azure direct connection that doesn't incur data charges.

 

PASS Summit Surprise—Career Management Session

As a last-minute replacement, Karen Lopez (twitter) and I are presenting a session today at 3 PM at the PASS Summit, in rooms 615-617. This is a really fun session we've given at a number of SQL Saturdays and other events, on managing your career.

 

You Wouldn't Let HR Manage Your DBs…

…So don’t let them manage your career.

Do you know that you may have left tens of thousands of dollars on the table during your last negotiation? Do you know that you can ask for more than money when negotiating salary? Are you taking vacation just to be here at the PASS Summit?

In this session we will share our experiences working in a range of organizations, from very formal giant corporate HR departments to government agencies and small tech startups. You will learn how your HR organization works, what salary levels and midpoints are, negotiation strategies, when to say "no," and how to say "yes."

 

We’ll talk about negotiation, career paths, and making the right career choices for you—come join us at 3 PM.

PASS Summit 2014—My Sessions

I was honored again this year to be able to speak at the PASS Summit, and even more so with the fact that I got a three hour session as well as a regular session. So what am I going to be talking about?

Building Perfect SQL Servers, Every Time (Wed 11/5 10:15 PST Room 6A)

If you are reading this blog and you enjoy my posts on the esoteric internals of columnstore indexes or availability groups, this probably isn't the session for you. However, if you are new to SQL Server, or are just looking for ways to improve and automate your installation process, this is the session for you. In this session you will learn what you need to change after you install SQL Server (and why), how to do scripted installs for automation and efficiency, and the lessons I learned building a private cloud environment when I worked for big cable. As a consultant, I write a lot of health checks about things that are wrong with clients' configurations—come to this session and your health check will read "This company clearly knows SQL Server and has a well-qualified DBA."

SQL Server DR in Microsoft Azure—Building Your Second Data Center (Wed 11/5 15:00 PST Room 6A)

So Wednesday is going to be a busy day for me. Last year at Summit, I spoke about building out a hybrid AlwaysOn Availability Groups model—building out my demo environment was one of the most challenging things I've ever done in IT. I'm happy to report that Microsoft has since made a lot of things easier, and I've learned a lot more about hybrid networking.

Please don't let the Azure part of this topic scare you off—you will learn a lot about Azure infrastructure and how things like hybrid networking work, but at the same time I plan on walking through the basics and the more fundamental SQL Server DR techniques, their pluses and minuses, and which one might be right for your situation.

Even if you can’t make it to one of my sessions, I’ll be around all week and would love to chat about technology. See you in Seattle.

Building Perfect SQL Servers at the Northern New Jersey SQL Server Users Group

Slides are on SlideShare below:

 

The scripts are located here.

SQL Server Days Belgium

Belgium is one of my favorite countries in the world—it has the best beer (sorry, Germany; and America, you are trying hard, but the Belgians have history), it is the spiritual home of cyclocross (one of my favorite sports), and it has the best Formula One circuit in the world at Spa-Francorchamps (sorry again, Germany; the Nordschleife is amazing, but open-wheel cars won't race there anymore). One additional thing to love about Belgium is SQL Server Days—a great two-day event featuring SQL Server experts from around the world, such as Ola Hallengren, Denny Cherry, Buck Woody, Stacia Misner and Grant Fritchey, to name a few.

I will be delivering a preview of the three-hour session I will be doing at PASS Summit on using Microsoft Azure as your second data center. I'll be talking about all of the potential backup and disaster recovery options, and which might be right for your solution. Additionally, I will be taking part in the BI Power Hour to talk a little bit about the analytics of Belgian beer using Power BI. I hope to see you there!

 

Figure 1 Sven Nys, Koksijde, Belgium

 

Figure 2 Spa Francorchamps Circuit, Francorchamps, Belgium

 

Figure 3 Westvleteren Ale

How to Use a Local Drive for TempDB in a Multi-Instance Cluster

One of the cool new features introduced in SQL Server 2012 was official support for using a local drive for TempDB in a SQL Server Failover Cluster Instance. In prior versions this was unofficially supported (it was a bit of a hack), but it makes sense—TempDB is ephemeral (it gets recreated every time SQL Server restarts), and the increasing prevalence of PCIe SSD cards makes them a great fit for the random I/O patterns of TempDB. The only caveat is that the drive letter used for TempDB needs to exist on all nodes in the cluster (and this doesn't get checked as part of the cluster setup process, so you'll want to fail over to all nodes as part of your testing).

Recently, I had a chance to test out this setup, but in a slightly more complicated environment. In a two-node cluster with a single instance this is really straightforward: simply create a T: drive (or the drive letter of your choice) on both nodes and you are done. In the case I was working on, I had a four-node cluster with three instances of SQL Server, so I would need three TempDB volumes on each node, for a total of twelve. I could just assign each volume its own drive letter, but one of the concerns in a multi-instance cluster is running out of drive letters. The way to get around the limitation of only 25 available drive letters in a cluster (remember, Windows always needs C:\) is to use mount points.

So I applied the same approach to TempDB—I created three volumes, mounted under T:, on the local SSD on each node:

T:\Instance1\TempDB_SSD

T:\Instance2\TempDB_SSD

T:\Instance3\TempDB_SSD

After this was completed, I failed each instance over to each node in the cluster to ensure that everything worked as expected. This setup is pretty straightforward, but not well documented for this scenario. So have fun and enjoy good TempDB performance (and don't forget to create at least four data files; see the sketch below).
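For reference, pointing TempDB at the local volume is just the usual ALTER DATABASE pattern. Here is a sketch for Instance1; the logical file names and sizes are assumptions (check sys.master_files for your values), and the change only takes effect after the instance restarts:

-- Move TempDB to the local SSD volume for this instance (takes effect on restart).
-- Logical file names and sizes are assumptions; check sys.master_files for your values.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\Instance1\TempDB_SSD\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\Instance1\TempDB_SSD\templog.ldf');

-- And at least four data files in total, as noted above.
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\Instance1\TempDB_SSD\tempdb2.ndf', SIZE = 4GB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\Instance1\TempDB_SSD\tempdb3.ndf', SIZE = 4GB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = 'T:\Instance1\TempDB_SSD\tempdb4.ndf', SIZE = 4GB);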

 

Azure Virtual Machines and SQL Server—Mind Your Endpoints

I am a big advocate of using Microsoft Azure VMs for lots of purposes—many clients don't have the wherewithal to manage a full-scale data center operation, or they don't have the budget for a second data center for disaster recovery. Azure is a great fit for those scenarios, as well as for quickly spinning up dev environments for testing new releases or doing proofs of concept. In fact, I'm currently working on a PoC for a client, and we are using Azure IaaS and Power BI for Office 365.

The biggest fear most people have around cloud computing is security—they don't trust their data outside their own data center. In general, I think Microsoft (and Amazon) have far better security than most data centers I've ever set foot in, but whenever you use a cloud provider, you have to have a good understanding of the nuances of its security model. One thing to note about Microsoft Azure is that all virtual machines get their own public IP address (personally, I feel like this is a waste of a limited resource, since VMs within virtual networks generally have no need for a public-facing IP address, but that's a different blog post), and security is provided by creating endpoints (by default the SQL Server template opens PowerShell, RDP and 1433 for SQL Server). Access to these endpoints can be controlled by ACL—you can define a list of IP addresses (presumably the other machines in your network) that can talk to your VM over that endpoint. However, by default, your new VM is accessible on port 1433 to the entire internet.

I was troubleshooting connectivity from my SharePoint VM to my SQL Server VM this morning, and when I went to the SQL Server log, I found:

 

Figure 1 Log of Failed Logins

Those IP addresses aren’t on my virtual network, and they aren’t the public IPs of any of the servers in my network. Let’s use an IP lookup service to see where they are from:

 

Figure 2 IP Address #1 Nanjing, China

 

Figure 3 IP Address #2 Walnut, CA

 

As Denny Cherry (b|t) mentions in Securing SQL Server, having an SA account that is still named "sa" and enabled is a definite security risk. Since SQL Server accounts don't get locked out after failed password attempts, these hackers know half of the battle, and they are hammering my VM trying to guess the SA password. Chred1433 seems like an interesting name for a user (or a hack attempt at SQL Server), and kisadmin shows up in this list of attacks on SQL Server.

Securing Your VM

So what does this mean for you? If you have VMs in Azure (or in your own data center—these are just general security best practices):

  • Never expose port 1433 to the internet. There are some scenarios where you have to, but I always try to work around it.
  • Always disable your SA account and use domain groups for access to SQL Server (a quick T-SQL sketch follows this list)
  • When launching a SQL Server VM in Microsoft Azure either disable the endpoint on 1433 or use ACLs to limit access to specific machines
  • Use Azure Virtual Networks and Gateways to connect securely to Azure Infrastructure—when you have a virtual network, you never have to use the public IP address, and all connections can take place over secure VPN connections
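To go with the second bullet, here is a minimal T-SQL sketch of locking down the SA account and granting access through a domain group instead; the new login name and the group name are placeholders:

-- Disable and rename the built-in sa login so attackers can't guess half the credential.
ALTER LOGIN [sa] DISABLE;
ALTER LOGIN [sa] WITH NAME = [deprecated_sa];   -- placeholder name

-- Grant administrative access through a Windows domain group instead (group name is hypothetical).
CREATE LOGIN [CONTOSO\SQL Admins] FROM WINDOWS;
ALTER SERVER ROLE [sysadmin] ADD MEMBER [CONTOSO\SQL Admins];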

No one wants to have their data breached—so make sure to follow these steps!

June was a Good Month

The month of June (and into July) has been very good to me. Along with my great volunteers at PSSUS, Microsoft, and all of our wonderful sponsors, we had a great SQL Saturday event on June 6-7. Allan Hirt, Stacia Misner and I gave precons, and almost 60 hours of training was provided to our attendees.

A few days after that, I got an email that I'd been awaiting for a long time—Microsoft was naming me a SQL Server MVP. I can't begin to describe how humbled and honored I was to receive this award. I don't do the things I do in the community for recognition (I do them because I love my friends in the community and it's fun), but it is really nice to be recognized for the work I've done. There are entirely too many people to thank for their assistance and guidance with my career progression as a presenter and writer, but I thank you all for helping to get me where I am.

So that happened, and then I went to Germany for a bit of a vacation. For those of you who don't know, I'm a pretty big fan of auto racing. Germany is home to one of the largest, fastest racetracks in the world, the Nürburgring Nordschleife, and we were lucky enough to be there the week of the 24 Hours of the Nürburgring. We didn't stay for the race, but we went to a qualifying day. It is incredibly impressive to watch people drive fast cars around the plunging, twisting circuit that Formula 1 abandoned in 1976 for safety reasons; the skill and bravery of the drivers was remarkable. The engineering skills of some of the race fans were also quite good: there were many elaborate camping setups, a beer pulley, and a trash can converted into a grill/keg holder/stereo/prep table.


The week after I returned from Germany, sessions were announced for the PASS Summit. I was awarded two sessions, including a three-hour talk on hybrid disaster recovery. I'm looking forward to seeing folks in Seattle in November—my birthday is during Summit week, so let's have a beverage!

Right after that, I got news that I would be speaking at both SQL Server Days in Belgium and Live 360! in Orlando this fall. So it will be a busy quarter.

PASS Summit 2014—My Sessions and Free the Comments

This week, I had the honor of being selected to speak at the 2014 PASS Summit in Seattle, WA this coming November. As always, the program committee did an excellent job of combing through a ton of submissions. I was graced with two sessions this year—one is called "Building Your Second Datacenter Disaster Recovery in Microsoft Azure," and the other is "Building Perfect SQL Servers, Every Time" (I'm noticing a trend here about building stuff).

For the Azure session: last year I spoke at Summit about using a hybrid model for building an AlwaysOn Availability Group. At the time, the process was complex and fairly involved. Since then, Microsoft has done a lot of work (as have I; I just finished writing a white paper with Stacia Misner (b|t) on implementing Power BI in hybrid IT) to make the process easier and simpler. In this three-hour session, attendees will learn not just about availability groups, but also about other DR options like log shipping, mirroring and replication, and how to implement them in both cloud-only and hybrid models. It should be an interesting session, with lots of opportunities for my demos to fail.

My other session is about the things that need to be done after installing SQL Server, something I'm passionate about. As a consultant, I get to see a lot of SQL Servers, and they aren't always pretty. SQL Server ships with a lot of pretty bad defaults (max memory, max degree of parallelism, data file autogrowth sizes), and these can lead to poor server performance if left in place. In addition, you will learn how you can fully automate all of these best practices, so you don't have to click Next and watch the green bar go across the screen. I'll also talk about the lessons we learned building a private cloud environment at Comcast.
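To give a flavor of what fixing those defaults looks like, here is a hedged sp_configure sketch; the numbers are placeholders, since the right values depend entirely on your server's memory and core layout:

-- Example post-install configuration; the values are placeholders, not recommendations.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 57344;   -- leave headroom for the OS
EXEC sp_configure 'max degree of parallelism', 8;    -- size to your workload and NUMA layout
RECONFIGURE;

-- Give new databases a saner data file autogrowth than the default, via the model database.
ALTER DATABASE [model] MODIFY FILE (NAME = modeldev, FILEGROWTH = 256MB);
ALTER DATABASE [model] MODIFY FILE (NAME = modellog, FILEGROWTH = 128MB);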

For my final comment: there has been a lot of controversy around session and precon selection for this Summit. I had several friends on the program committee who I know put a lot of work into comments on each abstract they reviewed. As a speaker who gets rejected sometimes (and who doesn't?), being able to read those comments (even on selected sessions) is a great resource for feedback and for understanding what to change. For whatever reason, PASS has decided not to supply speakers with these comments, which I feel is a big mistake and an insult to the program committee members and speakers who put in a lot of work writing abstracts and comments. #freethecomments
