One of the challenges of being a consultant is working with a number of clients, each with different login credentials and accounts. In the early days of Azure, this was exceptionally painful, but over time the experience of using the portal with multiple identities and connecting to multiple Azure tenants has gotten much easier. However, when writing PowerShell or Azure CLI code, switching accounts and contexts is slightly more painful. Also, when you are doing automation, you may be touching a lot of resources at once, so you want to be extra careful that you are in the right subscription and tenant.
Enter cloud shell.
If you click on the highlighted icon from the Azure Portal, you will launch cloud shell. This requires you to have an Azure Storage account, which will consume a small amount of resources (€$£)–don’t sweat this–it’s literally going to cost pennies per month, unless you decide to upload terabytes of images to your cloud shell (don’t do this). The storage is there so you can maintain a history of commands and even store script files there.
With cloud shell you are automatically logged into the tenant associated with your login–you will still need to select the subscription. As shown below, you can see the subscriptions available to your login.
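If you would rather script this than click through the portal, a minimal sketch with the Az module looks like this (the subscription name is a placeholder):

```powershell
# List the subscriptions available to your login.
Get-AzSubscription | Select-Object Name, Id, TenantId

# Pin the session to the right subscription before touching anything.
# 'Client-Production' is a hypothetical subscription name.
Set-AzContext -Subscription 'Client-Production'

# Double-check the active context before running anything destructive.
Get-AzContext
```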
The other cool thing about cloud shell is that you also have built-in text editors, including vim and code. This means you can paste code into a text editor and save it in your shell. Since you have a storage account, that data is persisted, so you can have a bunch of scripts saved in your cloud shell. This is great for developing for Azure Automation, or just running some ad-hoc scripts.
You can also go full screen with code–as shown above. While all of the examples I’ve shown have been PowerShell, you can also launch a bash shell running the Azure CLI.
I’ve written a lot about my thoughts on PASS this year. While I understand some of my posts could have been considered inflammatory, I wrote them from a deep position of love for the SQL Server and broader Microsoft Data Platform Community, and I decided to run for the Board of Directors because I wanted to ensure that the opportunities provided by PASS continued for others.
Before I tell you why you should vote for me for the PASS Board of Directors, I wanted to talk a little bit about my history with PASS. I don’t remember exactly when I got started with PASS (I likely signed up for a virtual chapter earlier), but when I moved to Philadelphia and was in a role that required me to get more in-depth with SQL Server (I used to be an Oracle DBA), I got involved with the Philadelphia SQL Server User’s Group. Shortly thereafter, I gave a talk or two, and then joined the board of the organization. I attended my first PASS Summit in 2011, and ran SQL Saturday Philadelphia for 5 or so years. I was also a regional mentor for the Mid-Atlantic region in the US for several years.
I bring up this point to highlight one of the things that I think is so important about PASS and is often missed—local chapters and regional events. While PASS Summit is an international event that represents a “family reunion” for #sqlfamily, these reunions happen every weekend all over the world at SQL Saturday events, and monthly at local user group meetings. In order to grow our membership and commit to our mission of Connect, Share, and Learn, those local events need to be a priority. I would like to work towards having a speaker database, even if it is something rudimentary, where UGs could seek out a speaker to present virtually or in person. This is a good example of something that could be a community project—it represents something the community could build in a hack-a-thon after gathering some requirements.
One of the challenges PASS has consistently faced over time is problems with its technology systems, which are mostly developed in-house, making them both expensive and time-consuming to update and upgrade. I would work towards moving to the Software as a Service products that many conferences and user groups are already using. This has the potential both to save costs and to enhance the productivity of everyone involved.
PASS faces many challenges in the coming years and will have to adapt to stay alive and relevant. The best way to do that is to stick to the core mission of Connect, Share, and Learn, and to remember that everything starts locally, whether it be a user group or a SQL Saturday. We also need to ensure that our sponsors are happy and receiving value from the organization. If you vote for me, I will do my best to make those things happen.
I would also like to recommend you vote for my colleagues Steph Locke and Matt Gordon. We have been doing a lot of thinking about how a community organization should work and would provide the right leadership for PASS.
Creating an Extended Events session (as well as viewing events) in Azure SQL Database is slightly different than on a typical SQL Server. Since you don’t have access to the file system of the server where your database lives, you need to configure a storage account target for persistence of your Extended Events sessions. You can write them to the ring buffer, but since you do not have the ability to “view live events” in SQL Server Management Studio, this is of limited benefit. You can read about what you need to do in the docs here, but in a nutshell: create a storage account (or use an existing one), create a database scoped credential so you can use the storage account, and then create the xEvents session.
The reason why I’m writing this post is that there is a bit of a bug here that’s not fully documented. Many of us (especially those of us who are consultants) work across the scope of Azure Active Directory tenants. What that means is that a user like firstname.lastname@example.org might manage a database in the contoso.com Azure AD tenant while still being logged in with that home-tenant identity. Normally this isn’t an issue, but there are a couple of places where odd things happen with cross-tenancy. When you try to create a credential in your database, you will receive the following error, even if you are the database owner.
Started executing query at Line 1
Msg 2760, Level 16, State 1, Line 1 The specified schema name "firstname.lastname@example.org" either does not exist or you do not have permission to use it.
Total execution time: 00:00:00.195
You should note the rapid execution time of that error–this isn’t failing when going out to a storage account to validate the credential; the code is failing in the database. I posted something about this to the Microsoft MVP DL, and the ever-brilliant Simon Sabin emailed me and suggested that I try creating a schema matching the login name (firstname.lastname@example.org in the error above) and then creating the credential. Sure enough–that worked fine and I could proceed. In the customer system where this happened, we were fortunate enough to have global admin rights in AAD, so we just created a new user in their tenant and used that instead.
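Here’s a minimal sketch of the workaround, run from PowerShell with the SqlServer and Az.Accounts modules. The server, database, storage container, and SAS token are all placeholders, and -AccessToken requires a recent version of the SqlServer module:

```powershell
$server = 'myserver.database.windows.net'   # placeholder
$token  = (Get-AzAccessToken -ResourceUrl 'https://database.windows.net/').Token

# The workaround for Msg 2760: a schema named after the cross-tenant identity.
# CREATE SCHEMA must be alone in its batch, so run it separately.
Invoke-Sqlcmd -ServerInstance $server -Database 'mydb' -AccessToken $token `
    -Query 'CREATE SCHEMA [firstname.lastname@example.org];'

# A database master key is required before creating a database scoped credential.
Invoke-Sqlcmd -ServerInstance $server -Database 'mydb' -AccessToken $token `
    -Query "CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong-password>';"

# The credential name is the URL of the container that will hold the .xel files.
Invoke-Sqlcmd -ServerInstance $server -Database 'mydb' -AccessToken $token -Query @"
CREATE DATABASE SCOPED CREDENTIAL [https://mystorageacct.blob.core.windows.net/xevents]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<sas-token-without-the-leading-?>';
"@
```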
Note: I’m running for the PASS board of directors. Candidates are not allowed to disparage PASS, and I’m of the opinion that I’m not disparaging PASS in this post, but if anyone thinks I am, please let me know in the comments.
One topic I’m always yelling about on Twitter, and to any community speakers I chat with, is to never do anything for a for-profit company without compensation. Compensation can take the form of having your travel expenses paid for a conference, or just getting paid for work product. No matter how small the effort, it’s a lot of work to write a column or do a presentation, and you are a subject matter expert whether you realize it or not. Community events are a different story—I’ve spoken at user groups, virtual chapters, many SQL Saturdays, and even some community conferences, like SQLBits (in person, SQLBits does provide hotel rooms for speakers) and EightKB, without any compensation.
I’ve written about the PASS Pro subscription offering that PASS and C&C launched to build a secondary revenue stream beyond PASS Summit. When it was launched, I said that conceptually it was a good idea, but that I didn’t have faith in C&C to provide good execution. I also thought PASS would struggle to fill a content pipeline without a large capital infusion to pay speakers to build content, the way Pluralsight and LinkedIn Learning do.
It turns out my assumption was correct, as this morning I received an email (forwarded to me) asking PASS speakers to teach Microsoft Learn modules to PASS Pro members. The work consists of doing a webinar and leading a Q&A session for an hour, and the compensation is “This is a volunteer position”. Yes, that’s it—PASS expects folks to do this work for free.
My (and DCAC’s) typical compensation for something like this would be $1000 at a minimum. In addition to the active training time, we are typically paid for the time required to prepare to give the training class. Prep time for Azure training is hard—Azure changes all of the time, which means you frequently need to update demos, change screenshots in slides, and possibly even refactor entire sections of training. Asking the community to provide free training for a service that is an attempt to prop up a for-profit event management company is just unconscionable to me.
Because PASS Pro is a paywalled service, there are a limited number of people who can attend these events. This means this training is not upholding the PASS mission of Connect, Share, and Learn for the community at large. More importantly, PASS is asking experts to provide their expertise and skill for free. Just like artists shouldn’t work ‘for exposure’, you shouldn’t either. If someone is asking you to build content for them, you have a valuable level of expertise that that company needs.
I don’t mean to sound pretentious, but I’ve worked long and hard to have a very solid understanding of the Azure ecosystem, and I consider myself knowledgeable. My knowledge is valuable—it took me a long time to acquire that knowledge as well as the ability to teach others that knowledge in a manner that they can understand. That knowledge and skill is valuable, and asking professionals to work for free completely undermines the foundation of having a Professional Association.
I’ll take this a step further and say that anyone who does this for free is actively harming the rest of the community. As I mentioned before, we at DCAC are paid for things like this, along with many other members of the data community and the Microsoft Certified Trainer community. When you do free training for a paid service, you are undercutting all of those people. You may ask what I think about people doing things on YouTube or other free services—those videos are available for the whole world to see, and in many cases are loss leaders to try to get viewers to subscribe to paid services.
I’m of the opinion that PASS Pro being a paid service had the potential to divide the community. However, when speakers are paid, that’s another opportunity for speakers to make money, and is how nearly all legitimate online training services work. If the success of your business model depends on free labor for your paid service, that’s just theft of labor, and you don’t have a feasible business model.
One of the benefits of cloud computing is flexibility and scale—you don’t need to procure hardware or licenses as you get new customers. This flexibility, and platform as a service offerings like Azure SQL Database, allow a lot of flexibility in what independent software vendors or companies selling access to data can provide to their customers. However, there is a lot of work and thought that goes into it. We have had success building out these solutions with customers at DCAC, so in this post I’ll cover at a high level some of the architectural tenets we have implemented.
Authentication and Costing
The cloud has the benefit of providing detailed billing information, so you know exactly what everything costs. The downside is that the data provided is very granular and can be challenging to break down. There are a couple of options here—you can create a new subscription for each of your customers, which means you will have a single bill for each customer, or you can place each of your customers into their own resource group and use tags to identify which customer is associated with that resource group. The tags appear in your Azure bill, which allows you to break the bill down by customer. While the subscription model is cleaner in terms of billing, it adds additional complexity to the deployment model and ultimately doesn’t scale.
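As a rough sketch of the tagging approach (the resource group name and tag values are hypothetical):

```powershell
# Tag each customer's resource group so the bill can be broken down per customer.
Set-AzResourceGroup -Name 'rg-cust-fabrikam' -Tag @{ Customer = 'Fabrikam'; Use = 'Prod' }

# Later, find everything that belongs to one customer.
Get-AzResourceGroup -Tag @{ Customer = 'Fabrikam' }
```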
The other thing you need to think about is authenticating users and security. Fortunately, Microsoft has built a solution for this with Azure Active Directory, but you still need to think through the design. Let’s assume your company is called Contoso, and your AAD domain is contoso.com. Assuming you are using AAD for your own business’s users, you don’t want to include your customers in that same AAD. The best approach is to create a new Azure Active Directory tenant for your customer-facing resources—in this case called cust.contoso.com. You would then add the required accounts from contoso.com to cust.contoso.com as guests in order to manage the customer tenant. You may also need to create a few accounts in the target tenant, as there are a couple of Azure operations that require an admin from the home tenant.
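One way to handle the guest invitations is with the AzureAD module; the tenant ID and addresses below are placeholders:

```powershell
# Connect to the customer-facing tenant (cust.contoso.com), not your home tenant.
Connect-AzureAD -TenantId '<cust.contoso.com-tenant-id>'

# Invite a contoso.com admin into cust.contoso.com as a guest user.
New-AzureADMSInvitation -InvitedUserEmailAddress 'admin@contoso.com' `
    -InviteRedirectUrl 'https://portal.azure.com' `
    -SendInvitationMessage $true
```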
Deployment of Resources
One of the things you need to think about is what happens when you onboard a new customer. This can mean creating a new resource group, a logical SQL Server, and a database. In our case, it also means enabling a firewall rule, enabling performance data collection for the database, and a number of other configuration items. There are a few ways you can do this—an Azure Resource Manager (ARM) template containing all of your resource information is a good approach that I would typically recommend. In my case, there were some things that I couldn’t do in the ARM template, so I resorted to using PowerShell and Azure Automation to perform deployments. Currently our deployment is semi-manual (someone enters the parameters into the Azure Automation runbook), but it could easily be converted to be driven by an Azure Logic App or a function.
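A hedged sketch of what that onboarding might look like with the Az modules—names, region, and SKU are all placeholders, and this is far from a complete runbook:

```powershell
# New-customer onboarding: resource group, logical server, firewall rule, database.
$customer = 'fabrikam'
$rg       = "rg-cust-$customer"
$sqlName  = "sql-cust-$customer"
$location = 'eastus2'

New-AzResourceGroup -Name $rg -Location $location -Tag @{ Customer = $customer }

# SQL admin login for the new logical server.
$cred = Get-Credential
New-AzSqlServer -ResourceGroupName $rg -ServerName $sqlName -Location $location `
    -SqlAdministratorCredentials $cred

# Let other Azure services (e.g., the app tier) through the server firewall.
New-AzSqlServerFirewallRule -ResourceGroupName $rg -ServerName $sqlName -AllowAllAzureIPs

New-AzSqlDatabase -ResourceGroupName $rg -ServerName $sqlName -DatabaseName "db-$customer" `
    -Edition 'GeneralPurpose' -ComputeGeneration 'Gen5' -VCore 2
```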
Deployment of Data and Data Structures
When you are dealing with multiple databases across many customers, you desperately want to avoid the schema drift that can happen. This means having a single database project for all of your databases. If you have to add a one-off table for a customer, you should still include it in all of your databases. If you are pushing data into your tables (as opposed to data being entered by the application or users), you should drive that process from a central table (more to come about this later).
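For illustration, here is one way you might publish a dacpac built from that single database project to every customer database; the SqlPackage invocation, credentials, server, and file names are assumptions:

```powershell
# Publish the same dacpac to every customer database on a logical server.
$user     = 'sqladmin'      # placeholder
$password = '<password>'    # placeholder
$dbs = Get-AzSqlDatabase -ResourceGroupName 'rg-cust' -ServerName 'sql-cust' |
    Where-Object DatabaseName -ne 'master'

foreach ($db in $dbs) {
    # Assumes SqlPackage is on the path and CustomerDb.dacpac is in the current folder.
    & SqlPackage /Action:Publish /SourceFile:'CustomerDb.dacpac' `
        /TargetServerName:'sql-cust.database.windows.net' `
        /TargetDatabaseName:"$($db.DatabaseName)" `
        /TargetUser:$user /TargetPassword:$password
}
```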
Where this gets dicey is with indexes, as you may have some indexes that are needed for specific customer queries. In general, I say the overhead on write performance of having some additional indexes is worth the potential benefit on reads. How you manage this is going to depend on the number of customer databases you are managing—if you have ten databases, you might be able to manage each database’s indexes individually. However, as you scale to a larger number of databases, you aren’t going to be able to manage this by hand. Azure SQL’s automatic tuning can add and drop indexes as it sees fit, which can help with this, but isn’t a complete solution.
Hub Database and Performance Data Warehouse
Even if you aren’t using a hub-and-spoke model for deploying your data, having a centralized data repository for metadata about your client databases is extremely valuable. A common task is collecting performance data across your entire environment. While you can use Azure SQL Diagnostics to capture a whole lot of performance information in your environment, with one of our clients we’ve taken a more comprehensive approach, combining the performance data from Log Analytics, audit data that also goes to Log Analytics, and the Query Store data from each database. While Log Analytics contains data from the Query Store, there was some additional metadata we wanted to capture that we could only get from the Query Store directly. We use Azure Data Factory pipelines (built by my co-worker Meagan Longoria (b|t)) to load that data into a SQL Database that serves as a data warehouse. I’ve even built some XQuery to do some parsing of execution plans, to identify which tables are most frequently queried. You may not need this level of performance granularity, but it is a conversation you should have very early in your design phase. You can also use a 3rd party vendor tool for this—but the costs may not scale if your environment grows to be very large. I’m going to do a webinar on that in a month or so–I need to work out the details, but stay tuned.
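As a rough sketch of the Query Store collection piece (not our actual pipeline, which uses Data Factory), you could pull a few runtime stats from each database into a hub table like this; all server, database, and table names are hypothetical:

```powershell
$cred = Get-Credential   # SQL credential for both source and hub; placeholder

$qsQuery = @"
SELECT DB_NAME() AS database_name, q.query_id, qt.query_sql_text,
       rs.avg_duration, rs.count_executions
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs ON p.plan_id = rs.plan_id;
"@

foreach ($db in Get-AzSqlDatabase -ResourceGroupName 'rg-cust' -ServerName 'sql-cust') {
    if ($db.DatabaseName -eq 'master') { continue }
    $rows = Invoke-Sqlcmd -ServerInstance 'sql-cust.database.windows.net' `
        -Database $db.DatabaseName -Credential $cred -Query $qsQuery

    # -Force creates dbo.QueryStoreStats in the hub database if it doesn't exist yet.
    Write-SqlTableData -ServerInstance 'sql-hub.database.windows.net' -Credential $cred `
        -DatabaseName 'PerfWarehouse' -SchemaName 'dbo' -TableName 'QueryStoreStats' `
        -InputData $rows -Force
}
```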
You want the ability to quickly do something across your environment, so having some PowerShell that can loop through all of your databases is really powerful. This lets you make configuration changes across your environment, or, using dbatools or Invoke-SqlCmd, run a query everywhere. You also probably need to get pretty comfortable with Azure PowerShell, as you don’t want to have to change something in the Azure Portal across 30+ databases.
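A minimal sketch of that loop-over-everything pattern (the credential and the example statement are placeholders):

```powershell
$cred = Get-Credential   # placeholder SQL credential

# Run the same statement against every database on every logical server
# in the subscription -- here, making sure Query Store is enabled.
foreach ($server in Get-AzSqlServer) {
    $dbs = Get-AzSqlDatabase -ResourceGroupName $server.ResourceGroupName `
        -ServerName $server.ServerName | Where-Object DatabaseName -ne 'master'
    foreach ($db in $dbs) {
        Invoke-Sqlcmd -ServerInstance "$($server.ServerName).database.windows.net" `
            -Database $db.DatabaseName -Credential $cred `
            -Query 'ALTER DATABASE CURRENT SET QUERY_STORE = ON;'
    }
}
```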
I have received feedback that some folks think I just want to burn PASS down, or that I don’t want a for profit company involved with a community organization. Neither of those things are remotely what I’m thinking—I’ve only been loud and writing about it here, because I want PASS to survive, which is going to be near impossible with a loss of its main revenue source (in-person PASS Summit) and its expenses (C&C) which haven’t dropped nearly enough in the face of the aforementioned revenue loss. What do I see as a future for PASS?
Virtual Summit is going to happen in 2020, and it’s probably going to lose money. It’s effectively a sunk cost at this point, so I’m not going to waste any time talking about that. In 2021, PASS has a tough decision to make—large international conferences are unlikely to be a thing until 2022, when the COVID vaccine has been broadly distributed. Planning a virtual conference in 2021 is risky as well, given that most of the competition is free. I think doing a low-cost (and lower overhead) smaller scale event using a much cheaper platform like Microsoft Teams or even GoToWebinar would be a good small bet, without much risk. I also think a small conference the size of the old SQL Rally (a few hundred people, run in a hotel, not a conference center) could be viable for Q4 of 2021.
The reason for doing this would be an effort to try to keep the fundamental networking aspect of PASS going, while reducing financial risk. The original SQL Rally was a community organized event—by keeping it small, you not only reduce costs, but you also reduce the time to plan, which allows you to have a better assessment of the pandemic situation. PASS could also think about leveraging larger SQL Saturdays like Atlanta and Dallas, amongst others to be candidates for a Rally, as these events have community organizers who are very experienced at running larger scale events.
The Managing Organization
I’ve said what I’m going to say about C&C, but it’s very clear that PASS as an organization is untenable with its current cashflow situation. This means costs need to be drastically cut wherever possible. PASS won’t be in the business of planning large in-person conferences until 2022, and therefore doesn’t require a large event management firm dedicated to its management. I would recommend hiring a full-time executive director (yes, I know I said we need to reduce spending) to manage the organization and manage vendor relationships. C&C currently has a seat on the PASS exec board with the title of Executive Director, which is a conflict of interest, and I would propose ending that immediately. The Executive Director role needs to be someone who understands both data and analytics and building communities. Finding this person will be a challenge, but I believe they are out there. I would also move away from the custom-developed platforms PASS is using and towards Software as a Service platforms where possible. Sessionize is probably the most obvious solution here, but there are others.
The Role of the Board
As I was reading the by-laws and guidance for the PASS BoD, I came across this paragraph.
Role of PASS Board Members
“What PASS does not need from the Board is tactical execution or day to day management of organizational activities”—I can’t imagine running a SQL Saturday and completely outsourcing everything to a third party, and I feel the same way about our community organization, especially in this time of crisis. I think this is completely wrong, and it is the main reason why PASS is in the situation it is in right now. The Board of Directors needs to take an active role in managing the organization, period. We, as a community organization, are in a situation where the organization might go bankrupt and die, and while this is largely due to a force majeure (the pandemic), it is also due to decisions made in the interest of the managing firm, and not the community. When I was heavily involved in running my PASS chapter, I had a board member, whose portfolio was chapters, who took an active interest in the chapters and their needs and worked his tail off to make things better. Unfortunately, he was not re-elected, and things never got any better from there.
The board needs to take an active role—while the day-to-day operations of the org would be managed by the executive director, and eventually some administrative staff, in a time when the organization needs to be austere with its spending, being on the board should require you to get your hands dirty. I would also try to involve the community—there are lots of projects that, over time, could have been open sourced, but there has always been pushback from the board. Given the success of community-managed projects like dbatools, I see no reason not to engage volunteers who are willing to help, especially on community-facing projects.
I’ve been involved in PASS for nearly 15 years now—I want it to survive, because having a centralized community organization is a good thing and makes the community stronger. The central organization also provides governance and helps with sponsors. PASS cannot survive financially in its current state, and we as a community must band together to help it survive and foster the changes to make it a sustainable organization. While we are doing that, we can make it a better community org.
Sorry for the spammy SEO title, we’ve got to pay the bills. Sometimes it’s fun to just write some code to solve problems, and not think about the world’s larger problems for a few hours. Last week, I learned something new from a client—you can change managed disks in Azure from Premium Storage to Standard Storage if the VM connected to those disks is powered off. This is a cost savings of nearly $100 per month per disk (assuming 1 TB disks), and since the SQL Server image in the marketplace uses two 1 TB disks, this can save you a good amount of money on your Azure spend.
This code will loop through each resource group in your subscription and look for resource groups with the tag “Use:Demo”. If you aren’t familiar with tags in Azure (or AWS), they are a metadata layer that allows you to more easily identify and filter resources. The most common use case is to make your Azure bill easier to navigate. However, you can also incorporate tagging into your management operations, as you see in this example.
After it identifies each resource group with that tag, it will then look for VMs in those resource groups, power them down if they are running, and then migrate each premium disk on the VM to Standard. I have similar code in GitHub to do the opposite; however, I haven’t glammed it up to support the tagging functionality yet.
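For readers who don’t want to click through, here is a hedged sketch of the core logic. It follows the approach described above, but the actual script in GitHub is the authoritative version:

```powershell
# Find resource groups tagged Use:Demo, deallocate their VMs, and convert
# any premium managed disks on those VMs to Standard HDD.
foreach ($rg in Get-AzResourceGroup -Tag @{ Use = 'Demo' }) {
    foreach ($vm in Get-AzVM -ResourceGroupName $rg.ResourceGroupName) {
        # A disk's SKU can only be changed while the VM is deallocated.
        Stop-AzVM -ResourceGroupName $rg.ResourceGroupName -Name $vm.Name -Force

        $update = New-AzDiskUpdateConfig -SkuName 'Standard_LRS'
        Get-AzDisk -ResourceGroupName $rg.ResourceGroupName |
            Where-Object { $_.ManagedBy -eq $vm.Id -and $_.Sku.Name -eq 'Premium_LRS' } |
            ForEach-Object {
                Update-AzDisk -ResourceGroupName $rg.ResourceGroupName `
                    -DiskName $_.Name -DiskUpdate $update
            }
    }
}
```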
This code is available at DCAC’s GitHub here. To take this a step further you could create an Azure Automation runbook to deploy this code. In order to do that you would need to import the modules Az.Resources and Az.Compute into your automation account.
If you saw any of my angry tweets last night, it’s not just because the Saints weren’t good. I’ve been writing a lot about PASS and C&C, the for-profit event management firm that runs virtually all of PASS’ operations. I personally think C&C imposes a financial burden on the Microsoft Data Platform community that will ultimately kill PASS. I want to run for the board of directors (once you agree to run for the board you have to agree not to speak or write poorly of PASS, but it doesn’t say anything about C&C) to try to return PASS to being a community-oriented organization. PASS has been a great organization, and the connections I have made have been a great foundation for the career success that I and many others have achieved. The reason I agreed to speak at PASS Summit this year was to help enable the organization’s survival, despite my lasting frustrations with C&C.
PASS had a couple of options for doing PASS Summit virtually, and they’ve failed at every turn. The best option would have been to do a super low-cost virtual Summit, using Microsoft Teams, and to keep the pricing at a level the average DBA could pay out of pocket. A big reduction in revenue is bad for C&C’s business, but frankly, given that there likely won’t be a big conference until 2022, C&C should be operating on an austerity budget, since PASS’ main income source has been severely constrained.
The Burden on Speakers
I’ve lost count of how many webinars I’ve done this year—it’s been a lot. 98% have been live—in some cases with some really dicey demos, like I did at EightKB. Doing a webinar or a user group meeting is a decent amount of effort, but no more than doing an in-person session. However, PASS Summit has asked speakers to record their sessions—recording a session takes me at a minimum 2-3x as long as simply delivering it. Setting up cameras and lighting and doing small amounts of editing all add up to a considerable amount of time. Additionally, you have to render the video and then upload it to the site. I say this from experience, because I just recorded three sessions for SQLBits.
You might ask why I was willing to record sessions for Bits, but not PASS Summit. That’s a good question—SQLBits is truly a community-run event, for the community, by the community. Sure, it can be rough around the edges, but it’s a great event, and in general the conference is great to work with. Additionally, SQLBits always pays for speakers’ hotel rooms; it’s nominal in the cost of an international trip, but it’s something that makes you feel wanted as a speaker, and I remember it. PASS Summit, unless you have a preconference session (precon), doesn’t offer any remuneration at all to speakers, nor has it ever. All that being said, after recording my Bits sessions, I said “I’m never doing that for free again”. And in addition to doing the work of recording your session, you still have to show up and do Q&A for it.
Why You Shouldn’t Speak at PASS Summit (and Time Zones are Hard)
PASS has asked speakers to record their sessions just six weeks before the conference. These recordings will only ever be seen by paid attendees of the conference, and possibly PASS Pro members. Speakers received a highly confusing email informing them of this late last night, which included the time and date of their sessions. It wasn’t clear whether “live sessions” still needed to be recorded—which is even more confusing to speakers. Speakers weren’t consulted about the need to record their sessions when the revised speaker agreement went out; this burden has been imposed at the last minute. In fact, I haven’t gotten any official communications from PASS about Summit since July, when I received my speaker code. It’s not fair to impose this on speakers this late in the process, especially when you aren’t compensating them for their time. Also, this is insignificant, but we were supposed to get the slide template in July, and it’s still not in my inbox.
Precons are all starting in the speaker’s native time zone, which will limit the audience for many precon speakers—European speakers are starting as early as 3 AM EST, which means basically no one in North America (PASS’ main market) will attend. Most regular conference sessions run from 8 AM to 5 PM EST—which is probably a decent compromise, but still greatly limits the west coast in the morning and other regions of the world like Asia. There are some evening and overnight sessions, but those are extremely limited compared to EST business-hour sessions. All schedules for a worldwide event are going to be a compromise, but I feel like some creativity could have been used to better support a virtual audience. For example, Ignite has replays of all its sessions available for broader time zone coverage. As far as I know, no speakers were consulted during the making of this schedule.
Doesn’t This Hurt the Community?
A successful PASS Summit is a good thing for the community. However, with the poor management of C&C, the marketing for the event has been poor, and with most other events going to free or freemium models, PASS continues to charge a premium for the event. The platform PASS is using hasn’t been demoed to speakers or attendees to show how it provides value over a free conference like EightKB or Ignite.
I’m not going to speak at PASS Summit. I’m going to record my session and put it on YouTube, so everyone can watch it. And I’ll do a live Q&A to talk about it—it’s a really cool session about a project I’ve worked on to aggregate Query Store data across multiple databases. I challenge other speakers to follow me—the conference is so bad and so expensive because C&C is trying to prop itself up on the back of the community. C&C needs to go away before we can move forward. I was frustrated before, but this Summit fiasco has really pushed me over the top.
I’ve written a couple of recent posts that were extremely critical of PASS, and more so of C&C, the company that manages PASS. Someone who read my last post pointed out that I probably didn’t emphasize the budget numbers I talked about enough, so let’s talk about that. I grabbed the most recent PASS financial data, which was published in March 2020.
[Table: Cash on Hand (effective) – figures not preserved in this copy]
[Table: Summit Revenue Projections – figures not preserved in this copy]
This is a challenge, because obviously I don’t have the actual cost PASS is paying per attendee for Virtual Summit. Many years ago, my rough understanding of the Summit cost per attendee was that it was $400. So, for the purposes of my math, I’m going to estimate that Virtual Summit will cost $100/attendee (I suspect the actual cost is closer to $250, given that is what chapter leaders are being charged). Per the June meeting minutes, C&C has agreed to reduce their expenses by $500,000. It’s not clear where that comes in, but let’s just say that drops non-Summit expenses to $2.7MM.
If we have 2000 attendees at Virtual PASS Summit in 2020, which I think may be a generous estimate, all paying for the whole conference, the numbers look like this:
[Table: Cash on Hand (effective) under this scenario – figures not preserved in this copy]
If we have 1000 attendees doing the All in One Bundle and 500 attendees doing the 3-day conference:
[Table: Cash on Hand (effective) under this scenario – figures not preserved in this copy]
Given my experience and the current economy, I think the projections above are fairly optimistic. Let’s say my cost projection of $100 per person is too low, and the costs are $250 per person. Also, let’s say only 500 people sign up for the full conference and 500 register for the three-day conference:
[Table: Cash on Hand (effective) under this scenario – figures not preserved in this copy]
PASS doesn’t officially release attendance numbers, but they say that 4000 people attended PASS Summit last year, which sounds really great. However, conference math is a factor here—many conferences count precons separately from individual conference attendance. If you attended two precons and the conference, you would count as three conference attendees. In a best-case scenario where you had 4000 attendees, top-line revenue would still drop by $3.5 million (or 54%) while fixed operating expenses are only down $500K (or 16%). That is, as they say in business school, an untenable situation.
This is just focusing on the short term—2021 will face similar challenges. It is very possible that by November 2021, in-person conferences will be back (this assumes a vaccine is in place, but Goldman Sachs assumes one will be, and I trust them when it comes to money). However, I don’t see attendance quickly returning to pre-pandemic levels until 2022 or 2023, which means PASS will likely continue dipping into its cash on hand until reaching bankruptcy.
Sure, PASS Pro is a second potential revenue source, but it faces many challenges in getting off the ground and adding enough revenue to have any substantial impact. That is in addition to the fact that many community speakers feel alienated by the conversion of their Summit sessions and networking events into paid, for-profit sessions.
One final note: in FY2020, PASS spent approximately five percent ($385K) of its revenue on community activities. That number was substantially beefed up by a Microsoft SQL Server 2019 upgrade effort, and the total community spend has been dropping over time. For a point of reference, C&C charged PASS $525K for IT services in 2019. It’s important to remember that PASS exists to serve the broader SQL community, and not a for-profit firm.
I’m writing this post because I’ve been mired in configuring a bunch of distributed availability groups for a client, and while the feature is technically solid, the lack of tooling can make it a challenge to implement. Specifically, I’m implementing these distributed AGs (please don’t use the term DAG, as you’ll piss off Allan Hirt, but more importantly it’s used in Microsoft Exchange High Availability, so it’s taken) in Azure, which adds a couple of additional challenges because of the need for load balancers. You should note this feature is Enterprise Edition only, and is only available starting with SQL Server 2016.
First off, why would you implement a distributed availability group? The main reason is to implement a disaster recovery (DR) strategy in addition to the high availability strategy your AG already provides. There’s limited benefit to implementing this architecture if you don’t have at least four nodes in your design. But consider the following design:
In this scenario, there are two data centers with four nodes. All of the servers are in a single Windows Server Failover Cluster, and there are three streams from the transaction log on the primary, which is called SQL1. This means we are consuming double the network bandwidth to send data to our secondary site in New York. With the distributed availability group, each location gets its own Windows cluster and availability group, and we only send one transaction log stream across the WAN.
This benefits a few scenarios–the most obvious being that it’s a really easy way to do a SQL Server upgrade or migration. While Windows clustering now supports rolling OS upgrades, it’s much easier to do a distributed AG, because the clusters are independent of each other and have no impact on each other. The second is that it’s very easy to fail back and forth between these distributed availability groups. You have also halved the amount of WAN bandwidth you need for your configuration, which can represent a major cost savings in a cloud world or even on-premises.
If you think this is cool, you’re in smart company–this is the technology Microsoft has implemented for geo-replication in Azure SQL Database. The architecture is really robust, and if you think about the tens of thousands of databases in Azure, you can imagine all of the bandwidth saved.
That’s Cool, How Do I Start?
I really should have put this tl;dr at the start of this post. You’ll need this page at docs.microsoft.com. There’s no GUI, which kind of sucks, because you can make typos in your T-SQL and the commands can still potentially validate and give you non-helpful error messages (ask me how I know). But in a short list, here is what you do (a code sketch for the final two steps follows the list):
1. Create your first WSFC on your first two nodes.
2. Create an availability group on your first WSFC, and create a listener. Add your database(s) to this AG.
3. If you are in Azure, ensure your ILB has port 5022 (or whatever port you use for your AG endpoint) open.
4. Create your second WSFC on the remaining two nodes.
5. Create the second AG and listener, without a database. In case you really want to use the AG wizard, add a database to your AG, and then remove it. (Or quit being lazy and use T-SQL to create your AG.)
6. Create the distributed AG on the first AG/WSFC.
7. Add the second AG to your distributed availability group.
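Here’s that code sketch for steps 6 and 7, T-SQL wrapped in PowerShell. The AG names, listener DNS names, and port are placeholders, and the seeding and failover options will vary with your design; run the first batch on the primary of the first AG and the second on the primary (global forwarder) of the second AG:

```powershell
# Step 6: create the distributed AG on the primary replica of the first AG.
$createDistAg = @"
CREATE AVAILABILITY GROUP [DistAG]
   WITH (DISTRIBUTED)
   AVAILABILITY GROUP ON
      'AG1' WITH (
         LISTENER_URL = 'tcp://ag1-listener.contoso.com:5022',
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC ),
      'AG2' WITH (
         LISTENER_URL = 'tcp://ag2-listener.contoso.com:5022',
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC );
"@
Invoke-Sqlcmd -ServerInstance 'SQL1' -Query $createDistAg

# Step 7: join the second AG from its own primary (the global forwarder).
$joinDistAg = @"
ALTER AVAILABILITY GROUP [DistAG]
   JOIN
   AVAILABILITY GROUP ON
      'AG1' WITH (
         LISTENER_URL = 'tcp://ag1-listener.contoso.com:5022',
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC ),
      'AG2' WITH (
         LISTENER_URL = 'tcp://ag2-listener.contoso.com:5022',
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC );
"@
Invoke-Sqlcmd -ServerInstance 'SQL3' -Query $joinDistAg
```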
This seems pretty trivial, and it is when all of your network connections work (you need to be able to hit 1433 and 5022 on the listener’s IP address across both clusters). However, SQL Server has extremely limited documentation and management tooling around this feature. The one troubleshooting hint I will provide is to always check the error log of the primary node of the second AG (known as the global forwarder), which is where you will see any errors. The most common error I’ve seen is:
A connection timeout has occurred while attempting to establish a connection to availability replica ‘dist_ag_00’ with id [508AF404-ED2F-0A82-1B8A-EA23BA0EA27B]. Either a networking or firewall issue exists, or the endpoint address provided for the replica is not the database mirroring endpoint of the host server instance
Sadly, that error is a bit of a catch-all. In doing this work, I had a typo in my listener name on the secondary, and SQL Server still processed the command (so there’s no validation that everything can connect when you create the distributed AG). I’m sure in Azure this is all done via API calls, which means humans aren’t involved, but since there is no real GUI support for distributed AGs, you have to type code. So type carefully.
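Before assuming a typo, it’s worth ruling out connectivity. Here’s a quick check you can run from each replica (the listener names are placeholders):

```powershell
# Both listeners need to be reachable on 1433 (SQL) and 5022 (the AG endpoint).
foreach ($listener in 'ag1-listener.contoso.com', 'ag2-listener.contoso.com') {
    foreach ($port in 1433, 5022) {
        Test-NetConnection -ComputerName $listener -Port $port |
            Select-Object ComputerName, RemotePort, TcpTestSucceeded
    }
}
```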
Overall, I think distributed availability groups are a nice solution for highly available database servers, but without more tooling there won’t be broader adoption, and in turn, there won’t be more investment from Microsoft in tooling. So it’s a bit of a catch-22. Hopefully this post helps you understand this feature, where it might be used, and how to troubleshoot it.