PASS is Untenable in its Current State

I’ve written a couple of recent posts that were extremely critical of PASS, and even more so of C&C, the company that manages PASS. Someone who read my last post pointed out that I probably didn’t emphasize the budget numbers enough. So let’s talk about that. I grabbed the most recent PASS financial data, which was published in March 2020.

Income: 7,538,163
Expenses (total): 6,589,810
Expenses (Non-Summit): 3,275,430
Cash on Hand (effective): 2,477,769

Summit Revenue Projections

This is a challenge, because obviously I don’t have the actual per-attendee cost PASS is paying for the virtual Summit. Many years ago, my rough understanding was that Summit cost about $400 per attendee. So, for the purposes of my math, I’m going to estimate that the virtual Summit will cost $100/attendee (I suspect the actual cost is closer to $250, given that is what chapter leaders are being charged). Per the June meeting minutes, C&C has agreed to reduce their expenses by $500,000. It’s not clear where that reduction comes from, but let’s just say it drops non-Summit expenses to $2.7MM.

If we have 2,000 attendees at the virtual PASS Summit in 2020 (which I think may be a generous estimate), all paying the full $999 price that includes precons, the numbers look like this:

Income: 1,998,000
Expenses (Summit): 200,000
Expenses (Non-Summit): 2,700,000
Summit Subtotal: (902,000)
Cash on Hand (effective): 1,575,769

If we have 1,000 attendees buying the All in One Bundle and 500 attendees buying just the three-day conference:

Income: 1,298,500
Expenses (Summit): 150,000
Expenses (Non-Summit): 2,700,000
Summit Subtotal: (1,551,500)
Cash on Hand (effective): 926,269

Given my experience and the current economy, I think the projections above are fairly optimistic. Let’s say my cost estimate of $100 per person is too low and the real cost is $250 per person, and that only 500 people sign up for the full conference while 500 register for the three-day conference.

Income: 799,000
Expenses (Summit): 250,000
Expenses (Non-Summit): 2,700,000
Summit Subtotal: (2,151,000)
Cash on Hand (effective): 326,769

PASS doesn’t officially release attendance numbers, but they say that 4,000 people attended PASS Summit last year, which sounds really great. However, conference math is a factor here—many conferences count precons separately from the conference itself, so if you attended two precons and the conference, you would count as three attendees. Even in a best-case scenario with 4,000 paying attendees, top-line revenue would still drop by $3.5 million (or 54%), while fixed operating expenses are only down $500K (or 16%). That, as they say in business school, is an untenable situation.

This is just focusing on the short term—2021 will face similar challenges. It is very possible that by November 2021 in-person conferences will be back (this assumes a vaccine is in place, but Goldman Sachs is assuming one, and I trust them when it comes to money). However, I don’t see attendance quickly returning to pre-pandemic levels until 2022 or 2023, which means PASS will likely keep dipping into its cash on hand until it reaches bankruptcy.

Sure, PASS Pro is a second potential revenue source, but it faces many challenges in getting off the ground and adding enough revenue to have any substantial impact. On top of that, many community speakers feel alienated by the conversion of their Summit sessions and networking events into paid, for-profit content.

One final note: in FY2020 PASS spent approximately five percent of its revenue ($385K) on community activities. That number was substantially inflated by a Microsoft SQL Server 2019 upgrade effort, and the total community spend has been dropping over time. For a point of reference, C&C charged PASS $525K for IT services in 2019. It’s important to remember that PASS exists to serve the broader SQL community, not a for-profit firm.

Distributed Availability Groups–The Good, the Bad, and the Ugly

I’m writing this post because I’ve been mired in configuring a bunch of distributed availability groups for a client, and while the feature is technically solid, the lack of tooling can make it a challenge to implement. Specifically, I’m implementing these distributed AGs (please don’t use the term DAG, as you’ll piss off Allan Hirt, but more importantly it’s already used for Microsoft Exchange high availability, so it’s taken) in Azure, which adds a couple of additional challenges because of the need for load balancers. You should note this feature is Enterprise Edition only, and is only available starting with SQL Server 2016.

First off, why would you implement a distributed availability group? The main reason is to add a disaster recovery (DR) strategy on top of the high availability you already get from your AG. There’s limited benefit to implementing this architecture if you don’t have at least four nodes in your design. Consider the following design:

In this scenario, there are two data centers with four nodes. All of the servers are in a single Windows Server Failover Cluster. There are three transaction log streams coming from the primary, which is called SQL1. This means we are consuming double the network bandwidth to send data to our secondary site in New York. With a distributed availability group, each location gets its own Windows cluster and availability group, and we only send one transaction log stream across the WAN.

This benefits a few scenarios–the most obvious being that it’s a really easy way to do a SQL Server upgrade or migration. While Windows clustering now supports rolling OS upgrades, it’s much easier to do with a distributed AG, because the clusters are independent of each other and have no impact on each other. The second is that it’s very easy to fail back and forth between the two availability groups. You have also cut in half the amount of WAN bandwidth you need for your configuration, which can represent a major cost savings in the cloud or even on-premises.

If you think this is cool, you are in good company–this is the technology Microsoft has implemented for geo-replication in Azure SQL Database. The architecture is really robust, and if you think about the tens of thousands of databases in Azure, you can imagine all of the bandwidth saved.

That’s Cool, How Do I Start?

I really should have put this tl;dr at the start of this post. You’ll need this page at docs.microsoft.com. There’s no GUI, which kind of sucks, because you can make typos in your T-SQL and the commands can still validate, only to give you unhelpful error messages later (ask me how I know). But in short, here is what you do:

  1. Create your first WSFC on your first two nodes
  2. Create an Availability Group on your first WSFC, and create a listener. Add your database(s) to this AG
  3. If you are in Azure, ensure your ILB has port 5022 (or whatever port you use for your AG endpoint) open
  4. Create your second WSFC on the remaining two nodes
  5. Create the second AG and listener, without a database. If you really want to use the AG wizard, add a database to your AG, and then remove it. (Or quit being lazy and use T-SQL to create your AG.)
  6. Create the distributed AG on the first AG/WSFC
  7. Add the second AG to your distributed availability group (a T-SQL sketch of steps 6 and 7 follows below)
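
Since there is no GUI for steps 6 and 7, here is a minimal, hedged sketch of what the T-SQL looks like, wrapped in Invoke-Sqlcmd the way I would run it from PowerShell. The distributed AG name, listener DNS names, and port are hypothetical placeholders; see the docs page linked above for the full syntax and options.

# Step 6: run on the primary replica of the first AG (names and listeners are placeholders)
$createDistAg = @"
CREATE AVAILABILITY GROUP [dist_ag_00]
   WITH (DISTRIBUTED)
   AVAILABILITY GROUP ON
      'AG1' WITH (
         LISTENER_URL = 'tcp://ag1-listener.contoso.com:5022',
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC ),
      'AG2' WITH (
         LISTENER_URL = 'tcp://ag2-listener.contoso.com:5022',
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC );
"@
Invoke-Sqlcmd -ServerInstance 'ag1-listener.contoso.com' -Query $createDistAg

# Step 7: run on the primary replica of the second AG (the global forwarder) to join it
$joinDistAg = @"
ALTER AVAILABILITY GROUP [dist_ag_00]
   JOIN
   AVAILABILITY GROUP ON
      'AG1' WITH (
         LISTENER_URL = 'tcp://ag1-listener.contoso.com:5022',
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC ),
      'AG2' WITH (
         LISTENER_URL = 'tcp://ag2-listener.contoso.com:5022',
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC );
"@
Invoke-Sqlcmd -ServerInstance 'ag2-listener.contoso.com' -Query $joinDistAg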

This seems pretty trivial, and it is when all of your network connections work (you need to be able to reach ports 1433 and 5022 on the listeners’ IP addresses across both clusters). However, SQL Server has extremely limited documentation and tooling around this feature. The one troubleshooting hint I will provide is to always check the error log on the primary node of the second AG (this is known as the global forwarder), which is where you will see any errors. The most common error I’ve seen is:

A connection timeout has occurred while attempting to establish a connection to availability replica ‘dist_ag_00’ with id [508AF404-ED2F-0A82-1B8A-EA23BA0EA27B]. Either a networking or firewall issue exists, or the endpoint address provided for the replica is not the database mirroring endpoint of the host server instance

Sadly, that error is a bit of a catch-all. In doing this work, I had a typo in my listener name on the secondary and SQL Server still processed the command (so there’s no validation that everything can connect when you create the distributed AG). I’m sure in Azure this is all done via API calls, which means humans aren’t involved, but since there is no real GUI support for distributed AGs, you have to type code. So type carefully.
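
When I hit that timeout, these are the kinds of sanity checks I run on the global forwarder: read the error log and confirm the mirroring endpoint is listening on the port you think it is. This is a sketch with placeholder names; the queries are standard error log and DMV calls, nothing distributed-AG specific.

# Check the forwarder (primary of the second AG); the server name is a placeholder
$forwarder = 'ag2-listener.contoso.com'

# Look for the connection timeout messages in the current SQL Server error log
Invoke-Sqlcmd -ServerInstance $forwarder -Query "EXEC sp_readerrorlog 0, 1, N'connection timeout';"

# Confirm the database mirroring endpoint exists, is started, and is on the expected port (5022 here)
Invoke-Sqlcmd -ServerInstance $forwarder -Query @"
SELECT e.name, e.state_desc, t.port
FROM sys.database_mirroring_endpoints AS e
JOIN sys.tcp_endpoints AS t ON e.endpoint_id = t.endpoint_id;
"@

# Quick network check back across the WAN to the first AG's listener
Test-NetConnection -ComputerName 'ag1-listener.contoso.com' -Port 5022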

Overall, I think distributed availability groups are a nice solution for highly available database servers, but without more tooling there won’t be broader adoption, and in turn there won’t be more investment from Microsoft in tooling. So it’s a bit of a catch-22. Hopefully this post helps you understand this feature, where it might be used, and how to troubleshoot it.

How Do I Make MySQL Highly Available with Open Source Tools?

This is a clickbait post title, sorry. You are here now. The correct answer is that you should purchase MySQL as a database service from your favorite cloud provider (Google, Amazon, and Azure all offer prebuilt database as a service offerings), because they have gone through the trouble of making their solution highly available. I’ll speak to Azure, because I’m familiar with the platform–Microsoft doesn’t employ MySQL high availability per se; however, both the storage and the VM are highly available. If there is a disk failure, the service is not impacted, and if there is a failure in the compute tier, a new VM is provisioned.

My second recommendation, if you really, really want to build your own thing, is to build a Windows Server Failover Cluster and use shared storage. Make the MySQL service a clustered resource, and assign a floating IP to the service that will fail over with it. (Yes, I know you have to pay M$ for the Windows Server licenses.)
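
For what it’s worth, the clustered-service part of that is a one-liner with the FailoverClusters module. This is a hedged sketch: the role name, cluster disk, and IP are placeholders, and it assumes MySQL is already installed as a Windows service on both nodes with its data directory on the shared disk.

# Cluster the MySQL Windows service with shared storage and a floating IP (names and IP are placeholders)
Add-ClusterGenericServiceRole -ServiceName 'MySQL' `
                              -Name 'MySQL-Clustered' `
                              -Storage 'Cluster Disk 1' `
                              -StaticAddress '10.0.0.50'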

Why shouldn’t you use an open source solution to make your MySQL database highly available? First let’s look at a picture of a common MySQL high availability architecture:

If we think about what we need a clustering solution to provide, it comes down to a few things:

  • Providing a floating IP address to allow connections to the primary
  • Checking the health of the database services and initiating a failover in the event one of them isn’t healthy
  • Executing a clean database failover and providing the ability to easily fail back
  • Ensuring the stability of the overall cluster, maintaining quorum, and avoiding split-brain scenarios
  • If you are using shared storage, managing that storage to coordinate failover with the services

If you are using SQL Server with Windows Server Failover Clustering, the cluster service takes care of all of the above, and more. When you look to do this on Linux for MySQL, there are about 10 different sets of components you can use to make the service highly available. At the base of all of these solutions is MySQL replication, which is fairly trivial transactional replication. MySQL’s replication service is fairly robust, and the GTID implementation is pretty solid.

The problem is that the rest of the components are all mix and match. You could use HAProxy to float the IP address, but there’s no way to do a smart health check of the database; it simply does a port connection test. Which means that if your primary goes away and then comes back, without some advanced configuration your floating IP is going to fail back to the original primary whether or not it’s actually the primary in your replication pair. This is but one example–you are going to end up with three or four different components to perform each of these functions, and congratulations, you are now in charge of a complex distributed system that you are responsible for administering for the rest of your life.
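
To make the health-check point concrete: a port probe only proves something is listening, while a real check has to ask the database whether it is actually the writable primary. A rough illustration follows (the host name is a placeholder; the MySQL queries are shown as comments since they would run through a MySQL client, not PowerShell):

# What a plain TCP check (HAProxy's default) effectively does: is anything listening on 3306?
Test-NetConnection -ComputerName 'mysql-node1.contoso.com' -Port 3306

# What a useful health check would have to ask the database itself:
#   SELECT @@global.read_only;   -- 0 on the writable primary, 1 on a properly configured replica
#   SHOW SLAVE STATUS;           -- empty on the primary, populated on a replica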

But Joey, Facebook/Google/pick your other favorite online megacorp runs MySQL and supports it with five 9s. Ok, sure, I don’t disagree with this–and as databases, MySQL and PostgreSQL are generally fine. But look around at your engineering staff–wait, do you have engineering staff? If you don’t have a few people with both really good Linux SA skills and DBA skills, you are going to find yourself pretty quickly in a situation where support is a challenge.

Finally, consider whether you need an HA solution at all. Are you running on a virtual machine? As long as your infrastructure is solid, that probably gets you to about 99.5% availability on a bad week. What you absolutely want to avoid is the Windows 2000 paradigm, where your high availability solution incurs more downtime than a standalone system.

Adding an Azure Active Directory User to Azure SQL Database with Automation

My teammate Meagan (b|t) messaged me in Teams yesterday afternoon to say, “Joey, the client created a new database using your automated process, and my ETL user (which is an AAD user) didn’t get created, can you fix it?” Well, after a quick perusal of emails, I remembered that I had asked the client to add the create-user step to their initial population process, which hadn’t happened yet. The reason I did this was that creating an Azure Active Directory user in an Azure SQL Database from Azure Automation was painful and maybe not even possible. So I pinged Rob Sewell (b|t) about the best way to do it. This sounded not that bad to do, but I managed to hit land mines around every corner.

The First Problem

Azure Automation is mostly PowerShell—there is a Python option, but I’ve never used it, and I’m not going to start now. The trick with PowerShell is that it’s great for things you have to do to Azure resources, but far less good for things you have to do inside of databases (think creating a user). I typically use the invoke-sqlcmd cmdlet; however, we have a chicken and egg problem—I can’t create an AAD user from a SQL connection (a connection made using a SQL login), and invoke-sqlcmd doesn’t support authenticating with AAD. The Azure Automation service allows you to import third-party modules from the PowerShell Gallery, so you can use DBATools, which is what I did here. Rob has an excellent blog post here that describes this process.

# Pull the service principal's app id, secret, and tenant id from Key Vault
$appid = (Get-AzKeyVaultSecret -vaultName "beard-key-vault" -name "service-principal-guid").SecretValueText
$Clientsecret = (Get-AzKeyVaultSecret -vaultName "beard-key-vault" -name "service-principal-secret").SecretValue
$credential = New-Object System.Management.Automation.PSCredential ($appid,$Clientsecret)
$tenantid = (Get-AzKeyVaultSecret -vaultName "beard-key-vault" -name "Sewells-Tenant-Id").SecretValueText

# Connect to the Azure SQL Database as the service principal, then run the query
$AzureSQL = Connect-DbaInstance -SqlInstance $SqlInstance -Database $databasename -SqlCredential $credential -Tenant $tenantid -TrustServerCertificate

Invoke-DbaQuery -SqlInstance $AzureSQL -Database $DatabaseName -SqlCredential $credential -Query $query

The code, which I happily stole from Rob’s blog, allows me to connect as a service principal. To easily facilitate this, I made my Automation account part of my DBA group (the Azure AD admin group for the Azure SQL server), which you can assign without this ridiculous process. I threatened to add Meagan’s ETL user to that group, but she was going to out me on Twitter.
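
For context, the $query being passed in is just the standard external-provider DDL, something along these lines, where the user name is a hypothetical stand-in for the ETL account:

# Hypothetical AAD user; swap in the real ETL account and whatever roles it actually needs
$query = @"
CREATE USER [etl-user@contoso.com] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [etl-user@contoso.com];
ALTER ROLE db_datawriter ADD MEMBER [etl-user@contoso.com];
"@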

After running that code, I could connect with my Automation Run As account to my Azure SQL DB, but my query was failing with the following error:

I’m logged in as a service principal there—hence the weird GUID. You can see that I have basically every privilege in SQL Server, but I can’t create a user from an external provider. PowerShell (and Automation) say that the user could not be resolved.

The Next Land Mine

So I DMed Rob and asked him, WTF? It turns out that for this to work, you need to create a service principal for your Azure SQL Database server. If you aren’t familiar with service principals, they are analogous to service accounts in an on-premises world. Doing this was the easiest step in the process—I have a PoSH script that hits every server in my subscription, and it was trivial to add the service principal there as well as to my database runbook. However, that was just the first part.
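
If you want to do the same thing, the assignment is roughly this: a minimal sketch using the Az module, where the key piece is the -AssignIdentity switch on Set-AzSqlServer.

# Give every Azure SQL server in the subscription a system-assigned identity (service principal)
foreach ($server in Get-AzSqlServer)
{
    Set-AzSqlServer -ResourceGroupName $server.ResourceGroupName `
                    -ServerName $server.ServerName `
                    -AssignIdentity
}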

You have to give that service principal the “Directory Readers” permission in Azure AD, and the effective way to do this with automation is to assign the privilege to a group. Well, it turns out that adding Azure AD roles to a group is a relatively new feature (it’s in preview), and more importantly it requires P1 or P2 Azure Active Directory, which has a per-user cost. Which meant I needed to get approval. After much chatter on a DCAC Teams channel, I discovered that since this feature is not user-assigned (e.g., it’s enabled for the entire AAD tenant once it’s enabled), I only had to have one AAD license in the tenant (I assigned it to Meagan). Once that was in place, I could grant the directory permission to the SQL Server service principals group.

Are We Done Yet?

I should have noticed in the documentation provided by the SQL team on assigning groups with PowerShell that there was a reference to the preview PowerShell module for Azure AD (I did, but I didn’t think it mattered because I was just assigning a user to a group). So I thought I had everything wired up when I started getting the following error:

Add-AzureADGroupMember -ObjectId $g.objectid -RefObjectId $miIdentity.ObjectId

Add-AzureADGroupMember: Error occurred while executing AddGroupMember
Code: Authorization_RequestDenied
Message: Insufficient privileges to complete the operation.
RequestId: 62301512-a42f-4d00-a798-4e1bd4062df8
DateTimeStamp: Tue, 25 Aug 2020 13:14:08 GMT
HttpStatusCode: Forbidden
HttpStatusDescription: Forbidden
HttpResponseStatus: Completed

I have Global Admin and subscription owner in the two environments I was testing in, so clearly this wasn’t a permissions issue. To further prove that point, I was able to add the service accounts I had created to the group through the Azure portal. So, after writing about three emails with my further discoveries to the Azure MVP distribution list (I could add the service principal to a regular group, just not one with a role assigned to it), I went back and decided to play with that preview module.

Everything up to this point is me being an idiot, but I’m going to yell at Microsoft for a second. I couldn’t install AzureADPreview on my Mac because it’s dependent on WinForms—I thought the Az modules were all supposed to be built on .NET Core. I also couldn’t get it to run in Cloud Shell, which may or may not be related to the WinForms thing.

I do have a Windows VM, so I installed the module there, and it worked successfully against the DCAC tenant. Then I went to Azure Automation to install the module. If you’ve never imported a module into Azure Automation, you should know that the portal notification about a module import being complete is meaningless, because Azure Automation is a lying liar who lies.

Look on the Modules page and hit refresh a lot; it usually takes one to two minutes for a module to import. Once it was in, I messaged Kerry in Teams.
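
(As an aside, if you’d rather script the import than trust the portal notification, you can push the module from the PowerShell Gallery and poll for completion yourself. A sketch, with the resource group and account names as placeholders:)

# Import AzureADPreview into the Automation account straight from the PowerShell Gallery
New-AzAutomationModule -ResourceGroupName 'rg-automation' `
                       -AutomationAccountName 'aa-dcac' `
                       -Name 'AzureADPreview' `
                       -ContentLinkUri 'https://www.powershellgallery.com/api/v2/package/AzureADPreview'

# The import is asynchronous; keep checking until ProvisioningState reports Succeeded
Get-AzAutomationModule -ResourceGroupName 'rg-automation' `
                       -AutomationAccountName 'aa-dcac' `
                       -Name 'AzureADPreview'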

And what do you know? It worked. I was concerned and about ready to murder someone, but it worked. Rob’s code is really helpful, and he covers Key Vault in his post. I did have some GUIDs exposed in some of my code screenshots; it’s cool, those aren’t sensitive. Regardless, you should store all of your secrets in Key Vault, as it’s fast and easy.

The other thing I learned in this process is that you can now make a guest user your Azure Active Directory admin (this means I could make joey@dcac.com or jdanton1@yahoo.com an admin in the joeydantoni.com tenant), which you weren’t able to do before. Prior to this, you could use a group and add a guest user to that group, as I mentioned above. (Note: you should still use a group and not a single user, as that’s the best practice.)

What Do You Want Your Professional Association to Be?

I wanted to write a post about the things I think PASS does well, things that are foundational to the future with or without the PASS organization. Before I address that, I want to speak to an ethical matter related to the new PASS Pro service offering. Every year for PASS Summit, speakers sign an agreement (which, if you sign, print it out—they don’t email you a copy, nor can you access prior years’ agreements in the speaker portal) that includes giving PASS the right to the recording of your session. This has always been there, so that PASS could sell online or USB access to session recordings. A few years ago, PASS started selling this access to people who didn’t attend PASS Summit, which I was fine with—the context of that is still the conference.

With the PASS Pro offering, PASS took all of the sessions that community and Microsoft speakers delivered at Summit 2019 and bundled them as part of a paywalled offering. Speakers were not asked for their permission to reuse this content as part of a for-profit subscription model, nor were they compensated. Which in both cases is probably legal? However, I strongly question the ethics of it—when a community event morphs into a paid service, most speakers (myself included) want a cut of the take, which is only fair, as it’s a lot of work to speak at a conference. PASS will say that “all of this money is going back to the community,” which is frankly bullshit. First, the platform they are hosting PASS Pro on is not free, and then there is whatever C&C is charging PASS for this project, which is likely significant. As I’ve mentioned before, the community typically sees about 5% of PASS’ revenue, and the revenue numbers for PASS Pro make that an absolute pittance, while potentially alienating community members. This makes me think hard about speaking at future PASS events.

Stop Bitching, What Did You Really Want to Write About?

The SQL Server community has been part of my life for the last 12-plus years—it’s why I have my current job, why I have most of my professional network, and why I’ve spoken all over the world. PASS’ motto is “Connect, Share, Learn,” and I think it’s a good one that should be the goal of any community organization. Let’s talk about the things that make up that community:

  • Conferences
  • SQL Saturday Events
  • Virtual Chapters
  • User Groups
PASS Summit 2011, First Timers Event

Conferences

Having a centralized community organization like PASS has some benefits. The biggest is the ability to have PASS Summit, a mostly independent, community-organized conference that allows us to take a deep dive on the Microsoft Data Platform (sorry to use a marketing phrase, but it’s a good description of the topics covered) over a week’s time. If you haven’t attended a vendor conference like Microsoft Ignite, it’s a very different experience compared to PASS Summit. The sessions are more marketing driven, and while you have excellent access to Microsoft engineering staff, you aren’t going to get precons that dive deep into topics like high availability and query optimization, and you won’t see nearly as many community speakers offering a real-world perspective.

Having a big conference is a good thing, and it’s something PASS has done fairly well; it would be a major loss if PASS were to fail. Running a big conference is expensive and hard, and a replacement would likely only happen with vendor support, or over several years of growth from a smaller conference into a big one. This is a US-centric view, as SQLBits and Data Platform Summit in India have been running pretty large-scale conferences for several years.

SQL Saturday Events

SQL Saturdays are awesome—they provide a great starting point for a new speaker. There’s even less pressure than at a user group, because your attendees are typically divided between a few tracks. I also have a fondness in my heart for them, as they are where I grew my career and gained organizational skills by running a successful event for several years. However, they don’t need PASS to be a success. PASS in recent years has deemphasized SQL Saturday because of a flawed notion that the events were cannibalizing Summit attendance (this may be true on the margins, to which I would say, make Summit better). While having centralized infrastructure for these events is nice, the website is an absolute trainwreck and should probably be refactored. Numerous SQL Saturdays in Europe have become independent events, without issue—sites like Sessionize make it really easy to run an event. I foresee a bit of a lull, but these events can run well without a centralized org—just look at Code Camp.

User Groups

Even more so than with SQL Saturdays, I do not see the loss of a centralized org having any impact on user group meetings. In recent years, the only services PASS has offered user groups are web hosting and some funding tied to Summit discount codes, a program which has gotten more limited over time. User groups, by their nature, are fairly independent entities. I look forward to speaking at any UG post-COVID—having free pizza and meeting a bunch of data pros is always a good time.

Virtual Chapters

As you may have noted in reading my blog, I tend to be cynical about PASS and C&C. However, in researching this post, I noted that (as of Monday) PASS Virtual Chapters have provided 173 hours of free training in 2020. Gathering the data is a bit arduous, so I didn’t capture previous years’ numbers, but I was really impressed at the amount of training in one place. There are other virtual events (especially this year), but having a central place and organization is a real benefit to the organization and the SQL community.

What Does All of This Mean?

This means PASS’ core competencies are running Summit and the Virtual Chapters. (Wow, I feel like that was the most MBA thing I’ve ever written.) Any organization going forward needs to have those structures in place. Summit provides a revenue source that can allow everything else to proceed. It also means that trying to provide a paid online training and certification service lies outside of those competencies and shouldn’t continue.

However, the challenge PASS faces (and it is ultimately tied to its survival) is that Summit is not going to have the same level of revenue for at least the next two years, and expenses haven’t dropped significantly. In recent years I’ve heard a common refrain from board members—PASS Summit was “too big for a small event management company, and not big enough for a large event company.” Since PASS Summit is going to be smaller, at least in the medium term, perhaps now is the time for a change to save the organization.

I’d welcome comments on anything I missed or your thoughts on what your most meaningful community experiences have been.

Managing Statistics in Azure SQL Database Serverless

One of the few things platform as a service databases like Azure SQL Database do not do for you is actively manage column and index statistics. While backups, patching, and even integrity checks are built into the platform, managing your metadata is not. Since Azure SQL Database lacks a SQL Server Agent for scheduling, you have to use an alternative for job scheduling. One of the more common approaches is to use Azure Automation runbooks, which are PowerShell (or Python) jobs that run from within Azure. I like to call Automation “cron for Azure.” My good friend Erin Stellato (b|t) just recorded a YouTube video that walks through the process of setting up an Azure Automation runbook for stats maintenance. There are a lot of steps to get your runbook up and running, and if I had a very small environment, I might just recommend using an Azure Logic App with a schedule trigger–for a handful of databases, you could be up and running in a few minutes.

silver and gold coins
Photo by Pixabay on Pexels.com

However, in my case I’m working in a customer subscription, and I need my solution to automatically deal with all of the databases in that subscription. My customer has a data as a service model and has been using the “serverless” tier of Azure SQL Database in order to reduce their compute costs. The serverless tier is effectively auto-close (and auto-scale) for Azure SQL Database. The gateway is what you actually connect to (yourdatabase.database.windows.net is a public IP that in turn routes to your actual database(s)), and when you connect to that gateway and the database is paused, Azure begins the process of turning your database back on–this can take up to 45 seconds, which means the first connection will most likely fail.

I was running into failures, so I looked at my code, and made some minor modifications.

# Assumes an authenticated Az context in the runbook, and that $adminlogin and $pwd
# (the SQL admin credentials) are defined earlier, e.g. from Automation variables or Key Vault
foreach ($rgs in (Get-AzResourceGroup).ResourceGroupName)
{
$svr = (Get-AzSqlServer -ResourceGroupName $rgs).ServerName
#write-host 'rg:'$rgs
foreach ($svrs in $svr)
{
$sql = $svrs + '.database.windows.net'
write-host $sql
# Note: this assumes a single user database per server
$d = Get-AzSqlDatabase -ResourceGroupName $rgs -ServerName $svrs | Where-Object {$_.DatabaseName -ne 'master'}
$db = $d.DatabaseName
$servicetier = $d.RequestedServiceObjectiveName
$size = $d.maxSizeBytes

if ([string]::IsNullOrEmpty($db) -eq $false)
{
# Throwaway query just to wake up a paused serverless database--this one is expected to fail
$query = 'SELECT @@SERVERNAME;'
$Params = @{
'ServerInstance' = "$sql";
'Database' = "$db";
'Username' = "$adminlogin";
'Password' = "$pwd";
'Query' = "$query";
}

# write-host $Params.Query
Invoke-Sqlcmd @Params -ErrorAction SilentlyContinue

# Give the database time to resume before running the real maintenance
Start-Sleep -Seconds 45
$query = "EXEC dbo.usp_AdaptiveIndexDefrag;"

$Params = @{
'ServerInstance' = "$sql";
'Database' = "$db";
'Username' = "$adminlogin";
'Password' = "$pwd";
'Query' = "$query";
}

Invoke-Sqlcmd @Params
}
}
}

In this case, I’ve made my first query select @@servername, but you can issue any query–it’s going to fail anyway. I then add a sleep command in PowerShell; I’m using 45 seconds, but you could probably drop that to 30 seconds if you have a tight time window. After that, I run the AdaptiveIndexDefrag script that Pedro Lopes (t) has written.

In Order to Save PASS, We Need to Fire C&C

Let’s talk through the current situation PASS faces. Your business has had its main revenue source devastated by a raging pandemic. To raise revenue from other sources, you plan to announce a new online learning and networking service. Except your website is down for five hours on the morning of the rollout, and even after that, there’s still no mention of the new service on the front page of the site. It seems like the term “botched rollout” comes up quite frequently when it comes to our community organization, more often than not in fact. In normal times we can laugh, but when the organization is facing financial peril, execution and decision making are absolutely critical. In this post, I’ll talk about PASS, their new service offering PASS Pro, and my vision for saving the organization.

Let’s Just Talk About Economics

Before I share my opinions on the Pro membership offering and how it could work, let’s talk about the basic economics of the PASS organization. PASS’ major source of revenue is Summit—in 2018, revenue was about $9MM USD, and Summit made up about $8.9MM of that (thanks to Steph Locke for her excellent research into the budget). Summit isn’t happening in person in 2020—the conference is going virtual. And given the reality of a global vaccine distribution process and depressed business travel budgets, even if it happens in 2021, revenues will likely be down. That places PASS in the awkward situation of trying to run a conference whose costs may significantly exceed its revenue.

Focusing on 2020 for now: PASS Summit is virtual, as are VMworld, AWS re:Invent, and Microsoft Ignite. Ignite and re:Invent, which are typically around the same cost as the physical Summit, are free as in beer, and VMware is using a freemium model that only costs $299. Virtual PASS Summit costs $599, or $999 with precons. I’m not pointing out these differences in cost to be petty—those other conferences are subsidized by the vendors who run them, and you won’t get the deep level of database content you’ll see at PASS Summit. However, those other conferences present headwinds that could slow registrations for the virtual PASS Summit. If PASS were to have 2,000 attendees who all bought precons, they would only have $2MM of revenue. I think 2,000 attendees is probably optimistic given Zoom fatigue, the fact that other events are free, and the depressed economy, but we’ll run with that number.

The Other Problem

As Steph mentions in her post, and I’ve mentioned before, C&C is a for-profit consulting firm that performs event management and community management services for PASS. There is no meaningful difference between PASS and C&C. PASS board members have stated that C&C has hired staff just to work with PASS full-time (which brings into question labor laws about contractor status in both the US and Canada, but I’ll leave that to the lawyers). In fact, in that same post, C&C is referred to as PASS HQ. C&C, in a typical year, charges PASS around $4MM for services. You should also note that no PASS board meetings happen without C&C representation; C&C has a non-voting seat on the PASS board, in the form of their CEO, Judy Christensen.

For those of you who are math majors, or are business intelligence specialists, you’ll note that $2MM revenue < $4MM expenses. PASS does have some cash reserves, but they won’t last forever, and frankly, why should our community continue to subsidize a private firm while we are losing money? The search for other revenue sources is a good idea, but it is six months late and millions of dollars short.

PASS Pro Membership

Yesterday PASS announced a new Pro Membership model. I know a lot of work went into this effort over the course of many months, and the mission is noble. I have colleagues who are developing content for the platform that I think is quite good. However, there are a few problems. First, I know of a few community members who in recent months were asked to host community networking events without being told these events were associated with a paid service. As someone who speaks and writes for part of his living, I’m fine with doing something for free to help a community organization (I’m receiving no compensation for speaking at PASS Summit), but when something exists purely to support a for-profit effort (even community leaders have to pay for access to the new offering), to quote a wise man, “#$%@ you, pay me.” The bigger ethical issue is that these community members were not informed that these “Community Connects” events were for a paid service. This turns a community organization into a for-profit business (which is mainly what it was before, but this makes it explicitly clear).

Beyond those issues, I just can’t see this working as a successful business (and I hope I’m wrong). PASS Pro is competing, on both price and business model, with Pluralsight and LinkedIn Learning (formerly Lynda; disclosure: I have training videos on LinkedIn Learning and DCAC is compensated for them). PASS also faces competition from other folks in the DBA space who have their own successful private training services. Those competitors have a wide base of training offerings and a steady pipeline of content, and more importantly, the first two are public companies with large coffers to pay content creators to keep that pipeline full. While PASS is paying content creators for the new service, this model is likely unsustainable over time without major capital investment to fund content creation.

With all that being said, let’s just run some numbers. While PASS always talks of the 350,000 people on their email list, that number is just a dream. The data quality is awful, and everyone who is involved with the community knows the actual number is an order of magnitude lower. I’m going to be generous and say PASS has 50,000 active members (I personally think the number is closer to 10-15K). If they get 20% adoption at $120/year, that represents revenue of $1.2 million/year. This would be a significant revenue enhancement; however, in a crowded marketplace, I just don’t see 10,000 people signing up for the service, nor do I have faith in C&C to build a service that’s attractive to a broad audience. In my decade-plus of working with PASS, while Summit has been run successfully, nothing else has.

To the Botched Rollout

The website going down on the morning of the launch was bad luck. However, it’s part of a pattern of systemic failures we’ve seen throughout the years of C&C’s leadership of PASS. Computers break, but not having monitoring and automation in place to repair your website is just poor form for an IT professional organization. The bigger failure is that PASS hasn’t clearly defined what the benefits of the Pro membership are. The blog post defines the following benefits:

  1. Download access to PASS Summit 2019 Learning Pathways
  2. Exclusive partner discounts
  3. PASS Community Connects networking sessions
  4. A PASS Pro Member badge

I received a copy of the email sent to community leaders announcing this service—there are a number of “we aren’t sure yet” and “this service is new and we’re figuring things out” statements in the email that tell me the business model and the service were not fully thought out. There’s also mention of new features, including an educational series with no clear definition of what that is going to look like, and a PASS Training Record (which, based on surveys I’ve taken, is probably leading toward a certification of some kind, something that is exceptionally hard and expensive to execute well, especially in a cloud world where exams need revisions every few months). This is a pretty limited set of benefits compared to Pluralsight, which already has a fully featured offering, with certification preparation and an excellent website/mobile experience.

I like the idea of diversifying revenue streams; however, this offering is not fully baked and, in my opinion, is unlikely to be successful (I hope I’m wrong). Additionally, the offering was only introduced in a blog post, is not on the front page of pass.org, and, as of 24 hours after the announcement, has not been emailed to members.

You Talk a Lot, What Would You Do?

If I were on the PASS board (and I’m thinking about running for the PASS board), I would do everything I could to reduce costs dramatically over the medium term. I would pursue termination of the contract with C&C, and probably look to downscale Summit in 2021 to something more at the level of a SQL Rally event. I just don’t see conference attendance returning to pre-pandemic levels in 12-14 months. At that point, you are reducing costs to the level of hosting the websites for SQL Saturday, the Virtual Chapters, and the other properties. I would start planning for a 2022 Summit back at the level of 2019, and look at every other opportunity to reduce overhead. These are desperate times for the community org, and I care far more about preserving the organization than propping up a for-profit consulting firm.

Practical Security for Your Enterprise Application /s

I’ve written a bunch of blog posts on the right way to architect your cloud or on-premises network applications. Those are pretty popular posts–but we continue to see organizations (ahem, Garmin) have major problems with ransomware. Security is hard, and as we’ve seen in the last few months, people can’t even do easy stuff like wearing a mask to prevent the spread of a deadly pandemic. So let’s talk about some security stuff you can do that doesn’t require any effort on your part. I recently had a chance to talk to Flip Jupiter, CTO at a stealth startup Software as a Service company, about their security practices.

SQL Server Runs Faster as SA

This is a secret I only tell my best clients, one that Microsoft hides from you. You see, when you log in to SQL Server and execute a query, there’s an in-memory cache that gets checked to see if you have permissions on an object. You know how to fix that problem? Make everyone SA and you bypass that cache, and your workload can now run .00004% faster. You also probably want to run all of your queries with NOLOCK hints, because everyone knows locks are stupid and slow everything down.

Firewall, Shmirewall

Many organizations have tiers in their networks and limit traffic between those tiers. While this is typically enforced at the network layer, operating systems like Windows and Linux have software firewalls, which provide an additional layer of overhead that you have to deal with when trying to connect disparate systems. If you keep these “security enhancements” in place, it means you actually have to understand your network traffic and know which ports talk to which other ports. Ain’t nobody got time for that.

vintage black windmill during sunset
Photo by Tom Swinnen on Pexels.com

Networks should have the Topology of the Netherlands

Many organizations have layered networks that limit the flow of traffic from one segment of the network to another. This requires a network administrator who knows what they are doing, and means you probably can’t play Candy Crush on the same network your production servers are on. Worse, it means you might actually have to use a jump host, or learn how to run commands remotely. That sounds hard–I really prefer to RDP directly to my production servers. (Pro tip–if you give your servers a public IP address, you can RDP to them without “the man” monitoring your web traffic over the company VPN.) It also means you should be able to access all of your customer data from every laptop in the company; we don’t want to delay our metadata sales process.

Patches? We don’t need no stinking patches

We’re a serious enterprise, and that means our servers need 12 9s of uptime (yeah, I know, but we didn’t have budget for that second node, or a second data center). And since, one time, my cousin’s friend’s sister bricked her eight-year-old laptop because of a Windows Update bug (or maybe the battery died, but who can be sure), we can’t risk the downtime to patch production. Everyone knows software is best the moment it’s shipped and only gets better with age, which is why our production workloads run on Windows 2003 and SQL Server 2005.

Security Team? We need more Devs

Security is such a boring topic–you can tell because no one goes to security talks at IT conferences. Besides, security features don’t sell–everyone knows that. So we killed our security team and replaced them with a team of offshore devs. We saved money, and those security people were always nagging us about our network. The offshore dev team is writing some excellent code with their Entity Framework toolkit.

Kerberos, or Spot as we call it.

One of the key tenets of Active Directory is Kerberos, which is named for the dog that guards the gates of Hades in Greek mythology. We like to call him Spot. Kerberos is complicated, with its identities, SPNs, and that damned double-hop problem. We solved this by making all of our users domain admins. Some people say this might be a “security risk,” but we trust all of our users, and we know they won’t do anything like click on a phishing email or download software from the Russian mafia.

You Made it This Far, What’s the Punchline?

In case you haven’t figured it out yet, Flip Jupiter is not a real person, and I’ve just walked through six of the worst security practices you can possibly adopt. You should never, ever do any of these things in real life (though I’ve shut down the Windows Firewall in way too many demos because I procrastinate). Security should be front of mind for all IT professionals, whether they are administrators, developers, DevOps, or CTOs. Security isn’t a tool you can bolt onto your killer app; you need to think that way from the beginning. Note for those of you who are really dense: this post is sarcasm, and you shouldn’t do any of this.

Running Scheduled Tasks Against Azure SQL Database

If you’ve worked with Microsoft SQL Server for any period of time, you are familiar with the SQL Server Agent. The Agent, which has remained mostly unchanged since I started working with it in 1999, is a fairly robust job scheduler that can also alert you in the event of job failures or system errors. I feel it’s a testament to the quality of the original architecture that the code hasn’t changed very much–it still meets the needs of about 90-95% of SQL Server workloads, based on an informal Twitter discussion I had a few months ago. There are some cases where an enterprise scheduling tool is needed, but for maintaining most SQL Servers and executing basic ETL, the Agent works fine. There’s one problem–the Agent is only available in SQL Server and Azure SQL Managed Instance.

colorful toothed wheels
Photo by Digital Buggu on Pexels.com

The lack of an agent, along with the lack of cross-database queries (I know you can use elastic query; it’s not the same thing), has been one of the major barriers to entry for Azure SQL Database. There are a few options available, depending on the nature of the operations you are running. I’m not going to walk through how to build each of them–that’s a different post. I will talk about the costs and benefits of each option.

Automation Runbooks

Automation has been a part of Azure since at least around 2013, and perhaps even longer. The main component is called a runbook, which houses your code and can be scheduled or manually executed. Runbooks can be PowerShell, Python, or graphical. The graphical runbooks are pretty limited in what they can do, and I’ve never seen a Python runbook for SQL Server operations in seven years. Automation has a decent learning curve–it’s not easy; for example, after creating your account, you will need to import all of the PowerShell modules you need to use. And there is the wonkiness of executing SQL commands via PowerShell–you can use invoke-sqlcmd, or import DBATools and use some of the options there. The schedule has a minimum recurrence of an hour–which is fine for most maintenance activities, but probably won’t work for more granular operations.

The other thing you should know about Automation is that you need to have your PowerShell code written and mostly working before you test it in your Automation account. There are a couple of commands you can only run in the context of a runbook (specifically around connecting to Azure itself–if it’s just a SQL query, this is pretty trivial), so you want to have your basic code working before you have to troubleshoot the Automation path. You can connect to Azure from within your runbook–this is particularly useful if you need to run a query against all of the databases within a subscription. The alerting process for job failures is challenging–failures aren’t always captured correctly by the Automation framework, so you need to build your own error handling.
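
As an example of the “only works inside a runbook” piece, the usual pattern for connecting to Azure is to pull the Run As connection and authenticate as its service principal. A sketch, using the default connection name Automation creates:

# Get-AutomationConnection only resolves inside a runbook, which is why you test this part last
$runAs = Get-AutomationConnection -Name 'AzureRunAsConnection'

# Authenticate to Azure as the Run As service principal using its certificate
Connect-AzAccount -ServicePrincipal `
                  -Tenant $runAs.TenantId `
                  -ApplicationId $runAs.ApplicationId `
                  -CertificateThumbprint $runAs.CertificateThumbprint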

With all that said–automation is probably the most robust option for large scale operations around Azure SQL Database. Which brings us to Logic Apps…

Logic Apps

Logic Apps, Azure Functions, and Power Automate all offer similar functionality at different levels of abstraction and code writing. Functions are, generally speaking, purely code and provide a great deal of flexibility, whereas Power Automate lands on the almost “no code” side of things. Logic Apps fall squarely in the middle, letting you use some code while allowing for “development by click.” To compare Logic Apps to Automation: it took me about 30 minutes to build a runbook this morning to execute some queries against a SQL DB (I had the PoSH written already), whereas it took me about 5 minutes to build a Logic App to do the same tasks. Logic Apps can be triggered by a schedule (which has per-minute granularity). Logic Apps aren’t quite as robust in terms of scoping–it would be really painful to build an app that queried every database in my subscription; I would probably have to build a large series of Logic Apps to do that.

A couple of things to note with Logic Apps: SQL Server = Azure SQL Database–they use the same connector–and it’s a little confusing, as Azure SQL DW (yes, it’s Synapse Analytics now; the connectors haven’t been updated) has its own set of connectors. If you need to repeatedly run a query (or a small set of queries) against a single database, this is your best option.

Elastic Jobs

Elastic Jobs are an Azure SQL construct that requires a main job database to orchestrate jobs across the other databases. While there is a portal option to create your job server, the jobs themselves have to be defined using either T-SQL or PowerShell. While Elastic Jobs are GA, and the jobs database effectively has an Agent schema offering very similar functionality, the lack of a GUI and of broad community adoption has limited their uptake. I tried to implement them for a project I was working on and ultimately gave up, because the documentation was limited and I had Automation code that just worked. I would really like to see better support for Elastic Jobs in the Azure portal–it should be really easy for a user to deploy a job from either the context of their job database or the database they are working on. I think this would be the best solution for Azure SQL Database scheduling.
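
For the curious, defining a job is all stored procedure calls against the job database. The pattern below is a sketch based on my reading of the docs rather than a working deployment; the server, group, job, and credential names are all placeholders.

# Run against the elastic job database; authentication parameters are omitted and all names are placeholders
$jobSetup = @"
EXEC jobs.sp_add_target_group @target_group_name = N'AllDatabases';

EXEC jobs.sp_add_target_group_member
     @target_group_name = N'AllDatabases',
     @target_type = N'SqlServer',
     @refresh_credential_name = N'refreshcred',
     @server_name = N'yourserver.database.windows.net';

EXEC jobs.sp_add_job @job_name = N'NightlyStats';

EXEC jobs.sp_add_jobstep
     @job_name = N'NightlyStats',
     @command = N'EXEC sp_updatestats;',
     @credential_name = N'jobcred',
     @target_group_name = N'AllDatabases';
"@
Invoke-Sqlcmd -ServerInstance 'yourjobserver.database.windows.net' -Database 'jobdatabase' -Query $jobSetup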

You Should Push Back on Your Software Vendors

I’ve seen two Twitter discussions in the last two days about terrible software vendor practices. The first was a vendor who wanted to install binaries on the C: drive (and only the C: drive) of the server hosting the SQL Server database for the application. The other was a vendor who didn’t support using replication to another database to report against their database. Both of these scenarios are terrible–database servers should really only run database services, and it’s none of your software vendor’s business what you do with your transaction log.

blue jeans
Photo by VisionPic .net on Pexels.com

Speaking of the software vendor’s business, let’s talk about the business model of software vendors. For the most part, smaller vendors don’t make their profits on your initial license purchase; instead, they charge an annual maintenance fee (sometimes under a different name, like support, or in the case of Microsoft, Software Assurance). As part of this agreement, you are typically entitled to patches, security fixes, new versions of the software, and in some cases support tickets. In order to stay supported, you need to agree to a certain set of requirements from the vendor.

This is a lucrative business–software has very high profit margins, making it a target for investors, private equity, and venture capital. The latter two can do bad things to otherwise good companies in order to extract every penny of profit out of them. This can include laying off core engineering staff and replacing them with much cheaper offshore resources who, while good engineers, aren’t familiar with the core architecture and, more importantly, the use cases of the product. They may also cut testing resources, so the program is only “certified” on older versions of database and operating system software. Private equity has done terrible things to a lot of businesses, and software isn’t exempt from that. Read this article about a company that has acquired a bunch of zombie independent software vendors (ISVs) and just milks support fees for profit. Seriously, read that article.

There are some good ISVs out there, but they are few and far between, and at all times you need to remember that you are their customer and they work for you. That doesn’t mean you can yell at a support engineer on the phone, but when they tell you that you can only run their software on SQL Server 2005 on Windows 2003, and oh yeah, they need the SA account for their application to run, you should push back.

A lot of DBAs I encounter are too timid to do this–the business needs some app for manufacturing widgets, and the widget team just wants that app, even if the vendor insists the backups be shipped to a server in Russia. I will say this–pick your battles. It’s not worth arguing about something like a MaxDOP requirement (unless there’s an obvious performance problem), but when the vendor says you can’t use Availability Groups with their app, or wants to trap you on a legacy version of the RDBMS, you should push back. The other #sqlhelp thread I saw was one where someone wanted to build a log reader to apply transactions to a secondary database for reporting, because the vendor didn’t support replication. That’s stupid–you’d be building a very fragile system when SQL Server has a perfectly good feature (transactional replication) to do the same thing. Or even the vendor who wanted to install software on the SQL Server. No. Just say no.

In summary–you own your software, and you manage your database environment. You shouldn’t do anything that puts your business’s data at risk, but at the same time, you want to manage your environment in as consistent a fashion as possible, while adhering to best practices. https://www.forbes.com/sites/nathanvardi/2018/11/19/how-a-mysterious-tech-billionaire-created-two-fortunesand-a-global-software-sweatshop/#38fe205e6cff