Colorado DOT Post Mortem–Don’t Do Stupid #$%^

A good policy in your technology career, or life in general, is, to quote a former president, "don't do stupid shit". If we look historically at major IT outages or data breaches, there are always some universal tenets–some piece of critical infrastructure was left completely unsecured, or someone put the private keys on GitHub. In fact, just last week I wrote my column for Redmond Mag on some common patterns, mostly specific to SQL Server, but applicable to many other types of applications. I wanted to call it Data Breach Bingo, but I didn't have enough space in my column for 25 vulnerabilities.


While we're here, it's important that when thinking about security, you think about the basics first. The GRU operatives working for the Russian government likely aren't targeting your data–it's some bot network looking for a blank SA password on a SQL Server with port 1433 open to the internet. If you are on Twitter, I highly recommend following SwiftOnSecurity, a rather serious parody account (of Taylor Swift) that focuses on good security practices. The person behind that account also runs an excellent website called DecentSecurity.com that covers the basics of how you should be securing your personal and business environments. While you might not be protected from the GRU, you can avoid most of the other "hackxors" in the world that are really just dumb bots.

The reason I wrote this post is that my colleague Meagan Longoria (b|t) shared with me a post-mortem from a ransomware attack on the Department of Transportation in the state of Colorado. I would really like to applaud the state for putting out this document, even though it doesn't paint the organization in the best light. I'm going to use this space to talk about some of the things highlighted in the report.

"A virtual server was created on February 18, 2018. The virtual server was directly connected into the Colorado Department of Transportation (CDOT) network, as if it was a local on premise system. The virtual server instance also had an internet address and did not have OIT's standardized security controls in place."

Ok, this sounds pretty much like they deployed a VM to either Azure or AWS, with a network connection back into their on-premises network. That in and of itself is not problematic, but not applying security policy in the provider is a problem. Both Azure and Amazon have standard security templates and policies that can be configured and applied at deployment.

“The account utilized to establish the connection into the CDOT network was a domain administrator account – this is the highest level privileged account, and means that 1) the account cannot be disabled for too many failed login accounts, and 2) it provides the highest level of access to the agency domain controllers (gatekeepers for all access to everything in the department).”

There's a lot going on here. I read this as meaning they joined the server to the Active Directory domain, and used a domain admin account to perform the join. Neither of these is problematic on its own. The next sentence is wrong–domain admin accounts can absolutely be disabled. However, this indicates to me that the department had the "never disable this login" checkbox checked. Bad move. Also, in an ideal world, you would never have a domain admin log into member servers, so its password would never be cached in memory on those machines, but that's not how this (or many) hacks work.

“Later, OIT was informed by the vendor that when an external IP address is requested, the vendor automatically opens the Remote Desktop protocol to the internet. The Remote Desktop protocol is how this attack was initiated.”

This is my biggest complaint about both Amazon and Microsoft Azure: especially in the timeframe of this attack, both services defaulted to giving VMs a public IP address and didn't actively discourage public ports from being opened. While exposing RDP to the internet is a terrible practice that should be prevented by policy, up until a few months ago on the Azure side, it was very easy to do.
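If a VM does end up with a public IP, a network security group rule can at least close the RDP hole. Here's a minimal sketch using the Az PowerShell module; the resource group and NSG names are hypothetical:

# Assumes the Az.Network module; the NSG and resource group names are placeholders
$nsg = Get-AzNetworkSecurityGroup -Name "cdot-vm-nsg" -ResourceGroupName "cdot-rg"

# Explicitly deny inbound RDP (TCP 3389) from the internet
Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "Deny-RDP-From-Internet" `
    -Access Deny -Protocol Tcp -Direction Inbound -Priority 100 `
    -SourceAddressPrefix Internet -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 3389

# Persist the rule change to the NSG
Set-AzNetworkSecurityGroup -NetworkSecurityGroup $nsg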

“An attacker discovered this system available on the internet, broke into the Administrator account using approximately 40,000 password guesses until the account was compromised. From there, the attacker was able to access CDOT’s environment as the domain administrator, installing and activating the ransomware attack.”

This is a statement that will sound somewhat impressive to lay people. 40,000 password guesses sounds like a whole lot, however for an automated script that would take seconds. This means they either had approximately 3 character passwords (I’m guessing they didn’t), or the domain administrator was using a common dictionary based password. You can use add-ons to Active Directory to limit the types of passwords users can use, and while this may be painful for normal users, it is absolutely critical for high privilege users like DBAs or System Admins. They should also have separate user accounts for logging in as domain admin, and multi-factor authentication configured.
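Built-in fine-grained password policies won't catch dictionary words on their own (that's what the add-ons are for), but they do let you enforce longer passwords and lockout for privileged accounts. Here's a hedged sketch with the ActiveDirectory PowerShell module; the policy name, values, and target group are only examples:

# Assumes the ActiveDirectory module; the name and settings are illustrative, not a recommendation
New-ADFineGrainedPasswordPolicy -Name "PrivilegedAccounts" -Precedence 10 `
    -MinPasswordLength 20 -ComplexityEnabled $true `
    -LockoutThreshold 10 -LockoutDuration "00:30:00" -LockoutObservationWindow "00:30:00"

# Apply the policy to the group that holds your domain admin and DBA accounts
Add-ADFineGrainedPasswordPolicySubject -Identity "PrivilegedAccounts" -Subjects "Domain Admins"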

This is also the part where I would usually say you should have network segmentation configured to block the malware from spreading, but if someone pwns your domain controllers, all the segmentation in the world won't help that much. An attacker could even go as far as to install their malware via group policy. Good luck cleaning that one up.

The state of Colorado had a really robust backup solution, and some good network practices (smart firewalls) that were able to limit the spread of the malware, so they were able to recover from this attack relatively quickly. It's important to worry about the big things, but doing all of the little things right, and most importantly not doing stupid things, will go a long way toward securing your environment.

 

 

Azure SQL Database Price and Performance vs Amazon RDS

Yesterday, GigaOm published a benchmark of Azure SQL Database as compared to Amazon's RDS service. It's an interesting test case that tries to compare the performance of these platform-as-a-service database offerings. One of the many challenges of this kind of study is that the product offerings are not exactly analogous. However, GigaOm was able to build a test case using similarly sized offerings, as shown in the image below.

[Image: the comparable Azure SQL Database and Amazon RDS configurations used in the benchmark]

GigaOm's test was derived from the TPC-E workload, a blended OLTP workload that involves both read-only and update transactions. In the raw performance numbers, SQL DB was a little bit better in terms of overall throughput, but where Azure really shines in comparison to AWS is pricing.

In terms of raw pricing, there is a significant difference between Azure SQL DB and AWS pricing for these two instances. Azure is $25,000/month less; however, it's not quite an apples-to-apples comparison, as Azure SQL DB is a single-database service, as opposed to RDS, which acts more like Managed Instance or on-premises SQL Server. If you use the elastic pool functionality in Azure SQL DB, which costs the same as single database SQL DB, you get a better comparison.

[Image: monthly pricing comparison, including Azure SQL Database elastic pools]

Where the real benefits come in are that Microsoft allows you to bring on-premises licenses to Azure SQL Database, and the fact that Microsoft offers a much lower cost for three-year reservations (which you can now pay for on a month-to-month basis). A big advantage that Microsoft has over Amazon is that it controls the licensing for SQL Server. You can argue that this isn't fair, but it's Microsoft's ball game, so they get to make the rules.

There will always be an inherent feature advantage to being on Azure SQL DB over AWS RDS, as Microsoft owns the code and can roll it out faster. While AWS is a big customer and will be quick to roll out new updates and versions, it is still subject to Microsoft's external release cycle, whereas Azure is not. In my opinion, if you are building a new PaaS solution based on SQL Server, your best approach is to run on Azure SQL DB.

Cloud Field Day–Solo.io #CFD6

As I mentioned, I was in Silicon Valley a couple of weeks ago for an analyst event, and got to meet with a variety of companies. The final company we met with on Friday was Solo.io, and I have to say they knocked it out of the park. Their technology was super interesting, and their founder, Idit Levine, and their CTO, Christian Posta, were excellent presenters who were clearly enthusiastic about their product.


So what does Solo.io do? In the modern microservices-oriented world, we have distributed systems which are nearly all API driven. Solo.io has a number of products in this space, but their core product, Gloo, is a modern API gateway that securely bridges modern applications like AWS Lambda or Azure Functions to both legacy monolithic applications and modern databases running in Kubernetes pods.

They also have another open source project called SuperGloo, which is an abstraction layer for service mesh architectures. A service mesh provides modern applications with monitoring, scaling, and high availability through APIs rather than discrete appliances. Istio from Google is the best known tool in this space, and SuperGloo can work with it and other service meshes in the same architecture.

The other really interesting tool that Solo.io highlighted was called Squash, which is a debugger for distributed systems. If you've ever tried to troubleshoot a distributed system, even figuring out where to start can be challenging. Squash acts as a bridge between Kubernetes (drink) and the IDE, so you can choose which pods or containers you are debugging, set breakpoints, and change variables during runtime.

 

Cloud Field Day 6–HashiCorp Consul #CFD6

I was recently in Silicon Valley for Cloud Field Day 6, and one of the companies we met with was HashiCorp. HashiCorp is known mostly for two key products in cloud automation–Terraform and Vault, which enable infrastructure automation and secrets management, respectively. Both are open source projects, with support and premium feature offerings for companies, and are free to get started with for individuals. Both products are considered best in class, and are widely used by many organizations.


We had the honor of hearing from the founder and CTO of HashiCorp, Mitchell Hashimoto, who spoke to us about Consul, a service-based networking tool for dynamic infrastructure (this means things like containers, Kubernetes, and serverless cloud services). Mitchell explained that applying on-premises networking paradigms to cloud infrastructure doesn't really work.

This is where Consul steps in to make things simpler:

  • Service Registry & Health Monitoring
  • Network Middleware Automation
  • Zero trust network with service mesh

 

The goal of the product is easy adoption–crawl, walk, run. The service registry lets you identify what's deployed on every single platform, and Consul provides a unified view over both DNS and an API, along with active health monitoring, building a catalog of your entire network as it goes. Consul launched in 2014, has deployments running 50,000+ agents, and is the most widely deployed service discovery tool on AWS. Servers form a cluster and perform leader election, and all membership is handled via gossip. Consul requires one server cluster per data center, each with its own gossip pool; the open source edition requires a fully connected network, while the enterprise edition allows for hub-and-spoke topologies.

Consul also provides a number of other services, like traffic splitting, which allows you to do rolling deployments of application code while sending a small percentage of traffic to the newly released version of your app, in order to check for errors.

Consul is a unique tool–networking in containers and serverless is very challenging, and this product brings it together with old school technology like mainframes and physical servers. Also, given HashiCorp's track record with their other products, I expect this one to be really successful.

 

 

Cloud Field Day 6–Morpheus Data #CFD6

Last week, I had the opportunity to attend Cloud Field Day #6 in Silicon Valley. Along with 11 other brilliant delegates, I got the chance to meet with a variety of companies in the cloud computing space. I'm going to be writing about each of them in the coming weeks, but I'm going to start by talking about a company we met with on our first day, Morpheus Data.

A quote from the presentation that I thought was a really good description of their product was "we help big companies act like startups". Morpheus' product is a combination of a service catalog, a provisioning system, configuration management, and CI/CD deployment. It's not a full-on CI/CD tool, but it integrates with all of the most common ones.

Morpheus also allows you to manage tasks like monitoring, logging, and backup. The amazing thing to me was the number of connectors the product supports. The cloud integration here was interesting–you can show back pricing of a cloud stack, migrate workloads between clouds dynamically, and enable cloud deployment of both infrastructure and code. In addition to the cloud offerings, it's possible to deploy to physical servers, as the product includes a PXE boot engine.

I think the thing that impressed the delegates most about Morpheus was the number of integrations the product offers. Some of the other tools in this space are vRealize Automation from VMware and ServiceNow. I feel as though the integrations and the ease of moving to an infrastructure-as-code model are Morpheus' strong points. As much as we talk about infrastructure as code, it's really hard for larger organizations to move in unison in this direction. So having a product that can work with existing tools, while offering benefits on its own, can be a real benefit to a lot of organizations.

 


Building Storage for SQL Server (and other database) Virtual Machines in the Cloud

I wrote a couple of weeks ago about what not to do with backups in Azure. Because I've seen a few improperly configured VMs lately, I wanted to talk about the way storage works in Azure, compared to the way we traditionally did things on-premises.

Old School

If you still buy your storage from a three letter company, and your sales rep drives an expensive German car, and has better taste in shoes than Imelda Marcos, you might still configure your storage this way. You might create a separate disk volume for TempDB, transaction log files, and data files. Ideally, you are backing up to a separate storage appliance, and not to the same storage array where your data files live.

This architecture design dates back to when a storage LUN was literally built from a few physical disks, and we wanted to ensure that there were enough I/O operations per second to service the needs of the SQL Server, because we only had the available IO of those few disks.

As virtualization became popular, storage architectures changed, and a SAN LUN was carved into many small extents (typically 512 KB-1 MB, depending on the vendor) across the entire array. What this meant was that with modern storage there was no need to separate logs and data files. Some DBAs did anyway, but in an on-premises world there was no penalty for doing so.

Note: There is a scenario where you would want multiple disk devices in Windows. Under very high I/O workloads, IOs can queue at the Windows disk device level. This is an uncommon performance scenario in my experience.


Enter the Cloud

Instead of physical disks, in the cloud your "disk devices" are virtual hard drive files, which are stored across three different physical disks on the infrastructure. All storage performance is controlled by quality of service settings on the Azure infrastructure. Each disk you add increases both your IOPS and your storage capacity. Also, each virtual machine has a fixed limit on the number of IOPS available to it (while this is possible on-premises, it's far less common).

We then translate this to the operating system level, and in this specific case, Windows Server. In order to get maximum capacity and performance out of our disks, we use Storage Spaces in Windows to create pools of storage. The exciting part here is that you get to use RAID 0, since Azure's (or Amazon's) infrastructure is providing your redundancy. This means if we have 20 1 TB disks with 5,000 IOPS each, we can have a 20 TB pool that theoretically supports 100,000 IOPS. (Most VMs in Azure don't support that level of IO performance, but a couple do.)

It's also important to know that you need to specify the number of columns parameter when building your Storage Spaces pools in Windows. If you have more than four disks, you need to use PowerShell for that–I'll write more about that next week. But here's some info from the product team.

https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sql/virtual-machines-windows-sql-performance

This post has good info on columns, but it's from 2014, and the rest of the storage information is very dated (premium storage didn't exist yet). I'm only including it because it's the best explanation of columns that I've seen.

Best Practices & Disaster Recovery for Storage Spaces and Pools in Azure

What this means is that in order to maximize your database server's IO performance, you should create one large pool with all of the disks. Throw your system databases and your data and log files all on that volume. And please don't write your backups to that disk (BACKUP TO URL was invented for this purpose).
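To make that concrete, here's a rough sketch of building a single Storage Spaces pool and a simple (RAID 0) virtual disk with one column per disk. The pool, disk, and volume names are placeholders, and the 64 KB interleave and allocation unit are common choices for SQL Server rather than anything benchmarked here:

# Grab all the data disks that are eligible for pooling (excludes the OS and temp disks)
$disks = Get-PhysicalDisk -CanPool $true

# One pool across all of the data disks; names are placeholders
New-StoragePool -FriendlyName "SQLPool" -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks $disks

# Simple (RAID 0) virtual disk, one column per disk, using all available space
New-VirtualDisk -StoragePoolFriendlyName "SQLPool" -FriendlyName "SQLDisk" `
    -ResiliencySettingName Simple -NumberOfColumns $disks.Count `
    -Interleave 65536 -UseMaximumSize

# Initialize, partition, and format the new disk with a 64 KB allocation unit
Get-VirtualDisk -FriendlyName "SQLDisk" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter F -UseMaximumSize |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "SQLData"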

You can also throw TempDB on the local D: drive, which is ephemeral (it goes away when your machine reboots, but so does TempDB), and which can offer slightly lower latency.

Note: if you're reading this and you are using Ultra Disk, I haven't been able to test any of this with Ultra Disk yet. I think you may not need to stripe disks to achieve good performance.

 

Don’t Backup Your SQL Server VMs in Azure

This headline may seem a little aggressive. You may think, "Joey, I've heard you and other MVPs say that backups are the most important thing ever, and I'll get fired if I don't have backups." That is very accurate, but when I talk about backups, I am talking about backing up your databases. If you are running an Azure VM, or have a good connection to Azure, SQL Server's BACKUP TO URL functionality lets you easily isolate your database backups from the VM.
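As a quick illustration, here's a hedged sketch of a backup to URL using the SqlServer PowerShell module. The storage account, container, database, and credential names are all placeholders, and it assumes you've already created a SQL Server credential for the storage account:

# Assumes the SqlServer module and an existing SQL Server credential for the storage account
Import-Module SqlServer

# Back up a user database straight to blob storage; every name here is a placeholder
Backup-SqlDatabase -ServerInstance "localhost" -Database "MyAppDb" `
    -BackupFile "https://mystorageacct.blob.core.windows.net/sqlbackups/MyAppDb.bak" `
    -SqlCredential "AzureBlobCredential" -CompressionOption On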

If you are backing up your operating system and system state in your cloud provider, you are wasting storage space, money, and CPU cycles. You may even be negatively impacting the I/O performance of your VMs. We had a customer who was doing a very traditional backup model–they were backing up their databases to the local file system, and then backing up the SQL Server VM using the Azure Backup service. They swore up and down that backups were impacting their database performance, and I didn't believe them until they told me how they were doing backups.

Cloud Storage Architecture

Unlike in your on-premises environment, where you might have up to a 32 Gbps fibre channel connection to your storage array and a separate 10 Gbps connection to the file share where you write your SQL Server backups, in the cloud you have a single connection to both storage and the rest of the network. That connection is metered and correlates to the size (and $$$) of your VM. Bandwidth is somewhat sacred, since backups and normal storage traffic go over the same limited tunnel. This doesn't mean you can't have good storage performance; it just means you have to think about it. The customer I mentioned was writing backups to the file system and then having the Azure Backup service back up the VM, which saturated their network pipe and made regular SQL Server I/Os wait.

Why Backing Up Your VMs is Suboptimal

The Azure Backup service is not natively database aware. There is a service (https://docs.microsoft.com/en-us/azure/backup/backup-azure-sql-database) which manages native SQL Server backups for you. While this feature is well engineered, I'm not a huge fan. The reason is that the estimated time to create a new VM in Azure is about 5-10 minutes. If you are using backup to URL, you can have a new SQL Server VM in 10 minutes. You can immediately join the machine to the domain (or better yet, do it in an automated fashion), and then begin restoring your databases. Restoring from the backup service is really slow, whereas you will see decent restore speeds when restoring from URL.
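The restore side of that workflow looks much the same; here's a minimal sketch, again with placeholder names, for pulling that backup down onto a freshly built VM:

# Restore the same backup from blob storage onto the new VM; names are placeholders
Restore-SqlDatabase -ServerInstance "localhost" -Database "MyAppDb" `
    -BackupFile "https://mystorageacct.blob.core.windows.net/sqlbackups/MyAppDb.bak" `
    -SqlCredential "AzureBlobCredential" -ReplaceDatabase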

The one bit of complexity in this process is high availability solutions like Always On Availability Groups, which have somewhat complex configurations. I'm going to say two dirty words here: PowerShell and source control. Yes, you should have your configurations scripted and in source control. You'd murder your developers if their web servers required manual configuration for new deployments, so you should hold your database configurations to the same standard.

If you have third party executables installed on your SQL Server for application software, then, well, you're doing everything wrong.

How I Do Backups in Azure:

  1. Use Ola Hallengren’s scripts. They support BACKUP to URL.
  2. Add this agent job step from Pieter Van Hove to cleanup your backups

Sleep well. Eventually, there will be full support for automated storage tiering in Azure (it still has some limitations that preclude its use for SQL Server backups), so for right now you need the additional manual cleanup step.

While I wrote this post around Azure, I think the same logic applies to your on-premises VMs and even physical machines. You should be able to get Windows and SQL Server installed within 20-30 minutes with a very manual process, and with an automated process you should be able to have a machine in less than 10. When I worked at the large cable company, we didn't back up any of our SQL Servers, just the databases.


Run a PowerShell Script Against all of Your Azure SQL Databases

I started working on this bit of code a few months ago, and it's served me really well. Just about every command you run against a SQL Database requires you to supply the server name and the resource group name as parameters, and in order to get the list of server names, you have to query each resource group.


This code is pretty simple: it looks for an Azure SQL server in each resource group, and then looks for the databases that aren't master on each of those servers. In this example I'm setting the storage account for Threat Detection, but you could do anything you wanted in that last loop.

$rgs = (Get-AzResourceGroup).ResourceGroupName

foreach ($rg in $rgs)
{
    # Find any Azure SQL servers in this resource group
    $servers = Get-AzSqlServer -ResourceGroupName $rg

    foreach ($server in $servers)
    {
        # Pick the storage account based on the server's region
        # (adjust these strings to match what Get-AzSqlServer returns in your subscription)
        if ($server.Location -eq 'West US') { $stg = 'storage2' }
        elseif ($server.Location -eq 'West US 2') { $stg = 'storage1' }
        else { continue }

        # Get every database on the server except master
        $dbs = Get-AzSqlDatabase -ResourceGroupName $rg -ServerName $server.ServerName |
            Where-Object { $_.DatabaseName -ne 'master' }

        # Set the threat detection policy on each database
        foreach ($db in $dbs)
        {
            Set-AzSqlDatabaseThreatDetectionPolicy -ResourceGroupName $rg `
                -ServerName $server.ServerName -DatabaseName $db.DatabaseName `
                -NotificationRecipientsEmails "bob@contoso.com" -EmailAdmins $True `
                -StorageAccountName $stg
        }
    }
}
The last bit of complexity in this code is specifying the storage account based on the location of the Azure SQL server, which is a property of the server object.

The Challenge of Migrating to Azure SQL Database Managed Instance

When Azure SQL Database Managed Instance was introduced to the public at //build a couple of years ago, it was billed as a solution to ease migration from either on-premises or infrastructure-as-a-service VMs. You get all of the benefits of a managed service, like built-in high availability, patching, and automated backups, and you can do all of the things you couldn't do in Azure SQL Database, like run CLR, use cross-database queries, and have SQL Agent jobs, without having to learn Azure Automation and PowerShell. The final big bonus was that you could restore your backups from on-premises into the managed instance environment. No more dealing with DACPACs and crying, and drinking, and crying, and drinking, and crying.

 


I had very early access to managed instances, and it seemed obvious to me that an easy migration approach would be to use log shipping. You could write your backups from your source server to URL, restore them with NORECOVERY to your managed instance, repeat the process with log backups, and voila, you were in a managed instance. Quick and easy, and more importantly, if you were a DBA, nearly the exact same process you would have executed in your on-premises environment (except with backups to blob storage).

There was a long period of time where we Data Platform MVPs were unable to deploy managed instances into our Microsoft subscriptions. Which is fine–when capacity is short, it should go to paying customers, not us idiots. However, this meant I was away from the product for a while. During this time Microsoft introduced the Database Migration Service, a comprehensive set of tools to move your data to and from a variety of platforms in an online or offline manner.

While DMS is pretty interesting tooling, I had mostly ignored it until recently. Functionally, the tool works pretty well. The problem is that it requires a lot of privileges–you have to have someone who can create a service principal, and you need to have the following ports open between your source machine and your managed instance:

  • 443
  • 53
  • 9354
  • 445
  • 12000

While the scope of those firewall rules is limited, in a larger enterprise, explaining why you need port 445 open to anything is going to be challenging. So in addition to an AAD admin, the DBA is going to need a network admin to enable this. The service principal you created is also going to need the Contributor role on the entire subscription. Yes, that means it can create anything in the entire subscription. This is probably my biggest complaint. Microsoft does acknowledge this in the docs and says they are working to reduce the permissions that are required.

I'm currently engaged in a VM to Managed Instance migration, and when the client's DBA was complaining about the complexity of the DMS, I suggested we just use log shipping, like I had done when I first played with the Managed Instance service. I was trying to figure out how to automate the process, but then I figured I should first verify that I could do a restore with NORECOVERY.

Msg 3032, Level 16, State 2, Line 11
One or more of the options (stats, norecovery, stats=) are not supported for this statement. Review the documentation for supported options.

Sad trombone. That means the only way to migrate a database in near real-time is to use the DMS, and it's going to take half of your IT staff to do it. In order to reduce the friction of migrations, I've yelled at a couple of PMs about this, but I thought I would also create a User Voice suggestion.

https://feedback.azure.com/forums/908035-sql-server/suggestions/38267374-add-restore-with-norecovery-back-to-managed-instan

Please vote for it, if you are interested.

 

Three Azure Features You Should Really Be Using

There was a thread on one of the Microsoft MVP distribution lists the other week about recovering from a deleted resource that reminded me of a post I had been meaning to write. In many organizations, the public cloud is the wild west of the IT organization. In the worst cases, this means admins are using their Gmail accounts to access the subscriptions, but even in well run organizations, the ease of deploying cloud resources leads to the dreaded server sprawl. In this post, I'm going to talk about three features of the Azure Resource Manager architecture that you should be using to better manage your subscription: tags, policies, and locks.

Tags


When I worked in corporate IT, there was no discussion I hated more than the dreaded "server naming convention" discussion. It would typically be held in a room filled with middle managers (nearly always men) who felt the need to exercise their dominance by defining at least two characters of the up to 15 we were allowed by NetBIOS. This also led to what I call metadata packing–we would end up with server names like SWCSAPSQL01P, which would indicate the company, the data center location, the application, the function, an integer, and the environment. Plus, server names like that roll right off the tongue. In reality, this is kind of a terrible way to define metadata about server resources, and in a world where we are using things like containers, which are disposable, this paradigm does not work. Fortunately, Azure (and AWS and Kubernetes) allow for tagging of resources. Tags are simply key-value pairs that describe our objects. For example, if I had a VM running SAP's SQL Server, I might have the following tags:

Environment:Production

Application:SAP

Function:SQL

Cost Center:Operations

Tags are free form, and you can have up to 15 of them per resource, so you can describe things very well. Tags also roll up on your Azure bill–hence my use of the cost center tag in my example. You can also use PowerShell and the Azure CLI to filter operations by tags, so they are essential for scoping your management and maintenance tasks.
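For example, here's a hedged sketch of tagging an existing VM and then filtering on a tag with the Az module; the resource names and tag values are placeholders, and note that Set-AzResource replaces the existing tag set rather than appending to it:

# Tag an existing VM; the resource names and tag values are placeholders
$tags = @{ Environment = "Production"; Application = "SAP"; Function = "SQL"; "Cost Center" = "Operations" }
$vm = Get-AzResource -Name "swcsapsql01p" -ResourceGroupName "sap-rg" -ResourceType "Microsoft.Compute/virtualMachines"
Set-AzResource -ResourceId $vm.ResourceId -Tag $tags -Force

# Later, find everything tagged as an SAP resource
Get-AzResource -Tag @{ Application = "SAP" }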

Policy


If you are thinking, "tags are a really good idea, but the other people on my team are lazy and will never remember to use them," do I have the solution for you. Similar to Windows Server Active Directory, Azure allows you to implement policies to manage your subscription. Before we delve too deep into Azure Policy specifics, let's talk about the hierarchy of resources within your Azure subscription (for the purposes of this post I'm talking about a single subscription).


At the highest level we have the subscription, which is made up of one or more resource groups, which are themselves composed of zero or more resources. This hierarchy is important to understand for many reasons, not the least of which is that it is how role-based access control (RBAC) works in Azure. Security (and policy) is scoped at a level, and then inherited downward. If you have a role granted at the level of the subscription, you are going to have access to all of the resource groups and resources in that subscription (unless someone has issued an explicit deny, but that's a different post).

Policy works the same way–a policy is scoped to either a specific resource group or the subscription. You can define a policy to do any number of things in Azure. If you define a policy to enforce tagging and scope it at the subscription level, no one will be able to deploy a resource without the tags you have specified in your policy. You can also specify the types, sizes, and regions where your users can deploy resources. Once a policy is in place, users will receive an error if they try to deploy resources that do not meet the definition. Because of this, you should also socialize your policies with anyone who will be deploying resources into your environment, so that they know the rules and don't come to your desk with a bat.
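As a rough sketch of what an assignment looks like with the Az module, here's how you might assign one of the built-in "require a tag" policy definitions at the subscription scope. The display name, tag name, and value are assumptions, so check which definitions actually exist in your subscription:

# Find a built-in definition that requires a tag on resources (the display name is an assumption)
$definition = Get-AzPolicyDefinition |
    Where-Object { $_.Properties.DisplayName -eq "Require a tag and its value on resources" }

# Assign it at the subscription scope so new resources must carry an Environment tag
$sub = Get-AzSubscription | Select-Object -First 1
New-AzPolicyAssignment -Name "require-environment-tag" `
    -Scope "/subscriptions/$($sub.Id)" `
    -PolicyDefinition $definition `
    -PolicyParameterObject @{ tagName = "Environment"; tagValue = "Production" }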

Locks


The final feature you should be using is locks. Locks are just what they sound like, and they can perform one of two functions: prevent any changes to the resource, resource group, or subscription (read-only locks), or prevent any resource at the scope of the lock from being deleted (delete locks). I don't really like using read-only locks, as they prevent changes like adding disk space or changing the performance level of an Azure SQL Database. However, I think delete locks should go on every production resource in your subscription. Locks can be removed if you are the owner of a resource or resource group, but if you attempt to delete a resource with a lock in place, Azure will throw an error message indicating that the lock is there.
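Creating a delete lock is close to a one-liner; here's a hedged sketch with a placeholder resource group name:

# Put a delete lock on a production resource group; the names are placeholders
New-AzResourceLock -LockName "no-delete-production" -LockLevel CanNotDelete `
    -ResourceGroupName "production-rg" -LockNotes "Delete protection for production resources" -Force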

The cloud is big and complex, and it's easy to let resources grow out of control, which can lead to configuration drift, security problems, and most importantly, excessive spend. These are just a few of the many built-in tools you can use to make cloud management easier.
