Azure Managed Disks – Look ma, no Storage Accounts!

A new awesome Azure feature has seen the light of day: Azure Managed Disks

Why is that so awesome? Because it makes storage for VMs so much easier! Today, when you want to deploy a VM, you need a Storage Account to put the VHDs in. Storage Accounts have limits, such as IOPS, bandwidth and capacity. The limits depend on what type of Storage Account you create – Premium or Standard.

An example of hitting the limits: if you have 40 VHDs in 1 Standard Storage Account, and each VHD is limited to 500 IOPS, then with all your VHDs maxed out you will be using 20,000 IOPS. That is exactly the IOPS limit of a Standard Storage Account. Now you must make sure no more VHDs are put in that Storage Account, and create a new one instead. I bet you can see the management nightmare coming if you have a lot of VHDs, right?
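To generalize the math above as a quick sketch (assuming the same 500 IOPS per Standard VHD and 20,000 IOPS per Standard Storage Account), you can estimate how many Storage Accounts a given number of disks would need:

```powershell
# Assumed limits: 500 IOPS per Standard VHD, 20,000 IOPS per Standard Storage Account
$iopsPerVhd = 500
$iopsPerAccount = 20000
$vhdCount = 100   # example disk count

# Max VHDs per account before IOPS throttling (40), and accounts needed
$vhdsPerAccount = [math]::Floor($iopsPerAccount / $iopsPerVhd)
$accountsNeeded = [math]::Ceiling($vhdCount / $vhdsPerAccount)
"$vhdCount maxed-out VHDs need at least $accountsNeeded Standard Storage Accounts"
```

For 100 disks that is already 3 Storage Accounts to keep track of – exactly the bookkeeping Managed Disks takes off your hands.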

Storage Accounts have a lot of cool features, so don’t get me wrong. For VMs I’m just not sold on the idea, and I just so happened to tweet with Karim Vaes about it last week. Karim listed some of the features, but I just had to disagree on 1 point ;-)

Don’t worry though, Managed Disks is here to help, and Managed Disks scale! With Managed Disks there is no Storage Account to worry about – it’s all managed by Azure. If you need a new disk, the only things you’ll need to know are what size it should be, and whether it’s Premium or Standard. If you need 100 disks, it’s the same! Managed Disks will place them in Storage Accounts “behind the scenes”. They’re still isolated from other customers though!

On the technical side, a Managed Disk is a top-level resource in Azure Resource Manager, just like VMs and networks. This means you can see disks as resources in ARM, and not just as files within another resource type (Storage Accounts). For example, take a look at this screenshot:

Here you can see all the disks I have created, and manage them from there. Same goes for PowerShell, with Get-AzureRmDisk:
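As a quick sketch of what that looks like in PowerShell (using the AzureRM cmdlets – the resource group, disk name and location below are just example values):

```powershell
# Create an empty 128 GB Standard Managed Disk – no Storage Account in sight
$diskConfig = New-AzureRmDiskConfig -Location 'WestEurope' `
    -AccountType StandardLRS -CreateOption Empty -DiskSizeGB 128
New-AzureRmDisk -ResourceGroupName 'MyRG' -DiskName 'DataDisk1' -Disk $diskConfig

# List all Managed Disks in the subscription
Get-AzureRmDisk | Select-Object Name, DiskSizeGB, Location
```

Notice that the only decisions you make are size and account type – placement is Azure’s problem now.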

Managed Availability Sets

In my opinion, one of the best features of Managed Disks, is Availability. To understand this part, let’s first take a look at Availability Sets.
Availability Sets give you High Availability by placing the VMs within an Availability Set in different Update and Fault Domains. A Fault Domain is a collection of resources that share the same Single Point Of Failure, for example a rack. Take a look at the illustration below:

Here I have 2 Availability Sets, configured to use 3 Fault Domains (the max is 3). Each VM in each Availability Set is deployed to a different Fault Domain. But notice the disks at the bottom – they’re all on the same Storage Stamp, leaving us with a Single Point Of Failure.

Previously, when I deployed VMs in Availability Sets in Azure, I put the disks for VM1 in “Storage Account 1”, and those for VM2 in “Storage Account 2”. That was the recommended way of doing it, and yet you weren’t guaranteed that Storage Accounts 1 & 2 weren’t on the same Storage Stamp within Azure. That means you could deploy 100 VMs, and they could all be placed on the same Storage Stamp = Single Point of Failure!

With Managed Disks, your storage is aware of Availability Sets, and will be placed accordingly to make sure you’re not affected by an outage on a single Storage Stamp. Let’s take a look at that Availability Set configuration:

Storage and Compute are now aligned nicely, meaning VM1 and Disk1 are within Fault Domain A, VM2 and Disk2 are placed in Fault Domain B, etc. Awesome!

One important thing to note though: your Availability Set should also be Managed. This is now a property on the Availability Set resource! Managed Disks can only be used with Managed Availability Sets, and unmanaged/normal disks can only be used with unmanaged/normal Availability Sets. You cannot mix the two types. More on that later, when we look at deployment.
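In PowerShell, that Managed property is set when you create the Availability Set. A rough sketch (names and location are examples; depending on your AzureRM module version the parameter is either a -Managed switch or -Sku Aligned):

```powershell
# Create a Managed ("Aligned") Availability Set, ready for Managed Disks.
# Older AzureRM versions exposed this as a -Managed switch instead of -Sku.
New-AzureRmAvailabilitySet -ResourceGroupName 'MyRG' -Name 'MyAvSet' `
    -Location 'WestEurope' -Sku Aligned `
    -PlatformFaultDomainCount 2 -PlatformUpdateDomainCount 5
```

VMs with Managed Disks will only deploy into an Availability Set created this way.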



Another great feature of Managed Disks, is the flexibility it provides. If you’ve ever had to move from Standard to Premium Storage, you probably also know how much work went into it.

First you have to delete the VM while keeping the disks.

Then you must copy the disks to a new Storage Account.

Last you need to create a new VM, using the existing disks.

This could take hours, depending on the amount of data to transfer. With Managed Disks, you can just resize your VM; after that it’s simply a matter of converting the disks to Premium Storage. Easy!
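The resize-and-convert flow can be sketched in PowerShell roughly like this (resource group, VM name and VM size are example values; the VM must be deallocated while the disk type changes):

```powershell
# Stop the VM first – the disk account type can only change while deallocated
Stop-AzureRmVM -ResourceGroupName 'MyRG' -Name 'MyVM' -Force

# Resize the VM to a Premium Storage capable size (example size)
$vm = Get-AzureRmVM -ResourceGroupName 'MyRG' -Name 'MyVM'
$vm.HardwareProfile.VmSize = 'Standard_DS2_v2'
Update-AzureRmVM -VM $vm -ResourceGroupName 'MyRG'

# Convert every Managed Disk attached to this VM to Premium Storage
$diskUpdate = New-AzureRmDiskUpdateConfig -AccountType PremiumLRS
Get-AzureRmDisk -ResourceGroupName 'MyRG' |
    Where-Object ManagedBy -eq $vm.Id |
    ForEach-Object {
        Update-AzureRmDisk -ResourceGroupName 'MyRG' `
            -DiskName $_.Name -DiskUpdate $diskUpdate
    }

Start-AzureRmVM -ResourceGroupName 'MyRG' -Name 'MyVM'
```

Compare that to hours of copying VHDs between Storage Accounts – no data is moved by you at all.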



Storage Accounts do have Role Based Access Control implemented, but at the Storage Account level. This means a user who is granted access will have access to all disks under that Storage Account. Again: management nightmare.

Managed Disks fixes this too. Since a disk is a top-level resource, we can easily manage permissions on a specific disk. By default we have 3 roles: Owner, Contributor and Reader – just like other resource types. Imagine someone needs to download a specific disk. Now you can assign them the Reader role, and create a download URL (more on that later) for that specific disk. You can of course also create custom RBAC roles.
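Both pieces – per-disk RBAC and the download URL – can be sketched in PowerShell like this (the user, disk name and resource group are example values; the URL is a time-limited SAS):

```powershell
# Grant a user Reader on one specific disk only – not the whole Storage Account
$disk = Get-AzureRmDisk -ResourceGroupName 'MyRG' -DiskName 'DataDisk1'
New-AzureRmRoleAssignment -SignInName 'user@contoso.com' `
    -RoleDefinitionName 'Reader' -Scope $disk.Id

# Generate a read-only download (SAS) URL for that disk, valid for 1 hour
Grant-AzureRmDiskAccess -ResourceGroupName 'MyRG' -DiskName 'DataDisk1' `
    -Access Read -DurationInSecond 3600
```

The scope of the role assignment is the disk’s own resource ID, which is exactly what top-level resources buy us.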


Disk types & sizes

One downside of Managed Disks, in my opinion, is that we have to pay for specific disk sizes. Just as you might know from Premium Storage, there are several tiers of disk sizes. If, for example, you want a 300 GB disk, you’ll have to pay for a 512 GB disk. This removes the “only pay for what you use” advantage of the cloud. The table below lists the different disk sizes:

You will still be able to create disks at other sizes, but they will be billed according to the table above. The good thing is that it’s now easier to calculate storage costs. The only variable left is IO on Standard disks.


Show me the money!

OK, enough talk, let’s take it for a spin! When you deploy Azure Managed Disks, you can use your normal tools: Portal, PowerShell, ARM templates, CLI and APIs. Let’s pick a scenario: 2 VMs in a Managed Availability Set, each with 1 data disk. In this post I’ll just show the portal experience. PowerShell and ARM templates are of course available too.

First, create an Availability Set. Go to Availability Sets, select Add, and enter your information. 1 thing is different from normal Availability Sets here – the Managed property. Make sure you check it:

When this is created, go to VMs, click Add, select an image and Create:

Fill out the fields as you normally do:

Select a VM size:

The Settings blade is where the magic happens. Just like with Availability Sets, only 1 step is different: Use managed disks! This removes the option to choose a Storage Account – because you don’t need one, yay! Everything else is like you normally create a VM in Azure; remember to select the Availability Set we just created:

On the final page, click Create, then sit back and relax a few minutes! :-)

After that, create an extra VM using the same procedure, and you will have 2 VMs in a Managed Availability Set. Voilà, you’re now using Managed Disks, and you have more resiliency in terms of storage HA.

Next post will focus on PowerShell and Template deployment, so stay tuned!

10 Responses to Azure Managed Disks – Look ma, no Storage Accounts!


    Are VHD files still used or are managed disks directly mapped?

    • VHD files are still used, yes. Not sure what you mean about directly mapped? The primary difference is that you don’t manage storage accounts for your disks (VHDs) anymore.


    Hi Jesper, when the original VM (with managed disk) got corrupted and we have a snapshot of that VM available. In this scenario, how can we restore the content from the snapshot or create a new VM from that snapshot ? Any ideas ?


    Hi Jesper, Thank you so much. This is what I am looking for. We could also use the Azure IaaS backup, but we have not implemented this yet.


    When you convert to Managed Disks, how does this impact Azure Backup? We did this and now the Azure agent is failing.

    • I haven’t seen issues with backup, after converting to Managed Disks. What error do you get?


    If I create managed disks under “VM A” can I move those managed disks to “VM B” in the same region?

    • Yes – you can mount and dismount the disks as you need.


Let me hear your opinion