Windows Server 2019 Failover Clustering New Features

Published May 13 2019 03:10 PM

Greetings, Failover Cluster fans!  John Marlin here, and I own the Failover Clustering feature within the Microsoft product team.  In this blog, I will give an overview and demo of many of the new features in Windows Server 2019 Failover Clustering.  I held off on publishing this to let things settle down after some of the announcements regarding Azure Stack HCI, the upcoming Windows Server Summit, etc.


I have broken these down into 7 videos so that you can view them in smaller chunks rather than one massive video.  With each video, I am including a quick description of the features that will be covered.  Each video is approximately 15 minutes long.


Part 1

In this video, we take a brief look back at Windows Server 2016 Failover Clustering and preview what we did in Windows Server 2019 to make things better.



Part 2

In Part 2 of the series, we take a look at Windows Admin Center and how it can improve the user experience; Cluster Performance History, which shows how the cluster and its nodes have been performing over time; System Insights, which uses predictive analytics (AI) and machine learning (ML); and Persistent Memory, the latest in storage/memory technology.



Part 3

In Part 3 of the series, we take a look at Cluster Sets, our new scaling technology; actual in-place Windows Server upgrades, which were not supported in the past; and Microsoft Distributed Transaction Coordinator (MSDTC).



Part 4

In this video, we take a look at Two-Node Hyperconverged and the new way of configuring resiliency, File Share Witness capabilities for achieving quorum at the edge, Split-Brain detection and how we lessen the chances of nodes running independently of each other, and what we did with security in mind.



Part 5

This video talks about Scale-Out File Servers and some of the connectivity enhancements, Cluster Shared Volumes (CSV) with caching and a security enhancement, Marginal Disk support and the way we detect drives that are starting to go bad, and Cluster-Aware Updating enhancements for when you are patching your cluster nodes.



Part 6

This video talks about enhancements we made to the Cluster Network Name, changes made for running Failover Clusters in Azure as IaaS virtual machines, and how Domain Migrations are no longer a pain point when moving between domains.



Part 7

As a wrap-up, we take a look at a couple of announcements made and demonstrated at Microsoft Ignite 2018 regarding IOPS and capacity.



My hope is that you enjoy these videos and get a good understanding of our roadmap from Windows Server 2016 to Windows Server 2019 from a Failover Clustering standpoint.  If you have any questions about any of these features, hit me up on Twitter (below).



John Marlin

Senior Program Manager

High Availability and Storage

Twitter: @JohnMarlin_MSFT


Hi John,

In the roadmap, will we be seeing enhancements to the Hyper-V Cluster Resource Hosting Subsystem (RHS) to better handle VMs on a cluster that have many different storage resources presented by different SOFS shares?

This is to address a problem we have been seeing for a long time, where a VM having storage issues on one share of a SOFS can bring down an entire Hyper-V cluster.  If a VM loses access to the storage where its VHDX files reside, then only that VM should fail, not the entire cluster.


Happy to provide more info if required.



@Theo-57 :

You may want to open a case with Microsoft and look into this deeper.  Starting in Windows Server 2016 Failover Clustering, we added VM Storage Resiliency.  Meaning, if a VM loses access to its VHD(X), the VM goes into a Paused-Critical state for a certain amount of time before it powers off.  This applies whether the VHD(X) is local to the cluster or remote.  Yes, a little more detail might help.
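As a rough sketch of what this looks like per VM, the pause-then-power-off behavior described above is exposed through Hyper-V's Set-VM cmdlet (the VM name and timeout value below are placeholders for illustration):

```powershell
# VM Storage Resiliency (Windows Server 2016+): when the VM loses access
# to its VHD(X), keep it in the Paused-Critical state instead of crashing
# it, and power it off only after the timeout (in minutes) expires.
# "SQLVM01" and the 30-minute timeout are illustrative values.
Set-VM -Name "SQLVM01" `
       -AutomaticCriticalErrorAction Pause `
       -AutomaticCriticalErrorActionTimeout 30

# Check the current settings on the VM
Get-VM -Name "SQLVM01" |
    Select-Object Name, AutomaticCriticalErrorAction, AutomaticCriticalErrorActionTimeout
```

Setting -AutomaticCriticalErrorAction to None disables the pause behavior for that VM, if you would rather the VM fail immediately.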


What are the CLUSTER SET limits? We would like to have 64000 nodes on a single cluster namespace



In theory, there is no limit to the number of nodes, but we have tested up to 1000 nodes.  64000 nodes sounds like quite a large organization.





Just to take this a step further: the information about Cluster Sets (participating clusters, nodes, virtual machines, and so on) is contained within the cluster hive of the master cluster, so that hive is going to get pretty large.  We also replicate this hive between the nodes of that cluster.  If it gets too large, it could cause replication issues or performance problems leading to hangs or node removals.  So the absence of a limit is theoretical.  Like I said, we have tested up to 1000 nodes without issues.
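For readers who want to try Cluster Sets, the master cluster and its members are wired up with the ClusterSet cmdlets in Windows Server 2019; a minimal sketch (all cluster and SOFS names below are placeholders):

```powershell
# Create the cluster set on an existing cluster that will act as the
# management (master) cluster, with a unified namespace root name.
New-ClusterSet -Name "CSMASTER" `
               -NamespaceRoot "SOFS-CLUSTERSET" `
               -CimSession "CSMASTER1"

# Join an existing cluster as a member, exposing its storage through
# the cluster set namespace via an infrastructure SOFS name.
Add-ClusterSetMember -ClusterName "CLUSTER1" `
                     -CimSession "CSMASTER" `
                     -InfraSOFSName "SOFS-CLUSTER1"
```

The cluster hive growth described above comes from this master cluster tracking every member cluster, node, and VM registered this way.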


There is a problem I am facing.  After configuring a Distributed Network Name (DNN) in my Windows Server 2019 cluster setup for SQL HADR, I am unable to access remote drives from outside using a UNC path such as \\cluster-name\O$ or \\cluster-name-FCI\O$.  On the other hand, \\cluster-node-name\O$ works.  SMB shares are correctly configured on the FCI.  Any suggestions @John Marlin ?





Version history
Last update: Dec 04 2019 04:59 PM