Troubleshooting Cluster Shared Volume Auto-Pauses – Event 5120

Published Mar 15 2019 02:54 PM
First published on MSDN on Dec 08, 2014
In the previous post we discussed how CSVFS abstracts failures from applications by going through the pause/resume state machine, and we explained what an auto-pause is. The focus of this blog post is auto-pauses.

CSV auto-pauses when it receives any failure from Direct IO or Block Redirected IO, with a few exceptions such as STATUS_INVALID_USER_BUFFER, STATUS_CANCELLED, STATUS_DEVICE_DATA_ERROR, or STATUS_VOLMGR_PACK_CONFIG_OFFLINE, which indicate either a user error or misconfigured storage. In both cases there is no value in trying to abstract the failure in CSV, because as soon as the IO is retried it will fail with the same error.

When File System Redirected IO fails (including any metadata IO), CSV auto-pauses only when the error is one of a set of well-known status codes. Here is the list that we have as of the Windows Server Technical Preview for vNext:


This list is based on our experience and many years of testing, and includes status codes that you would see when a communication channel fails or when the storage stack is failing. Please note that this list evolves and changes as we discover new scenarios that we can make more resilient using auto-pause. The list contains status codes that indicate communication/authentication/configuration failures, status codes that indicate that NTFS or the disk on the coordinating node is failing, and a few CSV-specific status codes.

There are also a few cases where CSV might auto-pause itself to handle an inconsistency that it observes in its state, or when it cannot get to the desired state without compromising data correctness. An example: a file is opened from multiple computers, and on a write from one cluster node CSV needs to purge the cache on another node; if that purge fails because someone has locked the pages, CSV auto-pauses to see if a retry avoids the problem. In these cases you might see auto-pauses with status codes like STATUS_UNSUCCESSFUL, STATUS_PURGE_FAILED, or STATUS_CACHE_PAGE_LOCKED.

When CSV conducts an auto-pause, an Event 5120 is written to the System event log. The description field contains the specific status code that caused the auto-pause.

Log Name:      System
Source:        Microsoft-Windows-FailoverClustering
Event ID:      5120
Task Category: Cluster Shared Volume
Level:         Error
Cluster Shared Volume 'Volume1' ('Cluster Disk 1') is no longer available on this node because of 'STATUS_VOLUME_DISMOUNTED(C000026E)'. All I/O will temporarily be queued until a path to the volume is reestablished.
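
To find these events across the cluster, you can query the System log with PowerShell. This is a generic sketch; the node names are placeholders, and you need remote event log access for the `-ComputerName` variant to work:

```powershell
# Query the System log on each node for CSV auto-pause events (Event ID 5120).
# Replace the node names with your own cluster nodes (hypothetical names here).
$nodes = 'Node1', 'Node2'
foreach ($node in $nodes) {
    Get-WinEvent -ComputerName $node -FilterHashtable @{
        LogName      = 'System'
        ProviderName = 'Microsoft-Windows-FailoverClustering'
        Id           = 5120
    } -ErrorAction SilentlyContinue |
        Select-Object MachineName, TimeCreated, Message
}
```

The `Message` field carries the status code (for example `STATUS_VOLUME_DISMOUNTED(C000026E)` above), which is the first thing to collect when mining these events.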

Additional information is available in the CSV operational log channel Microsoft-Windows-FailoverClustering-CsvFs/Operational.  This can be found in Event Viewer under ‘Applications and Services Logs \ Microsoft \ Windows \ FailoverClustering-CsvFs \ Operational’.  Here is an Event 9296 logged to that channel:

Log Name:      Microsoft-Windows-FailoverClustering-CsvFs/Operational
Source:        Microsoft-Windows-FailoverClustering-CsvFs-Diagnostic
Event ID:      9296
Task Category: Volume Autopause
Level:         Information
Keywords:      Volume State
Volume {ca4ce06f-6bAE-4405-b328-fd9d123469b3} is autopaused. Status 0xC000026E. Source: Tunneled metadata IO

Event Xml:
<Event xmlns="">
<Data Name="Volume">0xffffe000badfb1b0</Data>
<Data Name="VolumeId">{CA4CE06F-6B06-4405-B058-FD9D1CF869B3}</Data>
<Data Name="CountersName">Volume1me3</Data>
<Data Name="FromDirectIo">false</Data>
<Data Name="Irp">0xffffcf800fb72990</Data>
<Data Name="Status">0xc000026e</Data>
<Data Name="Source">11</Data>
<Data Name="Parameter1">0x0</Data>
<Data Name="Parameter2">0x0</Data>
</Event>
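
A sketch of pulling the 9296 events from the operational channel and extracting the Status and Source fields. It assumes the standard rendered event-XML shape (`Event/EventData/Data` with `Name` attributes), which is how manifest-based ETW events are serialized:

```powershell
# Read CSV auto-pause details (Event ID 9296) from the CsvFs operational channel.
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-FailoverClustering-CsvFs/Operational'
    Id      = 9296
} -ErrorAction SilentlyContinue | ForEach-Object {
    $xml = [xml]$_.ToXml()
    [pscustomobject]@{
        Time   = $_.TimeCreated
        Status = ($xml.Event.EventData.Data | Where-Object Name -eq 'Status').'#text'
        Source = ($xml.Event.EventData.Data | Where-Object Name -eq 'Source').'#text'
    }
}
```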

In addition to the status code, Event 9296 contains the source of the auto-pause, and in some cases additional parameters that help further narrow down the scenario. Here is the complete list of sources.

  1. Unknown

  2. Tunneled metadata IO

  3. Apply byte range lock on down-level file system

  4. Remove all byte range locks

  5. Remove byte range lock

  6. Continuous availability resume complete

  7. Continuous availability resume complete for paging file object

  8. Continuous availability set bypass

  9. Continuous availability suspend handle on close

  10. Stop buffering on file close

  11. Remove all byte range locks on file close

  12. User requested

  13. Purge on oplock break

  14. Advance VDL on oplock break

  15. Flush on oplock break

  16. Memory allocation to stop buffering

  17. Stopping buffering

  18. Setting maximum oplock level

  19. Oplock break acknowledge to CSV filter

  20. Oplock break acknowledge

  21. Downgrade buffering asynchronous

  22. Oplock upgrade

  23. Query oplock status

  24. Single client notification complete

  25. Single client notification stop oplock
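
When mining these events it also helps to translate the raw NTSTATUS value into a symbolic name. A minimal lookup sketch covering a few of the status codes mentioned in this post; the table is illustrative, not exhaustive:

```powershell
# Map a few NTSTATUS values mentioned in this post to their symbolic names.
$ntStatusNames = @{
    '0xC000026E' = 'STATUS_VOLUME_DISMOUNTED'
    '0xC00000B5' = 'STATUS_IO_TIMEOUT'
    '0xC0000001' = 'STATUS_UNSUCCESSFUL'
    '0xC0000120' = 'STATUS_CANCELLED'
}

function Resolve-NtStatus([uint64]$Status) {
    # Normalize to the 0xXXXXXXXX string form used as the table key.
    $key = '0x{0:X8}' -f $Status
    if ($ntStatusNames.Contains($key)) { $ntStatusNames[$key] }
    else { "UNKNOWN ($key)" }
}

Resolve-NtStatus 0xC000026E   # STATUS_VOLUME_DISMOUNTED
```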

Auto Pause due to STATUS_IO_TIMEOUT

One of the most common auto-pause reasons is STATUS_IO_TIMEOUT, caused by intra-cluster communication over the network. This happens when the SMB client observes that an IO is taking over 1-4 minutes (depending on the IO type). If the IO times out, the SMB client attempts to fail the IO over to another channel in a multichannel configuration; if all channels are exhausted, it fails the IO back to the caller.

You can learn more about SMB multichannel in the following blog posts:

Configuring IP Addresses and Dependencies for Multi-Subnet Clusters

Configuring IP Addresses and Dependencies for Multi-Subnet Clusters - Part II

Configuring IP Addresses and Dependencies for Multi-Subnet Clusters - Part III

Force network traffic through a specific NIC with SMB multichannel
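
To see what SMB multichannel is actually using between your nodes, the SMB cmdlets can help. A sketch, run on a non-coordinating node:

```powershell
# Show the SMB connections this node has open (CSV redirected IO targets the
# coordinating node), then the per-channel multichannel state behind them.
Get-SmbConnection |
    Select-Object ServerName, ShareName, Dialect, NumOpens

Get-SmbMultichannelConnection |
    Select-Object ServerName, ClientIpAddress, ServerIpAddress, ClientInterfaceIndex
```

If only one channel shows up where you expected several, a failed channel leaves SMB with nothing to retry on, which makes STATUS_IO_TIMEOUT auto-pauses more likely.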

On the diagram above you can see a two-node cluster where Node 2 is the coordinator node. Say an application running on Node 1 issues an IO or metadata operation that CSVFS forwards to NTFS over SMB (follow the red path on the diagram above). Any of the components along that path (the network, file system drivers attached to NTFS, volume and disk drivers, software and hardware on the storage box, firmware on the disk) can take a long time. Once the SMB client sends the IO it starts a timer. If the IO does not complete in 1-4 minutes, the SMB client suspects that there might be something wrong with the network: it disconnects the socket and retries all IOs using another socket on another channel. Once all channels have been tried, the IO fails with STATUS_IO_TIMEOUT. In the case of CSV there are some internal controls (for example, oplock requests) that cannot simply be retried on another channel, so the SMB client fails them back to CSVFS, which triggers an auto-pause with STATUS_IO_TIMEOUT.

Please note that CSVFS on the coordinating node does not use SMB to communicate with NTFS, so those IOs will not complete with STATUS_IO_TIMEOUT from the SMB client.

The next question is how to find out which operation is taking a long time, and why.

First, note that an auto-pause with STATUS_IO_TIMEOUT is reported on a non-coordinating node (Node 1 on the diagram above), while the IO is stuck on the coordinating node (Node 2 on the diagram above).

Second, note that the nature of the issue we are dealing with is a hang, and in this case traces are not particularly helpful, because in a trace it is hard to tell which activity took time and where it was stuck. We have found two approaches helpful when troubleshooting these sorts of issues:

  1. Collect a dump file on the coordinating node while the hanging IO is in flight. There are a number of ways to create a dump file, starting from the most brutal:

    1. Bugchecking the machine using Sysinternals NotMyFault

    2. Configuring KD and using Sysinternals LiveKd

    3. WinDbg. In fact this approach was so productive that, starting with the Windows Server Technical Preview, on observing an auto-pause due to STATUS_IO_TIMEOUT on a non-coordinating node the cluster will collect a kernel live dump on the coordinating node. We can open the dump file using WinDbg and try to find out which IO is taking a long time and why.

  2. On the coordinating node, keep a Windows Performance Toolkit session running with wait analysis enabled. When the non-coordinating node auto-pauses with STATUS_IO_TIMEOUT, stop the WPT session and collect the .etl file. Open the .etl file in WPA and try to locate the IO that is taking a long time, the thread that is executing it, and what that thread is blocked on. In some cases it may also be helpful to keep the WPT sampling profiler enabled, in case the thread handling the IO is not stuck forever but periodically makes some forward progress.
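
As a sketch of the second approach, a wait-analysis capture with xperf (part of the Windows Performance Toolkit) typically looks like this; the exact flags may vary with your WPT version, and the output path is a placeholder:

```powershell
# Start a kernel session with context-switch and ready-thread events plus stacks,
# so WPA's wait analysis can show what each blocked thread was waiting on.
# PROFILE adds sampling in case the thread makes slow forward progress.
xperf -on PROC_THREAD+LOADER+PROFILE+CSWITCH+DISPATCHER -stackwalk Profile+CSwitch+ReadyThread

# ... wait for the non-coordinating node to auto-pause with STATUS_IO_TIMEOUT ...

# Stop the session and merge the trace into an .etl file for analysis in WPA.
xperf -d C:\traces\csv-wait.etl
```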

The reason for a STATUS_IO_TIMEOUT may vary from software, to configuration, to hardware issues. Always check your System event log for events indicating HBA or disk failures. Make sure you have all the latest updates.

Recommended hotfixes and updates for Windows Server 2012 R2-based failover clusters

Recommended hotfixes and updates for Windows Server 2012-based failover clusters

Make sure your storage and disks have the latest firmware supported for your environment. If the issue does not go away, troubleshoot it using one of the approaches described above and analyze the dump or trace.

It is expected that you may at times see Event 5120s in the System event log. I would suggest not worrying about infrequent 5120s, as long as they happen only once in a while (once a month or once a week), the cluster recovers, and you do not see workload failures. But I would suggest monitoring them and doing some data mining on the frequency and type (source and status code) of the auto-pauses. In some scenarios an Event 5120 is expected; for example, an Event 5120 is expected during snapshot deletion.
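
A simple way to start that data mining is to count the 5120s per day, so a change in frequency jumps out. A sketch; you can extend the grouping to status code by parsing the message text:

```powershell
# Count CSV auto-pause events (5120) per day to spot frequency changes over time.
Get-WinEvent -FilterHashtable @{
    LogName      = 'System'
    ProviderName = 'Microsoft-Windows-FailoverClustering'
    Id           = 5120
} -ErrorAction SilentlyContinue |
    Group-Object { $_.TimeCreated.ToString('yyyy-MM-dd') } |
    Sort-Object Name |
    Select-Object Name, Count
```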

For instance, if you see that the frequency of auto-pauses increased after a certain date, check whether you installed or enabled a feature around that time that was not on, or was not used, before.

You might be able to correlate an auto-pause with some other activity that was happening on one of the cluster nodes around the same time, for example a backup or an antivirus scan.

Or perhaps you see that auto-pauses happen only when a certain node is the coordinator; in that case there might be an issue with the hardware on that node.

Or perhaps a physical disk is going bad and causing failures. Look for storage errors in the System event log, and query the disk reliability counters using PowerShell:

Get-PhysicalDisk | Get-StorageReliabilityCounter | ft DeviceId,ReadErrorsTotal,ReadLatencyMax,WriteErrorsTotal,WriteLatencyMax -AutoSize

The list above is not exhaustive, but might give you some idea on how to approach the problem.


In this blog post we went over possible causes of Event 5120, what they might mean, and how to approach troubleshooting. Windows Server has plenty of tools that will help you troubleshoot a 5120. Keep in mind that a 5120 does not mean that your workload failed; most likely the cluster will successfully recover from the failure, and your workload will keep running. If recovery is not successful, you will see Event 5142, which will be the subject of the next post.

Vladimir Petter
Principal Software Engineer
Clustering & High-Availability

To learn more, here are others in the Cluster Shared Volume (CSV) blog series:

Cluster Shared Volume (CSV) Inside Out

Cluster Shared Volume Diagnostics

Cluster Shared Volume Performance Counters

Cluster Shared Volume Failure Handling


This is a very interesting article that I've come across while attempting to troubleshoot some issues I have in my cluster. I am seeing many 5120's and it's frequently taking my cluster down. The associated errors I am seeing in the Microsoft-Windows-FailoverClustering-CsvFs-Diagnostic event log say:

Volume {8DF9597C-BA98-4663-B8A2-6A2FF95A0149} is autopaused. Status C000020C. Source: 26.

... and I note that source 26 is not included in your list. Have you any idea what this might be?

I'm also seeing references to C000020C that might be related to a timeout on the ClusterStorage$ share, but if I run Get-SMBShare -IncludeHidden on my nodes I don't see any share. My cluster is Server 2016 - is the share still used?

Thanks, Steve


@scuthber1966 Hi Steve, I ran into the same issues as you, except that I am running on W2019. I was wondering if you were able to find out more.


Thank you



Hey @scuthber1966 @vinsroman - I have the same issues running Server 2019. My Server 2016 clusters do not have this issue. Have either of you found the issue in your scenario? 


Hi @Jared Brown,

In my case it was caused by the network and some missed heartbeats, but I haven't gotten to the root cause. We disabled IPsec for live migrations and introduced a second ad-hoc network between the cluster nodes, dedicated to live migrations and cluster communication.
