Virtualization articles Mon, 25 Oct 2021 13:20:15 GMT Virtualization 2021-10-25T13:20:15Z AMD Nested Virtualization Support <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="AMD nested MN_RELEASE 19636.png" style="width: 999px;"><img src=";px=999" role="button" title="AMD nested MN_RELEASE 19636.png" alt="AMD Nested Support showing a VM running on a VM on AMD Hardware" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">AMD Nested Support showing a VM running on a VM on AMD Hardware</span></span></P> <P><A href="#" target="_blank" rel="noopener">Nested Virtualization</A> is not a new idea. In fact, we <A href="" target="_blank" rel="noopener">announced</A> our first preview of Nested Virtualization running on Windows way back in 2015.&nbsp; From that Windows Insider preview to now, Nested Virtualization has been used in a variety of offerings in a variety of ways.&nbsp; Today, you can find Nested Virtualization <A href="#" target="_blank" rel="noopener">support</A> in Azure, giving Azure users flexibility in how they set up their environments.&nbsp; One example of Nested Virtualization supporting our developer community is accelerating Microsoft’s <A href="#" target="_blank" rel="noopener">Android Emulation</A>.&nbsp; Nested Virtualization is being used by IT Pros to set up home labs. And we can’t forget containers! If you want to run Hyper-V Containers inside a VM, you guessed it: this is enabled with Nested Virtualization.&nbsp; You can start to see why Nested Virtualization is such a useful technology.</P> <P>&nbsp;</P> <P>There is one group of users that was unable to take advantage of Nested Virtualization on Windows.
These were our users with AMD hardware.&nbsp; Not a week goes by without the team getting a request for Nested Virtualization support on AMD from our community or from within Microsoft.&nbsp; In fact, it is the number one ask on Windows Server’s <A href="#" target="_blank" rel="noopener">UserVoice page</A>. At the time of this blog post, it had almost five times as many votes as the next feedback item.</P> <P>&nbsp;</P> <P>I am happy to announce that the community has been heard: starting with Windows build 19636, you will be able to try out Nested Virtualization on AMD processors! If you’re on the Windows Insider Fast ring, you can try this out today.</P> <P>&nbsp;</P> <P>As this is a preview release of Nested Virtualization on AMD, there are a few guidelines and limitations to keep in mind if you want to try this out.</P> <UL> <LI>Ensure your OS build number is 19636 or greater</LI> <LI>So far, this has been tested on AMD’s first-generation Ryzen/EPYC processors and newer</LI> <LI><SPAN>For maximum stability and performance, u</SPAN><SPAN>se a Windows guest with an OS version that is greater than or equal to the host OS version (19636) for now</SPAN>.&nbsp; Linux KVM guest support will be coming in the future</LI> <LI>Create a version 9.3 VM. Here’s an example PowerShell command to ensure a version 9.3 VM is being used: New-VM -VMName "L1 Guest" -Version 9.3</LI> <LI>Follow the rest of the steps in our <A href="#" target="_blank" rel="noopener">public documentation</A></LI> </UL> <P>&nbsp;</P> <P>June 12, 2020 edit: changed wording around Guest OS recommendation.</P> Sat, 13 Jun 2020 00:26:50 GMT chuybregts 2020-06-13T00:26:50Z VMware Workstation and Hyper-V <P>As a follow-up to our previous post on <A href="" target="_blank" rel="noopener">VMware and Hyper-V Working Together</A>,&nbsp; VMware has released a version of VMware Workstation that works with the <A href="#" target="_blank" rel="noopener">Windows Hypervisor Platform</A><U> (WHP)</U>.
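For reference, the nested-virtualization setup steps listed above can be sketched end to end in PowerShell. The VM name, generation, and memory size here are illustrative; `Set-VMProcessor -ExposeVirtualizationExtensions` is the documented switch from our nested virtualization guidance that exposes virtualization extensions to a guest:

```powershell
# Create a version 9.3 VM (name, generation, and memory size are illustrative)
New-VM -VMName "L1 Guest" -Version 9.3 -Generation 2 -MemoryStartupBytes 4GB

# Expose virtualization extensions so the guest can run Hyper-V itself
# (the VM must be powered off when this is changed)
Set-VMProcessor -VMName "L1 Guest" -ExposeVirtualizationExtensions $true

# Confirm the setting took effect
(Get-VMProcessor -VMName "L1 Guest").ExposeVirtualizationExtensions
```

These are host-configuration commands and require an elevated prompt on a Hyper-V-capable host; see the public documentation linked above for the remaining steps.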
This release adds support for VMware Workstation running side by side with Microsoft’s virtualization-based offerings.&nbsp; For a full write-up on the changes VMware made and the details on the version required, check out their excellent post <A href="#" target="_blank" rel="noopener">here</A>.</P> <P>&nbsp;</P> <P>In Windows 10, we introduced a number of features that utilize the Windows Hypervisor. These include security enhancements like <A href="#" target="_blank" rel="noopener">Windows Defender Credential Guard</A>, <A href="#" target="_blank" rel="noopener">Windows Defender Application Guard</A>, and <A href="#" target="_blank" rel="noopener">Virtualization Based Security</A> as well as developer features like <A href="#" target="_blank" rel="noopener">Windows Containers</A> and <A href="#" target="_blank" rel="noopener">WSL 2</A>. Prior to the WHP integration, these features needed to be disabled before Workstation was able to launch. Post-integration, end users can take advantage of these features and use Workstation at the same time!</P> <P>&nbsp;</P> <P>A big thank you and congratulations go out to the engineering teams of both companies that made this possible.&nbsp; This milestone was reached through their hard work and dedication, and I’m excited to see the results of this effort being released to the world!</P> Fri, 29 May 2020 06:19:08 GMT chuybregts 2020-05-29T06:19:08Z Hyper-V Powering Windows Features <P><EM>December 2019</EM></P> <P>Hyper-V is Microsoft’s hardware virtualization technology. It was initially released with Windows Server 2008 to support server virtualization and has since become a core component of many Microsoft products and features. These features range from enhancing security to empowering developers to enabling the most compatible gaming console.
Recent additions to this list include Windows Sandbox, Windows Defender Application Guard, System Guard and Advanced Threat Detection, Hyper-V Isolated Containers, Windows Hypervisor Platform and Windows Subsystem for Linux 2. Applications built on Hyper-V, such as Kubernetes for Windows and Docker Desktop, are also being introduced and improved.</P> <P>&nbsp;</P> <P>As the scope of Windows virtualization has expanded to become an integral part of the operating system, many new OS capabilities have taken a dependency on Hyper-V. This created compatibility issues with many popular third-party products that provide their own virtualization solutions, forcing users to choose between these applications and OS functionality. Therefore, Microsoft has partnered extensively with key software vendors such as VMware, VirtualBox, and BlueStacks to provide updated solutions that directly leverage Microsoft virtualization technologies, eliminating the need for customers to make this trade-off.</P> <P>&nbsp;</P> <H2><A href="" target="_blank" rel="noopener">Windows Sandbox</A></H2> <P>Windows Sandbox is an isolated, temporary desktop environment where you can run untrusted software without fear of lasting impact to your PC. &nbsp;Any software installed in Windows Sandbox stays only in the sandbox and cannot affect your host. Once Windows Sandbox is closed, the entire state, including files, registry changes and the installed software, is permanently deleted.
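Windows Sandbox ships as an optional Windows feature. One way to enable it (a sketch assuming Windows 10 Pro or Enterprise, build 18305 or later, with virtualization enabled in firmware) is from an elevated PowerShell prompt:

```powershell
# Enable the Windows Sandbox optional feature; a reboot is required afterwards
Enable-WindowsOptionalFeature -Online -FeatureName "Containers-DisposableClientVM" -All
```

After the reboot, Windows Sandbox appears in the Start menu like any other application.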
Windows Sandbox is built using the same technology we developed to securely operate multi-tenant Azure services like Azure Functions, and it provides integration with Windows 10 and support for UI-based applications.</P> <P>&nbsp;</P> <H2><A href="#" target="_self">Windows<SPAN> Defender Application Guard</SPAN> </A></H2> <P style="font-family: SegoeUI, Lato, 'Helvetica Neue', Helvetica, Arial, sans-serif; color: #333333;">Windows Defender Application Guard (WDAG) is a Windows 10 security feature introduced in the Fall Creators Update (version 1709, aka RS3) that protects against targeted threats using Microsoft’s Hyper-V virtualization technology. WDAG augments Windows virtualization-based security capabilities to prevent zero-day kernel vulnerabilities from compromising the host operating system. WDAG also offers enterprise users of Microsoft Edge and Internet Explorer (IE) protection from zero-day kernel vulnerabilities by isolating a user’s untrusted browser sessions from the host operating system. Security-conscious enterprises use WDAG to lock down their enterprise host while allowing their users to browse non-enterprise content.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="clipboard_image_4.png" style="width: 400px;"><img src=";px=400" role="button" title="clipboard_image_4.png" alt="clipboard_image_4.png" /></span></P> <P><EM>Application Guard isolates untrusted sites using a new instance of Windows at the hardware layer.</EM></P> <P>&nbsp;</P> <H2><A href="#" target="_blank" rel="noopener">Windows Defender System Guard</A></H2> <P>In order to protect critical resources such as the Windows authentication stack, single sign-on tokens, the Windows Hello biometric stack, and the Virtual Trusted Platform Module, a system's firmware and hardware must be trustworthy.
Windows Defender System Guard reorganizes the existing Windows 10 system integrity features under one roof and sets up the next set of investments in Windows security. It's designed to make these security guarantees:</P> <UL> <LI>To protect and maintain the integrity of the system as it starts up</LI> <LI>To validate that system integrity has truly been maintained through local and remote attestation</LI> </UL> <P>&nbsp;</P> <H2><A href="#" target="_blank" rel="noopener">Windows Defender Advanced Threat Detection</A></H2> <P>Detecting and stopping attacks that tamper with kernel-mode agents at the hypervisor level is a critical component of the unified endpoint protection platform in Microsoft Defender Advanced Threat Protection (<A href="#" target="_blank" rel="noopener">Microsoft Defender ATP</A>). It’s not without challenges, but the deep integration of <A href="#" target="_blank" rel="noopener">Windows Defender Antivirus</A> with <A href="#" target="_blank" rel="noopener">hardware-based isolation</A> capabilities allows the detection of artifacts of such attacks.</P> <P>&nbsp;</P> <H2><A href="#" target="_blank" rel="noopener">Hyper-V Isolated Containers</A></H2> <P>Hyper-V plays an important role in the container development experience on Windows 10. Since a Windows container requires a tight coupling between its OS version and that of the host it runs on, Hyper-V is used to encapsulate containers on Windows 10 in a transparent, lightweight virtual machine. Colloquially, we call these "Hyper-V Isolated Containers". These containers run in VMs that have been specifically optimized for speed and efficiency when it comes to host resource usage. Most notably, Hyper-V Isolated Containers allow developers to target multiple Linux distros and Windows at the same time, and they are managed just as any container developer would expect, since they integrate with all the same tooling (e.g.
Docker).</P> <P>&nbsp;</P> <H2><A href="#" target="_blank" rel="noopener">Windows Hypervisor Platform</A></H2> <P>The Windows Hypervisor Platform (WHP) adds an extended user-mode API for third-party virtualization stacks and applications to create and manage partitions at the hypervisor level, configure memory mappings for the partition, and create and control execution of virtual processors. The primary value here is that third-party virtualization software (such as VMware) can co-exist with Hyper-V and Hyper-V-based features such as <A href="" target="_blank" rel="noopener">Virtualization-Based Security</A> (VBS).</P> <P>WHP provides an <SPAN>API</SPAN> similar to that of <A href="#" target="_blank" rel="noopener">Linux's KVM</A> and <A href="#" target="_blank" rel="noopener">macOS's Hypervisor Framework</A>, and is currently leveraged by projects such as <A href="#" target="_blank" rel="noopener">QEMU</A> <SPAN>and </SPAN><A href="#" target="_blank" rel="noopener">VMware</A>.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="clipboard_image_5.png" style="width: 400px;"><img src=";px=400" role="button" title="clipboard_image_5.png" alt="clipboard_image_5.png" /></span></P> <P><EM>This diagram provides a high-level overview of a third-party architecture.</EM></P> <P>&nbsp;</P> <H2><A href="#" target="_blank" rel="noopener">Windows Subsystem for Linux 2</A></H2> <P>WSL 2 is the newest version of the architecture that powers the Windows Subsystem for Linux to run ELF64 Linux binaries on Windows. Its updates include increased file system performance as well as full system call compatibility. This new architecture changes how these Linux binaries interact with Windows and your computer’s hardware, but still provides the same user experience as in WSL 1 (the current widely available version).
The main difference is that WSL 2 runs a true Linux kernel inside a virtual machine. Individual Linux distros can be run as either WSL 1 or WSL 2 distros, can be upgraded or downgraded at any time, and WSL 1 and WSL 2 distros can run side by side.</P> <P>&nbsp;</P> <H2><A href="#" target="_blank" rel="noopener">Kubernetes Support for Windows</A></H2> <P>Kubernetes started officially supporting Windows Server in production with the release of Kubernetes version 1.14 (in March 2019). Windows-based applications constitute a large portion of the workloads in many organizations. Windows containers provide a modern way for these Windows applications to use DevOps processes and cloud-native patterns. Kubernetes has become the de facto standard for container orchestration; hence this support enables a vast ecosystem of Windows applications to not only leverage the power of Kubernetes, but also to leverage the robust and growing ecosystem surrounding it. Organizations with investments in both Windows-based applications and Linux-based applications no longer need to look for separate orchestrators to manage their workloads, leading to increased operational efficiencies across their deployments. The engineering that supported this release relied upon open-source and community-led approaches that originally brought Windows Server containers to Windows Server 2016.</P> <P>&nbsp;</P> <P>These components and tools have allowed Microsoft’s Hyper-V technology to introduce new ways of enabling customer experiences. Windows Sandbox, Windows Defender Application Guard, System Guard and Advanced Threat Detection, Hyper-V Isolated Containers, Windows Hypervisor Platform and Windows Subsystem for Linux 2 are all new Hyper-V components that ensure the security and flexibility customers should expect from Windows.
The coordination with applications using Hyper-V, such as Kubernetes for Windows and Docker Desktop, also represents Microsoft’s dedication to customer needs, a commitment that will continue to guide our work going forward.</P> Thu, 12 Dec 2019 22:31:20 GMT nickeaton 2019-12-12T22:31:20Z Virtualization-Based Security: Enabled by Default <P><A href="#" target="_blank">Virtualization-based Security (VBS)</A> uses hardware virtualization features to create and isolate a secure region of memory from the normal operating system. Windows can use this "virtual secure mode" (VSM) to host a number of security solutions, providing them with greatly increased protection from vulnerabilities in the operating system, and preventing the use of malicious exploits which attempt to defeat operating system protections.</P> <P>&nbsp;</P> <P>The Microsoft hypervisor creates VSM and enforces restrictions that protect vital operating system resources, provides an isolated execution environment for privileged software, and can protect secrets <A href="#" target="_blank">such as authenticated user credentials</A>. With the increased protections offered by VBS, even if malware compromises the operating system kernel, the possible exploits can be greatly limited and contained, because the hypervisor can prevent the malware from executing code or accessing secrets.</P> <P>&nbsp;</P> <P>The Microsoft hypervisor has supported VSM since the earliest versions of Windows 10. However, until recently, Virtualization-based Security has been an optional feature that is most commonly enabled by enterprises. This was great, but the hypervisor development team was not satisfied. We believed that all devices running Windows should have Microsoft’s most advanced and most effective security features enabled by default.
In addition to bringing significant security benefits to Windows, achieving default enablement status for the Microsoft hypervisor enables seamless integration of numerous other scenarios leveraging virtualization. Examples include <A href="#" target="_blank">WSL2</A>, <A href="#" target="_blank">Windows Defender Application Guard</A>, <A href="" target="_blank">Windows Sandbox</A>, <A href="#" target="_blank">Windows Hypervisor Platform support for 3rd party virtualization software</A>, and much more.</P> <P>&nbsp;</P> <P>With that goal in mind, we have been hard at work over the past several Windows releases optimizing every aspect of VSM. We knew that getting to the point where VBS could be enabled by default would require reducing the performance and power impact of running the Microsoft hypervisor on typical consumer-grade hardware like tablets, laptops and desktop PCs. We had to make the incremental cost of running the hypervisor as close to zero as possible and this was going to require close partnership with the Windows kernel team and our closest silicon partners – Intel, AMD, and ARM (Qualcomm).</P> <P>&nbsp;</P> <P>Through software innovations like <A href="" target="_blank">HyperClear</A> and by making significant hypervisor and Windows kernel changes to avoid fragmenting large pages in the second-level address translation table, we were able to dramatically reduce the runtime performance and power impact of hypervisor memory management. We also heavily optimized hot hypervisor codepaths responsible for things like interrupt virtualization – taking advantage of hardware virtualization assists where we found that it was helpful to do so. 
Last but not least, we further reduced the performance and power impact of a key VSM feature called Hypervisor-Enforced Code Integrity (HVCI) by working with silicon partners to design completely new hardware features including Intel’s Mode-based execute control for EPT (MBEC), AMD’s Guest-mode execute trap for NPT (GMET), and ARM’s Translation table stage 2 Unprivileged Execute-never (TTS2UXN).</P> <P>&nbsp;</P> <P><EM>I’m proud to say that as of Windows 10 version 1903 </EM><A href="#" target="_blank"><EM>9D</EM></A><EM>, we have succeeded in enabling Virtualization-based Security by default on some </EM><A href="#" target="_blank"><EM>capable hardware</EM></A><EM>!</EM></P> <P>&nbsp;</P> <P>The <A href="#" target="_blank">Samsung Galaxy Book2</A> is officially the first Windows PC to have VBS enabled <U>by default</U>. This PC is built around the <A href="#" target="_blank">Qualcomm Snapdragon 850</A> processor, a 64-bit ARM processor. This is particularly exciting for the Microsoft hypervisor development team because it also marks the first time that enabling our hypervisor is officially supported on any ARM-based device.</P> <P>&nbsp;</P> <P>Keep an eye on this blog for announcements regarding the default-enablement of VBS on additional hardware and in future versions of Windows 10.</P> Wed, 02 Oct 2019 23:57:30 GMT brucesherwin 2019-10-02T23:57:30Z VMware Workstation and Hyper-V – Working Together <P>Yesterday VMware demonstrated a pre-release version of VMware Workstation with early support for the <A href="#" target="_self">Windows Hypervisor Platform</A>&nbsp;in the&nbsp;<A style="font-family: inherit; background-color: #ffffff;" href="#" target="_self">What's New in VMware Fusion and VMware Workstation</A><SPAN style="font-family: inherit;"> session at VMworld.</SPAN></P> <P>&nbsp;</P> <P>In Windows 10 we have introduced many security features that utilize the Windows Hypervisor.&nbsp; Credential Guard, Windows Defender Application Guard, and 
Virtualization Based Security all utilize the Windows Hypervisor.&nbsp; At the same time, new developer features like Windows Server Containers and WSL 2 also rely on the Windows Hypervisor.</P> <P>&nbsp;</P> <P>This has made it challenging for our customers who need to use VMware Workstation.&nbsp; Historically, it has not been possible to run VMware Workstation when Hyper-V was enabled.</P> <P>&nbsp;</P> <P>In the future, users will be able to run all of these applications together.&nbsp; This means that users of VMware Workstation will be able to take advantage of all the security enhancements and developer features that are available in Windows 10.&nbsp; Microsoft and VMware have been collaborating on this effort, and I am really excited to be a part of this moment!</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="IMG_0566.jpg" style="width: 999px;"><img src=";px=999" role="button" title="IMG_0566.jpg" alt="IMG_0566.jpg" /></span></P> <P>&nbsp;</P> <P>Cheers,<BR />Ben</P> Tue, 27 Aug 2019 23:51:32 GMT Ben Armstrong 2019-08-27T23:51:32Z 5/14: Hyper-V HyperClear Update <P>Four new speculative execution side channel vulnerabilities were announced today and affect a wide array of Intel processors. The list of affected processors includes Intel Xeon, Intel Core, and Intel Atom models. These vulnerabilities are referred to as CVE-2018-12126 Microarchitectural Store Buffer Data Sampling (MSBDS), CVE-2018-12130 Microarchitectural Fill Buffer Data Sampling (MFBDS), CVE-2018-12127 Microarchitectural Load Port Data Sampling (MLPDS), and CVE-2019-11091 Microarchitectural Data Sampling Uncacheable Memory (MDSUM). These vulnerabilities are like other Intel CPU vulnerabilities disclosed recently in that they can be leveraged for attacks across isolation boundaries.
This includes intra-OS attacks as well as inter-VM attacks.</P><P>&nbsp;</P><P>In a previous blog post, the Hyper-V hypervisor engineering team described our high-performing and comprehensive side channel vulnerability mitigation architecture, <A href="" target="_self">HyperClear</A>. We originally designed HyperClear as a defense against the L1 Terminal Fault (a.k.a. Foreshadow) Intel side channel vulnerability. Fortunately for us and for our customers, HyperClear has proven to be an excellent foundation for mitigating this new set of side channel vulnerabilities. In fact, HyperClear required a relatively small set of updates to provide strong inter-VM and intra-OS protections for our customers. These updates have been deployed to Azure and are available in Windows Server 2016 and later supported releases of Windows and Windows Server. Just as before, the HyperClear mitigation allows for safe use of hyper-threading in a multi-tenant virtual machine hosting environment.</P><P>&nbsp;</P><P>We have already shared the technical details of HyperClear and the set of required changes to mitigate this new set of hardware vulnerabilities with industry partners. However, we know that many of our customers are also interested in how we’ve extended the Hyper-V HyperClear architecture to provide protections against these vulnerabilities.</P><P>&nbsp;</P><P>As we described in the original HyperClear blog post, HyperClear relies on three main components to ensure strong inter-VM isolation:</P><OL><LI><STRONG>Core Scheduler</STRONG></LI><LI><STRONG>Virtual-Processor Address Space Isolation</STRONG></LI><LI><STRONG>Sensitive Data Scrubbing</STRONG></LI></OL><P>As we extended HyperClear to mitigate these new vulnerabilities, the fundamental components of the architecture remained constant.
However, there were two primary hypervisor changes required:</P><OL><LI><STRONG>Support for a new Intel processor feature called MbClear.</STRONG> Intel has been working to add support for MbClear by updating the CPU microcode for affected Intel hardware. The Hyper-V hypervisor uses this new feature to clear microarchitectural buffers when switching between virtual processors that belong to different virtual machines. This ensures that when a new virtual processor begins to execute, there is no data remaining in any microarchitectural buffers that belongs to a previously running virtual processor. Additionally, this new processor feature may be exposed to guest operating systems to implement intra-OS mitigations.</LI><LI><STRONG>Always-enabled sensitive data scrubbing.</STRONG> This ensures that the hypervisor never leaves sensitive data in hypervisor-owned memory when it returns to guest kernel-mode or guest user-mode. This prevents the hypervisor from being used as a gadget by guest user-mode. Without always-enabled sensitive data scrubbing, the concern would be that guest user-mode can deliberately trigger hypervisor entry and that the CPU may speculatively fill a microarchitectural buffer with secrets remaining in memory from a previous hypervisor entry triggered by guest kernel-mode or a different guest user-mode application. Always-enabled sensitive data scrubbing fully mitigates this concern. 
As a bonus, this change improves performance on many Intel processors because it enables the Hyper-V hypervisor to more efficiently mitigate other previously disclosed Intel side channel speculation vulnerabilities.</LI></OL><P>Overall, the Hyper-V HyperClear architecture has proven to be a readily extensible design providing strong isolation boundaries against a variety of speculative execution side channel attacks with negligible impact on performance.</P> Tue, 14 May 2019 19:54:53 GMT brucesherwin 2019-05-14T19:54:53Z Hyper-V HyperClear Mitigation for L1 Terminal Fault <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Aug 14, 2018 </STRONG> <BR /> <H2> Introduction </H2> <BR /> A new speculative execution side channel vulnerability was announced recently that affects a range of Intel Core and Intel Xeon processors. This vulnerability, referred to as L1 Terminal Fault (L1TF) and assigned CVE-2018-3646 for hypervisors, can be used for a range of attacks across isolation boundaries, including intra-OS attacks from user-mode to kernel-mode as well as inter-VM attacks. Due to the nature of this vulnerability, creating a robust, inter-VM mitigation that doesn’t significantly degrade performance is particularly challenging. <BR /> <BR /> For Hyper-V, we have developed a comprehensive mitigation to this attack that we call HyperClear. This mitigation is in use by Microsoft Azure and is available in Windows Server 2016 and later. The HyperClear mitigation continues to allow for safe use of SMT (hyper-threading) with VMs and, based on our observations of deploying this mitigation in Microsoft Azure, HyperClear has been shown to have relatively negligible performance impact. <BR /> <BR /> We have already shared the details of HyperClear with industry partners.
Since we have received questions as to how we are able to mitigate the L1TF vulnerability without compromising performance, we wanted to broadly share a technical overview of the HyperClear mitigation and how it mitigates L1TF speculative execution side channel attacks across VMs. <BR /> <H2> Overview of L1TF Impact to VM Isolation </H2> <BR /> As documented <A href="#" target="_blank"> here </A> , the fundamental premise of the L1TF vulnerability is that it allows a virtual machine running on a processor core to observe any data in the L1 data cache on that core. <BR /> <BR /> Normally, the Hyper-V hypervisor isolates what data a virtual machine can access by leveraging the memory address translation capabilities provided by the processor. In the case of Intel processors, the Extended Page Tables (EPT) feature of Intel VT-x is used to restrict the system physical memory addresses that a virtual machine can access. <BR /> <BR /> Under normal execution, the hypervisor leverages the EPT feature to restrict what physical memory can be accessed by a VM’s virtual processor while it is running. This also restricts what data the virtual processor can access in the cache, as the physical processor enforces that a virtual processor can only access data in the cache corresponding to system physical addresses made accessible via the virtual processor’s EPT configuration. <BR /> <BR /> By successfully exploiting the L1TF vulnerability, the EPT configuration for a virtual processor can be bypassed during the speculative execution associated with this vulnerability. This means that a virtual processor in a VM can speculatively access anything in the L1 data cache, regardless of the memory protections configured by the processor’s EPT configuration. <BR /> <BR /> Intel’s Hyper-Threading (HT) technology is a form of Simultaneous MultiThreading (SMT). 
With SMT, a core has multiple SMT threads (also known as logical processors), and these logical processors (LPs) can execute simultaneously on a core. SMT further complicates this vulnerability, as the L1 data cache is shared between sibling SMT threads of the same core. Thus, a virtual processor for a VM running on a SMT thread can speculatively access anything brought into the L1 data cache by its sibling SMT threads. This can make it inherently unsafe to run multiple isolation contexts on the same core. For example, if one logical processor of a SMT core is running a virtual processor from VM A and another logical processor of the core is running a virtual processor from VM B, sensitive data from VM B could be seen by VM A (and vice-versa). <BR /> <BR /> <IMG src="" /> <BR /> <BR /> Similarly, if one logical processor of a SMT core is running a virtual processor for a VM and the other logical processor of the SMT core is running in the hypervisor context, the guest VM could speculatively access sensitive data brought into the cache by the hypervisor. <BR /> <H2> Basic Inter-VM Mitigation </H2> <BR /> To mitigate the L1TF vulnerability in the context of inter-VM isolation, the most straightforward mitigation involves two key components: <BR /> <OL> <BR /> <LI> <STRONG> Flush L1 Data Cache On Guest VM Entry </STRONG> – Every time the hypervisor switches a processor thread (logical processor) to execute in the context of a guest virtual processor, the hypervisor can first flush the L1 data cache. This ensures that no sensitive data from the hypervisor or previously running guest virtual processors remains in the cache. To enable the hypervisor to flush the L1 data cache, Intel has released updated microcode that provides an architectural facility for flushing the L1 data cache. 
</LI> <BR /> <LI> <STRONG> Disable SMT </STRONG> – Even with flushing the L1 data cache on guest VM entry, there is still the risk that a sibling SMT thread can bring sensitive data into the cache from a different security context. To mitigate this, SMT can be disabled, which ensures that only one thread ever executes on a processor core. </LI> <BR /> </OL> <BR /> The L1TF mitigation for Hyper-V prior to Windows Server 2016 is based on these components. However, this basic mitigation has the major downside that SMT must be disabled, which can significantly reduce the overall performance of a system. Furthermore, this mitigation can result in a very high rate of L1 data cache flushes since the hypervisor may switch a thread between the guest and hypervisor contexts many thousands of times a second. These frequent cache flushes can also degrade the performance of the system. <BR /> <H2> HyperClear Inter-VM Mitigation </H2> <BR /> To address the downsides of the basic L1TF Inter-VM mitigation, we developed the HyperClear mitigation. The HyperClear mitigation relies on three key components to ensure strong Inter-VM isolation: <BR /> <OL> <BR /> <LI> Core Scheduler </LI> <BR /> <LI> Virtual-Processor Address Space Isolation </LI> <BR /> <LI> Sensitive Data Scrubbing </LI> <BR /> </OL> <BR /> <H3> Core Scheduler </H3> <BR /> The traditional Hyper-V scheduler operates at the level of individual SMT threads (logical processors). When making scheduling decisions, the Hyper-V scheduler would schedule a virtual processor onto a SMT thread, without regard to what the sibling SMT threads of the same core were doing. Thus, a single physical core could be running virtual processors from different VMs simultaneously. <BR /> <BR /> Starting in Windows Server 2016, Hyper-V introduced a new scheduler implementation for SMT systems known as the " <A href="#" target="_blank"> Core Scheduler </A> ".
When the Core Scheduler is enabled, Hyper-V schedules virtual cores onto physical cores. Thus, when a virtual core for a VM is scheduled, it gets exclusive use of a physical core, and a VM will never share a physical core with another VM. <BR /> <BR /> <IMG src="" /> <BR /> <BR /> With the Core Scheduler, a VM can safely take advantage of SMT (Hyper-Threading). When a VM is using SMT, the hypervisor scheduling allows the VM to use all the SMT threads of a core at the same time. <BR /> <BR /> Thus, the Core Scheduler provides the essential protection that a VM’s data won’t be directly disclosed across sibling SMT threads. It protects against cross-thread data exposure of a VM since two different VMs never run simultaneously on different threads of the same core. <BR /> <BR /> However, the Core Scheduler alone is not sufficient to protect against all forms of sensitive data leakage across SMT threads. There is still the risk that hypervisor data could be leaked across sibling SMT threads. <BR /> <H3> Virtual-Processor Address Space Isolation </H3> <BR /> SMT threads on a core can independently enter and exit the hypervisor context based on their activity. For example, events like interrupts can cause an SMT thread to switch out of running the guest virtual processor context and begin executing the hypervisor context. This can happen independently for each SMT thread, so one SMT thread may be executing in the hypervisor context while its sibling SMT thread is still running a VM’s guest virtual processor context. An attacker running code in the less trusted guest VM virtual processor context on one SMT thread can then use the L1TF side channel vulnerability to potentially observe sensitive data from the hypervisor context running on the sibling SMT thread. <BR /> <BR /> One potential mitigation to this problem is to coordinate hypervisor entry and exit across SMT threads of the same core.
While this is effective in mitigating the information disclosure risk, it can significantly degrade performance. <BR /> <BR /> Instead of coordinating hypervisor entries and exits across SMT threads, Hyper-V employs strong data isolation in the hypervisor to protect against a malicious guest VM leveraging the L1TF vulnerability to observe sensitive hypervisor data. The Hyper-V hypervisor achieves this isolation by maintaining separate virtual address spaces in the hypervisor for each guest SMT thread (virtual processor). When the hypervisor context is entered on a specific SMT thread, the only data that is addressable by the hypervisor is the data associated with the guest virtual processor assigned to that SMT thread. This is enforced through the hypervisor’s page table selectively mapping only the memory associated with the guest virtual processor. No data for any other guest virtual processor is addressable, and thus, the only data that can be brought into the L1 data cache by the hypervisor is data associated with that current guest virtual processor. <BR /> <BR /> <IMG src="" /> <BR /> <BR /> Thus, regardless of whether a given virtual processor is running in the guest VM virtual processor context or in the hypervisor context, the only data that can be brought into the cache is data associated with the active guest virtual processor. No additional privileged hypervisor secrets or data from other guest virtual processors can be brought into the L1 data cache. <BR /> <BR /> This strong address space isolation provides two distinct benefits: <BR /> <OL> <BR /> <LI> The hypervisor does not need to coordinate entries and exits into the hypervisor across sibling SMT threads. So, SMT threads can enter and exit the hypervisor context independently without any additional performance overhead. </LI> <BR /> <LI> The hypervisor does not need to flush the L1 data cache when entering the guest VP context from the hypervisor context.
Since the only data that can be brought into the cache while executing in the hypervisor context is data associated with the guest virtual processor, there is no risk of privileged/private state in the cache that needs to be protected from the guest. Thus, with this strong address space isolation, the hypervisor only needs to flush the L1 data cache when switching between virtual cores on a physical core. This is much less frequent than the switches between the hypervisor and guest VP contexts. </LI> <BR /> </OL> <BR /> <H3> Sensitive Data Scrubbing </H3> <BR /> There are cases where virtual processor address space isolation is insufficient to ensure isolation of sensitive data. Specifically, in the case of nested virtualization, a single virtual processor may itself run multiple guest virtual processors. Consider the case of an L1 guest VM running a nested hypervisor (L1 hypervisor). In this case, a virtual processor in this L1 guest may be used to run nested virtual processors for L2 VMs being managed by the L1 nested hypervisor. <BR /> <BR /> <IMG src="" /> <BR /> <BR /> Here, the nested L1 guest hypervisor will be context switching between each of these nested L2 guests (VM A and VM B) and the nested L1 guest hypervisor. Thus, a virtual processor for the L1 VM being maintained by the L0 hypervisor can run multiple different security domains – a nested L1 hypervisor context and one or more L2 guest virtual machine contexts. Since the L0 hypervisor maintains a single address space for the L1 VM’s virtual processor, this address space could contain data for the nested L1 guest hypervisor and L2 guest VMs. <BR /> <BR /> To ensure a strong isolation boundary between these different security domains, the L0 hypervisor relies on a technique we refer to as state scrubbing when nested virtualization is in use. With state scrubbing, the L0 hypervisor will avoid caching any sensitive guest state in its data structures.
If the L0 hypervisor must read guest data, like register contents, into its private memory to complete an operation, the L0 hypervisor will overwrite this memory with zeros prior to exiting the L0 hypervisor context. This ensures that any sensitive L1 guest hypervisor or L2 guest virtual processor state is not resident in the cache when switching between security domains in the L1 guest VM. <BR /> <BR /> <IMG src="" /> <BR /> <BR /> For example, if the L1 guest hypervisor accesses an I/O port that is emulated by the L0 hypervisor, the L0 hypervisor context will become active. To properly emulate the I/O port access, the L0 hypervisor will have to read the current guest register contents for the L1 guest hypervisor context, and these register contents will be copied to internal L0 hypervisor memory. When the L0 hypervisor has completed emulation of the I/O port access, the L0 hypervisor will overwrite any L0 hypervisor memory that contains register contents for the L1 guest hypervisor context. After clearing out its internal memory, the L0 hypervisor will resume the L1 guest hypervisor context. This ensures that no sensitive data stays in the L0 hypervisor’s internal memory across invocations of the L0 hypervisor context. Thus, in the above example, there will not be any sensitive L1 guest hypervisor state in the L0 hypervisor’s private memory. This mitigates the risk that sensitive L1 guest hypervisor state will be brought into the data cache the next time the L0 hypervisor context becomes active. <BR /> <BR /> As described above, this state scrubbing model does involve some extra processing when nested virtualization is in use. To minimize this processing, the L0 hypervisor is very careful in tracking when it needs to scrub its memory, so it can do this with minimal overhead. The overhead of this extra processing is negligible in the nested virtualization scenarios we have measured.
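The scrubbing pattern described above (copy sensitive state into private memory, use it, then zero it before leaving the trusted context) can be sketched in a few lines. This is an illustrative Python sketch of the idea only, not hypervisor code; the function name, the `rax` register, and the 8-byte scratch buffer are invented for the example:

```python
def emulate_io_access(guest_registers: dict) -> int:
    """Illustrative sketch of the state-scrubbing pattern: copy guest
    state into private memory, use it, then zero it before returning
    control to the less-trusted context. Not real hypervisor code."""
    # Copy guest register contents into "hypervisor-private" memory.
    scratch = bytearray(guest_registers["rax"].to_bytes(8, "little"))
    # ... emulate the I/O port access using the copied state
    # (here: pretend the low byte is the value written to the port) ...
    result = int.from_bytes(scratch, "little") & 0xFF
    # Scrub: overwrite the private copy with zeros before "exiting",
    # so no sensitive register state lingers for the next context.
    for i in range(len(scratch)):
        scratch[i] = 0
    assert all(b == 0 for b in scratch)
    return result
```

The essential property mirrors the L0 hypervisor's behavior: by the time control returns to the guest, the private copy of its sibling security domain's state has been overwritten, so nothing sensitive can be pulled back into the L1 data cache.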
<BR /> <BR /> Finally, the L0 hypervisor state scrubbing ensures that the L0 hypervisor can efficiently and safely provide nested virtualization to L1 guest virtual machines. However, to fully mitigate inter-VM attacks between L2 guest virtual machines, the nested L1 guest hypervisor must implement a mitigation for the L1TF vulnerability. This means the L1 guest hypervisor needs to appropriately manage the L1 data cache to ensure isolation of sensitive data across the L2 guest virtual machine security boundaries. The Hyper-V L0 hypervisor exposes the appropriate capabilities to L1 guest hypervisors so that they can perform L1 data cache flushes. <BR /> <H2> Conclusion </H2> <BR /> By using a combination of core scheduling, address space isolation, and data clearing, Hyper-V HyperClear is able to mitigate the L1TF speculative execution side channel attack across VMs with negligible performance impact and with full support of SMT. </BODY></HTML> Fri, 22 Mar 2019 00:17:45 GMT Virtualization-Team 2019-03-22T00:17:45Z Hyper-V symbols for debugging <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Apr 25, 2018 </STRONG> <BR /> Having access to debugging symbols can be very handy, for example when you are <BR /> <UL> <BR /> <LI> A partner building solutions leveraging Hyper-V, </LI> <BR /> <LI> Trying to debug a specific issue, or </LI> <BR /> <LI> Searching for bugs to participate in the <A href="#" target="_blank"> Microsoft Hyper-V Bounty Program </A> . </LI> <BR /> </UL> <BR /> <BR /> Starting with Windows Server 2016 with the April 2018 cumulative update installed, we are now providing access to most Hyper-V-related symbols through the public symbol servers.
Here are some of the symbols that are available right now: <BR /> <CODE> <BR /> SYMCHK: hvhostsvc.dll [10.0.14393.2007 ] PASSED - PDB: hvhostsvc.pdb DBG: <BR /> SYMCHK: passthruparser.sys [10.0.14393.2007 ] PASSED - PDB: passthruparser.pdb DBG: <BR /> SYMCHK: storvsp.sys [10.0.14393.2312 ] PASSED - PDB: storvsp.pdb DBG: <BR /> SYMCHK: vhdmp.sys [10.0.14393.2097 ] PASSED - PDB: vhdmp.pdb DBG: <BR /> SYMCHK: vhdparser.sys [10.0.14393.2007 ] PASSED - PDB: vhdparser.pdb DBG: <BR /> SYMCHK: vid.dll [10.0.14393.2007 ] PASSED - PDB: vid.pdb DBG: <BR /> SYMCHK: Vid.sys [10.0.14393.2007 ] PASSED - PDB: Vid.pdb DBG: <BR /> SYMCHK: vmbuspipe.dll [10.0.14393.2007 ] PASSED - PDB: vmbuspipe.pdb DBG: <BR /> SYMCHK: vmbuspiper.dll [10.0.14393.2007 ] PASSED - PDB: vmbuspiper.pdb DBG: <BR /> SYMCHK: vmbusvdev.dll [10.0.14393.2007 ] PASSED - PDB: vmbusvdev.pdb DBG: <BR /> SYMCHK: vmchipset.dll [10.0.14393.2007 ] PASSED - PDB: VmChipset.pdb DBG: <BR /> SYMCHK: vmcompute.dll [10.0.14393.2214 ] PASSED - PDB: vmcompute.pdb DBG: <BR /> SYMCHK: vmcompute.exe [10.0.14393.2214 ] PASSED - PDB: vmcompute.pdb DBG: <BR /> SYMCHK: vmconnect.exe [10.0.14393.0 ] PASSED - PDB: vmconnect.pdb DBG: <BR /> SYMCHK: vmdebug.dll [10.0.14393.2097 ] PASSED - PDB: vmdebug.pdb DBG: <BR /> SYMCHK: vmdynmem.dll [10.0.14393.2007 ] PASSED - PDB: vmdynmem.pdb DBG: <BR /> SYMCHK: vmemulateddevices.dll [10.0.14393.2007 ] PASSED - PDB: VmEmulatedDevices.pdb DBG: <BR /> SYMCHK: VmEmulatedNic.dll [10.0.14393.2007 ] PASSED - PDB: VmEmulatedNic.pdb DBG: <BR /> SYMCHK: VmEmulatedStorage.dll [10.0.14393.2214 ] PASSED - PDB: VmEmulatedStorage.pdb DBG: <BR /> SYMCHK: vmicrdv.dll [10.0.14393.2007 ] PASSED - PDB: vmicrdv.pdb DBG: <BR /> SYMCHK: vmictimeprovider.dll [10.0.14393.2007 ] PASSED - PDB: vmictimeprovider.pdb DBG: <BR /> SYMCHK: vmicvdev.dll [10.0.14393.2214 ] PASSED - PDB: vmicvdev.pdb DBG: <BR /> SYMCHK: vmms.exe [10.0.14393.2214 ] PASSED - PDB: vmms.pdb DBG: <BR /> SYMCHK: vmrdvcore.dll [10.0.14393.2214 ] PASSED - 
PDB: vmrdvcore.pdb DBG: <BR /> SYMCHK: vmserial.dll [10.0.14393.2007 ] PASSED - PDB: vmserial.pdb DBG: <BR /> SYMCHK: vmsif.dll [10.0.14393.2214 ] PASSED - PDB: vmsif.pdb DBG: <BR /> SYMCHK: vmsifproxystub.dll [10.0.14393.82 ] PASSED - PDB: vmsifproxystub.pdb DBG: <BR /> SYMCHK: vmsmb.dll [10.0.14393.2007 ] PASSED - PDB: vmsmb.pdb DBG: <BR /> SYMCHK: vmsp.exe [10.0.14393.2214 ] PASSED - PDB: vmsp.pdb DBG: <BR /> SYMCHK: vmsynthfcvdev.dll [10.0.14393.2007 ] PASSED - PDB: VmSynthFcVdev.pdb DBG: <BR /> SYMCHK: VmSynthNic.dll [10.0.14393.2007 ] PASSED - PDB: VmSynthNic.pdb DBG: <BR /> SYMCHK: vmsynthstor.dll [10.0.14393.2007 ] PASSED - PDB: VmSynthStor.pdb DBG: <BR /> SYMCHK: vmtpm.dll [10.0.14393.2007 ] PASSED - PDB: vmtpm.pdb DBG: <BR /> SYMCHK: vmuidevices.dll [10.0.14393.2007 ] PASSED - PDB: VmUiDevices.pdb DBG: <BR /> SYMCHK: vmusrv.dll [10.0.14393.2007 ] PASSED - PDB: vmusrv.pdb DBG: <BR /> SYMCHK: vmwp.exe [10.0.14393.2214 ] PASSED - PDB: vmwp.pdb DBG: <BR /> SYMCHK: vmwpctrl.dll [10.0.14393.2007 ] PASSED - PDB: vmwpctrl.pdb DBG: <BR /> SYMCHK: vmprox.dll [10.0.14393.2007 ] PASSED - PDB: vmprox.pdb DBG: <BR /> SYMCHK: vpcivsp.sys [10.0.14393.2214 ] PASSED - PDB: vpcivsp.pdb DBG: <BR /> </CODE> <BR /> <BR /> There is a limited set of virtualization-related symbols that are currently not available: storvsp.pdb, hvax64.pdb, hvix64.pdb, and hvloader.pdb. <BR /> <BR /> If you have a scenario where you need access to any of these symbols, please let us know in the comments below or through the <A href="#" target="_blank"> Feedback Hub </A> app. Please include some detail on the specific scenario which you are looking at. With newer releases, we are evaluating whether we can make even more symbols available. <BR /> <BR /> Alles Gute, <BR /> Lars <BR /> <BR /> [update 2018-04-26]: symbols for vid.sys, vid.dll, and vmprox.dll are now available as well. <BR /> [update 2018-10-24]: symbols for passthruparser.sys, storvsp.sys, and vhdparser.sys are now available. 
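To consume these PDBs, pointing your debugger at the Microsoft public symbol server is enough. A typical WinDbg configuration looks like the following (the local cache directory `C:\symbols` is just an example path):

```
.sympath srv*C:\symbols*https://msdl.microsoft.com/download/symbols
.reload /f
```

The same `srv*cache*server` string also works in the `_NT_SYMBOL_PATH` environment variable, which tools like symchk honor.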
</BODY></HTML> Fri, 22 Mar 2019 00:16:10 GMT Lars Iwer 2019-03-22T00:16:10Z Sneak Peek: Taking a Spin with Enhanced Linux VMs <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Feb 28, 2018 </STRONG> <BR /> <STRONG> **Update: This feature is now generally available. Please see our <A href="#" target="_blank"> latest blog post </A> to learn more** </STRONG> <BR /> <BR /> Whether you're a developer or an IT admin, virtual machines are familiar tools that allow users to run entirely separate operating system instances on a host. And despite being a separate OS, we feel it's important to have a VM experience that feels tightly integrated with the host. We invested in making the Windows client VM experience first-class, and users really liked it. Our users asked us to go further: they wanted that same first-class experience on Linux VMs as well. <BR /> <BR /> As we thought about how we could deliver a better-quality experience--one that achieved closer parity with Windows clients--we found an opportunity to collaborate with the open source folks at <A href="#" target="_blank"> XRDP </A> , who have implemented Microsoft’s RDP protocol on Linux. <BR /> <BR /> We’re partnering with <A href="#" target="_blank"> Canonical </A> on the upcoming Ubuntu 18.04 release to make this experience a reality, and we’re working to provide a solution that works out of the box. Hyper-V’s <A href="#" target="_blank"> Quick Create VM gallery </A> is the perfect vehicle to deliver such an experience. With only 3 mouse clicks, users will be able to get an Ubuntu VM running that offers clipboard functionality, drive redirection, and much more. <BR /> <BR /> But you don’t have to wait until the release of Ubuntu 18.04 to try out the improved Linux VM experience. Read on to learn how you can get a sneak peek! <BR /> <BR /> <EM> Disclaimer: <STRONG> This feature is under development. </STRONG> This tutorial outlines steps to have an enhanced Ubuntu experience in 16.04.
Our TARGET experience will be with 18.04. There may be some bugs you discover in 16.04--and that's okay! We want to gather this data so we can make the 18.04 experience great. </EM> <BR /> <BR /> <IMG src="" /> <BR /> <H2> <STRONG> A Call for Testing </STRONG> </H2> <BR /> We've chosen Canonical's next LTS release, Bionic Beaver, to be the focal point of our investments. In the lead up to the official release of 18.04, we'd like to begin getting feedback on how satisfied users are with the general experience. The experience we’re working towards in Ubuntu 18.04 can be set up in Ubuntu 16.04 (with a few extra steps). We will walk through how to set up an Ubuntu 16.04 VM running in Hyper-V with Enhanced Session Mode. <BR /> <BR /> In the future, you can expect to be able to find an Ubuntu 18.04 image sitting in the Hyper-V Quick Create gallery 🙂 <BR /> <BR /> <STRONG> NOTE: In order to participate in this tutorial, you need to be on Insider Builds, running at minimum Insider Build No. 17063 </STRONG> <BR /> <H2> <STRONG> Tutorial </STRONG> </H2> <BR /> Grab the Ubuntu 16.04 ISO from Canonical's website, found at <A href="#" target="_blank"> </A> . Provision the VM as you normally would and step through the installation process. We created a set of scripts to perform all the heavy lifting to set up your environment appropriately. Once your VM is fully operational, we'll be executing the following commands inside of it. <BR /> <BR /> <CODE> #Get the scripts from GitHub <BR /> $ sudo apt-get update <BR /> $ sudo apt install git <BR /> $ git clone <A href="#" target="_blank"> </A> ~/linux-vm-tools <BR /> $ cd ~/linux-vm-tools/ubuntu/16.04/ </CODE> <BR /> <BR /> <CODE> #Make the scripts executable and run them... <BR /> $ sudo chmod +x <BR /> $ sudo chmod +x <BR /> $ sudo ./ </CODE> <BR /> <BR /> <STRONG> The install script will need to be run twice in order for it to execute fully (it must perform a reboot mid-script) </STRONG> .
That is, once your VM reboots, you'll need to change directory into the location of the script and run it again. Once you've finished running the script, you'll need to run <BR /> <BR /> <CODE> $ sudo ./ </CODE> <BR /> <BR /> After you've run your scripts, shut down your VM. On your host machine in a PowerShell prompt, execute this command: <BR /> <BR /> <CODE> Set-VM -VMName &lt;your_vm_name&gt;&nbsp; -EnhancedSessionTransportType HvSocket </CODE> <BR /> <BR /> Now, when you boot your VM, you will be greeted with an option to connect and adjust your display size. This will be an indication that you're running in enhanced session mode. Click "Connect" and you're done. <BR /> <BR /> <IMG src="" /> <BR /> <H2> What are the Benefits? </H2> <BR /> These are the features that you get with the new enhanced session mode: <BR /> <UL> <BR /> <LI> Better mouse experience </LI> <BR /> <LI> Integrated clipboard </LI> <BR /> <LI> Window resizing </LI> <BR /> <LI> Drive redirection </LI> <BR /> </UL> <BR /> We encourage you to log any issues you discover <A href="#" target="_blank"> on GitHub </A> . This will also give you an idea of already identified issues. <BR /> <H2> How does this work? </H2> <BR /> The technology behind this mode is actually the same as how we achieve an enhanced session mode in Windows. It relies on the <A href="#" target="_blank"> RDP protocol </A> , implemented on Linux by the open source folks at XRDP, over Hyper-V sockets to light up all the great features that give the VM an integrated feel. Hyper-V sockets, or hv_sock, supply a byte-stream based communication mechanism between the host partition and the guest VM. Think of it as similar to TCP, except it's going over an optimized transport layer called VMBus. We contributed changes which allow XRDP to utilize hv_sock. <BR /> <BR /> The scripts we executed did the following: <BR /> <UL> <BR /> <LI> Installs the "linux-azure" kernel to the VM. This carries the hv_sock bits that we need.
</LI> <BR /> <LI> Downloads the XRDP source code and compiles it with the hv_sock feature turned on (the published XRDP package in 16.04 doesn't have this set, so we must compile from source). </LI> <BR /> <LI> Builds and installs xorgxrdp. </LI> <BR /> <LI> Configures the user session for RDP </LI> <BR /> <LI> Launches the XRDP service </LI> <BR /> </UL> <BR /> As we mentioned earlier, the steps described above are for Ubuntu 16.04, which will look a little different from 18.04. In fact, with Ubuntu 18.04 shipping with the 4.15 Linux kernel (which already carries the hv_sock bits), we won’t need to apply the linux-azure kernel. The version of XRDP that ships in 18.04 is already compiled with the hv_sock feature turned on, so there’s no need to build xrdp/xorgxrdp—a simple “apt install” will bring in all the feature goodness! <BR /> <BR /> If you’re not flighting Insider builds, <STRONG> you can look forward to having this enhanced VM experience via the VM gallery when Ubuntu 18.04 is released at the end of April. </STRONG> Leave a comment below on your experience or tweet me with your thoughts! <BR /> <BR /> <STRONG> **Update: This feature is now generally available. Please see our <A href="#" target="_blank"> latest blog post </A> to learn more** </STRONG> <BR /> <BR /> Cheers, <BR /> <BR /> Craig Wilhite ( <A href="#" target="_blank"> @CraigWilhite </A> ) </BODY></HTML> Fri, 22 Mar 2019 00:16:03 GMT Virtualization-Team 2019-03-22T00:16:03Z Looking at the Hyper-V Event Log (January 2018 edition) <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jan 23, 2018 </STRONG> <BR /> Hyper-V has changed over the last few years and so has our event log structure. With that in mind, here is an update of <A href="#" target="_blank"> Ben's original post in 2009 </A> ("Looking at the Hyper-V Event Log"). <BR /> <BR /> This post gives a short overview of the different Windows event log channels that Hyper-V uses.
It can be used as a reference to better understand which event channels might be relevant for different purposes. <BR /> <BR /> As general guidance, you should <B> start with the Hyper-V-VMMS and Hyper-V-Worker </B> event channels when analyzing a failure. For migration-related events it makes sense to look at the event logs both on the source and destination node. <BR /> <BR /> <BR /> <IMG src="" /> <BR /> <BR /> Below are the current event log channels for Hyper-V. Using "Event Viewer" you can find them under "Applications and Services Logs", "Microsoft", "Windows". <BR /> If you would like to collect events from these channels and consolidate them into a single file, we've published a <A href="#" target="_blank"> HyperVLogs PowerShell module </A> to help. <BR /> <TABLE> <TBODY><TR> <TD> <B> Event Channel Category </B> </TD> <TD> <B> Description </B> </TD> </TR> <TR> <TD> Hyper-V-Compute </TD> <TD> Events from the <A href="#" target="_blank"> Host Compute Service (HCS) </A> are collected here. The HCS is a low-level management API. </TD> </TR> <TR> <TD> Hyper-V-Config </TD> <TD> This section is for anything that relates to virtual machine configuration files. If you have a missing or corrupt virtual machine configuration file – there will be entries here that tell you all about it. </TD> </TR> <TR> <TD> Hyper-V-Guest-Drivers </TD> <TD> Look at this section if you are experiencing issues with VM integration components. </TD> </TR> <TR> <TD> Hyper-V-High-Availability </TD> <TD> Hyper-V clustering-related events are collected in this section. </TD> </TR> <TR> <TD> Hyper-V-Hypervisor </TD> <TD> This section is used for hypervisor specific events. You will usually only need to look here if the hypervisor fails to start – then you can get detailed information here. </TD> </TR> <TR> <TD> Hyper-V-StorageVSP </TD> <TD> Events from the Storage Virtualization Service Provider. Typically you would look at these when you want to debug low-level storage operations for a virtual machine.
</TD> </TR> <TR> <TD> Hyper-V-VID </TD> <TD> These are events from the Virtualization Infrastructure Driver. Look here if you experience issues with memory assignment, e.g. dynamic memory, or changing static memory while the VM is running. </TD> </TR> <TR> <TD> <B> Hyper-V-VMMS </B> </TD> <TD> Events from the virtual machine management service can be found here. When VMs are not starting properly, or VM migrations fail, this would be a good source to start investigating. </TD> </TR> <TR> <TD> Hyper-V-VmSwitch </TD> <TD> These channels contain events from the virtual network switches. </TD> </TR> <TR> <TD> <B> Hyper-V-Worker </B> </TD> <TD> This section contains events from the worker process that is used for the actual running of the virtual machine. You will see events related to startup and shutdown of the VM here. </TD> </TR> <TR> <TD> Hyper-V-Shared-VHDX </TD> <TD> Events specific to virtual hard disks that can be shared between several virtual machines. If you are using shared VHDs this event channel can provide more detail in case of a failure. </TD> </TR> <TR> <TD> Hyper-V-VMSP </TD> <TD> The VM security process (VMSP) is used to provide secured virtual devices like the virtual TPM module to the VM. </TD> </TR> <TR> <TD> Hyper-V-VfpExt </TD> <TD> Events from the Virtual Filtering Platform (VFP) which is part of the Software Defined Networking stack. </TD> </TR> <TR> <TD> VHDMP </TD> <TD> Events from operations on virtual hard disk files (e.g. creation, merging) go here. </TD> </TR> </TBODY></TABLE> <BR /> Please note: some of these only contain analytic/debug logs that need to be enabled separately, and not all channels exist on Windows client. To enable the analytic/debug logs, you can use the <A href="#" target="_blank"> HyperVLogs PowerShell module </A> .
<BR /> <BR /> Alles Gute, <BR /> <BR /> Lars </BODY></HTML> Fri, 22 Mar 2019 00:15:09 GMT Lars Iwer 2019-03-22T00:15:09Z Migrating local VM owner certificates for VMs with vTPM <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Dec 14, 2017 </STRONG> <BR /> Whenever I want to replace or reinstall a system which is used to run <A href="#" target="_blank"> virtual machines with a virtual trusted platform module </A> (vTPM), I've been facing a challenge: For hosts that are not part of a <A href="#" target="_blank"> guarded fabric </A> , the new system does need to be authorized to run the VM. <BR /> Some time ago, I wrote a blog post focused on <A href="#" target="_blank"> running VMs with a vTPM on additional hosts </A> , but the approach highlighted there does not solve everything when the original host is decommissioned. The VMs can be started on the new host, but without the original owner certificates, you cannot change the list of allowed guardians anymore. <BR /> <BR /> This blog post shows a way to export the information needed from the source host and import it on a destination host. Please note that this technique only works for <EM> local </EM> mode and not for a host that is part of a guarded fabric. You can check whether your host runs in local mode by running <CODE> Get-HgsClientConfiguration </CODE> . The property <CODE> Mode </CODE> should list <CODE> Local </CODE> as a value. <BR /> <BR /> <H3> Exporting the default owner from the source host </H3> <BR /> The following script exports the necessary information of the default owner (" <CODE> UntrustedGuardian </CODE> ") on a host that is configured using local mode. When running the script on the source host, two certificates are exported: a signing certificate and an encryption certificate. 
<BR /> <BR /> <BR /> <H3> Importing the UntrustedGuardian on the new host </H3> <BR /> On the destination host, the following snippet creates a new guardian using the certificates that have been exported in the previous step. <BR /> <BR /> <BR /> Please note that importing the "UntrustedGuardian" on the new host has to be done before creating new VMs with a vTPM on this host -- otherwise a new guardian with the same name will already be present and the creation with the PowerShell snippet above will fail. <BR /> <BR /> With these two steps, you should be able to migrate all the necessary bits to keep your VMs with vTPM running in your dev/test environment. This approach can also be used to back up your owner certificates, depending on how these certificates have been created. <BR /> </BODY></HTML> Fri, 22 Mar 2019 00:14:31 GMT Lars Iwer 2019-03-22T00:14:31Z What's new in Hyper-V for Windows 10 Fall Creators Update? <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Nov 13, 2017 </STRONG> <BR /> <A href="#" target="_blank"> Windows 10 Fall Creators Update </A> has arrived!&nbsp; While we’ve been blogging about new features as they appear in Windows Insider builds, many of you have asked for a consolidated list of Hyper-V updates and improvements since Creators Update in April. <BR /> <BR /> Summary: <BR /> <UL> <BR /> <LI> Quick Create includes a gallery (and you can add your own images) </LI> <BR /> <LI> Hyper-V has a Default Switch for easy networking </LI> <BR /> <LI> It’s easy to revert virtual machines to their start state </LI> <BR /> <LI> Host battery state is visible in virtual machines </LI> <BR /> <LI> Virtual machines are easier to share </LI> <BR /> </UL> <BR /> <UL> </UL> <BR /> <H2> Quick Create virtual machine gallery </H2> <BR /> The virtual machine gallery in Quick Create makes it easy to find virtual machine images in one convenient location. 
<BR /> <BR /> <IMG src="" /> <BR /> <BR /> You can also add your own virtual machine images to the Quick Create gallery.&nbsp; Building a custom gallery takes some time but, once built, makes creating virtual machines easy and consistent. <BR /> <BR /> <A href="#" target="_blank"> This blog post </A> walks through adding custom images to the gallery. <BR /> <BR /> <IMG src="" /> <BR /> <BR /> For images that aren’t in the gallery, select “Local Installation Source” to create a virtual machine from an .iso or .vhd located somewhere in your file system. <BR /> <BR /> Keep in mind, while Quick Create and the virtual machine gallery are convenient, they are not a replacement for the New Virtual Machine wizard in Hyper-V Manager.&nbsp; For more complicated virtual machine configuration, use that. <BR /> <H2> Default Switch </H2> <BR /> <IMG src="" /> <BR /> <BR /> The switch named “Default Switch” allows virtual machines to share the host’s network connection using NAT (Network Address Translation).&nbsp; This switch has a few unique attributes: <BR /> <OL> <BR /> <LI> Virtual machines connected to it will have access to the host’s network whether you’re connected to Wi-Fi, a dock, or Ethernet. It will also work when the host is using a VPN <BR /> or proxy. </LI> <BR /> <LI> It’s available as soon as you enable Hyper-V – you won’t lose internet connectivity setting it up. </LI> <BR /> <LI> You can’t delete or rename it. </LI> <BR /> <LI> It has the same name and device ID on all Windows 10 Fall Creators Update Hyper-V hosts. <BR /> Name: Default Switch <BR /> ID: c08cb7b8-9b3c-408e-8e30-5e16a3aeb444 </LI> <BR /> </OL> <BR /> Yes, the Default Switch automatically assigns an IP address to the virtual machine (it provides DHCP and DNS).
<BR /> <BR /> I’m really excited to have an always-available network connection for virtual machines on Hyper-V.&nbsp; The Default Switch offers the best networking experience for virtual machines on a laptop.&nbsp; If you need highly customized networking, however, continue using Virtual Switch Manager. <BR /> <H2> Revert! (automatic checkpoints) </H2> <BR /> This is my personal favorite feature from Fall Creators Update. <BR /> <BR /> For a little bit of background, I mostly use virtual machines to build/run demos and to sandbox simple experiments.&nbsp; At least once a month, I accidentally mess up my virtual machine.&nbsp; Sometimes I remember to make a checkpoint and I can roll back to a good state.&nbsp; Most of the time I don’t.&nbsp; Before automatic checkpoints, I’d have to choose between rebuilding my virtual machine or manually undoing my mistake. <BR /> <BR /> Starting in Fall Creators Update, Hyper-V creates a checkpoint when you start virtual machines.&nbsp; Say you’re learning about Linux and accidentally `rm -rf /*` or update your guest and discover a breaking change; now you can simply revert to when the virtual machine started. <BR /> <BR /> <IMG src="" /> <BR /> <BR /> Automatic checkpoints are enabled by default on Windows 10 and disabled by default on Windows Server.&nbsp; They are not useful for everyone.&nbsp; For people with automation or for those of you worried about the overhead of making a checkpoint, you can disable automatic checkpoints with PowerShell (Set-VM -Name VMwithAutomation -AutomaticCheckpointsEnabled $false) or in VM settings under “Checkpoints”. <BR /> <BR /> Here’s a <A href="#" target="_blank"> link </A> to the original announcement with more information. <BR /> <H2> Battery pass-through </H2> <BR /> Virtual machines in Fall Creators Update are aware of the host’s battery state.
<BR /> <BR /> <IMG src="" /> This is nice for a few reasons: <BR /> <OL> <BR /> <LI> You can see how much battery life you have left in a full-screen virtual machine. </LI> <BR /> <LI> The guest operating system knows the battery state and can optimize for low-power situations. </LI> <BR /> </OL> <BR /> <H2> Easier virtual machine sharing </H2> <BR /> Sharing your Hyper-V virtual machines is easier with the new “Share” button. “Share” packages and compresses your virtual machine so you can move it to another Hyper-V host right from Virtual Machine Connection. <BR /> <BR /> <IMG src="" /> <BR /> <BR /> Share creates a “.vmcz” file with your virtual hard drive (.vhd/.vhdx) and any state the virtual machine will need to run.&nbsp; “Share” will not include checkpoints. If you would like to also export your checkpoints, you can use the “Export” tool, or the “Export-VM” PowerShell cmdlet. <BR /> <BR /> <IMG src="" /> <BR /> <BR /> Once you’ve moved your virtual machine to another computer with Hyper-V, double-click the “.vmcz” file and the virtual machine will import automatically. <BR /> <BR /> ---- <BR /> <BR /> That’s the list!&nbsp; As always, please send us feedback via Feedback Hub. <BR /> <BR /> Curious what we’re building next? <A href="#" target="_blank"> Become a Windows Insider </A> – almost everything here has benefited from your early feedback. <BR /> <BR /> Cheers, <BR /> Sarah </BODY></HTML> Fri, 22 Mar 2019 00:13:48 GMT scooley 2019-03-22T00:13:48Z Create your custom Quick Create VM gallery <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Nov 08, 2017 </STRONG> <BR /> Have you ever wondered whether it is possible to add your own custom images to the list of available VMs for <A href="#" target="_blank"> Quick Create </A> ? <BR /> <BR /> The answer is: Yes, you can!
<BR /> <BR /> Since quite a few people have been asking us, this post will give you a quick example to get started and add your own custom image while we're working on the official documentation. The following two steps will be described in this blog post: <BR /> <OL> <BR /> <LI> Create a JSON document describing your image </LI> <BR /> <LI> Add this JSON document to the list of galleries to include </LI> <BR /> </OL> <BR /> <BR /> <IMG src="" /> <BR /> <BR /> <H3> Step 1: Create a JSON document describing your image </H3> <BR /> <BR /> The first thing you will need is a JSON document which describes the image you want to have showing up in Quick Create. The following snippet is a sample JSON document which you can adapt to your own needs. We will publish more documentation on this, including a JSON schema to run validation, as soon as it is ready. <BR /> <BR /> <BR /> <BR /> To calculate the SHA256 hashes for the linked files, you can use different tools. Since it is already available on Windows 10 machines, I like to use a quick PowerShell call: <CODE> Get-FileHash -Path .\contoso_logo.png -Algorithm SHA256 </CODE> <BR /> The values for <CODE> logo </CODE> , <CODE> symbol </CODE> , and <CODE> thumbnail </CODE> are optional, so if there are no images at hand, you can just remove these values from the JSON document. <BR /> <BR /> <H3> Step 2: Add this JSON document to the list of galleries to include </H3> <BR /> <BR /> To have your custom gallery image show up on a Windows 10 client, you need to set the <CODE> GalleryLocations </CODE> registry value under <CODE> HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization </CODE> . <BR /> There are multiple ways to achieve this; you can adapt the following PowerShell snippet to set the value: <BR /> <BR /> <BR /> <BR /> If you don't want to include the official Windows 10 developer evaluation images, just remove the fwlink from the GalleryLocations value.
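As one way to set that value, here's a PowerShell sketch.&nbsp; The gallery URL is a placeholder for wherever you host your own JSON document, and writing to HKLM requires an elevated prompt:

```powershell
# Registry path from the post; the URL below is a placeholder.
$regPath = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization'

# GalleryLocations is a multi-string value. Include Microsoft's fwlink entry in
# the array as well if you still want the official evaluation images listed.
Set-ItemProperty -Path $regPath -Name 'GalleryLocations' `
    -Type MultiString -Value @('https://contoso.example/gallery.json')
```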
<BR /> <BR /> Have fun creating your own VM galleries and stay tuned for our official documentation. We're looking forward to seeing what you create! <BR /> <BR /> Lars <BR /> <BR /> Update: The official documentation is now live. For more detail on the gallery functionality and how to create your own gallery, see <A href="#" target="_blank"> Create a custom virtual machine gallery </A> </BODY></HTML> Fri, 22 Mar 2019 00:12:27 GMT Lars Iwer 2019-03-22T00:12:27Z A great way to collect logs for troubleshooting <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Oct 27, 2017 </STRONG> <BR /> Did you ever have to troubleshoot issues within a Hyper-V cluster or standalone environment and find yourself switching between different event logs? Or did you repro something just to find out not all of the important Windows event channels had been activated? <BR /> <BR /> To make it easier to collect the right set of event logs into a single .evtx file to help with troubleshooting, we have published a <A href="#" target="_blank"> HyperVLogs PowerShell module </A> on GitHub. <BR /> <BR /> In this blog post, I am sharing with you how to get the module and how to gather event logs using the functions provided. <BR /> <BR /> <H3> Step 1: Download and import the PowerShell module </H3> <BR /> First of all, you need to download the PowerShell module and import it. <BR /> <BR /> <BR /> <H3> Step 2: Reproduce the issue and capture logs </H3> <BR /> Now, you can use the functions provided as part of the module to collect logs for different situations. <BR /> For example, to investigate an issue on a single node, you can collect events with the following steps: <BR /> <BR /> <BR /> Using this module and its functions made it a lot easier for me to collect the right event data to help with investigations. Any feedback or suggestions are highly welcome.
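For Step 1, a minimal sketch, assuming you've downloaded the repository to a local folder (both the path and the manifest file name below are illustrative rather than taken from the repository):

```powershell
# Illustrative path; adjust to wherever you placed the downloaded module.
Import-Module 'C:\Tools\HyperVLogs\HyperVLogs.psd1'

# List the collection functions the module actually provides.
Get-Command -Module HyperVLogs
```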
<BR /> <BR /> Cheers, <BR /> Lars </BODY></HTML> Fri, 22 Mar 2019 00:12:12 GMT Lars Iwer 2019-03-22T00:12:12Z Copying Files into a Hyper-V VM with Vagrant <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jul 18, 2017 </STRONG> <BR /> <DIV> <BR /> <BR /> A couple of weeks ago, I published a <A href="#" target="_blank"> blog </A> with tips and tricks for getting started with Vagrant on Hyper-V. My fifth tip was to "Enable Nifty Hyper-V Features," where I briefly mentioned stuff like differencing disks and virtualization extensions. <BR /> <BR /> While those are useful, I realized later that I should have added one more feature to my list of examples: the "guest_service_interface" field in "vm_integration_services." It's hard to know what that means just from the name, so I usually call it "the thing that lets me copy files into a VM." <BR /> <BR /> Disclaimer: this is <EM> not </EM> a replacement for <A href="#" target="_blank"> Vagrant's synced folders </A> . Those are super convenient, and should really be your default solution for sharing files. This method is more useful in one-off situations. <BR /> <H2> Enabling Copy-VMFile </H2> <BR /> Enabling this functionality requires a simple change to your Vagrantfile. You need to set "guest_service_interface" to true within the "vm_integration_services" configuration hash.
Here's what my Vagrantfile looks like for CentOS 7: <BR /> <DIV> <BR /> # -*- mode: ruby -*- <BR /> # vi: set ft=ruby : <BR /> <BR /> Vagrant.configure("2") do |config| <BR /> config.vm.box = "centos/7" <BR /> config.vm.provider "hyperv" <BR /> config.vm.network "public_network" <BR /> config.vm.synced_folder ".", "/vagrant", disabled: true <BR /> config.vm.provider "hyperv" do |h| <BR /> h.enable_virtualization_extensions = true <BR /> h.differencing_disk = true <BR /> h.vm_integration_services = { <BR /> guest_service_interface: true #&lt;---------- this line enables Copy-VMFile <BR /> } <BR /> end <BR /> end <BR /> </DIV> <BR /> You can check that it's enabled by running <CODE> Get-VMIntegrationService </CODE> in PowerShell on the host machine: <BR /> <CODE> PS C:\vagrant_selfhost\centos&gt; Get-VMIntegrationService -VMName "centos-7-1-1.x86_64" <BR /> <BR /> VMName Name Enabled PrimaryStatusDescription SecondaryStatusDescription <BR /> ------ ---- ------- ------------------------ -------------------------- <BR /> centos-7-1-1.x86_64 Guest Service Interface True OK <BR /> centos-7-1-1.x86_64 Heartbeat True OK <BR /> centos-7-1-1.x86_64 Key-Value Pair Exchange True OK The protocol version of... <BR /> centos-7-1-1.x86_64 Shutdown True OK <BR /> centos-7-1-1.x86_64 Time Synchronization True OK The protocol version of... <BR /> centos-7-1-1.x86_64 VSS True OK The protocol version of... <BR /> </CODE> <BR /> <EM> Note </EM> : not all integration services work on all guest operating systems. For example, this functionality will not work on the "Precise" Ubuntu image that's used in Vagrant's "Getting Started" guide. The full compatibility list for various Windows and Linux distributions can be found <A href="#" target="_blank"> here </A> . Just click on your chosen distribution and check for "File copy from host to guest."
<BR /> <H2> Using Copy-VMFile </H2> <BR /> Once you've got a VM set up correctly, copying files into it at arbitrary locations is as simple as running <CODE> Copy-VMFile </CODE> in PowerShell. <BR /> <BR /> Here's a sample test I used to verify it was working on my CentOS VM: <BR /> <CODE> Copy-VMFile -Name 'centos-7-1-1.x86_64' -SourcePath '.\Foo.txt' -DestinationPath '/tmp' -FileSource Host <BR /> </CODE> <BR /> Full details can be found in the <A href="#" target="_blank"> official documentation </A> . Unfortunately, you can't yet use it to copy files from your VM to your host. If you're running a Windows guest, you can use <CODE> Copy-Item </CODE> with PowerShell Direct to make that work; see <A href="#" target="_blank"> this document </A> for more details. <BR /> <H2> How Does It Work? </H2> <BR /> The way this works is by running Hyper-V integration services within the guest operating system. Full details can be found in the <A href="#" target="_blank"> official documentation </A> . The short version is that integration services are Windows services (on Windows) or daemons (on Linux) that allow the guest operating system to communicate with the host. In this particular instance, the integration service allows us to copy files to the VM over the VM Bus (no network required!). <BR /> <H2> Conclusion </H2> <BR /> Hope you find this helpful -- let me know if there's anything you think I missed. <BR /> <BR /> John Slack <BR /> Program Manager <BR /> Hyper-V Team <BR /> <BR /> </DIV> </BODY></HTML> Fri, 22 Mar 2019 00:11:28 GMT Virtualization-Team 2019-03-22T00:11:28Z Hyper-V virtual machine gallery and networking improvements <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jul 26, 2017 </STRONG> <BR /> In January, <A href="#" title="Quick Create" target="_blank"> we added Quick Create </A> to Hyper-V Manager in Windows 10.&nbsp; Quick Create is a single-page wizard for fast, easy, virtual machine creation.
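Outside of Vagrant, the same switch can be flipped directly with Hyper-V's PowerShell module.&nbsp; A quick sketch using the VM name from the example above:

```powershell
# Turn on the file-copy channel for the VM.
Enable-VMIntegrationService -VMName 'centos-7-1-1.x86_64' -Name 'Guest Service Interface'

# Confirm it is now enabled.
Get-VMIntegrationService -VMName 'centos-7-1-1.x86_64' -Name 'Guest Service Interface'
```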
<BR /> <BR /> Starting in the latest fast-track Windows Insider builds (16237+), we’re expanding on that idea in two ways.&nbsp; Quick Create now includes: <BR /> <OL> <BR /> <LI> A virtual machine gallery with downloadable, pre-configured, virtual machines. </LI> <BR /> <LI> A default virtual switch to allow virtual machines to share the host’s internet connection using NAT. </LI> <BR /> </OL> <BR /> <IMG src="" /> <BR /> <BR /> To launch Quick Create, open Hyper-V Manager and click on the “Quick Create…” button (1). <BR /> <BR /> From there you can either create a virtual machine from one of the pre-built images available from Microsoft (2) or use a local installation source.&nbsp; Once you’ve selected an image or chosen installation media, you’re done!&nbsp; The virtual machine comes with a default name and a pre-made network connection using NAT (3), which can be modified in the “more options” menu. <BR /> <BR /> Click “Create Virtual Machine” and you’re ready to go – granted, downloading the virtual machine will take a while. <BR /> <H3> Details about the Default Switch </H3> <BR /> The switch, named “Default Switch” or “Layered_ICS”, allows virtual machines to share the host’s network connection.&nbsp; Without getting too deep into networking (saving that for a different post), this switch has a few unique attributes compared to other Hyper-V switches: <BR /> <OL> <BR /> <LI> Virtual machines connected to it will have access to the host’s network whether you’re connected to WIFI, a dock, or Ethernet. </LI> <BR /> <LI> It’s available as soon as you enable Hyper-V – you won’t lose internet setting it up. </LI> <BR /> <LI> You can’t delete it. </LI> <BR /> <LI> It has the same name and device ID (GUID c08cb7b8-9b3c-408e-8e30-5e16a3aeb444) on all Windows 10 hosts, so virtual machines on recent builds can assume the same switch is present on every Windows 10 Hyper-V host.
</LI> <BR /> </OL> <BR /> I’m really excited by the work we are doing in this area.&nbsp; These improvements make Hyper-V a better tool for people running virtual machines on a laptop.&nbsp; They don’t, however, replace existing Hyper-V tools.&nbsp; If you need to define specific virtual machine settings, New-VM or the New Virtual Machine wizard are&nbsp;the right tools.&nbsp; For people with custom networks or complicated virtual network needs, continue using Virtual Switch Manager. <BR /> <BR /> Also keep in mind that all of this is a work in progress.&nbsp; There are rough edges for the default switch right now and there aren't many images in the gallery.&nbsp; Please give us feedback! &nbsp;Let us know what images you would like to see and share issues by commenting on this blog or submitting feedback through Feedback Hub. <BR /> <BR /> Cheers, <BR /> Sarah </BODY></HTML> Fri, 22 Mar 2019 00:11:22 GMT scooley 2019-03-22T00:11:22Z Vagrant and Hyper-V -- Tips and Tricks <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jul 06, 2017 </STRONG> <BR /> <DIV> <BR /> <H2> Learning to Use Vagrant on Windows 10 </H2> <BR /> A few months ago, I went to <A href="#" target="_blank"> DockerCon </A> as a Microsoft representative. While I was there, I had the chance to ask developers about their favorite tools. <BR /> <BR /> The most common tool mentioned (outside of Docker itself) was <A href="#" target="_blank"> Vagrant </A> . This was interesting -- I was familiar with Vagrant, but I'd never actually used it. I decided that needed to change. Over the past week or two, I took some time to try it out. I got everything working eventually, but I definitely ran into some issues along the way. <BR /> <BR /> My pain is your gain -- here are my tips and tricks for getting started with Vagrant on Windows 10 and Hyper-V.
<BR /> <BR /> <STRONG> NOTE: This is a supplement for Vagrant's " <A href="#" target="_blank"> Getting Started </A> " guide, not a replacement. </STRONG> <BR /> <H2> Tip 0: Install Hyper-V </H2> <BR /> For those new to Hyper-V, make sure you've got Hyper-V running on your machine. Our <A href="#" target="_blank"> official docs </A> list the exact steps and requirements. <BR /> <H2> Tip 1: Set Up Networking Correctly </H2> <BR /> Vagrant doesn't know how to set up networking on Hyper-V right now (unlike other providers), so it's up to you to get things working the way you like them. <BR /> <BR /> There are&nbsp;a few NAT networks&nbsp;already created on Windows 10 (depending on your specific build). &nbsp;Layered_ICS should work (but is under active development), while Layered_NAT <A href="#" target="_blank"> doesn't have DHCP </A> . &nbsp;If you're a Windows Insider, you can try Layered_ICS. &nbsp;If that doesn't work, the safest option is to create an external switch via Hyper-V Manager. &nbsp;This is the approach I took. If you go this route, a friendly reminder that the external switch is tied to a specific network adapter. So if you make it for WiFi, it won't work when you hook up the Ethernet, and vice versa. <BR /> <BR /> <IMG src="" /> <BR /> Instructions for adding an external switch in Hyper-V Manager <BR /> <H2> Tip 2: Use the Hyper-V Provider </H2> <BR /> Unfortunately, the <A href="#" target="_blank"> Getting Started </A> guide uses VirtualBox, and you can't run other virtualization solutions alongside Hyper-V. You need to change the " <A href="#" target="_blank"> provider </A> " Vagrant uses at a few different points. <BR /> <BR /> When you install your first box, add --provider : <BR /> <CODE> vagrant box add hashicorp/precise64 --provider hyperv <BR /> </CODE> <BR /> And when you boot your first Vagrant environment, again, add --provider.
Note: you might run into the error mentioned in Trick 4, so skip to there if you see something like "mount error(112): Host is down". <BR /> <CODE> vagrant up --provider hyperv <BR /> </CODE> <BR /> <H2> Tip 3: Add the basics to your Vagrantfile </H2> <BR /> Adding the provider flag is a pain to do every single time you run <CODE> vagrant up </CODE> . Fortunately, you can set up your Vagrantfile to automate things for you. After running <CODE> vagrant init </CODE> , modify your Vagrantfile with the following: <BR /> <DIV> <BR /> Vagrant.configure(2) do |config| <BR /> config.vm.box = "hashicorp/precise64" <BR /> config.vm.provider "hyperv" <BR /> config.vm.network "public_network" <BR /> end <BR /> </DIV> <BR /> One additional trick here: <CODE> vagrant init </CODE> will create a file that will appear to be full of commented-out items. However, there is one line not commented out: <BR /> <BR /> <IMG src="" /> <BR /> There is one line not commented. <BR /> <BR /> Make sure you delete that line! Otherwise, you'll end up with an error like this: <BR /> <CODE> Bringing machine 'default' up with 'hyperv' provider... <BR /> ==&gt; default: Verifying Hyper-V is enabled... <BR /> ==&gt; default: Box 'base' could not be found. Attempting to find and install... <BR /> default: Box Provider: hyperv <BR /> default: Box Version: &gt;= 0 <BR /> ==&gt; default: Box file was not detected as metadata. Adding it directly... <BR /> ==&gt; default: Adding box 'base' (v0) for provider: hyperv <BR /> default: Downloading: base <BR /> default: <BR /> An error occurred while downloading the remote file. The error <BR /> message, if any, is reproduced below. Please fix this error and try <BR /> again. <BR /> </CODE> <BR /> <H2> Trick 4: Shared folders use SMBv1 for hashicorp/precise64 </H2> <BR /> For the image used in the "Getting Started" guide (hashicorp/precise64), Vagrant tries to use SMBv1 for shared folders.
However, if you're like me and have <A href="#" target="_blank"> SMBv1 disabled </A> , this will fail: <BR /> <CODE> Failed to mount folders in Linux guest. This is usually because <BR /> the "vboxsf" file system is not available. Please verify that <BR /> the guest additions are properly installed in the guest and <BR /> can work properly. The command attempted was: <BR /> <BR /> mount -t cifs -o uid=1000,gid=1000,sec=ntlm,credentials=/etc/smb_creds_e70609f244a9ad09df0e760d1859e431 // /vagrant <BR /> <BR /> The error output from the last command was: <BR /> <BR /> mount error(112): Host is down <BR /> Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) <BR /> </CODE> <BR /> You can check if SMBv1 is enabled with this PowerShell Cmdlet: <BR /> <CODE> Get-SmbServerConfiguration <BR /> </CODE> <BR /> If you can live without synced folders, here's the line to add to the vagrantfile to disable the default synced folder. <BR /> <DIV> <BR /> config.vm.synced_folder ".", "/vagrant", disabled: true <BR /> </DIV> <BR /> If you can't, you can try installing cifs-utils in the VM and re-provision. You could also try <A href="#" target="_blank"> another synced folder method </A> . For example, rsync works with Cygwin or MinGW. Disclaimer: I personally didn't try either of these methods. <BR /> <H2> Tip 5: Enable Nifty Hyper-V Features </H2> <BR /> Hyper-V has some useful features that improve the Vagrant experience. For example, a pretty substantial portion of the time spent running <CODE> vagrant up </CODE> is spent cloning the virtual hard drive. A faster way is to use differencing disks with Hyper-V. You can also turn on virtualization extensions, which allow nested virtualization within the VM (i.e. Docker with Hyper-V containers). 
Here are the lines to add to your Vagrantfile to add these features: <BR /> <DIV> <BR /> config.vm.provider "hyperv" do |h| <BR /> h.enable_virtualization_extensions = true <BR /> h.differencing_disk = true <BR /> end <BR /> </DIV> <BR /> There are many more customization options that can be added here (e.g. VMName, CPU/memory settings, integration services). You can find the details in the <A href="#" target="_blank"> Hyper-V provider documentation </A> . <BR /> <H2> Tip 6: Filter for Hyper-V compatible boxes on Vagrant Cloud </H2> <BR /> You can find more boxes to use in the Vagrant Cloud (formerly called Atlas). They let you filter by provider, so it's easy to find all of the <A href="#" target="_blank"> Hyper-V compatible boxes </A> . <BR /> <H2> Tip 7: Default to the Hyper-V Provider </H2> <BR /> While adding the default provider to your Vagrantfile is useful, it means you need to remember to do it with each new Vagrantfile you create. If you don't, Vagrant will try to download VirtualBox when you <CODE> vagrant up </CODE> the first time for your new box. Again, VirtualBox doesn't work alongside Hyper-V, so this is a problem. <BR /> <CODE> PS C:\vagrant&gt; vagrant up <BR /> ==&gt; Provider 'virtualbox' not found. We'll automatically install it now... <BR /> The installation process will start below. Human interaction may be <BR /> required at some points. If you're uncomfortable with automatically <BR /> installing this provider, you can safely Ctrl-C this process and install <BR /> it manually. <BR /> ==&gt; Downloading VirtualBox 5.0.10... <BR /> This may not be the latest version of VirtualBox, but it is a version <BR /> that is known to work well. Over time, we'll update the version that <BR /> is installed. <BR /> </CODE> <BR /> You can set your default provider on a user level by using the VAGRANT_DEFAULT_PROVIDER environment variable.
For more options (and details), <A href="#" target="_blank"> this </A> is the relevant page of Vagrant's documentation. <BR /> <BR /> Here's how I set the user-level environment variable in PowerShell: <BR /> <CODE> [Environment]::SetEnvironmentVariable("VAGRANT_DEFAULT_PROVIDER", "hyperv", "User") <BR /> </CODE> <BR /> Again, you can also set the default provider in the Vagrantfile (see Tip 3), which will prevent this issue on a per-project basis. You can also just add <CODE> --provider hyperv </CODE> when running <CODE> vagrant up </CODE> . The choice is yours. <BR /> <H2> Wrapping Up </H2> <BR /> Those are my tips and tricks for getting started with Vagrant on Hyper-V. If there are any you think I missed, or anything you think I got wrong, let me know in the comments. <BR /> <BR /> Here's the complete version of my simple starting Vagrantfile: <BR /> <DIV> <BR /> # -*- mode: ruby -*- <BR /> # vi: set ft=ruby : <BR /> <BR /> # All Vagrant configuration is done below. The "2" in Vagrant.configure <BR /> # configures the configuration version (we support older styles for <BR /> # backwards compatibility). Please don't change it unless you know what <BR /> # you're doing. <BR /> Vagrant.configure("2") do |config| <BR /> config.vm.box = "hashicorp/precise64" <BR /> config.vm.provider "hyperv" <BR /> config.vm.network "public_network" <BR /> config.vm.synced_folder ".", "/vagrant", disabled: true <BR /> config.vm.provider "hyperv" do |h| <BR /> h.enable_virtualization_extensions = true <BR /> h.differencing_disk = true <BR /> end <BR /> end <BR /> </DIV> <BR /> </DIV> </BODY></HTML> Fri, 22 Mar 2019 00:11:01 GMT Virtualization-Team 2019-03-22T00:11:01Z Making it easier to revert <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Apr 20, 2017 </STRONG> <BR /> Sometimes when things go wrong in my&nbsp;environment, I&nbsp;don't want to have&nbsp;to clean it all up -- I&nbsp;just want to go back in time to when everything&nbsp;was working.
But remembering to maintain&nbsp;good recovery points isn't easy. <BR /> <BR /> Now we're making it so that you can always roll back&nbsp;your virtual machine to a recent good state if you need to.&nbsp;Starting in the latest Windows Insider build,&nbsp;you can now always&nbsp;revert a virtual machine back to the state it started in. <BR /> <BR /> In Virtual&nbsp;Machine Connection, just click the Revert button to undo any changes made inside the virtual machine since it last started. <BR /> <BR /> <IMG src="" /> <BR /> <BR /> Under the hood, we're using checkpoints; when you start a virtual machine that doesn't have any checkpoints, we create one for you so that you can easily roll back to it if something goes wrong, then we clean it up once the virtual machine shuts down cleanly. <BR /> <BR /> New virtual machines will be created&nbsp;with "Use automatic checkpoints"&nbsp;enabled by default, but you will have to enable it yourself to use it for&nbsp;existing VMs. The option is off by default on Windows Server.&nbsp;&nbsp;This option can be found in Settings -&gt; Checkpoints -&gt; "Use automatic checkpoints" <BR /> <BR /> <IMG src="" /> <BR /> <BR /> Note: the checkpoint will only be taken automatically when the VM starts if it&nbsp;doesn't have other existing checkpoints. <BR /> <BR /> Hopefully this&nbsp;will come in handy&nbsp;next&nbsp;time you need to undo something in your VM. If you are in the Windows Insider Program, please give it a try and let us know&nbsp;what you think. <BR /> <BR /> Cheers, <BR /> Andy </BODY></HTML> Fri, 22 Mar 2019 00:10:29 GMT Virtualization-Team 2019-03-22T00:10:29Z What's new in Hyper-V for the Windows 10 Creators Update? <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Apr 13, 2017 </STRONG> <BR /> Microsoft just released the <A href="#" target="_blank"> Windows 10 Creators Update </A> .&nbsp; Which means Hyper-V improvements! 
<BR /> <BR /> New and improved features in Creators Update: <BR /> <UL> <BR /> <LI> Quick Create </LI> <BR /> <LI> Checkpoint and Save for nested Hyper-V </LI> <BR /> <LI> Dynamic resize for VM Connect </LI> <BR /> <LI> Zoom for VM Connect </LI> <BR /> <LI> Networking improvements (NAT) </LI> <BR /> <LI> Developer-centric memory management </LI> <BR /> </UL> <BR /> Keep reading for more details.&nbsp; Also, if you want to try new Hyper-V things as we build them, become a <A href="#" target="_blank"> Windows Insider </A> . <BR /> <H3> Faster VM creation with Quick Create </H3> <BR /> <IMG src="" /> <BR /> <BR /> Hyper-V Manager has a new option for quickly and easily creating virtual machines, aptly named “Quick Create”. <A href="#" target="_blank"> Introduced in build 15002 </A> , Quick Create focuses on getting the guest operating system up and running as quickly as possible -- including creating and connecting to a virtual switch. <BR /> <BR /> When we first released Quick Create, there were a number of issues, mostly centered on our default virtual machine settings ( <A href="#" target="_blank"> read more </A> ).&nbsp; In response to your feedback, we have updated the Quick Create defaults. <BR /> <BR /> Creators Update Quick Create defaults: <BR /> <UL> <BR /> <LI> Generation: 2 </LI> <BR /> <LI> Memory: 2048 MB to start, Dynamic Memory enabled </LI> <BR /> <LI> Virtual Processors: 4 </LI> <BR /> <LI> VHD: dynamic resize up to 100GB </LI> <BR /> </UL> <BR /> <H3> Checkpoint and save work on nested Hyper-V hosts </H3> <BR /> Last year we added the ability to run Hyper-V inside of Hyper-V (a.k.a. nested virtualization).&nbsp; This has been a very popular feature, but it initially came with a number of limitations.&nbsp; We have continued to work on the performance, compatibility and feature integration of nested virtualization. <BR /> <BR /> In the Creators Update for Windows 10, you can now take checkpoints and saved states on virtual machines that are acting as nested Hyper-V hosts.
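As a reminder, a VM only becomes a nested Hyper-V host after virtualization extensions are exposed to it.&nbsp; A minimal PowerShell sketch (the VM name is illustrative, and the VM must be off):

```powershell
# Expose hardware virtualization extensions to the guest (run while the VM is off).
Set-VMProcessor -VMName 'NestedHost' -ExposeVirtualizationExtensions $true

# Verify the setting took effect.
(Get-VMProcessor -VMName 'NestedHost').ExposeVirtualizationExtensions
```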
<BR /> <H3> Dynamic resize for Enhanced Session Mode VMs </H3> <BR /> <IMG src="" /> <BR /> <BR /> The picture says it all.&nbsp; If you are using Hyper-V’s Enhanced Session Mode, you can dynamically resize your virtual machine.&nbsp; Right now, this is only available to virtual machines that support Hyper-V’s Enhanced Session Mode.&nbsp; That includes: <BR /> <UL> <BR /> <LI> Windows Client: Windows 8.1, Windows 10 and later </LI> <BR /> <LI> Windows Server: Windows Server 2012 R2, Windows Server 2016 and later </LI> <BR /> </UL> <BR /> Read the <A href="#" target="_blank"> blog </A> announcement. <BR /> <H3> Zoom for VM Connect </H3> <BR /> Is your virtual machine impossible to read?&nbsp; Alternately, do you suffer from scaling issues in legacy applications? <BR /> <BR /> <B> VMConnect </B> now has the option to adjust <B> Zoom Level </B> under the <B> View </B> menu. <BR /> <BR /> <IMG src="" /> <BR /> <H3> Multiple NAT networks and IP pinning </H3> <BR /> NAT networking is vital to both Docker and Visual Studio’s UWP device emulators.&nbsp; When we released Windows Containers, developers discovered a number of networking differences between containers on Linux and containers on Windows.&nbsp; Additionally, introducing another common developer tool that uses NAT networking presented new challenges for our networking stack. <BR /> <BR /> In the Creators Update, there are two significant improvements to NAT: <BR /> <OL> <BR /> <LI> Developers can now use multiple NAT networks (internal prefixes) on a single host. <BR /> That means VMs, containers, emulators, et al. can all take advantage of NAT functionality from a single host. </LI> <BR /> <LI> Developers are also able to build and test their applications with industry-standard tooling directly from the container host using an overlay network driver (provided by the Virtual Filtering Platform (VFP) Hyper-V switch extension), as well as having direct access to the container using the host IP and exposed port.
</LI> <BR /> </OL> <BR /> <H3> Improved memory management </H3> <BR /> Until recently, Hyper-V has allocated memory very conservatively.&nbsp; While that is the right behavior for Windows Server, UWP developers faced out-of-memory errors starting device emulators from Visual Studio ( <A href="#" target="_blank"> read more </A> ). <BR /> <BR /> In the Creators Update, Hyper-V gives the operating system a chance to trim memory from other applications and uses all available memory.&nbsp; You may still run out of memory, but now the amount of memory shown in Task Manager accurately reflects the amount available for starting virtual machines. <BR /> <BR /> Introduced in <A href="#" target="_blank"> build 15002 </A> . <BR /> <BR /> As always, please send us feedback! <BR /> <BR /> Once more, because I can’t emphasize this enough, <A href="#" target="_blank"> become a Windows Insider </A> – almost everything here has benefited from your early feedback. <BR /> <BR /> Cheers, <BR /> Sarah </BODY></HTML> Fri, 22 Mar 2019 00:09:58 GMT scooley 2019-03-22T00:09:58Z Linux Integration Services 4.1.3-2 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Mar 10, 2017 </STRONG> <BR /> Linux Integration Services has been updated to version 4.1.3-2 and is available from <A href="#" target="_blank"> </A> <BR /> <BR /> This is a minor update to correct the RPMs for a kernel ABI change in Red Hat Enterprise Linux, CentOS, and Oracle Linux's Red Hat Compatible Kernel version 7.3. Version 3.10.0-514.10.2.el7 of the kernel was sufficiently different for symbol conflicts to break the LIS kernel modules and create a situation where a VM would not start correctly. This version of the modules is compatible with the new kernel. </BODY></HTML> Fri, 22 Mar 2019 00:07:42 GMT Joshua Poulson 2019-03-22T00:07:42Z How to give us feedback <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Feb 10, 2017 </STRONG> <BR /> We love hearing from you.
&nbsp;So what's the best way to give us feedback? <BR /> <BR /> The best way to report an issue or give a quick suggestion is the Feedback Hub on Windows 10 (Windows key + F to open it quickly). The Feedback Hub lets the product team see all of your feedback in one place, and allows other users to upvote and provide further comments. It's also tightly integrated with our bug tracking and engineering processes, so that we can keep an eye on what users are saying and use this data to help prioritize fixes and feature requests, and so that you can follow up and see what we're doing about it. <BR /> <BR /> In the latest build, we have reintroduced the Hyper-V feedback category. <BR /> <BR /> After typing your feedback, selecting "Show category suggestions" should help you find the Hyper-V category under Apps and Games. It looks like a couple of people have already discovered the new category: <BR /> <BR /> <BR /> <BR /> <IMG src="" /> <BR /> <BR /> When you put your feedback in the Hyper-V category, we are also able to collect relevant event logs to help diagnose issues. To provide more information about a problem that you can reproduce, hit "Begin monitoring", reproduce the issue, and then hit "Stop monitoring". This allows us to collect relevant diagnostic information to help reproduce and fix the problem. <BR /> <BR /> <IMG src="" /> <BR /> <BR /> We also love to hear from you in our forums if there are any issues you are running into. This is a good place to get direct help from the product group as well as community members: <A href="#" target="_blank"> Hyper-V Forums </A> <BR /> <BR /> <IMG src="" /> <BR /> <BR /> That's all for now. Looking forward to&nbsp;seeing your feedback!
<BR /> <BR /> Cheers, <BR /> Andy </BODY></HTML> Fri, 22 Mar 2019 00:07:08 GMT Virtualization-Team 2019-03-22T00:07:08Z Fun fact: Quick Create handles emoji in virtual machine names and splices them into simple Unicode <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Feb 03, 2017 </STRONG> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> I was playing with Windows 10’s on screen keyboard and discovered the emoticons section. &nbsp;Specifically, I found&nbsp;this awesome set of cat emojis. <BR /> <BR /> <IMG src="" /> <BR /> <BR /> WindowsKitty definitely needed to be a VM Name.&nbsp; It even has a laptop!&nbsp; Luckily, it turns out, the Quick Create option we&nbsp;added recently handles emoji beautifully. <BR /> <BR /> <IMG src="" /> <BR /> <BR /> Not only does the VM name look great in Quick Create with the&nbsp;crazy Windows 10 emoji, Windows also splices them into simpler Unicode representations for Hyper-V Manager and the file system.&nbsp; I was really enjoying seeing what the simplified Unicode would be – in this case, cat + computer. <BR /> <BR /> <IMG src="" /> <BR /> <BR /> Which begs the question, how do emoji VM names look in PowerShell? <BR /> <BR /> <IMG src="" /> <BR /> <BR /> Unfortunately, not so good – maybe someday. <BR /> <BR /> In conclusion, if you don’t need PowerShell scripting (or love referencing VMs via GUID) maybe emoji names are for you.&nbsp; It makes me smile, at least. <BR /> <BR /> For further reading, checkout <A href="#" target="_blank"> this blog post </A> about how Windows 10 rethinks how we treat emoji. <BR /> <BR /> Have fun! <BR /> <BR /> Sarah </BODY></HTML> Fri, 22 Mar 2019 00:06:36 GMT scooley 2019-03-22T00:06:36Z Editing VMConnect session settings <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Feb 02, 2017 </STRONG> <BR /> When you connect to a VM with Virtual Machine Connection in enhanced session mode, you're prompted to choose some settings for display and local resources. 
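As an aside, VMConnect is an ordinary executable (vmconnect.exe), so the connection dialog can also be launched directly from PowerShell or a command prompt; a hypothetical invocation, with placeholder host and VM names:

```powershell
# Launch VMConnect against a VM on the local host. The /edit flag
# (covered later in this post) opens the saved session settings
# instead of starting a connection.
vmconnect.exe localhost 'MyVM'
vmconnect.exe localhost 'MyVM' /edit
```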
<BR /> <BR /> <IMG src="" /> <BR /> <BR /> The main thing that changes between sessions&nbsp;is usually display configuration. But since you can now <A href="#" target="_blank"> resize after connecting </A> starting in the latest Insider build, you might not want to see&nbsp;this page each time you connect. You can select&nbsp; "Save my settings for future connections to this virtual machine" and you won't see this page&nbsp;for future sessions. <BR /> <BR /> <IMG src="" /> <BR /> <BR /> <BR /> <BR /> However, you might want to occasionally configure local resources like audio and devices, so there are 2 easy ways to get back to these settings: <BR /> <OL> <BR /> <LI> In Hyper-V Manager, you will see an option to " <STRONG> Edit Session Settings... </STRONG> " for any VM for which you have saved settings. <BR /> <A href="#" target="_blank"> <BR /> </A> <IMG src="" /> <BR /> </LI> <BR /> <LI> Open VMConnect from command line or Powershell, and specify the <STRONG> /edit </STRONG> flag to open the session settings. </LI> <BR /> </OL> <BR /> <IMG src="" /> <BR /> <BR /> <BR /> Cheers, <BR /> Andy </BODY></HTML> Fri, 22 Mar 2019 00:05:45 GMT Virtualization-Team 2019-03-22T00:05:45Z Live Migration via Constrained Delegation with Kerberos in Windows Server 2016 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Feb 01, 2017 </STRONG> <BR /> Introduction <BR /> Many Hyper-V customers have run into new challenges when trying to use constrained delegation with Kerberos to Live Migrate VMs in Windows Server 2016.&nbsp; When attempting to migrate, they would see errors with messages like "no credentials are available in the security package," or "the Virtual Machine Management Service failed to authenticate the connection for a Virtual Machine migration at the source host: no suitable credentials available."&nbsp; After investigating, we have determined the root cause of the issue and have updated guidance for how to configure constrained delegation. 
<BR /> Fixing This Issue <BR /> Resolving this issue is a simple configuration change in Active Directory when setting up constrained delegation.&nbsp;&nbsp;In <A href="#" target="_blank"> our documentation </A> , when you reach the fifth instruction in Step 1, select "use any authentication protocol" instead of "use Kerberos only."&nbsp; The other instructions have not changed. <BR /> <DIV> <IMG src="" /> </DIV> <BR /> Root Cause <BR /> Warning: the next two sections go a bit deep into the internal workings of Hyper-V. <BR /> The root cause of this issue is an under-the-hood change in Hyper-V remoting.&nbsp; Between Windows Server 2012 R2 and Windows Server 2016, we shifted from using the Hyper-V WMI Provider v1 over DCOM to the Hyper-V WMI Provider v2 over WinRM.&nbsp; This is a good thing: it unifies Hyper-V remoting with other Windows remoting tools (e.g. PowerShell Remoting).&nbsp; This change matters for constrained delegation because: <BR /> <OL> <BR /> <LI> WinRM runs as NETWORK SERVICE, while the Virtual Machine Management Service (VMMS) runs as SYSTEM. </LI> <BR /> <LI> The way WinRM does inbound authentication stores the nice, forwardable Kerberos ticket in a location that is unavailable to NETWORK SERVICE. </LI> <BR /> </OL> <BR /> The net result is that WinRM cannot access the forwardable Kerberos ticket, and Live Migration fails on Windows Server 2016.&nbsp; After exploring possible solutions, the best (and fastest) option here is to enable "protocol transition" by changing the constrained delegation configuration as above. <BR /> How does this impact security? <BR /> You may think this approach is less secure, but in practice, the impact is debatable. 
<BR /> <BR /> When Kerberos Constrained Delegation (KCD) is configured to “use Kerberos only,” the system performing delegation must possess a Kerberos service ticket from the delegated user as evidence that it is acting on behalf of that user.&nbsp; By switching KCD to “use any authentication protocol”, that requirement is relaxed such that a service ticket acquired via Kerberos S4U logon is acceptable.&nbsp; This means that the delegating service is able to delegate an account without direct involvement of the account owner.&nbsp; While enabling the use of any protocol — often referred to as “protocol transition” — is nominally less secure for this reason, the difference is marginal due to the fact that the disabling of protocol transition provides no security promise.&nbsp; Single-sign-on authentication between systems sharing a domain network is simply too ubiquitous to treat an inbound service ticket as proof of anything.&nbsp; With or without protocol transition, the only secure way to limit the accounts that the service is permitted to delegate is to mark those accounts with the “account is sensitive and cannot be delegated” bit. <BR /> Documentation <BR /> We're working on modifying our documentation to reflect this change. <BR /> <DIV> John Slack <BR /> Hyper-V Team PM </DIV> </BODY></HTML> Fri, 22 Mar 2019 00:05:04 GMT Virtualization-Team 2019-03-22T00:05:04Z Introducing VMConnect dynamic resize <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jan 27, 2017 </STRONG> <BR /> Starting in the latest Insider's build, you can resize the display for a session in Virtual Machine Connection just by dragging the corner of the window. 
<BR /> <BR /> <IMG src="" /> <BR /> <BR /> When&nbsp;you connect to a VM, you'll still see the normal options which determine the size of the window and the resolution to pass to the virtual machine: <BR /> <BR /> <IMG src="" /> <BR /> <BR /> Once you log in, you can see that the guest OS is using the specified resolution, in this case 1366 x 768. <BR /> <BR /> <IMG src="" /> <BR /> <BR /> Now,&nbsp;if we resize the window, the resolution in the guest OS is automatically adjusted. Neat! <BR /> <BR /> <IMG src="" /> <BR /> <BR /> Additionally, the system DPI settings are passed to the VM. If I change my scaling factor on the host, the VM display will scale as well. <BR /> <BR /> There are 2 requirements for dynamic resizing to work: <BR /> <UL> <BR /> <LI> You must be running in <STRONG> Enhanced session&nbsp;mode </STRONG> </LI> <BR /> <LI> You must be fully <STRONG> logged in </STRONG> to the guest OS (it won't work on the lockscreen) </LI> <BR /> </UL> <BR /> <BR /> <BR /> This remains a work in progress, so we would love to hear your thoughts. <BR /> <BR /> -Andy <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> </BODY></HTML> Fri, 22 Mar 2019 00:04:20 GMT Virtualization-Team 2019-03-22T00:04:20Z No more "out of memory" errors for Windows Phone emulators in Windows 10 (unless you're really out of memory) <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jan 27, 2017 </STRONG> <BR /> For those of you who run emulators in Visual Studio, you may be familiar with an annoying error: <BR /> <BR /> <IMG src="" /> <BR /> <BR /> It periodically pops up even when task manager reports enough available memory – this is especially true for machines with less than 8GB RAM.&nbsp; Most of the time, it’s because there genuinely isn’t enough memory available but sometimes it’s because of Hyper-V’s root memory reserve (discussed in <A href="#" target="_blank"> KB2911380 </A> ). 
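The "mini script" linked above no longer resolves. A hedged equivalent in PowerShell, assuming the registry location and value name (MemoryReserve, a value in megabytes under the Virtualization key) described in KB2911380:

```powershell
# Registry location for the root memory reserve override, per KB2911380.
$key = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization'

# Check whether a custom root memory reserve is configured.
$reserve = (Get-ItemProperty -Path $key -Name 'MemoryReserve' `
    -ErrorAction SilentlyContinue).MemoryReserve

if ($null -ne $reserve) {
    Write-Host "Custom root reserve is set to $reserve MB. Clearing it..."
    Remove-ItemProperty -Path $key -Name 'MemoryReserve'
    Write-Host 'Cleared. Restart the machine for the change to take effect.'
}
else {
    Write-Host 'No custom root memory reserve is set.'
}
```

Run this from an elevated PowerShell prompt, since writing under HKLM requires administrator rights.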
<BR /> <BR /> This blog will tell you what the root memory reserve is, why it exists, and why you shouldn’t need it on Windows 10 starting in build 15002 (original announcement <A href="#" target="_blank"> here </A> ).&nbsp; I also wrote a <A href="#" target="_blank"> mini script </A> to clear the registry key that controls root memory reserve if you think it may be set on your system. <BR /> <BR /> <BR /> <H3> So, What is the root memory reserve and why is it there? </H3> <BR /> Root memory reserve is the memory Hyper-V sets aside to make sure there will always be enough available for the host to run well. <BR /> <BR /> We change Hyper-V host memory management periodically based on feedback and new technology (things like dynamic memory and changes in clustering). &nbsp;The root memory reserve is only one piece of that equation and even calculating that piece has several factors. <STRONG> <STRONG> <STRONG> <STRONG> Modifying it is not supported </STRONG> </STRONG> </STRONG> </STRONG> but there is still a registry key available for times when the default isn’t appropriate for one reason or another. <BR /> <BR /> <A href="#" target="_blank"> KB2962295 </A> basically describes measuring, monitoring, and modifying the root reserve. <BR /> <BR /> <A href="#" target="_blank"> KB2911380 </A> tells you how to manually set it. <BR /> <BR /> And now I’m here to tell you to remove it! <BR /> <BR /> <BR /> <H3> </H3> <BR /> <H3> Why you don't&nbsp;need root memory reserve any more. </H3> <BR /> We stopped using a root memory reserve in favor of other memory management tools in Windows 10.&nbsp; The things that make it necessary are unique to server environments (clustering, service level agreements…). 
<BR /> <BR /> However, while the default memory management settings on server are now different from Hyper-V on Windows,&nbsp; if root reserve is set on Windows 10 Hyper-V will respect it -- you won’t see any of the memory management changes we made.&nbsp; Which is why now is the time to clear that custom root memory reserve. <BR /> <BR /> <BR /> <BR /> Cheers, <BR /> Sarah </BODY></HTML> Fri, 22 Mar 2019 00:02:32 GMT scooley 2019-03-22T00:02:32Z A closer look at VM Quick Create <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jan 20, 2017 </STRONG> <BR /> <P> <BR /> </P> <P> Author: Andy Atkinson </P> <P> In the last Insiders build, we introduced Quick Create to quickly create virtual machines with less configuration (see <A href="#" target="_blank"> blog </A> ). </P> <P> <IMG src="" /> </P> <P> We’re trying a few things to make it easier to set up a virtual machine, such as combining installation options to a single field for all supported file types, and adding a control to enable Windows Secure Boot more easily. </P> <P> <IMG src="" /> </P> <P> Quick Create can also help set up your network. If there’s no available switch, you’ll see a button to set up an “automatic network”, which will automatically configure an external switch for the virtual machine and connect it to the network. </P> <P> To simplify the number of settings, we had to pick some good default settings for the virtual machine, which are currently: </P> <UL> <LI> Generation: 2 </LI> <LI> StartupRAM: 1024 MB </LI> <LI> DynamicRAM: Enabled </LI> <LI> Virtual Processors: 1 </LI> </UL> <P> After the virtual machine is created, you will see the confirmation page with quick access to edit settings or to connect. </P> <P> <IMG src="" /> </P> <P> Are there other controls you want in Quick Create? Are we picking good defaults? </P> <P> This is still a work in progress, so let us know what you think! 
</P> <P> - Andy </P> </BODY></HTML> Fri, 22 Mar 2019 00:02:15 GMT scooley 2019-03-22T00:02:15Z Cool new things for Hyper-V on Windows 10 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jan 10, 2017 </STRONG> <BR /> Insider build 15002 is now available for Fast Ring windows insiders. In it, you’ll find a few improvements in Hyper-V for Windows 10 users: <BR /> <UL> <BR /> <LI> A new virtual machine Quick Create experience (work in progress). </LI> <BR /> <LI> More aggressive memory allocation for starting virtual machines.&nbsp; This is especially useful for anyone using emulators in Visual Studio or static memory virtual machines. </LI> <BR /> </UL> <BR /> Check it out and send feedback! <BR /> <H2> Virtual machine Quick Create </H2> <BR /> <IMG src="" /> <BR /> <BR /> Hyper-V Manager has a new single-page wizard that makes it faster and easier to create virtual machines.&nbsp; You can access it through a new “Quick Create…” button (1). <BR /> <BR /> Quick Create focuses on getting the guest operating system up and running.&nbsp; It automatically creates virtual hardware necessary to run the guest operating system (2).&nbsp; Including a virtual switch!&nbsp; Since many desktop users see internet in the virtual machine as essential, we added the option to create an external switch (3) directly to the new virtual machine experience. <BR /> <BR /> <STRONG> Quick Create is still under active development – try it out and please leave feedback! </STRONG> <BR /> <H2> Changes in memory allocation </H2> <BR /> Starting in build 15002, we changed how Hyper-V on Windows 10 allocates memory for starting virtual machines. 
<BR /> <BR /> In the past, when you started a virtual machine, Hyper-V allocated memory very conservatively.&nbsp; As an example, we maintained reserved memory for the Hyper-V host (root memory reserve) so even if task manager showed 2 GB available memory, Hyper-V wouldn’t use all of it for starting virtual machines.&nbsp; Hyper-V also wouldn’t ask for applications to release unused memory (trim).&nbsp; Conservative memory allocation makes sense in a hosting environment where very few applications run on the Hyper-V host and the ones that do are high priority – it doesn’t make much sense for Windows 10 and desktop virtualization. <BR /> <BR /> In Windows 10, you’re probably running several applications (web browsers, text editors, chat clients, etc) and most of them will reserve more memory than they’re actively using.&nbsp; With these changes, Hyper-V starts allocating memory in small chunks (to give the operating system a chance to trim memory from other applications) and will use all available memory (no root reserve).&nbsp; Which isn’t to say you’ll never run out of memory but now the amount of memory shown in task manager accurately reflects the amount available for starting virtual machines. <BR /> <BR /> <STRONG> Note: </STRONG> For people using Hyper-V with device emulators in Visual Studio – the emulator does have overhead so you will need at least 200MB more RAM available than the emulator you’re starting suggests (i.e. a 512MB emulator actually needs closer to 700MB available to start successfully). <BR /> <BR /> I’ll post a follow up blog going into more nitty gritty details on this later. <BR /> <BR /> Have fun making virtual machines! <BR /> <BR /> Cheers, <BR /> Sarah </BODY></HTML> Fri, 22 Mar 2019 00:01:43 GMT scooley 2019-03-22T00:01:43Z Linux Integration Services Download 4.1.3 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Dec 23, 2016 </STRONG> <BR /> We've just published an update for the Linux Integration Services download. 
This release includes a series of upstream updates and adds compatibility with Red Hat Enterprise Linux, CentOS, and Oracle Linux RHCK 7.3. <BR /> <BR /> The LIS 4.1.3 download is available from the <A href="#" target="_blank"> Microsoft Download Center </A> . <BR /> <BR /> The LIS download is not required for Linux distributions that have built-in LIS, as described in <A href="#" target="_blank"> "Which Linux Integration Services should I use in my Linux VMs?" </A> <BR /> <BR /> Linux Integration Services is an open source project that is part of the Linux Kernel, and we welcome public involvement with the LIS download on <A href="#" target="_blank"> github </A> . </BODY></HTML> Fri, 22 Mar 2019 00:01:26 GMT Joshua Poulson 2019-03-22T00:01:26Z Allowing an additional host to run a VM with virtual TPM <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Oct 25, 2016 </STRONG> <BR /> Recently a colleague got a new PC and asked me how he could migrate his existing virtual machines to his new system.&nbsp; Because he had enabled a virtual <A href="#" target="_blank"> Trusted Platform Module </A> (TPM) on these VMs, he wasn’t sure how to proceed. This is also a common scenario when moving VMs to a <A href="#" target="_blank"> guarded fabric </A> . <BR /> <BR /> TPMs are an established and standardized technology which can be used for different purposes around system trustworthiness and identity. For example, they can be used to ensure the OSes boot loader and boot configuration has not been tampered with before unsealing a BitLocker encrypted disk, or to have a strong system identity based on hardware. Virtual TPMs bring these great capabilities to virtual machines running on Windows 10 1511 and Windows Server 2016 hosts or newer. <BR /> <BR /> To protect the virtual TPM’s state, it is stored encrypted. This means, some keys must be updated so the VM can run on the destination system. 
The overall process involves two basic steps before moving the VM to the new host: <BR /> <OL> <BR /> <LI> Importing the destination system’s guardian information on the source host </LI> <BR /> <LI> Updating the virtual machine’s key protector </LI> <BR /> </OL> <BR /> <H2> Importing the destination system’s guardian </H2> <BR /> First, the guardian information for the destination system or fabric must be exported. If you plan to authorize a guarded fabric, please make sure the destination hosts are properly configured with the Host Guardian Service information. Also, note that if you run this on a host in a guarded fabric, each host that is part of this guarded fabric will be able to run the virtual machine once the key protector is updated. If in doubt, ask your administrator. <BR /> <BR /> The following script snippet can be used to export guardian information from a destination host by simply running it on this host. <BR /> <BR /> <BR /> <BR /> If the destination host is part of a guarded fabric, the Host Guardian Service’s data is written to the file. Otherwise, a local guardian is created if it does not exist with the default name and exported. <BR /> <BR /> On the source host, run this command in an administrative PowerShell to import the guardian information which was previously exported. <BR /> <BR /> <BR /> <H2> Updating the virtual machine’s key protector </H2> <BR /> With the destination system’s guardian information present on the source system, each virtual machine’s key protector can now be updated to include the new guardian. <BR /> <BR /> For this step, the assumption is that the source system is running in local mode and the right guardian information is present. If you are running on Windows 10 and can start your VM with a virtual TPM, this should be the case. <BR /> <BR /> <BR /> <BR /> The script loops through all VMs with an enabled vTPM and adds the guardian for the destination system exported above. 
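The script snippets referenced above were embedded images that did not survive the migration of this post. A rough sketch of both steps, using the HgsClient and Hyper-V cmdlets that ship with Windows 10 / Windows Server 2016 — treat the exact parameter names (for example -GuardianFriendlyName) and the guardian names as assumptions to verify with Get-Help on your build:

```powershell
# --- On the DESTINATION host: export its guardian information ---
# Create a local guardian with the default name if none exists yet.
if (-not (Get-HgsGuardian -Name 'UntrustedGuardian' -ErrorAction SilentlyContinue)) {
    New-HgsGuardian -Name 'UntrustedGuardian' -GenerateCertificates
}
Export-HgsGuardian -Name 'UntrustedGuardian' -Path 'C:\Temp\DestinationGuardian.xml'

# --- On the SOURCE host: import the guardian, then update key protectors ---
Import-HgsGuardian -Path 'C:\Temp\DestinationGuardian.xml' -Name 'NewHost'

# Loop through all VMs with an enabled vTPM and grant the new guardian
# access, so the VMs can run on the destination host after the move.
foreach ($vm in Get-VM | Where-Object { (Get-VMSecurity -VM $_).TpmEnabled }) {
    $kp    = ConvertTo-HgsKeyProtector -Bytes (Get-VMKeyProtector -VM $vm)
    $newKp = Grant-HgsKeyProtectorAccess -KeyProtector $kp -GuardianFriendlyName 'NewHost'
    Set-VMKeyProtector -VM $vm -KeyProtector $newKp.RawData
}
```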
<BR /> <H2> Finishing up </H2> <BR /> Finally, the virtual machines can be exported on the source and imported on the destination host. You should be good to start the VMs. <BR /> <BR /> Hope this helps, <BR /> <BR /> Lars </BODY></HTML> Fri, 22 Mar 2019 00:01:22 GMT Lars Iwer 2019-03-22T00:01:22Z Linux Integration Services Download 4.1.2-2 hotfix <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Oct 21, 2016 </STRONG> <BR /> We've just published a hotfix release of the Linux Integration Services download, version 4.1.2-2. <BR /> <BR /> This release addresses two critical issues: <BR /> <BR /> "Do not lose pending heartbeat vmbus packets" (for versions 5.x, 6.x, 7.x) <BR /> Hyper-V hosts can be configured to send "heartbeat" packets to guests to see if they are active, and to reboot them when they do not respond. These heartbeat packets can queue up while a guest is paused, each expecting a response when the guest is re-activated, for example when a guest is moved by live migration. This fix corrects a problem where some of these packets could be dropped, leading the host to reboot an otherwise healthy guest. <BR /> <BR /> "Exclude UDP ports in RSS hashing" (for versions 6.x, 7.x) <BR /> While improving network performance by taking advantage of host-supported offloads, we introduced a problem with UDP workloads on Azure. This change fixes excessive UDP packet loss in this scenario. <BR /> <BR /> Linux Integration Services 4.1.2-2 can be downloaded <A href="#" target="_blank"> here </A> . </BODY></HTML> Fri, 22 Mar 2019 00:01:15 GMT Joshua Poulson 2019-03-22T00:01:15Z Waiting for VMs to restart in a complex configuration script with PowerShell Direct <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Oct 11, 2016 </STRONG> <BR /> Have you ever tried to automate the setup of a complex environment including the base OS, AD, SQL, Hyper-V and other components? <BR /> <BR /> For my demo at Ignite 2016 I did just that. 
&nbsp;I would like to share a few things I learned&nbsp;while writing a single PowerShell script that builds the demo environment from scratch. The script heavily uses <A href="#" target="_blank"> PowerShell Direct </A> and just requires the installation sources put into specific folders. <BR /> <BR /> In this blog post I’d like to provide solutions for two challenges that I came across: <BR /> <UL> <BR /> <LI> Determining when a virtual machine is ready for customization using PowerShell Direct, and – as a variation of that theme – </LI> <BR /> <LI> Determining when Active Directory is fully up and running in a fully virtualized PoC/demo environment. </LI> <BR /> </UL> <BR /> <H2> Solution #1 Determining when a virtual machine is ready for customization using PowerShell Direct </H2> <BR /> Some guest OS&nbsp;operations require multiple restarts. If you’re using a simple approach to automate everything from a single script and check for the guest OS to be ready, things might go wrong. For example, with a naïve PowerShell Direct call using Invoke-Command, the script might resume while the virtual machine is restarting multiple times to finish up role installation. This can lead to unpredictable behavior and break scripts. <BR /> <BR /> One solution is using a wrapper function like this: <BR /> <BR /> <BR /> <BR /> This wrapper function first makes sure that the virtual machine is running, if not, the VM is started. If the heartbeat integration component is enabled for the VM, it will also wait for a proper heartbeat status – this resolves the multiple-reboot issue mentioned above. Afterwards, it waits for a proper PowerShell Direct connection. Both wait operations have time-outs to make sure script execution is not blocked perpetually. Finally, the provided script block is run passing through arguments. 
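The wrapper itself was posted as an image and is no longer visible. The following is a reconstruction of the logic described above (the function name follows the Invoke-CommandWithPSDirect naming used later in this post; parameter names and time-out values are my own):

```powershell
function Invoke-CommandWithPSDirect {
    param(
        [string]$VMName,
        [scriptblock]$ScriptBlock,
        [pscredential]$Credential,
        [object[]]$ArgumentList,
        [int]$TimeoutSeconds = 300
    )

    # Make sure the virtual machine is running.
    $vm = Get-VM -Name $VMName
    if ($vm.State -ne 'Running') { Start-VM -VM $vm }

    $deadline = (Get-Date).AddSeconds($TimeoutSeconds)

    # If the heartbeat integration component is enabled, wait for a proper
    # heartbeat. This avoids resuming while the guest is still restarting,
    # e.g. in the middle of a multi-reboot role installation.
    $hb = Get-VMIntegrationService -VM $vm -Name 'Heartbeat' -ErrorAction SilentlyContinue
    if ($hb -and $hb.Enabled) {
        while ($vm.Heartbeat -notmatch 'Ok' -and (Get-Date) -lt $deadline) {
            Start-Sleep -Seconds 2
        }
    }

    # Wait until a PowerShell Direct session can actually be established.
    $session = $null
    while (-not $session -and (Get-Date) -lt $deadline) {
        $session = New-PSSession -VMName $VMName -Credential $Credential `
            -ErrorAction SilentlyContinue
        if (-not $session) { Start-Sleep -Seconds 2 }
    }
    if (-not $session) {
        throw "Timed out waiting for PowerShell Direct on '$VMName'."
    }

    # Finally, run the provided script block, passing through arguments.
    try {
        Invoke-Command -Session $session -ScriptBlock $ScriptBlock `
            -ArgumentList $ArgumentList
    }
    finally {
        Remove-PSSession $session
    }
}
```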
<BR /> <BR /> <H2> Solution #2 Determining when Active Directory is fully up and running </H2> <BR /> <BR /> Whenever a Domain Controller is restarted, it takes some time until the full AD functionality is available. &nbsp;If you use a VMConnect session to look at the machine during this time, you will see the status message “Applying Computer Settings”. &nbsp;Even with the Invoke-CommandWithPSDirect wrapper function above, I noticed some calls, like creating a new user or group, will fail during this time. <BR /> <BR /> In my script, I am therefore waiting for AD to be ready before continuing: <BR /> <BR /> <BR /> <BR /> This function leverages the Invoke-CommandWithPSDirect function to ensure the VM is up and running. To make sure that Active Directory works properly, it then requests the local computer’s AD object until this call succeeds. <BR /> <BR /> Using these two functions has saved me quite some headache. For additional tips, you can also take a look at <A href="#" target="_blank"> Ben’s tips around variables and functions </A> . <BR /> <BR /> Cheers, <BR /> <BR /> Lars <BR /> <BR /> PS: The full script for building the guarded fabric demo environment for Ignite 2016’s session BRK3124: Dive into Shielded VMs with Windows Server 2016 Hyper-V will be shared through our <A href="#" target="_blank"> Virtualization Documentation GitHub </A> . </BODY></HTML> Fri, 22 Mar 2019 00:00:13 GMT scooley 2019-03-22T00:00:13Z Linux Integration Services download Version 4.1.2 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Aug 10, 2016 </STRONG> <BR /> We are pleased to announce the availability of Linux Integration Services (LIS) 4.1.2. This point release of the LIS download expands supported releases to Red Hat Enterprise Linux, CentOS, and Oracle Linux with Red Hat Compatible Kernel 6.8. This release also includes upstream bug fixes and performance improvements not included in previous LIS downloads. 
<BR /> <BR /> See the separate PDF file "Linux Integration Services v4-1c.pdf" for more information. <BR /> <BR /> The LIS download is an optional way to get Linux Integration Services updates for certain versions of Linux. To determine whether you want to download LIS, refer to the blog post <A href="#" target="_blank"> "Which Linux Integration Services should I use in my Linux VMs?" </A> <BR /> <BR /> <STRONG> Download Location </STRONG> <BR /> <BR /> The Linux Integration Services download is available either as a disk image (ISO) or as a gzipped tar file. The disk image can be attached to a virtual machine, or the tar file can be uploaded and expanded to install these kernel modules. Refer to the instruction PDF available separately from the download, named "Linux Integration Services v4-1c.pdf". <BR /> <BR /> <A href="#" target="_blank"> </A> <BR /> <BR /> <STRONG> Linux Integration Services documentation </STRONG> <BR /> <BR /> See also the TechNet article “Linux and FreeBSD Virtual Machines on Hyper-V” for a comparison of LIS features and best practices for use here: <A href="#" target="_blank"> </A> <BR /> <BR /> <STRONG> Source Code </STRONG> <BR /> Linux Integration Services code is open source, released under the GNU General Public License version 2 (GPLv2), and is freely available at the LIS GitHub project here: <A href="#" target="_blank"> </A> and in the upstream Linux kernel: <A href="#" target="_blank"> </A> <BR /> </BODY></HTML> Fri, 22 Mar 2019 00:00:08 GMT Joshua Poulson 2019-03-22T00:00:08Z Which Linux Integration Services should I use in my Linux VMs? 
<HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jul 12, 2016 </STRONG> <BR /> <STRONG> Overview </STRONG> <BR /> If you run Linux guest VMs on Hyper-V, you may wonder about how to get the “best” Linux Integration Services (LIS) for your Linux distribution and usage scenario.&nbsp; Getting the “best” is a bit nuanced, so this blog post gives a detailed explanation to enable you to make the right choice for your situation. <BR /> <BR /> Microsoft has two separate tracks for delivering LIS.&nbsp; It’s important to understand that the tracks are separate, and don’t overlap with each other.&nbsp; You have to decide which track works best for you. <BR /> <BR /> <STRONG> “Built-in” LIS </STRONG> <BR /> One track is through the Linux distro vendors, such as Red Hat, SUSE, Oracle, Canonical, and the Debian community.&nbsp; Developers from Microsoft and the Linux community at large submit LIS updates to the Linux Kernel Mailing List, and get code review feedback from the Linux community.&nbsp; When the feedback process completes, the changes are incorporated into the upstream Linux kernel as maintained by Linus Torvalds and the Linux community “maintainers”. <BR /> <BR /> After acceptance Microsoft works with the distro vendors to backport those changes into whatever Linux kernel the Linux distro vendors are shipping.&nbsp; The distro vendors take the changes, then build, test, and ultimately ship LIS as part of their release.&nbsp; Microsoft gets early versions of the releases, and we test as well and give feedback to the distro vendor.&nbsp; Ultimately we converge at a point where we’re both happy with the release. We do this with Red Hat, SUSE, Canonical, Oracle, etc. and so this process covers RHEL, CentOS, SLES, Oracle Linux, and Ubuntu.&nbsp; Microsoft also works with the Debian community to accomplish the same thing. 
<BR /> <BR /> This track is what our <A href="#" target="_blank"> documentation </A> refers to as “built-in”.&nbsp; You get LIS from the distro vendor as part of the distro release.&nbsp; And if you upgrade from CentOS 7.0 to 7.1, you’ll get updated LIS with the 7.1 update, just like any other Linux kernel updates.&nbsp; Same from 7.1 to 7.2. This track is the easiest track, because you don’t do anything special or extra for LIS – it’s just part of the distro release.&nbsp; It’s important to note that we don’t assign a version number to the LIS that you get this way.&nbsp; The specific set of LIS changes that you get depends on exactly when the distro vendor pulled the latest updates from the upstream Linux kernel, what they were able to include (they often don’t include every change due to the risk of destabilizing), and various other factors.&nbsp; The tradeoff with the “built-in” approach is that you won’t always have the “latest and greatest” LIS code because each distro release is a snapshot in time.&nbsp; You can upgrade to a later distro version, and, for example, CentOS 7.2 will be a later snapshot than CentOS 7.1.&nbsp; But there are inherent delays in the process.&nbsp; Distro vendors have freeze dates well in advance of a release so they can test and stabilize.&nbsp; And, CentOS, in particular, depends on the equivalent RHEL release. <BR /> <BR /> End customer support for “built-in” LIS is via your Linux distro vendor under the terms of the support agreement you have with that vendor.&nbsp; Microsoft customer support will also engage under the terms of your support agreement for Hyper-V.&nbsp;&nbsp; In either case, fixing an actual bug in the LIS code will likely be done jointly by Microsoft and the distro vendor.&nbsp; Delivery of such updated code will come via your distro vendor’s normal update processes. 
<BR /> <BR /> <STRONG> Microsoft LIS Package </STRONG> <BR /> The other track is the Microsoft-provided LIS package, which is available for RHEL, CentOS, and the Red Hat Compatible Kernel in Oracle Linux <STRONG> . </STRONG> LIS is still undergoing a moderate rate of change as we make performance improvements, handle new things in Azure, and support the Windows Server 2016 release with a new version of Hyper-V.&nbsp; As an alternative to the “built-in” LIS described above, Microsoft provides an LIS package that is the “latest and greatest” code changes.&nbsp; We provide this package backported to a variety of older RHEL and CentOS distro versions so that customers who don’t stay up-to-date with the latest version from a distro vendor can still get LIS performance improvements, bug fixes, etc.&nbsp; &nbsp;And without the need to work through the distro vendor, the Microsoft package has shorter process delays and can be more “up-to-date”. &nbsp;&nbsp;But note that over time, everything in the Microsoft LIS package shows up in a distro release as part of the “built-in” LIS.&nbsp; The Microsoft package exists only to reduce the time delay, and to provide LIS improvements to older distro versions without having to upgrade the distro version. <BR /> <BR /> The Microsoft-provided LIS packages are assigned version numbers.&nbsp; That’s the LIS 4.0, 4.1 (and the older 3.5) that you see in the version grids in the <A href="#" target="_blank"> documentation </A> , with a link to the place you can download it.&nbsp; Make sure you get the latest version, and ensure that it is applicable to the version of RHEL/CentOS that you are running, per the grids. 
<BR /> <BR /> The tradeoff with the Microsoft LIS package is that we have to build it for specific Linux kernel versions.&nbsp; When you update CentOS 7.0 to 7.1, or 7.1 to 7.2, you get changes to the kernel from the CentOS update repos.&nbsp; But you don’t get the Microsoft LIS package updates, because they are separate.&nbsp; You have to do a separate upgrade of the Microsoft LIS package.&nbsp; If you do the CentOS update, but not the Microsoft LIS package update, you may get a binary mismatch in the Linux kernel, and in the worst case, you won’t be able to boot.&nbsp; The result is that you have extra update steps if you use the Microsoft-provided LIS package.&nbsp; Also, if you are using a RHEL release with support through a Red Hat subscription, the Microsoft LIS package constitutes “uncertified drivers” from Red Hat’s standpoint.&nbsp; Your support services under a Red Hat subscription are governed by Red Hat’s “uncertified drivers” statement here: <A href="#" target="_blank"> Red Hat Knowledgebase 1067 </A> . <BR /> <BR /> Microsoft provides end customer support for the latest version of the Microsoft-provided LIS package, under the terms of your support agreement for Hyper-V.&nbsp; If you are running anything other than the latest version of the LIS package, we’ll probably ask you to upgrade to the latest and see if the problem still occurs.&nbsp; Because LIS is mostly Linux drivers that run in the Linux kernel, any fixes Microsoft provides will likely be delivered as a new version of the Microsoft LIS package, rather than as a “hotfix” to an existing version.
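The binary-mismatch case above can be spot-checked from inside the guest by comparing the kernel a Hyper-V module was built for against the kernel actually running. A diagnostic sketch (not from the original post; on a freshly updated but not-yet-rebooted system you would add `modinfo -k <new-kernel-version>` to check the kernel you are about to boot):

```shell
# Check that the hv_vmbus module on disk was built for the running kernel.
# A mismatch is the "binary mismatch" case described above; this only warns,
# it does not fix anything.
running="$(uname -r)"
vermagic="$(modinfo -F vermagic hv_vmbus 2>/dev/null | awk '{print $1}')"
if [ -z "$vermagic" ]; then
  echo "hv_vmbus not found as a loadable module (built-in driver, or not a Hyper-V guest)"
elif [ "$vermagic" = "$running" ]; then
  echo "OK: LIS modules match running kernel $running"
else
  echo "WARNING: modules built for $vermagic but running $running - update the LIS package"
fi
```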
<BR /> <BR /> <STRONG> Bottom-line </STRONG> <BR /> In most cases, using the built-in drivers that come with your Linux distro release is the best approach, particularly if you are staying up-to-date with the latest minor version releases.&nbsp; You should use the Microsoft-provided LIS package only if you need to run an older distro version that isn’t being updated by the distro vendor.&nbsp; You can also run the Microsoft LIS package if you want to be running the latest-and-greatest LIS code to get the best performance, or if you need new functionality that hasn’t yet flowed into a released distro version.&nbsp; Also, in some cases, when debugging an LIS problem, we might ask you to try the Microsoft LIS package in order to see if a problem is already fixed in code that is later than what is “built-in” to your distro version. <BR /> <BR /> Here's a tabular view of the two approaches, and the tradeoffs: <BR /> <TABLE> <TBODY><TR> <TD> <STRONG> Feature/Aspect </STRONG> </TD> <TD> <STRONG> “Built-in” LIS </STRONG> </TD> <TD> <STRONG> Microsoft LIS package </STRONG> </TD> </TR> <TR> <TD> Version Number </TD> <TD> No version number assigned.&nbsp; Don’t try to compare with the “4.0”, “4.1”, etc. version numbers assigned to the Microsoft LIS package </TD> <TD> LIS 4.0, 4.1, etc. </TD> </TR> <TR> <TD> How up to date? </TD> <TD> Snapshot as of the code deadline for the distro version </TD> <TD> Most up-to-date because released directly by Microsoft </TD> </TR> <TR> <TD> Update process </TD> <TD> Automatically updated as part of the distro update process </TD> <TD> Requires a separate step to update the Microsoft LIS package.&nbsp; Skipping this step risks a kernel/driver binary mismatch that, in the worst case, can leave the VM unable to boot. </TD> </TR> <TR> <TD> Can get latest LIS updates for older distro versions?
</TD> <TD> No.&nbsp; The only path forward is to upgrade to the latest minor version of the distro (6.8, or 7.2, for CentOS) </TD> <TD> Yes.&nbsp; Available for a wide range of RHEL/CentOS versions back to RHEL/CentOS 5.2.&nbsp; See <A href="#" target="_blank"> this documentation </A> for details on functionality and limitations for older RHEL/CentOS versions. </TD> </TR> <TR> <TD> Meets distro vendor criteria for support? </TD> <TD> Yes </TD> <TD> No, for RHEL.&nbsp; Considered “uncertified drivers” by Red Hat.&nbsp; Not an issue for CentOS, which has community support. </TD> </TR> <TR> <TD> End customer support process </TD> <TD> Via your distro vendor, or via Microsoft support.&nbsp; LIS fixes delivered via the distro vendor’s normal update processes. </TD> <TD> Via Microsoft support per your Hyper-V support agreement.&nbsp; Fixes delivered as a new version of the Microsoft LIS package. </TD> </TR> </TBODY></TABLE> <BR /> <BR /> <BR /> </BODY></HTML> Fri, 22 Mar 2019 00:00:05 GMT Virtualization-Team 2019-03-22T00:00:05Z Windows NAT (WinNAT) -- Capabilities and limitations <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on May 25, 2016 </STRONG> <BR /> <STRONG> Author: Jason Messer </STRONG> <BR /> <BR /> <BR /> <BR /> How many devices (e.g. laptops, smart phones, tablets, DVRs, etc.) do you have at home which connect to the internet? Each of these devices probably has an IP address assigned to it, but did you know that the public internet actually only sees <STRONG> one IP address </STRONG> for all of these devices? How does this work? <BR /> <BR /> Network Address Translation (NAT) allows users to create a private, internal network which shares one or more public IP addresses. When a new connection is established (e.g.
web browser session to <A href="#" target="_blank"> </A> ) the NAT <EM> translates </EM> the private (source) IP address assigned to your device to the shared public IP address which is routable on the internet and creates a new entry in the NAT flow state table. When a connection comes back into your network, the public (destination) IP address is translated to the private IP address based on the matching flow state entry. <BR /> <BR /> This same NAT technology and concept can also work with host networking using virtual machine and container endpoints running on a single host. IP addresses from the NAT internal subnet (prefix) can be assigned to VMs, containers, or other services running on this host. Similar to how the NAT translates the source IP address of a device, NAT can also translate the source IP address of a VM or container to the IP address of a virtual network adapter (Host vNIC) on the host. <BR /> <BR /> [caption id="attachment_8585" align="alignnone" width="721"] <IMG src="" /> Figure 1: Physical vs Virtual NAT[/caption] <BR /> <BR /> <BR /> <BR /> Similarly, the TCP/UDP ports can also be translated (Port Address Translation – PAT) so that traffic received on an external port can be forwarded to a different, internal port. These are known as static mappings. <BR /> <BR /> [caption id="attachment_8595" align="alignnone" width="904"] <IMG src="" /> Figure 2: Static Port Mappings[/caption] <BR /> <H2> Host Networking with WinNAT to attach VMs and Containers </H2> <BR /> The first step in creating a host network for VMs and containers is to create an internal Hyper-V virtual switch in the host. This provides Layer-2 (Ethernet) internal connectivity between the endpoints. In order to obtain external connectivity through a NAT (using WinNAT), we add a Host vNIC to the internal vSwitch and assign the default gateway IP address of the NAT to this vNIC. 
This essentially creates a router so that any network traffic from one of the endpoints that is destined for an IP address outside of the internal network (e.g. will go through the NAT translation process. <BR /> <BR /> <EM> Note: when the Windows container feature is installed, the docker daemon creates a default NAT network automatically when it starts. To stop this network from being created, make sure the docker daemon (dockerd) is started with the ‘-b “none”’ argument specified. </EM> <BR /> <BR /> In addition to address translation, WinNAT also allows users to create static port mappings or forwarding rules so that internal endpoints can be accessed from external clients. Take for example an IIS web server running in a container attached to the default NAT network. The IIS web server will be listening on port 80 and so it requires that any connections coming in on a particular port to the host from an external client will be forwarded or mapped to port 80 on the container. Reference Figure 2 above to see port 8080 on the host being mapped to port 80 on the container. <BR /> <BR /> In order to create a NAT network to connect VMs, please follow these instructions: <A href="#" target="_blank"> </A> <BR /> <BR /> In order to create a NAT network for containers (or use the default nat network) please follow these instructions: <BR /> <A href="#" target="_blank"> </A> <BR /> <H2> Key Limitations in WinNAT (Windows NAT implementation) </H2> <BR /> <UL> <BR /> <LI> Multiple internal subnet prefixes not supported </LI> <BR /> <LI> External / Internal prefixes must not overlap </LI> <BR /> <LI> No automatic networking configuration </LI> <BR /> <LI> Cannot access externally mapped NAT endpoint IP/ ports directly from host – must use internal IP / ports assigned to endpoints </LI> <BR /> </UL> <BR /> <H3> Multiple Internal Subnet Prefixes </H3> <BR /> Consider the case where multiple applications / VMs / containers each require access to a private NAT network. 
WinNAT only allows for one internal subnet prefix to be created on a host, which means that if multiple applications or services need NAT connectivity, they will need to coordinate with each other to share this internal NAT network; each application cannot create its own individual NAT network. This may require creating a larger private NAT subnet (e.g. /18) for these endpoints. Moreover, the private IP addresses assigned to the endpoints cannot be re-used, so IP allocation also needs to be coordinated. <BR /> <BR /> [caption id="attachment_8605" align="alignnone" width="494"] <IMG src="" /> Figure 3: Multiple Internal NAT Subnets are not allowed – combined into a larger, shared subnet[/caption] <BR /> <BR /> Lastly, individual external host ports can only be mapped to one internal endpoint. A user cannot create two static port mappings with external port 80 and have traffic directed to two different internal endpoints. These static port mappings must be coordinated between applications and services requiring NAT. WinNAT does not support dynamic port mappings (i.e. allowing WinNAT to automatically choose an external – or ephemeral – host port to be used by the mapping). <BR /> <BR /> <EM> Note: Dynamic port mappings are supported through docker run with the -p | -P options since IP address management (IPAM) is handled by the Host Network Service (HNS) for containers. </EM> <BR /> <H3> Overlapping External and Internal IP Prefixes </H3> <BR /> A NAT network may be created on either a client or server host. When the NAT is first created, ensure that the internal IP prefix defined does not overlap with the external IP addresses assigned to the host.
<BR /> <BR /> Example – <STRONG> This is not allowed </STRONG> : <BR /> <BR /> <EM> Internal, Private IP Subnet: <BR /> IP Address assigned to the Host: </EM> <BR /> <BR /> If a user is roaming on a laptop and connects to a different physical network such that the container host’s IP address is now within the private NAT network, the internal IP prefix of the NAT will need to be modified so that it does not overlap. <BR /> <H3> Automatic Network Configuration </H3> <BR /> WinNAT itself does not dynamically assign IP addresses, routes, DNS servers, or other network information to an endpoint. For container endpoints, since HNS manages IPAM, HNS will assign IP networking information from the NAT network to container endpoints. However, if a user is creating a VM and connecting a VM Network Adapter to a NAT network, the admin must assign the IP configuration manually inside the VM. <BR /> <H3> Accessing internal endpoints directly from the Host </H3> <BR /> Internal endpoints assigned to VMs or containers cannot be accessed using the external IPs / ports referenced in NAT static port mappings directly from the NAT host. From the NAT host, these internal endpoints must be addressed directly by their internal IP and ports. For instance, assume a container endpoint has IP and is running a web server which is listening on port 80. Moreover, assume a port mapping has been created through docker to forward traffic from the host’s IP address ( received on TCP port 8080 to the container endpoint. In this case, a user on the container host cannot access the web server through the externally mapped port on <A href="#" target="_blank"> </A> . Instead, the user must access the container web server directly at its internal IP and port on <A href="#" target="_blank"> </A> .
<BR /> <BR /> The one caveat to this limitation is that the internal endpoint can be accessed using the external IP/port from a separate VM/container endpoint running on the same NAT host: this is called hair-pinning. For example, a user operating on container A can access a web server running in container B using the external IP and port of <A href="#" target="_blank"> </A> . <BR /> <H2> Configuration Example: Attaching VMs and Containers to a NAT network </H2> <BR /> If you need to attach multiple VMs and containers to a single NAT, you will need to ensure that the NAT internal subnet prefix is large enough to encompass the IP ranges being assigned by different applications or services (e.g. Docker for Windows and Windows Container – HNS). This requires either application-level assignment of IPs and network configuration, or manual configuration by an admin who guarantees that existing IP assignments on the same host are not re-used. <BR /> <BR /> The solution below will allow both Docker for Windows (Linux VM running Linux containers) and Windows Containers to share the same WinNAT instance using separate internal vSwitches. Connectivity between both Linux and Windows containers will work. <BR /> <H3> Example </H3> <BR /> <UL> <BR /> <LI> User has connected VMs to a NAT network through an internal vSwitch named “VMNAT” and now wants to install the Windows Container feature with the docker engine <BR /> <OL> <BR /> <LI> <BR /> PS C:\&gt; Get-NetNat "VMNAT" | Remove-NetNat <EM> (this will remove the NAT but keep the internal vSwitch). </EM> <BR /> </LI> <BR /> <LI> Install the Windows Container Feature </LI> <BR /> <LI> DO NOT START the Docker Service (daemon) </LI> <BR /> <LI> Edit the arguments passed to the docker daemon (dockerd) by adding the --fixed-cidr=&lt;container prefix&gt; parameter. This tells docker to create a default nat network with the IP subnet &lt;container prefix&gt; (e.g. so that HNS can allocate IPs from this prefix.
</LI> <BR /> <LI> <BR /> PS C:\&gt; Start-Service Docker; Stop-Service Docker <BR /> </LI> <BR /> <LI> <BR /> PS C:\&gt; Get-NetNat | Remove-NetNAT <EM> (again, this will remove the NAT but keep the internal vSwitch) </EM> <BR /> </LI> <BR /> <LI> <BR /> PS C:\&gt; New-NetNat -Name SharedNAT -InternalIPInterfaceAddressPrefix &lt;shared prefix&gt; <BR /> </LI> <BR /> <LI> <BR /> PS C:\&gt; Start-Service docker <BR /> </LI> <BR /> </OL> <BR /> </LI> <BR /> </UL> <BR /> <EM> Docker/HNS will assign IPs to Windows containers from the &lt;container prefix&gt; </EM> <BR /> <BR /> <EM> Admin will assign IPs to VMs from the difference set of the &lt;shared prefix&gt; and &lt;container prefix&gt; </EM> <BR /> <BR /> <BR /> <UL> <BR /> <LI> User has installed Windows Container feature with docker engine running and now wants to connect VMs to the NAT network <BR /> <OL> <BR /> <LI> <BR /> PS C:\&gt; Stop-Service docker <BR /> </LI> <BR /> <LI> <BR /> PS C:\&gt; Get-ContainerNetwork | Remove-ContainerNetwork -force <BR /> </LI> <BR /> <LI> <BR /> PS C:\&gt; Get-NetNat | Remove-NetNat <EM> (this will remove the NAT but keep the internal vSwitch) </EM> <BR /> </LI> <BR /> <LI> Edit the arguments passed to the docker daemon (dockerd) by adding -b “none” option to the end of docker daemon (dockerd) command to tell docker not to create a default NAT network. 
</LI> <BR /> <LI> <BR /> PS C:\&gt; New-ContainerNetwork -Name nat -Mode NAT -SubnetPrefix &lt;container prefix&gt; <EM> (create a new NAT and internal vSwitch – HNS will allocate IPs to container endpoints attached to this network from the &lt;container prefix&gt;) </EM> <BR /> </LI> <BR /> <LI> <BR /> PS C:\&gt; Get-NetNat | Remove-NetNat <EM> (again, this will remove the NAT but keep the internal vSwitch) </EM> <BR /> </LI> <BR /> <LI> <BR /> PS C:\&gt; New-NetNat -Name SharedNAT -InternalIPInterfaceAddressPrefix &lt;shared prefix&gt; <BR /> </LI> <BR /> <LI> <BR /> PS C:\&gt; New-VMSwitch -Name &lt;switch name&gt; -SwitchType Internal <EM> (attach VMs to this new vSwitch) </EM> <BR /> </LI> <BR /> <LI> <BR /> PS C:\&gt; Start-Service docker <BR /> </LI> <BR /> </OL> <BR /> </LI> <BR /> </UL> <BR /> <EM> Docker/HNS will assign IPs to Windows containers from the &lt;container prefix&gt; </EM> <BR /> <BR /> <EM> Admin will assign IPs to VMs from the difference set of the &lt;shared prefix&gt; and &lt;container prefix&gt; </EM> <BR /> <BR /> In the end, you should have two internal VM switches and one NetNat shared between them. <BR /> <H2> Troubleshooting </H2> <BR /> <OL> <BR /> <LI> Make sure you only have one NAT </LI> <BR /> </OL> <BR /> Get-NetNat <BR /> <OL> <BR /> <LI> If a NAT already exists, please delete it </LI> <BR /> </OL> <BR /> Get-NetNat | Remove-NetNat <BR /> <OL> <BR /> <LI> Make sure you only have one "internal" vmSwitch for the application or feature (e.g. Windows containers). Record the name of the vSwitch for Step 4 </LI> <BR /> </OL> <BR /> Get-VMSwitch <BR /> <OL> <BR /> <LI> Check to see if there are private IP addresses (e.g.
NAT default Gateway IP Address - usually *.1) from the old NAT still assigned to an adapter </LI> <BR /> </OL> <BR /> Get-NetIPAddress -InterfaceAlias "vEthernet(&lt;name of vSwitch&gt;)" <BR /> <OL> <BR /> <LI> If an old private IP address is in use, please delete it </LI> <BR /> </OL> <BR /> Remove-NetIPAddress -InterfaceAlias "vEthernet(&lt;name of vSwitch&gt;)" -IPAddress &lt;IPAddress&gt; <BR /> <H3> Removing Multiple NATs </H3> <BR /> We have seen reports of multiple NAT networks created inadvertently. This is due to a bug in recent builds (including Windows Server 2016 Technical Preview 5 and Windows 10 Insider Preview builds). If you see multiple NAT networks, after running <EM> docker network ls </EM> or <EM> Get-ContainerNetwork </EM> , please perform the following from an elevated PowerShell: <BR /> $KeyPath = "HKLM:\SYSTEM\CurrentControlSet\Services\vmsmp\parameters\SwitchList" <BR /> $keys = get-childitem $KeyPath <BR /> foreach($key in $keys) <BR /> { <BR /> if ($key.GetValue("FriendlyName") -eq 'nat') <BR /> { <BR /> $newKeyPath = $KeyPath+"\"+$key.PSChildName <BR /> Remove-Item -Path $newKeyPath -Recurse <BR /> } <BR /> } <BR /> remove-netnat -Confirm:$false <BR /> Get-ContainerNetwork | Remove-ContainerNetwork <BR /> <STRONG> Restart the Computer </STRONG> <BR /> <BR /> <BR /> <BR /> ~ Jason Messer </BODY></HTML> Fri, 22 Mar 2019 00:00:00 GMT scooley 2019-03-22T00:00:00Z What Happened to the “NAT” VMSwitch? <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on May 14, 2016 </STRONG> <BR /> <STRONG> Author: Jason Messer </STRONG> <BR /> <BR /> Beginning in Windows Server Technical Preview 3, our users noticed a new Hyper-V Virtual Switch Type – “NAT” – which was introduced to simplify the process of connecting Windows containers to the host using a private network. This allowed network traffic sent to the host to be redirected to individual containers running on the host through network and port address translation (NAT and PAT) rules. 
Additionally, users began to use this new VM Switch type not only for containers but also for ordinary VMs to connect them to a NAT network. While this may have simplified the process of creating a NAT network and connecting containers or VMs to a vSwitch, it resulted in confusion and a layering violation in the network stack. <BR /> <BR /> Beginning in Windows Server Technical Preview 5 and with recent Windows Insider Builds, the “NAT” VM Switch Type has been removed to resolve this layering violation. <BR /> <BR /> In the OSI (Open Systems Interconnection) model, both physical network switches and virtual switches operate at Layer-2 of the network stack without any knowledge of IP addresses or ports. These switches simply forward packets based on the Ethernet headers (i.e. MAC addresses) in the Layer-2 frame. NAT and PAT operate at Layers 3 and 4, respectively, of the network stack. <TABLE> <TBODY><TR> <TD> <STRONG> Layer </STRONG> </TD> <TD> <STRONG> Function </STRONG> </TD> <TD> <STRONG> Example </STRONG> </TD> </TR> <TR> <TD> Application (7) </TD> <TD> Network process </TD> <TD> HTTP, SMTP, DNS </TD> </TR> <TR> <TD> Presentation (6) </TD> <TD> Data representation and encryption </TD> <TD> JPG, GIF, SSL, ASCII </TD> </TR> <TR> <TD> Session (5) </TD> <TD> Interhost communication </TD> <TD> NetBIOS </TD> </TR> <TR> <TD> Transport (4) </TD> <TD> End-to-End Connections </TD> <TD> TCP, UDP (Ports) </TD> </TR> <TR> <TD> Network (3) </TD> <TD> Path determination and routing based on IP addresses </TD> <TD> Routers </TD> </TR> <TR> <TD> Data Link (2) </TD> <TD> Forward frames based on MAC addresses </TD> <TD> 802.3 Ethernet, Switches </TD> </TR> <TR> <TD> Physical (1) </TD> <TD> Send data through physical signaling </TD> <TD> Network cables, NIC cards </TD> </TR> </TBODY></TABLE> <BR /> <BR /> <BR /> Creating a “NAT” VM Switch type actually combined several operations into one; these operations can still be performed individually today (detailed instructions can be found <A href="#" target="_blank">
here </A> ): <BR /> <UL> <BR /> <LI> Create an “internal” VM Switch </LI> <BR /> <LI> Create a Private IP network for NAT </LI> <BR /> <LI> Assign the default gateway IP address of the private network to the internal VM switch Management Host vNIC </LI> <BR /> </UL> <BR /> In Technical Preview 5 we have also introduced the Host Network Service (HNS) for containers, which is a servicing layer used by both the Docker and PowerShell management surfaces to create the required network “plumbing” for new container networks. A user who wants to create a NAT container network through docker will simply execute the following: <BR /> c:\&gt; docker network create -d nat MyNatNetwork <BR /> and HNS will take care of the details such as creating the internal vSwitch and NAT. <BR /> <BR /> Looking forward, we are considering how we can create a single arbitrator for all host networking (regardless of containers or VMs) so that these workflows and networking primitives will be consistent. <BR /> <BR /> ~ Jason </BODY></HTML> Thu, 21 Mar 2019 23:59:26 GMT scooley 2019-03-21T23:59:26Z Linux Integration Services 4.1 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Mar 21, 2016 </STRONG> <BR /> We are pleased to announce the availability of Linux Integration Services (LIS) 4.1. This new release expands supported releases to Red Hat Enterprise Linux, CentOS, and Oracle Linux with Red Hat Compatible Kernel 5.2, 5.3, 5.4, and 7.2. In addition to the latest bug fixes and performance improvements for Linux guests running on Hyper-V, this release includes the following new features: <BR /> <UL> <BR /> <LI> Hyper-V Sockets (Windows Server Technical Preview) </LI> <BR /> <LI> Manual Memory Hot-Add (Windows Server Technical Preview) </LI> <BR /> <LI> SCSI WNN </LI> <BR /> <LI> lsvmbus </LI> <BR /> <LI> Uninstallation scripts </LI> <BR /> </UL> <BR /> <BR /> <BR /> See the ReadMe file for more information.
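As a quick illustration of the new lsvmbus tool listed above, the following guarded invocation (a sketch, not from the original post) lists the devices on the VMBus when run inside a Hyper-V guest, and degrades gracefully elsewhere:

```shell
# lsvmbus ships with LIS 4.1 (and with many distros' Hyper-V tools packages).
# -vv prints verbose per-device detail; the guard keeps this runnable on
# machines where the tool is absent or there is no VMBus.
if command -v lsvmbus >/dev/null 2>&1; then
  lsvmbus -vv 2>&1 || echo "lsvmbus present but no VMBus devices (not running under Hyper-V?)"
else
  echo "lsvmbus not installed (provided by the LIS package or the distro's Hyper-V tools)"
fi
```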
<BR /> <BR /> <BR /> <BR /> <STRONG> Download Location </STRONG> <BR /> <BR /> <BR /> <BR /> The Linux Integration Services installation scripts and RPMs are available either as a tar file that can be uploaded to a virtual machine and installed, or as an ISO that can be mounted as a CD. The files are available from the Microsoft Download Center here: <A href="#" target="_blank"> </A> <BR /> <BR /> <BR /> <BR /> A ReadMe file has been provided with information on installation, upgrade, uninstallation, features, and known issues. <BR /> <BR /> <BR /> <BR /> See also the TechNet article “ <A href="#" target="_blank"> Linux and FreeBSD Virtual Machines on Hyper-V </A> ” for a comparison of LIS features and best practices for use here: <A href="#" target="_blank"> </A> <BR /> <BR /> <BR /> <BR /> Linux Integration Services code is released under the GNU General Public License version 2 (GPLv2) and is freely available at the LIS GitHub project here: <A href="#" target="_blank"> </A> <BR /> <BR /> <BR /> <BR /> <STRONG> Supported Virtualization Server Operating Systems </STRONG> <BR /> <BR /> <BR /> <BR /> Linux Integration Services (LIS) 4.1 allows Linux guests to use Hyper-V virtualization on the following host operating systems: <BR /> <UL> <BR /> <LI> Windows Server 2008 R2 (applicable editions) </LI> <BR /> <LI> Microsoft Hyper-V Server 2008 R2 </LI> <BR /> <LI> Windows 8 Pro and 8.1 Pro </LI> <BR /> <LI> Windows Server 2012 and 2012 R2 </LI> <BR /> <LI> Microsoft Hyper-V Server 2012 and 2012 R2 </LI> <BR /> <LI> Windows Server Technical Preview </LI> <BR /> <LI> Microsoft Hyper-V Server Technical Preview </LI> <BR /> <LI> Microsoft Azure </LI> <BR /> </UL> <BR /> <BR /> <BR /> <STRONG> Applicable Linux Distributions </STRONG> <BR /> <BR /> <BR /> <BR /> Microsoft provides Linux Integration Services for a broad range of Linux distros as documented in the “ <A href="#" target="_blank"> Linux and FreeBSD Virtual Machines on Hyper-V </A> ” topic on TechNet.
Per that documentation, many Linux distributions and versions have Linux Integration Services built in and do not require installation of this separate LIS package from Microsoft. This LIS package is available for a subset of supported distributions in order to provide the best performance and fullest use of Hyper-V features. It can be installed in the listed distribution versions that do not already have LIS built in, and can be installed as an upgrade in listed distribution versions that already have LIS built in. LIS 4.1 is applicable to the following guest operating systems: <BR /> <UL> <BR /> <LI> Red Hat Enterprise Linux 5.2-5.11 32-bit, 32-bit PAE, and 64-bit </LI> <BR /> <LI> Red Hat Enterprise Linux 6.0-6.7 32-bit and 64-bit </LI> <BR /> <LI> Red Hat Enterprise Linux 7.0-7.2 64-bit </LI> <BR /> <LI> CentOS 5.2-5.11 32-bit, 32-bit PAE, and 64-bit </LI> <BR /> <LI> CentOS 6.0-6.7 32-bit and 64-bit </LI> <BR /> <LI> CentOS 7.0-7.2 64-bit </LI> <BR /> <LI> Oracle Linux 6.4-6.7 with Red Hat Compatible Kernel 32-bit and 64-bit </LI> <BR /> <LI> Oracle Linux 7.0-7.2 with Red Hat Compatible Kernel 64-bit </LI> <BR /> </UL> </BODY></HTML> Thu, 21 Mar 2019 23:58:10 GMT Joshua Poulson 2019-03-21T23:58:10Z Setting up Linux Operating System Clusters on Hyper-V (3 of 3) <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Mar 02, 2016 </STRONG> <BR /> Author:&nbsp;Dexuan Cui <BR /> <BR /> <A href="#" title="Setting up Linux Operating System Clusters on Hyper-V (2 of 3)" target="_blank"> Link to Part 2: Setting up Linux Operating System Clusters on Hyper-V <BR /> </A> <A href="#" title="Setting up Linux Operating System Clusters on Hyper-V (1 of 3)" target="_blank"> Link to Part 1: Setting up Linux Operating System Clusters on Hyper-V </A> <BR /> <H2> <STRONG> Background </STRONG> </H2> <BR /> This blog post is the third in a series of three that walks through setting up Linux operating system clusters on Hyper-V.&nbsp; The walk-through uses <A href="#"
target="_blank"> Red Hat Cluster Suite (RHCS) </A> as the clustering software and Hyper-V’s <A href="#" target="_blank"> Shared VHDX </A> as the shared storage needed by the cluster software. <BR /> <BR /> Part 1 of the series showed how to set up a Hyper-V host cluster and a shared VHDX.&nbsp; Then it showed how to set up five CentOS 6.7 VMs in the host cluster, all using the shared VHDX. <BR /> <BR /> Part 2 of the series showed how to set up the Linux OS cluster with the CentOS 6.7 VMs, running RHCS and the <A href="#" target="_blank"> GFS2 file system </A> .&nbsp; The GFS2 file system is specifically designed to be used on shared disks accessed by multiple nodes in a Linux cluster, and so is a natural example to use. <BR /> <BR /> This post now makes use of the Linux OS cluster to provide high availability.&nbsp; A web server is set up on one of the CentOS 6.7 nodes, and various failover cases are demonstrated. <BR /> <BR /> Let’s get started! <BR /> <BR /> <BR /> <H2> <STRONG> Set up a web server running on a node and experiment with the failover case </STRONG> </H2> <BR /> Note: this is actually an “Active-Passive” cluster ( <A href="#" target="_blank"> a cluster where only one node runs a given service at a time, and the other nodes are in stand-by to take over, should the need arise </A> ). Setting up an “Active-Active” cluster is much more complex, because it requires great awareness of the underlying applications; thus one mostly sees this with very specific applications - e.g. database servers that are designed to support multiple database servers accessing the same database disk storage. <BR /> <BR /> <BR /> <OL> <BR /> <LI> <A href="#" target="_blank"> Add a Failover Domain <BR /> </A> “A failover domain is a named subset of cluster nodes that are eligible to run a cluster service in the event of a node failure”.
<BR /> <BR /> With the below configuration (priority 1 is the highest and priority 5 the lowest), by default the web server runs on node 1. If node 1 fails, node 2 will take over and run the web server. If node 2 fails, node 3 will take over, etc. <IMG src="" /> </LI> <BR /> <LI> Install &amp; configure Apache <STRONG> on every node </STRONG> </LI> <BR /> </OL> <BR /> # yum install httpd <BR /> # chkconfig httpd off&nbsp;&nbsp; # Apache must not start automatically; the cluster manager starts it when needed. <BR /> <OL> <BR /> <LI> On node 1, make the minimal change to the default Apache config file /etc/httpd/conf/httpd.conf: </LI> <BR /> <BR /> (Note: <STRONG> /mydata </STRONG> is in the shared GFS2 partition) <BR /> </OL> <BR /> -DocumentRoot "/var/www/html" <BR /> +DocumentRoot "/ <STRONG> mydata </STRONG> /html" <BR /> -&lt;Directory "/var/www/html"&gt; <BR /> +&lt;Directory "/ <STRONG> mydata </STRONG> /html"&gt; <BR /> <P> Then scp /etc/httpd/conf/httpd.conf to the other 4 nodes. </P> <BR /> <P> Next, add a simple html file /mydata/html/index.html with the below content: </P> <BR /> <BR /> <STRONG> &lt;html&gt; &lt;body&gt; &lt;h1&gt; "Hello, World" (test page)&lt;/h1&gt;&nbsp; &lt;/body&gt; &lt;/html&gt; <BR /> <BR /> </STRONG> <BR /> <OL> <BR /> <LI> Define the “Resources” and “Service Group” of the cluster </LI> <BR /> <BR /> Note: here is the “floating IP” (a.k.a. virtual IP). An end user uses <A href="#" target="_blank"> </A> to access the web server, but the web server httpd daemon can be running on any node of the cluster according to the failover configuration, when some of the nodes fail.
<BR /> </OL> <BR /> <P> <IMG src="" /> </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> <IMG src="" /> </P> <BR /> <BR /> <OL> <BR /> <LI> Test the Web Server from another host </LI> <BR /> <BR /> Use a browser to access <A href="#" target="_blank"> <BR /> </A> <IMG src="" /> <A href="#" target="_blank"> <BR /> </A> Keep pressing “F5” to refresh the page; it loads correctly every time. <BR /> <BR /> We can verify the web server is actually running on node 1: <BR /> [root@my-vm1 ~]# ps aux | grep httpd <BR /> <BR /> root&nbsp;&nbsp;&nbsp;&nbsp; 13539&nbsp; 0.0&nbsp; 0.6 298432 12744 ?&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; S&lt;s&nbsp; 21:38&nbsp;&nbsp; 0:00 /usr/sbin/httpd -Dmy_apache -d /etc/httpd -f /etc/cluster/apache/apache:my_apache/httpd.conf -k start <BR /> <LI> Test Fail Over </LI> <BR /> <BR /> Shut down node 1 with “shutdown -h now”; an end user repeatedly pressing F5 will detect the failure immediately: <BR /> <IMG src="" /> In about 15 seconds, the end user finds the web server is back to normal: <BR /> <IMG src="" /> <BR /> <BR /> Now, we can verify the web server is running on node 2: <BR /> <BR /> [root@my-vm2 ~]# ps aux | grep http <BR /> root&nbsp;&nbsp;&nbsp;&nbsp; 13879&nbsp; 0.0&nbsp; 0.6 298432 12772 ?&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; S&lt;s&nbsp; 21:58&nbsp;&nbsp; 0:00 /usr/sbin/httpd -Dmy_apache -d /etc/httpd -f /etc/cluster/apache/apache:my_apache/httpd.conf -k start <BR /> And we can check the cluster status: <BR /> [root@my-vm2 ~]# clustat <BR /> Cluster Status for my-cluster @ Thu Oct 29 21:59:40 2015 <BR /> <BR /> Member Status: Quorate <BR /> Member Name&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ID&nbsp;&nbsp; Status <BR /> ------ ----&nbsp;&nbsp;
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ---- ------ <BR /> my-vm1&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1 Offline <BR /> my-vm2&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;2 Online, Local, rgmanager <BR /> my-vm3&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3 Online, rgmanager <BR /> my-vm4&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 4 Online, rgmanager <BR /> my-vm5&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 5 Online, rgmanager <BR /> /dev/block/8:33&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 Online, Quorum Disk <BR /> <BR /> Service 
Name&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Owner (Last)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; State <BR /> ------- ----&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;----- ------&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ----- <BR /> service:my_service_group&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;my-vm2&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; started <BR /> <LI> Now we power off node 2 by clicking Virtual Machine Connection’s “Turn Off” icon. </LI> <BR /> <BR /> Similarly, node 3 will take over from node 2, and the end user will see the web server come back to normal after a transient blackout. <BR /> [root@my-vm3 ~]# clustat <BR /> Cluster Status for my-cluster @ Thu Oct 29 22:03:57 2015 <BR /> <BR /> Member Status: Quorate <BR /> Member Name&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;ID&nbsp;&nbsp; Status <BR /> ------ ----&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;---- ------ <BR /> my-vm1&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;1 Offline <BR /> 
my-vm2&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;2 Offline <BR /> my-vm3&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;3 Online, Local, rgmanager <BR /> my-vm4&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;4 Online, rgmanager <BR /> my-vm5&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;5 Online, rgmanager <BR /> /dev/block/8:33&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;0 Online, Quorum Disk <BR /> <BR /> Service Name&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Owner (Last) &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;State <BR /> ------- ----&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;----- ------ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;------ <BR /> 
service:my_service_group&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;my-vm3&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;started <BR /> <BR /> <LI> Now we power off nodes 3 and 4; in ~20 seconds the web server will be running on node 5, the last remaining node. </LI> <BR /> <LI> Now let’s power on node 1; after node 1 rejoins the cluster, the web server will be moved from node 5 back to node 1. </LI> <BR /> </OL> <BR /> <H2> <STRONG> Summary and conclusions </STRONG> </H2> <BR /> We’ve often been asked whether Linux OS clusters can be created for Linux guests running on Hyper-V.&nbsp; The answer is “Yes!”&nbsp; This series of 3 blog posts shows how to set up Hyper-V and make use of the Shared VHDX feature to provide shared storage for the cluster nodes.&nbsp; Then it shows how to set up Red Hat Cluster Suite and a shared GFS2 file system.&nbsp; Finally, it wraps up with a demonstration of a web server that fails over from one cluster node to another. <BR /> <BR /> Other cluster software is available for other Linux distros and versions, so the process for your particular environment may be different, but the fundamental requirement for shared storage is typically the same across different cluster packages.&nbsp; Hyper-V and Shared VHDX provide the core infrastructure you need, and then you can install and configure your Linux OS clustering software to meet your particular requirements. 
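As a closing practical note, the manual browser-refresh test used in the failover demonstration can also be scripted from another host. A minimal sketch: the address 10.0.0.100 and the `check_vip` helper are illustrative placeholders, not part of the original setup; substitute the floating (virtual) IP you configured for the service group.

```shell
# Probe the cluster's floating IP instead of pressing F5 by hand.
# VIP is a placeholder -- use the virtual IP from your service group.
VIP="${VIP:-10.0.0.100}"

check_vip() {
    # Report OK only if our "Hello, World" test page answers within 2 seconds.
    if curl -s --max-time 2 "http://$1/" 2>/dev/null | grep -q "Hello, World"; then
        echo "OK"
    else
        echo "DOWN"
    fi
}

# Watch loop (Ctrl-C to stop); during a failover you'd see DOWN for roughly 15-20s:
# while true; do echo "$(date '+%T') $(check_vip "$VIP")"; sleep 1; done
```

A timestamped log of OK/DOWN transitions gives a more precise failover time than counting F5 presses.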
<BR /> <BR /> Thank you for following this series, <BR /> Dexuan Cui </BODY></HTML> Thu, 21 Mar 2019 23:58:05 GMT scooley 2019-03-21T23:58:05Z Setting up Linux Operating System Clusters on Hyper-V (2 of 3) <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Feb 23, 2016 </STRONG> <BR /> Author:&nbsp;Dexuan Cui <BR /> <BR /> <A href="#" title="Setting up Linux Operating System Clusters on Hyper-V (1 of 3)" target="_blank"> Link to Part 1&nbsp;Setting up Linux Operating System Clusters on Hyper-V </A> <BR /> <H2> <STRONG> Background </STRONG> </H2> <BR /> This blog post is the second in a series of three that walks through setting up Linux operating system clusters on Hyper-V.&nbsp; The walk-through uses <A href="#" target="_blank"> Red Hat Cluster Suite (RHCS) </A> as the clustering software and Hyper-V’s <A href="#" target="_blank"> Shared VHDX </A> as the shared storage needed by the cluster software. <BR /> <BR /> Part 1 of the series showed how to set up a Hyper-V host cluster and a shared VHDX.&nbsp; Then it showed how to set up five CentOS 6.7 VMs in the host cluster, all using the shared VHDX. <BR /> <BR /> This post will set up the Linux OS cluster with the CentOS 6.7 VMs, running RHCS and the <A href="#" target="_blank"> GFS2 file system </A> .&nbsp; RHCS is specifically for use with RHEL/CentOS 6.x; RHEL/CentOS 7.x uses a different clustering software package that is not covered by this walk-through.&nbsp; The GFS2 file system is specifically designed to be used on shared disks accessed by multiple nodes in a Linux cluster, and so is a natural example to use. <BR /> <BR /> Let’s get started! 
<BR /> <BR /> <STRONG> Set up a guest cluster with the five CentOS 6.7 VMs running RHCS + GFS2 file system </STRONG> <BR /> <OL> <BR /> <LI> On one node of the Linux OS cluster, say, my-vm1, install the web-based HA configuration tool <STRONG> luci </STRONG> </LI> <BR /> </OL> <BR /> # yum groupinstall "High Availability Management" <BR /> # chkconfig luci on; service luci start <BR /> <OL> <BR /> <LI> On all 5 nodes, install RHCS and make the proper configuration changes </LI> <BR /> </OL> <BR /> # yum groupinstall "High Availability" "Resilient Storage" <BR /> # chkconfig iptables off <BR /> # chkconfig ip6tables off <BR /> # chkconfig NetworkManager off <BR /> <BR /> <BR /> <P> Disable SELinux: </P> <BR /> <BR /> edit /etc/selinux/config: SELINUX=disabled <BR /> # setenforce 0 <BR /> # passwd ricci <EM> [this user/password is used to log in to the web-based HA configuration tool luci] </EM> <BR /> # chkconfig ricci on; service ricci start <BR /> # chkconfig cman on; chkconfig clvmd on <BR /> # chkconfig rgmanager on; chkconfig modclusterd on <BR /> # chkconfig gfs2 on <BR /> # reboot <EM> [You can also start the above daemons manually without a reboot] </EM> <BR /> <P> After steps 1 and 2, reboot all the nodes so the changes take effect, or manually start the above service daemons on every node. </P> <BR /> <P> Optionally, remove the “rhgb quiet” kernel parameters on every node, so you can easily see which cluster daemon fails to start on VM bootup. 
</P> <BR /> <BR /> <OL> <BR /> <LI> Use a web browser to access <A href="#" target="_blank"> https://my-vm1:8084 </A> (the web-based HA configuration tool luci -- first log in as root and grant the user ricci administrator permission to create a cluster, then log out and log back in as ricci) </LI> <BR /> </OL> <BR /> <BR /> <OL> <BR /> <LI> Create a 5-node cluster “my-cluster” <BR /> <IMG src="" /> <BR /> <IMG src="" /> <BR /> <IMG src="" /> <BR /> We can confirm the cluster is created properly by checking the status of the service daemons and checking the cluster status (clustat): <BR /> service modclusterd status <BR /> service cman status <BR /> service clvmd status <BR /> service rgmanager status <BR /> clustat <BR /> <BR /> <BR /> e.g., when we run the commands on my-vm3, we get: <BR /> <IMG src="" /> </LI> <BR /> <LI> Add a fencing device (we use SCSI3 Persistent Registration) and associate all the VMs with it. </LI> <BR /> <BR /> Fencing is used to prevent erroneous/unresponsive nodes from accessing the shared storage, so data consistency can be achieved. <BR /> <BR /> See below for an excerpt of <A href="#" target="_blank"> IO fencing and SCSI3 PR </A> : <BR /> <BR /> <EM> “SCSI-3 PR, which stands for Persistent Reservation, supports multiple nodes accessing a device while at the same time blocking access to other nodes. SCSI-3 PR reservations are persistent across SCSI bus resets or node reboots and also support multiple paths from host to disk.&nbsp; SCSI-3 PR uses a concept of registration and reservation. Systems that participate register a key with the SCSI-3 device. Each system registers its own key. Then registered systems can establish a reservation. With this method, blocking write access is as simple as removing registration from a device. <STRONG> A system wishing to eject another system issues a preempt and abort command and that ejects another node. Once a node is ejected, it has no key registered so that it cannot eject others. 
This method effectively avoids the split-brain condition </STRONG> .” </EM> <BR /> <BR /> This is how we add SCSI3 PR in RHCS: <BR /> <BR /> <EM> <IMG src="" /> </EM> <BR /> <BR /> <IMG src="" /> <BR /> <BR /> <IMG src="" /> <BR /> <BR /> NOTE 1: in <STRONG> /etc/cluster/cluster.conf </STRONG> , we need to manually specify <STRONG> devices="/dev/sdb" </STRONG> and add an <STRONG> &lt;unfence&gt; </STRONG> element for every VM <STRONG> . </STRONG> The web-based configuration tool doesn’t support this, but it is required; otherwise cman can’t work properly. <BR /> <BR /> NOTE 2: when we change <STRONG> /etc/cluster/cluster.conf </STRONG> manually, remember to increase “config_version” by 1 and propagate the new configuration to the other nodes with “ <A href="#" target="_blank"> <STRONG> cman_tool version -r </STRONG> </A> ”. <BR /> <LI> Add a quorum disk to help better cope with the split-brain issue. <EM> “In RHCS, </EM> <A href="#" target="_blank"> <EM> CMAN (Cluster MANager) </EM> </A> <EM> keeps track of membership by monitoring messages from other cluster nodes. When cluster membership changes, the cluster manager notifies the other infrastructure components, which then take appropriate action. If a cluster node does not transmit a message within a prescribed amount of time, the cluster manager removes the node from the cluster and communicates to other cluster infrastructure components that the node is not a member. Other cluster infrastructure components determine what actions to take upon notification that node is no longer a cluster member. For example, Fencing would disconnect the node that is no longer a member. </EM> <EM> <EM> A cluster can only function correctly if there is general agreement between the members regarding their status. We say a cluster has quorum if a majority of nodes are alive, communicating, and agree on the active cluster members. 
For example, </EM> </EM> <EM> in a thirteen-node cluster, quorum is only reached if seven or more nodes are communicating. If the seventh node dies, the cluster loses quorum and can no longer function. </EM> <EM> A cluster must maintain quorum to prevent split-brain issues. Quorum doesn't prevent split-brain situations, but it does decide who is dominant and allowed to function in the cluster. Quorum is determined by communication of messages among cluster nodes via Ethernet. Optionally, quorum can be determined by a combination of communicating messages via Ethernet and through a quorum disk. For quorum via Ethernet, quorum consists of a simple majority (50% of the nodes + 1 extra). When configuring a quorum disk, quorum consists of user-specified conditions.” </EM> </LI> <BR /> <BR /> In our 5-node cluster, if more than 2 nodes fail, the whole cluster will stop working. <BR /> <BR /> Here we’d like to keep the cluster working even if only 1 node is alive, that is, the “Last Man Standing” functionality (see <A href="#" target="_blank"> How to Optimally Configure a Quorum Disk in Red Hat Enterprise Linux Clustering and High-Availability Environments </A> ), so we’re going to set up a quorum disk. <BR /> </OL> <UL> <BR /> <LI> On my-vm1, use “fdisk /dev/sdc” to create a partition. Here we don’t run mkfs against it. </LI> <BR /> <LI> Run “mkqdisk -c /dev/sdc1 -l myqdisk” to initialize the qdisk partition and run “mkqdisk&nbsp; -L” to confirm it’s done successfully. </LI> <BR /> <LI> Use the web-based tool to configure the qdisk: <BR /> <IMG src="" /> <BR /> Here a heuristic is defined to check the health of every node. On every node, the ping command is run every 2 seconds. If 10 successful pings aren’t achieved within (2*10 = 20) seconds, the node considers itself failed. As a consequence, it won’t vote, it will be fenced, and it will try to reboot itself. 
<BR /> <BR /> After we “apply” the configuration in the Web GUI, /etc/cluster/cluster.conf is updated with the new lines: <BR /> <EM> &lt; <STRONG> cman expected_votes="9" </STRONG> /&gt; <BR /> </EM> <EM> &lt;quorumd label="myqdisk" min_score="1"&gt; <BR /> </EM> <EM> &lt;heuristic program="ping -c3 -t2" score="2" tko="10"/&gt; <BR /> </EM> <EM> &lt;/quorumd&gt; </EM> <BR /> And “ <STRONG> clustat </STRONG> ” and “ <STRONG> cman_tool status </STRONG> ” shows: <BR /> [root@my-vm1 ~]# <STRONG> clustat <BR /> </STRONG> Cluster Status for my-cluster @ Thu Oct 29 14:11:16 2015 <BR /> <BR /> Member Status: Quorate <BR /> Member Name&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ID&nbsp;&nbsp; Status <BR /> ------ ----&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ---- ------ <BR /> my-vm1&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1 Online, Local <BR /> my-vm2&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;2 Online <BR /> my-vm3&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3 Online <BR /> my-vm4&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 4 Online <BR /> my-vm5&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;5 Online <BR /> <BR /> <STRONG> /dev/block/8:33&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 Online, Quorum Disk </STRONG> <BR /> [root@my-vm1 ~]# <STRONG> cman_tool status </STRONG> <BR /> Version: 6.2.0 <BR /> Config Version: 33 <BR /> Cluster Name: my-cluster <BR /> Cluster Id: 25554 
<BR /> Cluster Member: Yes <BR /> Cluster Generation: 6604 <BR /> Membership state: Cluster-Member <BR /> Nodes: 5 <BR /> <STRONG> Expected votes: 9 <BR /> </STRONG> <STRONG> Quorum device votes: 4 <BR /> </STRONG> Total votes: 9 <BR /> Node votes: 1 <BR /> Quorum: 5 <BR /> Active subsystems: 11 <BR /> Flags: <BR /> Ports Bound: 0 11 177 178 <BR /> Node name: my-vm1 <BR /> Node ID: 1 <BR /> Multicast addresses: <BR /> Node addresses: <BR /> <BR /> Note 1: “Expected votes”: The expected votes value is used by cman to determine if the cluster has quorum.&nbsp; The cluster is quorate if the sum of votes of existing members is over half of the expected votes value.&nbsp; Here we have n=5 nodes. RHCS automatically specifies the vote value of the qdisk as n-1 = 4, so the expected votes value is n + (n-1) = 2n-1 = 9. In the case where only 1 node is alive, the effective vote value is 1 + (n-1) = n,&nbsp;which is larger than (2n-1)/2 = n-1 (integer division), so the cluster will continue to function. <BR /> <BR /> Note 2: In practice, “ <EM> ping -c3 -t2 </EM> ” wasn’t always reliable – sometimes the ping failed after a timeout of 19 seconds and the related node was rebooted unexpectedly. Maybe it’s due to the firewall rule of the gateway server; in this case, replace “ <EM> </EM> ” with “” as a workaround. <BR /> </LI> </UL> <BR /> <BR /> <LI> Create a GFS2 file system in the shared storage /dev/sdb and test IO fencing <BR /> <UL> <BR /> <LI> Create a 30GB LVM partition with fdisk <BR /> <EM> [root@my-vm1 ~]# <STRONG> fdisk /dev/sdb <BR /> </STRONG> </EM> <EM> Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel <BR /> </EM> <EM> Building a new DOS disklabel with disk identifier 0x73312800. <BR /> </EM> <EM> Changes will remain in memory only, until you decide to write them. <BR /> </EM> <EM> After that, of course, the previous content won't be recoverable. 
</EM> <EM> </EM> <BR /> <EM> WARNING: invalid flag 0x0000 of partition table 4 will be corrected by w(rite) </EM> <EM> </EM> <BR /> <EM> WARNING: DOS-compatible mode is deprecated. It's strongly recommended to </EM> <EM> switch off the mode (command 'c') and change display units to </EM> <EM> sectors (command 'u'). </EM> <BR /> <EM> Command (m for help): n </EM> <BR /> <EM> Command action <BR /> </EM> <EM> e&nbsp;&nbsp; extended <BR /> </EM> <EM> p&nbsp;&nbsp; primary partition (1-4) </EM> <BR /> <EM> p </EM> <BR /> <EM> Partition number (1-4): 1 <BR /> </EM> <EM> First cylinder (1-13054, default 1): <BR /> </EM> <EM> Using default value 1 <BR /> </EM> <EM> Last cylinder, +cylinders or +size{K,M,G} (1-13054, default 13054): +30G </EM> <BR /> <EM> Command (m for help): p </EM> <EM> </EM> <BR /> <EM> Disk /dev/sdb: 107.4 GB, 107374182400 bytes <BR /> </EM> <EM> 255 heads, 63 sectors/track, 13054 cylinders <BR /> </EM> <EM> Units = cylinders of 16065 * 512 = 8225280 bytes <BR /> </EM> <EM> Sector size (logical/physical): 512 bytes / 512 bytes <BR /> </EM> <EM> I/O size (minimum/optimal): 512 bytes / 512 bytes <BR /> </EM> <EM> Disk identifier: 0x73312800 </EM> <EM> </EM> <BR /> <EM> Device Boot&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Start&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; End&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Blocks&nbsp;&nbsp; Id&nbsp; System <BR /> </EM> <EM> /dev/sdb1&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3917&nbsp;&nbsp;&nbsp; 31463271&nbsp;&nbsp; 83&nbsp; Linux </EM> <EM> </EM> <BR /> <EM> Command (m for help): t </EM> <BR /> <EM> Selected partition 1 <BR /> </EM> <EM> Hex code (type L to list codes): 8e <BR /> </EM> <EM> Changed system type of partition 1 to 8e (Linux LVM) <BR /> </EM> <BR /> <EM> Command (m for help): w </EM> <BR /> <EM> The partition table has been altered! </EM> <BR /> <EM> Calling ioctl() to re-read partition table. <BR /> </EM> <EM> Syncing disks. 
</EM> <BR /> <EM> [root@my-vm1 ~]# </EM> <EM> </EM> <BR /> NOTE: the above fdisk command is run on node 1. On nodes 2 through 5, we need to run the “partprobe /dev/sdb” command to force the kernel to discover the new partition (another method: simply reboot nodes 2 through 5). </LI> <BR /> <LI> Create physical &amp; logical volumes, run mkfs.gfs2 and mount the file system. Run the following on node 1: <BR /> # pvcreate /dev/sdb1 <BR /> # vgcreate my-vg1 /dev/sdb1 <BR /> #&nbsp;lvcreate -L +20G -n my-store1 my-vg1 <BR /> # lvdisplay /dev/my-vg1/my-store1 <BR /> # <BR /> # mkfs.gfs2 -p lock_dlm -t <STRONG> my-cluster </STRONG> :storage -j5 /dev/mapper/my--vg1-my--store1 <BR /> (Note: here “my-cluster” is the cluster name we used in Step 4.) <BR /> <BR /> Run the following on all 5 nodes: <BR /> # mkdir /mydata <BR /> # echo '/dev/mapper/my--vg1-my--store1 /mydata&nbsp; gfs2 defaults 0 0' &gt;&gt; /etc/fstab <BR /> # mount /mydata <BR /> </LI> <BR /> <LI> Test read/write on the GFS2 partition <BR /> <UL> <BR /> <LI> Create or write a file /mydata/a.txt on one node, say, node 1 </LI> <BR /> <LI> On other nodes, say node 3, read /mydata/a.txt and we can immediately see what node 1 wrote </LI> <BR /> <LI> On node 3, append a line to the file; on node 1 and the other nodes, the change is immediately visible. 
</LI> <BR /> </UL> <BR /> </LI> <BR /> <LI> Test node failure and IO fencing <BR /> <BR /> First retrieve all the registered keys and the reservation information: <BR /> [root@my-vm1 mydata]# sg_persist -i -k -d /dev/sdb <BR /> Msft&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Virtual Disk&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1.0 <BR /> Peripheral device type: disk <BR /> PR generation=0x158, 5 registered reservation keys follow: <BR /> 0x63d20004 <BR /> 0x63d20001 <BR /> 0x63d20003 <BR /> 0x63d20005 <BR /> 0x63d20002 <BR /> <BR /> [root@my-vm1 mydata]# sg_persist -i -r -d /dev/sdb <BR /> Msft&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Virtual Disk&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1.0 <BR /> Peripheral device type: disk <BR /> PR generation=0x158, Reservation follows: <BR /> Key=0x63d20005 <BR /> scope: LU_SCOPE,&nbsp; type: Write Exclusive, registrants only <BR /> <BR /> <BR /> Then pause node 5 using Hyper-V Manager, so node 5 will be considered dead. <BR /> In a few seconds, node 1 prints the kernel messages: <BR /> dlm: closing connection to node5 <BR /> GFS2: fsid=my-cluster:storage.0: jid=4: Trying to acquire journal lock... <BR /> GFS2: fsid=my-cluster:storage.0: jid=4: Looking at journal... <BR /> GFS2: fsid=my-cluster:storage.0: jid=4: Acquiring the transaction lock... <BR /> GFS2: fsid=my-cluster:storage.0: jid=4: Replaying journal... 
<BR /> GFS2: fsid=my-cluster:storage.0: jid=4: Replayed 3 of 4 blocks <BR /> GFS2: fsid=my-cluster:storage.0: jid=4: Found 1 revoke tags <BR /> GFS2: fsid=my-cluster:storage.0: jid=4: Journal replayed in 1s <BR /> GFS2: fsid=my-cluster:storage.0: jid=4: Done <BR /> <BR /> <BR /> And nodes 2 through 4 print these messages: <BR /> dlm: closing connection to node5 <BR /> GFS2: fsid=my-cluster:storage.2: jid=4: Trying to acquire journal lock... <BR /> GFS2: fsid=my-cluster:storage.2: jid=4: Busy, retrying... <BR /> <BR /> Now on nodes 1 through 4, “clustat” shows node 5 is offline and “cman_tool status” shows the current “Total votes: 8”.&nbsp;&nbsp; And the sg_persist commands show that the current SCSI owner of /dev/sdb has changed from node 5 to node 1 and there are only 4 registered keys: <BR /> <BLOCKQUOTE> [root@my-vm4 ~]# sg_persist -i -k -d /dev/sdb <BR /> Msft&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Virtual Disk&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1.0 <BR /> Peripheral device type: disk <BR /> PR generation=0x158, 4 registered reservation keys follow: <BR /> 0x63d20002 <BR /> 0x63d20003 <BR /> 0x63d20001 <BR /> 0x63d20004 <BR /> [root@my-vm4 ~]# sg_persist -i -r -d /dev/sdb <BR /> Msft&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Virtual Disk&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;1.0 <BR /> Peripheral device type: disk <BR /> PR generation=0x158, Reservation follows: <BR /> Key=0x63d20001 <BR /> scope: LU_SCOPE,&nbsp; type: Write Exclusive, registrants only </BLOCKQUOTE> <BR /> In short, the dead node 5 properly went offline and was fenced, and node 1 fixed a file system issue (“Found 1 revoke tags”) by replaying node 5’s GFS2 journal, so there is no data inconsistency issue. 
<BR /> <BR /> Now let’s resume node 5, and we’ll find the cluster still doesn’t accept node 5 as an online cluster member until node 5 reboots and rejoins the cluster in a known-good state. <BR /> <BR /> Note: node 5 will be automatically rebooted by the qdisk daemon. </LI> <BR /> </UL> <BR /> </LI> <BR /> <BR /> <P> If we perform the above experiment by shutting down a node’s network (with “ifconfig eth0 down”), e.g., on node 3, we’ll get the same result: node 3’s access to /mydata will be rejected and eventually the qdisk daemon will reboot node 3 automatically. </P> <BR /> <BR /> <BR /> <STRONG> Wrap Up </STRONG> <BR /> <BR /> Wow!&nbsp; That’s a lot of steps, but the result is worth it.&nbsp; You now have a 5-node Linux OS cluster with a shared GFS2 file system that can be read and written from all nodes.&nbsp; The cluster uses a quorum disk to prevent split-brain issues.&nbsp; These steps to set up an RHCS cluster are the same as you would use to set up a cluster of physical servers running CentOS 6.7, but in the Hyper-V environment Linux runs in guest VMs, and the shared storage is created on a Shared VHDX instead of a real physical shared disk. <BR /> <BR /> In the last blog post, we’ll set up a web server on one of the CentOS 6.7 nodes and demonstrate various failover cases. 
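The verification commands used throughout this post (service status, clustat, cman_tool, the GFS2 mount) can be collected into one quick health check to run on any node. This is a sketch under the assumptions of this walkthrough (the GFS2 mount point /mydata); on a machine without the cluster packages installed, each check simply reports NOT running.

```shell
# Quick health check for the guest cluster built in this post.
# Run as root on any node; assumes the walkthrough's GFS2 mount /mydata.
for svc in cman clvmd rgmanager modclusterd; do
    if service "$svc" status >/dev/null 2>&1; then
        echo "$svc: running"
    else
        echo "$svc: NOT running"
    fi
done

# Membership/quorum summary (the same commands used manually above).
clustat 2>/dev/null
cman_tool status 2>/dev/null | grep -E 'votes|Quorum'

# Is the shared GFS2 file system mounted?
if mount | grep -q 'on /mydata type gfs2'; then
    echo "/mydata: gfs2 mounted"
else
    echo "/mydata: NOT mounted"
fi
```

Running this after every configuration step makes it much easier to spot which daemon failed to come up than scanning boot messages.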
<BR /> <BR /> <BR /> <BR /> ~&nbsp;Dexuan Cui </BODY></HTML> Thu, 21 Mar 2019 23:57:00 GMT scooley 2019-03-21T23:57:00Z Setting up Linux Operating System Clusters on Hyper-V (1 of 3) <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Feb 19, 2016 </STRONG> <BR /> Author:&nbsp;Dexuan Cui <BR /> <H2> <STRONG> Background </STRONG> </H2> <BR /> When Linux is running on physical hardware, multiple computers may be configured in a Linux operating system cluster to provide high availability and load balancing in case of a hardware failure.&nbsp; Different clustering packages are available for different Linux distros, but for Red Hat Enterprise Linux (RHEL) and CentOS, <A href="#" title="Red Hat Cluster Suite" target="_blank"> Red Hat Cluster Suite </A> is a popular choice to achieve these goals.&nbsp; A cluster consists of two or more nodes, where each node is an instance of RHEL or CentOS.&nbsp; Such a cluster usually requires some kind of shared storage, such as iSCSI or fibre channel, that is accessible from all of the nodes. <BR /> <BR /> What happens when Linux is running in a virtual machine guest on a hypervisor, such as you might be using in your on-premises datacenter?&nbsp; It may still make sense to use a Linux OS cluster for high availability and load balancing.&nbsp; But how can you create shared storage in such an environment so that it is accessible to all of the Linux guests that will participate in the cluster?&nbsp; This series of blog posts answers these questions. <BR /> <H2> <STRONG> Overview </STRONG> </H2> <BR /> This series of blog posts walks through setting up Microsoft’s Hyper-V to create shared storage that can be used by Linux clustering software.&nbsp; Then it walks through setting up Red Hat Cluster Suite in that environment to create a five-node Linux OS cluster.&nbsp; Finally, it demonstrates an example application running in the cluster environment, and how a failover works. 
<BR /> <BR /> The shared storage is created using Hyper-V’s <A href="#" target="_blank"> Shared VHDX </A> feature, which <A href="#" target="_blank"> allows the VM users to create a VHDX file, and share that file among the guest cluster nodes as if the shared VHDX file were a shared Serial Attached SCSI disk </A> . &nbsp;When the Shared VHDX feature is used, the .vhdx file itself still must reside in a location where it is accessible to all the nodes of a cluster. This means it must reside in a CSV (Cluster Shared Volume) partition or in an SMB 3.0 file share. For the example in this blog post series, we’ll use a host CSV partition, which requires a host cluster with an iSCSI target (server). <BR /> <BR /> Note: To understand how clustering works, we need to first understand 3 important concepts in clustering: <A href="#" target="_blank"> split-brain, quorum and fencing </A> : <BR /> <UL> <BR /> <LI> “Split-brain” is the idea that a cluster can have communication failures, which can cause it to split into subclusters </LI> <BR /> <LI> “Fencing” is the mechanism for isolating a node (for example, by powering it off or cutting its access to shared storage) so that the rest of the cluster can safely proceed in these cases </LI> <BR /> <LI> “Quorum” is the idea of determining which subcluster can fence the others and proceed to recover the cluster services </LI> <BR /> </UL> <BR /> These three concepts will be referenced in the remainder of this blog post series. <BR /> <BR /> The walk-through will be in three blog posts: <BR /> <OL> <BR /> <LI> Set up a Hyper-V host cluster and prepare for shared VHDX. Then set up five CentOS 6.7 VMs in the host cluster that use the shared VHDX.&nbsp; These five CentOS VMs will form the Linux OS cluster. </LI> <BR /> <LI> Set up a Linux OS cluster with the CentOS 6.7 VMs running RHCS and the <A href="#" target="_blank"> GFS2 file system </A> . </LI> <BR /> <LI> Set up a web server on one of the CentOS 6.7 nodes, and demonstrate various failover cases. Then wrap up with a summary and conclusions. </LI> <BR /> </OL> <BR /> Let’s get started! 
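<BR /> <BR /> Before diving into the setup, note that the quorum rule described above amounts to a simple majority-vote check. Here is an illustrative Python sketch; the vote counts below are assumptions for a five-node cluster with a quorum disk, not output from any RHCS tool:

```python
# Illustrative quorum check. A subcluster may fence the others and
# continue running only if it holds a strict majority of the expected votes.
def has_quorum(votes_held, expected_votes):
    return votes_held > expected_votes // 2

# Five nodes, one vote each: a 3-node subcluster keeps quorum,
# a 2-node subcluster loses it and gets fenced.
print(has_quorum(3, 5))  # True
print(has_quorum(2, 5))  # False

# Add a quorum disk worth 4 votes (expected votes become 9): a single node
# that can still write to the quorum disk out-votes the other four nodes.
print(has_quorum(1 + 4, 9))  # True
```

Giving the quorum disk almost as many votes as the nodes themselves is what lets one healthy node that can still reach the shared disk win out over a larger, disconnected subcluster.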
<BR /> <H2> <STRONG> Set up a host cluster and prepare for Shared VHDX </STRONG> </H2> <BR /> (Refer to <A href="#" target="_blank"> Deploy a Hyper-V Cluster </A> , <A href="#" target="_blank"> Deploy a Guest Cluster Using a Shared Virtual Hard Disk </A> ) <BR /> <BR /> Here we first set up an iSCSI target (server) on iscsi01 and then set up a 2-node Hyper-V host cluster on hyperv01 and hyperv02.&nbsp; Both nodes of the Hyper-V host cluster are running Windows Server 2012 R2 Hyper-V, with access to the iSCSI shared storage.&nbsp; The resulting configuration looks like this: <BR /> <BR /> <IMG src="" /> <BR /> <OL> <BR /> <LI> Set up an iSCSI target on iscsi01. (Refer to <A href="#" target="_blank"> Installing and Configuring target iSCSI server on Windows Server 2012 </A> .) We don’t have to buy real iSCSI hardware. Windows Server 2012 R2 can emulate an iSCSI target based on .vhdx files. </LI> <BR /> <BR /> So we install “File and Storage Service” on iscsi01 using Server Manager -&gt; Configure this local server -&gt; Add roles and features -&gt; Role-based or feature-based installation -&gt; … -&gt; Server Roles -&gt; File and Storage Service -&gt; iSCSI Target Server (add). <BR /> <BR /> Then in Server Manager -&gt; File and Storage Service -&gt; iSCSI, use “New iSCSI Virtual Disk…” to create 2 .vhdx files: iscsi-1.vhdx (200GB) and iscsi-2.vhdx (1GB). In “iSCSI TARGETS”, allow hyperv01 and hyperv02 as Initiators (iSCSI clients). <BR /> <BR /> <LI> On hyperv01 and hyperv02, use “iSCSI Initiator” to connect to the 2 LUNs of iscsi01. Now in “Disk Management” on both hosts, 2 new disks should appear: one is 200GB and the other is 1GB. </LI> <BR /> <BR /> In one host only, for example hyperv02, in “Disk Management”, we create and format an NTFS partition on the 200GB disk (remember to choose "Do not assign a drive letter or drive path"). 
<BR /> <BR /> <LI> On hyperv01 and hyperv02, install the “Failover Clustering” feature (which includes Failover Cluster Manager) </LI> <BR /> <BR /> Server Manager -&gt; Configure this local server -&gt; Add roles and features -&gt; Role-based or feature-based installation -&gt; … -&gt; Feature -&gt; Failover Clustering. <BR /> <BR /> <LI> On hyperv02, with Failover Cluster Manager -&gt; “Create Cluster”, we create a host cluster with the 2 host nodes. </LI> <BR /> <BR /> Using “Storage -&gt; Disks | Add Disk”, we add the 2 new disks: the 200GB one is used as “Cluster Shared Volume” and the 1GB one is used as Disk Witness in Quorum.&nbsp; To set the 1GB disk as the Quorum Disk, after “Storage -&gt; Disks | Add Disk”, right-click the host node, choose More Actions -&gt; Configure Cluster Quorum Settings… -&gt; Next -&gt; Select the quorum witness -&gt; Configure a disk witness -&gt; …. <BR /> <BR /> <LI> Now, on both hosts, a new special shared directory C:\ClusterStorage\Volume1\ appears. </LI> <BR /> </OL> <BR /> <H2> <STRONG> Set up CentOS 6.7 VMs in the host cluster with Shared VHDX <BR /> </STRONG> </H2> <BR /> <OL> <BR /> <LI> On hyperv02, with Failover Cluster Manager -&gt; “Roles | Virtual Machines | New Virtual Machine” we create five CentOS 6.7 VMs. For the purposes of this walk-through, these five VMs are given names “my-vm1”, “my-vm2”, etc., and these are the names you’ll see used in the rest of the walk-through. </LI> <BR /> <BR /> Make sure to choose “Store the virtual machine in a different location” and choose C:\ClusterStorage\Volume1\.&nbsp;&nbsp; In other words, my-vm1’s configuration file and .vhdx file are stored in C:\ClusterStorage\Volume1\my-vm1\Virtual Machines\ and C:\ClusterStorage\Volume1\my-vm1\Virtual Hard Disks\. 
<BR /> <BR /> You can spread out the five VMs across the two Hyper-V hosts however you like, as both hosts have equivalent access to C:\ClusterStorage\Volume1\.&nbsp; The schematic diagram above shows three VMs on hyperv01 and two VMs on hyperv02, but the specific layout does not affect the operation of the Linux OS cluster or the subsequent examples in this walk through. <BR /> <LI> Use Static IP addresses and update /etc/hosts in all 5 VMs <EM> </EM> </LI> <EM> <BR /> <BR /> Note: contact your network administrator to make sure the static IPs are reserved for this use. </EM> <BR /> <BR /> So on my-vm1 in /etc/sysconfig/network-scripts/ifcfg-eth0, we have <BR /> DEVICE=eth0 <BR /> TYPE=Ethernet <BR /> UUID=2b5e2f5a-3001-4e12-bf0c-d3d74b0b28e1 <BR /> <STRONG> ONBOOT=yes <BR /> </STRONG> <STRONG> NM_CONTROLLED=no <BR /> </STRONG> <STRONG> BOOTPROTO=static <BR /> </STRONG> <STRONG> IPADDR= <BR /> </STRONG> <STRONG> NETMASK= <BR /> </STRONG> <STRONG> GATEWAY= <BR /> </STRONG> DEFROUTE=yes <BR /> PEERDNS=yes <BR /> PEERROUTES=yes <BR /> IPV4_FAILURE_FATAL=yes <BR /> IPV6INIT=no <BR /> NAME="System eth0" <BR /> And in /etc/hosts, we have <BR /> localhost localhost.localdomain localhost4 localhost4.localdomain4 <BR /> ::1&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; localhost localhost.localdomain localhost6 localhost6.localdomain6 <BR />;&nbsp;&nbsp;&nbsp;&nbsp; my-vm1 <BR />;&nbsp;&nbsp;&nbsp;&nbsp; my-vm2 <BR />;&nbsp;&nbsp;&nbsp;&nbsp; my-vm3 <BR />;&nbsp;&nbsp;&nbsp;&nbsp; my-vm4 <BR />;&nbsp;&nbsp;&nbsp;&nbsp; my-vm5 <BR /> <BR /> <LI> On hyperv02, in my-vm1’s “Settings | SCSI Controller”, add a 100GB Hard Drive by using the “New Virtual Hard Disk Wizard”. Remember to store the .vhdx file in the shared host storage, e.g., <STRONG> C:\ClusterStorage\Volume1\ </STRONG> 100GB-shared-vhdx.vhdx and remember to enable the “ <STRONG> Advanced Features | Enable virtual hard disk sharing </STRONG> ”. Next we add the .vhdx file to the other 4 VMs with disk sharing enabled too. 
In all the 5 VMs, the disk will show as /dev/sdb.&nbsp; Later, we’ll create a clustering file system (GFS2) in it. </LI> <BR /> <LI> Similarly, we add another shared disk of 1GB (C:\ClusterStorage\Volume1\quorum_disk.vhdx) with the Shared VHDX feature to all the 5 VMs. The small disk will show as /dev/sdc in the VMs and later we’ll use it as a Quorum Disk in RHCS. </LI> <BR /> </OL> <BR /> <BR /> <H2> <STRONG> Wrap Up </STRONG> </H2> <BR /> This completes the first phase of setting up Linux OS clusters.&nbsp; The Hyper-V hosts are running and configured, and we have five CentOS VMs running on those Hyper-V hosts.&nbsp; We have a Hyper-V Cluster Shared Volume (CSV) that is located on an iSCSI target and contains the virtual hard disks for each of the five VMs. <BR /> <BR /> The next blog post will describe how to actually set up the Linux OS clusters using the Red Hat Cluster Suite. <BR /> <BR /> <BR /> <BR /> ~&nbsp;Dexuan Cui </BODY></HTML> Thu, 21 Mar 2019 23:55:45 GMT scooley 2019-03-21T23:55:45Z New Hyper-V Survey <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Feb 16, 2016 </STRONG> <BR /> We want to hear from you regarding shielded VMs and troubleshooting! <BR /> <BR /> When you have about 5 to 10 minutes, please take <A href="#" title="Shielded VM Survey" target="_blank"> this short survey </A> . The survey will close on February 23rd, 2016; please submit your answers by then. It is recommended to take the survey on a desktop browser so the questions show up properly. <BR /> <BR /> Thank you and have a great day. <BR /> <BR /> Survey URL: <A href="#" title="Shielded VM Survey" target="_blank"> </A> <BR /> <BR /> <BR /> <BR /> Thank you for your participation! <BR /> <BR /> Lars Iwer </BODY></HTML> Thu, 21 Mar 2019 23:55:30 GMT scooley 2019-03-21T23:55:30Z Discrete Device Assignment -- GPUs <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Nov 23, 2015 </STRONG> <BR /> <P> This is the third post in a four-part series. 
&nbsp;My previous two blog posts talked about Discrete Device Assignment ( <A href="" title="Discrete Device Assignment -- Description and background" target="_blank"> link </A> ) and the machines and devices necessary ( <A href="" title="Discrete Device Assignment -- Machines and devices" target="_blank"> link </A> ) to make it work in Windows Server 2016 TP4. This post goes into more detail, focusing on GPUs. </P> <BR /> <P> There are those of you out there who want to get the most out of Photoshop, or CATIA, or some other thing that just needs a graphics processor, or GPU. If that’s you, and if you have GPUs in your machine that aren’t needed by the Windows management OS, then you can dismount them and pass them through to a guest VM. </P> <BR /> <P> GPUs, though, are complicated beasts. People want them to run as fast as they possibly can, and to pump a lot more data through the computer’s memory than almost any other part of the computer. To manage this, GPUs run at the hairy edge of what PCI Express buses can deliver, and the device drivers for the GPUs often tune the GPU and sometimes even the underlying machine, attempting to ensure that you get a reasonable experience. </P> <BR /> <P> The catch is that, when you pass a GPU through to a VM, the environment for the GPU changes a little bit. For one thing, the driver can’t see the rest of the machine, to respond to its configuration or to tune things up. Second, access to memory works a little differently when you turn on an I/O MMU, changing timings and such. So the GPU will tend to work if the machine’s BIOS has already set up the GPU optimally, and this limits the machines that are likely to work well with GPUs. Basically, these are servers which were built for hosting GPUs. 
They’ll be the sorts of things that the salesman wants to push on you when you use words like “desktop virtualization” and “rendering.” When I look at a server, I can tell whether it was designed for GPU work instantly, because it has lots of long (x16) PCI Express slots, really big power supplies and fans that make a spooky howling sound. </P> <BR /> <P> We’re working with the GPU vendors to see if they want to support specific GPUs, and they may decide to do that. It’s really their call, and they’re unlikely to make a support statement on more than the few GPUs that are sold into the server market. If they do, they’ll supply driver packages which convert them from being considered “use at your own risk” within Hyper-V to the supported category. When those driver packages are installed, the error and warning messages that appear when you try to dismount the GPU will disappear. </P> <BR /> <P> So, if you’re still reading and you want to play around with GPUs in your VMs, you need to know a few other things. First, GPUs can have a lot of memory. And by default, we don’t reserve enough space in our virtual machines for that memory. (We reserve it for RAM that you might add through Dynamic Memory instead, which is the right choice for most users.) You can find out how much memory space your GPU uses by looking at it in Device Manager, or through scripts by looking at the WMI Win32_PnPAllocatedResource class. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> The screen shot above is from the machine I’m using to type this. You can see two memory ranges listed, with beginning and end values expressed in hexadecimal. Doing the conversion to more straightforward numbers, the first range (the video memory, mostly) is 256MB and the second one (video setup and control registers) is 128KB. So any VM you wanted to use this GPU with would need at least 257MB of free space within it. 
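</P> <BR /> <P> If you prefer to script that arithmetic, converting a hexadecimal range into a size is straightforward. This is an illustrative Python sketch; the ranges below are made-up placeholders, not the values from the screen shot: </P>

```python
# Device Manager reports memory ranges with inclusive endpoints,
# so the size of a range is end - start + 1.
def mmio_size(start, end):
    return end - start + 1

# Hypothetical GPU ranges (placeholders, not real hardware values):
video_memory = mmio_size(0xD0000000, 0xDFFFFFFF)
registers = mmio_size(0xFE000000, 0xFE01FFFF)

print(video_memory // (1024 * 1024), "MB")  # 256 MB
print(registers // 1024, "KB")              # 128 KB
```

<P> Summing the ranges (and rounding up a little) gives the minimum free memory-mapped I/O space the VM needs for the device. 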
</P> <BR /> <P> In Hyper-V within Server 2016 TP4, there are two types of VMs: Generation 1 and Generation 2. Generation 1 is intended to run older 32-bit operating systems and 64-bit operating systems which depend on the VM having a structure very like a PC. Generation 2 is intended for 64-bit operating systems which don’t depend on a PC architecture. </P> <BR /> <P> A Generation 1 VM, because it is intended to run 32-bit code, attempts to reserve as much as possible in the VM for RAM in the 32-bit address space. This leaves very little 32-bit space available for GPUs. There is, however, by default, 512MB of space available that 64-bit OS code can use. </P> <BR /> <P> A Generation 2 VM, because it is not constrained by 32-bit code, has about 2GB of space that could have any GPU placed in it. (Some GPUs require 32-bit space and some don’t, and it’s difficult to tell the difference without just trying it.) </P> <BR /> <P> Either type of VM, however, can be reconfigured so that there’s more space in it for GPUs. If you want to reserve more space for a GPU that needs 32-bit space, you can use PowerShell: </P> <BR /> <P> Set-VM &lt;vmname&gt; -LowMemoryMappedIoSpace &lt;size, up to 3000MB&gt; </P> <BR /> <P> Similarly, if you want to reserve memory for GPUs above 32-bit space: </P> <BR /> <P> Set-VM &lt;vmname&gt; -HighMemoryMappedIoSpace &lt;size, up to 33000MB&gt; </P> <BR /> <P> Note that, if your GPU supports it, you can have a lot more space above 32 bits. </P> <BR /> <P> Lastly, GPUs tend to work a lot faster if the processor can run in a mode where bits in video memory can be held in the processor’s cache for a while before they are written to memory, waiting for other writes to the same memory. This is called “write-combining.” In general, this isn’t enabled in Hyper-V VMs. 
If you want your GPU to work, you’ll probably need to enable it: </P> <BR /> <P> Set-VM &lt;vmname&gt; -GuestControlledCacheTypes $true </P> <BR /> <P> None of the settings above can be applied while the VM is running. </P> <BR /> <P> Happy experimenting! </P> <BR /> <P> </P> <BR /> <P> -- Jake Oshins </P> <BR /> <P> </P> <BR /> <P> </P> </BODY></HTML> Thu, 21 Mar 2019 23:55:23 GMT scooley 2019-03-21T23:55:23Z Discrete Device Assignment -- Guests and Linux <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Nov 24, 2015 </STRONG> <BR /> <P> In my previous three posts, I outlined a new feature in Windows Server 2016 TP4 called Discrete Device Assignment. This post talks about support for Linux guest VMs, and it’s more of a description of my personal journey than a straight feature description. If it were just a description, I could do that in one line: We’re contributing code to Linux to make this work, and it will eventually hit a distro near you. </P> <P> Microsoft has cared a lot about supporting Linux in a Hyper-V VM for a while now. We have an entire team of people dedicated to making that work better. Microsoft Azure, which uses Hyper-V, hosts many different kinds of Linux, and they’re constantly tuning to make that even better. What’s new for me is that I built the Linux front-end code for PCI Express pass-through, rather than asking a separate team to do it. I also built PCI Express pass-through for Windows (along with a couple of other people) for Windows Server 2012, which showed up as SR-IOV for networking, and Discrete Device Assignment for Windows Server 2016 TP4, which is built on the same underpinnings. </P> <P> When I first started out to work on PCIe pass-through, I realized that the changes to Hyper-V would be really extensive. Fundamentally, we’re talking about distributing ownership of the devices inside your computer among multiple operating systems, some of which are more trusted than others. 
I was also trying to make sure that giving a device to an untrusted guest OS wouldn’t result in that OS taking the device hostage, making it impossible to do anything related to plug-and-play in the rest of the machine. Imagine if I told you that you couldn’t hot-plug another NVMe drive into your system because your storage utility VM wasn’t playing along, or, worse yet, because your SQL server with SR-IOV networking was on-line and not in a state to be interrupted. </P> <P> I actually spent a few months thinking about the problem (back in 2008 and ‘09) and how to solve it. The possible interactions between various OSes running on the same machine were combinatorially scary. So I went forth with some guiding principles, two of which were: allow as few interactions as possible while still making it all work, and keep the attack surface from untrusted VMs to an absolute minimum. The solution involved replacing the PCI driver in the guest OS with one that batched up lots of things into a few messages, passed through Hyper-V and on to the PnP Manager in the management OS, which manages the hardware as a whole. </P> <P> When I went to try to enable Linux, however, I discovered that my two principles had caused me to come up with a protocol that was perfectly tailored to what Windows needs as a guest OS. Anything that would have made it easier to accommodate Linux got left out as “extra attack surface” or “another potential failure path that needs to be handled,” even though Linux does all the same things as Windows, but just a little differently. So the first step in enabling PCIe pass-through for Linux guests was actually to add some new infrastructure to Hyper-V. I still tried to minimize attack surface, and I think that I’ve added only that which was strictly necessary. </P> <P> One of the challenges is that Linux device drivers are structured very differently from Windows device drivers. 
In Windows, a driver never interacts directly with the underlying driver for the bus itself. Windows drivers send I/O Request Packets (or IRPs) down to the bus driver. If they need to call a function in the bus driver in a very light-weight fashion, they send an IRP to the bus driver asking for a pointer to that function. This makes it possible, and even relatively easy, to replace the bus driver entirely, which is what we did for Windows. We replaced PCI.sys with vPCI.sys. vPCI.sys knows that it’s running in a virtual machine and that it doesn’t control the actual underlying hardware. <BR /> Linux has a lot of flexibility around PCI, of course. It runs on a vastly wider gamut of computers than Windows does. But instead of allowing the underlying bus driver to be replaced, Linux accommodates these things by allowing a device driver to supply low-level functions to the PCI code which do things like scan the bus and set up IRQs. These low-level functions required very different underlying support from Hyper-V. </P> <P> As part of this, I’ve learned how to participate in open source software development, sending iteration upon iteration of patches to a huge body of people who then tell me that I don’t know anything about software development, with helpful pointers to their blogs explaining the one true way. This process is actually ongoing. Below is a link to the latest series of changes that I’ve sent in. Given that there hasn’t been any comment on it in a couple of weeks, it seems fairly likely that this, or something quite like this, will eventually make it into “upstream” kernels. </P> <P> <A href="#" title="Linux Kernel" target="_blank"> </A> </P> <P> Once it’s in the upstream kernels, the various distributions (Ubuntu, SUSE, RHEL, etc.) will eventually pick up the code as they move on to newer kernels. They can each individually choose to include the code or not, at their discretion, though most of the distros offer Hyper-V support by default. 
We may actually be able to work with them to back-port this to their long-term support products, though that’s far from certain at this point. </P> <P> So if you’re comfortable patching, compiling and installing Linux kernels, and you want to play around with this, pull down the linux-next tree and apply the patch series. We’d love to know what you come up with. </P> <P> </P> <P> -- Jake Oshins </P> <P> </P> <BR /> </BODY></HTML> Thu, 21 Mar 2019 23:55:07 GMT scooley 2019-03-21T23:55:07Z Discrete Device Assignment -- Description and background <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Nov 19, 2015 </STRONG> <BR /> <P> With Windows Server 2016, we're introducing a new feature, called Discrete Device Assignment, in Hyper-V. &nbsp;Users can now take some of the PCI Express devices in their systems and pass them through directly to a guest VM. &nbsp;This is actually much of the same technology that we've used for SR-IOV networking in the past. &nbsp;And because of that, instead of giving you a lot of background on how this all works, I'll just point to an excellent series of posts that John Howard did a few years ago about SR-IOV when used for networking. 
</P> <BR /> <P> <A href="" title="Everything you wanted to know about SR-IOV in Hyper-V part 1" target="_blank"> Everything you wanted to know about SR-IOV in Hyper-V part 1 </A> </P> <BR /> <P> <A href="" title="Everything you wanted to know about SR-IOV in Hyper-V part 2" target="_blank"> Everything you wanted to know about SR-IOV in Hyper-V part&nbsp;2 </A> </P> <BR /> <P> <A href="" title="Everything you wanted to know about SR-IOV in Hyper-V part 3" target="_blank"> Everything you wanted to know about SR-IOV in Hyper-V part&nbsp;3 </A> </P> <BR /> <P> <A href="" title="Everything you wanted to know about SR-IOV in Hyper-V part 4" target="_blank"> Everything you wanted to know about SR-IOV in Hyper-V part&nbsp;4 </A> </P> <BR /> <P> </P> <BR /> <P> Now I've only linked the first four posts that John Howard did back in 2012 because those were the ones that discussed the internals of PCI Express and distributing actual devices among multiple operating systems. &nbsp;The rest of his series is mostly about networking, and while I do recommend it, that's not what we're talking about here. </P> <BR /> <P> At this point, I have to say that full device pass-through is actually a lot like disk pass-through -- it's half a solution to a lot of different problems. &nbsp;We actually built full PCI Express device pass-through in 2009 and 2010 in order to test our hypervisor's handling of an I/O MMU and interrupt delivery before we asked the networking community to write new device drivers targeted at Hyper-V, enabling SR-IOV. </P> <BR /> <P> We decided at the time not to put PCIe device pass-through into any shipping product because we would have had to make a lot of compromises in the virtual machines that used them -- disabling Live Migration, VM backup, saving and restoring of VMs, checkpoints, etc. &nbsp;Some of those compromises aren't necessary any more. 
&nbsp;Production checkpoints now work entirely without needing to save the VM's RAM or virtual processor state, for instance. </P> <BR /> <P> But even more than these issues, we decided to forgo PCIe device pass-through because it was difficult to prove that the system would be secure and stable once a guest VM had control of the device. &nbsp;Security conferences are full of papers describing attacks on hypervisors, and allowing an attacker's code control of a PCI Express "endpoint" just makes that a lot easier, mostly in the area of Denial of Service attacks. &nbsp;If you can trigger an error on the PCI Express bus, most physical machines will either crash or just go through a hard reset. </P> <BR /> <P> Things have changed some since 2012, though, and we're getting many requests to allow full-device pass-through. &nbsp;First, it's far more common now for there to be VMs running on a system which constitute part of the hoster or IT admin's infrastructure. &nbsp;These "utility VMs" run anything from network firewalls to storage appliances to connection brokers. &nbsp;And since they're part of the hosting fabric, they are often more trusted than VMs running outward-facing workloads supplied by users or tenants. </P> <BR /> <P> On top of that, Non-Volatile Memory Express (NVMe) is taking off. &nbsp;SSDs attached via NVMe can be many times faster than SSDs attached through SATA or SAS. &nbsp;And until there's a full specification on how to do SR-IOV with NVMe, the only choice if you want full performance in a storage appliance VM is to pass the entire device through. </P> <BR /> <P> Windows Server 2016 will allow NVMe devices to be assigned to guest VMs. &nbsp;We still recommend that these VMs only be those that are under control of the same administration team that manages the host and the hypervisor. </P> <BR /> <P> GPUs (graphics processors) are, similarly, becoming a must-have in virtual machines. 
&nbsp;And while what most people really want is to slice up their GPU into lots of slivers and let VMs share them, you can use Discrete Device Assignment to pass them through to a VM. &nbsp;GPUs are complicated enough, though, that a full support statement must come from the GPU vendor. &nbsp;More on GPUs in a future blog post. </P> <BR /> <P> Other types of devices may work when passed through to a guest VM. &nbsp;We've tried a few USB 3 controllers, RAID/SAS controllers, and sundry other things. &nbsp;Many will work, but none will be candidates for official support from Microsoft, at least not at first, and you won't be able to put them into use without overriding warning messages. &nbsp;Consider these devices to be in the "experimental" category. &nbsp;More on which devices can work in a future blog post. </P> <BR /> <H2> <BR /> Switching it all On </H2> <BR /> <P> Managing the underlying hardware of your machine is complicated, and can quickly get you in trouble. Furthermore, we’re really trying to address the need to pass NVMe devices through to storage appliances, which are likely to be configured by people who are IT pros and want to use scripts. So all of this is only available through PowerShell, with nothing in the Hyper-V Manager. What follows is a PowerShell script that finds all the NVMe controllers in the system, unloads the default drivers from them, dismounts them from the management OS and makes them available in a pool for guest VMs. 
</P> <BR /> <P> # get all devices which are NVMe controllers </P> <BR /> <P> $pnpdevs = Get-PnpDevice -PresentOnly | Where-Object {$_.Class -eq "SCSIAdapter"} | Where-Object {$_.Service -eq "stornvme"} </P> <BR /> <P> </P> <BR /> <P> # cycle through them disabling and dismounting them </P> <BR /> <P> foreach ($pnpdev in $pnpdevs) { </P> <BR /> <P> Disable-PnpDevice -InstanceId $pnpdev.InstanceId -Confirm:$false </P> <BR /> <P> $locationpath = ($pnpdev | Get-PnpDeviceProperty DEVPKEY_Device_LocationPaths).Data[0] </P> <BR /> <P> Dismount-VMHostAssignableDevice -LocationPath $locationpath </P> <BR /> <P> $locationpath </P> <BR /> <P> } </P> <BR /> <P> </P> <BR /> <P> Depending on whether you’ve already put your NVMe controllers into use, you might actually have to reboot between disabling them and dismounting them. But after you have both disabled (removed the drivers) and dismounted (taken them away from the management OS) you should be able to find them all in a pool. You can, of course, reverse this process with the Mount-VMHostAssignableDevice and Enable-PnPDevice cmdlets. </P> <BR /> <P> Here’s the output of running the script above on my test machine where I have, alas, only one NVMe controller, followed by asking for the list of devices in the pool: </P> <BR /> <P> [jakeo-t620]: PS E:\test&gt; .\dismount-nvme.ps1 <BR /> PCIROOT(40)#PCI(0200)#PCI(0000) <BR /> [jakeo-t620]: PS E:\test&gt; Get-VMHostAssignableDevice </P> <BR /> <P> <BR /> InstanceID : PCIP\VEN_144D&amp;DEV_A820&amp;SUBSYS_1F951028&amp;REV_03\4&amp;368722DD&amp;0&amp;0010 <BR /> LocationPath : PCIROOT(40)#PCI(0200)#PCI(0000) <BR /> CimSession : CimSession: . <BR /> ComputerName : JAKEO-T620 <BR /> IsDeleted : False </P> <BR /> <P> Now that we have the NVMe controllers in the pool of dismounted PCI Express devices, we can add them to a VM. 
There are basically three options here: using the InstanceID above, using the LocationPath, or just saying “give me any device from the pool.” You can add more than one to a VM. And you can add or remove them at any time, even when the VM is running. I want to add this NVMe controller to a VM called “StorageServer”: </P> <BR /> <P> [jakeo-t620]: PS E:\test&gt; Add-VMAssignableDevice -LocationPath "PCIROOT(40)#PCI(0200)#PCI(0000)" -VMName StorageServer </P> <BR /> <P> There are similar Remove-VMAssignableDevice and Get-VMAssignableDevice cmdlets. </P> <BR /> <P> If you don’t like scripts, the InstanceID can be found as “Device Instance Path” under the Details tab in Device Manager. The Location Path is also under Details. You can disable the device there and then use PowerShell to dismount it. <BR /> Finally, it wouldn’t be a blog post without a screen shot. Here’s Device Manager from that VM, rearranged with “View by Connection” simply because it proves that I’m talking about a VM. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> In a future post, I’ll talk about how to figure out whether your machine and your devices support all this. </P> <BR /> <P> </P> <BR /> <P> Read the next post in this series: <A href="" title="Discrete Device Assignment -- Machines and devices" target="_blank"> Discrete Device Assignment -- Machines and devices </A> </P> <BR /> <P> -- Jake Oshins </P> </BODY></HTML> Thu, 21 Mar 2019 23:54:57 GMT scooley 2019-03-21T23:54:57Z Discrete Device Assignment -- Machines and devices <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Nov 20, 2015 </STRONG> <BR /> <P> In <A href="" title="Discrete Device Assignment -- Description and background" target="_blank"> my last post </A> , I talked about a new feature of Windows Server 2016, Discrete Device Assignment. This post will discuss the machines and devices necessary for making this feature work. 
</P> <BR /> <P> First, we're not supporting Discrete Device Assignment in Hyper-V in Windows 10. Only Server versions of Windows support this. This isn't some wanton play for more of your hard-earned cash but rather just a side effect of being comfortable supporting server-class machines. They tend to work and be quite stable. </P> <BR /> <H2> Firmware, including BIOS and UEFI, Must Cooperate </H2> <BR /> <P> Second, in order to pass a device through to a guest VM, we have to change the way parts of the PCI Express fabric in the machine are configured. To do this, we need the firmware (BIOS, UEFI, whatever) in the machine to agree that it won't change the very same parts of PCI Express that Windows is changing while the machine is running. (Yes, the BIOS does actually do things while the machine is running.) So, in accordance with the PCI firmware specification, Windows asks the firmware for control of several different parts of PCIe. </P> <BR /> <P> This protocol was created when PCIe was new, largely because Windows XP, Windows Server 2003 and many other operating systems were published before the PCI Express specification was done. BIOSes in these machines needed to manage PCIe. Windows, or any other operating system, essentially asks the BIOS whether it is prepared to give up this control. The BIOS responds with a yes or a no. If your BIOS responds with a no, then we don't support Discrete Device Assignment because we may destabilize the machine when we enable it. </P> <BR /> <P> Another requirement of the underlying machine is that it exposes "Access Control Services" in the PCI Express Root Complex. Pretty much every machine sold today has this hardware built into it. It allows a hypervisor (or other security-conscious software) to force traffic from a PCI Express device up through the PCIe links (slots, motherboard wires, etc.) all the way to the I/O MMU in the system. 
This shuts down the attacks on the system that a VM might make by attempting, for instance, to read from the VRAM by using a device to do the reading on its behalf. And while nearly all the silicon in use by machine makers supports this, not all the BIOSes expose this to the running OS/hypervisor. </P> <BR /> <P> Lastly, when major computer server vendors build machines, sometimes the PCI Express devices in those machines are tightly integrated with the firmware. This might be, for instance, because the device is a RAID controller, and it needs some RAM for caching. The firmware will, in this case, take some of the installed RAM in the machine and sequester it during the boot process, so that it's only available to the RAID controller. In another example, the device, perhaps a NIC, might update the firmware with a health report periodically while the machine runs, by writing to memory similarly sequestered by the BIOS. When this happens, the device cannot be used for Discrete Device Assignment: exposing it to a guest VM would present a considerable security risk, and because the view of memory from within the guest VM is entirely different from the view of the actual system hardware, the device's attempts to read and write this private memory would fail or corrupt other memory. </P> <BR /> <P> There are a lot of other low level requirements, many of them documented by John Howard in his SR-IOV series, but most of the machines in the world will conform with them, so I won't go into more detail now. </P> <BR /> <H2> Beyond the BIOS, individual devices may or may not be candidates for passing through to a VM </H2> <BR /> <P> Some devices in your computer don’t mark, or tag, their traffic in a way that individually identifies the device, making it impossible for the I/O MMU to redirect this traffic to the memory owned by a specific VM. These devices, mostly older PCI-style logic, can’t be assigned to a guest VM.
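</P> <BR /> <P> If you want to poke at these per-device properties without opening Device Manager, the PnP cmdlets in recent versions of Windows can surface them. The snippet below is a sketch, not part of the official tooling: it simply lists present PCI devices along with their instance IDs and their durable location paths. </P>

```powershell
# List present PCI devices with their instance IDs and location paths.
# Get-PnpDevice and Get-PnpDeviceProperty are available on Windows 8.1 /
# Windows Server 2012 R2 and later.
Get-PnpDevice -PresentOnly |
    Where-Object { $_.InstanceId -like "PCI\*" } |
    ForEach-Object {
        $prop = Get-PnpDeviceProperty -InstanceId $_.InstanceId `
                    -KeyName "DEVPKEY_Device_LocationPaths"
        [PSCustomObject]@{
            Name         = $_.FriendlyName
            InstanceId   = $_.InstanceId
            LocationPath = ($prop.Data | Where-Object { $_ -like "PCIROOT*" } |
                            Select-Object -First 1)
        }
    }
```

<P>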
</P> <BR /> <P> All of the devices inside your computer have some means to control them. These controls are built out of “registers” which are like locations in computer memory, but where each location has a special purpose, and some sort of action that happens when the software reads or writes that location. For instance, many devices have “doorbell” registers which tell them that they should check work lists in memory and do the work described by what is queued up on the work lists. Registers can be in locations in the computer’s memory space, in which case they’re commonly called “memory-mapped I/O” and the processor’s page tables allow the hypervisor to map the device’s registers into any VM. But registers can also be in a special memory-like space called “I/O space.” When registers are in I/O space, they can’t be redirected by the hypervisor, at least not in a friction-free way that allows the device and the VM to run at full speed. As an example, USB 1 controllers use I/O ports. USB 3 controllers use memory-mapped I/O space. So USB 3 controllers might meet the criteria for passing through to a guest VM in Hyper-V. </P> <BR /> <P> PCI-style devices have two possible ways to deliver interrupts to the CPUs in a computer. They can connect a pin to a ground signal, somewhat like unplugging a light bulb, or they can write to a special location in the processor directly. The first mechanism was invented many years ago, and works well for sharing scarce “IRQs” in old PCs. Many devices can be connected to the same metaphorical light bulb, each with its own stretch of extension cord. If any device in the chain wants the attention of the software running on the CPU, it unplugs its extension cord. The CPU immediately starts to run software from each device driver asking “was it you who unplugged the cord?” The problem comes in that, when many devices share the same signal, it’s difficult to let one VM manage one of the devices in that chain.
And since this mechanism of delivering interrupts is both deprecated and not particularly common for the devices people use in servers, we decided to only support the second method of delivering interrupts, which can be called Message-Signaled Interrupts, MSI or MSI-X, all of which are equivalent for this discussion. </P> <BR /> <P> Some of the properties discussed are easily seen in the Device Manager. Here’s the keyboard controller in the machine that I’m typing this with. It fails all the tests described above. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> And here’s a USB 3 controller. The Message-Signaled Interrupts show up as IRQs with negative numbers. This device happens to pass all the tests. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <H2> <BR /> Survey the Machine </H2> <BR /> <P> To help you all sort this out, I've written a script which surveys the machine and the devices within it, reporting on which of them meet the criteria for passing them through to a VM. </P> <BR /> <P> <A href="#" title="survey-dda.ps1" target="_blank"> </A> </P> <BR /> <P> You can download it by running the following command in PowerShell: </P> <BR /> <P> wget&nbsp;<A href="#" target="_blank"></A> </P> <BR /> <P> In my HP server machine, this returns a long list, including entries like: </P> <BR /> <P> Smart Array P420i Controller <BR /> BIOS requires that this device remain attached to BIOS-owned memory. Not assignable. </P> <BR /> <P> Intel(R) C600/X79 series chipset USB2 Enhanced Host Controller #2 - 1D2D <BR /> Old-style PCI device, switch port, etc. Not assignable. </P> <BR /> <P> NVIDIA GRID K520 <BR /> Express Endpoint -- more secure. <BR /> And its interrupts are message-based, assignment can work. <BR /> PCIROOT(0)#PCI(0300)#PCI(0000)#PCI(0800)#PCI(0000) </P> <BR /> <P> The first two entries are for devices which can’t be passed through to a VM. 
The first is in side-band communication with the HP management firmware, and the HP BIOS stipulates that communication channel must not be broken, which would happen if you passed the device through to a VM. The second is for a USB 2 controller that’s embedded in the Intel chipset. It’s actually old-style PCI logic, and thus the I/O MMU can’t tell its traffic from any other device’s, so it can’t handle being given addresses relative to a guest VM. </P> <BR /> <P> The last entry, for the NVIDIA GRID K520, is one where the device and the machine meet the criteria for passing it through to a guest VM. The last line shows the “location path” for this GPU, which is in terms of “first root PCI bus; bridge at device 3, function 0; bridge at device 0, function 0; bridge at device 8, function 0; device 0 function 0.” This string isn’t going to change, even if you plug in another device somewhere else in the machine. The classic way of describing a PCI device, by bus number, device number and function number can change when you add or remove devices. Similarly, there are things that can affect the PnP instance path to the device that Windows stores in the registry. So I prefer using this location path, since it’s durable in the face of changes. </P> <BR /> <P> There’s another sort of entry that might come up, one where the report says that the device’s traffic may be redirected to another device. Here’s an example: </P> <BR /> <P> Intel(R) Gigabit ET Dual Port Server Adapter <BR /> Traffic from this device may be redirected to other devices in the system. Not assignable. </P> <BR /> <P> Intel(R) Gigabit ET Dual Port Server Adapter #2 <BR /> Traffic from this device may be redirected to other devices in the system. Not assignable. </P> <BR /> <P> What this is saying is that memory reads or writes from the device targeted at the VM’s memory (DMA) might end up being routed to other devices within the computer.
This might be because there’s a PCI Express switch in the system, and there are multiple devices connected to the switch, and the switch doesn’t have the necessary mechanism to prevent DMA from one device from being targeted at the other devices. The PCI Express specifications optionally allow all of a device’s traffic to be forced all the way up to the I/O MMU in the system. This is called “Access Control Services” and Hyper-V looks for that and enables it to be sure that your VM can’t affect others within the same machine. </P> <BR /> <P> These messages also might show up because the device is “multi-function” where that means that a single chip has more than one thing within it that looks like a PCI device. In the example above, I have an Intel two-port gigabit Ethernet adapter. You could theoretically assign one of the Ethernet ports to a VM, and then that VM could take control of the other port by writing commands to it. Again, the PCI Express specifications allow a device designer to put in controls to stop this, via Access Control Services (ACS). </P> <BR /> <P> The funny thing is that the NIC above has neither the ACS control structure in it nor the ability to target one port from the other port. Unfortunately, the only way that I know this is that I happened to have discussed it with the man at Intel who led the team that designed that NIC. There’s no way to tell in software that one NIC port can’t target the other NIC port. The official way to make that distinction is to look for ACS in the device. To deal with this, we allow you to override the ACS check when dismounting a PCI Express device. (Dismount-VMHostAssignableDevice -force) </P> <BR /> <P> Forcing a dismount will also be necessary for any device that is not supported. Below is the output of an attempt to dismount a USB 3 controller without forcing it. The text is telling you that we have no way of knowing whether the operation is secure. 
And without knowing all the various vendor-specific extensions in each USB 3 controller, we can’t know that. </P> <BR /> <P> PS C:\&gt; Dismount-VMHostAssignableDevice -LocationPath "PCIROOT(40)#PCI(0100)#PCI(0000)" </P> <BR /> <P> Dismount-VMHostAssignableDevice : The operation failed. <BR /> The manufacturer of this device has not supplied any directives for securing this device while exposing it to a <BR /> virtual machine. The device should only be exposed to trusted virtual machines. <BR /> This device is not supported when passed through to a virtual machine. <BR /> The operation failed. <BR /> The manufacturer of this device has not supplied any directives for securing this device while exposing it to a <BR /> virtual machine. </P> <BR /> <P> Use the “-force” options only with extreme caution, and only when you have some deep knowledge of the device involved. You have been warned. </P> <BR /> <P> </P> <BR /> <P> Read the next post in this series: <A href="" title="Discrete Device Assignment -- GPUs" target="_blank"> Discrete Device Assignment -- GPUs </A> </P> <BR /> <P> -- Jake Oshins </P> </BODY></HTML> Thu, 21 Mar 2019 23:54:40 GMT scooley 2019-03-21T23:54:40Z PSA: VMs with saved state from builds 10560-10567 will not restore on 10568 and above. <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Oct 30, 2015 </STRONG> <BR /> <P> Windows Insiders might hit a bug if they try to restore a VM with recent saved state. This is especially true for VMs created on Windows 8.1 and Windows 10 hosts. We recommend that you take action before upgrading past 10567. </P> <P> <STRONG> Problem </STRONG> </P> <P> Windows Insiders will not be able to restore VMs on build 10568 and above if the following conditions are met: </P> <OL> <BR /> <LI> The VM is configuration version 5 or 6.2. All other configuration versions are OK. Versions 5 and 6.2 are typically created on Windows 8.1/10 hosts. 
(You can check the configuration version by looking at the “Configuration Version” column in Hyper-V Manager.) </LI> <BR /> <LI> The VM has saved state which was generated from builds 10560 through 10567 </LI> <BR /> </OL> <P> <STRONG> Workaround </STRONG> </P> <P> If you haven’t upgraded past 10567, we recommend that you shut down your VMs before upgrading. You will then be able to start your VMs as usual after the upgrade. &nbsp;If the upgrade is already complete, you must delete the saved state before starting the VM. </P> <P> <STRONG> Note: Microsoft Emulator for Windows 10 Mobile </STRONG> </P> <P> If you use the Emulator for Windows 10 Mobile and are affected by this bug, we recommend that you delete the affected VMs and let Visual Studio recreate them on next launch. </P> <P> </P> <P> Blog brought to you by </P> <P> Theo Thompson </P> <P> </P> <BR /> </BODY></HTML> Thu, 21 Mar 2019 23:54:16 GMT scooley 2019-03-21T23:54:16Z Windows Insider Preview: Nested Virtualization <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Oct 13, 2015 </STRONG> <BR /> Earlier in the year, we announced that we will be building nested virtualization so that people could run <A href="" target="_blank"> Hyper-V Containers </A> in Hyper-V virtual machines. <BR /> <BR /> In preparation for the first public preview of Hyper-V Containers, we are releasing a preview of nested virtualization. This feature allows you to run Hyper-V in a virtual machine (note that this is Hyper-V on Hyper-V only… other hypervisors will fail). <BR /> <BR /> Although Hyper-V Containers have not been released yet, for now you can try out this feature with Hyper-V virtual machines. <BR /> <H3> Build 10565 -- It is a very early preview </H3> <BR /> Yesterday, we announced the release of <A href="#" target="_blank"> build 10565 </A> to Windows Insiders on the Fast ring. &nbsp;This build contains an early preview of nested virtualization. 
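<BR /> <BR /> To set expectations before the details: at its heart, the preview is a single per-VM processor setting, plus a few configuration requirements. The sketch below shows the core commands; it is not a substitute for the enablement script described later in this post, and the VM name is an example.

```powershell
# Core of what nested virtualization enablement does (sketch only; the
# enablement script also checks dynamic memory and other settings).
# The VM must be off when this runs.
Set-VMProcessor -VMName "myVM" -ExposeVirtualizationExtensions $true

# MAC address spoofing is needed for networking in the nested guests.
Get-VMNetworkAdapter -VMName "myVM" | Set-VMNetworkAdapter -MacAddressSpoofing On
```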
<BR /> <BR /> When I say it is an “early” preview, I mean it – there are plenty of known issues, and there is functionality which we still need to build. We wanted to share this feature with Insiders as soon as possible though, even if that meant things are still rough around the edges. <BR /> <BR /> This post will give a quick overview of what nested virtualization is, and briefly cover how it works. The end of this post will explain how to enable it, so you can try it out. <STRONG> Please read the “known issues” section before trying this feature. </STRONG> <BR /> <BR /> <STRONG> Documentation available here: <A href="#" target="_blank"> </A> </STRONG> <BR /> <H2> What is nested virtualization? </H2> <BR /> In essence, this feature virtualizes certain hardware features that are required to run a hypervisor in a virtual machine. <BR /> <BR /> Hyper-V relies on hardware virtualization support (e.g. Intel VT-x and AMD-V) to run virtual machines. Typically, once Hyper-V is installed, the hypervisor hides this capability from guest virtual machines, preventing guest virtual machines from installing Hyper-V (and many other hypervisors, for that matter). <BR /> <BR /> Nested virtualization exposes hardware virtualization support to guest virtual machines. This allows you to install Hyper-V in a guest virtual machine, and create more virtual machines “within” that underlying virtual machine. <BR /> <BR /> In the image below, you can see a host machine running a virtual machine, which in turn is running its own guest virtual machine. This is made possible by nested virtualization. Behold, three levels of Cortana! <BR /> <BR /> <IMG src="" /> <BR /> <H2> Under the hood </H2> <BR /> Consider the diagram below, which shows the “normal” (i.e. non-nested) case. The Hyper-V hypervisor takes full control of virtualization extensions (orange arrow), and does not expose them to the guest OS. <BR /> <BR /> <IMG src="" /> <BR /> <BR /> Contrast this with the nested diagram below.
In this case, Hyper-V has been configured to expose virtualization extensions to its guest VM. A guest VM can take advantage of this, and install its own hypervisor. It can then run its own guest VMs. <BR /> <BR /> <IMG src="" /> <BR /> <H2> Known issues: important! </H2> <BR /> Like I said earlier – this is still just a “preview” of this feature. Obviously, this feature should not be used in production environments. &nbsp;Below is a list of known issues: <BR /> <UL> <BR /> <LI> <STRONG> Both hypervisors need to be the latest versions of Hyper-V. Other hypervisors will not work. Windows Server 2012R2, or even builds prior to 10565 will not work. </STRONG> </LI> <BR /> <LI> Once nested virtualization is enabled in a VM, the following features are no longer compatible with that VM. These actions will either fail, or cause the virtual machine not to start if it is hosting other virtual machines: </LI> <BR /> </UL> <BR /> <UL> <BR /> <LI> Dynamic memory must be OFF; having it enabled will prevent the VM from booting. </LI> <BR /> <LI> Runtime memory resize will fail. </LI> <BR /> <LI> Applying checkpoints to a running VM will fail. </LI> <BR /> <LI> Live migration will fail --&nbsp;in other words, a VM which hosts other VMs cannot be live migrated. </LI> <BR /> <LI> Save/restore will fail. <B> Note: </B> these features still work in the “innermost” guest VM. The restrictions only apply to the first layer VM. </LI> <BR /> </UL> <BR /> <UL> <BR /> <LI> Once nested virtualization is enabled in a VM, MAC spoofing must be enabled for networking to work in its guests. </LI> <BR /> <LI> Hosts with Device Guard enabled cannot expose virtualization extensions to guests. You must first disable Device Guard in order to preview nested virtualization. </LI> <BR /> <LI> Hosts with Virtualization Based Security (VBS) enabled cannot expose virtualization extensions to guests. You must first disable VBS in order to preview nested virtualization. </LI> <BR /> <LI> This feature is currently Intel-only.
Intel VT-x is required. </LI> <BR /> <LI> Beware: nested virtualization requires a good amount of memory. I managed to run a VM in a VM with 4 GB of host RAM, but things were tight. </LI> <BR /> </UL> <BR /> <H2> How to enable nested virtualization </H2> <BR /> <STRONG> Step 1: Create a VM </STRONG> <BR /> <BR /> <STRONG> Step 2: Run the <A href="#" target="_blank"> enablement script </A> </STRONG> <BR /> <BR /> Given the configuration requirements (e.g. dynamic memory must be off), we’ve tried to make things easier by providing <A href="#" target="_blank"> a PowerShell script </A> . <BR /> <BR /> This script will check your configuration, change anything which is incorrect (with permission), and enable nested virtualization for a VM. Note that the VM must be off. <BR /> <P> Invoke-WebRequest <A href="#" target="_blank"> </A> -OutFile ~/Enable-NestedVm.ps1 <BR /> ~/Enable-NestedVm.ps1 -VmName &lt;VmName&gt; </P> <BR /> <STRONG> Step 3: Install Hyper-V in the guest </STRONG> <BR /> <BR /> From here, you can install Hyper-V in the guest VM. <BR /> <P> Invoke-Command -VMName "myVM" -ScriptBlock { Enable-WindowsOptionalFeature -FeatureName Microsoft-Hyper-V -Online; Restart-Computer } </P> <BR /> <STRONG> Step 4: Create nested VMs </STRONG> <BR /> <H2> Give us feedback! </H2> <BR /> If you discover any issues, or have any suggestions, please consider submitting feedback with the Windows Feedback app, through the <A href="#" target="_blank"> virtualization forums </A> , or through <A href="#" target="_blank"> GitHub </A> . <BR /> We are also very interested to hear how people are using nested virtualization. Please tell us about your scenario by dropping us a line at <A href="" target="_blank"> </A> . <BR /> <BR /> Go build VMs in VMs! <BR /> <BR /> Cheers, Theo Thompson <BR /> <BR /> <STRONG> Updated: <A href="#" target="_blank"> </A> is where you go to find the most up-to-date documentation.
</STRONG> </BODY></HTML> Thu, 21 Mar 2019 23:54:09 GMT scooley 2019-03-21T23:54:09Z Linux Integration Services 4.0 Update <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Aug 19, 2015 </STRONG> <BR /> <P> We are pleased to announce an update to Linux Integration Services (LIS) version 4.0: version 4.0.11. This updated release expands support to Red Hat Enterprise Linux 6.7, CentOS 6.7, and Oracle Linux Red Hat Compatible Kernel 6.7. In addition to some bug fixes, this release continues improvements to networking performance on Hyper-V and Microsoft Azure. </P> <H2> Download Location </H2> <P> The LIS binaries are available either as a tar file that can be uploaded to your virtual machine or as an ISO that can be mounted. The files are available from the Microsoft Download Center here: <A href="#" target="_blank"> </A> </P> <P> A ReadMe file is provided with information on the installation procedure, feature set, and known issues. </P> <P> See also the TechNet article “ <A href="#" target="_blank"> Linux and FreeBSD Virtual Machines on Hyper-V </A> ” for a comparison of LIS features and best practices for use. <BR /> <BR /> LIS code is released under the GNU General Public License version 2 (GPLv2) and is freely available at the <A href="#" target="_blank"> LIS GitHub project </A> . LIS code is also regularly submitted to the upstream Linux kernel and documented on the <A href="#" target="_blank"> Linux kernel mailing list (lkml) </A> .
</P> <H2> Supported Virtualization Server Operating Systems </H2> <P> Linux Integration Services (LIS) 4.0 allows Linux guests to use Hyper-V virtualization on the following host operating systems: </P> <UL> <BR /> <LI> Windows Server 2008 R2 (applicable editions) </LI> <BR /> <LI> Microsoft Hyper-V Server 2008 R2 </LI> <BR /> <LI> Windows 8 Pro, 8.1 Pro, 10 and 10 Pro </LI> <BR /> <LI> Windows Server 2012 and 2012 R2 </LI> <BR /> <LI> Microsoft Hyper-V Server 2012 and 2012 R2 </LI> <BR /> <LI> Windows Server Technical Preview </LI> <BR /> <LI> Microsoft Hyper-V Server Technical Preview </LI> <BR /> <LI> Microsoft Azure </LI> <BR /> </UL> <H2> Applicable Linux Distributions </H2> <P> Microsoft provides Linux Integration Services for a broad range of Linux distros as documented in the <A href="#" target="_blank"> Linux and FreeBSD Virtual Machines on Hyper-V topic </A> on TechNet. Per that documentation, many Linux distributions and versions have Linux Integration Services built-in and do not require installation of this separate LIS package from Microsoft. This LIS package is available for a subset of supported distributions in order to provide the best performance and fullest use of Hyper-V features. It can be installed in the listed distribution versions that do not already have LIS built in, and can be installed as an upgrade in listed distribution versions that already have LIS built-in. <BR /> <BR /> This update adds support for Red Hat Enterprise Linux 6.7, CentOS 6.7, and Oracle Linux 6.7.
LIS 4.0 is applicable to the following guest operating systems: </P> <UL> <BR /> <LI> Red Hat Enterprise Linux 5.5-5.11 32-bit, 32-bit PAE, and 64-bit </LI> <BR /> <LI> Red Hat Enterprise Linux 6.0-6.7 32-bit and 64-bit </LI> <BR /> <LI> Red Hat Enterprise Linux 7.0-7.1 64-bit </LI> <BR /> <LI> CentOS 5.5-5.11 32-bit, 32-bit PAE, and 64-bit </LI> <BR /> <LI> CentOS 6.0-6.7 32-bit and 64-bit </LI> <BR /> <LI> CentOS 7.0-7.1 64-bit </LI> <BR /> <LI> Oracle Linux 6.4-6.7 with Red Hat Compatible Kernel 32-bit and 64-bit </LI> <BR /> <LI> Oracle Linux 7.0-7.1 with Red Hat Compatible Kernel 64-bit </LI> <BR /> </UL> <H2> Linux Integration Services 4.0 Feature Set </H2> <P> When installed on a virtual machine that is running a supported Linux distribution, LIS 4.0 for Hyper-V provides the additional functionality over LIS 3.5 listed below. </P> <UL> <BR /> <LI> Installable on Red Hat Enterprise Linux 6.6, 6.7, and 7.1 </LI> <BR /> <LI> Installable on CentOS 6.6, 6.7, and 7.1 </LI> <BR /> <LI> Installable on Oracle Linux 6.6, 6.7, and 7.1 when running the Red Hat Compatible Kernel </LI> <BR /> <LI> Networking and storage performance improvements </LI> <BR /> <LI> Dynamic Memory – Hot Add </LI> <BR /> </UL> <P> More details on individual features can be found at <A href="#" target="_blank"> </A> </P> <H2> Customer Feedback </H2> <P> Customers can provide feedback through the <A href="#" target="_blank"> Linux and FreeBSD Virtual Machines on Hyper-V forum </A> and via <A href="#" target="_blank"> Microsoft Windows Server Uservoice </A> . We look forward to hearing about your experiences with LIS.
--jrp (Josh Poulson) </P> <P> </P> <BR /> </BODY></HTML> Thu, 21 Mar 2019 23:53:30 GMT scooley 2019-03-21T23:53:30Z Virtual Machine Storage Resiliency in Windows Server 2016 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Sep 08, 2015 </STRONG> <BR /> <P> We live in an imperfect world, where things will go wrong.&nbsp; When they do, you need a private cloud which is designed to be highly available and resilient to failures in the environment. In today’s cloud scale environments, transient storage failures have become more common than hard failures. A transient storage failure means that a Virtual Machine (VM) cannot access its VHDX file and that read or write requests to the disk are failing. In Windows Server 2016 there are new Hyper-V capabilities which enable a VM to detect when storage access fails and to be seamlessly resilient. In short, by moving your private cloud to Windows Server 2016 your VMs will achieve better SLAs! </P> <BR /> <H2> What happens when a VM experiences a transient storage failure </H2> <BR /> <H3> What happened in Windows Server 2012 R2 </H3> <BR /> <P> The behavior in previous releases is that when a Virtual Machine (VM) experienced a failure in reading or writing to its virtual hard disk (VHD/X), then either the VM or applications running inside the VM would crash. This is obviously very disruptive to the workload (to say the least)! </P> <BR /> <H3> What happens in Windows Server 2016 </H3> <BR /> <P> In Windows Server 2016, new capabilities have been introduced which detect storage failures and take action to mitigate the impact. When a VM experiences a failure in reading or writing to its VHD/VHDX, the VM will be placed into a critical pause state. The VM is frozen in time: everything inside the VM freezes and no additional I/Os are issued. The VM will remain in this state until storage becomes responsive again.
The VM then moves back to the running state when it can start reading and writing to its VHD/X. Since the session state of the VM is retained, this means the VM resumes exactly where it left off. For short transient failures, this will commonly be completely transparent to clients. </P> <BR /> <P> Remember that when a VM is in a critical pause state, the VM is frozen and not accessible to clients. So there is a window where clients will not be able to access the VM. But the fact that the VM session state is retained makes the storage outage much less impactful. </P> <BR /> <P> A VM will not stay in a critical pause state indefinitely; if storage access cannot be regained within the configurable timeout, the VM is then powered off and the next start will be a cold boot. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> </P> <BR /> <H2> Configuration Options </H2> <BR /> <P> This new functionality is an integrated part of Hyper-V and you do not need to do anything to take advantage of it.&nbsp; You can configure virtual machine storage resiliency options that define the behavior of virtual machines during transient storage failures. </P> <BR /> <OL> <BR /> <LI> <STRONG> Enable / Disable </STRONG> – If you wish to revert to the behavior of previous releases, the storage resiliency enhancements can be disabled on a per-VM basis.&nbsp; It is enabled by default. <BR /> PowerShell syntax: <BR /> <BR /> <P> Set-VM -AutomaticCriticalErrorAction &lt;None | Pause&gt; </P> <BR /> </LI> <BR /> <LI> <STRONG> Timeout </STRONG> – The amount of time a VM remains in critical pause state before powering off can be configured on a per-VM basis.&nbsp; The default value is 30 minutes.
<BR /> PowerShell syntax: <BR /> <P> Set-VM -AutomaticCriticalErrorActionTimeout &lt;value in minutes&gt; </P> <BR /> </LI> <BR /> </OL> <BR /> <H2> Shared VHDX </H2> <BR /> <P> Shared VHDX is usually used where multiple VMs share a storage space and form a guest cluster to provide high availability for applications running inside the VMs. For a guest cluster there is resiliency at the application layer inside the VM, so the preferred behavior is to have failover occur to another VM.&nbsp; The new storage resiliency feature is aware of this and optimized to provide the best behavior for a Shared VHDX.&nbsp; When a VM experiences a failure in reading or writing to its Shared VHDX, the connection to the Shared VHDX is removed from the VM. This allows clustering within the VM to detect the storage failure and take recovery action.&nbsp; Unlike a normal VM, a VM with a Shared VHDX does not go into critical pause state; the guest cluster moves its workload to another VM which is also part of the cluster and has access to the Shared VHDX. The VM which has lost the connection to its Shared VHDX will poll it every 10 minutes to check if storage access has been restored. As soon as it regains access, the Shared VHDX is reattached to the VM.
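</P> <BR /> <P> Putting the two settings together, here is a short sketch of inspecting and tuning a VM’s storage resiliency behavior. The VM name is an example, and the ten-minute timeout is just an illustration of overriding the thirty-minute default. </P>

```powershell
# Inspect the current storage resiliency settings for a VM (name is an example).
Get-VM -Name "StorageServer" |
    Select-Object Name, AutomaticCriticalErrorAction, AutomaticCriticalErrorActionTimeout

# Keep the default pause behavior, but power the VM off after 10 minutes
# in critical pause instead of the default 30.
Set-VM -Name "StorageServer" -AutomaticCriticalErrorAction Pause `
    -AutomaticCriticalErrorActionTimeout 10
```

<P>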
</P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> </P> <BR /> <H2> When can I use storage resiliency? </H2> <BR /> <P> VM storage resiliency is supported with: </P> <BR /> <UL> <BR /> <LI> Gen1 and Gen2 VMs </LI> <BR /> <LI> VHD, VHDX and Shared VHDX </LI> <BR /> <LI> Local block storage (SAN) <BR /> <UL> <BR /> <LI> FC, iSCSI, FCoE, SAS with Cluster Shared Volumes </LI> <BR /> </UL> <BR /> </LI> <BR /> <LI> File Based storage (NAS) <BR /> <UL> <BR /> <LI> File shares using SMB (Server Message Block protocol) with Continuous availability such as a Scale-out File Server (SoFS) </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> <P> <BR /> Storage resiliency is not supported with: </P> <BR /> <UL> <BR /> <LI> VHD / VHDX on a local hard disk without Cluster Shared Volumes </LI> <BR /> <LI> Standard file servers </LI> <BR /> <LI> USB storage </LI> <BR /> <LI> Hyper-V pass-through disks </LI> <BR /> </UL> <BR /> <P> </P> <BR /> <P> In short, Windows Server 2016 will&nbsp;handle transient storage failures gracefully. </P> <BR /> <P> Thank you, </P> <BR /> <P> Tushita </P> <BR /> <P> </P> </BODY></HTML> Thu, 21 Mar 2019 23:53:23 GMT scooley 2019-03-21T23:53:23Z Integration components available for virtual machines not connected to Windows Update <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jul 24, 2015 </STRONG> <BR /> In Technical Preview, Hyper-V began delivering integration components through Windows Update ( <A href="" title="see blog for more information" target="_blank"> see blog for more information </A> ). <BR /> <BR /> Pushing updates through Windows Update was a good first step -- it is the easiest way to keep integration components up to date.&nbsp;&nbsp;There are situations, however, where the virtual machine isn't connected to Windows Update and sometimes it is more convenient to patch an offline (turned off) virtual machine.
<BR /> <BR /> Now, in addition to receiving integration component updates automatically through Windows Update,&nbsp;you can also update integration components on&nbsp;virtual machines that aren't running or aren't connected to Windows Update using the cab files available in <A href="#" target="_blank"> KB3071740 </A> .&nbsp; (Last time I looked, the download links weren't working.&nbsp; The downloads are <A href="#" target="_blank"> here </A> ). <BR /> <BR /> <STRONG> ** </STRONG> <B> Note: </B> Everything in this blog post applies to Server 2016 Technical Preview or Windows 10 and associated preview builds or later.&nbsp; The instructions here should work for virtual machines on Server 2012R2/Windows 8.1&nbsp;but that is not tested or supported! ** <BR /> <BR /> <BR /> <BR /> <BR /> <H2> Updating integration components on a&nbsp;virtual machine that is not turned on </H2> <BR /> Here's a script that updates integration components.&nbsp; It assumes you are updating a virtual machine from the Hyper-V host.&nbsp; If the virtual machine is running, it will need to be stopped (the script below does this for you) and you'll need to locate its VHD (virtual hard drive). <BR /> <BR /> ** For step by step instructions with explanations read <A href="" target="_blank"> this post </A> but use the cabs from the KB article. <BR /> <BR /> Run the following in PowerShell as administrator.&nbsp; You will need the path to the cab file that matches the operating system running in your virtual machine and the path to your VHD. <BR /> <BR /> <BR /> <BR /> <H2> Update integration components inside the virtual machine without using Windows Update </H2> <BR /> These instructions assume you are running on the virtual machine you want to update.
<BR /> <BR /> First, find the cab file that matches the operating system running in your virtual machine and download it.&nbsp; Run the following in PowerShell as administrator.&nbsp; Remember to set the right path to&nbsp;the downloaded cab file. <BR /> <BR /> <BR /> $integrationServicesCabPath="C:\Users\sarah\Downloads\" <BR /> <BR /> #Install the patch <BR /> Add-WindowsPackage -Online -PackagePath $integrationServicesCabPath <BR /> <BR /> <BR /> <BR /> Now your virtual machines can all&nbsp;have the latest integration components! <BR /> <BR /> Cheers, <BR /> Sarah </BODY></HTML> Thu, 21 Mar 2019 23:52:57 GMT scooley 2019-03-21T23:52:57Z ExpressRoute + ASR = Efficient DR solution <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jul 20, 2014 </STRONG> <BR /> <P> Microsoft recently announced the availability of Azure <STRONG> ExpressRoute </STRONG> which enables our customers to create a private connection between their on-premises infrastructure and Microsoft Azure. This ensures that data to Azure can travel over a path other than the public internet, with the connection to Azure established through an Exchange Provider or through a Network Service Provider. With ExpressRoute customers can connect in a private peering setup to both Azure Public Cloud Services as well as Private Virtual Networks. </P> <P> This opened up a new set of scenarios which were otherwise gated on the network infrastructure (or lack of it) – key among them were continuous replication scenarios such as Azure Site Recovery. At scale, when replicating tens to hundreds of VMs to Azure using <B> Azure Site Recovery (ASR) </B> , you can quickly send TBs of data over ExpressRoute. </P> <P> You can find tons of documentation on ExpressRoute and its capabilities @ <A href="#" target="_blank"> </A> and TechEd talks in Channel9 @ <A href="#" target="_blank"> </A> . ExpressRoute truly extends your datacenter to Azure and organizations can view Azure as “yet-another-branch-office”.
</P> <P> The ASR team was truly excited by this announcement, which couldn’t have come at a better time. Microsoft’s internal IT (MSIT) and Network Infrastructure Services (NIS), being the very first adopters of ExpressRoute, have rolled out ExpressRoute as a Network Service, which enables a true hybrid cloud experience for all internal customers. </P> <P> My partners in MSIT ( <STRONG> Arvind Rao &amp; Vik Chhabra </STRONG> ) helped me get an ExpressRoute-connected setup, and I got a chance to play around with ASR from one of the Microsoft buildings at Puget Sound. The setup, which was loaned to me by MSIT, looks similar to this (except that MSIT owns both the infrastructure and the network): </P> <P> <IMG src="" /> </P> <P> At a high level, ASR replicates the VM (the initial copy and subsequent changes to the VM) to your storage account directly. The “replication” traffic is sent over the green line to “Azure public resources” such as Azure blob store. Once the VMs are failed over, we create IaaS VMs in Azure using the replicated data. Any traffic back to the corporate network (CorpNet) or from CorpNet to the IaaS VM goes over the red line in the above picture. </P> <P> The results were <STRONG> fabulous </STRONG> to say the least! High throughput was observed during initial and delta replication. Once the VMs were failed over, traffic flowed to our internal CorpNet and high throughput was observed there as well. The key takeaway: once ER was set up, ASR just worked. No extra configuration was required from ASR’s perspective. </P> <P> How high is “high throughput”? In a setup where I had 3 replicating VMs, the picture below captures the network throughput while initial replication was in progress: </P> <P> <IMG src="" /> </P> <P> </P> <P> A whopping 1.5Gbps network upload speed to Azure – <STRONG> go ExpressRoute, go!
</STRONG> </P> <P> To get the above network throughput, a registry key needs to be set on *each* of the Hyper-V servers. </P> <P> <IMG src="" /> </P> <P> The “UploadThreadsPerVM” value controls the number of threads used when replicating each VM. In an “overprovisioned” network, this registry value needs to be changed from its default. We support a maximum of 32 threads per replicating VM. </P> <P> In summary, ASR combined with ExpressRoute provides a powerful, compelling, efficient disaster recovery scenario to Microsoft Azure. ExpressRoute removes traditional blockers in networking when sending massive amounts of data to Azure – disaster recovery being one such scenario. And ASR removes traditional blockers of providing an easy, cost-effective DR solution to a public cloud infrastructure such as Microsoft Azure. </P> <P> </P> <P> You can find more details on ASR @ <A href="#" target="_blank"> </A> . The documentation explaining the end to end workflows is available @ <A href="#" target="_blank"> </A> . And if you have questions when using the product, post them @ <A href="#" target="_blank"> </A> or in this blog. </P> <P> You can also share your feedback on your favorite features/gaps @ <A href="#" target="_blank"> </A> . As always, we love to hear from you!
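</P> <P> If you prefer PowerShell over regedit, the value can be set like this. This is a sketch only: the registry path below is the one documented for ASR's Hyper-V replication settings, but verify it against the screenshot above for your deployment, and the thread count is an example value. </P>

```powershell
# Hedged sketch: raise ASR's per-VM upload thread count on a Hyper-V host.
# Registry path is an assumption based on ASR documentation -- verify it
# against the screenshot/docs before applying; repeat on every Hyper-V server.
$key = "HKLM:\SOFTWARE\Microsoft\Windows Azure Backup\Replication"
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name "UploadThreadsPerVM" -Value 16 -Type DWord  # maximum supported: 32
Get-ItemProperty -Path $key -Name "UploadThreadsPerVM"                        # confirm the new value
```

<P>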
</P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> </BODY></HTML> Thu, 21 Mar 2019 23:52:51 GMT Virtualization-Team 2019-03-21T23:52:51Z Migrate Windows Server 2012R2 Virtualized Workloads to Microsoft Azure with Azure Site Recovery <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Aug 13, 2014 </STRONG> <BR /> <P> <A href="#" target="_blank"> Azure Site Recovery </A> not only enables a low-cost, high-capability, <A href="#" target="_blank"> CAPEX-saving, and OPEX-optimizing </A> Disaster Recovery strategy for your IT Infrastructure, it can also help you quickly and easily spin-off additional development and testing environments or migrate on-premise virtual machines to Microsoft Azure. For customers who want a unified solution that reduces downtime of their production workloads during migration and that also enables verification of their applications in Azure without any impact to production, Azure Site Recovery with its in-built features makes migrating to Azure simple, reliable, and quick. The flowchart below describes a typical migration flow using Azure Site Recovery. For more information, please visit the following detailed blog on <A href="#" target="_blank"> how to migrate on-premise virtualized workloads to Azure using Azure Site Recovery </A> . </P> <P> </P> <P> <IMG src="" /> </P> </BODY></HTML> Thu, 21 Mar 2019 23:52:21 GMT Virtualization-Team 2019-03-21T23:52:21Z Share your feedback! <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Aug 25, 2014 </STRONG> <BR /> Are you a System Administrator or Security Analyst? Would you like to influence the future of securing your virtualized infrastructure? If this sounds interesting to you, Microsoft Windows Server and cloud management Program Managers would like to hear from you. 
<P> <STRONG> Please complete this </STRONG> <A href="#" target="_blank"> <STRONG> short survey </STRONG> </A> that will help identify your specific areas of interest and expertise to make sure our discussions fit your interests. </P> </BODY></HTML> Thu, 21 Mar 2019 23:52:09 GMT Virtualization-Team 2019-03-21T23:52:09Z Azure Site Recovery: Data Security and Privacy <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Sep 03, 2014 </STRONG> <BR /> Microsoft is committed to ensuring the Privacy and Security of our customers’ data whenever it crosses on-premises boundaries into Microsoft Azure. Azure Site Recovery, a cloud-based Disaster Recovery Service that enables protection and orchestrated recovery of your virtualized workloads across on-premises private clouds or directly into Azure, has been designed from the ground up to align with <A href="#" target="_blank"> Microsoft’s privacy and security commitment </A> . <P> <I> Specifically, our promise is to ensure that: </I> </P> <UL> <LI> We encrypt customer data while in transit and at rest </LI> <LI> We use best-in-class industry cryptography to protect all channels, including Perfect Forward Secrecy and 2048-bit key lengths </LI> </UL> <P> To read more about how the Azure Site Recovery architecture delivers on these key goals, check out our new blog post, <A href="#" target="_blank"> Azure Site Recovery: Our Commitment to Keeping Your Data Secure </A> on the Microsoft Azure blog. </P> </BODY></HTML> Thu, 21 Mar 2019 23:52:04 GMT Trinadh Kotturu 2019-03-21T23:52:04Z Networking 101 for Disaster Recovery to Microsoft Azure using Site Recovery <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Sep 09, 2014 </STRONG> <BR /> <P> Getting the networking requirements&nbsp;right is a critical piece in ensuring Disaster Recovery readiness of your business-critical workloads.
&nbsp;When an administrator evaluates the disaster recovery capabilities that she needs for&nbsp;her application(s), she needs to think through and develop a robust networking infrastructure that ensures that the application is accessible to end users once it has been failed over and that the application downtime is minimized – RTO optimization is of key importance. </P> <BR /> <P> <IMG src="" /> <BR /> </P> <BR /> <P> Head over to the Microsoft Azure blog to read our new blog that shows you how you can accomplish <A href="#" target="_blank"> Networking Infrastructure Setup for Microsoft Azure as a Disaster Recovery Site </A> . Using the example of a multi-tier application we show you how to setup the required networking infrastructure, establish network connectivity between on-premises and Azure and then conclude the post with more details on Test Failover and Planned Failover. </P> <BR /> <P> </P> </BODY></HTML> Thu, 21 Mar 2019 23:51:58 GMT Virtualization-Team 2019-03-21T23:51:58Z Announcing the GA of Disaster Recovery to Azure using Azure Site Recovery <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Oct 02, 2014 </STRONG> <BR /> I am excited to <A href="#" target="_blank"> announce the GA of the&nbsp;Disaster Recovery to Azure using Azure Site Recovery </A> .&nbsp;In addition to enabling <I> replication to </I> and <I> recovery in </I> Microsoft Azure, ASR enables automated protection of VMs, remote health monitoring, no-impact recovery plan testing, and single click orchestrated recovery - all backed by an enterprise-grade SLA. <BR /> <BR /> The DR to Azure functionality in ASR builds on top of System Center Virtual Machine Manager, Windows Server Hyper-V Replica, and Microsoft Azure to ensure that our customers can leverage existing IT investments while still helping them optimize precious CAPEX and OPEX spent in building and managing secondary datacenter sites. 
<BR /> <BR /> <P> The GA release also brings significant additions to the already expansive list of ASR’s DR to Azure features: </P> <UL> <LI> <STRONG> <SUP> NEW </SUP> </STRONG> <STRONG> <SUP> </SUP> </STRONG> <STRONG> ASR Recovery Plans and Azure Automation </STRONG> integrate to offer robust and simplified one-click orchestration of your DR plans <BR /> </LI> <LI> <STRONG> <SUP> NEW </SUP> </STRONG> <STRONG> <SUP> </SUP> </STRONG> <B> Track Initial Replication Progress </B> as virtual machine data gets replicated to a customer-owned and managed geo-redundant Azure Storage account. This new feature is also available when configuring DR between on-premises private clouds across enterprise sites </LI> <LI> <STRONG> <SUP> NEW </SUP> </STRONG> <STRONG> <SUP> </SUP> </STRONG> <B> Simplified Setup and Registration </B> streamlines the DR setup by removing the complexity of generating certificates and integrity keys needed to register your on-premises System Center Virtual Machine Manager server with your Site Recovery vault </LI> </UL> </BODY></HTML> Thu, 21 Mar 2019 23:51:43 GMT Virtualization-Team 2019-03-21T23:51:43Z Hyper-V integration components are available through Windows Update <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Nov 11, 2014 </STRONG> <BR /> <P> Starting in Windows Technical Preview, Hyper-V integration components will be delivered directly to virtual machines using Windows Update. </P> <P> Integration components (also called integration services) are the set of synthetic drivers which allow a virtual machine to communicate with the host operating system.&nbsp; They control services ranging from time sync to guest file copy.&nbsp; We've been talking to customers about integration component installation and update over the past year to discover that they are a huge pain point during the upgrade process. </P> <P> Historically, all new versions of Hyper-V came with new integration components. 
Upgrading the Hyper-V host required upgrading the integration components in the virtual machines as well.&nbsp; The new integration components were included with the Hyper-V host and then installed in the virtual machines using vmguest.iso.&nbsp; This process required restarting the virtual machine and couldn't be batched with other Windows updates.&nbsp; Since the Hyper-V administrator had to offer vmguest.iso and the virtual machine administrator had to install them, integration component upgrade required that the Hyper-V administrator have administrator credentials in the virtual machines -- which isn't always the case. </P> <P> In Windows Technical Preview, all of that hassle goes away.&nbsp; From now on, all integration components will be delivered to virtual machines through Windows Update along with other important updates. </P> <P> For the first time, Hyper-V integration components (integration services) are available through Windows Update for virtual machines running on Windows Technical Preview hosts. </P> <P> There are updates available today as KB3004908 for virtual machines running: </P> <UL> <LI> Windows Server 2012 </LI> <LI> Windows Server 2008 R2 </LI> <LI> Windows 8 </LI> <LI> Windows 7 </LI> </UL> <P> The virtual machine must be connected to Windows Update or a WSUS server.&nbsp; In the future, integration component updates will have a category ID; for this release, they are listed as the Important update KB3004908. </P> <P> Again, these updates will only be available to virtual machines running on Windows Technical Preview hosts. </P> <P> Enjoy!
<BR /> Sarah </P> <P> </P> </BODY></HTML> Thu, 21 Mar 2019 23:51:36 GMT Virtualization-Team 2019-03-21T23:51:36Z Integration components: How we determine Windows Update applicability <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Nov 24, 2014 </STRONG> <BR /> <P> Last week we began <A href="" target="_blank"> distributing integration components through Windows Update </A> .&nbsp; In the November rollup, integration components were made available to Windows Server 2012/2008R2 virtual machines along with Windows 8/7 virtual machines running on Windows Technical Preview hosts. </P> <BR /> <P> Ben wrote a great <A href="#" target="_blank"> blog post </A> outlining how to update the integration components. </P> <BR /> <P> Using Windows Update to apply integration components brought to light an interesting set of challenges with our standard servicing tools.&nbsp; Unlike other Windows Updates, integration components are tied to the host version, not just the installed OS.&nbsp; How should we check that Windows is running in a virtual machine on a Technical Preview Hyper-V host? </P> <BR /> <P> We settled on the KVP (Hyper-V data exchange) integration component.&nbsp; KVP provides a shared registry key between the host and guest OS with some useful information about the VM. </P> <BR /> <P> See HKEY_LOCAL_MACHINE/SOFTWARE/Microsoft/Virtual Machine/Guest/Parameters: </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> We were particularly interested in the HostSystemOSMajor and HostSystemOSMinor values for determining integration component applicability.&nbsp; Windows version 6.4 is Technical Preview. </P> <BR /> <P> Cool. </P> <BR /> <P> So, if KVP is enabled, Windows Update checks the OS version, the HostSystemOSMajor, then the HostSystemOSMinor. If all that checks out, then Windows Update provides the integration components to that virtual machine. </P> <BR /> <P> This does have some interesting side effects.
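</P> <P> You can inspect these KVP values yourself from inside a guest. A minimal sketch, assuming the guest's KVP (data exchange) integration service is enabled so the key exists: </P>

```powershell
# Hedged sketch: read the host-version values KVP exposes inside a guest.
# Run inside the virtual machine; requires the data exchange service enabled.
$kvp = "HKLM:\SOFTWARE\Microsoft\Virtual Machine\Guest\Parameters"
Get-ItemProperty -Path $kvp |
    Select-Object HostName, HostSystemOSMajor, HostSystemOSMinor
# On a Technical Preview host you would expect Major 6 / Minor 4.
```

<P>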
</P> <BR /> <P> First, if KVP has never been enabled, these keys will not exist, Windows Update will not know the host version, and the integration components will not be offered. <BR /> Second, these registry keys are all modifiable and thus easy to spoof :). </P> <BR /> <P> As soon as we started offering integration component updates through Windows Update, customers started asking me when they’d be available to VMs running on down-level hosts.&nbsp; While it is in no way supported, you can modify the values for HostSystemOSMajor and HostSystemOSMinor to receive integration component updates through Windows Update on down-level hosts right now. <BR /> The integration component changes we distributed in November are compatible with Server 2012 R2/Windows 8.1 hosts (in fact, they’re the exact integration components that shipped in Windows Server 2012 R2/Windows 8.1). </P> <BR /> <P> While I in no way endorse this and it certainly isn’t supported, if one were to run the following: </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> They may find that Windows Update updates the integration components in their VM to the correct version for Windows Server 2012 R2/Windows 8.1. </P> <BR /> <P> Cheers, <BR /> Sarah </P> </BODY></HTML> Thu, 21 Mar 2019 23:51:30 GMT Virtualization-Team 2019-03-21T23:51:30Z Replicate Azure Pack IaaS Workloads to Azure using ASR <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Feb 15, 2015 </STRONG> <BR /> A few months back, we announced <A href="#" target="_blank"> Azure Site Recovery </A> (ASR)’s integration with Azure Pack, which enabled our Service Providers to start offering <A href="#" target="_blank"> Managed Disaster Recovery for IaaS Workloads using ASR and Azure Pack </A> . The response, since we announced the integration, has been phenomenal - customers are appreciating the simplicity of ASR and are adopting ASR as the standard for offering DR solutions in their environments.
The integration of ASR and Azure Pack also enabled Service Providers to offer DR as a value-added service, opening up new revenue streams and the ability to offer better Service Level Agreements (SLA) to their end customers. A key ask that our early adopters have expressed – and that we are delivering on today – is the ability to use Microsoft Azure as a Disaster Recovery Site for IaaS workloads when using Azure Pack. <BR /> <BR /> The new features in the ASR – Azure Pack integration will enable Service Providers to offer Azure as a DR site, or DR to an on-premises secondary site, using their Azure Pack UR 4.0 deployments. To enable these capabilities, you can <A href="#" target="_blank"> download the latest ASR runbooks </A> and import them into your WAP environments. To know more about the integration, check out our <A href="#" target="_blank"> WAP deployment guide </A> . <BR /> <BR /> Getting started with <A href="#" target="_blank"> Azure Site Recovery </A> is easy – simply check out the <A href="#" target="_blank"> pricing information </A> , and <A href="#" target="_blank"> sign up for a free Microsoft Azure trial </A> . <BR /> <BR /> <P> </P> </BODY></HTML> Thu, 21 Mar 2019 23:51:02 GMT Virtualization-Team 2019-03-21T23:51:02Z Hyper-V receives Red Hat certification <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Feb 23, 2015 </STRONG> <BR /> <P> If you run Red Hat Enterprise Linux (RHEL) as a guest OS on Hyper-V, you will be interested to know that Red Hat certifies Hyper-V as a supported hypervisor for running RHEL.&nbsp; The support services you get from Red Hat via your RHEL subscription are fully available when running RHEL in an on-premises installation of Hyper-V. </P> <P> Microsoft recently obtained certification for the RHEL 7.0 release on Windows Server 2008 R2 Hyper-V, Windows Server 2012 Hyper-V, and Windows Server 2012 R2 Hyper-V.&nbsp; The 2012 R2 Hyper-V certification includes both Generation 1 and Generation 2 VMs.
</P> <P> Earlier versions of RHEL are also certified by Red Hat, including RHEL 5.9 and later, and RHEL 6.4 and later.&nbsp;&nbsp; Both the 32-bit and 64-bit versions are certified. </P> <P> See <A href="#" target="_blank"> </A> for the Red Hat web page listing the certifications.&nbsp; Scroll down to get to the latest versions of Hyper-V. <BR /> </P> <P> - Michael Kelley <BR /> </P> <P> </P> <BR /> </BODY></HTML> Thu, 21 Mar 2019 23:50:55 GMT Virtualization-Team 2019-03-21T23:50:55Z Server Virtualization Sessions @ Build 2015 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Apr 30, 2015 </STRONG> <BR /> <P> If you are interested in Server Virtualization and Server Application development - today is a good day to be at Build 2015 (or following it online). </P> <P> Two big sessions to recommend are: </P> <P> <STRONG> Nano Server: A Cloud Optimized Windows Server for&nbsp;Developers </STRONG> <BR /> Time: April 30, 2015 from 2:00PM&nbsp;to&nbsp;3:00PM <BR /> Online: <A href="#" target="_blank"> </A> </P> <P> <STRONG> Windows Containers: What, Why and&nbsp;How </STRONG> <BR /> Time: April 30, 2015 from 5:00PM&nbsp;to&nbsp;6:00PM <BR /> Online: <A href="#" target="_blank"> </A> </P> <P> Cheers, <BR /> Ben </P> <P> </P> <BR /> </BODY></HTML> Thu, 21 Mar 2019 23:50:50 GMT Virtualization-Team 2019-03-21T23:50:50Z Linux Integration Services 4.0 Announcement <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on May 01, 2015 </STRONG> <BR /> <H2> Introduction </H2> <P> We are pleased to announce the release of Linux Integration Services (LIS) version 4.0. As part of this release we have expanded access to Hyper-V features and performance to the latest Red Hat Enterprise Linux, CentOS, and Oracle Linux versions.
While Microsoft continues to develop and submit Linux Integration Services to the Linux kernel and work with enterprise Linux distribution vendors to incorporate that code into new releases, this software direct from Microsoft makes those improvements available to long-established Linux installations. <BR /> <BR /> This release continues the tradition of great support for open source software making full use of Microsoft virtualization platforms. </P> <P> </P> <H2> Download Location </H2> <P> The LIS binaries are available either as a tar file that can be uploaded to your virtual machine or as an ISO that can be mounted. The files are available from the Microsoft Download Center here: <BR /> <BR /> <A href="#" target="_blank"> </A> <BR /> <BR /> A ReadMe file provides information on the installation procedure, feature set, and known issues. <BR /> <BR /> See also the TechNet article “ <A href="#" target="_blank"> Linux and FreeBSD Virtual Machines on Hyper-V </A> ” for a comparison of LIS features and best practices for use. <BR /> <BR /> LIS code is released under the GNU General Public License version 2 (GPLv2) and is freely available at the <A href="#" target="_blank"> LIS GitHub project </A> . LIS code is also regularly submitted to the upstream Linux kernel and documented on the <A href="#" target="_blank"> Linux kernel mailing list (LKML) </A> .
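</P> <P> From the Hyper-V host, attaching the downloaded ISO to a Linux VM is a one-liner. A sketch only — the VM name and ISO path are placeholders, and the install itself is then run inside the guest per the ReadMe: </P>

```powershell
# Hedged sketch: attach the downloaded LIS ISO to a Linux VM's DVD drive.
# "rhel7-guest" and the ISO path are example values -- substitute your own.
Set-VMDvdDrive -VMName "rhel7-guest" -Path "C:\ISOs\LinuxIC-v4.0.iso"
# Then, inside the guest, mount the DVD and run the installer described
# in the ReadMe that ships with the download.
```

<P>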
</P> <H2> Supported Virtualization Server Operating Systems </H2> <P> Linux Integration Services (LIS) 4.0 allows Linux guests to use Hyper-V virtualization on the following host operating systems: </P> <UL> <LI> <P> Windows Server 2008 R2 (applicable editions) </P> </LI> <LI> <P> Microsoft Hyper-V Server 2008 R2 </P> </LI> <LI> <P> Windows 8 Pro and 8.1 Pro </P> </LI> <LI> <P> Windows Server 2012 and 2012 R2 </P> </LI> <LI> <P> Microsoft Hyper-V Server 2012 and 2012 R2 </P> </LI> <LI> <P> Windows Server Technical Preview </P> </LI> <LI> <P> Microsoft Hyper-V Server Technical Preview </P> </LI> <LI> <P> Microsoft Azure </P> </LI> </UL> <H2> Applicable Linux Distributions </H2> <P> Microsoft provides Linux Integration Services for a broad range of Linux distros as documented in the <A href="#" target="_blank"> Linux and FreeBSD Virtual Machines on Hyper-V </A> topic on TechNet. Per that documentation, many Linux distributions and versions have Linux Integration Services built-in and do not require installation of this separate LIS package from Microsoft. This LIS package is available for a subset of supported distributions in order to provide the best performance and fullest use of Hyper-V features. It can be installed in the listed distribution versions that do not already have LIS built-in, and can be installed as an upgrade in listed distribution versions that already have LIS built-in.
<BR /> <BR /> LIS 4.0 is applicable to the following guest operating systems: </P> <UL> <LI> <P> Red Hat Enterprise Linux 5.5-5.11 32-bit, 32-bit PAE, and 64-bit </P> </LI> <LI> <P> Red Hat Enterprise Linux 6.0-6.6 32-bit and 64-bit </P> </LI> <LI> <P> Red Hat Enterprise Linux 7.0-7.1 64-bit </P> </LI> <LI> <P> CentOS 5.5-5.11 32-bit, 32-bit PAE, and 64-bit </P> </LI> <LI> <P> CentOS 6.0-6.6 32-bit and 64-bit </P> </LI> <LI> <P> CentOS 7.0-7.1 64-bit </P> </LI> <LI> <P> Oracle Linux 6.4-6.6 with Red Hat Compatible Kernel 32-bit and 64-bit </P> </LI> <LI> <P> Oracle Linux 7.0-7.1 with Red Hat Compatible Kernel 64-bit </P> </LI> </UL> <H2> Linux Integration Services 4.0 Feature Set </H2> <P> When installed on a virtual machine that is running a supported Linux distribution, LIS 4.0 for Hyper-V provides the additional functionality listed below. </P> <UL> <LI> Installable on Red Hat Enterprise Linux 6.6-7.1 </LI> <LI> Installable on CentOS 6.6-7.1 </LI> <LI> Installable on Oracle Linux 6.6-7.1 when running the Red Hat Compatible Kernel </LI> <LI> Networking and storage performance improvements </LI> <LI> Dynamic Memory – Hot Add </LI> </UL> <P> More details on individual features can be found at <A href="#" target="_blank"> </A> </P> <H2> Customer Feedback </H2> <P> Customers can provide feedback through the <A href="#" target="_blank"> Linux and FreeBSD Virtual Machines on Hyper-V forum </A> . We look forward to hearing about your experiences with LIS.
<BR /> <BR /> <BR /> <BR /> </P> <P> </P> </BODY></HTML> Thu, 21 Mar 2019 23:50:46 GMT Virtualization-Team 2019-03-21T23:50:46Z Hyper-V at Ignite <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on May 12, 2015 </STRONG> <BR /> <P> Ignite 2015 was full of new information about Hyper-V.&nbsp; Check out some of the many talks about (or related to) Hyper-V.&nbsp; I'm sure I inevitably missed a few talks/interviews.&nbsp;&nbsp;If there are any great ones I'm missing, leave a link in the comments. </P> <P> <EM> * See also: <A href="" title="Information about Windows containers from Build and Ignite" target="_blank"> Information about Windows containers from Build and Ignite </A> <BR /> </EM> </P> <P> </P> <P> </P> <P> <STRONG> The Future Microsoft Datacenter/Fabric </STRONG> </P> <P> <A href="#" title="Mathew John and Jeff Woolsey -- Platform Vision Strategy (2 of 7): Server Virtualization Overview" target="_blank"> Mathew John and Jeff Woolsey -- Platform Vision &amp; Strategy (2 of 7): Server Virtualization Overview </A> </P> <P> <A href="#" title="Philip Moss -- The Power of Windows Server Software Defined Datacenter in Action" target="_blank"> Philip Moss -- The Power of Windows Server Software Defined Datacenter in Action </A> </P> <P> <A href="#" title="Andrew Mason and Jeffrey Snover -- Nano Server: The Future of Windows Server Starts Now" target="_blank"> Andrew Mason and Jeffrey Snover -- Nano Server:&nbsp; The Future of Windows Server Starts Now </A> </P> <P> <A href="#" title="Dan Harman and Jeffrey Snover -- Remotely Managing Nano Server" target="_blank"> Dan Harman and Jeffrey Snover -- Remotely Managing Nano Server </A> </P> <P> <STRONG> </STRONG> </P> <P> <STRONG> </STRONG> </P> <P> <STRONG> Hyper-V Nuts and Bolts </STRONG> </P> <P> <A href="#" title="Ben Armstrong and Sarah Cooley -- What's New in Windows Server Hyper-V" target="_blank"> Ben Armstrong and Sarah Cooley -- What's New in Windows Server Hyper-V </A> </P> <P> <A href="#" 
title="Senthil Rajaram and Jose Barreto -- Hyper-V Storage Performance with Storage Quality of Service" target="_blank"> Senthil Rajaram and Jose Barreto -- Hyper-V Storage Performance with Storage Quality of Service </A> </P> <P> <A href="#" title="Allen Marshall, Amitabh Tamhane, and Dean Wells -- Harden the Fabric: Protecting Tenant Secrets in Hyper-V" target="_blank"> Allen Marshall, Amitabh Tamhane, and Dean Wells -- Harden the Fabric: Protecting Tenant Secrets in Hyper-V </A> </P> <P> <A href="#" title="Ben Armstrong and Rob Hindman -- Upgrading your Private Cloud to Server 2012R2 and Beyond" target="_blank"> Ben Armstrong and Rob Hindman -- Upgrading your Private Cloud to Server 2012R2 and Beyond </A> </P> <P> <A href="#" title="Aidan Finn -- The Hidden Treasures of Windows Server 2012R2 Hyper-V" target="_blank"> Aidan Finn -- The Hidden Treasures of Windows Server 2012 R2&nbsp;Hyper-V </A> </P> <P> <A href="#" title="Matt McSpirit -- Migrating to Microsoft: VMWare to Hyper-V and Microsoft Azure" target="_blank"> Matt McSpirit -- Migrating to Microsoft: VMWare to Hyper-V and Microsoft Azure </A> </P> <P> <A href="#" title="Jeff Woolsey -- Preparing your Fabric and Applications for Server 2003 End of Support" target="_blank"> Jeff Woolsey -- Preparing your Fabric and Applications for Server 2003 End of Support </A> </P> <P> </P> <P> Cheers, <BR /> Sarah </P> <P> </P> <P> </P> <BR /> </BODY></HTML> Thu, 21 Mar 2019 23:50:38 GMT scooley 2019-03-21T23:50:38Z PowerShell Direct – Running PowerShell inside a virtual machine from the Hyper-V host <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on May 14, 2015 </STRONG> <BR /> <P> Official PS Direct documentation is located here ( <A href="#" title="PS Direct documentation" target="_blank"> </A> ). 
</P> <BR /> <P> </P> <BR /> <P> At Ignite we announced PowerShell Direct, and briefly demoed its capabilities in the <A href="#" title="&quot;What's New in Hyper-V&quot; session" target="_blank"> “What’s New in Hyper-V” session </A> .&nbsp; This is a follow-up so you can get started using PowerShell Direct in your own environment. </P> <BR /> <H2> What is PowerShell Direct? </H2> <BR /> <P> It is a new way of running PowerShell commands inside a virtual machine from the host operating system easily and reliably. </P> <BR /> <P> There are no network/firewall requirements or configurations. <BR /> It works regardless of Remote Management configuration. <BR /> You still need guest credentials. </P> <BR /> <P> For people who want to try it out immediately, go ahead and (as Administrator) run either of these commands on a Windows 10 Hyper-V host where VMName refers to a VM running Windows 10: </P> <BR /> <P> Enter-PSSession -VMName VMName </P> <BR /> <P> Invoke-Command -VMName VMName -ScriptBlock { Commands } </P> <BR /> <P> <STRONG> *** Note: This only works from Windows 10/Windows Server Technical Preview Hosts to Windows 10/Windows Server Technical Preview guests. </STRONG> <BR /> Please let me know what guest/host operating system combinations you’d like to see and why. </P> <BR /> <P> </P> <BR /> <H2> Here is why I think this is really cool </H2> <BR /> <P> Honestly, because it’s incredibly convenient.&nbsp;&nbsp; I’ve been using PowerShell Direct for everything from scripted virtual machine configuration and deployment where each virtual machine has different roles and requirements through checking the state of my virtual machine (aka, has the guest operating system booted yet?).
</P> <BR /> <P> Today, Hyper-V administrators rely on two categories of tools for connecting to a virtual machine on their Hyper-V host: </P> <BR /> <UL> <BR /> <LI> Remote management tools such as PowerShell or Remote Desktop </LI> <BR /> <LI> Hyper-V Virtual Machine Connection (VM Connect) </LI> <BR /> </UL> <BR /> <P> <BR /> Both of these technologies work well but have tradeoffs as the Hyper-V deployment grows.&nbsp; VMConnect is reliable but hard to automate, while remote PowerShell is a brilliant automation and scripting tool but can be difficult to maintain/set up in some cases.&nbsp; I sometimes hear customers lament domain security policies, firewall configurations, or a lack of a shared network preventing the Hyper-V host from communicating with the virtual machines running on it. <BR /> I’m also sure we’ve all had that moment where you're using remote PowerShell to modify a network setting and accidentally make it so you can no longer connect to the virtual machine in the process…I know I have. </P> <BR /> <P> PowerShell Direct provides the scripting and automation experience available with remote PowerShell but with the zero-configuration experience you get through VMConnect. </P> <BR /> <P> With that said, there are some PowerShell tools not available yet in PowerShell Direct.&nbsp; This is the first step.&nbsp; If you expected something to work and it didn’t, leave a comment. </P> <BR /> <P> </P> <BR /> <H2> Getting started and a few common issues </H2> <BR /> <P> I decided to make a picture of the most basic usage imaginable. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> <BR /> <STRONG> .5 – Dependencies </STRONG> <BR /> You must be connected to a Windows 10/Windows Server Technical Preview Host with Windows 10/Windows Server Technical Preview virtual machine(s). <BR /> You need to be running as Hyper-V Administrator (launch PowerShell as Administrator). <BR /> You need user credentials in the virtual machine.
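</P> <P> Putting the dependencies together, a typical first session looks something like this. A sketch only — the VM name is a placeholder, and -Credential can be omitted, in which case you'll be prompted for guest credentials: </P>

```powershell
# Hedged sketch of typical PowerShell Direct usage from an elevated prompt on the host.
$cred = Get-Credential          # credentials valid *inside* the guest
Get-VM                          # sanity check: is the target VM on this host and running?

# Interactive session inside the guest:
Enter-PSSession -VMName "DemoVM" -Credential $cred

# One-off command against the guest:
Invoke-Command -VMName "DemoVM" -Credential $cred -ScriptBlock {
    Get-Service | Where-Object Status -eq "Running" | Select-Object -First 5
}
```

<P>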
</P> <BR /> <P> The virtual machine you want to connect to must be running locally (on this host) and booted. <BR /> I use Get-VM as a sanity check. </P> <BR /> <P> </P> <BR /> <P> <STRONG> 1 – Enter-PSSession -VMName works.&nbsp; So does Enter-PSSession -VMGuid </STRONG> </P> <BR /> <P> Enter-PSSession -VMName VMName </P> <BR /> <P> Notice this is an interactive session.&nbsp; I am running PowerShell commands on the virtual machine directly (same behavior as Enter-PSSession usually has). </P> <BR /> <P> </P> <BR /> <P> <STRONG> 2 – Invoke-Command -VMName works.&nbsp; So does Invoke-Command -VMGuid </STRONG> </P> <BR /> <P> Invoke-Command -VMName VMName -ScriptBlock { Commands } </P> <BR /> <P> Notice this locally interprets the command(s) or script you pass in, then performs those actions on the virtual machine (same behavior as Invoke-Command usually has). </P> <BR /> <P> </P> <BR /> <P> It's that easy. </P> <BR /> <P> </P> <BR /> <P> I look forward to seeing what you all build with this tool!&nbsp; Happy scripting. </P> <BR /> <P> </P> <BR /> <P> Cheers, <BR /> Sarah </P> <BR /> <P> </P> <BR /> <P> Edit: &nbsp;Read more here <A href="#" target="_blank"> </A> </P> </BODY></HTML> Thu, 21 Mar 2019 23:50:31 GMT scooley 2019-03-21T23:50:31Z When to use Hyper-V Dynamic Memory versus Runtime Memory Resize <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on May 26, 2015 </STRONG> <BR /> <P> </P> <BR /> <P> Starting in Windows 10/Windows Server Technical Preview, Hyper-V allows you to resize virtual machine memory without shutting down the virtual machine.&nbsp; You might be thinking, “Hyper-V already has dynamic memory… what is this about?”.&nbsp; I get a lot of questions about why you would use memory resize if enabling dynamic memory already&nbsp;automatically adds or removes memory to meet the virtual machine's needs.&nbsp; To answer this, I’d like to tell a story about when I asked a similar question of my college roommate. 
</P> <BR /> <P> </P> <BR /> <H2> Are all wrenches the same? </H2> <BR /> <P> I had a roommate in college who was a mechanical engineer, and one of his hobbies was modifying his car (and then driving it too fast). Living with essentially a part-time mechanic had its pros and cons: his expertise saved me from a few trips to the shop, but it also meant that he stored his huge toolset in our cramped dorm room. </P> <BR /> <P> The biggest part of the set was this enormous box of wrenches; the kind where each wrench had its special place in the box. I admit it was a beautiful set, but it also took up a lot of space. One day I asked him why he couldn't just use one adjustable wrench. Why did he need 100 different sizes? I thought I had stumped him. </P> <BR /> <P> He admitted that a wrench set and an adjustable wrench essentially fulfilled the same purpose. What I failed to realize however, was that there are scenarios which require a simple one-sized wrench. For example, applying too much torque to an adjustable wrench will break its threads. My roommate also pointed out that when you’re trusting your life to a wrench (like when installing a seatbelt), you don’t want the wrench to wiggle. He preferred a single piece of steel for those types of jobs. </P> <BR /> <P> </P> <BR /> <P> The point is, just like wrenches, Hyper-V memory configurations are used in different scenarios. Dynamic memory is the adjustable wrench: it fits almost any situation. On the other hand, a VM without dynamic memory is like having one single-sized wrench: some situations call for it, but you’re locked into the size you started with. </P> <BR /> <P> The ability to adjust the amount of memory at runtime is like having my roommate’s fancy wrench set. A virtual machine without dynamic memory will no longer be locked into its initial memory allocation. Users can either add or remove memory based on their needs. </P> <BR /> <P> OK that's enough about wrenches; you get the idea. 
Let’s talk about real use cases. </P> <BR /> <P> </P> <BR /> <H2> Use Cases </H2> <BR /> <P> Many types of Hyper-V users will benefit from this feature. However, there are two types that this feature especially applies to: desktop users and hosters. </P> <BR /> <H3> Desktop users </H3> <BR /> <P> Imagine you're a desktop user, and you go to spin up a VM. It doesn't work because you were a little too ambitious allocating memory for the rest of the VMs, and so you've run out of memory to give. </P> <BR /> <P> Without this feature, you’re stuck. You have enough memory to run all of your workloads, but it is tied up in VMs that don’t need it. Your only option is to shut down your VMs and reallocate the memory when they’re off. This means painful downtime for the workloads running on your machine. In the worst case, you have something critical running in those virtual machines. Best case scenario, this is just a huge hassle. </P> <BR /> <P> With runtime memory resize, you can simply remove memory from those other VMs without needing to stop anything. Once enough memory is freed up, you can go ahead and launch the new VM. This feature allows desktop users to create VMs without being locked down to the initial memory value. </P> <BR /> <H3> Hosters </H3> <BR /> <P> Now picture you’re a hoster.&nbsp; Let's say a tenant wanted 60GB of memory in their VM at first, but they're quickly reaching that limit. Your tenant’s service business is booming, and they need more memory in their VM. They can afford double the memory in their VM, but they can’t afford to take their workload offline. Hopefully you can see how stressful this situation can be for hosters. </P> <BR /> <P> Before runtime memory resize, you would need to have a difficult conversation with your customer. You would need to explain that this is a complex change, and that it would likely mean downtime for their service (at a time when business is booming). 
Your tenant will certainly not be happy, and might reconsider buying a larger virtual machine. </P> <BR /> <P> With this new feature, selling more memory to existing tenants becomes trivial. You don’t need to have that difficult conversation; you just need to ask “how much?” and make sure there is enough physical memory on the system. Runtime memory resize does away with the complexity of selling more memory, which means more revenue and happier tenants. </P> <BR /> <P> </P> <BR /> <H2> Walkthrough </H2> <BR /> <P> Notes: </P> <BR /> <UL> <BR /> <LI> Runtime memory resize is only supported for Windows 10/Windows Server Technical Preview </LI> <BR /> <LI> If more memory is added than is available on the system, Hyper-V will add as much memory as it can and display an error dialog. </LI> <BR /> <LI> Memory being used by the virtual machine at the time cannot be removed. In this case, Hyper-V will remove as much memory as it can and display an error dialog. </LI> <BR /> </UL> <BR /> <BR /> <H3> Virtual Machine Settings </H3> <BR /> <P> To adjust the amount of memory in a running virtual machine (without dynamic memory enabled), first open virtual machine settings. Enter the desired amount of memory in the “Startup RAM” field. The virtual machine’s memory should adjust to the new value. In the screenshot below, note that the virtual machine is running but you can still adjust “Startup RAM”. 
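</P> <BR /> <P> The same adjustment can be scripted. A hedged sketch, with placeholder VM names (“Web01” and “Batch02” are illustrative, not from this post); the host and guests must be on builds that support runtime resize, and the VMs must not have dynamic memory enabled: </P> <BR />

```powershell
# Inspect the current memory assignment of a static-memory VM.
# "Web01" and "Batch02" are placeholder VM names.
Get-VMMemory -VMName "Web01" | Select-Object VMName, Startup, DynamicMemoryEnabled

# Grow a running, static-memory VM to 4 GB without shutting it down.
Set-VMMemory -VMName "Web01" -StartupBytes 4GB

# Shrink another running VM to free up host memory for a new VM.
Set-VMMemory -VMName "Batch02" -StartupBytes 1GB
```

<BR /> <P>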
</P> <BR /> <P> <IMG src="" /> </P> <BR /> <BR /> <H3> PowerShell </H3> <BR /> <P> To resize a virtual machine’s memory in PowerShell, use the following cmdlet (example below): </P> <BR /> <P> Set-VMMemory -VMName VMName -StartupBytes 2GB </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> </P> <BR /> <P> Theo Thompson <BR /> Hyper-V Team </P> </BODY></HTML> Thu, 21 Mar 2019 23:50:15 GMT scooley 2019-03-21T23:50:15Z Announcing GA of Disaster Recovery to Azure - Purpose-Built for Branch Offices and SMB <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Dec 11, 2014 </STRONG> <BR /> <P> Today, we are excited to announce the GA of <B> <I> Branch Office and SMB Disaster Recovery to Azure </I> </B> . <A href="#" target="_blank"> Azure Site Recovery </A> delivers a simpler, reliable &amp; cost-effective Disaster Recovery solution to Branch Office and SMB customers. &nbsp;ASR with the new <I> Hyper-V Virtual Machine Protection </I> from <B> <I> Windows Server 2012 R2 to Microsoft Azure </I> </B> can now be used at customer-owned sites, and SCVMM is optional. </P> <P> </P> <P> </P> <P> You can visit <A href="#" target="_blank"> Getting Started with Azure Site Recovery </A> for additional information. </P> </BODY></HTML> Thu, 21 Mar 2019 23:49:52 GMT Virtualization-Team 2019-03-21T23:49:52Z Hyper-V Replica &amp; Proxy Servers on primary site <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Feb 08, 2014 </STRONG> <BR /> <P> I was tinkering around with my lab setup, which consists of a domain, proxy server, primary and replica servers. There are some gotchas when it comes to Hyper-V Replica and proxy servers, and I realized that we did not have any posts around this. So here goes. 
</P> <P> If the primary server is behind a proxy server (forward proxy) and Kerberos-based authentication is used to establish a connection between the primary and replica server, you might encounter an error: <EM> Hyper-V cannot connect to the specified Replica server &lt;servername&gt; due to connection timed out. Verify if a network connection exists to the Replica server or if the proxy settings have been configured appropriately to allow replication traffic. </EM> </P> <P> <IMG src="" /> </P> <P> I have a <STRONG> Forefront TMG 2010 </STRONG> server acting as a proxy, and the logs in the proxy server show the following: </P> <P> <IMG src="" /> </P> <P> I also had <EM> <A href="#" target="_blank"> netmon </A> </EM> running on my primary server, and the logs didn’t indicate much other than the fact that the connection never made it to the replica server – something happened between the primary and replica server which caused the connection to be terminated. The primary server name in this deployment is and the proxy server is </P> <P> <IMG src="" /> </P> <P> If a successful connection goes through, you will see a spew of messages in netmon. </P> <P> When I observed this issue the first time while building the product, I reached out to the Forefront folks @ Microsoft to understand the behavior. I came to understand that the Forefront TMG proxy server terminates any outbound (or upload) connections whose content length (request header) is &gt; 4GB. </P> <P> Hyper-V Replica sets a high content length as we expect to transfer large files (VHDs), and it saves us the effort of re-establishing the connection each time. A closer inspection of a POST request shows the content length being set by Hyper-V Replica (ahem, ~500GB) </P> <P> The proxy server returns a <EM> what-uh? 
</EM> response in the form of a bad request: </P> <P> <IMG src="" /> </P> <P> That isn’t super helpful by any means, and the error message unfortunately isn’t too specific either. But now you know the reason for the failure – the proxy server terminates the connection request and it never reaches the replica server. </P> <P> So how do we work around it? There are two ways: (1) bypass the proxy server, or (2) use certificate-based authentication (another blog for some other day). </P> <P> The ability to bypass the proxy server is provided only in PowerShell, through the <EM> BypassProxyServer </EM> parameter of the <EM> Enable-VMReplication </EM> cmdlet - <A href="#" title="" target="_blank"> </A> . When the flag is enabled, the request (for lack of a better word) bypasses the proxy server. E.g.: </P> <DIV id="codeSnippetWrapper"> <DIV id="codeSnippet"> Enable-VMReplication -VMName NewVM5 -AuthenticationType Kerberos -ReplicaServerName prb2 -ReplicaServerPort 25000 -BypassProxyServer $true <BR /> Start-VMInitialReplication -VMName NewVM5 </DIV> </DIV> <BR /> <P> This is not available in the Hyper-V Manager or Failover Cluster Manager UI; it’s supported only in PowerShell (and WMI). Running the above cmdlets will create the replication relationship and start the initial replication. </P> </BODY></HTML> Thu, 21 Mar 2019 23:49:45 GMT Virtualization-Team 2019-03-21T23:49:45Z Hyper-V Replica Certificate based authentication and Proxy servers <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Feb 17, 2014 </STRONG> <BR /> <P> Continuing from where we left <A href="#" target="_blank"> off </A> , I have a small lab deployment which consists of an AD, DNS, proxy server (Forefront TMG 2010 on WS 2008 R2 SP1), primary servers and replica servers. 
When the primary server is behind the proxy (forward proxy) and I tried to enable replication using certificate-based authentication, I got the following error message: <EM> The handle is in the wrong state for the requested operation (0x00002EF3) </EM> </P> <P> <IMG src="" /> </P> <P> That didn’t convey too much, did it? Fortunately I had <EM> netmon </EM> running in the background, and the only network traffic seen was between the primary server and the proxy. A particular HTTP response caught my eye: </P> <P> <IMG src="" /> </P> <P> The highlighted text indicated that the proxy was terminating the connection and returning a ‘Bad gateway’ error. A closer look at the TMG error log indicated that the error was encountered during the https-inspect state. </P> <P> After some bing’ing of the errors, the pieces began to emerge. When HTTPS inspection is enabled, the TMG server terminates the connection and establishes a new connection (in our case to the replica server), acting as a trusted man-in-the-middle. This doesn’t work for Hyper-V Replica, as we mutually authenticate the primary and replica server endpoints. To work around the situation, I disabled HTTPS inspection in the proxy server </P> <P> <IMG src="" /> </P> <P> </P> <P> </P> <P> </P> <P> and things worked as expected. The primary server was able to establish the connection and replication was on track. 
</P> </BODY></HTML> Thu, 21 Mar 2019 23:48:53 GMT Virtualization-Team 2019-03-21T23:48:53Z Hosting Providers and HRM <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Feb 18, 2014 </STRONG> <BR /> If you are a hosting provider who is interested in offering DR as a service – you should go over this great post by <STRONG> Gaurav </STRONG> on how Hyper-V Recovery Manager ( <STRONG> HRM </STRONG> ) helps you build this capability <A href="#" title="" target="_blank"> </A> <P> The post provides a high level overview of the capability and also a detailed FAQ on the common set of queries which we have heard from our customers. If you have any further questions, leave a comment in the blog post. </P> </BODY></HTML> Thu, 21 Mar 2019 23:48:18 GMT Virtualization-Team 2019-03-21T23:48:18Z Backup of a Replica VM <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Apr 24, 2014 </STRONG> <BR /> This blog post covers the scenarios and motivations that drive the backup of a Replica VM, and product guidance to administrators. <BR /> <BR /> <H3> </H3> <BR /> <BR /> <H2> Why backup a Replica VM? </H2> <BR /> <BR /> Ever since the advent of Hyper-V Replica in Windows Server 2012, customers have been interested in backing up the Replica VM. Traditionally, IT administrators have taken backups of the VM that contains the running workload (the primary VM) and backup products have been built to cater to this need. So when a significant proportion of customers talked about the backup of Replica VMs, we were intrigued. There are a few key scenarios where backup of a Replica VM becomes useful: <BR /> <BR /> <OL> <LI> <STRONG> Reduce the impact of backup on the running workload: </STRONG> Taking the backup of a VM involves the creation of a snapshot/diff-disk to baseline the changes that need to be backed up. For the duration of the backup job, the workload is running on a diff-disk and there is an impact on the system when that happens. 
By offloading the backup to the Replica site, the running workload is no longer impacted by the backup operation. Of course, this is applicable only to deployments where the backup copy is stored on the remote site. For example, the daily backup operation might store the data locally for quicker restore times, but monthly or quarterly backup for long-term retention that are stored remotely can be done from the Replica VM. </LI> <LI> <STRONG> Limited bandwidth between sites: </STRONG> This is typical of Branch Office-Head Office (BO-HO) kind of deployments where there are multiple smaller remote branch office sites and a larger central Head Office site. The backup data for the branch offices is stored in the head office, and an appropriate amount of bandwidth is provisioned by administrators to transfer the backup data between the two sites. The introduction of disaster recovery using Hyper-V Replica creates another stream of network traffic, and administrators have to re-evaluate their network infrastructure. In most cases, administrators either could not or were not willing to increase the bandwidth between sites to accommodate both backup and DR traffic. However they did come to the realization that backup and DR were independently sending copies of the same data over the network – and this was an area that could be optimized. With Hyper-V Replica creating a VM in the Head Office site, administrators could save on the network transfer by backing up the Replica VM locally rather than backing up the primary VM and sending the data over the network. </LI> <LI> <STRONG> Backup of all VMs in the Hoster datacenter: </STRONG> Some customers use the Hoster datacenter as the Replica site, with the intention of not building a secondary datacenter of their own. Hosters have SLAs around the protection of all customer VMs in their datacenters – typically once a day backup. Thus the backup of Replica VMs becomes a requirement for the success of their business. 
</LI> </OL> <BR /> <BR /> Thus various customer segments found that the backup of a Replica VM has value for their specific scenarios. <BR /> <BR /> <H2> Data consistency </H2> <BR /> <BR /> A key aspect of the backup operation is the consistency of the backed-up data. Customers have a clear prioritization and preference when it comes to the data consistency of backed-up VMs: <BR /> <BR /> <OL> <LI> Application-consistent backup </LI> <LI> Crash-consistent backup </LI> </OL> <BR /> <BR /> And this prioritization applied to Replica VMs as well. Conversations with customers indicated that they were comfortable with crash-consistency for a Replica VM if application-consistency was not possible. Of course, anything less than crash-consistency was not acceptable, and customers preferred that backups fail rather than have inconsistent data backed up. <BR /> <BR /> <H3> Attempting application-consistency </H3> <BR /> <BR /> Typical backup products try to ensure application-consistency of the data being backed up (using the VSS framework) – and this works out well when the VM is running. However, the Replica VM is always turned off until a failover is initiated, and VSS is unable to guarantee application-consistent backup for a Replica VM. Thus getting an application-consistent backup of a Replica VM is not possible. <BR /> <BR /> <H3> Guaranteeing crash-consistency </H3> <BR /> <BR /> In order to ensure that customers backing up Replica VMs always get crash-consistent data, a set of changes was introduced in Windows Server 2012 R2 that fails the backup operation if consistency cannot be guaranteed. The virtual disk could be inconsistent when any one of the below conditions is encountered, and in these cases backup is expected to fail. 
<BR /> <BR /> <OL> <LI> HRL logs are being applied to the Replica VM </LI> <LI> Previous HRL log apply operation was cancelled or interrupted </LI> <LI> Previous HRL log apply operation failed </LI> <LI> Replica VM health is Critical </LI> <LI> VM is in the <EM> Resynchronization Required </EM> state or the <EM> Resynchronization in progress </EM> state </LI> <LI> Migration of Replica VM is in progress </LI> <LI> Initial replication is in progress (between the primary site and secondary site) </LI> <LI> Failover is in progress </LI> </OL> <BR /> <BR /> <H3> Dealing with failures </H3> <BR /> <BR /> These are largely treated as transient error states, and the <EM> backup product is expected to retry the backup operation </EM> based on its own retry policies. With 30-second replication and apply being supported in Windows Server 2012 R2, the backup operation is expected to collide with HRL log apply more frequently – resulting in error scenario 1 mentioned above. A robust retry mechanism is needed to ensure a high backup success rate. If the backup product is unable to retry or cope with failures, an option is to explicitly pause the replication before the backup is scheduled to run. <BR /> <BR /> <BR /> <BR /> <H2> Key Takeaways </H2> <BR /> <BR /> <H3> Impact on administrators </H3> <BR /> <BR /> <OL> <LI> Backup of Replica VMs is better with Windows Server 2012 R2. </LI> <LI> Only crash-consistent backup of a Replica VM is guaranteed. </LI> <LI> A robust retry mechanism needs to be configured in the backup product to deal with failures. Or ensure that replication is paused when backup is scheduled. </LI> </OL> <BR /> <BR /> <H3> Impact on backup vendors </H3> <BR /> <BR /> <OL> <LI> The changes introduced in Windows Server 2012 R2 will benefit customers using any backup product to take backups of Replica VMs. </LI> <LI> A robust retry mechanism would need to be built to deal with Replica VM failure. 
</LI> <LI> For specific details on how Data Protection Manager (DPM) deals with the backup of Replica VMs, refer to <A href="#" target="_blank"> this blog post </A> . </LI> </OL> <BR /> <BR /> <BR /> <BR /> <P> <EM> Update 25-Apr-2014:&nbsp; The DPM-specific details in this post have been moved to the <A href="#" target="_blank"> DPM blog </A> . </EM> </P> </BODY></HTML> Thu, 21 Mar 2019 23:48:11 GMT Aashish Ramdas 2019-03-21T23:48:11Z Replication Health-Windows Server 2012 R2 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on May 05, 2014 </STRONG> <BR /> <P> We have made improvements to the way we display replication health in Windows Server 2012 R2 to support Extended Replication. If you are new to measuring replication health, I would strongly suggest you go through this <A href="#" target="_blank"> two-part blog series on Interpreting Replication Health </A> . Here I will discuss the additional changes we made in Windows Server 2012 R2. </P> <H3> Replication Tab in Replica Site Hyper-V Manager </H3> <P> The Replication tab on the replica site now shows replication health information for both the primary replication relationship and the extended replication relationship. It neatly captures the health values for primary and extended replication in a single pane, separating them with a line. </P> <P> <IMG src="" /> </P> <P> <STRONG> Replication Health Screen in Replica Site: </STRONG> </P> <P> Replication health information about Extended Replication can be viewed through the “ <STRONG> Extended Replication </STRONG> ” tab in the Replication Health screen. To view the Replication Health screen, go to <STRONG> Hyper-V Manager/Failover Cluster Manager </STRONG> , right-click on the protected VM and choose “ <STRONG> View Replication Health </STRONG> ”. </P> <P> Replication health information about the primary replication relationship is shown in the “Replication” tab, while the Extended Replication tab displays replication health information about extended replication. 
What’s more, the Extended Replication tab looks exactly like the Replication Health screen on the primary server to give a consistent view, while the Replication tab continues to display the content the way it used to. You can even “Reset Statistics” or “Save as CSV file” on a per-relationship basis. </P> <P> <IMG src="" /> </P> <P> <IMG src="" /> </P> <H3> Replication Health through PowerShell </H3> <P> You can get the replication health details of Extended Replication through PowerShell by setting the “ <STRONG> ReplicationRelationshipType </STRONG> ” parameter to “ <STRONG> Extended </STRONG> ”. To view the health of replication from primary to replica, use the value “ <STRONG> Simple </STRONG> ” for the ReplicationRelationshipType parameter. </P> <P> <EM> Measure-VMReplication -VMName &lt;name&gt; -ReplicationRelationshipType Extended </EM> </P> <P> While we have added support to display extended replication in our UI/PS, getting details about the primary replication relationship remains the same <IMG src="" /> </P> </BODY></HTML> Thu, 21 Mar 2019 23:48:06 GMT Trinadh Kotturu 2019-03-21T23:48:06Z Hyper-V events at TechEd North America <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on May 06, 2014 </STRONG> <BR /> I'm excited to report we, the Hyper-V team, will have a record-high presence this year at TechEd North America in Houston.&nbsp; Come join us at the Hyper-V booth and for our Hyper-V sessions. 
<BR /> <BR /> <STRONG> Sessions: </STRONG> <BR /> <BR /> <STRONG> Monday: </STRONG> <BR /> <BR /> 11:00 - 12:00 <BR /> <BR /> FDN06 Transform the Datacenter: Making the Promise of Connected Clouds a Reality <BR /> Speaker(s): Brian Hillger, Elden Christensen, Jeff Woolsey, Jeffrey Snover, Matt McSpirit <BR /> <BR /> 1:15 - 2:30 <BR /> <BR /> DCIM-B319 Building a Backup Strategy for Your Private Cloud <BR /> Speaker(s): Doug Hazelman, Michael Jones, Shivam Garg, Taylor Brown, Vineeth Karinta <BR /> <BR /> 4:45 - 6:00 <BR /> <BR /> DCIM-B378 Converged Networking for Windows Server 2012 R2 Hyper-V <BR /> Speaker(s): Don Stanwyck, Taylor Brown <BR /> <BR /> <STRONG> Tuesday: </STRONG> <BR /> <BR /> 8:30 - 9:45 <BR /> <BR /> DCIM-B379 Using VMware? The Advantages of Microsoft Cloud Fundamentals with Virtualization <BR /> Speaker(s): Jeff Woolsey, Matt McSpirit <BR /> <BR /> <STRONG> Wednesday: </STRONG> <BR /> <BR /> 5:00 - 6:15 <BR /> <BR /> DCIM-B380 What’s New in Windows Server 2012 R2 Hyper-V <BR /> Speaker(s): Jeff Woolsey <BR /> <BR /> <STRONG> Thursday: </STRONG> <BR /> <BR /> 2:45 - 4:00 <BR /> <BR /> DCIM-B219 Secure Design and Best Practices for Your Private Cloud <BR /> Speaker(s): Patrick Lang, Sam Chandrashekar <BR /> <BR /> <STRONG> Booth Info: </STRONG> <BR /> <BR /> The Hyper-V booth will be in the center of the Expo floor with the Server and Cloud Tools booth block.&nbsp; Come find us when you have a chance. <BR /> <BR /> I'll post bios for the Hyper-V attendees shortly. 
<BR /> <BR /> <P> Cheers, <BR /> Sarah </P> </BODY></HTML> Thu, 21 Mar 2019 23:47:30 GMT Virtualization-Team 2019-03-21T23:47:30Z Optimizing Hyper-V Replica HTTPS traffic using Riverbed SteelHead <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on May 08, 2014 </STRONG> <BR /> <P> Hyper-V Replica supports both Kerberos-based authentication and certificate-based authentication – the former sends the replication traffic between the two servers/sites over HTTP, while the latter sends it over HTTPS. Network bandwidth is a precious commodity, and any optimization delivered has a huge impact on the organization’s TCO and Recovery Point Objective (RPO). </P> <P> Around a year back, we partnered with the folks from <A href="#" target="_blank"> Riverbed </A> in Microsoft’s EEC lab to publish a <A href="#" target="_blank"> whitepaper </A> which detailed the bandwidth optimization of replication traffic sent over HTTP. </P> <P> A few months back, we decided to revisit the setup with the latest release of RiOS (Riverbed OS, which runs in the Riverbed appliance). Using the resources and appliances from EEC and Riverbed, a set of experiments was performed to study the network optimizations delivered by the Riverbed SteelHead appliance. Optimizing SSL traffic has been a tough nut to crack, and we saw some really impressive numbers.&nbsp; The whitepaper documenting the results and technology is available here - <A href="#" target="_blank"> <B> </B> </A> <B> . </B> </P> <P> At a high level, in order to optimize HTTPS traffic, the Riverbed SteelHead appliance decrypts the packet from the client (the primary server). It then optimizes the payload and encrypts the payload before sending it to the server-side SteelHead appliance over the internet/WAN. The server-side SteelHead appliance decrypts the payload, de-optimizes the traffic and re-encrypts it. 
The server-side appliance finally sends it to the destination server (the replica server), which proceeds to decrypt the replication traffic. The diagram is taken from Riverbed’s user manual and explains the above technology: </P> <P> <IMG src="" /> </P> <P> When Hyper-V Replica’s inbuilt compression is disabled, the reduction delivered over WAN was ~80% </P> <IMG src="" /> <P> </P> <P> When Hyper-V Replica’s inbuilt compression is enabled, the reduction delivered over WAN was ~30% </P> <P> <IMG src="" /> </P> <P> It’s worth calling out that the % reduction delivered depends on a number of factors such as workload read/write patterns, sparseness of the disk, etc., but the numbers were quite impressive. </P> <P> In summary, both Hyper-V Replica and the SteelHead devices were easy to configure and worked “out of the box”. Neither product required specific configurations to light up the scenario. The Riverbed appliance delivered ~30% reduction on compressed, encrypted Hyper-V Replica traffic and ~80% on uncompressed, encrypted Hyper-V Replica traffic. </P> </BODY></HTML> Thu, 21 Mar 2019 23:47:26 GMT Virtualization-Team 2019-03-21T23:47:26Z TechEd North America 2014 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on May 08, 2014 </STRONG> <BR /> <P> This year <A href="#" target="_blank"> TechEd North America </A> is happening from May 12 to May 15 in Houston. There are some interesting sessions around Backup and Disaster Recovery – I would highly encourage you all to attend these sessions and interact with the folks presenting. </P> Sessions on May 12 <IMG src="" /> <P> <IMG src="" /> </P> Sessions on May 15 <P> <IMG src="" /> </P> <IMG src="" /> <P> <IMG src="" /> </P> <P> </P> <P> Looking forward to seeing you all there! </P> </BODY></HTML> Thu, 21 Mar 2019 23:46:50 GMT Aashish Ramdas 2019-03-21T23:46:50Z See you at TechEd North America! 
<HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on May 09, 2014 </STRONG> <BR /> <P> Here's a quick intro to the Hyper-V people attending TechEd North America in Houston next week.&nbsp; I'm also posting booth times for each of us. </P> <BR /> <P> The booth's official name is: Datacenter &amp; Infrastructure Management: Cloud &amp; Datacenter Infrastructure Solutions </P> <BR /> <P> It will be at the center of the Expo Hall. </P> <BR /> <P> </P> <BR /> <P> <EM> <STRONG> *Note* </STRONG> </EM> </P> <BR /> <P> These are the times we have to be at a booth.&nbsp; Chances are we'll be there beyond these hours. </P> <BR /> <P> </P> <BR /> <P> <STRONG> Sam Chandrashekar </STRONG> </P> <BR /> <P> Wednesday&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3:00 PM - 6:00 PM </P> <BR /> <P> Thursday&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 10:45 AM - 12:45 PM </P> <BR /> <P> </P> <BR /> <P> <STRONG> Patrick Lang </STRONG> </P> <BR /> <P> Wednesday&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3:00 PM - 6:00 PM </P> <BR /> <P> Thursday&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 10:45 AM - 12:45 PM </P> <BR /> <P> </P> <BR /> <P> <STRONG> Taylor Brown </STRONG> </P> <BR /> <P> Monday &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 10:15 AM - 12:15 PM </P> <BR /> <P> </P> <BR /> <P> <STRONG> Sarah Cooley </STRONG> -- I'm on loan to the Windows team. 
Come talk to me next door at the Windows, Phone, &amp; Devices block: Mobility </P> <BR /> <P> Monday&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 5:45 PM-8:30 PM </P> <BR /> <P> Tuesday&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 10:45 AM-12:30 PM </P> <BR /> <P> 2:15 PM-4:00 PM </P> <BR /> <P> Wednesday&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 10:45 AM-1:00 PM </P> <BR /> <P> 3:00 PM-6:00 PM </P> <BR /> <P> Thursday&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 10:45 AM-12:45 PM </P> <BR /> <P> </P> <BR /> <P> <STRONG> Ben Armstrong </STRONG> </P> <BR /> <P> Monday&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 10:15 AM-12:15 PM </P> <BR /> <P> Tuesday&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 10:45 AM-12:30 PM </P> <BR /> <P> Wednesday&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 12:45 PM-3:15 PM </P> <BR /> <P> Thursday&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 10:45 AM-12:45 PM </P> <BR /> <P> </P> <BR /> <P> <STRONG> Jeff Woolsey </STRONG> - running around </P> <BR /> <P> </P> <BR /> <P> Again, feel free to stop by 
and talk to us. Looking forward to seeing you at TechEd. </P> <BR /> <P> </P> <BR /> <P> <IMG src="" /> <BR /> </P> <BR /> <P> [From left to right: Taylor, Sam, Ben, Sarah, Jeff, Patrick] </P> <BR /> <P> </P> <BR /> <P> Cheers, </P> <BR /> <P> Sarah </P> <BR /> <P> </P> </BODY></HTML> Thu, 21 Mar 2019 23:46:03 GMT Virtualization-Team 2019-03-21T23:46:03Z Excluding virtual disks in Hyper-V Replica <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on May 11, 2014 </STRONG> <BR /> <P> Since its introduction in Windows Server 2012, Hyper-V Replica has provided a way for users to exclude specific virtual disks from being replicated. This option is rarely exercised but can have significant benefits when used correctly. This blog post covers the disk exclusion scenarios and the impact this has on the various operations done during the lifecycle of VM replication. This blog post has been co-authored by <A href="#" target="_blank"> <STRONG> Priyank Gaharwar </STRONG> </A> of the Hyper-V Replica test team. </P> <H2> Why exclude disks? </H2> <P> Excluding disks from replication is done because: </P> <OL> <LI> The data churned on the excluded disk is not important or doesn’t need to be replicated&nbsp;&nbsp;&nbsp; (and) </LI> <LI> Storage and network resources can be saved by not replicating this churn </LI> </OL> <P> Point #1 is worth elaborating on a little. What data isn't “important”? The lens used to judge the importance of replicated data is its usefulness at the time of failover. Data that is not replicated <EM> should </EM> also not be needed at the time of failover. The lack of this data would then also not impact the Recovery Point Objective (RPO) in any material way. </P> <P> There are some specific examples of data churn that can be easily identified and are great candidates for exclusion – for example, <EM> page file writes </EM> . Depending on the workload and the storage subsystem, the page file can register a significant amount of churn.
However, replicating this data from the primary site to the replica site would be resource-intensive and yet completely worthless. Thus the replication of a VM with a single virtual disk holding both the OS and the page file can be optimized by: </P> <OL> <LI> Splitting the single virtual disk into two virtual disks – one with the OS, and one with the page file </LI> <LI> Excluding the page file disk from replication </LI> </OL> <H2> How to exclude disks </H2> Application impact - isolating the churn to a separate disk <P> The first step in using this feature is to isolate the superfluous churn onto a separate virtual disk, similar to what is described above for page files. This is a change to the virtual machine and to the guest. Depending on how your VM is configured and what kind of disk you are adding (IDE, SCSI), you may have to power off your VM before any changes can be made. </P> <P> At the end, an additional disk should surface in the guest. Appropriate configuration changes should be made in the application to change the location of the temporary files to point to the newly added disk. </P> <P> <EM> Figure 1:&nbsp; Changing the location of the System Page File to another disk/volume </EM> <IMG src="" /> </P> Excluding disks in the Hyper-V Replica UI <P> Right-click on a VM and select “ <STRONG> Enable Replication… </STRONG> ”. This will bring up the wizard that walks you through the various inputs required to enable replication on the VM. The screen titled “ <STRONG> Choose Replication VHDs </STRONG> ” is where you deselect the virtual disks that you do not want to replicate. By default, all virtual disks will be selected for replication.
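For the isolation step described earlier, the dedicated page-file disk can also be created and attached from the host with a couple of cmdlets. A minimal sketch, assuming a hypothetical VM name and path, and a 20 GB size chosen only for illustration (SCSI disks can be hot-added; IDE disks require the VM to be powered off):

```powershell
# Create a small dynamic VHDX that will hold only the page file (hypothetical path and size)
$pageFileVhd = New-VHD -Path 'D:\Primary-Site\Hyper-V\Virtual Hard Disks\SQL-PageFile.vhdx' `
                       -SizeBytes 20GB -Dynamic

# Attach it to the VM on the SCSI controller
Add-VMHardDiskDrive -VMName 'SQLSERVER' -ControllerType SCSI -Path $pageFileVhd.Path
```

Inside the guest, initialize and format the new disk, then move the page file to it as shown in Figure 1.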
</P> <P> <EM> Figure 2:&nbsp; Excluding the page file virtual disk from a virtual machine </EM> <IMG src="" /> </P> Excluding disks using PowerShell <P> The <STRONG> Enable-VMReplication </STRONG> cmdlet provides two optional parameters: <STRONG> –ExcludedVhd </STRONG> and <STRONG> –ExcludedVhdPath </STRONG> . These parameters should be used to exclude the virtual disks at the time of enabling replication. </P> <DIV id="codeSnippetWrapper"> <DIV id="codeSnippet"> PS C:\Windows\system32&gt; Enable-VMReplication -VMName SQLSERVER -ReplicaServerName -AuthenticationType Kerberos -ReplicaServerPort 80 -ExcludedVhdPath 'D:\Primary-Site\Hyper-V\Virtual Hard Disks\SQL-PageFile.vhdx' </DIV> </DIV> <BR /> <P> After running this command, you will be able to see the excluded disks under <STRONG> VM Settings </STRONG> &gt; <STRONG> Replication </STRONG> &gt; <STRONG> Replication </STRONG> <STRONG> VHDs </STRONG> . </P> <BR /> <P> <EM> Figure 3:&nbsp; List of disks included for and excluded from replication </EM> <IMG src="" /> </P> <BR /> <H2> Impact of disk exclusion </H2> <BR /> <TABLE> <TBODY><TR> <TD> Enable replication </TD> <TD> A placeholder disk (for use during initial replication) is not created on the Replica VM. The excluded disk doesn’t exist on the replica in any form. </TD> </TR> <TR> <TD> Initial replication </TD> <TD> The data from the excluded disks is not transferred to the replica site. </TD> </TR> <TR> <TD> Delta replication </TD> <TD> The churn on any of the excluded disks is not transferred to the replica site. </TD> </TR> <TR> <TD> Failover </TD> <TD> The failover is initiated without the disk that has been excluded. Applications that refer to the disk/volume in the guest will have incorrect configurations. <BR /> <BR /> For page files specifically, if the page file disk is not attached to the VM before the VM boots up, then the page file location is automatically shifted to the OS disk.
</TD> </TR> <TR> <TD> Resynchronization </TD> <TD> The excluded disk is not part of the resynchronization process. </TD> </TR> </TBODY></TABLE> <BR /> <H2> Ensuring a successful failover </H2> <BR /> <P> Most applications have configurable settings that make use of file system paths. In order to run correctly, the application expects these paths to be present. The key to a successful failover and an error-free application startup is to ensure that the configured paths are present where they should be. In the case of file system paths associated with the excluded disk, this means updating the Replica VM by adding a disk - along with any subfolders that need to be present for the application to work correctly. </P> <BR /> <P> The prerequisites for doing this correctly are: </P> <BR /> <UL> <BR /> <LI> The disk should be added to the Replica VM before the VM is started. This can be done at any time after initial replication completes, but is preferably done immediately after the VM has failed over. </LI> <BR /> <LI> The disk should be added to the Replica VM with the exact controller type, controller number, and controller location as the disk has on the primary. </LI> </UL> <BR /> <P> There are two ways of making a virtual disk available for use at the time of failover: </P> <BR /> <OL> <BR /> <LI> Copy the excluded disk manually (once) from the primary site to the replica site <BR /> </LI> <LI> Create a new disk, and format it appropriately (with any folders if required) </LI> </OL> <BR /> <P> When possible, option #2 is preferred over option #1 because of the resources saved from not having to copy the disk. 
The following PowerShell script can be used to implement option #2, focusing on meeting the prerequisites to ensure that the Replica VM is exactly the same as the primary VM from a virtual disk perspective: </P> <BR /> <DIV id="codeSnippetWrapper"> <DIV id="codeSnippet">
param ( [string]$VMNAME, [string]$PRIMARYSERVER )

## Get VHD details from primary, replica
$excludedDisks = Get-VMReplication -VMName $VMNAME -ComputerName $PRIMARYSERVER | select ExcludedDisks
$includedDisks = Get-VMReplication -VMName $VMNAME | select ReplicatedDisks
if( $excludedDisks -eq $null ) { exit }

# Get location of first replica VM disk
$replicaPath = $includedDisks.ReplicatedDisks[0].Path | Split-Path -Parent

## Create and attach each excluded disk
foreach( $exDisk in $excludedDisks.ExcludedDisks )
{
    # Get the actual disk object
    $pDisk = Get-VHD -Path $exDisk.Path -ComputerName $PRIMARYSERVER
    $pDisk

    # Create a new VHD on the Replica
    $diskpath = $replicaPath + "\" + ($pDisk.Path | Split-Path -Leaf)
    $newvhd = New-VHD -Path $diskpath `
                      -SizeBytes $pDisk.Size `
                      -Dynamic `
                      -LogicalSectorSizeBytes $pDisk.LogicalSectorSize `
                      -PhysicalSectorSizeBytes $pDisk.PhysicalSectorSize `
                      -BlockSizeBytes $pDisk.BlockSize `
                      -Verbose
    if( $newvhd -eq $null )
    {
        Write-Host "It is assumed that the VHD [" ($pDisk.Path | Split-Path -Leaf) "] already exists and has been added to the Replica VM [" $VMNAME "]"
        continue;
    }

    # Mount and format the new VHD
    $newvhd | Mount-VHD -PassThru -Verbose `
            | Initialize-Disk -PassThru -Verbose `
            | New-Partition -AssignDriveLetter -UseMaximumSize -Verbose `
            | Format-Volume -FileSystem NTFS -Confirm:$false -Force -Verbose

    # Unmount the disk
    $newvhd | Dismount-VHD -Passthru -Verbose

    # Attach disk to Replica VM
    Add-VMHardDiskDrive -VMName $VMNAME `
                        -ControllerType $exDisk.ControllerType `
                        -ControllerNumber $exDisk.ControllerNumber `
                        -ControllerLocation $exDisk.ControllerLocation `
                        -Path $newvhd.Path `
                        -Verbose
}
</DIV> </DIV> <BR /> <BR /> <P> The script can
also be customized for use with <A href="#" target="_blank"> Azure Hyper-V Recovery Manager </A> , but we’ll save that for another post! </P> <BR /> <H2> Capacity Planner and disk exclusion </H2> <BR /> <P> The Capacity Planner for Hyper-V Replica allows you to forecast your resource needs. It allows you to be more precise about the replication inputs that impact resource consumption – such as the disks that will be replicated and the disks that will not be replicated. </P> <BR /> <P> <EM> Figure 4:&nbsp; Disks excluded for capacity planning </EM> <IMG src="" /> </P> <BR /> <H2> Key Takeaways </H2> <BR /> <OL> <BR /> <LI> Excluding virtual disks from replication can save on storage, IOPS, and network resources used during replication </LI> <BR /> <LI> At the time of failover, ensure that the excluded virtual disk is attached to the Replica VM </LI> <BR /> <LI> In most cases, the excluded virtual disk can be recreated on the Replica side using the PowerShell script provided </LI> </OL> </BODY></HTML> Thu, 21 Mar 2019 23:45:36 GMT Aashish Ramdas 2019-03-21T23:45:36Z Hyper-V Replica trouble-shooting wiki <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on May 13, 2014 </STRONG> <BR /> <P> We are happy to announce the availability of the Hyper-V Replica trouble-shooting Wiki here: </P> <P> <A href="#" title="" target="_blank"> </A> </P> <P> This guide contains links and resources to trouble-shoot some common Hyper-V Replica failure scenarios. We will be updating the guide over time! </P> <P> We would like this to be a community effort, and <STRONG> <EM> you are free to add content </EM> </STRONG> to this guide. To add content, follow this high-level schema for new articles [please feel free to add other sections as appropriate]: </P> <P> <STRONG> a. Error Messages/Event Viewer details </STRONG> – This section mentions what error messages the customer will see in the UI/PS/WMI and what event viewer messages are logged. </P> <P> <STRONG> b.
Possible Causes </STRONG> – This section lists the scenario(s) that might have led to the failure. </P> <P> <STRONG> c. Resolution </STRONG> – This section lists the actions the admin has to take in their environment to resolve the failure. </P> <P> <STRONG> d. Additional resources </STRONG> – A list of blogs/KB articles/documentation/other articles which contain more information for the customer about the failure. </P> <P> If you are new to the TechNet wiki, the guide on “ <STRONG> How to contribute </STRONG> ” is <A href="#" target="_blank"> here </A> . </P> <P> Happy WIKI’ing <IMG src="" /> </P> </BODY></HTML> Thu, 21 Mar 2019 23:44:55 GMT Trinadh Kotturu 2019-03-21T23:44:55Z Replication Health Mailer <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on May 13, 2014 </STRONG> <BR /> <P> One of our engineers, <STRONG> Sangeeth </STRONG> , has come up with a nifty PowerShell script which mails the replication health of a host or a cluster in a nice dashboard format. We thought it would be of help to our customers to get the status of the replicating VMs and their footprint on CPU and memory. You can download the script <A href="#" target="_blank"> here. </A> </P> <P> The sample output from the script looks like this. You can add as many recipients as you wish <IMG src="" /> </P> <P> <IMG src="" /> </P> <P> On a cluster, you can run this script on one of the cluster nodes to get information about all cluster VMs. You can even run this script to get information from a remote host or a remote cluster using the “ <STRONG> HostorClusterName </STRONG> ” parameter. For a cluster, use the <STRONG> “isCluster </STRONG> ” parameter to tell the script to get information from the cluster rather than the local node. </P> <P> Isn’t it simple and easy to get the replication information about VMs?
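The downloadable script produces a full dashboard with CPU and memory details and cluster support, but the core idea can be sketched in a few lines. A minimal sketch, run on the Hyper-V host; the SMTP server and addresses are placeholders:

```powershell
# Collect replication status for every replicating VM on this host
$body = Get-VMReplication |
    Select-Object VMName, State, Health, Mode, PrimaryServer, ReplicaServer |
    ConvertTo-Html -Title 'Hyper-V Replica health' | Out-String

# Mail the report as an HTML dashboard (hypothetical SMTP server and recipients)
Send-MailMessage -SmtpServer 'smtp.contoso.com' -From 'hyperv@contoso.com' `
    -To 'admin@contoso.com' -Subject 'Replication health report' `
    -Body $body -BodyAsHtml
```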
</P> </BODY></HTML> Thu, 21 Mar 2019 23:44:44 GMT Trinadh Kotturu 2019-03-21T23:44:44Z Upcoming Preview of 'Disaster Recovery to Azure' Functionality in Hyper-V Recovery Manager <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on May 15, 2014 </STRONG> <BR /> <P> In the coming weeks, we will Preview functionality within Hyper-V Recovery Manager to enable Microsoft Azure as a Disaster Recovery point for virtualized workloads. The new functionality will add support for secure and seamless management of failover and failback operations using Azure IaaS Virtual Machines, thereby enabling our customers to save precious CAPEX and ongoing OPEX incurred in managing a secondary site for Disaster Recovery. Our enhanced DRaaS offering further delivers on our promise of democratizing Disaster Recovery and of making it available to <I> everyone </I> , <I> everywhere </I> . Hyper-V Recovery Manager provides enterprise-scale Disaster Recovery using a single-click failover in the event of a disaster to an alternate enterprise data center or to an IaaS VM in Microsoft Azure. Application- and Site-Level Disaster Recovery is delivered via automation of the overall DR workflow, smart networking, and frequent testing using DR Drills. </P> <P> We announced the Preview during TechEd 2014. For more details about the upcoming Preview and existing Hyper-V Recovery Manager functionality, check out the <A href="#" target="_blank"> <STRONG> DCIM-B322 </STRONG> </A> session recording. </P> </BODY></HTML> Thu, 21 Mar 2019 23:44:24 GMT Virtualization-Team 2019-03-21T23:44:24Z Application consistent recovery points with Windows Server 2008/2003 guest OS <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on May 19, 2014 </STRONG> <BR /> <P> I recently had a conversation with a customer about a very interesting problem, and the insights that were gained there are worth sharing.
The issue was about VSS errors popping up in the guest event viewer while Hyper-V Replica reported the successful creation of application-consistent (VSS-based) recovery points. </P> Deployment details <P> The customer had the following setup that was throwing errors: </P> <OL> <LI> Primary site:&nbsp;&nbsp; Hyper-V Cluster with Windows Server 2012 R2 </LI> <LI> Replica site:&nbsp;&nbsp; Hyper-V Cluster with Windows Server 2012 R2 </LI> <LI> Virtual machines:&nbsp;&nbsp; SQL server instances with SQL Server 2012 SP1, SQL Server 2005, and SQL Server 2008 </LI> </OL> <P> At the time of enabling replication, the customer selected the option to create additional recovery points and have the “Volume Shadow Copy Service (VSS) snapshot frequency” as 1 hour. This means that every hour the VSS writer of the guest OS would be invoked to take an <EM> application-consistent snapshot </EM> . </P> Symptoms <P> With this configuration, there was a contradiction in the output – the guest event viewer showed errors/failure during the VSS process, while the Replica VM showed application-consistent points in the recovery history. </P> <P> Here is an example of the error registered in the guest: </P> <DIV id="codeSnippetWrapper"> <DIV id="codeSnippet"> SQLVM: Loc=SignalAbort. Desc=Client initiates abort. ErrorCode=(0). Process=2644. Thread=7212. Client. Instance=. VD=Global\*******&nbsp;BACKUP failed to complete the command BACKUP DATABASE model. Check the backup application log for detailed messages.&nbsp;BackupVirtualDeviceFile::SendFileInfoBegin: failure on backup device '{********-63**-49**-BA**-5DB6********}1'. Operating system error 995(error not found). </DIV> </DIV> <BR /> Root cause and Dealing with the errors <BR /> <P> The big question was: <EM> Why was Hyper-V Replica showing application-consistent recovery points if there are failures? 
</EM> </P> <BR /> <P> The behavior seen by the customer is a benign error caused by the interaction between Hyper-V and VSS, <STRONG> especially for older versions of the guest OS </STRONG> . Details about this can be found in the KB article here: <A href="#" target="_blank"> </A> </P> <BR /> <P> The Hyper-V requestor explicitly stops the VSS operation right after the <EM> OnThaw </EM> phase. While this ensures the application consistency of the writes going to the disk, it also results in the VSS errors being logged. Meanwhile, Hyper-V reports the consistency correctly to Hyper-V Replica, which in turn makes sure that the recovery side shows application-consistent points. </P> <BR /> <P> A great way to validate whether a recovery point is application-consistent or not is to do a test failover on that recovery point. If, after the VM has booted up, the event viewer logs have events pertaining to a rollback, then the point is not application-consistent. <BR /> </P> Key Takeaways <BR /> <OL> <BR /> <LI> All in all, you can rest assured that in the case of VMs with older operating systems, Hyper-V Replica is correctly taking an application-consistent snapshot of the virtual machine. <BR /> </LI> <LI> Although there are errors seen in the guest, they are benign, and having a recovery history with application-consistent points is expected behavior. </LI> </OL> <P> </P> </BODY></HTML> Thu, 21 Mar 2019 23:44:19 GMT Aashish Ramdas 2019-03-21T23:44:19Z Disaster Recovery to Microsoft Azure – Part 1 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jun 20, 2014 </STRONG> <BR /> <P> Drum roll please! </P> <P> We are super excited to announce the availability of the preview bits of <A href="#" target="_blank"> <STRONG> Azure Site Recovery </STRONG> </A> <STRONG> (ASR) </STRONG> which enables you to replicate Hyper-V VMs to Microsoft Azure for business continuity and disaster recovery purposes.
</P> <P> You can now protect, replicate, and fail over VMs directly to Microsoft Azure – our guarantee remains that whether you enable Disaster Recovery across on-premises enterprise private clouds or directly to Azure, your virtualized workloads will be recovered <STRONG> <I> accurately </I> , <I> consistently </I> , <EM> with </EM> <I> minimal downtime and with minimal data loss. </I> </STRONG> </P> <P> ASR supports Automated Protection and Replication of VMs, customizable Recovery Plans that enable One-Click Recovery, No-Impact Recovery Plan Testing (which ensures that you meet your Audit and Compliance requirements), and best-in-class Security and Privacy features that offer maximum resilience to your business-critical applications. All this with minimal cost and without the need to invest in a recovery datacenter. To know more about this announcement and what we have enabled in the Preview, check out <A href="#" target="_blank"> <STRONG> Brad Anderson’s </STRONG> In the Cloud blog </A> . </P> <P> We will cover this feature in detail in the coming weeks – stay tuned and try out the feature. We’d love to hear your feedback! </P> </BODY></HTML> Thu, 21 Mar 2019 23:44:13 GMT Virtualization-Team 2019-03-21T23:44:13Z Disaster Recovery to Microsoft Azure – Part 2 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jun 21, 2014 </STRONG> <BR /> <P> Continuing from the previous <A href="#" target="_blank"> blog </A> - check out the recent TechEd NA 2014 talk @ <A href="#" target="_blank"> </A> which includes a cool demo of this product. </P> <P> Love it? Talk about it, try it, and share your comments. </P> <P> Let’s retrace the journey - in Jan 2014, we announced the General Availability of <B> Hyper-V Recovery Manager </B> <STRONG> (HRM). </STRONG> HRM enabled customers to coordinate protection and recovery of virtualized workloads between SCVMM-managed clouds.
Using this Azure service, customers could set up, monitor, and orchestrate protection and recovery of their Virtual Machines on top of Windows Server 2012 and WS2012 R2 Hyper-V Replica. </P> <P> Like Hyper-V Replica, the solution works great when our customers have a secondary location. But what if that isn’t the case? After all, the CAPEX and OPEX cost of building and maintaining multiple datacenters is high. One of the common questions/suggestions/feedback to our team was around using Azure as a secondary data center. Azure provides a world-class, reliable, resilient platform – at a fraction of the cost of running your workloads in, or in this case maintaining, a secondary datacenter. </P> <P> The rebranded HRM service - <B> Azure Site Recovery (ASR) </B> - delivers this capability. On 6/19, we announced the availability of the preview version of ASR, which orchestrates, manages, and replicates VMs <B> to </B> Azure. </P> <P> When a disaster strikes the customer’s on-premises site, ASR can “ <STRONG> failover </STRONG> ” the replicated VMs <B> in </B> Azure. </P> <P> And once the customer recovers the on-premises site, ASR can “ <STRONG> failback </STRONG> ” the Azure IaaS VMs <B> to </B> the customer’s private cloud. We want you to decide which VM runs where and when! </P> <P> There is some exciting technology built on top of Azure which enables the scenario, and in the coming weeks we will dive deep into the workflows and the technology. </P> <P> Off the top of my head, the key features in the product are: </P> <UL> <LI> <DIV> Replication from a System Center 2012 R2 Virtual Machine Manager cloud <STRONG> to Azure </STRONG> – From an SCVMM 2012 R2 managed private cloud, any VM (we will cover some caveats in subsequent blogs) running on a Windows Server 2012 R2 hypervisor can be replicated to Azure.
</DIV> </LI> </UL> <P> <IMG src="" /> </P> <UL> <LI> Replication frequency of <STRONG> 30 seconds, 5 minutes, or 15 minutes – </STRONG> just like the on-premises product, you can replicate to Azure as often as every 30 seconds. </LI> </UL> <P> <IMG src="" /> </P> <UL> <LI> <STRONG> 24 additional recovery points </STRONG> to choose from during failover – You can configure up to 24 additional recovery points at an hourly granularity. </LI> </UL> <P> </P> <UL> <LI> <STRONG> Encryption @ Rest: </STRONG> You’ve got to love this – we encrypt the data *before* it leaves your on-premises server. We never decrypt the payload till you initiate a failover. You own the encryption key and it’s safe with you. </LI> </UL> <P> <IMG src="" /> </P> <UL> <LI> Self-service DR with <STRONG> Planned, Unplanned and Test Failover </STRONG> – Need I say more – everything is in your hands and at your convenience. </LI> </UL> <P> <IMG src="" /> </P> <UL> <LI> One-click app-level failover using Recovery Plans </LI> <LI> Audit and compliance reporting </LI> <LI> …and many more! </LI> </UL> <P> The documentation explaining the end-to-end workflows is available @ <A href="#" title="" target="_blank"> </A> to help you get started. </P> <P> The landing page for this service is @ <A href="#" title="" target="_blank"> </A> </P> <P> If you have questions when using the product, post them @ <A href="#" title="" target="_blank"> </A> or in this blog. </P> <P> Keep watching this blog space for more information on this capability. </P> </BODY></HTML> Thu, 21 Mar 2019 23:44:06 GMT Virtualization-Team 2019-03-21T23:44:06Z Azure Site Recovery - FAQ <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jun 25, 2014 </STRONG> <BR /> <P> Quick post to clarify some frequently asked questions on the newly announced <A href="#" target="_blank"> Azure Site Recovery </A> service which enables you to protect your Hyper-V VMs to Microsoft Azure.
The FAQ will <STRONG> not </STRONG> address every feature capability - it should help you get started. <STRONG> Q1: Did you just change the name from Hyper-V Recovery Manager to Azure Site Recovery? </STRONG> </P> <BR /> <BR /> <P> A: Nope – we did more than that. Yes, we rebranded Hyper-V Recovery Manager to <A href="#" target="_blank"> Azure Site Recovery </A> (ASR), but we also brought in a bunch of new features. This includes the much-awaited capability to replicate virtual machines (VMs) to Microsoft Azure. With this feature, ASR now orchestrates replication and recovery from private cloud to private cloud as well as from private cloud to Azure. <BR /> <BR /> <STRONG> Q2: What did you </STRONG> <A href="#" target="_blank"> <STRONG> GA </STRONG> </A> <STRONG> in Jan 2014?… </STRONG> </P> <BR /> <BR /> <P> A: In Jan 2014, we announced the general availability of Hyper-V Recovery Manager (HRM), which enabled you to manage and orchestrate protection &amp; recovery workflows of *your* private clouds. You (as a customer) owned both the primary and secondary datacenter, which were managed by SCVMM. Built on top of Windows Server 2012/R2 Hyper-V Replica, we offered a <A href="#" target="_blank"> cloud-integrated Disaster Recovery Solution </A> . </P> <BR /> <BR /> <P> <STRONG> Q3: HRM was an Azure service but data was replicated between my datacenters? And this continues to work? </STRONG> </P> <BR /> <BR /> <P> A: Yes on both counts. The service was being used to provide the “at-scale” protection &amp; recovery of VMs. <BR /> <BR /> <STRONG> Q4: What is in preview as of June 2014 (now)? </STRONG> </P> <BR /> <BR /> <P> A: The rebranded service now has an added capability to protect VMs to Azure (=&gt; Azure is your secondary datacenter). If your primary machine/server/VM is down due to a planned/unplanned event, you can recover the replicated VM in Azure. You can also bring back (or fail back) your VM to your private cloud once it’s recovered from a disaster.
<BR /> <BR /> <STRONG> Q5: Wow, so I don’t need a secondary datacenter? </STRONG> <BR /> <BR /> A: Exactly. You don’t need to invest in and maintain a secondary DC. You can reap the benefits of Azure’s SLAs by protecting your VMs in Azure. The replica VM does *NOT* run in Azure till you initiate a failover. <BR /> <BR /> <STRONG> Q6: Where is my data stored? </STRONG> <BR /> <BR /> A: Your data is stored in *your* storage account on top of the world-class geo-redundant storage provided by Azure. <BR /> <BR /> <STRONG> Q7: Do you encrypt my replica data? </STRONG> <BR /> <BR /> A: Yes – you can optionally encrypt the data. You own &amp; manage the encryption key. Microsoft never requires it until you opt to fail over the VM in Azure. <BR /> <BR /> <STRONG> Q8: And my VM needs to be part of a SCVMM cloud? </STRONG> <BR /> <BR /> A: Yes. For the current preview release, we need your VMs to be part of an SCVMM-managed cloud. Check out the benefits of SCVMM @ <A href="#" title="" target="_blank"> </A> <BR /> <BR /> <STRONG> Q9: Can I protect <EM> any </EM> guest OS? </STRONG> <BR /> <BR /> A: Your protection and recovery strategy is tied to Microsoft Azure’s supported operating systems. You can find more details in <A href="#" title="" target="_blank"> </A> under the “Virtual Machines support&nbsp;– on premises to Azure” section. <BR /> <BR /> <STRONG> Q10: Ok, but what about the host OS on-premises? </STRONG> <BR /> <BR /> A: For the current preview release, the host OS should be Windows Server 2012 R2. <BR /> <BR /> In summary, you can replicate any supported Windows and Linux SKU mentioned in Q9 running on top of a Windows Server 2012 R2 Hyper-V server. <BR /> <BR /> <STRONG> Q11: Can I replicate Gen-2 VMs on Windows Server 2012 R2? </STRONG> <BR /> <BR /> A: For the preview release, you can protect only Generation 1 VMs. Trying to protect a Gen-2 VM will fail with an appropriate error message.
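Since only Generation 1 VMs can be protected in the preview (Q11), it can be handy to list any Generation 2 VMs up front rather than waiting for the error. A minimal sketch, run on the Windows Server 2012 R2 host:

```powershell
# List VMs on this host that the preview cannot protect to Azure (Generation 2)
Get-VM | Where-Object { $_.Generation -eq 2 } | Select-Object Name, Generation, State
```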
<BR /> <BR /> <STRONG> Q12: Is the product guest-agnostic or should I upload any agent? </STRONG> <BR /> <BR /> A: The on-premises technology is built on top of Windows Server 2012 Hyper-V Replica, which is guest-, workload-, and storage-agnostic. <BR /> <BR /> <STRONG> Q13: What about disks and disk geometries? </STRONG> <BR /> <BR /> A: We support all combinations of VHD/VHDX with fixed, dynamic, and differencing disks. <BR /> <BR /> <STRONG> Q14: Any restrictions on the size of the disks? </STRONG> <BR /> <BR /> A: There are certain restrictions on the size of the disks of IaaS VMs on Azure – the primary ones being: </P> <BR /> <BR /> <UL> <LI> The OS disk cannot be more than 127 GB </LI> <LI> Each data disk cannot be more than 1 TB </LI> <LI> There can be up to 16 disks attached <UL> <LI> Note: When failing over a VM, ensure that you pick the right size based on the disk parameters as well. Refer to <A href="#" title="" target="_blank"> </A> </LI> </UL> </LI> </UL> <BR /> <BR /> Azure is a rapidly evolving platform and these restrictions are applicable as of June 2014. <BR /> <BR /> <STRONG> Q15: Any gotchas with network configuration or memory assigned to the VM? </STRONG> <BR /> <BR /> <P> A: Just like the previous question, when you fail over your VM, you will be bound by Azure’s offerings/features for IaaS VMs. As of today, Azure supports one network adapter and up to 112 GB of memory (in the A9 VM). The product does not put up a hard block in case you have a different network and/or memory configuration on-premises. You can change the parameters with which the VM will be created in the Azure portal under the Recovery Services option. <BR /> <BR /> <B> Q16: Where can I find information about the product, pricing, etc.? </B> <BR /> <BR /> A: To know more about Azure Site Recovery, pricing, and documentation, visit <A href="#" target="_blank"> </A> <BR /> <BR /> <STRONG> Q17: Is there any document explaining the workflows?
</STRONG> <BR /> <BR /> A: You can refer to the getting-started guide @ <A href="#" title="" target="_blank"> </A> or post a question in our forums (see below). <BR /> <BR /> <STRONG> Q18: I faced some errors when using the product. Is there an MSDN forum where I can post my query? </STRONG> <BR /> <BR /> A: Yes, please post your questions and queries @ <A href="#" target="_blank"> </A> <BR /> <BR /> <STRONG> Q19: But I really feel strongly about some of the features and I would like to share my feedback with the PG. Can I comment on the blog? </STRONG> <BR /> <BR /> A: We love to hear your feedback, so feel free to leave your comments in any of our blog articles. But a more structured approach would be to post your suggestions @ <A href="#" target="_blank"> </A> <BR /> <BR /> <STRONG> Q20: Will you build everything that I suggest? </STRONG> <BR /> <BR /> A: Of course…not :) But on a serious note – we absolutely love to hear from you. So don’t be shy with your feedback. </P> </BODY></HTML> Thu, 21 Mar 2019 23:43:28 GMT Virtualization-Team 2019-03-21T23:43:28Z Azure Site Recovery – case of the “network connection failure” <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jul 06, 2014 </STRONG> <BR /> <P> <STRONG> Luís Caldeira </STRONG> is one of our early adopters who pinged us with an interesting error. Thanks for reaching out to us, Luís, and sharing the details of your setup. I am sure this article will come in handy to folks who hit this error at some point. </P> <P> Some days back, Luís sent us a mail informing us that his enable-protection workflow was consistently failing with a “network connection failure” error message. He indicated that he had followed the steps listed in the tutorial ( <A href="#" title="" target="_blank"> </A> ).
He had: </P> <UL> <LI> <DIV> Set up SCVMM 2012 R2 </DIV> </LI> <LI> <DIV> Created the Site Recovery vault, uploaded the required certificate </DIV> </LI> <LI> <DIV> Installed &amp; configured the <STRONG> Microsoft Azure Site Recovery Provider </STRONG> in the VMM server </DIV> </LI> <LI> <DIV> Registered the VMM server </DIV> </LI> <LI> <DIV> And finally installed the <STRONG> Microsoft Azure Recovery Services </STRONG> agent in <STRONG> each </STRONG> of his Hyper-V servers. </DIV> </LI> </UL> <P> He was able to view his on-prem cloud in the Azure portal and could configure protection policies on it as well. However, when he tried to enable protection on a VM, the workflow failed and he saw the following set of tasks in the portal: </P> <P> <IMG src="" /> </P> <P> Clicking on ‘Error Details’ showed the following information: </P> <P> <IMG src="" /> </P> <P> Hmm, not too helpful. Luís thought as much, and he reached out to us with the information through our internal DL. We did some basic debugging by looking at the Hyper-V VMMS event viewer logs and the Microsoft Azure Recovery Services event viewer log. Both of them pointed to a failure in the network with the following error message: </P> <P> <IMG src="" /> </P> <P> A snip of the error message (after removing the various IDs): <STRONG> <EM> “Could not replicate changes for virtual machine VMName due to a network communication failure. (Virtual Machine ID VMid, Data Source ID sourceid, Task ID taskid)” </EM> </STRONG> </P> <P> The message was less cryptic but still did not provide a solution. The network connection from the Hyper-V server seemed okay as Luís was able to access different websites from the box. He was able to TS into other servers, the firewall looked OK, and inbound connections looked good as well. The Azure portal was able to enumerate the VMs running on the Hyper-V server – but the enable replication call was failing.
</P> <P> You are bound to see more granular error messages @ <EM> C:\Program Files\Microsoft Azure Recovery Services Agent\Temp\CBEngineCurr.errlog </EM> and we proceeded to inspect that file. The trace indicated that the name resolution to the Azure service happened as expected but “the remote server was timing out (or) connection did not happen”. </P> <P> OK, so DNS was ruled out as well. We asked Luís to help us understand the network elements in his setup and he indicated that he had a TMG proxy server. We logged into the proxy server and enabled real-time logs in the TMG proxy server. We retried the workflow and the workflow promptly failed – but interestingly, the proxy server did not register any traffic blip. That was definitely odd. So browsing from the server worked but the connection to the service failed. Hmm. </P> <P> But the lack of activity in the TMG server at least indicated a local failure. We were not dealing with an Azure service-side issue and that ruled out 50% of potential problems. At a high level, the agent (Microsoft Azure Recovery Services), which is installed on the Hyper-V server, acts as a “data mover” to Azure. It is also responsible for all the authentication and connection management when sending replica data to Azure. This component was built on top of a previously released component of the Windows Azure Online Backup solution and enhanced to support this scenario. </P> <P> The good news is that the agent is quite network-savvy and has a bunch of configurations to tinker with. One such configuration is the proxy server, which you can reach by opening the “Microsoft Azure Backup” MMC snap-in and clicking “Change properties” in the Actions menu. </P> <P> <IMG src="" /> </P> <P> We clicked on the “Proxy configuration” tab to set the proxy details in Luís’s setup. </P> <P> <IMG src="" /> </P> <P> After setting the proxy server, we retried the workflow… and it failed yet again. Luís then indicated that he was using an authenticated proxy server.
Now things got interesting – as the Microsoft Azure Recovery Services agent runs in System context (unlike, say, IE, which runs in the user context), we needed to set the proxy authentication parameters. In the same proxy configuration page as above, we now provided the user id and password. </P> <P> <IMG src="" /> </P> <P> Now, when we retried the replication – voilà! The workflow went through and initial replication was on its way. The same can be done using the Set-OBMachineSetting cmdlet ( <A href="#" title="" target="_blank"> </A> ). </P> <P> Needless to say, once the issue was fixed, Luís took the product out on a full tour and he totally loved it (OK, I just made up the last part). </P> <P> I encourage you to try out ASR and share your feedback. It’s extremely easy to set up and provides a great cloud-based DR solution. </P> <P> You can find more details about the service @ <A href="#" target="_blank"> </A> . The documentation explaining the end-to-end workflows is available @ <A href="#" target="_blank"> </A> . And if you have questions when using the product, post them @ <A href="#" target="_blank"> </A> or in this blog. You can also share your feedback on your favorite features/gaps @ <A href="#" target="_blank"> </A> </P> </BODY></HTML> Thu, 21 Mar 2019 23:43:21 GMT Virtualization-Team 2019-03-21T23:43:21Z Out-of-band Initial Replication (OOB IR) and Deduplication <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jul 11, 2014 </STRONG> <BR /> <P> A recent conversation with a customer brought out the question: <STRONG> What is the best way to create an entire Replica site from scratch? </STRONG> On the surface this seems simple enough – configure initial replication to send the data over the network for the VMs one after another in sequence.
For this specific customer, however, there were some additional constraints: </P> <OL> <LI> The network bandwidth was less than 10Mbps and it primarily catered to their daily business needs (email, etc.). Adding more bandwidth was not possible within their budget. This came as quite a surprise because despite the incredible download speeds that are encountered these days, there are still places in the world where it isn't as cost-effective to purchase those speeds. </LI> <LI> The VMs were between 150GB and 300GB in size. This made it rather impractical to send the data over the wire. In the best case, it would have taken about 34 hours for a single 150GB VM. </LI> </OL> <P> This left OOB IR as the only realistic way to transfer data. But at 300GB per VM, it is easy to exhaust a 1TB removable drive. That left us thinking about deduplication – after all, <A href="#" target="_blank"> deduplication is supported on the Replica site </A> . So why not use it for deduplicating OOB IR data? </P> <P> So I tested this out in my lab environment with a removable USB drive, and a bunch of VMs created out of the same Windows Server 2012 VHDX file. The expectation was that at least 20% to 40% of the data would be the same across the VMs, the overall deduplication rate would be quite high, and we could fit a good number of VMs into the removable USB drive. </P> <P> I started this experiment by attaching the removable drive to my server and attempting to enable deduplication on the associated volume in Server Manager. </P> Interesting discovery #1:&nbsp; Deduplication is not allowed on volumes on removable disks <P> Whoops! This seems like a fundamental block to our scenario – how do you build deduplicated OOB IR if deduplication is not supported on removable media?
This limitation is officially documented here: <A href="#" target="_blank"> </A> , and says <EM> “Volumes that are candidates for deduplication must conform to the following requirements:&nbsp; Must be exposed to the operating system as non-removable drives. Remotely-mapped drives are not supported.” </EM> </P> <P> Fortunately, my colleague <A href="#" target="_blank"> <STRONG> Paul Despe </STRONG> </A> in the Windows Server Data Deduplication team came to the rescue. There is a (slightly) convoluted way to get the data on the removable drive <EM> and </EM> deduplicated. Here goes: </P> <UL> <LI> Create a dynamically expanding VHDX file. The size doesn’t matter too much as you can always start off with the default and expand if required. </LI> </UL> <P> <IMG src="" /> </P> <UL> <LI> Using <EM> Disk Management, </EM> bring the disk online, initialize it, create a single volume, and format it with NTFS. You should be able to see the new volume in your Explorer window. I used Y:\ as the drive letter. </LI> </UL> <P> <IMG src="" /> </P> <UL> <LI> Mount this VHDX on the server you are using to do the OOB IR process. </LI> <LI> If you go to <EM> Server Manager </EM> and view this volume (Y:\), you will see that it is backed by a fixed disk. </LI> </UL> <P> <IMG src="" /> </P> <UL> <LI> In the volume view, enable deduplication on this volume by right-clicking and selecting <STRONG> ‘Configure Data Deduplication’ </STRONG> . Set the <STRONG> ‘Deduplicate files older than (in days)’ </STRONG> field to zero. </LI> </UL> <P> <IMG src="" /> </P> <P> <IMG src="" /> </P> <P> You can also enable deduplication in PowerShell with the following cmdlets: </P> <DIV id="codeSnippetWrapper"> <DIV id="codeSnippet"> PS C:\&gt; Enable-DedupVolume Y: -UsageType HyperV <BR /> PS C:\&gt; Set-DedupVolume Y: -MinimumFileAgeDays 0 </DIV> </DIV> <BR /> <P> Now you are set to start the OOB IR process and take advantage of the deduplicated volume.
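</P> <P> Before kicking off the copy, it can be worth confirming that the settings took effect. A quick check (a sketch using the standard Data Deduplication cmdlets; the drive letter matches the example above): </P> <DIV id="codeSnippetWrapper"> <DIV id="codeSnippet"> PS C:\&gt; Get-DedupVolume Y: | Format-List Enabled, UsageType, MinimumFileAgeDays </DIV> </DIV> <P>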
This is what I saw after 1 VM was enabled for replication with OOB IR: </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> That’s about 32.6GB of storage used. Wait… shouldn’t there be a reduction in size because of deduplication? </P> <BR /> Interesting discovery #2:&nbsp; Deduplication doesn’t work on-the-fly <BR /> <P> Ah… so if you were expecting that the VHD data would arrive in the volume in deduplicated form, this is going to be a bit of a surprise. At first, the VHD data will be present in the volume <EM> in its original size. </EM> Deduplication happens post-facto, as a job that crunches the data and reduces the size of the VHD after it has been fully copied as a part of the OOB IR process. This is because deduplication needs an exclusive handle on the file in order to go about doing its work. </P> <BR /> <P> The good part is that you can trigger the job on demand and start the deduplication as soon as the first VHD is copied. You can do that by using the PowerShell cmdlet provided: </P> <BR /> <DIV id="codeSnippetWrapper"> <BR /> <DIV id="codeSnippet"> PS C:\&gt; Start-DedupJob Y: -Type Optimization </DIV> </DIV> <BR /> <P> There are other parameters provided by the cmdlet that allow you to control the deduplication job. You can explore the various options in the TechNet documentation: <A href="#" title="" target="_blank"> </A> . </P> <BR /> <P> This is what I got after the deduplication job completed: </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> That’s a 54% saving with just one VM – a very good start! </P> <BR /> Deduplication rate with more virtual machines <BR /> <P> After this, I threw in a few more virtual machines with completely different applications installed and here is the observed savings after each step: </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> I think the excellent results speak for themselves!
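</P> <P> You can also pull these numbers from PowerShell at each step instead of reading them off Server Manager. A sketch (standard Data Deduplication cmdlets; the exact figures will of course vary with your VMs): </P> <DIV id="codeSnippetWrapper"> <DIV id="codeSnippet"> PS C:\&gt; Get-DedupStatus Y: | Format-List FreeSpace, SavedSpace, OptimizedFilesCount <BR /> PS C:\&gt; Get-DedupVolume Y: | Format-List SavingsRate </DIV> </DIV> <P>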
<IMG src="" /> Notice how between VM2 and VM3, almost all of the data (~9GB) has been absorbed by deduplication with an increase of only 300MB! As the deduplication team has published on TechNet, VDI VMs have a high degree of similarity in their disks and result in a much higher deduplication rate. A random mix of VMs yields surprisingly good results as well. </P> <BR /> Final steps <BR /> <P> Once you are done with the OOB IR and deduplication of your VMs, you need to perform the following steps: </P> <BR /> <OL> <BR /> <LI> Ensure that no deduplication job is running on the volume <BR /> </LI> <LI> Eject the fixed disk – this should disconnect the VHD from the host <BR /> </LI> <LI> Compact the VHD using the <STRONG> “Edit Virtual Hard Disk Wizard” </STRONG> . At the time I disconnected the VHD from the host, the size of the VHD was 36.38GB. After compacting it, the size came down to 28.13GB… which is more in line with the actual disk consumption that you see in the graph above <BR /> </LI> <LI> Copy the VHD to the Replica site, mount it on the Replica host, and complete the OOB IR process! </LI> </OL> <BR /> <P> </P> <BR /> <P> Hope this blog post helps with setting up your own Hyper-V Replica sites from scratch using OOB IR! Try it out and let us know your feedback. </P> </BODY></HTML> Thu, 21 Mar 2019 23:42:22 GMT Aashish Ramdas 2019-03-21T23:42:22Z Azure Site Recovery adds InMage Scout to Its Portfolio for Any Virtual and Physical Workload Disaster Recovery <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jul 17, 2014 </STRONG> <BR /> <P> Azure Site Recovery with Hyper-V Replica already supports the ability to set up disaster recovery between your two Windows Server 2012+ Hyper-V data centers, or between your Windows Server 2012 R2 Hyper-V data center and Microsoft Azure. Recently we acquired <A href="#" target="_blank"> <STRONG> InMage Systems Inc </STRONG> . </A> , an innovator in the emerging area of cloud-based business continuity.
InMage offers migration and disaster recovery capabilities for heterogeneous IT environments with workloads running on <STRONG> any hypervisor (e.g., VMware) or even on physical servers </STRONG> . InMage’s flagship product Scout is now available as a limited-period <STRONG> free trial </STRONG> from the Azure Site Recovery management portal. </P> <P> To learn more, and to download and try out Azure Site Recovery with InMage Scout, please read my colleague Gaurav Daga’s blog <A href="#" target="_blank"> Azure Site Recovery Now Offers Disaster Recovery for Any Physical or Virtualized IT Environment with InMage Scout </A> . </P> <P> You can find more details about Azure Site Recovery @ <A href="#" target="_blank"> </A> . If you have questions when using the product, post them @ <A href="#" target="_blank"> </A> or in this blog. You can also share your feedback on your favorite features/gaps @ <A href="#" target="_blank"> </A> </P> <P> </P> </BODY></HTML> Thu, 21 Mar 2019 23:40:55 GMT Virtualization-Team 2019-03-21T23:40:55Z Using an existing VM for initial replication in Hyper-V Replica <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Aug 27, 2013 </STRONG> <BR /> <P> Hyper-V Replica provides three methods to do initial replication: </P> <BR /> <OL> <BR /> <LI> Send data over the network (Online IR) </LI> <BR /> <LI> Send data <A href="" target="_blank"> using external media </A> (OOB IR) </LI> <BR /> <LI> Use an existing virtual machine as the initial copy </LI> <BR /> </OL> <BR /> <P> Each option for initial replication has a specific scenario in which it excels. In this post we will dive into the underlying reasons for including option 3 in Hyper-V Replica and the scenarios where it is advantageous, and cover its usage. This blog post is co-authored by <STRONG> Shivam Garg </STRONG> , Senior Program Manager Lead.
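</P> <BR /> <P> As a quick reference, the three methods correspond to different invocations of the <STRONG> Start-VMInitialReplication </STRONG> cmdlet once replication is enabled. A sketch (parameter names assume the in-box Hyper-V PowerShell module; the VM name and path are illustrative): </P> <DIV id="codeSnippetWrapper"> <DIV id="codeSnippet"> PS C:\&gt; Start-VMInitialReplication -VMName Test-VM&nbsp;&nbsp;# 1. over the network <BR /> PS C:\&gt; Start-VMInitialReplication -VMName Test-VM -DestinationPath E:\OOB-IR&nbsp;&nbsp;# 2. export to external media <BR /> PS C:\&gt; Start-VMInitialReplication -VMName Test-VM -UseBackup&nbsp;&nbsp;# 3. existing VM as the initial copy </DIV> </DIV> <P> Option 3 is the focus of this post. </P> <P>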
</P> <BR /> <P> </P> <BR /> <H3> Choosing an existing virtual machine </H3> <BR /> <P> This method of initial replication is rather self-explanatory – it takes an existing VM on the replica site as the baseline to be synced with the primary. However, you can’t pick just any virtual machine on the replica site to use as an initial copy. Hyper-V Replica places certain requirements on the VM that can be used in this method of initial replication: </P> <BR /> <OL> <BR /> <LI> It must have the same virtual machine ID as the primary VM </LI> <BR /> <LI> It should have the same disks (and disk properties) as the primary VM </LI> <BR /> </OL> <BR /> <P> Given the restrictions placed on the existing VM that can act as an initial copy, there are a few clear ways to get such a VM: </P> <BR /> <UL> <BR /> <LI> <STRONG> Restore the VM from backup </STRONG> . Historically, the disaster recovery strategy for most companies involved taking backups and restoring the datacenter from these backups. This strategy also implies that there is a mechanism in place to transport the backed-up data to the recovery site. This makes the backed-up copies an excellent starting point for Hyper-V Replica’s disaster recovery process. The data will be older – depending on the backup policies – but it will satisfy the criteria to use this initial replication method. Of course, it is suggested to use the latest backup data so as to keep the delta changes to a minimum. </LI> <BR /> <LI> <STRONG> Export the VM from the primary and import on the replica </STRONG> . Of course, the exported VM needs to be transported to the other site, so this option is similar to out-of-band initial replication using external media. </LI> <BR /> <LI> <STRONG> Use an older Replica VM. </STRONG> When a replication relationship is removed, the Replica VM remains – and this VM can be used as the initial copy when replication is enabled again for the same VM in the future.
</LI> <BR /> </UL> <BR /> <P> </P> <BR /> <H3> Syncing the primary and Replica VMs </H3> <BR /> <P> Although there is a complete VM on the replica side, the Replica VM lags behind the primary VM in terms of the freshness of the data. So as a part of the initial replication process the two VMs have to be brought into sync. This process is very similar to <A href="" target="_blank"> resynchronization </A> and is very IOPS intensive. Depending on the differences between the primary and Replica VHDs, there could also be significant network traffic to transfer the delta changes from the primary site to the replica site. </P> <BR /> <P> </P> <BR /> <H3> When to use this initial replication method </H3> <BR /> <P> The biggest advantage that comes from using an existing VM is that the VHDs are already present on the replica site. But this is also based on the assumption that most of the data is already present in those VHDs. For example, when restoring the VM from backup, the backup copy would be a few hours behind the primary… perhaps a day behind. The assumption is that the delta changes [between the restored VM and the current primary VM] are small enough to be sent over the network. Thus the data difference between the primary VHDs and Replica VHDs should not be too large – otherwise Online IR would be more efficient from an IOPS perspective. </P> <BR /> <P> We also need to consider the size of the VHDs. If the primary VM has large VHDs then Online IR might not be preferred to begin with, and OOB IR would be used for initial replication. However, if the set of delta changes that can be sent over the network is small enough then this method could be quicker than OOB IR as well. Thus if the data difference between the primary VHDs and Replica VHDs is large <EM> and the VHDs are also large, </EM> then it might be simpler to use OOB IR. 
With large VHDs and a large data difference between primary and Replica VHDs, this replication method will consume a large number of IOPS and choke the network. </P> <BR /> <P> Thus a replication scenario that involves (1) <EM> large VHDs </EM> to be replicated and (2) a <EM> smaller set of delta changes for syncing </EM> [when compared to the size of the VHDs] makes using an existing virtual machine for initial replication an attractive option. </P> <BR /> <P> </P> <BR /> <H3> Making this happen with UI and PowerShell </H3> <BR /> <P> Using this option through the UI is extremely simple – you simply select the option <EM> “Use an existing virtual machine on the Replica server as the initial copy” </EM> . This option is presented to you during the <STRONG> Enable Replication </STRONG> wizard. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> </P> <BR /> <P> When using PowerShell, there is a sequence of 3 commands that need to be executed: </P> <BR /> <DIV id="codeSnippetWrapper"> <BR /> <DIV id="codeSnippet"> <BR /> PS C:\&gt; Enable-VMReplication -ComputerName -VMName Test-VM -AsReplica <BR /> <BR /> PS C:\&gt; Enable-VMReplication -ComputerName -VMName Test-VM -ReplicaServerName -ReplicaServerPort 80 -AuthenticationType Kerberos <BR /> <BR /> PS C:\&gt; Start-VMInitialReplication -ComputerName -VMName Test-VM -UseBackup <BR /> </DIV> <BR /> </DIV> <BR /> <P> The <STRONG> -UseBackup </STRONG> option in the <STRONG> Start-VMInitialReplication </STRONG> cmdlet is the one that indicates the use of an existing VM on the replica site for the purposes of initial replication. </P> <BR /> <P> As with the other methods of initial replication, you can also schedule when the initial replication process occurs. </P> <BR /> Working with clusters <BR /> <P> If the Replica VM is on a cluster, <EM> ensure that it is made Highly Available (HA) before any further actions are taken </EM> .
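</P> <BR /> <P> If you prefer PowerShell over Failover Cluster Manager for this step, making the VM highly available can be sketched as follows (assuming the FailoverClusters module on the replica cluster; the VM and cluster names are illustrative): </P> <DIV id="codeSnippetWrapper"> <DIV id="codeSnippet"> PS C:\&gt; Add-ClusterVirtualMachineRole -VMName Test-VM -Cluster ReplicaCluster </DIV> </DIV> <P>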
This is a prerequisite and it enables the VM to be picked up by the Failover Cluster service – and consequently by the Hyper-V Replica Broker. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> </P> <BR /> <P> Failing to do so will throw errors similar to this (Event ID 29410): </P> <BR /> <P> Cannot perform the requested Hyper-V Replica operation for virtual machine 'Test-VM' because the virtual machine is not highly available. Make virtual machine highly available using Microsoft Failover Cluster Manager and try again. (Virtual machine ID 6DDC63C1-0135-40CA-B998-A606D91080E9) </P> <BR /> <P> </P> <BR /> <P> Also, the replica server used in the cmdlets and the UI will be the name of the Hyper-V Replica Broker instance in the cluster (Note: setting the VM <EM> AsReplica </EM> has to be done with the actual replica host and not the broker on the replica site). </P> <BR /> <DIV id="codeSnippetWrapper"> <BR /> <DIV id="codeSnippet"> <BR /> PS C:\&gt; Enable-VMReplication -ComputerName -VMName Test-VM -AsReplica <BR /> <BR /> PS C:\&gt; Enable-VMReplication -ComputerName -VMName Test-VM -ReplicaServerName -ReplicaServerPort 80 -AuthenticationType Kerberos <BR /> <BR /> PS C:\&gt; Start-VMInitialReplication -ComputerName -VMName Test-VM -UseBackup <BR /> </DIV> <BR /> </DIV> <BR /> <P> </P> <BR /> <P> Which initial replication method do you use on your setup? We would be interested in hearing your feedback! </P> </BODY></HTML> Thu, 21 Mar 2019 23:40:48 GMT Aashish Ramdas 2019-03-21T23:40:48Z The Hyper-V Team at VMworld 2013 - Fun Times and Frozen Custard <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Aug 28, 2013 </STRONG> <BR /> Many VMworld 2013 attendees looked at Hyper-V a long time ago, and haven’t kept up with the progress our platform has made in recent years. If you count yourself among this group, we at Microsoft would love to show you how far we’ve come.
However, you needn’t take my word for it – I encourage you to find out for yourself. <BR /> <BR /> The Hyper-V team took their message and information to VMworld 2013 in a fun and creative way. Varun Chhabra, Senior Product Marketing Manager, Server and Tools, blogs about the experience in “ <A href="#" target="_blank"> Planes, trucks and frozen custard - The Hyper-V Team at VMworld 2013 </A> ”, where he had the opportunity to talk with customers, potential customers, and members of the VMware staff.&nbsp; It is an insightful view.&nbsp; Check it out! <BR /> <BR /> <P> And for those of you interested in downloading some of the other products and trying them, here are some resources to help you: </P> <UL> <LI> Windows Server 2012 R2 Preview <A href="#" target="_blank"> download </A> </LI> <LI> System Center 2012 R2 Preview <A href="#" target="_blank"> download </A> </LI> <LI> SQL Server 2014 Community Technology Preview 1 (CTP1) <A href="#" target="_blank"> download </A> </LI> <LI> Windows 8.1 Enterprise Preview <A href="#" target="_blank"> download </A> </LI> </UL> </BODY></HTML> Thu, 21 Mar 2019 23:40:25 GMT Virtualization-Team 2019-03-21T23:40:25Z Monitoring Hyper-V Replica using System Center Operations Manager <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Sep 13, 2013 </STRONG> <BR /> <P> Customers asked us whether they could have a monitoring mechanism for Hyper-V Replica in a rainy-day scenario. With System Center Operations Manager 2012 SP1, customers can now monitor Hyper-V Replica using a Management Pack available for free from the catalogue of SCOM.
This blog post will deal with adding the management packs to your SCOM setup to monitor Hyper-V Replica. If you haven’t completed your setup, return to this blog after setting up SCOM and installing agents. [You can refer to <A href="#" target="_blank"> Installing Operations Manager On a Single Server </A> and <A href="#" target="_blank"> Deploying SCOM </A> for installation, and <A href="#" target="_blank"> Managing Agents </A> for discovering and installing agents.] </P> <BR /> <P> Before we start monitoring Hyper-V Replica, we need to import the necessary management packs into SCOM. The SCOM catalogue provides a management pack named “Microsoft Windows Hyper-V 2012 Monitoring” to monitor the state changes of Hyper-V Replica. </P> <BR /> <H3> Import Management Pack </H3> <BR /> <P> To import this management pack: </P> <BR /> <P> 1. Go to the “Authoring Workspace” and click on “ <B> Import management packs </B> ”. This will open the “Import Management Packs” form. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> 2. Click on “ <STRONG> Add </STRONG> ” and from the drop-down select “ <STRONG> Add from Catalog … </STRONG> ”. This will open the Catalog menu. </P> <BR /> <P> 3. In the Find field, type “ <B> Hyper-V 2012 Monitoring </B> ” and click <STRONG> Search </STRONG> . </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> 4. Select “ <B> Microsoft Windows Hyper-V 2012 Monitoring </B> ”, click “ <B> Add </B> ”, and then click “ <B> OK </B> ”. </P> <BR /> <P> 5. If you come across a screen like the one below, it means that the required dependent management packs are not imported. Click on “ <B> Resolve </B> ”. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> 6. In the Dependency Warning that pops up, click <B> Resolve </B> . This action will list all the dependent management packs that need to be imported. Click <STRONG> Install </STRONG> . </P> <BR /> <P> 7.
Once all packs are imported, click on <B> Close </B> . </P> <BR /> <P> You can cross-verify that the management pack was imported by going to the Monitoring workspace and looking for “ <STRONG> Microsoft Windows Server Hyper-V </STRONG> ”: </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> To get a list of available monitors, click <B> Tools-&gt;Search-&gt;Monitors </B> and in the search field type “ <B> Replica </B> ”. This will list all 9 monitors provided by the management pack. </P> <BR /> <P> The supported monitors and the situations that trigger them are summarized in the following table: </P> <BR /> <TABLE> <TBODY><TR> <TD> <BR /> <P> <B> Monitor </B> </P> <BR /> </TD> <TD> <BR /> <P> <B> Root cause </B> </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> Hyper-V 2012 Replica Windows Firewall Rule Monitor </P> <BR /> </TD> <TD> <BR /> <P> The Windows Firewall rule to allow replication traffic to the Replica site has not been enabled. </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> Hyper-V 2012 Replication Critical Suspended state monitor. </P> <BR /> </TD> <TD> <BR /> <P> Network bandwidth is not sufficient to send the accumulated changes from the primary server to the replica server. </P> <BR /> <P> The storage subsystem on either the primary or replica site is not properly provisioned from a space and IOPS perspective. </P> <BR /> <P> The replication is paused on either the primary or replica VM. </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> Hyper-V 2012 Replication Reverse Replication not initiated. </P> <BR /> </TD> <TD> <BR /> <P> Failover has been initiated but reverse replication to the primary has not been initiated. </P> <BR /> <P> Replication is not enabled for the failed-over VM. </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> Hyper-V 2012 Replication not started monitor. </P> <BR /> </TD> <TD> <BR /> <P> Initial replication has not been completed after setting up a replication relationship.
</P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> Hyper-V 2012 Replica out of sync. </P> <BR /> </TD> <TD> Lack of network connectivity between the primary and replica servers. <BR /> <BR /> Network bandwidth is not sufficient to send the accumulated changes from the primary server to the replica server. <BR /> <BR /> The storage subsystem on either the primary or replica site is not properly provisioned from a space and IOPS perspective. <BR /> <BR /> The replication on either the primary or replica VM might be paused. </TD> </TR> <TR> <TD> <BR /> <P> Hyper-V 2012 Node's Replication broker configuration monitor. </P> <BR /> </TD> <TD> <BR /> <P> The cluster service stopped unexpectedly. </P> <BR /> <P> The Hyper-V Replica Broker is unable to come up on the destination node after a cluster migration. </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> Hyper-V 2012 Replica Network Listener </P> <BR /> </TD> <TD> <BR /> <P> There is a conflict on the network port configured for Replica, or SPN registration might have failed (Kerberos). </P> <BR /> <P> The certificate provided is either invalid or doesn’t meet the prerequisites (HTTPS). </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> Hyper-V 2012 Replication Resync Required state monitor. </P> <BR /> </TD> <TD> <BR /> <P> The VM went into Resync Required mode. </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> Hyper-V 2012 Replication Count Percent Monitor </P> <BR /> </TD> <TD> <BR /> <P> The replicating virtual machine has missed more than the configured percentage of replication cycles. </P> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> Viewing the properties of the monitor: <BR /> <P> To view the properties of a monitor, select it (you can select results from Search and click “ <B> View-&gt;Monitors </B> ”), right-click it, and click “ <B> Properties </B> ”. </P> <BR /> <P> General Properties: Defines the name, gives a description of the monitor, and mentions the target. It also mentions which parent monitor it belongs to.
(More on Monitors <A href="#" target="_blank"> here </A>.) </P> <BR /> <P> Health: Mentions the conditions that trigger a change in the monitor’s health state. </P> <BR /> <P> Alerting: Settings related to the generation of alerts are displayed here. </P> <BR /> <P> Diagnostic and Recovery: You can create a diagnostic task and configure whether it runs automatically or is triggered manually once an alert is generated. You can also create a recovery task in VBScript or JScript, or create a PowerShell cmdlet recovery task. </P> <BR /> <P> Configuration: Lists the important parameters of the monitor’s default properties. </P> <BR /> <P> Product Knowledge: This tab provides a summary of what the monitor tries to achieve, the causes of state changes, and a handful of resolutions to return to a healthy state. </P> <BR /> <P> <IMG src="" /> <A href="#" target="_blank"> </A> </P> <BR /> Changing the properties of the Monitor: <BR /> <P> You can control the way alerts are generated and their triggering properties. To change the properties of a monitor, select the monitor (you can select results from Search and click “ <B> View-&gt;Monitors </B> ”), click “ <B> Overrides-&gt;Override the Monitor </B> ”, and select the appropriate objects for which you want to change the monitor’s properties. </P> <BR /> <P> <IMG src="" /> <A href="#" target="_blank"> </A> </P> <BR /> <P> After you have selected the override option, you will be presented with the following UI. Select the property you want to change, check the “ <B> Override </B> ” checkbox, and change the value. You can select the management pack in which to store the updated monitor. </P> <BR /> <P> <IMG src="" /> <A href="#" target="_blank"> </A> </P> <BR /> <BR /> Diagnostic and Recovery Task: <BR /> <P> You can add a diagnostic and recovery task for a monitor through the Monitor properties UI as discussed above.
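As an aside, the monitors and their open alerts can also be enumerated from the Operations Manager Shell. A minimal sketch, assuming the OperationsManager module is installed and a connection to the management group already exists:

```powershell
# Enumerate the Hyper-V Replica monitors delivered by the management pack
Import-Module OperationsManager
Get-SCOMMonitor -DisplayName "*Replica*" |
    Select-Object DisplayName, Enabled |
    Format-Table -AutoSize

# List open (resolution state 0) alerts raised for Replica
Get-SCOMAlert -ResolutionState 0 |
    Where-Object { $_.Name -like "*Replica*" } |
    Select-Object Name, Severity, TimeRaised
```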
To create a diagnostic or recovery task, click on “ <B> Add-&gt;Diagnostic for Critical Health State </B> ” from the “ <B> Configure Diagnostic Tasks </B> ” section under the Diagnostic and Recovery tab. You can either run a command or a script as a diagnostic task. You can select the health state for which it will be executed, and you have the option of running the command or script automatically once the monitor state has changed. You can also edit or remove previously added tasks. </P> <BR /> <P> To trigger a diagnostic or recovery task manually for an alert, follow these steps: </P> <BR /> <P> 1. Select the alert in the Monitoring workspace and click on <B> Health Explorer </B>. </P> <BR /> <P> 2. In Health Explorer, select the monitor and, on the right-hand side, click on the “ <B> State Changes </B> ” tab. </P> <BR /> <P> 3. Diagnostic tasks are listed immediately after Context, while recovery tasks can be found at the bottom of the page. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> <IMG src="" /> </P> <BR /> <BR /> Management Pack from CodePlex: <BR /> <P> One of our field engineers, Cristian Edwards Sabathe, has developed a management pack which displays the state of the replication in a dashboard. Download the pack from <A href="#" target="_blank"> here </A>. </P> <BR /> <P> Once you have downloaded the pack, follow these steps to import it into SCOM: </P> <BR /> <P> 1. Go to the Administration workspace and click on Import Management Packs. </P> <BR /> <P> 2. Click on “ <B> Add </B> ” and select the add-from-disk option. </P> <BR /> <P> 3. Browse to the folder to which you downloaded the packs from the link above. </P> <BR /> <P> 4. If dependent management packs are missing, the UI will report it. Click on “ <B> Resolve </B> ” to import all the dependencies. </P> <BR /> Hyper-V Replica Dashboard: <BR /> <P> The Hyper-V Replica dashboard will be present in the Monitoring view. It is part of the “ <B> Hyper-V MP Extension 2012-&gt; Hyper V Replica </B> ” folder.
The dashboard displays the source of the virtual machine and its health state using icons. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> The Primary VMs/Recovery VMs view shows the primary VMs, their health state, replication state and replication health (1 = normal; 2 = warning; 3 = critical), the primary and recovery servers for the VM, and the mode of replication, along with many other useful fields that can be customized using the “ <B> Personalize view </B> ” option. </P> <BR /> <P> <IMG src="" /> </P> <BR /> Notification of alerts <BR /> <P> Alerts are generated whenever a state change occurs. Great! But do I have to watch the SCOM screen 24x7 to see if an alert is generated? Fortunately, the answer is <STRONG> NO </STRONG>. SCOM provides a subscription mechanism through which a user gets the alert via email, SMS, or IM, or can raise a ticket. </P> <BR /> <P> 1. To create a subscription, select any alert and select “ <B> Subscription-&gt;Create </B> ” on the right-hand side of the UI in the Authoring workspace. This will open the Notification Subscription wizard. </P> <BR /> <P> <IMG src="" /> <A href="#" target="_blank"> </A> </P> <BR /> <P> 2. In the wizard, specify a name and description for the subscription and click <B> Next </B>. </P> <BR /> <P> 3. In the Conditions box, check “created by specific rules or monitors”. In the criteria description box, click on the already existing monitor to bring up the “ <B> Monitor and Rule Search </B> ” form. </P> <BR /> <P> <IMG src="" /> <A href="#" target="_blank"> </A> </P> <BR /> <P> 4. In the Monitor and Rule Search form, type “ <B> Replica </B> ” in the Filter By field and click on “ <B> Search </B> ”. This will list all 9 monitors under available rules and monitors. Select the monitors whose alerts you want to be notified about and add them by clicking the “ <B> Add </B> ” button. Once you have added all the desired monitors, click on “ <B> OK </B> ”.
</P> <BR /> <P> <IMG src="" /> <A href="#" target="_blank"> </A> </P> <BR /> <P> 5. Click “ <B> Next </B> ” in the Notification Subscription wizard and complete it as per your subscription requirements. You can refer to <A href="#" target="_blank"> How to Create Notification Subscribers </A> and <A href="#" target="_blank"> Subscribing to Alert Notifications </A> for details on completing the wizard. </P> <BR /> <P> In summary, management packs from the catalog and CodePlex provide a great way to monitor Hyper-V Replica through System Center Operations Manager and integrate with it seamlessly. </P> </BODY></HTML> Thu, 21 Mar 2019 23:40:20 GMT Trinadh Kotturu 2019-03-21T23:40:20Z Replicating fixed disks to dynamic disks in Hyper-V Replica <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Sep 24, 2013 </STRONG> <BR /> <P> A recent conversation with a hosting provider using Hyper-V Replica brought an interesting question to the fore. The hosting provider’s services were aimed primarily at Small and Medium Businesses (SMBs), with one service being DR-as-a-Service. A lot of the virtual disks being replicated were fixed, had sizes greater than 1 TB, and were mostly <EM> empty </EM>, as the space had been carved out and reserved for future growth. However, this situation presented a real problem for our hosting provider – storing a whole bunch of large, empty virtual disks eats up real resources. It also means that the investment in physical resources is made upfront rather than gradually over a period of time. Surely there had to be a better way, right? Well, this wouldn’t be a very good blog post if there wasn’t a better way! :) </P> <P> A great way to trim those fat, fixed virtual disks is to convert them into dynamic disks, and use the dynamic disks on the Replica side. So replication would happen from the SMB datacenter (fixed disk) to the hosting provider’s datacenter (dynamic disk).
Dynamic disks take up only as much physical storage as the data actually present inside the disk, making them very storage-efficient and very useful to hosting providers. The icing on the cake is that Hyper-V Replica works great in such a configuration! </P> <P> But what about the network – does this method help save any bandwidth? At the time of enabling replication, the compression option is selected by default. This means that when Hyper-V Replica encounters large swathes of empty space in the virtual disk, it is able to compress this data before sending it across. So the good news is that excessive bandwidth usage is not a concern to begin with. </P> <P> One of the early decisions to be made is whether this change is done on the primary side by the customer, or on the replica side by the hosting provider. Asking each customer to change from fixed disks to dynamic disks would be a long-drawn-out process – and customers might want to keep their existing configuration. The more likely scenario is that the hosting provider will make the changes, and it will be transparent to the customer that is replicating. </P> <P> So let’s deep-dive into how to make this happen. </P> <H3> Converting a disk from fixed to dynamic </H3> <P> This process is simple enough, and can be done through the <STRONG> Edit Disk </STRONG> wizard in the Hyper-V Manager UI. Choose the virtual disk that needs editing, choose <STRONG> Convert </STRONG> as the action to be taken, and pick <STRONG> Dynamically expanding </STRONG> as the target disk type. Continue to the end and your disk will be converted from fixed to dynamic. </P> <P> <STRONG> NOTE 1: </STRONG> An important constraint to remember is that the target disk format should be the same as the source disk format. This means that you should pick VHD as the disk format if your fixed disk has a VHD extension, and VHDX if your fixed disk has a VHDX extension.
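Before converting, it can help to confirm the source disk’s format and type so that NOTE 1 is satisfied. A quick sketch using Get-VHD (the path is illustrative):

```powershell
# Inspect the source disk before conversion: VhdFormat must be preserved (VHD vs. VHDX),
# and comparing FileSize with Size hints at how much space a dynamic disk could save.
Get-VHD -Path 'C:\FixedDisk.vhdx' |
    Select-Object VhdFormat, VhdType,
        @{ Name = 'FileSizeGB'; Expression = { $_.FileSize / 1GB } },
        @{ Name = 'MaxSizeGB';  Expression = { $_.Size / 1GB } }
```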
</P> <P> <STRONG> NOTE 2: </STRONG> The name of your dynamic disk should be <EM> exactly the same </EM> as the name of your fixed disk. </P> <P> <IMG src="" /> </P> <P> <EM> (The destination location has been changed so that the same filename can be kept) </EM> </P> <P> To get the same result using PowerShell, use the following command: </P> <DIV id="codeSnippetWrapper"> <DIV id="codeSnippet"> PS C:\&gt; Convert-VHD -Path c:\FixedDisk.vhdx -DestinationPath f:\FixedDisk.vhdx -VHDType Dynamic <BR /> </DIV> <BR /> </DIV> <BR /> <BR /> <H3> Making it work with Hyper-V Replica </H3> <BR /> <BR /> <OL> <BR /> <LI> Enable replication from the customer to the hosting provider using online IR or <A href="#" target="_blank"> out-of-band IR </A>. </LI> <BR /> <BR /> <LI> The hosting provider waits for the IR to complete. </LI> <BR /> <BR /> <LI> The hosting provider can then pause the replication at any time on the Replica server – this prevents HRL logs from being applied to the disk while it is being converted. </LI> <BR /> <BR /> <LI> The hosting provider can then convert the disk from fixed to dynamic using the technique mentioned above. Ensure that there is adequate storage space to hold both disks until the process is complete. </LI> <BR /> <BR /> <LI> The hosting provider then replaces the fixed disk with the dynamic disk <EM> at the same path and with the same name. </EM> </LI> <BR /> <BR /> <LI> The hosting provider resumes replication on the Replica site. </LI> <BR /> </OL> <BR /> <BR /> <P> Now Hyper-V Replica will use the dynamic disk seamlessly, and the hosting provider’s storage consumption is reduced. </P> <BR /> <BR /> Additional optimization for out-of-band IR <BR /> <BR /> <P> In out-of-band IR, the data is transferred to the Replica site using an external medium like a USB device. This makes it possible to convert the disk from fixed to dynamic <EM> before importing it on the Replica site.
</EM> The disks on the external medium are used directly as the source, which removes the need for additional storage while the conversion operation completes (step 4 in the process above). Thus the hosting provider can import and store only the dynamic disk. </P> <BR /> <BR /> <P> Do try this out and let us know your feedback! </P> </BODY></HTML> Thu, 21 Mar 2019 23:38:12 GMT Aashish Ramdas 2019-03-21T23:38:12Z Hyper-V Replica BPA Rules <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Oct 01, 2013 </STRONG> <BR /> <P> A frequent question from our customers is whether there are standard “best practices” when deploying Hyper-V Replica (or any Windows Server role, for that matter). These questions come in many avatars: does the product group have any configuration gotchas based on internal testing, is my server properly configured, should I change any replication configuration, and so on. </P> <P> <STRONG> Best Practices Analyzer (BPA) </STRONG> is a powerful inbox tool which scans the server for any potential ‘best practice’ violations. The report describes each problem and also provides a recommendation to fix the issue. You can use the BPA both from the UI and from PowerShell. </P> <P> From the Server Manager Dashboard, click on <STRONG> Hyper-V </STRONG>, scroll down to the <STRONG> Best Practices Analyzer </STRONG> option, click on <STRONG> Tasks </STRONG>, followed by <STRONG> Start BPA Run </STRONG>. </P> <P> <IMG src="" /> </P> <P> Once the scan is complete, you can filter the issues based on Warnings or Errors, Excluded Results, or Compliant Results.
</P> <P> The same can be done through PowerShell by executing the following cmdlets </P> <DIV id="codeSnippetWrapper"> <DIV id="codeSnippet"> Invoke-BpaModel -ModelId Microsoft/Windows/Hyper-V <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> Get-BpaResult -ModelId Microsoft/Windows/Hyper-V <BR /> </DIV> <BR /> </DIV> <BR /> <BR /> <P> To filter non-compliant rules, issue the following cmdlet </P> <BR /> <BR /> <DIV id="codeSnippetWrapper"> <BR /> <DIV id="codeSnippet"> <BR /> Get-BpaResult -ModelId Microsoft/Windows/Hyper-V -Filter Noncompliant <BR /> </DIV> <BR /> </DIV> <BR /> <BR /> <P> In a Windows Server 2012 server, the following rules constitute the Hyper-V BPA. The Hyper-V Replica specific rules are between rules 37-54. </P> <BR /> <BR /> <DIV id="codeSnippetWrapper"> <BR /> <P> RuleId Title <BR /> <BR /> ------ ----- <BR /> <BR /> <BR /> 3&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; The Hyper-V Virtual Machine Management Service should be configured to start automatically <BR /> <BR /> <BR /> 4&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Hyper-V should be the only enabled role <BR /> <BR /> <BR /> 5&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; The Server Core installation option is recommended for servers running Hyper-V <BR /> <BR /> <BR /> 6&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Domain membership is recommended for servers running Hyper-V <BR /> <BR /> <BR /> 7&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Avoid pausing a virtual machine <BR /> <BR /> <BR /> 8&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Offer all available integration services to virtual machines <BR /> <BR /> <BR /> 9&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Storage controllers should be enabled in virtual machines to provide access to attached storage <BR /> <BR /> <BR /> 10&nbsp;&nbsp;&nbsp;&nbsp; Display adapters should be enabled in virtual machines to provide video capabilities <BR /> <BR /> <BR /> 11&nbsp;&nbsp;&nbsp;&nbsp; Run the current version of integration services in all guest operating systems <BR /> <BR /> <BR /> 12&nbsp;&nbsp;&nbsp;&nbsp; Enable all integration 
services in virtual machines <BR /> <BR /> <BR /> 13&nbsp;&nbsp;&nbsp;&nbsp; The number of logical processors in use must not exceed the supported maximum <BR /> <BR /> <BR /> 14&nbsp;&nbsp;&nbsp;&nbsp; Use RAM that provides error correction <BR /> <BR /> <BR /> 15&nbsp;&nbsp;&nbsp;&nbsp; The number of running or configured virtual machines must be within supported limits <BR /> <BR /> <BR /> 16&nbsp;&nbsp;&nbsp;&nbsp; Second-level address translation is required when running virtual machines enabled for RemoteFX <BR /> <BR /> <BR /> 17&nbsp;&nbsp;&nbsp;&nbsp; At least one GPU on the physical computer should support RemoteFX and meet the minimum requirements for DirectX when virtual machines are configured with a RemoteFX 3D video adapter <BR /> <BR /> <BR /> 18&nbsp;&nbsp;&nbsp;&nbsp; Avoid installing RemoteFX on a computer that is configured as an Active Directory domain controller <BR /> <BR /> <BR /> 19&nbsp;&nbsp;&nbsp;&nbsp; Use at least SMB protocol version 3.0 for file shares that store files for virtual machines. <BR /> <BR /> <BR /> 20&nbsp;&nbsp;&nbsp;&nbsp; Use at least SMB protocol version 3.0 configured for continuous availability on file shares that store files for virtual machines. 
<BR /> <BR /> <BR /> 37&nbsp;&nbsp;&nbsp;&nbsp; A Replica server must be configured to accept replication requests <BR /> <BR /> <BR /> 38&nbsp;&nbsp;&nbsp;&nbsp; Replica servers should be configured to identify specific primary servers authorized to send replication traffic <BR /> <BR /> <BR /> 39&nbsp;&nbsp;&nbsp;&nbsp; Compression is recommended for replication traffic <BR /> <BR /> <BR /> 40&nbsp;&nbsp;&nbsp;&nbsp; Configure guest operating systems for VSS-based backups to enable application-consistent snapshots for Hyper-V Replica <BR /> <BR /> <BR /> 41&nbsp;&nbsp;&nbsp;&nbsp; Integration services must be installed before primary or Replica virtual machines can use an alternate IP address after a failover <BR /> <BR /> <BR /> 42&nbsp;&nbsp;&nbsp;&nbsp; Authorization entries should have distinct tags for primary servers with virtual machines that are not part of the same security group. <BR /> <BR /> <BR /> 43&nbsp;&nbsp;&nbsp;&nbsp; To participate in replication, servers in failover clusters must have a Hyper-V Replica Broker configured <BR /> <BR /> <BR /> 44&nbsp;&nbsp;&nbsp;&nbsp; Certificate-based authentication is recommended for replication. 
<BR /> <BR /> <BR /> 45&nbsp;&nbsp;&nbsp;&nbsp; Virtual hard disks with paging files should be excluded from replication <BR /> <BR /> <BR /> 46&nbsp;&nbsp;&nbsp;&nbsp; Configure a policy to throttle the replication traffic on the network <BR /> <BR /> <BR /> 47&nbsp;&nbsp;&nbsp;&nbsp; Configure the Failover TCP/IP settings that you want the Replica virtual machine to use in the event of a failover <BR /> <BR /> <BR /> 48&nbsp;&nbsp;&nbsp;&nbsp; Resynchronization of replication should be scheduled for off-peak hours <BR /> <BR /> <BR /> 49&nbsp;&nbsp;&nbsp;&nbsp; Certificate-based authentication is configured, but the specified certificate is not installed on the Replica server or failover cluster nodes <BR /> <BR /> <BR /> 50&nbsp;&nbsp;&nbsp;&nbsp; Replication is paused for one or more virtual machines on this server <BR /> <BR /> <BR /> 51&nbsp;&nbsp;&nbsp;&nbsp; Test failover should be attempted after initial replication is complete <BR /> <BR /> <BR /> 52&nbsp;&nbsp;&nbsp;&nbsp; Test failovers should be carried out at least monthly to verify that failover will succeed and that virtual machine workloads will operate as expected after failover <BR /> <BR /> <BR /> 53&nbsp;&nbsp;&nbsp;&nbsp; VHDX-format virtual hard disks are recommended for virtual machines that have recovery history enabled in replication settings <BR /> <BR /> <BR /> 54&nbsp;&nbsp;&nbsp;&nbsp; Recovery snapshots should be removed after failover <BR /> <BR /> <BR /> 55&nbsp;&nbsp;&nbsp;&nbsp; At least one network for live migration traffic should have a link speed of at least 1 Gbps <BR /> <BR /> <BR /> 56&nbsp;&nbsp;&nbsp;&nbsp; All networks for live migration traffic should have a link speed of at least 1 Gbps <BR /> <BR /> <BR /> 57&nbsp;&nbsp;&nbsp;&nbsp; Virtual machines should be backed up at least once every week <BR /> <BR /> <BR /> 58&nbsp;&nbsp;&nbsp;&nbsp; Ensure sufficient physical disk space is available when virtual machines use dynamically expanding virtual hard disks <BR /> <BR 
/> <BR /> 59&nbsp;&nbsp;&nbsp;&nbsp; Ensure sufficient physical disk space is available when virtual machines use differencing virtual hard disks <BR /> <BR /> <BR /> 60&nbsp;&nbsp;&nbsp;&nbsp; Avoid alignment inconsistencies between virtual blocks and physical disk sectors on dynamic virtual hard disks or differencing disks <BR /> <BR /> <BR /> 61&nbsp;&nbsp;&nbsp;&nbsp; VHD-format dynamic virtual hard disks are not recommended for virtual machines that run server workloads in a production environment <BR /> <BR /> <BR /> 62&nbsp;&nbsp;&nbsp;&nbsp; Avoid using VHD-format differencing virtual hard disks on virtual machines that run server workloads in a production environment. <BR /> <BR /> <BR /> 63&nbsp;&nbsp;&nbsp;&nbsp; Use all virtual functions for networking when they are available <BR /> <BR /> <BR /> 64&nbsp;&nbsp;&nbsp;&nbsp; The number of running virtual machines configured for SR-IOV should not exceed the number of virtual functions available to the virtual machines <BR /> <BR /> <BR /> 65&nbsp;&nbsp;&nbsp;&nbsp; Configure virtual machines to use SR-IOV only when supported by the guest operating system <BR /> <BR /> <BR /> 66&nbsp;&nbsp;&nbsp;&nbsp; Ensure that the virtual function driver operates correctly when a virtual machine is configured to use SR-IOV <BR /> <BR /> <BR /> 67&nbsp;&nbsp;&nbsp;&nbsp; Configure the server with a sufficient amount of dynamic MAC addresses <BR /> <BR /> <BR /> 68&nbsp;&nbsp;&nbsp;&nbsp; More than one network adapter should be available <BR /> <BR /> <BR /> 69&nbsp;&nbsp;&nbsp;&nbsp; All virtual network adapters should be enabled <BR /> <BR /> <BR /> 70&nbsp;&nbsp;&nbsp;&nbsp; Enable all virtual network adapters configured for a virtual machine <BR /> <BR /> <BR /> 72&nbsp;&nbsp;&nbsp;&nbsp; Avoid using legacy network adapters when the guest operating system supports network adapters <BR /> <BR /> <BR /> 73&nbsp;&nbsp;&nbsp;&nbsp; Ensure that all mandatory virtual switch extensions are available <BR /> <BR /> <BR /> 
74&nbsp;&nbsp;&nbsp;&nbsp; A team bound to a virtual switch should only have one exposed team interface <BR /> <BR /> <BR /> 75&nbsp;&nbsp;&nbsp;&nbsp; The team interface bound to a virtual switch should be in default mode <BR /> <BR /> <BR /> 76&nbsp;&nbsp;&nbsp;&nbsp; VMQ should be enabled on VMQ-capable physical network adapters bound to an external virtual switch <BR /> <BR /> <BR /> 77&nbsp;&nbsp;&nbsp;&nbsp; One or more network adapters should be configured as the destination for Port Mirroring <BR /> <BR /> <BR /> 78&nbsp;&nbsp;&nbsp;&nbsp; One or more network adapters should be configured as the source for Port Mirroring <BR /> <BR /> <BR /> 79&nbsp;&nbsp;&nbsp;&nbsp; PVLAN configuration on a virtual switch must be consistent <BR /> <BR /> <BR /> 80&nbsp;&nbsp;&nbsp;&nbsp; The WFP virtual switch extension should be enabled if it is required by third party extensions <BR /> <BR /> <BR /> 81&nbsp;&nbsp;&nbsp;&nbsp; A virtual SAN should be associated with a physical host bus adapter <BR /> <BR /> <BR /> 82&nbsp;&nbsp;&nbsp;&nbsp; Virtual machines configured with a virtual Fibre Channel adapter should be configured for high availability to the Fibre Channel-based storage <BR /> <BR /> <BR /> 83&nbsp;&nbsp;&nbsp;&nbsp; Avoid enabling virtual machines configured with virtual Fibre Channel adapters to allow live migrations when there are fewer paths to Fibre Channel logical units (LUNs) on the destination than on the source <BR /> <BR /> <BR /> 106&nbsp;&nbsp;&nbsp; Avoid using snapshots on a virtual machine that runs a server workload in a production environment <BR /> <BR /> <BR /> 107&nbsp;&nbsp;&nbsp; Configure a virtual machine with a SCSI controller to be able to hot plug and hot unplug storage <BR /> <BR /> <BR /> 108&nbsp;&nbsp;&nbsp; Configure SCSI controllers only when supported by the guest operating system <BR /> <BR /> <BR /> 109&nbsp;&nbsp;&nbsp; Avoid configuring virtual machines to allow unfiltered SCSI commands <BR /> <BR /> <BR /> 
110&nbsp;&nbsp;&nbsp; Avoid using virtual hard disks with a sector size less than the sector size of the physical storage that stores the virtual hard disk file <BR /> <BR /> <BR /> 111&nbsp;&nbsp;&nbsp; Avoid configuring a child storage resource pool when the directory path of the child is not a subdirectory of the parent <BR /> <BR /> <BR /> 112&nbsp;&nbsp;&nbsp; Avoid mapping one storage path to multiple resource pools. <BR /> <BR /> <BR /> </P> <BR /> </DIV> <BR /> <BR /> <P> </P> <BR /> <BR /> <P> Go ahead and run the BPA; you might learn something interesting from the non-compliant rules! Fix the errors reported in the non-compliant rules and re-run the scan. The BPA scan is non-intrusive and should not impact your production workload. </P> </BODY></HTML> Thu, 21 Mar 2019 23:37:57 GMT Virtualization-Team 2019-03-21T23:37:57Z Replica Clusters behind a NAT <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Oct 10, 2013 </STRONG> <BR /> <P> When a Hyper-V Replica Broker is configured in your DR site to accept replication traffic, Hyper-V, along with Failover Clustering, intelligently percolates these settings to all the nodes of the cluster. A network listener is started on each node of the cluster on the configured port. </P> <P> <IMG src="" /> </P> <P> </P> <P> </P> <P> While this seamless configuration works for a majority of our customers, we have heard from customers about the need to bring up the network listener on a different port on <STRONG> each </STRONG> of the replica servers (eg: port 8081 in, port 8082 in and so on). One such scenario involves placing a NAT in front of the Replica cluster, with port-based rules to redirect traffic to the appropriate servers. </P> <P> Before going any further, a quick refresher on how the placement logic and traffic redirection happen in Hyper-V Replica.
</P> <P> 1) When the primary server contacts the Hyper-V Replica Broker, it (the broker) finds a replica server on which the replica VM can reside and returns the FQDN of the replica server (eg: and the port to which the replication traffic needs to be sent. </P> <P> 2) Any subsequent communication happens between the primary server and the replica server ( without the Hyper-V Replica Broker’s involvement. </P> <P> 3) If the VM migrates from to, the replication between the primary server and fails as the VM is unavailable on After retrying a few times, the primary server contacts the Hyper-V Replica Broker indicating that it is unable to find the VM on the replica server ( In response, the Hyper-V Replica Broker looks into the cluster and returns the information that the replica VM now resides in It also provides the port number as part of this response. Replication is now established to </P> <P> It’s worth calling out that the above steps happen without any manual intervention. </P> <P> In a NAT environment where port-based address translation is used (i.e., traffic is routed to a particular server based on the destination port), the above communication mechanism fails. This is because the network listener on each of the servers (R1, R2, comes up on the <STRONG> same port </STRONG>. As the Hyper-V Replica Broker returns the same port number in each of its responses (to the primary server), any incoming request which hits the NAT server cannot be uniquely identified. </P> <P> Needless to say, if there is a one-to-one mapping between the ‘public’ IP addresses exposed by the NAT and the ‘private’ IP addresses of the servers (R1, R2…, the default configuration works fine.
</P> <P> So, how do we address this problem? Consider a 3-node cluster of replica servers (R1, R2 and R3), each with its own name and IP address. </P> <P> 1) Create the Hyper-V Replica Broker resource using the following cmdlets, with a static IP address of your choice: </P> <DIV id="codeSnippetWrapper"> <DIV id="codeSnippet"> $BrokerName = "HVR-Broker" <BR /> <BR /> # Substitute the static IP address of your choice for $BrokerIP <BR /> Add-ClusterServerRole -Name $BrokerName -StaticAddress $BrokerIP <BR /> <BR /> Add-ClusterResource -Name "Virtual Machine Replication Broker" -Type "Virtual Machine Replication Broker" -Group $BrokerName <BR /> <BR /> Add-ClusterResourceDependency "Virtual Machine Replication Broker" $BrokerName <BR /> <BR /> Start-ClusterGroup $BrokerName <BR /> </DIV> <BR /> </DIV> <BR /> <BR /> <P> 2) <B> Hash table of server name and port: </B> Create a hash table mapping each server name to the port on which the listener should come up on that server (the FQDNs below are examples; substitute your own server and broker names): </P> <BR /> <BR /> <DIV id="codeSnippetWrapper"> <BR /> <DIV id="codeSnippet"> <BR /> $portmap = @{ "R1.contoso.com" = 8081; "R2.contoso.com" = 8082; "R3.contoso.com" = 8083; "HVR-Broker.contoso.com" = 8080 } <BR /> </DIV> <BR /> </DIV> <BR /> <BR /> <P> 3) Enable the replica server to receive replication traffic by providing the hash table as an input: </P> <BR /> <BR /> <DIV id="codeSnippetWrapper"> <BR /> <DIV id="codeSnippet"> <BR /> Set-VMReplicationServer -ReplicationEnabled $true -ReplicationAllowedFromAnyServer $true ` <BR /> -DefaultStorageLocation "C:\ClusterStorage\Volume1" ` <BR /> -AllowedAuthenticationType Kerberos ` <BR /> -KerberosAuthenticationPortMapping $portmap <BR /> </DIV> <BR /> </DIV> <BR /> <BR /> <P> 4) <B> NAT table: </B> Configure the NAT device with the same mapping as provided to the Set-VMReplicationServer cmdlet. The picture below applies to an RRAS-based NAT device – a similar configuration can be done on a device from any vendor of your choice. The screenshot below captures the mapping for the Hyper-V Replica Broker.
Similar mapping needs to be done for each of the replica servers. </P> <IMG src="" /> <BR /> <BR /> <P> 5) Ensure that the primary server can resolve the replica servers and broker to the public IP address of the NAT device, and ensure that the appropriate firewall rules have been enabled. </P> <BR /> <BR /> <P> That’s it – you are all set! Replication works seamlessly as before, and now you have the capability to reach the Replica server in a port-based NAT environment. </P> </BODY></HTML> Thu, 21 Mar 2019 23:37:42 GMT Virtualization-Team 2019-03-21T23:37:42Z What’s new in Hyper-V Replica in Windows Server 2012 R2 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Oct 22, 2013 </STRONG> <BR /> <P> 18th October 2013 marked the General Availability of Windows Server 2012 R2. The teams have accomplished an amazing set of features in this short release cycle, and Brad’s post @ <A href="#" title="" target="_blank"> </A> captures the investments made across the board. We encourage you to update to the latest version and share your feedback. </P> <BR /> <P> This post captures the top 8 improvements made to <STRONG> Hyper-V Replica in Windows Server 2012 R2. </STRONG> We will be diving deep into each of these features in the coming weeks through blog posts and TechNet articles. </P> <BR /> <H2> Seamless Upgrade </H2> <BR /> <P> You can upgrade from Windows Server 2012 to Windows Server 2012 R2 <STRONG> without having to re-IR your protected VMs </STRONG>. With new features such as cross-version live migration, it is easy to maintain your DR story across OS upgrades. You can also choose to upgrade your primary site and replica site at different times, as Hyper-V Replica will replicate your virtual machines from a Windows Server 2012 environment to a Windows Server 2012 R2 environment.
</P> <BR /> <H2> 30-second replication frequency </H2> <BR /> <P> Windows Server 2012 allowed customers to replicate their virtual machines at a preset 5-minute replication frequency. Our aspiration to bring down this replication frequency was backed by customers’ asks for the flexibility to set different replication frequencies for different virtual machines. With Windows Server 2012 R2, you can now asynchronously replicate your virtual machines at a <STRONG> 30-second, 5-minute or 15-minute </STRONG> frequency. <BR /> <BR /> <IMG src="" /> </P> <BR /> <H2> Additional Recovery Points </H2> <BR /> <P> Customers can now have longer retention with <STRONG> 24 recovery points </STRONG>. These 24 recovery points (up from 16 in Windows Server 2012) are spaced at one-hour intervals. <BR /> <BR /> <IMG src="" /> </P> <BR /> <H2> Linux guest OS support </H2> <BR /> <P> Hyper-V Replica has been agnostic to the application and guest OS since its first release. However, certain capabilities were unavailable on non-Windows guest OSes in its initial avatar. With Windows Server 2012 R2, we are tightly integrated with non-Windows OSes to provide file-system-consistent snapshots and to inject IP addresses as part of the failover workflow. </P> <BR /> <H2> Extended Replication </H2> <BR /> <P> You can now ‘extend’ your replica copy to a third site using the ‘Extended Replication’ feature. The functionality provides an added layer of protection for recovering from a disaster. You can now have a replica copy within your site (eg: ClusterA-&gt;ClusterB in your primary datacenter) and extend the replication for the protected VMs from ClusterB-&gt;ClusterC (in your secondary datacenter). <BR /> <BR /> <IMG src="" /> </P> <BR /> <P> To recover from a disaster in ClusterA, you can now quickly fail over to the VMs in ClusterB and continue to protect them to ClusterC.
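As a sketch of how the new replication frequency and extended replication fit together from PowerShell, the following assumes hypothetical names (VM01, ClusterB-Broker, ClusterC-Broker); substitute your own VM and brokers:

```powershell
# On the primary site (ClusterA): replicate VM01 to ClusterB every 30 seconds,
# keeping 24 hourly recovery points.
Enable-VMReplication -VMName 'VM01' `
    -ReplicaServerName 'ClusterB-Broker.contoso.com' -ReplicaServerPort 80 `
    -AuthenticationType Kerberos `
    -ReplicationFrequencySec 30 -RecoveryHistory 24
Start-VMInitialReplication -VMName 'VM01'

# On the replica site (ClusterB): extend the replica copy of VM01 to ClusterC.
# Extended replication supports 5- or 15-minute frequencies (not 30 seconds).
Enable-VMReplication -VMName 'VM01' `
    -ReplicaServerName 'ClusterC-Broker.contoso.com' -ReplicaServerPort 80 `
    -AuthenticationType Kerberos `
    -ReplicationFrequencySec 300
```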
More on extended replication capabilities in the coming weeks. </P> <BR /> <H2> Performance Improvements </H2> <BR /> <P> Significant architectural investments were made to lower the IOPS and storage resources required on the Replica server. The most important of these was to move away from snapshot-based recovery points to “undo logs” based recovery points. These changes have a profound impact on the way the system scales up and consumes resources, and will be covered in greater detail in the coming weeks. </P> <BR /> <H2> Online Resize </H2> <BR /> <P> <STRONG> </STRONG> In Windows Server 2012 Hyper-V Replica was closely integrated with the various Hyper-V features such as VM migration, storage migration etc. Windows Server 2012 R2 allows you to resize a running VM and if your VM is protected – you can continue to replicate the virtual machine without having to re-IR the VM. <BR /> <STRONG> </STRONG> </P> <BR /> <H2> Hyper-V Recovery Manager </H2> <BR /> <P> We are also excited to announce the paid preview of <STRONG> Hyper-V Recovery Manager (HRM) </STRONG> ( <A href="#" title="" target="_blank"> </A> ) <STRONG> . </STRONG> This is a Windows Azure Service that allows you to manage and orchestrate various DR workflows between the primary and recovery datacenters. HRM does * <STRONG> not </STRONG> * replicate virtual machines to Windows Azure – your data is replicated directly between the primary and recovery datacenter. HRM is the disaster recovery “management head” which is offered as a service on Azure. </P> </BODY></HTML> Thu, 21 Mar 2019 23:37:16 GMT Virtualization-Team 2019-03-21T23:37:16Z Online resize of virtual disks attached to replicating virtual machines <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Nov 14, 2013 </STRONG> <BR /> <P> In Windows Server 2012 R2, Hyper-V added the ability to <A href="#" target="_blank"> resize the virtual disks attached to a running virtual machine </A> without having to shutdown the virtual machine. 
In this blog post we will talk about how this feature works with Hyper-V Replica, the benefits of this capability, and how to make the most of it. </P> <H3> Works better with Hyper-V Replica </H3> <P> There is an obvious benefit in having the ability to resize a virtual disk while the VM is running – there is no need for downtime of the VM workload. There is however a subtle nuance and very key benefit for virtual machines that have also been enabled for replication – <EM> there is no need to resync the VM after modifying the disk, and definitely no need to delete and re-enable replication! </EM> </P> <P> There is some history to this that needs explaining. Starting with Windows Server 2012, Hyper-V Replica provided a way to track the changes that a guest OS was making on the disks attached to the VM – and then replicated these changes to provide DR. However the tracking and replication was applicable only to <EM> running </EM> VMs. This meant that when a VM was switched off, Hyper-V Replica had no way to track and replicate any changes that might be done to the virtual disks <EM> outside of the guest </EM> . To guarantee that the replica VM was always in sync with the primary, Hyper-V Replica put the virtual machine into <EM> “Resynchronization Required” </EM> state if it suspected that the primary virtual disks had been modified offline. </P> <P> So in Windows Server 2012, the immediate consequence of resizing your disk offline is also that the VM will go into resync when started up again. Resyncing the VM could get very expensive in terms of IOPS consumption and you would lose any additional recovery points that were already created. </P> <P> Naturally, we made sure that it all went away in the Windows Server 2012 R2 release - no workload downtime, no resync, no loss of additional recovery points! 
</P> <H3> Making it happen – workflows for replicating VMs </H3> <P> The resize of the virtual disks needs to be done on each site separately; resizing the primary site virtual disks doesn’t automatically resize the replica site virtual disks. Here is the suggested workflow for making this happen: </P> <OL> <LI> <P> On the primary site, select the virtual disk that needs to be resized and use the <EM> Edit disk wizard </EM> to increase/decrease the size of the disk. You can also use the <A href="#" target="_blank"> Resize-VHD </A> PowerShell cmdlet. At this point, replication isn’t really impacted and continues uninterrupted. This is because the newly created space shows up as “Unallocated”. That is, it has not been formatted and presented to the guest workload to use, and so there are no writes to that region that need to be tracked and replicated. </P> </LI> <LI> <P> On the replica site, select the corresponding virtual disk and resize it using the <EM> Edit disk wizard </EM> or the Resize-VHD PowerShell cmdlet. Not resizing the replica site virtual disk can cause replication errors in the future – and we will cover that in greater detail below. </P> </LI> <LI> <P> Use <EM> Disk Management </EM> or an equivalent tool in the guest VM to consume this unallocated space. </P> </LI> </OL> <P> Voila! That’s it. Nothing extraordinary required for replicating VMs. Sounds too good to be true? Well, it really is that simple :). In fact, you can automate steps 1 and 2 using some nifty PowerShell scripting.
</P> <DIV id="codeSnippetWrapper"> <DIV id="codeSnippet"> param ( <BR /> &nbsp;&nbsp;&nbsp;&nbsp;[string]$vmname = $(throw "-VMName is required"), <BR /> &nbsp;&nbsp;&nbsp;&nbsp;[string]$vhdpath = $(throw "-VHDPath is required"), <BR /> &nbsp;&nbsp;&nbsp;&nbsp;[long]$size = $(throw "-Size is required") <BR /> ) <BR /> <BR /> # Step 1: resize the disk on the primary site <BR /> Resize-VHD -Path $vhdpath -SizeBytes $size -Verbose <BR /> <BR /> # Locate the replica server and compute the disk's file name <BR /> $replinfo = Get-VMReplication -VMName $vmname <BR /> $replicaserver = $replinfo.CurrentReplicaServerName <BR /> $id = $replinfo.Id <BR /> $vhdname = $vhdpath.Substring($vhdpath.LastIndexOf("\")) <BR /> <BR /> # Step 2: find the VM on the replica site, find the matching disk, and resize it <BR /> Invoke-Command -ComputerName $replicaserver -Verbose -ScriptBlock { <BR /> &nbsp;&nbsp;&nbsp;&nbsp;$vhds = Get-VHD -VMId $Using:id <BR /> &nbsp;&nbsp;&nbsp;&nbsp;foreach( $disk in $vhds ) { <BR /> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if($disk.Path.Contains($Using:vhdname)) { <BR /> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Resize-VHD -Path $disk.Path -SizeBytes $Using:size -Verbose <BR /> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;} <BR /> &nbsp;&nbsp;&nbsp;&nbsp;} <BR /> } </DIV> </DIV> <BR /> <H3> Handling error scenarios </H3> <BR /> <P> If the resized virtual disk on the primary is consumed before the replica has been resized, you can expect the replica site to report errors. This is because the changes on the primary site cannot be applied correctly on the replica site. Fortunately, the error message is friendly enough to put you on the right track to fixing it: “ <EM> An out-of-bounds write was encountered on the Replica virtual machine. The primary server VHD might have been resized.
Ensure that the disk sizes of the Primary and Replica virtual machines are the same.” </EM> </P> <BR /> <BR /> <P> <IMG src="" /> </P> <BR /> <BR /> <P> The fix is just as simple: </P> <BR /> <BR /> <OL> <BR /> <LI> Resize the virtual disk on the Replica site (as was meant to be done). </LI> <BR /> <BR /> <LI> Resume replication on the VM from the Primary site – it will replicate and apply pending logs, without triggering resynchronization. </LI> <BR /> </OL> <BR /> <BR /> <P> A similar situation will be encountered if the VM is put into resync after the resize operation. The resync operation will not proceed as the two disks have different sizes. Ensuring that the Replica disk is resized appropriately and resuming replication will be sufficient for resynchronization to continue. </P> <BR /> <BR /> <H3> Nuances during failover </H3> <BR /> <BR /> <P> If you keep additional recovery points for your replicating VM, there are some key points to be noted: </P> <BR /> <BR /> <OL> <BR /> <LI> Expanding a virtual disk that is replicating will have no impact on failover. However, the size of the disk will not be reduced if you fail over to an older point that was created before the expand operation. </LI> <BR /> <BR /> <LI> Shrinking a virtual disk that is replicating <EM> will have an impact </EM> on failover. Attempting to fail over to an older point that was created before the shrink operation will result in an error. </LI> <BR /> </OL> <BR /> <BR /> <P> This behavior is seen because failing over to an older point only changes the content on the disk – and not the disk itself. Irrespective, in all cases, failing over to the latest point is not impacted by the resize operations. </P> <BR /> <BR /> <H3> </H3> <BR /> <BR /> <P> Hope this post has been useful! We welcome you to share your experience and feedback with us. 
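</P> <P> For convenience, the PowerShell script shown earlier in this post can be saved to a file and invoked as follows (the script filename, VM name, and VHD path below are hypothetical): </P>

```powershell
# Grow the data disk of the replicated VM 'AppVM01' to 200 GB on the
# primary site and on the replica site in one step.
# 'Resize-ReplicatedVhd.ps1' is a hypothetical name for the script above.
.\Resize-ReplicatedVhd.ps1 -VMName "AppVM01" `
                           -VHDPath "D:\VHDs\AppVM01-data.vhdx" `
                           -Size 200GB
```

<P> Remember that the guest still needs to consume the newly added unallocated space, e.g., via Disk Management (step 3 above).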
</P> </BODY></HTML> Thu, 21 Mar 2019 23:36:41 GMT Aashish Ramdas 2019-03-21T23:36:41Z Upgrading to Windows Server 2012 R2 with Hyper-V Replica <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Dec 02, 2013 </STRONG> <BR /> <P> The TechNet article <A href="#" title="" target="_blank"> <STRONG> </STRONG> </A> provides detailed guidance on migrating Hyper-V VMs from a Windows Server 2012 deployment to a Windows Server 2012 R2 deployment. </P> <P> <A href="#" title="" target="_blank"> <STRONG> </STRONG> </A> calls out the various VM migration techniques which are available as part of upgrading your deployment. The section titled “Hyper-V Replica” calls out further guidance for deployments which have replicating virtual machines. </P> <P> At a very high level, if you have a Windows Server 2012 setup containing replicating VMs, we recommend that you use the cross version live migration feature to migrate your replica VMs first. This is followed by fix-ups in the primary replicating VM (eg: changing replica server name). Once replication is back on track, you can migrate your primary VMs from a Windows Server 2012 server to a Windows Server 2012 R2 server without any VM downtime. The authorization table in the replica server may require to be updated once the primary VM migration is complete. </P> <BR /> <BR /> The above approach does not require you to re-IR your VMs, ensures zero downtime for your production VMs and gives you the flexibility to stagger the upgrade process on your replica and primary servers. <P> </P> </BODY></HTML> Thu, 21 Mar 2019 23:36:26 GMT Virtualization-Team 2019-03-21T23:36:26Z Hyper-V Replica: Extend Replication <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Dec 09, 2013 </STRONG> <BR /> <P> With Hyper-V Extend Replication feature in Windows Server 2012 R2, customers can have multiple copies of data to protect them from different outage scenarios. 
For example, as a customer I might choose to keep my second DR site on the same campus or a few miles away, while keeping my third copy of data across continents for added protection of my workloads. Hyper-V Replica <STRONG> Extend Replication </STRONG> addresses exactly this problem by providing one more copy of the workload at an extended site, apart from the replica site. As mentioned in <A href="#" target="_blank"> What’s new in Hyper-V Replica in Windows Server 2012 R2 </A> , users can extend the replication from the replica site and continue to protect the virtualized workloads even in case of a disaster at the primary site! </P> <P> This is so cool and exactly what I was looking for. But how do I enable this feature in Windows Server 2012 R2? I will walk you through the different ways in which you can enable it, and you will be amazed to see how similar the experience is to the Enable Replication wizard. </P> <H2> Extend Replication through UI: </H2> <P> Before you extend replication to a third site, you need to establish replication between a primary server and a replica server. Once that is done, go to the replica site and, from Hyper-V Manager, select the VM for which you want to extend the replication. Right-click the VM and select “ <STRONG> Replication-&gt;Extend Replication…” </STRONG> . This opens the Extend Replication wizard, which is similar to the Enable Replication wizard. A few points to keep in mind: </P> <P> 1. On the <STRONG> Configure Replication Frequency </STRONG> screen, note that Extend Replication only supports 5-minute and 15-minute replication frequencies. Also note that the replication frequency of the extended replication should be equal to or greater than that of the primary replication relationship. </P> <P> 2. On the <STRONG> Configure Additional Recovery Points </STRONG> screen, you can specify the recovery points you need on the extended replica server. Please note that you cannot configure App-Consistent snapshot frequency in this wizard.
</P> <P> Click <STRONG> Finish </STRONG> and you are done! Isn’t it very similar to the Enable Replication wizard? </P> <P> If you are working with clusters, go to the Failover Cluster Manager UI on the replica site and select the VM for which you want to extend replication from the Roles tab. Right-click the VM and select “ <STRONG> Replication-&gt;Extend Replication </STRONG> ”. Configure the extended replica cluster/server in the same way as you did above. </P> <H2> Extend Replication using PowerShell: </H2> <P> You can use the same PowerShell cmdlet that you used for enabling replication to create the extended replication relationship. However, as stated above, you can only choose a replication frequency of either 5 minutes or 15 minutes. </P> <P> <EM> <STRONG> Enable-VMReplication -VMName &lt;vmname&gt; -ReplicaServerName &lt;extended_server_name&gt; -ReplicaServerPort &lt;Auth_port&gt; -AuthenticationType &lt;Certificate/Kerberos&gt; -ReplicationFrequencySec &lt;300/900&gt; [--other optional parameters if needed—] </STRONG> </EM> </P> <H2> Status and Health of Extended Replication: </H2> <P> Once you extend replication from the replica site, you can check the Replication tab in the replica site’s Hyper-V UI, and you will see details about the extended replication alongside the primary relationship. </P> <P> <IMG src="" /> </P> <P> You can also check the health statistics of the extended replication from the Hyper-V UI. Go to the VM on the replica site, right-click, and select “Replication-&gt;View Replication Health”. The extended replication health statistics are displayed under a separate tab named “Extended Replication”. </P> <P> <IMG src="" /> </P> <P> You can also query PowerShell on the replica site to see details about the extended replication relationship. </P> <P> <EM> <STRONG> Measure-VMReplication -VMName &lt;name&gt; -ReplicationRelationshipType Extended | select * </STRONG> </EM> </P> <P> This is all great. But how do I carry out a failover in case of Extended Replication?
I will reserve that for my next blog post. Until then, happy extended replication <IMG src="" /> </P> </BODY></HTML> Thu, 21 Mar 2019 23:36:20 GMT Trinadh Kotturu 2019-03-21T23:36:20Z Hyper-V Replica in Windows Server 2012 R2 and System Center Operations Manager 2012 R2 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Dec 10, 2013 </STRONG> <BR /> <P> Continuing from my previous post <A href="#" target="_blank"> Monitoring Hyper-V Replica using Systems Center Operations Manager </A> , in this blog post I will walk through some of the things to keep in mind while using System Center Operations Manager 2012 R2 to monitor Windows Server 2012 R2 hosts. If you haven’t read the previous blog, I request you to go through it before you start monitoring Windows Server 2012 R2 machines. The best part of this story is that all the monitors present in the previous version of SCOM work with this new version of the OS. </P> <P> As mentioned in <A href="#" target="_blank"> What is New in Windows Server 2012 R2, </A> we have added the ability to configure the replication frequency of VMs. Users can now replicate at 30-second, 5-minute, and 15-minute frequencies. Correspondingly, the alerts we generate for “Hyper-V 2012 Replication Count Percent Monitor” change based on the VM. If you have VMs with varying replication frequencies, coming up with a percentage number becomes tricky. For this, we suggest that you <STRONG> set the count percent to a number which can catch the desired number of missed cycles for the VM with the shortest replication interval </STRONG> . For example, suppose I have VMs with replication frequencies of 30 seconds, 5 minutes, and 15 minutes in my environment, and I want to be notified even if I miss one replication cycle in an interval of one hour. This means a percentage number of 1/(2*60) for 30-second replication frequency VMs, a percentage number of 1/12 for 5-minute replication frequency VMs, and a percentage number of 1/4 for 15-minute replication frequency VMs.
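</P> <P> As a sanity check on the arithmetic above, the thresholds can be computed like this (an illustrative PowerShell calculation only, not a SCOM cmdlet): </P>

```powershell
# Illustrative arithmetic only: the percentage that one missed replication
# cycle represents in a 60-minute monitoring window, per frequency.
$windowMinutes = 60
foreach ($freqSec in 30, 300, 900) {
    $cycles  = ($windowMinutes * 60) / $freqSec   # cycles per window
    $percent = [math]::Round(100 / $cycles, 2)    # one missed cycle, in %
    "{0}s frequency: {1} cycles/hour, 1 missed cycle = {2}%" -f $freqSec, $cycles, $percent
}
# 30s -> 0.83% (1/120), 300s -> 8.33% (1/12), 900s -> 25% (1/4)
```

<P>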
By setting the count percentage value to 1/(2*60), I can catch alerts from all VMs that missed one replication cycle in a 60-minute interval. </P> <P> The rest of the monitors work just as they do in Windows Server 2012. What’s more, the Hyper-V Extensions management pack, written by Cristian Edwards Sabathe, now supports Windows Server 2012 R2. You can download the pack from <A href="#" target="_blank"> here </A> . In addition to the existing dashboards, it now supports Extended Replica VMs. Go try it out! </P> <P> </P> </BODY></HTML> Thu, 21 Mar 2019 23:35:49 GMT Trinadh Kotturu 2019-03-21T23:35:49Z Using data deduplication with Hyper-V Replica for storage savings <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Dec 23, 2013 </STRONG> <BR /> <P> Protection of data has always been a priority for customers, and disaster recovery allows the protection of data with better restore times and lower data loss at the time of failover. However, as with all protection strategies, additional storage is a cost that needs to be incurred. With storage usage growing exponentially, a strategy is needed to help enterprises control their spend on storage hardware. This is where data deduplication comes in. Deduplication itself has been around for years, but in this blog post we will talk about how users of Hyper-V Replica (HVR) can benefit from it. This blog post has been written collaboratively with the Windows Server Data Deduplication team. </P> <BR /> <H3> Deduplication considerations </H3> <BR /> <P> To begin with, it is important to acknowledge the workloads that are suitable for deduplication using Windows Server 2012 R2. There is an excellent <A href="#" target="_blank"> TechNet article </A> that covers this aspect, and it is applicable in the case of Hyper-V Replica as well.
It is important to remember that deduplication of running virtual machines is only officially supported starting with Windows Server 2012 R2 for Virtual Desktop Infrastructure (VDI) workloads with VHDs running on a remote file server. Generic VM (non-VDI) workloads may run on a deduplication enabled volume but the performance is not guaranteed. Windows Server 2012 deduplication is only supported for cold data (files not open). </P> <BR /> <H3> Why use deduplication with Hyper-V Replica? </H3> <BR /> <P> One of the most common deployment scenarios of VDI involves a golden image that is read-only. VDI virtual machines are built using diff-disks that have this golden image as the parent. The setup would look roughly like this: </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> This deployment saves a significant amount of storage space. However, when Hyper-V Replica is used to replicate these VMs, each diff-disk chain is treated as a single unit and is replicated. So on the replica site there will be 3 copies of the golden image as a part of the replication. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Data deduplication becomes a great way to reclaim that space used. </P> <BR /> <H3> Deployment options </H3> <BR /> <P> Data deduplication is applicable at a volume level, and the volume can be made available with either SMB 3.0, CSV FS, or NTFS. The deployments (at either the Primary or Replica site) would broadly look like these: </P> <BR /> 1. SMB 3.0 <BR /> <P> <IMG src="" /> </P> <BR /> 2. CSVFS <BR /> <P> <IMG src="" /> </P> <BR /> 3. NTFS <BR /> <P> <IMG src="" /> </P> <BR /> <P> Ensure that the VHD files that need to be deduplicated are placed in the right volume – and this can be done using <A href="" target="_blank"> authorization entries </A> . Using HVR in conjunction with Windows Server Data Deduplication will require some additional planning to take into consideration possible performance impacts to HVR when running on a volume enabled for deduplication. 
</P> <BR /> <H3> Deduplication on the Primary site </H3> <BR /> <P> Enabling data deduplication on the primary site volumes will not have an impact on HVR. No additional configurations or changes need to be done to use Hyper-V Replica with deduplicated data volumes. </P> <BR /> <H3> Deduplication on the Replica site </H3> <BR /> WITHOUT ADDITIONAL RECOVERY POINTS <BR /> <P> Enabling data deduplication on the replica site volumes will not have an impact on HVR. No additional configurations or changes need to be done to use Hyper-V Replica with deduplicated data volumes. </P> <BR /> WITH ADDITIONAL RECOVERY POINTS <BR /> <P> Hyper-V Replica allows the user to have additional recovery points for replicated virtual machines that allows the user to go back in time during a failover. Creating the recovery points involves reading the existing data from the VHD before the log files are applied. When the Replica VM is stored on a deduplication-enabled volume, reading the VHD is slower and this impacts the time taken by the overall process. The apply time on a deduplication-enabled VHD can be between 5X and 7X more than without deduplication. When the time taken to apply the log exceeds the replication frequency then there will be a log file pileup on the replica server. Over a period of time this can lead to the health of the VM degrading. The other side effect is that the VM state will always be <EM> “Modifying” </EM> and in this state other Hyper-V operations and backup will not be possible. </P> <BR /> <P> There are two mitigation steps suggested: </P> <BR /> <OL> <BR /> <LI> <STRONG> Defragment the deduplication-enabled volume </STRONG> on a regular basis. This should be done at least once every 3 days, and preferably once a day. </LI> <BR /> <LI> <STRONG> Increase the frequency of deduplication optimization </STRONG> . For instance, set the deduplication policy to optimize data older than 1 day instead of the default 3 days. 
Increasing the deduplication frequency will allow the deduplication service on the recovery server to keep up better with the changes made by HVR. This can be configured via the deduplication settings in Server Manager –&gt; File and Storage Services –&gt; Volume –&gt; Configure Data Deduplication, or via PowerShell: </LI> <BR /> </OL> <BR /> <DIV id="codeSnippetWrapper"> <BR /> <DIV id="codeSnippet"> <BR /> Set-DedupVolume &lt;volume&gt; -MinimumFileAgeDays 1 <BR /> </DIV> <BR /> </DIV> </BODY></HTML> Thu, 21 Mar 2019 23:35:42 GMT Aashish Ramdas 2019-03-21T23:35:42Z Measuring Replication Health in a cluster <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Dec 30, 2013 </STRONG> <BR /> <P> As part of running a mini-scale run in my lab, I had to frequently monitor the replication health and also note down the replication statistics. The statistics are available by right-clicking on the VM (in the Hyper-V Manager or Failover Cluster Manager), choosing the <STRONG> Replication </STRONG> submenu, and clicking the <STRONG> View Replication Health… </STRONG> option. </P> <P> <IMG src="" /> </P> <P> Clicking the above option displays the replication statistics I am looking for. </P> <P> <IMG src="" /> </P> <P> Clicking ‘Reset Statistics’ clears the statistics collected so far and resets the start time (the “From time” field). </P> <P> In a large deployment, it’s not practical to right-click on each VM to get the health statistics. Hyper-V PowerShell cmdlets help in simplifying the task.
I had two requirements: </P> <UL> <LI> Requirement #1: Get a report of the average size of the log files being sent during each VM’s replication interval </LI> <LI> Requirement #2: Snap all the VMs’ replication statistics to the same start time (the “From time” field) and reset the statistics </LI> </UL> <P> <STRONG> Measure-VMReplication </STRONG> provides the replication statistics for each of the replicating VMs. As I am only interested in the average replication size, the following cmdlet provides the required information: </P> <DIV id="codeSnippetWrapper"> Measure-VMReplication | select VMName,AvgReplSize </DIV> <P> </P> <P> Like most of the other PowerShell cmdlets, Measure-VMReplication takes a computer name as input. To get the replication stats for all the VMs in the cluster, I need to enumerate the nodes of the cluster and pipe the output to this cmdlet. The <STRONG> Get-ClusterNode </STRONG> cmdlet is used to get the nodes of the cluster. </P> <DIV id="codeSnippetWrapper"> <DIV id="codeSnippet"> $ClusterName = "&lt;Name of your cluster&gt;" <BR /> Get-ClusterNode -Cluster $ClusterName </DIV> </DIV> <BR /> <P> We can pipe each node of the cluster to the cmdlet to get the replication health of the VMs present on that node: </P> <BR /> <DIV id="codeSnippetWrapper"> <BR /> <DIV id="codeSnippet"> Get-ClusterNode -Cluster $ClusterName | foreach-object {Measure-VMReplication -ComputerName $_ | Select VMName, AvgReplSize, PrimaryServerName, CurrentReplicaServerName | ft} </DIV> </DIV> <BR /> <P> </P> <BR /> <P> Requirement #1 is met; now let’s look at requirement #2. To snap all the replicating VMs’ statistics to a common start time, I used <STRONG> Reset-VMReplicationStatistics </STRONG> , which takes the VM name as input.
However, if Reset-VMReplicationStatistics is used on a non-replicating VM, the cmdlet errors out with the following error message: </P> <BR /> <DIV id="codeSnippetWrapper"> <BR /> <DIV id="codeSnippet"> Reset-VMReplicationStatistics : 'Reset-VMReplicationStatistics' is not applicable on virtual machine 'IOMeterBase'. <BR /> The name of the virtual machine is IOMeterBase and its ID is c1922e67-7a8b-4f36-a868-5174e7b6821a. <BR /> At line:1 char:1 <BR /> + Reset-VMReplicationStatistics -vmname IOMeterBase <BR /> + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <BR /> &nbsp;&nbsp;&nbsp;&nbsp;+ CategoryInfo : InvalidOperation: (Microsoft.Hyper...l.VMReplication:VMReplication) [Reset-VMReplicationStatistics], VirtualizationOperationFailedException <BR /> &nbsp;&nbsp;&nbsp;&nbsp;+ FullyQualifiedErrorId : InvalidOperation,Microsoft.HyperV.PowerShell.Commands.ResetVMReplicationStatisticsCommand </DIV> </DIV> <BR /> <P> </P> <BR /> <P> It’s a touch messy, and to address the issue we would need to isolate the replicating VMs on a given server. This can be done by querying only for those VMs whose <STRONG> ReplicationMode </STRONG> is set (to either Primary or Replica). The output of <STRONG> Get-VM </STRONG> is shown below: </P> <BR /> <DIV id="codeSnippetWrapper"> <BR /> <DIV id="codeSnippet"> PS C:\&gt; get-vm | select vmname, ReplicationMode | fl <BR /> <BR /> VMName : Cluster22-TPCC3 <BR /> ReplicationMode : Primary <BR /> <BR /> VMName : IOMeterBase <BR /> ReplicationMode : None </DIV> </DIV> <BR /> <P> Cluster22-TPCC3 is a replicating VM (a primary VM), while replication has not been enabled on the IOMeterBase VM. Putting things together, to get all the replicating VMs in the cluster, use the Get-VM cmdlet and filter on ReplicationMode (Primary or Replica.
You could also use the not-equal-to operator to get both primary and replica VMs) </P> <BR /> <DIV id="codeSnippetWrapper"> <BR /> <DIV id="codeSnippet"> Get-ClusterNode -Cluster $ClusterName | ForEach-Object {Get-VM -ComputerName $_ | Where-Object {$_.ReplicationMode -eq "Primary"}} </DIV> </DIV> <BR /> <P> </P> <BR /> <P> To reset the statistics, pipe the above cmdlet to Reset-VMReplicationStatistics: </P> <BR /> <DIV id="codeSnippetWrapper"> <BR /> <DIV id="codeSnippet"> PS C:\&gt; Get-ClusterNode -Cluster $ClusterName | ForEach-Object {Get-VM -ComputerName $_ | Where-Object {$_.ReplicationMode -eq "Primary"} | Reset-VMReplicationStatistics} </DIV> </DIV> <BR /> <P> </P> Wasn’t that a lot easier than right-clicking on each VM in your cluster and clicking the ‘Reset Statistics’ button? :) </BODY></HTML> Thu, 21 Mar 2019 23:34:55 GMT Virtualization-Team 2019-03-21T23:34:55Z Linux Integration Services 3.5 Announcement <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jan 02, 2014 </STRONG> <BR /> We are pleased to announce the release of Linux Integration Services (LIS) version 3.5. As part of this release, not only have we included several awesome features much desired by our customers, but we have also expanded our distribution support to include Red Hat Enterprise Linux/CentOS 5.5 and Red Hat Enterprise Linux/CentOS 5.6. This release is another significant milestone in our ongoing commitment to provide great support for open-source software on Microsoft virtualization platforms. The following paragraphs provide a brief overview of what is being delivered as part of this release. <BR /> <BR /> <B> Download Location </B> <BR /> <BR /> The LIS binaries are available as RPM installables in an ISO file, which can be downloaded from the following link on TechNet: <BR /> <BR /> <A href="#" target="_blank"> </A> <BR /> <BR /> As always, a ReadMe file is also provided with information on the installation procedure, feature set, and known issues.
<BR /> <BR /> In the true spirit of open-source development, we now also have a GitHub repository hosted at the following link: <BR /> <BR /> <A href="#" target="_blank"> </A> <BR /> <BR /> All code has been released under the GNU General Public License v2. We hope that many of you will use it for your custom development and extensions. <BR /> <BR /> <B> Supported Linux Distributions and Windows Server Releases </B> <BR /> <BR /> LIS 3.5 supports the following guest operating systems: <BR /> <BR /> <UL> <LI> Red Hat Enterprise Linux (RHEL) 5.5-5.8, 6.0-6.3 x86 and x64 </LI> <LI> CentOS 5.5-5.8, 6.0-6.3 x86 and x64 </LI> </UL> <BR /> <BR /> All of the above distributions are supported on the following Windows Server releases: <BR /> <BR /> <UL> <LI> Windows Server 2008 R2 Standard, Windows Server 2008 R2 Enterprise, and Windows Server 2008 R2 Datacenter </LI> <LI> Microsoft Hyper-V Server 2008 R2 </LI> <LI> Windows 8 Pro </LI> <LI> Windows 8.1 Pro </LI> <LI> Windows Server 2012 </LI> <LI> Windows Server 2012 R2 </LI> <LI> Microsoft Hyper-V Server 2012 </LI> <LI> Microsoft Hyper-V Server 2012 R2 </LI> </UL> <BR /> <BR /> <B> Feature Set </B> <BR /> <BR /> The LIS 3.5 release brings much-coveted features such as dynamic memory and live virtual machine backup to older RHEL releases. The check marks in the table below indicate the features that have been implemented in LIS 3.5. For comparative purposes, we also provide the feature set of LIS 3.4 so that our customers can decide if they need to upgrade their current version of the LIS drivers. More details on individual features can be found at <A href="#" target="_blank"> </A> .
<BR /> <BR /> <STRONG> Table Legend </STRONG> <BR /> <BR /> <STRONG> √ </STRONG> - Feature available <BR /> <BR /> ( <EM> blank </EM> ) - Feature not available <BR /> <BR /> <TABLE> <TBODY>
<TR> <TD> <STRONG> Feature </STRONG> </TD> <TD> <STRONG> Hyper-V Version </STRONG> </TD> <TD colspan="2"> <STRONG> RHEL/CentOS 6.0-6.3 </STRONG> </TD> <TD colspan="2"> <STRONG> RHEL/CentOS 5.7-5.8 </STRONG> </TD> <TD colspan="2"> <STRONG> RHEL/CentOS 5.5-5.6 </STRONG> </TD> </TR>
<TR> <TD> <STRONG> Availability </STRONG> </TD> <TD> </TD> <TD> <B> LIS 3.5 </B> </TD> <TD> <B> LIS 3.4 </B> </TD> <TD> <B> LIS 3.5 </B> </TD> <TD> <B> LIS 3.4 </B> </TD> <TD> <B> LIS 3.5 </B> </TD> <TD> <B> LIS 3.4 </B> </TD> </TR>
<TR> <TD> <STRONG> Core </STRONG> </TD> <TD> 2012 R2, 2012, 2008 R2 </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> </TD> </TR>
<TR> <TD colspan="8"> <STRONG> Networking </STRONG> </TD> </TR>
<TR> <TD> Jumbo frames </TD> <TD> 2012 R2, 2012, 2008 R2 </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> </TD> </TR>
<TR> <TD> VLAN tagging and trunking </TD> <TD> 2012 R2, 2012, 2008 R2 </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> </TD> </TR>
<TR> <TD> Live Migration </TD> <TD> 2012 R2, 2012, 2008 R2 </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> </TD> </TR>
<TR> <TD> Static IP Injection </TD> <TD> 2012 R2, 2012 </TD> <TD> <STRONG> √ </STRONG> Note 1 </TD> <TD> <STRONG> √ </STRONG> Note 1 </TD> <TD> <STRONG> √ </STRONG> Note 1 </TD> <TD> </TD> <TD> <STRONG> √ </STRONG> Note 1 </TD> <TD> </TD> </TR>
<TR> <TD colspan="8"> <STRONG> Storage </STRONG> </TD> </TR>
<TR> <TD> VHDX resize </TD> <TD> 2012 R2 </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> </TD> </TR>
<TR> <TD> Virtual Fibre Channel </TD> <TD> 2012 R2 </TD> <TD> <STRONG> √ </STRONG> Note 2 </TD> <TD> </TD> <TD> <STRONG> √ </STRONG> Note 2 </TD> <TD> </TD> <TD> <STRONG> √ </STRONG> Note 2 </TD> <TD> </TD> </TR>
<TR> <TD> Live virtual machine backup </TD> <TD> 2012 R2 </TD> <TD> <STRONG> √ </STRONG> Note 3, 4 </TD> <TD> </TD> <TD> <STRONG> √ </STRONG> Note 3, 4 </TD> <TD> </TD> <TD> <STRONG> √ </STRONG> Note 3, 4 </TD> <TD> </TD> </TR>
<TR> <TD> TRIM support </TD> <TD> 2012 R2 </TD> <TD> </TD> <TD> </TD> <TD> </TD> <TD> </TD> <TD> </TD> <TD> </TD> </TR>
<TR> <TD colspan="8"> <STRONG> Memory </STRONG> </TD> </TR>
<TR> <TD> Configuration of MMIO gap </TD> <TD> 2012 R2 </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> </TD> </TR>
<TR> <TD> Dynamic Memory – Hot Add </TD> <TD> 2012 R2, 2012, 2008 R2 </TD> <TD> </TD> <TD> </TD> <TD> </TD> <TD> </TD> <TD> </TD> <TD> </TD> </TR>
<TR> <TD> Dynamic Memory – Ballooning </TD> <TD> 2012 R2, 2012 </TD> <TD> <STRONG> √ </STRONG> Note 5 </TD> <TD> </TD> <TD> <STRONG> √ </STRONG> Note 5 </TD> <TD> </TD> <TD> <STRONG> √ </STRONG> Note 5 </TD> <TD> </TD> </TR>
<TR> <TD colspan="8"> <STRONG> Video </STRONG> </TD> </TR>
<TR> <TD> Hyper-V-specific video device </TD> <TD> 2012 R2, 2012, 2008 R2 </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> </TD> </TR>
<TR> <TD colspan="8"> <STRONG> Miscellaneous </STRONG> </TD> </TR>
<TR> <TD> Key-Value Pair </TD> <TD> 2012 R2, 2012, 2008 R2 </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> </TD> </TR>
<TR> <TD> Non-Maskable Interrupt </TD> <TD> 2012 R2 </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> </TD> </TR>
<TR> <TD> PAE Kernel Support </TD> <TD> 2012 R2, 2012, 2008 R2 </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> </TD> <TD> <STRONG> √ </STRONG> </TD> <TD> </TD> </TR>
</TBODY></TABLE> <BR /> <BR /> <P> <B> Notes </B> </P> <OL> <LI> Static IP injection might not work if Network Manager has been configured for a given Hyper-V-specific network adapter on the virtual machine. To ensure that static IP injection works smoothly, either turn off Network Manager completely, or turn it off for the specific network adapter through its ifcfg-ethX file. </LI> <LI> When you use Virtual Fibre Channel devices, ensure that logical unit number 0 (LUN 0) has been populated. If LUN 0 has not been populated, a Linux virtual machine might not be able to mount Virtual Fibre Channel devices natively. </LI> <LI> If there are open file handles during a live virtual machine backup operation, the backed-up virtual hard disks (VHDs) might have to undergo a file system consistency check (fsck) when restored. </LI> <LI> Live backup operations can fail silently if the virtual machine has an attached iSCSI device or a physical disk that is directly attached to the virtual machine (a "pass-through disk"). </LI> <LI> LIS 3.5 provides only Dynamic Memory ballooning support; it does not provide hot-add support. In this scenario, the Dynamic Memory feature can be used by setting the Startup memory parameter to a value equal to the Maximum memory parameter.
This results in all the requisite memory being allocated to the virtual machine at boot time; later, depending on the memory requirements of the host, Hyper-V can freely reclaim memory from the guest. Also, ensure that Startup Memory and Minimum Memory are not configured below the distribution's recommended values. </LI> </OL> <BR /> <BR /> <B> Customer Feedback </B> <BR /> <BR /> Customers can provide feedback through the <A href="#" target="_blank"> Linux Integration Services for Microsoft Hyper-V forum </A>. We are eager to hear about your experiences and any issues that you may face while using LIS 3.5. We hope that this release helps you maximize your investment in Hyper-V and Windows Server. <BR /> <BR /> <P> - Abhishek Gupta <BR /> </P> </BODY></HTML> Thu, 21 Mar 2019 23:34:31 GMT Virtualization-Team 2019-03-21T23:34:31Z
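<P> The ballooning workaround described in Note 5 can be sketched from PowerShell on the Hyper-V host. This is a minimal, hypothetical example (the VM name and memory sizes are placeholders, not values from the post): Startup memory is pinned to the Maximum value so the guest boots with its full allocation, and the host later reclaims memory via the balloon driver. </P>

```powershell
# Hypothetical sketch for an LIS 3.5 guest that supports ballooning but not hot-add:
# set Startup equal to Maximum, and keep Minimum at or above the distribution's
# recommended value.
Set-VMMemory -VMName "RHEL-LIS35" `
    -DynamicMemoryEnabled $true `
    -StartupBytes 4GB `
    -MinimumBytes 2GB `
    -MaximumBytes 4GB
```

<P> Because hot-add is unavailable, memory can only flow out of (and back into) the guest between Minimum and Startup; raising Maximum above Startup would have no effect for such a guest. </P>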