Storage at Microsoft articles https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/bg-p/FileCAB Tue, 26 Oct 2021 05:00:33 GMT FileCAB 2021-10-26T05:00:33Z Bogus event 2505 https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/bogus-event-2505/ba-p/2883350 <P><SPAN>Heya folks,&nbsp;</SPAN><A href="#" target="_self" rel="nofollow noopener noreferrer">Ned</A><SPAN>&nbsp;here again. A few customers have reported this known issue on Windows <STRONG>11&nbsp;</STRONG>machines &amp; you may see this event at boot and perhaps occasionally afterwards. In the Event Log, in the System channel, you see:&nbsp;</SPAN></P> <P>&nbsp;</P> <PRE class="lia-indent-padding-left-30px">Log Name: System<BR />Source: <STRONG>Server</STRONG><BR />Date: 10/25/2021 3:01:46 PM<BR />Event ID: <STRONG>2505</STRONG><BR />Task Category: None<BR />Level: Error<BR />Keywords: Classic<BR />User: N/A<BR />Computer: darbydoo<BR />Description:<BR />The server could not bind to the transport \Device\NetBT_Tcpip_{C0E1EDC5-9B33-4911-A3B3-AE69C8115AF6} because another computer on the network has the same name. The server could not start.<BR /><BR /></PRE> <P>&nbsp;</P> <P>You may see this when booting up, when repeatedly disabling and enabling an RDMA NIC, or when a network adapter goes offline and online for some reason.&nbsp;This leads to multiple SMB server service notifications about the same network interface that has already been bound to the SMB server.&nbsp;You can&nbsp;<STRONG>ignore</STRONG> this event's description: there is no duplicate computer, and the Server service is not stopped. This SMB event is purely cosmetic. 
We plan to fix this in a later update to Windows.&nbsp;</P> <P>&nbsp;</P> <P>- Ned Pyle</P> <P class="lia-indent-padding-left-30px">&nbsp;</P> Mon, 25 Oct 2021 20:52:37 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/bogus-event-2505/ba-p/2883350 Ned Pyle 2021-10-25T20:52:37Z Windows 11 Insider 22449 update for SMB compression https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/windows-11-insider-22449-update-for-smb-compression/ba-p/2750995 <P><SPAN>Heya folks,&nbsp;</SPAN><A href="#" target="_blank" rel="noopener nofollow noreferrer">Ned</A><SPAN>&nbsp;here again. </SPAN><SPAN class="css-901oao css-16my406 r-poiln3 r-bcqeeo r-qvutc0" style="font-family: inherit;">We released a change to SMB compression for <A href="#" target="_self">Windows 11 Insider Preview build 22449 &amp; later</A>.&nbsp;</SPAN></P> <P>&nbsp;</P> <H2><SPAN class="css-901oao css-16my406 r-poiln3 r-bcqeeo r-qvutc0" style="font-family: inherit;">TL/DR</SPAN></H2> <P><SPAN class="css-901oao css-16my406 r-poiln3 r-bcqeeo r-qvutc0" style="font-family: inherit;">We stopped being so algorithmically cute: when you request compression, we now just try to do it because you said so. :)&nbsp;</SPAN></P> <P>&nbsp;</P> <H2><SPAN class="css-901oao css-16my406 r-poiln3 r-bcqeeo r-qvutc0" style="font-family: inherit;">Full explanation</SPAN></H2> <P class="">We first introduced<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">SMB compression</A><SPAN>&nbsp;</SPAN>in Windows Server 2022 &amp; Windows 11. SMB compression allows an administrator, user, or application to request compression of files as they transfer over the network. This removes the need to first deflate a file manually with an application, copy it, then inflate on the destination PC. 
Compressed files will consume less network bandwidth and take less time to transfer, at the cost of slightly increased CPU usage during transfers.</P> <P class="">&nbsp;</P> <P>Based on testing and analysis, we have changed the default behavior of compression. Previously, the SMB compression decision algorithm would attempt to compress the first 524,288,000 bytes (500 MiB) of a file during transfer and track whether at least 104,857,600 bytes (100 MiB) compressed within that 500 MiB range. If fewer than 100 MiB were compressible, SMB compression stopped trying to compress the rest of the file. If at least 100 MiB compressed, SMB compression attempted to compress the rest of the file. This meant that very large files with compressible data – for instance, a multi-gigabyte virtual machine disk – were likely to compress, but a relatively small file – even a very compressible one – would not.</P> <P>&nbsp;</P> <P>Starting in Build 22449, we no longer use this decision algorithm by default. Instead, if compression is requested, we will always attempt to compress. 
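</P> <P>&nbsp;</P> <P>The retired heuristic and the new default can be sketched like this (a conceptual Python model of the behavior described above, not the actual SMB server code; the function names are mine):</P>

```python
MIB = 1024 * 1024
PROBE_WINDOW = 500 * MIB      # 524,288,000 bytes sampled by the old default
REQUIRED_SAVINGS = 100 * MIB  # 104,857,600 bytes that had to compress within it

def legacy_keeps_compressing(bytes_probed: int, bytes_saved: int) -> bool:
    """Old default: after the 500 MiB window, keep compressing only if at
    least 100 MiB of savings were found inside that window."""
    if bytes_probed < PROBE_WINDOW:
        return True  # still inside the sampling window, keep trying
    return bytes_saved >= REQUIRED_SAVINGS

def build_22449_keeps_compressing(bytes_probed: int, bytes_saved: int) -> bool:
    """New default: compression was requested, so always attempt it."""
    return True
```

<P>The new behavior trades a little CPU on incompressible data for never giving up on data that would have compressed later in the file.</P> <P>&nbsp;</P> <P>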
If you wish to modify this new behavior to return to a decision algorithm, please see this article:<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">Understanding and controlling compression behaviors</A>.</P> <P>&nbsp;</P> <P>Please use the Feedback Hub to give feedback or report issues with SMB compression, using the Files, Folders, and Online Storage &gt; File Sharing category.</P> <P>&nbsp;</P> <P>Until next time,</P> <P>&nbsp;</P> <P>Ned "smooooosh" Pyle</P> Tue, 14 Sep 2021 23:04:33 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/windows-11-insider-22449-update-for-smb-compression/ba-p/2750995 Ned Pyle 2021-09-14T23:04:33Z Azure File Sync survey https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/azure-file-sync-survey/ba-p/2750685 <P><SPAN>Heya folks,&nbsp;</SPAN><A href="#" target="_blank" rel="noopener nofollow noreferrer">Ned</A><SPAN>&nbsp;here again.&nbsp;</SPAN>Are you a current or past user of Azure File Sync? Are you considering using Azure File Sync?&nbsp;The Azure Files team is updating its roadmap and needs your feedback!&nbsp;Please take a few minutes to complete this anonymous survey; it will help them get you the features your organization needs!&nbsp;</P> <P>&nbsp;</P> <H2><A href="#" target="_self">Take survey</A></H2> <P>&nbsp;</P> <P>Thanks!</P> <P>&nbsp;</P> <P>- Ned Pyle</P> Tue, 14 Sep 2021 22:57:02 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/azure-file-sync-survey/ba-p/2750685 Ned Pyle 2021-09-14T22:57:02Z Storage Innovations in Windows Server 2022 https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-innovations-in-windows-server-2022/ba-p/2714214 <P>We are excited to release new features in Storage for Windows Server 2022. Our focus is to provide customers with greater resiliency, performance, and flexibility. 
While some of these topics have been covered elsewhere, we wanted to provide a single place where our Server 2022 customers can read about these innovations and features.</P> <P>&nbsp;</P> <H1>Advanced Caching in Storage Spaces</H1> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="sbc-map.png" style="width: 649px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/307987i9DFD0FC109E71FAA/image-dimensions/649x396?v=v2" width="649" height="396" role="button" title="sbc-map.png" alt="sbc-map.png" /></span></P> <P>&nbsp;</P> <P>During the Windows Server 2019 timeframe, we realized that many deployments are on single, standalone server platforms (i.e., non-clustered). We wanted to deliver innovations that specifically target this popular segment of our customer base.</P> <P>&nbsp;</P> <P>We developed a new storage cache for standalone servers that can significantly improve overall system performance while maintaining storage efficiency and keeping operational costs low. Similar to caching in Storage Spaces Direct, this feature binds together faster media (for example, SSD) with slower media (for example, high-capacity HDD) to create tiers.</P> <P>&nbsp;</P> <P>Windows Server 2022 contains new, advanced caching logic that can automatically place critical data on the fastest storage volumes, while placing less-critical data on the slower, high-capacity storage devices. 
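</P> <P>&nbsp;</P> <P>Conceptually, placement logic like this tracks how "hot" each chunk of data is and keeps the hottest chunks on the fast tier. Here is a deliberately tiny sketch of that idea (illustrative Python only; the real cache tracks heat, sizes, and promotion far more cleverly, and none of these names come from the product):</P>

```python
def place_slabs(heat, fast_slots):
    """Toy tier placement: rank data slabs by access count and pin the
    hottest ones to the fast (SSD) tier until it is full; everything
    else stays on the capacity (HDD) tier. Each slab is treated as one
    unit of space to keep the example small."""
    ranked = sorted(heat, key=heat.get, reverse=True)
    return set(ranked[:fast_slots]), set(ranked[fast_slots:])
```

<P>&nbsp;</P> <P>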
While highly configurable, Windows offers an optimized set of defaults that allow an IT admin to “set and forget” while still achieving significant performance gains.</P> <P>&nbsp;</P> <P>Learn more: <A href="#" target="_blank" rel="noopener">Storage bus cache on Storage Spaces</A></P> <H1>&nbsp;</H1> <H1>Faster Repair/Resync</H1> <P>We heard your feedback. Storage resync and rebuilds after events like node reboots and disk failures are <FONT color="#FF6600"><STRONG>now TWICE as fast</STRONG></FONT>. Repairs also have <STRONG>less variance</STRONG> in time taken, so you can be more sure of how long they will take. No longer will you get those long hanging repair jobs. We've done this by adding more granularity to our data tracking. This allows us to move <EM>only</EM> the data that needs to be moved. Less moved data means less work, less system resource usage, and less time taken - also less stress for you!</P> <P>&nbsp;</P> <H1>Adjustable Storage Repair/Resync Speed</H1> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="AndrewHansen_1-1630627435985.jpeg" style="width: 674px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/307835iE7B80920D83F0C39/image-dimensions/674x420?v=v2" width="674" height="420" role="button" title="AndrewHansen_1-1630627435985.jpeg" alt="AndrewHansen_1-1630627435985.jpeg" /></span></P> <P>&nbsp;</P> <P>We know that resiliency and availability are incredibly important for our users. Waiting for storage to resync after a node reboot or a replaced disk shouldn't be a nail-biting experience. We also know that storage rebuilds can consume system resources that users would rather keep reserved for business applications. For these reasons we have focused on improving storage resync and rebuilds.</P> <P>&nbsp;</P> <P>In Windows Server 2022 we cut repair times in half! 
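</P> <P>&nbsp;</P> <P>The "more granularity to our data tracking" point is the key idea: track changed regions at a finer grain, and a repair only has to copy what actually changed. A toy illustration (purely conceptual Python; the real Storage Spaces metadata and region sizes are different):</P>

```python
class DirtyTracker:
    """Track which fixed-size regions of a volume changed while a copy was
    out of sync, so a repair resyncs only those regions."""
    def __init__(self, region_bytes: int):
        self.region_bytes = region_bytes
        self.dirty = set()

    def record_write(self, offset: int, length: int):
        first = offset // self.region_bytes
        last = (offset + length - 1) // self.region_bytes
        self.dirty.update(range(first, last + 1))

    def bytes_to_resync(self) -> int:
        return len(self.dirty) * self.region_bytes

# The same 1 MiB write costs a 1 GiB resync under coarse tracking,
# but only 4 MiB under finer-grained tracking.
GIB, MIB = 1024**3, 1024**2
coarse = DirtyTracker(region_bytes=1 * GIB)
fine = DirtyTracker(region_bytes=4 * MIB)
for tracker in (coarse, fine):
    tracker.record_write(offset=5 * GIB, length=1 * MIB)
```

<P>&nbsp;</P> <P>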
And you can go even faster with <EM>adjustable</EM> storage repair speeds.</P> <P>This means you have more control over the data resync process: you can allocate resources to either repairing data copies (resiliency) or running active workloads (performance). You can prioritize rebuilds during servicing hours or run the resync in the background to keep your business apps in priority. Faster repairs mean increased availability for your cluster nodes and allow you to service clusters more flexibly and efficiently.</P> <P>&nbsp;</P> <P>Learn more: <A href="#" target="_blank" rel="noopener">Adjustable storage repair speed in Azure Stack HCI and Windows Server clusters</A></P> <H1>&nbsp;</H1> <H1>ReFS File Snapshots</H1> <P>Microsoft’s resilient file system (ReFS) now includes the ability to snapshot files using a quick metadata operation. Snapshots differ from <A href="#" target="_blank" rel="noopener">ReFS block cloning</A> in that clones are writable, while snapshots are read-only. This functionality is especially handy in virtual machine backup scenarios with VHD files. ReFS snapshots are unique in that they take constant time irrespective of file size.</P> <P>&nbsp;</P> <P>Functionality for snapshots is currently built into <A href="#" target="_blank" rel="noopener">ReFSutil</A> or available as an API.</P> <P>&nbsp;</P> <H1>&nbsp;</H1> <H1>Other improvements:</H1> <UL> <LI><A href="#" target="_blank" rel="noopener">ReFS Block Cloning</A> performance improvements.</LI> <LI>ReFS improvements to background storage mapping to reduce latency.</LI> <LI>Various other perf and scale improvements.</LI> </UL> <P>&nbsp;</P> Fri, 03 Sep 2021 16:46:29 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-innovations-in-windows-server-2022/ba-p/2714214 AndrewHansen 2021-09-03T16:46:29Z Windows Server 2022 is full of new file services! 
https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/windows-server-2022-is-full-of-new-file-services/ba-p/2637323 <P>Heya folks,&nbsp;<A href="#" target="_blank" rel="noopener">Ned</A>&nbsp;here again.&nbsp;As you’ve heard by now, <A href="#" target="_blank" rel="noopener">Windows Server 2022 is available</A> and supported for production deployments. This new OS brings many new <A href="#" target="_blank" rel="noopener">features</A> around security, storage, networking, web, containers, applications, virtualization, edge, and Azure hybrid.</P> <P>&nbsp;</P> <P>Today I’ll highlight what we’ve introduced for the single most used scenario in organizations: File Services. Our goal with Windows Server 2022 was to make the same generational leap that we did with Windows Server 2012, where we first introduced SMB 3.0 and its security, scale, and performance options. With Windows Server 2022, we focused on today's world of hybrid cloud computing, mobile and telecommuter edge users, and increasingly congested and untrusted networks. We ended up with a large new catalog of options for your organization to stay productive.</P> <P>&nbsp;</P> <P>Mount up!</P> <P>&nbsp;</P> <H1>SMB Compression</H1> <P>Windows Server 2022 introduces <A href="#" target="_blank" rel="noopener">SMB compression</A>, which shrinks files as they transfer over the network. Compressed files use less bandwidth and take less time to transfer, at the cost of slightly increased CPU usage during transfers. 
For files with compressible space, the savings can be huge – watch my demo below of copying VHD files!</P> <P>&nbsp;</P> <P><LI-VIDEO vid="https://youtu.be/zpMS6w33H7U" align="center" size="large" width="600" height="338" uploading="false" thumbnail="https://i.ytimg.com/vi/zpMS6w33H7U/hqdefault.jpg" external="url"></LI-VIDEO></P> <H6>Narrated demo of SMB compression on Youtube</H6> <P>&nbsp;</P> <P>You configure this feature with Windows Admin Center or PowerShell, and you can set compression on SMB shares, client mapped drives, clients, servers, or even on individual file copies using robocopy. SMB compression is most effective on networks with less bandwidth, such as a client's 1 Gbps Ethernet or Wi-Fi network. It supports your favorite SMB options like SMB signing, encryption, and multichannel. Windows 11 and Windows Server 2022 both have this new capability.</P> <P>&nbsp;</P> <P>This is a game-changing feature in a world where files are bigger than ever on networks that are fuller than ever.</P> <P>&nbsp;</P> <H1>SMB Security</H1> <P>We added a raft of new <A href="#" target="_blank" rel="noopener">SMB security features</A> in Windows Server 2022 for use with client-server scenarios as well as failover clustering and high-speed RDMA networking. Security is ever evolving, and besides the new options below, we have additional security features coming retroactively to Windows 11 and Windows Server 2022.</P> <P>&nbsp;</P> <H6><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-08-30_17-30-37.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/307129i2DA36886F9CBF856/image-size/large?v=v2&amp;px=999" role="button" title="2021-08-30_17-30-37.png" alt="2021-08-30_17-30-37.png" /></span></H6> <H6>Windows Admin Center and the SMB settings for encryption and signing</H6> <P>&nbsp;</P> <H2>AES-256</H2> <P>SMB encryption protects your data from being read off the network. 
Windows Server 2022 now supports AES-256-GCM and AES-256-CCM, in addition to the primary AES-128 encryption used today. This isn’t some implication that <A href="#" target="_blank" rel="noopener">AES</A> security is flawed – there are no practical attacks that would allow someone without a key to decrypt AES-128 and none reasonably forecast, despite what the quantum computing marketing folks would have you believe – it’s to ensure that we met mandates like FIPS 197, NSA Suite B Cryptography, and others required by top secret networks. In fact, we still use AES-128 by default for performance reasons and let you opt into 256 bits with group policy and management tools. Windows 11 and Windows Server 2022 have this new capability.</P> <P>&nbsp;</P> <H2>Accelerated signing</H2> <P>SMB signing prevents tampering with data on the network, as well as relay, interception, and spoofing attacks. With Windows Server 2022, we added AES-<A href="#" target="_blank" rel="noopener">GMAC</A> acceleration to signing, meaning improved AES-NI hardware offloading provided by CPUs. When two AES-128-GMAC machines are signing SMB and running at least Nehalem processors – i.e., newer than 12 years old – you’ll see the latency of signed SMB really drop and transfer speeds improve, especially on busy processors. For more info on signing, check out <A href="#" target="_blank" rel="noopener">Configure SMB Signing with Confidence</A>. Windows 11 and Windows Server 2022 have this new capability.</P> <P>&nbsp;</P> <H2>SMB Direct and RDMA encryption</H2> <P>SMB Direct and RDMA supply high bandwidth, low latency networking fabric for workloads like Storage Spaces Direct, Storage Replica, Hyper-V, Scale-out File Server, and MS SQL server. Windows Server 2022 SMB Direct now supports encryption. Previously, <A href="#" target="_blank" rel="noopener">enabling SMB encryption disabled direct data placement</A>; this was intentional, but seriously impacted performance. 
Now we encrypt data before placement, leading to far less performance degradation while adding AES-128 and AES-256 protected packet privacy. Windows 11 Enterprise, Education, and Pro Workstation, as well as Windows Server 2022, have this new capability.</P> <P>&nbsp;</P> <H2>Failover Cluster East-West security control</H2> <P>Windows Server failover clusters use intra-node communications instances for cluster shared volumes (CSV) and the storage bus layer (SBL). Windows Server 2022 now allows granular control of encrypting intra-node storage communications for those networks beyond the “default” instance of SMB Server. This means that when using clusters and RDMA networks, you can decide to encrypt or sign east-west communications within the cluster itself for higher security, but not necessarily for north-south communications like file copies or Hyper-V Live Migration – and the reverse as well. You can control this with PowerShell currently, there’s more info at <A href="https://gorovian.000webhostapp.com/?exam=t5/failover-clustering/security-settings-for-failover-clustering/ba-p/2544690" target="_blank" rel="noopener">Security Settings for Failover Clustering</A>. Windows Server 2022 failover clusters have this new capability.</P> <P>&nbsp;</P> <H1>Storage Migration Service</H1> <P>The Storage Migration Service migrates servers, their storage, and their SMB config from old Windows and Linux to modern Windows Servers and clusters running on-prem or in Azure. You can now also migrate from NetApp FAS arrays running NetApp ONTAP 9 or later. The process is almost identical to migrating from an old Windows Server 2008 or 2012 machine, except that now you point to a NetApp FAS and pick from its running CIFS SVMs, and you can migrate the data into folders on one or more volumes as NetApp storage doesn’t have drive letters.</P> <P>&nbsp;</P> <P>Furthermore, SMS now integrates with <A href="#" target="_blank" rel="noopener">Azure File Sync</A> cloud tiering! 
AFS cloud tiering allows you to use less storage in your Windows File Server and tier your data into Azure for backups and added ransomware protection.</P> <P>&nbsp;</P> <H6><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-08-31_15-53-53.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/307130iCF344AA5BEC31A87/image-size/large?v=v2&amp;px=999" role="button" title="2021-08-31_15-53-53.png" alt="2021-08-31_15-53-53.png" /></span></H6> <H6>Windows Admin Center and the Storage Migration Service enabling AFS sync on each volume</H6> <P>&nbsp;</P> <P>Applications must be aware of AFS cloud tiering, though; when they copy a large data set and AFS says “hey, this volume is full – please wait for me to dehydrate these old files into Azure,” the app has to pause IO. SMS understands this and you can configure AFS cloud tiering on your destination so that the migration and AFS happen simultaneously. For more info on AFS cloud tiering, see <A href="#" target="_blank" rel="noopener">Understand Azure File Sync cloud tiering</A>.</P> <P>&nbsp;</P> <H6><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-08-31_15-55-19.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/307131iD6827F9091559097/image-size/large?v=v2&amp;px=999" role="button" title="2021-08-31_15-55-19.png" alt="2021-08-31_15-55-19.png" /></span></H6> <H6>Windows Admin Center and the Storage Migration Service about to begin a transfer with AFS enabled</H6> <P>&nbsp;</P> <P>For steps on SMS, see <A href="#" target="_blank" rel="noopener">Use Storage Migration Service to migrate a server</A>. 
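</P> <P>&nbsp;</P> <P>The "volume is full, let me dehydrate" dance that SMS coordinates can be pictured with a toy free-space policy (illustrative Python only; the real cloud tiering policy, thresholds, and file selection are more involved, and these names are made up):</P>

```python
def files_to_dehydrate(files, volume_bytes, used_bytes, free_target_pct):
    """Pick the least-recently-accessed files to tier into the cloud until
    the volume's free-space target is met. `files` is a list of
    (name, size_bytes, last_access) tuples."""
    target_used = volume_bytes * (100 - free_target_pct) // 100
    chosen = []
    for name, size, _ in sorted(files, key=lambda f: f[2]):  # oldest access first
        if used_bytes <= target_used:
            break
        chosen.append(name)
        used_bytes -= size  # a tiered file keeps only a small local stub
    return chosen
```

<P>&nbsp;</P> <P>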
Windows Server 2022 and Windows Server 2019 with an update both have this new capability.</P> <P>&nbsp;</P> <H1>Storage and Networking</H1> <P>While not specific to File Services, the Networking, Hyper-V, and local storage teams have been busy adding more performance options that SMB and file servers gain on Windows Server 2022. You’ll be configuring the new standalone storage bus cache feature based on your drive configuration; the networking features are on by default and just work!</P> <P>&nbsp;</P> <H2>TCP &amp; UDP Networking</H2> <P>Windows Server 2022 implements TCP <A href="#" target="_blank" rel="noopener">HyStart++</A> to reduce packet loss during connection start-up - especially helpful in high-speed networks - and <A href="#" target="_blank" rel="noopener">RACK</A> to reduce retransmit timeouts (RTO). These features are enabled in the transport stack by default and provide a smoother network data flow with better performance at high speeds. Windows 11 and Windows Server 2022 both have this new capability.</P> <P>&nbsp;</P> <P>Furthermore, UDP is becoming a more popular protocol, carrying more and more networking traffic. In Windows Server 2022 we added the game-changing UDP Segmentation Offload (USO). USO moves much of the work needed to send UDP packets from the CPU to the NIC's specialized hardware. We also added UDP Receive Side Coalescing (UDP RSC), which coalesces packets and reduces CPU usage for UDP processing. “But” you exclaim, “SMB uses TCP and RDMA, what does this matter?” Read on, noble IT Pro, to see why this is important. Windows 11 and Windows Server 2022 both have this new capability.</P> <P>&nbsp;</P> <H2>vSwitch networking</H2> <P>The odds that your file servers are virtualized are extremely high, and Windows Server 2022 improves the Hyper-V vSwitch with updated Receive Segment Coalescing (RSC). This allows the hypervisor network to coalesce packets and process them as one larger segment. 
This reduces CPU cycles and segments will remain coalesced across the entire data path until processed by the intended application. This means improved performance in both network traffic from an external host, received by a virtual NIC, as well as from a virtual NIC to another virtual NIC on the same host. Windows Server 2022 has this new capability.</P> <P>&nbsp;</P> <H2>Storage bus cache</H2> <P>Windows failover clusters with Storage Spaces Direct have supported a <A href="#" target="_blank" rel="noopener">caching feature</A> for many years and now the option is available for standalone servers using Storage Spaces as the <A href="#" target="_blank" rel="noopener">storage bus cache</A>. Storage bus cache can significantly improve read and write performance by tiering smaller fast flash drives above slower but higher capacity drives.</P> <P>&nbsp;</P> <H6><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-08-31_15-25-14.png" style="width: 728px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/307132i0371E6B1C8B4EDEB/image-size/large?v=v2&amp;px=999" role="button" title="2021-08-31_15-25-14.png" alt="2021-08-31_15-25-14.png" /></span></H6> <H6>A diagram of SSD and HDD drives arranged into mirror cache tiers and parity capacity tiers</H6> <P>&nbsp;</P> <P>SBC can work both as a read cache and a read-write cache, depending on your hardware. It adds performance to a file server without breaking the bank. Windows Server 2022 has this new capability.</P> <P>&nbsp;</P> <H1>SMB over QUIC (preview)</H1> <P>Finally, we come to the next generation of hybrid file services: <A href="#" target="_blank" rel="noopener">SMB over QUIC</A>. SMB over QUIC (preview) provides secure and reliable connectivity to edge file servers over untrusted networks like the Internet, and brings an "SMB VPN" to telecommuters, mobile device users, and high security organizations. 
<A href="https://gorovian.000webhostapp.com/?exam=t5/networking-blog/what-s-quic/ba-p/2683367" target="_blank" rel="noopener">QUIC</A> is an IETF-standardized protocol with many benefits over TCP, including:</P> <P>&nbsp;</P> <UL> <LI>All applications using QUIC travel over UDP and, by default, the firewall-friendly port 443</LI> <LI>TLS 1.3 ensures encryption of all packets and a certificate provides non-repudiation</LI> <LI>Parallel streams of reliable and unreliable application data</LI> <LI>Improved congestion control and loss recovery</LI> </UL> <P>&nbsp;</P> <P>All SMB traffic, including authentication and authorization, occurs within the tunnel and is never exposed to the underlying network. SMB over QUIC supports Kerberos but even if you decide to use NTLMv2, no challenge or response is exposed to the network.</P> <P>&nbsp;</P> <P>Here's a demonstration of deploying SMB over QUIC and the user experience:</P> <P>&nbsp;</P> <P><LI-VIDEO vid="https://youtu.be/L0yl5Z5wToA" align="center" size="large" width="600" height="338" uploading="false" thumbnail="https://i.ytimg.com/vi/L0yl5Z5wToA/hqdefault.jpg" external="url"></LI-VIDEO></P> <H6>Narrated demo of SMB over QUIC on Youtube</H6> <P>&nbsp;</P> <P>SMB behaves normally within the QUIC tunnel; as you saw in the demo, the user experience doesn't change. SMB features like multichannel, signing, compression, continuous availability, and directory leasing work normally. Windows 11 and <A href="#" target="_blank" rel="noopener">Automanage for Windows Server Services (with Windows Server 2022 Datacenter: Azure Edition Preview)</A> both have this new capability, and it is coming to Azure Files and Android Phones later.</P> <P>&nbsp;</P> <P>This is another game changer feature.</P> <P>&nbsp;</P> <H1>Summary</H1> <P>Woo that’s a bunch of new stuff to learn. 
If you’re looking to get your feet wet, I recommend <A href="#" target="_blank" rel="noopener">SMB compression</A> as the quickest place to start, then dig into <A href="#" target="_blank" rel="noopener">SMB over QUIC</A>. For more important info on Windows Server 2022 in general:</P> <P>&nbsp;</P> <UL> <LI>Get Started -&nbsp;<A href="#" target="_blank" rel="noopener">https://docs.microsoft.com/windows-server/get-started/get-started-with-windows-server</A></LI> <LI>What's New -&nbsp;<A href="#" target="_blank" rel="noopener">https://docs.microsoft.com/windows-server/get-started/whats-new-in-windows-server-2022</A></LI> <LI>Editions Comparisons -&nbsp;<A href="#" target="_blank" rel="noopener">https://docs.microsoft.com/windows-server/get-started/editions-comparison-windows-server-2022</A></LI> <LI>Hardware requirements -&nbsp;<A href="#" target="_blank" rel="noopener">https://docs.microsoft.com/windows-server/get-started/hardware-requirements</A></LI> <LI>Removed Features -&nbsp;<A href="#" target="_blank" rel="noopener">https://docs.microsoft.com/windows-server/get-started/removed-deprecated-features-windows-server-2022</A></LI> <LI>Windows 11 Insider Preview -&nbsp;<A href="#" target="_blank" rel="noopener">Home | Windows Insider Blog</A></LI> </UL> <P>&nbsp;</P> <P>Don’t forget to add <A href="#" target="_blank" rel="noopener">Microsoft Ignite</A> to your calendar for November – there will be tons of Windows Server info. And <EM>really </EM>don’t forget to register for the <A href="#" target="_blank" rel="noopener">free Windows Server Summit on September 16, 2021</A>. I have a presentation with Rick Claus that features great info, discussion, demos, and a special guest! 
Who could it be?</P> <P>&nbsp;</P> <H6><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Picture1.png" style="width: 540px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/307133i0EE25F7F167EB9F5/image-size/large?v=v2&amp;px=999" role="button" title="Picture1.png" alt="Picture1.png" /></span></H6> <H6>Author's very embarrassed dogs wearing Halloween costumes of a viking and cowboy-riding horse</H6> <P>&nbsp;</P> <P>Until then,</P> <P>&nbsp;</P> <P>- Ned “So Much Beauty” Pyle</P> Thu, 02 Sep 2021 01:22:54 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/windows-server-2022-is-full-of-new-file-services/ba-p/2637323 Ned Pyle 2021-09-02T01:22:54Z Configure SMB Signing with Confidence https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/configure-smb-signing-with-confidence/ba-p/2418102 <P>Heya folks, <A href="#" target="_self">Ned</A> here again. Many years ago, we made configuring SMB signing in Windows pretty complicated. Then, years later, we made it even more complicated in an attempt to be less complicated. Today I'm here to explain the SMB signing rules once and for all.&nbsp;Probably.</P> <P>&nbsp;</P> <P>Sign me up!</P> <P>&nbsp;</P> <H2>What is signing and why do you care</H2> <P>SMB signing means that every SMB 3.1.1 message contains a signature generated using session key and AES. The client puts a <A href="#" target="_self">hash of the entire message</A> into the signature field of the SMB2 header.&nbsp;If anyone changes the message itself later on the wire, the hash won't match and SMB knows that someone tampered with the data. It also confirms to sender and receiver that they are who they say they are, breaking relay attacks. 
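</P> <P>&nbsp;</P> <P>That tamper check is a keyed MAC. Here's a toy version using HMAC-SHA256 from Python's standard library (the algorithm SMB 2.02 signing used; modern SMB 3.x actually signs with AES-CMAC or AES-GMAC over the full message with the signature field zeroed, which this sketch skips):</P>

```python
import hashlib
import hmac

def sign(session_key: bytes, message: bytes) -> bytes:
    """Compute the signature a peer would place in the header's signature field."""
    return hmac.new(session_key, message, hashlib.sha256).digest()

def verify(session_key: bytes, message: bytes, signature: bytes) -> bool:
    """Recompute and compare; any change to the message breaks the match."""
    return hmac.compare_digest(sign(session_key, message), signature)

key = b"negotiated-session-key"          # derived during session setup
msg = b"SMB2 WRITE FileId=42 Offset=0"   # stand-in for a signed SMB message
sig = sign(key, msg)
tampered = msg.replace(b"Offset=0", b"Offset=4096")
```

<P>Without the session key, an attacker on the wire can't forge a signature that matches the modified message, which is what breaks tampering and relay.</P> <P>&nbsp;</P> <P>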
Ideally, you are using Kerberos instead of NTLMv2 so that your session key starts strong; don't connect to shares with IP addresses and <A href="https://gorovian.000webhostapp.com/?exam=t5/core-infrastructure-and-security/using-computer-name-aliases-in-place-of-dns-cname-records/ba-p/259064" target="_self">don't use CNAME records - Kerberos is here to help</A>!</P> <P>&nbsp;</P> <P>By default, domain controllers require SMB signing of anyone connecting to them, typically for SYSVOL and NETLOGON to get group policy and those sweet logon scripts. Less well known is that - starting in Windows 10 - <A href="#" target="_self">UNC Hardening</A> from the client&nbsp;<EM>also&nbsp;</EM>requires signing when talking to those same two shares and goes further by requiring Kerberos (it technically requires mutual auth, but for Windows, that means Kerberos).&nbsp;</P> <P>&nbsp;</P> <P>SMB signing first appeared in Windows 2000, NT 4.0, and Windows 98; it's old enough to drink.&nbsp;Signing algorithms have evolved over time: SMB 2.02 signing was improved with HMAC SHA-256, replacing the old MD5 method from the late 1990s that was in SMB1 (may it burn in Hades for all eternity). SMB 3.0 added AES-CMAC. In Windows Server 2022 and Windows 11, we added <A href="#" target="_self">AES-128-GMAC signing acceleration</A>, so if you're looking for the best performance and protection combo, start planning your upgrades.&nbsp;</P> <P>&nbsp;</P> <H2>The confusing bit</H2> <P>This 20+ year evolutionary process brings me to the confusing bit: "requiring" versus "enabling" signing in Windows security policy. 
We have four settings to control SMB signing,&nbsp;<EM>but they behave differently and mean different things for SMB2+ versus SMB1.</EM></P> <P>&nbsp;</P> <UL> <LI>Policy: "Microsoft network client: Digitally sign communications (<STRONG>always</STRONG>)"</LI> </UL> <P class="lia-indent-padding-left-90px">HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LanManWorkstation\Parameters</P> <P class="lia-indent-padding-left-90px"><STRONG>Require</STRONG>SecuritySignature = 1 or 0</P> <P class="lia-indent-padding-left-90px">&nbsp;</P> <UL> <LI>Policy: "Microsoft network client: Digitally sign communications (<STRONG>if server agrees</STRONG>)"</LI> </UL> <P class="lia-indent-padding-left-90px">HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LanManWorkstation\Parameters</P> <P class="lia-indent-padding-left-90px"><STRONG>Enable</STRONG>SecuritySignature = 1 or 0</P> <P>&nbsp;</P> <UL> <LI>Policy: "Microsoft network server: Digitally sign communications (<STRONG>always</STRONG>)"</LI> </UL> <P class="lia-indent-padding-left-90px">HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LanManServer\Parameters</P> <P class="lia-indent-padding-left-90px"><STRONG>Require</STRONG>SecuritySignature = 1 or 0</P> <P>&nbsp;</P> <UL> <LI>Policy: "Microsoft network server: Digitally sign communications (<STRONG>if client agrees</STRONG>)"</LI> </UL> <P class="lia-indent-padding-left-90px">HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LanManServer\Parameters</P> <P class="lia-indent-padding-left-90px"><STRONG>Enable</STRONG>SecuritySignature = 1 or 0</P> <P>&nbsp;</P> <P>Note my use of <STRONG>bold</STRONG>. "Always" means "<EM>required.</EM>" "If agrees" means "<EM>enabled</EM>." 
If I could go back in time and find out who decided to use synonyms here instead of the actual words, it would be my next stop after buying every share of MSFT I could get my hands on in 1986.&nbsp;</P> <P>&nbsp;</P> <P>These settings live in the classic Security Settings of computer group policy, which you'll see by launching GPMC.MSC or GPEDIT.MSC.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-29_18-25-55.jpg" style="width: 902px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/299441i63B18AA11C071CA1/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-29_18-25-55.jpg" alt="2021-07-29_18-25-55.jpg" /></span></P> <P>&nbsp;</P> <P>With me so far? Cool.</P> <P>&nbsp;</P> <H2>Understanding 'Required'&nbsp;</H2> <P>The “enabled” registry setting for the SMB2+ client and SMB2+ server is <EM><STRONG>ignored</STRONG></EM>. It does nothing at all. It is pointless unless you are using SMB1. SMB2 signing is controlled solely by being required or not, and if either the server or client requires it, you will sign. Only if they&nbsp;<EM>both </EM>have signing set to 0 will signing not occur. 
Again, SMB signing is <EM>always enabled</EM> in SMB2+.&nbsp;</P> <P>&nbsp;</P> <TABLE style="width: 850px;" width="850"> <TBODY> <TR> <TD width="242">&nbsp;</TD> <TD width="249"> <P>Server – <STRONG>Require</STRONG>SecuritySignature=1</P> </TD> <TD width="264"> <P>Server – <STRONG>Require</STRONG>SecuritySignature=0</P> </TD> </TR> <TR> <TD width="242"> <P>Client – <STRONG>Require</STRONG>SecuritySignature=1</P> </TD> <TD width="249"> <P>Signed&nbsp;</P> </TD> <TD width="264"> <P>Signed&nbsp;</P> </TD> </TR> <TR> <TD width="242"> <P>Client – <STRONG>Require</STRONG>SecuritySignature=0</P> </TD> <TD width="249"> <P>Signed&nbsp;</P> </TD> <TD width="264"> <P><EM>Not Signed </EM></P> </TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <H2>Understanding 'Enabled'&nbsp;</H2> <P>The legacy SMB1 client that <A href="#" target="_self">is no longer installed by default in Windows 10 or Windows 2019</A> commercial editions had a more complex (i.e. bad) behavior based on the naïve idea that clients and servers should sign <EM>if they feel like it</EM> but that it was ok not to sign otherwise, known as "enabled", i.e. "if agrees". Signing is still possible in any case, nothing turns the signing code off. This is not a security model we follow anymore but everyone was wearing 1-strap undone overalls and baggy windbreakers at this point in the 90s and thinking they looked good. SMB1 also had the "required" setting, for those who wanted more strictness, and that will override the "if I feel like it" behavior as you'd hope. So we end up with this complex matrix. 
Again, <A href="#" target="_self">it only matters for the SMB1 protocol that you are not supposed to be using.</A></P> <P>&nbsp;</P> <TABLE style="width: 1025px;" width="1025"> <TBODY> <TR> <TD width="245">&nbsp;</TD> <TD width="246"> <P>Server – <STRONG>Require</STRONG>SecuritySignature=1</P> </TD> <TD width="246"> <P>Server – <STRONG>Enable</STRONG>SecuritySignature=1</P> </TD> <TD width="246"> <P>Server – <STRONG>Enable</STRONG>SecuritySignature=0</P> </TD> </TR> <TR> <TD width="245"> <P>Client – <STRONG>Require</STRONG>SecuritySignature=1</P> </TD> <TD width="246"> <P>Signed</P> </TD> <TD width="246"> <P>Signed&nbsp;</P> </TD> <TD width="246"> <P>Signed&nbsp;</P> </TD> </TR> <TR> <TD width="245"> <P>Client – <STRONG>Enable</STRONG>SecuritySignature=1</P> </TD> <TD width="246"> <P>Signed&nbsp;</P> </TD> <TD width="246"> <P>Signed&nbsp;</P> </TD> <TD width="246"> <P><EM>Not signed&nbsp;</EM></P> </TD> </TR> <TR> <TD width="245"> <P>Client – <STRONG>Enable</STRONG>SecuritySignature=0</P> </TD> <TD width="246"> <P>Signed&nbsp;</P> </TD> <TD width="246"> <P><EM>Not Signed&nbsp;</EM></P> </TD> <TD width="246"> <P><EM>Not Signed&nbsp;</EM></P> </TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <H2>And another thing</H2> <P data-unlink="true">The idea that the server should mandate these settings in either case isn't great either; it leads to attacks where someone intercepts the negotiation and says "nah, don't sign, you're fine". Which is why years ago we created <A href="#" target="_self">pre-authentication integrity</A> protection, <A href="#" target="_self">UNC Hardening</A>, and added the ability to require signing when mapping drives with NET USE and New-SmbMapping. All of this client-side security requirement is the proper technique, where the client decides it wants security and if it doesn't get it, closes the connection. Requiring Kerberos by disabling the use of NTLM&nbsp;and enabling UNC hardening will make things much more secure. 
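To make the two tables above concrete, here's a quick Python sketch (my own illustration of the matrices in this post, not anything Windows ships) of when a connection actually ends up signed:

```python
def smb2_signed(client_requires: bool, server_requires: bool) -> bool:
    """SMB2+: the 'enable' setting is ignored entirely. The connection is
    signed unless BOTH sides have RequireSecuritySignature = 0."""
    return client_requires or server_requires

def smb1_signed(client: str, server: str) -> bool:
    """Legacy SMB1 behavior. Each side is one of 'require', 'enable', or
    'off'. 'require' on either side forces signing; otherwise both sides
    must have signing enabled for it to happen at all."""
    if client == "require" or server == "require":
        return True
    return client == "enable" and server == "enable"

# SMB2+ matrix: only 0/0 goes unsigned
assert smb2_signed(False, True) is True
assert smb2_signed(False, False) is False

# SMB1 matrix: 'enable' paired with 'off' goes unsigned
assert smb1_signed("enable", "off") is False
assert smb1_signed("off", "require") is True
```

The SMB2+ rule really is that simple: either side requiring signing wins, and "enable" never enters into it.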
In fact, I have a long article on all of this you should read once, then five times more:&nbsp;</P> <P>&nbsp;</P> <P><STRONG><A href="https://gorovian.000webhostapp.com/?exam=t5/itops-talk-blog/how-to-defend-users-from-interception-attacks-via-smb-client/ba-p/1494995" target="_self">How to Defend Users from Interception Attacks via SMB Client Defense</A></STRONG></P> <P>&nbsp;</P> <H2>The big sum up</H2> <P>If you really,&nbsp;<EM>really</EM> want to understand SMB signing, the article to read is&nbsp;<A href="#" target="_self">SMB 2 and SMB 3 security in Windows 10: the anatomy of signing and cryptographic keys</A> by Edgar Olougouna, who works in our dev support org and is a seriously smart man to be trusted in all things SMB.&nbsp;</P> <P>&nbsp;</P> <P>As for all these weird ideas we had around signing back in the late 90s - I wasn't around for these decisions but it's ok, you can still blame me if you want. At least I never wore the 1-strap overalls.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-29_19-35-39.jpg" style="width: 784px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/299466iB025134C06F97F24/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-29_19-35-39.jpg" alt="2021-07-29_19-35-39.jpg" /></span></P> <P>&nbsp;</P> <P>Until next time,</P> <P>&nbsp;</P> <P>- Ned "I saw the sign" Pyle</P> <P>&nbsp;</P> Wed, 04 Aug 2021 18:33:00 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/configure-smb-signing-with-confidence/ba-p/2418102 Ned Pyle 2021-08-04T18:33:00Z What the heck is the File Server "role" in Windows Server??? https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/what-the-heck-is-the-file-server-quot-role-quot-in-windows/ba-p/2418147 <P><SPAN>Heya folks,&nbsp;</SPAN><A href="#" target="_self" rel="nofollow noopener noreferrer">Ned</A><SPAN>&nbsp;here again. 
Today I clear up an old idiosyncrasy&nbsp;of Windows&nbsp;Server: if the SMB Server service is always installed, why is there a role called "File Server" and what does enabling it do?&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN>Let's... role ;)</SPAN></P> <P>&nbsp;</P> <P><STRONG>Default SMB firewall behavior</STRONG></P> <P><SPAN>The SMB Server service - "Server", aka "Lanmanserver" - always exists in Windows and isn't something you install; it's just there, as soon as you install the OS. However, since Windows XP and Windows Server 2003, that service can't be contacted from remote machines by default because the built-in firewall blocks it. SMB needs, at a minimum, TCP/445 inbound, and without that port open, there is no remote file serving in SMB2+ on any supported version of Windows. Even though the C$ and ADMIN$ built-in shares exist by default, no one can access them from a remote machine by default.&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN>But you probably don't remember <EM>opening</EM> a firewall port on your file server, right? You created a share and it just worked. That's because as soon as you create a custom SMB share, SMB Server automatically enables the various SMB firewall rules for file servers for access, administration, applications, etc. 
Watch:</SPAN></P> <P>&nbsp;</P> <P><SPAN>Brand new machine with no custom shares, viewed via <A href="#" target="_self">Windows Admin Center</A>:&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-20_12-30-39.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297197iD5E37F3601BF2660/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-20_12-30-39.png" alt="2021-07-20_12-30-39.png" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>Firewall on a brand new machine:</SPAN></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-20_12-26-33.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297200i3746F57FFF85E358/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-20_12-26-33.jpg" alt="2021-07-20_12-26-33.jpg" /></span></P> <P>&nbsp;</P> <P><SPAN>I make a custom share:</SPAN></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-20_12-32-17.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297198iD0570AFA3307E9CF/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-20_12-32-17.png" alt="2021-07-20_12-32-17.png" /></span></P> <P>&nbsp;</P> <P><SPAN>The firewall afterwards:</SPAN></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-20_12-34-58.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297199iE10B1CFD8461D1C2/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-20_12-34-58.png" alt="2021-07-20_12-34-58.png" /></span></P> <P>&nbsp;</P> <P><STRONG>The File Server role</STRONG></P> <P>That works well for dedicated file servers - as soon as you add a share, everything is taken care of. 
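If you ever want to verify that default block from another machine, a plain TCP probe of port 445 is enough. Here's a hypothetical Python sketch (the server name in the comment is made up; note a False result can mean either a rejected connection or a silently dropped packet):

```python
import socket

def smb_port_open(host: str, port: int = 445, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds, i.e. the
    firewall is not blocking inbound SMB on that machine."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical server name):
# smb_port_open("fileserver01")  # stays False until a share is created
#                                # or the File Server role opens the port
```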
But we also needed a way to just enable file server administration and grant administrators access to the built-in system shares C$ and Admin$ using SMB2+ on all Windows Servers. We didn't want them to have to create a share just to access some existing built-in shares. And we didn't want them to dig around in the firewall looking for the right rules to enable for management. So when you "install" the file server role, we just enable the basic ports needed for file server administration and accessing those built-in SMB shares; no legacy stuff or historical app compat, just the very basics. In fact, it's very possible the server is not a "file server", so much as one you just want to copy a few files to or from as an administrator.&nbsp;</P> <P>&nbsp;</P> <P>Here I am adding the File Server role:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-20_13-36-16.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297201i27592BEE2E668261/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-20_13-36-16.png" alt="2021-07-20_13-36-16.png" /></span></P> <P>&nbsp;</P> <P>And here are the firewall rules enabled:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-20_13-39-56.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297202iA8BC8617B1F5B4FF/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-20_13-39-56.png" alt="2021-07-20_13-39-56.png" /></span></P> <P>&nbsp;</P> <P>So now you know. I'm thinking about changing the default firewall rules opened by creating a share, as they are a legacy from older times; we'd do this in the Windows Insider builds first and see how many tens of thousands of applications I can break that were piggybacking on those. It's going to take a while. 
&gt;_&lt;</P> <P>&nbsp;</P> <P>You are now ready for File Server trivia night at any bar or restaurant near Microsoft campus. I prefer PostDoc, myself.</P> <P>&nbsp;</P> <P>Until next time,</P> <P>&nbsp;</P> <P>- Ned "the name 'firewall' is very dumb, a real firewall allows nothing through, ever" Pyle</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> Tue, 27 Jul 2021 18:57:57 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/what-the-heck-is-the-file-server-quot-role-quot-in-windows/ba-p/2418147 Ned Pyle 2021-07-27T18:57:57Z SMB over QUIC is now in public preview! https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/smb-over-quic-is-now-in-public-preview/ba-p/2482964 <P><SPAN>Heya folks,&nbsp;</SPAN><A href="#" target="_self" rel="nofollow noopener noreferrer">Ned</A><SPAN>&nbsp;here again. Today I announced the new SMB over QUIC feature for Windows Server 2022 Datacenter: Azure Edition and Windows Insider at the&nbsp;<A href="#" target="_self" rel="noopener noreferrer">Windows Server 2022, Best on Azure webinar</A>. If you want to cut right to the chase, head to&nbsp;<A href="#" target="_self">SMB over QUIC (PREVIEW) on Docs</A></SPAN>.</P> <P>&nbsp;</P> <P>SMB over QUIC (Preview) offers an "SMB VPN" for telecommuters, mobile device users, and high-security organizations. The server certificate creates a TLS 1.3-encrypted tunnel over the internet-friendly UDP port 443 instead of TCP/445. All SMB traffic within the tunnel, including authentication and authorization, is never exposed to the network. 
Inside that tunnel, SMB behaves totally normally with all its usual capabilities.</P> <P>&nbsp;</P> <P>Here's a demo of turning on the SMB over QUIC feature &amp; using it.&nbsp;</P> <P>&nbsp;</P> <P><LI-VIDEO vid="https://youtu.be/OslBSB8IkUw" align="center" size="large" width="600" height="338" uploading="false" thumbnail="https://i.ytimg.com/vi/OslBSB8IkUw/hqdefault.jpg" external="url"></LI-VIDEO></P> <P>&nbsp;</P> <P>To learn more about SMB over QUIC, see demos, and try it out for yourself, head over to&nbsp;<A href="#" target="_self">SMB over QUIC (PREVIEW) on Docs</A>!&nbsp;</P> <P>&nbsp;</P> <P>- Ned "quick!" Pyle</P> Mon, 19 Jul 2021 20:29:40 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/smb-over-quic-is-now-in-public-preview/ba-p/2482964 Ned Pyle 2021-07-19T20:29:40Z SMB Compression in Windows Server 2022 and Windows Insider https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/smb-compression-in-windows-server-2022-and-windows-insider/ba-p/2481646 <P>Heya folks,&nbsp;<A href="#" target="_self" rel="nofollow noopener noreferrer">Ned</A>&nbsp;here again. Today I announced the new SMB compression feature for Windows Server 2022 and Windows Insider at the <A href="#" target="_self">Windows Server 2022, Best on Azure webinar</A>. A proper article is now up on&nbsp;<A href="#" target="_self">SMB compression at docs.microsoft.com</A>.&nbsp;</P> <P>&nbsp;</P> <P><SPAN>SMB compression allows an administrator, user, or application to request compression of files as they transfer over the network. This removes the need to first manually deflate a file with an application, copy it, then inflate it on the destination computer. Compressed files will consume less network bandwidth and take less time to transfer, at the cost of slightly increased CPU usage during transfers. 
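To get an intuition for that tradeoff, here's a toy Python model (my own back-of-the-envelope sketch with made-up numbers, not a benchmark) of wire time versus link speed and compression ratio:

```python
def transfer_seconds(size_gb: float, link_gbps: float, ratio: float = 1.0) -> float:
    """Rough wire time for a file, ignoring protocol overhead and CPU cost.
    ratio is the compression ratio (e.g. 2.0 means the file shrinks by half)."""
    size_gbit = size_gb * 8                # gigabytes -> gigabits
    return size_gbit / (link_gbps * ratio)

# A 25 GB VHDX over a 1 Gbps client link:
uncompressed = transfer_seconds(25, 1.0)       # 200 seconds
compressed   = transfer_seconds(25, 1.0, 2.0)  # 100 seconds, if it halves

# The same file over 100 Gbps barely benefits in wall-clock terms:
fast = transfer_seconds(25, 100.0)             # 2 seconds
```

The slower the pipe, the more seconds a given compression ratio buys you, which is exactly why a home Wi-Fi client sees the biggest win.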
SMB compression is most effective on networks with less bandwidth, such as a client's 1 Gbps Ethernet or Wi-Fi network; a file transfer over an uncongested 100 Gbps Ethernet network between two servers with flash storage may be just as fast without SMB compression in practice, but will still create less congestion for other applications.</SPAN></P> <P>&nbsp;</P> <P><SPAN>Here's SMB compression in action!</SPAN></P> <P>&nbsp;</P> <P><SPAN><LI-VIDEO vid="https://www.youtube.com/watch?v=zpMS6w33H7U" align="center" size="large" width="600" height="338" uploading="false" thumbnail="https://i.ytimg.com/vi/zpMS6w33H7U/hqdefault.jpg" external="url"></LI-VIDEO></SPAN></P> <P>&nbsp;</P> <P><SPAN>You can try this out right now. You'll need:&nbsp;</SPAN></P> <DIV><BR /> <UL> <LI><SPAN>A file server running Windows Server 2022 </SPAN><SPAN><A href="#" target="_self">on-premises</A> </SPAN><SPAN>or </SPAN><A href="#" target="_self"><SPAN>in Azure</SPAN></A></LI> <LI><SPAN style="font-family: inherit;">A <A href="#" target="_self">Windows Insider Dev Channel client</A></SPAN></LI> <LI><SPAN style="font-family: inherit;"><A href="#" target="_self">Windows Admin Center</A>&nbsp;</SPAN></LI> </UL> <P>&nbsp;</P> <P><SPAN style="font-family: inherit;">For more details, head over to&nbsp;<A href="#" target="_self">the main article on docs.microsoft.com</A>&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN style="font-family: inherit;">- Ned "smoosh it" Pyle&nbsp;</SPAN></P> </DIV> Thu, 15 Jul 2021 19:11:21 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/smb-compression-in-windows-server-2022-and-windows-insider/ba-p/2481646 Ned Pyle 2021-07-15T19:11:21Z Storage Migration Service now supports NetApp https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-migration-service-now-supports-netapp/ba-p/2373420 <P>Heya folks, <A href="#" target="_self">Ned</A> here again. 
Eagle-eyed readers may have noticed in the <A href="#" target="_self">April 22, 2021-KB5001384 monthly update for Windows Server 2019</A>&nbsp;- and now in the May 2021 Patch Tuesday - we added support for migrating from NetApp FAS arrays onto Windows Servers and clusters. I already updated all our SMS documentation on&nbsp;<A href="#" target="_blank" rel="noopener">https://aka.ms/storagemigrationservice</A> so you're good-to-go on steps. There's a new version of the Windows Admin Center SMS extension that will also automatically update to support the scenario.</P> <P>&nbsp;</P> <P>We tried to make this experience change as little as possible from the previous scenarios of migrating from Windows Server and Samba. The one big thing to know - and we reiterate this in the docs and WAC - is that you <STRONG>must</STRONG> have a NetApp support contract, because you need to install the NetApp PowerShell Toolkit, which is only available behind that licensed customers-only support site.&nbsp; &nbsp;</P> <P>&nbsp;</P> <P>You'll see the new option once you patch your orchestrator server with the May monthly update, update your WAC SMS extension, and install the NetApp PowerShell toolkit on your orchestrator:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="newjob.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/282401iF2D57CB1B6133FB8/image-size/large?v=v2&amp;px=999" role="button" title="newjob.png" alt="newjob.png" /></span></P> <P>&nbsp;</P> <P>After you create a new job and see the new Prerequisites helper screen, you simply give the network info and creds for your NetApp FAS array and we'll find all the CIFS (SMB) SVMs running. 
It's actually a bit easier than Windows Server sources; since there is a known host array to enumerate, we save you typing in all the SMB servers.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="selectnas.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/282403iB68C1E23C7328CBF/image-size/large?v=v2&amp;px=999" role="button" title="selectnas.png" alt="selectnas.png" /></span></P> <P>&nbsp;</P> <P>Then you provide the Windows admin source credentials just like usual, and pick which CIFS (virtual) servers you want to migrate.&nbsp;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="creds.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/282400i5FA653A685FAFD23/image-size/large?v=v2&amp;px=999" role="button" title="creds.png" alt="creds.png" /></span></P> <P>&nbsp;</P> <P>After that, the migration experience is almost exactly the same as you're used to with Windows and Samba source migrations. You still perform inventory, transfer, cutover just like before with exactly the same steps. The one difference is that since NetApp CIFS servers don't use DHCP, you will choose to assign static IP addresses or use NetApp "subnets" before you start cutover. Voila.</P> <P>&nbsp;</P> <P>I'll get a video demo of the experience up here when I find the time. Busy bee with many new things coming soon, eating up my blogging time. 
:D</img></P> <P>&nbsp;</P> <P>- Ned "NedApp" Pyle&nbsp;</P> Thu, 20 May 2021 19:02:37 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-migration-service-now-supports-netapp/ba-p/2373420 Ned Pyle 2021-05-20T19:02:37Z Discover cloud storage solutions at Azure Storage Day on April 29, 2021 https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/discover-cloud-storage-solutions-at-azure-storage-day-on-april/ba-p/2270652 <P><EM>Guest post from the Azure Storage team</EM></P> <P>&nbsp;</P> <P><A href="#" target="_self"><EM><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="foo.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272563i763E7E7AE8FEEB0B/image-size/medium?v=v2&amp;px=400" role="button" title="foo.png" alt="foo.png" /></span></EM></A></P> <P>&nbsp;</P> <P>We are excited to announce Azure Storage Day, <STRONG>a free digital event on April 29, 2021</STRONG>, where you can explore cloud storage solutions for all your enterprise workloads. Join us to:</P> <P>&nbsp;</P> <UL> <LI>Understand cloud storage trends and innovations—and plan for the future.</LI> <LI>Map Azure Storage solutions to your different enterprise workloads.</LI> <LI>See demos of Azure disk, object, and file storage services.</LI> <LI>Learn how to optimize your migration with best practices.</LI> <LI>Find out how real customers are accelerating their cloud adoption with Azure Storage.</LI> <LI>Get answers to your storage questions from product experts.</LI> </UL> <P><BR />This digital event is your opportunity to engage with the cloud storage community, see Azure Storage solutions in action, and discover how to build a foundation for all of your enterprise workloads at every stage of your digital transformation.<BR />The need for reliable cloud storage has never been greater. 
More companies are investing in digital transformation to become more resilient and agile in order to better serve their customers. The rapid pace of digital transformation has resulted in exponential data growth, driving up demand for dependable and scalable cloud data storage services.</P> <H2><BR /><STRONG>Register <A href="#" target="_self">here</A></STRONG>.</H2> <P>&nbsp;</P> <P>Hope to see you there!</P> <P>&nbsp;</P> <P>- Azure Storage Marketing Team</P> <P>&nbsp;</P> <P><A href="#" target="_self"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="foo2.png" style="width: 320px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272564i3A9BC5C765FBE296/image-size/large?v=v2&amp;px=999" role="button" title="foo2.png" alt="foo2.png" /></span></A></P> Tue, 13 Apr 2021 17:41:39 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/discover-cloud-storage-solutions-at-azure-storage-day-on-april/ba-p/2270652 Ned Pyle 2021-04-13T17:41:39Z Register for Microsoft Plugfest 33 https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/register-for-microsoft-plugfest-33/ba-p/2203609 <P><EM>(guest post from Amrit Shandilya from the Microsoft File System Minifilter Team)</EM></P> <P>&nbsp;</P> <P>Hello!&nbsp;We are excited to announce that Plugfest 33 has been scheduled!&nbsp;</P> <P>&nbsp;</P> <P>In light of global health concerns due to novel coronavirus (COVID-19), Plugfest is going virtual for the first time ever!&nbsp;Health and well-being of our employees, partners, customers and other guests remain our ultimate priority.&nbsp;In this Virtual Plugfest, VMs hosted on Azure will take the place of physical HW.</P> <P>&nbsp;</P> <P><FONT color="#FF0000"><STRONG>Update 4/24/2021: registration is now closed</STRONG></FONT></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="march.png" style="width: 400px;"><img 
src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262924i64F5CD6326CD7011/image-size/medium?v=v2&amp;px=400" role="button" title="march.png" alt="march.png" /></span>&nbsp;&nbsp;<span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="april.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262925i78A015564845ED5C/image-size/medium?v=v2&amp;px=400" role="button" title="april.png" alt="april.png" /></span></P> <P>&nbsp;</P> <P><STRONG>What is Plugfest?</STRONG></P> <P>The goal of Plugfest is to help you prepare your file system minifilter, network filter, or boot encryption driver for the next version of Windows by performing interoperability testing with other products.</P> <P><STRONG>&nbsp;</STRONG></P> <P><STRONG>When is Plugfest 33?</STRONG></P> <P>Plugfest 33 will take place in batches between Monday, April 12, 2021 and Monday, April 26, 2021.</P> <P>&nbsp;&nbsp;&nbsp;&nbsp;</P> <P><STRONG>Who are we looking for?</STRONG></P> <P>Independent Software Vendors (ISVs), developers writing file system minifilter drivers and/or network filter drivers for Windows.</P> <P>&nbsp;</P> <P><STRONG>How much does it cost?</STRONG></P> <P><STRONG>FREE </STRONG>- There is no cost to attend this event.</P> <P>&nbsp;</P> <P><STRONG>Why should I go?</STRONG></P> <P>Here are four major benefits of Plugfest:</P> <P>&nbsp;</P> <UL> <LI>The opportunity to test products extensively for interoperability with other vendors' products and with Microsoft products. 
This has traditionally been a great way to understand interoperability scenarios and flush out any interoperability-related bugs.</LI> <LI>Get exclusive talks and informative sessions organized by the File System Minifilter team about topics that affect the filter driver community.</LI> <LI>Great opportunity to meet with the file systems team, the network team, the cluster team, and various other teams at Microsoft and get answers to your unanswered technical questions.</LI> <LI>Early exposure to new hardware innovations that affect filter functionality.</LI> </UL> <P><STRONG>&nbsp;</STRONG></P> <P><STRONG>How do I register?</STRONG></P> <P><FONT color="#FF0000"><STRONG>Update 4/24/2021: registration is now closed</STRONG></FONT></P> <P>&nbsp;</P> <P>You can learn more about <A href="#" target="_self">File System Minifilters</A> and <A href="#" target="_self">Network Filters</A> at the links provided.</P> <P>&nbsp;</P> <P>If you have any additional questions, please email us at <A href="https://gorovian.000webhostapp.com/?exam=mailto:amshandi@microsoft.com" target="_blank" rel="noopener">amshandi@microsoft.com</A> (Amrit Shandilya). 
&nbsp;</P> <P>&nbsp;</P> <P>- Amrit &amp; The Microsoft File System Minifilter Team</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Microsoft-logo_rgb_c-gray.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262926iC1B7436888A7B993/image-size/medium?v=v2&amp;px=400" role="button" title="Microsoft-logo_rgb_c-gray.png" alt="Microsoft-logo_rgb_c-gray.png" /></span></P> Wed, 24 Mar 2021 17:55:31 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/register-for-microsoft-plugfest-33/ba-p/2203609 Ned Pyle 2021-03-24T17:55:31Z SMS "HRESULT E_FAIL from COM component" workaround coming https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/sms-quot-hresult-e-fail-from-com-component-quot-workaround/ba-p/1878264 <P><SPAN>(Updated Nov 12, 2020)</SPAN></P> <P>&nbsp;</P> <P><SPAN>Heya, <A href="#" target="_self">Ned</A> here again. Storage Migration Service admins: if you're seeing "<STRONG>Error HRESULT E_FAIL has been returned from a call to a COM component</STRONG>" during transfer validation after installing the November cumulative update, we have a workaround in WAC:</SPAN></P> <P>&nbsp;</P> <OL> <LI>Install latest SMS WAC tool (1.115 as of today). 
It will automatically appear in the WAC feed.</LI> <LI>Navigate to the "Adjust Settings" step of the Transfer phase.</LI> <LI>Enable "Override Transfer Validation"</LI> <LI>Proceed with your transfer, either without running "Validate" or running it and ignoring the E_FAIL error.</LI> </OL> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2020-11-12_16-54-04.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/233329iD81AF6707CFE3809/image-size/large?v=v2&amp;px=999" role="button" title="2020-11-12_16-54-04.png" alt="2020-11-12_16-54-04.png" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>If you don't see the new checkbox, clear your browser cache &amp; it will appear. Browsers are the absolute worst.</SPAN></P> <P>&nbsp;</P> <P><SPAN>Once you do this, you will be able to ignore the error (which is bogus) and proceed with transfer; this will allow us time to diagnose the real issue and have a real fix inside the Orchestrator itself. We still cannot reproduce this issue ourselves and suspect some networking environment issue in a small subset of customer environments.</SPAN></P> <P>&nbsp;</P> <P><SPAN>Do <STRONG>not&nbsp;</STRONG>uninstall the November update as a workaround. I say again, do <STRONG>NOT</STRONG> do that. The update upgrades your database and cannot be reversed without deleting the DB.</SPAN></P> <P>&nbsp;</P> <P><SPAN>- Ned "Hang in there" Pyle</SPAN></P> Fri, 13 Nov 2020 01:11:52 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/sms-quot-hresult-e-fail-from-com-component-quot-workaround/ba-p/1878264 Ned Pyle 2020-11-13T01:11:52Z Storage Migration Service October 2020 Update https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-migration-service-october-2020-update/ba-p/1781790 <P data-unlink="true">Heya folks,<SPAN>&nbsp;</SPAN><A href="#" target="_self" rel="nofollow noopener noreferrer">Ned</A><SPAN>&nbsp;</SPAN>here again. 
We've released our next set of features and fixes for the Windows Server 2019 Storage Migration Service, as part of cumulative updates <A href="#" target="_self">KB4580390 (Windows Server 2019) &amp;</A>&nbsp;<A href="#" target="_self">KB4580386 (SAC 1903, 1909)</A>.<FONT color="#000000">&nbsp;This update comes optionally in the third week of October and then as part of the normal "Patch Tuesday" on November 10. Read on for details.</FONT></P> <P data-unlink="true">&nbsp;</P> <P data-unlink="true">We also have big new SMS features coming in early 2021 that I'll be talking about at the <A href="#" target="_self">Windows Server Summit&nbsp;</A>on&nbsp;<SPAN class="date">Thursday, October 29, 2020. <A href="#" target="_self">Register now</A>, it's free!</SPAN></P> <P>&nbsp;</P> <H2 id="toc-hId-1817398839">What's new</H2> <P>Installing&nbsp;<FONT color="#ff6600"><FONT color="#000000">these KBs</FONT>&nbsp;</FONT>on your SMS orchestrator and any WS2019 destination servers gives you:</P> <P>&nbsp;</P> <UL> <LI>Significant file transfer and especially <EM>re-transfer</EM> performance improvements</LI> <LI>Significant inventory performance improvement with a large number of files and folders&nbsp;</LI> <LI>Inventory validation</LI> <LI>Fix for transfers failing if a source computer had an extremely large number of shares</LI> <LI>Fix for the "Source not available" error when the migrating account was missing permissions on shares</LI> <LI>Fix for cluster network migrations</LI> <LI>Fix for Windows Server 2003 inventory failure (yes, I said 2003!)</LI> <LI>Test for the Windows Server 2008 R2 required update</LI> <LI>Various Samba-Linux migration fixes</LI> <LI>Various database and reliability fixes</LI> </UL> <P>There is also a new Windows Admin Center update releasing into the feed. 
It includes:</P> <P>&nbsp;</P> <UL> <LI>A new prerequisites page to help you get ready for a migration</LI> </UL> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Capture1.PNG" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/228101i39D61E757BF43B55/image-size/large?v=v2&amp;px=999" role="button" title="Capture1.PNG" alt="Capture1.PNG" /></span></P> <P>&nbsp;</P> <UL> <LI>Inventory validation option</LI> </UL> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Capture2.PNG" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/228100i0B522191EBF307D4/image-size/large?v=v2&amp;px=999" role="button" title="Capture2.PNG" alt="Capture2.PNG" /></span><BR /><BR /></P> <UL> <LI>Automatic install of the cluster management tools on the orchestrator if not installed</LI> </UL> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Capture3.PNG" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/228104i581BC4A140F787DB/image-size/large?v=v2&amp;px=999" role="button" title="Capture3.PNG" alt="Capture3.PNG" /></span></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Capture5.PNG" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/228102i18E6295776D60C31/image-size/large?v=v2&amp;px=999" role="button" title="Capture5.PNG" alt="Capture5.PNG" /></span><BR /><BR /></P> <UL> <LI>Various ease of use and layout changes based on customer feedback</LI> <LI>Note: we're also going to add automatic installation of the SMS Proxy on the destination server if not already installed, but it will be in another WAC update a few weeks after this one.</LI> </UL> <P>&nbsp;</P> <H2 id="toc-hId-1941515922">Sum up</H2> <P>I'll be updating the official SMS docs on some of these 
points; I know your boss doesn't believe in blogs, even though I wrote both articles. :D&nbsp;</P> <P>&nbsp;</P> <P data-unlink="true">The <A href="#" target="_self">KB4580390 (Win10 1809, Windows Server 2019) &amp;</A>&nbsp;<A href="#" target="_self">KB4580386 (1903, 1909)</A><FONT color="#000000">&nbsp;</FONT>cumulative updates are available on the Microsoft Catalog for download, and will automatically install via Windows Update in the November Patch Tuesday release. Let me know how it goes. And remember, more big features are coming early next year; come watch the <A href="#" target="_self">Windows Server Summit&nbsp;</A>on&nbsp;<SPAN class="date">Thursday, October 29, 2020, to learn more.</SPAN></P> <P>&nbsp;</P> <P>- Ned "grind grind grind" Pyle</P> Fri, 13 Nov 2020 01:12:54 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-migration-service-october-2020-update/ba-p/1781790 Ned Pyle 2020-11-13T01:12:54Z Azure Files NFS now in preview https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/azure-files-nfs-now-in-preview/ba-p/1674391 <P>Heya folks, <A href="#" target="_self">Ned</A> here again.&nbsp;Azure Files isn't just SMB anymore; we now offer NFS 4.1 as a service! Extreme high availability &amp; durability, full file system semantics, AES-256 encryption at rest, and up to 100K IOPS &amp; 80 Gibps of throughput. 
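</P> <P>&nbsp;</P> <P>On a Linux client, an Azure Files NFS share is consumed with a standard NFS 4.1 mount. The account and share names below are placeholders, and the option string is a sketch of the general shape; check the linked preview docs for the current syntax:</P>

```shell
# Hypothetical storage account and share names -- substitute your own.
ACCOUNT=mystorageacct
SHARE=myshare
# Azure Files exports NFS shares as <account>.file.core.windows.net:/<account>/<share>.
# Build the mount command an admin would run (printed here rather than executed):
CMD="mount -t nfs -o vers=4,minorversion=1,sec=sys $ACCOUNT.file.core.windows.net:/$ACCOUNT/$SHARE /mnt/$SHARE"
echo "$CMD"
```

<P>&nbsp;</P> <P>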
More info at:&nbsp;</P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px"><A href="#" target="_self">NFS 4.1 support for Azure Files is now in preview</A></P> <P>&nbsp;</P> <P>&nbsp;</P> <P>Get started with your first NFS share <A href="#" target="_self">here.</A></P> <P>&nbsp;</P> <P>- Ned "Azure Files isn't just for SMB anymore" Pyle</P> Wed, 16 Sep 2020 17:12:41 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/azure-files-nfs-now-in-preview/ba-p/1674391 Ned Pyle 2020-09-16T17:12:41Z Defending SMB Clients from Interception Attacks https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/defending-smb-clients-from-interception-attacks/ba-p/1496515 <P>Heya folks,<SPAN>&nbsp;</SPAN><A href="#" target="_self" rel="nofollow noopener noreferrer">Ned</A><SPAN>&nbsp;</SPAN>here again. I recently wrote a guest post on the IT Ops Talk blog about increasing security on your SMB clients. It's about defending against interception attacks (previously called "man-in-the-middle" attacks) and includes specific recommendations, steps, and best practices.&nbsp;</P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px"><A href="https://gorovian.000webhostapp.com/?exam=t5/itops-talk-blog/how-to-defend-users-from-interception-attacks-via-smb-client/ba-p/1494995" target="_self">How to Defend Users from Interception Attacks via SMB Client Defense</A></P> <P>&nbsp;</P> <P>You should check it out.&nbsp;</P> <P>&nbsp;</P> <P>- Ned Pyle</P> Fri, 13 Nov 2020 01:14:08 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/defending-smb-clients-from-interception-attacks/ba-p/1496515 Ned Pyle 2020-11-13T01:14:08Z SMB Traffic Control https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/smb-traffic-control/ba-p/1471680 <P>Heya folks, <A href="#" target="_self">Ned</A> here again. I recently wrote a guest post on the IT Ops Talk blog about increasing security by controlling SMB traffic's ingress, egress, and lateral movement. 
You'll learn best practices, follow hands-on steps, and gain a deeper understanding of the Windows Defender Firewall's capabilities.</P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px"><A href="https://gorovian.000webhostapp.com/?exam=t5/itops-talk-blog/beyond-the-edge-how-to-secure-smb-traffic-in-windows/ba-p/1447159" target="_self">Beyond the Edge: How to Secure SMB Traffic in Windows</A></P> <P>&nbsp;</P> <P>I now consider this required reading for IT Pros. :)</P> <P>&nbsp;</P> <P>- Ned Pyle</P> Wed, 17 Jun 2020 17:55:44 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/smb-traffic-control/ba-p/1471680 Ned Pyle 2020-06-17T17:55:44Z SMB over QUIC https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/smb-over-quic/ba-p/1205082 <P data-unlink="true">Heya folks, <A href="#" target="_self">Ned</A> here again. I wrote a new blog post, <A href="https://gorovian.000webhostapp.com/?exam=t5/itops-talk-blog/smb-over-quic-files-without-the-vpn/ba-p/1183449" target="_self">SMB over QUIC: Files Without the VPN</A>.&nbsp;Learn about this new optional feature coming to replace TCP/IP for scenarios like hybrid computing and mobile workers - VPN'less SMB 3.1.1 file access, always secured under TLS 1.3. The post has a demo and more details.</P> <P>&nbsp;</P> <P>It's on the <A href="https://gorovian.000webhostapp.com/?exam=t5/itops-talk-blog/bg-p/ITOpsTalkBlog" target="_self">IT Ops Talk Blog</A>, which you should&nbsp;<EM>definitely</EM> subscribe to for other good reasons. I've got a series of posts coming your way every week.</P> <P>&nbsp;</P> <P>- Ned</P> <P>&nbsp;</P> Fri, 13 Nov 2020 01:14:38 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/smb-over-quic/ba-p/1205082 Ned Pyle 2020-11-13T01:14:38Z SMB is Dead, Long Live SMB! https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/smb-is-dead-long-live-smb/ba-p/1185401 <P>Hello again, James Kehr here with another guest post. 
Titles are hard to do. They must convey the topic to the reader while being both interesting and informative. Doing this with a technical article makes life even harder. Now imagine my dilemma when starting an article about SMB1 behaviors in modern Windows. Think about that for a minute. Go ahead. The article will still be here.</P> <P>&nbsp;</P> <P>Today I'll explain why you still see a little SMB1 on your network even after you uninstalled SMB1 from Windows, and why it's a good thing.&nbsp;</P> <P>&nbsp;</P> <H4>SMB1 is Dead!</H4> <P>The end of SMB version 1 (SMB1) has been discussed in great detail by Ned Pyle, who runs the SMB show here at Microsoft. Go read <A href="#" target="_self">this article</A> if you have not.&nbsp;</P> <P>&nbsp;</P> <P>At first glance this seems like I’m beating a dead horse. If that’s what you thought, you’d be right. Unfortunately, this figuratively dead horse needs to be beaten<SUP>1</SUP> some more.</P> <P>&nbsp;</P> <P>Please stop using SMB1. Please get rid of those ancient, legacy systems that only support SMB1. We constantly get cases from customers asking why modern Windows 10 doesn’t support SMB1 out-of-the-box so it will work with their old, insecure systems.</P> <P>&nbsp;</P> <P>Let’s go over this one last time.</P> <P>&nbsp;</P> <UL> <LI>The only versions of Windows that require SMB1 are end-of-support (EOS). By <U>years</U>! These are Windows Server 2003 (EOS July 2015), Windows 2000 Server (EOS July 2010), their client editions, and older.</LI> <LI>Samba and Linux distros like Ubuntu have retired SMB1 as well. If you have a Linux/Unix-like distro that only supports SMB1, it’s time to upgrade.</LI> <LI>Not only does Microsoft not support these EOS operating systems (OS’s), we do not support interoperability with them. Meaning, if the latest version of Windows 10 does not work with an EOS version of Windows over SMB, Microsoft will not support you.</LI> </UL> <P>&nbsp;</P> <P>Why not? 
Let’s start by putting the age of Windows 2000 (W2000) and 2003 (W2003) into perspective.</P> <P>&nbsp;</P> <UL> <LI>EOS Windows versus Apple:&nbsp; <UL> <LI>Windows 2000 was released 7 years before the first iPhone.</LI> <LI>Windows 2003/XP was released 4 years before the first iPhone.</LI> <LI>Apple computers were still running IBM PowerPC processors.</LI> <LI>Asking for EOS Windows support is like asking Apple to support PowerPC Macs. I’m sure Apple support would get a good laugh out of the request, but I imagine that’s as far as the request would go.</LI> </UL> </LI> </UL> <P>&nbsp;</P> <UL> <LI>… vs Android <UL> <LI>Didn’t even exist.</LI> </UL> </LI> </UL> <P>&nbsp;</P> <UL> <LI>… vs Linux <UL> <LI>Kernel 2.2.14 was released the same year as Windows 2000.</LI> <LI>Version 2.4 was the newest kernel when Windows Server 2003 launched.</LI> <LI>Support for the last version of the version 2 kernel, 2.6.32, ended in 2016.</LI> <LI>How fast do you think the “no” would come back from Linux distro support if you asked for support on kernel 2.2 or 2.4? Assuming your distro of choice even existed back then.</LI> </UL> </LI> </UL> <P>By asking Microsoft to support EOS Windows, people are effectively asking us to support an OS that is so old that the modern smartphone didn’t even exist yet. Not counting Pocket PC or Windows Mobile here. An era when dial-up internet was still <A href="#" target="_blank" rel="noopener">dominant</A> and the world was still learning how high-speed Internet would impact computer security.</P> <P>&nbsp;</P> <P>Multi-core processors didn’t exist yet, outside of the mainframe space. Those didn’t come around until 2004 (AMD) and 2005 (Intel). X86 64-bit processors didn’t exist when W2000 was released and they were brand new for W2003. 
Running legacy OS’s is not just bad security, it’s scary security, because you are running an OS built for a completely different era of computing.</P> <P>&nbsp;</P> <P>The real question here is: Why are you still running an OS or device that is so old it requires SMB1?</P> <P>&nbsp;</P> <H4>The SMB1 Problem</H4> <P>The biggest problem with SMB1 is that it was developed for the pre-Internet era. The first dialect came out in 1983 from IBM. Security and performance were designed for closed token ring networks and old-fashioned spinny disks. As EternalBlue and WannaCry would later prove, it is not a protocol that has aged well, and it is no longer safe to use.</P> <P>&nbsp;</P> <P>Unlike most other deprecated protocols, however, SMB1 controls the keys to the kingdom: data, services, file systems, accounts, and more. This makes SMB1 exploits critically harmful.</P> <P>&nbsp;</P> <P>When Microsoft decided to retire SMB1 for real, and stop asking nicely, we tore off that band-aid by removing it completely from the Windows 10 Spring 2017 Update (Win10 1703) when Windows detected that SMB1 was not in use. No SMB1 dialect was sent during negotiation; no SMB1 was allowed at all. And that broke things.</P> <P>&nbsp;</P> <P>It turned out that some devices which only know about SMB1 weren’t quite sure what to do when getting an SMB request with no SMB1 in it. This caused a lot of strange behavior on the Windows side; namely, hanging or pausing until everything finally timed out. This manifested in Windows as an unresponsive Windows Explorer (the technical name for the yellow folder icon you click on to access your files). People don’t like that. I don’t like that.</P> <P>&nbsp;</P> <P>We ended up making changes to mitigate this without actually enabling SMB1.</P> <P>&nbsp;</P> <UL> <LI>Windows 10 1709 (2017 Fall Update) and newer will send SMB1 dialects as part of the SMB negotiate. We do this to help interoperability with legacy devices; i.e., 
prevent Windows Explorer from pausing/hanging.</LI> <LI>We will not actually allow an SMB1 connection when SMB1 is disabled. We only pretend to. The connection will end up getting closed when the server or client tries to use an SMB1 dialect.</LI> </UL> <P>In addition to preventing uncomfortably long waits for Windows users, it lets us bubble up messages about SMB1 only devices on your network. System admins can look in the<STRONG> Event Viewer &gt; Applications and Services Logs &gt; Microsoft &gt; Windows &gt; SMBServer-Operational log</STRONG> for event ID <STRONG>1001</STRONG>, which is created when SMB1 is used.</P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px"><EM>Log Name:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Microsoft-Windows-SMBServer/Operational</EM></P> <P class="lia-indent-padding-left-30px"><EM>Source:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Microsoft-Windows-SMBServer</EM></P> <P class="lia-indent-padding-left-30px"><EM>Date:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 9/17/2019 12:17:41 PM</EM></P> <P class="lia-indent-padding-left-30px"><EM>Event ID:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1001</EM></P> <P class="lia-indent-padding-left-30px"><EM>Task Category: (1001)</EM></P> <P class="lia-indent-padding-left-30px"><EM>Level:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;Information</EM></P> <P class="lia-indent-padding-left-30px"><EM>Keywords:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; (8)</EM></P> <P class="lia-indent-padding-left-30px"><EM>User:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A</EM></P> <P class="lia-indent-padding-left-30px"><EM>Computer:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; DC01</EM></P> <P class="lia-indent-padding-left-30px"><EM>Description:</EM></P> <P class="lia-indent-padding-left-30px"><EM>A client attempted to access the server using SMB1 and was rejected because SMB1 file sharing support is disabled or has been uninstalled.</EM></P> <P class="lia-indent-padding-left-30px">&nbsp;</P> <P 
class="lia-indent-padding-left-30px"><EM>Guidance:</EM></P> <P class="lia-indent-padding-left-30px"><EM>An administrator has disabled or uninstalled server support for SMB1. Clients running Windows XP / Windows Server 2003 R2 and earlier will not be able to access this server. Clients running Windows Vista / Windows Server 2008 and later no longer require SMB1. To determine which clients are attempting to access this server using SMB1, use the Windows PowerShell cmdlet Set-SmbServerConfiguration to enable SMB1 access auditing.</EM></P> <P>&nbsp;</P> <P>SMB1 auditing can also be enabled to get more details about what is using SMB1 on your network.</P> <P>&nbsp;</P> <PRE><STRONG>Set-SmbServerConfiguration -AuditSmb1Access $true</STRONG></PRE> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px"><EM>Log Name:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Microsoft-Windows-SMBServer/Audit</EM></P> <P class="lia-indent-padding-left-30px"><EM>Source:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Microsoft-Windows-SMBServer</EM></P> <P class="lia-indent-padding-left-30px"><EM>Date:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 12/13/2019 11:37:53 AM</EM></P> <P class="lia-indent-padding-left-30px"><EM>Event ID:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3000</EM></P> <P class="lia-indent-padding-left-30px"><EM>Task Category: None</EM></P> <P class="lia-indent-padding-left-30px"><EM>Level:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Information</EM></P> <P class="lia-indent-padding-left-30px"><EM>Keywords:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</EM></P> <P class="lia-indent-padding-left-30px"><EM>User:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A</EM></P> <P class="lia-indent-padding-left-30px"><EM>Computer:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; DC01.Contoso.com</EM></P> <P class="lia-indent-padding-left-30px"><EM>Description:</EM></P> <P class="lia-indent-padding-left-30px"><EM>SMB1 access</EM></P> <P class="lia-indent-padding-left-30px">&nbsp;</P> <P 
class="lia-indent-padding-left-30px"><EM>Client Address: 192.168.1.214</EM></P> <P class="lia-indent-padding-left-30px">&nbsp;</P> <P class="lia-indent-padding-left-30px"><EM>Guidance:</EM></P> <P class="lia-indent-padding-left-30px"><EM>This event indicates that a client attempted to access the server using SMB1. To stop auditing SMB1 access, use the Windows PowerShell cmdlet Set-SmbServerConfiguration.</EM></P> <P>&nbsp;</P> <H1>Negotiation</H1> <P>&nbsp;</P> <P>The SMB Negotiate command is where the SMB dialect is …well… negotiated.</P> <P>&nbsp;</P> <P>The SMB Client – the system requesting access to the remote file system – sends a list of all the dialects it supports. A dialect is a revision of the SMB protocol specification. Every revision of the SMB protocol has, so far, gotten a new dialect. Though SMB 3.1.1 was built to be more extensible so it may be a while before the next dialect is created.</P> <P>&nbsp;</P> <P>The SMB Server – the system hosting the file system – then selects the newest dialect that both client and server support. When the server supports none of the client protocols it aborts the connection with a TCP RST (reset).</P> <P>&nbsp;</P> <P>How does faux SMB1 support work with Negotiate? Delicately.</P> <P>&nbsp;</P> <P>Here’s what it looks like when a Windows SMB Server rejects an SMB1 only connection from an SMB Client. 
This is the SMB1 only request, as seen by Wireshark.</P> <P>&nbsp;</P> <TABLE> <TBODY> <TR> <TD> <P><STRONG>No.</STRONG></P> </TD> <TD> <P><STRONG>Time</STRONG></P> </TD> <TD> <P><STRONG>Source</STRONG></P> </TD> <TD> <P><STRONG>Destination</STRONG></P> </TD> <TD> <P><STRONG>Protocol</STRONG></P> </TD> <TD> <P><STRONG>Length</STRONG></P> </TD> <TD> <P><STRONG>Info</STRONG></P> </TD> </TR> <TR> <TD> <P>24</P> </TD> <TD> <P>6.892775</P> </TD> <TD> <P>192.168.1.214</P> </TD> <TD> <P>192.168.1.215</P> </TD> <TD> <P>SMB</P> </TD> <TD> <P>191</P> </TD> <TD> <P>Negotiate Protocol Request</P> </TD> </TR> </TBODY> </TABLE> <P>…</P> <P>SMB (Server Message Block Protocol)</P> <P>&nbsp;&nbsp;&nbsp; SMB Header</P> <P>&nbsp;&nbsp;&nbsp; Negotiate Protocol Request (0x72)</P> <P>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Word Count (WCT): 0</P> <P>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Byte Count (BCC): 98</P> <P>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Requested Dialects</P> <P>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Dialect: PC NETWORK PROGRAM 1.0&nbsp;&nbsp; &lt;&lt;&lt;&lt;&lt;&lt; These are all the old SMB1 dialects</P> <P>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Dialect: LANMAN1.0</P> <P>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Dialect: Windows for Workgroups 3.1a</P> <P>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Dialect: LM1.2X002</P> <P>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Dialect: LANMAN2.1</P> <P>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Dialect: NT LM 0.12</P> <P>&nbsp;</P> <P>This is Windows saying “NOPE!” to SMB1 in the form of an immediate TCP reset. 
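</P> <P>&nbsp;</P> <P>As an aside, that Byte Count (BCC) of 98 is easy to verify: in an SMB1 Negotiate request, each requested dialect is encoded as a 0x02 buffer-format byte followed by a null-terminated ASCII string. A small Python sketch (illustrative only, not a working SMB client) reproduces the math for the capture above:</P>

```python
# Each SMB1 Negotiate dialect entry is: a 0x02 buffer-format byte,
# the dialect string in ASCII, then a terminating NUL byte.
DIALECTS = [
    "PC NETWORK PROGRAM 1.0",
    "LANMAN1.0",
    "Windows for Workgroups 3.1a",
    "LM1.2X002",
    "LANMAN2.1",
    "NT LM 0.12",
]

def encode_dialects(dialects):
    """Encode a dialect list as it appears in the Negotiate request body."""
    return b"".join(b"\x02" + d.encode("ascii") + b"\x00" for d in dialects)

# The length of the encoded list is the Byte Count (BCC) field in the capture.
print(len(encode_dialects(DIALECTS)))  # 98
```

<P>The same arithmetic explains the BCC of 34 in the Windows 10 discovery packet shown below: "NT LM 0.12", "SMB 2.002", and "SMB 2.???" encode to 12 + 11 + 11 bytes.</P> <P>&nbsp;</P> <P>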
Get thee gone, SMB1!</P> <P>&nbsp;</P> <TABLE> <TBODY> <TR> <TD> <P>25</P> </TD> <TD> <P>6.893143</P> </TD> <TD> <P>192.168.1.215</P> </TD> <TD> <P>192.168.1.214</P> </TD> <TD> <P>TCP</P> </TD> <TD> <P>54</P> </TD> <TD> <P>445 → 49769 [RST, ACK] Seq=1 Ack=138 Win=0 Len=0</P> </TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <P>Now let’s look at the Windows SMB1 discovery packet. In this test I have disabled SMB2 support on a Windows Server. The SMB Client is a standard Win10 1909 client.</P> <P>&nbsp;</P> <TABLE width="624"> <TBODY> <TR> <TD width="42"> <P>No.</P> </TD> <TD width="96"> <P>Time</P> </TD> <TD width="114"> <P>Source</P> </TD> <TD width="114"> <P>Destination</P> </TD> <TD width="60"> <P>Protocol</P> </TD> <TD width="198"> <P>Info</P> </TD> </TR> <TR> <TD width="42"> <P>4</P> </TD> <TD width="96"> <P>15:50:48.03</P> </TD> <TD width="114"> <P>192.168.1.211</P> </TD> <TD width="114"> <P>192.168.1.109</P> </TD> <TD width="60"> <P>SMB</P> </TD> <TD width="198"> <P>Negotiate Protocol Request</P> </TD> </TR> </TBODY> </TABLE> <P>…</P> <P>SMB (Server Message Block Protocol)</P> <P>&nbsp;&nbsp;&nbsp; SMB Header</P> <P>&nbsp;&nbsp;&nbsp; Negotiate Protocol Request (0x72)</P> <P>&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Word Count (WCT): 0</P> <P>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Byte Count (BCC): 34</P> <P>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Requested Dialects</P> <P>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Dialect: NT LM 0.12</P> <P>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Dialect: SMB 2.002</P> <P>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Dialect: SMB 2.???</P> <P>&nbsp;</P> <P>There are three Dialects listed in the Negotiate Protocol Request frame:</P> <UL> <LI>NT LM 0.12 – This is the final SMB1 dialect created, also known as the CIFS dialect.</LI> <LI>SMB 2.002 – This is the first SMB2 dialect released with Windows Vista.</LI> <LI>SMB 2.??? 
– This is the wildcard SMB2 dialect. Which I won’t go into here.</LI> </UL> <P>NT LM 0.12 (CIFS) is a red herring. This is the newer mechanism in Win10 to flush out SMB1 only devices. My test server, which only has SMB1 enabled, does what it’s supposed to do and tries to catch the red herring.</P> <P>&nbsp;</P> <TABLE width="624"> <TBODY> <TR> <TD width="39"> <P>No.</P> </TD> <TD width="103"> <P>Time</P> </TD> <TD width="119"> <P>Source</P> </TD> <TD width="119"> <P>Destination</P> </TD> <TD width="79"> <P>Protocol</P> </TD> <TD width="164"> <P>Info</P> </TD> </TR> <TR> <TD width="39"> <P>5</P> </TD> <TD width="103"> <P>15:50:48.04</P> </TD> <TD width="119"> <P>192.168.1.109</P> </TD> <TD width="119"> <P>192.168.1.211</P> </TD> <TD width="79"> <P>SMB</P> </TD> <TD width="164"> <P>Negotiate Protocol Response</P> </TD> </TR> </TBODY> </TABLE> <P>…</P> <P>SMB (Server Message Block Protocol)</P> <P>&nbsp;&nbsp;&nbsp; SMB Header</P> <P>&nbsp;&nbsp;&nbsp; Negotiate Protocol Response (0x72)</P> <P>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Word Count (WCT): 17</P> <P>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Selected Index: 0: NT LM 0.12&nbsp; &lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt; The server took the bait.</P> <P>…</P> <P>&nbsp;</P> <P>Can you guess what happens next? 
If you guessed that the Win10 SMB Client sent the SMB Server a big fat “NOPE!” in the form of a TCP RST, you would be correct.</P> <P>&nbsp;</P> <TABLE> <TBODY> <TR> <TD width="39"> <P>No.</P> </TD> <TD width="103"> <P>Time</P> </TD> <TD width="119"> <P>Source</P> </TD> <TD width="119"> <P>Destination</P> </TD> <TD width="79"> <P>Protocol</P> </TD> <TD width="158"> <P>Info</P> </TD> </TR> <TR> <TD width="39"> <P>6</P> </TD> <TD width="103"> <P>15:50:48.04</P> </TD> <TD width="119"> <P>192.168.1.211</P> </TD> <TD width="119"> <P>192.168.1.109</P> </TD> <TD width="79"> <P>TCP</P> </TD> <TD width="158"> <P>53994 → 445 [RST, ACK] Seq=74 Ack=132 Win=0 Len=0</P> </TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <P>&nbsp;“Why not just fix SMB1?” you may ask. We did! It’s called SMB2.</P> <P>&nbsp;</P> <H1>Long Live SMB2</H1> <P>&nbsp;</P> <P>Devs and marketing teams like bigger version numbers. Devs because it allows them to track changes. Marketing teams because bigger numbers mean you have something new to sell. The number of changes made between SMB1 and SMB2 was staggering. The entire protocol was redesigned from the ground up. New commands, WAN optimizations galore, tightened security, more features…it was the stuff of marketer’s dreams.</P> <P>&nbsp;</P> <P>Instead of adding yet another dialect to SMB, it was decided that a new major version of SMB was needed. This was a justified move given that the entire protocol spec was essentially rewritten. Only concepts from the original SMB and CIFS protocols were adopted. And thus, <A href="#" target="_self">MS-SMB2</A> was born.</P> <P>&nbsp;</P> <P>SMB2 has now become SMB3. This is more of a marketing move since SMB3 still uses the MS-SMB2 protocol spec. There were just enough changes between SMB 2.1 and SMB 3.0 to justify a new major version. 
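</P> <P>&nbsp;</P> <P>Conceptually, the server side of the Negotiate exchange described earlier boils down to "pick the newest dialect both peers support, otherwise abort." A toy model in Python (the dialect ordering is illustrative, and this is not how Windows implements it):</P>

```python
# Newest-to-oldest ranking of some well-known dialects (illustrative list).
ORDER = ["NT LM 0.12", "SMB 2.002", "SMB 2.1", "SMB 3.0", "SMB 3.0.2", "SMB 3.1.1"]

def select_dialect(client_dialects, server_dialects):
    """Pick the newest dialect both sides support; None models the TCP RST case."""
    common = set(client_dialects) & set(server_dialects)
    if not common:
        return None  # no shared dialect -> the connection gets reset
    return max(common, key=ORDER.index)

# A modern client and server agree on the newest dialect they share:
print(select_dialect(["NT LM 0.12", "SMB 2.002", "SMB 3.1.1"],
                     ["SMB 2.002", "SMB 3.1.1"]))  # SMB 3.1.1
```

<P>&nbsp;</P> <P>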
On an interesting historical note, SMB3 was originally SMB 2.2 until the marketing team got involved.</P> <P>&nbsp;</P> <P>Now, on top of the fancy protocol redesign, Microsoft has added several cool new features, and continues to do so. Features like SMB Encryption allow full AES encryption of data payloads to prevent man-in-the-middle (MITM) snooping and attacks. Continuous Availability provides seamless failover between clustered file servers. SMB Direct allows hundreds of Gbps of throughput between RDMA-capable servers while only sipping CPU cycles. The list goes on, and new features are added all the time to adapt SMB to the ever-changing network landscape, like SMB over QUIC and SMB compression arriving in future builds of Windows. Or, now, if you’re reading this far enough in the future.</P> <P>&nbsp;</P> <P>There really is no reason to keep SMB1 around anymore. Managers like to cite cost as the reason, but when you think about the potential cost of a data breach, is it really worth the risk? Why would anyone want to keep around a system that can only use network protocols known to be exploitable and control the keys to the kingdom? Hopefully, the answers to these questions are no and heck no.</P> <P>&nbsp;</P> <P>SMB is dead, long live SMB!</P> <P>&nbsp;</P> <P><SUP>1</SUP> – No animals were harmed during the creation of this article. Some electrons were deeply offended, but that’s about it.</P> <P>&nbsp;</P> Wed, 26 Feb 2020 17:22:36 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/smb-is-dead-long-live-smb/ba-p/1185401 JamesKehr 2020-02-26T17:22:36Z SMB and Null Sessions: Why Your Pen Test is Probably Wrong https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/smb-and-null-sessions-why-your-pen-test-is-probably-wrong/ba-p/1185365 <P>Hi everyone, James Kehr here with a guest post. 
One of the SMB cases we get regularly at Microsoft Support is, “my pen test says you allow Null sessions!” Followed by a string of CVE numbers; like, CVE-1999-0519 and CVE-1999-0520. Which can sometimes lead to, “Why hasn’t Microsoft fixed this? It’s been 20 years!” This post will show why this is probably a false positive on modern Windows. And if it’s not, someone may have done something very bad to your Windows installation.</P> <P>&nbsp;</P> <H4>Null sessions</H4> <P>One of the technologies I have worked with the most during my time at Microsoft is SMB. You may also know SMB by one of the other common names: CIFS and Samba. While these are technically three different things, many people use the terms interchangeably to describe the same network file system protocol. Why? That’s a long story involving IBM, Microsoft, Linux, and about 35 years of history. All you need to know is that at Microsoft we use the term SMB (Server Message Block).</P> <P>&nbsp;</P> <P>But if you must know, the simplified version goes something like this: SMB is the protocol, CIFS is an old dialect of SMB, and Samba is the Linux/Unix-like implementation of the SMB protocol. People and companies get familiar with one of those terms and stick to it, which has made the three names interchangeable outside of technical documentation.</P> <P>&nbsp;</P> <P>What is a Null Session you may ask? A null session implies that access to a network resource, most commonly the IPC$ "Windows Named Pipe" share, was granted without authentication. Also known as anonymous or guest access. Windows has not allowed null or anonymous access for a very long time.</P> <P>&nbsp;</P> <H4>Credentials and SMB</H4> <P>Most intrusion detection software doesn’t seem to understand how Windows auth works over SMB in an Active Directory (AD) environment, and that is usually the cause of the false positive. 
Windows and SMB really want people to make a successful connection to a file share, and they go out of their way to try every possible credential available to complete the connection.</P> <P>&nbsp;</P> <P>People tend to think of a username as the only authentication mechanism and, in a workgroup, that is mostly right. Add AD to the mix and the authentication story changes. The act of joining a computer to a domain creates a computer object. The computer object (&lt;hostname&gt;$) is a valid authentication object in AD and can be used to authenticate to Windows and an SMB share. Though it is rare that SMB falls back to the computer, or machine, account, it is possible.</P> <P>&nbsp;</P> <P>By way of example, Hyper-V can be set up to access virtual hard drives over SMB 3 without using S2D or a SAN. This <A href="#" target="_self">setup</A> uses the machine account of the Hyper-V host(s) to access the SMB share rather than a user or a service account.</P> <P>&nbsp;</P> <P>When no account is explicitly provided, SMB will try implicit credentials. First, the logged-on user’s account, and then, sometimes, the computer object. Some shares and third-party file servers with certain permissions will allow computer accounts to connect. You may have limited or no usable access, but it will authenticate.</P> <P>&nbsp;</P> <P>Using implicit credentials is not a null session connection, since credentials are being provided, even though they were not explicitly provided. This means the SMB session is being authorized, and therefore not a null session.</P> <P>&nbsp;</P> <P>What are implicit and explicit credentials, exactly? An explicit credential is one that is provided as part of authentication. It’s like clicking the “Connect using different credentials” checkbox when mapping a drive with File Explorer, or /user with “net use”. Implicit credentials are the opposite of that. 
When no explicit credential is provided it is implied that the operating system should use the credentials of the current logged-on user.</P> <P>&nbsp;</P> <H4>The Test</H4> <P>You don’t believe me, do you? The pen test said it’s a null session. You even ran the commands to prove it. Now you think Microsoft is up to their old tricks. The 20-year-old null session bug still isn’t fixed!</P> <P>&nbsp;</P> <P>Fine, let me prove it to you. These are all tests anyone can run to confirm. I’ll use Wireshark, the industry standard packet capture and analysis tool, with three main tests for your edification.</P> <P>&nbsp;</P> <OL> <LI><STRONG>Workgroup to Workgroup</STRONG> – Two non-domain joined Windows 10 1903 (Spring 2019 Update) systems. All updates installed, through Oct 2019. No changes made other than setting up a file share.</LI> <LI><STRONG>Workstation to Workstation</STRONG> – Two domain joined Windows 10 1903 (Spring 2019 Update) systems. All updates installed, through Oct 2019. No changes made other than domain join and setting up a file share.</LI> <LI><STRONG>Workstation to Domain Controller (DC)</STRONG> – One domain joined workstation to the DC. Workstation running fully patched Win10 1903, DC running fully patched Windows Server 2019. Default domain policies, no hardening, no extra policies or configuration.</LI> </OL> <P>There are two commands commonly used to test null sessions, and I’ll be testing both, plus one extra scenario-based test. This first command explicitly sets a NULL user (/user:) and password ("")</P> <P>&nbsp;</P> <PRE><STRONG>net use </STRONG><STRONG>\\&lt;IP ADDRESS&gt;\IPC$ "" /user:</STRONG></PRE> <P>The second command sets no explicit credentials. 
This is where the more interesting behavior will happen because it leaves room for Windows to try implicit credentials.</P> <P>&nbsp;</P> <PRE><STRONG>net use </STRONG><STRONG>\\&lt;IP ADDRESS&gt;\IPC$</STRONG></PRE> <P>A normal share for non-IPC$ testing.</P> <P>&nbsp;</P> <PRE><STRONG>net use \\&lt;IP Address&gt;\share</STRONG></PRE> <P>For domain testing I’ll use the domain’s SYSVOL share.</P> <P>&nbsp;</P> <PRE><STRONG>net use \\&lt;DC&gt;\SYSVOL</STRONG></PRE> <P>&nbsp;</P> <H4>Understanding IPC$</H4> <P>The inter-process communication share ("IPC$") is a special case. It’s the share that allows remote Named Pipe access. Named Pipes are an old-school method used to allow two services to talk with each other, even over a network connection. IPC$ functionality has been around for ages and the default access rules for IPC$ have changed with each release of Windows. Older versions of Windows may behave differently than these tests.</P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px">Network setup:</P> <P class="lia-indent-padding-left-30px">Subnet: 10.19.0.0/24</P> <P class="lia-indent-padding-left-30px">&nbsp;</P> <P class="lia-indent-padding-left-30px">Server: &nbsp;&nbsp;&nbsp;10.19.0.1</P> <P class="lia-indent-padding-left-30px">RedWrk:&nbsp;&nbsp;&nbsp; 10.19.0.2</P> <P class="lia-indent-padding-left-30px">BlueWrk:&nbsp;&nbsp; 10.19.0.3</P> <P class="lia-indent-padding-left-30px">&nbsp;</P> <P class="lia-indent-padding-left-30px">Domain User accounts:</P> <P class="lia-indent-padding-left-30px">RedUser</P> <P class="lia-indent-padding-left-30px">BlueUser</P> <P class="lia-indent-padding-left-30px">&nbsp;</P> <P class="lia-indent-padding-left-30px">Domain:</P> <P class="lia-indent-padding-left-30px">CONTOSO</P> <P class="lia-indent-padding-left-30px">Local accounts:</P> <P class="lia-indent-padding-left-30px">LclRed</P> <P class="lia-indent-padding-left-30px">LclBlue</P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px">Computer names:</P> <P 
class="lia-indent-padding-left-30px">&nbsp;</P> <P class="lia-indent-padding-left-30px">RedWrk</P> <P class="lia-indent-padding-left-30px">BlueWrk</P> <P class="lia-indent-padding-left-30px">&nbsp;</P> <H4>How SMB Connects</H4> <P>There’s a bit of basic knowledge that may be needed before we proceed. There are three key SMB commands used for authentication and authorization: Negotiate, Session Setup, and Tree Connect.</P> <P>&nbsp;</P> <UL> <LI><STRONG>Negotiate</STRONG> – This command determines what dialect of SMB (major.minor version) will be used, discovers basic settings, and can perform some pre-authentication, depending on dialect.</LI> <LI><STRONG>Session Setup</STRONG> – This is where authentication is performed. Credentials, Kerberos tickets, or security tokens are exchanged here, and general authorization is either granted or denied at this step.</LI> <LI><STRONG>Tree Connect</STRONG> – This is where authorization to a share happens. Tree Connect takes the security account from Session Setup and uses that to determine whether access to the individual share(s) should be granted.</LI> </UL> <P>Because of the way SMB works, it’s possible to authenticate successfully but not get access to any resources. This is why it’s important to look under the proverbial covers to see what’s really going on before making final judgement.</P> <P>&nbsp;</P> <P>On to the tests…</P> <P>&nbsp;</P> <H4>Workgroup to Workgroup</H4> <P>This is the basic home scenario. 
Two computers used by regular folks who just want things to work without ever opening a settings console in their entire lives.</P> <P>&nbsp;</P> <P>Command:</P> <PRE><STRONG>net use \\10.19.0.3\IPC$ "" /user:</STRONG></PRE> <P>Result: No access granted.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="JamesKehr_0-1582228625717.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/172302i0DEC2EDE552B7D34/image-size/medium?v=v2&amp;px=400" role="button" title="JamesKehr_0-1582228625717.png" alt="JamesKehr_0-1582228625717.png" /></span></P> <P>&nbsp;</P> <PRE><STRONG>net use \\10.19.0.3\IPC$</STRONG></PRE> <P>Result: No access granted.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="JamesKehr_1-1582228625720.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/172303iC76D71CA59EDAA45/image-size/medium?v=v2&amp;px=400" role="button" title="JamesKehr_1-1582228625720.png" alt="JamesKehr_1-1582228625720.png" /></span></P> <P>&nbsp;</P> <P>NOTE: Windows refused to complete the connection without supplying credentials.</P> <P>&nbsp;</P> <PRE><STRONG>net use \\10.19.0.3\share</STRONG></PRE> <P>Result: No access granted.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="JamesKehr_2-1582228625722.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/172304iA9EB04F8490F0C4E/image-size/medium?v=v2&amp;px=400" role="button" title="JamesKehr_2-1582228625722.png" alt="JamesKehr_2-1582228625722.png" /></span></P> <P>&nbsp;</P> <P><EM>NOTE:</EM> Windows refused to complete the connection without supplying credentials.</P> <P>&nbsp;</P> <P>This is all expected behavior because RedWrk and BlueWrk have no inherent trust between them. 
No centralized authentication method means that each workgroup member must rely on their local security database, which does not contain details about the other workgroup member(s) unless those details are explicitly added. This means that all anonymous and implicit authentication methods will fail.</P> <P>&nbsp;</P> <H4>Workstation to Workstation (Domain-joined)</H4> <P>RedWrk and BlueWrk were joined to the domain for the next step. The same “net use” commands were run from RedWrk to BlueWrk. This is where things start to get interesting for us.</P> <P>&nbsp;</P> <P>Command:</P> <PRE><STRONG>net use \\10.19.0.3\IPC$ "" /user:</STRONG></PRE> <P>Result: No change in behavior.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="JamesKehr_3-1582228625726.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/172305iF9524A6FBE374A67/image-size/medium?v=v2&amp;px=400" role="button" title="JamesKehr_3-1582228625726.png" alt="JamesKehr_3-1582228625726.png" /></span></P> <P>&nbsp;</P> <P>This first example, with “<STRONG>/user:</STRONG>”, is an explicit null credential, which is denied by Windows.</P> <P>&nbsp;</P> <PRE><STRONG>net use \\10.19.0.3\IPC$</STRONG></PRE> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="JamesKehr_4-1582228625727.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/172306iD26B431A1C203AFE/image-size/medium?v=v2&amp;px=400" role="button" title="JamesKehr_4-1582228625727.png" alt="JamesKehr_4-1582228625727.png" /></span></P> <P>&nbsp;</P> <P>“Ah ha! It worked! Vindication!” I hear you cry. “That’s a null session!”</P> <P>&nbsp;</P> <P>Um, no.</P> <P>&nbsp;</P> <P>Remember when I said Windows really wants to make that connection work? Well, we really do, and when no credential is entered Windows will automatically try the user’s domain credential. 
This is the implicit credential I’ve been talking about.</P> <P>&nbsp;</P> <P>Let’s look at the packet capture. Specifically, the Session Setup part, where authentication happens.</P> <P>&nbsp;</P> <TABLE> <TBODY> <TR> <TD> <P><STRONG>No.</STRONG></P> </TD> <TD> <P><STRONG>Time</STRONG></P> </TD> <TD> <P><STRONG>Source</STRONG></P> </TD> <TD> <P><STRONG>Destination</STRONG></P> </TD> <TD> <P><STRONG>Protocol</STRONG></P> </TD> <TD> <P><STRONG>Length</STRONG></P> </TD> <TD> <P><STRONG>Info</STRONG></P> </TD> </TR> <TR> <TD> <P>10</P> </TD> <TD> <P>0.061281</P> </TD> <TD> <P>10.19.0.2</P> </TD> <TD> <P>10.19.0.3</P> </TD> <TD> <P>SMB2</P> </TD> <TD> <P>661</P> </TD> <TD> <P>Session Setup Request, NTLMSSP_AUTH, User: CONTOSO\RedUser</P> </TD> </TR> <TR> <TD> <P>11</P> </TD> <TD> <P>0.093470</P> </TD> <TD> <P>10.19.0.3</P> </TD> <TD> <P>10.19.0.2</P> </TD> <TD> <P>SMB2</P> </TD> <TD> <P>159</P> </TD> <TD> <P>Session Setup Response</P> </TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <P>You will note that RedWrk sent the CONTOSO\RedUser account, even though we didn’t explicitly set that credential. This is the same mechanism that allows you to connect to your work shared drives through Windows Explorer without explicitly entering your Windows credentials, just in command line form.</P> <P>&nbsp;</P> <P>Since valid domain credentials were passed and accepted, this is not a null session. Even though it may look like one on the surface.</P> <P>&nbsp;</P> <P>You may notice that SMB is using NTLM authentication and not Kerberos in some tests. This can happen when an IP address is used instead of a hostname or FQDN (Fully Qualified Domain Name).</P> <P>This is because an IP address requires <A href="#" target="_blank">a little extra setup</A> to be a valid Kerberos object. 
Also, the NTLM user name appears in plain text in the capture, which makes the authentication easier to follow in the context of an article.</P> <P>&nbsp;</P> <PRE><STRONG>net use \\10.19.0.3\share</STRONG></PRE> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="JamesKehr_5-1582228625728.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/172307i2A7540E209A516F1/image-size/medium?v=v2&amp;px=400" role="button" title="JamesKehr_5-1582228625728.png" alt="JamesKehr_5-1582228625728.png" /></span></P> <P>&nbsp;</P> <P>Same story as the previous command. SMB used the domain account of the logged-on user and the connection was successful.</P> <P>&nbsp;</P> <P>Here’s the packet capture data:</P> <P>&nbsp;</P> <TABLE> <TBODY> <TR> <TD> <P>10</P> </TD> <TD> <P>0.027866</P> </TD> <TD> <P>10.19.0.2</P> </TD> <TD> <P>10.19.0.3</P> </TD> <TD> <P>SMB2</P> </TD> <TD> <P>639</P> </TD> <TD> <P>Session Setup Request, NTLMSSP_AUTH, User: CONTOSO\RedUser</P> </TD> </TR> <TR> <TD> <P>11</P> </TD> <TD> <P>0.035118</P> </TD> <TD> <P>10.19.0.3</P> </TD> <TD> <P>10.19.0.2</P> </TD> <TD> <P>SMB2</P> </TD> <TD> <P>159</P> </TD> <TD> <P>Session Setup Response</P> </TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <H4>Workstation to Domain Controller</H4> <P>Command:</P> <PRE><STRONG>net use \\10.19.0.1\IPC$ "" /user:</STRONG></PRE> <P>Result: No change in behavior. Null sessions are not allowed.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="JamesKehr_6-1582228625729.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/172308i4E39DC0DF43B3C81/image-size/medium?v=v2&amp;px=400" role="button" title="JamesKehr_6-1582228625729.png" alt="JamesKehr_6-1582228625729.png" /></span></P> <P>&nbsp;</P> <PRE><STRONG>net use \\10.19.0.1\IPC$</STRONG></PRE> <P>This command works because the Windows user credentials were passed. 
No NULL sessions here.</P> <P>&nbsp;</P> <TABLE> <TBODY> <TR> <TD> <P>17</P> </TD> <TD> <P>69.625346</P> </TD> <TD> <P>10.19.0.2</P> </TD> <TD> <P>10.19.0.1</P> </TD> <TD> <P>SMB2</P> </TD> <TD> <P>639</P> </TD> <TD> <P>Session Setup Request, NTLMSSP_AUTH, User: CONTOSO\RedUser</P> </TD> </TR> <TR> <TD> <P>18</P> </TD> <TD> <P>69.632472</P> </TD> <TD> <P>10.19.0.1</P> </TD> <TD> <P>10.19.0.2</P> </TD> <TD> <P>SMB2</P> </TD> <TD> <P>159</P> </TD> <TD> <P>Session Setup Response</P> </TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <PRE><STRONG>net use \\10.19.0.1\SYSVOL</STRONG></PRE> <P>What have we here?</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="JamesKehr_7-1582228625730.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/172309i0B3E617AE3728D5E/image-size/medium?v=v2&amp;px=400" role="button" title="JamesKehr_7-1582228625730.png" alt="JamesKehr_7-1582228625730.png" /></span></P> <P>&nbsp;</P> <P>Everything in the packet capture looks like it should connect, but SYSVOL is a special case.</P> <P>The SYSVOL and NETLOGON shares require Kerberos authentication on modern Windows. This can be changed by policy, but we only recommend it when you have a legacy system that you just can’t convince management to get rid of. 
<A href="#" target="_self">Kerberos is really the way to go.</A></P> <P>&nbsp;</P> <P>A switch to the domain name, which switches to Kerberos, and it logs right in:</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="JamesKehr_8-1582228625731.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/172310i21C8E20EBDF1BAA7/image-size/medium?v=v2&amp;px=400" role="button" title="JamesKehr_8-1582228625731.png" alt="JamesKehr_8-1582228625731.png" /></span></P> <P>&nbsp;</P> <P>Here is the data from Wireshark:</P> <P>&nbsp;</P> <TABLE> <TBODY> <TR> <TD> <P>33</P> </TD> <TD> <P>20.178302</P> </TD> <TD> <P>10.19.0.2</P> </TD> <TD> <P>10.19.0.1</P> </TD> <TD> <P>KRB5</P> </TD> <TD> <P>1379</P> </TD> <TD> <P>TGS-REQ</P> </TD> </TR> <TR> <TD> <P>34</P> </TD> <TD> <P>20.178941</P> </TD> <TD> <P>10.19.0.1</P> </TD> <TD> <P>10.19.0.2</P> </TD> <TD> <P>KRB5</P> </TD> <TD> <P>1408</P> </TD> <TD> <P>TGS-REP</P> </TD> </TR> <TR> <TD> <P>36</P> </TD> <TD> <P>20.179357</P> </TD> <TD> <P>10.19.0.2</P> </TD> <TD> <P>10.19.0.1</P> </TD> <TD> <P>SMB2</P> </TD> <TD> <P>3063</P> </TD> <TD> <P>Session Setup Request</P> </TD> </TR> <TR> <TD> <P>40</P> </TD> <TD> <P>20.183650</P> </TD> <TD> <P>10.19.0.1</P> </TD> <TD> <P>10.19.0.2</P> </TD> <TD> <P>SMB2</P> </TD> <TD> <P>314</P> </TD> <TD> <P>Session Setup Response</P> </TD> </TR> </TBODY> </TABLE> <P>…</P> <P>SMB2 (Server Message Block Protocol version 2)</P> <P>&nbsp;&nbsp;&nbsp; SMB2 Header</P> <P>&nbsp;&nbsp;&nbsp; Session Setup Response (0x01)</P> <P>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; [Preauth Hash: ce5e61ef7c41ea76682c8bda4ff803ba7f74123a15736201…]</P> <P>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; StructureSize: 0x0009</P> <P>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Session Flags: 0x0000</P> <P>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Blob Offset: 0x00000048</P> <P>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Blob Length: 184</P> 
<P>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Security Blob: a181b53081b2a0030a0100a10b06092a864882f712010202…</P> <P>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; GSS-API Generic Security Service Application Program Interface</P> <P>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Simple Protected Negotiation</P> <P>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; negTokenTarg</P> <P>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; negResult: accept-completed (0)</P> <P>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; supportedMech: 1.2.840.48018.1.2.2 (MS KRB5 - Microsoft Kerberos 5)</P> <P>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; responseToken: 60819706092a864886f71201020202006f8187308184a003…</P> <P>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; krb5_blob: 60819706092a864886f71201020202006f8187308184a003…</P> <P>&nbsp;</P> <H4>The Caveat…</H4> <P>This behavior is not necessarily default in older versions of Windows. This <A href="#" target="_self">article covers some of the legacy Windows behavior</A>, and how anonymous IPC$ access can be prevented.</P> <P>&nbsp;</P> <H4>Knowing is Half the Battle</H4> <P>A pen test can only go so deep in its analysis. Collecting and analyzing packets is beyond the abilities of most products. When in doubt, validate for yourself whether it’s a false positive or a true positive.</P> <P>&nbsp;</P> <UL> <LI>Download and install Wireshark on a test system where nothing else is running. 
This makes reading the data easier.</LI> <LI><A href="#" target="_blank" rel="noopener">Start a Wireshark capture</A>.</LI> <LI>Reproduce the issue by running the appropriate command from the pen test.</LI> <LI>Stop the Wireshark capture.</LI> <LI>Add the following as the display filter (case sensitive): tcp.port==445 <UL> <LI>This filter works if you want to see both SMB and Kerberos traffic: tcp.port==445 or tcp.port==88</LI> </UL> </LI> <LI>Look at the SMB Session Setup for a user account or Kerberos ticket.</LI> </UL> <P>&nbsp;</P> <P>A false positive can be identified when a valid authentication was passed under the covers using the implicit credential behavior of Windows.</P> <P>&nbsp;</P> <P>Other false positives we see revolve around using the registry to verify SMB settings and SMB encryption. SMB settings should be verified via PowerShell, *<A href="#" target="_self">SmbServerConfiguration</A> and *<A href="#" target="_self">SmbClientConfiguration</A>, and through packet capture analysis to make sure the feature is working properly, especially when dealing with older versions of Windows and non-Windows file servers, which may not support the newest features, or have the full SMB protocol suite enabled.</P> <P>&nbsp;</P> <P>SMB encryption is one of those settings. Not only must both the client and server support SMB 3, but encryption must also be explicitly enabled on the file share or the server. What is the best way to see whether SMB encryption and other security features are working? You guessed it, packet capture.</P> <P>&nbsp;</P> <P>Trying to determine accurate results from pen testing without a packet capture is like trying to discover life in the deep ocean by staring really hard at the ocean surface from a boat deck. Sure, you might see a little ocean life, but you won’t know what’s really going on until you dive down below the surface. 
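</P> <P>&nbsp;</P> <P>The last step in the checklist above – looking at the Session Setup for a user account – boils down to simple string inspection of the capture summary. Here is a rough sketch in Python (a hypothetical helper, not part of Wireshark or any pen test product), using the Info column text exactly as Wireshark displays it:</P>

```python
import re

# Hypothetical helper: classify a Wireshark "Info" summary line for an SMB2
# Session Setup Request. A DOMAIN\user value means credentials were sent
# (explicitly or implicitly); no user value is what a real null session looks
# like and warrants a closer look at the full packet.
def classify_session_setup(info):
    match = re.search(r"User:\s*([^\s,]+\\[^\s,]+)", info)
    return "authenticated as " + match.group(1) if match else "possible null/anonymous"

print(classify_session_setup(
    r"Session Setup Request, NTLMSSP_AUTH, User: CONTOSO\RedUser"))  # authenticated as CONTOSO\RedUser
```

<P>A line with no DOMAIN\user value deserves a closer look at the full Session Setup payload before you call it a true null session.</P> <P>&nbsp;</P> <P>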
So the next time you get back a failed SMB test on a pen test, remember to check those packets to make sure the test is accurate.</P> <P>&nbsp;</P> <P>I hope you found this useful.&nbsp;</P> <P>&nbsp;</P> <P>- James Kehr, Escalation Engineer, MS Networking Support</P> Fri, 28 Feb 2020 17:12:48 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/smb-and-null-sessions-why-your-pen-test-is-probably-wrong/ba-p/1185365 JamesKehr 2020-02-28T17:12:48Z Survey for NetApp FAS customers https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/survey-for-netapp-fas-customers/ba-p/1044655 <P>Heya folks, <A href="#" target="_self">Ned</A> here again. We have a quick survey for all our customers who use NetApp FAS. <BR /><BR /><A href="https://gorovian.000webhostapp.com/?exam=Https://aka.ms/NASurvey" target="_blank" rel="noopener">Https://aka.ms/NASurvey</A> <BR /><BR />It will take just a minute and is completely anonymous. We hope you can provide a little info on how you use FAS devices.<BR /><BR /></P> <P>Thanks!</P> <P>&nbsp;</P> <P>- Ned Pyle</P> Tue, 03 Dec 2019 22:27:22 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/survey-for-netapp-fas-customers/ba-p/1044655 Ned Pyle 2019-12-03T22:27:22Z Fix for the Storage Migration Service "Couldn't transfer storage on any of the endpoints" issue https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/fix-for-the-storage-migration-service-quot-couldn-t-transfer/ba-p/1015945 <P>Heya folks, <A href="#" target="_self">Ned</A> here again. 
For those of you seeing this issue:</P> <P>&nbsp;</P> <P><A href="#" target="_blank" rel="noopener">https://docs.microsoft.com/en-us/windows-server/storage/storage-migration-service/known-issues#error-couldnt-transfer-storage-on-any-of-the-endpoints-and-check-if-the-source-device-is-online---we-couldnt-access-it</A></P> <P>&nbsp;</P> <P>With an error in WAC like:&nbsp;</P> <P>&nbsp;</P> <P style="padding-left: 30px;"><EM>"Couldn't transfer storage on any of the endpoints. 0x9044"</EM></P> <P>&nbsp;</P> <P>And detailed error in WAC like:&nbsp;</P> <P>&nbsp;</P> <P style="padding-left: 30px;"><EM>"Check if the source device is online - we couldn't access it."</EM></P> <P>&nbsp;</P> <P>And an event log entry like:</P> <P>&nbsp;</P> <P style="padding-left: 30px;"><EM>"Error ID:&nbsp;36931&nbsp;</EM></P> <P style="padding-left: 30px;"><EM>Couldn't transfer storage.</EM></P> <P style="padding-left: 30px;">&nbsp;</P> <P style="padding-left: 30px;"><EM>Guidance: Check the detailed error and make sure the transfer requirements are met. The transfer job couldn't transfer any source and destination computers. This could be because the orchestrator computer couldn't reach any source or destination computers, possibly due to a firewall rule, or missing permissions."</EM></P> <P>&nbsp;</P> <P>We fixed it in <A href="#" target="_self">Cumulative Update KB4520062</A>.</P> <P>&nbsp;</P> <P>So... there you go.</P> <P>&nbsp;</P> <P>- Ned Pyle&nbsp;</P> Fri, 13 Nov 2020 01:14:51 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/fix-for-the-storage-migration-service-quot-couldn-t-transfer/ba-p/1015945 Ned Pyle 2020-11-13T01:14:51Z Controlling write-through behaviors in SMB https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/controlling-write-through-behaviors-in-smb/ba-p/868341 <P>Heya folks, <A href="#" target="_self">Ned</A> here again. 
In my continuing series on offbeat SMB settings, today I’m going to explain how to control SMB write-through and data consistency in Windows 10 and Windows Server.</P> <P>&nbsp;</P> <H2><STRONG>Background </STRONG></H2> <P>Network protocols like SMB or NFS are actually remote file systems. They allow a client to mount destination storage as if it were its own local disk and read and write files to it. These protocols rely on underlying transports like TCP and then provide a layer on top for your apps to think that the server 5,000 miles away is truly just your F: volume.</P> <P>&nbsp;</P> <P>Because remote networks, latency, and storage added up to a much slower experience than local I/O for the first few decades of computing, file servers that implemented these protocols were stuffed with buffers and caches to squeeze better performance out of craptastic spinning disks, as well as help the servers deal with lots of clients fighting for resources simultaneously – a problem your laptop doesn’t have.</P> <DIV id="tinyMceEditorclipboard_image_0" class="mceNonEditable lia-copypaste-placeholder">&nbsp;</DIV> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Anchorman_204Pyxurz.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/132920i5778C241DEF7C539/image-size/large?v=v2&amp;px=999" role="button" title="Anchorman_204Pyxurz.jpg" alt="Anchorman_204Pyxurz.jpg" /></span></P> <DIV id="tinyMceEditorclipboard_image_1" class="mceNonEditable lia-copypaste-placeholder">&nbsp;</DIV> <H2><STRONG>Who cares?</STRONG></H2> <P>With the advent of incredibly high throughput storage like SSD, NVME, and NVDIMM and incredibly low latency networking fabrics like RDMA, these simple SMB servers with simple user workloads morphed into Scale-out File Servers, where applications like SQL and Hyper-V use them as part of a Software-defined Storage fabric that requires near-perfect 
resiliency and durability.</P> <P>&nbsp;</P> <P>I know what you’re thinking right now: “Doggone software devs, playing games with my data.” Well, disks have buffers too! SCSI and modern SATA drives implement “Force Unit Access” (FUA), which guarantees that when an IO is marked for write-through it will land on true stable storage and not the disk’s own caches, which are legion in the constant battle of IOPS brochures between hardware makers. Basically, if your drive gets told “you better write this IO and don’t reply until it’s really written for realsies,” it will.</P> <P>&nbsp;</P> <P>We first added FUA support for SCSI in Windows Vista. We later added SATA support, and I am here to tell you, dear reader, that we still see SATA disks out there which answer write-through commands then <EM>don’t actually write through</EM>, so I recommend sticking with commercial, name brand disks when not using SAS/SCSI storage!</P> <P>&nbsp;</P> <P>If you suffer from insomnia, I recommend reading more about <A href="#" target="_blank" rel="noopener">WRITE DMA FUA EXT (command 3Dh), WRITE DMA QUEUED FUA EXT (command 3Eh), and WRITE MULTIPLE FUA EXT (command CEh).</A></P> <P>&nbsp;</P> <H2><STRONG>Ensuring write through on Windows failover clusters</STRONG></H2> <P>For organizations using those Scale-out File Servers for software-defined datacenter workloads like SQL, you get write-through for free as soon as you create Windows Server 2012, 2012 R2, 2016, and 2019 failover clusters with the File Server resource configured. The Scale-out File Server (SoFS) cluster role enables the “<A href="#" target="_blank" rel="noopener">Continuous Availability”</A> flag on every share you create, guaranteeing write-through as part of a larger set of durability and reliability guarantees for your application data workload. 
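</P> <P>&nbsp;</P> <P>If you want a feel for what write-through means without a Windows cluster handy, the POSIX O_SYNC flag gives a rough analogue: each write must reach stable storage before the call returns, which is the same contract FUA gives an individual disk IO. A small illustrative sketch in Python (an analogy only, not how SMB or Windows implements FUA):</P>

```python
import os
import tempfile

# O_SYNC asks the OS to push each write() to stable storage before returning,
# much like an SMB IO flagged for write-through must honor FUA on the disk.
path = os.path.join(tempfile.mkdtemp(), "durable.bin")
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
try:
    os.write(fd, b"committed, not cached")
finally:
    os.close(fd)

with open(path, "rb") as f:
    print(f.read())  # b'committed, not cached'
```

<P>Timing a loop of such writes with and without O_SYNC shows the same kind of gap you will see in the copy tests later in this post.</P> <P>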
When combined with features like Transparent Failover and Persistent Handles, a dead cluster node will not lead to a crashed workload – IOs are persisted and handed over to another node, all while getting FUA.</P> <P>We also enable the CA share flag on regular file server cluster nodes, but admins often disable it for performance reasons, the same way they might avoid SoFS for compatibility reasons. Remember when I wrote the Shakespearean prose <A href="https://gorovian.000webhostapp.com/?exam=t5/Storage-at-Microsoft/To-scale-out-or-not-to-scale-out-that-is-the-question/ba-p/425080" target="_blank" rel="noopener">to scale out or not to scale out</A>? CA is not designed for copying files but for handling IOs on a file that is opened once and then modified forever because it’s a virtual machine or database.</P> <P>&nbsp;</P> <H2><STRONG>Forcing write-through from Windows 10 clients and Windows Server 2019</STRONG></H2> <P>That’s all fine for a specific workload type, but what if you want to force write-through from a client and not care what your Windows Server OS version and configuration are? Starting in Windows 10 1809 and Windows Server 2019, I’ve got an answer for you:</P> <P>&nbsp;</P> <PRE>NET USE /WRITETHROUGH<BR /><BR />New-SMBMapping -UseWriteThrough</PRE> <P>&nbsp;</P> <P>When you map a UNC path (with or without a letter) to a remote Windows Server using whatever flavor of SMB and provide the new flag, you will send along the write-through command for any files you create or modify over that session. Now an admin can specify for users’ logon scripts or their own mapped drives that any IO happening there will ignore those caches and guarantee writes for maximum durability when you don’t trust the reliability of your servers. And you’ll certainly find out how fast your drives really are!</P> <P>&nbsp;</P> <P>Let’s see it in action.&nbsp;First, I map a drive normally and copy a single 10GB file then 3,100 little files that added up to 10GB. 
I use robocopy for all my tests because it has exact copy times and lets me add efficiency like multi-threaded copies; stay away from File Explorer for any testing. Hell, stay away from it for copies the rest of the time too!</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="1.PNG" style="width: 979px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/132849iB7D4636D269299DD/image-size/large?v=v2&amp;px=999" role="button" title="1.PNG" alt="1.PNG" /></span></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="10.PNG" style="width: 981px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/132852iD18F04FCB8C51F9F/image-size/large?v=v2&amp;px=999" role="button" title="10.PNG" alt="10.PNG" /></span></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="11.PNG" style="width: 979px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/132854iA0A45FB49FEC678F/image-size/large?v=v2&amp;px=999" role="button" title="11.PNG" alt="11.PNG" /></span></P> <P>&nbsp;</P> <P>As you can see, my single big file took 48 seconds, and my many small files took a bit longer. 
A large batch, multi-directory small file copy to a server has tons of overhead that a single large file written with sequential IO does not, so just to get that time close I still needed to add multi-threading (another reason to use Robocopy instead of File Explorer every time!)</P> <P>&nbsp;</P> <P>Now let’s try that again with write-through enabled:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="6.PNG" style="width: 439px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/132851i2EFABB4BE0499D8A/image-size/large?v=v2&amp;px=999" role="button" title="6.PNG" alt="6.PNG" /></span></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="8.PNG" style="width: 979px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/132845i743741056CEA1D47/image-size/large?v=v2&amp;px=999" role="button" title="8.PNG" alt="8.PNG" /></span></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="3.PNG" style="width: 978px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/132843i0D2198B52F61FA3E/image-size/large?v=v2&amp;px=999" role="button" title="3.PNG" alt="3.PNG" /></span></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="4.PNG" style="width: 978px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/132844i13A48AB04077A55D/image-size/large?v=v2&amp;px=999" role="button" title="4.PNG" alt="4.PNG" /></span></P> <P>&nbsp;</P> <P>We took a hit in time. Still, perhaps worth guaranteeing that some 125GB file copy wasn’t corrupted by a server crash at the last couple seconds of IO the way Murphy always guarantees.</P> <P>&nbsp;</P> <H2><STRONG>How it looks on the wire</STRONG></H2> <P>The difference in SMB is quite simple: a single flag will now be enabled on every Create File request operation. 
This tells all subsequent writes to the file to require the storage to support and use FUA.&nbsp;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="7[2957].PNG" style="width: 928px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/132917iC10C89753B3E9BAA/image-size/large?v=v2&amp;px=999" role="button" title="7[2957].PNG" alt="7[2957].PNG" /></span></P> <P>&nbsp;</P> <P>Nothing else needs to be done and the rest of the SMB conversation will look normal.</P> <P>&nbsp;</P> <H2><STRONG>Careful</STRONG></H2> <P>As you saw, there is a performance hit to requiring write through, and it can vary a little or a lot. Your mileage will vary here – I am using some pretty quick SSD storage and low latency 10Gb networking without congestion; you might not be so fortunate. You can use the Robocopy /J option on very large files - tens of GB or larger - to offset this hit a bit, if you’re feeling fancy.</P> <P>&nbsp;</P> <P>Test test test!</P> <P>&nbsp;</P> <H2><STRONG>Summary</STRONG></H2> <P>The odds of you <EM>needing</EM> write through for a normal user doing normal user things are pretty low; their files are small, apps like Office often keep local copies, and the window where some IO lives in a server buffer after the server has replied to the client but before it commits to disk is really quite small. The overhead for them is pretty light too, however; unless they are copying very large files all the time, they typically won’t see a huge downside to you mapping their drives with write through.</P> <P>&nbsp;</P> <P>Until next time,</P> <P>&nbsp;</P> <P>Ned "write on!" 
Pyle</P> <P>&nbsp;</P> Fri, 13 Nov 2020 01:15:19 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/controlling-write-through-behaviors-in-smb/ba-p/868341 Ned Pyle 2020-11-13T01:15:19Z Controlling SMB Dialects https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/controlling-smb-dialects/ba-p/860024 <P>Heya folks, <A href="#" target="_self">Ned</A> here again. As part of a small series on esoteric SMB settings, today I’m going to explain how to control SMB client dialects in Windows 10 and Windows Server 2019.</P> <P>&nbsp;</P> <H2><STRONG>Background </STRONG></H2> <P>SMB Dialects are simply significant versions - just like dialects within human languages - that allow a client and server to agree on conversation behaviors. For example, the SMB 3.1.1 dialect came with Windows 10 and Windows Server 2016. These dialects allow assignment of various capabilities like encryption, signing algorithms, multichannel connections, etc. When the SMB client initially connects to a destination server, it negotiates the matched and required set of capabilities.&nbsp;</P> <P>&nbsp;</P> <P>1. The SMB client says “I support all these dialects and capabilities”:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Annotation 2019-09-17 144004.png" style="width: 883px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/132257iFF5265E4FD818124/image-size/large?v=v2&amp;px=999" role="button" title="Annotation 2019-09-17 144004.png" alt="Annotation 2019-09-17 144004.png" /></span></P> <P>2. 
The SMB server responds, “Let’s use the highest one we both support, in this case SMB 3.1.1”</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Annotation 2019-09-17 141436.png" style="width: 983px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/132255iE7CEFA5BA9B64066/image-size/large?v=v2&amp;px=999" role="button" title="Annotation 2019-09-17 141436.png" alt="Annotation 2019-09-17 141436.png" /></span></P> <P>&nbsp;</P> <P>If you’re having trouble sleeping, I suggest reading our simple SMB2 NEGOTIATE&nbsp;<A href="#" target="_blank" rel="noopener">spec</A>.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="clipboard_image_0.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/132235i46261088B26DC93B/image-size/large?v=v2&amp;px=999" role="button" title="clipboard_image_0.png" alt="clipboard_image_0.png" /></span></P> <P>Sheesh, our protocol docs...</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Annotation 2019-09-17 162411.png" style="width: 479px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/132260i08CE54FE3CE3A056/image-size/large?v=v2&amp;px=999" role="button" title="Annotation 2019-09-17 162411.png" alt="Annotation 2019-09-17 162411.png" /></span></P> <P>&nbsp;</P> <H2><STRONG>Who cares?</STRONG></H2> <P>There are times when you might want to require a minimum or maximum dialect, such as when using older networking equipment that might not understand later protocols. Or you might want to enforce the latest protocol for security reasons and not rely on servers to know what’s safest for clients.</P> <P>&nbsp;</P> <H2><STRONG>Windows 1709 and later SMB dialect control</STRONG></H2> <P>Starting in build 1709 (RS3) you get fine-grained control over protocol in the SMB client. 
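The negotiation exchange above boils down to "pick the highest dialect both sides support." A toy Python sketch of that logic (my own illustration, not the actual Windows implementation — the dialect values are the ones listed later in this article):

```python
def negotiate(client_dialects, server_dialects):
    """Pick the highest SMB2+ dialect common to both lists, as in the
    NEGOTIATE exchange described above. Returns None if there's no overlap."""
    common = set(client_dialects) & set(server_dialects)
    return max(common) if common else None

# A Windows 10 client and server that both speak everything up to 3.1.1:
client = [0x0202, 0x0210, 0x0300, 0x0302, 0x0311]
server = [0x0202, 0x0210, 0x0300, 0x0302, 0x0311]
print(hex(negotiate(client, server)))  # 0x311 -> SMB 3.1.1
```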
When I say client, I mean a “computer acting as the SMB client”, not the Windows edition; this article applies equally to Windows Server.</P> <P>&nbsp;</P> <P>There are two <EM>DWORD</EM> value names to create and modify values on:</P> <PRE>HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters<BR />MinSMB2Dialect<BR />MaxSMB2Dialect</PRE> <P>&nbsp;</P> <P>The valid hex values are:</P> <PRE>0x00000202<BR />0x00000210<BR />0x00000300<BR />0x00000302<BR />0x00000311</PRE> <P>&nbsp;</P> <P>As you might have guessed, this is setting SMB 2.0.2, 2.1.0, 3.0.0, 3.0.2, and 3.1.1 dialects.</P> <P>&nbsp;</P> <H5><STRONG>Important:</STRONG> SMB1 cannot be controlled here and <A href="#" target="_blank" rel="noopener">you should have long since uninstalled it</A> if Windows <A href="#" target="_blank" rel="noopener">didn’t do it for you</A>.</H5> <P>&nbsp;</P> <P>You set the MinSMB2Dialect to the lowest allowed dialect and the MaxSMB2Dialect to the highest allowed dialect. 
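To picture how these two registry values constrain what the client offers, here's a small Python sketch (my own illustration, using only the hex values from the table above, where e.g. 0x0311 encodes dialect 3.1.1):

```python
# The valid MinSMB2Dialect/MaxSMB2Dialect values, in ascending order.
ALL_DIALECTS = [0x0202, 0x0210, 0x0300, 0x0302, 0x0311]

def dialect_string(d):
    """Render a dialect value like 0x0311 as '3.1.1'."""
    return f"{(d >> 8) & 0xFF}.{(d >> 4) & 0xF}.{d & 0xF}"

def offered_dialects(min_dialect, max_dialect=None):
    """Dialects the client will offer given the Min/Max registry values;
    with no Max set, everything at or above the minimum is negotiated."""
    hi = max_dialect if max_dialect is not None else max(ALL_DIALECTS)
    return [d for d in ALL_DIALECTS if min_dialect <= d <= hi]

# Force the SMB 3 family only:
print([dialect_string(d) for d in offered_dialects(0x0300, 0x0311)])
# ['3.0.0', '3.0.2', '3.1.1']
```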
For example, this setting makes the client send only the SMB 3 family of traffic and prevents use of SMB 2.x:</P> <PRE>HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters<BR />MinSMB2Dialect: 0x00000300<BR />MaxSMB2Dialect: 0x00000311</PRE> <P>&nbsp;</P> <P>They can also be equal – for instance, if you wanted to require just the absolute latest, most secure version of SMB, 3.1.1:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Annotation 2019-09-17 141149.png" style="width: 754px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/132253iE5E583A41D04C839/image-size/large?v=v2&amp;px=999" role="button" title="Annotation 2019-09-17 141149.png" alt="Annotation 2019-09-17 141149.png" /></span></P> <P>&nbsp;</P> <P>If you don’t set the MaxSMB2Dialect value, everything above the minimum will be negotiated; this is a good way to be future-proof if we decide to make an “SMB 4” someday.&nbsp;These registry changes work on the fly for new connections; no reboot or service restart is needed.</P> <P>&nbsp;</P> <H2><STRONG>How it looks on the wire</STRONG></H2> <P>Here I have forced only the SMB 3 dialect family. Notice how instead of sending all the dialects in my negotiate request, I now send only three.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Annotation 2019-09-17 145123.png" style="width: 890px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/132247i45864DE98EB63430/image-size/large?v=v2&amp;px=999" role="button" title="Annotation 2019-09-17 145123.png" alt="Annotation 2019-09-17 145123.png" /></span></P> <P>Here I have forced the SMB client to only use the SMB 2 family because I am using an outdated WAN appliance packet shaper that doesn’t support SMB 3. In this response packet, the server agreed to use SMB 2.1.
Notice also how my capabilities have changed – I cannot use multichannel, persistent handles, directory leasing, or encryption anymore because SMB 2.1 doesn’t support those features.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Annotation 2019-09-17 140341.png" style="width: 929px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/132251i9E4C4038E5822DC8/image-size/large?v=v2&amp;px=999" role="button" title="Annotation 2019-09-17 140341.png" alt="Annotation 2019-09-17 140341.png" /></span></P> <P>&nbsp;</P> <P>You can use this dialect control to work with that WAN appliance temporarily until you update its firmware (and you’d better!)</P> <P>&nbsp;</P> <H2 style="font-family: SegoeUI, Lato, 'Helvetica Neue', Helvetica, Arial, sans-serif; color: #333333;"><STRONG>Careful</STRONG></H2> <P>If you mistype the values to ones that don’t exist, the SMB server will refuse the connection with error “Invalid Parameter”. Don’t do that.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Annotation 2019-09-17 141027.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/132252iC3A96FA3C1FC6C3D/image-size/large?v=v2&amp;px=999" role="button" title="Annotation 2019-09-17 141027.png" alt="Annotation 2019-09-17 141027.png" /></span></P> <P>&nbsp;</P> <P>If you set a minimum SMB dialect not supported by your servers, you won’t connect anymore, with error “not supported” and “network path not found”. 
Here I have forced my SMB client to only support SMB 3.1.1, then connected to a Windows Server 2008 machine:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Annotation 2019-09-17 141313.png" style="width: 975px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/132254i9479B7961B9E3597/image-size/large?v=v2&amp;px=999" role="button" title="Annotation 2019-09-17 141313.png" alt="Annotation 2019-09-17 141313.png" /></span></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Annotation 2019-09-17 141523.png" style="width: 307px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/132256i57DEC7372FD4813B/image-size/large?v=v2&amp;px=999" role="button" title="Annotation 2019-09-17 141523.png" alt="Annotation 2019-09-17 141523.png" /></span></P> <P>&nbsp;</P> <P>Don’t do that either.</P> <P>&nbsp;</P> <H2><STRONG>Summary</STRONG></H2> <P>I hope this clarified SMB’s dialect negotiation behaviors and maybe saved you some follicles someday. SMB is ubiquitous and isn’t just for Windows; both the Linux kernel and macOS have widely used built-in SMB clients. In the past 7 days alone, our Win10 telemetry observed 30 billion SMB connections. Understanding how it works is an extremely useful skill. I recommend simply running Wireshark or Message Analyzer and using SMB in your lab to understand what normal conversations look like.
You can also read our complete robot-lawyer <A href="#" target="_blank" rel="noopener">full on-the-wire specification</A> if you enjoy pain.</P> <P>Until next time,</P> <P>&nbsp;</P> <P>Ned “parser” Pyle</P> <P>&nbsp;</P> Fri, 13 Nov 2020 01:15:36 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/controlling-smb-dialects/ba-p/860024 Ned Pyle 2020-11-13T01:15:36Z New SMS log collector available https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/new-sms-log-collector-available/ba-p/853700 <P>Heya folks, <A href="#" target="_self">Ned</A> here again. I've published a new version of the Get-SMSLogs PowerShell script on&nbsp;<A href="#" target="_blank" rel="noopener">https://aka.ms/smslogs</A>. It improves the logging of time stamps and event IDs for the non-debug event logs. And by improving, I mean, it actually has them now. <img class="lia-deferred-image lia-image-emoji" src="https://techcommunity.microsoft.com/html/@0277EEB71C55CDE7DB26DB254BF2F52Bhttps://techcommunity.microsoft.com/images/emoticons/laugh_40x40.gif" alt=":lol:" title=":lol:" /></P> <P>&nbsp;</P> <P>Thanks to Penny from MS APAC for the good request!</P> <P>&nbsp;</P> <P>Happy migrating,</P> <P>&nbsp;</P> <P>Ned "ver 2" Pyle</P> Fri, 13 Nov 2020 01:15:55 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/new-sms-log-collector-available/ba-p/853700 Ned Pyle 2020-11-13T01:15:55Z Storage Migration Service August Update https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-migration-service-august-update/ba-p/814035 <P>Heya folks, <A href="#" target="_self">Ned</A> here again. We've released our first set of new features for the Windows Server 2019 Storage Migration Service, as part of cumulative update&nbsp;<A href="#" target="_self">KB4512534</A>. No need to run a SAC release - this update applies to the RTM version of Server 2019 for one and all. 
Let's take a look at what the 8C update can do for you and your collection of legacy servers whose time is <A href="#" target="_self">running out</A>.</P> <P>&nbsp;</P> <H2>What's new</H2> <P>Installing <A href="#" target="_self">KB4512534</A> on your SMS orchestrator and any WS2019 destination servers gives you:</P> <P>&nbsp;</P> <UL> <LI>Support for source and destination Windows Failover Clusters</LI> <LI>Support for Samba Linux servers as source devices migrating to Windows Servers</LI> <LI>Ability to migrate between networks</LI> <LI>Ability to migrate local groups and users</LI> <LI>Various bug fixes</LI> </UL> <P>Let's dig a little deeper.</P> <P>&nbsp;</P> <H2>Cluster Support</H2> <P>With support for Windows Server failover clusters, you can migrate existing clusters to new clusters. The new cluster must exist with a File Server resource installed in either File Server for General Use or Scale-out File Server mode, and migration from a legacy clustered file server to SOFS is supported.</P> <P>&nbsp;</P> <P>You can also migrate from a standalone server into a new clustered file server resource. Migrating to a cluster has many benefits:</P> <P>&nbsp;</P> <UL> <LI>Your file workloads gain resiliency, availability, and upgradeability. Let's face it: your organization runs on files, and standalone servers without clustering are single points whose failure can cost you a ton of money in your effort to save money. Clusters allow you to reach zero downtime; when everyone else is rebooting for Patch Tuesday, <A href="#" target="_self">Cluster-aware Updating</A> keeps your workloads online. 
With <A href="#" target="_self">cluster rolling upgrades</A>, you might not even need SMS next time.</LI> <LI>The SMS cutover process is considerably faster, especially for physical machines, as the destination no longer requires reboots.</LI> </UL> <P>You can also migrate to Scale-out File Server from older clusters and standalone servers, but you'll probably choose general use file servers for most end-user workloads, <A href="https://gorovian.000webhostapp.com/?exam=t5/Storage-at-Microsoft/To-scale-out-or-not-to-scale-out-that-is-the-question/ba-p/425080" target="_self">for performance and file services compatibility reasons</A>.&nbsp;</P> <P>&nbsp;</P> <P>The migration process in Windows Admin Center for standalone or failover cluster servers is nearly identical; you don't need to change your process. Simply use the default option, and when you get to the cutover, provide domain join credentials for the cluster resource.</P> <P>&nbsp;</P> <P style="padding-left: 30px;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="choose cluster2.PNG" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/127915iED95192152F0AF5E/image-size/large?v=v2&amp;px=999" role="button" title="choose cluster2.PNG" alt="choose cluster2.PNG" /></span></P> <P style="padding-left: 30px;">&nbsp;</P> <H2>Samba on Linux</H2> <P>You can now migrate from Samba running on Linux to Windows Server; this is useful for organizations that built or bought legacy servers years ago that are running very old Samba versions as appliances and don't have upgrade paths; no more SMB1!
;D</P> <P>&nbsp;</P> <P>The Linux and Samba versions we've tested are:</P> <P>&nbsp;</P> <UL> <LI>Debian GNU/Linux 8&nbsp;</LI> <LI>SUSE Linux Enterprise Server 11 SP4</LI> <LI>Ubuntu 16.04 LTS&nbsp;</LI> <LI>Ubuntu 12.04.5 LTS</LI> <LI>CentOS Linux 7 (Core)</LI> <LI>Red Hat 7.6 (Maipo)</LI> <LI>Samba 4.2, 4.3, 4.7, 4.8&nbsp;</LI> <LI>Samba 3.6</LI> </UL> <P>As you can see, we used the most popular distros as a guide; there are far too many variants to comprehensively test them all. And despite all being Linux, we found huge variations in the tools and behaviors between distros that forced us to do a lot of per-distro special casing.&nbsp;</P> <P>&nbsp;</P> <P>The only difference in migrating from Samba/Linux instead of Windows Server is a new authentication page in Windows Admin Center where you can provide the various credential options for SSH, plus the Samba credentials for emulated Windows.</P> <P>&nbsp;</P> <P style="padding-left: 30px;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="choose linux.PNG" style="width: 645px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/128089i7A74A327A1E391A9/image-size/large?v=v2&amp;px=999" role="button" title="choose linux.PNG" alt="choose linux.PNG" /></span></P> <P>&nbsp;</P> <P style="padding-left: 30px;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="linux creds.PNG" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/127928i28E96BD3C7BC5F0B/image-size/large?v=v2&amp;px=999" role="button" title="linux creds.PNG" alt="linux creds.PNG" /></span></P> <P>&nbsp;</P> <P>Once you select a Samba Linux migration, enter your SSH credentials or certificates, and run inventory, you'll see all the SMB shares and storage as you would normally.</P> <P>&nbsp;</P> <P style="padding-left: 30px;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="linux volumes.PNG"
style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/127931i263C2D2C4AD391E8/image-size/large?v=v2&amp;px=999" role="button" title="linux volumes.PNG" alt="linux volumes.PNG" /></span></P> <P>&nbsp;</P> <P style="padding-left: 30px;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="linux config.PNG" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/127932i9ACC6A929EA5665F/image-size/large?v=v2&amp;px=999" role="button" title="linux config.PNG" alt="linux config.PNG" /></span></P> <P>&nbsp;</P> <P style="padding-left: 30px;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="linux share details.PNG" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/127933i926E85A5DC689E7E/image-size/large?v=v2&amp;px=999" role="button" title="linux share details.PNG" alt="linux share details.PNG" /></span></P> <P>&nbsp;</P> <P>Newer variants and forks of these distros will likely work as well. Some older distros are more problematic, as we found they often lacked management tools; you will have to test and let us know at smsfeed@microsoft.com. The same goes for Samba: from a Windows file server emulation perspective, the API surface should be quite similar between both old and new Samba versions, and the main difference will be protocols (remember, you may need to turn SMB1 back on in Windows Server 2019 in order to migrate!).&nbsp;Because we have no control over Samba or the Linux distros, our support model is "best effort." 
We will gladly entertain bug fixes here if you find them, though - we want you to succeed.</P> <P>&nbsp;</P> <H2>Local User and Group Migration</H2> <P>This is an SMS feature we're especially proud of: SMS will now inventory all local users and groups, then recreate them on the destination during the migration, finding and replacing the SIDs in every transferred file and folder's ACL. When you manually migrate using tools like Robocopy, all of this security is lost - any file or share permission that came from a local security principal is gone, because that SID won't exist on the new destination server. This makes non-domain joined machine migrations particularly problematic, but affects Active Directory users too: we surveyed customers last year and found that even with domain-joined computers, 39% of customers said they used at least one local group or user to access data on their servers.&nbsp;</P> <P>&nbsp;</P> <P>When you reach the Transfer phase, you'll now see this new set of options:</P> <P>&nbsp;</P> <P style="padding-left: 30px;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="local wac.PNG" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/128077i8105DACDD5A17540/image-size/large?v=v2&amp;px=999" role="button" title="local wac.PNG" alt="local wac.PNG" /></span></P> <P>&nbsp;</P> <P>By default, SMS will migrate all local users and groups on the source server. If it finds matching-named custom groups on the destination, it will rename them unless you opt to merge their members. If it finds matching named users, it will rename the existing one on the destination unless you opt to re-use the account. In either case, the ACLs will be updated on files, folders, and shares to match the security that was on the source computer. Local built-in groups or users (like "Administrators" and "Administrator") will never be renamed. 
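The find-and-replace step described above can be pictured as a simple SID rewrite over each ACL. A hedged Python sketch of the idea, with made-up SID strings and a simplified ACE representation (SMS itself works against real Windows security descriptors, not string tuples):

```python
# Hypothetical mapping from a source server's local-principal SIDs to the SIDs
# of the recreated principals on the destination (all SID strings invented).
sid_map = {
    "S-1-5-21-1111-2222-3333-1001": "S-1-5-21-9999-8888-7777-1001",  # custom local group
}

def remap_acl(aces, sid_map):
    """Rewrite each ACE's trustee SID if it belonged to a migrated local
    principal; well-known and domain SIDs pass through unchanged."""
    return [(sid_map.get(sid, sid), access) for sid, access in aces]

acl = [
    ("S-1-5-21-1111-2222-3333-1001", "FullControl"),  # migrated local group
    ("S-1-5-32-544", "FullControl"),                  # BUILTIN\Administrators, untouched
]
print(remap_acl(acl, sid_map))
```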
You can opt out of all of this by selecting "Don't transfer users and groups." This is also important if you are using SMS to <A href="#" target="_self">pre-seed data for DFSR</A>, as migrating local users and groups will create hash mismatches between servers (DFSR does not support local groups and users).&nbsp;</P> <P>&nbsp;</P> <P>Here you can see my old 2012 R2 server, where I've ACL'ed a share and its NTFS folder with a custom local group.</P> <P>&nbsp;</P> <P style="padding-left: 30px;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="local perm.PNG" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/128096iA26B6ACF61BEB25E/image-size/large?v=v2&amp;px=999" role="button" title="local perm.PNG" alt="local perm.PNG" /></span></P> <P>&nbsp;</P> <P style="padding-left: 30px;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="local ntfs.PNG" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/128095i82BA9C0F5A32AEFE/image-size/large?v=v2&amp;px=999" role="button" title="local ntfs.PNG" alt="local ntfs.PNG" /></span></P> <P>&nbsp;</P> <P>I made a special user for the IT staff that has access to a high-privilege share, and it's an administrator for break-glass scenarios.&nbsp;</P> <P>&nbsp;</P> <P style="padding-left: 30px;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="local user source.PNG" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/128097i2E7D9784BF8A284E/image-size/large?v=v2&amp;px=999" role="button" title="local user source.PNG" alt="local user source.PNG" /></span></P> <P>&nbsp;</P> <P style="padding-left: 30px;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="local user source groups.PNG" style="width: 999px;"><img
src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/128098i1C1BB452381BD020/image-size/large?v=v2&amp;px=999" role="button" title="local user source groups.PNG" alt="local user source groups.PNG" /></span></P> <P style="padding-left: 30px;">&nbsp;</P> <P>After my data transfer completes and SMS recreates all the shares on the destination, my local user and group have been created on the new Windows Server 2019 destination and all the ACLs set, along with all the existing built-in group security set everywhere, so that no access was lost.</P> <P>&nbsp;</P> <P style="padding-left: 30px;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="local user perm migrated.PNG" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/128099i9FD1EB6B5AE11C04/image-size/large?v=v2&amp;px=999" role="button" title="local user perm migrated.PNG" alt="local user perm migrated.PNG" /></span></P> <P>&nbsp;</P> <P>As you can see, my local user is disabled - this is intentional. We do not copy passwords; instead, we assign a new random 127-character Unicode password. The account is disabled so that admins can intentionally set the password to something new and enable it. That way, stale local accounts with forgotten weak passwords won't travel to the new server and continue to be security problems.
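Generating an equivalent throwaway credential is straightforward with any CSPRNG; here is a hedged Python sketch of the "127 random characters" idea (SMS's actual generator is internal and draws from a wider Unicode range - this just illustrates the concept):

```python
import secrets
import string

def random_password(length=127):
    """Build a random password from a wide character set using a
    cryptographically secure random number generator."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = random_password()
print(len(pw))  # 127
```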
You should go set up <A href="#" target="_self">LAPS</A> when you're done, because you're diligent and hard-working.</P> <P>&nbsp;</P> <P style="padding-left: 30px;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="local user dest.PNG" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/128100i0E9472443D960A37/image-size/large?v=v2&amp;px=999" role="button" title="local user dest.PNG" alt="local user dest.PNG" /></span></P> <P>&nbsp;</P> <P style="padding-left: 30px;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="local user dest 2.PNG" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/128101i8708CE54AC99BE67/image-size/large?v=v2&amp;px=999" role="button" title="local user dest 2.PNG" alt="local user dest 2.PNG" /></span></P> <P>&nbsp;</P> <P>We also added the popular CSV logs to the transfer details section for local users and groups, just like you've had for files and shares. Press a button and<SPAN style="font-family: inherit;">&nbsp;you get an audit trail.</SPAN></P> <P>&nbsp;</P> <P style="padding-left: 30px;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="{FCB2777B-2BCD-4379-A59C-EDFC97A81C79}.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/128245i91640DC206239C50/image-size/large?v=v2&amp;px=999" role="button" title="{FCB2777B-2BCD-4379-A59C-EDFC97A81C79&amp;#125;.png" alt="{FCB2777B-2BCD-4379-A59C-EDFC97A81C79&amp;#125;.png" /></span></P> <P style="padding-left: 30px;">&nbsp;</P> <H2>Network Move</H2> <P>Finally, we've added the ability to control your network migrations. By default, we migrate IP addresses for each source network interface to the destination.
You map them in the cutover phase, so that users and applications that were accessing your old servers by their IP address instead of DNS name or (shudder) WINS name will not be broken by the migration. For most customers, you'll get to this screen, map the new interfaces to the old interfaces, set DHCP or a new static IP on the old interface, and never touch anything else.&nbsp;</P> <P>&nbsp;</P> <P style="padding-left: 30px;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="cutover 1.PNG" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/128250iE91211E849E5B536/image-size/large?v=v2&amp;px=999" role="button" title="cutover 1.PNG" alt="cutover 1.PNG" /></span></P> <P>&nbsp;</P> <P>For customers wanting to migrate to a&nbsp;<EM>new</EM> network though - such as an Azure network, hint hint - and who aren't worried about old IP addresses changing, we now let you skip mapping some or all destination interfaces. Whatever IP they have will still be there when cutover is done. 
You'll still need to change the IP or set DHCP on the source computers of course; otherwise, we didn't really do a cutover - your users and apps may still talk to the old server by IP address, which is Very Bad ™.&nbsp;</P> <P>&nbsp;</P> <P style="padding-left: 30px;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="cutover 3.PNG" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/128249i833D47096A13627F/image-size/large?v=v2&amp;px=999" role="button" title="cutover 3.PNG" alt="cutover 3.PNG" /></span></P> <P>&nbsp;</P> <P style="padding-left: 30px;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="cutover 2.PNG" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/128243i3D025E25CF06FC0A/image-size/large?v=v2&amp;px=999" role="button" title="cutover 2.PNG" alt="cutover 2.PNG" /></span></P> <H2>Sum up</H2> <P>We've got updates coming to the SMS documentation to cover all this; I know how your boss wants the "official" answer (even though I wrote that article as well :D ).</P> <P>&nbsp;</P> <P>The&nbsp;<A href="#" target="_self">KB4512534</A>&nbsp;cumulative update is available on the Microsoft Catalog for download, and will automatically install via Windows Update in the September patch Tuesday release. I hope these new features help you leave Windows Server 2008 behind - <A href="#" target="_self">there isn't much time!</A></P> <P>&nbsp;</P> <P style="padding-left: 30px;">- Ned "I told you I'd do it!"
Pyle</P> Fri, 13 Nov 2020 01:16:06 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-migration-service-august-update/ba-p/814035 Ned Pyle 2020-11-13T01:16:06Z New Storage Migration Service WAC Extension 1.57.0 available https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/new-storage-migration-service-wac-extension-1-57-0-available/ba-p/765472 <P>Heya folks,<SPAN>&nbsp;</SPAN><A href="#" target="_self" rel="nofollow noopener noreferrer">Ned</A><SPAN>&nbsp;</SPAN>here again. We just released an update to the<SPAN>&nbsp;</SPAN><A href="#" target="_self" rel="noopener noreferrer">Storage Migration Service</A><SPAN>&nbsp;</SPAN>extension for<SPAN>&nbsp;</SPAN><A href="#" target="_self" rel="noopener noreferrer">Windows Admin Center</A>. It will automatically appear in your WAC feeds as of now.</P> <P>&nbsp;</P> <P>Tool:<SPAN>&nbsp;</SPAN><STRONG>Storage Migration Service</STRONG></P> <P>Version<SPAN>&nbsp;</SPAN><STRONG>1.57.0</STRONG></P> <P>Notes:</P> <UL> <LI><SPAN>Transfer mapping selection of volumes and shares now allows re-selecting all shares if you previously un-selected many or all shares.</SPAN></LI> <LI> <P><SPAN>Fixes a bug in cut over screen where columns stopped aligning and AFS Setup link didn't always appear at completion.&nbsp;</SPAN></P> </LI> <LI> <P>Fixes a regression where a very large number of shares blocked entering the transfer mapping page, even though transfers worked fine with PowerShell.</P> <UL> <LI> <P><EM>Note:</EM> SMS supports up to 50,000 shares from a single source server.</P> </LI> </UL> </LI> <LI><SPAN style="font-family: inherit;">Fixes an issue where cut over mapping pages blocked if a server had no network adapters or there wasn't a 1-1 mapping of interfaces possible</SPAN>&nbsp; &nbsp; &nbsp;</LI> </UL> <P>&nbsp;</P> <P><SPAN style="font-family: inherit;">We are making progress on improving re-transfer performance. 
In the meantime I wrote up<A href="#" target="_self"> this FAQ article on how you can optimize SMS transfer and re-transfer speeds</A>, please give it a read.</SPAN></P> <P>&nbsp;</P> <P><SPAN style="font-family: inherit;">Only 6 months to go before Windows Server 2008 support ends, happy migrating!</SPAN></P> <P>&nbsp;</P> <P>- Ned "clock ticking" Pyle&nbsp;</P> Fri, 13 Nov 2020 01:16:15 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/new-storage-migration-service-wac-extension-1-57-0-available/ba-p/765472 Ned Pyle 2020-11-13T01:16:15Z Azure Blob Storage on IoT Edge now supports both Windows and Linux in Public preview https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/azure-blob-storage-on-iot-edge-now-supports-both-windows-and/ba-p/711975 <P style="box-sizing: border-box; color: #333333; font-family: inherit; font-size: 16px; font-style: normal; font-variant: normal; font-weight: 300; letter-spacing: normal; line-height: 1.7142; orphans: 2; text-align: left; text-decoration: none; text-indent: 0px; text-transform: none; -webkit-text-stroke-width: 0px; white-space: normal; word-spacing: 0px; margin: 0px;">This post was authored by <LI-USER uid="200908"></LI-USER>, PM on the High Availability and Storage team. Follow her <A style="background-color: transparent; box-sizing: border-box; color: #146cac; text-decoration: underline;" href="#" target="_blank" rel="noopener"> @arnuwish </A> on Twitter. 
<BR style="box-sizing: border-box;" /><BR style="box-sizing: border-box;" />Azure Blob Storage on IoT Edge is a lightweight, Azure-consistent module that provides local block blob storage and comes with&nbsp;<SPAN style="display: inline !important; float: none; background-color: #ffffff; color: #333333; cursor: text; font-family: inherit; font-size: 16px; font-style: normal; font-variant: normal; font-weight: 300; letter-spacing: normal; line-height: 1.7142; orphans: 2; text-align: left; text-decoration: none; text-indent: 0px; text-transform: none; -webkit-text-stroke-width: 0px; white-space: normal; word-spacing: 0px;">deviceToCloudUpload and deviceAutoDelete functionalities</SPAN>. It is available in public preview with support for Windows AMD64, Linux AMD64, Linux ARM32, and Linux ARM64.<BR style="box-sizing: border-box;" /><BR style="box-sizing: border-box;" /><STRONG style="box-sizing: border-box; font-weight: bold;">deviceToCloudUpload </STRONG>is a configurable feature that automatically uploads data from your local blob storage to Azure, with support for intermittent internet connectivity.
It allows you to:&nbsp;</P> <UL> <LI>Turn this feature ON or OFF</LI> <LI>Choose the order in which data is uploaded to Azure (FIFO or LIFO)</LI> <LI>Specify the Azure Storage account where the data will be uploaded.</LI> <LI>Specify the containers you want to upload to Azure.</LI> <LI>Delete blobs immediately after the upload to cloud storage finishes</LI> <LI>Do full blob uploads (using the <CODE> Put Blob </CODE> operation) and block-level uploads (using the <CODE> Put Block </CODE> and <CODE> Put Block List </CODE> operations).</LI> </UL> <P><BR />When your blob consists of blocks, the module uses block-level upload to copy your data to Azure. Here are some common scenarios:&nbsp;</P> <UL> <LI>Your application updates some blocks of a previously uploaded blob; the module uploads only the updated blocks, not the whole blob.</LI> <LI>The module is uploading a blob and the internet connection drops; when connectivity returns, it uploads only the remaining blocks, not the whole blob.</LI> </UL> <P><BR /><STRONG>deviceAutoDelete</STRONG> is a configurable functionality where the module automatically deletes your blobs from local blob storage when the deviceAutoDelete value expires. It allows you to:&nbsp;</P> <UL> <LI>Turn this feature ON or OFF</LI> <LI>Specify the time in minutes&nbsp;</LI> <LI>Retain a blob while it is uploading even if the deleteAfterMinutes value expires</LI> </UL> <H2>Azure Blob Storage on IoT Edge – Version 1.2 (June 20, 2019)</H2> <P><BR />In the diagram below, we have an edge device running the Azure IoT Edge runtime. It runs a custom module that processes data collected from the sensor and saves it to the local blob storage account. Because the storage is Azure-consistent, the custom module can be developed using the Azure Storage SDK to make calls to the local blob storage, or you can simply use existing applications like Azure Storage Explorer. The module then automatically uploads the data from the specified containers to Azure while making sure your IoT Edge device does not run out of space. <BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="juneRelease2.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/119952iAA465680C59716FA/image-size/large?v=v2&amp;px=999" role="button" title="juneRelease2.png" alt="juneRelease2.png" /></span> <BR /><BR />This scenario is useful when there is a lot of data to process; for example, industries that capture survey and behavioral data. It is efficient to process the data locally because so much data is continuously being captured.
The Azure Blob Storage on IoT Edge module allows you to store and access such data efficiently, process it if required, and then automatically upload it to Azure, deleting the data from the IoT Edge device once the upload finishes to make space for new data.</P> <H2>Release notes and configuration details for Version 1.2</H2> <P>We heard your feedback; below are the additions and improvements in this version.</P> <UL> <LI>deviceToCloudUpload and deviceAutoDelete functionalities are now available for Windows AMD64</LI> <LI>Added a feature to automatically delete blobs from the IoT device after they are uploaded to Azure</LI> <LI>Includes bug fixes and performance improvements</LI> <LI>Desired properties setting names have changed. With this build, the old names will not work.</LI> <LI>Upgrading from 1.1 to 1.2 is not supported. Make sure a new volume binding is used when the new image is deployed.</LI> </UL> <P>Here are the <A href="#" target="_self">release notes and configuration details in Docker Hub</A> for this module.</P> <H2>Video</H2> <P><LI-VIDEO size="medium" align="center" height="225" width="400" vid="https://youtu.be/QhCYCvu3tiM" uploading="false" thumbnail="https://i.ytimg.com/vi/QhCYCvu3tiM/hqdefault.jpg" external="url"></LI-VIDEO></P> <H2>Next Steps:</H2> <P>&nbsp;</P> <UL> <LI>Integration with EventGrid, to light up the event-driven computing scenario on the IoT Edge platform</LI> <LI>cloudToDeviceDownload: on-demand downloading of data from Azure to the IoT Edge device</LI> <LI>Support for more sophisticated deviceToCloudUpload and deviceAutoDelete policies, based on early customer feedback</LI> <LI>Expanding the scope of Azure Storage consistency, e.g. Append Blob support</LI> <LI>Blob store on edge - local SMB</LI> <LI>Blob store on edge - local NFS</LI> </UL> <H2>More Information:</H2> <UL> <LI>Azure Blob Storage on IoT Edge concepts: <A href="#" target="_blank" rel="noopener">https://aka.ms/AzureBlobStorage-IotModule</A></LI> <LI>How to deploy Azure Blob Storage on IoT Edge: <A href="#" target="_blank" rel="noopener">https://aka.ms/Deploy-AzureBlobStorage-IotModule</A></LI> </UL> <H2>Feedback:</H2> <P>&nbsp;</P> <DIV> <DIV>Your feedback is very important to us; it helps us make this module and its features useful and easy to use.
Please share your feedback and let us know how we can improve.</DIV> </DIV> <P>You can reach out to us at <A href="https://gorovian.000webhostapp.com/?exam=mailto:absiotfeedback@microsoft.com" target="_blank" rel="noopener"> absiotfeedback@microsoft.com </A></P> Fri, 02 Aug 2019 18:24:37 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/azure-blob-storage-on-iot-edge-now-supports-both-windows-and/ba-p/711975 Arpita Duppala 2019-08-02T18:24:37Z New Storage Migration Service WAC extension 1.42.0 available https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/new-storage-migration-service-wac-extension-1-42-0-available/ba-p/693631 <P>Heya folks, <A href="#" target="_self">Ned</A> here again. We just released an update to the <A href="#" target="_self">Storage Migration Service</A> extension for <A href="#" target="_self">Windows Admin Center</A>. It will automatically appear in your WAC feeds in the next few hours. Going forward I'm going to always announce changes to these WAC tools, since the updates don't have release notes built in.</P> <P>&nbsp;</P> <P>Tool: <STRONG>Storage Migration Service</STRONG></P> <P>Version: <STRONG>1.42.0</STRONG></P> <P>Notes:</P> <UL> <LI>Now includes a link to the Azure File Sync tool for each migrated server when cutover completes. This means you can click through to a new tab specific to that server and set up AFS on that node. We have a full set of integrations planned here; this is just a nugget.</LI> <LI>No longer shows the Azure File Sync popup banner during transfers and cutovers.&nbsp;</LI> <LI>Transfer and re-transfer now display more understandable messaging, buttons, and choices.</LI> <LI>Fixes to ensure accurate share inclusion when modifying a transfer mapping</LI> <LI>Accessibility fixes</LI> </UL> <P>Thanks, and good hunting on those Windows Server 2008 machines that stop being supported in 7 months!</P> <P>&nbsp;</P> <P>- Ned "January 14, 2020" Pyle&nbsp;</P> Fri, 13 Nov 2020 01:16:35 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/new-storage-migration-service-wac-extension-1-42-0-available/ba-p/693631 Ned Pyle 2020-11-13T01:16:35Z Azure Blob Storage on IoT Edge now includes deviceToCloudUpload and deviceAutoDelete functionalities https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/azure-blob-storage-on-iot-edge-now-includes-devicetocloudupload/ba-p/428334 <P><STRONG> First published on TECHNET on Mar 07, 2019 </STRONG></P> <P><SPAN>Azure Blob Storage on IoT Edge Version 1.2 is now available with new features; please visit</SPAN>&nbsp;<U><FONT color="#0b0117"><A href="#" target="_blank" rel="noopener">https://aka.ms/abs-iot-blogpost</A></FONT></U>&nbsp;</P> <P>&nbsp;</P> <P>This post was authored by <LI-USER uid="200908"></LI-USER>, PM on the High Availability and Storage team.
Follow her <A href="#" target="_blank" rel="noopener"> @arnuwish </A> on Twitter. <BR /><BR />Azure Blob Storage on IoT Edge is a lightweight, Azure-consistent module that provides local block blob storage, available in public preview. We are excited to introduce deviceToCloudUpload and deviceAutoDelete functionalities to our "Azure Blob Storage on IoT Edge" module.&nbsp;<BR /><BR /><STRONG>deviceToCloudUpload (tiering)</STRONG> is a configurable functionality that allows you to automatically upload data from your local blob storage to Azure, with support for intermittent internet connectivity. It allows you to:&nbsp;</P> <UL> <LI>Turn this feature ON or OFF</LI> <LI>Choose the order in which data is uploaded to Azure (FIFO or LIFO)</LI> <LI>Specify the Azure Storage account where the data will be uploaded.</LI> <LI>Specify the containers you want to upload to Azure.</LI> <LI>Do full blob uploads (using the <CODE> Put Blob </CODE> operation) and block-level uploads (using the <CODE> Put Block </CODE> and <CODE> Put Block List </CODE> operations).</LI> </UL> <P><BR />When your blob consists of blocks, the module uses block-level upload to copy your data to Azure. Here are some common scenarios:&nbsp;</P> <UL> <LI>Your application updates some blocks of a previously uploaded blob; the module uploads only the updated blocks, not the whole blob.</LI> <LI>The module is uploading a blob and the internet connection drops; when connectivity returns, it uploads only the remaining blocks, not the whole blob.</LI> </UL> <P><BR /><STRONG>deviceAutoDelete (TTL)</STRONG> is a configurable functionality where the module automatically deletes your blobs from local blob storage when the deviceAutoDelete value expires. It allows you to:&nbsp;</P> <UL> <LI>Turn this feature ON or OFF</LI> <LI>Specify the time in minutes&nbsp;</LI> </UL> <H2>Azure Blob Storage on IoT Edge – Version 1.1 (March 07, 2019)</H2> <P><BR />In the diagram below, we have an edge device pre-installed with the Azure IoT Edge runtime. It runs a custom module that processes data collected from the sensor and saves it to the local blob storage account. Because the storage is Azure-consistent, the custom module can be developed using the Azure Storage SDK to make calls to the local blob storage. The module then automatically uploads the data from the specified containers to Azure while making sure your IoT Edge device does not run out of space. <BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 998px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107650iD69282E6CA2621EE/image-size/large?v=v2&amp;px=999" role="button" /></span> <BR /><BR />This scenario is useful when there is a lot of data to process; for example, industries that capture survey and behavioral data, research data, financial data, hospital data, and so on. It is efficient to process the data locally because so much data is continuously being captured.
The Azure Blob Storage on IoT Edge module allows you to store and access such data efficiently, process it if required, and then automatically upload it to Azure, deleting the old data from the IoT Edge device to make space for new data.</P> <P>&nbsp;</P> <H2>Current Functionality:</H2> <P><BR />With the current public preview module, users can:&nbsp;</P> <UL> <LI>Store data locally and access the local blob storage account using the Azure Storage SDK.</LI> <LI>Upload blobs from the IoT Edge device to Azure</LI> <LI>Delete the data from the IoT Edge device after a specified amount of time</LI> <LI>Reuse the same business logic of an app written to store/access data on Azure.</LI> <LI>Deploy multiple instances on an IoT Edge device.</LI> <LI>Use any Azure IoT Edge <A href="#" target="_blank" rel="noopener"> Tier 1 </A> host operating system</LI> </UL> <H2>&nbsp;</H2> <H2>Release notes and configuration details for Version 1.1</H2> <P>Here are the <A href="#" target="_self">release notes and configuration details in Docker Hub</A> for this module.</P> <H2>&nbsp;</H2> <H2>Next Steps:</H2> <P>&nbsp;</P> <UL> <LI>Releasing deviceToCloudUpload and deviceAutoDelete features for Windows AMD64</LI> <LI>Integration with EventGrid, to light up the event-driven computing scenario on the IoT Edge platform</LI> <LI>cloudToDeviceDownload: on-demand downloading of data from Azure to the IoT Edge device</LI> <LI>Support for more sophisticated deviceToCloudUpload and deviceAutoDelete policies, based on early customer feedback</LI> <LI>Expanding the scope of Azure Storage consistency, e.g. Append Blob support</LI> </UL> <H2>More Information:</H2> <P>&nbsp;</P> <P>Find more information about this module at <A href="#" target="_blank" rel="noopener"> https://aka.ms/AzureBlobStorage-IotModule</A></P> <H2>&nbsp;</H2> <H2>Feedback:</H2> <P>&nbsp;</P> <DIV> <DIV>Your feedback is very important to us; it helps us make this module and its features useful and easy to use. Please share your feedback and let us know how we can improve.</DIV> </DIV> <P>You can reach out to us at <A href="https://gorovian.000webhostapp.com/?exam=mailto:absiotfeedback@microsoft.com" target="_blank" rel="noopener"> absiotfeedback@microsoft.com </A></P> Thu, 11 Jul 2019 17:31:46 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/azure-blob-storage-on-iot-edge-now-includes-devicetocloudupload/ba-p/428334 Arpita Duppala 2019-07-11T17:31:46Z Windows 10 and reserved storage https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/windows-10-and-reserved-storage/ba-p/428327 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jan 07, 2019 </STRONG> <BR /> <H2> Reserving disk space to keep Windows 10 up to date </H2> <BR /> <BR /> Windows Insiders: To enable this new feature now, please see the last section "Testing out reserved storage" and complete the quest. <BR /> <BR /> Starting with the next major update we’re making a few changes to how Windows 10 manages disk space.
Through reserved storage, some disk space will be set aside to be used by updates, apps, temporary files, and system caches. Our goal is to improve the day-to-day function of your PC by ensuring critical OS functions always have access to disk space. <EM> Without </EM> reserved storage, if a user almost fills up her or his storage, several Windows and application scenarios become unreliable. Windows and application scenarios may not work as expected if they need free space to function. <EM> With </EM> reserved storage, updates, apps, temporary files, and caches are less likely to take away from valuable free space and should continue to operate as expected. Reserved storage will be introduced automatically on devices that come with version 1903 pre-installed or those where 1903 was clean installed. You don’t need to set anything up—this process will automatically run in the background. The rest of this blog post will share additional details on how reserved storage can help optimize your device. <BR /> <H2> How does it work? </H2> <BR /> When apps and system processes create temporary files, these files will automatically be placed into reserved storage. These temporary files won’t consume free user space when they are created and will be less likely to do so as temporary files increase in number, provided that the reserve isn’t full. Since disk space has been set aside for this purpose, your device will function more reliably. Storage sense will automatically remove unneeded temporary files, but if for some reason your reserve area fills up Windows will continue to operate as expected while temporarily consuming some disk space outside of the reserve if it is temporarily full. <BR /> <H2> Windows Updates made easy </H2> <BR /> Updates help keep your device and data safe and secure, along with introducing new features to help you work and play the way you want. Every update temporarily requires some free disk space to download and install. 
On devices with reserved storage, updates will use the reserved space first. <BR /> <BR /> When it’s time for an update, unneeded temporary OS files in reserved storage will be deleted and the update will use the full reserve area. This will enable most PCs to download and install an update without having to free up any of your disk space, even when you have minimal free disk space. If for some reason Windows Update needs more space than is reserved, it will automatically use other available free space. If that’s not enough, Windows will guide you through steps to temporarily extend your hard disk with external storage, such as a USB stick, or to free up disk space. <BR /> <H2> How much of my storage is reserved? </H2> <BR /> In the next major release of Windows (19H1), we anticipate that reserved storage will start at about 7 GB; however, the amount of reserved space will vary over time based on how you use your device. For example, temporary files that consume general free space today on your device may consume space from reserved storage in the future. Additionally, over the last several releases we’ve reduced the size of Windows for most customers. We may adjust the size of reserved storage in the future based on diagnostic data or feedback. The reserved storage cannot be removed from the OS, but you may be able to reduce the amount of space reserved by removing unused optional features and languages.&nbsp; More details below. <BR /> <BR /> The following two factors influence how reserved storage changes size on your device: <BR /> <UL> <BR /> <LI> <STRONG> Optional features. </STRONG> Many optional features are available for Windows. These may be pre-installed, acquired on demand by the system, or installed manually by you. When an optional feature is installed, Windows will increase the amount of reserved storage to ensure there is space to maintain this feature on your device when updates are installed. 
You can see which features are installed on your device by going to <STRONG> Settings </STRONG> &gt; <STRONG> Apps </STRONG> &gt; <STRONG> Apps &amp; features </STRONG> &gt; <STRONG> Manage optional features </STRONG> . You can reduce the amount of space required for reserved storage on your device by uninstalling optional features you are not using. </LI> <BR /> </UL> <BR /> <UL> <BR /> <LI> <STRONG> Installed Languages. </STRONG> Windows is localized into many languages.&nbsp;Although most of our customers only use one language at a time, some customers switch between two or more languages.&nbsp;When additional languages are installed, Windows will increase the amount of reserved storage to ensure there is space to maintain these languages when updates are installed. You can see which languages are installed on your device by going to <STRONG> Settings </STRONG> &gt; <STRONG> Time &amp; Language </STRONG> &gt; <STRONG> Language </STRONG> . You can reduce the amount of space required for reserved storage on your device by uninstalling languages you aren’t using. </LI> <BR /> </UL> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107649iCBE99C70CAA74B91" /> <BR /> <BR /> Follow these steps to check the reserved storage size: Click Start &gt; Search for “Storage settings” &gt; Click "Show more categories" &gt; Click “System &amp; reserved” &gt; Look at the “Reserved storage” size. <BR /> <H2> Testing out reserved storage </H2> <BR /> This feature is available to Windows Insiders running Build 18298 or newer. <BR /> <BR /> Step 1: Become a Windows Insider. <BR /> <BR /> The <A href="#" target="_blank"> Windows Insider Program </A> brings millions of people around the world together to shape the next evolution of Windows 10. Become an Insider to gain exclusive access to upcoming Windows 10 features and the ability to submit feedback directly to Microsoft Engineers. 
Learn how to get started: <A href="#" target="_blank"> Windows Insiders Quick Start </A> <BR /> <BR /> Step 2: Complete this <A href="#" target="_blank"> quest </A> to start using this feature. <BR /> <BR /> <BR /> <BR /> <EM> Aaron Lower contributed to this post. </EM> <BR /> <EM> Follow Aaron Lower on <A href="#" target="_blank"> LinkedIn </A> </EM> <BR /> <EM> Follow Jesse Rajwan on <A href="#" target="_blank"> LinkedIn </A> </EM> </BODY></HTML> Wed, 10 Apr 2019 14:53:29 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/windows-10-and-reserved-storage/ba-p/428327 FileCAB-Team 2019-04-10T14:53:29Z Chelsio RDMA and Storage Replica Perf on Windows Server 2019 are 💯 https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/chelsio-rdma-and-storage-replica-perf-on-windows-server-2019-are/ba-p/428324 <P><STRONG> First published on TECHNET on Dec 13, 2018 </STRONG> <BR />Heya folks, <A href="#" target="_blank" rel="noopener"> Ned </A> here again. Some recent Windows Server 2019 news you may have missed: <A href="#" target="_blank" rel="noopener"> Storage Replica performance </A> was greatly increased over our original numbers. I chatted about this at earlier Ignite sessions, but when we finally got to Orlando, I was too busy talking about the new <A href="#" target="_blank" rel="noopener"> Storage Migration Service </A> . <BR /><BR />To make up for this, the great folks at <A href="#" target="_blank" rel="noopener"> Chelsio </A> decided to setup servers and their insane <A href="#" target="_blank" rel="noopener"> <STRONG> 100Gb T62100-CR iWARP RDMA network adapters </STRONG> </A> , then test the same replication on the same hardware with both Windows Server 2016 and Windows Server 2019; apples and apples, baby. If you’ve been in a coma since 2012, Windows Server uses RDMA for CPU-offloaded SMB Direct high performance data transfer over SMB3. 
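</P> <P>If you want to confirm that SMB Direct is actually doing its RDMA thing during a transfer like this, the inbox PowerShell cmdlets will tell you. Here is a quick sketch (run it while a copy or replication is active; output columns vary a bit by OS version):</P> <PRE># Is the NIC advertising RDMA to the OS?<BR />Get-NetAdapterRdma<BR /><BR /># Do the SMB client's interfaces report "RDMA Capable"?<BR />Get-SmbClientNetworkInterface<BR /><BR /># Are the active SMB connections using those RDMA-capable interfaces?<BR />Get-SmbMultichannelConnection</PRE> <P>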
iWARP brings an additional advantage of metro-area ranges while still using TCP for simplified configuration. </P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 486px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107645iE84EB21A742D440D/image-size/large?v=v2&amp;px=999" role="button" /></span></P> <P><BR />The TL;DR is: <STRONG> <EM> Chelsio iWARP 100Gb </EM> </STRONG> <EM> - with <STRONG> SMB 3.1.1 </STRONG> and <STRONG> SMB Direct </STRONG> providing the transport - for Storage Replica is so low latency and so high bandwidth that you can stop worrying about your storage outrunning it. </EM> :face_with_tears_of_joy: No matter how much NVMe SSD we threw at the workload, the storage ran out of IO before the Chelsio network did. It’s such an incredible flip from most of my networking life. We live in magical networking times. <BR /><BR />In these tests we used a pair of SuperMicro servers, one with five striped Intel NVMe SSDs, one with five striped Micron NVMe SSDs. Each had 24 3 GHz Xeon cores and 128GB of memory. They were installed with both Windows Server 2016 RTM and Windows Server 2019 build 17744. A single 1TB volume was formatted on the source storage. Each server got a single-port <A href="#" target="_blank" rel="noopener"> 100Gb T62100-CR iWARP RDMA network adapter </A> and the latest Chelsio Unified Wire drivers. <BR /><BR />Let’s see some numbers and charts! </P> <H3>Initial Block Copy</H3> <P><BR />We started with initial block copy, where Storage Replica must copy every single disk block from a source partition to a destination partition. Even though the Chelsio iWARP adapter is pushing 94Gb per second at a sustained rate – which is as fast as this storage will send and receive – CPU overhead is only 5% thanks to offloading. And even 5 RAID-0 NVMe SSDs at 100% read on the source and 100% write on the destination couldn’t completely fill that single 100Gb pipe. 
With SMB multichannel and another RDMA port turned on – this adapter has two – this would have been even less utilized. </P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 664px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107646iBDC4622DFDF48E2F/image-size/large?v=v2&amp;px=999" role="button" /></span></P> <P><BR />That entire 1TB volume replicated in <STRONG> 95 seconds </STRONG>. </P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 473px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107647iAE20FE9410D4C64C/image-size/large?v=v2&amp;px=999" role="button" /></span></P> <P><BR />People talk about the coming 5G speed revolution and I can’t help but laugh my butt off, tbh. :beaming_face_with_smiling_eyes: </P> <H3>Continuous Replication</H3> <P><BR />There shouldn’t be much initial sync performance difference between Windows Server 2016 and 2019 because the logs are not used at that phase of replication. They only kick in when block copy is done and you are performing writes on the source. So at this phase, two sets of tests were run on the same exact hardware and drivers: a few times with Windows Server 2016’s v1 log and a few times with Windows Server 2019’s tuned-up v1.1 log. <BR /><BR />To perform the test we used <A href="#" target="_blank" rel="noopener"> Diskspd </A> , a free IO workload creation tool we provide for testing and validation. 
This is the tool used to ensure that <A href="#" target="_blank" rel="noopener"> Microsoft Windows Server Software Defined HCI </A> clusters sold by <A href="#" target="_blank" rel="noopener"> Dell </A> , <A href="#" target="_blank" rel="noopener"> HPE </A> , <A href="#" target="_blank" rel="noopener"> DataOn </A> , <A href="#" target="_blank" rel="noopener"> Fujitsu </A> , <A href="#" target="_blank" rel="noopener"> Supermicro </A> , <A href="#" target="_blank" rel="noopener"> NEC </A> , <A href="#" target="_blank" rel="noopener"> Lenovo </A> , <A href="#" target="_blank" rel="noopener"> QCT </A> , and others meet the logo standards for performance and reliability under stress test via a test suite we call “VM Fleet.” <BR /><BR />OK, enough <A href="#" target="_blank" rel="noopener"> Storage Spaces Direct </A> shilling for <A href="#" target="_blank" rel="noopener"> Cosmos </A> , let’s see how the perf changed between Storage Replica in Windows Server 2016 (aka RS1) and Windows Server 2019 (aka RS5). <span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 752px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107648i2DD043311D0AA46A/image-size/large?v=v2&amp;px=999" role="button" /></span> <BR /><BR />The lower orange line shows Windows Server 2016 performance as we hit the replicated volume on the source with 4K, 8K, then 16K IO writes. The upper green line shows that <STRONG> Windows Server 2019 delivers ~2-3X improvements </STRONG> in MB per second (that’s a big B for bytes, not bits), depending on IO size, and you can see we tuned as carefully as possible for the common 8K IO size. 
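</P> <P>If you want to generate a similar write workload yourself, a Diskspd command line along these lines will do it. This is a sketch only: the test file path, size, and duration are illustrative, not the exact parameters from these runs:</P> <PRE># 60 seconds of 8K random, 100% write (-w100), unbuffered (-Su) IO with<BR /># 4 threads x 16 outstanding IOs each, measuring latency (-L), against a<BR /># 10GB test file on the source volume. The target path is hypothetical.<BR />.\diskspd.exe -d60 -t4 -o16 -b8k -r -w100 -Su -L -c10G e:\sr\test.dat</PRE> <P>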
Because we’re using extra-wide, low-latency, high-throughput, low-CPU-impact Chelsio NICs, the network is unlikely to ever be your bottleneck, and all that capacity is dedicated to the actual workload you’re running, not to the special "replication networks" that were so common in the old world of low-bandwidth 1 and 10 Gb TCP dumb adapters. </P> <H3>The Big Sum Up</H3> <P><BR />Storage Replica with Chelsio T6 provides datacenters with high-performance data replication across local and remote locations, with the ease of use of TCP instead of lossless Ethernet, while ensuring that your most critical workloads are protected with synchronous replication. Chelsio makes a cost-effective and secure data recovery solution that should appeal to a datacenter or org of any size. <BR /><BR />The bottom line: we’ve entered a new age for moving all that data around and its name is iWARP. Get on the rocket, IT pros. <BR /><BR />Until next time, <BR /><BR />- Ned "RDMA good, old networking bad. Me simple man" Pyle</P> Wed, 10 Apr 2019 15:21:37 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/chelsio-rdma-and-storage-replica-perf-on-windows-server-2019-are/ba-p/428324 Ned Pyle 2019-04-10T15:21:37Z Storage Migration Service Log Collector Available https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-migration-service-log-collector-available/ba-p/428316 <P><STRONG> First published on TECHNET on Nov 07, 2018 </STRONG> <BR />Heya folks, <A href="#" target="_blank" rel="noopener"> Ned </A> here again. We have put together a log collection script for the <A href="#" target="_blank" rel="noopener"> Storage Migration Service </A> , if you ever need to troubleshoot or work with MS Support. </P> <P><A href="#" target="_blank" rel="noopener"> https://aka.ms/smslogs </A></P> <P><BR />It will grab the right logs then drop them into a zip file. Pretty straightforward, see the readme for instructions. 
It is an open source project with an MIT license, feel free to tinker or fork for your own needs. I will eventually move it to its own GitHub project, but for now it’s under me. <BR /><BR />You will, of course, never need this. :grinning_face: <BR /><BR />- Ned "get real" Pyle <BR /><BR /></P> Fri, 13 Nov 2020 01:16:40 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-migration-service-log-collector-available/ba-p/428316 Ned Pyle 2020-11-13T01:16:40Z The new HCI industry record: 13.7 million IOPS with Windows Server 2019 and Intel® Optane™ DC persistent memory https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/the-new-hci-industry-record-13-7-million-iops-with-windows/ba-p/428314 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Oct 30, 2018 </STRONG> <BR /> <EM> Written by Cosmos Darwin, Senior PM on the Core OS team at Microsoft. Follow him on Twitter <A href="#" target="_blank"> @cosmosdarwin </A> . </EM> <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107638i85966FFAB6039D74" /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> Hyper-converged infrastructure is an important shift in datacenter technology. By moving away from proprietary storage arrays to an architecture built on industry-standard interconnects, x86 servers, and local drives, organizations can benefit from the latest cloud technology faster and more affordably than ever before. <BR /> <BR /> Watch this demo from&nbsp;Microsoft Ignite 2018: <BR /> <BR /> <IFRAME frameborder="0" height="489" src="https://www.youtube.com/embed/8WMXkMLJORc" width="871"> </IFRAME> <BR /> <BR /> Intel® Optane™ DC persistent memory delivers breakthrough storage performance. To go with the fastest hardware, you need the fastest software. Hyper-V and Storage Spaces Direct in Windows Server 2019 are the foundational hypervisor and software-defined storage of the Microsoft Cloud. 
Purpose-built for efficiency and performance, they're embedded in the Windows kernel and meticulously optimized. To learn more about hyper-converged infrastructure powered by Windows Server, visit <A href="#" target="_blank"> Microsoft.com/HCI </A> . <BR /> <BR /> For details about this demo, including some additional results, read on! <BR /> <H3> <STRONG> Hardware </STRONG> </H3> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107639i793107AEF63AB0EE" /> <BR /> <EM> The reference configuration developed jointly by Intel and Microsoft. </EM> <BR /> <UL> <BR /> <LI> 12 x 2U Intel® S2600WFT server nodes </LI> <BR /> <LI> Intel® Turbo Boost ON, Intel® Hyper-Threading ON </LI> <BR /> </UL> <BR /> Each server node: <BR /> <UL> <BR /> <LI> 384 GiB (12 x 32 GiB) DDR4 2666 memory </LI> <BR /> <LI> 2 x 28-core future Intel® Xeon® Scalable processor </LI> <BR /> <LI> 1.5 TB Intel® Optane™ DC persistent memory as cache </LI> <BR /> <LI> 32 TB NVMe (4 x 8 TB Intel® DC P4510) as capacity </LI> <BR /> <LI> 2 x Mellanox ConnectX-4 25 Gbps </LI> <BR /> </UL> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107640iAA7FEB12C3420A9E" /> <BR /> <EM> Intel® Optane™ DC modules are DDR4 pin compatible but provide native storage persistence. </EM> <BR /> <H3> <STRONG> Software </STRONG> </H3> <BR /> <STRONG> Windows OS. </STRONG> Every server node runs Windows Server 2019 Datacenter pre-release build 17763, the latest available on September 20, 2018. The power plan is set to High Performance, and all other settings are default, including applying relevant side-channel mitigations. (Specifically, mitigations for Spectre v1 and Meltdown are applied.) <BR /> <BR /> <STRONG> Storage Spaces Direct. </STRONG> Best practice is to create one or two data volumes per server node, so we create 12 volumes with ReFS. 
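<BR /> <BR /> In PowerShell terms, each of those volumes is a single <CODE> New-Volume </CODE> call. Here is a sketch; the pool wildcard and volume name are placeholders, not a transcript of the actual test setup: <BR /> <BR /> <PRE># Create one mirror-resilient volume, formatted with ReFS and added to<BR /># Cluster Shared Volumes. Pool and volume names are hypothetical.<BR />New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume01" `<BR />    -FileSystemType CSVFS_ReFS -ResiliencySettingName Mirror -Size 8TB</PRE> <BR /> 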
Each volume is 8 TiB, for about 100 TiB of total usable storage. Each volume uses three-way mirror resiliency, with allocation delimited to three servers. All other settings, like columns and interleave, are default. To accurately measure IOPS to persistent storage only, the in-memory CSV read cache is disabled. <BR /> <BR /> <STRONG> Hyper-V VMs. </STRONG> Ordinarily we’d create one virtual processor per physical core. For example, with 2 sockets x 28 cores we’d assign up to 56 virtual processors per server node. In this case, saturating performance took 26 virtual machines x 4 virtual processors each = 104 virtual processors per server node. That’s 312 total Hyper-V Gen 2 VMs across the 12 server nodes. Each VM runs Windows and is assigned 4 GiB of memory. <BR /> <BR /> <STRONG> VHDXs. </STRONG> Every VM is assigned one fixed 40 GiB VHDX where it reads and writes to one 10 GiB test file. For the best performance, every VM runs on the server node that owns the volume where its VHDX file is stored. The total active working set, accounting for three-way mirror resiliency, is 312 x 10 GiB x 3 = 9,360 GiB, which fits comfortably within the Intel® Optane™ DC persistent memory. <BR /> <H3> <STRONG> Benchmark </STRONG> </H3> <BR /> There are many ways to measure storage performance, depending on the application. For example, you can measure the rate of data transfer (GB/s) by simply copying files, although this <A href="#" target="_blank"> isn’t the best </A> methodology. For databases, you can measure transactions per second (T/s). In virtualization and hyper-converged infrastructure, it’s standard to count storage input/output (I/O) operations per second, or “IOPS” – essentially, the number of reads or writes that virtual machines can perform. <BR /> <BR /> More precisely, we know that Hyper-V virtual machines typically perform random 4 kB block-aligned IO, so that’s our benchmark of choice. <BR /> <BR /> How do you generate 4 kB random IOPS? <BR /> <UL> <BR /> <LI> <STRONG> VM Fleet. 
</STRONG> We use the open-source <A href="#" target="_blank"> VM Fleet </A> tool available on GitHub. VM Fleet makes it easy to orchestrate running <A href="#" target="_blank"> DISKSPD </A> , the popular Windows micro-benchmark tool, in hundreds or thousands of Hyper-V virtual machines at once. To saturate performance, we specify 4 threads per file ( <STRONG> -t4 </STRONG> ) with 16 outstanding IOs per thread ( <STRONG> -o16 </STRONG> ). To skip the Windows cache manager, we specify unbuffered IO ( <STRONG> -Su </STRONG> ). And we specify random ( <STRONG> -r </STRONG> ) and 4 kB block-aligned ( <STRONG> -b4k </STRONG> ). We can vary the read/write mix by the <STRONG> -w </STRONG> parameter. </LI> <BR /> </UL> <BR /> In summary, here’s how DISKSPD is being invoked: <BR /> <CODE> .\diskspd.exe -d120 -t4 -o16 -Su -r -b4k -w0 [...] </CODE> <BR /> How do you count 4 kB random IOPS? <BR /> <UL> <BR /> <LI> <STRONG> Windows Admin Center. </STRONG> Fortunately, <A href="#" target="_blank"> Windows Admin Center </A> makes it easy. The HCI Dashboard features an interactive chart plotting cluster-wide aggregate IOPS, as measured at the CSV filesystem layer in Windows. More detailed reporting is available in the command-line output of DISKSPD and VM Fleet. </LI> <BR /> </UL> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107641i57B0FA93C09B54F7" /> <BR /> <EM> The HCI Dashboard in Windows Admin Center has charts for IOPS and IO latency. </EM> <BR /> <BR /> The other side to storage benchmarking is latency – how long an IO takes to complete. Many storage systems perform better under heavy queuing, which helps maximize parallelism and busy time at every layer of the stack. But there’s a tradeoff: queuing increases latency. For example, if you can do 100 IOPS with sub-millisecond latency, you may be able to achieve 200 IOPS if you accept higher latency. 
This is good to watch out for – sometimes the largest IOPS benchmark numbers are only possible with latency that would otherwise be unacceptable. <BR /> <BR /> Cluster-wide aggregate IO latency, as measured at the same layer in Windows, is charted on the HCI Dashboard too. <BR /> <H3> <STRONG> Results </STRONG> </H3> <BR /> Any storage system that provides fault tolerance necessarily makes distributed copies of writes, which must traverse the network and incur backend write amplification. For this reason, the absolute largest IOPS benchmark numbers are typically achieved with reads only, especially if the storage system has common-sense optimizations to read from the local copy whenever possible, which Storage Spaces Direct does. <BR /> <BR /> <STRONG> With 100% reads, the cluster delivers 13,798,674 IOPS. </STRONG> <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107642i751605504973658E" /> <BR /> <EM> Industry-leading HCI benchmark of over 13.7M IOPS, with Windows Server 2019 and Intel® Optane™ DC persistent memory. </EM> <BR /> <BR /> If you watch the video closely, what’s even more jaw-dropping is the latency: even at over 13.7 M IOPS, the filesystem in Windows is reporting latency that’s consistently less than 40 µs! (That’s the symbol for microseconds, one-millionths of a second.) This is an order of magnitude faster than what typical all-flash vendors proudly advertise today. <BR /> <BR /> But most applications don’t just read, so we also measured with mixed reads and writes: <BR /> <BR /> <STRONG> With 90% reads and 10% writes, the cluster delivers 9,459,587 IOPS. </STRONG> <BR /> <BR /> In certain scenarios, like data warehouses, throughput (in GB/s) matters more, so we measured that too: <BR /> <BR /> <STRONG> With larger 2 MB block size and sequential IO, the cluster can read 535.86 GB/s! 
</STRONG> <BR /> <BR /> Here are all the results, with the same 12-server HCI cluster: <BR /> <TABLE> <TBODY><TR> <TD> <STRONG> Run </STRONG> </TD> <TD> <STRONG> Parameters </STRONG> </TD> <TD> <STRONG> Result </STRONG> </TD> </TR> <TR> <TD> Maximize IOPS, all-read </TD> <TD> 4 kB random, 100% read </TD> <TD> 13,798,674 IOPS </TD> </TR> <TR> <TD> Maximize IOPS, read/write </TD> <TD> 4 kB random, 90% read, 10% write </TD> <TD> 9,459,587 IOPS </TD> </TR> <TR> <TD> Maximize throughput </TD> <TD> 2 MB sequential, 100% read </TD> <TD> 535.86 GB/s </TD> </TR> </TBODY></TABLE> <BR /> <H3> <STRONG> Conclusion </STRONG> </H3> <BR /> Together, Storage Spaces Direct in Windows Server 2019 and Intel® Optane™ DC persistent memory deliver breakthrough performance. This industry-leading HCI benchmark of over 13.7M IOPS, with predictable and extremely low latency, is more than double our previous industry-leading benchmark of 6.7M IOPS. What’s more, this time we needed just 12 server nodes, 25% fewer than two years ago. <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107643i575B738195D1BA5B" /> <BR /> <EM> More than double our previous record, in just two years, with fewer server nodes. </EM> <BR /> <BR /> It’s an exciting time for Storage Spaces Direct. <A href="#" target="_blank"> Early next year </A> , the first wave of <A href="#" target="_blank"> Windows Server Software-Defined (WSSD) </A> offers with Windows Server 2019 will launch, delivering the latest cloud-inspired innovation to your datacenter, including native support for persistent memory. <A href="#" target="_blank"> Intel® Optane™ DC persistent memory </A> comes out early next year too. <BR /> <BR /> We’re proud of these results, and we’re already working on what’s next. Hint: even bigger numbers! 
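<BR /> <BR /> One way to put these numbers in perspective: multiply the 4 kB result out into raw throughput and compare it to the sequential number. A quick back-of-the-envelope in PowerShell: <BR /> <BR /> <PRE># 13,798,674 reads/sec x 4,096 bytes per read, in decimal GB/s:<BR />[math]::Round(13798674 * 4096 / 1e9, 2)   # 56.52 GB/s<BR /><BR /># ...versus 535.86 GB/s reading 2 MB sequential blocks, i.e. large<BR /># sequential IO moves roughly 10x more data than small random IO here.</PRE> 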
<BR /> <BR /> <EM> Cosmos and </EM> <EM> the Storage Spaces Direct team at Microsoft, <BR /> </EM> <EM> and the Windows Operating System team at Intel </EM> <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107644i0C3F5BC1B39D7382" /> </BODY></HTML> Wed, 10 Apr 2019 14:52:25 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/the-new-hci-industry-record-13-7-million-iops-with-windows/ba-p/428314 Cosmos Darwin 2019-04-10T14:52:25Z Using System Insights to forecast clustered storage usage https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/using-system-insights-to-forecast-clustered-storage-usage/ba-p/428296 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Oct 03, 2018 </STRONG> <BR /> <EM> This post was authored by Garrett Watumull, PM on the Windows Server team at Microsoft. Follow him <A href="#" target="_blank"> @GarrettWatumull </A> on Twitter. </EM> <BR /> <BR /> In Windows Server 2019, we introduced <A href="#" target="_blank"> System Insights </A> , a new predictive analytics feature for Windows Server. System Insights ships with four default capabilities designed to help you proactively and efficiently forecast resource consumption. It collects historical usage data and implements robust data analytics to accurately predict resource usage, without requiring you to write any scripts or create custom visualizations. <BR /> <BR /> System Insights is designed to run on all Windows Server instances, across physical and guest instances, across hypervisors, and across clouds. Because most Windows Server instances are unclustered, <STRONG> we focused on implementing storage forecasting capabilities for local storage </STRONG> - the volume consumption forecasting capability predicts storage consumption for local volumes, and the total storage consumption forecasting capability predicts storage consumption across all local drives. 
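<BR /> <BR /> These capabilities can also be listed and driven directly from PowerShell with the inbox SystemInsights module. A quick sketch (the capability name below matches how it ships in Windows Server 2019): <BR /> <BR /> <PRE># Enumerate the default forecasting capabilities...<BR />Get-InsightsCapability<BR /><BR /># ...run one on demand, then read back its latest prediction.<BR />Invoke-InsightsCapability -Name "Volume consumption forecasting"<BR />Get-InsightsCapabilityResult -Name "Volume consumption forecasting"</PRE> 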
<BR /> <BR /> After hearing your feedback, however, we realized we needed to extend this functionality to clustered storage. And with the latest Windows Admin Center and Windows Server GA releases, we're excited to announce support for forecasting on clustered storage. Cluster administrators can now use System Insights to forecast clustered storage consumption. <BR /> <H3> How it works </H3> <BR /> When you install System Insights on a failover cluster, the default behavior of System Insights remains unchanged - the storage capabilities only analyze local volumes and disks. You can, however, easily enable forecasting on clustered storage, and System Insights will immediately start collecting all accessible clustered storage information: <BR /> <UL> <BR /> <LI> If you are using Cluster Shared Volumes (CSV), System Insights collects all clustered volume and disk information in your cluster. System Insights can rely on a nice property of CSV, where each node in a cluster is presented with a consistent, distributed namespace. Or, in other words, each node can access all volumes in the cluster, even when those volumes are mounted on other nodes. </LI> <BR /> <LI> If you aren't using Cluster Shared Volumes (CSV), System Insights collects all clustered disk information, but it can only collect the information about the clustered volume that is currently mounted on that node. </LI> <BR /> </UL> <BR /> Once enough data has been collected, System Insights will start forecasting on clustered storage data. <BR /> <BR /> Lastly, before describing how to enable this functionality, there are a couple last things to point out: <BR /> <UL> <BR /> <LI> First, all clustered storage forecasting data is stored on node-local storage. If you want to collect clustered storage data on multiple nodes, you must enable this on each node. This ensures that you have multiple copies of the clustered storage data in case a node fails. 
</LI> <BR /> <LI> Even though clustered storage data is stored on node-local storage, the data footprint of System Insights should still be relatively modest. Each volume and disk consume 300KB and 200KB of storage respectively. You can read more about the System Insights data sources <A href="#" target="_blank"> here </A> . </LI> <BR /> <LI> These changes don't affect the CPU or networking capabilities, as these capabilities analyze the server’s local CPU or network usage whether that server is clustered or stand-alone. </LI> <BR /> </UL> <BR /> <H3> Setting it up </H3> <BR /> Windows Admin Center <BR /> Windows Admin Center provides a simple, straightforward method to enable forecasting on clustered storage. If you’ve enabled failover clustering on your server, you’ll see this dialog when you first open System Insights: <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107635i552CF2B7E1D54368" /> <BR /> <BR /> Clicking <STRONG> Install </STRONG> turns on data collection and forecasting for clustered storage. Alternatively, you can also use the new <B> Clustered storage </B> button to adjust the clustered storage forecasting settings. This button is now visible if failover clustering is enabled on a server: <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107636i64385DE0384B5C4D" /> <BR /> <BR /> Once you click on Clustered storage, you can adjust the data collection settings, as well as the specific forecasting behavior of the volume and the total storage consumption capabilities. For each storage forecasting capability, you can specify local or clustered storage predictions: <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107637iE0EAFCBB43CF6812" /> <BR /> PowerShell <BR /> For those of you looking to use PowerShell instead, we've exposed three registry keys to enable this functionality. 
Together, these help you manage clustered storage data collection, volume forecasting behavior, and total storage forecasting behavior: <BR /> <BR /> To turn on/off clustered data collection, use the following registry key: <BR /> <UL> <BR /> <LI> <B> Path </B> : HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\SystemDataArchiver\ </LI> <BR /> <LI> <STRONG> Name </STRONG> : ClusterVolumesAndDisks </LI> <BR /> <LI> <STRONG> Type </STRONG> :&nbsp;DWORD </LI> <BR /> <LI> <STRONG> Values </STRONG> : <BR /> <UL> <BR /> <LI> 0: Off </LI> <BR /> <LI> 1: On </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> To adjust the behavior of the volume consumption capability, use the following registry key: <BR /> <UL> <BR /> <LI> <STRONG> Path </STRONG> : HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\SystemInsights\Capabilities\Volume consumption forecasting\ </LI> <BR /> <LI> <STRONG> Name </STRONG> : ClusterVolumes </LI> <BR /> <LI> <STRONG> Type </STRONG> :&nbsp;DWORD </LI> <BR /> <LI> <STRONG> Values </STRONG> : <BR /> <UL> <BR /> <LI> 0: Local volumes </LI> <BR /> <LI> 1: Clustered volumes </LI> <BR /> <LI> 2: Both </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> To adjust the behavior of the total storage consumption capability, use the following registry key: <BR /> <UL> <BR /> <LI> <STRONG> Path </STRONG> : HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\SystemInsights\Capabilities\Total storage consumption forecasting\ </LI> <BR /> <LI> <STRONG> Name </STRONG> : ClusterVolumesAndDisks </LI> <BR /> <LI> <STRONG> Type </STRONG> :&nbsp;DWORD </LI> <BR /> <LI> <STRONG> Values </STRONG> : <BR /> <UL> <BR /> <LI> 0: Local volumes and disks </LI> <BR /> <LI> 1: Clustered volumes and disks </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> You can use the <STRONG> New-ItemProperty </STRONG> or the <STRONG> Set-ItemProperty </STRONG> cmdlets to configure these registry keys. 
For example (note the HKLM: drive prefix, which the PowerShell registry provider requires): <BR /> # Create the registry key and then start collecting clustered storage data. <BR /> <BR /> $ClusteredStoragePath = "HKLM:\SOFTWARE\Microsoft\Windows\SystemDataArchiver" <BR /> New-Item -Path $ClusteredStoragePath -Force <BR /> New-ItemProperty -Path $ClusteredStoragePath -Name ClusterVolumesAndDisks -Value 1 -PropertyType DWORD <BR /> <BR /> # Create the registry key and then forecast on both clustered and local volumes. <BR /> <BR /> $VolumeForecastingPath = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\SystemInsights\Capabilities\Volume consumption forecasting" <BR /> New-Item -Path $VolumeForecastingPath -Force <BR /> New-ItemProperty -Path $VolumeForecastingPath -Name ClusterVolumes -Value 2 -PropertyType DWORD <BR /> <BR /> # Update the existing value so the volume forecasting capability only predicts clustered volume usage. <BR /> <BR /> Set-ItemProperty -Path $VolumeForecastingPath -Name ClusterVolumes -Value 1 <BR /> <BR /> # Confirm the setting took effect. <BR /> <BR /> Get-ItemProperty -Path $VolumeForecastingPath -Name ClusterVolumes <BR /> <H3> Conclusion </H3> <BR /> We're really excited to introduce this new functionality in System Insights and Windows Admin Center. With the latest releases, cluster users can now use System Insights to proactively predict clustered storage consumption, and these settings can be managed both in Windows Admin Center and PowerShell. <BR /> <BR /> Please keep providing feedback, so we can keep adding new functionality to System Insights that's relevant to you: <BR /> <UL> <BR /> <LI> <A href="#" target="_blank"> UserVoice </A> </LI> <BR /> <LI> <STRONG> Email </STRONG> : system-insights-feed@microsoft.com </LI> <BR /> </UL> </BODY></HTML> Wed, 10 Apr 2019 14:51:18 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/using-system-insights-to-forecast-clustered-storage-usage/ba-p/428296 Garrett Watumull 2019-04-10T14:51:18Z Hyper-converged infrastructure in Windows Server 2019; the countdown clock starts now! 
https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/hyper-converged-infrastructure-in-windows-server-2019-the/ba-p/428280 <P><STRONG> First published on TECHNET on Oct 02, 2018 </STRONG> <BR /><EM> This post was written by Cosmos Darwin, Sr PM on the Core OS team at Microsoft. Follow him <A href="#" target="_blank" rel="noopener"> @cosmosdarwin </A> on Twitter. </EM> <BR /><BR /><EM> Edit: This post was modified on Nov 26th 2018 to reflect the <A href="#" target="_blank" rel="noopener"> Update on Windows Server 2019 availability </A> . </EM> <BR /><BR />Today is an exciting day: <A href="#" target="_blank" rel="noopener"> Windows Server 2019 is now generally available </A> ! Windows Server 2019 is the second major release of Hyper-Converged Infrastructure (HCI) from Microsoft, and the biggest update to Storage Spaces Direct since its launch in Windows Server 2016.</P> <H3><STRONG> The momentum continues </STRONG></H3> <P><BR />In the two years since Storage Spaces Direct first launched, we’ve been overwhelmed by the positive feedback and accelerating adoption. Organizations around the world, in every industry and every geography, are moving to Storage Spaces Direct to modernize their infrastructure. In fact, I’m delighted to share that worldwide adoption of Storage Spaces Direct has increased by 50% in just the last 6 months since we announced <A href="#" target="_blank" rel="noopener"> 10,000 clusters </A> in March. <BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107631i07EA57093327DED3/image-size/large?v=v2&amp;px=999" role="button" /></span> <BR /><BR />To our customers and our partners, thank you! Our growing team is working hard to deliver new features and improve existing ones based on your feedback. 
To learn more about new features for Storage Spaces Direct, including deduplication and compression, native support for persistent memory, nested resiliency for two-node clusters, increased performance and scale, and more, check out the <A href="#" target="_blank" rel="noopener"> What's new in Windows Server 2019 </A> docs published today.</P> <H3><STRONG> Timeline for hardware availability </STRONG></H3> <P><BR />Windows Server 2019 is the first version to skip the classic Release To Manufacturing (RTM) milestone and go directly to General Availability (GA). This change is motivated by the increasing popularity of virtual machines, containers, and deploying in the cloud. But it also means the hardware ecosystem hasn’t had the chance to validate and certify systems or components before the release; instead, hardware partners start doing so today. <BR /><BR />As before, to ensure our customers are successful and have the smoothest experience, Microsoft recommends deploying Storage Spaces Direct on hardware validated by the <A href="#" target="_blank" rel="noopener"> Windows Server Software-Defined (WSSD) </A> program. The first wave of WSSD offers for Windows Server 2019 will launch in February 2019, in about three months. We’ll share more details about the WSSD launch event soon. <BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107632i202001E6BAEC0054/image-size/large?v=v2&amp;px=999" role="button" /></span> <BR /><BR />Until the first wave of hardware is available, attempting to use features like Storage Spaces Direct or Software-Defined Networking (SDN) displays an advisory message and requires an extra step to configure. This is normal and expected – see <A href="#" target="_blank" rel="noopener"> KB4464776 </A> . 
Microsoft will remove the message for everyone immediately after the WSSD launch event in February, via Windows Update.</P> <H3><STRONG> How to get Storage Spaces Direct in Windows Server 2019 </STRONG></H3> <P><BR />Just like Windows Server 2016, Storage Spaces Direct is included in the Windows Server 2019 Datacenter edition license, meaning for most Hyper-V customers, it comes at effectively no additional cost. And just like Windows Server 2016, Microsoft supports two ways to procure and deploy hardware for Storage Spaces Direct: <BR /><BR /></P> <OL> <LI>Build-your-own with components from the Windows Server catalog (supported)</LI> <LI>Purchase validated and <A href="#" target="_blank" rel="noopener"> ready-to-go WSSD offers </A> from our partners (recommended)</LI> </OL> <P><BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 998px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107633iD9CB9D975C8758CA/image-size/large?v=v2&amp;px=999" role="button" /></span> <BR /><BR />See the answers to frequently asked questions below for more details. <BR /><BR />Windows Server 2019 is the biggest update to Storage Spaces Direct since Windows Server 2016, and we can’t wait to see what you’ll do with it. Whether you’re deploying multiple petabytes in your core datacenter, or just two nodes in your branch office, Storage Spaces Direct gets better for everyone in Windows Server 2019. <BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 879px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107634iBDA9AECAC932864F/image-size/large?v=v2&amp;px=999" role="button" /></span> <BR /><BR />Like you, we eagerly await the WSSD launch event in February. See you there! 
<BR /><BR /><EM> - Cosmos and the Storage Spaces Direct team </EM></P> <H3><STRONG> Frequently asked questions </STRONG></H3> <P><BR /><STRONG> Is Storage Spaces Direct in Windows Server 2019 restricted to WSSD hardware? </STRONG> <BR /><BR />No. Just like with Windows Server 2016, systems and components must be listed in the Windows Server 2019 catalog, and preferably with the Software-Defined Data Center (SDDC) Additional Qualifications (AQs) too. Any customer running Storage Spaces Direct on such hardware is eligible for production support from Microsoft. See the <A href="#" target="_blank" rel="noopener"> hardware requirements </A> documentation. <BR /><BR /><STRONG> When can I deploy Storage Spaces Direct in Windows Server 2019 into production? </STRONG> <BR /><BR />Microsoft recommends deploying Storage Spaces Direct on hardware validated by the WSSD program. For Windows Server 2019, the first wave of WSSD offers will launch in February 2019, in about three months. <BR /><BR />If you choose instead to build your own with components from the Windows Server 2019 catalog, you may be able to assemble eligible parts sooner. In this case, you can absolutely deploy into production – you’ll just need to contact <A href="#" target="_blank" rel="noopener"> Microsoft Support </A> for instructions to work around the advisory message. <BR /><BR />Note that whether and when hardware is certified for Windows Server 2019 is at the sole discretion of its vendor. <BR /><BR /><STRONG> What can I do while I wait for February 2019? </STRONG> <BR /><BR />Get hands-on today. The latest <A href="#" target="_blank" rel="noopener"> Windows Insider release of Windows Server 2019 </A> includes nearly all new Storage Spaces Direct capabilities and improvements. In the coming days, one more Windows Insider release, based on the final 17763 build of Windows Server 2019, will be made available. 
It won’t show the advisory message and won’t require the extra step to configure, making it perfect for evaluation and testing. <BR /><BR /><STRONG> Can I upgrade my Windows Server 2016 cluster to Windows Server 2019? </STRONG> <BR /><BR />Microsoft supports in-place upgrade from Windows Server 2016 to Windows Server 2019, including Storage Spaces Direct. <BR /><BR />Once your systems and components are listed in the Windows Server 2019 catalog with the SDDC AQs, you can upgrade. For the smoothest experience, Microsoft recommends checking with your hardware partner to ensure they’ve validated your hardware with Windows Server 2019 before you upgrade. To upgrade before February 2019, you’ll need to contact <A href="#" target="_blank" rel="noopener"> Microsoft Support </A> for instructions to work around the advisory message. <BR /><BR />Note that whether and when hardware is certified for Windows Server 2019 is at the sole discretion of its vendor. <BR /><BR /><STRONG> My project has a tight timeline. Should I deploy Windows Server 2016 instead? </STRONG> <BR /><BR />Features like Storage Spaces Direct and SDN are available for immediate deployment in Windows Server 2016. Although there are significant new capabilities and improvements in Windows Server 2019, the core functionality is comparable, and in-place upgrade is supported so you can move to Windows Server 2019 later. <BR /><BR />As of Microsoft Ignite 2018, there are over 50 ready-to-go WSSD offers available for Windows Server 2016 from over a dozen partners, giving you more choice and greater flexibility without the hassle of integrating one-off customizations. 
Get started today at <A href="#" target="_blank" rel="noopener"> Microsoft.com/WSSD </A> .</P> Wed, 12 Feb 2020 06:09:30 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/hyper-converged-infrastructure-in-windows-server-2019-the/ba-p/428280 Cosmos Darwin 2020-02-12T06:09:30Z Announcing Public Preview of Azure Blob Storage on IoT Edge https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/announcing-public-preview-of-azure-blob-storage-on-iot-edge/ba-p/428274 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Sep 24, 2018 </STRONG> <BR /> Azure Blob Storage on IoT Edge Version 1.1 is now available with new features, please visit <A href="#" target="_blank"> https://aka.ms/abs-iot-blogpost1.1 </A> <BR /> <BR /> This post was authored by @Arpita Duppala, PM on the Core Operating System and Intelligent Edge team. Follow her <A href="#" target="_blank"> @arnuwish </A> on Twitter. <BR /> <BR /> Today, we are excited to announce the public preview of Azure Blob Storage on IoT Edge, which is deployed as a module to IoT devices via <A href="#" target="_blank"> Azure IoT Edge </A> . It is useful for data that needs to be processed in real time on the device, or stored and accessed locally as a blob and sent to the cloud later. <BR /> <BR /> Azure Blob Storage on IoT Edge uses the Azure Storage SDK to store data locally in a local blob store. This enables an application written to store and access data in the Azure public cloud to store and access data locally with a simple change of connection string in the code. <BR /> <BR /> Below is the list of supported container host operating systems and architectures, available on Windows and Linux. 
<BR /> <TABLE> <TBODY><TR> <TD> <STRONG> Container Host Operating System </STRONG> </TD> <TD> <STRONG> Architecture </STRONG> </TD> </TR> <TR> <TD> Raspbian Stretch </TD> <TD> ARM32 </TD> </TR> <TR> <TD> Ubuntu Server 18.04 </TD> <TD> AMD64 </TD> </TR> <TR> <TD> Ubuntu Server 16.04 </TD> <TD> AMD64 </TD> </TR> <TR> <TD> Windows 10 IoT Core (October Update) </TD> <TD> AMD64 </TD> </TR> <TR> <TD> Windows 10 IoT Enterprise (October Update) </TD> <TD> AMD64 </TD> </TR> <TR> <TD> Windows Server 2019 </TD> <TD> AMD64 </TD> </TR> </TBODY></TABLE> <BR /> <BR /> <H2> Azure Blob Storage on IoT Edge – Version 1.0 (September 24, 2018) </H2> <BR /> In the diagram below, we have an edge device pre-installed with the Azure IoT Edge runtime. It runs a custom module that processes the data collected from the sensor and saves it to the local blob store. Because it is Azure-consistent, the custom module can be developed using the Azure Storage SDK to make calls to the local blob storage. <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107630i2B058804ED702635" /> <BR /> <BR /> This scenario is useful when there is a lot of data to process. For example, a farm has cameras (sensors) whose images are processed or filtered at the edge by a custom module so that it captures notable events, like a cow running wild on the farm. It is efficient to do this processing locally because there is a lot of image data that is continuously being captured. The Azure Blob Storage on IoT Edge module allows you to store and access such data efficiently. <BR /> <H2> Current Functionality: </H2> <BR /> With the current public preview module, users can: <BR /> <UL> <BR /> <LI> Store data locally and access the local blob store using the Azure Storage SDK. </LI> <BR /> <LI> Reuse the same business logic of an app written to store/access data on Azure. </LI> <BR /> <LI> Deploy in an IoT Edge device. 
</LI> <BR /> <LI> Use any Azure IoT Edge <A href="#" target="_blank"> Tier 1 </A> host operating system </LI> <BR /> </UL> <BR /> <H2> Roadmap: </H2> <BR /> What does the future hold? <BR /> <UL> <BR /> <LI> Automatically copy the data to Azure from IoT edge device </LI> <BR /> <LI> Automatically copy the data from Azure to IoT edge device </LI> <BR /> <LI> Additional API support </LI> <BR /> <LI> Early adopter deployment scenarios </LI> <BR /> </UL> <BR /> <H2> More Information: </H2> <BR /> Find more information about this module at <A href="#" target="_blank"> https://aka.ms/AzureBlobStorage-IotModule </A> <BR /> <H2> Feedback: </H2> <BR /> Please share your feedback with us and let us know how we can improve. You can also let us know if you find that we are missing some major API support which is required in your scenario. <BR /> <BR /> You can reach out to us at <A href="https://gorovian.000webhostapp.com/?exam=mailto:absiotfeedback@microsoft.com" target="_blank"> absiotfeedback@microsoft.com </A> <BR /> <BR /> </BODY></HTML> Wed, 10 Apr 2019 14:50:04 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/announcing-public-preview-of-azure-blob-storage-on-iot-edge/ba-p/428274 Arpita Duppala 2019-04-10T14:50:04Z Windows 10 and Storage Sense https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/windows-10-and-storage-sense/ba-p/428270 <P><STRONG> First published on TECHNET on Aug 30, 2018, updated April 14, 2021</STRONG></P> <P>&nbsp;</P> <H2>What’s new in Storage Sense?</H2> <P><BR />Starting with Windows 10, Storage Sense has embarked on a path to keep your storage optimized. We’re making continuous improvements in every update. In the next Windows 10 feature update (build 17720 and later), we’re adding a new capability and making a few changes to Storage Sense’s behavior. <BR />Before we dive in, it’s important to note that we design Storage Sense to be a silent assistant that works on your behalf without the need to configure it. 
Sometimes we’ll ask for your permission before we make changes to your storage. We believe in being transparent about how Storage Sense optimizes your storage for you. The content below is intended to serve as a reference.</P> <P>&nbsp;</P> <H2>Files On-Demand and Storage Sense</H2> <P>&nbsp;</P> <DIV style="position: relative; padding-bottom: 56.25%; padding-top: 30px; height: 0; overflow: hidden; min-width: 320px;"><IFRAME src="https://www.microsoft.com/en-us/videoplayer/embed/RE4tmVo?autoplay=false" frameborder="0" allowfullscreen="allowfullscreen" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;" class="video-iframe" title="Windows 10 - Storage Sense demo"></IFRAME></DIV> <P><BR />OneDrive <A href="#" target="_blank" rel="noopener"> Files On-Demand </A> gives you easy access to your OneDrive files without taking up storage space. If you have a large amount of OneDrive content that you’ve viewed and edited, you may find yourself in a situation where those files are available locally and the cached content takes up disk space. You may no longer need those files to be locally available.</P> <P><BR />Storage Sense now has the capability to automatically free up disk space by making older, unused, locally available OneDrive files online-only. Your files will still be safe in OneDrive and represented as placeholders on your device. We call this process “dehydration”. Your online-only files will still be visible on your device. When connected to the internet, you’ll be able to use online-only files just like any other file.</P> <P>&nbsp;</P> <P>To enable dehydration, navigate to the Settings app from the Start menu. Then select System and finally, Storage. 
<BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 492px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107624iB78615ACDA83962A/image-dimensions/492x333?v=v2" width="492" height="333" role="button" /></span></P> <P>&nbsp;</P> <P>Turn on Storage Sense in Storage Settings<BR /><BR />Here, you can turn Storage Sense on by clicking the toggle button. Any files that you have not used in the last 30 days will be eligible for dehydration when your device runs low on free space. Storage Sense will only dehydrate files until there’s enough space freed for Windows to run smoothly. We do this so that we can keep your files available locally as much as possible. <BR />If you’d like to change this behavior, you can make Storage Sense run periodically instead of running only when the device is low on storage. To do this, you’ll first have to click on “Change how we free up space automatically”. Next, you can change the value in “Run Storage Sense”. <BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107625i9A6A340203EA2E6D/image-size/medium?v=v2&amp;px=400" role="button" /></span></P> <P>Choose how frequently you want Storage Sense to run <BR /><BR />If you’d like Storage Sense to dehydrate more aggressively, the “Locally available cloud content” section on the same page has a dropdown to change the default value. For example, if you choose to run Storage Sense every week and select a 14-day window for Files On-Demand, Storage Sense will run once a week, identify files that you haven’t used in the past 14 days, and make those files available online only. 
<BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 599px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107626i813C8D4B741723BC/image-size/large?v=v2&amp;px=999" role="button" /></span></P> <P>How long should Storage Sense wait before locally available cloud content becomes online only <BR /><BR />Files that you have marked to be <A href="#" target="_blank" rel="noopener"> always available </A> are not affected and will continue to be available offline.</P> <P>&nbsp;</P> <H2>Self-activate on Low Storage</H2> <P><BR />Storage Sense can now turn itself on when your device is low on storage space. Once activated, it will run intelligently whenever free space runs low and clear temporary files that your device and applications no longer need.</P> <P>&nbsp;</P> <P>If you’d like to clear even more space on your device, you can enable the removal of old content in the Downloads folder. Downloads folder cleanup is <EM> not </EM> turned on by default. <BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107627i37CD7153B1F9E20B/image-size/medium?v=v2&amp;px=400" role="button" /></span></P> <P>Clean up old files in your Downloads folder</P> <P>&nbsp;</P> <H3>Manual Cleanup</H3> <P><BR />If you’d like to manually invoke a clean-up operation, you can click on the “Free up space now” link (shown in the red box below) on the Storage page in Settings. 
<BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107628i70ADA614615C1294/image-size/medium?v=v2&amp;px=400" role="button" /></span></P> <P>Manually forcing a storage clean up <BR /><BR />Storage Sense will scan your device for files that are safe to clean and give you an estimate of space that can be freed. Files are not removed until you click the “Remove Files” button. <BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 368px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107629i7BFF8FCA303EA6DA/image-dimensions/368x529?v=v2" width="368" height="529" role="button" /></span></P> <P>Remove temporary files <BR /><BR />You can choose the type of content that is cleared by Storage Sense. Note that some of this content isn’t automatically cleaned by Storage Sense. These cleanup actions may temporarily decrease the performance of your system. For example, clearing thumbnails will free up space but when you navigate to a folder with pictures in it, thumbnail previews will be recreated and thumbnails may not be available for a few moments.</P> <P>&nbsp;</P> <H2>Disk Cleanup Deprecation</H2> <P>&nbsp;</P> <P>Disk Cleanup is being deprecated but we are retaining the tool for compatibility reasons. There is no need to worry since Storage Sense’s functionality is a superset of what the legacy Disk Cleanup provides!</P> <P><BR /><EM> Jesse Rajwan contributed to this post. 
</EM> <BR /><EM> Follow Aniket on <A href="#" target="_blank" rel="noopener"> LinkedIn </A> </EM></P> Thu, 15 Apr 2021 00:28:07 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/windows-10-and-storage-sense/ba-p/428270 FileCAB-Team 2021-04-15T00:28:07Z Getting started with System Insights in 10 minutes https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/getting-started-with-system-insights-in-10-minutes/ba-p/428262 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jul 24, 2018 </STRONG> <BR /> This post was authored by Garrett Watumull, PM on the Windows Server team at Microsoft. Follow him <A href="#" target="_blank"> @GarrettWatumull </A> on Twitter. <BR /> <BR /> In the past couple of weeks, we've posted a few short videos to help you learn about and use <A href="#" target="_blank"> System Insights </A> , a new predictive analytics feature in Windows Server. In less than 10 minutes, you can learn everything you need to get started and confidently manage System Insights. <BR /> <H3> 1. Get started with System Insights </H3> <BR /> In this introductory video, learn more about System Insights, hear about the predictive functionality that ships with Windows Server 2019, and watch how you can quickly install System Insights using Windows Admin Center or PowerShell. <BR /> <BR /> <IFRAME height="325" src="https://youtu.be/AJxQkx5WSaA" width="525"> </IFRAME> <BR /> <H3> 2. Learn about System Insights capabilities </H3> <BR /> In this video, learn about System Insights capabilities, which are the machine learning or statistical models that help you proactively manage your deployments. This video walks you through the steps and configuration options for managing these capabilities. <BR /> <BR /> <IFRAME height="325" src="https://youtu.be/dxjQa9G5lZc" width="525"> </IFRAME> <BR /> <H3> 3. 
Create your own remediation actions </H3> <BR /> System Insights enables you to automatically kick off a mitigation script based on the prediction result of a capability. This allows you to spend less time reacting to issues in your deployments, as you can proactively and automatically respond to any prediction results.&nbsp; In this video, watch us configure a basic action and learn how to create your own. <BR /> <BR /> <IFRAME height="325" src="https://youtu.be/fOrCWxbTpUw" width="525"> </IFRAME> <BR /> <BR /> Hopefully, these videos give you the information you need to get started with System Insights. We're excited to hear your feedback, and we look forward to announcing some new functionality in the coming weeks. <BR /> <BR /> You can join the <A href="#" target="_blank"> Windows Insider </A> program to start evaluating Windows Server 2019 today. <BR /> <BR /> For any other questions, visit our <A href="#" target="_blank"> documentation </A> or submit feedback using Feedback Hub or emailing system-insights-feed@microsoft.com. <BR /> <BR /> Thanks for watching! </BODY></HTML> Wed, 10 Apr 2019 14:48:47 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/getting-started-with-system-insights-in-10-minutes/ba-p/428262 Garrett Watumull 2019-04-10T14:48:47Z New Storage Migration Service preview released https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/new-storage-migration-service-preview-released/ba-p/428261 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jul 12, 2018 </STRONG> <BR /> <STRONG> Go here for current GA version of Storage Migration Service: <A href="#" target="_blank"> http://aka.ms/storagemigrationservice </A> </STRONG> <BR /> <BR /> <BR /> <BR /> Heya, <A href="#" target="_blank"> Ned </A> here again. We released a new Windows Server 2019 Insiders Preview of the Storage Migration Service. 
As always, downloads and details are here: <BR /> <P> <STRONG> <A href="#" target="_blank"> https://aka.ms/stormigser </A> </STRONG> </P> <BR /> Go migrate yer stuff! <BR /> <P> <A href="#" target="_blank"> </A> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107623i81B604569B5FE2CC" /> </P> <BR /> - Ned "rooooooo" Pyle <BR /> <BR /> </BODY></HTML> Wed, 10 Apr 2019 14:48:44 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/new-storage-migration-service-preview-released/ba-p/428261 Ned Pyle 2019-04-10T14:48:44Z Feedback on Storage Spaces Direct in smaller environments https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/feedback-on-storage-spaces-direct-in-smaller-environments/ba-p/428258 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jul 09, 2018 </STRONG> <BR /> Hello, IT Admins! <BR /> <BR /> As a part of our planning process for the next release of Windows Server, we want to get your feedback! <STRONG> We are surveying IT Admins from small and medium businesses to get feedback on the Windows Server Storage Spaces Direct feature. </STRONG> <BR /> <BR /> <STRONG> Survey: <A href="#" target="_blank"> https://aka.ms/S2D_in_Smaller_Environments </A> </STRONG> <BR /> <BR /> We want to understand what workloads you are currently virtualizing, whether you have adopted hyper-converged infrastructure, what motivated you to switch from traditional SANs, and if you haven't adopted it, what are some blockers that held you back. <BR /> <BR /> If you are the administrator or IT decision maker for an organization with 1-250 employees, OR if you are the partner or consultant to these organizations, your feedback will help us understand why you might not be using Storage Spaces Direct. 
<BR /> <BR /> Thanks, <BR /> <BR /> Adi <BR /> <BR /> </BODY></HTML> Wed, 10 Apr 2019 14:48:31 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/feedback-on-storage-spaces-direct-in-smaller-environments/ba-p/428258 FileCAB-Team 2019-04-10T14:48:31Z Here's what you missed – Five big announcements for Storage Spaces Direct from the Windows Server Summit https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/here-s-what-you-missed-8211-five-big-announcements-for-storage/ba-p/428257 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jun 27, 2018 </STRONG> <BR /> <EM> This post was authored by Cosmos Darwin, PM on the Windows Server team at Microsoft. Follow him <A href="#" target="_blank"> @cosmosdarwin </A> on Twitter. </EM> <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107608i50A08DD343C7F8DB" /> <BR /> <BR /> Yesterday we held the first <A href="#" target="_blank"> Windows Server Summit </A> , an online event all about modernizing your infrastructure and applications with Windows Server. If you missed the live event, the recordings are available for on-demand viewing. Here are the five biggest announcements for Storage Spaces Direct and Hyper-Converged Infrastructure (HCI) from yesterday’s event: <BR /> <H2> <STRONG> #1. Go bigger, up to 4 PB </STRONG> </H2> <BR /> With Windows Server 2016, you can pool up to 1 PB of drives into a single Storage Spaces Direct cluster. This is an immense amount of storage! But year after year, manufacturers find ways to make ever-larger* drives, and some of you – especially for media, archival, and backup use cases – asked for more. We heard you, and that’s why Storage Spaces Direct in Windows Server 2019 can scale 4x larger! 
<BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107609iBE2B2BF79EAA8983" /> <BR /> <BR /> <STRONG> The new maximum size per storage pool is 4 petabytes (PB), or 4,000 terabytes. </STRONG> All related capacity guidelines and/or limits are increasing as well: for example, Storage Spaces Direct in Windows Server 2019 supports twice as many volumes (64 instead of 32), each twice as large as before (64 TB instead of 32 TB). These are summarized in the table below. <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107610i3D188BCA404FA4C4" /> <BR /> <BR /> * See these new 14 TB drives – whoa! – from our friends at <A href="#" target="_blank"> Toshiba </A> , <A href="#" target="_blank"> Seagate </A> , and <A href="#" target="_blank"> Western Digital </A> . <BR /> <BR /> Our hardware partners are developing and validating SKUs to support this increased scale. <BR /> <BR /> We expect to have more to share at Ignite 2018 in September. <BR /> <H2> <STRONG> #2. True two-node at the edge </STRONG> </H2> <BR /> Storage Spaces Direct has proven extremely popular at the edge, in places like branch offices and retail stores. For these deployments, especially when the same gear will be deployed to tens or hundreds of locations, cost is paramount. The simplicity and savings of hyper-converged infrastructure – using the same servers to provide compute and storage – present an attractive solution. <BR /> <BR /> Since release, Storage Spaces Direct has supported scaling down to just two nodes. 
But any two-node cluster, whether it runs Windows or VMware or Nutanix, needs some tie-breaker mechanism to achieve quorum and guarantee high availability. In Windows Server 2016, you could use a file share (“File Share Witness”) or an Azure blob (“Cloud Witness”) for quorum. <BR /> <BR /> What about remote sites, field installations, or ships and submarines that have no Internet to access the cloud, and no other Windows infrastructure to provide a file share? <STRONG> For these customers, Windows Server 2019 introduces a surprising breakthrough: use a simple USB thumb drive as the witness! </STRONG> This makes Windows Server the first major hyper-converged platform to deliver true two-node clustering, without another server or VM, without Internet, and even without Active Directory. <BR /> <BR /> [caption id="attachment_8995" align="aligncenter" width="879"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107611i4396E831FCA4DBF1" /> Windows Server 2019 introduces a surprising breakthrough – the USB witness![/caption] <BR /> <BR /> Simply insert the USB thumb drive into the USB port on your router, use the router’s UI to configure the share name, username, and password for access, and then use the new <STRONG> -Credential </STRONG> flag of the <STRONG> Set-ClusterQuorum </STRONG> cmdlet to provide the username and password to Windows for safekeeping. 
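Expressed in PowerShell, the whole configuration is just a couple of lines. The share path below is a hypothetical placeholder for whatever you configure in your router's UI:

```powershell
# Prompt for the username and password configured on the router's share.
$Credential = Get-Credential

# Point the cluster's file share witness at the USB share on the router,
# handing the credentials to Windows for safekeeping via the new -Credential flag.
# \\192.168.1.1\witness is a placeholder path -- substitute your router's share.
Set-ClusterQuorum -FileShareWitness \\192.168.1.1\witness -Credential $Credential
```

Run this from any node of the cluster; the witness then participates in quorum just like a traditional file share witness, only without requiring another server.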
<BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107612iC8679E55CE13E5F7" /> <BR /> <BR /> [caption id="attachment_9015" align="aligncenter" width="879"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107613iE790B8A545F1EA1D" /> Insert the USB thumb drive into the port on the router, configure the share name, username, and password, and provide them to Windows for safekeeping.[/caption] <BR /> <BR /> [caption id="attachment_9025" align="aligncenter" width="879"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107614iB21D21B41BBC784D" /> An extremely low-cost quorum solution that works anywhere.[/caption] <BR /> <BR /> Stay tuned for documentation and reference hardware (routers that Microsoft has verified support this feature, which requires an up-to-date, secure version of SMB file sharing) in the coming months. <BR /> <H2> <STRONG> #3. Drive latency outlier detection </STRONG> </H2> <BR /> In response to your feedback, Windows Server 2019 makes it easier to identify and investigate drives with abnormal latency. <BR /> <BR /> Windows now records the outcome (success or failure) and latency (elapsed time) of every read and write to every drive, by default. In an upcoming Insider Preview build, you’ll be able to view and compare these deep IO statistics in Windows Admin Center and with a new PowerShell cmdlet. <BR /> <BR /> [caption id="attachment_9075" align="aligncenter" width="879"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107615i82720150E4BA28C6" /> Windows now records the outcome (success or failure) and latency (elapsed time) of every read and write.[/caption] <BR /> <BR /> <STRONG> Moreover, Windows Server 2019 introduces built-in outlier detection for Storage Spaces Direct, inspired by Microsoft Azure’s long-standing and very successful approach. 
</STRONG> Drives with abnormal behavior, whether it’s their average or 99th percentile latency that stands out, are automatically detected and marked in PowerShell and Windows Admin Center as “Abnormal Latency” status. This gives Storage Spaces Direct administrators the most robust set of defenses against drive latency available on any major hyper-converged infrastructure platform. <BR /> <BR /> [caption id="attachment_9085" align="aligncenter" width="879"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107616i813034AF9D150A57" /> Windows Server 2019 introduces built-in outlier detection for Storage Spaces Direct, inspired by Microsoft Azure.[/caption] <BR /> <BR /> [caption id="attachment_9095" align="aligncenter" width="879"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107617iF6D6F56040D34E75" /> Drives with abnormal behavior are automatically detected and marked in PowerShell and Windows Admin Center.[/caption] <BR /> <BR /> Watch the Insider Preview release notes to know when this feature becomes available. <BR /> <H2> <STRONG> #4. Faster mirror-accelerated parity </STRONG> </H2> <BR /> Mirror-accelerated parity lets you create volumes that are part mirror and part parity. This is like mixing RAID-1 and RAID-6 to get the best of both: fast write performance by deferring the compute-intensive parity calculation, and with better capacity efficiency than mirror alone. (And, it’s <A href="#" target="_blank"> easier than you think </A> in Windows Admin Center.) <BR /> <BR /> [caption id="attachment_9035" align="aligncenter" width="879"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107618iEBDA4D1ACE44F785" /> Mirror-accelerated parity lets you create volumes that are part mirror and part parity.[/caption] <BR /> <BR /> <STRONG> In Windows Server 2019, the performance of mirror-accelerated parity has more than doubled relative to Windows Server 2016! 
</STRONG> Mirror continues to offer the best absolute performance, but these improvements bring mirror-accelerated parity surprisingly close, unlocking the capacity savings of parity for more use cases. <BR /> <BR /> [caption id="attachment_9045" align="aligncenter" width="879"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107619i038002CA4191329C" /> In Windows Server 2019, the performance of mirror-accelerated parity has more than doubled![/caption] <BR /> <BR /> These improvements are available in Insider Preview today. <BR /> <H2> <STRONG> #5. Greater hardware choice </STRONG> </H2> <BR /> To deploy Storage Spaces Direct in production, Microsoft recommends <A href="#" target="_blank"> Windows Server Software-Defined </A> hardware/software offers from our partners, which include deployment tools and procedures. They are designed, assembled, and validated against our reference architecture to ensure compatibility and reliability, so you get up and running quickly. <BR /> <BR /> [caption id="attachment_9055" align="aligncenter" width="879"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107620i35CBE690FC432557" /> To deploy in production, Microsoft recommends these Windows Server Software-Defined partners. Welcome Inspur and NEC![/caption] <BR /> <BR /> <STRONG> Since Ignite 2017, the number of available hardware SKUs has nearly doubled, to 33. </STRONG> We are happy to welcome <A href="#" target="_blank"> Inspur </A> and <A href="#" target="_blank"> NEC </A> as our newest Windows Server Software-Defined partners, and to share that many existing partners have extended their validation to more SKUs – for example, <A href="#" target="_blank"> Dell-EMC </A> now offers 8 different pre-validated Storage Spaces Direct Ready Node configurations! 
<BR /> <BR /> [caption id="attachment_9105" align="aligncenter" width="879"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107622iD17E9CA962D6AAFF" /> Since Ignite 2017, the number of Windows Server Software-Defined (WSSD) certified hardware SKUs and the number of components with the Software Defined Data Center (SDDC) Additional Qualifications in the Windows Server catalog has nearly doubled.[/caption] <BR /> <BR /> This momentum is great news for Storage Spaces Direct customers. It means more vendor and hardware choices and greater flexibility without the hassle of integrating one-off customizations. Looking to procure hardware? Get started today at <A href="#" target="_blank"> Microsoft.com/WSSD </A> . <BR /> <H2> Looking forward to Ignite 2018 </H2> <BR /> Today’s news builds on announcements we made previously, like deduplication and compression for ReFS, support for persistent memory in Storage Spaces Direct, and our monthly updates to <A href="#" target="_blank"> Windows Admin Center for Hyper-Converged Infrastructure </A> . Windows Server 2019 is shaping up to be an incredibly exciting release for Storage Spaces Direct. <BR /> <BR /> Join the <A href="#" target="_blank"> Windows Insider </A> program to get started evaluating Windows Server 2019 today. <BR /> <BR /> We look forward to sharing more news, including a few surprises, later this year. Thanks for reading! 
<BR /> <BR /> <EM> - Cosmos and the Storage Spaces Direct engineering team </EM> </BODY></HTML> Wed, 10 Apr 2019 14:48:24 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/here-s-what-you-missed-8211-five-big-announcements-for-storage/ba-p/428257 Cosmos Darwin 2019-04-10T14:48:24Z Creating remediation actions for System Insights https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/creating-remediation-actions-for-system-insights/ba-p/428234 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jun 19, 2018 </STRONG> <BR /> <EM> This post was authored by Garrett Watumull, PM on the Windows Server team at Microsoft. Follow him <A href="#" target="_blank"> @GarrettWatumull </A> </EM> <EM> on Twitter. </EM> <BR /> Quick overview <BR /> <A href="#" target="_blank"> System Insights </A> enables you to configure custom remediation scripts to automatically address the issues detected by each capability. For each capability, you can set a custom PowerShell script for each <A href="#" target="_blank"> prediction status </A> . Once a capability returns a prediction status, System Insights automatically invokes the associated script to help address the issue reported by the capability, so that you can take corrective action automatically rather than needing to manually intervene. <BR /> <BR /> You can read more on how to set remediation actions in the <A href="#" target="_blank"> System Insights management page </A> . This blog, however, provides concrete examples and PowerShell scripts to help you get started writing your own remediation actions. <BR /> Parsing the capability output <BR /> The volume consumption forecasting capability and networking capacity forecasting capability report the most severe status across all of your volumes and network adapters respectively. Before writing any remediation scripts, we need to determine the specific volumes or network adapters that reported the result. 
<BR /> <BR /> Fortunately, System Insights outputs the result of each capability into a JSON file, which contains the specific forecasting results for each volume or network adapter. <STRONG> This is really helpful: the file format and output schema allow you to programmatically determine the status of specific volumes and network adapters. </STRONG> For the default forecasting capabilities, the JSON file uses the following schema: <BR /> <OL> <BR /> <LI> <B> Status </B> : Top level status. </LI> <BR /> <LI> <B> Status Description </B> : Top level status description. </LI> <BR /> <LI> <B> Prediction Results </B> : An array of prediction results. For volume and networking forecasting, this contains an entry for each volume or network adapter. For total storage and CPU forecasting, this only contains one entry. <BR /> <OL> <BR /> <LI> <STRONG> Identifier </STRONG> : The GUID of the instance. (This field is present for all capabilities, but it is only applicable to volumes and network adapters.) </LI> <BR /> <LI> <STRONG> Identifier Friendly Name </STRONG> : The friendly name of the instance. </LI> <BR /> <LI> <STRONG> Status </STRONG> : The status for the instance. </LI> <BR /> <LI> <STRONG> Status Description </STRONG> : The status description for the instance. </LI> <BR /> <LI> <STRONG> Limit </STRONG> : The upper limit for that instance, e.g. the volume size. </LI> <BR /> <LI> <STRONG> Observation Series </STRONG> : An array of historical data that’s fed into the forecasting capability: <BR /> <OL> <BR /> <LI> <STRONG> DateTime </STRONG> : The date the data point was recorded. </LI> <BR /> <LI> <STRONG> Value </STRONG> : The observed value, e.g. the used size of the volume. </LI> <BR /> </OL> <BR /> </LI> <BR /> <LI> <STRONG> Prediction </STRONG> : An array of predictions based on the historical data in the Observation Series. <BR /> <OL> <BR /> <LI> <STRONG> DateTime </STRONG> : The date of the predicted data point. 
</LI> <BR /> <LI> <STRONG> Value </STRONG> : The value predicted. </LI> <BR /> </OL> <BR /> </LI> <BR /> </OL> <BR /> </LI> <BR /> </OL> <BR /> Using the schema above, you can now write a script to return all volumes that have a given prediction status: <BR /> <BR /> <BR /> <DIV> <BR /> &lt;# <BR /> Get-Volumes-With-Specified-Status <BR /> Retrieves all volumes that have a given prediction status. <BR /> <BR /> :param $Status: [string] Prediction status to look for. <BR /> :return: [array] List of volumes with the relevant status. <BR /> #&gt; <BR /> <STRONG> Function Get-Volumes-With-Specified-Status </STRONG> { <BR /> Param ( <BR /> $Status <BR /> ) <BR /> <BR /> $Volumes = @() <BR /> # Get the JSON result and store it in $Output. <BR /> $Output = Get-Content (Get-InsightsCapabilityResult -Name "Volume consumption forecasting").Output | ConvertFrom-Json <BR /> <BR /> # Loop through volumes, checking if they have the specified status. <BR /> $Output.ForecastingResults | ForEach-Object { <BR /> if ($_.Status -eq $Status) { <BR /> # Format Id into expected format. <BR /> $Id = $_.Identifier <BR /> $Volumes += "\\?\Volume{$Id}\" <BR /> } <BR /> } <BR /> $Volumes <BR /> } <BR /> <BR /> <BR /> Extending a volume <BR /> Now that you can determine the volumes that have a specific status, you can write more incisive remediation actions. One example is extending a volume when it’s forecasted to exceed the available capacity. <BR /> The script below is a best effort to extend a volume a specified percentage beyond its current size. <BR /> <DIV> <BR /> &lt;# <BR /> Extend-Volume <BR /> If possible, extend the specified volume a specified percentage beyond its current size. If it can't extend to this size, this function will extend to the maximum size possible. <BR /> <BR /> :param $VolumeId: [string] ID of the volume to extend. <BR /> :param $Percentage: [float] Size multiplier relative to the current size (e.g. 1.1 extends by 10%). 
<BR /> #&gt; <BR /> <STRONG> Function Extend-Volume </STRONG> { <BR /> Param ( <BR /> $VolumeId, <BR /> $Percentage <BR /> ) <BR /> <BR /> $Volume = Get-Volume -UniqueId $VolumeId <BR /> <BR /> if ($Volume) { <BR /> # See if the volume can be extended. <BR /> $Sizes = $Volume | Get-Partition | Get-PartitionSupportedSize <BR /> <BR /> # Must be able to extend by at least 1 MiB. <BR /> if ($Sizes.sizeMax - $Volume.Size -le 1048576) { <BR /> Write-Host "This volume can't be extended." -ForegroundColor Red <BR /> return <BR /> } <BR /> <BR /> $OldSize = $Volume.Size <BR /> <BR /> # Volume size if extended by the specified multiplier (e.g. 1.1 = 10% larger). <BR /> $ExtendedSize = $Volume.Size * $Percentage <BR /> <BR /> # Select minimum of new size and max supported size. <BR /> $NewSize = [math]::Min($ExtendedSize, $Sizes.sizeMax) <BR /> <BR /> try { <BR /> # Extend partition <BR /> $Volume | Get-Partition | Resize-Partition -Size $NewSize <BR /> Write-Host "Successfully extended partition." -ForegroundColor Green <BR /> Write-Host "&nbsp;&nbsp; Old size: $OldSize." <BR /> Write-Host "&nbsp;&nbsp; New size: $NewSize." <BR /> <BR /> } catch { <BR /> Write-Host "Failed to extend volume." -ForegroundColor Red <BR /> } <BR /> } <BR /> else { <BR /> Write-Host "The volume with ID: $VolumeId wasn't found." -ForegroundColor Red <BR /> } <BR /> } <BR /> </DIV> <BR /> Putting these together, you can extend all volumes that have reported a specific status: <BR /> <DIV> <BR /> Get-Volumes-With-Specified-Status $Status | ForEach-Object { <BR /> # Arguments are space-separated; a comma would pass a single array argument. <BR /> Extend-Volume $_ $ResizePercentage <BR /> } <BR /> </DIV> <BR /> </DIV> <BR /> Running disk cleanup <BR /> For total storage consumption forecasting or volume consumption forecasting, rather than provisioning more capacity or extending a volume, you can free up space on your machine by deleting unused system files using Disk Cleanup. 
The script below allows you to configure Disk Cleanup preferences, places those preferences in the registry, and then runs Disk Cleanup across all drives on your machine using the settings in the registry. (Some of these fields only apply to the boot drive, but Disk Cleanup will automatically determine the appropriate fields to clean on each drive.) <BR /> <BR /> To set your preferences, uncomment the categories listed at the beginning of the script. Once you have uncommented your preferences, specify an ID for this set of preferences and run the script: <BR /> .\DiskCleanupScript.ps1 -UserCleanupId 6 <BR /> <STRONG> Warning </STRONG> : The following script is long due to the many options exposed by Disk Cleanup, but the actual logic that runs Disk Cleanup is straightforward and can be found at the bottom of the script. <BR /> <BR /> <BR /> param( <BR /> # Cleanup ID must be an integer between 1 and 9999. <BR /> [string] $UserCleanupId <BR /> ) <BR /> <BR /> &lt;# <BR /> Create-Cleanup-List <BR /> Creates a list of the items Disk Cleanup will try to clean. <BR /> <BR /> :return: [array] An array of the various items you wish to clean. <BR /> #&gt; <BR /> <STRONG> Function Create-Cleanup-List </STRONG> { <BR /> <BR /> # Array to store the file types to clean up. <BR /> $ToClean = @() <BR /> <BR /> &lt;# <BR /> Item: Temporary Setup Files <BR /> Description: These files should no longer be needed. They were originally created by a setup program that is no longer running. <BR /> #&gt; <BR /> # $ToClean += "Active Setup Temp Folders" <BR /> <BR /> &lt;# <BR /> Item: Old Chkdsk Files <BR /> Description: When Chkdsk checks your disk drive for errors, it might save lost file fragments as files in your disk drive's root folder. These files are unnecessary and can be removed. <BR /> #&gt; <BR /> $ToClean += "Old ChkDsk Files" <BR /> <BR /> &lt;# <BR /> Item: Setup Log Files <BR /> Description: Files created by Windows. 
<BR /> #&gt; <BR /> # $ToClean += "Setup Log Files" <BR /> <BR /> &lt;# <BR /> Item: Windows Update Cleanup <BR /> Description: Windows keeps copies of all installed updates from Windows Update, even after installing newer versions of updates. Windows Update cleanup deletes or compresses older versions of updates that are no longer needed and taking up space. (You might need to restart your computer.) <BR /> #&gt; <BR /> # $ToClean += "Update Cleanup" <BR /> <BR /> &lt;# <BR /> Item: Windows Defender Antivirus <BR /> Description: Non-critical files used by Windows Defender Antivirus. <BR /> #&gt; <BR /> # $ToClean += "Windows Defender" <BR /> <BR /> &lt;# <BR /> Item: Windows Upgrade Log Files <BR /> Description: Windows upgrade log files contain information that can help identify and troubleshoot problems that occur during Windows installation, upgrade, or servicing. Deleting these files can make it difficult to troubleshoot installation issues. <BR /> #&gt; <BR /> # $ToClean += "Windows Upgrade Log Files" <BR /> <BR /> &lt;# <BR /> Item: Downloaded Program Files <BR /> Description: Downloaded Pgoram Files are ActiveX controls and Java applets downloaded automatically from the Internet when you view certain pages. They are temporarily stored in the Downloaded Program Files folder on your hard disk. <BR /> #&gt; <BR /> # $ToClean += "Downloaded Program Files" <BR /> <BR /> &lt;# <BR /> Item: Temporary Internet Files <BR /> Description: The Temporary Internet Files folder contains webpages stored on your hard disk for quick viewing. Your personalized settings for webpages will be left intact. <BR /> #&gt; <BR /> # $ToClean += "Internet Cache Files" <BR /> <BR /> &lt;# <BR /> Item: System Error Memory Dump Files <BR /> Description: Remove system error memory dump files. <BR /> #&gt; <BR /> # $ToClean += "System Error Memory Dump Files" <BR /> <BR /> &lt;# <BR /> Item: System Error Minidump Files <BR /> Description: Remove system error minidump files. 
<BR /> #&gt; <BR /> # $ToClean += "System Error Minidump Files" <BR /> <BR /> &lt;# <BR /> Item: Files discarded by Windows Update <BR /> Description: Files from a previous Windows installation. As a precaution, Windows upgrade keeps a copy of any files that were not moved to the new version of Windows and were not identified as Windows system files. If you are sure that no user's personal files are missing after the upgrade, you can delete these files. <BR /> #&gt; <BR /> # $ToClean += "Upgrade Discarded Files" <BR /> <BR /> &lt;# <BR /> Item: System created Windows Error Reporting Files <BR /> Description: Files used for error reporting and solution checking. <BR /> #&gt; <BR /> # $ToClean += "Windows Error Reporting Files" <BR /> <BR /> &lt;# <BR /> Item: Windows ESD Installation Files <BR /> Description: You will need these files to Reset or Refresh your PC. <BR /> #&gt; <BR /> # $ToClean += "Windows ESD Installation Files" <BR /> <BR /> &lt;# <BR /> Item: BranchCache <BR /> Description: Files created by BranchCache service for caching data. <BR /> #&gt; <BR /> # $ToClean += "BranchCache" <BR /> <BR /> &lt;# <BR /> Item: DirectX Shader Cache <BR /> Description: Clean up files created by the graphics system which can speed up application load time and improve responsiveness. They will be re-generated as needed. <BR /> #&gt; <BR /> # $ToClean = "D3D Shader Cache" <BR /> <BR /> &lt;# <BR /> Item: Previous Windows Installation(s) <BR /> Description: Files from a previous Windows installation. Files and folders that may conflict with the installation of Windows have been moved to folders named Windows.old. You can access data from the previous Windows installations in this folder. <BR /> #&gt; <BR /> # $ToClean += "Previous Installations" <BR /> <BR /> &lt;# <BR /> Item: Recycle Bin <BR /> Description: The Recycle Bin contains files you have deleted from your computer. These files are not permanently removed until you empty the Recycle Bin. 
<BR /> #&gt; <BR /> # $ToClean += "Recycle Bin" <BR /> <BR /> &lt;# <BR /> Item: RetailDemo Offline Content <BR /> Description: <BR /> #&gt; <BR /> # $ToClean += "RetailDemo Offline Content" <BR /> <BR /> &lt;# <BR /> Item: Update package Backup Files <BR /> Description: Windows saves old versions of files that have been updated by an Update package. If you delete the files, you won't be able to uninstall the Update package later. <BR /> #&gt; <BR /> # $ToClean += "Service Pack Cleanup" <BR /> <BR /> &lt;# <BR /> Item: Temporary Files <BR /> Description: Programs sometimes store temporary information in a TEMP folder. Before a program closes, it usually deletes this information. You can safely delete temporary files that have not been modified in over a week. <BR /> #&gt; <BR /> # $ToClean += "Temporary Files" <BR /> <BR /> &lt;# <BR /> Item: Temporary Windows installation files <BR /> Description: Installation files used by Windows setup. These files are left over from the installation process and can be safely deleted. <BR /> #&gt; <BR /> # $ToClean += "Temporary Setup Files" <BR /> <BR /> &lt;# <BR /> Item: Thumbnail Cache <BR /> Description: Windows keeps a copy of all your picture, video, and document thumbnails, so they can be displayed quickly when you open a folder. If you delete these thumbnails, they will be automatically recreated as needed. <BR /> #&gt; <BR /> # $ToClean += "Thumbnail Cache" <BR /> <BR /> &lt;# <BR /> Item: User File History <BR /> Description: Windows stores file versions temporarily on this disk before copying them to the designated File History disk. If you delete these files, you will lose some file history. <BR /> #&gt; <BR /> # $ToClean += "User file versions" <BR /> <BR /> # Return cleaning list <BR /> $ToClean <BR /> } <BR /> <BR /> &lt;# <BR /> Create-Cleanup-Id <BR /> Properly formats the cleanup Id. Cleanup Id must be 4 characters. <BR /> <BR /> :return: [string] Properly formatted cleanup Id. 
<BR /> #&gt; <BR /> <STRONG> Function Create-Cleanup-Id </STRONG> { <BR /> # Determine how many zeros need to be inserted. <BR /> $Zeros = 4 - $UserCleanupId.length <BR /> <BR /> if ($Zeros -lt 0) { <BR /> Write-Host "The cleanup Id exceeds 4 characters. Specify an Id with four characters or less." -ForegroundColor Red <BR /> return <BR /> } <BR /> <BR /> $ZerosString = "" <BR /> For ($i = 0; $i -lt $Zeros; $i++) { <BR /> $ZerosString = "0$ZerosString" <BR /> } <BR /> "$ZerosString$UserCleanupId" <BR /> } <BR /> <BR /> &lt;# <BR /> Run-Disk-Cleanup <BR /> Runs disk cleanup using the cleanup Id and items specified in Create-Cleanup-Id and Create-Cleanup-List. <BR /> #&gt; <BR /> <STRONG> Function Run-Disk-Cleanup </STRONG> { <BR /> <BR /> $CleanupId = Create-Cleanup-Id <BR /> if ($CleanupId) { <BR /> # Must define cleanup preferences in the registry. <BR /> $RegKeyDirectory = "HKLM:\Software\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\" <BR /> $RegKeyString = "StateFlags$CleanupId" <BR /> <BR /> Create-Cleanup-List | ForEach-Object { <BR /> $RegistryPath = "$RegKeyDirectory$_" <BR /> <BR /> # Create regkey to specify which files to clean. <BR /> if (Test-Path $RegistryPath) { <BR /> # Set the value to 2. Any other value won't trigger a cleanup. <BR /> Set-ItemProperty -Path $RegistryPath -Name $RegKeyString -Value 2 <BR /> } <BR /> } <BR /> # Run disk cleanup using the preferences created in the registry. <BR /> $Sagerun = "/SAGERUN:$CleanupId" <BR /> cleanmgr.exe $Sagerun <BR /> } <BR /> } <BR /> <BR /> Run-Disk-Cleanup <BR /> Good luck! <BR /> Hopefully, these scripts show some of the possible remediation actions and help get you started writing your own. We hope you enjoy using this feature, and we’d love to hear your feedback and your experiences when creating your own custom scripts! 
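Once a script like this is saved to disk, hooking it up to System Insights is a single cmdlet call. A minimal sketch follows; the script path is a hypothetical placeholder, and Critical is just one of the prediction statuses you can bind an action to:

```powershell
# Credentials the remediation script should run under.
$Credential = Get-Credential

# Invoke the disk cleanup script automatically whenever the capability
# reports Critical. C:\Scripts\DiskCleanupScript.ps1 is a placeholder path.
Set-InsightsCapabilityAction -Name "Volume consumption forecasting" -Type Critical `
    -Action "C:\Scripts\DiskCleanupScript.ps1" -ActionCredential $Credential
```

Repeat the call with a different -Type to bind separate scripts to the Warning and Error statuses.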
</BODY></HTML> Wed, 10 Apr 2019 14:46:02 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/creating-remediation-actions-for-system-insights/ba-p/428234 Garrett Watumull 2019-04-10T14:46:02Z Storage Migration Service preview extension update 17666 https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-migration-service-preview-extension-update-17666/ba-p/428233 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on May 17, 2018 </STRONG> <BR /> <STRONG> Go here for current GA version of Storage Migration Service: <A href="#" target="_blank"> http://aka.ms/storagemigrationservice </A> </STRONG> <BR /> <BR /> Heya folks, <A href="#" target="_blank"> Ned </A> here again. We have released a new <A href="#" target="_blank"> Windows Admin Center </A> preview extension for <A href="#" target="_blank"> Storage Migration Service </A> , now at version <STRONG> 0.1.17666. </STRONG> It includes: <BR /> <OL> <BR /> <LI> UI look and feel updates with easier usability and workflow. </LI> <BR /> <LI> Some UI bug fixes and associated performance improvements. </LI> <BR /> <LI> A major performance fix that greatly reduces CPU and memory usage on the orchestrator and destination servers. </LI> <BR /> <LI> Ability to directly file feedback &amp; bugs from the extension. </LI> <BR /> </OL> <BR /> <P> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107606iD6680A28434C8021" /> </P> <BR /> If you are unfamiliar with the new Windows Server Storage Migration Service, it&nbsp;helps you migrate servers and their data without reconfiguring applications or users. It's available in <A href="#" target="_blank"> Windows Server 2019 Insider Preview </A> release for testing and feedback. 
<BR /> <UL> <BR /> <LI> Migrates unstructured data from anywhere into Azure &amp; modern Windows Servers </LI> <BR /> <LI> It’s fast, consistent, and scalable </LI> <BR /> <LI> It takes care of complexity </LI> <BR /> <LI> It provides an easily-learned graphical workflow </LI> <BR /> </UL> <BR /> <P> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107607iD8CF8CB832245F5E" /> <A href="#" target="_blank"> </A> </P> <BR /> For more info, please review <A href="#" target="_blank"> https://aka.ms/stormigser </A> . Feel free to ask me anything on <A href="#" target="_blank"> @nerdpyle </A> or email <A href="https://gorovian.000webhostapp.com/?exam=mailto:smsfeed@microsoft.com" target="_blank"> smsfeed@microsoft.com </A> <BR /> <BR /> <BR /> <BR /> - Ned "text message" Pyle </BODY></HTML> Wed, 10 Apr 2019 14:45:55 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-migration-service-preview-extension-update-17666/ba-p/428233 Ned Pyle 2019-04-10T14:45:55Z Storage Replica Updates in Windows Server 2019 Insider Preview Build 17650 https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-replica-updates-in-windows-server-2019-insider-preview/ba-p/428222 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Apr 24, 2018 </STRONG> <BR /> Heya folks, <A href="#" target="_blank"> Ned </A> here again. The folks over at <A href="#" target="_blank"> Windows Server Insiders just dropped the new album: Build 17650. 
</A> <A href="#" target="_blank"> </A> For those interested in having a job after a disaster strikes, there are three new Storage Replica options available: <BR /> <UL> <BR /> <LI> <STRONG> Storage Replica Standard </STRONG> </LI> <BR /> <LI> <STRONG> Storage Replica Log v1.1 </STRONG> </LI> <BR /> <LI> <STRONG> Storage Replica Test Failover </STRONG> </LI> <BR /> </UL> <BR /> <H2> Storage Replica Standard </H2> <BR /> SR is now available on Windows Server 2019 Preview <EM> <STRONG> Standard </STRONG> </EM> Edition, not just on Datacenter Edition. When installed on servers running Standard Edition, SR has the following limitations: <BR /> <BR /> <UL> <BR /> <LI> SR replicates a single volume instead of an unlimited number of volumes. </LI> <BR /> <LI> Servers can have one partnership instead of an unlimited number of partners. </LI> <BR /> <LI> Volume size is limited to 2 TB instead of an unlimited size. </LI> <BR /> </UL> <BR /> The experience is otherwise unchanged. You can still use standalone servers, cluster to cluster, or stretch clusters. You can still manage it all with Windows Admin Center or the built-in PowerShell. You can still pick between synchronous and asynchronous replication. <BR /> <BR /> <EM> These limits are not final. </EM> See below for feedback options. If you ask for all the unlimited Datacenter features in SR though, I will only nod politely. :) <BR /> <H2> Storage Replica Log v1.1 </H2> <BR /> We made performance improvements to the SR log system, leading to far better replication throughput and latency, especially on all-flash arrays and Storage Spaces Direct (S2D) clusters that replicate between each other. To take advantage of this update, you must upgrade all servers participating in replication to Windows Server 2019. We're not done here - we have an entirely new log we've been working on - but this optimization in the existing CLFS-based system makes big improvements. 
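Setting up replication itself is unchanged by these updates; on Standard Edition, a partnership like the sketch below would simply be limited to this one volume at up to 2 TB. All server names, replication group names, and drive letters are hypothetical:

```powershell
# Replicate volume D: from sr-srv01 to sr-srv02, with SR logs on L:.
# Server, replication group, and volume names are placeholders.
New-SRPartnership -SourceComputerName "sr-srv01" -SourceRGName "rg01" `
    -SourceVolumeName "D:" -SourceLogVolumeName "L:" `
    -DestinationComputerName "sr-srv02" -DestinationRGName "rg02" `
    -DestinationVolumeName "D:" -DestinationLogVolumeName "L:"
```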
<BR /> <H2> Storage Replica Test Failover </H2> <BR /> It's now possible to mount a writable snapshot of replicated destination storage, in order to perform a backup of the destination or simply test your data and failover strategy. By grabbing an unused volume, we can temporarily mount a snapshot of the replicated storage. Replication of the original source continues unabated while you perform your tests; your data is never unprotected and your snapshot changes will not overwrite it. When you are done, discard the snapshot. For steps on using this, check out <A href="#" target="_blank"> https://aka.ms/srfaq </A> under section " <EM> Can I bring a destination volume online for read-only access? </EM> " <BR /> <BR /> <BR /> Here's a quick demo: <BR /> <BR /> <A href="#" target="_blank">https://youtu.be/8EKPfBZRmh8</A> <BR /> <BR /> Please let us know how things go using <A href="#" target="_blank"> https://windowsserver.uservoice.com </A> or the Windows 10 Feedback Hub ( <EM> Category: Server, Subcategory: Storage </EM> ). We are very interested in your feedback, the sooner the better. <BR /> <BR /> <BR /> <BR /> -- Ned "Yes, you finally got Standard Edition, Aidan" Pyle </BODY></HTML> Wed, 10 Apr 2019 14:45:32 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-replica-updates-in-windows-server-2019-insider-preview/ba-p/428222 Ned Pyle 2019-04-10T14:45:32Z Manage Storage Spaces Direct in Windows Server 2016 with Windows Admin Center (Preview) https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/manage-storage-spaces-direct-in-windows-server-2016-with-windows/ba-p/428221 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Apr 19, 2018 </STRONG> <BR /> <EM> Hi! I’m Cosmos. Follow me <A href="#" target="_blank"> @cosmosdarwin </A> on Twitter. 
</EM> <BR /> <BR /> At Microsoft Ignite 2017, we <A href="#" target="_blank"> teased </A> the next-generation in-box management experience for Storage Spaces Direct and Hyper-Converged Infrastructure built on <A href="#" target="_blank"> Windows Admin Center </A> , known then as ‘Project Honolulu’. Until now, this experience has required an Insider Preview build of Windows Server 2019. The most consistent feedback we’ve received <A href="#" target="_blank"> by far </A> has been to add support for Windows Server 2016. <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107604i3BF6E3E576D4A8C2" /> <BR /> <EM> The Hyper-Converged Cluster Dashboard in Windows Admin Center, version 1804. </EM> <BR /> <H3> Support for Windows Server 2016 </H3> <BR /> <STRONG> Today, we’re delighted to announce it’s here! </STRONG> With the April update of Windows Admin Center and the latest Cumulative Update of Windows Server 2016, you can now use Windows Admin Center to manage the Hyper-Converged Infrastructure you already have today: <BR /> <P> <IFRAME frameborder="0" height="420" src="https://www.youtube.com/embed/7kZxrYGTgTA" width="700"> </IFRAME> </P> <BR /> Windows Admin Center brings together compute, storage, and soon networking within one purpose-built, consistent, and interconnected experience. You can browse your host servers and drives; monitor performance and resource utilization across the whole cluster; enjoy radically simple workflows to provision and manage virtual machines and volumes; and much more. <BR /> <BR /> The <A href="#" target="_blank"> over 10,000 clusters </A> worldwide running Storage Spaces Direct can now benefit from these capabilities. <BR /> <H3> Get started </H3> <BR /> To get started, <A href="#" target="_blank"> download Windows Admin Center </A> , the next-generation in-box management tool for Windows Server. 
It’s free, takes less than five minutes to install, and can be used without an Internet connection. <BR /> <BR /> Then, install the 2018-04 Cumulative Update for Windows Server 2016 (released April 17, 2018), <A href="#" target="_blank"> KB4093120 </A> , on every server in your Storage Spaces Direct cluster. The Hyper-Converged Infrastructure experience depends on new management APIs that are added in this update. <BR /> <BR /> For more detailed instructions, read <A href="#" target="_blank"> the documentation </A> . <BR /> <H3> Feedback </H3> <BR /> Windows Admin Center for Hyper-Converged Infrastructure is being actively developed by Microsoft. Although the Windows Admin Center platform is generally available, the Hyper-Converged Infrastructure experience is still in Preview. It receives frequent updates that improve existing features and add new ones. <BR /> <BR /> Please <A href="#" target="_blank"> share your feedback </A> – let us know what’s working and what needs to be improved. <BR /> <H3> 6 tutorials in under 6 minutes </H3> <BR /> If you’re just getting started, here are some quick Storage Spaces Direct tutorials to help you learn how Windows Admin Center for Hyper-Converged Infrastructure is organized and works. These videos were recorded with Windows Admin Center version 1804 and an Insider Preview build of Windows Server 2019. 
<BR /> <UL> <BR /> <LI> (0:37) <A href="#" target="_blank"> How to create a three-way mirror volume </A> </LI> <BR /> <LI> (1:17) <A href="#" target="_blank"> How to create a mirror-accelerated parity volume </A> </LI> <BR /> <LI> (1:02) <A href="#" target="_blank"> How to open a volume and add files </A> </LI> <BR /> <LI> (0:51) <A href="#" target="_blank"> How to turn on deduplication and compression </A> </LI> <BR /> <LI> (0:47) <A href="#" target="_blank"> How to expand a volume </A> </LI> <BR /> <LI> (0:26) <A href="#" target="_blank"> How to delete a volume </A> </LI> <BR /> </UL> <BR /> <TABLE> <TBODY><TR> <TD> <STRONG> Create volume, three-way mirror </STRONG> <BR /> <IFRAME frameborder="0" height="210" src="https://www.youtube.com/embed/o66etKq70N8" width="375"> </IFRAME> </TD> <TD> <STRONG> Create volume, mirror-accelerated parity </STRONG> <BR /> <IFRAME frameborder="0" height="210" src="https://www.youtube.com/embed/R72QHudqWpE" width="375"> </IFRAME> </TD> </TR> <TR> <TD> <STRONG> Open volume and add files </STRONG> <BR /> <IFRAME frameborder="0" height="210" src="https://www.youtube.com/embed/j59z7ulohs4" width="375"> </IFRAME> </TD> <TD> <STRONG> Turn on deduplication and compression </STRONG> <BR /> <IFRAME frameborder="0" height="210" src="https://www.youtube.com/embed/PRibTacyKko" width="375"> </IFRAME> </TD> </TR> <TR> <TD> <STRONG> Expand volume </STRONG> <BR /> <IFRAME frameborder="0" height="210" src="https://www.youtube.com/embed/hqyBzipBoTI" width="375"> </IFRAME> </TD> <TD> <STRONG> Delete volume </STRONG> <BR /> <IFRAME frameborder="0" height="210" src="https://www.youtube.com/embed/DbjF8r2F6Jo" width="375"> </IFRAME> </TD> </TR> </TBODY></TABLE> <BR /> For more things to try, see <A href="#" target="_blank"> the documentation </A> . <BR /> <BR /> Let us know what you think! 
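<P> If you prefer scripting to clicking, the operations these tutorials demonstrate map to the built-in Storage PowerShell cmdlets. A brief sketch under stated assumptions: the volume name, size, pool wildcard, and cluster storage path are hypothetical, and deduplication on ReFS volumes assumes a Windows Server 2019 Insider build like the one in the videos: </P> <PRE class="lia-indent-padding-left-30px">
# Create a mirror volume on the Storage Spaces Direct pool
# (three-way mirror is the default on clusters with three or more nodes)
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume1" `
    -FileSystem CSVFS_ReFS -ResiliencySettingName Mirror -Size 1TB

# Turn on deduplication and compression for that volume
Enable-DedupVolume -Volume "C:\ClusterStorage\Volume1" -UsageType HyperV

# Delete the volume when no longer needed
Remove-VirtualDisk -FriendlyName "Volume1"
</PRE>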
</BODY></HTML> Wed, 10 Apr 2019 14:45:27 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/manage-storage-spaces-direct-in-windows-server-2016-with-windows/ba-p/428221 Cosmos Darwin 2019-04-10T14:45:27Z Introducing the Windows Server Storage Migration Service https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/introducing-the-windows-server-storage-migration-service/ba-p/428219 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Apr 12, 2018 </STRONG> <BR /> <H2> <STRONG> Update! We shipped Windows Server 2019!!! Go to <A href="#" target="_blank"> https://aka.ms/storagemigrationservice </A> for all production support info on SMS! You don't need this blog post anymore. :) </STRONG> </H2> <BR /> <BR /> <BR /> <BR /> Hi Folks, <A href="#" target="_blank"> Ned </A> here again with big news: <A href="#" target="_blank"> Windows Server 2019 Preview </A> contains an entirely new feature! <BR /> <BR /> The <STRONG> Storage Migration Service </STRONG> helps you migrate servers and their data without reconfiguring applications or users. <BR /> <UL> <BR /> <LI> Migrates unstructured data from anywhere into Azure &amp; modern Windows Servers </LI> <BR /> <LI> It’s fast, consistent, and scalable </LI> <BR /> <LI> It takes care of complexity </LI> <BR /> <LI> It provides an easily-learned graphical workflow </LI> <BR /> </UL> <BR /> My team has been working on this critter for some time and today you’ll learn about what it can do now, what it will do at RTM, and what the future holds. <BR /> <BR /> Did I mention that it comes in both Standard and Datacenter editions and has a road map that includes SAN, NAS, and Linux source migrations? <BR /> <H2> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107588i6FED54A1D76A1D25" /> </H2> <BR /> Come with me… <BR /> <H2> Why did we make this? </H2> <BR /> You asked us to! 
No really, I dug through endless data, advisory support cases, surveys, and first party team conversations to find that our #1 issue keeping customers on older servers was simply that migration is hard, and we don’t provide good tools. Think about what you need to get right if you want to replace an old file server with a new one, and not cause data loss, service interruption, or outright disaster: <BR /> <UL> <BR /> <LI> All data must transfer </LI> <BR /> <LI> All shares and their configuration must transfer </LI> <BR /> <LI> All share and file system security must transfer </LI> <BR /> <LI> All in-use files must transfer </LI> <BR /> <LI> All files you, the operator, don’t have access to must transfer </LI> <BR /> <LI> All files that changed since the <EM> last </EM> time you transferred must transfer </LI> <BR /> <LI> All use of <EM> local </EM> groups and users must transfer </LI> <BR /> <LI> All data attributes, alternate data streams, encryption, compression, etc. must transfer </LI> <BR /> <LI> All network addresses must transfer </LI> <BR /> <LI> All forms of computer naming, alternate naming, and other network resolution must transfer </LI> <BR /> <LI> Whew! </LI> <BR /> </UL> <BR /> If you were to wander into this <A href="#" target="_blank"> Spiceworks </A> data on market share (a little old but still reasonably valid), you’ll see some lopsided ratios: <BR /> <P> <A href="#" target="_blank"> </A> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107589iD7A02F4ABA85A4CA" /> </P> <BR /> ( <A href="#" target="_blank"> source </A> ) <BR /> <BR /> A year and a half later, there are a few million Windows Server 2016 nodes in market that have squeezed this balloon, but there’s even more Windows Server 2012 and still plenty of 2008 families, plus too much wretched, unsupported Windows Server 2003. Did you know that <A href="#" target="_blank"> Windows Server 2008 Support ends in January of 2020 </A> ? 
Just 20 months away from the end of life for WS2008 and we still have all this 2003! <BR /> <H2> The Storage Migration Service (Updated: August 30, 2018) </H2> <BR /> <STRONG> <EM> Important: This section of the blog post is going to change periodically, as with the Windows Insider preview system and Windows Admin Center’s preview extension system, I can give you new builds, features and bug fixes very rapidly. You’ll want to check back here often. </EM> </STRONG> <BR /> <BR /> In this first version, the feature copies over SMB (any version). Azure File Sync servers, IaaS VMs running in Azure or MAS, and traditional on-prem hardware and VMs are all valid targets. <BR /> <P> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107590i53FBE53129AE146A" /> </P> <BR /> The feature consists of an orchestrator service and one or more deployed proxy services. Proxies add functionality and performance to the migration process, while the orchestrator manages the migration and stores all results in a database. <BR /> <BR /> Storage Migration Service operates in three distinct phases: <BR /> <OL> <BR /> <LI> Inventory – an administrator selects nodes to migrate and the Storage Migration Service orchestrator node interrogates their storage, networking, security, SMB share settings, and data to migrate </LI> <BR /> <LI> Transfer – the administrator creates pairings of sources and destinations from that inventory list, then decides what data to transfer and performs one or more transfers </LI> <BR /> <LI> Cutover – the administrator assigns the source networks to the destinations and the new servers take over the identity of the old servers. The old servers enter a maintenance state where they are unavailable to users and applications for later decommissioning, while the new servers use the subsumed identities to carry on all duties. 
</LI> <BR /> </OL> <BR /> <H2> Walk through </H2> <BR /> <H3> Requirements </H3> <BR /> You’ll need the following to start evaluating this feature: <BR /> <UL> <BR /> <LI> At least two <A href="#" target="_blank"> Windows Server 2019 Preview </A> computers or VMs with the Storage Migration Service features installed ( <STRONG> Build 17744 or later </STRONG> ). One will operate as the orchestrator and one as the destination of the migration. <EM> Note: it is supported to make the orchestrator and destination the same computer, such as in a small environment with a single server to migrate. A larger environment will usually have a single orchestrator and many destinations, however, so that's how the steps are documented below. </EM> </LI> <BR /> <LI> The <A href="#" target="_blank"> Windows Admin Center </A> installed on some computer, such as your laptop or desktop </LI> <BR /> <LI> The Storage Migration Service <A href="#" target="_blank"> preview extension for Windows Admin Center </A> installed </LI> <BR /> </UL> <BR /> <P> <A href="#" target="_blank"> </A> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107591i2D02655A57BBD57A" /> </P> <BR /> <BR /> <STRONG> Supported <EM> source </EM> operating systems, VM or hardware (to migrate <EM> from </EM> ): </STRONG> <BR /> <UL> <BR /> <LI> Windows Server 2003 </LI> <BR /> <LI> Windows Server 2008 </LI> <BR /> <LI> Windows Server 2008 R2 </LI> <BR /> <LI> Windows Server 2012 </LI> <BR /> <LI> Windows Server 2012 R2 </LI> <BR /> <LI> Windows Server 2016 </LI> <BR /> <LI> Windows Server 2019 Preview </LI> <BR /> </UL> <BR /> <STRONG> Supported <EM> destination </EM> operating systems, VM or hardware (to migrate <EM> to </EM> ): </STRONG> <BR /> <UL> <BR /> <LI> Windows Server 2012 R2 </LI> <BR /> <LI> Windows Server 2016 </LI> <BR /> <LI> Windows Server 2019 Preview* </LI> <BR /> </UL> <BR /> * <EM> Windows Server 2019 Preview will have double the data transfer performance due to its inclusion of the 
SMS Proxy service. </EM> <BR /> <STRONG> Security: </STRONG> <BR /> <UL> <BR /> <LI> All computers domain-joined </LI> <BR /> <LI> You must provide a migration account that is an administrator on selected source computers </LI> <BR /> <LI> You must provide a migration account that is an administrator on selected destination computers </LI> <BR /> <LI> The following firewall rules must be enabled INBOUND on source and destination* computers: <BR /> <UL> <BR /> <LI> “File and Printer Sharing (SMB-In)” </LI> <BR /> <LI> “Netlogon Service (NP-In)" </LI> <BR /> <LI> "Windows Management Instrumentation (DCOM-In)" </LI> <BR /> <LI> "Windows Management Instrumentation (WMI-In)" </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> <P> <EM> * Windows Server 2019 with the SMS Proxy installed will automatically open and close correct firewall ports during migrations </EM> </P> <BR /> <BR /> <UL> <BR /> <LI> The following firewall rule must be enabled INBOUND on orchestrator computer only if you wish to download reports: <BR /> <UL> <BR /> <LI> “File and Printer Sharing (SMB-In)” </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> <H3> Install </H3> <BR /> Use the Windows Admin Center, Server Manager, or PowerShell to install the Storage Migration Service. 
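<P> As one sketch of the PowerShell route, assuming the feature names as they appear in the Windows Server 2019 Preview (verify with <STRONG> Get-WindowsFeature *SMS* </STRONG> on your build first): </P> <PRE class="lia-indent-padding-left-30px">
# On the orchestrator node (the proxy and management tools ride along)
Install-WindowsFeature -Name "SMS" -IncludeManagementTools

# On each destination node you intend to migrate to
Install-WindowsFeature -Name "SMS-Proxy"
</PRE>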
The SMS extension allows this with a single click: <BR /> <P> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107592iC5D2A99E3670B720" /> </P> <BR /> The features to install on the <STRONG> orchestrator node </STRONG> are: <BR /> <UL> <BR /> <LI> “Storage Migration Service” </LI> <BR /> <LI> “Storage Migration Service Proxy” <EM> (installs automatically when orchestrator selected) </EM> </LI> <BR /> <LI> “Storage Migration Service tools” ( <EM> installs automatically when orchestrator selected, under Remote Server Administration Tools, Feature Administration Tools </EM> ) </LI> <BR /> </UL> <BR /> <P> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107593i390190F63E4570CC" /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107594iB45EBE14C31DF066" /> </P> <BR /> The feature to install on the <STRONG> Destination </STRONG> nodes where you intend to migrate <EM> to </EM> : <BR /> <UL> <BR /> <LI> “Storage Migration Service Proxy” </LI> <BR /> </UL> <BR /> <H3> Run your first test inventory and transfer </H3> <BR /> Now you’re ready to start migrating. <BR /> <OL> <BR /> <LI> Log on to your <A href="#" target="_blank"> Windows Admin Center </A> instance and connect to the orchestrator node as an administrator. </LI> <BR /> </OL> <BR /> <P> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107595iE3698B89C3305D6F" /> </P> <BR /> <P> 2.&nbsp; Ensure the latest Storage Migration Service extension is in the Tools menu (if not, <A href="#" target="_blank"> install the extension </A> ) and click it. </P> <BR /> <P> 3. Observe the landing page. There is a summary that lists all active and completed jobs. Jobs contain one or more source computers to inventory, transfer, and cutover as part of a migration. </P> <BR /> <P> 4. You are about to begin the <STRONG> Inventory. </STRONG> Click <STRONG> New Job </STRONG> . Enter a job name, click Next. </P> <BR /> <P> 5. 
Enter <STRONG> Source Credentials </STRONG> that are an administrator on your source (to be migrated from) computers and click Next. </P> <BR /> <P> 6. Click <STRONG> Add Device </STRONG> and add one or more source computers. These must be Windows Server and should contain SMB shares with test data on them that you want to migrate. </P> <BR /> <P> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107596i81F002A242C30F34" /> </P> <BR /> <P> 7. Click <STRONG> Start Scan </STRONG> and wait for the inventory to complete. </P> <BR /> <P> 8. Observe the results. You can open the <STRONG> Details </STRONG> at the bottom of the page with the caret in the lower right. When done reviewing, click <STRONG> Finish Inventory </STRONG> . </P> <BR /> <P> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107597i302B2F8737CA86BA" /> </P> <BR /> <P> 9. You are now in the <STRONG> Transfer </STRONG> phase. You are always free to return to the Inventory phase and redefine what it gathers, get rid of the job, create a new job, or proceed forward to data transfer. As you can see, each phase operates in a similar fashion by providing credentials, setting rules, defining nodes, then running in a result screen. </P> <BR /> <P> 10. Provide credentials, destination computers mapped to the source computers, ensure each server you wish to migrate is set to <STRONG> Included </STRONG> in transfer, review your settings, <STRONG> validate </STRONG> the proposed transfer, then proceed with <STRONG> Start Transfer </STRONG> . </P> <BR /> <P> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107598iD6EF9C6B38538E90" /> </P> <BR /> <P> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107599iD105EB71A716C4C4" /> </P> <BR /> <P> 11. Observe the migration. You will see data transfers occur in relative real time (periodically refreshed) as the orchestrator copies data between source and destination nodes. 
When complete, examine the destination server and you’ll find that Storage Migration Service recreated all shares, folders, and files with matching security, attributes, and characteristics (see Known Issues below for not-yet-released functionality here). Note the <STRONG> Export </STRONG> option that allows you to save a complete database dump of the transfer operations for auditing purposes. </P> <BR /> <P> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107600i991DD12D790E5E6D" /> </P> <BR /> <P> 12. Move to the cutover phase. </P> <BR /> <P> 13. Note that the destination credentials are preserved (but can be changed). </P> <BR /> <P> 14. Pair the network interfaces between source and destination NICs so that IP addresses can be moved. You have the option to move the old source computer to DHCP or use a new static IP address. You also have the option to randomly rename the old source computer or specify a new name, as the destination will take over the old name as part of cutover. </P> <BR /> <P> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107601i32F075026DBBA919" /> </P> <BR /> <P> 15. <STRONG> Validate </STRONG> the cutover preparedness, then begin the cutover and allow the destination computer to take over the network and names of the old source. <STRONG> <EM> Both source and destination will reboot several times apiece. </EM> </STRONG> The progress bar will show how far until the operation completes. </P> <BR /> <P> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107602i3AB0C7B168500F91" /> </P> <BR /> <P> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107603i3C43A6A288FFEE33" /> </P> <BR /> <P> Cutover time to completion depends on: </P> <BR /> <P> a. Server reboot times </P> <BR /> <P> b. AD replication time (for domain joins and computer accounts being known to all users) </P> <BR /> <P> c. DNS replication time </P> <BR /> <P> 16. 
When cutover completes, migration is done. Your old servers are renamed and have new network configurations so that they are inaccessible to users and applications, but still retain all data and shares. The new servers now assume all duties of the computers they replaced, and users cannot tell that anything changed. </P> <BR /> <BR /> <H2> Known issues </H2> <BR /> Important! During the preview phase we need you to always run the latest binaries before reporting bugs. To get the latest: <BR /> <OL> <BR /> <LI> Windows Server 2019 Preview 17744 or later - download from <A href="#" target="_blank"> https://www.microsoft.com/en-us/software-download/windowsinsiderpreviewserver </A> </LI> <BR /> <LI> Storage Migration Service extension for Windows Admin Center 17744 or later - download using the extension manager <A href="#" target="_blank"> https://docs.microsoft.com/en-us/windows-server/manage/windows-admin-center/configure/using-extensions </A> . If you already have a previous extension installed, <EM> uninstall it </EM> , then install the latest extension. </LI> <BR /> </OL> <BR /> <UL> <BR /> <LI> <STRONG> Missing Menu Option </STRONG> - if you do not see the SMS tool listed in the left-hand menu of Windows Admin Center even with the extension installed, ensure that your orchestrator server is running build 17744 or later </LI> <BR /> </UL> <BR /> Applies to: <BR /> <UL> <BR /> <LI> Windows Server 2019 Insider Preview Build <STRONG> 17744 </STRONG> </LI> <BR /> <LI> Storage Migration Service Extension Version 0.0. <STRONG> 17744 </STRONG> </LI> <BR /> </UL> <BR /> <H2> Roadmap </H2> <BR /> What does the future hold? We have a large debt of work already accumulated for when we complete the SMB and Windows Server transfer options. 
Things on the roadmap – <EM> not promised, just on the roadmap </EM> :) <BR /> <OL> <BR /> <LI> Network range and AD source computer scanning for inventory </LI> <BR /> <LI> Samba, NAS, SAN source support </LI> <BR /> <LI> NFS for Windows and Linux </LI> <BR /> <LI> Block copy transfers instead of file-level </LI> <BR /> <LI> NDMP support </LI> <BR /> <LI> Hot and cold data detection on source to allow draining untouched old files </LI> <BR /> <LI> Azure File Sync and Azure Migrate integration </LI> <BR /> <LI> Server consolidation </LI> <BR /> <LI> More reporting options </LI> <BR /> </OL> <BR /> <H2> Feedback and Contact </H2> <BR /> <STRONG> Please make sure the issue isn’t already noted above before filing feedback and bugs! </STRONG> <BR /> <UL> <BR /> <LI> Use the Feedback Hub tool included in Windows 10 to file bugs or feedback. When filing, choose Category “ <EM> Server, </EM> ” subcategory “ <EM> </EM> ” It helps routing if you put Storage Migration Service in the title. </LI> <BR /> <LI> You can also provide feature requests through our UserVoice page at <A href="#" target="_blank"> https://windowsserver.uservoice.com/forums/295056-storage </A> . Share with colleagues and industry peers in your network to upvote items that are important to you. </LI> <BR /> <LI> If you just want to chat with me privately about feedback or a request, send email to <A href="https://gorovian.000webhostapp.com/?exam=mailto:smsfeed@microsoft.com" target="_blank"> smsfeed@microsoft.com </A> . Keep in mind that I may still make you go to Feedback Hub or UserVoice. </LI> <BR /> </UL> <BR /> <BR /> <BR /> Now get to testing and keep an ear out for updates. We plan to send out new builds of Windows Server 2019 and the Storage Migration Service very often until we get close to RTM. I will post announcements here and on <A href="#" target="_blank"> twitter </A> . 
<BR /> <BR /> Here’s some background music to help you along: <BR /> <BR /> <A href="#" target="_blank">https://youtu.be/Dyx4v1QFzhQ</A> <BR /> <BR /> - Ned “reel 2 real” Pyle <BR /> <BR /> </BODY></HTML> Wed, 10 Apr 2019 14:45:12 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/introducing-the-windows-server-storage-migration-service/ba-p/428219 Ned Pyle 2019-04-10T14:45:12Z Storage Spaces Direct: 10,000 clusters and counting! https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-spaces-direct-10-000-clusters-and-counting/ba-p/428185 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Mar 27, 2018 </STRONG> <BR /> It’s been 18 months since we announced general availability of Windows Server 2016, the first release to include <A href="#" target="_blank"> Storage Spaces Direct </A> , software-defined storage for the modern hyper-converged datacenter. Today, we’re pleased to share an update on Storage Spaces Direct adoption. <BR /> <BR /> We’ve reached an exciting milestone: there are now over 10,000 clusters worldwide running Storage Spaces Direct! Organizations of all sizes, from small businesses deploying just two nodes, to large enterprises and governments deploying hundreds of nodes, depend on Windows Server and Storage Spaces Direct for their critical applications and infrastructure. <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107586i3DA19046CD246610" /> <BR /> <BR /> Hyper-Converged Infrastructure is the fastest-growing segment of the on-premises server industry. By consolidating software-defined compute, storage, and networking into one cluster, customers benefit from the latest x86 hardware innovation and achieve cost-effective, high-performance, and easily-scalable virtualization. <BR /> <BR /> We’re deeply humbled by the trust our customers place in Windows Server, and we’re committed to continuing to deliver new features and improve existing ones based on your feedback. 
Later this year, Windows Server 2019 will add deduplication and compression, support for persistent memory, improved reliability and scalability, an entirely new management experience, and much more for Storage Spaces Direct. <BR /> <BR /> Looking to get started? We recommend these <A href="#" target="_blank"> Windows Server Software-Defined </A> offers from our partners. They are designed, assembled, and validated against our reference architecture to ensure compatibility and reliability, so you get up and running quickly. <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107587i33946AC51BE81EAE" /> <BR /> <BR /> To our customers and our partners, thank you. <BR /> <BR /> Here’s to the next 10,000! <BR /> <BR /> <EM> Note on methodology: the figure cited is the number of currently active clusters reporting anonymized census-level telemetry, excluding internal Microsoft deployments and those that are obviously not production, such as clusters that exist for less than 7 days (e.g. demo environments) or single-node Azure Stack Development Kits. Clusters which cannot or do not report telemetry are also not included. </EM> </BODY></HTML> Wed, 10 Apr 2019 14:42:46 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-spaces-direct-10-000-clusters-and-counting/ba-p/428185 Cosmos Darwin 2019-04-10T14:42:46Z It's Gone To Plaid: Storage Replica and Chelsio iWARP Performance https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/it-s-gone-to-plaid-storage-replica-and-chelsio-iwarp-performance/ba-p/428178 <P><STRONG> First published on TECHNET on Mar 26, 2018 </STRONG> <BR />Hi folks, <A href="#" target="_blank" rel="noopener"> Ned </A> here again. A few years ago, I demonstrated using <A href="#" target="_blank" rel="noopener"> Storage Replica as an extreme data mover </A> , not just as a DR solution; copying blocks is a heck of a lot more efficient than copying files. 
At the time, having even a single NVME drive and RDMA networking was gee-whiz. Well, times have changed, and all-flash storage deployments are everywhere. Even better, RDMA networking like iWARP is becoming commonplace. When you combine Windows Server 2012 R2, Windows Server 2016, or the newly announced Windows Server 2019 with ultrafast flash storage and ultrafast networking, you can get amazing speed results. <BR /><BR />What sort of speeds are we talking about here?</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 640px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107574iEC194E5E15594933/image-size/large?v=v2&amp;px=999" role="button" /></span></P> <H2>The Gear</H2> <P>The good folks at <A href="#" target="_blank" rel="noopener"> Chelsio </A> - makers of iWARP RDMA networking used by SMB Direct - set up a pair of servers with the following config: <BR /><BR /></P> <UL> <UL> <LI>OS: Windows Server 2016</LI> </UL> </UL> <UL> <UL> <LI>System Model: 2x Supermicro X10DRG-Q</LI> </UL> </UL> <UL> <UL> <LI>RAM: 128GB per node</LI> </UL> </UL> <UL> <UL> <LI>CPU: Intel(R) Xeon(R) CPU E5-2687W v4 @ 3.00GHz (2 sockets, 24 cores) per node</LI> </UL> </UL> <UL> <UL> <LI>Intel NVME SSD Model: SSDPECME016T4 (1.6TB) – 5x in source node</LI> </UL> </UL> <UL> <UL> <LI>Micron NVME SSD Model: MTFDHAX2T4MCF-1AN1ZABYY (2.4TB) – 5x in destination node</LI> </UL> </UL> <UL> <UL> <LI>2x Chelsio T6225-CR 25Gb iWARP RNICs</LI> </UL> </UL> <UL> <UL> <LI>2x Chelsio T62100-CR 100Gb iWARP RNICs</LI> </UL> </UL> <P><BR />25 and 100 Gigabit networking with CPU offloading the transfers and doing remote direct memory placement with SMB!? 
Yes please.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 300px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107576i5B3AB7FE23D44C5D/image-size/large?v=v2&amp;px=999" role="button" /></span> <span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 300px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107577iB6015723417ABDFB/image-size/large?v=v2&amp;px=999" role="button" /></span></P> <H2>The Goal</H2> <P>We wanted to see if Storage Replica block copying with NVME could fully utilize an iWARP RDMA network and what the CPU overhead would look like. When using NVME drives, servers are much more likely to run out of networking under high data transfer workloads than storage IOPS and MB/sec throughput. 10Gb ethernet and TCP simply cannot keep up, and their need to use the motherboard’s CPU for all the work restricts perf even further. <BR /><BR />We already know that straight file copying would <A href="#" target="_blank" rel="noopener"> not be able to match the perf </A> of Storage Replica block copy and also show significant CPU usage on each node. 
But where would the bottleneck be now?</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 216px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107578iF5ECE5730471D5CF/image-size/large?v=v2&amp;px=999" role="button" /></span></P> <H2>The Grades</H2> <H3>25Gb</H3> <P>First, I tried the 25Gb RDMA network, configuring Storage Replica to perform initial sync and clone the entire 2TB volume residing on top of the storage pool.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107579iC4F1837EA74C03F8/image-size/large?v=v2&amp;px=999" role="button" /></span></P> <P><BR />As you can see, this immediately consumed the entire 25Gb network. The NVME is just too fast, and Storage Replica is a kernel-mode disk filter that pumps data blocks at the line rate of the storage.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 386px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107580i175756E1C9BE0397/image-size/large?v=v2&amp;px=999" role="button" /></span></P> <P><BR />The CPU and memory are looking very low. 
This is the advantage that SMB Direct &amp; RDMA offloading bring to the table; the server is left with all the resources to do its real job, and not deal with user-mode nonsense.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 386px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107581iDD3621CB491FEFCA/image-size/large?v=v2&amp;px=999" role="button" /></span> <span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 387px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107582i0D6EEE8434D3BF2B/image-size/large?v=v2&amp;px=999" role="button" /></span></P> <P><BR />In the end, this is quite a respectable run and the data moved very fast. Copying 2TB in 12 minutes with no real CPU or memory hit is great by any definition.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 282px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107583iCAB9D6AEC46B9F75/image-size/large?v=v2&amp;px=999" role="button" /></span> <span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 363px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107584iB89826FF44B72A9E/image-size/large?v=v2&amp;px=999" role="button" /></span></P> <P><BR />But we can do better. :D</P> <H3>100Gb</H3> <P>Same test with the same servers, storage, volumes, and Storage Replica – except this time I’m using 100Gb Chelsio iWARP networking. <BR /><BR />I like videos. Let’s watch a video this time (turn on CC if you're less familiar with SR and crank the resolution). <BR /><BR /><IFRAME src="https://www.youtube-nocookie.com/embed/3NNmB_TAAGI" width="525" height="325"> </IFRAME> <BR /><BR />Holy smokes!!! The storage cannot keep up with the networking. Let me restate:</P> <H3><EM> The striped NVME drives cannot keep up with SMB Direct and iWARP. 
</EM></H3> <P>We just pushed 2 terabytes of data over SMB 3.1.1 and RDMA in under four minutes. That’s ~10 gigabytes a second.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 640px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107585iCB20FD9A972FFBA9/image-size/large?v=v2&amp;px=999" role="button" /></span></P> <H2>The Rundown</H2> <P>When you combine Windows Server and Chelsio iWARP RDMA, you get ultra-low latency, low-CPU, low-memory, high throughput SMB and workload performance in: <BR /><BR /></P> <UL> <UL> <LI>Storage Spaces Direct</LI> </UL> </UL> <UL> <UL> <LI>Storage Replica</LI> </UL> </UL> <UL> <UL> <LI>Hyper-V Live Migration</LI> </UL> </UL> <UL> <UL> <LI>Windows Server and Windows 10 Enterprise client SMB operations</LI> </UL> </UL> <P><BR />You will not be disappointed. <BR /><BR />A huge thanks to the good folks at Chelsio for the use of their loaner gear and lab. Y’all rock. <BR /><BR />-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Ned Pyle <BR /><!--<iframe src="https://twitter.com/NerdPyle/status/971181953054474240" width="525" height="325">--></P> Wed, 08 Jul 2020 18:28:38 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/it-s-gone-to-plaid-storage-replica-and-chelsio-iwarp-performance/ba-p/428178 Ned Pyle 2020-07-08T18:28:38Z Survey: Local Users and Groups on Windows Server in AD domains https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/survey-local-users-and-groups-on-windows-server-in-ad-domains/ba-p/426062 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jan 16, 2018 </STRONG> <BR /> Hey folks, <A href="#" target="_blank"> Ned </A> here again. We need to understand how or if you still use local security principals on Windows Server in Active Directory environments. 
Come take a 60 second survey: <BR /> <H2> <A href="#" target="_blank"> https://aka.ms/LocalSecurity1 </A> </H2> <BR /> <BR /> <BR /> Ned “this survey is weird, right?” Pyle </BODY></HTML> Wed, 10 Apr 2019 11:33:39 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/survey-local-users-and-groups-on-windows-server-in-ad-domains/ba-p/426062 Ned Pyle 2019-04-10T11:33:39Z Windows Work Folders On-Demand file access feature https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/windows-work-folders-on-demand-file-access-feature/ba-p/426060 <P><STRONG> First published on TECHNET on Jan 08, 2018 </STRONG> <BR />We’re excited to announce the Windows <A href="#" target="_blank" rel="noopener"> Work Folders </A> On-Demand file access feature will be available in the Windows 10 version 1803 release! This file access feature enables you to see and access all of your files on Windows 10 when using an enterprise managed file server with Work Folders. You control which files are stored on your PC and available offline. The rest of your files are always visible and don’t take up any space on your PC, but you need connectivity to the Work Folders file server to access them. </P> <P>&nbsp;</P> <H3>Prerequisites</H3> <P>If you’re interested in evaluating the on-demand file access feature prior to the Windows 10 version 1803 release, join the <A href="#" target="_blank" rel="noopener"> Windows Insider </A> program and install Windows 10 build 17063 or later. </P> <P>&nbsp;</P> <H3>How to enable the on-demand file access feature</H3> <P>There are three options to enable the on-demand file access feature: <BR /><BR /><STRONG> Option #1: Work Folders setup wizard </STRONG> <BR />When configuring Work Folders on a PC, verify the <STRONG> Enable on-demand file access on this PC </STRONG> setting is selected in the setup wizard. 
<BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 500px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107528iDF735E48AEAF86AF/image-size/large?v=v2&amp;px=999" role="button" /></span> <BR /><BR /><STRONG> Option #2: Work Folders control panel applet </STRONG> <BR />If Work Folders is currently configured on a PC, open the Work Folders control panel applet and select the <STRONG> Enable on-demand file access </STRONG> setting. <BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 500px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107529iA83E62A3CA70D1DC/image-size/large?v=v2&amp;px=999" role="button" /></span> <BR /><BR /><STRONG> Option #3: Work Folders group policy setting </STRONG> <BR />Administrators can control the on-demand file access feature on PCs by setting the <STRONG> On-demand file access preference </STRONG> group policy setting. <BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 379px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107530i354270F4B7F57C5C/image-size/large?v=v2&amp;px=999" role="button" /></span> <BR /><BR />These options can also be used to disable the on-demand file access feature if you want all files to be available offline on a PC. 
</P> <H3>&nbsp;</H3> <H3>File status in File Explorer</H3> <P>After enabling the on-demand file access feature, your files and folders stored in the Work Folders directory will have these statuses in File Explorer: <BR /><BR /><STRONG> Available when online </STRONG> <BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 85px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107531iE97FF08F7949CBA4/image-size/large?v=v2&amp;px=999" role="button" /></span> <BR /><BR /><STRONG> Available when online </STRONG> files don’t take up space on your computer. You see a cloud icon for each <STRONG> Available when online </STRONG> file in File Explorer, but the file doesn’t download to your device until you open it. You can only open <STRONG> Available when online </STRONG> files when your device is connected to the internet. However, your <STRONG> Available when online </STRONG> files will always be visible in File Explorer even if you are offline. <BR /><BR /><STRONG> Available on this device </STRONG> <BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 81px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107532i1981E69C9275A3A1/image-size/large?v=v2&amp;px=999" role="button" /></span> <BR /><BR />When you open an <STRONG> Available when online </STRONG> file, it downloads to your device and becomes an <STRONG> Available on this device </STRONG> file. You can open an <STRONG> Available on this device </STRONG> file anytime, even without Internet access. If you need more space, you can change the file back to <STRONG> Available when online </STRONG> . Just right-click the file and select <STRONG> Free up space </STRONG> . 
<BR /><BR /><STRONG> Always available on this device </STRONG> <BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 80px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107534i451F5EFDDAAD86B6/image-size/large?v=v2&amp;px=999" role="button" /></span> <BR /><BR />Only files that you mark as <STRONG> Always keep on this device </STRONG> have the green circle with the white check mark. These files will always be available even when you’re offline. They are downloaded to your device and take up space. </P> <H3>&nbsp;</H3> <H3>Frequently Asked Questions</H3> <P><STRONG>How do I make a file or folder available for offline use? </STRONG> <BR /><BR /></P> <UL> <UL> <LI>Right-click a file or folder in the Work Folders directory</LI> </UL> </UL> <UL> <UL> <LI>Select <STRONG> Always keep on this device </STRONG></LI> </UL> </UL> <P><BR /><STRONG>How do I make a file or folder available when online? </STRONG> <BR /><BR /></P> <UL> <UL> <LI>Right-click a file or folder in the Work Folders directory</LI> </UL> </UL> <UL> <UL> <LI>Select <STRONG> Free up space </STRONG></LI> </UL> </UL> <P><BR /><STRONG>I upgraded my PC to Windows 10 build 17063 or later, why do I not see the “Always keep on this device” and “Free up space” options in File Explorer when I right-click a file or folder? </STRONG> <BR /><BR /></P> <UL> <UL> <LI>The on-demand file access feature is disabled by default if Work Folders was configured on the PC prior to upgrading. To enable the on-demand file access feature, select the <STRONG> Enable on-demand file access </STRONG> setting in the Work Folders control panel applet or set the <STRONG> On-demand file access preference </STRONG> group policy setting.</LI> </UL> </UL> <H3>&nbsp;</H3> <H3>Feedback or Issues</H3> <P>If you have feedback or experience an issue, please post in the <A href="#" target="_blank" rel="noopener"> File Services and Storage </A> TechNet forum. 
</P> <H3>&nbsp;</H3> <H3>Additional Resources</H3> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Work Folders documentation on TechNet</A></LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Work Folders blog on TechNet </A></LI> </UL> </UL> <P>&nbsp;</P> Mon, 25 May 2020 22:59:22 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/windows-work-folders-on-demand-file-access-feature/ba-p/426060 Jeff Patterson 2020-05-25T22:59:22Z Survey: Storage Replica "Lite" https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/survey-storage-replica-quot-lite-quot/ba-p/426035 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Dec 14, 2017 </STRONG> <BR /> Hey folks, <A href="#" target="_blank"> Ned </A> here again. Are you interested in a reduced cost but reduced functionality version of Storage Replica? We are too. Come take a 2-minute survey: <BR /> <H2> <A href="#" target="_blank"> https://aka.ms/srlite1 </A> </H2> <BR /> Ned "this survey promises nothing" Pyle </BODY></HTML> Wed, 10 Apr 2019 11:32:41 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/survey-storage-replica-quot-lite-quot/ba-p/426035 Ned Pyle 2019-04-10T11:32:41Z Storage Spaces Direct with Samsung Z-SSD™ https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-spaces-direct-with-samsung-z-ssd-8482/ba-p/426034 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Dec 04, 2017 </STRONG> <BR /> Hello, Claus here again. <BR /> <BR /> Today we are going to take a look at a new device from Samsung, the SZ985, which is marketed as an ultra-low latency NVMe SSD based on Samsung Z-NAND flash memory and a new NVMe controller. It offers ~3GB/s throughput and random read operations in 20µs, with capacities up to 3.2TB and 30 drive writes per day (DWPD). 
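A 30 DWPD rating multiplies directly into a daily and lifetime write budget. Here is a quick sketch of that arithmetic; note the 5-year warranty period used below is an assumption for illustration, since the post doesn't state the SZ985's warranty:

```python
def daily_writes_tb(capacity_tb: float, dwpd: float) -> float:
    """Daily write volume implied by a DWPD endurance rating."""
    return capacity_tb * dwpd

def lifetime_writes_tb(capacity_tb: float, dwpd: float, warranty_years: float) -> float:
    """Cumulative writes (TBW) over the warranty period."""
    return capacity_tb * dwpd * 365 * warranty_years

# Spec-sheet numbers from above; the 5-year warranty is assumed.
print(daily_writes_tb(3.2, 30))        # 96 TB of writes per day
print(lifetime_writes_tb(3.2, 30, 5))  # 175,200 TB, i.e. ~175 PB over 5 years
```

That enormous budget is exactly why a device like this suits the write-heavy cache tier.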
<BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107523i8D89AA4602FA2144" /> <BR /> <BR /> We added two Z-SSD devices to each server in a 4-node cluster, each node configured with the following hardware: <BR /> <UL> <BR /> <LI> 2x Intel® Xeon® E5-2699v4 (22 cores @ 2.2 GHz) </LI> <BR /> <LI> 128GiB DDR4 DRAM </LI> <BR /> <LI> 2x 800GB Samsung Z-SSD </LI> <BR /> <LI> 20x SATA SSD </LI> <BR /> <LI> 1x Mellanox CX-3 Pro 2x40Gb </LI> <BR /> </UL> <BR /> <UL> <BR /> <LI> BIOS configuration <BR /> <UL> <BR /> <LI> BIOS performance profile </LI> <BR /> <LI> C States disabled </LI> <BR /> <LI> Hyper-threading on </LI> <BR /> <LI> Speedstep/Turbo on </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> We deployed Windows Server 2016 Storage Spaces Direct and <A href="#" target="_blank"> VMFleet </A> with: <BR /> <UL> <BR /> <LI> 4x 3-way mirror CSV volumes </LI> <BR /> <LI> Cache configured for read/write </LI> <BR /> <LI> 44 VMs per node, each with <BR /> <UL> <BR /> <LI> DISKSPD v2.0.17 </LI> <BR /> <LI> 5GB working set (~900GB total) </LI> <BR /> <LI> 1 IO thread </LI> <BR /> <LI> 8 QD </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> First, we took a look at the 100% read scenario. The graph shows that observed latency at the top of the storage stack stayed relatively constant as we ramped IOPS. The high point is 200µs @ 200K IOPS, but it stays between 100-150µs at 400K+ IOPS. <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107524i1FB86B504350AE72" /> <BR /> <BR /> The graph below shows CPU utilization increasing linearly as we ramp up IOPS, which is expected. <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107525iA014E163D8168025" /> <BR /> <BR /> Second, we took a look at the 90% read and 10% write scenario, which is more common. 
Writes have to be performed on multiple nodes to ensure resiliency, which involves network communication and thus is a bit slower than local read operations, but they stayed under 1ms even at 1M+ IOPS, and reads stayed very close to what was seen with 100% read. <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107526iB5009B11B6936274" /> <BR /> <BR /> Similar to the 100% read scenario, CPU utilization increases linearly as we increase IOPS pressure on the system in the 90% read and 10% write scenario. <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107527iBFDE12C10CB6ABF4" /> <BR /> <BR /> It is good to see the innovation in driving down latency in flash storage to the benefit of relational database servers, like SQL Server, and caches, like the Storage Spaces Direct cache. I look forward to seeing these devices in our <A href="#" target="_blank"> Windows Server Software-Defined datacenter solutions </A> . <BR /> <BR /> What do you think? <BR /> <BR /> Until next time <BR /> <BR /> Claus <BR /> <BR /> </BODY></HTML> Wed, 10 Apr 2019 11:32:36 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-spaces-direct-with-samsung-z-ssd-8482/ba-p/426034 Claus Joergensen 2019-04-10T11:32:36Z Object Storage Survey for Windows https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/object-storage-survey-for-windows/ba-p/426028 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Nov 01, 2017 </STRONG> <BR /> Hi folks, <A href="#" target="_blank"> Ned </A> here again. We have released a new survey for on-premises object storage on Windows. We want to better understand your specific object storage workloads in the datacenter and design these products to meet your needs. The survey will only take a couple of minutes and is completely anonymous. 
<BR /> <BR /> <A href="#" target="_blank"> Click here for survey </A> <BR /> <BR /> <EM> Note: this is different from the previous survey you might have filled out a few weeks ago. We're iterating. :) </EM> <BR /> <BR /> Thanks! </BODY></HTML> Wed, 10 Apr 2019 11:31:50 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/object-storage-survey-for-windows/ba-p/426028 Ned Pyle 2019-04-10T11:31:50Z Storage Spaces Direct with Cavium FastLinQ® 41000 https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-spaces-direct-with-cavium-fastlinq-174-41000/ba-p/426027 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Sep 21, 2017 </STRONG> <BR /> Hello, Claus here again. I am very excited about how the RDMA networking landscape is evolving. We took RDMA mainstream in Windows Server 2012 when we introduced SMB Direct, and even more so in Windows Server 2016, where Storage Spaces Direct leverages SMB Direct for east-west traffic. <BR /> <BR /> More partners than ever offer RDMA enabled network adapters. Most partners focus on either iWARP or RoCE. In this post, we are taking a closer look at the <A href="#" target="_blank"> Microsoft SDDC-Premium </A> certified <A href="#" target="_blank"> Cavium FastLinQ® 41000 </A> RDMA adapter, which comes in 10G, 25G, 40G or even 50G versions. The FastLinQ® NIC is unique in that it supports both iWARP and RoCE, and can do both at the same time. This provides great flexibility for customers, as they can deploy the RDMA technology of their choice, or they can connect both Hyper-V hosts with RoCE adapters and Hyper-V hosts with iWARP adapters to the same Storage Spaces Direct cluster equipped with FastLinQ® 41000 NICs. 
<BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107522i9AC46B8C38A7242A" /> <BR /> <BR /> <BR /> <P> Figure 1 Cavium FastLinQ® 41000 </P> <BR /> We use a 4-node cluster, each node configured with the following hardware: <BR /> <UL> <BR /> <LI> DellEMC PowerEdge R730XD </LI> <BR /> <LI> 2x Intel® Xeon® E5-2697v4 (18 cores @ 2.3 GHz) </LI> <BR /> <LI> 128GiB DDR4 DRAM </LI> <BR /> <LI> 4x 800GB Dell Express Flash NVMe SM1715 </LI> <BR /> <LI> 8x 800GB Toshiba PX04SHB080 SSD </LI> <BR /> <LI> Cavium FastLinQ® QL41262H 25GbE Adapter (2-Port) </LI> <BR /> </UL> <BR /> <UL> <BR /> <LI> BIOS configuration <BR /> <UL> <BR /> <LI> BIOS performance profile </LI> <BR /> <LI> C States disabled </LI> <BR /> <LI> HT On </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> We deployed Windows Server 2016 Storage Spaces Direct and <A href="#" target="_blank"> VMFleet </A> with: <BR /> <UL> <BR /> <LI> 4x 3-way mirror CSV volumes </LI> <BR /> <LI> Cache configured for read/write </LI> <BR /> <LI> 18 VMs per node </LI> <BR /> </UL> <BR /> First, we configured VMFleet for throughput. 
Each VM runs <A href="#" target="_blank"> DISKSPD </A> , with 512KB IO size at 100% read at various queue depths: <BR /> <TABLE> <TBODY><TR> <TD> <BR /> <P> <STRONG> 512K Bytes </STRONG> </P> <BR /> </TD> <TD> <STRONG> iWARP </STRONG> </TD> <TD> <STRONG> RoCE </STRONG> </TD> <TD> <BR /> <P> <STRONG> iWARP and RoCE </STRONG> </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> Queue Depth </P> <BR /> </TD> <TD> BW (GB/s) </TD> <TD> Read latency (ms) </TD> <TD> BW (GB/s) </TD> <TD> Read latency (ms) </TD> <TD> BW (GB/s) </TD> <TD> Read latency (ms) </TD> </TR> <TR> <TD> <BR /> <P> 1 </P> <BR /> </TD> <TD> 33.0 </TD> <TD> 1.1 </TD> <TD> 32.2 </TD> <TD> 1.2 </TD> <TD> 33.2 </TD> <TD> <BR /> <P> 1.1 </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> 2 </P> <BR /> </TD> <TD> 39.7 </TD> <TD> 1.9 </TD> <TD> 39.4 </TD> <TD> 1.9 </TD> <TD> 40.1 </TD> <TD> <BR /> <P> 1.9 </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> 4 </P> <BR /> </TD> <TD> 41.0 </TD> <TD> 3.7 </TD> <TD> 40.6 </TD> <TD> 3.7 </TD> <TD> 41.0 </TD> <TD> <BR /> <P> 3.7 </P> <BR /> </TD> </TR> <TR> <TD> 8 </TD> <TD> 41.4 </TD> <TD> 7.4 </TD> <TD> 41.1 </TD> <TD> 7.4 </TD> <TD> 41.6 </TD> <TD> <BR /> <P> 7.4 </P> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> <BR /> <BR /> Aggregate throughput is very close to what's possible with the cache devices in the system. Also, the aggregate throughput and latency is very consistent whether it is with iWARP, RoCE or using both at the same time. In these tests, DCB is configured to enable PFC for RoCE but iWARP is without any DCB configuration. <BR /> <BR /> Next, we reconfigured VMFleet for IOPS. 
Each VM runs <A href="#" target="_blank"> DISKSPD </A> , with 4KB IO size at 90% read and 10% write at various queue depths: <BR /> <TABLE> <TBODY><TR> <TD> <BR /> <P> <STRONG> 4K Bytes </STRONG> </P> <BR /> </TD> <TD> <STRONG> iWARP </STRONG> </TD> <TD> <STRONG> RoCE </STRONG> </TD> <TD> <BR /> <P> <STRONG> iWARP and RoCE </STRONG> </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> Queue Depth </P> <BR /> </TD> <TD> IOPS </TD> <TD> Read latency (ms) </TD> <TD> IOPS </TD> <TD> Read latency (ms) </TD> <TD> IOPS </TD> <TD> Read latency (ms) </TD> </TR> <TR> <TD> <BR /> <P> 1 </P> <BR /> </TD> <TD> 272,588 </TD> <TD> 0.253 </TD> <TD> 268,107 </TD> <TD> 0.258 </TD> <TD> 271,004 </TD> <TD> <BR /> <P> 0.256 </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> 2 </P> <BR /> </TD> <TD> 484,532 </TD> <TD> 0.284 </TD> <TD> 481,493 </TD> <TD> 0.287 </TD> <TD> 482,564 </TD> <TD> <BR /> <P> 0.284 </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> 4 </P> <BR /> </TD> <TD> 748,090 </TD> <TD> 0.367 </TD> <TD> 729,442 </TD> <TD> 0.375 </TD> <TD> 740,107 </TD> <TD> <BR /> <P> 0.372 </P> <BR /> </TD> </TR> <TR> <TD> 8 </TD> <TD> 1,177,243 </TD> <TD> 0.465 </TD> <TD> 1,161,534 </TD> <TD> 0.474 </TD> <TD> 1,164,115 </TD> <TD> <BR /> <P> 0.472 </P> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> <BR /> <BR /> Again, very similar and consistent IOPS rates and latency numbers for iWARP, RoCE or when using both at the same time. <BR /> <BR /> As mentioned in the beginning, more and more partners are offering RDMA network adapters, most focusing on either iWARP or RoCE. The <A href="#" target="_blank"> Cavium FastLinQ® 41000 </A> can do both, which means customers can deploy either or both, or even change over time if the need arises. The numbers look very good and consistent regardless if it used with iWARP, RoCE or both at the same time. <BR /> <BR /> What do you think? 
<BR /> <BR /> Until next time <BR /> <BR /> Claus </BODY></HTML> Wed, 10 Apr 2019 11:31:46 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-spaces-direct-with-cavium-fastlinq-174-41000/ba-p/426027 Claus Joergensen 2019-04-10T11:31:46Z Understanding SSD endurance: drive writes per day (DWPD), terabytes written (TBW), and the minimum recommended for Storage Spaces Direct https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/understanding-ssd-endurance-drive-writes-per-day-dwpd-terabytes/ba-p/426024 <P><EM> First published on TECHNET on Aug 11, 2017</EM></P> <P>Hi! I’m Cosmos. Follow me on Twitter <A href="#" target="_blank" rel="noopener"> @cosmosdarwin</A>.</P> <P>&nbsp;</P> <H2><STRONG> Background </STRONG></H2> <P>Storage Spaces Direct in Windows Server 2016 and Windows Server 2019 features a built-in, persistent, read and write cache to maximize storage performance. You can read all about it at <A href="#" target="_blank" rel="noopener"> Understanding the cache in Storage Spaces Direct </A> . In all-flash deployments, NVMe drives typically cache for SATA/SAS SSDs; in hybrid deployments, NVMe or SATA/SAS SSDs cache for HDDs. <BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 500px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107520iF2F03858BDD5CA79/image-size/large?v=v2&amp;px=999" role="button" /></span></P> <P><BR />In any case, the cache drives will serve the overwhelming majority of IO, including 100% of writes. 
This is essential to delivering the unrivaled performance of Storage Spaces Direct, whether you measure that in <A href="#" target="_blank" rel="noopener"> millions of IOPS </A> , <A href="#" target="_blank" rel="noopener"> Tb/s of IO throughput </A> , or consistent sub-millisecond latency.</P> <P><BR />But nothing is free: these cache drives are liable to wear out quickly.</P> <P>&nbsp;</P> <H2><STRONG> Review: What is flash wear </STRONG></H2> <P>Solid-state drives today are almost universally comprised of NAND flash, which wears out with use. Each flash memory cell can only be written so many times before it becomes unreliable. (There are numerous great write-ups online that cover all the gory details – including <A href="#" target="_blank" rel="noopener"> on Wikipedia </A> .) <BR /><BR />You can watch this happen in Windows by looking at the <STRONG> Wear </STRONG> reliability counter in PowerShell: <BR /><BR /><STRONG> PS C:\&gt; Get-PhysicalDisk | Get-StorageReliabilityCounter | Select Wear </STRONG> <BR /><BR />Here’s the output from my laptop – my SSD is about 5% worn out after two years. <BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 578px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107521iDEC61107343CAD20/image-size/large?v=v2&amp;px=999" role="button" /></span></P> <P><BR /><EM>Note: Not all drives accurately report this value to Windows. In some cases, the counter may be blank. Check with your manufacturer to see if they have proprietary tooling you can use to retrieve this value. 
</EM><BR /><BR />Generally, reads do not wear out NAND flash.</P> <P>&nbsp;</P> <H2><STRONG> Quantifying flash endurance </STRONG></H2> <P>Measuring wear is one thing, but how can we predict the longevity of an SSD?</P> <P>&nbsp;</P> <P>Flash “endurance” is commonly measured in two ways:</P> <UL> <LI>Drive Writes Per Day (DWPD)</LI> <LI>Terabytes Written (TBW)</LI> </UL> <P>Both approaches are based on the manufacturer’s warranty period for the drive, its so-called “lifetime”.</P> <P>&nbsp;</P> <H2><STRONG> Drive Writes Per Day (DWPD) </STRONG></H2> <P>Drive Writes Per Day (DWPD) measures how many times you could overwrite the drive’s entire size each day of its life. For example, suppose your drive is 200 GB and its warranty period is 5 years. If its DWPD is 1, that means you can write 200 GB (its size, one time) into it every single day for the next five years.</P> <P>&nbsp;</P> <P>If you multiply that out, that’s 200 GB per day × 365 days/year × 5 years = 365 TB of cumulative writes before you may need to replace it.</P> <P>&nbsp;</P> <P>If its DWPD was 10 instead of 1, that would mean you can write 10 × 200 GB = 2 TB (its size, ten times) into it every day. Correspondingly, that’s 3,650 TB = 3.65 PB of cumulative writes over 5 years.</P> <P>&nbsp;</P> <H2><STRONG> Terabytes Written (TBW) </STRONG></H2> <P>Terabytes Written (TBW) directly measures how much you can write cumulatively into the drive over its lifetime. Essentially, it just includes the multiplication we did above in the measurement itself. <BR /><BR />For example, if your drive is rated for 365 TBW, that means you can write 365 TB into it before you may need to replace it. <BR /><BR />If its warranty period is 5 years, that works out to 365 TB ÷ (5 years × 365 days/year) = 200 GB of writes per day. If your drive was 200 GB in size, that’s equivalent to 1 DWPD. Correspondingly, if your drive was rated for 3.65 PBW = 3,650 TBW, that works out to 2 TB of writes per day, or 10 DWPD. 
<BR /><BR />As you can see, if you know the drive’s size and warranty period, you can always get from DWPD to TBW or vice-versa with some simple multiplications or divisions. The two measurements are really very similar.</P> <P>&nbsp;</P> <H2><STRONG> What’s the difference? </STRONG></H2> <P>The only real difference is that DWPD depends on the drive’s size whereas TBW does not. <BR /><BR />For example, consider an SSD which can take 1,000 TB of writes over its 5-year lifetime. <BR /><BR />Suppose the SSD is 200 GB: <BR /><BR /><STRONG> 1,000 TB ÷ (5 years </STRONG> <STRONG> × 365 days/year </STRONG> <STRONG> × 200 GB) = 2.74 DWPD </STRONG> <BR /><BR />Now suppose the SSD is 400 GB: <BR /><BR /><STRONG> 1,000 TB ÷ (5 years </STRONG> <STRONG> × 365 days/year </STRONG> <STRONG> × 400 GB) = 1.37 DWPD </STRONG> <BR /><BR />The resulting DWPD is different! What does that mean? <BR /><BR />On the one hand, the larger 400 GB drive can do the exact same cumulative writes over its lifetime as the smaller 200 GB drive. Looking at TBW, this is very clear – both drives are rated for 1,000 TBW. But looking at DWPD, the larger drive appears to have just half the endurance! You might argue that because under the same workload, it would perform “the same”, using TBW is better. <BR /><BR />On the other hand, you might argue that the 400 GB drive can provide storage for more workload because it is larger, and therefore its 1,000 TBW spreads more thinly, and it really does have just half the endurance! By this reasoning, using DWPD is better.</P> <P>&nbsp;</P> <H2><STRONG> The bottom line </STRONG></H2> <P>You can use the measurement you prefer. It is almost universal to see both TBW and DWPD appear on drive spec sheets today. 
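Since these conversions come up constantly when reading spec sheets, the arithmetic above can be sketched as a pair of small helpers (decimal units, 1 TB = 1,000 GB, matching the worked examples):

```python
def tbw_to_dwpd(tbw: float, capacity_gb: float, warranty_years: float) -> float:
    """Drive Writes Per Day implied by a cumulative TBW rating."""
    daily_writes_gb = tbw * 1000 / (warranty_years * 365)  # TB -> GB, spread per day
    return daily_writes_gb / capacity_gb

def dwpd_to_tbw(dwpd: float, capacity_gb: float, warranty_years: float) -> float:
    """Cumulative terabytes written implied by a DWPD rating."""
    return dwpd * capacity_gb * warranty_years * 365 / 1000  # GB -> TB

# The 1,000 TB drive from the example, at two capacities:
print(round(tbw_to_dwpd(1000, 200, 5), 2))  # 2.74 DWPD
print(round(tbw_to_dwpd(1000, 400, 5), 2))  # 1.37 DWPD

# And the earlier example: 1 DWPD on a 200 GB drive over 5 years = 365 TBW
print(dwpd_to_tbw(1, 200, 5))               # 365.0
```

Converting either rating at the drive's own capacity and warranty period always recovers the other, which makes the spec-sheet numbers easy to sanity-check.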
Depending on your assumptions, there is a compelling case for either.</P> <P>&nbsp;</P> <H2><STRONG> Recommendation for Storage Spaces Direct</STRONG></H2> <P>Our minimum recommendation for Storage Spaces Direct is listed on the <A href="#" target="_blank" rel="noopener"> Hardware requirements </A> page.&nbsp;As of mid-2017, for cache drives: <BR /><BR /></P> <UL> <UL> <LI>If you choose to measure in DWPD, we recommend 3 or more.</LI> </UL> </UL> <UL> <UL> <LI>If you choose to measure in TBW, we recommend 4 TBW per day of lifetime. Spec sheets often provide TBW cumulatively, which you’ll need to divide by its lifetime. For example, if your drive has a warranty period of 5 years, then 4 TB × 365 days/year × 5 years = 7,300 TBW = 7.3 PBW total.</LI> </UL> </UL> <P><BR />Often, one of these measurements will work out to be slightly less strict than the other.</P> <P>&nbsp;</P> <P>You may use whichever measurement you prefer.</P> <P><BR />There is no minimum recommendation for capacity drives.</P> <P>&nbsp;</P> <H2><STRONG> Addendum: Write amplification </STRONG></H2> <P>You may be tempted to reason about endurance from IOPS numbers, if you know them. For example, if your workload generates (on average) 100,000 IOPS which are (on average) 4 KiB each of which (on average) 30% are writes, you may think: <BR /><BR /><STRONG> 100,000 </STRONG> <STRONG> × 30% </STRONG> <STRONG> × 4 KiB = 120 MB/s of writes </STRONG> <BR /><BR /><STRONG> 120 MB/s </STRONG> <STRONG> × 60 secs/min </STRONG> <STRONG> × 60 mins/hour </STRONG> <STRONG> × 24 hours = approx. 10 TBW/day </STRONG> <BR /><BR />If you have four servers with two cache drives each, that’s: <BR /><BR /><STRONG> 10 TBW/day ÷ (8 total cache drives) = approx. 1.25 TBW/day per drive </STRONG> <BR /><BR />Interesting! Less than 4 TBW/day! <BR /><BR />Unfortunately, this is flawed math&nbsp;because it does not account for write amplification. 
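To make the flaw concrete, here is that estimate in code with a placeholder amplification factor bolted on. The factor of 3 below reflects only three-way mirror writes and is an assumption for illustration; the other sources of amplification discussed next would push it higher:

```python
def naive_writes_tb_per_day(iops: float, write_fraction: float, io_size_kib: float) -> float:
    """Write volume implied by average IOPS alone -- ignores write amplification."""
    bytes_per_second = iops * write_fraction * io_size_kib * 1024
    return bytes_per_second * 86400 / 1e12  # bytes/day -> TB/day (decimal)

top_of_stack = naive_writes_tb_per_day(100_000, 0.30, 4)  # ~10.6 TB/day, as above

# Assumed amplification: three-way mirror alone turns every write into three.
AMPLIFICATION_FACTOR = 3

drive_level = top_of_stack * AMPLIFICATION_FACTOR
per_drive = drive_level / 8  # four servers x two cache drives
print(round(per_drive, 1))   # ~4.0 TB of writes per day, per cache drive
```

Even with this lowball factor, the per-drive figure lands right at the 4 TBW/day minimum recommendation above, which is why estimating endurance from application IOPS alone tends to understate what the cache drives actually absorb.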
<BR /><BR />Write amplification is when one write (at the user or application layer) becomes multiple writes (at the physical device layer). Write amplification is inevitable in any storage system that guarantees resiliency and/or crash consistency. The most blatant example in Storage Spaces Direct is three-way mirror: it writes everything three times, to three different drives. <BR /><BR />There are other sources of write amplification too: repair jobs generate additional IO; data deduplication generates additional IO; the filesystem, and many other components, generate additional IO by persisting their metadata and log structures; etc. In fact, the drive itself generates write amplification from internal activities such as garbage collection! (If you're interested, check out the <A href="#" target="_blank" rel="noopener"> JESD218 </A> standard methodology for how to factor this into endurance calculations.) <BR /><BR />This is all necessary and good, but it makes it difficult to derive drive-level IO activity at the bottom of the stack from application-level IO activity at the top of the stack in any consistent way.&nbsp;That’s why, based on our experience, we publish the minimum DWPD and TBW recommendation. <BR /><BR />Let us know what you think! :)</P> Thu, 19 Sep 2019 23:55:36 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/understanding-ssd-endurance-drive-writes-per-day-dwpd-terabytes/ba-p/426024 Cosmos Darwin 2019-09-19T23:55:36Z Windows Server 2016 NTFS sparse file/Data Deduplication users: please install KB4025334 https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/windows-server-2016-ntfs-sparse-file-data-deduplication-users/ba-p/426018 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jul 20, 2017 </STRONG> <BR /> Please note: all updates to Windows Server 2016 are cumulative, so any current or future KB will include the fixes described in this blog post. Microsoft always recommends taking the latest KB. 
<BR /> <BR /> Hi folks, <BR /> <BR /> <A href="#" target="_blank"> KB4025334 </A> prevents a critical data corruption issue with NTFS sparse files in Windows Server 2016. This helps avoid data corruptions that may occur when using Data Deduplication in Windows Server 2016, although all applications and Windows components that use sparse files on NTFS benefit from applying this update. Installation of this KB helps avoid any new or further corruptions for Data Deduplication users on Windows Server 2016. This does <B> not </B> help recover existing corruptions that may have already happened. This is because NTFS incorrectly removes in-use clusters from the file and there is no ability to identify what clusters were incorrectly removed after the fact. Although KB4025334 is an optional update, we strongly recommend that all NTFS users, especially those using Data Deduplication, install this update as soon as possible. This fix will become mandatory in the "Patch Tuesday" release for August 2017. <BR /> <BR /> For Data Deduplication users, this data corruption is particularly hard to notice as it is a so called "silent" corruption - it cannot be detected by the weekly Dedup integrity scrubbing job. Therefore, <A href="#" target="_blank"> KB4025334 </A> also includes an update to chkdsk to help identify which files are corrupted. Affected files can be identified using chkdsk with the following steps: <OL> <BR /> <LI> Install KB4025334 on your server from the <A href="#" target="_blank"> Microsoft Update Catalog </A> and reboot. If you are running a Failover Cluster, this patch will need to be applied to all nodes in the cluster. </LI> <BR /> <LI> Run chkdsk in readonly mode (this is the default mode for chkdsk) </LI> <BR /> <LI> For potentially corrupted files, chkdsk will report something like the following <BR /> <BR /> <CODE> The total allocated size in attribute record (128, "") of file 20000000000f3 is incorrect. 
</CODE> <BR /> <BR /> where <B> 20000000000f3 </B> is the file id. Note all affected file ids. <BR /> </LI> <BR /> <LI> Use fsutil to look up the name of the file by its file id. This should look like the following: <BR /> <BR /> <CODE> <BR /> E:\myfolder&gt; fsutil file queryfilenamebyid e:\ 0x20000000000f3 <BR /> A random link name to this file is \\?\E:\myfolder\TEST.0 <BR /> </CODE> <BR /> <BR /> where <B> E:\myfolder\TEST.0 </B> is the affected file. </LI> <BR /> </OL> <BR /> <BR /> We're very sorry for the inconvenience this issue has caused. Please don't hesitate to reach out in the comment section below if you have any additional questions about KB4025334, and we'll be happy to answer. <BR /> <BR /> <BR /> </BODY></HTML> Wed, 10 Apr 2019 11:31:06 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/windows-server-2016-ntfs-sparse-file-data-deduplication-users/ba-p/426018 Will Gries 2019-04-10T11:31:06Z Storage Spaces Direct on Intel® Xeon® Scalable Processors https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-spaces-direct-on-intel-174-xeon-174-scalable-processors/ba-p/426016 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jul 11, 2017 </STRONG> <BR /> Hello, Claus here again. As you have probably noticed by now, we have a great ongoing collaboration with Intel. In this blog post we are going to look at Windows Server 2016 <A href="#" target="_blank"> Storage Spaces Direct </A> on Intel's latest and greatest hardware, which includes a new processor family, Intel® Xeon® Scalable Processors, an iWARP RDMA network adapter with the integrated Intel® Ethernet Connection X722, and Intel® Optane™ Solid State Drives. 
<BR /> <BR /> We use a 4-node cluster, each node configured with the following hardware (not including boot drive): <BR /> <UL> <BR /> <LI> Intel® Server System R2208WF </LI> <BR /> <LI> 2x Intel® Xeon® Platinum 8168 CPU (24 cores @ 2.7 GHz) </LI> <BR /> <LI> 128GiB DDR4 DRAM </LI> <BR /> <LI> 2x 375GB Intel® Optane SSD DC P4800X (NVMe SSD) </LI> <BR /> <LI> 4x 1.2TB Intel® SSD DC S3610 SATA SSD </LI> <BR /> <LI> Intel® Ethernet Connection X722 with 4x 10GbE iWARP RDMA </LI> <BR /> <LI> BIOS configuration <BR /> <UL> <BR /> <LI> C States disabled </LI> <BR /> <LI> BIOS performance plan </LI> <BR /> <LI> Turbo On </LI> <BR /> <LI> HT On </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> We deployed Windows Server 2016 Storage Spaces Direct and stood up <A href="#" target="_blank"> VMFleet </A> with: <BR /> <UL> <BR /> <LI> 4x 3-copy mirror CSV volumes </LI> <BR /> <LI> Cache configured for Read and Write </LI> <BR /> <LI> 24 VMs per node </LI> <BR /> <LI> Each VM rate limited to 7,500 IOPS (similar to Azure P40 disk) </LI> <BR /> </UL> <BR /> Each VM runs <A href="#" target="_blank"> DISKSPD </A> , with a 4K IO size at 90% read and 10% write, rate limited at 7,500 IOPS. This produces a total of ~720K IOPS (4 * 24 * 7,500). <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107518iA1268D5E12E18DEA" /> <BR /> <BR /> Read IO is served at about <STRONG> 80 microseconds </STRONG> ! This is significantly less than anything else we have seen before.&nbsp;Write IO is served at about <STRONG> 300 microseconds </STRONG> . The write latency is higher than read latency mostly due to network latency, as writes are mirrored in 3 copies to peer nodes in the cluster. 
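For reference, the aggregate figure quoted above falls out of simple multiplication; here is a quick Python sketch using the node count, VMs per node, per-VM rate limit, and read/write mix from this configuration:

```python
def fleet_iops(nodes, vms_per_node, iops_per_vm, read_fraction):
    """Total fleet IOPS plus the read/write split at steady state."""
    total = nodes * vms_per_node * iops_per_vm
    return total, total * read_fraction, total * (1 - read_fraction)

# 4 nodes x 24 VMs/node, each VM rate limited to 7,500 IOPS, 90% reads
total, reads, writes = fleet_iops(4, 24, 7_500, 0.90)
print(f"total:  {total:,} IOPS")  # 720,000 -- the ~720K quoted above
print(f"reads:  {reads:,.0f} IOPS")
print(f"writes: {writes:,.0f} IOPS")
```

The split matters for the latency discussion: only the ~10% of IOs that are writes pay the network round trips for three-way mirroring, which is why write latency sits around 300 microseconds while reads stay near 80.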
<BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107519i52426109BEB96C27" /> <BR /> <BR /> <BR /> <BR /> In addition, CPU consumption is less than 25%, which means there is plenty of headroom for applications to consume this storage performance. <BR /> <BR /> We are very excited to see these numbers and the value that the new generation of Intel Scalable Processors, Intel Optane DC SSD devices combined with the Intel Ethernet Connection X722 with 4x 10GbE iWARP RDMA adapter can deliver to our joint customers. Let me know what you think. <BR /> <BR /> Please also see the recorded <A href="#" target="_blank"> video </A> . <BR /> <BR /> Until next time <BR /> <BR /> Claus </BODY></HTML> Wed, 10 Apr 2019 11:31:01 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-spaces-direct-on-intel-174-xeon-174-scalable-processors/ba-p/426016 Claus Joergensen 2019-04-10T11:31:01Z SMB1 Product Clearinghouse https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/smb1-product-clearinghouse/ba-p/426008 <P><STRONG> First published on TECHNET on Jun 01, 2017 </STRONG> <BR />Hi folks, <A href="#" target="_blank" rel="noopener"> Ned </A> here again. This blog post contains all products requiring SMB1, where the vendor explicitly states this in their own documentation or communications, or where a customer has reported it and shown some degree of proof without vendor refutation. This list is <EM> not </EM> complete and you should never treat it as complete; check back often. All products arranged in alphabetical order, by vendor, by product, with a URL to their documentation stating SMB1 requirements. <STRONG> Vendor – Product - Documentation </STRONG> <BR /><BR /></P> <UL> <UL> <LI><STRONG> Aerohive </STRONG> - <I> HiveManager not affected, it does not use SMB.&nbsp;HiveOS 8.2r1 and later no longer requires SMB1 (available 28th December 2017). HiveOS versions prior to 8.2r1 are affected. 
All info here provided directly by Aerohive to MS. </I></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Alfresco </STRONG> - <EM> Alfresco (when not using WebDAV) </EM> - <A href="#" target="_blank" rel="noopener"> https://community.alfresco.com/thread/231880-smb2-smb3-server-support#comment-816590 </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Apple </STRONG> - <EM> Time Capsule </EM> - <A href="#" target="_blank" rel="noopener"> https://discussions.apple.com/thread/8106574 </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Applied Systems </STRONG> - <EM> TAM -&nbsp;Customer reported after contacting TAM Support - Vendor does not publicly document their requirement for SMB1<BR /><BR /><BR /></EM></LI> <LI><STRONG>Arris -&nbsp;</STRONG><EM>SURFboard (at least&nbsp;SBG7600AC2, possibly others) -&nbsp;Customer reported after contacting Arris Support - Vendor does not publicly document their requirement for SMB1&nbsp;</EM></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Aruba </STRONG> - <EM> Clearpass older than 6.6.7 </EM> <EM> (Aruba has resolved with patch; this entry will stay for a short while just to inform their customers to patch) </EM> - <A href="#" target="_blank" rel="noopener"> http://community.arubanetworks.com/t5/Security/ClearPass-Release-Announcements/m-p/303234#M32873 </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> ASUS </STRONG> - <EM> Wireless Routers with USB storage connection -&nbsp;Customer reported after contacting ASUS Support - Vendor does not publicly document their requirement for SMB1 </EM></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> AVM </STRONG> - <EM>Fritz!Box older than OS 7.20. 
Many Fritz!Box device versions have been updated to support SMB 2 &amp; 3 through the Fritz!OS 7.20 release -&nbsp;<A href="#" target="_blank">https://en.avm.de/service/fritzbox/fritzbox-7490/knowledge-base/publication/show/3327_SMB-versions-supported-by-the-FRITZ-Box/&nbsp;</A>&nbsp;&amp;&nbsp;<A href="#" target="_blank">FRITZ!OS 7.20 | AVM International</A></EM></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Axis Communications </STRONG> - <EM> Various security &amp; surveillance cameras, using firmware older than 5.8x. Axis' firmware list shows around half of all devices do not support at least firmware 5.8+: <A href="#" target="_blank" rel="noopener"> https://www.axis.com/global/en/support/firmware </A> </EM> <EM> -&nbsp;Customer reported after contacting Axis Support - Vendor does not publicly document their requirement for SMB1 </EM></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Barracuda </STRONG> - <EM> Load Balancer, perhaps other products that support backups (backups to SMB, using at least 6.0 firmware) </EM> - <EM> Customer reported after contacting Barracuda Support - Vendor does not publicly document their requirement for SMB1 </EM></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Barracuda </STRONG> - <EM> SSL VPN </EM> - <A href="#" target="_blank" rel="noopener"> https://campus.barracuda.com/product/sslvpn/article/SSLVPN/CreateNetworkPlace/ </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Barracuda </STRONG> - <EM> Web Security Gateway backups </EM> - <A href="#" target="_blank" rel="noopener"> https://community.barracudanetworks.com/forums.php?url=/topic/29561-backup-via-smb/ </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Belkin </STRONG> - <EM> all routers with USB storage connection </EM> - <EM> Microsoft confirmed with Belkin support </EM></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Bitdefender </STRONG> - <EM> Gravity Zone (Antivirus)&nbsp; -&nbsp;Customer reported after contacting 
Bitdefender Support - Vendor does not publicly document their requirement for SMB1 </EM></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Boxen </STRONG> - <EM> Boxen </EM> - <A href="#" target="_blank" rel="noopener"> https://www.opena.tv/allgemeine-image-informationen/37127-e2-boxen-und-das-windows-10-fall-creators-update.html?highlight=samba </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Buffalo </STRONG> - <EM> TeraStation current generation supports SMB2+, </EM> v <EM> arious LinkStation models may not allow SMB2 configuration through UI, but do through Samba CONF modification </EM> - <A href="#" target="_blank" rel="noopener"> http://forums.buffalotech.com/index.php?topic=24628.msg88050#msg88050 </A> , <A href="#" target="_blank" rel="noopener"> http://forums.buffalotech.com/index.php?topic=24630.msg88052#msg88052 </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Carestream Health </STRONG> - <EM> SoftDent -&nbsp;Customer reported after contacting Carestream Support - Vendor does not publicly document their requirement for SMB1 </EM></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Canon </STRONG> <STRONG> (&amp; Océ) </STRONG> - <EM> Printers via "print to share" </EM> - <EM> NOTE: customer stated 6/5/2018 that Canon onsite technicians can now come install updated firmware to support SMB2 and 3 (This is stated nowhere on Canon's website). Please contact Canon support for this option. </EM> - <A href="#" target="_blank" rel="noopener"> https://support.usa.canon.com/kb/index?page=content&amp;id=ART143573 </A> &amp; <A href="#" target="_blank" rel="noopener"> https://files.lfpp.csa.canon.com/media/Assets/PDFs/TSS/external/WF_PrintDrivers/Documentation/Oce_LF_Systems_Connectivity_information_for_Windows_environment_Administration_guide_en.GB.pdf</A><BR /><BR /><BR /></LI> <LI><STRONG>Canon -</STRONG><EM> imageRUNNER multifunction printers (at least C5235i models) - Gen2 and earlier are SMB1 only. Gen 2.5 supports SMB 2.x. 
Gen 3 supports SMB 2.x and 3.x.&nbsp;&nbsp;<I>Customer reported after contacting Canon, v</I>endor does not publicly document their requirement for SMB1<BR /><BR /></EM></LI> </UL> </UL> <UL> <UL> <LI><STRONG> Check Point Software Technologies </STRONG> <EM> - Security Gateway (Version R76, R77, R80) &amp;&nbsp;Mobile Access / SSL VPN (all versions) </EM> - <A href="#" target="_blank" rel="noopener"> https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&amp;solutionid=sk114859&amp;partition=Advanced&amp;product=Security </A> &amp; <A href="#" target="_blank" rel="noopener"> https://supportcenter.checkpoint.com/supportcenter/portal? </A> <A href="#" target="_blank" rel="noopener"> eventSubmit_doGoviewsolutiondetails=&amp;solutionid=sk112202&amp;partition=General&amp;product=Mobile </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Cisco </STRONG> - <I> Web Security Appliance/WSAv - </I> <A href="#" target="_blank" rel="noopener"> https://bst.cloudapps.cisco.com/bugsearch/bug/CSCuo70696/?referring_site=bugquickviewredir </A> &amp; <A href="#" target="_blank" rel="noopener"> https://supportforums.cisco.com/discussion/13295496/wsav-supports-smbv1-only </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Cisco </STRONG> - <I> Wide Area Application Services/WAAS 5.0 &amp; older </I> - <A href="#" target="_blank" rel="noopener"> http://www.cisco.com/c/en/us/td/docs/app_ntwk_services/waas/waas/v501/release/notes/ws501xrn.html </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Cisco </STRONG> - <EM>Firesight IDS (formerly Sourcefire), RSA Authentication Manager, Firepower Management Center</EM> - <A href="#" target="_blank" rel="noopener"> https://community.rsa.com/docs/DOC-79194 </A> , <A href="#" target="_blank" rel="noopener"> https://bst.cloudapps.cisco.com/bugsearch/bug/CSCvd10403/?referring_site=bugquickviewredir</A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG>Cisco</STRONG> -&nbsp;<EM>Identity 
Services Engine (ISE) 1.2 and older(SMB "2.0" [probably meaning SMB 2.02, there is no such thing as 2.0] supported on ISE 1.3 and later</EM> -&nbsp;<A href="#" target="_self">https://www.cisco.com/c/en/us/td/docs/security/ise/2-2/admin_guide/b_ise_admin_guide_22/b_ise_admin_guide_22_chapter_01000.html</A> and&nbsp;<A href="#" target="_blank" rel="noopener">https://community.cisco.com/t5/policy-and-access/what-smb-version-the-cisco-ise-v1-1-3-uses-to-communicate-with/td-p/3231596</A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Citrix </STRONG> - <EM> ELM with DFS Namespaces </EM> - <A href="#" target="_blank" rel="noopener"> https://support.citrix.com/article/CTX227613 </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> ClearSwift </STRONG> - <EM> Secure Web Gateway 4.6 (domain join) </EM> - <EM> Customer reported after contacting ClearSwift Support - Vendor does not publicly document their requirement for SMB1 </EM></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> D-Link </STRONG> - <EM> DNS-series NAS - <A href="#" target="_blank" rel="noopener"> https://support.dlink.com/FAQView.aspx?f=gWJF0HOEoYThPqaf8T4XYg%3d%3d </A> </EM></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> DataAccess </STRONG> - <EM> legacy Dataflex embedded DB (vendor also offers many alternative ways to not need SMB1) - </EM> <A href="#" target="_blank" rel="noopener"> http://www.dataaccess.com/KBasePublic/Files/2476.Tuning%20Microsoft%20Networks%20for%20the%20Legacy%20Embedded%20Database_PDF_FMT.PDF </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> DellEMC </STRONG> - <EM> All VNX2 Systems older than 8.1.9.211 (domain join) - <A href="#" target="_blank" rel="noopener"> https://support.emc.com/kb/500036 </A> </EM></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> DellEMC </STRONG> - <EM> Versions older than iDRAC 9. 
</EM> <EM> <STRONG> iDRAC9 and later support SMB 2 through SMB 3.1.1 - </STRONG> </EM> <A href="#" target="_blank" rel="noopener"> http://www.dell.com/support/home/us/en/19/drivers/driversdetails?driverId=74GHJ </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Denon - </STRONG> <EM> HEOS wireless music system - <A href="#" target="_blank" rel="noopener"> http://rn.dmglobal.com/usmodel/Enable_SMB_1.0_CIFS_File_Sharing_Support_Windows-10-Build-1709.pdf </A> &amp; <A href="#" target="_blank" rel="noopener"> http://rn.dmglobal.com/usmodel/Procedure_to_Setup_Your_PC_to_Share_a_File_or_Folder_on_Your_Network_For_Windows7-10.pdf </A> </EM></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Drobo </STRONG> - <EM> All FS devices. Drobo N firmware released in February 8, 2019 version 4.1.4 first offers option to disable SMB1 ( <A href="#" target="_blank" rel="noopener"> http://files.drobo.com/webrelease/5N2/Release-Notes-firmware-5N2-4.1.4.pdf </A> ) - </EM> <A href="#" target="_blank" rel="noopener"> https://supportportal.drobo.com/retrieve/s3/knowledge/AA/AA-01304.html </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Egnyte </STRONG> - <EM> Storage Connect </EM> - <A href="#" target="_blank" rel="noopener"> https://helpdesk.egnyte.com/hc/en-us/articles/201639514-Storage-Connect-Overview </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Epicor </STRONG> - <EM> At least Epicor ERP&nbsp;10.2.1 and older - <I> Customer reported after contacting Epicor &amp; showing management UI screenshot to MS - Vendor does not publicly document their requirement for SMB1. </I> </EM></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Epson </STRONG> - <EM> WF-3640, WF-5620, perhaps others (when accessing memory card interface) - <I> Customer reported after contacting Epson &amp; showing SMB1 error screenshot to MS - Vendor does not publicly document their requirement for SMB1. 
</I> </EM></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> F5 </STRONG> - <I> RDP client gateway, Microsoft Exchange Proxy - </I> <A href="#" target="_blank" rel="noopener"> https://support.f5.com/csp/article/K55889450</A><BR /><BR /><BR /></LI> <LI><STRONG>F5</STRONG> -&nbsp;<EM>BIG-IQ system -&nbsp;</EM><A href="#" target="_self">https://support.f5.com/csp/article/K15612</A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Facet Corp </STRONG> - <EM>FacetWIN</EM> - <A href="#" target="_blank" rel="noopener"> https://www.facetcorp.com/pdf/FacetWin_Windows_7_8_10_smb.pdf </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> FAST </STRONG> - <EM> LTA Silent Cubes -&nbsp;Customer reported after contacting FAST Support - Vendor does not publicly document their requirement for SMB1<BR /><BR /></EM></LI> <LI><STRONG>Fred</STRONG> <STRONG>Health</STRONG> - <EM>Free Dispense (not only requires SMB1 but requires you to disable SMB2+) -&nbsp;<A href="#" target="_self">https://help.fredhealth.com.au/media/p/13379.aspx</A> &amp;&nbsp;<A href="#" target="_self">https://help.fredhealth.com.au/media/p/13351.aspx</A></EM></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> FreeBSD </STRONG> - <EM> smbfs </EM> - <A href="#" target="_blank" rel="noopener"> https://people.freebsd.org/~bp/smben.html </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Forcepoint (Raytheon) </STRONG> - <EM> "some Forcepoint products",&nbsp;Content Gateway proxy authentication,&nbsp;ForcePoint DLP (version 8.3 or lower; <STRONG> DLP version 8.4 or later </STRONG> <STRONG> no longer needs SMB1 </STRONG> ) - </EM> <A href="#" target="_blank" rel="noopener"> https://support.forcepoint.com/KBArticle?id=000012832 </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Fujitsu </STRONG> - <EM> FX devices <EM> DMP-X and below (Herakles) do not support SMB2. DMP-XI-1a (Seito, Herakles2) and later support SMB2. 
Customer reported after contacting Fujitsu Support - Vendor does not publicly document their requirement for SMB1 </EM> </EM></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Hitachi - </STRONG> <I> Hitachi Data Ingestor 6.4.0 and older (HDI 6.4.2 DOES support SMB2+): </I> <A href="#" target="_blank" rel="noopener"> https://community.hds.com/thread/12170-smb-302-supported-on-hdi </A> , <A href="#" target="_blank" rel="noopener"> https://knowledge.hds.com/Documents/Storage/Data_Ingestor </A> &amp; <A href="#" target="_blank" rel="noopener"> https://knowledge.hds.com/Documents/Storage/Data_Ingestor/6.4.2/Release_notes/Hitachi_Data_Ingestor_6.4.2-00_Release_Notes </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> <I> </I> HP </STRONG> - <EM> Various printers (many do support SMB2) </EM> - <A href="#" target="_blank" rel="noopener"> http://h10032.www1.hp.com/ctg/Manual/c05547920 </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> HPE </STRONG> - <EM> ArcSight (Legacy Unified Connector, not latest version) </EM> - <A href="#" target="_blank" rel="noopener"> https://community.saas.hpe.com/t5/ArcSight-Connectors/SmartConnector-for-Microsoft-Windows-Event-Log-Native/ta-p/1585123?attachment-id=59177 </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> HPE </STRONG> - <EM> StoreOnce Software supports SMB2+, however not clear which version added SMB2+ support (under investigation); For completeness only, 3.14.x </EM> <EM> + makes SMB2 default setting </EM> <A href="#" target="_blank" rel="noopener"> https://blogs.technet.microsoft.com/filecab/bb897-91004_3-16-3_rpm_rn/</A><BR /><BR /></LI> <LI><STRONG>Huawei</STRONG> - <EM>TalkTalk routers with USB storage, e.g. 
HG633 and likely others -&nbsp;</EM><A href="#" target="_self">https://community.talktalk.co.uk/t5/Broadband/USB-storage-device-on-HG633/m-p/2553103</A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> IBM </STRONG> - <EM> NetServer&nbsp;V7R2 or below - </EM> <A href="#" target="_blank" rel="noopener"> http://www-01.ibm.com/support/docview.wss?uid=nas8N1011878 </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> IBM </STRONG> - <EM> QRadar Vulnerability Manager </EM> 7.2.x or below (7.3 has been updated) - <A href="#" target="_blank" rel="noopener"> http://www-01.ibm.com/support/docview.wss?uid=swg22004178 </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Ignite Technologies </STRONG> - <EM> SenSage AP (Windows Retriever) <EM> does not support SMB2.&nbsp; Customer reported after contacting Fujitsu Support - Vendor does not publicly document their requirement for SMB1 </EM> </EM></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Imprivata - </STRONG> <EM> Imprivata </EM> <I> OneSign versions older than 5.5 (March 5, 2018 release) - <A href="#" target="_blank" rel="noopener"> https://imprivata.force.com/servlet/fileField?id=0BE34000000ThuZ </A> </I></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Infoblox </STRONG> - <EM> NIOS versions older than 8.2.0 (8.2.0 support SMB2+) </EM> - <A href="#" target="_blank" rel="noopener"> https://community.infoblox.com/t5/DNS-DHCP-IPAM/Infoblox-Integration-with-Microsoft-Windows-DNS/m-p/10900/highlight/true#M2222 </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Infusion Business Software </STRONG> - <EM> Infusion (requires disabling SMB2) </EM> - <A href="#" target="_blank" rel="noopener"> https://www.infusionsoftware.co.nz/support/download-documentation/manuals-support-notes/infusion-software-upgrade-notes/streamFile </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Kodi </STRONG> - <EM> Kodi V17 and older (at least) </EM> - <A href="#" target="_blank" rel="noopener"> 
https://forum.kodi.tv/showthread.php?tid=314269&amp;highlight=smb1 </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Konica Minolta </STRONG> - <EM> Bizhub </EM> , <EM> C284e series, C3350 series, likely others still for sale - </EM> <EM> Customer reported after contacting Konica Minolta Support - Vendor does not publicly document their requirement for SMB1. </EM></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Kyocera </STRONG> - <EM> Some models do support SMB2+ when running the latest firmware; others require SMB1. Vendor maintains a complete list available to their customers. Contact vendor via </EM> <A href="#" target="_blank" rel="noopener"> http://www.kyoceradocumentsolutions.com/support/ </A> . <EM> Vendor does not make the list public </EM></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Lexmark </STRONG> - <EM> Firmware&nbsp;eSF 2.x &amp; eSF 3.x MFPs (scan to network) </EM> - <A href="#" target="_blank" rel="noopener"> http://support.lexmark.com/index?page=content&amp;id=FA716&amp;locale=en&amp;userlocale=EN_US </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Linksys </STRONG> - <EM> All routers with USB storage - Microsoft confirmed with Linksys support. 
</EM></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Linux Kernel </STRONG> - <EM> CIFS client 2.5.42 to 3.5.x (3.7 added first SMB2 client implementation) </EM> - <A href="#" target="_blank" rel="noopener"> https://wiki.samba.org/index.php/LinuxCIFSKernel </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI> <DIV><STRONG>Servicedesk Manage Engine </STRONG> - <EM> ServiceDesk Plus in versions older than 9.3.11 (build 9311) (single sign on feature) </EM> - <A href="#" target="_blank" rel="noopener"> https://www.manageengine.com/products/service-desk/readme-9.3.html </A> &amp; <A href="#" target="_blank" rel="noopener"> https://forums.manageengine.com/topic/smb-v1 </A></DIV> </LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> McAfee </STRONG> - <EM> Rogue Detection System (RDS), all versions (required when scanning remote systems on a subnet that do not themselves have McAfee products installed) </EM> - <EM> Vendor does not publicly document their requirement for SMB1 </EM> <EM> but confirmed with audit logs. </EM></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> McAfee </STRONG> - <I> Web Gateway </I> - <A href="#" target="_blank" rel="noopener"> https://kc.mcafee.com/corporate/index?page=content&amp;id=KB89350 </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> McKesson/Change Healthcare </STRONG> - <EM> Portico Provider Manager (7.1 and older at least) </EM> -&nbsp;Customer reported after contacting McKesson Support - <EM> Vendor does not publicly document their requirement for SMB1, but stated that upgrading to latest version would remove need for SMB1. Version not stated. 
</EM></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Microsoft </STRONG> - Windows XP, Windows Server 2003 (and older), Windows Embedded Standard 2009</LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Mobotix </STRONG> - <EM> various products </EM> - <A href="#" target="_blank" rel="noopener"> https://www.mobotix.com/eng_GB/Support/User-Forum/Installation-Network/Windows-servers-%253E-SMB1-end-of-life-%253E-THIS-BEAKS-MOBOTIX-FUNCTIONALITY </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Multi-Tech Systems </STRONG> - <I> Faxfinder </I> -&nbsp;Customer reported after contacting Multi-Tech Support - <EM> Vendor does not publicly document their requirement for SMB1 </EM></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> MYOB </STRONG> - <EM> Accountants Office &amp; Accountants Enterprise (states requirement for disabling opportunistic locking, i.e. SMB1 behavior option) </EM> - <A href="#" target="_blank" rel="noopener"> https://www.myob.com/au/accountants-and-partners/support/minimum-system-requirements </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> NetApp </STRONG> - <EM> Versions of ONTAP prior to 8.3.2P5, 9.0P1 &amp; 9.1 require SMB1 for domain join (not client connections). 
ONTAP 8.3.2P5, 9.0P1, 9.1 can instead utilize SMB2 for domain join as well as client connections via SMB2 &amp; 3, and ONTAP 9.2 allows for complete disabling of any SMB1 connections </EM> - <A href="#" target="_blank" rel="noopener"> http://mysupport.netapp.com/NOW/cgi-bin/bol?Type=Detail&amp;Display=786189 </A> &amp; <A href="#" target="_blank" rel="noopener"> https://averageguyx.blogspot.com/2017/06/does-ontap-need-smb1-no.html?m=1 </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> NetGear </STRONG> - <I> ReadyNAS running less than OS6,&nbsp;RAIDiator </I> - <A href="#" target="_blank" rel="noopener"> https://community.netgear.com/t5/Using-your-ReadyNAS/SMB-1-0-Given-Wanna-Cry/m-p/1283738#M129977 </A> &amp; <A href="#" target="_blank" rel="noopener"> https://kb.netgear.com/24923/ReadyNAS-OS-6-SMB-Plus-App </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Nexenta </STRONG> - <EM> NexentaStor - SMB1 required for domain join for version 4.0.4-FP5 and older, first resolved in version 4.0.5-FP3 </EM></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG>NexGen Connect (Mirth Connect)</STRONG> - <EM>Requires SMB1 for&nbsp;delivery mechanism</EM>&nbsp;<A href="#" target="_blank" rel="noopener">https://www.nextgen.com/products-and-services/integration-engine</A>&nbsp;&amp;&nbsp;<A href="#" target="_blank" rel="noopener">http://www.mirthcorp.com/community/issues/browse/MIRTH-4239</A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG>Nimble (HPE) </STRONG> - <EM> Nimble OS 3.1 (and older) domain join </EM> - <A href="#" target="_blank" rel="noopener"> https://connect.nimblestorage.com/people/rdm/blog/2016/03/01/nimble-os-30-active-directory-integration?commentID=2479#comment-2479</A><BR /><BR /></LI> <LI><STRONG>Nintendo</STRONG> - <EM>Nintendo 3DS - Requires SMB1 for&nbsp;microSD management -&nbsp;</EM><EM>&amp;&nbsp;</EM><A href="#" 
target="_self">https://en-americas-support.nintendo.com/app/social/questions/detail/qid/46237/~/trouble-with-new-3ds-microsd-management/comment/16146</A>&nbsp;&amp;&nbsp;<A href="#" target="_self">https://www.reddit.com/r/3DS/comments/7fpvgx/windows_cannot_access_3ds/drx6wfk/ </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> NVIDIA </STRONG> - <EM> Shield line of products (note: the latest SHIELD TV update adds SMB3 client support, allowing Shield devices to connect to SMB3 servers) </EM> - <A href="#" target="_blank" rel="noopener"> https://forums.geforce.com/default/topic/1020382/?comment=5194685 </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Oki </STRONG> - <EM> Various multifunction printers that support print to share </EM> - <A href="#" target="_blank" rel="noopener"> http://my.okidata.com/idocs2.nsf/2a6b07dbf414dda9852572b100580bbe/1f6b01dcdfd8294885257325006d62ee/$FILE/C3530-CIFS.pdf </A> &amp; <A href="#" target="_blank" rel="noopener"> https://okidata-ja.custhelp.com/euf/assetshttps://techcommunity.microsoft.com/images/answers/4074/MC361%20&amp;%20MC561%20Scan%20to%20Shared%20Folder%20-%20Config%20Tool.pdf </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Open GI </STRONG> - <EM> ICP </EM> - <EM> Customer reported after contacting Open GI Support - Vendor does not publicly document their requirement for SMB1. 
</EM></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Oracle </STRONG> - <EM> Solaris 11.3 and older </EM> - <A href="#" target="_blank" rel="noopener"> http://docs.oracle.com/cd/E86824_01/html/E54775/smb-4.html </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Pulse Secure </STRONG> - <EM> PCS devices running 8.1R9 / 8.2R4 and below or PPS devices running 5.1R9 / 5.3R4 and below; 9X and later support SMB2+ </EM> - <A href="#" target="_blank" rel="noopener"> https://kb.pulsesecure.net/articles/Pulse_Secure_Article/KB40598 </A> &amp; <A href="#" target="_blank" rel="noopener"> https://kb.pulsesecure.net/articles/Pulse_Secure_Article/KB40602/?q=smb&amp;l=en_US&amp;fs=Search&amp;pn=1&amp;atype= </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> QNAP - </STRONG> <EM> all storage devices using firmware lower than 4.1 - </EM> <A href="#" target="_blank" rel="noopener"> https://www.qnap.com/en-us/support/con_show.php?cid=11 </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Rapid7 </STRONG> - <EM> Various products within suite (some components under some circumstances - the forums do not mention SMB1 at all, but developers and customers confirm some SMB1 usage) </EM> - <A href="#" target="_blank" rel="noopener"> https://github.com/rapid7/ruby_smb </A> &amp; <A href="#" target="_blank" rel="noopener"> https://twitter.com/thelightcosine/status/895682178713100289 </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> RedHat </STRONG> - <EM> RHEL 5, RHEL 6 domain join; earliest SMB2+ CIFS client documented is in RedHat 7.2 </EM> ( <A href="#" target="_blank" rel="noopener"> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/7.2_Release_Notes/file_systems.html </A> ) <EM> ; RedHat server provide by Samba, see Samba note below </EM> - <A href="#" target="_blank" rel="noopener"> https://access.redhat.com/solutions/3037961 </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Ricoh </STRONG> <EM> <STRONG> 
(Ricoh/Savin/Gestetner/Lanier) </STRONG> </EM> - any printers/MFPs/etc. <STRONG> not listed </STRONG> as covered by October 2017 firmware here: <A href="#" target="_blank" rel="noopener"> https://www.ricoh.com/products/mfp20170727_1.html </A>. Contact Ricoh for firmware download and install details.</LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> RSA </STRONG> - <EM> Authentication Manager Server </EM> - <A href="#" target="_blank" rel="noopener"> https://community.rsa.com/thread/191171 </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Samba </STRONG> - <EM> versions older than 3.5.0 (note: all <STRONG> supported </STRONG> versions of Samba support SMB2+, see </EM> <A href="#" target="_blank" rel="noopener"> https://wiki.samba.org/index.php/Samba_Release_Planning#Discontinued </A> ) - <A href="#" target="_blank" rel="noopener"> https://wiki.samba.org/index.php/Samba_3.6_Features_added/changed#SMB2_support </A> &amp; <A href="#" target="_blank" rel="noopener"> https://wiki.samba.org/index.php/Samba_3.5_Features_added/changed#Protocol_changes</A>. Starting in Samba 4.11.0, SMB1 is disabled by default, see&nbsp;<A href="#" target="_blank" rel="noopener">https://github.com/samba-team/samba/blob/59cca4c5d699be80b4ed22b40d8914787415c507/WHATSNEW.txt</A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Samba </STRONG> - <EM> JCIFS SMB client </EM> - <A href="#" target="_blank" rel="noopener"> https://lists.samba.org/archive/jcifs/2013-December/010123.html </A> &amp; <A href="#" target="_blank" rel="noopener"> https://jcifs.samba.org/ </A> . 
Note: there are several open source and commercial alternatives for developers to call upon - for instance, jcifs-ng - <A href="#" target="_blank" rel="noopener"> https://github.com/AgNO3/jcifs-ng/ </A> , hierynomus/smbj - <A href="#" target="_blank" rel="noopener"> https://github.com/hierynomus/smbj </A> ,&nbsp;Visuality - <A href="#" target="_blank" rel="noopener"> https://visualitynq.com/products/jnq-java-smb-client </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Seagate </STRONG> - <EM> Seagate Central -&nbsp;Confirmed by customer conversation with vendor </EM></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Sharp </STRONG> - <EM> Subset of MFP printers </EM> <EM> (many do support SMB2 and 3) </EM> - <A href="#" target="_blank" rel="noopener"> https://msdnshared.blob.core.windows.net/media/2017/06/sharp2017.pdf </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> ShoreTel </STRONG> - <EM> ShoreTel Server </EM> - <EM> Customer reported after contacting ShoreTel Support - Vendor does not publicly document their requirement for SMB1. </EM></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Sonos </STRONG> - <I> Wireless speakers </I> - <A href="#" target="_blank" rel="noopener"> https://en.community.sonos.com/setting-up-sonos-228990/sonos-support-for-smb-20-protocol-6739642/index1.html </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Sony </STRONG> - <EM> HAP-S1 and HAP-Z1ES High-Resolution Audio HDD players </EM> - <I> Customer reported after contacting Sony support - Vendor does not publicly document their requirement for SMB1. 
</I></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Sophos </STRONG> - <I> Sophos has now updated all products to no longer require SMB1, as long as you update to&nbsp;Sophos XG Firewall 16.05 MR6 (16.05.6.266), Sophos UTM 9.5 MR2 (9.502), Sophos Web Appliance 4.3.3, Sophos Anti-Virus for Linux 9.13.2, Sophos Anti-Virus for vShield 2.1.10, Sophos for Virtual Environments 1.0.1 </I> - <A href="#" target="_blank" rel="noopener"> https://community.sophos.com/kb/en-us/126757 </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> SUSE </STRONG> - <EM> SUSE Linux Enterprise Server 11 and older </EM> (note: 10 and older versions are unsupported, regardless) - <A href="#" target="_blank" rel="noopener"> https://www.suse.com/support/kb/doc/?id=7019892 </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Synology </STRONG> - <I> Should support SMB3 and/or SMB2 (but ensure not set to SMB1 maximum via </I> <A href="#" target="_blank" rel="noopener"> https://www.synology.com/en-us/knowledgebase/DSM/help/DSM/AdminCenter/file_winmacnfs_win </A> )</LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Thecus </STRONG> - <I> At least models N2310/N4310 &amp; N2520/N2560/N4520/N4560 </I> - <A href="#" target="_blank" rel="noopener"> https://thecus.kayako.com/Knowledgebase/Article/View/750/0/why-since-windows-10-updated-and-unable-to-access-n2310n4310--n2520n2560n4520n4560-share-folder </A> &amp; <A href="#" target="_blank" rel="noopener"> https://github.com/docksal/docksal/issues/382 </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Thomson Reuters </STRONG> - <EM> Some portions of CS Professional Suite might need oplocks disabled for troubleshooting purposes only, which requires SMB1 until RS3 provided the new leasing mode option; there is no pure requirement for SMB1 (note: Payroll CS, Trial Balance CS, and Write-Up CS no longer supported as of&nbsp;March 1, 2017. 
Replaced by Accounting CS and MyPay, see </EM> <A href="#" target="_blank" rel="noopener"> http://cs.thomsonreuters.com/ua/acct_pr/csa/cs_us_en/kb/transitioning-from-csa-to-acs.htm?product=csa) </A> - <A href="#" target="_blank" rel="noopener"> http://cs.thomsonreuters.com/ua/acct_pr/csa/cs_us_en/kb/how-to-disable-opportunistic-locking-or-file-caching.htm </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Tintri </STRONG> - <I> Tintri OS, Tintri Global Center </I> - <A href="#" target="_blank" rel="noopener"> https://knowledge.tintri.com/Internal/KB_Drafts/FAQ_-Technical_Service_Bulletin_Document_No._TSB-05242017-01_–_Reduced_Severity </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Toshiba </STRONG> - <EM> E-Studio MFP line,&nbsp;6508a -&nbsp;Customer reported after contacting Toshiba Support - Vendor does not publicly document their requirement for SMB1. </EM></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> TP-Link </STRONG> - <EM> Archer routers with firmware prior to Archer C7(US)_V2_180114, Published Date: 2018-01-17 (that firmware release adds SMB2 support, thanks TP-Link team!) 
- </EM> <A href="#" target="_blank" rel="noopener"> https://www.tp-link.com/us/download/Archer-C7_V2.html#Firmware </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Unitrends </STRONG> - <EM> Variety of products, see link - </EM> <A href="#" target="_blank" rel="noopener"> https://support.unitrends.com/UnitrendsBackup/s/article/ka840000000blnfAAA/000005618?t=1505917588676 </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> VMware </STRONG> - <I> VMware vCenter Server Appliance, VMware vRealize Automation Identity Appliance,&nbsp;vCenter Server Appliance 6.7 U2 backup via SMB -&nbsp;</I><A href="#" target="_self">https://kb.vmware.com/s/article/70646</A>&nbsp;&amp;&nbsp;<A href="#" target="_blank" rel="noopener">https://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&amp;docType=kc&amp;externalId=2134063&amp;sliceId=1&amp;docTypeID=DT_KB_1_1&amp;dialogID=479220377&amp;stateId=0 </A> ( <EM> note: steps to configure SMB2 for vCenter, at least on latest versions, until VMware updates their KB - </EM> <A href="#" target="_blank" rel="noopener"> https://virtualizationnation.com/2017/04/17/enabling-vcenter-server-appliance-vcsa-to-use-smb2/ </A> )</LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> VMware </STRONG> - Older than <I> ESXi 6.0, older than vCenter 6.0 Update 3c </I> - <A href="#" target="_blank" rel="noopener"> https://communities.vmware.com/message/2663902#2663902 </A> &amp; <A href="#" target="_blank" rel="noopener"> https://communities.vmware.com/message/2668266#2668266 </A> &amp; <A href="#" target="_blank" rel="noopener"> https://docs.vmware.com/en/VMware-vSphere/6.0/rn/vsphere-vcenter-server-60u3c-release-notes.html </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Western Digital </STRONG> - My Cloud (Home, Wireless, Mirror, EX2 lines), My Passport Wireless lines, WD Live TV - <A href="#" target="_blank" rel="noopener">https://support-en.wd.com/app/answers/detail/a_id/4155</A></LI> </UL> </UL> <P>&nbsp;</P> 
<UL> <UL> <LI><STRONG> Wolters Kluwer Financial Services (acquired Svenson) - </STRONG> Svenson Financial Reporting Products - <I> Customer reported after contacting support - Vendor does not publicly document their requirement for SMB1 </I></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Worldox </STRONG> - <EM> Worldox&nbsp;GX3 DMS (SMB1 recommended but supports SMB2 under some circumstances; note that GX3 is end of life, per vendor) </EM> - <A href="#" target="_blank" rel="noopener"> https://knowledgebase.worldox.com/wp-content/uploads/2015/09/Worldox-and-SMB-White-Paper.pdf </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Xerox </STRONG> - <EM> SMB Workflow Scanning on printers not running ConnectKey Firmware, such as WC75XX models. Certain multifunction models </EM> - <A href="#" target="_blank" rel="noopener"> http://forum.support.xerox.com/t5/Copying-Faxing-Scanning/Xerox-Machines-and-SMBv2-V3-Scanning-Support/td-p/204802/highlight/true/page/2 </A> &amp; <A href="#" target="_blank" rel="noopener"> https://www.xerox.com/download/security/white-paper/1bcfc-55251eec62dd0/Xerox-Product-SMB-Supported-Versions.pdf</A><BR /><BR /></LI> <LI><STRONG>Zappiti</STRONG> - <EM>Media Player -&nbsp;</EM><A href="#" target="_blank" rel="noopener">https://zappiti.uservoice.com/knowledgebase/articles/1830778--network-smb1-sharing-protocol-on-windows-10</A>&nbsp; &amp; <A href="#" target="_self">https://zappiti.uservoice.com/knowledgebase/articles/1912693--network-transfer-smb1-protocol-on-windows-10</A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Zebra </STRONG> - <EM> Zebra Label Printers,&nbsp;ZPL formats, &amp; Eltron/EPL formats -&nbsp;&nbsp;Customer reported after contacting Zebra Support - Vendor does not publicly document their requirement for SMB1. 
</EM></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Zendesk </STRONG> - Infusion Business Software - <A href="#" target="_blank" rel="noopener"> https://infusionsoftware.zendesk.com/hc/en-us/articles/360000193095-SMB1-on-Windows-10-Build-1803 </A></LI> </UL> </UL> <P>&nbsp;</P> <UL> <UL> <LI><STRONG> Zyxel </STRONG> - <EM> At least NSA320S, NAS325, NAS326,&nbsp;XMG3512-B10A, VMG8924-B10A on at least firmware </EM> - <A href="#" target="_blank" rel="noopener"> https://homeforum.zyxel.com/discussion/comment/1775#Comment_1775 </A></LI> </UL> </UL> <P><BR /><BR />To update this list, please email <A href="https://gorovian.000webhostapp.com/?exam=mailto:StillNeedsSMB1@microsoft.com" target="_blank" rel="noopener"> StillNeedsSMB1@microsoft.com </A> or tweet <A href="#" target="_blank" rel="noopener"> @nerdpyle </A> with hashtag <A href="#" target="_blank" rel="noopener"> #StillNeedsSMB1 </A> . Adding a product to this list ideally requires a direct quote or documentation from the vendor of that product, including their website, knowledgebase, support forums, or other vendor channels; third-party forums are not enough to qualify. Alternatively, if your vendor has responded to you in a support case that SMB1 is required but does not provide public documentation, products will be added case-by-case. Consult your vendor for updates and newer product versions that support at least SMB 2.02. If you are a vendor and wish to report requirements for SMB1 or if information above has changed, email <A href="https://gorovian.000webhostapp.com/?exam=mailto:StillNeedsSMB1@microsoft.com" target="_blank" rel="noopener"> StillNeedsSMB1@microsoft.com </A> . There are vendors who are not publishing their SMB1 requirements. <EM> It is up to you, their customer, to have them publish this information - Microsoft cannot make them do so. </EM> If a vendor does not state whether they require SMB1 but you believe they do, please contact that vendor directly. 
If you need assistance getting a vendor response, email <A href="https://gorovian.000webhostapp.com/?exam=mailto:StillNeedsSMB1@microsoft.com" target="_blank" rel="noopener"> StillNeedsSMB1@microsoft.com </A> and we will try our best to assist. <STRONG> <EM> Politeness works best; the person you are speaking to at a vendor is extremely unlikely to have put SMB1 into the product &amp; probably isn't any happier about it than you are! </EM> </STRONG> For more information on why using SMB1 is unsafe, see <A href="#" target="_blank" rel="noopener"> StopUsingSMB1 </A> . SMB1 has been deprecated for years and will be removed by default from many editions and SKUs of Windows 10 and Windows Server 2016 in the RS3 release. <BR /><BR /><STRONG> Important: </STRONG> if your vendor requires <EM> disabling SMB2 </EM> in order to force SMB1, they will also often require disabling oplocks. Disabling oplocks is not recommended by Microsoft, but is required by some older software, often due to using legacy database technology. Windows 10 RS3 and Windows Server 2016 RS3 now allow a special oplock override workaround for these scenarios - see <A href="#" target="_blank" rel="noopener"> https://twitter.com/NerdPyle/status/876880390866190336 </A> . This is only a workaround - just like disabling oplocks under SMB1 is only a workaround - and your vendor should update to not require it. 
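</P> <P>If you want to check where you stand before SMB1 is removed, recent Windows releases let you audit and disable SMB1 from an elevated PowerShell prompt. The following is only a sketch, assuming Windows 8.1 / Windows Server 2012 R2 or later; audit which clients still connect over SMB1 before you disable anything:</P> <PRE class="lia-indent-padding-left-30px"># Is the SMB1 server component enabled on this machine?<BR />Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol<BR /><BR /># Which clients are connected, and at which dialect? (Dialect 1.x means SMB1)<BR />Get-SmbSession | Select-Object ClientComputerName, Dialect<BR /><BR /># Once no SMB1 clients remain, disable SMB1 on the server<BR />Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force</PRE> <P>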
<BR /><BR />Be safe out there, <BR /><BR />- Ned Pyle, Principal Program Manager of the SMB protocol family at Microsoft</P> Thu, 28 Jan 2021 19:38:57 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/smb1-product-clearinghouse/ba-p/426008 Ned Pyle 2021-01-28T19:38:57Z Work Folders updates for Windows 10 version 1703, Android and iOS https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/work-folders-updates-for-windows-10-version-1703-android-and-ios/ba-p/426007 <P><STRONG> First published on TECHNET on May 31, 2017 </STRONG> <BR />We're excited to announce several improvements to the Work Folders clients for Windows 10 version 1703, Android and iOS: </P> <UL> <UL> <LI>Remote users can securely access their files on the Work Folders server using Azure Active Directory Application Proxy</LI> </UL> </UL> <UL> <UL> <LI>Improved single sign on experience (fewer authentication prompts) when using Azure Active Directory Application Proxy</LI> </UL> </UL> <UL> <UL> <LI>Group policy setting to manage the Work Folders directory location on Windows devices</LI> </UL> </UL> <P>For more details, please review the sections below. </P> <P>&nbsp;</P> <H3><STRONG> Azure Active Directory Application Proxy Support </STRONG></H3> <P>Applies to: Windows 10 version 1703, Android and iOS <BR /><BR />Work Folders supports using VPN, Web Application Proxy (WAP) or a third-party reverse proxy solution to enable remote users access to their files on the Work Folders server. These remote access solutions require expensive hardware or additional on-premises servers that need to be managed. <BR /><BR />Work Folders now supports using Azure AD Application Proxy to enable remote users to securely access their files on the Work Folders server. 
<BR /><BR /><STRONG> Benefits of using Azure AD Application Proxy </STRONG> <BR /><BR /></P> <UL> <UL> <LI>It's easier to manage and more secure than on-premises solutions because you don't have to open any inbound connections through your firewall.</LI> </UL> </UL> <UL> <UL> <LI>When you publish Work Folders using Azure AD Application Proxy, you can take advantage of the rich authorization controls and security analytics in Azure.</LI> </UL> </UL> <UL> <UL> <LI>Improved single sign-on experience: the&nbsp;Work Folders clients prompt less frequently for authentication.</LI> </UL> </UL> <P>To learn more about Azure AD Application Proxy, please see the following article: <A href="#" target="_blank" rel="noopener"> How to provide secure remote access to on-premises applications </A> <BR /><BR /></P> <P><STRONG>How to enable remote access to Work Folders using Azure Active Directory Application Proxy </STRONG> <BR />For more details on how to configure Work Folders access using Azure AD Application Proxy, please see the following blog: <A href="#" target="_blank" rel="noopener"> Enable remote access to Work Folders using Azure Active Directory Application Proxy </A> </P> <P>&nbsp;</P> <H3><STRONG> Token Broker Support <BR /></STRONG></H3> <P>Applies to: Windows 10 version 1703, Android and iOS <BR /><BR />A common complaint when using AD FS authentication is that the remote user is prompted for credentials every 8 hours if the device is not registered with the AD FS server. To reduce the frequency of credential prompts, you can enable the Keep Me Signed In (KMSI) <A href="#" target="_blank" rel="noopener"> feature </A>, but the maximum single sign-on period for a non-registered device is 7 days. To register the device, the user needs to use the Workplace Join <A href="#" target="_blank" rel="noopener"> feature </A> . 
<BR /><BR />To improve the user experience when using Azure AD Application Proxy, Work Folders now supports Token Broker, an authentication broker that supports device registration. When using Token Broker with Azure AD Application Proxy for remote access, the client device can be registered in Azure AD when configuring the Work Folders client. Once the device is registered, device authentication will be used to access the Work Folders server. <BR /><BR />Device registration provides the following benefits: <BR /><BR /></P> <UL> <UL> <LI>Improved single sign-on experience (fewer authentication prompts)</LI> </UL> </UL> <UL> <UL> <LI>Device-based conditional access</LI> </UL> </UL> <P>For more details on Azure Active Directory device registration, please see the following article on TechNet: <A href="#" target="_blank" rel="noopener"> Get started with Azure Active Directory device registration </A> </P> <P>&nbsp;</P> <P><BR /><STRONG> How to enable Token Broker </STRONG> <BR />To enable Token Broker on a Windows 10 version 1703 system, enable the "Enables the use of Token Broker for AD FS authentication" group policy setting, which is located under User Configuration\Administrative Templates\Windows Components\Work Folders. <BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 500px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107516iED283F6576F3DA00/image-size/large?v=v2&amp;px=999" role="button" /></span> <BR /><BR />For Android and iOS devices, Token Broker will be used automatically when using Azure AD Application Proxy. <BR /><BR /><STRONG> Note </STRONG> :&nbsp;Token Broker is currently supported when using Azure AD Application Proxy for remote access. Using Token Broker with AD FS authentication may be supported in a future update. 
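</P> <P>To confirm that a Windows 10 device actually completed Azure AD registration, you can run the built-in dsregcmd tool from a command prompt on the client. The exact output fields vary by Windows build, so treat this as a sketch:</P> <PRE class="lia-indent-padding-left-30px">dsregcmd /status<BR /><BR />:: In the output, look for fields such as:<BR />::   AzureAdJoined : YES      (device is joined to Azure AD)<BR />::   WorkplaceJoined : YES    (device is registered with Azure AD)</PRE> <P>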
<BR /><BR /><STRONG> How to register devices using the Work Folders client </STRONG> <BR />When Token Broker is enabled on a Windows client, the user will be prompted to register their device in Azure AD when configuring the Work Folders client. If the Work Folders client is managed via group policy, the device is automatically registered in Azure AD. <BR /><BR />For devices (Android and iOS), the device is automatically registered when configuring the Work Folders client. </P> <P>&nbsp;</P> <H3><STRONG> Managing Work Folders client directory location using group policy <BR /></STRONG></H3> <P>Applies to: Windows 10 version 1703 <BR /><BR />A common request when managing Work Folders clients via group policy is to configure the Work Folders client directory location. <BR /><BR /><STRONG> How to configure the Work Folder client directory location using group policy </STRONG> <BR />On Windows 10 version 1703, a group policy setting "Work Folders Local Path" has been added to configure the Work Folders client directory location. This group setting is located under User Configuration\Administrative Templates\Windows Components\Work Folders\Specify Work Folders settings. <BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 500px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107517i2EC17AC530DF3020/image-size/large?v=v2&amp;px=999" role="button" /></span> <BR /><BR /><STRONG> Note </STRONG> : The Work Folders Local Path group policy setting applies to Windows 10 version 1607 and Windows 10 version 1703 systems. 
If the value is not defined, the client directory will be located under %userprofile%\Work Folders.</P> <P><BR /><STRONG> Additional Resources </STRONG></P> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Work Folders documentation on TechNet</A></LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Work Folders blog on TechNet </A></LI> </UL> </UL> <P>&nbsp;</P> Mon, 25 May 2020 23:07:47 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/work-folders-updates-for-windows-10-version-1703-android-and-ios/ba-p/426007 Jeff Patterson 2020-05-25T23:07:47Z Enable remote access to Work Folders using Azure Active Directory Application Proxy https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/enable-remote-access-to-work-folders-using-azure-active/ba-p/425998 <P><STRONG> First published on TECHNET on May 31, 2017 </STRONG> <BR />We're excited to announce Work Folders now supports using Azure Active Directory Application Proxy to enable remote users to securely access their files on the Work Folders server.</P> <P><BR />Work Folders supports using VPN, Web Application Proxy (WAP) or a third-party reverse proxy solution to enable remote users access to their files on the Work Folders server. These remote access solutions require expensive hardware or additional on-premises servers that need to be managed. 
<BR /><BR /><STRONG> Benefits of using Azure AD Application Proxy</STRONG></P> <UL> <LI>It's easier to manage and more secure than on-premises solutions because you don't have to open any inbound connections through your firewall.</LI> <LI>When you publish Work Folders using Azure AD Application Proxy, you can take advantage of the rich authorization controls and security analytics in Azure.</LI> <LI>Improved single sign-on experience: the&nbsp;Work Folders clients prompt less frequently for authentication</LI> </UL> <P>To learn more about Azure Active Directory Application Proxy, please see the following article: <A href="#" target="_blank" rel="noopener"> How to provide secure remote access to on-premises applications </A></P> <P><BR />To enable Work Folders access using Azure AD Application proxy, please follow the steps below.</P> <P>&nbsp;</P> <H3><STRONG> Prerequisites </STRONG></H3> <P>Before you can enable Work Folders access using Azure AD Application Proxy, you need to have:</P> <UL> <UL> <LI>A <A href="#" target="_blank" rel="noopener"> Microsoft Azure AD basic or premium subscription </A> and an Azure AD directory for which you are a global administrator</LI> </UL> </UL> <UL> <UL> <LI>An Active Directory Domain Services forest with Windows Server 2012 R2 schema extensions</LI> </UL> </UL> <UL> <UL> <LI>Your on-premises Active Directory user accounts are synchronized to Azure AD using <A href="#" target="_blank" rel="noopener"> Azure AD Connect </A> <BR /><BR /> <UL> <UL> <LI>Note: <A href="#" target="_blank" rel="noopener"> Device writeback </A> should be enabled if using conditional access</LI> </UL> </UL> </LI> </UL> </UL> <UL> <UL> <LI>A Work Folders server running Windows Server 2012 R2 or Windows Server 2016 <BR /><BR /> <UL> <UL> <LI>See <A href="#" target="_blank" rel="noopener"> Deploying Work Folders </A> on TechNet to configure the Work Folders server and sync shares</LI> </UL> </UL> </LI> </UL> </UL> <UL> <UL> <LI>A server running Windows 
Server 2012 R2 or higher on which you can install the Application Proxy Connector</LI> </UL> </UL> <UL> <UL> <LI>A Windows 10 version 1703, Android or iOS client</LI> </UL> </UL> <H3>&nbsp;</H3> <H3><STRONG>Overview of the steps required to enable Work Folders access using Azure AD Application proxy </STRONG></H3> <P>High-level overview of the steps required:</P> <OL> <LI>Create a Work Folders proxy application in Azure AD and give users access.</LI> <LI>Create a Work Folders native application in Azure AD.</LI> <LI>Install the Application Proxy Connector on an on-premises server.</LI> <LI>Verify the Application Proxy Connector status.</LI> <LI>Verify the Work Folders server is configured to use Integrated Windows Authentication.</LI> <LI>Create an SPN for the Work Folders server.</LI> <LI>Configure constrained delegation for the App Proxy Connector server.</LI> <LI>Optional: Install the Work Folders certificate on the App Proxy Connector server.</LI> <LI>Optional: Enable Token Broker for Windows 10 version 1703 clients.</LI> <LI>Configure a Work Folders client to use the Azure AD App Proxy URL.</LI> </OL> <H3>&nbsp;</H3> <H3><STRONG>Create a Work Folders proxy application in Azure AD and give users access </STRONG></H3> <UL> <LI>Sign in to <STRONG> <A href="#" target="_blank" rel="noopener"> Azure </A> </STRONG> with your global administrator account.</LI> <LI>Select <STRONG> Azure Active Directory </STRONG> , click <STRONG> Switch Directory </STRONG> and then select the directory that will be used for the <STRONG> Work Folders proxy application </STRONG> .</LI> <LI>Click <STRONG> Enterprise applications </STRONG> and then click <STRONG> New application.</STRONG></LI> <LI>On the <STRONG> Categories page </STRONG> , click <STRONG> All </STRONG> and then click <STRONG> On-premises application</STRONG>.</LI> <LI>On the <STRONG style="font-family: SegoeUI, Lato, 'Helvetica Neue', Helvetica, Arial, sans-serif; background-color: #ffffff; color: #333333; font-size: 16px; 
white-space: normal;"> Add your own on-premises application </STRONG> page, enter the following information and then click <STRONG style="font-family: SegoeUI, Lato, 'Helvetica Neue', Helvetica, Arial, sans-serif; background-color: #ffffff; color: #333333; font-size: 16px; white-space: normal;"> Add </STRONG>:</LI> <UL> <LI><STRONG> Name </STRONG> = You can choose any name. For this example, we'll use Work Folders Proxy</LI> <LI><STRONG> Internal URL </STRONG> = <A href="#" target="_blank" rel="noopener">https://workfolders.domain.com</A> <UL> <LI>Note: This value should match the internal URL of your Work Folders server. If workfolders.domain.com is used for the internal URL, a workfolders CNAME record must exist in DNS.</LI> </UL> </LI> </UL> <UL> <LI><STRONG> External URL </STRONG> = The URL is auto-populated based on the application name but can be changed <UL> <LI>Note or write down the External URL. This URL will be used by the Work Folders client to access the Work Folders server.</LI> </UL> </LI> <LI><STRONG> Pre Authentication </STRONG> = Azure Active Directory</LI> </UL> </UL> <UL> <UL> <LI><STRONG> Translate URL in Headers </STRONG> = Yes</LI> </UL> </UL> <UL> <UL> <LI><STRONG> Backend Application Timeout </STRONG> = Default</LI> </UL> </UL> <UL> <UL> <LI><STRONG> Connector Group </STRONG> = Default</LI> </UL> </UL> <P class="lia-indent-padding-left-60px"><STRONG>Example </STRONG></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-60px"><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 440px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107509i50B12752BFACE67E/image-size/large?v=v2&amp;px=999" role="button" /></span></P> <UL> <LI>Click <STRONG> OK </STRONG> to dismiss the notification that no connectors are configured (this will be done in a later step).</LI> <LI>On the <STRONG> Work Folders Proxy </STRONG> enterprise application page, click <STRONG> Single sign-on</STRONG>.</LI> <LI>Change <STRONG> Mode </STRONG> to <STRONG> Integrated Windows 
Authentication </STRONG> .</LI> <LI>In the <STRONG> Internal Application SPN </STRONG> field, enter <STRONG> http/workfolders.domain.com </STRONG><BR /> <UL> <UL> <LI>Note: This value should match the FQDN of your Work Folders server</LI> </UL> </UL> </LI> <LI>Click <STRONG> Save </STRONG> to save the changes.</LI> <LI>On the <STRONG> Work Folders Proxy </STRONG> enterprise application page, click <STRONG> Users and groups</STRONG>.</LI> <LI>Click <STRONG> Add user </STRONG> , select the users and groups that can access the <STRONG> Work Folders proxy application </STRONG> and click <STRONG> Assign</STRONG>.</LI> </UL> <P><STRONG> Note </STRONG> : If you have multiple Work Folders servers, you need to create a proxy application for each Work Folders server (repeat the steps above).</P> <P>&nbsp;</P> <H3><STRONG> Create a Work Folders native application in Azure AD </STRONG></H3> <UL> <LI>In the <STRONG> <A href="#" target="_blank" rel="noopener"> Azure </A> </STRONG> portal, click <STRONG> Azure Active Directory </STRONG> and verify the directory that was used to create the <STRONG> Work Folders proxy application </STRONG> is selected.</LI> <LI>Click <STRONG> App registrations </STRONG> and then click <STRONG> New application registration</STRONG>.</LI> <LI>On the <STRONG> Create </STRONG> page, enter the following and click <STRONG> Create</STRONG>:</LI> <UL> <LI><STRONG> Name </STRONG> = You can choose any name. 
For this example, we'll use Work Folders Native</LI> </UL> </UL> <UL> <UL> <LI><STRONG> Application Type </STRONG> = Native</LI> </UL> </UL> <UL> <UL> <LI><STRONG> Redirect URI </STRONG> = <A href="#" target="_blank" rel="noopener"> https://168f3ee4-63fc-4723-a61a-6473f6cb515c/redir </A></LI> </UL> </UL> <P class="lia-indent-padding-left-60px"><STRONG>Example </STRONG></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-60px"><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 289px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107510i665ABA0C07F027C3/image-size/large?v=v2&amp;px=999" role="button" /></span></P> <UL> <LI>On the <STRONG> App registrations </STRONG> page, click <STRONG> Work Folders Native </STRONG> and then click <STRONG> Settings</STRONG>.</LI> <LI>Select <STRONG> Redirect URIS </STRONG> under Settings, add the following URIs one at a time and click <STRONG> Save</STRONG>:</LI> <UL> <LI><STRONG> msauth://code/x-msauth-msworkfolders%3A%2F%2Fcom.microsoft.workfolders </STRONG></LI> </UL> </UL> <UL> <UL> <LI><STRONG> x-msauth-msworkfolders://com.microsoft.workfolders </STRONG></LI> </UL> </UL> <UL> <UL> <LI><STRONG> msauth://com.microsoft.workfolders/Cb61uxHImS0Da29PGZyTdl9APp0%3D </STRONG></LI> </UL> </UL> <UL> <UL> <LI><STRONG> ms-appx-web://microsoft.aad.brokerplugin/* </STRONG><BR /> <UL> <LI>Replace * with the Application ID that is listed for the <STRONG> Work Folders Native </STRONG> application. 
If the Application ID is 3996076e-7ec2-4e87-a57f-5a69b7aa8865, the URI should be ms-appx-web://microsoft.aad.brokerplugin/3996076e-7ec2-4e87-a57f-5a69b7aa8865</LI> </UL> </LI> </UL> </UL> <P class="lia-indent-padding-left-60px"><STRONG>Example </STRONG></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-60px"><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 500px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107511iA30290FB35BD6AE0/image-size/large?v=v2&amp;px=999" role="button" /></span></P> <UL> <LI>Select <STRONG> Required permissions </STRONG> under Settings.</LI> <LI>Click <STRONG> Windows Azure Active Directory </STRONG> , grant the following permissions and click <STRONG> Save</STRONG>:</LI> <UL> <LI>Sign in and read user profile</LI> <LI>Access the directory as the signed-in user</LI> </UL> <LI>Under <STRONG> Required permissions </STRONG> , click <STRONG> Add </STRONG> , click <STRONG> Select an API </STRONG> , select <STRONG> Windows Azure Service Management API </STRONG> and click <STRONG> Select </STRONG> .</LI> <LI>On the <STRONG> Select Permissions </STRONG> for <STRONG> Windows Azure Service Management API </STRONG> page, grant the following permission, click <STRONG> Select </STRONG> and then click <STRONG> Done </STRONG> :</LI> </UL> <UL> <UL> <LI>Access Azure Service Management as organization users</LI> </UL> <LI>Under <STRONG> Required permissions </STRONG> , click <STRONG> Add </STRONG> , click <STRONG> Select an API </STRONG> , in the search box type <STRONG> Work Folders Proxy </STRONG> (or the name of the Work Folders proxy application).</LI> <LI>Click <STRONG> Work Folders Proxy </STRONG> and then click <STRONG> Select.</STRONG></LI> <LI>On the <STRONG> Select Permissions </STRONG> for <STRONG> Work Folders Proxy </STRONG> page, grant the following permission, click <STRONG> Select </STRONG> and then click <STRONG> Done</STRONG>:</LI> </UL> <UL> <UL> <LI>Access Work Folders 
Proxy</LI> </UL> </UL> <P><STRONG>Note</STRONG>: If you have multiple Work Folders servers and you created multiple Work Folders proxy applications, please repeat the steps above to give the Work Folders native application access to all Work Folders proxy applications.</P> <P>&nbsp;</P> <UL> <LI>Verify the following applications are listed under the <STRONG> Required Permissions </STRONG> section:</LI> </UL> <P class="lia-indent-padding-left-60px"><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 500px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107512i6F1D93C9066FFFAD/image-size/large?v=v2&amp;px=999" role="button" /></span></P> <P>&nbsp;</P> <P><STRONG>Note: </STRONG>The&nbsp;Work Folders native application must be set to public. Under <STRONG>Advanced settings</STRONG>, verify <STRONG>Treat application as a public client</STRONG> is set to <STRONG>Yes</STRONG>.</P> <P>&nbsp;</P> <H3><STRONG>Install the Application Proxy Connector on an on-premises server </STRONG></H3> <OL> <LI>In the <STRONG> <A href="#" target="_blank" rel="noopener"> Azure </A> </STRONG> portal, click <STRONG> Azure Active Directory </STRONG> and verify the directory that was used to create the <STRONG> Work Folders proxy application </STRONG> is selected.</LI> <LI>Click <STRONG> Application proxy</STRONG>.</LI> <LI>Click <STRONG> Enable application proxy </STRONG> if not enabled.</LI> <LI>Click <STRONG> Download connector </STRONG> and follow the steps to download the <STRONG> AADApplicationProxyConnectorInstaller.exe </STRONG> package.</LI> <LI>Copy the <STRONG> AADApplicationProxyConnectorInstaller.exe </STRONG> installer package to the server that will run the Application Proxy Connector.</LI> <LI>Run the <STRONG> AADApplicationProxyConnectorInstaller.exe </STRONG> installer package on the Application Proxy Connector server.</LI> <LI>Follow the instructions to complete the installation.</LI> </OL> <P>To learn more about 
the Application Proxy Connector and the outbound network ports that are required, please see the following article: <A href="#" target="_blank" rel="noopener"> Get started with Application Proxy and install the connector</A></P> <P>&nbsp;</P> <H3><STRONG> Verify the Application Proxy Connector status </STRONG></H3> <OL> <LI>In the <STRONG> <A href="#" target="_blank" rel="noopener"> Azure </A> </STRONG> portal, click <STRONG> Azure Active Directory </STRONG> and verify the directory that was used to create the <STRONG> Work Folders proxy application </STRONG> is selected.</LI> <LI>Click <STRONG> Application proxy </STRONG> .</LI> <LI>In the <STRONG> Connector groups and connectors </STRONG> section, verify the connector is listed and the status is <STRONG> Active </STRONG> .</LI> </OL> <H3>&nbsp;</H3> <H3><STRONG>Verify the Work Folders server is configured to use Integrated Windows Authentication </STRONG></H3> <P>The Work Folders server is configured by default to use Integrated Windows Authentication.</P> <P>&nbsp;</P> <P>To verify the server is configured properly, perform the following steps:</P> <OL> <LI>On the Work Folders server, open <STRONG> Server Manager</STRONG>.</LI> <LI>Click <STRONG> File and Storage Services </STRONG> , click <STRONG> Servers </STRONG> , and then select your <STRONG> Work Folders server </STRONG> in the list.</LI> <LI>Right-click the server name and click <STRONG> Work Folders Settings</STRONG>.</LI> <LI>Click <STRONG> Windows Authentication </STRONG> (if not selected) and click <STRONG> OK </STRONG> .</LI> </OL> <P><STRONG>Note </STRONG> : If the Work Folders environment is currently configured to use ADFS authentication, changing the authentication method from ADFS to Windows Authentication will cause existing users to fail to authenticate. 
To resolve this issue, the Work Folders clients will need to be re-configured to use the Work Folders proxy application URL, or you will need to create another Work Folders server to use with Azure AD Application Proxy.</P> <P>&nbsp;</P> <H3><STRONG> Create an SPN for the Work Folders server </STRONG></H3> <OL> <LI>On a <STRONG> domain controller </STRONG> , open an elevated <STRONG> command prompt</STRONG>.</LI> <LI>Type the following command and press <STRONG> Enter</STRONG>:</LI> </OL> <P class="lia-indent-padding-left-30px"><STRONG style="font-family: inherit;">setspn -S http/workfolders.domain.com servername</STRONG></P> <P class="lia-indent-padding-left-30px"><STRONG> Example </STRONG> : setspn -S http/workfolders.contoso.com 2016-wf</P> <P>&nbsp;</P> <P>In the example above, the FQDN for the Work Folders server is workfolders.contoso.com and the Work Folders server name is 2016-wf.</P> <P>&nbsp;</P> <P><STRONG> Note </STRONG> : The SPN value entered using the setspn command must match the SPN value entered in the <STRONG> Work Folders proxy application </STRONG> in the Azure portal.</P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px"><STRONG> Example </STRONG></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px"><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 368px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107513i84E65F0FC03711AC/image-size/large?v=v2&amp;px=999" role="button" /></span></P> <H3>&nbsp;</H3> <H3><STRONG>Configure constrained delegation for the App Proxy Connector server </STRONG></H3> <OL> <LI>On a <STRONG> domain controller </STRONG> , open <STRONG> Active Directory Users and Computers</STRONG>.</LI> <LI>Locate the computer the connector is running on (example: 2016-appc).</LI> <LI>Double-click the computer and then click the <STRONG> Delegation </STRONG> tab.</LI> <LI>Select <STRONG> Trust this computer for delegation to the specified services only </STRONG> and then
select <STRONG> Use any authentication protocol</STRONG>.</LI> <LI>Click <STRONG> Add </STRONG> , click <STRONG> Users or Computers </STRONG> , enter the Work Folders server name and click <STRONG> OK</STRONG>.</LI> <LI>In the <STRONG> Add Services </STRONG> window, select the <STRONG> SPN </STRONG> that was created and click <STRONG> OK</STRONG>.</LI> <LI>Verify the SPN was added and click <STRONG> OK </STRONG> .</LI> </OL> <P class="lia-indent-padding-left-30px"><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 471px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107514iD109F92210BEF3FF/image-size/large?v=v2&amp;px=999" role="button" /></span></P> <H3>&nbsp;</H3> <H3><STRONG>Optional: Install the Work Folders certificate on the App Proxy Connector server </STRONG></H3> <P>You can skip this section if you're not using a self-signed certificate on the Work Folders server.</P> <P>&nbsp;</P> <P>If the Work Folders server is using a self-signed certificate, you need to export the certificate on the Work Folders server and import the certificate on the App Proxy Connector server. This step is necessary for the App Proxy Connector server to communicate with the Work Folders server. <BR /><BR />To export the certificate on the Work Folders server, follow these steps:</P> <OL> <LI>Right-click <STRONG> Start </STRONG> , and then click <STRONG> Run.</STRONG></LI> <LI>Type <STRONG> MMC </STRONG> , and then click <STRONG> OK</STRONG>.</LI> <LI>On the <STRONG> File </STRONG> menu, click <STRONG> Add/Remove Snap-in</STRONG>.</LI> <LI>In the <STRONG> Available snap-ins </STRONG> list, select <STRONG> Certificates </STRONG> , and then click <STRONG> Add </STRONG> .
The Certificates Snap-in Wizard starts.</LI> <LI>Select <STRONG> Computer account </STRONG> , and then click <STRONG> Next </STRONG> .</LI> <LI>Select <STRONG> Local computer: (the computer this console is running on) </STRONG> , and then click <STRONG> Finish </STRONG> .</LI> <LI>Click <STRONG> OK</STRONG>.</LI> <LI>Expand the folder <STRONG> Console Root\Certificates(Local Computer)\Personal\Certificates</STRONG>.</LI> <LI>Right-click the Work Folders certificate, click <STRONG> All Tasks </STRONG> , and then click <STRONG> Export</STRONG>.</LI> <LI>The Certificate Export Wizard opens. Select <STRONG> Yes, export the private key</STRONG>.</LI> <LI>On the <STRONG> Export File Format </STRONG> page, leave the default options selected, and click <STRONG> Next</STRONG>.</LI> <LI>Create a password for the certificate. This is the password that you'll use later when you import the certificate to other devices. Click <STRONG> Next</STRONG>.</LI> <LI>Enter a location and name for the certificate, and then click <STRONG> Finish </STRONG> .</LI> </OL> <P>To import the certificate on the App Proxy Connector server, follow these steps:</P> <OL> <LI>Right-click <STRONG> Start </STRONG> , and then click <STRONG> Run</STRONG>.</LI> <LI>Type <STRONG> MMC </STRONG> , and then click <STRONG> OK</STRONG>.</LI> <LI>On the <STRONG> File </STRONG> menu, click <STRONG> Add/Remove Snap-in</STRONG>.</LI> <LI>In the <STRONG> Available snap-ins </STRONG> list, select <STRONG> Certificates </STRONG> , and then click <STRONG> Add </STRONG> . 
The Certificates Snap-in Wizard starts.</LI> <LI>Select <STRONG> Computer account </STRONG> , and then click <STRONG> Next</STRONG>.</LI> <LI>Select <STRONG> Local computer: (the computer this console is running on) </STRONG> , and then click <STRONG> Finish</STRONG>.</LI> <LI>Click <STRONG> OK</STRONG>.</LI> <LI>Expand the folder <STRONG> Console Root\Certificates(Local Computer)\Trusted Root Certification Authorities\Certificates</STRONG>.</LI> <LI>Right-click <STRONG> Certificates </STRONG> , click <STRONG> All Tasks </STRONG> , and then click <STRONG> Import</STRONG>.</LI> <LI>Browse to the folder that contains the Work Folders certificate, and follow the instructions in the wizard to import the file and place it in the Trusted Root Certification Authorities store.</LI> </OL> <H3>&nbsp;</H3> <H3><STRONG>Optional: Enable Token Broker for Windows 10 version 1703 clients </STRONG></H3> <P>Token Broker is an authentication broker that supports device registration. When using Token Broker with Azure AD Application Proxy for remote access, the client device can be registered in Azure AD when configuring the Work Folders client. Once the device is registered, device authentication will be used to access the Work Folders server. <BR /><BR />Device registration provides the following benefits:</P> <UL> <UL> <LI>Improved single sign-on experience (fewer authentication prompts)</LI> </UL> </UL> <UL> <UL> <LI>Device-based conditional access</LI> </UL> </UL> <P><STRONG>How to enable Token Broker </STRONG></P> <P>To enable Token Broker on a Windows 10 version 1703 system, enable the "Enables the use of Token Broker for AD FS authentication" group policy setting, which is located under User Configuration\Administrative Templates\Windows Components\Work Folders.
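If you need to confirm the setting outside the Group Policy editor, a PowerShell check along the lines below can be used. This is an illustrative sketch only: the registry key path and any specific value name (such as the hypothetical EnableTokenBroker shown in the comment) are assumptions, not confirmed names; inspect the key on a machine where the policy has actually been applied to see the real value.

```powershell
# Illustrative sketch only: the key path below is an assumption for where
# user-side Work Folders policy values land after Group Policy applies,
# and no specific value name (e.g. "EnableTokenBroker") is confirmed here.
$key = 'HKCU:\Software\Policies\Microsoft\Windows\WorkFolders'
if (Test-Path $key) {
    # List every policy value under the key so the Token Broker entry can be spotted
    Get-ItemProperty -Path $key
} else {
    'Work Folders policy key not found; the policy may not be applied yet.'
}
```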
<BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 500px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107515i601B39FE3340D15B/image-size/large?v=v2&amp;px=999" role="button" /></span> <BR /><BR />For Android and iOS devices, Token Broker will be used automatically when using Azure AD Application Proxy. <BR /><BR /><STRONG> How to register devices using the Work Folders client </STRONG> <BR />When Token Broker is enabled on a Windows client, the user will be prompted to register their device in Azure AD when configuring the Work Folders client. If the Work Folders client is managed via group policy, the device is automatically registered in Azure AD. <BR /><BR />On Android and iOS devices, the device is registered automatically when configuring the Work Folders client.</P> <H3>&nbsp;</H3> <H3><STRONG>Configure a Work Folders client to use the Azure App Proxy URL </STRONG></H3> <P>How to configure a Windows 10 version 1703 client to use the Azure AD App Proxy URL:</P> <P>&nbsp;</P> <OL> <LI>On the client machine, open the <STRONG> Control Panel </STRONG> and click <STRONG> Work Folders</STRONG>.</LI> <LI>Click <STRONG> Set up Work Folders</STRONG>.</LI> <LI>On the <STRONG> Enter your work email address </STRONG> page, click <STRONG> Enter a Work Folders URL instead </STRONG> and enter the Work Folders application proxy URL (e.g., <A href="#" target="_blank" rel="noopener"> https://workfolders-contoso.msappproxy.net </A> ), and then click <STRONG> Next</STRONG>.
<UL> <LI>Note: The Work Folders application proxy URL is listed as <STRONG> External URL </STRONG> in the Azure portal when you view the <STRONG> Work Folders proxy application </STRONG> settings.</LI> </UL> </LI> <LI>Enter your credentials and click <STRONG> Sign in</STRONG>.</LI> <LI>After you have authenticated, the <STRONG> Introducing Work Folders </STRONG> page is displayed, where you can optionally change the Work Folders directory location. Click <STRONG> Next</STRONG>.</LI> <LI>On the <STRONG> Security Policies </STRONG> page, check <STRONG> I accept these policies on my PC </STRONG> and click <STRONG> Set up Work Folders</STRONG>.</LI> <LI>A message is displayed stating that Work Folders has started syncing with your PC. Click <STRONG> Close </STRONG> .</LI> </OL> <P>How to configure an Android or iOS client to use the Azure App Proxy URL:</P> <OL> <LI>Install the <STRONG> Work Folders </STRONG> app from the <STRONG> Google Play Store </STRONG> or <STRONG> Apple App Store</STRONG>.</LI> <LI>Open the <STRONG> Work Folders </STRONG> app and then click <STRONG> Continue</STRONG>.</LI> <LI>Click <STRONG> Enter a Work Folders URL Instead</STRONG>.</LI> <LI>Enter the Work Folders application proxy <STRONG> URL </STRONG> (e.g., <A href="#" target="_blank" rel="noopener"> https://workfolders-contoso.msappproxy.net </A> ), and then click <STRONG> Continue</STRONG>.</LI> <LI>Click Launch Web Site, enter your credentials and click <STRONG> Sign In</STRONG>.</LI> <LI><STRONG> Add a passcode </STRONG> for the Work Folders application.</LI> <LI>Work Folders will start syncing your files to your device.</LI> </OL> <H3>&nbsp;</H3> <H3><STRONG>Troubleshooting </STRONG></H3> <P>If you experience an issue when configuring or using a Work Folders client, please see our troubleshooting guide: <A href="#" target="_blank" rel="noopener"> How to troubleshoot remote access to Work Folders using Azure AD Application Proxy </A></P> <H3>&nbsp;</H3> <H3>Additional Resources</H3> <UL> 
<UL> <LI><A href="#" target="_blank" rel="noopener"> Work Folders overview </A></LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Work Folders blog on TechNet </A></LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Azure AD Application Proxy overview </A></LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Azure AD Application Proxy blog on TechNet </A></LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Network topology considerations when using Azure AD Application Proxy </A></LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Management with Microsoft Intune </A></LI> </UL> </UL> <P>&nbsp;</P> Tue, 26 May 2020 02:49:19 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/enable-remote-access-to-work-folders-using-azure-active/ba-p/425998 Jeff Patterson 2020-05-26T02:49:19Z To RDMA, or not to RDMA – that is the question https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/to-rdma-or-not-to-rdma-8211-that-is-the-question/ba-p/425982 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Mar 27, 2017 </STRONG> <BR /> Hello, Claus here again. By now, you have probably seen some of my blogs and demos on Storage Spaces Direct performance. One of Storage Spaces Direct’s advantages is <A href="#" target="_blank"> RDMA </A> networking support that lowers latency and reduces CPU consumption. I often get the question “Is RDMA <EM> required </EM> for Storage Spaces Direct?” The answer is: <STRONG> no </STRONG> . We support plain old <A href="#" target="_blank"> Ethernet </A> as long as it’s 10GbE or better. But let’s look a bit deeper. <BR /> <BR /> Recently we did a performance investigation on new hardware, comparing it with an in-market offering (more about that in another post). We ran the tests with RDMA enabled and RDMA disabled (Ethernet mode), which provided the data for this post.
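As a concrete sketch, the DISKSPD configuration described in the next section maps onto a command line roughly like the one below. The post does not give the exact invocation, so treat this as an approximation built from DISKSPD's documented switches; the target file path, the 60-second duration, and the caching switches are assumptions:

```powershell
# 4K IO (-b4K), 70:30 read/write mix (-w30 = 30% writes), random IO (-r),
# 10 threads (-t10) at queue depth 4 each (-o4), 10GiB test file (-c10G),
# caching disabled (-Sh, assumed), latency stats (-L), 60s run (-d60, assumed)
.\diskspd.exe -b4K -w30 -r -t10 -o4 -c10G -Sh -L -d60 C:\ClusterStorage\Volume1\testfile.dat
```

Note that with a single target file, the 10 threads share one file; the post describes a 10GiB file per thread (100GiB total), which you can approximate by passing multiple target files on the command line.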
For this investigation, we used <A href="#" target="_blank"> DISKSPD </A> with the following configuration: <BR /> <UL> <BR /> <LI> <A href="#" target="_blank"> DISKSPD </A> version 2.0.17 <BR /> <UL> <BR /> <LI> 4K IO </LI> <BR /> <LI> 70:30 read/write mix </LI> <BR /> <LI> 10 threads, each thread at queue depth 4 (40 total) </LI> <BR /> <LI> A 10GiB file per thread (“a modest VHDX”) for a total of 100GiB </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> We used the following hardware configuration: <BR /> <UL> <BR /> <LI> 4 node cluster <BR /> <UL> <BR /> <LI> Intel S2600WT Platform </LI> <BR /> <LI> 2x E5-2699v4 CPU (22c44t 2.2Ghz) </LI> <BR /> <LI> 128GiB DDR4 DRAM </LI> <BR /> <LI> 4x Intel P3700 NVMe per node </LI> <BR /> <LI> Mellanox CX3 Pro 40Gb, dual port connected, RoCE v2 </LI> <BR /> <LI> C States disabled, OS High Performance, BIOS Performance Plan, Turbo/HT on </LI> <BR /> </UL> <BR /> </LI> <BR /> <LI> Software configuration <BR /> <UL> <BR /> <LI> Windows Server 2016 with January roll-up package </LI> <BR /> <LI> No cache drive configuration </LI> <BR /> <LI> 3-copy mirror volume </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> We are by no means driving this system hard, which is on purpose since we want to show the delta between RDMA and non-RDMA under a reasonable workload and not at the edge of what the system can do. <BR /> <TABLE> <TBODY><TR> <TD> <STRONG> Metric </STRONG> </TD> <TD> <BR /> <P> <STRONG> RDMA </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> <STRONG> TCP/IP </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> <STRONG> RDMA advantage </STRONG> </P> <BR /> </TD> </TR> <TR> <TD> IOPS </TD> <TD> <BR /> <P> 185,500 </P> <BR /> </TD> <TD> <BR /> <P> 145,500 </P> <BR /> </TD> <TD> <BR /> <P> 40,000 additional IOPS with the same workload. </P> <BR /> </TD> </TR> <TR> <TD> IOPS/%kernel CPU </TD> <TD> <BR /> <P> 16,300 </P> <BR /> </TD> <TD> <BR /> <P> 12,800 </P> <BR /> </TD> <TD> <BR /> <P> 3,500 additional IOPS per percent CPU consumed. 
</P> <BR /> </TD> </TR> <TR> <TD> 90th percentile write latency </TD> <TD> <BR /> <P> 250µs </P> <BR /> </TD> <TD> <BR /> <P> 390µs </P> <BR /> </TD> <TD> <BR /> <P> 140µs (~36%) </P> <BR /> </TD> </TR> <TR> <TD> 90th percentile read latency </TD> <TD> <BR /> <P> 260µs </P> <BR /> </TD> <TD> <BR /> <P> 360µs </P> <BR /> </TD> <TD> <BR /> <P> 100µs (28%) </P> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> I think there are two key takeaways from this data: <BR /> <OL> <BR /> <LI> Use RDMA if you want the absolute best performance. RDMA significantly boosts performance. In this test, it delivered 28% more IOPS, realized through the reduced IO latency that RDMA provides. RDMA is also more CPU efficient (27% more IOPS per percent of CPU consumed), leaving CPU available to run more VMs. </LI> <BR /> <LI> TCP/IP is no slouch, and absolutely a viable deployment option. While not quite as fast and efficient as RDMA, TCP/IP provides solid performance and is well suited for organizations without the expertise needed for RDMA. </LI> <BR /> </OL> <BR /> Let me know what you think. <BR /> <BR /> Until next time <BR /> <BR /> Claus <BR /> <BR /> </BODY></HTML> Wed, 10 Apr 2019 11:29:03 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/to-rdma-or-not-to-rdma-8211-that-is-the-question/ba-p/425982 Claus Joergensen 2019-04-10T11:29:03Z Storage Spaces Direct with Intel® Optane™ SSD DC P4800X https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-spaces-direct-with-intel-174-optane-8482-ssd-dc-p4800x/ba-p/425981 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Mar 19, 2017 </STRONG> <BR /> Hello, Claus here again. Today Intel is announcing the Intel&nbsp;Optane SSD DC P4800X device family, and of course Windows Server 2016 Storage Spaces Direct will support this device.
The P4800X device promises better latency and better endurance, so we took these devices for a spin in a Storage Spaces Direct configuration, comparing them to the P3700 devices: <BR /> <UL> <BR /> <LI> Hardware Platform <BR /> <UL> <BR /> <LI> 4 Intel® Server Systems S2600WT </LI> <BR /> <LI> 2x Intel® Xeon® E5-2699 v4 @ 2.2 GHz (22c44t) </LI> <BR /> <LI> 128 GB Memory </LI> <BR /> <LI> Mellanox ConnectX®-3 Pro 40GbE </LI> <BR /> </UL> <BR /> </LI> <BR /> <LI> Storage config #1 <BR /> <UL> <BR /> <LI> 4x 800GB Intel® P3700 NVMe SSD </LI> <BR /> <LI> 20x 1.2TB Intel® S3610 SATA SSD </LI> <BR /> </UL> <BR /> </LI> <BR /> <LI> Storage config #2 <BR /> <UL> <BR /> <LI> 2x 375GB Intel® P4800X NVMe SSD </LI> <BR /> <LI> 20x 1.2TB Intel® S3610 SATA SSD </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> Notice that we have half the devices and a quarter the capacity for the Intel™ Optane™ P4800X device. The software and workload configuration were: <BR /> <UL> <BR /> <LI> Software configuration <BR /> <UL> <BR /> <LI> Single pool </LI> <BR /> <LI> 4x 2TB 3-copy mirror volume </LI> <BR /> <LI> ReFS/CSVFS file system </LI> <BR /> <LI> 176 VM (44 VMs per server) </LI> <BR /> <LI> 1 virtual core and 1.75 GB RAM </LI> <BR /> </UL> <BR /> </LI> <BR /> <LI> Workload configuration <BR /> <UL> <BR /> <LI> DISKSPD workload generator </LI> <BR /> <LI> VMFleet workload orchestrator </LI> <BR /> <LI> Each VM with <BR /> <UL> <BR /> <LI> 4K IO size </LI> <BR /> <LI> 10GB working set </LI> <BR /> <LI> 90% read and 10% write mix </LI> <BR /> <LI> Storage QoS used to control IOPS / VM </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> The test results are captured in the diagram below: <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107508i4115539136A64E3D" /> <BR /> <BR /> <BR /> <BR /> The first observation is that we see a 90µs latency improvement across the board from low IOPS to high IOPS. 
This pretty much aligns with the latency improvements in the device itself, and notice how the improvement is realized at the top of the storage stack in a fully resilient storage configuration. This means that customers can realize the latency improvements provided by the P4800X device in their Storage Spaces Direct deployments. <BR /> <BR /> The second observation is that we see the same CPU utilization at 880K IOPS (throttled by Storage QoS), with 258µs latency on the P4800X devices vs 344µs latency on the P3700 devices, meaning that Storage Spaces Direct customers can realize the latency improvements without any additional CPU consumption. <BR /> <BR /> Let me know what you think. <BR /> <BR /> Until next time <BR /> <BR /> Claus <BR /> <BR /> <BR /> <BR /> </BODY></HTML> Wed, 10 Apr 2019 11:28:57 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-spaces-direct-with-intel-174-optane-8482-ssd-dc-p4800x/ba-p/425981 Claus Joergensen 2019-04-10T11:28:57Z Survey: File Server Sizing https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/survey-file-server-sizing/ba-p/425972 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Mar 15, 2017 </STRONG> <BR /> Hi folks, <BR /> <BR /> To prioritize and plan for investments in vNext experiences for Windows Server, we could use input from you! We would like to understand more about how you are utilizing Windows File Server, especially as it relates to the size of your datasets. This survey should take approximately 2-5 minutes to complete. We appreciate your feedback! <BR /> <BR /> <A href="#" target="_blank"> Click here to take our survey! 
</A> <BR /> <BR /> Thanks, <BR /> <BR /> The Windows Server Storage Team </BODY></HTML> Wed, 10 Apr 2019 11:28:43 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/survey-file-server-sizing/ba-p/425972 Will Gries 2019-04-10T11:28:43Z Storage Spaces Direct throughput with iWARP https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-spaces-direct-throughput-with-iwarp/ba-p/425970 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Mar 13, 2017 </STRONG> <BR /> Hello, Claus here again. It has been a while since I last posted here and a few things have changed since last time. Windows Server has been moved into the Windows and Devices Group, we have moved to a new building with a better café, but a worse view :smiling_face_with_smiling_eyes:</img>. On a personal note, I can be seen waddling the hallways as I have had foot surgery. <BR /> <BR /> At Microsoft Ignite 2016 I did a demo at the 28-minute mark as part of the <A href="#" target="_blank"> Meet Windows Server 2016 and System Center 2016 </A> session. I showed how Storage Spaces Direct can deliver massive amounts of IOPS to many virtual machines with various storage QoS settings. I encourage you to watch it, if you haven’t already, or go watch it again :smiling_face_with_smiling_eyes:</img>. In the demo, we used a 16-node cluster connected over iWARP using the 40GbE Chelsio iWARP T580CR adapters, showing 6M+ read IOPS. Since then, Chelsio has released their 100GbE T6 NIC adapter, and we wanted to take a peek at what kind of network throughput would be possible with this new adapter. 
<BR /> <BR /> We used the following hardware configuration: <BR /> <UL> <BR /> <LI> 4 nodes of Dell R730xd <BR /> <UL> <BR /> <LI> 2x E5-2660v3 2.6Ghz 10c/20t </LI> <BR /> <LI> 256GiB DDR4 2133Mhz (16 16GiB DIMM) </LI> <BR /> <LI> 2x Chelsio T6 100Gb NIC (PCIe 3.0 x16), single port connected/each, QSFP28 passive copper cabling </LI> <BR /> <LI> Performance Power Plan </LI> <BR /> <LI> Storage: <BR /> <UL> <BR /> <LI> 4x 3.2TB NVME Samsung PM1725 (PCIe 3.0 x8) </LI> <BR /> <LI> 4x SSD + 12x HDD (not in use: all load from Samsung PM1725) </LI> <BR /> </UL> <BR /> </LI> <BR /> <LI> Windows Server 2016 + Storage Spaces Direct <BR /> <UL> <BR /> <LI> Cache: Samsung PM1725 </LI> <BR /> <LI> Capacity: SSD + HDD (not in use: all load from cache) </LI> <BR /> <LI> 4x 2TB 3-way mirrored virtual disks, one per cluster node </LI> <BR /> <LI> 20 Azure A1-sized VMs (1 VCPU, 1.75GiB RAM) per node </LI> <BR /> <LI> OS High Performance Power Plan </LI> <BR /> </UL> <BR /> </LI> <BR /> <LI> Load: <BR /> <UL> <BR /> <LI> DISKSPD workload generator </LI> <BR /> <LI> VM Fleet workload orchestrator </LI> <BR /> <LI> 80 virtual machines with 16GiB file in VHDX </LI> <BR /> <LI> 512KiB 100% random read at a queue depth of 3 per VM </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> We did not configure DCB (PFC) in our deployment, since it is not required in iWARP configurations. <BR /> <BR /> Below is a screenshot from the VMFleet Watch-Cluster window, which reports IOPS, bandwidth and latency. <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107507i08BEA5DE3D1B0C20" /> <BR /> <BR /> As you can see the aggregated bandwidth exceeded 83GB/s, which is very impressive. Each VM realized more than 1GB/s of throughput, and notice the average read latency is &lt;1.5ms. <BR /> <BR /> Let me know what you think. 
<BR /> <BR /> Until next time <BR /> <BR /> <A href="#" target="_blank"> @ClausJor </A> </BODY></HTML> Wed, 10 Apr 2019 11:28:38 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-spaces-direct-throughput-with-iwarp/ba-p/425970 Claus Joergensen 2019-04-10T11:28:38Z Work Folders for iOS can now upload files! https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/work-folders-for-ios-can-now-upload-files/ba-p/425968 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Feb 13, 2017 </STRONG> <BR /> <H2> <STRONG> Work Folders for iOS can now upload files! </STRONG> </H2> <BR /> We are happy to announce that we’ve just released an update to the Work Folders app on iOS that now allows anyone to upload pictures and documents from other apps, take pictures, or even write a simple note – right from within the Work Folders app. <BR /> We also released a version with this <A href="#" target="_blank"> feature set for Android </A> . <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107505i024E3E502CBC8697" /> <A href="#" target="_blank"> </A> <BR /> <H2> Overview </H2> <BR /> Work Folders is a Windows Server feature, introduced in Windows Server 2012 R2, that enables individual employees to access their files securely from inside and outside the corporate environment. The Work Folders app connects to the server and enables file access on your&nbsp;iOS phone and tablet. Work Folders enables this while allowing the organization’s IT department to fully secure that data. <BR /> <H2> What’s New </H2> <BR /> Using the latest version of Work Folders for iOS, users can now: <BR /> <UL> <BR /> <LI> Sync files that were created or edited on their device </LI> <BR /> <LI> Take pictures and write notes within the Work Folders application </LI> <BR /> </UL> <BR /> For the complete list of Work Folders for iOS features, please reference the feature list section below.
<BR /> <BR /> <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107506i4AD18DD3185F207C" /> <BR /> <BR /> <BR /> <H2> Work Folders&nbsp;for iOS – Feature List </H2> <BR /> <UL> <BR /> <LI> Sync files that were created or edited on your device </LI> <BR /> <LI> Take pictures and write notes within the Work Folders app </LI> <BR /> <LI> Pin files for offline viewing –&nbsp;saves storage space by&nbsp;showing all available files but locally storing and keeping in sync only the files you care about. </LI> <BR /> <LI> Files are always encrypted –&nbsp;on the wire and at rest on the device. </LI> <BR /> <LI> Access to the app is protected by an app passcode –&nbsp;keeping others out even if the device is left unlocked and unattended. </LI> <BR /> <LI> Allows for DIGEST and Active Directory Federation Services (ADFS) authentication mechanisms including multi factor authentication. </LI> <BR /> <LI> Search for files and folders </LI> <BR /> <LI> Open files in other apps that might be specialized to work with a certain file type </LI> <BR /> <LI> Integration with&nbsp;Microsoft Intune </LI> <BR /> </UL> <BR /> <H2> Saving your office files into the Work Folders app </H2> <BR /> Microsoft Office files are read-only when opening the files from the Work Folders app. <BR /> <UL> <BR /> <LI> Inside any of the office apps, tap "Duplicate" to store the file locally on your iOS device. </LI> <BR /> <LI> Make your changes and save the file. 
</LI> <BR /> <LI> Follow the steps below to sync any Office file with your Work Folders app: </LI> <BR /> </UL> <BR /> <BR /> <BR /> Video demo: <A href="#" target="_blank"> Work_Folders_Office_demo.mp4 </A> <BR /> <BR /> <BR /> <H2> </H2> <BR /> <H3> Blogs and Links </H3> <BR /> If you’re interested in learning more about Work Folders, here are some great resources: <BR /> <UL> <BR /> <LI> Work Folders <A href="#" target="_blank"> blogs </A> on Server Storage blog </LI> <BR /> <LI> Nir Ben Zvi <A href="#" target="_blank"> introduced Work Folders on Windows Server 2012 R2 </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Work Folders for iOS help </A> </LI> <BR /> <LI> Work Folders for Windows 7 SP1: Check out this <A href="#" target="_blank"> post by Jian Yan </A> on the Server Storage blog </LI> <BR /> <LI> Roiy Zysman posted <A href="#" target="_blank"> a great list of Work Folders resources in this blog </A> . </LI> <BR /> <LI> See this <A href="#" target="_blank"> Q&amp;A With Fabian Uhse, Program Manager for Work Folders </A> in Windows Server 2012 R2 </LI> <BR /> <LI> Also, check out these posts about <A href="#" target="_blank"> how to set up a Work Folders test lab </A> , <A href="#" target="_blank"> certificate management </A> , <BR /> and <A href="#" target="_blank"> tips on running Work Folders on Windows Failover Clusters </A> .
</LI> <BR /> <LI> Using Work Folders with <A href="#" target="_blank"> IIS websites or the Windows Server Essentials Role (Resolving Port Conflicts) </A> </LI> <BR /> </UL> <BR /> <H3> Introduction and Getting Started </H3> <BR /> <UL> <BR /> <LI> <A href="#" target="_blank"> Introducing Work Folders on Windows Server 2012 R2 </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Work Folders Overview </A> on TechNet </LI> <BR /> <LI> <A href="#" target="_blank"> Designing a Work Folders Implementation </A> on TechNet </LI> <BR /> <LI> <A href="#" target="_blank"> Deploying Work Folders </A> on TechNet </LI> <BR /> <LI> <A href="#" target="_blank"> Work Folders FAQ (Targeted for Work Folders end users) </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Work Folders Q&amp;A </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Work Folders PowerShell Cmdlets </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Work Folders Test Lab Deployment </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Windows Storage Server 2012 R2 — Work Folders </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Work Folders for Windows 7 </A> </LI> <BR /> </UL> <BR /> <H3> Advanced Work Folders Deployment and Management </H3> <BR /> <UL> <BR /> <LI> <A href="#" target="_blank"> Work Folders interoperability with other file server technologies </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Performance Considerations for Work Folders Deployments </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Windows Server 2012 R2 – Resolving Port Conflict with IIS Websites and Work Folders </A> </LI> <BR /> <LI> <A href="#" target="_blank"> A new user attribute for Work Folders server Url </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Work Folders Certificate Management </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Work Folders on Clusters </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Monitoring Windows Server 2012 R2 Work Folders Deployments. 
</A> </LI> <BR /> <LI> <A href="#" target="_blank"> Deploying Work Folders with AD FS and Web Application Proxy (WAP) </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Deploying Windows Server 2012 R2 Work Folders in a Virtual Machine in Windows Azure </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Offline Files (CSC) to Work Folders Migration Guide </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Management with Microsoft Intune </A> </LI> <BR /> </UL> <BR /> <H3> Videos </H3> <BR /> <UL> <BR /> <LI> <A href="#" target="_blank"> Windows Server Work Folders Overview: My Corporate Data on All of My Devices </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Windows Server Work Folders – a Deep Dive into the New Windows Server Data Sync Solution </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Work Folders on Channel 9 </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Work Folders iPad reveal – TechEd Europe 2014 </A> (in German) </LI> <BR /> <LI> <A href="#" title="Work Folders on the &quot;Edge Show&quot;" target="_blank"> Work Folders on the “Edge Show” </A> (iPad + iPhone video, English) </LI> <BR /> <LI> <A href="#" target="_blank"> Work Folders for Android on Channel 9 </A> </LI> <BR /> </UL> </BODY></HTML> Wed, 10 Apr 2019 11:28:23 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/work-folders-for-ios-can-now-upload-files/ba-p/425968 Fabian Uhse 2019-04-10T11:28:23Z Windows Server 2016 Data Deduplication users: please install KB4013429! https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/windows-server-2016-data-deduplication-users-please-install/ba-p/425961 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jan 30, 2017 </STRONG> <BR /> Please note: all updates to Windows Server 2016 are cumulative, so any current or future KB will include the fixes described in this blog post. Microsoft always recommends taking the latest KB. <BR /> <BR /> Hi folks! 
<BR /> <BR /> Based on several customer bug reports, we have issued a critical fix for Data Deduplication in Windows Server 2016 in the most recent Windows update package, <A href="#" target="_blank"> KB4013429 </A> . This patch fixes an issue where corruptions may appear in files larger than 2.2 TB. While we always recommend keeping your system up-to-date, based on the severity of any data corruption, we strongly recommend that everyone who is using Data Deduplication on Windows Server 2016 take this update! <BR /> <BR /> Long-time users of Dedup will note that we only officially support files with <A href="#" target="_blank"> size up to 1 TB </A> . While this is true, this is a "soft" support statement - we take your data integrity extremely seriously, and therefore will always address reported data corruptions. Our current defined support statement of 1 TB was chosen for two reasons: 1) for files larger than 1 TB, performance isn't quite 'up to snuff' with our expectations, and 2) dynamic workloads with lots of writes may reach NTFS' file fragmentation limits, causing the file to become read-only until the next optimization. In short, our 1 TB support statement is about preserving a high-quality experience for you. Your mileage may vary... in particular, many users have reported to us that backup workloads that use VHDs or VHD-like container files sized over 1 TB work extremely well with Dedup. This is because backup workloads are typically append-only workloads. We do however recommend that you make use of the new <A href="#" target="_blank"> Backup usage type </A> in Windows Server 2016 to ensure the best performance with backup workloads. <BR /> <BR /> Finally, I would just like to thank the three users who reached out to us with this issue and helped us validate the pre-release patch: thank you! 
We always love to hear from you, our customers, so please feel free to reach out to us with your questions, comments, or concerns anytime: <A href="https://gorovian.000webhostapp.com/?exam=mailto:dedupfeedback@microsoft.com" target="_blank"> dedupfeedback@microsoft.com </A> ! <BR /> <BR /> <H3> Post history - see Update 4 </H3> <BR /> Update 1 <BR /> Several folks have asked why KB3216755 is only available in the Windows 10 catalog and not in the Windows Server 2016 catalog. While this does not affect the ability to apply this patch to your Windows Server 2016 deployments, the reason why this patch does not appear in the Windows Server 2016 catalog is important for Windows Server 2016 users to understand. According to our Windows Servicing group, all fixes for Windows 10 and for Windows Server 2016 are "flighted" before release. KB3216755 is a "Windows Insider Preview" patch, meaning it will only be required for folks in the "Windows Insider Preview" program. Since there is no equivalent to the "Windows Insider Preview" program for Windows Server, this patch was only released for Windows 10. Again, this patch can be applied to Windows Server 2016. <BR /> <BR /> Update 2 <BR /> We have heard from a Partner that KB3216755 contains a regression in System.Data.dll that is unrelated to Data Deduplication. The System.Data.dll regression may cause unmanaged memory leaks when issuing queries against a SQL Server. The relevant product and servicing teams at Microsoft are engaged with this issue and working towards a resolution. In the meantime, folks who are using System.Data.dll on Windows Server 2016 directly or through installed software should delay installing this patch until a workaround or fix is released. <BR /> <BR /> Update 3 <BR /> The regression in System.Data.dll will be addressed with KB3217574, which will be released 2017-02-14. It is recommended that everyone who has deployed KB3216755 take KB3217574! 
<BR /> <BR /> Update 4 <BR /> The regression in System.Data.dll has been fixed with <A href="#" target="_blank"> KB4013429 </A> (released 2017-03-14). It is recommended that everyone who has deployed KB3216755 take KB4013429! As such, the title of this post and body has been updated with the new KB information. </BODY></HTML> Wed, 10 Apr 2019 11:27:59 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/windows-server-2016-data-deduplication-users-please-install/ba-p/425961 Will Gries 2019-04-10T11:27:59Z Cluster size recommendations for ReFS and NTFS https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/cluster-size-recommendations-for-refs-and-ntfs/ba-p/425960 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jan 13, 2017 </STRONG> <BR /> Microsoft’s file systems organize storage devices based on cluster size. Also known as the allocation unit size, cluster size represents the smallest amount of disk space that can be allocated to hold a file. Because ReFS and NTFS don’t reference files at a byte granularity, the cluster size is the smallest unit of size that each file system can reference when accessing storage. Both ReFS and NTFS support multiple cluster sizes, as different sized clusters can offer different performance benefits, depending on the deployment. <BR /> <BR /> In the past couple weeks, we’ve seen some confusion regarding the recommended cluster sizes for ReFS and NTFS, so this blog will hopefully disambiguate previous recommendations while helping to provide the reasoning behind why some cluster sizes are recommended for certain scenarios. 
<BR /> <BR /> <B> IO amplification </B> <BR /> <BR /> Before jumping into cluster size recommendations, it’ll be important to understand what IO amplification is and why minimizing IO amplification is important when choosing cluster sizes: <BR /> <UL> <BR /> <LI> IO amplification refers to the broad set of circumstances where one IO operation triggers other, unintentional IO operations. Though it may appear that only one IO operation occurred, in reality, the file system had to perform multiple IO operations to successfully service the initial IO. This phenomenon can be especially costly when considering the various optimizations that the file system can no longer make: <BR /> <UL> <BR /> <LI> When performing a write, the file system could perform this write in memory and flush this write to physical storage when appropriate. This helps dramatically accelerate write operations by avoiding accessing slow, non-volatile media before completing every write. </LI> <BR /> <LI> Certain writes, however, could force the file system to perform additional IO operations, such as reading in data that is already written to a storage device. Reading data from a storage device significantly delays the completion of the original write, as the file system must wait until the appropriate data is retrieved from storage before making the write. </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> <B> ReFS cluster sizes </B> <BR /> <BR /> ReFS offers both 4K and 64K clusters. 4K is the default cluster size for ReFS, and <I> we recommend using 4K cluster sizes for most ReFS deployments </I> because it helps reduce costly IO amplification: <BR /> <UL> <BR /> <LI> In general, if the cluster size exceeds the size of the IO, certain workflows can trigger unintended IOs to occur. 
Consider the following scenarios where a ReFS volume is formatted with 64K clusters: <BR /> <UL> <BR /> <LI> Consider a <A href="#" target="_blank"> tiered volume </A> .&nbsp;If a 4K write is made to a range currently in the capacity tier, ReFS must read the entire cluster from the capacity tier into the performance tier <I> before making the write </I> . Because the cluster size is the smallest granularity that the file system can use, ReFS must read the entire cluster, which includes an unmodified 60K region, to be able to complete the 4K write. </LI> <BR /> <LI> If a cluster is shared by multiple regions after a <A href="#" target="_blank"> block cloning </A> operation occurs, ReFS must copy the entire cluster to maintain isolation between the two regions. So if a 4K write is made to this shared cluster, ReFS must copy the unmodified 60K cluster before making the write. </LI> <BR /> <LI> Consider a deployment that enables <A href="#" target="_blank"> integrity streams </A> . A sub-cluster granularity write will cause the entire cluster to be re-allocated and re-written, and the new checksum must be computed. This represents additional IO that ReFS must perform before completing the new write, which introduces a larger latency factor to the IO operation. </LI> <BR /> </UL> <BR /> </LI> <BR /> <LI> By choosing 4K clusters instead of 64K clusters, one can reduce the number of IOs that occur that are smaller than the cluster size, preventing costly IO amplifications from occurring as frequently. </LI> <BR /> </UL> <BR /> Additionally, 4K cluster sizes offer greater compatibility with Hyper-V IO granularity, so we strongly recommend using 4K cluster sizes with Hyper-V on ReFS.&nbsp; 64K clusters are applicable when working with large, sequential IO, but otherwise, 4K should be the default cluster size. 
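The read-modify-write cost behind these scenarios can be sketched numerically. This is an illustrative model only (`write_amplification` is a hypothetical helper, not a file-system API): it assumes any write that is not a multiple of the cluster size forces the file system to touch every cluster the write overlaps.

```python
KiB = 1024

def write_amplification(write_bytes: int, cluster_bytes: int) -> int:
    """Bytes the file system must touch when a sub-cluster write forces
    whole clusters to be read, modified, and rewritten (toy model)."""
    if write_bytes % cluster_bytes == 0:
        return write_bytes  # cluster-aligned write: no amplification
    clusters_touched = -(-write_bytes // cluster_bytes)  # ceiling division
    return clusters_touched * cluster_bytes

# A 4K write into a 64K-cluster volume touches the full 64K cluster:
print(write_amplification(4 * KiB, 64 * KiB))  # 65536
# The same write on a 4K-cluster volume touches only 4K:
print(write_amplification(4 * KiB, 4 * KiB))   # 4096
```

In the tiered, block-cloning, and integrity-streams scenarios above, the 16x difference between those two numbers is exactly the unmodified 60K region that ReFS must read or copy before it can complete the 4K write.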
<BR /> <BR /> <B> NTFS cluster sizes </B> <BR /> <BR /> NTFS offers cluster sizes from 512 bytes to 64K, but in general, we recommend a 4K cluster size on NTFS, as 4K clusters help minimize wasted space when storing small files. We also strongly discourage the usage of cluster sizes smaller than 4K. There are two cases, however, where 64K clusters could be appropriate: <BR /> <UL> <BR /> <LI> 4K clusters limit the maximum volume and file size to 16 TB <BR /> <UL> <BR /> <LI> 64K cluster sizes can offer increased volume and file capacity, which is relevant if you’re hosting a large deployment on your NTFS volume, such as hosting VHDs or a SQL deployment. </LI> <BR /> </UL> <BR /> </LI> <BR /> <LI> NTFS has a fragmentation limit, and larger cluster sizes can help reduce the likelihood of reaching this limit <BR /> <UL> <BR /> <LI> Because NTFS is backward compatible, it must use internal structures that weren’t optimized for modern storage demands. Thus, the metadata in NTFS prevents any file from having more than ~1.5 million extents. <BR /> <UL> <BR /> <LI> One can, however, use the “format /L” option to increase the fragmentation limit to ~6 million. Read more <A href="#" target="_blank"> here </A> . </LI> <BR /> </UL> <BR /> </LI> <BR /> <LI> 64K cluster deployments are less susceptible to this fragmentation limit, so 64K clusters are a better option if the NTFS fragmentation limit is an issue. (Data deduplication, sparse files, and SQL deployments can cause a high degree of fragmentation.) <BR /> <UL> <BR /> <LI> Unfortunately, NTFS compression only works with 4K clusters, so using 64K clusters isn’t suitable when using NTFS compression. Consider increasing the fragmentation limit instead, as described in the previous bullets. 
</LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> While a 4K cluster size is the default setting for NTFS, there are many scenarios where 64K cluster sizes make sense, such as: Hyper-V, SQL, deduplication, or when most of the files on a volume are large. <BR /> <BR /> </BODY></HTML> Wed, 10 Apr 2019 11:27:52 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/cluster-size-recommendations-for-refs-and-ntfs/ba-p/425960 Garrett Watumull 2019-04-10T11:27:52Z Deep Dive: The Storage Pool in Storage Spaces Direct https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/deep-dive-the-storage-pool-in-storage-spaces-direct/ba-p/425959 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Nov 21, 2016 </STRONG> <BR /> Hi! I’m Cosmos. Follow me on Twitter <A href="#" target="_blank"> @cosmosdarwin </A> . <BR /> <H3> Review </H3> <BR /> The storage pool is the collection of physical drives which form the basis of your software-defined storage. Those familiar with Storage Spaces in Windows Server 2012 or 2012R2 will remember that pools took some managing – you had to create and configure them, and then manage membership by adding or removing drives. Because of scale limitations, most deployments had multiple pools, and because data placement was essentially static (more on this later), you couldn’t really expand them once created. <BR /> <BR /> We’re introducing some exciting improvements in Windows Server 2016. <BR /> <H3> What’s new </H3> <BR /> With <A href="#" target="_blank"> Storage Spaces Direct </A> , we now support up to 416 drives per pool, the same as our per-cluster maximum, and we strongly recommend you use exactly one pool per cluster. When you enable Storage Spaces Direct (as with the <CODE> Enable-ClusterS2D </CODE> cmdlet), this pool is automatically created and configured with the best possible settings for your deployment. 
Eligible drives are automatically discovered and added to the pool and, if you scale out, any new drives are added to the pool too, and data is moved around to make use of them. When drives fail they are automatically retired and removed from the pool. In fact, you really don’t need to manage the pool at all anymore except to keep an eye on its available capacity. <BR /> <BR /> Nonetheless, understanding how the pool works can help you reason about fault tolerance, scale-out, and more. So if you’re curious, read on! <BR /> <BR /> To help illustrate certain key points, I’ve written a script (open-source, available at the end) which produces this view of the pool’s drives, organized by type, by server ('node'), and by how much data they’re storing. The fastest drives in each server, listed at the top, are claimed for caching. <BR /> <BR /> [caption id="attachment_7465" align="aligncenter" width="764"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107494i131DD92FE956DD36" /> The storage pool forms the physical basis of your software-defined storage.[/caption] <BR /> <H3> The confusion begins: resiliency, slabs, and striping </H3> <BR /> Let’s start with three servers forming one Storage Spaces Direct cluster. <BR /> <BR /> Each server has 2 x 800 GB NVMe drives for caching and 4 x 2 TB SATA SSDs for capacity. <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107495i5C62F5FF3A3B84C5" /> <BR /> <BR /> We can create our first volume ('Storage Space') and choose 1 TiB in size, two-way mirrored. This implies we will maintain <EM> two identical copies </EM> of everything in that volume, always on different drives in different servers, so that if hardware fails or is taken down for maintenance, we’re sure to still have access to all our data. Consequently, this 1 TiB volume will actually occupy 2 TiB of physical capacity on disk, its so-called 'footprint' on the pool. 
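The footprint arithmetic for mirrored volumes is simple enough to check with a few lines. This is a minimal sketch that models resiliency purely as a copy count, which holds for two- and three-way mirroring but not for parity resiliency:

```python
TiB = 2**40

def pool_footprint(volume_bytes: int, copies: int) -> int:
    """Physical capacity a mirrored volume occupies on the pool:
    every byte of the volume is stored `copies` times."""
    return volume_bytes * copies

# A 1 TiB two-way mirror occupies 2 TiB of physical capacity:
print(pool_footprint(1 * TiB, copies=2) / TiB)  # 2.0
# Three-way mirroring of the same volume would occupy 3 TiB:
print(pool_footprint(1 * TiB, copies=3) / TiB)  # 3.0
```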
<BR /> <BR /> [caption id="attachment_7365" align="aligncenter" width="960"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107496iC4EAF43E4467DBA4" /> Our 1 TiB two-way mirror volume occupies 2 TiB of physical capacity, its 'footprint' on the pool.[/caption] <BR /> <BR /> <STRONG> </STRONG> <EM> </EM> (Storage Spaces offers many <A href="#" target="_blank"> resiliency types with differing storage efficiency </A> . For simplicity, this blog will show two-way mirroring. The concepts we’ll cover apply regardless which resiliency type you choose, but two-way mirroring is by far the most straightforward to draw and explain. Likewise, although Storage Spaces offers <A href="#" target="_blank"> chassis and/or rack awareness </A> , this blog will assume the default server-level awareness for simplicity.) <BR /> <BR /> Okay, so we have 2 TiB of data to write to physical media. <STRONG> But where will these two tebibytes of data actually land? </STRONG> <BR /> <BR /> You might imagine that Spaces just picks any two drives, in different servers, and places the copies <EM> in whole </EM> on those drives. Alas, no. What if the volume were larger than the drive size? Okay, perhaps it spans <EM> several </EM> drives in both servers? Closer, but still no. <BR /> <BR /> What actually happens can be surprising if you’ve never seen it before. <BR /> <BR /> [caption id="attachment_7515" align="aligncenter" width="960"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107497i49AAAEB596487DF2" /> Storage Spaces starts by dividing the volume into many 'slabs', each 256 MB in size.[/caption] <BR /> <BR /> Storage Spaces starts by dividing the volume into many 'slabs', each 256 MB in size. This means our 1 TiB volume has&nbsp;some 4,000 such slabs! <BR /> <BR /> For each slab, two copies are made and placed on different drives in different servers. 
This decision is made independently for each slab, successively, with an eye toward equilibrating utilization – you can think of it like dealing playing cards into equal piles. This means every single drive in the storage pool will store some copies of some slabs! <BR /> <BR /> [caption id="attachment_7525" align="aligncenter" width="960"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107498i2CC15FEB9B1FA28F" /> The placement decision is made independently for each slab, like dealing playing cards into equal piles.[/caption] <BR /> <BR /> <STRONG> </STRONG> This can be non-obvious, but it has some real consequences you can observe. For one, it means all drives in all servers will gradually "fill up" in lockstep, in 256 MB increments. This is why we rarely pay attention to how full specific drives or servers are – because they’re (almost) always (almost) the same! <BR /> <BR /> <STRONG> </STRONG> <BR /> <BR /> [caption id="attachment_7475" align="aligncenter" width="764"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107499i4B6AA5315920E0D6" /> Slabs of our two-way mirrored volume have landed on every drive in all three servers.[/caption] <BR /> <BR /> (For the curious reader: the pool keeps a sprawling mapping of which drive has each copy of each slab called the 'pool metadata' which can reach up to several gigabytes in size. It is replicated to at least five of the fastest drives in the cluster, and synchronized and repaired with the utmost aggressiveness. To my knowledge, pool metadata loss has <EM> never </EM> taken down an actual production deployment of Storage Spaces.) <BR /> <H3> Why? Can you spell parallelism? </H3> <BR /> This may seem complicated, and it is. So why do it? Two reasons. <BR /> <H3> Performance, performance, performance! 
</H3> <BR /> First, striping every volume across every drive unlocks truly awesome potential for reads and writes – especially larger sequential ones – to activate many drives in parallel, vastly increasing IOPS and IO throughput. The unrivaled performance of Storage Spaces Direct compared to competing technologies is largely attributable to this fundamental design. (There is more complexity here, with the infamous <EM> column count </EM> and <EM> interleave </EM> you may remember from 2012 or 2012R2, but that’s beyond the scope of this blog. Spaces automatically sets appropriate values for these in 2016 anyway.) <BR /> <BR /> (This is also why members of the core Spaces engineering team take some offense if you compare mirroring directly to RAID-1.) <BR /> <H3> Improved data safety </H3> <BR /> The second is data safety – it’s related, but worth explaining in detail. <BR /> <BR /> In Storage Spaces, when drives fail, their contents are reconstructed elsewhere based on the surviving copy or copies. We call this ‘repairing’, and it happens automatically and immediately in Storage Spaces Direct. If you think about it, repairing must involve two steps – first, reading from the surviving copy; second, writing out a new copy to replace the lost one. <BR /> <BR /> Bear with me for a paragraph, and imagine if we kept <EM> whole </EM> copies of volumes. (Again, we don’t.) Imagine one drive has every slab of our 1 TiB volume, and another drive has the copy of every slab. What happens if the first drive fails? The other drive has the only surviving copy. Of <EM> every </EM> slab. To repair, we need to read from it. <EM> Every. Last. Byte. </EM> We are obviously limited by the read speed of that drive. Worse yet, we then need to write all that out again to the replacement drive or hot spare, where we are limited by its write speed. Yikes! Inevitably, this leads to contention with ongoing user or application IO activity. Not good. 
<BR /> <BR /> <STRONG> </STRONG> Storage Spaces, unlike some of our friends in the industry, does not do this. <BR /> <BR /> Consider again the scenario where some drive fails. We <EM> do </EM> lose all the slabs stored on that drive. And we <EM> do </EM> need to read from each slab's surviving copy in order to repair. <STRONG> But, where are these surviving copies? </STRONG> They are evenly distributed across almost every other drive in the pool! One lost slab might have its other copy on Drive 15; another lost slab might have its other copy on Drive 03; another lost slab might have its other copy on Drive 07; and so on. So, almost every other drive in the pool has something to contribute to the repair! <BR /> <BR /> Next, we <EM> do </EM> need to write out the new copy of each – <STRONG> where can these new copies be written? </STRONG> Provided there is available capacity, each lost slab can be re-constructed on almost any other drive in the pool! <BR /> <BR /> <STRONG> </STRONG> (For the curious reader: I say <EM> almost </EM> because the requirement that slab copies land in different servers precludes any drives in the same server as the failure from having anything to contribute, read-wise. They were never eligible to get the other copy. Similarly, those drives in the same server as the surviving copy are ineligible to receive the new copy, and so have nothing to contribute write-wise. This detail turns out not to be terribly consequential.) <BR /> <BR /> While this can be non-obvious, it has some significant implications. Most importantly, repairing data faster minimizes the risk that multiple hardware failures will overlap in time, improving overall data safety. It is also more convenient, as it reduces the 'resync' wait time during rolling cluster-wide updates or maintenance. 
And because the read/write burden is spread thinly among all surviving drives, the load on each drive individually is light, which minimizes contention with user or application activity. <BR /> <H3> Reserve capacity </H3> <BR /> For this to work, you need to set aside some extra capacity in the storage pool. You can think of this as giving the contents of a failed drive "somewhere to go" to be&nbsp;repaired. For example, to repair from one drive failure (without immediately replacing it), you should set aside at least one drive’s worth of reserve capacity. (If you are using 2 TB drives, that means leaving 2 TB of your pool unallocated.) This serves the same function as a hot spare, but unlike an actual hot spare, the reserve capacity is taken evenly from every drive in the pool. <BR /> <BR /> [caption id="attachment_7355" align="aligncenter" width="1804"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107500iDDB1B4186E44C0DB" /> Reserve capacity gives the contents of a failed drive "somewhere to go" to be repaired.[/caption] <BR /> <BR /> Reserving capacity is not enforced by Storage Spaces, but we highly recommend it. The more you have, the less urgently you will need to scramble to replace drives when they fail, because your volumes can (and will automatically) repair into the reserve capacity, completely independent of the physical replacement process. <BR /> <BR /> When you do eventually replace the drive, it will automatically take its predecessor’s place in the pool. <BR /> <BR /> Check out our <A href="#" target="_blank"> capacity calculator </A> for help with determining appropriate reserve capacity. <BR /> <H3> Automatic pooling and re-balancing </H3> <BR /> New in Windows 10 and Windows Server 2016, slabs and their copies can be moved around between drives in the storage pool to equilibrate utilization. We call this 'optimizing' or 're-balancing' the storage pool, and it’s essential for scalability in Storage Spaces Direct. 
<BR /> <BR /> For instance, what if we need to add a fourth server to our cluster? <BR /> <CODE> Add-ClusterNode -Name &lt;Name&gt; </CODE> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107501i69682CE88B7C534F" /> <BR /> <BR /> The new drives in this new server will be added automatically to the storage pool. At first, they’re empty. <BR /> <BR /> [caption id="attachment_7485" align="aligncenter" width="764"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107502i456DF74E950C70EB" /> The capacity drives in our fourth server are empty, for now.[/caption] <BR /> <BR /> After 30 minutes, Storage Spaces Direct will automatically begin re-balancing the storage pool – moving slabs around to even out drive utilization. This can take some time (many hours) for larger deployments. You can watch its progress using the following cmdlet. <BR /> <CODE> Get-StorageJob </CODE> <BR /> If you’re impatient, or if your deployment uses Shared SAS Storage Spaces with Windows Server 2016, you can kick off the re-balance yourself using the following cmdlet. <BR /> <CODE> Optimize-StoragePool -FriendlyName "S2D*" </CODE> <BR /> [caption id="attachment_7395" align="aligncenter" width="960"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107503iFFF857471C76C126" /> The storage pool is 're-balanced' whenever new drives are added to even out utilization.[/caption] <BR /> <BR /> Once completed, we see that our 1 TiB volume is (almost) evenly distributed across all the drives in all <EM> four </EM> servers. 
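The "dealing playing cards into equal piles" behavior described in this article can be approximated with a toy allocator. This is purely illustrative (`place_slabs` is a hypothetical model, not the real Spaces placement logic, which also weighs fault domains, reserve capacity, and pool metadata): each slab's copies go to the least-full servers, and within each server to the emptiest drive.

```python
SLAB = 256 * 2**20  # Storage Spaces divides volumes into 256 MB slabs

def place_slabs(volume_bytes, copies, servers, drives_per_server):
    """Deal each slab's copies onto the least-full servers, never
    putting two copies of the same slab in the same server."""
    fill = [[0] * drives_per_server for _ in range(servers)]
    slabs = -(-volume_bytes // SLAB)  # ceiling division
    for _ in range(slabs):
        # pick the `copies` servers currently storing the least data...
        targets = sorted(range(servers), key=lambda s: sum(fill[s]))[:copies]
        for s in targets:
            # ...and within each, the emptiest drive
            d = min(range(drives_per_server), key=lambda i: fill[s][i])
            fill[s][d] += SLAB
    return fill

# 1 TiB two-way mirror across 3 servers with 4 capacity drives each
GiB = 2**30
fill = place_slabs(2**40, copies=2, servers=3, drives_per_server=4)
for s, drives in enumerate(fill):
    # every drive ends up storing ~170 GiB -- all drives fill in lockstep
    print(f"server {s}:", [round(d / GiB, 2) for d in drives])
```

Running this shows the key property from the article: the 2 TiB footprint lands almost perfectly evenly, with no two drives differing by more than a single slab.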
<BR /> <BR /> <STRONG> </STRONG> <BR /> <BR /> [caption id="attachment_7495" align="aligncenter" width="764"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107504iE7F242E6318974A7" /> The slabs of our 1 TiB two-way mirrored volume are now spread evenly across all four servers.[/caption] <BR /> <BR /> And going forward, when we create new volumes, they too will be distributed evenly across all drives in all servers. <BR /> <BR /> This can explain one final phenomenon you might observe – that when a drive fails, <EM> every </EM> volume is marked 'Incomplete' for the duration of the repair. Can you figure out why? <BR /> <H3> Conclusion </H3> <BR /> Okay, that’s it for now. If you’re still reading, wow, thank you! <BR /> <BR /> Let’s review some key takeaways. <BR /> <UL> <BR /> <LI> Storage Spaces Direct automatically creates one storage pool, which grows as your deployment grows. You do not need to modify its settings, add or remove drives from the pool, nor create new pools. </LI> <BR /> <LI> Storage Spaces does not keep whole copies of volumes – rather, it divides them into tiny 'slabs' which are distributed evenly across all drives in all servers. This has some practical consequences. For example, using two-way mirroring with three servers does <EM> not </EM> leave one server empty. Likewise, when drives fail, all volumes are affected for the very short time it takes to repair them. </LI> <BR /> <LI> Leaving some unallocated 'reserve' capacity in the pool allows this fast, non-invasive, parallel repair to happen even before you replace the drive. </LI> <BR /> <LI> The storage pool is 're-balanced' whenever new drives are added, such as on scale-out or after replacement, to equilibrate how much data every drive is storing. This ensures all drives and all servers are always equally "full". </LI> <BR /> </UL> <BR /> <H3> U Can Haz Script </H3> <BR /> In PowerShell, you can see the storage pool by running the following cmdlet. 
<BR /> <CODE> Get-StoragePool S2D* </CODE> <BR /> And you can see the drives in the pool with this simple pipeline. <BR /> <CODE> Get-StoragePool S2D* | Get-PhysicalDisk </CODE> <BR /> Throughout this blog, I showed the output of a script which essentially runs the above, cherry-picks interesting properties, and formats the output all fancy-like. That script is included below, and is also available at <A href="#" target="_blank"> http://cosmosdarwin.com/Show-PrettyPool.ps1 </A> to spare you the 200-line copy/paste. There is also a simplified version <A href="#" target="_blank"> here </A> which forgoes my extravagant helper functions to reduce running time by about 20x and lines of code by about 2x. :-) <BR /> <BR /> Let me know what you think! <BR /> <CODE> # Written by Cosmos Darwin, PM <BR /> # Copyright (C) 2017 Microsoft Corporation <BR /> # MIT License <BR /> # 08/2017 <BR /> <BR /> Function ConvertTo-PrettyCapacity { <BR /> &lt;# <BR /> .SYNOPSIS Convert raw bytes into prettier capacity strings. <BR /> .DESCRIPTION Takes an integer of bytes, converts to the largest unit (kilo-, mega-, giga-, tera-) that will result in at least 1.0, rounds to given precision, and appends standard unit symbol. <BR /> .PARAMETER Bytes The capacity in bytes. <BR /> .PARAMETER UseBaseTwo Switch to toggle use of binary units and prefixes (mebi, gibi) rather than standard (mega, giga). <BR /> .PARAMETER RoundTo The number of decimal places for rounding, after conversion. 
<BR /> #&gt; <BR /> <BR /> Param ( <BR /> [Parameter( <BR /> Mandatory = $True, <BR /> ValueFromPipeline = $True <BR /> ) <BR /> ] <BR /> [Int64]$Bytes, <BR /> [Int64]$RoundTo = 0, <BR /> [Switch]$UseBaseTwo # Base-10 by Default <BR /> ) <BR /> <BR /> If ($Bytes -Gt 0) { <BR /> $BaseTenLabels = ("bytes", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB") <BR /> $BaseTwoLabels = ("bytes", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB", "YiB") <BR /> If ($UseBaseTwo) { <BR /> $Base = 1024 <BR /> $Labels = $BaseTwoLabels <BR /> } <BR /> Else { <BR /> $Base = 1000 <BR /> $Labels = $BaseTenLabels <BR /> } <BR /> $Order = [Math]::Floor( [Math]::Log($Bytes, $Base) ) <BR /> $Rounded = [Math]::Round($Bytes/( [Math]::Pow($Base, $Order) ), $RoundTo) <BR /> [String]($Rounded) + $Labels[$Order] <BR /> } <BR /> Else { <BR /> 0 <BR /> } <BR /> Return <BR /> } <BR /> <BR /> <BR /> Function ConvertTo-PrettyPercentage { <BR /> &lt;# <BR /> .SYNOPSIS Convert (numerator, denominator) into prettier percentage strings. <BR /> .DESCRIPTION Takes two integers, divides the former by the latter, multiplies by 100, rounds to given precision, and appends "%". <BR /> .PARAMETER Numerator Really? <BR /> .PARAMETER Denominator C'mon. <BR /> .PARAMETER RoundTo The number of decimal places for rounding. <BR /> #&gt; <BR /> <BR /> Param ( <BR /> [Parameter(Mandatory = $True)] <BR /> [Int64]$Numerator, <BR /> [Parameter(Mandatory = $True)] <BR /> [Int64]$Denominator, <BR /> [Int64]$RoundTo = 1 <BR /> ) <BR /> <BR /> If ($Denominator -Ne 0) { # Cannot Divide by Zero <BR /> $Fraction = $Numerator/$Denominator <BR /> $Percentage = $Fraction * 100 <BR /> $Rounded = [Math]::Round($Percentage, $RoundTo) <BR /> [String]($Rounded) + "%" <BR /> } <BR /> Else { <BR /> 0 <BR /> } <BR /> Return <BR /> } <BR /> <BR /> Function Find-LongestCommonPrefix { <BR /> &lt;# <BR /> .SYNOPSIS Find the longest prefix common to all strings in an array. <BR /> .DESCRIPTION Given an array of strings (e.g. 
"Seattle", "Seahawks", and "Season"), returns the longest starting substring ("Sea") which is common to all the strings in the array. Not case sensitive. <BR /> .PARAMETER Array The input array of strings. <BR /> #&gt; <BR /> <BR /> Param ( <BR /> [Parameter( <BR /> Mandatory = $True <BR /> ) <BR /> ] <BR /> [Array]$Array <BR /> ) <BR /> <BR /> If ($Array.Length -Gt 0) { <BR /> <BR /> $Exemplar = $Array[0] <BR /> <BR /> $PrefixEndsAt = $Exemplar.Length # Initialize <BR /> 0..$Exemplar.Length | ForEach { <BR /> $Character = $Exemplar[$_] <BR /> ForEach ($String in $Array) { <BR /> If ($String[$_] -Eq $Character) { <BR /> # Match <BR /> } <BR /> Else { <BR /> $PrefixEndsAt = [Math]::Min($_, $PrefixEndsAt) <BR /> } <BR /> } <BR /> } <BR /> # Prefix <BR /> $Exemplar.SubString(0, $PrefixEndsAt) <BR /> } <BR /> Else { <BR /> # None <BR /> } <BR /> Return <BR /> } <BR /> <BR /> Function Reverse-String { <BR /> &lt;# <BR /> .SYNOPSIS Takes an input string ("Gates") and returns the character-by-character reversal ("setaG"). <BR /> #&gt; <BR /> <BR /> Param ( <BR /> [Parameter( <BR /> Mandatory = $True, <BR /> ValueFromPipeline = $True <BR /> ) <BR /> ] <BR /> $String <BR /> ) <BR /> <BR /> $Array = $String.ToCharArray() <BR /> [Array]::Reverse($Array) <BR /> -Join($Array) <BR /> Return <BR /> } <BR /> <BR /> Function New-UniqueRootLookup { <BR /> &lt;# <BR /> .SYNOPSIS Creates hash table that maps strings, particularly server names of the form [CommonPrefix][Root][CommonSuffix], to their unique Root. <BR /> .DESCRIPTION For example, given ("Server-A2.Contoso.Local", "Server-B4.Contoso.Local", "Server-C6.Contoso.Local"), returns key-value pairs: <BR /> { <BR /> "Server-A2.Contoso.Local" -&gt; "A2" <BR /> "Server-B4.Contoso.Local" -&gt; "B4" <BR /> "Server-C6.Contoso.Local" -&gt; "C6" <BR /> } <BR /> .PARAMETER Strings The keys of the hash table. 
<BR /> #&gt; <BR /> <BR /> Param ( <BR /> [Parameter( <BR /> Mandatory = $True <BR /> ) <BR /> ] <BR /> [Array]$Strings <BR /> ) <BR /> <BR /> # Find Prefix <BR /> <BR /> $CommonPrefix = Find-LongestCommonPrefix $Strings <BR /> <BR /> # Find Suffix <BR /> <BR /> $ReversedArray = @() <BR /> ForEach($String in $Strings) { <BR /> $ReversedString = $String | Reverse-String <BR /> $ReversedArray += $ReversedString <BR /> } <BR /> <BR /> $CommonSuffix = $(Find-LongestCommonPrefix $ReversedArray) | Reverse-String <BR /> <BR /> # String -&gt; Root Lookup <BR /> <BR /> $Lookup = @{} <BR /> ForEach($String in $Strings) { <BR /> $Lookup[$String] = $String.Substring($CommonPrefix.Length, $String.Length - $CommonPrefix.Length - $CommonSuffix.Length) <BR /> } <BR /> <BR /> $Lookup <BR /> Return <BR /> } <BR /> <BR /> ### SCRIPT... ### <BR /> <BR /> $Nodes = Get-StorageSubSystem Cluster* | Get-StorageNode <BR /> $Drives = Get-StoragePool S2D* | Get-PhysicalDisk <BR /> <BR /> $Names = @() <BR /> ForEach ($Node in $Nodes) { <BR /> $Names += $Node.Name <BR /> } <BR /> <BR /> $UniqueRootLookup = New-UniqueRootLookup $Names <BR /> <BR /> $Output = @() <BR /> <BR /> ForEach ($Drive in $Drives) { <BR /> <BR /> If ($Drive.BusType -Eq "NVMe") { <BR /> $SerialNumber = $Drive.AdapterSerialNumber <BR /> $Type = $Drive.BusType <BR /> } <BR /> Else { # SATA, SAS <BR /> $SerialNumber = $Drive.SerialNumber <BR /> $Type = $Drive.MediaType <BR /> } <BR /> <BR /> If ($Drive.Usage -Eq "Journal") { <BR /> $Size = $Drive.Size | ConvertTo-PrettyCapacity <BR /> $Used = "-" <BR /> $Percent = "-" <BR /> } <BR /> Else { <BR /> $Size = $Drive.Size | ConvertTo-PrettyCapacity <BR /> $Used = $Drive.VirtualDiskFootprint | ConvertTo-PrettyCapacity <BR /> $Percent = ConvertTo-PrettyPercentage $Drive.VirtualDiskFootprint $Drive.Size <BR /> } <BR /> <BR /> $NodeObj = $Drive | Get-StorageNode -PhysicallyConnected <BR /> If ($NodeObj -Ne $Null) { <BR /> $Node = $UniqueRootLookup[$NodeObj.Name] <BR /> } <BR /> Else { 
<BR /> $Node = "-" <BR /> } <BR /> <BR /> # Pack <BR /> <BR /> $Output += [PSCustomObject]@{ <BR /> "SerialNumber" = $SerialNumber <BR /> "Type" = $Type <BR /> "Node" = $Node <BR /> "Size" = $Size <BR /> "Used" = $Used <BR /> "Percent" = $Percent <BR /> } <BR /> } <BR /> <BR /> $Output | Sort Used, Node | FT </CODE> </BODY></HTML> Wed, 10 Apr 2019 11:27:45 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/deep-dive-the-storage-pool-in-storage-spaces-direct/ba-p/425959 Cosmos Darwin 2019-04-10T11:27:45Z Don't do it: consumer-grade solid-state drives (SSD) in Storage Spaces Direct https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/don-t-do-it-consumer-grade-solid-state-drives-ssd-in-storage/ba-p/425914 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Nov 18, 2016 </STRONG> <BR /> <EM> // This post was written by Dan Lovinger, Principal Software Engineer. </EM> <BR /> <BR /> Howdy, <BR /> <BR /> In the weeks since the release of Windows Server 2016, the amount of interest we’ve seen in <A href="#" target="_blank"> Storage Spaces Direct </A> has been nothing short of spectacular. This interest has translated to many potential customers looking to evaluate Storage Spaces Direct. <BR /> <BR /> Windows Server has a strong heritage with do-it-yourself design. We’ve even done it ourselves with the <A href="#" target="_blank"> Project Kepler-47&nbsp;proof of concept </A> ! While over the coming months there will be many OEM-validated solutions coming to market, many more experimenters are once again piecing together their own configurations. <BR /> <BR /> This is great, and it has led to a lot of questions, particularly about Solid-State Drives (SSDs). One dominates: <EM> "Is <STRONG> [some drive] </STRONG> a good choice for a cache device?" 
</EM> Another comes in close behind: <EM> "We’re using <STRONG> [some drive] </STRONG> as a cache device and performance is horrible, what gives?" </EM> <BR /> <BR /> [caption id="attachment_7256" align="aligncenter" width="500"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107488i8A780B4A2B881956" /> The flash translation layer masks a variety of tricks an SSD can use to accelerate performance and extend its lifetime, such as buffering and spare capacity.[/caption] <BR /> <BR /> <STRONG> Some background on SSDs </STRONG> <BR /> <BR /> As I write this in late 2016, an SSD is universally a device built from a set of NAND flash dies connected to an internal controller, called the flash translation layer ("FTL"). <BR /> <BR /> NAND flash is inherently unstable. At the physical level, a flash cell is a charge trap device – a bucket for storing electrons. The high voltages needed to trigger the quantum tunneling process that moves electrons in and out of the cell – your data – slowly accumulate damage at the atomic level. Failure does not happen all at once. Charge degrades in-place over time and even reads aren’t without cost, a phenomenon known as read disturb. <BR /> <BR /> The number of electrons in the cell’s charge trap translates to a measurable voltage. At its most basic, a flash cell stores one on/off bit – a single level cell (SLC) – and the difference between 0 and 1 is “easy”. There is only one threshold voltage to consider. On one side the cell represents 0, on the other it is 1. <BR /> <BR /> However, conventional SSDs have moved on from SLC designs. Common SSDs now store two (MLC) or even three (TLC) bits per cell, requiring four (00, 01, 10, 11) or eight (001, 010, … 110, 111) different charge levels. On the horizon is 4-bit QLC NAND, which will require sixteen! As the damage accumulates it becomes difficult to reliably set charge levels; eventually, they cannot store new data. 
This happens faster and faster as bit densities increase. <BR /> <UL> <BR /> <LI> SLC: 100,000 or more writes per cell </LI> <BR /> <LI> MLC: 10,000 to 20,000 </LI> <BR /> <LI> TLC: low to mid 1,000’s </LI> <BR /> <LI> QLC: mid-100’s </LI> <BR /> </UL> <BR /> The FTL has two basic defenses. <BR /> <UL> <BR /> <LI> error correcting codes (ECC) stored alongside the data </LI> <BR /> <LI> extra physical capacity, over and above the apparent size of the device, "over-provisioning" </LI> <BR /> </UL> <BR /> Both defenses work like a bank account. <BR /> <BR /> Over the short term, some amount of the ECC is needed to recover the data on each read. Lightly-damaged cells or recently-written data won’t draw heavily on ECC, but as time passes, more of the ECC is necessary to recover the data. When it passes a safety margin, the data must be re-written to “refresh” the data and ECC, and the cycle continues. <BR /> <BR /> Across a longer term, the over-provisioning in the device replaces failed cells and preserves the apparent capacity of the SSD. Once this account is drawn down, the device is at the end of its life. <BR /> <BR /> To complete the physical picture, NAND is not freely writable. A die is divided into what we refer to as program/erase "P/E" pages. These are the actual writable elements. A page must first be erased to prepare it for writing; then the entire page can be written at once. A page may be as small as 16K, or potentially much larger. Any single write that arrives in the SSD probably won’t line up with the page size! <BR /> <BR /> And finally, NAND never re-writes in place. The FTL is continuously keeping track of wear, preparing fresh erased pages, and consolidating valid data sitting in pages alongside stale data corresponding to logical blocks which have already been re-written. These are additional reasons for over-provisioning. 
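To make the page-alignment point concrete, here is a toy model, in Python for brevity, of what happens when small host writes land on flash pages. This is illustrative arithmetic only, not a model of any real FTL; the 16 KiB page size and 4 KiB write size are assumptions taken from the discussion above.

```python
# Toy model (not any real FTL): compare NAND bytes programmed with and
# without a write buffer, for small host writes onto 16 KiB flash pages.
PAGE = 16 * 1024        # assumed flash page size, per the article
HOST_WRITE = 4 * 1024   # a typical small random write
NUM_WRITES = 1000

# Unbuffered: each 4 KiB write must program a whole 16 KiB page right away.
unbuffered_nand = NUM_WRITES * PAGE

# Buffered: the FTL coalesces writes in RAM and programs only full pages.
host_bytes = NUM_WRITES * HOST_WRITE
buffered_nand = -(-host_bytes // PAGE) * PAGE  # round up to whole pages

print(unbuffered_nand // host_bytes)  # write amplification without buffering -> 4
print(buffered_nand // host_bytes)    # with coalescing -> 1
```

Real devices complicate this with garbage collection and wear leveling, but this basic multiplier is one reason the FTL works so hard to coalesce writes, and why every wasted program cycle matters for endurance.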
<BR /> <BR /> <STRONG> </STRONG> <BR /> <BR /> [caption id="attachment_7325" align="aligncenter" width="879"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107489i91E297F89551FD8A" /> In consumer devices, and especially in mobile, an SSD can safely leverage an unprotected, volatile cache because the device’s battery ensures it will not unexpectedly lose power. In servers, however, an SSD must provide its own power protection, typically in the form of a capacitor.[/caption] <BR /> <BR /> <STRONG> Buffers and caches </STRONG> <BR /> <BR /> The bottom line is that a NAND flash SSD is a complex, dynamic environment and there is a lot going on to keep your data safe. As device densities increase, it is getting ever harder. We must maximize the value of each write, as it takes the device one step closer to failure. Fortunately, we have a trick: a buffer. <BR /> <BR /> A buffer in an SSD is just like the cache in the system that surrounds it: some memory which can accumulate writes, allowing the user/application request to complete while it gathers more and more data to write efficiently to the NAND flash. Many small operations turn into a small number of larger operations. Just like the memory in a conventional computer, though, on its own that buffer is volatile – if a power loss occurs, any pending write operations are lost. <BR /> <BR /> Losing data is, of course, not acceptable. Storage Spaces Direct is at the far end of a series of actions which have led to it getting a write. A virtual machine on another computer may have had an application issue a flush which, in a physical system, would put the data on stable storage. After Storage Spaces Direct acknowledges <EM> any </EM> write, it must be stable. <BR /> <BR /> How can any SSD have a volatile cache!? Simple, and it is a crucial detail of how the SSD market has differentiated itself: you are very likely reading this on a device with a battery! 
<EM> Consumer </EM> flash is volatile in the device but not volatile when considering the entire system – your phone, tablet or laptop. Making a cache non-volatile requires some form of power storage (or new technology …), which adds unneeded expense in the consumer space. <BR /> <BR /> What about servers? In the enterprise space, the cost and complexity of providing complete power safety to a collection of servers can be prohibitive. This is the design point enterprise SSDs sit in: the added cost of internal power capacity to allow saving the buffer content is small. <BR /> <BR /> [caption id="attachment_7295" align="aligncenter" width="879"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107490i4CF7EC484522FA97" /> An (older) enterprise-grade SSD, with its removable and replaceable built-in battery![/caption] <BR /> <BR /> [caption id="attachment_7305" align="aligncenter" width="879"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107491iD3FFEE2EDED0EE13" /> This newer enterprise-grade SSD, foreground, uses a capacitor (the three little yellow things, bottom right) to provide power-loss protection.[/caption] <BR /> <BR /> Along with volatile caches, consumer flash is also universally of lower endurance. A consumer device targets environments with light activity. Extremely dense, inexpensive, fragile NAND flash - which may wear out after only a thousand writes - could still provide many years of service. However, expressed in total writes over time or capacity written per day, a consumer device could wear out more than <EM> 10x </EM> faster than available enterprise-class SSD. <BR /> <BR /> So, where does that leave us? Two requirements for SSDs for Storage Spaces Direct. 
One hard, one soft, but they normally go together: <BR /> <UL> <BR /> <LI> the device must have a non-volatile write cache </LI> <BR /> <LI> the device <EM> should </EM> have enterprise-class endurance </LI> <BR /> </UL> <BR /> But … could I get away with it? And more crucially – for us – what happens if I just put a consumer-grade SSD with a volatile write cache in a Storage Spaces Direct system? <BR /> <BR /> <BR /> <BR /> <STRONG> An experiment with consumer-grade SSDs </STRONG> <BR /> <BR /> For this experiment, we’ll be using a new-out-of-box 1 TB consumer class SATA SSD. While we won’t name it, it is a first tier, high quality, widely available device. It just happens to not be appropriate for an enterprise workload like Storage Spaces Direct, as we’ll see shortly. <BR /> <BR /> In round numbers, its data sheet says the following: <BR /> <UL> <BR /> <LI> QD32&nbsp;4K Read: 95,000 IOPS </LI> <BR /> <LI> QD32 4K Write: 90,000 IOPS </LI> <BR /> <LI> Endurance: 185TB over the device lifetime </LI> <BR /> </UL> <BR /> Note: QD ("queue depth") is geek-speak for the targeted number of IOs outstanding during a storage test. Why do you always see 32? That’s the SATA Native Command Queueing (NCQ) limit to which commands can be pipelined to a SATA device. SAS and especially NVME can go much deeper. <BR /> <BR /> Translating the endurance to the widely-used device-writes-per-day (DWPD) metric, over the device’s 5-year warranty period that is <BR /> <CODE> 185 TB / (365 days x 5 years = 1825 days) = ~ 100 GB writable per day <BR /> 100 GB / 1 TB total capacity = 0.10 DWPD <BR /> </CODE> <BR /> The device can handle just over 100 GB each day for 5 years before its endurance is exhausted. That’s a lot of Netflix and web browsing for a single user! Not so much for a large set of virtualized workloads. <BR /> <BR /> To gather the data below, I prepared the device with a 100 GiB load file, written through sequentially a little over 2 times. 
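As an aside, the endurance conversion above generalizes to any datasheet. A minimal sketch in Python (the 185 TB and 1 TB figures are this device's; `dwpd` is just a hypothetical name for the arithmetic):

```python
# Convert a datasheet endurance rating (total TB written) into the
# device-writes-per-day (DWPD) metric over the warranty period.
def dwpd(endurance_tb, capacity_tb, warranty_years=5):
    days = 365 * warranty_years          # e.g. 1825 days for 5 years
    writable_per_day_tb = endurance_tb / days
    return writable_per_day_tb / capacity_tb

# The consumer drive in this post: 185 TB endurance, 1 TB capacity.
print(round(dwpd(185, 1), 2))  # -> 0.1
```

Compare that 0.1 DWPD with typical enterprise cache devices, which are commonly rated at 3 DWPD or more.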
I used <A href="#" target="_blank"> DISKSPD 2.0.18 </A> to do a QD8 70:30 4 KiB mixed read/write workload using 8 threads, each issuing a single IO at a time to the SSD. First with the write buffer enabled: <BR /> <CODE> diskspd.exe -t8 -b4k -r4k -o1 -w30 -Su -D -L -d1800 -Rxml Z:\load.bin </CODE> <BR /> [caption id="attachment_7315" align="aligncenter" width="500"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107492i2E42F473A32FC9C0" /> Normal unbuffered IO sails along, with a small write cliff.[/caption] <BR /> <BR /> The first important note here is the length of the test: 30 minutes. This shows an abrupt drop of about 10,000 IOPS two minutes in – this is normal, certainly for consumer devices. It likely represents the FTL running out of pre-erased NAND ready for new writes. Once its reserve runs out, the device runs slower until a break in the action lets it catch back up. With web browsing and other consumer scenarios, the chances of noticing this are small. <BR /> <BR /> An aside: this is a good, stable device in each mode of operation – behavior before and after the "write cliff" is very clean. <BR /> <BR /> Second, note that the IOPS are … a bit different than the data sheet might have suggested, even before it reaches steady operation. We’re intentionally using a light QD8 70:30 4K pattern to drive it more like a generalized workload. It still rolls over the write cliff. Under sustained, mixed IO pressure the FTL has much more work to take care of and it shows. <BR /> <BR /> That’s with the buffer on, though. 
Now just adding write-through (with -Su <EM> w </EM> ): <BR /> <CODE> diskspd.exe -t8 -b4k -r4k -o1 -w30 -Suw -D -L -d1800 -Rxml Z:\load.bin </CODE> <BR /> [caption id="attachment_7316" align="aligncenter" width="500"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107493i363570F7197C9F25" /> Write-through IO exposes the true latency of NAND, normally masked by the FTL/buffer.[/caption] <BR /> <BR /> Wow! <BR /> <BR /> First: it’s great that the device honors write-through requests. In the consumer space, this gives an application a useful tool for making data durable when it must be durable. This is a good device! <BR /> <BR /> Second, <EM> oh my </EM> does the performance drop off. This is no longer an "SSD": especially as it goes over the write cliff – which is still there – it’s merely a fast HDD, at about 220 IOPS. Writing NAND is slow! This is the FTL forced to push all the way into the NAND flash dies, immediately, without being able to buffer, de-conflict the read and write IO streams and manage all the other background activity it needs to do. <BR /> <BR /> Third, those immediate writes take what is already a device with modest endurance and deliver a truly crushing blow to its total lifetime. <BR /> <BR /> Crucially, <EM> this </EM> is how Storage Spaces Direct would see this SSD. Not much of a "cache" anymore. <BR /> <BR /> <BR /> <BR /> <STRONG> So, why does a non-volatile buffer help? </STRONG> <BR /> <BR /> It lets the SSD claim that a write is stable once it is in the buffer. A write-through operation – or a flush, or a request to disable the cache – can be honored without forcing all data directly into the NAND. We’ll get the good behavior, the stated endurance, <EM> and </EM> the data stability we require for reliable, software-defined storage to a complex workload. <BR /> <BR /> In short, your device will behave much as we saw in the first chart: a nice, flat, fast performance profile. A good cache device. 
If it’s NVMe it may be even more impressive, but that’s a thought for another time. <BR /> <BR /> <BR /> <BR /> <STRONG> Finally, how do you identify a device with a non-volatile buffer cache? </STRONG> <BR /> <BR /> Datasheet, datasheet, datasheet. Look for language like: <BR /> <UL> <BR /> <LI> "Power loss protection" or "PLP" <BR /> <UL> <BR /> <LI> Samsung SM863, and related </LI> <BR /> <LI> Toshiba HK4E series, and related </LI> <BR /> </UL> <BR /> </LI> <BR /> <LI> "Enhanced power loss data protection" <BR /> <UL> <BR /> <LI> Intel S3510, S3610, S3710, P3700 series, and related </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> … along with many others across the industry. You should be able to find a device from your favored provider. These will be more expensive than consumer grade devices, but hopefully we’ve convinced you why they are worth it. <BR /> <BR /> Be safe out there! <BR /> <BR /> / Dan Lovinger </BODY></HTML> Wed, 10 Apr 2019 11:25:35 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/don-t-do-it-consumer-grade-solid-state-drives-ssd-in-storage/ba-p/425914 Cosmos Darwin 2019-04-10T11:25:35Z Work Folders for Android can now upload files! https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/work-folders-for-android-can-now-upload-files/ba-p/425889 <P><STRONG> First published on TECHNET on Nov 08, 2016 </STRONG> <BR />Hi all, <BR /><BR />I’m Jeff Patterson, Program Manager for Work Folders. <BR /><BR />We’re excited to announce that we’ve released an updated version of Work Folders for Android to the Google Play <A href="#" target="_blank" rel="noopener"> Store </A> which enables users to sync files that were created or edited on their Android device. 
<BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107485i36559CD8C244E77D/image-size/large?v=v2&amp;px=999" role="button" /></span> </P> <H3>Overview</H3> <P><BR />Work Folders is a Windows Server feature, introduced in Windows Server 2012 R2, that enables individual employees to access their files securely from inside and outside the corporate environment. The Work Folders app connects to the server and enables file access on your Android phone and tablet. Work Folders enables this while allowing the organization’s IT department to fully secure that data. </P> <H3>&nbsp;</H3> <H3>What’s New</H3> <P><BR />Using the latest version of Work Folders for Android, users can now: <BR /><BR /></P> <UL> <UL> <LI>Sync files that were created or edited on their device</LI> </UL> </UL> <UL> <UL> <LI>Take pictures and write notes within the Work Folders application</LI> </UL> </UL> <UL> <UL> <LI>When working within other applications (e.g., Microsoft Word), the Work Folders location can be selected when opening or saving files. No need to open the Work Folders app to sync your files.</LI> </UL> </UL> <P><BR />For the complete list of Work Folders for Android features, please reference the feature list section below. 
<BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107486i92B8501D98D4ADF0/image-size/large?v=v2&amp;px=999" role="button" /></span> <BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107487i8FBBB6C00D3190DF/image-size/large?v=v2&amp;px=999" role="button" /></span> </P> <H3>Work Folders&nbsp;for Android - Feature List</H3> <P>&nbsp;</P> <UL> <UL> <LI>Sync files that were created or edited on your device</LI> </UL> </UL> <UL> <UL> <LI>Take pictures and write notes within the Work Folders app</LI> </UL> </UL> <UL> <UL> <LI>Pin files for offline viewing -&nbsp;saves storage space by&nbsp;showing all available files but locally storing and keeping in sync only the files you care about.</LI> </UL> </UL> <UL> <UL> <LI>Files are always encrypted -&nbsp;on the wire and at rest on the device.</LI> </UL> </UL> <UL> <UL> <LI>Access to the app is protected by an app passcode -&nbsp;keeping others out even if the device is left unlocked and unattended.</LI> </UL> </UL> <UL> <UL> <LI>Allows for DIGEST and Active Directory Federation Services (ADFS) authentication mechanisms including multi factor authentication.</LI> </UL> </UL> <UL> <UL> <LI>Search for files and folders</LI> </UL> </UL> <UL> <UL> <LI>Open files in other apps that might be specialized to work with a certain file type</LI> </UL> </UL> <UL> <UL> <LI>Integration with&nbsp;Microsoft Intune</LI> </UL> </UL> <H3><BR />Android Version Support</H3> <P>&nbsp;</P> <UL> <UL> <LI>Work Folders for Android is supported on all devices running Android Version 4.4 KitKat or later.</LI> </UL> </UL> <H3><BR />Known Issues</H3> <UL> <UL> <LI>Microsoft Office files are read-only when opening the files from the Work Folders app. 
To workaround this issue, open the file from the Office application (e.g., Microsoft Word).</LI> </UL> </UL> <H3><BR />Blogs and Links</H3> <P><BR />If you’re interested in learning more about Work Folders, here are some great resources: <BR /><BR /></P> <UL> <UL> <LI>Work Folders <A href="#" target="_blank" rel="noopener"> blogs </A> on Server Storage blog</LI> </UL> </UL> <UL> <UL> <LI>Nir Ben Zvi <A href="#" target="_blank" rel="noopener"> introduced Work Folders on Windows Server 2012 R2 </A></LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Work Folders for iOS help </A></LI> </UL> </UL> <UL> <UL> <LI>Work Folders for Windows 7 SP1: Check out this <A href="#" target="_blank" rel="noopener"> post by Jian Yan </A> on the Server Storage blog</LI> </UL> </UL> <UL> <UL> <LI>Roiy Zysman posted <A href="#" target="_blank" rel="noopener"> a great list of Work Folders resources in this blog </A> .</LI> </UL> </UL> <UL> <UL> <LI>See this <A href="#" target="_blank" rel="noopener"> Q&amp;A With Fabian Uhse, Program Manager for Work Folders </A> in Windows Server 2012 R2</LI> </UL> </UL> <UL> <UL> <LI>Also, check out these posts about <A href="#" target="_blank" rel="noopener"> how to setup a Work Folders test lab </A> , <A href="#" target="_blank" rel="noopener"> certificate management </A> , <BR />and <A href="#" target="_blank" rel="noopener"> tips on running Work Folders on Windows Failover Clusters </A> .</LI> </UL> </UL> <UL> <UL> <LI>Using Work Folders with <A href="#" target="_blank" rel="noopener"> IIS websites or the Windows Server Essentials Role (Resolving Port Conflicts) </A></LI> </UL> </UL> <P><BR />Introduction and Getting Started <BR /><BR /></P> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Introducing Work Folders on Windows Server 2012 R2 </A></LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Work Folders Overview </A> on TechNet</LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" 
rel="noopener"> Designing a Work Folders Implementation </A> on TechNet</LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Deploying Work Folders </A> on TechNet</LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Work folders FAQ (Targeted for Work Folders end users) </A></LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Work Folders Q&amp;A </A></LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Work Folders Powershell Cmdlets </A></LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Work Folders Test Lab Deployment </A></LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Windows Storage Server 2012 R2 — Work Folders </A></LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Work Folders for Windows 7 </A></LI> </UL> </UL> <P><BR />Advanced Work Folders Deployment and Management <BR /><BR /></P> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Work Folders interoperability with other file server technologies </A></LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Performance Considerations for Work Folders Deployments </A></LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Windows Server 2012 R2 – Resolving Port Conflict with IIS Websites and Work Folders </A></LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> A new user attribute for Work Folders server Url </A></LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Work Folders Certificate Management </A></LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Work Folders on Clusters </A></LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Monitoring Windows Server 2012 R2 Work Folders Deployments. 
</A></LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Deploying Work Folders with AD FS and Web Application Proxy (WAP) </A></LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Deploying Windows Server 2012 R2 Work Folders in a Virtual Machine in Windows Azure </A></LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Offline Files (CSC) to Work Folders Migration Guide </A></LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Management with Microsoft Intune </A></LI> </UL> </UL> <P>Videos <BR /><BR /></P> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Windows Server Work Folders Overview: My Corporate Data on All of My Devices </A></LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Windows Server Work Folders – a Deep Dive into the New Windows Server Data Sync Solution </A></LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Work Folders on Channel 9 </A></LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Work Folders iPad reveal – TechEd Europe 2014 </A> (in German)</LI> </UL> </UL> <UL> <UL> <LI><A title="Work Folders on the &quot;Edge Show&quot;" href="#" target="_blank" rel="noopener"> Work Folders on the “Edge Show” </A> (iPad + iPhone video, English)</LI> </UL> </UL> <UL> <UL> <LI><A href="#" target="_blank" rel="noopener"> Work Folders for Android on Channel 9</A></LI> </UL> </UL> <P>&nbsp;</P> Tue, 26 May 2020 02:40:12 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/work-folders-for-android-can-now-upload-files/ba-p/425889 Jeff Patterson 2020-05-26T02:40:12Z Survey: Internet-connected servers https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/survey-internet-connected-servers/ba-p/425884 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Oct 20, 2016 </STRONG> <BR /> Hi folks, <A href="#" target="_blank"> Ned </A> here again with 
another quickie engineering survey. As always, it's anonymous, requires no registration, and should take no more than 30 seconds. We'd like to learn about your server firewalling, plus a couple drill-down questions. <BR /> <BR /> <STRONG> Survey: <A href="#" target="_blank"> What percentage of your Windows Servers have Internet access? </A> </STRONG> <BR /> <BR /> This may require some uncomfortable admissions, but it's for a good cause, I promise. Honesty is always the best policy in helping us make better software for you. <BR /> <BR /> <STRONG> Note: </STRONG> for a handful of you early survey respondents, the # of servers question had the wrong limit. It's fixed now and you can adjust up. <BR /> <BR /> - Ned "census taker" Pyle <BR /> <BR /> </BODY></HTML> Wed, 10 Apr 2019 11:23:59 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/survey-internet-connected-servers/ba-p/425884 Ned Pyle 2019-04-10T11:23:59Z Storage Spaces Direct with Persistent Memory https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-spaces-direct-with-persistent-memory/ba-p/425881 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Oct 17, 2016 </STRONG> <BR /> Howdy, <A href="#" target="_blank"> Claus </A> here again, this time with Dan Lovinger. <BR /> <BR /> At our recent Ignite conference we had some very exciting results and experiences to share around Storage Spaces Direct and Windows Server 2016. One of the more exciting ones that you may have missed was an experiment we did on a set of systems built with the help of Mellanox and Hewlett-Packard Enterprise’s NVDIMM-N technology. <BR /> <BR /> What’s exciting about NVDIMM-N is that it is part of the first wave of new memory technologies referred to as Persistent Memory (PM), sometimes also referred to as Storage Class Memory (SCM). 
A PM device offers persistent storage – it stays around after the server resets or the power drops – but can sit on the super high speed memory bus, accessible at the granularity (bytes, not blocks!) and latencies we’re more familiar with for memory. In the case of NVDIMM-N it is literally memory (DRAM) with the addition of natively persistent storage, usually NAND flash, and enough power capacity to allow capture of the DRAM to that persistent storage regardless of conditions. <BR /> <BR /> These 8 HPE ProLiant DL380 Gen9 nodes had Mellanox CX-4 100Gb adapters connected through a Mellanox Spectrum switch and <STRONG> <EM> 16 </EM> </STRONG> 8GiB NVDIMM-N modules along with 4 NVMe flash drives – <STRONG> <EM> each </EM> </STRONG> – for an eye-watering <STRONG> <EM> 1TiB </EM> </STRONG> of NVDIMM-N around the cluster. <BR /> <BR /> Of course, being storage nerds, what did we do: we created three-way mirrored Storage Spaces Direct virtual disks over each type of storage – NVMe and, in their block personality, the NVDIMM-N – and benched them off. Our partners in SQL Server showed it like this: <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107484iE7FADA67048ECBF5" /> <BR /> <BR /> What we’re seeing here are simple, low-intensity DISKSPD loads – equal in composition – which lets us highlight the relative latencies of each type of storage. In the first pair of 64K IO tests we see the dramatic difference which gets PM up to the line rate of the 100Gb network before NVMe is even 1/3 <SUP> rd </SUP> of the way there. In the second we can see how PM neutralizes the natural latency of going all the way into a flash device – even as efficient and high speed as our NVMe devices were – and provides reads at less than 180us to the 99 <SUP> th </SUP> percentile – <STRONG> <EM> 99% of the read IO was over three times faster </EM> </STRONG> for three-way mirrored, two-node fault tolerant storage! <BR /> <BR /> We think this is pretty exciting!
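<BR /> <BR /> If you'd like to run comparable experiments against your own storage, DISKSPD is freely available. The exact parameters for these runs weren't published, so the invocation below is only an illustrative sketch of a low-intensity 64K random read test – the target path and every parameter are placeholders:

```powershell
# Illustrative DISKSPD run: 64 KiB random reads (-b64K -r -w0), low intensity
# (2 threads, 4 outstanding IOs), 60 seconds, with latency percentiles (-L).
# The test file path and 10 GiB size (-c10G) are placeholders.
.\diskspd.exe -b64K -d60 -t2 -o4 -r -w0 -L -c10G C:\ClusterStorage\Volume1\test.dat
```

The latency percentile table at the end of the output is where you can compare, say, 99th-percentile read latency across storage types.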
Windows Server is on a journey to integrate Persistent Memory and this is one of the steps along the way. While we may do different things with it in the future, this was an interesting experiment to point to where we may be able to go (and more!). <BR /> <BR /> Let us know what you think. <BR /> <BR /> Claus and Dan. <BR /> <BR /> p.s. if you’d like to see the entire SQL Server 2016 &amp; HPE Persistent Memory presentation at Ignite (video available!), follow this link: <A href="#" target="_blank"> https://myignite.microsoft.com/sessions/2767 </A> </BODY></HTML> Wed, 10 Apr 2019 11:23:55 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-spaces-direct-with-persistent-memory/ba-p/425881 Claus Joergensen 2019-04-10T11:23:55Z TLS for Windows Standards-Based Storage Management (SMI-S) and System Center Virtual Machine Manager (VMM) https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/tls-for-windows-standards-based-storage-management-smi-s-and/ba-p/425878 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Oct 14, 2016 </STRONG> <BR /> In a <A href="#" target="_blank"> previous blog post </A> , I discussed setting up the Windows Standards-Based Storage Management Service (referred to below as Storage Service) on Windows Server 2012 R2. For Windows Server 2016 and System Center 2016 Virtual Machine Manager, configuration is much simpler since installation of the service includes setting up the necessary self-signed certificate. We also allow using CA signed certificates now provided the Common Name (CN) is “MSSTRGSVC”. <BR /> <BR /> Before I get into those changes, I want to talk about the Transport Layer Security 1.2 (TLS 1.2) protocol, which is now a required part of the Storage Management Initiative Specification (SMI-S). 
<BR /> <H2> TLS 1.2 </H2> <BR /> Secure communication over the Hypertext Transfer Protocol (HTTPS) is accomplished using the encryption capabilities of <A href="#" target="_blank"> Transport Layer Security </A> , which is itself an update to the much older Secure Sockets Layer (SSL) protocol – although it is still commonly called Secure Sockets. Over the years, several vulnerabilities in SSL and TLS have been exposed, making earlier versions of the protocol insecure. TLS 1.2 is the latest version of the protocol and is defined by <A href="#" target="_blank"> RFC 5246 </A> . <BR /> <BR /> The Storage Networking Industry Association (SNIA) made TLS 1.2 a mandatory part of SMI-S (even <EM> retroactively </EM> ). In 2015, the International Organization for Standardization (ISO) published <A href="#" target="_blank"> ISO 27040:2015 </A> “Information Technology – Security Techniques – Storage Security”, and this is incorporated by reference into the SMI-S protocol and pretty much all things SNIA. <BR /> <BR /> Even though TLS 1.2 was introduced in 2008, its uptake was impeded by interoperability concerns. Adoption was accelerated after several exploits (e.g., <A href="#" target="_blank"> BEAST </A> ) ushered out the older SSL 3.0 and TLS 1.0 protocols (TLS 1.1 did not see broad adoption). Microsoft Windows offered <A href="#" target="_blank"> support </A> for TLS 1.2 beginning in Windows 7 and Windows Server 2008 R2. That being said, there were still a lot of interop issues at the time, and TLS 1.1 and 1.2 support was hidden behind various registry keys. <BR /> <BR /> Now it’s 2016, and there are no more excuses for using older, proven-insecure protocols, so it’s time to update your SMI-S providers. But unfortunately, you still need to take action to fully enable TLS 1.2.
There are three primary Microsoft components that are used by the Storage Service which affect HTTPS communications between providers and the service: SCHANNEL, which implements the SSL/TLS protocols; HTTP.SYS, an HTTP server used by the Storage Service to support indications; and .NET 4.x, used by Virtual Machine Manager (VMM) (not by the Storage Service itself). <BR /> <BR /> I’m going to skip some of the details of how clients and servers negotiate TLS versions (this may or may not allow older versions) and cipher suites (the most secure suite mutually agreed upon is always selected, but refer to this <A href="#" target="_blank"> site </A> for a recent exploit involving certain cipher suites). <BR /> <H3> A sidetrack: Certificate Validation </H3> <BR /> How certificates are validated varies depending on whether the certificate is self-signed or created by a trusted Certificate Authority (CA). For the most part, SMI-S will use self-signed certificates – and providers should never, ever, be exposed to the internet or another untrusted network. A quick overview: <BR /> <BR /> A CA signed certificate contains a signature that indicates what authority signed it. The user of that certificate will be able to establish a chain of trust to a well-known CA. <BR /> <BR /> A self-signed certificate needs to establish this trust in some other way. Typically, the self-signed certificate will need to be loaded into a local certificate store on the system that will need to validate it. See below for more on this. <BR /> <BR /> In either case, the following conditions must be true: the certificate has not expired; the certificate has not been revoked (look up <A href="#" target="_blank"> Revocation List </A> for more about this); and the purpose of the certificate makes sense for its use. Additional checks include “Common Name” matching (disabled by default for the Storage Service; must not be used by providers) and key length. 
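<BR /> <BR /> As a quick sanity check on the Windows side, you can inspect a certificate's validity window and validate its chain and purpose from PowerShell. This is only a sketch – the store location and subject filter below are assumptions for illustration, not Storage Service requirements:

```powershell
# Illustrative check: find a certificate by subject in the local machine store
$cert = Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like "*MSSTRGSVC*" }

# Show the validity window (expiry is one of the conditions listed above)
$cert | Format-List Subject, NotBefore, NotAfter

# Validate the chain, expiry, and purpose against the SSL policy
Test-Certificate -Cert $cert -Policy SSL
```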
Note that we have seen issues when a certificate’s “valid from” time has not yet been reached on one side because of a time mismatch between the provider and the storage service. These tend to cure themselves once the start time has passed on both ends of the negotiation. When using the Windows PowerShell cmdlet <A href="#" target="_blank"> Register-SmisProvider </A> you will see this information. <BR /> <BR /> In some instances, your provider may ignore one or more of the validation rules and just accept any certificate that we present. A useful debugging approach but not very secure! <BR /> <BR /> <STRONG> One more detail </STRONG> : when provisioning certificates for the SMI-S providers, make sure they use key lengths of 1024 or 2048 bits only. 512 bit keys are no longer supported due to recent exploits. And odd length keys won’t work either. At least I have never seen them work, even though technically allowed. <BR /> <H2> Microsoft product support for TLS 1.2 </H2> <BR /> This article will discuss Windows Server and System Center releases, and the .NET Framework.&nbsp;It should not be necessary to mess with registry settings that control cipher suites or SSL versions except as noted below for the .NET Framework. <BR /> <H2> Windows Server 2012 R2/2016 </H2> <BR /> Since the initial releases of these products, there have been <EM> many </EM> security fixes released as patches, and more than a few of them changed SCHANNEL and HTTP.SYS behavior. Rather than attempt to enumerate all of the changes, let’s just say it is essential to apply ALL security hotfixes. <BR /> <BR /> If you are using Windows Server 2016 RTM, you also need to apply all available updates. <BR /> <BR /> There is no .NET dependency. <BR /> <H2> System Center 2012 R2 Virtual Machine Manager </H2> <BR /> SC 2012 R2 VMM uses the .NET runtime library but the Storage Service does not.
If you are using VMM 2012 R2, to fully support TLS 1.2, the most recent version of .NET 4.x should be installed; this is currently <A href="#" target="_blank"> .NET 4.6.2 </A> . Also, update VMM to the latest Update Release. <BR /> <BR /> If, for some reason, you must stay on .NET 4.5.2, then a registry change will be required to turn on TLS 1.2 on the VMM Server(s) since by default, .NET 4.5.2 only enables SSL 3.0 and TLS 1.0. <BR /> <BR /> The registry value (which changes .NET to allow TLS 1.0, TLS 1.1 and TLS 1.2 and <EM> not </EM> SSL 3.0, which you should never use anyway) is: <BR /> <BR /> HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\ <STRONG> v4.0.30319 </STRONG> "SchUseStrongCrypto"=dword:00000001 <BR /> <BR /> On 64-bit systems, set the same value under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\ <STRONG> v4.0.30319 </STRONG> as well; the Wow6432Node key only affects 32-bit processes. <BR /> <BR /> You can use this PowerShell command to change the behavior (note the HKLM: drive prefix – PowerShell registry paths go through the registry provider, not the raw hive name): <BR /> <BR /> Set-ItemProperty -Path "HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319" -Name "SchUseStrongCrypto" -Value 1 -Type DWord -Force <BR /> <BR /> (Note that the version number highlighted applies regardless of a particular release of .NET 4.5; do not change it!) <BR /> <BR /> This change will apply to every application using the .NET 4.x runtime on the same system. Note that Exchange 2013 does not support 4.6.x, but you shouldn’t be running VMM and Exchange on the same server anyway! Again, apply this to the VMM <EM> Server </EM> system or VM, which may not be the same place you are running the VMM <EM> UI. </EM> <BR /> <H2> System Center 2016 VMM </H2> <BR /> VMM 2016 uses .NET 4.6.2; no changes required. <BR /> <H2> Exporting the Storage Service Certificate </H2> <BR /> Repeating the information from a previous blog, follow these steps on the VMM Server machine: <BR /> <UL> <BR /> <LI> Run MMC.EXE from an administrator command prompt. </LI> <BR /> <LI> Add the Certificates Snap-in using the File\Add/Remove Snap-in menu.
</LI> <BR /> <LI> Make sure you select Computer Account when the wizard prompts you, select Next and leave Local Computer selected. Click Finish. </LI> <BR /> <LI> Click OK. </LI> <BR /> <LI> Expand Certificates (Local Computer), then Personal and select Certificates. </LI> <BR /> <LI> In the middle pane, you should see the msstrgsvc certificate. Right-click it, select All Tasks, Export… That will bring up the Export Wizard. </LI> <BR /> <LI> Click Next to not export the private key (this might be grayed out anyway), then select a suitable format. Typically DER or Base-64 encoding is used, but some vendors may support .P7B files. For EMC, select Base-64. </LI> <BR /> <LI> Specify a file to store the certificate. Note that Base-64 encoded certificates are text files and can be opened with Notepad or any other editing program. </LI> <BR /> </UL> <BR /> Note: if you deployed VMM in an HA configuration, you will need to repeat these steps on each VMM Server instance. Your vendor’s SMI-S provider must support a certificate store that allows multiple certificates. <BR /> <H2> Storage Providers </H2> <BR /> Microsoft is actively involved in SNIA plugfests and works directly with storage vendors to ensure interoperability. Some providers may require settings to ensure the proper security protocols are enabled and used, and many require updates. <BR /> <H3> OpenSSL </H3> <BR /> Many SMI-S providers and client applications rely on the open source project <A href="#" target="_blank"> OpenSSL </A> . <BR /> <BR /> Storage vendors who use OpenSSL must absolutely keep up with the latest version(s) of this library and it is up to them to provide you with updates. We have seen a lot of old providers that rely on the long obsolete OpenSSL 0.9.8 releases or unpatched later versions. Microsoft will not provide any support if your provider is out-of-date, so if you have been lazy and not keeping up-to-date, time to get with the program.
At the time of this writing there are three current branches of OpenSSL, each with patches to mend security flaws that crop up frequently. Consult the link above. How a provider is updated is a vendor-specific activity. (Some providers – such as EMC’s – do not use OpenSSL; check with the vendor anyway.) <BR /> <H3> Importing the Storage Service certificate </H3> <BR /> This step will vary greatly among providers. You will need to consult the vendor documentation for how to import the certificate into their appropriate Certificate Store. If they do not provide a mechanism to import certificates, you will not be able to use fully secure indications or mutual authentication with certificate validation. <BR /> <H2> Summary </H2> <BR /> To ensure you are using TLS 1.2 (and enabling indications), you must do the following: <BR /> <UL> <BR /> <LI> Check with your storage vendor for the latest provider updates and apply them as directed </LI> <BR /> <LI> Update to .NET 4.6.2 on your VMM Servers <EM> or </EM> enable .NET strong cryptography if you must use .NET 4.5.x for any reason </LI> <BR /> <LI> Install the Storage Service (installing VMM will do this for you) </LI> <BR /> <LI> If you are using Windows Server 2012 R2, refer back to this <A href="#" target="_blank"> previous blog post </A> to properly configure the Storage Service (skip this for Windows Server 2016) </LI> <BR /> <LI> Export the storage service certificate </LI> <BR /> <LI> Import the certificate into your provider’s certificate store (see vendor instructions) </LI> <BR /> <LI> <EM> Then </EM> you can register one or more SMI-S providers, either through the Windows <A href="#" target="_blank"> Register-SmisProvider </A> cmdlet or using VMM </LI> <BR /> </UL> <BR /> <BR /> <BR /> </BODY></HTML> Wed, 10 Apr 2019 11:23:39 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/tls-for-windows-standards-based-storage-management-smi-s-and/ba-p/425878 FileCAB-Team 2019-04-10T11:23:39Z Squeezing 
hyper-convergence into the overhead bin, for barely $1,000/server: the story of Project Kepler-47 https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/squeezing-hyper-convergence-into-the-overhead-bin-for-barely-1/ba-p/425877 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Oct 14, 2016 </STRONG> <BR /> [caption id="attachment_7045" align="aligncenter" width="879"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107477iAA470516617DF688" /> This tiny two-server cluster packs powerful compute and spacious storage into one cubic foot.[/caption] <BR /> <BR /> <STRONG> The Challenge </STRONG> <BR /> <BR /> In the Windows Server team, we tend to focus on going <EM> big. </EM> Our enterprise customers and service providers are increasingly relying on Windows as the foundation of their software-defined datacenters, and needless to say, our hyperscale public cloud Azure does too. Recent <EM> big </EM> announcements like support for <A href="#" target="_blank"> 24 TB of memory per server </A> with Hyper-V, or <A href="#" target="_blank"> 6+ million IOPS per cluster </A> with Storage Spaces Direct, or delivering <A href="#" target="_blank"> 50 Gb/s of throughput per virtual machine </A> with Software-Defined Networking are the proof. <BR /> <BR /> But what can these same features in Windows Server do for smaller deployments? Those known in the IT industry as Remote-Office / Branch-Office (“ROBO”) – think retail stores, bank branches, private practices, remote industrial or constructions sites, and more. After all, their basic requirement isn’t so different – they need high availability for mission-critical apps, with rock-solid storage for those apps. And generally, they need it to be <EM> local, </EM> so they can operate – process transactions, or look up a patient’s records – even when their Internet connection is flaky or non-existent. <BR /> <BR /> For these deployments, cost is paramount. 
Major retail chains operate thousands, or tens of thousands, of locations. This multiplier makes IT budgets <EM> extremely </EM> sensitive to the per-unit cost of each system. The simplicity and savings of hyper-convergence – using the same servers to provide compute <EM> and storage </EM> – present an attractive solution. <BR /> <BR /> With this in mind, under the auspices of <EM> Project Kepler-47 </EM> , we set about going <EM> small </EM> … <BR /> <BR /> <BR /> <STRONG> Meet Kepler-47 </STRONG> <BR /> <BR /> The resulting prototype – and it’s just that, a <EM> prototype </EM> – was revealed at Microsoft Ignite 2016. <BR /> <BR /> [caption id="attachment_7055" align="aligncenter" width="879"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107478iD04351EF6C0620CE" /> Kepler-47 on expo floor at Microsoft Ignite 2016 in Atlanta.[/caption] <BR /> <BR /> In our configuration, this tiny two-server cluster provides over 20 TB of available storage capacity, and over 50 GB of available memory for a handful of mid-sized virtual machines. The storage is flash-accelerated, the chips are Intel Xeon, and the memory is error-correcting DDR4 – no compromises. The storage is mirrored to tolerate hardware failures – drive or server – with continuous availability. And if one server goes down or needs maintenance, virtual machines live migrate to the other server with no appreciable downtime. <BR /> <BR /> (Did we mention it also has not one, but <EM> two </EM> 3.5mm headphone jacks? <A href="#" target="_blank"> Hah </A> !) <BR /> <BR /> [caption id="attachment_7005" align="aligncenter" width="879"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107479iC0B9BA515EA30C44" /> Kepler-47 is 45% smaller than standard 2U rack servers.[/caption] <BR /> <BR /> In terms of size, Kepler-47 is barely one cubic foot – 45% smaller than standard 2U rack servers.
For perspective, this means both servers fit readily in one carry-on bag in the overhead bin! <BR /> <BR /> We bought (almost) every part online at retail prices. The total cost for each server was just $1,101. This excludes the drives, which we salvaged from around the office, and which could vary wildly in price depending on your needs. <BR /> <BR /> [caption id="attachment_7015" align="aligncenter" width="840"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107480iE3E48189BFC01073" /> Each Kepler-47 server cost just $1,101 retail, excluding drives.[/caption] <BR /> <BR /> <BR /> <BR /> <STRONG> Technology </STRONG> <BR /> <BR /> Kepler-47 is comprised of two servers, each running <A href="#" target="_blank"> Windows Server 2016 Datacenter </A> . The servers form one hyper-converged <A href="#" target="_blank"> Failover Cluster </A> , with the new <A href="#" target="_blank"> Cloud Witness </A> as the low-cost, low-footprint quorum technology. The cluster provides high availability to <A href="#" target="_blank"> Hyper-V </A> virtual machines (which may also run Windows, at no additional licensing cost), and <A href="#" target="_blank"> Storage Spaces Direct </A> provides fast and fault tolerant storage using just the local drives. <BR /> <BR /> Additional fault tolerance can be achieved using new features such as <A href="#" target="_blank"> Storage Replica </A> with Azure Site Recovery. <BR /> <BR /> Notably, Kepler-47 does not use traditional Ethernet networking between the servers, eliminating the need for costly high-speed network adapters and switches. Instead, it uses Intel Thunderbolt™ 3 over a USB Type-C connector, which provides up to 20 Gb/s (or up to 40 Gb/s when utilizing display and data together!) – plenty for replicating storage and live migrating virtual machines. 
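<BR /> <BR /> The software side of a two-node hyper-converged cluster like this is simple enough to sketch in a few cmdlets. The node names, cluster name, and Azure storage account below are placeholders (and Kepler-47 itself also needed the prototype Thunderbolt™ networking driver, so treat this as the general pattern, not a Kepler-47 recipe):

```powershell
# Validate the hardware and create the two-node cluster (node names are placeholders)
Test-Cluster -Node Kepler1, Kepler2
New-Cluster -Name Kepler47 -Node Kepler1, Kepler2 -NoStorage

# Configure the low-footprint Cloud Witness quorum (storage account and key are placeholders)
Set-ClusterQuorum -CloudWitness -AccountName "kepler47witness" -AccessKey "<storage-account-key>"

# Pool the local drives of both nodes into Storage Spaces Direct
Enable-ClusterStorageSpacesDirect
```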
<BR /> <BR /> To pull this off, we partnered with our friends at Intel, who furnished us with pre-release PCIe add-in-cards for Thunderbolt™ 3 and a proof-of-concept driver. <BR /> <BR /> [caption id="attachment_7025" align="aligncenter" width="879"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107481i0DF4410815304857" /> Kepler-47 does not use traditional Ethernet between the servers; instead, it uses Intel Thunderbolt™ 3.[/caption] <BR /> <BR /> To our delight, it worked like a charm – here’s the <EM> Networks </EM> view in Failover Cluster Manager. Thanks, Intel! <BR /> <BR /> [caption id="attachment_7036" align="aligncenter" width="879"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107482i633C816ED9753A37" /> The Networks view in Failover Cluster Manager, showing Thunderbolt™ Networking.[/caption] <BR /> <BR /> While Thunderbolt™ 3 is already in widespread use in laptops and other devices, this kind of server application is new, and it’s one of the main reasons Kepler-47 is <EM> strictly </EM> a prototype. It also boots from USB 3 DOM, which isn’t yet supported, and has no host-bus adapter (HBA) nor SAS expander, both of which are currently required for Storage Spaces Direct to leverage SCSI Enclosure Services (SES) for slot identification. However, it otherwise passes all our validation and testing and, as far as we can tell, works flawlessly. <BR /> <BR /> (In case you missed it, support for Storage Spaces Direct clusters with just two servers was announced at Ignite!) <BR /> <BR /> <BR /> <BR /> <STRONG> Parts List </STRONG> <BR /> <BR /> Ok, now for the juicy details. Since Ignite, we have been asked repeatedly what parts we used. 
Here you go: <BR /> <BR /> [caption id="attachment_7035" align="aligncenter" width="879"] <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107483i59EC7AB7B64174EC" /> The key parts of Kepler-47.[/caption] <BR /> <TABLE> <TBODY><TR> <TD> <EM> Function </EM> </TD> <TD> <EM> Product </EM> </TD> <TD> <EM> View Online </EM> </TD> <TD> <EM> Cost </EM> </TD> </TR> <TR> <TD> <STRONG> Motherboard </STRONG> </TD> <TD> ASRock C236 WSI </TD> <TD> <A href="#" target="_blank"> Link </A> </TD> <TD> $199.99 </TD> </TR> <TR> <TD> <STRONG> CPU </STRONG> </TD> <TD> Intel Xeon E3-1235L v5 25w 4C4T 2.0Ghz </TD> <TD> <A href="#" target="_blank"> Link </A> </TD> <TD> $283.00 </TD> </TR> <TR> <TD> <STRONG> Memory </STRONG> </TD> <TD> 32 GB (2 x 16 GB) Black Diamond ECC DDR4-2133 </TD> <TD> <A href="#" target="_blank"> Link </A> </TD> <TD> $208.99 </TD> </TR> <TR> <TD> <STRONG> Boot Device </STRONG> </TD> <TD> Innodisk 32 GB USB 3 DOM </TD> <TD> <A href="#" target="_blank"> Link </A> </TD> <TD> $29.33 </TD> </TR> <TR> <TD> <STRONG> Storage (Cache) </STRONG> </TD> <TD> 2 x 200 GB Intel S3700 2.5” SATA SSD </TD> <TD> <A href="#" target="_blank"> Link </A> </TD> <TD> - </TD> </TR> <TR> <TD> <STRONG> Storage (Capacity) </STRONG> </TD> <TD> 6 x 4 TB Toshiba MG03ACA400 3.5” SATA HDD </TD> <TD> <A href="#" target="_blank"> Link </A> </TD> <TD> - </TD> </TR> <TR> <TD> <STRONG> Networking (Adapter) </STRONG> </TD> <TD> Intel Thunderbolt™ 3 JHL6540 PCIe Gen 3 x4 Controller Chip </TD> <TD> <A href="#" target="_blank"> Link </A> </TD> <TD> - </TD> </TR> <TR> <TD> <STRONG> Networking (Cable) </STRONG> </TD> <TD> Cable Matters 0.5m 20 Gb/s USB Type-C Thunderbolt™ 3 </TD> <TD> <A href="#" target="_blank"> Link </A> </TD> <TD> $17.99* </TD> </TR> <TR> <TD> <STRONG> SATA Cables </STRONG> </TD> <TD> 8 x SuperMicro CBL-0481L </TD> <TD> <A href="#" target="_blank"> Link </A> </TD> <TD> $13.20 </TD> </TR> <TR> <TD> <STRONG> Chassis </STRONG> </TD> <TD> U-NAS NSC-800 </TD> <TD> <A 
href="#" target="_blank"> Link </A> </TD> <TD> $199.99 </TD> </TR> <TR> <TD> <STRONG> Power Supply </STRONG> </TD> <TD> ASPower 400W Super Quiet 1U </TD> <TD> <A href="#" target="_blank"> Link </A> </TD> <TD> $119.99 </TD> </TR> <TR> <TD> <STRONG> Heatsink </STRONG> </TD> <TD> Dynatron K2 75mm 2 Ball CPU Fan </TD> <TD> <A href="#" target="_blank"> Link </A> </TD> <TD> $34.99 </TD> </TR> <TR> <TD> <STRONG> Thermal Pads </STRONG> </TD> <TD> StarTech Heatsink Thermal Transfer Pads (Set of 5) </TD> <TD> <A href="#" target="_blank"> Link </A> </TD> <TD> $6.28* </TD> </TR> </TBODY></TABLE> <BR /> * Just one needed for both servers. <BR /> <BR /> <BR /> <BR /> <STRONG> Practical Notes </STRONG> <BR /> <BR /> The ASRock C236 WSI motherboard is the only one we could locate that is mini-ITX form factor, has eight SATA ports, and supports server-class processors and error-correcting memory with SATA hot-plug. The E3-1235L v5 is just 25 watts, which helps keep Kepler-47 very quiet. (Dan has been running it <EM> literally </EM> on his desk since last month, and he hasn’t complained yet.) <BR /> <BR /> Having spent all our SATA ports on the storage, we needed to boot from something else. We were delighted to spot the USB 3 header on the motherboard. <BR /> <BR /> The U-NAS NSC-800 chassis is not the cheapest option. You could go cheaper. However, it features an aluminum outer casing, steel frame, and rubberized drive trays – the quality appealed to us. <BR /> <BR /> We actually had to order two sets of SATA cables – the first were not malleable enough to weave their way around the tight corners from the board to the drive bays in our chassis. The second set we got are flat and 30 AWG, and they work great. <BR /> <BR /> Likewise, we had to confront physical limitations on the heatsink – the fan we use is barely 2.7 cm tall, to fit in the chassis. <BR /> <BR /> We salvaged the drives we used, for cache and capacity, from other systems in our test lab. 
In the case of the SSDs, they’re several years old and discontinued, so it’s not clear how to accurately price them. In the future, we imagine ROBO deployments of Storage Spaces Direct will vary tremendously in the drives they use – we chose 4 TB HDDs, but some folks may only need 1 TB, or may want 10 TB. This is why we aren’t focusing on the price of the drives themselves – it’s really up to you. <BR /> <BR /> Finally, the Thunderbolt™ 3 controller chip in PCIe add-in-card form factor was pre-release, for development purposes only. It was graciously provided to us by our friends at Intel. They have cited a price-tag of $8.55 for the chip, but not made us pay yet. :-) <BR /> <BR /> <BR /> <BR /> <STRONG> Takeaway </STRONG> <BR /> <BR /> With <EM> Project Kepler-47 </EM> , we used Storage Spaces Direct and Windows Server 2016 to build an unprecedentedly low-cost high availability solution to meet remote-office, branch-office needs. It delivers the simplicity and savings of hyper-convergence, with compute and storage in a single two-server cluster, with next to no networking gear, that is <EM> very </EM> budget friendly. <BR /> <BR /> Are you or is your organization interested in this type of solution? Let us know in the comments! 
<BR /> <BR /> <BR /> <BR /> // Cosmos Darwin ( <A href="#" target="_blank"> @CosmosDarwin </A> ), Dan Lovinger, and Claus Joergensen ( <A href="#" target="_blank"> @ClausJor </A> ) </BODY></HTML> Wed, 10 Apr 2019 11:23:32 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/squeezing-hyper-convergence-into-the-overhead-bin-for-barely-1/ba-p/425877 Cosmos Darwin 2019-04-10T11:23:32Z Fixed: Work Folders does not work on iOS 10 when using Digest authentication https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/fixed-work-folders-does-not-work-on-ios-10-when-using-digest/ba-p/425863 <P><STRONG> First published on TECHNET on Oct 10, 2016 </STRONG> <BR />Hi all, <BR /><BR />I’m Jeff Patterson, Program Manager for Work Folders. <BR /><BR />I wanted to let you know that Digest authentication does not work on iOS 10. Please review the issue details below if you’re currently using the Work Folders iOS client in your environment. </P> <P>&nbsp;</P> <P><STRONG>Symptom </STRONG> <BR />After upgrading to iOS 10, Work Folders fails with the following error after user credentials are provided: <BR /><BR />Check your user name and password </P> <P>&nbsp;</P> <P><STRONG>Cause </STRONG> <BR />There’s a bug in iOS 10 which causes Digest authentication to fail. </P> <P>&nbsp;</P> <P><STRONG>Status </STRONG> <BR />This issue is fixed in iOS 10.2 (released&nbsp;December 12th). <BR /><BR />Thanks, <BR /><BR />Jeff</P> Tue, 26 May 2020 02:41:31 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/fixed-work-folders-does-not-work-on-ios-10-when-using-digest/ba-p/425863 Jeff Patterson 2020-05-26T02:41:31Z All The Windows Server 2016 sessions at Ignite https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/all-the-windows-server-2016-sessions-at-ignite/ba-p/425862 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Sep 22, 2016 </STRONG> <BR /> Hi folks, Ned here again. 
If you were smart/cool/lucky enough to land some Microsoft Ignite tickets for next week, here's the nicely organized list of all the Windows Server 2016 sessions. Color-coding, filters, it's very sharp. <BR /> <H3> <STRONG> <A href="#" target="_blank"> aka.ms/ws2016ignite </A> </STRONG> </H3> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107476iB8AB34A0EF158F74" /> <BR /> <BR /> Naturally, the&nbsp;killer session you should register for is <A href="#" target="_blank"> Drill into Storage Replica in Windows Server 2016 </A> . I hear the presenter kicks ass and has swag for attendees. <BR /> <BR /> - Ned "not so humble brag" Pyle </BODY></HTML> Wed, 10 Apr 2019 11:22:16 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/all-the-windows-server-2016-sessions-at-ignite/ba-p/425862 Ned Pyle 2019-04-10T11:22:16Z The not future of SMB1 - another MS engineering quickie survey https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/the-not-future-of-smb1-another-ms-engineering-quickie-survey/ba-p/425859 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Sep 16, 2016 </STRONG> <BR /> Hi folks, <A href="#" target="_blank"> Ned </A> here again. <A href="#" target="_blank"> Speaking of SMB1 removal </A> , please take 30 seconds to complete this anonymous survey on SMB1 removal as a default option.&nbsp;Your honesty&nbsp;counts;&nbsp;I'd rather hear 'no' and some legit reasons than have sunshine blown up my kilt. 
<BR /> <P> <STRONG> Survey: <A href="#" target="_blank"> consideration of SMB1 being removed by default in OSes </A> </STRONG> </P> <BR /> Thankee, <BR /> <BR /> Ned "Scotsman" Pyle </BODY></HTML> Wed, 10 Apr 2019 11:22:04 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/the-not-future-of-smb1-another-ms-engineering-quickie-survey/ba-p/425859 Ned Pyle 2019-04-10T11:22:04Z Stop using SMB1 https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/stop-using-smb1/ba-p/425858 <P><STRONG> First published on TECHNET on Sep 16, 2016 </STRONG></P> <P><BR />Hi folks, <A href="#" target="_blank" rel="noopener"> Ned </A> here again and today’s topic is short and sweet:</P> <P>Stop using SMB1. <I> Stop using SMB1 </I> . <STRONG> <I> <STRONG> STOP USING SMB1! </STRONG> </I> </STRONG></P> <P><BR />In September of 2016, we released <A href="#" target="_blank" rel="noopener"> MS16-114 </A> , a security update that prevents denial of service and remote code execution. If you need this security patch, you already have a much bigger problem: you are still running SMB1. <BR /><BR />The original SMB1 protocol is nearly <A href="#" target="_blank" rel="noopener"> 30 years old </A> , and like much of the software made in the 80’s, it was designed for a world that no longer exists. A world without malicious actors, without vast sets of important data, without near-universal computer usage. Frankly, its naivete is staggering when viewed through modern eyes. I blame the West Coast hippy lifestyle :).
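<BR /><BR />Checking for (and shutting off) SMB1 takes only a few lines of PowerShell on modern Windows. A sketch for Windows 8.1/10 and Windows Server 2012 R2/2016, run elevated (the audit setting is only available on the newest builds):

```powershell
# Is the SMB1 server enabled on this machine?
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol

# Before disabling, audit which clients still connect with SMB1 (Windows 10 / Server 2016)
Set-SmbServerConfiguration -AuditSmb1Access $true

# Turn off the SMB1 server
Set-SmbServerConfiguration -EnableSMB1Protocol $false

# Or remove the SMB1 components entirely on client SKUs (a reboot may be required)
Disable-WindowsOptionalFeature -Online -FeatureName SMB1Protocol
```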
<BR /><BR />If you don't care about the why and just want to get to the how, I recommend you review: <BR /><BR /></P> <UL> <LI><A href="#" target="_blank" rel="noopener"> How to remove SMB1 </A></LI> <LI><A href="#" target="_blank" rel="noopener"> The SMB1 clearinghouse </A></LI> <LI><A href="#" target="_blank" rel="noopener"> SMB1 is being removed from Windows and Windows Server </A></LI> </UL> <P><BR />Otherwise, let me explain why this protocol needs to hit the landfill.</P> <P>&nbsp;</P> <H3>SMB1 isn’t safe</H3> <P>When you use SMB1, you lose key protections offered by later SMB protocol versions: <BR /><BR /></P> <UL> <LI><A href="#" target="_blank" rel="noopener"> Pre-authentication Integrity </A> (SMB 3.1.1+). Protects against security downgrade attacks.</LI> <LI><A style="background-color: #ffffff;" href="#" target="_blank" rel="noopener">Secure Dialect Negotiation </A> (SMB 3.0, 3.02). Protects against security downgrade attacks.</LI> <LI><A style="background-color: #ffffff;" href="#" target="_blank" rel="noopener">Encryption </A> (SMB 3.0+). Prevents inspection of data on the wire and MiTM attacks. In SMB 3.1.1 encryption performance is even better than signing!</LI> <LI><A href="#" target="_blank" rel="noopener">Insecure guest auth blocking (SMB 3.0+ on Windows 10+) </A> . Protects against MiTM attacks.</LI> <LI><A href="#" target="_blank" rel="noopener">Better message signing </A> (SMB 2.02+). HMAC SHA-256 replaces MD5 as the hashing algorithm in SMB 2.02 and SMB 2.1, and AES-CMAC replaces it in SMB 3.0+. Signing performance increases in SMB2 and 3.</LI> </UL> <P>&nbsp;</P> <P>The nasty bit is that no matter how you secure all these things, if your clients use SMB1, then a man-in-the-middle can tell your client <I> to ignore all the above </I> . All they need to do is block SMB2+ on themselves and answer to your server’s name or IP.
Your client will happily derp away on SMB1 and share all its darkest secrets unless you required encryption on that share to prevent SMB1 in the first place. This is not theoretical – we’ve seen it. We believe this so strongly that when we introduced Scale-Out File Server, we explicitly prevented SMB1 access to those shares!</P> <BLOCKQUOTE><BR /> <P>As an owner of SMB at MS, I cannot emphasize enough how much I want everyone to stop using SMB1 <A href="#" target="_blank" rel="noopener"> https://t.co/kHPqvyxTKC </A></P> <BR />— Ned Pyle (@NerdPyle) <A href="#" target="_blank" rel="noopener"> April 12, 2016 </A></BLOCKQUOTE> <P><BR />US-CERT agrees with me, BTW: <A href="#" target="_blank" rel="noopener"> https://www.us-cert.gov/ncas/current-activity/2017/01/16/SMB-Security-Best-Practices </A></P> <P>&nbsp;</P> <H3>SMB1 isn’t modern or efficient</H3> <P>When you use SMB1, you lose key performance and productivity optimizations for end users. <BR /><BR /></P> <UL> <LI>Larger reads and writes (2.02+) - more efficient use of faster networks or higher-latency WANs.
Large MTU support.</LI> <LI>Peer caching of folder and file properties (2.02+) - clients keep local copies of folders and files via BranchCache</LI> <LI>Durable handles (2.02, 2.1) - allow a connection to transparently reconnect to the server if there is a temporary disconnection</LI> <LI>Client oplock leasing model (2.02+) - limits the data transferred between the client and server, improving performance on high-latency networks and increasing SMB server scalability</LI> <LI>Multichannel &amp; SMB Direct (3.0+) - aggregation of network bandwidth and fault tolerance if multiple paths are available between client and server, plus usage of modern ultra-high throughput RDMA infrastructure</LI> <LI>Directory Leasing (3.0+) - Improves application response times in branch offices through caching</LI> </UL> <P>&nbsp;</P> <BLOCKQUOTE> <P>Running SMB1 is like taking your grandmother to prom: she means well, but she can't really move anymore. Also, it's creepy and gross</P> <BR />— Ned Pyle (@NerdPyle) <A href="#" target="_blank" rel="noopener"> September 16, 2016 </A></BLOCKQUOTE> <H3>&nbsp;</H3> <H3>SMB1 isn’t usually necessary</H3> <P>This is the real killer: there are far fewer cases left in modern enterprises where SMB1 is the <EM> only </EM> option. Some legit reasons: <BR /><BR /></P> <OL> <OL> <LI>You’re still running XP or WS2003 under a custom support agreement.</LI> <LI>You have old management software that demands admins browse via the so-called ‘network' aka 'network neighborhood’ master browser list.</LI> <LI>You run old multi-function printers with old firmware in order to “scan to share”.</LI> </OL> </OL> <P>&nbsp;</P> <P>These will only affect the average business or user if you let them. Vendors are moving to upgrade their SMB2 support - see here: <A href="#" target="_blank" rel="noopener"> https://aka.ms/stillneedssmb1 </A> . For the ones who aren't, their competitors are. You have leverage here. You have the wallet.
<BR /><BR />We work carefully with partners in the storage, printer, and application spaces all over the world to ensure they provide at least SMB2 support and have done so with annual conferences and plugfests for six years. Samba supports SMB 2 and 3. So do OS X and macOS. So do EMC, NetApp, and their competitors. So do our licensed SMB providers like Visuality and Tuxera, who also help printer manufacturers join the modern world. <BR /><BR />A proper IT pro is always from Missouri though. We provide SMB1 usage auditing in Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, and Windows Server 2008 R2 (the latter two received it via backported functionality in monthly updates several years ago) plus their client equivalents, just to be sure. That way you can configure your Windows Servers to see if disabling SMB1 would break someone:</P> <P><BR /><STRONG>Set-SmbServerConfiguration -AuditSmb1Access $true</STRONG></P> <P>&nbsp;</P> <P>On Windows Server 2008 R2 and Windows 7 you must set this DWORD value in the registry directly, as there is no SMB PowerShell module:</P> <P>&nbsp;</P> <P><STRONG>Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" -Name AuditSmb1Access -Type DWORD -Value 1 -Force</STRONG></P> <P>&nbsp;</P> <P>Then just examine the SMBServer\Audit event log on the systems. If you have servers older than WS2012 R2, now is a good time to talk upgrade. Ok, that’s a bit extortionist – now is the time to talk to your blue teams, network teams, and other security folks about whether and where they are seeing SMB1 usage on the network. If they have no idea, they need to get one. If you still don’t know because this is a smaller shop, run your own network captures on a sample of your servers and clients and see if SMB1 appears.</P> <P>&nbsp;</P> <BLOCKQUOTE>Day 700 without SMB1 installed: nothing happened. Just like the last 699 days.
Because anyone requiring SMB1 is not allowed on my $%^&amp;%# network <BR /><BR />— Ned Pyle (@NerdPyle) <A href="#" target="_blank" rel="noopener"> September 13, 2016 </A></BLOCKQUOTE> <P><BR /><EM><STRONG>Update April 7, 2017: </STRONG> </EM> Great article on using DSC to track down machines with SMB1 installed or enabled: <A href="#" target="_blank" rel="noopener"> https://blogs.technet.microsoft.com/ralphkyttle/2017/04/07/discover-smb1-in-your-environment-with-dscea/ </A> <BR /><BR /><EM> <STRONG> Update June 19, 2017 </STRONG> </EM> - Group Policy to disable SMB1: <A href="#" target="_blank" rel="noopener"> https://blogs.technet.microsoft.com/secguide/2017/06/15/disabling-smbv1-through-group-policy/ </A> <BR /><BR /><EM> <STRONG> Update June 30, 2017 </STRONG> </EM> - You have probably seen me announce this on <A href="#" target="_blank" rel="noopener"> twitter </A> and in other public venues: <STRONG> Windows 10 RS3 (Fall Creators Update) and Windows Server 2016 RS3 have SMB1 uninstalled by default under most circumstances: <A href="#" target="_blank" rel="noopener"> https://aka.ms/smb1rs3 </A> . The full removal has begun. Make sure you check <A href="#" target="_blank" rel="noopener"> https://aka.ms/stillneedssmb1 </A> for products that may require updates or replacement to be used without the need for SMB1. </STRONG> <BR /><BR /><STRONG> Update July 7, 2017: </STRONG> if your vendor requires <EM> disabling SMB2 </EM> in order to force SMB1, they will also often require disabling oplocks. Disabling Oplocks is not recommended by Microsoft, but required by some older software, often due to using legacy database technology. Windows 10 RS3 and Windows Server 2016 RS3 allow a special oplock override workaround now for these scenarios - see <A href="#" target="_blank" rel="noopener"> https://twitter.com/NerdPyle/status/876880390866190336 </A> . 
This is only a workaround - just like SMB1 oplock disable is only a workaround - and your vendor should update to not require it. Many have by now (I've spoken to some, at least) and their customers might still just be running an out of date version - call your suppliers.</P> <P>&nbsp;</P> <H3>SMB1 removal isn’t hard</H3> <P>Starting in Windows 8.1 and Windows Server 2012 R2, we made removal of the SMB1 feature possible and trivially easy. <BR /><BR /><STRONG> <STRONG> On Server, the Server Manager approach: </STRONG> </STRONG> <BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 726px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107469i74838DCBECC9DBDE/image-size/large?v=v2&amp;px=999" role="button" /></span> <BR /><BR /><STRONG> <STRONG> On Server, the PowerShell approach (Remove-WindowsFeature FS-SMB1): </STRONG> </STRONG> <BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 405px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107470iEC8E5676B57BDEAD/image-size/large?v=v2&amp;px=999" role="button" /></span> <BR /><BR /><STRONG> <STRONG> On Client, the add remove programs approach (appwiz.cpl): </STRONG> </STRONG> <BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 387px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107471i09E7F02028C1BD68/image-size/large?v=v2&amp;px=999" role="button" /></span> <BR /><BR /><STRONG> <STRONG> On Client, the PowerShell approach (Disable-WindowsOptionalFeature -Online -FeatureName smb1protocol) </STRONG> </STRONG> <BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 586px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107472i467E527694787B5E/image-size/large?v=v2&amp;px=999" role="button" /></span> <BR /><BR /><STRONG> <STRONG> On legacy 
operating systems: </STRONG> </STRONG> <BR /><BR />When using operating systems older than Windows 8.1 and Windows Server 2012 R2, you can’t remove SMB1 – but you can disable it: <A href="#" target="_blank" rel="noopener"> KB 2696547- How to enable and disable SMBv1, SMBv2, and SMBv3 in Windows Vista, Windows Server 2008, Windows 7, Windows Server 2008 R2, Windows 8, and Windows Server 2012 </A> <BR /><BR />A key point: when you begin the removal project, start at smaller scale and work your way up. <I> No one says you must finish this in a day. </I></P> <P>&nbsp;</P> <H3>Explorer Network Browsing</H3> <DIV>The Computer Browser service relies on SMB1 in order to populate the Windows Explorer Network (aka "Network Neighborhood"). This legacy protocol is long deprecated, doesn't route, and has limited security. Because it cannot function without SMB1, it is removed at the same time.</DIV> <DIV><BR />However, some customers still use the Explorer Network in home and small business workgroup environments to locate Windows computers. To continue using Explorer Network, you can perform the following steps on your Windows computers that no longer use SMB1:</DIV> <DIV><BR />1. Start the "Function Discovery Provider Host" and "Function Discovery Resource Publication" services and set them to delayed start. <BR /><BR /></DIV> <DIV><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 957px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107473iBD92DA9664C99FD0/image-size/large?v=v2&amp;px=999" role="button" /></span></DIV> <P>&nbsp;</P> <DIV><BR />2. 
When the user opens Network, they will be prompted to enable network discovery.&nbsp; Do so.</DIV> <DIV>&nbsp;</DIV> <DIV><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 880px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107474i8A6B62F8BAB0A1DE/image-size/large?v=v2&amp;px=999" role="button" /></span></DIV> <P>&nbsp;</P> <DIV><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 738px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107475iAF4527E114826674/image-size/large?v=v2&amp;px=999" role="button" /></span></DIV> <P>&nbsp;</P> <DIV>3. Now all Windows devices within that subnet that have these settings in place will appear in Network for browsing. This uses the WS-DISCOVERY protocol. Check with your other vendors and manufacturers if their devices still do not appear in this browse list after Windows devices appear; it is likely they have this protocol disabled or only support SMB1.</DIV> <P>&nbsp;</P> <DIV><EM><EM>Note: we highly recommend you map drives and printers for your users instead of enabling this feature, which still requires searching and browsing for their devices. Mapped resources are easier for them to locate, require less training, and are safer to use, especially when provided automatically through group policy. </EM> </EM></DIV> <P>&nbsp;</P> <H3>SMB1 isn’t good</H3> <P>Stop using SMB1. For your children. For your children’s children. Please. <A href="#" target="_blank" rel="noopener"> We’re begging you. </A> And if that's not enough: SMB1 is being removed (fully or partially, depending on SKU) by default in the RS3 release of Windows and Windows Server. 
This is here folks: <A href="#" target="_blank" rel="noopener"> https://aka.ms/smb1rs3 </A> <BR /><BR />- Ned “and the rest of the SMB team at Microsoft” Pyle</P> Wed, 02 Sep 2020 17:44:04 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/stop-using-smb1/ba-p/425858 Ned Pyle 2020-09-02T17:44:04Z Survey: Why R2? https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/survey-why-r2/ba-p/425832 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Sep 08, 2016 </STRONG> <BR /> Hi folks, <A href="#" target="_blank"> Ned </A> here again. I have a very quick survey for you if you're&nbsp;a Windows Server customer. It is anonymous, only has one mandatory question, and one secondary optional question - shouldn't take you more than 30 seconds and will&nbsp;help us understand our&nbsp;customer base better. <BR /> <BR /> <STRONG> <A href="#" target="_blank"> Why do you deploy R2 versions so much more than non-R2? </A> </STRONG> <BR /> <BR /> Thanks in advance. <BR /> <UL> <BR /> <LI> Ned "actual monkey" Pyle </LI> <BR /> </UL> </BODY></HTML> Wed, 10 Apr 2019 11:20:53 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/survey-why-r2/ba-p/425832 Ned Pyle 2019-04-10T11:20:53Z Volume resiliency and efficiency in Storage Spaces Direct https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/volume-resiliency-and-efficiency-in-storage-spaces-direct/ba-p/425831 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Sep 06, 2016 </STRONG> <BR /> Hello, Claus here again. <BR /> <BR /> One of the most important aspects when creating a volume is to choose the resiliency settings. The purpose of resiliency is to keep data available in case of failures, such as a failed drive or a failed server. It also enables data availability when performing maintenance, such as server hardware replacement or operating system updates. Storage Spaces Direct supports two resiliency types: mirror and parity.
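The efficiency trade-off between these two resiliency types comes up repeatedly below. Here is a small illustrative Python sketch of the arithmetic; the helper names are mine for explanation only, not a Storage Spaces Direct API:

```python
# Illustrative storage-efficiency arithmetic for the resiliency types
# discussed in this post. These helpers are for explanation only; they
# are not part of any Storage Spaces Direct tooling.

def mirror_efficiency(copies: int) -> float:
    """An n-copy mirror keeps `copies` full copies: usable/raw = 1/copies."""
    return 1 / copies

def parity_efficiency(data_symbols: int, parity_symbols: int) -> float:
    """Erasure coding stores data plus parity symbols: usable/raw = data/total."""
    return data_symbols / (data_symbols + parity_symbols)

# 3-copy mirror (the default): 33.3% efficient.
print(f"3-copy mirror : {mirror_efficiency(3):.1%}")
# Parity on a 4-server cluster (RS 2+2): 50% efficient.
print(f"RS 2+2 parity : {parity_efficiency(2, 2):.1%}")
# Parity on a 16-server all-flash cluster, LRC (12,2,1) = 12 data + 3 parity.
print(f"LRC (12,2,1)  : {parity_efficiency(12, 3):.1%}")
```

These ratios match the figures quoted in the post: 33.3% for a 3-copy mirror, and 50% up to 80% for parity depending on cluster size.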
<BR /> <H2> Mirror resiliency </H2> <BR /> Mirror resiliency is relatively simple. Storage Spaces Direct generates multiple block copies of the same data. By default, it generates 3 copies. Each copy is stored on a drive in a different server, providing resiliency to both drive and server failures. The diagram shows 3 data copies (A, A’ and A’’) laid out across a cluster with 4 servers. <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107466iBA2CB81ED372030C" /> <BR /> <P> <EM> Figure </EM> <EM> 1 </EM> <EM> 3-copy mirror across 4 servers </EM> </P> <BR /> Suppose the drive in server 2 that holds A’ fails. A’ is regenerated by reading A or A’’ and writing a new copy of A’ to another drive in server 2 or to any drive in server 3. A’ cannot be written to drives in server 1 or server 4, since it is not allowed to have two copies of the same data in the same server. <BR /> <BR /> If the admin puts a server in maintenance mode, the corresponding drives also enter maintenance mode. While maintenance mode suspends IO to the drives, the administrator can still perform drive maintenance tasks, such as updating drive firmware. Data copies stored on the server in maintenance mode will not be updated since IOs are suspended. Once the administrator takes the server out of maintenance mode, the data copies on the server will be updated using data copies from other servers. Storage Spaces Direct tracks which data copies changed while the server was in maintenance mode, to minimize data resynchronization. <BR /> <BR /> Mirror resiliency is relatively simple, which means it has great performance and does not have a lot of CPU overhead. The downside to mirror resiliency is that it is relatively inefficient, with 33.3% storage efficiency when storing 3 full copies of all data. <BR /> <H2> Parity resiliency </H2> <BR /> Parity resiliency is much more storage efficient compared to mirror resiliency.
Parity resiliency uses parity symbols across a larger set of data symbols to drive up storage efficiency. Each symbol is stored on a drive in a different server, providing resiliency to both drive and server failures. Storage Spaces Direct requires at least 4 servers to enable parity resiliency. The diagram shows two data symbols (X <SUB> 1 </SUB> and X <SUB> 2 </SUB> ) and two parity symbols (P <SUB> 1 </SUB> and P <SUB> 2 </SUB> ) laid out across a cluster with 4 servers. <BR /> <P> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107467iCD9760BCC0DB1F43" /> </P> <BR /> <P> <EM> Figure </EM> <EM> 2 </EM> <EM> Parity resiliency across 4 servers </EM> </P> <BR /> Suppose the drive in server 2 that holds X <SUB> 2 </SUB> fails. X <SUB> 2 </SUB> is regenerated by reading the other symbols (X <SUB> 1 </SUB> , P <SUB> 1 </SUB> and P <SUB> 2 </SUB> ), recalculating the value of X <SUB> 2 </SUB> , and writing X <SUB> 2 </SUB> to another drive in server 2. X <SUB> 2 </SUB> cannot be written to drives in other servers, since it is not allowed to have two symbols from the same symbol set in the same server. <BR /> <BR /> Parity resiliency works similarly to mirror resiliency when a server is in maintenance mode. <BR /> <BR /> Parity resiliency has better storage efficiency than mirror resiliency. With 4 servers the storage efficiency is 50%, and it can be as high as 80% with 16 servers. The downside of parity resiliency is twofold: <BR /> <UL> <BR /> <LI> Performing data reconstruction involves all of the surviving symbols. They are all read (extra storage IO), the lost symbols are recalculated (expensive CPU cycles), and the results are written back to disk. </LI> <BR /> <LI> Overwriting existing data involves all symbols. All data symbols are read, data is updated, parity is recalculated, and all symbols are written. This is also known as Read-Modify-Write and incurs significant storage IO and CPU cycles.
</LI> <BR /> </UL> <BR /> <H2> Local Reconstruction Codes </H2> <BR /> Storage Spaces Direct uses <A href="#" target="_blank"> Reed-Solomon error correction </A> (aka erasure coding) for parity calculation in smaller deployments for the best possible efficiency and resiliency to two simultaneous failures. A cluster with four servers has 50% storage efficiency and resiliency to two failures. With larger clusters storage efficiency increases, as there can be more data symbols without increasing the number of parity symbols. On the flip side, data reconstruction becomes increasingly inefficient as the total number of symbols (data symbols + parity symbols) increases, as all surviving symbols have to be read in order to calculate and regenerate the missing symbol(s). To address this, Microsoft Research invented <A href="#" target="_blank"> Local Reconstruction Codes </A> , which are used in Microsoft Azure and Storage Spaces Direct. <BR /> <BR /> Local Reconstruction Codes (LRC) optimize data reconstruction for the most common failure scenario, which is a single drive failure. They do so by grouping the data symbols and calculating a single (local) parity symbol across each group using simple XOR. They then calculate a global parity across all the symbols. The diagram below shows LRC in a cluster with 12 servers. <BR /> <P> </P> <BR /> <P> <EM> Figure </EM> <EM> 3 </EM> <EM> LRC in a cluster with 12 servers </EM> </P> <BR /> In the above example we have 11 symbols: 8 data symbols represented by X <SUB> 1 </SUB> , X <SUB> 2 </SUB> , X <SUB> 3 </SUB> , X <SUB> 4 </SUB> , Y <SUB> 1 </SUB> , Y <SUB> 2 </SUB> , Y <SUB> 3 </SUB> and Y <SUB> 4 </SUB> ; 2 local parity symbols represented by P <SUB> X </SUB> and P <SUB> Y </SUB> ; and finally one global parity symbol represented by Q. This particular layout is also sometimes described as (8,2,1), representing 8 data symbols, 2 groups and 1 global parity.
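To make the (8,2,1) grouping concrete before walking through the failure scenarios, here is a small illustrative Python sketch (not Storage Spaces Direct code): each local parity symbol is the XOR of the data symbols in its group, so a single lost symbol can be rebuilt from the four survivors in that group alone. The global parity Q, which uses a Reed-Solomon-style computation in the real implementation, is deliberately not modeled here.

```python
from functools import reduce

def xor(symbols):
    """XOR a list of equal-length byte strings together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*symbols))

# Hypothetical (8,2,1) layout: two groups of four data symbols each.
x = [bytes([i] * 4) for i in range(1, 5)]   # X1..X4
y = [bytes([i] * 4) for i in range(5, 9)]   # Y1..Y4

p_x = xor(x)  # local parity for the X group (simple XOR)
p_y = xor(y)  # local parity for the Y group

# Single failure: lose X2. Reconstruct it from the three surviving X
# symbols plus the local parity -- four reads, and no Y symbol or global
# parity is touched.
lost = x[1]
recovered = xor([x[0], x[2], x[3], p_x])
assert recovered == lost
```

Because X1 ^ X3 ^ X4 ^ P<sub>X</sub> = X1 ^ X3 ^ X4 ^ (X1 ^ X2 ^ X3 ^ X4) = X2, the group-local XOR is all that is needed for the common single-failure case.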
<BR /> <BR /> Inside each group, the parity symbol is calculated as a simple XOR across the data symbols in the group. XOR is not a very computationally intensive operation and thus requires few CPU cycles. Q is calculated using the data symbols and local parity symbols across all the groups. In this particular configuration, the storage efficiency is 8/11 or ~72%, as there are 8 data symbols out of 11 total symbols. <BR /> <BR /> As mentioned above, in storage systems a single failure is more common than multiple failures, and LRC is more efficient and incurs less storage IO when reconstructing data in the single-device-failure scenario and even in some multi-failure scenarios. <BR /> <BR /> Using the example from figure 3 above: <BR /> <BR /> What happens if there is one failure, e.g. the disk that stores X <SUB> 2 </SUB> fails? In that case X <SUB> 2 </SUB> is reconstructed by reading X <SUB> 1 </SUB> , X <SUB> 3 </SUB> , X <SUB> 4 </SUB> and P <SUB> X </SUB> (four reads), performing a simple XOR operation, and writing X <SUB> 2 </SUB> (one write) to a different disk in server 2. Notice that none of the Y symbols or the global parity Q are read or involved in the reconstruction. <BR /> <BR /> What happens if there are two simultaneous failures, e.g. the disk that stores X <SUB> 1 </SUB> fails and the disk that stores Y <SUB> 2 </SUB> also fails? In this case, because the failures occurred in two different groups, X <SUB> 1 </SUB> is reconstructed by reading X <SUB> 2 </SUB> , X <SUB> 3 </SUB> , X <SUB> 4 </SUB> and P <SUB> X </SUB> (four reads), performing an XOR operation, and writing X <SUB> 1 </SUB> (one write) to a different disk in server 1. Similarly, Y <SUB> 2 </SUB> is reconstructed by reading Y <SUB> 1 </SUB> , Y <SUB> 3 </SUB> , Y <SUB> 4 </SUB> and P <SUB> Y </SUB> (four reads), performing an XOR operation, and writing Y <SUB> 2 </SUB> (one write) to a different disk in server 5. A total of eight reads and two writes.
Notice that only simple XOR was involved in the data reconstruction, reducing the pressure on the CPU. <BR /> <BR /> What happens if there are two failures in the same group, e.g. the disks that store X <SUB> 1 </SUB> and X <SUB> 2 </SUB> have both failed? In this case X <SUB> 1 </SUB> is reconstructed by reading X <SUB> 3 </SUB> , X <SUB> 4 </SUB> , P <SUB> X </SUB> , Y <SUB> 1 </SUB> , Y <SUB> 2 </SUB> , Y <SUB> 3 </SUB> , Y <SUB> 4 </SUB> and Q (eight reads), performing the erasure code computation, and writing X <SUB> 1 </SUB> to a different disk in server 1. It is not necessary to read P <SUB> Y </SUB> , since it can be calculated from Y <SUB> 1 </SUB> , Y <SUB> 2 </SUB> , Y <SUB> 3 </SUB> and Y <SUB> 4 </SUB> . Once X <SUB> 1 </SUB> is reconstructed, X <SUB> 2 </SUB> can be reconstructed using the same mechanism described for one failure above, except no additional reads are needed. <BR /> <BR /> Notice how, in the example above, one server does not hold any symbols. This configuration allows reconstruction of symbols even in the case where a server has malfunctioned and is permanently retired, after which the cluster will effectively have only 11 servers until a replacement server is added to the cluster. <BR /> <BR /> The number of data symbols in a group depends on the cluster size and the drive types being used. Solid state drives perform better, so the number of data symbols in a group can be larger. The table below outlines the default erasure coding scheme (RS or LRC) and the resulting efficiency for hybrid and all-flash storage configurations in various cluster sizes.
<BR /> <BR /> <BR /> <TABLE> <TBODY><TR> <TD> <STRONG> Servers </STRONG> </TD> <TD colspan="2"> <BR /> <P> <STRONG> SSD + HDD </STRONG> </P> <BR /> </TD> <TD colspan="2"> <BR /> <P> <STRONG> All SSD </STRONG> </P> <BR /> </TD> </TR> <TR> <TD> <STRONG> </STRONG> </TD> <TD> <STRONG> Layout </STRONG> </TD> <TD> <STRONG> Efficiency </STRONG> </TD> <TD> <STRONG> Layout </STRONG> </TD> <TD> <STRONG> Efficiency </STRONG> </TD> </TR> <TR> <TD> <STRONG> 4 </STRONG> </TD> <TD> RS 2+2 </TD> <TD> 50% </TD> <TD> RS 2+2 </TD> <TD> 50% </TD> </TR> <TR> <TD> <STRONG> 5 </STRONG> </TD> <TD> RS 2+2 </TD> <TD> 50% </TD> <TD> RS 2+2 </TD> <TD> 50% </TD> </TR> <TR> <TD> <STRONG> 6 </STRONG> </TD> <TD> RS 2+2 </TD> <TD> 50% </TD> <TD> RS 2+2 </TD> <TD> 50% </TD> </TR> <TR> <TD> <STRONG> 7 </STRONG> </TD> <TD> RS 4+2 </TD> <TD> 66% </TD> <TD> RS 4+2 </TD> <TD> 66% </TD> </TR> <TR> <TD> <STRONG> 8 </STRONG> </TD> <TD> RS 4+2 </TD> <TD> 66% </TD> <TD> RS 4+2 </TD> <TD> 66% </TD> </TR> <TR> <TD> <STRONG> 9 </STRONG> </TD> <TD> RS 4+2 </TD> <TD> 66% </TD> <TD> RS 6+2 </TD> <TD> 75% </TD> </TR> <TR> <TD> <STRONG> 10 </STRONG> </TD> <TD> RS 4+2 </TD> <TD> 66% </TD> <TD> RS 6+2 </TD> <TD> 75% </TD> </TR> <TR> <TD> <STRONG> 11 </STRONG> </TD> <TD> RS 4+2 </TD> <TD> 66% </TD> <TD> RS 6+2 </TD> <TD> 75% </TD> </TR> <TR> <TD> <STRONG> 12 </STRONG> </TD> <TD> LRC (8,2,1) </TD> <TD> 72% </TD> <TD> RS 6+2 </TD> <TD> 75% </TD> </TR> <TR> <TD> <STRONG> 13 </STRONG> </TD> <TD> LRC (8,2,1) </TD> <TD> 72% </TD> <TD> RS 6+2 </TD> <TD> 75% </TD> </TR> <TR> <TD> <STRONG> 14 </STRONG> </TD> <TD> LRC (8,2,1) </TD> <TD> 72% </TD> <TD> RS 6+2 </TD> <TD> 75% </TD> </TR> <TR> <TD> <STRONG> 15 </STRONG> </TD> <TD> LRC (8,2,1) </TD> <TD> 72% </TD> <TD> RS 6+2 </TD> <TD> 75% </TD> </TR> <TR> <TD> <STRONG> 16 </STRONG> </TD> <TD> LRC (8,2,1) </TD> <TD> 72% </TD> <TD> LRC (12,2,1) </TD> <TD> 80% </TD> </TR> </TBODY></TABLE> <BR /> <H2> Accelerating parity volumes </H2> <BR /> In Storage Spaces Direct it is possible to create a hybrid volume.
A hybrid volume is essentially a volume where part of the volume uses mirror resiliency and part uses parity resiliency. <BR /> <P> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107468iF8E740E5D9C86E56" /> </P> <BR /> <P> <EM> Figure </EM> <EM> 4 </EM> <EM> Hybrid Volume </EM> </P> <BR /> The purpose of mixing mirror and parity in the volume is to provide a balance between storage performance and storage efficiency. Hybrid volumes require the use of the ReFS on-disk file system, as it is aware of the volume layout: <BR /> <UL> <BR /> <LI> ReFS always writes data to the mirror portion of the volume, taking advantage of the write performance of mirror </LI> <BR /> <LI> ReFS rotates data into the parity portion of the volume when needed, taking advantage of the efficiency of parity </LI> <BR /> <LI> Parity is only calculated when rotating data into the parity portion </LI> <BR /> <LI> ReFS writes updates to data stored in the parity portion by placing the new data in the mirror portion and invalidating the old data stored in the parity portion – again to take advantage of the write performance of mirror </LI> <BR /> </UL> <BR /> ReFS starts rotating data into the parity portion at 60% utilization of the mirror portion and gradually becomes more aggressive in rotating data as utilization increases. It is highly desirable to: <BR /> <UL> <BR /> <LI> Size the mirror portion to twice the size of the active working set (hot data) to avoid excessive data rotation </LI> <BR /> <LI> Size the overall volume to always have 20% free space to avoid excessive fragmentation due to data rotation </LI> <BR /> </UL> <BR /> <H2> Conclusion </H2> <BR /> I hope this blog post helps provide more insight into how mirror and parity resiliency work in Storage Spaces Direct, how data is laid out across servers, and how data is reconstructed in various failure cases.
<BR /> <BR /> We also discussed how Local Reconstruction Codes (LRC) increase the efficiency of data reconstruction, reducing both storage IO churn and CPU cycles, and overall help the system return to a healthy state more quickly. <BR /> <BR /> And finally we discussed how hybrid volumes provide a balance between the performance of mirror and the efficiency of parity. <BR /> <BR /> Let me know what you think. <BR /> <BR /> Until next time, <BR /> <BR /> Claus <BR /> <BR /> </BODY></HTML> Wed, 10 Apr 2019 11:20:48 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/volume-resiliency-and-efficiency-in-storage-spaces-direct/ba-p/425831 Claus Joergensen 2019-04-10T11:20:48Z Work Folders and Offline Files support for Windows Information Protection https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/work-folders-and-offline-files-support-for-windows-information/ba-p/425810 <P><STRONG> First published on TECHNET on Aug 29, 2016 </STRONG> <BR />Hi all, <BR /><BR />I’m Jeff Patterson, Program Manager for Work Folders and Offline Files. <BR /><BR />Windows 10, version 1607 will be available to Enterprise customers soon, so I wanted to cover support for Windows Information Protection (a.k.a. Enterprise Data Protection) when using Work Folders or Offline Files. </P> <P>&nbsp;</P> <H3>Windows Information Protection Overview</H3> <P>Windows Information Protection (WIP) is a new security feature introduced in Windows 10, version 1607 to protect against data leaks.
<BR /><BR />Benefits of WIP <BR /><BR /></P> <UL> <UL> <LI>Separation between personal and corporate data, without requiring employees to switch environments or apps</LI> </UL> </UL> <UL> <UL> <LI>Additional data protection for existing line-of-business apps without a need to update the apps</LI> </UL> </UL> <UL> <UL> <LI>Ability to wipe corporate data from devices while leaving personal data alone</LI> </UL> </UL> <UL> <UL> <LI>Use of audit reports for tracking issues and remedial actions</LI> </UL> </UL> <UL> <UL> <LI>Integration with your existing management system (Microsoft Intune, System Center Configuration Manager 2016, or your current mobile device management (MDM) system) to configure, deploy, and manage WIP for your company</LI> </UL> </UL> <P>For additional information on Windows Information Protection, please reference our TechNet <A href="#" target="_blank" rel="noopener"> documentation </A> . </P> <H3>&nbsp;</H3> <H3>Work Folders support for Windows Information Protection</H3> <P>Work Folders was updated in Windows 10 to support Windows Information Protection. <BR /><BR />If a WIP policy is applied to a Windows 10 device, all user data stored in the Work Folders directory will be encrypted using the same key and Enterprise ID that is used by Windows Information Protection. <BR /><BR />Note: The user data is only encrypted on the Windows 10 device. When the user data is synced to the Work Folders server, it’s not encrypted on the server. To encrypt the user data on the Work Folders server, you need to use RMS encryption. </P> <H3>&nbsp;</H3> <H3>Offline Files and Windows Information Protection</H3> <P>Offline Files (a.k.a. Client Side Caching) is an older file sync solution and was not updated to support Windows Information Protection. This means any user data stored on a network share that’s cached locally on the Windows 10 device using Offline Files is not protected by Windows Information Protection. 
<BR /><BR />If you’re currently using Offline Files, our recommendation is to migrate to a modern file sync solution, such as <A href="#" target="_blank" rel="noopener"> Work Folders </A> or <A href="#" target="_blank" rel="noopener"> OneDrive for Business </A> , that supports Windows Information Protection. <BR /><BR />If you decide to use Offline Files with Windows Information Protection, you need to be aware of the following issue if you try to open cached files while working offline: <BR /><BR />Can't open files offline when you use Offline Files and Windows Information Protection <BR /><A href="#" target="_blank" rel="noopener"> https://support.microsoft.com/en-us/kb/3187045 </A> </P> <H3>&nbsp;</H3> <H3>Conclusion</H3> <P>Offline Files does not support Windows Information Protection; you should use a modern file sync solution, such as <A href="#" target="_blank" rel="noopener"> Work Folders </A> or <A href="#" target="_blank" rel="noopener"> OneDrive for Business </A> , that supports WIP.</P> Tue, 26 May 2020 02:45:08 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/work-folders-and-offline-files-support-for-windows-information/ba-p/425810 Jeff Patterson 2020-05-26T02:45:08Z Windows Server 2016 Dedup Documentation Now Live! https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/windows-server-2016-dedup-documentation-now-live/ba-p/425809 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Aug 29, 2016 </STRONG> <BR /> Hi all! <BR /> <BR /> We just released the Data Deduplication documentation for <A href="#" target="_blank"> Windows Server 2016 over on TechNet </A> ! The new documentation includes <A href="#" target="_blank"> a more detailed explanation of how Dedup works </A> , <A href="#" target="_blank"> crisper guidance on how to evaluate workloads for deduplication </A> , and <A href="#" target="_blank"> information on the available Dedup settings, with context for why you would want to change them </A> . 
<BR /> <BR /> Check it out: <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107465iB1497AC01C7A1D1C" /> <BR /> <BR /> As always, questions, concerns, or feedback are very welcome! Please feel free to comment at the bottom of this post, or reach out to us directly at <A href="https://gorovian.000webhostapp.com/?exam=mailto:dedupfeedback@microsoft.com" target="_blank"> dedupfeedback@microsoft.com </A> . </BODY></HTML> Wed, 10 Apr 2019 11:20:10 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/windows-server-2016-dedup-documentation-now-live/ba-p/425809 Will Gries 2019-04-10T11:20:10Z Deep Dive: Volumes in Storage Spaces Direct https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/deep-dive-volumes-in-storage-spaces-direct/ba-p/425807 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Aug 29, 2016 </STRONG> <BR /> Hi! I’m Cosmos. You can follow me on Twitter <A href="#" target="_blank"> @cosmosdarwin </A> . <BR /> <BR /> <EM> Note: This blog was updated on January 6th 2017 to align more closely to official documentation. </EM> <BR /> <H2> Quick Review </H2> <BR /> In <A href="#" target="_blank"> Storage Spaces Direct </A> , volumes can derive their fault tolerance from mirroring, parity encoding, or mirror-accelerated parity. <BR /> <BR /> Briefly… <BR /> <UL> <BR /> <LI> Mirroring is similar to distributed, software-defined RAID-1. It provides the fastest possible reads/writes, but isn’t very capacity efficient, because you’re effectively keeping full extra copies of everything. It’s best for actively written data, so-called "hot" data. </LI> <BR /> <LI> Parity is similar to&nbsp;distributed, software-defined RAID-5 or RAID-6. Our implementation includes several breakthrough advancements developed by Microsoft Research. Parity can achieve far greater capacity efficiency, but at the expense of computational work for each write. 
It’s best for infrequently written, so-called "cold" data. </LI> <BR /> <LI> Beginning in Windows Server 2016, one volume can be part mirror&nbsp;and part parity.&nbsp;Writes land first in the mirrored portion and are gradually moved into the parity portion later. This accelerates ingestion and reduces resource utilization when large writes arrive by allowing the compute-intensive parity encoding to happen over a longer time. This works great for workloads that write in large, sequential passes, such as archival or backup targets. </LI> <BR /> </UL> <BR /> <H2> Let’s See It </H2> <BR /> So, that’s the concept.&nbsp;How can you see all this in Windows? The Storage Management API is the answer, but unfortunately it’s not quite as straightforward as you might think. <STRONG> This blog aims to untangle the many objects and their properties, so we can get one comprehensive view </STRONG> , like this: <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107461iC2FD9267051B49C4" /> <EM> Volumes, their capacities, how they're filling up, resiliency, footprints, efficiency, all in one easy view. </EM> <BR /> <BR /> The first thing to understand is that in Storage Spaces Direct, every “volume” is really a little hamburger-like stack of objects. The <STRONG> Volume </STRONG> sits on a <STRONG> Partition </STRONG> ; the <STRONG> Partition </STRONG> sits on a <STRONG> Disk </STRONG> ; that <STRONG> Disk </STRONG> is a <STRONG> Virtual Disk </STRONG> , also commonly called a <EM> Storage Space </EM> . <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107462iD917F3FC2048D094" /> <EM> Classes and their relationships in the Windows Storage Management API. </EM> <BR /> <BR /> Let's grab properties from several of these, to assemble the picture we want. 
<BR /> <BR /> We can get the volumes in our system by launching PowerShell as Administrator and running <STRONG> Get-Volume </STRONG> . The key properties are the <STRONG> FileSystemLabel </STRONG> , which is how the volume shows up mounted in Windows (literally – the name of the folder), the <STRONG> FileSystemType </STRONG> , which shows us whether the volume is ReFS or NTFS, and the <STRONG> Size </STRONG> . <BR /> <PRE class="lia-indent-padding-left-30px">Get-Volume | Select FileSystemLabel, FileSystemType, Size</PRE> Given any volume, we can follow associations down the hamburger. For example, try this: <BR /> <PRE class="lia-indent-padding-left-30px">Get-Volume -FileSystemLabel &lt;Choose One&gt; | Get-Partition</PRE> Neat! Now, the <STRONG> Partition </STRONG> isn’t very interesting, and frankly, neither is the <STRONG> Disk </STRONG> , but following these associations is the safest way to get to the underlying <STRONG> VirtualDisk </STRONG> (the <EM> Storage Space </EM> !), which has many key properties we want. <BR /> <PRE class="lia-indent-padding-left-30px">$Volume = Get-Volume -FileSystemLabel &lt;Choose One&gt;<BR />$Partition = $Volume | Get-Partition<BR />$Disk = $Partition | Get-Disk<BR />$VirtualDisk = $Disk | Get-VirtualDisk</PRE> <EM> Voila! </EM> (We speak Français in Canada.) Now we have the <STRONG> VirtualDisk </STRONG> underneath our chosen <STRONG> Volume </STRONG> , saved as <STRONG> $VirtualDisk </STRONG> . You could shortcut this whole process and just run <STRONG> Get-VirtualDisk </STRONG> , but theoretically you can’t be sure which one is under which <STRONG> Volume </STRONG> . <BR /> <BR /> We now get to deal with two cases. <BR /> <H3> Case One: No Tiers </H3> <BR /> If the <STRONG> VirtualDisk </STRONG> is <EM> not </EM> tiered, which is to say it uses mirror or parity, but not both, and it was created without referencing any <STRONG> StorageTier </STRONG> (more on these later), then it has several key properties. 
<BR /> <UL> <BR /> <LI> First, its <STRONG> ResiliencySettingName </STRONG> will be either <STRONG> Mirror </STRONG> or <STRONG> Parity </STRONG> . </LI> <BR /> <LI> Next, its <STRONG> PhysicalDiskRedundancy </STRONG> will either be <STRONG> 1 </STRONG> or <STRONG> 2 </STRONG> . This lets us distinguish between what we call “two-way mirror” versus “three-way mirror”, or “single parity” versus “dual parity” (erasure coding). </LI> <BR /> <LI> Finally, its <STRONG> FootprintOnPool </STRONG> tells us how much physical capacity is occupied by this Space, once the resiliency is accounted for. The <STRONG> VirtualDisk </STRONG> also has its own <STRONG> Size </STRONG> property, but this will be identical to that of the <STRONG> Volume </STRONG> , plus or minus some modest metadata. </LI> <BR /> </UL> <BR /> Check it out! <BR /> <PRE class="lia-indent-padding-left-30px">$VirtualDisk | Select FriendlyName, ResiliencySettingName, PhysicalDiskRedundancy, Size, FootprintOnPool</PRE> If we divide the <STRONG> Size </STRONG> by the <STRONG> FootprintOnPool </STRONG> , we obtain the storage efficiency. For example, if some <STRONG> Volume </STRONG> is 100 GB and uses three-way mirror, its <STRONG> VirtualDisk FootprintOnPool </STRONG> should be about 300 GB, for 33.3% efficiency. <BR /> <H3> Case Two: Tiers </H3> <BR /> Ok, that wasn’t so bad. Now, what if the <STRONG> VirtualDisk </STRONG> is tiered? Actually, what <EM> is </EM> tiering? <BR /> <BR /> For our purposes, tiering is when multiple sets of these properties coexist in one <STRONG> VirtualDisk </STRONG> , because it is effectively part mirror, part parity. You can tell this is happening if its <STRONG> ResiliencySettingName </STRONG> and <STRONG> PhysicalDiskRedundancy </STRONG> properties are <EM> completely blank </EM> . (Helpful! Thanks!) <BR /> <BR /> The secret is: an extra layer in our stack – the <STRONG> StorageTier </STRONG> objects. 
<BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107463i17686A91DE9AD875" /> <EM> Sometimes, volumes stash some properties on their StorageTier(s). </EM> <BR /> <BR /> Let’s grab these, because it’s <EM> their </EM> properties we need. As before, we can follow associations. <BR /> <PRE class="lia-indent-padding-left-30px">$Tiers = $VirtualDisk | Get-StorageTier</PRE> Typically, we expect to get two, one called something like “Performance” (mirror), the other something like “Capacity” (parity). Unlike in 2012 or 2012 R2, these tiers are specific to one <B> VirtualDisk </B> . Each has all the same key properties we got before from the <STRONG> VirtualDisk </STRONG> itself – namely <STRONG> ResiliencySettingName </STRONG> , <STRONG> PhysicalDiskRedundancy, Size, </STRONG> and <STRONG> FootprintOnPool </STRONG> . <BR /> <BR /> Check it out! <BR /> <PRE class="lia-indent-padding-left-30px">$Tiers | Select FriendlyName, ResiliencySettingName, PhysicalDiskRedundancy, Size, FootprintOnPool</PRE> For each tier, if we divide the <STRONG> Size </STRONG> by the <STRONG> FootprintOnPool </STRONG> , we can obtain its storage efficiency. <BR /> <BR /> Moreover, if we divide the sum of the sizes by the sum of the footprints, we obtain the overall efficiency. <BR /> <H2> U Can Haz Script </H2> <BR /> This script puts it all together, along with some formatting/prettifying magic, to produce this view. You can easily see your volumes, their capacity, how they’re filling up, how much physical capacity they occupy (and why), and the implied storage efficiency, in one easy table. <BR /> <BR /> Let me know what you think! 
<BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107464i726978BC7FD4EADA" /> <EM> Volumes, their capacities, how they’re filling up, resiliency, footprints, efficiency, all in one easy view. </EM> <BR /> <BR /> Notes: <BR /> <OL> <BR /> <LI> This screenshot was taken on a 4-node system. At 16 nodes, Dual Parity can reach up to 80.0% efficiency. </LI> <BR /> <LI> Because it queries so many objects and associations in SM-API, the script can take up to several minutes to run. </LI> <BR /> <LI> You can download the script here, to spare yourself&nbsp;the 200-line copy/paste: <A href="#" target="_blank"> http://cosmosdarwin.com/Show-PrettyVolume.ps1 </A> </LI> <BR /> </OL> <BR /> # Written by Cosmos Darwin, PM <BR /> # Copyright (C) 2016 Microsoft Corporation <BR /> # MIT License <BR /> # 8/2016 <BR /> <BR /> Function ConvertTo-PrettyCapacity { <BR /> <BR /> Param ( <BR /> [Parameter( <BR /> Mandatory=$True, <BR /> ValueFromPipeline=$True <BR /> ) <BR /> ] <BR /> [Int64]$Bytes, <BR /> [Int64]$RoundTo = 0 # Default <BR /> ) <BR /> <BR /> If ($Bytes -Gt 0) { <BR /> $Base = 1024 # To Match PowerShell <BR /> $Labels = ("bytes", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB") # Blame Snover <BR /> $Order = [Math]::Floor( [Math]::Log($Bytes, $Base) ) <BR /> $Rounded = [Math]::Round($Bytes/( [Math]::Pow($Base, $Order) ), $RoundTo) <BR /> [String]($Rounded) + $Labels[$Order] <BR /> } <BR /> Else { <BR /> 0 <BR /> } <BR /> Return <BR /> } <BR /> <BR /> <BR /> Function ConvertTo-PrettyPercentage { <BR /> <BR /> Param ( <BR /> [Parameter(Mandatory=$True)] <BR /> [Int64]$Numerator, <BR /> [Parameter(Mandatory=$True)] <BR /> [Int64]$Denominator, <BR /> [Int64]$RoundTo = 0 # Default <BR /> ) <BR /> <BR /> If ($Denominator -Ne 0) { # Cannot Divide by Zero <BR /> $Fraction = $Numerator/$Denominator <BR /> $Percentage = $Fraction * 100 <BR /> $Rounded = [Math]::Round($Percentage, 
$RoundTo) <BR /> [String]($Rounded) + "%" <BR /> } <BR /> Else { <BR /> 0 <BR /> } <BR /> Return <BR /> } <BR /> <BR /> ### SCRIPT... ### <BR /> <BR /> $Output = @() <BR /> <BR /> # Query Cluster Shared Volumes <BR /> $Volumes = Get-StorageSubSystem Cluster* | Get-Volume | ? FileSystem -Eq "CSVFS" <BR /> <BR /> ForEach ($Volume in $Volumes) { <BR /> <BR /> # Get MSFT_Volume Properties <BR /> $Label = $Volume.FileSystemLabel <BR /> $Capacity = $Volume.Size | ConvertTo-PrettyCapacity <BR /> $Used = ConvertTo-PrettyPercentage ($Volume.Size - $Volume.SizeRemaining) $Volume.Size <BR /> <BR /> If ($Volume.FileSystemType -Like "*ReFS") { <BR /> $Filesystem = "ReFS" <BR /> } <BR /> ElseIf ($Volume.FileSystemType -Like "*NTFS") { <BR /> $Filesystem = "NTFS" <BR /> } <BR /> <BR /> # Follow Associations <BR /> $Partition = $Volume | Get-Partition <BR /> $Disk = $Partition | Get-Disk <BR /> $VirtualDisk = $Disk | Get-VirtualDisk <BR /> <BR /> # Get MSFT_VirtualDisk Properties <BR /> $Footprint = $VirtualDisk.FootprintOnPool | ConvertTo-PrettyCapacity <BR /> $Efficiency = ConvertTo-PrettyPercentage $VirtualDisk.Size $VirtualDisk.FootprintOnPool <BR /> <BR /> # Follow Associations <BR /> $Tiers = $VirtualDisk | Get-StorageTier <BR /> <BR /> # Get MSFT_VirtualDisk or MSFT_StorageTier Properties... 
<BR /> <BR /> If ($Tiers.Length -Lt 2) { <BR /> <BR /> If ($Tiers.Length -Eq 0) { <BR /> $ReadFrom = $VirtualDisk # No Tiers <BR /> } <BR /> Else { <BR /> $ReadFrom = $Tiers[0] # First/Only Tier <BR /> } <BR /> <BR /> If ($ReadFrom.ResiliencySettingName -Eq "Mirror") { <BR /> # Mirror <BR /> If ($ReadFrom.PhysicalDiskRedundancy -Eq 1) { $Resiliency = "2-Way Mirror" } <BR /> If ($ReadFrom.PhysicalDiskRedundancy -Eq 2) { $Resiliency = "3-Way Mirror" } <BR /> $SizeMirror = $ReadFrom.Size | ConvertTo-PrettyCapacity <BR /> $SizeParity = [string](0) <BR /> } <BR /> ElseIf ($ReadFrom.ResiliencySettingName -Eq "Parity") { <BR /> # Parity <BR /> If ($ReadFrom.PhysicalDiskRedundancy -Eq 1) { $Resiliency = "Single Parity" } <BR /> If ($ReadFrom.PhysicalDiskRedundancy -Eq 2) { $Resiliency = "Dual Parity" } <BR /> $SizeParity = $ReadFrom.Size | ConvertTo-PrettyCapacity <BR /> $SizeMirror = [string](0) <BR /> } <BR /> Else { <BR /> Write-Host -ForegroundColor Red "What have you done?!" <BR /> } <BR /> } <BR /> <BR /> ElseIf ($Tiers.Length -Eq 2) { # Two Tiers <BR /> <BR /> # Mixed / Multi- / Hybrid <BR /> $Resiliency = "Mix" <BR /> <BR /> ForEach ($Tier in $Tiers) { <BR /> If ($Tier.ResiliencySettingName -Eq "Mirror") { <BR /> # Mirror Tier <BR /> $SizeMirror = $Tier.Size | ConvertTo-PrettyCapacity <BR /> If ($Tier.PhysicalDiskRedundancy -Eq 1) { $Resiliency += " (2-Way" } <BR /> If ($Tier.PhysicalDiskRedundancy -Eq 2) { $Resiliency += " (3-Way" } <BR /> } <BR /> } <BR /> ForEach ($Tier in $Tiers) { <BR /> If ($Tier.ResiliencySettingName -Eq "Parity") { <BR /> # Parity Tier <BR /> $SizeParity = $Tier.Size | ConvertTo-PrettyCapacity <BR /> If ($Tier.PhysicalDiskRedundancy -Eq 1) { $Resiliency += " + Single)" } <BR /> If ($Tier.PhysicalDiskRedundancy -Eq 2) { $Resiliency += " + Dual)" } <BR /> } <BR /> } <BR /> } <BR /> <BR /> Else { <BR /> Write-Host -ForegroundColor Red "What have you done?!" 
<BR /> } <BR /> <BR /> # Pack <BR /> <BR /> $Output += [PSCustomObject]@{ <BR /> "Volume" = $Label <BR /> "Filesystem" = $Filesystem <BR /> "Capacity" = $Capacity <BR /> "Used" = $Used <BR /> "Resiliency" = $Resiliency <BR /> "Size (Mirror)" = $SizeMirror <BR /> "Size (Parity)" = $SizeParity <BR /> "Footprint" = $Footprint <BR /> "Efficiency" = $Efficiency <BR /> } <BR /> } <BR /> <BR /> $Output | Sort Efficiency, Volume | FT <BR /> </BODY></HTML> Wed, 10 Apr 2019 11:19:55 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/deep-dive-volumes-in-storage-spaces-direct/ba-p/425807 Cosmos Darwin 2019-04-10T11:19:55Z Offline Files (CSC) to Work Folders Migration Guide https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/offline-files-csc-to-work-folders-migration-guide/ba-p/425800 <P><STRONG> First published on TECHNET on Aug 12, 2016 </STRONG> <BR />Hi all, <BR /><BR />I’m Jeff Patterson, Program Manager for Work Folders and Offline Files. <BR /><BR />Jane wrote a <A href="#" target="_blank" rel="noopener"> blog </A> last year which covers how to use Folder Redirection with Work Folders. The blog is great for new environments. If Folder Redirection and Offline Files are currently used, there are some additional steps that need to be performed which are covered in this migration guide. </P> <H2>&nbsp;</H2> <H2><STRONG>Overview </STRONG></H2> <P>This blog covers migrating from Offline Files (a.k.a. Client Side Caching) to Work Folders. This guidance is specific to environments that are using Folder Redirection and Offline Files and the user data is stored on a Windows Server 2012 R2 file server. <BR /><BR />When using Folder Redirection and Offline Files, the user data in special folders (e.g., Documents, Favorites, etc.) is stored on a file server. The user data is cached locally on the client machine via Offline Files so it’s accessible when the user is working offline. 
<BR /><BR /><STRONG>Folder Redirection policy with Offline Files</STRONG> <BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 313px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107448i126E66E054AFC924/image-size/large?v=v2&amp;px=999" role="button" /></span> <BR /><BR />After migrating to Work Folders, the user data in special folders (e.g., Documents, Favorites, etc.) is stored locally on the client machine. The Work Folders client synchronizes the user data to the file server. <BR /><BR /><STRONG>Folder Redirection policy with Work Folders</STRONG> <BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 314px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107449iD320556B1969DD6B/image-size/large?v=v2&amp;px=999" role="button" /></span> <BR /><BR />After migration, the user experience will remain unchanged and companies will benefit from the advantages of Work Folders. </P> <H2>&nbsp;</H2> <H2><STRONG>Why Migrate? 
</STRONG></H2> <P>Reasons to migrate from Offline Files to Work Folders: <BR /><BR /></P> <UL> <LI>Modern file sync solution that was introduced in Windows Server 2012 R2</LI> <LI>Supports security features to protect user data such as selective wipe, Windows Information Protection (WIP) and Rights Management Services (RMS)</LI> <LI>Familiar file access experience for users, same as OneDrive and OneDrive for Business</LI> <LI>User data can be accessed outside of the corporate network (VPN or DirectAccess is not required)</LI> <LI>User data can be accessed using non-Windows devices: Android, iPhone and iPad</LI> <LI>Future investments (new features) are focused on Work Folders</LI> </UL> <P><BR />For the complete list of benefits, please reference the <A href="#" target="_blank" rel="noopener"> Work Folders Overview </A> . </P> <H2>&nbsp;</H2> <H2><STRONG>Supported Migration Scenarios </STRONG></H2> <P>This migration guide is intended for the following configurations: <BR /><BR /></P> <UL> <LI>User data is hosted on a file server that is running Windows Server 2012 R2 or later</LI> <LI>Windows clients are Windows 7, Windows 8.1 and Windows 10</LI> <LI>Offline Files is used with Folder Redirection</LI> </UL> <H2><BR /><STRONG> Unsupported Migration Scenarios </STRONG></H2> <P>The following configurations or scenarios are not currently supported: </P> <TABLE> <TBODY> <TR> <TD><STRONG> Configuration or Scenario </STRONG></TD> <TD><STRONG> Reason </STRONG></TD> </TR> <TR> <TD>User data is stored on a network attached storage (NAS) device</TD> <TD>Work Folders requires the user data to be stored on the file server via direct attached storage (DAS), storage area network (SAN) or iSCSI</TD> </TR> <TR> <TD>File server is running Windows Server 2012 or Windows Server 2008 R2</TD> <TD>The Work Folders server 
component is only supported on Windows Server 2012 R2 or later</TD> </TR> <TR> <TD>Offline Files is used for multiple file shares (e.g., team shares)</TD> <TD>Work Folders supports one sync partnership. It is intended for user data only and does not support team shares or collaboration scenarios.</TD> </TR> </TBODY> </TABLE> <P><BR />If the requirements listed above are not met, the recommendation is to continue to use Offline Files or evaluate using OneDrive for Business. </P> <P>&nbsp;</P> <H2><STRONG> Overview of the Offline Files to Work Folders migration process </STRONG></H2> <P>High-level overview of the Offline Files to Work Folders migration process: </P> <P>&nbsp;</P> <UL> <LI>On the file server, install the Work Folders feature and configure the Work Folders Sync Share to use the existing file share used for Folder Redirection.</LI> <LI>Deploy Work Folders on the Windows clients via group policy.</LI> <LI>Update the existing Folder Redirection group policy to redirect the special folders (e.g., Documents, Desktop, etc.) to the local Work Folders directory on the client machine.</LI> <LI>Optional: Disable Offline Files on the Windows clients.</LI> </UL> <H2>&nbsp;</H2> <H2><STRONG>Planning the migration </STRONG></H2> <P>The following considerations should be reviewed prior to starting the Offline Files to Work Folders migration: <BR /><BR /></P> <UL> <UL> <LI>Work Folders requirements and design considerations: <A href="#" target="_blank" rel="noopener"> https://technet.microsoft.com/en-us/library/dn265974(v=ws.11).aspx </A></LI> </UL> </UL> <UL> <UL> <LI>Client disk space requirements: During the migration process, existing client machines will need additional disk space to temporarily store the user data using both Offline Files and Work Folders. 
Once the migration is complete, the user data stored in Offline Files will be deleted.</LI> </UL> </UL> <UL> <UL> <LI>Network traffic: Migrating from Offline Files to Work Folders requires redirecting the special folders (e.g., Documents, Favorites, etc.) to the client machine. The user data that is currently stored on the file server will be synced to the Windows client using Work Folders. The migration should be done in phases to reduce network traffic. Please reference the <A href="#" target="_blank" rel="noopener"> performance considerations </A> and <A href="#" target="_blank" rel="noopener"> network throttling </A> blogs for additional guidance.</LI> </UL> </UL> <UL> <UL> <LI>RDS and VDI: If users are accessing user data in Remote Desktop Services (RDS) or Virtual Desktop Infrastructure (VDI) environments, create a separate Folder Redirection group policy for RDS and VDI. Work Folders is not&nbsp;supported for&nbsp;RDS and is not recommended for VDI environments. The recommendation is to continue to redirect the special folders to the file server since the users should have a reliable connection.</LI> </UL> </UL> <P class="lia-indent-padding-left-90px">Example - Create two Folder Redirection group policies:</P> <P>&nbsp;</P> <P class="lia-indent-padding-left-120px">Desktops and Laptops Folder Redirection group policy - The root path in the Folder Redirection policy will point to the local Work Folders directory: %systemdrive%\users\%username%\Work Folders\Documents</P> <P>&nbsp;</P> <P class="lia-indent-padding-left-120px">RDS and VDI Folder Redirection group policy - The&nbsp;root path in the Folder Redirection policy will point to the file server: <A href="#" target="_blank" rel="noopener"> \\fileserver1\userdata$\%username%\Documents </A></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-120px">Note: The group policy <A href="#" target="_blank" rel="noopener"> loopback processing </A> (replace mode) setting should be enabled on the RDS and VDI group 
policy</P> <P class="lia-indent-padding-left-120px"><BR />Note: Offline Files (CSC) should be disabled for RDS and VDI environments since the user should have a reliable connection to the file server</P> <P>&nbsp;</P> <UL> <UL> <LI>Existing Windows clients: If you do not want to migrate existing clients to Work Folders (only new clients), you can create separate Folder Redirection group policies as covered in the “RDS and VDI” section. The legacy clients will continue to access the user data on the file server. The new clients will access the user data locally and sync the data to the file server.</LI> </UL> </UL> <H2><BR /><STRONG> Migrating from Offline Files to Work Folders </STRONG></H2> <P>To migrate from Offline Files to Work Folders, follow the steps below. <BR /><BR />Note: If the&nbsp;root path in the Folder Redirection policy is <A href="#" target="_blank" rel="noopener"> \\fileserver1\userdata$ </A> , the steps below should be performed on the file server named FileServer1. <BR /><BR /></P> <H3><STRONG> On the Windows Server 2012 R2 file server, install and configure Work Folders by following steps 1-10 in the TechNet <A href="#" target="_blank" rel="noopener"> documentation </A> . </STRONG> </H3> <P><BR />Note: Several of the steps (6, 8, 9, 10) are optional. 
If you want to allow users to sync files over the internet and you plan to have multiple Work Folders servers, steps 1-10 should be completed.</P> <P>&nbsp;</P> <P><STRONG> Important details to review before following the TechNet <A href="#" target="_blank" rel="noopener"> documentation </A> : </STRONG></P> <P>&nbsp;</P> <UL> <LI>Obtain SSL certificates (Step #1 in the TechNet documentation)</LI> <LI><SPAN style="font-family: inherit;">The Work Folders Certificate Management </SPAN><A style="font-family: inherit; background-color: #ffffff;" href="#" target="_blank" rel="noopener"> blog </A><SPAN style="font-family: inherit;"> provides additional info on using certificates with Work Folders.</SPAN></LI> <LI>Create DNS records (Step #2 in the TechNet documentation)</LI> <LI>When Work Folders clients use auto discovery, the URL used to discover the Work Folders server is <A style="font-family: inherit; background-color: #ffffff;" href="#" target="_blank" rel="noopener"> https://workfolders.domain.com </A><SPAN style="font-family: inherit;"> . If you plan to use auto discovery, create a CNAME record in DNS named workfolders that resolves to the FQDN of the Work Folders server.</SPAN></LI> </UL> <P>&nbsp;</P> <P class="lia-indent-padding-left-60px"><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 574px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107450i3D7A843246173EB6/image-size/large?v=v2&amp;px=999" role="button" /></span></P> <P>&nbsp;</P> <UL> <LI>Install Work Folders on file servers (Step #3 in the TechNet documentation)</LI> <LI>If the existing file server is clustered, the Work Folders feature must be installed on each cluster node. 
For more details, please refer to the following <A style="font-family: inherit; background-color: #ffffff;" href="#" target="_blank" rel="noopener"> blog </A><SPAN style="font-family: inherit;"> .</SPAN></LI> <LI>Create sync shares for user data (Step #7 in the TechNet documentation)</LI> <LI>When creating the sync share, select the existing file share that is used for the user data.</LI> </UL> <P class="lia-indent-padding-left-60px">Example: If the special folders path in the Folder Redirection policy is <A href="#" target="_blank" rel="noopener"> \\fileserver1\userdata$ </A> , the userdata$ file share should be selected as the path.</P> <P>&nbsp;</P> <P class="lia-indent-padding-left-60px"><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 500px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107451iC3A0389692645569/image-size/large?v=v2&amp;px=999" role="button" /></span></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-90px">Note: All user data stored on the file share will be synced to the client machine. If this path is used to store user data in addition to the redirected special folders (e.g., home drive), that user data will also be synced to the client machine.</P> <P>&nbsp;</P> <UL> <LI>When specifying the user folder structure, select “User alias” to maintain compatibility with the Folder Redirection folder structure.</LI> </UL> <P>&nbsp;</P> <P class="lia-indent-padding-left-90px"><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 500px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107452iF41A524A190057ED/image-size/large?v=v2&amp;px=999" role="button" /></span></P> <P>&nbsp;</P> <UL> <LI>If you select the “Automatically lock screen, and require a password” security policy, the user must be an administrator on the local machine or the policy will fail to apply. 
To exclude this setting from applying to domain-joined machines, use the Set-SyncShare -PasswordAutolockExcludeDomain cmdlet (see TechNet <A href="#" target="_blank" rel="noopener"> content </A> for more info).</LI> </UL> <P>&nbsp;</P> <P class="lia-indent-padding-left-90px"><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 500px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107453i1BBC74F55F71A30F/image-size/large?v=v2&amp;px=999" role="button" /></span></P> <H3><BR /><BR /><STRONG>Deploy the Work Folders client using group policy&nbsp;</STRONG></H3> <P><BR />To deploy Work Folders via group policy, follow Step #11 in the TechNet <A href="#" target="_blank" rel="noopener"> documentation </A> .</P> <P>&nbsp;</P> <P>For the “Work Folders URL” setting in the group policy, the recommendation is to use the discovery URL (e.g., <A href="#" target="_blank" rel="noopener"> https://workfolders.domain.com </A> ) so you don’t have to update the group policy if the Work Folders server changes.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 500px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107454iCDD3D1943B8DBD9C/image-size/large?v=v2&amp;px=999" role="button" /></span></P> <P>&nbsp;</P> <P>Note: Using the discovery URL requires the “workfolders” CNAME record in DNS that is covered in the “Create DNS records” section.</P> <H3><BR /><STRONG>Update the existing Folder Redirection group policy to redirect the special folders to the local Work Folders directory on the client machine&nbsp;</STRONG></H3> <P><BR />Note: All special folders (e.g., Documents, Desktop, Favorites, etc.) can be redirected to the local Work Folders directory except for AppData (Roaming). Redirecting this folder can lead to conflicts and files that fail to sync due to open handles. 
The data stored in the AppData\Roaming folder should be roamed using Enterprise State Roaming (ESR), UE-V or Roaming User Profiles.</P> <P>&nbsp;</P> <P>To update the Folder Redirection policy, perform the following steps:</P> <P>&nbsp;</P> <UL> <LI>Open the existing Folder Redirection group policy.</LI> <LI>Right-click on a special folder (e.g., Documents) that's currently redirected to a file share and choose properties.<BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 313px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107455iDF96BBB4C35320A9/image-size/large?v=v2&amp;px=999" role="button" /></span></LI> </UL> <UL> <LI>Change the Target folder location setting to: Redirect to the following location.</LI> <LI>Change the Root Path to: %systemdrive%\users\%username%\Work Folders\Documents<BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 314px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107456i4039C0780ECCCDC4/image-size/large?v=v2&amp;px=999" role="button" /></span></LI> </UL> <UL> <LI>Click the Settings tab and un-check the "Move the contents of Documents to the new location" setting.</LI> </UL> <P class="lia-indent-padding-left-30px"><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 314px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107457iAEE0B9FFA343E709/image-size/large?v=v2&amp;px=999" role="button" /></span></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px">Note: The "Move the contents of Documents to the new location" setting should be un-checked because Work Folders will sync the user data to the client machine. 
Leaving this setting checked for existing clients will cause additional network traffic and possible file conflicts.</P> <P>&nbsp;</P> <UL> <LI>Click OK to save the settings and click Yes for the Warning messages.</LI> <LI>Repeat&nbsp;steps above for each special folder that needs to be redirected.</LI> </UL> <H3>&nbsp;</H3> <H3><STRONG>Optional: Disable Offline Files on the Windows clients&nbsp;</STRONG></H3> <P>After migrating to Work Folders, you can prevent clients from using Offline Files by setting the “Allow or Disallow use of the Offline Files feature” group policy setting to Disabled.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 500px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107458iFCF601A9BC8B5978/image-size/large?v=v2&amp;px=999" role="button" /></span></P> <P>&nbsp;</P> <P>Note: Offline Files should remain enabled if using BranchCache in your environment.</P> <P><BR /><BR /></P> <H2><STRONG> Validate the migration </STRONG></H2> <P><BR /><STRONG> Verify the Work Folders clients are syncing properly with the Work Folders server </STRONG> <BR /><BR /></P> <UL> <UL> <LI>To verify the Work Folders clients are syncing properly with the Work Folders server, review the Operational and Reporting event logs on the Work Folders server. 
The&nbsp;logs are located under Microsoft-Windows-SyncShare in Event Viewer.</LI> </UL> </UL> <UL> <UL> <LI>On the Work Folders client, you can check the status by opening the Work Folders applet in the control panel:</LI> </UL> </UL> <P class="lia-indent-padding-left-90px">Example: Healthy status <BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 500px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107459iFBD9D56DAB92CEE2/image-size/large?v=v2&amp;px=999" role="button" /></span></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-90px">If the sync status is orange or red, review the error logged. If additional information is needed, review the Work Folders operational log, which is located under Microsoft-Windows-WorkFolders in Event Viewer.</P> <P>&nbsp;</P> <P class="lia-indent-padding-left-90px">The “Troubleshooting Work Folders on Windows client” <A href="#" target="_blank" rel="noopener"> blog </A> covers common issues.</P> <P><BR /><STRONG> Verify the special folders are redirected to the correct location </STRONG> <BR /><BR /></P> <OL> <UL> <LI>Open File Explorer on a Windows client and access the properties of a special folder that’s redirected (e.g., Documents).</LI> <LI>Verify the folder location is under %systemdrive%\users\%username%\Work Folders</LI> </UL> </OL> <P class="lia-indent-padding-left-90px"><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" style="width: 365px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107460i31C87F86B4E7776C/image-size/large?v=v2&amp;px=999" role="button" /></span></P> <P>&nbsp;</P> <P>If the special folder is still redirected to the file share, run “gpupdate /force” from a command prompt to update the policy on the client machine. 
The user will need to log off and log on for the changes to be applied.</P> <P><BR /><BR /></P> <H2><STRONG> Additional Information </STRONG></H2> <P><BR /><STRONG> Special folders that can be redirected via Folder Redirection policy </STRONG> <BR />The following special folders can be redirected to the local Work Folders directory: <BR /><BR /></P> <UL> <UL> <LI>Contacts</LI> </UL> </UL> <UL> <UL> <LI>Desktop</LI> </UL> </UL> <UL> <UL> <LI>Documents</LI> </UL> </UL> <UL> <UL> <LI>Downloads</LI> </UL> </UL> <UL> <UL> <LI>Favorites</LI> </UL> </UL> <UL> <UL> <LI>Links</LI> </UL> </UL> <UL> <UL> <LI>Music</LI> </UL> </UL> <UL> <UL> <LI>Pictures</LI> </UL> </UL> <UL> <UL> <LI>Saved Games</LI> </UL> </UL> <UL> <UL> <LI>Searches</LI> </UL> </UL> <UL> <UL> <LI>Start Menu</LI> </UL> </UL> <UL> <UL> <LI>Videos</LI> </UL> </UL> <P><BR /><STRONG>Considerations for the Root Path in the Folder Redirection policy </STRONG> <BR />The Folder Redirection root path in the migration guide (Step# 3) assumes multiple special folders are redirected. The root path can vary for each special folder as long as the folders are redirected under the Work Folders directory. 
<BR /><BR />Example #1: If Documents is the only folder that is redirected, the root path could be the Work Folders root directory: %systemdrive%\users\%username%\Work Folders <BR /><BR />Example #2: If you do not want the special folders in the root of the Work Folders directory, use a sub-directory in the path: %systemdrive%\users\%username%\Work Folders\Profile\Favorites </P> <P>&nbsp;</P> <P><STRONG>Known issues </STRONG> <BR />The following issue has been identified when redirecting special folders to the Work Folders directory: </P> <TABLE> <TBODY> <TR> <TD><STRONG> Folder </STRONG></TD> <TD><STRONG> Issue </STRONG></TD> <TD><STRONG> Cause </STRONG></TD> <TD><STRONG> Solution </STRONG></TD> </TR> <TR> <TD>Favorites</TD> <TD>Unable to open Favorites in Internet Explorer when using Windows Information Protection</TD> <TD>Internet Explorer does not support encrypted favorite files</TD> <TD>Use Edge or a 3 <SUP> rd </SUP> party browser</TD> </TR> </TBODY> </TABLE> <P><BR />Congratulations! You’ve now completed the Offline Files to Work Folders migration! <BR /><BR />I would appreciate any feedback (add a comment) on the migration process and if any steps&nbsp;need to be clarified. <BR /><BR />Thanks, <BR /><BR />Jeff</P> Tue, 26 May 2020 03:17:25 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/offline-files-csc-to-work-folders-migration-guide/ba-p/425800 Jeff Patterson 2020-05-26T03:17:25Z Storage IOPS update with Storage Spaces Direct https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-iops-update-with-storage-spaces-direct/ba-p/425776 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jul 26, 2016 </STRONG> <BR /> Hello, <A href="#" target="_blank"> Claus </A> here again. I played one of my best rounds of golf in a while at the beautiful TPC Snoqualmie Ridge yesterday. While golf is about how low can you go, I want to give an update on how high can you go with Storage Spaces Direct. 
<BR /> <BR /> Once again, <A href="#" target="_blank"> Dan </A> and I used a 16-node rig attached to a 32 port Cisco 3132 switch. Each node was equipped with the following hardware: <BR /> <UL> <BR /> <LI> 2x Xeon E5-2699v4 2.3Ghz (22c44t) </LI> <BR /> <LI> 128GB DRAM </LI> <BR /> <LI> 4x 800GB Intel P3700 NVMe (PCIe 3.0 x4) </LI> <BR /> <LI> 1x LSI 9300 8i </LI> <BR /> <LI> 20x 1.2TB Intel S3610 SATA SSD </LI> <BR /> <LI> 1x Chelsio 40GbE iWARP T580-CR (Dual Port 40Gb PCIe 3.0 x8) </LI> <BR /> </UL> <BR /> Using <A href="#" target="_blank"> VMFleet </A> we stood up 44 virtual machines per node, for a total of 704 virtual machines. Each virtual machine was configured with 1vCPU. We then used VMFleet to run <A href="#" target="_blank"> DISKSPD </A> in each of the virtual machines with 1 thread, 4KiB random read with 32 outstanding IO. <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107447i0D6BC85D84FA1C92" /> <BR /> <BR /> As you can see from the above screenshot, we were able to hit ~5M IOPS in aggregate into the virtual machines. This is ~7,000 IOPS per virtual machine! <BR /> <BR /> We are not done yet…. If you are attending <A href="#" target="_blank"> Microsoft Ignite </A> , please stop by my session “ <A href="#" target="_blank"> BRK3088 Discover Storage Spaces Direct, the ultimate software-defined storage for Hyper-V </A> ” and say hello. <BR /> <BR /> Let us know what you think. 
<BR /> <BR /> Dan &amp; Claus </BODY></HTML> Wed, 10 Apr 2019 11:17:16 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-iops-update-with-storage-spaces-direct/ba-p/425776 Claus Joergensen 2019-04-10T11:17:16Z Azure Active Directory Enterprise State Roaming for Windows 10 is now generally available https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/azure-active-directory-enterprise-state-roaming-for-windows-10/ba-p/425772 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jun 22, 2016 </STRONG> <BR /> Hi folks, <A href="#" target="_blank"> Ned </A> here again with a quick public service announcement from colleagues: <BR /> <P> Azure Active Directory <STRONG> Enterprise State Roaming </STRONG> for Windows 10 is now generally available. With Enterprise State Roaming, customers benefit from enhanced security and control of the OS and app state data that are roamed between enterprise-owned devices. </P> <BR /> <P> All synced data is encrypted before leaving the devices and enterprise users can use Azure AD identities to roam their settings and modern app data using Azure cloud for storage.&nbsp; This enables enterprises to maintain control and have better visibility over their data.&nbsp; With Enterprise State Roaming, users no longer have to use consumer Microsoft accounts to roam settings between work-owned devices, and their data is no longer stored in the personal OneDrive cloud.” </P> <BR /> <P> To learn more, visit the <A href="#" target="_blank"> Enterprise Mobility &amp; Security Blog </A> . </P> <BR /> You should check it out, especially since this is a feature you asked us to make. Well, maybe not you <EM> personally </EM> . 
<BR /> <BR /> Until next time, <BR /> <BR /> - Ned "cross post" Pyle <BR /> <DIV> </DIV> </BODY></HTML> Wed, 10 Apr 2019 11:17:00 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/azure-active-directory-enterprise-state-roaming-for-windows-10/ba-p/425772 Ned Pyle 2019-04-10T11:17:00Z Quick Survey: Windows File Server Usage and Pain Points https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/quick-survey-windows-file-server-usage-and-pain-points/ba-p/425771 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jun 20, 2016 </STRONG> <BR /> Hi all, <BR /> <BR /> We need your input to help us prioritize our future investments for File Server scenarios. We’ve created a short&nbsp;5 question survey to better understand File Server usage and pain points. Any feedback is appreciated. <BR /> <BR /> <A href="#" target="_blank"> https://www.surveymonkey.com/r/C3MFT6Q </A> <BR /> <BR /> Thanks, <BR /> <BR /> Jeff </BODY></HTML> Wed, 10 Apr 2019 11:16:57 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/quick-survey-windows-file-server-usage-and-pain-points/ba-p/425771 Jeff Patterson 2019-04-10T11:16:57Z Storage throughput with Storage Spaces Direct (TP5) https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-throughput-with-storage-spaces-direct-tp5/ba-p/425770 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on May 11, 2016 </STRONG> <BR /> Hello, <A href="#" target="_blank"> Claus </A> here again. <A href="#" target="_blank"> Dan </A> and I recently received some <A href="#" target="_blank"> Samsung PM1725 </A> NVMe devices. These have 8 PCIe lanes, so we thought we would put them in a system with 100Gbps network adapters and see how fast we could make this thing go. <BR /> <BR /> We used a 4-node Dell R730XD configuration attached to a 32 port Arista DCS-7060CX-32S 100Gb switch, running EOS version 4.15.3FX-7060X.1. 
Each node was equipped with the following hardware: <BR /> <UL> <BR /> <LI> 2x Xeon E5-2660v3 2.6Ghz (10c20t) </LI> <BR /> <LI> 256GB DRAM (16x 16GB DDR4 2133 MHz DIMM) </LI> <BR /> <LI> 4x Samsung PM1725 3.2TB NVME SSD (PCIe 3.0 x8 AIC) </LI> <BR /> <LI> Dell HBA330 <BR /> <UL> <BR /> <LI> 4x Intel S3710 800GB SATA SSD </LI> <BR /> <LI> 12x Seagate 4TB Enterprise Capacity 3.5” SATA HDD </LI> <BR /> </UL> <BR /> </LI> <BR /> <LI> 2x Mellanox ConnectX-4 100Gb (Dual Port 100Gb PCIe 3.0 x16) <BR /> <UL> <BR /> <LI> Mellanox FW v. 12.14.2036 </LI> <BR /> <LI> Mellanox ConnectX-4 Driver v. 1.35.14894 </LI> <BR /> <LI> Device PSID MT_2150110033 </LI> <BR /> <LI> Single port connected / adapter </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> Using <A href="#" target="_blank"> VMFleet </A> we stood up 20 virtual machines per node, for a total of 80 virtual machines. Each virtual machine was configured with 1vCPU. We then used VMFleet to run <A href="#" target="_blank"> DISKSPD </A> in each of the 80 virtual machines with 1 thread, 512KiB sequential read with 4 outstanding IO. <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107446i9F20767587E7A768" /> <BR /> <BR /> As you can see from the above screenshot, we were able to hit over 60GB/s in aggregate throughput into the virtual machines. Compare this to the <A href="#" target="_blank"> total size of English Wikipedia </A> article text (11.5GiB compressed), which means reading it at about 5 times per second. <BR /> <BR /> The throughput was on average ~750MB/s for each virtual machine, which is about a CD per second. <BR /> <BR /> You can find a recorded video of the performance run here: <BR /> <BR /> <IFRAME frameborder="0" height="540" src="https://channel9.msdn.com/blogs/ServerStorage/Storage-throughput-with-Storage-Spaces-Direct-TP5/player" width="960"> </IFRAME> <BR /> <BR /> We often speak about performance in IOPS, which is important to many workloads. 
For other workloads like big data, data warehouse and similar, throughput can be an important metric. We hope that we demonstrated the technical capabilities of Storage Spaces Direct when using high throughput hardware like the Mellanox 100Gbps NICs, the Samsung NVMe devices with 8 PCIe lanes and the Arista 100Gbps network switch. <BR /> <BR /> Let us know what you think. <BR /> <BR /> Dan &amp; Claus </BODY></HTML> Wed, 10 Apr 2019 11:16:52 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-throughput-with-storage-spaces-direct-tp5/ba-p/425770 Claus Joergensen 2019-04-10T11:16:52Z What’s new in Storage Replica for Windows Server 2016 Technical Preview 5 https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/what-8217-s-new-in-storage-replica-for-windows-server-2016/ba-p/425763 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on May 10, 2016 </STRONG> <BR /> Hiya folks, <A href="#" target="_blank"> Ned </A> here again. Windows Server 2016 Technical Preview 5’s release brings you a number of new Storage Replica features, some of which we added <I> directly from your feedback </I> during the preview loop: <BR /> <UL> <BR /> <LI> Asynchronous stretch clusters now supported </LI> <BR /> <LI> RoCE V2 RDMA networks now supported </LI> <BR /> <LI> Network Constraint </LI> <BR /> <LI> Integrated with the Cluster Health service </LI> <BR /> <LI> Delegation </LI> <BR /> <LI> Thin provisioned storage now supported </LI> <BR /> <LI> Fixes aplenty </LI> <BR /> </UL> <BR /> As you recall, Storage Replica offers new disaster recovery and preparedness capabilities in Windows Server 2016 Technical Preview. For the first time, Windows Server offers the peace of mind of zero data loss, with the ability to synchronously protect data on different racks, floors, buildings, campuses, counties, and cities. After a disaster strikes, all data will exist elsewhere without any possibility of loss. 
The same applies before a disaster strikes; Storage Replica offers you the ability to switch workloads to safe locations prior to catastrophes when granted a few moments warning – again, with no data loss. It supports three scenarios in TP5: stretch cluster, cluster-to-cluster, and server-to-server. <BR /> <P> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107440i3C0B0479C2C4934B" /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107441i0E2CA64B58824EDA" /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107442i8164BA7DA1A3FFC1" /> </P> <BR /> Sharp-eyed readers might notice a certain similarity to Claus’ “ <A href="#" target="_blank"> What’s new in Storage Spaces Direct Technical Preview 5 </A> ” blog post. Does my content directly rip off its style and flow, making me a thief and a cheater? <BR /> <BR /> Yes. Yes, it does. <BR /> <H3> Asynchronous stretch clusters now supported </H3> <BR /> You can now configure stretch clusters over very high latency and lower bandwidth networks. This means all three SR scenarios support both synchronous and asynchronous now. By default, all use synchronous unless you say otherwise. And we even included a nice wizard option for those using Failover Cluster Manager: <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107443iAC1967950989C988" /> <BR /> <H3> RoCE V2 RDMA networks now supported </H3> <BR /> You are now free to use RDMA over Converged Ethernet (RoCE) V2 in your deployments, joining iWARP and InfiniBand + Metro-X as supported network platforms. Naturally, plain old TCP/IP is still fine. Naturally, you will probably not be replicating great distances with RoCE due to its inherent nature, but datacenter and campus-wide is very attainable. <BR /> <BR /> You won’t have to do anything in SR to make this work at RTM*, it just happens by getting RoCEv2 to work, just like the other protocols. 
This is the beauty of SMB 3, the transport protocol for SR – if it finds working RDMA paths, it uses them along with multichannel, for blazing low latency, high throughput perf. You don’t have storage fast enough to fill SMB Direct. <A href="#" target="_blank"> Yet… </A> <BR /> <BR /> * In TP5 multichannel needs <A href="#" target="_blank"> a little nudge </A> <BR /> <H3> Network Constraint </H3> <BR /> You asked for it, you got it: you can now control which networks SR runs upon, based on network interface. We even support doing this per replication group, if you are a mad scientist who wants to replicate certain volumes on certain networks. <BR /> <BR /> Usage is simple – get the replication group and network info on each server or cluster: <BR /> Get-SRPartnership <BR /> <BR /> Get-NetIPConfiguration <BR /> Then run: <BR /> Set-SRNetworkConstraint -SourceComputerName <I> &lt;hi&gt; </I> -SourceRGName <I> &lt;there&gt; </I> -SourceNWInterfaceIndex <I> &lt;7&gt; </I> -DestinationComputerName <I> &lt;you&gt; </I> -DestinationRGName <I> &lt;guys&gt; </I> -DestinationNWInterfaceIndex <I> &lt;4&gt; </I> <BR /> <H3> Integrated with the Cluster Health service </H3> <BR /> Windows Server 2016 TP5 contains a brand new mechanism for monitoring clustered storage health, with the imaginative name of “ <A href="#" target="_blank"> Health Service </A> ”. What can I say; I was not involved. Anyhoo, the Health Service improves the day-to-day monitoring, operations, and maintenance experience of Storage Replica, Storage Spaces Direct, and the cluster. You get metrics, faults, actions, automation, and quarantine. This is not turned on by default in TP5 for mainline SR scenarios yet; I just want you to know about it for the future. <BR /> <H3> Delegation </H3> <BR /> How do you feel about adding users to the built-in administrators group on your servers? Hopefully queasy. 
Storage Replica implements a new built-in group for users to administer replication, with all the necessary ACLs in our service, driver, and registry. By adding the user to this group and Remote Management Users, they now have the power to replicate and remotely manage servers – but nothing else. <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107444i9F6377C13AB899A2" /> <BR /> <BR /> And just to make it easy, we gave you the Grant-SRDelegation cmdlet. I hate typing. <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107445i91A9525F14724FC0" /> <BR /> <H3> Thin provisioned storage now supported </H3> <BR /> By popular demand, we also added replication support for thin-provisioned storage. You can now use SAN, <I> non-clustered </I> Storage Spaces, and dynamic Hyper-V disks for thin-provisioned storage with Storage Replica. This means the initial sync of new volumes is nearly instantaneous. <BR /> <BR /> Don’t believe me? Ok: <BR /> <BR /> <IFRAME frameborder="0" height="540" src="https://channel9.msdn.com/Blogs/windowsserver/Windows-Server-2016-Storage-Replica-on-Thin-Provisioned-Storage/player" width="960"> </IFRAME> <BR /> <BR /> ( <A href="#" target="_blank"> Click here for other video options </A> ) <BR /> <BR /> Did you blink and miss it? Initial sync completed in less than a second. <BR /> <H3> Fixes aplenty </H3> <BR /> Finally, there have been plenty of fixes and tweaks added, and more to come. For instance, PowerShell has been improved - look for the <B> -IgnoreFailures </B> parameter when you are trying to perform a direction switch or replication teardown and a node is truly never coming back. <BR /> <BR /> This is a continual improvement process and we very much want to hear your feedback and your bugs. 
Email <A href="https://gorovian.000webhostapp.com/?exam=mailto:srfeed@microsoft.com" target="_blank"> srfeed@microsoft.com </A> and you will come right into my team’s inbox. You can also file feedback at our <A href="#" target="_blank"> UserVoice forum </A> . As always, the details, guides, known issues, and FAQ are all at <A href="#" target="_blank"> https://aka.ms/storagereplica </A> . <BR /> <BR /> Until next time, <BR /> <BR /> - Ned “getting close” Pyle </BODY></HTML> Wed, 10 Apr 2019 11:16:33 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/what-8217-s-new-in-storage-replica-for-windows-server-2016/ba-p/425763 Ned Pyle 2019-04-10T11:16:33Z Storage Spaces Direct in Azure https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-spaces-direct-in-azure/ba-p/425754 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on May 05, 2016 </STRONG> <BR /> &lt;This post has been updated to reflect Windows Server 2016 RTM&gt; <BR /> <BR /> &lt;A more comprehensive guide to IaaS VM Guest Clusters is available <A href="#" target="_blank"> here </A> .&gt; <BR /> <BR /> Hello, <A href="#" target="_blank"> Claus </A> here again. Enjoying the great weather in Washington and enjoying the view from my office, I thought I would share some notes about standing up Storage Spaces Direct using virtual machines in Azure and create a shared-nothing Scale-Out File Server. 
Several scenarios are now supported, including: <BR /> <BR /> <A href="#" target="_blank"> Scale-Out File Server for User Profile Disks </A> <BR /> <BR /> SQL Server failover cluster instance: <BR /> <BR /> <A href="#" target="_blank"> Support statement </A> <BR /> <BR /> <A href="#" target="_blank"> Deployment guidance </A> <BR /> <BR /> <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107437i554293D6F88309DA" /> <BR /> <P> Using the Azure portal, I: </P> <BR /> <BR /> <UL> <BR /> <LI> Created four virtual machines <BR /> <UL> <BR /> <LI> 1x DC named cj-dc </LI> <BR /> <LI> 3x storage nodes named cj-vm1, cj-vm2 and cj-vm3 </LI> <BR /> </UL> <BR /> </LI> <BR /> <LI> Created and attached&nbsp;two <A href="#" target="_blank"> 512 GB premium data disks </A> to each of the storage nodes </LI> <BR /> </UL> <BR /> I used <A href="#" target="_blank"> DS1 V2 </A> virtual machines and the Windows Server 2016 template. <BR /> <H2> Domain Controller </H2> <BR /> I promoted the domain controller with the domain name contoso.com. Once the domain controller setup finished, I changed the Azure virtual network configuration to use ‘Custom DNS’, with the IP address of the domain controller (see picture below). <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107438i4338048BAE231029" /> <BR /> <BR /> <BR /> <BR /> I restarted the virtual machines to pick up this change. With the DNS server configured, I joined all 3 virtual machines to the domain. <BR /> <H3> Failover Clustering </H3> <BR /> Next I needed to form a failover cluster. 
I ran the following to install the Failover Clustering feature on all the nodes: <BR /> $nodes = ("CJ-VM1", "CJ-VM2", "CJ-VM3") <BR /> <BR /> icm $nodes {Install-WindowsFeature Failover-Clustering -IncludeAllSubFeature -IncludeManagementTools} <BR /> With the Failover Clustering feature installed, I formed the&nbsp;cluster: <BR /> New-Cluster -Name CJ-CLU -Node $nodes -StaticAddress 10.0.0.10 <BR /> <H3> Storage Spaces Direct </H3> <BR /> With a functioning cluster, I can enable Storage Spaces Direct, which automatically creates a storage pool: <BR /> Enable-ClusterS2D <BR /> Each Azure DS1 V2 virtual machine comes with a 7GB temp disk. Storage Spaces Direct informs me that it did not claim the temp disk. It doesn't claim this disk because it already has a partition on it, which is great since I did not intend to use it as part of Storage Spaces Direct. <BR /> <BR /> Storage Spaces Direct also informs me that it didn't find any disks to be used for cache. All disks are identical in size and performance, and in that configuration there is no point in configuring a cache. 
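Not part of the original run, but if you want to confirm what Enable-ClusterS2D claimed, a quick check like the following works. This is a sketch: the S2D* wildcard assumes the default pool name that Enable-ClusterS2D generates, and the cmdlets are run on one of the cluster nodes.

```powershell
# Hedged verification step: list the auto-created pool and the physical
# disks it claimed (the 7GB temp disk should be absent from the output)
Get-StoragePool -FriendlyName S2D* |
    Format-List FriendlyName, OperationalStatus, Size, AllocatedSize

Get-StoragePool -FriendlyName S2D* | Get-PhysicalDisk |
    Format-Table FriendlyName, MediaType, Size, CanPool
```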
<BR /> <H3> Volumes </H3> <BR /> With Storage Spaces Direct enabled and a storage pool automatically created, I created a volume: <BR /> New-Volume -StoragePoolFriendlyName S2D* -FriendlyName VDisk01 -FileSystem CSVFS_REFS -Size 800GB <BR /> New-Volume automates the volume creation process, including formatting the volume, adding it to the cluster, and making it a CSV: <BR /> Get-ClusterSharedVolume <BR /> Name&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; State Node <BR /> ----&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ----- ---- <BR /> Cluster Virtual Disk (VDisk01) Online cj-vm2 <BR /> <H3> Scale-Out File Server </H3> <BR /> With the volume in place, I installed the file server role and created a Scale-Out File Server: <BR /> icm $nodes {Install-WindowsFeature FS-FileServer} <BR /> Add-ClusterScaleOutFileServerRole -Name cj-sofs <BR /> Once the Scale-Out File Server was created, I created a folder and a share: <BR /> New-Item -Path C:\ClusterStorage\Volume1\Data -ItemType Directory <BR /> New-SmbShare -Name Share1 -Path C:\ClusterStorage\Volume1\Data -FullAccess contoso\clausjor <BR /> <H3> Verifying </H3> <BR /> On the domain controller I verified access by browsing to \\cj-sofs\share1 and storing a few files: <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107439i731633A354643982" /> <BR /> <BR /> <BR /> <H3> Conclusion </H3> <BR /> I hope I provided a good overview of how to stand up a Scale-Out File Server using Storage Spaces Direct in a set of Azure virtual machines.&nbsp; Let me know what you think. 
<BR /> <BR /> Until next time <BR /> <BR /> Claus </BODY></HTML> Wed, 10 Apr 2019 11:15:37 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/storage-spaces-direct-in-azure/ba-p/425754 Claus Joergensen 2019-04-10T11:15:37Z Building Work Folders Sync Reports https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/building-work-folders-sync-reports/ba-p/425749 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on May 03, 2016 </STRONG> <BR /> Hi, I’m Manish Duggal, software engineer at Microsoft. Work Folders enables users to securely access their files on a Windows Server from Windows, iOS and Android devices. Enterprise IT admins often want to understand Work Folders usage, categorized by active users and the types of devices syncing to Work Folders servers. <BR /> <H3> Overview </H3> <BR /> In this example, I created three sync shares, for the Finance, HR and Engineering departments. Each sync share is assigned a unique security group: FinanceSG, HRSG and EngineeringSG. <BR /> <BR /> The script below enumerates the users associated with each sync share and outputs the user sync status to a CSV file, so that you can build charts like: <BR /> <OL> <BR /> <LI> The device types used for Work Folders. </LI> <BR /> <LI> Work Folders devices owned per user. 
</LI> <BR /> </OL> <BR /> The script uses cmdlets from the following PowerShell modules: <BR /> <OL> <BR /> <LI> Work Folders module (SyncShare) <BR /> <OL> <BR /> <LI> Get-SyncShare cmdlet </LI> <BR /> <LI> Get-SyncUserStatus cmdlet </LI> <BR /> </OL> <BR /> </LI> <BR /> <LI> Active Directory module (ActiveDirectory) <BR /> <OL> <BR /> <LI> Get-ADGroupMember cmdlet </LI> <BR /> </OL> <BR /> </LI> <BR /> </OL> <BR /> The general flow is: <BR /> <OL> <BR /> <LI> Enumerate all the sync shares on the Work Folders server </LI> <BR /> <LI> Enumerate the users in the security groups associated with each sync share </LI> <BR /> <LI> Output user sync status to a CSV file </LI> <BR /> </OL> <BR /> <H3> Helper Functions </H3> <BR /> The script defines three helper functions, plus an array declared for collecting user objects: <BR /> # get domain name from domain\group (or domain\user) format <BR /> function GetDomainName([string] $GroupUserName) <BR /> { <BR /> $pos = $GroupUserName.IndexOf("\"); <BR /> if($pos -ge 0) <BR /> { <BR /> $DomainName = $GroupUserName.Substring(0,$pos); <BR /> } <BR /> return $DomainName; <BR /> } <BR /> # get group (or user) only detail from domain\group format <BR /> function GetGroupUserName([string] $GroupUserName) <BR /> { <BR /> $pos = $GroupUserName.IndexOf("\"); <BR /> if($pos -ge 0) <BR /> { <BR /> $GroupUserName = $GroupUserName.Substring($pos+1); <BR /> } <BR /> return $GroupUserName; <BR /> } <BR /> # Object with User name, Sync share name and Device details <BR /> function SetUserDetail([string] $userName, [string] $syncShareName, [string] $deviceName, [string] $deviceOs) <BR /> { <BR /> #set the object for collection purpose <BR /> $userObj = New-Object psobject <BR /> Add-Member -InputObject $userObj -MemberType NoteProperty -Name UserName -Value "" <BR /> Add-Member -InputObject $userObj -MemberType NoteProperty -Name SyncShareName -Value "" <BR /> Add-Member -InputObject $userObj -MemberType NoteProperty -Name DeviceName -Value "" <BR /> Add-Member -InputObject $userObj -MemberType NoteProperty -Name DeviceOs -Value "" <BR /> $userObj.UserName = $userName <BR /> $userObj.SyncShareName = $syncShareName <BR /> $userObj.DeviceName = $deviceName <BR /> $userObj.DeviceOs = $deviceOs <BR /> return $userObj <BR /> } <BR /> #collection <BR /> $userCollection=@() <BR /> <H3> Enumerate the Sync shares on a Work Folders server </H3> <BR /> To enumerate the available sync shares, run the Get-SyncShare cmdlet: <BR /> $syncShares = Get-SyncShare <BR /> The <B> <STRONG> $syncShares </STRONG> </B> variable is a collection of sync share objects, such as Finance, HR and Engineering. <BR /> <H3> Getting the users for each sync share </H3> <BR /> To find the users associated with each sync share, first retrieve the security groups associated with the sync share and then get all users in each of the security groups: <BR /> foreach ($syncShare in $syncShares) <BR /> { <BR /> $syncShareSGs = $syncShare.User; <BR /> foreach ($syncShareSG in $syncShareSGs) <BR /> { <BR /> $domainName = GetDomainName $syncShareSG <BR /> $sgName = GetGroupUserName $syncShareSG <BR /> $sgUsers = Get-ADGroupMember -identity $sgName | select SamAccountName <BR /> #find Work Folders devices syncing for each user, as per the logic in the next section <BR /> } <BR /> } <BR /> With every iteration, <B> <STRONG> $syncShareSG </STRONG> </B> holds the department's security group name; for the Finance department, it is FinanceSG. <B> <STRONG> $sgUsers </STRONG> </B> is the list of SamAccountName values for the members of the department's security group.
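If you want to sanity-check the domain\name splitting logic outside PowerShell, the GetDomainName and GetGroupUserName helpers above can be mirrored in a few lines of Python. This is an illustrative sketch only; these functions are not part of the original script:

```python
# Hypothetical Python mirror of the script's GetDomainName / GetGroupUserName
# helpers: split a "domain\group" (or "domain\user") string into its parts.

def get_domain_name(group_user_name: str) -> str:
    """Return the domain part of 'domain\\group', or '' if there is no domain prefix."""
    pos = group_user_name.find("\\")
    return group_user_name[:pos] if pos >= 0 else ""

def get_group_user_name(group_user_name: str) -> str:
    """Return the group/user part of 'domain\\group' (the whole string if no prefix)."""
    pos = group_user_name.find("\\")
    return group_user_name[pos + 1:] if pos >= 0 else group_user_name

print(get_domain_name("Contoso\\FinanceSG"))      # Contoso
print(get_group_user_name("Contoso\\FinanceSG"))  # FinanceSG
```

As in the PowerShell version, the input comes from the sync share's User property in domain\group format, so the backslash is normally present.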
<BR /> <H3> Enumerate the devices synced with the Work Folders server </H3> <BR /> To find the devices syncing with the Work Folders server, run the Get-SyncUserStatus cmdlet for each user and add the needed details to the user object array: <BR /> foreach ($sgUser in $sgUsers) <BR /> { <BR /> #get user detail in domain\user format <BR /> $domainUser = [System.String]::Join("\",$domainName,$sgUser.SamAccountName); <BR /> #invoke the Get-SyncUserStatus <BR /> $syncUserStatusList = Get-SyncUserStatus -User $domainUser -SyncShare $syncShareName <BR /> #user may have more than one device <BR /> foreach ($syncUserStatus in $syncUserStatusList) <BR /> { <BR /> #set the user detail and add to list <BR /> $resultObj = SetUserDetail $domainUser $syncShareName $syncUserStatus.DeviceName $syncUserStatus.DeviceOs <BR /> #add to the user collection <BR /> $userCollection += $resultObj <BR /> } <BR /> } <BR /> With every iteration, <B> <STRONG> $syncUserStatusList </STRONG> </B> is the collection of devices owned by a user, and <B> <STRONG> $syncUserStatus </STRONG> </B> is the detail for one device from the user's device collection. This detail is added to the array of user objects. <BR /> <H3> Export the user and device details to CSV </H3> <BR /> $userCollection | Export-Csv -Path Report.csv -Force <BR /> <B> <STRONG> $userCollection </STRONG> </B> comprises the list of all users associated with the Finance, HR and Engineering sync shares and the devices syncing with the Work Folders server. That collection is used to generate a simple CSV report.
Here is a sample report generated for the above sync shares: <BR /> <TABLE> <TBODY><TR> <TD> <STRONG> UserName </STRONG> </TD> <TD> <STRONG> SyncShareName </STRONG> </TD> <TD> <STRONG> DeviceName </STRONG> </TD> <TD> <STRONG> DeviceOs </STRONG> </TD> </TR> <TR> <TD> Contoso\EnggUser1 </TD> <TD> Engineering </TD> <TD> User1-Desktop </TD> <TD> Windows 6.3 </TD> </TR> <TR> <TD> Contoso\EnggUser1 </TD> <TD> Engineering </TD> <TD> User1Phone </TD> <TD> iOS 8.3 </TD> </TR> <TR> <TD> Contoso\EnggUser1 </TD> <TD> Engineering </TD> <TD> User1-Laptop </TD> <TD> Windows 10.0 </TD> </TR> <TR> <TD> Contoso\FinanceUser1 </TD> <TD> Finance </TD> <TD> Finance-Main1 </TD> <TD> Windows 10.0 </TD> </TR> <TR> <TD> Contoso\FinanceUser1 </TD> <TD> Finance </TD> <TD> Finance-Branch </TD> <TD> Windows 6.3 </TD> </TR> <TR> <TD> Contoso\HRUser2 </TD> <TD> HR </TD> <TD> HR-US </TD> <TD> Windows 10.0 </TD> </TR> <TR> <TD> Contoso\HRUser2 </TD> <TD> HR </TD> <TD> iPad-Mini1 </TD> <TD> iOS 8.0 </TD> </TR> </TBODY></TABLE> <BR /> <H3> Summary </H3> <BR /> You just learned how easy it is to build a simple tabular report for understanding Work Folders usage trends in an enterprise. This data could be leveraged further to build interesting graphs, such as active usage across the enterprise and/or per department sync share, as well as weekly and monthly trends for Work Folders adoption, thanks to the graph generation support available in Microsoft Excel. 
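As a sketch of the kind of follow-on analysis the summary suggests, the generated Report.csv can be aggregated per sync share in a few lines of Python. This is illustrative only; the column names follow the sample report above, and `devices_per_share` is a hypothetical helper, not part of the original script:

```python
import csv
from collections import Counter
from io import StringIO

# A few rows in the shape of the Report.csv generated above.
sample_csv = """UserName,SyncShareName,DeviceName,DeviceOs
Contoso\\EnggUser1,Engineering,User1-Desktop,Windows 6.3
Contoso\\EnggUser1,Engineering,User1Phone,iOS 8.3
Contoso\\FinanceUser1,Finance,Finance-Main1,Windows 10.0
"""

def devices_per_share(csv_text: str) -> Counter:
    """Count synced devices per sync share from the report text."""
    reader = csv.DictReader(StringIO(csv_text))
    return Counter(row["SyncShareName"] for row in reader)

print(devices_per_share(sample_csv))  # Counter({'Engineering': 2, 'Finance': 1})
```

In practice you would pass the contents of Report.csv (e.g. `open("Report.csv").read()`) instead of the inline sample, and chart the resulting counts over time to see adoption trends.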
</BODY></HTML> Wed, 10 Apr 2019 11:15:02 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/building-work-folders-sync-reports/ba-p/425749 FileCAB-Team 2019-04-10T11:15:02Z DISKSPD now on GitHub, and the mysterious VMFLEET released https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/diskspd-now-on-github-and-the-mysterious-vmfleet-released/ba-p/425747 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Apr 11, 2016 </STRONG> <BR /> Hi folks, <A href="#" target="_blank"> Ned </A> here again with a brief announcement: <STRONG> DISKSPD </STRONG> , the Microsoft-built and recommended tool for testing and measuring your storage performance, <A href="#" target="_blank"> is now available on GitHub under an MIT License </A> . It can generate load, measure storage perf, and generally make life easier when building and configuring storage with Windows Server. Quit using IOMETER and SQLIO, get with DISKSPD. <BR /> <BR /> But wait, there's more! This GitHub project also contains <STRONG> <A href="#" target="_blank"> VMFLEET </A> </STRONG> . This set of scripts allows you to run tons of Windows Server 2016 Storage Spaces Direct hyper-converged guests - a "fleet of VMs", if you will - that then run DISKSPD workloads inside them. You can control the behaviors, quantities, IO patterns, etc. of all the VMs through a master control script. No, not <EM> that </EM> Master Control. <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107436iDE1330FFECF87E17" /> <BR /> <BR /> Use <STRONG> VMFLEET </STRONG> for load and stress testing your servers. It's an evolution of the same stuff we used at the Intel Developers Forum and Microsoft Ignite last year, and what we use internally. It only takes a few steps and you are grinding that storage. Check out the <A href="#" target="_blank"> DOCX </A> or <A href="#" target="_blank"> PDF </A> for more info. 
<BR /> <BR /> For more on Storage Spaces Direct, get to <A href="#" target="_blank"> https://aka.ms/s2d </A> . Or just bother <A href="#" target="_blank"> Claus on Twitter </A> . He loves it, no matter what he says. <BR /> <BR /> Until next time, <BR /> <BR /> Ned "disk swabbie" Pyle </BODY></HTML> Wed, 10 Apr 2019 11:14:56 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/diskspd-now-on-github-and-the-mysterious-vmfleet-released/ba-p/425747 Ned Pyle 2019-04-10T11:14:56Z Configuring Nano Server and Dedup https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/configuring-nano-server-and-dedup/ba-p/425745 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Apr 10, 2016 </STRONG> <BR /> Data Deduplication is a feature of Windows Server since Windows Server 2012 that can reduce the on-disk footprint of your data by finding and removing duplication within a volume without compromising its fidelity or integrity. For more information on Data Deduplication, see the <A href="#" target="_blank"> Data Deduplication Overview </A> . Nano Server is a new headless deployment option in Windows Server 2016 that has a far smaller system resource footprint, starts up significantly faster, and requires fewer updates and restarts than the Windows Server Core deployment option. For more information on Nano Server, see the <A href="#" target="_blank"> Getting Started with Nano Server </A> TechNet page. <BR /> <BR /> Data Deduplication is fully supported on Nano Server, and while we think you'll really love the benefits of deploying Windows Server and Dedup on Nano Server, there are a few changes to get used to. This guide walks through a vanilla deployment of setting up Nano Server and Dedup in a test environment: this should give you the context needed for more complicated deployments. <BR /> <BR /> <H2> Configuring Nano Server and Dedup </H2> <BR /> <OL> <BR /> <LI> Create a Nano Server Image. 
Unlike installing the Server Core or Full versions of Windows Server, Nano Server doesn't have an installer option. The following commands will create a VHDX for use with Hyper-V or other hypervisor. You may also find it helpful to install Nano directly on a bare metal server. In that case, simply change the file extension on the path provided to the TargetPath parameter on the New-NanoServerImage cmdlet to ".wim" rather than ".vhdx", and follow the steps <A href="#" target="_blank"> in this guide </A> instead of the next step. On Windows 10, open a new PowerShell window with Administrator privileges and copy the following: <BR /> <BR /> <DIV> <BR /> <P> $iso = Mount-DiskImage -ImagePath Y:\ISOs\WindowsServer2016_TP4.ISO -StorageType ISO -PassThru </P> <BR /> <P> $isoDriveLetter = ($iso | Get-Volume).DriveLetter + ":\" </P> <BR /> <P> Remove-Item -Path C:\temp\NanoServer -Recurse -Force -ErrorAction SilentlyContinue </P> <BR /> <P> New-Item -Path C:\temp\NanoServer -ItemType Directory -Force </P> <BR /> <P> Set-Location ($isoDriveLetter + "NanoServer\") </P> <BR /> <P> Import-Module .\NanoServerImageGenerator.psm1 </P> <BR /> <P> $password = ConvertTo-SecureString -String "t3stp@ssword!" -AsPlainText -Force # Note: password safe for testing only </P> <BR /> <P> New-NanoServerImage -GuestDrivers -MediaPath $isoDriveLetter -BasePath C:\temp\NanoServer\base -TargetPath C:\temp\NanoServer\csodedupnano.vhdx -MaxSize 50GB -Storage -Clustering -ComputerName "csodedupnano" -AdministratorPassword $password -EnableRemoteManagementPort </P> <BR /> <P> Set-Location C:\temp\NanoServer </P> <BR /> <P> Remove-Item -Path .\base -Recurse -Force # Clean-up </P> <BR /> <P> Dismount-DiskImage -ImagePath $iso.ImagePath # Clean-up </P> <BR /> </DIV> <BR /> </LI> <BR /> <LI> Create a new Virtual Machine (VM) in Hyper-V or your hypervisor of choice and configure Nano Server as desired. 
If you are not familiar with creating a VM in Hyper-V, <A href="#" target="_blank"> this guide </A> should help get you started. You may also wish to join your new Nano Server VM to a domain before continuing: follow <A href="#" target="_blank"> these steps </A> for more information. </LI> <BR /> <LI> Create a CimSession for remoting to your Nano Server instance: <BR /> <BR /> <DIV> <BR /> <P> $cred = New-Object System.Management.Automation.PSCredential -ArgumentList "csodedupnano", $password </P> <BR /> <P> $cimsession = New-CimSession -ComputerName "csodedupnano" -Credential $cred </P> <BR /> </DIV> <BR /> </LI> <BR /> <LI> Create a new partition for Dedup on your Nano Server VM instance: <BR /> <BR /> <DIV> <BR /> <P> $osPartition = Get-Partition -CimSession $cimsession -DriveLetter C </P> <BR /> <P> Resize-Partition -CimSession $cimsession -DiskNumber $osPartition.DiskNumber -PartitionNumber $osPartition.PartitionNumber -Size 5GB </P> <BR /> <P> $dedupPartition = New-Partition -CimSession $cimsession -DiskNumber $osPartition.DiskNumber -UseMaximumSize -DriveLetter D </P> <BR /> <P> Format-Volume -CimSession $cimsession -Partition $dedupPartition -FileSystem NTFS -NewFileSystemLabel "Shares" -UseLargeFRS </P> <BR /> </DIV> <BR /> </LI> <BR /> <LI> Enable the Dedup Feature. On your Nano Server instance: <BR /> <BR /> <DIV> <BR /> <P> Invoke-Command -ComputerName $cimsession.ComputerName -Credential $cred { dism /online /enable-feature /featurename:dedup-core /all } </P> <BR /> <P> Enable-DedupVolume -CimSession $cimsession -Volume D: -UsageType Default </P> <BR /> </DIV> <BR /> </LI> <BR /> </OL> <BR /> <BR /> That's it - you have now deployed Dedup on Nano Server! 
As always, please feel free to let us know if you have any questions, comments, or concerns with any of the content posted here either by commenting on this post, or sending the Dedup team a message directly at <A href="https://gorovian.000webhostapp.com/?exam=mailto:dedupfeedback@microsoft.com" target="_blank"> dedupfeedback@microsoft.com </A> . Thanks! </BODY></HTML> Wed, 10 Apr 2019 11:14:41 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/configuring-nano-server-and-dedup/ba-p/425745 Will Gries 2019-04-10T11:14:41Z Quick Survey: Your plans for WS 2016 block replication and Azure https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/quick-survey-your-plans-for-ws-2016-block-replication-and-azure/ba-p/425744 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Apr 01, 2016 </STRONG> <BR /> <P> Heya folks, <A href="#" target="_blank"> Ned </A> here again with a quick (only 4 questions) survey on how you plan to use block replication, Storage Replica, and Azure in the coming months after RTM of Windows Server 2016. Any feedback is highly appreciated. </P> <P> <A href="#" target="_blank"> https://microsoft.qualtrics.com/SE/?SID=SV_0U6b2tbhVmaImnX </A> </P> <P> Thanks! </P> </BODY></HTML> Wed, 10 Apr 2019 11:14:35 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/quick-survey-your-plans-for-ws-2016-block-replication-and-azure/ba-p/425744 Ned Pyle 2019-04-10T11:14:35Z Data Deduplication in Windows Server 2016 https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/data-deduplication-in-windows-server-2016/ba-p/425738 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Apr 01, 2016 </STRONG> <BR /> Since we introduced Data Deduplication (“Dedup” for short) in Windows Server 2012, the Dedup team has been hard at work improving this feature and our updates in Windows Server 2016 are no exception. 
When we started planning for Windows Server 2016, we heard very clearly from customers that performance and scale limitations prevented use in certain scenarios where the great space savings from Dedup would really be useful, so in Windows Server 2016, we focused our efforts on making sure that Dedup is highly performant and can run at scale. Here’s what’s new in 2016: <BR /> <OL> <BR /> <LI> Support for Volume Sizes Up to 64 TB <BR /> In Windows Server 2012 R2, Dedup optimizes data using a single-threaded job and I/O queue for each volume. While this works well for a lot of scenarios, you have to consider the workload type and the volume size to ensure that the Dedup Processing Pipeline can keep up with the rate of data changes, or “churn”. Typically this means that Dedup doesn’t work well for volumes greater than 10 TB in size (or less for workloads with a high rate of data changes). In Windows Server 2016, we went back to the drawing board and fully redesigned the Dedup Processing Pipeline. We now run multiple threads in parallel using multiple I/O queues on a single volume, resulting in performance that was only possible before by dividing up data into multiple, smaller volumes: <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107435i91B984ABC1BCAA2B" /> <BR /> The result is that our volume guidance changes to a very simple statement: Use the volume size you need, up to 64TB. </LI> <BR /> <LI> Support for File Sizes up to 1 TB <BR /> While Windows Server 2012 R2 supports the use of file sizes up to 1TB, files “approaching” this size are noted as “not good candidates” for Data Deduplication. In Windows Server 2016, we made use of new stream map structures and improved our partial file optimization, to support Dedup for file sizes up to 1 TB. These changes also improve overall Deduplication performance. 
</LI> <BR /> <LI> Simplified Dedup Deployment for Virtualized Backup Applications <BR /> While running Dedup for virtualized backup applications, such as DPM, is a supported scenario in Windows Server 2012 R2, we have radically simplified the deployment steps for this scenario in 2016. We have combined all the relevant Dedup configuration settings needed to support virtualized backup applications into a new preset configuration type called, as you might expect, “Virtualized Backup Server” (“Backup” in PowerShell). </LI> <BR /> <LI> Nano Server Support <BR /> Nano Server is a new headless deployment option in Windows Server 2016 that has a far smaller system resource footprint, starts up significantly faster, and requires fewer updates and restarts than the Windows Server Core deployment option. Data deduplication is fully supported on Nano Server! </LI> <BR /> </OL> <BR /> Please let us know if you have any questions about Dedup in Windows Server 2016, and I’ll be happy to answer them! We’d also love to hear any feature requests or scenarios you’d like to see Dedup support (like support for Dedup on ReFS) in future versions of Windows Server, so feel free to send those to us as well! You can leave comments on this post, or you can send the Dedup team a message directly at <A href="https://gorovian.000webhostapp.com/?exam=mailto:dedupfeedback@microsoft.com" target="_blank"> dedupfeedback@microsoft.com </A> . Thanks! 
</BODY></HTML> Wed, 10 Apr 2019 11:14:28 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/data-deduplication-in-windows-server-2016/ba-p/425738 Will Gries 2019-04-10T11:14:28Z Microsoft and Intel showcase Storage Spaces Direct with NVM Express at IDF ’15 https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/microsoft-and-intel-showcase-storage-spaces-direct-with-nvm/ba-p/425736 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Mar 25, 2016 </STRONG> <BR /> <EM> Note: this post originally appeared on </EM> <A href="#" target="_blank"> <EM> https://aka.ms/clausjor </EM> </A> <EM> by Claus Joergensen. </EM> <BR /> <BR /> This week I am at Intel Developer Forum 2015 in San Francisco, where we are showing a performance demo of Storage Spaces Direct Technical Preview with Intel NVM Express disk devices. <BR /> <BR /> <A href="#" target="_blank"> Storage Spaces Direct </A> enables you to use industry standard servers with local storage and build highly available and scalable software-defined storage for private clouds. One of the key advantages of Storage Spaces Direct is its ability to use NVMe disk devices, which are solid-state disk devices attached through PCI Express (PCIe) bus. 
<BR /> <BR /> We teamed up with Intel to build out a demo rack for <A href="#" target="_blank"> Intel Developer Forum 2015 </A> with a very colorful front panel: <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107430iCB7F6255F9FCC699" /> <BR /> <BR /> Fig 1: Colorful front panel <BR /> <BR /> The demo rack consists of the following hardware: <BR /> <UL> <BR /> <LI> 16 <B> <A href="#" target="_blank"> Intel® Server System S2600WT </A> </B> (2U) nodes, each comprised of: <BR /> <UL> <BR /> <LI> Intel® Server System R2224WTTYS-IDD (2U) </LI> <BR /> <LI> Dual <A href="#" target="_blank"> Intel® Xeon® processor E5-2699 v3 Processors </A> </LI> <BR /> <LI> 128GB Memory (16GB DDR4-2133 1.2V DR x4 RDIMM) </LI> <BR /> <LI> Storage per server: 4 <B> <A href="#" target="_blank"> Intel® SSD DC P3700 Series </A> </B> (800 GB, 2.5” SFF) </LI> <BR /> <LI> Boot drive: 1 Intel® SSD DC S3710 Series (200 GB, 2.5” SFF) </LI> <BR /> <LI> Network per server: 1 Chelsio® 10GbE iWARP RDMA Card (CHELT520CRG1P10) and an Intel® Ethernet Server Adapter X540-AT2 for management </LI> <BR /> </UL> <BR /> </LI> <BR /> <LI> Top of Rack Switch: Cisco Nexus 5548UP </LI> <BR /> </UL> <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107431i5D485A3891AAD0CB" /> <BR /> <BR /> Fig 2: Front of rack <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107432i946CF34E8904D2B7" /> <BR /> <BR /> Fig 3: Back of rack <BR /> <BR /> The demo rack uses a hyper-converged software configuration, where Hyper-V and Storage Spaces Direct run on the same cluster. Each server is running Windows Server 2016 Technical Preview 3. 
Storage Spaces Direct configuration: <BR /> <UL> <BR /> <LI> Single pool </LI> <BR /> <LI> 16 virtual disks </LI> <BR /> <LI> 3-copy mirror </LI> <BR /> <LI> <A href="#" target="_blank"> ReFS </A> on-disk file system </LI> <BR /> <LI> CSVFS cluster file system </LI> <BR /> </UL> <BR /> Each server is running eight virtual machines (128 total) as load generators. Each virtual machine is configured with: <BR /> <UL> <BR /> <LI> 8 virtual cores </LI> <BR /> <LI> 7.5 GB memory </LI> <BR /> <LI> Compute equivalent to Azure A4 sizing </LI> <BR /> <LI> <A href="#" target="_blank"> DISKSPD </A> for load generation </LI> <BR /> <LI> 8 threads per DISKSPD instance </LI> <BR /> <LI> Queue depth of 20 per thread </LI> <BR /> </UL> <BR /> We are showcasing the demo rack with a few different workload profiles: 100% 4K read, 90%/10% 4K read/write and 70%/30% 4K read/write. We are happy with the results given where we are in the development cycle. <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107433iF0D0216A1A02A523" /> <BR /> <BR /> The performance demonstration at IDF ’15 captures where we are with Storage Spaces Direct Technical Preview and demonstrates great collaboration between Intel and Microsoft. We also identified several areas where we can improve our software stack, and are looking forward to sharing future results as we work towards addressing these on our Storage Spaces Direct journey. <BR /> <BR /> We recorded a few videos so you can see the performance demonstration even if you do not attend IDF '15. <BR /> <BR /> <A href="#" target="_blank"> Introduction to Storage Spaces Direct </A> <BR /> <BR /> <A href="#" target="_blank"> 100% 4K Read </A> <BR /> <BR /> <A href="#" target="_blank"> 100% 4K Read with Storage Quality of Service (QoS) </A> <BR /> <BR /> <A href="#" target="_blank"> 70%/30% 4K Read/Write with Storage Quality of Service (QoS) </A> <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107434i4F3740F544840C48" /> <BR /> <BR /> Cheers <BR /> <BR /> Claus <BR /> <BR /> Links and stuff: <BR /> <BR /> twitter: @ClausJor <BR /> <BR /> Storage Spaces Direct deployment guide: <A href="#" target="_blank"> https://aka.ms/s2d-deploy </A> <BR /> <BR /> Storage Spaces Direct feedback: <A href="https://gorovian.000webhostapp.com/?exam=mailto:s2d_feedback@microsoft.com" target="_blank"> s2d_feedback@microsoft.com </A> </BODY></HTML> Wed, 10 Apr 2019 11:14:13 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/microsoft-and-intel-showcase-storage-spaces-direct-with-nvm/ba-p/425736 Ned Pyle 2019-04-10T11:14:13Z A developer’s view on VSS for SMB File Shares https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/a-developer-8217-s-view-on-vss-for-smb-file-shares/ba-p/425730 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Mar 25, 2016 </STRONG> <BR /> <EM> Note: this post originally appeared on </EM> <A href="#" target="_blank"> <EM> https://aka.ms/clausjor </EM> </A> <EM> by Claus Joergensen. 
</EM> <BR /> <BR /> VSS for SMB File Shares is an extension to the existing VSS infrastructure which consists of four parts: <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107427i295EEEDDE77FC884" /> <BR /> <BR /> In this post I want to look at VSS for SMB File Shares from a developer’s point of view and how to support it with minimal changes to 3 <SUP> rd </SUP> party VSS requestors, writers and providers, as well as look at how to investigate backup issues with the betest, vsstrace and Microsoft Netmon trace tools. <BR /> <BR /> Hopefully you have already read the blog covering the <A href="#" target="_blank"> technical overview and deployment scenarios of VSS for SMB File Shares </A> . For more technical details of this feature, please refer to <A href="#" target="_blank"> Molly Brown’s talk at Storage Developers Conference 2012 </A> and the <A href="#" target="_blank"> MS-FSRVP protocol document </A> . <BR /> <H3> VSS changes </H3> <BR /> The backup application-related VSS changes are summarized below. <BR /> <UL> <BR /> <LI> VSS Requestor </LI> <BR /> </UL> <BR /> <BR /> <UL> <BR /> <LI> COM Security </LI> <BR /> <LI> Support UNC path </LI> <BR /> <LI> UNC Path normalization </LI> <BR /> <LI> VSS Writer </LI> <BR /> <LI> VSS Providers </LI> <BR /> <LI> <BR /> <BR /> VSS requestor <BR /> The requestor is one of the key VSS components: it drives application-consistent shadow copy creation and backup/restore. <BR /> COM Security <BR /> As a COM client to the VSS coordinator service, a VSS requestor compatible with this feature must satisfy the following security requirements: </LI> <BR /> <LI> Run under a user account with local administrator or backup operator privilege on both the application server and the file servers. </LI> <BR /> <LI> Enable Impersonation and Cloaking so that the user token running the backup application can be authenticated on the file server. 
</LI> <BR /> <LI> Below is sample code to achieve this. More detail can be found in <A href="#" target="_blank"> Security Considerations for Requestors </A> . <BR /> <BR /> // Initialize COM security. <BR /> CoInitializeSecurity( <BR /> NULL, // PSECURITY_DESCRIPTOR pSecDesc, <BR /> -1, // LONG cAuthSvc, <BR /> NULL, // SOLE_AUTHENTICATION_SERVICE *asAuthSvc, <BR /> NULL, // void *pReserved1, <BR /> RPC_C_AUTHN_LEVEL_PKT_PRIVACY, // DWORD dwAuthnLevel, <BR /> RPC_C_IMP_LEVEL_IMPERSONATE, // DWORD dwImpLevel, <BR /> NULL, // void *pAuthList, <BR /> EOAC_STATIC_CLOAKING, // DWORD dwCapabilities, <BR /> NULL // void *pReserved3 <BR /> ); <BR /> Support UNC path <BR /> Prior to this feature, VSS only supported requestors adding shadow copies of data stored on local volumes. With the new “File Share Shadow Copy Provider” available by default on all Windows Server 2012 editions, VSS now allows adding an SMB UNC share path to a shadow copy set by calling the same <A href="#" target="_blank"> IVssBackupComponents::AddToSnapshotSet </A> method. Below is a simple code snippet to add the SMB UNC share path <A href="#" target="_blank"> <B> \\server1\guests </B> </A> to the shadow copy set. <BR /> <BR /> VSS_ID shadowCopySetId = GUID_NULL; <BR /> <BR /> VSS_ID shadowCopyId = GUID_NULL; <BR /> <BR /> CComPtr &lt;IVssBackupComponents&gt; spBackup; <BR /> <BR /> LPWSTR pwszPath = L"\\\\server1\\guests"; <BR /> <BR /> CreateVssBackupComponents(&amp;spBackup); <BR /> <BR /> spBackup-&gt;StartSnapshotSet(&amp;shadowCopySetId); <BR /> <BR /> spBackup-&gt;AddToSnapshotSet(pwszPath, GUID_NULL, &amp;shadowCopyId); <BR /> UNC Path Normalization <BR /> As with local volumes, uniquely identifying a UNC share path in a shadow copy set is key to VSS-based shadow copy creation and backup/recovery. 
A UNC share path is composed of two parts: a server name and a share name. For the same share, the server name part of a UNC path can be configured in a writer component in any of the formats below, with many variations. </LI> <BR /> <LI> Host name </LI> <BR /> <LI> FQDN (Fully Qualified Domain Name) </LI> <BR /> <LI> IPv4 </LI> <BR /> <LI> IPv6 or IPv6 literal format </LI> <BR /> <LI> This feature adds a new method, <A href="#" target="_blank"> IVssBackupComponents4::GetRootAndLogicalPrefixPaths </A> , that normalizes a given UNC share path to its unique root path and logical prefix format. The unique root path and logical prefix path are designed to be used for shadow copy creation and backup/restore file path formation, respectively. <BR /> <BR /> Note that a VSS requestor needs to: </LI> <BR /> <LI> Call <A href="#" target="_blank"> IVssBackupComponents::AddToSnapshotSet </A> with the unique root path of the UNC share in hostname or FQDN format, not IPv4 or IPv6 address format. <BR /> <BR /> If a UNC share path is in IPv4 or IPv6 address format, its root path must be normalized into hostname or FQDN format by calling <A href="#" target="_blank"> IVssBackupComponents4::GetRootAndLogicalPrefixPaths </A> . <BR /> <BR /> </LI> <BR /> <LI> Consistently normalize the file share UNC path into either hostname or FQDN format in the same shadow copy set before <A href="#" target="_blank"> IVssBackupComponents::AddToSnapshotSet </A> . </LI> <BR /> <LI> Not mix hostname and FQDN formats in the same shadow copy set. Note that the default root path format returned is hostname format; you can specify the optional 4 <SUP> th </SUP> parameter to require FQDN format in the returned unique root path. </LI> <BR /> <LI> Ensure IPv4/IPv6 DNS reverse lookup is configured appropriately on the DNS infrastructure when normalizing a UNC share path in IPv4/IPv6 format. 
Below are examples showing how to determine whether your DNS server enables reverse IP address lookup for a specific address: </LI> <BR /> <LI> Example where IPv4 reverse lookup is enabled on DNS. In this case, the highlighted hostname/FQDN is resolved from the IP address. <BR /> <BR /> F:\&gt;ping -a 10.10.10.110 <BR /> <BR /> Pinging fileserver.contoso.com [10.10.10.110] with 32 bytes of data: <BR /> <BR /> Reply from 10.10.10.110: bytes=32 time=1ms TTL=57 <BR /> <BR /> Reply from 10.10.10.110: bytes=32 time=1ms TTL=57 <BR /> <BR /> Reply from 10.10.10.110: bytes=32 time=1ms TTL=57 <BR /> <BR /> Example where IPv6 reverse lookup is not configured for 2001:4898:dc3:1005:21c:c4ff:fe68:e88. In this case, the hostname/FQDN cannot be resolved, as the highlighted part is still the IPv6 address. <BR /> <BR /> F:\&gt;ping -a 2001:4898:dc3:1005:21c:c4ff:fe68:e88 <BR /> <BR /> Pinging 2001:4898:dc3:1005:21c:c4ff:fe68:e88 with 32 bytes of data: <BR /> <BR /> Reply from 2001:4898:dc3:1005:21c:c4ff:fe68:e88: time=3ms <BR /> <BR /> Reply from 2001:4898:dc3:1005:21c:c4ff:fe68:e88: time=1ms <BR /> <BR /> Reply from 2001:4898:dc3:1005:21c:c4ff:fe68:e88: time=1ms <BR /> <BR /> If your DNS server does not have IPv6 DNS reverse lookup configured, you can manually add the IP address to hostname/FQDN mapping to the local hosts file. To achieve this, open %SystemRoot%\system32\drivers\etc\hosts with notepad.exe from an elevated command window and add one line for each IP address to hostname/FQDN mapping, as shown below. Most of the deployments we tested have IPv4 reverse DNS lookup available, but not all of them have IPv6 DNS reverse lookup configured. 
<BR /> <BR /> 2001:4898:2a:3:6c94:5149:2f9c:f083 fileserver.contoso.com <BR /> <BR /> 2001:4898:2a:3:6c94:5149:2f9c:f083 fileserver <BR /> <BR /> To unify the normalization of all VSS-supported paths, this API also supports: </LI> <BR /> <LI> A DFS-N path pointing to another physical server, for which it returns the fully evaluated physical share path on the DFS target server. </LI> <BR /> <LI> A local volume, for which it returns the results of <A href="#" target="_blank"> GetVolumePathName </A> and <A href="#" target="_blank"> GetVolumeNameForVolumeMountPoint </A> on the input path. </LI> <BR /> <LI> Below is a sample requestor code snippet that illustrates how backup applications use IVssBackupComponentsEx4::GetRootAndLogicalPrefixPaths to normalize the root path and logical prefix path for shadow copy creation and backup of files under a DFS-N path. <BR /> <BR /> #define CHK_COM(X) \ <BR /> <BR /> {\ <BR /> <BR /> hr = X; \ <BR /> <BR /> if (FAILED(hr)) \ <BR /> <BR /> {\ <BR /> <BR /> wprintf(L"COM call %S failed: 0x%08lx", #X, hr);\ <BR /> <BR /> goto _Exit;\ <BR /> <BR /> }\ <BR /> <BR /> } <BR /> <BR /> … <BR /> <BR /> CComPtr&lt;IVssBackupComponents&gt; spBackup; <BR /> <BR /> CComPtr&lt;IVssBackupComponentsEx4&gt; spBackup4; <BR /> <BR /> CComPtr&lt;IVssAsync&gt; spAsync; <BR /> <BR /> VSS_ID shadowCopySetId = GUID_NULL; <BR /> <BR /> VSS_ID shadowCopyId = GUID_NULL; <BR /> <BR /> VSS_SNAPSHOT_PROP shadowCopyProp; <BR /> <BR /> // GatherWriterMetadata and metadata enumeration are not shown below. <BR /> <BR /> // Instead, we assume one writer component path to be shadow copied and backed up. <BR /> <BR /> // \\dfsworld\logical\path\guests is a DFS-N link to \\server1\guests <BR /> <BR /> // vm1 is a directory under \\server1\guests which contains the vhd file vm1.vhd that needs to <BR /> <BR /> // be backed up. 
<BR /> <BR /> LPWSTR pwszWriterComponentPath = L"\\\\dfsworld\\logical\\path\\guests\\vm1"; <BR /> <BR /> LPWSTR pwszWriterComponentFileSpec = L"vm1.vhd"; <BR /> <BR /> LPWSTR pwszRootPath = NULL; <BR /> <BR /> LPWSTR pwszLogicalPrefix = NULL; <BR /> <BR /> HRESULT hr = S_OK; <BR /> <BR /> CHK_COM(CreateVssBackupComponents(&amp;spBackup)); <BR /> <BR /> CHK_COM(spBackup-&gt;SafeQI(IVssBackupComponentsEx4, &amp;spBackup4)); <BR /> <BR /> // The caller must free the pwszRootPath and pwszLogicalPrefix strings allocated by GetRootAndLogicalPrefixPaths() <BR /> <BR /> CHK_COM(spBackup4-&gt;GetRootAndLogicalPrefixPaths(pwszWriterComponentPath, &amp;pwszRootPath, &amp;pwszLogicalPrefix)); <BR /> <BR /> // pwszRootPath == "\\server1\guests" <BR /> <BR /> // pwszLogicalPrefix == "\\dfsworld\logical\path\guests" <BR /> <BR /> CHK_COM(spBackup-&gt;StartSnapshotSet(&amp;shadowCopySetId)); <BR /> <BR /> CHK_COM(spBackup-&gt;AddToSnapshotSet(pwszRootPath, GUID_NULL, &amp;shadowCopyId)); <BR /> <BR /> CHK_COM(spBackup-&gt;DoSnapshotSet(&amp;spAsync)); <BR /> <BR /> CHK_COM(spAsync-&gt;Wait()); <BR /> <BR /> CHK_COM(spBackup-&gt;GetSnapshotProperties(shadowCopyId, &amp;shadowCopyProp)); <BR /> <BR /> // Build the shadow copy relative path for file copy <BR /> <BR /> // 1. Initialize it to the shadow copy share path \\server1\guests@{guid} <BR /> <BR /> CComBSTR shadowCopyPath = shadowCopyProp.m_pwszSnapshotDeviceObject; <BR /> <BR /> // 2. Get the relative path below the logical prefix path, "\vm1" <BR /> <BR /> // This is based on the logical prefix path returned from GetRootAndLogicalPrefixPaths. <BR /> <BR /> CComBSTR relativePath = pwszWriterComponentPath + wcslen(pwszLogicalPrefix); <BR /> <BR /> // 3. 
Append the relativePath, a path separator and the writerComponentFileSpec: \\server1\guests@{guid}\vm1\vm1.vhd <BR /> <BR /> shadowCopyPath += relativePath; <BR /> <BR /> shadowCopyPath += L"\\"; <BR /> <BR /> // For each file (backupItem) that matches the file spec under the shadowCopyPath, copy the file to the backup store <BR /> <BR /> // \\server1\guests@{guid}\vm1\vm1.vhd <BR /> <BR /> CComBSTR backupItem = shadowCopyPath; <BR /> <BR /> backupItem += pwszWriterComponentFileSpec; <BR /> <BR /> HANDLE h = CreateFile( <BR /> <BR /> backupItem, <BR /> <BR /> … <BR /> <BR /> ); <BR /> <BR /> // BackupRead and copy from backupItem <BR /> <BR /> … <BR /> <BR /> _Exit: <BR /> <BR /> // The caller must free the strings allocated by GetRootAndLogicalPrefixPaths() <BR /> <BR /> if (pwszRootPath != NULL) <BR /> <BR /> CoTaskMemFree(pwszRootPath); <BR /> <BR /> if (pwszLogicalPrefix != NULL) <BR /> <BR /> CoTaskMemFree(pwszLogicalPrefix); <BR /> VSS writer <BR /> No change is needed in a VSS writer to support this feature, as long as the application itself supports storing its data on SMB shares. <BR /> VSS provider <BR /> On the application server, the new file share shadow copy provider handles the file share shadow copy life cycle management. No change is needed in existing VSS software/hardware providers to support this feature. <BR /> <H3> Developer tools </H3> <BR /> Betest for backup/restore of Hyper-V over SMB <BR /> BETest is a VSS requester that tests advanced backup and restore operations. It has been included in the Microsoft Windows Software Development Kit (SDK) since Windows Vista. However, only the BETest in the Windows Server 2012 SDK has been extended to support this feature. Full details about BETest can be found in the <A href="#" target="_blank"> BETest document </A> . <BR /> <BR /> Below is an example of online backup and restore of a Hyper-V VM running over SMB shares. Please refer to the <A href="#" target="_blank"> Hyper-V over SMB guide </A> to configure Hyper-V over SMB. <BR /> <BR /> 1. 
Download the <A href="#" target="_blank"> Windows 8 SDK </A> , which includes the BETest tool. <BR /> <BR /> 2. Get the VMId of the Hyper-V VM to be backed up by running a PowerShell cmdlet. Assuming the VM to be backed up is named <B> test-vm10 </B> , you run: <BR /> <BR /> PS C:\test&gt; get-vm -name test-vm10|select vmid <BR /> <BR /> VMId <BR /> <BR /> —- <BR /> <BR /> c8a460ae-8aa2-4219-8c4f-532479fb854a <BR /> <BR /> 3. Create the backup component file vm10.xml for backup. <BR /> <BR /> The Hyper-V writer ID is a GUID constant {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}. Each VM is handled by the Hyper-V writer individually as a separate component with a componentName equal to its VMId. To select more VMs for backup, repeat step 2 to get all the VMIds and put each into &lt;Component&gt; &lt;/Component&gt;. <BR /> <BR /> &lt;BETest&gt; <BR /> <BR /> &lt;Writer writerid="66841cd4-6ded-4f4b-8f17-fd23f8ddc3de"&gt; <BR /> <BR /> &lt;Component componentName="c8a460ae-8aa2-4219-8c4f-532479fb854a"&gt;&lt;/Component&gt; <BR /> <BR /> &lt;/Writer&gt; <BR /> <BR /> &lt;/BETest&gt; <BR /> <BR /> 4. Create a shadow copy and full backup based on the components selected by vm10.xml. <BR /> <BR /> Upon completion, the application-consistent VHD and VM configuration file for the VM (test-vm10) will be snapshotted and copied to the backup destination specified with the /D parameter (C:\backupstore). A new backup document, vm10backup.xml, specified by the /S parameter will be created as well. It will be used together with the files in the backup destination for restore later. <BR /> <BR /> betest /b /V /T FULL /D C:\backupstore /X vm10.xml /S vm10backup.xml <BR /> <BR /> If you just want to create the snapshot for testing purposes without waiting for the lengthy file copy, you can run the following command without the /D and /S parameters. 
<BR /> <BR /> betest /b /V /T FULL /X vm10.xml <BR /> <BR /> During step 4, the betest output includes a lot of writer information during shadow copy creation and backup. The key to determining whether the backup succeeded is to check the writer status at the end of the output, after the backup completes. Rerun step 4 until you get an application-consistent backup, indicated by a STABLE writer state after backup. If you keep getting errors during step 4, the vsstrace and netmon traces discussed below are the most useful tools to help the investigation. <BR /> <BR /> Status after Backup Complete (14 writers) <BR /> <BR /> Status for writer Task Scheduler Writer: STABLE(0x00000000) <BR /> <BR /> Status for writer VSS Metadata Store Writer: STABLE(0x00000000) <BR /> <BR /> Status for writer Performance Counters Writer: STABLE(0x00000000) <BR /> <BR /> Status for writer Microsoft Hyper-V VSS Writer: STABLE(0x00000000) <BR /> <BR /> Status for writer System Writer: STABLE(0x00000000) <BR /> <BR /> Application error: (0; &lt;unknown error&gt;) <BR /> <BR /> Status for writer ASR Writer: STABLE(0x00000000) <BR /> <BR /> Application error: (0; &lt;unknown error&gt;) <BR /> <BR /> Status for writer BITS Writer: STABLE(0x00000000) <BR /> <BR /> Application error: (0; &lt;unknown error&gt;) <BR /> <BR /> Status for writer Shadow Copy Optimization Writer: STABLE(0x00000000) <BR /> <BR /> Application error: (0; &lt;unknown error&gt;) <BR /> <BR /> Status for writer Dedup Writer: STABLE(0x00000000) <BR /> <BR /> Application error: (0; &lt;unknown error&gt;) <BR /> <BR /> Status for writer Registry Writer: STABLE(0x00000000) <BR /> <BR /> Application error: (0; &lt;unknown error&gt;) <BR /> <BR /> Status for writer COM+ REGDB Writer: STABLE(0x00000000) <BR /> <BR /> Application error: (0; &lt;unknown error&gt;) <BR /> <BR /> Status for writer WMI Writer: STABLE(0x00000000) <BR /> <BR /> Application error: (0; &lt;unknown error&gt;) <BR /> <BR /> Status for writer 
Cluster Database: STABLE(0x00000000) <BR /> <BR /> Application error: (0; &lt;unknown error&gt;) <BR /> <BR /> Status for writer Cluster Shared Volume VSS Writer: STABLE(0x00000000) <BR /> <BR /> Application error: (0; &lt;unknown error&gt;) <BR /> <BR /> 5. Restore the VM <BR /> <BR /> You can restore the VM by specifying the backup document with the /S parameter and the backup store location with the /D parameter. <BR /> <BR /> betest /R /D c:\backupstore /S vm10backup.xml <BR /> VSStrace for file share shadow copy <BR /> This feature adds two major components to Windows Server 2012: </LI> <BR /> <LI> The File Share Shadow Copy Provider, running on the application server </LI> <BR /> <LI> The File Share Shadow Copy Agent, running on the file server or file server cluster </LI> <BR /> <LI> Both support detailed ETW traces that are compatible with the existing VSSTrace tool included in the <A href="#" target="_blank"> Windows 8 SDK </A> , which makes it easy to correlate File Share Shadow Copy Provider/Agent activities with the VSS infrastructure for trace analysis. <BR /> <BR /> To turn on logging on the application server, open an elevated command prompt and run: <BR /> <BR /> Vsstrace.exe -f 0 +WRITER +COORD +FSSPROV +indent -o vssTrace_as.log <BR /> <BR /> To turn off tracing on the application server, go back to the command prompt on that machine and press Ctrl+C. The generated log file vssTrace_as.log is a text file that contains detailed information about the activities of the file share shadow copy provider, VSS and VSS writers. <BR /> <BR /> To turn on logging on the file server, open an elevated command prompt and run: <BR /> <BR /> Vsstrace.exe -f 0 +WRITER +COORD +FSSAGENT +indent -o vssTrace_fs.log <BR /> <BR /> To turn off tracing on the file server, go back to the command prompt on that machine and press Ctrl+C. The generated log file vssTrace_fs.log is a text file that contains detailed information about the activities of the file share shadow copy agent, VSS and VSS writers. 
<BR /> <BR /> If you hit a Hyper-V host-based backup issue, it is useful to gather a local VSS trace inside the guest OS. To turn on logging inside the guest OS, open an elevated command prompt in the VM and run: <BR /> <BR /> Vsstrace.exe -f 0 +WRITER +COORD +SWPRV +indent -o vssTrace_guest.log <BR /> <BR /> To turn off tracing in the VM, go back to the command prompt on that machine and press Ctrl+C. The generated log file vssTrace_guest.log is a text file that contains detailed information about the activities of VSS, the VSS system provider and VSS writers. <BR /> <BR /> <H3> MS-FSRVP RPC protocol </H3> <BR /> The Windows Server 2012 “File Share Shadow Copy Provider” and “File Share Shadow Copy Agent” communicate through a new RPC-based protocol called MS-FSRVP. Its open protocol architecture offers the flexibility to allow 3 <SUP> rd </SUP> party ISVs/IHVs to implement their file share shadow copy agent RPC server on non-Windows servers and interoperate with VSS-based backup applications running on Windows Server 2012. There are 13 protocol messages for shadow copy life cycle management. In addition to the protocol document available at <A href="#" target="_blank"> http://msdn.microsoft.com/en-us/library/hh554852(v=prot.10).aspx </A> , an FSRVP netmon parser is provided to help understand and investigate protocol sequence issues. <BR /> <BR /> To trace the FSRVP activities with Netmon: <BR /> <BR /> 1. Download and install the Microsoft Netmon and parser packages on the application server where the shadow copy and backup are initiated. </LI> <BR /> <LI> Netmon from <A href="#" target="_blank"> http://www.microsoft.com/en-us/download/details.aspx?id=4865 </A> </LI> <BR /> <LI> Netmon parser from <A href="#" target="_blank"> http://nmparsers.codeplex.com/ </A> </LI> <BR /> <LI> 2. Start Netmon, click the “Tools-&gt;Options” menu and activate the “Windows” profile to enable the FSRVP parser as shown below. 
<BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107428i9CC2EA1C23765772" /> <BR /> <BR /> 3. Start a new capture, key in “FSRVP” and apply the protocol filter in the Filter window <BR /> <BR /> 4. Create file share shadow copy using diskshadow/betest or 3 <SUP> rd </SUP> party backup software that is compatible with this feature. <BR /> <BR /> In the example below, I create a shadow copy for SMB share <A href="#" target="_blank"> \\yxy-vm1\data on yxy-vm1 </A> (file server) from yxy-vm2 (application server) <BR /> <BR /> C:\&gt;diskshadow <BR /> <BR /> Microsoft DiskShadow version 1.0 <BR /> <BR /> Copyright (C) 2012 Microsoft Corporation <BR /> <BR /> On computer: YXY-VM2, 7/2/2012 10:54:57 AM <BR /> <BR /> DISKSHADOW&gt; add volume \\yxy-vm1\data <BR /> <BR /> DISKSHADOW&gt; create <BR /> <BR /> Alias VSS_SHADOW_1 for shadow ID {3c36b6e5-ba12-4ba4-92c7-fa9cf1e35bcc} set as environment variable. <BR /> <BR /> Alias VSS_SHADOW_SET for shadow set ID {9c647280-b2db-40c7-b729-3b82fd71e851} set as environment variable. <BR /> <BR /> Querying all shadow copies with the shadow copy set ID {9c647280-b2db-40c7-b729-3b82fd71e851} <BR /> <BR /> * Shadow copy ID = {3c36b6e5-ba12-4ba4-92c7-fa9cf1e35bcc} %VSS_SHADOW_1% <BR /> <BR /> - Shadow copy set: {9c647280-b2db-40c7-b729-3b82fd71e851} %VSS_SHADOW_SET% <BR /> <BR /> - Original count of shadow copies = 1 <BR /> <BR /> - Original volume name: \\YXY-VM1\DATA\ [volume not on this machine] <BR /> <BR /> - Creation time: 7/2/2012 10:56:02 AM <BR /> <BR /> - Shadow copy device name: \\YXY-VM1\DATA@{009F5DC8-856A-4102-9313-5BBB00024F29} <BR /> <BR /> - Originating machine: YXY-VM1 <BR /> <BR /> - Service machine: yxy-vm2.dfsr-w8.nttest.microsoft.com <BR /> <BR /> - Not exposed <BR /> <BR /> - Provider ID: {89300202-3cec-4981-9171-19f59559e0f2} <BR /> <BR /> - Attributes: Auto_Release FileShare <BR /> <BR /> Number of shadow copies listed: 1 <BR /> <BR /> 5. 
As shown in the Netmon trace below, the complete shadow copy creation sequence with the FSRVP protocol includes IsPathSupported, GetSupportedVersion, SetContext, StartShadowCopySet, AddToShadowCopySet, PrepareShadowCopySet, CommitShadowCopySet, ExposeShadowCopySet, GetShareMapping and RecoveryCompleteShadowCopySet. If any error happens in the middle, the AbortShadowCopySet message will be sent to cancel the file server shadow copy processing. <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107429iF81B0DBC5C136C58" /> <BR /> Conclusion <BR /> I hope this introduction makes it easier for backup application developers to add support for this feature, which provides backup of application servers that store their data files on SMB file shares. <BR /> <BR /> Xiaoyu Yao <BR /> <BR /> Software Development Engineer <BR /> Additional resources <BR /> Application consistent VSS snapshot creation workflow: <BR /> <BR /> <A href="#" target="_blank"> http://msdn.microsoft.com/en-us/library/aa384589(v=vs.85). </A> </LI> <BR /> </UL> </BODY></HTML> Wed, 10 Apr 2019 11:13:11 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/a-developer-8217-s-view-on-vss-for-smb-file-shares/ba-p/425730 Claus Joergensen 2019-04-10T11:13:11Z VSS for SMB File Shares https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/vss-for-smb-file-shares/ba-p/425726 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Mar 25, 2016 </STRONG> <BR /> <EM> Note: this post originally appeared on </EM> <A href="#" target="_blank"> <EM> https://aka.ms/clausjor </EM> </A> <EM> by Claus Joergensen. </EM> <BR /> <BR /> In the next generation of Windows Server, Windows Server 2012, Hyper-V introduces support for storing virtual machine files on SMB 3.0 file shares. <A href="#" target="_blank"> This blog post </A> contains more detail on the SMB 3.0 enhancements to support this scenario. 
<BR /> <BR /> <A href="#" target="_blank"> Volume Shadow Copy Service (VSS) </A> is a framework that enables volume backups to be performed while applications on a system continue to write to the volumes. To support applications that store their data files on remote SMB file shares, we introduce a new feature called “VSS for SMB File Shares” in Windows Server 2012. This feature enables VSS-aware backup applications to perform application consistent shadow copies of VSS-aware server applications storing data on SMB 3.0 file shares. Prior to this feature, VSS only supported performing shadow copies of data stored on local volumes. <BR /> Technical Overview <BR /> VSS for SMB File Shares is an extension to the existing VSS infrastructure and consists of four parts: <BR /> <BR /> • A new VSS provider named “File Share Shadow Copy Provider” (fssprov.dll). The File Share Shadow Copy Provider is invoked on the server running the VSS-aware application and manages shadow copies on remote Universal Naming Convention (UNC) paths where the application stores its data files. It relays the shadow copy request to the File Share Shadow Copy Agents. <BR /> <BR /> • A new VSS requestor named “File Share Shadow Copy Agent” (fssagent.dll). The File Share Shadow Copy Agent is invoked on the file server hosting the SMB 3.0 file shares (UNC paths) storing the application’s data files. It manages file share to volume mappings and interacts with the file server’s VSS infrastructure to perform shadow copies of the volumes backing the SMB 3.0 file shares where the VSS-aware applications store their data files. <BR /> <BR /> • A new RPC protocol named “File Server Remote VSS Protocol” ( <A href="#" target="_blank"> MS-FSRVP </A> ). The new File Share Shadow Copy Provider and the new File Share Shadow Copy Agent use this new RPC-based protocol to coordinate shadow copy requests for data stored on SMB file shares. 
<BR /> <BR /> • Enhancements to the VSS infrastructure to support the new File Share Shadow Copy Provider, including API updates. <BR /> <BR /> The diagram below provides a high-level architecture of how VSS for SMB File Shares (red boxes) fits into the existing VSS infrastructure (blue boxes) and 3 <SUP> rd </SUP> party requestors, writers and providers (green boxes). <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107416i6BEAAB3CE67F506E" /> <BR /> <BR /> The following steps describe the basic Shadow Copy sequence with VSS for SMB File Shares. <BR /> <BR /> A. The Backup Server sends a backup request to its Backup Agent (VSS Requestor) <BR /> <BR /> B. The VSS Requestor gathers writer information and normalizes UNC path(s) <BR /> <BR /> C. The VSS Service retrieves the writer metadata information and returns it to the VSS Requestor <BR /> <BR /> D. The VSS Service sends a Prepare Shadow Copy request to the VSS writers involved, and the VSS writers flush buffers and hold writes <BR /> <BR /> E. The VSS Service sends the Shadow Copy creation request to the File Share Shadow Copy Provider for any UNC paths involved in the Shadow Copy Set <BR /> <BR /> E.1. The File Share Shadow Copy Provider relays the Shadow Copy creation request to the File Share Shadow Copy Agent on each remote File Server involved in the Shadow Copy Set <BR /> <BR /> E.2. The File Share Shadow Copy Agent initiates a writer-less Shadow Copy creation request to the VSS Service on the File Server <BR /> <BR /> E.3. The VSS Service on the File Server completes the Shadow Copy request using the appropriate VSS hardware or system providers <BR /> <BR /> E.4. The File Share Shadow Copy Agent returns the Shadow Copy path (Shadow Copy Share) to the File Share Shadow Copy Provider <BR /> <BR /> F. 
Once the Shadow Copy creation sequence completes on the Application Server, the VSS Requestor on the Application Server can retrieve the Shadow Copy properties from the VSS Service <BR /> <BR /> G. Based on the Shadow Copy device name from the Shadow Copy properties on the Application Server, the Backup Server can access the data on the Shadow Copy shares on the File Servers for backup. The Shadow Copy share will have the same permissions as the original share. <BR /> <BR /> Once the Shadow Copy on the application server is released, the Shadow Copies and associated Shadow Copy shares on the file servers are destroyed. <BR /> <BR /> If the shadow copy sequence fails at any point, it is aborted and the backup application will need to retry. <BR /> <BR /> For additional details on processing a backup under VSS, see <A href="#" target="_blank"> http://msdn.microsoft.com/en-us/library/aa384589(VS.85).aspx </A> <BR /> <BR /> For additional details on the File Server Remote VSS Protocol, which is used by the File Share Shadow Copy Provider and File Share Shadow Copy Agent, see <A href="#" target="_blank"> http://msdn.microsoft.com/en-us/library/hh554852(v=prot.10).aspx </A> <BR /> Requirements and supported capabilities <BR /> VSS for SMB File Shares requires: <BR /> <UL> <BR /> <LI> The application server and file server must be running Windows Server 2012 </LI> <BR /> </UL> <BR /> <UL> <BR /> <LI> The application server and file server must be joined to the same Active Directory domain </LI> <BR /> <LI> The “File Server VSS Agent Service” role service must be enabled on the file server </LI> <BR /> <LI> The backup agent must run in a security context that has Backup Operators or Administrators privileges on both the application server and the file server </LI> <BR /> <LI> The backup agent/application must run in a security context that has at least READ permission on the file share data that is being backed up. 
</LI> <BR /> <LI> VSS for SMB File Shares can also work with 3 <SUP> rd </SUP> party Network Attached Storage (NAS) appliances or similar solutions. These appliances or solutions must support the SMB 3.0 and File Server Remote VSS protocols. <BR /> <BR /> VSS for SMB File Shares supports: <BR /> <UL> <BR /> <LI> Application servers configured as a single server or in a failover cluster </LI> <BR /> </UL> <BR /> </LI> <BR /> <LI> File servers configured as a single server or in a failover cluster with continuously available or scale-out file shares </LI> <BR /> <LI> File shares with a single-link DFS-Namespaces link target </LI> <BR /> <LI> VSS for SMB File Shares has the following limitations: <BR /> <UL> <BR /> <LI> Unsupported VSS capabilities: </LI> <BR /> </UL> <BR /> <UL> <BR /> <LI> Hardware transportable shadow copies </LI> <BR /> </UL> <BR /> </LI> <BR /> <LI> Writable shadow copies </LI> <BR /> <LI> VSS fast recovery, where a volume can be quickly reverted to a shadow copy </LI> <BR /> <LI> Client-accessible shadow copies ( <A href="#" target="_blank"> Shadow Copy of Shared Folders </A> ) </LI> <BR /> <LI> Loopback configurations, where an application server accesses its data on SMB file shares hosted on the same application server, are unsupported </LI> <BR /> <LI> Hyper-V host-based Shadow Copy of virtual machines, where the application in the virtual machine stores its data on SMB file shares, is not supported </LI> <BR /> <LI> Data on mount points below the root of the file share will not be included in the shadow copy </LI> <BR /> <LI> Shadow Copy shares do not support failover </LI> <BR /> <LI> <BR /> Deployments <BR /> The most common deployment of VSS for SMB File Shares is expected to be with Hyper-V, where a Hyper-V server stores the virtual machine files on a remote SMB file share. <BR /> <BR /> The following sections outline some example deployments and describe the behavior of each. 
<BR /> Example 1: Single Hyper-V server and file server <BR /> In this deployment there is a single Hyper-V server and a single file server, both un-clustered. The file server has two volumes attached to it, each with a file share. The virtual machine files for VM A are stored on <A href="#" target="_blank"> \\fileserv\share1 </A> , which is backed by Volume 1. Some virtual machine files for VM B are stored on <A href="#" target="_blank"> \\fileserv\share1 </A> , which is backed by Volume 1, and some are stored on <A href="#" target="_blank"> \\fileserv\share2 </A> , which is backed by Volume 2. The virtual machine files for VM C are stored on <A href="#" target="_blank"> \\fileserv\share2 </A> , which is backed by Volume 2. <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107417i279A1FB8AC64DA4C" /> <BR /> <BR /> When the backup operator performs a Shadow Copy of VM A, the Hyper-V VSS writer will add <A href="#" target="_blank"> \\fileserv\share1 </A> to the Shadow Copy set. Once ready, the File Share Shadow Copy Provider relays the Shadow Copy request to <A href="#" target="_blank"> \\fileserv </A> . On the file server, the File Share Shadow Copy Agent invokes the local VSS service to perform a Shadow Copy of Volume 1. Volume 2 will not be part of the Shadow Copy set, since only <A href="#" target="_blank"> \\fileserv\share1 </A> was reported by the VSS writer. When the Shadow Copy sequence is complete, a Shadow Copy share <A href="#" target="_blank"> \\fileserv\share1@{GUID} </A> will be available for the backup application to stream the backup data. Once the backup is complete, the backup application releases the Shadow Copy set and the associated Shadow Copies and Shadow Copy shares are destroyed. 
<BR /> <BR /> If the backup operator performs a Shadow Copy of VM B, the Hyper-V VSS writer will report both <A href="#" target="_blank"> \\fileserv\share1 </A> and <A href="#" target="_blank"> \\fileserv\share2 </A> in the Shadow Copy set. On the file server side, this will result in a Shadow Copy of both Volume 1 and Volume 2, and two Shadow Copy shares, <A href="#" target="_blank"> \\fileserv\share1@{GUID} </A> and <A href="#" target="_blank"> \\fileserv\share2@{GUID} </A> , are created. <BR /> <BR /> If the backup operator performs a Shadow Copy of VM A and VM B, again the Hyper-V VSS writer will report both <A href="#" target="_blank"> \\fileserv\share1 </A> and <A href="#" target="_blank"> \\fileserv\share2 </A> in the Shadow Copy set. On the file server side, this will result in a Shadow Copy of both volumes and the creation of two Shadow Copy shares. <BR /> Example 2: Two Hyper-V servers and a single file server <BR /> In this deployment there are two Hyper-V servers and a single file server, all un-clustered. The file server has two volumes attached to it, each with a file share. The virtual machine files for VM A are stored on <A href="#" target="_blank"> \\fileserv\share1 </A> , which is backed by Volume 1. Some virtual machine files for VM B are stored on <A href="#" target="_blank"> \\fileserv\share1 </A> , which is backed by Volume 1, and some are stored on <A href="#" target="_blank"> \\fileserv\share2 </A> , which is backed by Volume 2. The virtual machine files for VM C are stored on <A href="#" target="_blank"> \\fileserv\share2 </A> , which is backed by Volume 2. <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107418i014CBB5A7C46724E" /> <BR /> <BR /> When the backup operator performs a Shadow Copy of VM B, the Hyper-V VSS writer will report both <A href="#" target="_blank"> \\fileserv\share1 </A> and <A href="#" target="_blank"> \\fileserv\share2 </A> in the Shadow Copy set. 
Once ready, the File Share Shadow Copy Provider relays the Shadow Copy request to <A href="#" target="_blank"> \\fileserv </A> . On the file server, the File Share Shadow Copy Agent invokes the local VSS service to perform a Shadow Copy of Volume 1 and Volume 2, since both share1 and share2 are in the Shadow Copy set. When the Shadow Copy sequence is complete, two Shadow Copy shares, <A href="#" target="_blank"> \\fileserv\share1@{GUID} </A> and <A href="#" target="_blank"> \\fileserv\share2@{GUID} </A> , will be available for the backup application to stream the backup data. Once the backup is complete, the backup application releases the Shadow Copy set and the associated Shadow Copies and Shadow Copy shares are destroyed. <BR /> <BR /> In this deployment the backup operator cannot perform a Shadow Copy of VM A in combination with either VM B or VM C, as they are running on separate Hyper-V hosts. The backup operator can perform a Shadow Copy of VM B and VM C, since both are running on Hyper-V server 2. <BR /> <BR /> It is also worth noting that the backup operator cannot perform a Shadow Copy of VM A and VM B (or VM C) in parallel, since the VSS service on the file server can only perform one Shadow Copy at a time. Note that this restriction applies only for the time it takes to create the Shadow Copies, not for the entire duration of the backup session. <BR /> Example 3: Two Hyper-V servers and two file servers <BR /> In this deployment there are two Hyper-V servers and two file servers, all un-clustered. Each file server has a volume attached to it, each with a file share. The virtual machine files for VM A are stored on <A href="#" target="_blank"> \\fileserv1\share </A> , which is backed by Volume 1 on File Server 1. 
Some virtual machine files for VM B are stored on <A href="#" target="_blank"> \\fileserv1\share </A> , which is backed by Volume 1 on File Server 1, and some are stored on <A href="#" target="_blank"> \\fileserv2\share </A> , which is backed by Volume 1 on File Server 2. The virtual machine files for VM C are stored on <A href="#" target="_blank"> \\fileserv2\share </A> , which is backed by Volume 1 on File Server 2. <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107419i0BCBAED6CDE5D57E" /> <BR /> <BR /> When the backup operator performs a Shadow Copy of VM B, the Hyper-V VSS writer will report both <A href="#" target="_blank"> \\fileserv1\share </A> and <A href="#" target="_blank"> \\fileserv2\share </A> in the Shadow Copy set. When ready, the File Share Shadow Copy Provider relays a Shadow Copy request to both <A href="#" target="_blank"> \\fileserv1 </A> and <A href="#" target="_blank"> \\fileserv2 </A> . On each file server, the File Share Shadow Copy Agent invokes the local VSS service to perform a Shadow Copy of the volume backing the file share. When the Shadow Copy sequence is complete, two Shadow Copy shares, <A href="#" target="_blank"> \\fileserv1\share@{GUID} </A> and <A href="#" target="_blank"> \\fileserv2\share@{GUID} </A> , will be available for the backup application to stream the backup data. Once the backup is complete, the backup application releases the Shadow Copy set and the associated Shadow Copies and Shadow Copy shares are destroyed on both file servers. <BR /> <BR /> Similar to the previous deployment example, the backup operator cannot perform a Shadow Copy that spans virtual machines across multiple Hyper-V servers. <BR /> <BR /> Similar to the previous deployment example, the backup operator cannot perform a Shadow Copy of VM A and VM B in parallel, since the VSS service on file server 1 can only perform one Shadow Copy at a time. 
However, it is possible to perform Shadow Copy of VM A and VM C in parallel since the virtual machines files are stored on separate file servers. <BR /> Example 4: Two Hyper-V servers and a File Server cluster <BR /> In this deployment there are two Hyper-V servers and a cluster, configured as a Failover Cluster. The failover cluster has two cluster nodes, node1 and node2. The administrator has configured a file server cluster role, <A href="#" target="_blank"> \\fs1 </A> , which is currently online on node1, with a single share, <A href="#" target="_blank"> \\fs1\share </A> , on volume 1. To utilize both cluster nodes, the administrator has configured a second file server cluster role, <A href="#" target="_blank"> \\fs2 </A> , which is currently online on node2, with a single share, <A href="#" target="_blank"> \\fs2\share </A> , on volume 2. <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107420i201194C1775241C9" /> <BR /> <BR /> When the backup operator performs a Shadow Copy of VM A, the Hyper-V VSS writer will report <A href="#" target="_blank"> \\fs1\share </A> in the Shadow Copy set. When ready, the File Share Shadow Copy Provider relays a Shadow Copy request to <A href="#" target="_blank"> \\fs1 </A> . As part of the exchange between the File Share Shadow Copy Provider and the File Share Shadow Copy Agent, the Agent will inform the Provider of the physical computer name, node1, which is actually performing the Shadow Copy. <BR /> <BR /> On node1, the File Share Shadow Copy Agent invokes the local VSS service to perform a Shadow Copy of the volume backing the file share. When the Shadow Copy sequence is complete, a Shadow Copy share <A href="#" target="_blank"> \\node1\share@{GUID} </A> will be available for the backup application to stream the backup data. 
Notice the Shadow Copy share, <A href="#" target="_blank"> \\node1\share@{GUID} </A> , is scoped to the cluster node, node1, and not the virtual computer name, <A href="#" target="_blank"> \\fs1 </A> . <BR /> <BR /> Once the backup is complete, the backup application releases the Shadow Copy set and the associated Shadow Copies and Shadow Copy shares are destroyed. If for some reason the file server cluster role is moved to, or fails over to, node2 before the backup sequence is complete, the Shadow Copy share and the Shadow Copy become invalid. If the file server cluster role is moved back to node1, the Shadow Copy and the corresponding Shadow Copy share will become valid again. <BR /> Example 5: Two Hyper-V servers and a Scale-Out File Server cluster <BR /> In this deployment there are two Hyper-V servers and a cluster, configured as a Failover Cluster. The failover cluster has two cluster nodes, node1 and node2. The administrator has configured a scale-out file server cluster role. The scale-out file server cluster role is new in Windows Server 2012 and differs from the traditional file server cluster role in a number of ways: <BR /> <UL> <BR /> <LI> Uses Cluster Shared Volumes, which are cluster volumes accessible on all cluster nodes </LI> <BR /> <LI> Uses Distributed Network Names, which means the virtual computer name is online on all cluster nodes </LI> <BR /> <LI> Uses scale-out file shares, which means the share is online on all cluster nodes </LI> <BR /> <LI> Uses the DNS round robin mechanism to distribute file server clients across cluster nodes </LI> <BR /> </UL> <BR /> The administrator has configured a single Scale-Out File Server, <A href="#" target="_blank"> \\sofs </A> , with a single share, <A href="#" target="_blank"> \\sofs\share </A> , backed by a single CSV volume, CSV1. 
Because of DNS round robin, Hyper-V server 1 is accessing the virtual machine files for VM A, on <A href="#" target="_blank"> \\sofs\share </A> , through node1 and Hyper-V server 2 is accessing the virtual machine files for VM B and VM C, on <A href="#" target="_blank"> \\sofs\share </A> , through node2. <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107421i2AC2404C67AA7DC0" /> <BR /> <BR /> When the backup operator performs a Shadow Copy of VM B and VM C, the Hyper-V VSS writer will report <A href="#" target="_blank"> \\sofs\share </A> in the Shadow Copy set. When ready, the File Share Shadow Copy Provider relays a Shadow Copy request to <A href="#" target="_blank"> \\sofs </A> . As part of the exchange between the File Share Shadow Copy Provider and the File Share Shadow Copy Agent, the Agent will inform the Provider of the physical computer name which is actually performing the Shadow Copy. In this scenario, the physical computer name will be the name of the CSV coordinator node, and the File Share Shadow Copy Provider will connect to the cluster node that is currently the CSV coordinator node, which could be node1. <BR /> <BR /> On node1, the File Share Shadow Copy Agent invokes the local VSS service to perform a Shadow Copy of the CSV volume backing the file share. When the Shadow Copy sequence is complete, a Shadow Copy share <A href="#" target="_blank"> \\node1\share@{GUID} </A> will be available for the backup application to stream the backup data. Notice the Shadow Copy share, <A href="#" target="_blank"> \\node1\share@{GUID} </A> , is scoped to the cluster node, <A href="#" target="_blank"> \\node1 </A> , and not the virtual computer name, <A href="#" target="_blank"> \\sofs </A> , similar to example 4. <BR /> <BR /> Once the backup is complete, the backup application releases the Shadow Copy set and the associated Shadow Copies and Shadow Copy shares are destroyed. 
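Once the Shadow Copy share is exposed, the backup application simply reads from it like any other UNC path. As a rough sketch (the GUID-suffixed share name and the destination path are illustrative; the real share name comes from the Shadow Copy properties reported to the backup application), the data could be streamed to a backup location with:

```powershell
# Illustrative only: substitute the actual GUID-suffixed Shadow Copy share name
# reported for the Shadow Copy, and a backup destination of your choice.
Copy-Item -Recurse -Path '\\node1\share@{GUID}' -Destination 'D:\Backup\vmfiles'
```
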
If for some reason node1 becomes unavailable before the backup sequence is complete, the Shadow Copy share and the Shadow Copy become invalid. Actions, such as moving the CSV coordinator for CSV1 to node 2, do not affect the Shadow Copy share. <BR /> Installation and configuration <BR /> This section contains information about installing and configuring the File Share Shadow Copy Provider and File Share Shadow Copy Agent. <BR /> Installation of File Share Shadow Copy Provider <BR /> The File Share Shadow Copy Provider is installed by default on all editions of Windows Server, so no further installation is necessary. <BR /> Installation of File Share Shadow Copy Agent <BR /> To install the File Share Shadow Copy Agent, do the following, with administrative privileges, on each file server to install the File Server role and the File Server VSS Agent Service role service: <BR /> GUI <BR /> 1. In the Server Manager Dashboard click <B> Add roles and features </B> <BR /> <BR /> 2. In the Add Roles and Features Wizard <BR /> <BR /> a. In the <B> Before you begin </B> wizard page, click <B> Next </B> <BR /> <BR /> b. In the <B> Select installation type </B> wizard page, select <B> Role-based or feature-based installation </B> <BR /> <BR /> c. In the <B> Select destination server </B> wizard page, select the server where you want to install the File Share Shadow Copy Agent <BR /> <BR /> d. In the <B> Select server roles </B> wizard page: <BR /> <BR /> d.i. Expand File and Storage Services <BR /> <BR /> d.ii. Expand File Services <BR /> <BR /> d.iii. Check File Server <BR /> <BR /> d.iv. Check File Server VSS Agent Service <BR /> <BR /> e. In the <B> Select features </B> wizard page, click <B> Next </B> <BR /> <BR /> f. 
In the <B> Confirm installation selections </B> , verify File Server and File Server VSS Agent Service are listed, and click <B> Install </B> <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107422i4B8A686E8880EFA7" /> <BR /> PowerShell <BR /> 1. Start elevated Windows PowerShell (Run as Administrator) <BR /> <BR /> 2. Run the following command: <BR /> <BR /> Add-WindowsFeature -Name File-Services,FS-VSS-Agent <BR /> Add backup user to Backup Operators local group on file server <BR /> The user context in which the shadow copy is performed must have the backup privilege on the remote file server(s) that are part of the shadow copy set. <BR /> <BR /> Commonly this is done by adding the user that is performing the shadow copy to the Backup Operators group on the file server(s). <BR /> <BR /> To add a user to the local Backup Operators group, do the following with administrative privileges on each file server: <BR /> GUI <BR /> 1. In the Server Manager Dashboard click <B> Tools </B> and select <B> Computer Management </B> <BR /> <BR /> 2. <B> In Computer Management: </B> <BR /> <BR /> a. Expand <B> Local Users and Groups </B> <BR /> <BR /> b. Expand <B> Groups </B> <BR /> <BR /> c. In the results pane <B> , </B> double click <B> Backup Operators </B> <BR /> <BR /> d. In the <B> Backup Operators Properties </B> page <B> , </B> click <B> Add </B> <BR /> <BR /> e. Type the username to add to the Backup Operators group, click <B> OK </B> <BR /> <BR /> f. In the <B> Backup Operators Properties </B> page <B> , </B> click <B> OK </B> <BR /> <BR /> g <STRONG> . </STRONG> Close <B> Computer Management </B> <BR /> Windows PowerShell <BR /> 1. Start elevated Windows PowerShell (Run as Administrator) <BR /> <BR /> 2. 
Run the following commands, adjusting the user account and file server name to your environment: <BR /> <BR /> $objUser = [ADSI]("WinNT:// <I> domain/user </I> ") <BR /> <BR /> $objGroup = [ADSI]("WinNT:// <I> fileserv </I> /Backup Operators") <BR /> <BR /> $objGroup.PSBase.Invoke("Add",$objUser.PSBase.Path) <BR /> Perform a Shadow Copy <BR /> To perform a Shadow Copy of an application's data that is stored on a file share, a VSS-aware backup application that supports VSS for SMB File Shares functionality must be used. <BR /> <BR /> <B> Note </B> : Windows Server Backup in Windows Server 2012 does not support VSS for SMB File Shares. <BR /> <BR /> The following section shows examples of performing a Shadow Copy of a virtual machine that has its data files stored on an SMB file share, using: <BR /> <UL> <BR /> <LI> DISKSHADOW </LI> <BR /> <LI> Microsoft System Center Data Protection Manager 2012 SP1 CTP1 </LI> <BR /> </UL> <BR /> The following diagram illustrates the configuration of the setup used for the examples in this section: <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107423iF5CFBA5230033845" /> <BR /> <BR /> The following details the configuration of the virtual machine: <BR /> <BR /> 1. Start elevated Windows PowerShell (Run as Administrator) and do the following: <BR /> <BR /> PS C:\Users\administrator.SMBTEST&gt; Get-VM | select VMName, State, Path | FL <BR /> <BR /> VMName : vm1 <BR /> <BR /> State : Running <BR /> <BR /> Path : \\smbsofs\vm\vm1\vm1 <BR /> DISKSHADOW <BR /> To perform a Shadow Copy of a virtual machine using DISKSHADOW on the Hyper-V host (clausjor04): <BR /> <BR /> 1. 
Start elevated Windows PowerShell (Run as Administrator) and do the following: <BR /> <BR /> PS C:\Users\administrator.SMBTEST&gt; DISKSHADOW <BR /> <BR /> Microsoft DiskShadow version 1.0 <BR /> <BR /> Copyright (C) 2012 Microsoft Corporation <BR /> <BR /> On computer: CLAUSJOR04, 5/30/2012 5:34:42 PM <BR /> <BR /> DISKSHADOW&gt; Writer Verify {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de} <BR /> <BR /> DISKSHADOW&gt; Set Context Persistent <BR /> <BR /> DISKSHADOW&gt; Set MetaData vm1backup.cab <BR /> <BR /> DISKSHADOW&gt; Begin Backup <BR /> <BR /> DISKSHADOW&gt; Add Volume \\smbsofs\vm\vm1 <BR /> <BR /> DISKSHADOW&gt; Create <BR /> <BR /> Alias VSS_SHADOW_1 for shadow ID {7b53b887-76e5-4db8-821d-6828e4cbe044} set as environment variable. <BR /> <BR /> Alias VSS_SHADOW_SET for shadow set ID {2bef895d-5d3f-4799-8368-f4bfc684e95b} set as environment variable. <BR /> <BR /> Querying all shadow copies with the shadow copy set ID {2bef895d-5d3f-4799-8368-f4bfc684e95b} <BR /> <BR /> * Shadow copy ID = {7b53b887-76e5-4db8-821d-6828e4cbe044} %VSS_SHADOW_1% <BR /> <BR /> - Shadow copy set: {2bef895d-5d3f-4799-8368-f4bfc684e95b} %VSS_SHADOW_SET% <BR /> <BR /> - Original count of shadow copies = 1 <BR /> <BR /> - Original volume name: \\SMBSOFS\VM\ [volume not on this machine] <BR /> <BR /> - Creation time: 5/30/2012 5:35:52 PM <BR /> <BR /> - Shadow copy device name: \\FSF-260403-09\VM@{F1C5E17A-4168-4611-9CD4-8366F9F935C3} <BR /> <BR /> - Originating machine: FSF-260403-09 <BR /> <BR /> - Service machine: CLAUSJOR04.SMBTEST.stbtest.microsoft.com <BR /> <BR /> - Not exposed <BR /> <BR /> - Provider ID: {89300202-3cec-4981-9171-19f59559e0f2} <BR /> <BR /> - Attributes: No_Auto_Release Persistent FileShare <BR /> <BR /> Number of shadow copies listed: 1 <BR /> <BR /> DISKSHADOW&gt; End Backup <BR /> <BR /> The <B> Writer Verify </B> command specifies that the backup or restore operation must fail if the writer or component is not included. 
For more information see this <A href="#" target="_blank"> TechNet article </A> . <BR /> <BR /> The <B> Set Context Persistent </B> command (and attributes), highlighted in orange, sets the Shadow Copy to be persistent, meaning that it is up to the user or application to delete the Shadow Copy when done. <BR /> <BR /> The <B> Set MetaData </B> command stores the metadata information for the Shadow Copy, which is needed for restore, in the specified file. <BR /> <BR /> The <B> Add Volume </B> command, highlighted in yellow, adds the UNC path to the Shadow Copy set. You can specify multiple paths by repeating the <B> Add Volume </B> command. <BR /> <BR /> The <B> Create </B> command initiates the Shadow Copy. Once the Shadow Copy creation is complete, DISKSHADOW outputs the properties of the Shadow Copy. The Shadow Copy device name, highlighted in green, is the path for the Shadow Copy data, which we can copy to the backup store using XCOPY or similar tools. <BR /> <BR /> During the backup session, you can see the virtual machine status reporting "Backing up..." in Hyper-V Manager. The backup session starts with the CREATE command and ends with the END BACKUP command in the DISKSHADOW sequence above. <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107424i4990CBC3E24E6B47" /> <BR /> <BR /> After the Shadow Copy is complete, we can browse the Shadow Copy share (Shadow Copy device name from above) and copy the data we want to back up to an alternate location: <BR /> <BR /> 1. 
Start elevated Windows PowerShell (Run as Administrator) and do the following: <BR /> <BR /> PS C:\Users\administrator.SMBTEST&gt; Get-ChildItem -Recurse -Path "\\FSF-260403-09\VM@{F1C5E17A-4168-4611-9CD4-8366F9F935C3}" <BR /> <BR /> Directory: \\FSF-260403-09\VM@{F1C5E17A-4168-4611-9CD4-8366F9F935C3} <BR /> <BR /> Mode LastWriteTime Length Name <BR /> <BR /> ---- ------------- ------ ---- <BR /> <BR /> d---- 5/30/2012 5:19 PM vm1 <BR /> <BR /> Directory: \\FSF-260403-09\VM@{F1C5E17A-4168-4611-9CD4-8366F9F935C3}\vm1 <BR /> <BR /> Mode LastWriteTime Length Name <BR /> <BR /> ---- ------------- ------ ---- <BR /> <BR /> d---- 5/30/2012 5:19 PM vm1 <BR /> <BR /> -a--- 5/30/2012 5:35 PM 8837436928 vm1.vhd <BR /> <BR /> Directory: \\FSF-260403-09\VM@{F1C5E17A-4168-4611-9CD4-8366F9F935C3}\vm1\vm1 <BR /> <BR /> Mode LastWriteTime Length Name <BR /> <BR /> ---- ------------- ------ ---- <BR /> <BR /> d---- 5/30/2012 5:19 PM Virtual Machines <BR /> <BR /> Directory: \\FSF-260403-09\VM@{F1C5E17A-4168-4611-9CD4-8366F9F935C3}\vm1\vm1\Virtual Machines <BR /> <BR /> Mode LastWriteTime Length Name <BR /> <BR /> ---- ------------- ------ ---- <BR /> <BR /> d---- 5/30/2012 5:19 PM 87B27972-46C2-406B-87A4-C3FFA1FB6822 <BR /> <BR /> -a--- 5/30/2012 5:35 PM 28800 87B27972-46C2-406B-87A4-C3FFA1FB6822.xml <BR /> <BR /> Directory: \\FSF-260403-09\VM@{F1C5E17A-4168-4611-9CD4-8366F9F935C3}\vm1\vm1\Virtual Machines\87B27972-46C2-406B-87A4-C3FFA1FB6822 <BR /> <BR /> Mode LastWriteTime Length Name <BR /> <BR /> ---- ------------- ------ ---- <BR /> <BR /> -a--- 5/30/2012 5:22 PM 2147602688 87B27972-46C2-406B-87A4-C3FFA1FB6822.bin <BR /> <BR /> -a--- 5/30/2012 5:22 PM 20971520 87B27972-46C2-406B-87A4-C3FFA1FB6822.vsv <BR /> <BR /> Once we are done copying the data, we can go ahead and delete the Shadow Copy, as highlighted below: <BR /> <BR /> 1. 
Start elevated Windows PowerShell (Run as Administrator) and do the following: <BR /> <BR /> PS C:\Users\administrator.SMBTEST&gt; DISKSHADOW <BR /> <BR /> Microsoft DiskShadow version 1.0 <BR /> <BR /> Copyright (C) 2012 Microsoft Corporation <BR /> <BR /> On computer: CLAUSJOR04, 5/30/2012 5:44:21 PM <BR /> <BR /> DISKSHADOW&gt; Delete Shadows Volume \\smbsofs\vm <BR /> <BR /> Deleting shadow copy {7b53b887-76e5-4db8-821d-6828e4cbe044} on volume \\SMBSOFS\VM\ from provider {89300202-3cec-4981-91 <BR /> <BR /> 71-19f59559e0f2} [Attributes: 0x04400009]… <BR /> <BR /> Number of shadow copies deleted: 1 <BR /> Restore data from a Shadow Copy <BR /> To restore the virtual machine data from the backup store back to its original location: <BR /> <BR /> 1. Start elevated Windows PowerShell (Run as Administrator) and do the following: <BR /> <BR /> DISKSHADOW&gt; Set Context Persistent <BR /> <BR /> DISKSHADOW&gt; Load MetaData vm1backup.cab <BR /> <BR /> DISKSHADOW&gt; Begin Restore <BR /> <BR /> DISKSHADOW&gt; //xcopy files from backup store to the original location <BR /> <BR /> DISKSHADOW&gt; End Restore <BR /> <BR /> The <B> Load MetaData </B> command loads the metadata information for the Shadow Copy, which is needed for restore, from the specified file. <BR /> <BR /> After issuing the <B> Begin Restore </B> command, you can copy the virtual machine files from the backup store to the original location ( <A href="#" target="_blank"> \\smbsofs\vm\vm1 </A> ). See this <A href="#" target="_blank"> TechNet article </A> for more information on XCOPY restore of Hyper-V <BR /> Data Protection Manager 2012 SP1 CTP1 <BR /> To perform data protection with Microsoft System Center Data Protection Manager 2012 SP1 CTP1 (DPM), we create a new protection group that includes the virtual machine we want to protect. 
After installing the DPM agent on the Hyper-V server and allocating some disk to the storage pool, we can create a protection group using the following steps: <BR /> <BR /> 1. In the <B> System Center 2012 DPM Administrator Console </B> , select <B> Protection </B> <BR /> <BR /> 2. In the <B> Protection </B> view, select <B> New </B> <BR /> <BR /> 3. In the Create New Protection Group wizard, do the following <BR /> <BR /> a. In <B> Welcome </B> , click <B> Next </B> <BR /> <BR /> b. In <B> Select Protection Group Type, </B> select <B> Servers </B> <BR /> <BR /> c. <B> In Select Group Members </B> <BR /> <BR /> c.i. Locate the server where the VM is running <BR /> <BR /> c.ii. Expand the Hyper-V node <BR /> <BR /> c.iii. Select the virtual machine you want to back up (see screenshot below) <BR /> <BR /> c.iv. Click <B> Next </B> <BR /> <BR /> d. In <B> Select Data Protection Method </B> <BR /> <BR /> d.i. Enter a protection group name <BR /> <BR /> d.ii. Select <B> I want short-term protection using Disk </B> <BR /> <BR /> d.iii. Click <B> Next </B> <BR /> <BR /> e. Complete the remainder of the wizard using defaults <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107425i405C98A10A3F36CF" /> <BR /> <BR /> When the protection group is created and the initial replica is completed, you should see the following in the DPM Administrator Console: <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107426i0CA7AE91D90F0F51" /> <BR /> <BR /> If you inspect the application server during the initial replica using DISKSHADOW, you will be able to see the Shadow Copy in progress. 
The following shows the output of list shadows all during the creation of the initial replica: <BR /> <BR /> DISKSHADOW&gt; list shadows all <BR /> <BR /> Querying all shadow copies on the computer ... <BR /> <BR /> * Shadow copy ID = {c0024211-bd08-4374-ac47-399df2d20075} &lt;No Alias&gt; <BR /> <BR /> - Shadow copy set: {28e88c97-f5b1-4124-ae7b-83f5600d54ff} &lt;No Alias&gt; <BR /> <BR /> - Original count of shadow copies = 1 <BR /> <BR /> - Original volume name: \\SMBSOFS.SMBTEST.STBTEST.MICROSOFT.COM\VM\ [volume not on this machine] <BR /> <BR /> - Creation time: 5/30/2012 6:49:35 PM <BR /> <BR /> - Shadow copy device name: \\FSF-260403-09.SMBTEST.STBTEST.MICROSOFT.COM\VM@{B6995DEF-A951-4379-9A3E-0B3619FB9A6A} <BR /> <BR /> - Originating machine: FSF-260403-09 <BR /> <BR /> - Service machine: CLAUSJOR04.SMBTEST.stbtest.microsoft.com <BR /> <BR /> - Not exposed <BR /> <BR /> - Provider ID: {89300202-3cec-4981-9171-19f59559e0f2} <BR /> <BR /> - Attributes: Auto_Release FileShare <BR /> <BR /> Number of shadow copies listed: 1 <BR /> <BR /> Highlighted in yellow is the remote UNC path which DPM specified for the Shadow Copy. Highlighted in green is the Shadow Copy device name, where DPM accesses the Shadow Copy data for replication. Highlighted in orange are the attributes used when DPM created the Shadow Copy. In this case the Shadow Copy is auto-release, meaning that the Shadow Copy is automatically released and deleted once DPM stops using it. <BR /> Tips and Tricks <BR /> Event logs <BR /> VSS for SMB File Shares logs events for the Agent and Provider respectively. 
The event logs can be found under the following paths in Event Viewer: <BR /> <UL> <BR /> <LI> Microsoft-Windows-FileShareShadowCopyProvider </LI> <BR /> <LI> Microsoft-Windows-FileShareShadowCopyAgent </LI> <BR /> </UL> <BR /> Encryption <BR /> By default, the network traffic between the computer running the VSS provider and the computer running the VSS Agent Service requires mutual authentication and is signed. However, the traffic is not encrypted, as it doesn't contain any user data. It is possible to enable encryption of network traffic. <BR /> <BR /> You can control this behavior using Group Policy (gpedit.msc) in "Local Computer Policy-&gt;Administrative Templates-&gt;System-&gt;File Share Shadow Copy Provider". You can also configure it in a Group Policy Object in the Active Directory domain. <BR /> Garbage collecting orphaned Shadow Copies <BR /> In case of unexpected computer restarts or similar events on the application server after the Shadow Copy has been created on the file server, some Shadow Copies may be left orphaned on the file server. It is important to remove these Shadow Copies to ensure the best possible system performance. By default, the VSS Agent Service will remove Shadow Copies older than 24 hours. <BR /> <BR /> You can control this behavior using Group Policy in "Local Computer Policy-&gt;Administrative Templates-&gt;System-&gt;File Share Shadow Copy Agent". You can also configure it in a Group Policy Object in the Active Directory domain. <BR /> Long running shadow copies <BR /> The VSS Agent Service maintains a sequence timer during Shadow Copy creation requested by an application server. By default, the VSS Agent Service will abort a Shadow Copy sequence if it doesn't complete in 30 minutes, to ensure other application servers are not blocked for an extended period of time. 
To configure the file server to use a different value, set the following registry value on each file server: <BR /> <BLOCKQUOTE> Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\fssagent\Settings" -Name LongWriterOperationTimeoutInSeconds -Value 1800 -Force </BLOCKQUOTE> <BR /> If you are using a Scale-Out File Server, it may be necessary to adjust the cluster property SharedVolumeVssWriterOperationTimeout as well. The default value is 1800 seconds (minimum is 60 seconds and maximum is 7200 seconds). The backup administrator is expected to tune this value based on the expected time for the VSS writer operations during the PrepareForSnapshot and PostSnapshot calls (whichever is higher). For example, if a VSS writer is expected to take up to 10 minutes during PrepareForSnapshot and up to 20 minutes during PostSnapshot, the recommended value for SharedVolumeVssWriterOperationTimeout would be 1200 seconds. <BR /> Accessing file shares using IP addresses <BR /> In general you should use hostnames (e.g. <A href="#" target="_blank"> \\fileserver\share\ </A> ) or fully qualified domain names (e.g. <A href="#" target="_blank"> \\fileserver.smbtest.stbtest.microsoft.com\share\ </A> ) when configuring your application server to use SMB file shares. If for some reason you need to use IP addresses (e.g. <A href="#" target="_blank"> \\192.168.1.1\share\ </A> ), then the following are supported: <BR /> <BR /> <STRONG> Note </STRONG> : DNS reverse lookup (IP address to host name) must be available to successfully use IP addresses. <BR /> <BR /> IPv4: <BR /> <BR /> Strict four-part dotted-decimal notation, e.g. <A href="#" target="_blank"> \\192.168.1.1\share </A> <BR /> <BR /> IPv6: <BR /> <BR /> 1. 
Global IPv6 and its literal format, e.g., <BR /> <BR /> <A> \\2001:4898:2a:3:2c03:8347:8ded:2d5b\share </A> <BR /> <BR /> <A href="#" target="_blank"> \\2001-4898-2a-3-2c03-8347-8ded-2d5b.ipv6-literal.net\share </A> <BR /> <BR /> 2. Site Local IPv6 format (starts with <B> FEC0: </B> ) and its literal format <BR /> <BR /> \\fec0::1fd9:ebee:ea74:ffd8%1\share <BR /> <BR /> <A href="#" target="_blank"> \\fec0--1fd9-ebee-ea74-ffd8s1.ipv6-literal.net\share </A> <BR /> <BR /> 3. IPv6 tunnel address and its literal format <BR /> <BR /> <A> \\2001:4898:0:fff:0:5efe:172.30.182.42\share </A> <BR /> <BR /> <A href="#" target="_blank"> \\2001-4898-0-fff-0-5efe-172.30.182.42.ipv6-literal.net\share </A> <BR /> <BR /> IPv6 Link Local addresses (starting with FE80:) are not supported. <BR /> Conclusion <BR /> I hope you enjoyed this introduction to VSS for SMB File Shares and agree that this feature is useful for backing up application servers that store their data files on SMB file shares, including host-based backup of Hyper-V hosts that store their virtual machines on SMB file shares. <BR /> <BR /> Claus Joergensen <BR /> <BR /> Principal Program Manager <BR /> <BR /> Windows File Server Team </LI> <BR /> </UL> </BODY></HTML> Wed, 10 Apr 2019 11:12:38 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/vss-for-smb-file-shares/ba-p/425726 Claus Joergensen 2019-04-10T11:12:38Z SMB Transparent Failover – making file shares continuously available https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/smb-transparent-failover-8211-making-file-shares-continuously/ba-p/425693 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Mar 25, 2016 </STRONG> <BR /> <EM> Note: this post originally appeared on </EM> <A href="#" target="_blank"> <EM> https://aka.ms/clausjor </EM> </A> <EM> by Claus Joergensen. 
</EM> <BR /> <BR /> SMB Transparent Failover is one of the key features in the feature set introduced in Server Message Block (SMB) 3.0. SMB 3.0 is new in Windows Server 2012 and Windows 8. I am the program manager for SMB Transparent Failover and in this blog post I will give an overview of this new feature. <BR /> <BR /> In Windows Server 2012 the file server introduces support for storing server application data, which means that server applications, like Hyper-V and SQL Server, can store their data files, such as virtual machine files or SQL databases, on Windows file shares. These server applications expect their storage to be reliable and always available, and they do not generally handle IO errors or unexpected closures of handles very well. If the server application cannot access its storage, this often leads to databases going offline or virtual machines stopping or crashing because they can no longer write to their disk. <BR /> <BR /> SMB Transparent Failover enables administrators to configure Windows file shares, in Windows Failover Clustering configurations, to be continuously available. Using continuously available file shares enables administrators to perform hardware or software maintenance on any cluster node without interrupting the server applications that are storing their data files on these file shares. Also, in case of a hardware or software failure, the server application nodes will transparently reconnect to another cluster node without interruption of the server applications. In the case of an SMB scale-out file share (more on Scale-Out File Server in a following blog post), SMB Transparent Failover allows the administrator to redirect a server application node to a different file server cluster node to facilitate better load balancing. 
<BR /> <BR /> For more information on storing server application data on SMB file shares and other features to support this scenario, see <A href="#" target="_blank"> Windows Server “8” – Taking Server Application Storage to Windows File Shares </A> <BR /> <BR /> <BR /> <H3> Installation and configuration </H3> <BR /> SMB Transparent Failover has the following requirements: <BR /> <UL> <BR /> <LI> A failover cluster running Windows Server 2012 with at least two nodes. The configuration of servers, storage and networking must pass all tests performed in the Validate a Configuration wizard. </LI> <BR /> </UL> <BR /> <UL> <BR /> <LI> File Server role is installed on all cluster nodes. </LI> <BR /> <LI> Clustered file server configured with one or more file shares created with the continuously available property. This is the default setting. </LI> <BR /> <LI> SMB client computers running the Windows 8 client or Windows Server 2012. </LI> <BR /> <LI> To realize SMB Transparent Failover, both the SMB client computer and the SMB server computer must support SMB 3.0, which is introduced in Windows 8 and Windows Server 2012. Computers running down-level SMB versions, such as 1.0, 2.0 or 2.1 can connect and access data on a file share that has the continuously available property set, but will not be able to realize the benefits of the SMB Transparent Failover feature. 
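Because only SMB 3.0 connections benefit from Transparent Failover, it is worth confirming which dialect a client has actually negotiated. A quick sketch using the SMB cmdlets available on Windows 8 and Windows Server 2012 (run on the SMB client):

```powershell
# Lists active SMB connections and the dialect negotiated for each;
# a Dialect of 3.0 or higher is required to benefit from Transparent Failover.
Get-SmbConnection | Select-Object ServerName, ShareName, Dialect
```
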
<BR /> Installing and creating a Failover Cluster <BR /> For information about how to install the Failover Clustering feature and how to create and troubleshoot a Windows Server 2012 Failover Cluster, see these blog posts: <BR /> <UL> <BR /> <LI> <A href="#" target="_blank"> Installing the Failover Cluster Feature and Tools in Windows Server 2012 </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Creating a Windows Server 2012 Failover Cluster </A> </LI> <BR /> <LI> <A href="#" target="_blank"> How to Troubleshoot Create Cluster failures in Windows Server 2012 </A> </LI> <BR /> </UL> <BR /> Installing the File Server role <BR /> Once the Failover Cluster is up and running, we can install the File Server role. Do the following for each node in the Failover Cluster: <BR /> Graphical User Interface <BR /> <UL> <BR /> <LI> Start <B> Server Manager </B> </LI> <BR /> <LI> Click <B> Add roles and features </B> </LI> <BR /> <LI> In the <B> Add Roles and Features Wizard, </B> do the following <B> : </B> <BR /> <UL> <BR /> <LI> In <B> Before you begin </B> , click <B> Next </B> </LI> <BR /> <LI> In <B> Select installation type </B> , click <B> Next </B> </LI> <BR /> <LI> In <B> Select destination server </B> , choose the server where you want to install the File Server role, and click <B> Next </B> </LI> <BR /> <LI> In <B> Select server roles </B> , expand <B> File And Storage Services </B> , expand <B> File and iSCSI Services </B> , and <B> check </B> the check box for <B> File Server </B> and click <B> Next </B> </LI> <BR /> <LI> In <B> Select features </B> , click <B> Next </B> </LI> <BR /> <LI> In <B> Confirm installation selections </B> , click <B> Install </B> </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107399i948E2A8BCC0B7DAD" /> <BR /> <BR /> Figure 1 – Installing File Server role <BR /> PowerShell <BR /> In an elevated PowerShell shell, do the following: <BR /> <BR /> Add-WindowsFeature -Name 
File-Services <BR /> Create clustered File Server <BR /> Once the File Server role is installed on all cluster nodes, we can create a clustered file server. In this example we will create a clustered file server of type “File Server for general use” and name it SMBFS. I will provide more information on “Scale-Out File Server for application data” in a follow-up blog post. <BR /> <BR /> Do the following to create a clustered file server. <BR /> Graphical User Interface <BR /> <UL> <BR /> <LI> Start <B> Server Manager </B> </LI> <BR /> <LI> Click <B> Tools </B> and select <B> Failover Cluster Manager </B> </LI> <BR /> <LI> In the <B> console tree </B> , do the following </LI> <BR /> </UL> <BR /> <UL> <BR /> <LI> Select and expand the cluster you are managing </LI> <BR /> </UL> <BR /> </LI> <BR /> <LI> Select <B> Roles </B> </LI> <BR /> <LI> In the <B> Actions </B> pane, click <B> Configure Role </B> </LI> <BR /> <LI> In <B> Before You Begin </B> , click <B> Next </B> </LI> <BR /> <LI> In <B> Select Role </B> , select <B> File Server </B> and click <B> Next </B> </LI> <BR /> <LI> In <B> File Server Type </B> , select the type of clustered file server you want to use </LI> <BR /> <LI> In <B> Client Access Point </B> , enter the name of the clustered file server </LI> <BR /> <LI> In <B> Client Access Point </B> , complete the Network Address for static IP addresses as needed and click <B> Next </B> </LI> <BR /> <LI> In <B> Select Storage </B> , select the disks that you want to assign to this clustered file server and click <B> Next </B> </LI> <BR /> <LI> In <B> Confirmation </B> , review your selections and when ready click <B> Next </B> </LI> <BR /> <LI> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107400i6944C0997B699265" /> Figure 2 – Select File Server Type <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107401i83BA0FB69F83253B" /> Figure 3 – Configure Client Access Point <BR /> <BR /> <IMG 
src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107402iEA7B04B1F28A87B5" /> <BR /> <BR /> Figure 4 – Select Storage <BR /> PowerShell <BR /> In an elevated PowerShell shell, do the following: <BR /> <BR /> Add-ClusterFileServerRole -Name SMBFS -Storage "Cluster Disk 1" -StaticAddress 192.168.9.99/24 <BR /> Create a file share that is continuously available <BR /> Now that we have created the clustered file server, we can create file shares that are continuously available. In this example we will create a file share named “appstorage” on the clustered file server we created previously. <BR /> <BR /> Do the following to create a file share that is continuously available: <BR /> Graphical User Interface <BR /> <UL> <BR /> <LI> Start <B> Server Manager </B> </LI> <BR /> <LI> Click <B> Tools </B> and select <B> Failover Cluster Manager </B> </LI> <BR /> <LI> In the <B> console tree </B> , do the following </LI> <BR /> </UL> <BR /> <UL> <BR /> <LI> Select and expand the cluster you are managing </LI> <BR /> </UL> <BR /> </LI> <BR /> <LI> Select <B> Roles </B> </LI> <BR /> <LI> In the <B> Results </B> pane, select the <B> file server </B> where you want to create the file share and in the <B> Actions </B> pane click <B> Add File Share </B> . 
This will start the <B> New Share Wizard </B> </LI> <BR /> <LI> In the New Share Wizard, do the following <BR /> <UL> <BR /> <LI> In Select Profile, select the appropriate profile (SMB Share – Applications in this example) and click Next </LI> <BR /> </UL> <BR /> </LI> <BR /> <LI> In <B> Share Location </B> , select the <B> volume </B> where you want to create the share and click <B> Next </B> </LI> <BR /> <LI> In <B> Share Name </B> , enter the <B> share name </B> and click <B> Next </B> </LI> <BR /> <LI> In <B> Configure Share Settings, </B> verify <B> Enable continuous availability </B> is set and click <B> Next </B> </LI> <BR /> <LI> In <B> Specify permissions and control access </B> , modify the permissions as needed to enable access and click Next </LI> <BR /> <LI> In <B> Confirmation </B> , review your selections and when ready click <B> Create </B> </LI> <BR /> <LI> Click <B> Close </B> </LI> <BR /> <LI> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107403i3D49AD41D8D48A35" /> Figure 5 – Select Profile <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107404i3315B70C55DDCD58" /> Figure 6 – Select server and path <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107405i98DD55A189EEA5A8" /> <BR /> <BR /> Figure 7 – Share Name <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107406i66726D397854C568" /> <BR /> <BR /> Figure 8 – Configure Share Settings <BR /> <BR /> To verify a share has the continuously available property set, do the following: <BR /> <UL> <BR /> <LI> Start <B> Server Manager </B> </LI> <BR /> <LI> Click <B> Tools </B> and select <B> Failover Cluster Manager </B> </LI> <BR /> <LI> In the <B> console tree </B> , do the following </LI> <BR /> </UL> <BR /> <UL> <BR /> <LI> Select and expand the cluster you are managing </LI> <BR /> </UL> <BR /> </LI> <BR /> <LI> Select <B> Roles </B> </LI> <BR /> <LI> In the <B>
Results </B> pane, select the <B> file server </B> you want to examine </LI> <BR /> <LI> In the <B> bottom window </B> , click the <B> Shares </B> tab </LI> <BR /> <LI> Locate the share of interest and examine the <B> Continuous Availability </B> property </LI> <BR /> <LI> <BR /> PowerShell <BR /> These steps assume the folder for the share is already created. If this is not the case, create the folder before continuing. <BR /> <BR /> In an elevated PowerShell shell on the cluster node where the clustered file server is online, do the following to create a file share with the continuous availability property set: <BR /> <BR /> New-SmbShare -Name AppStorage -Path f:\appstorage -Scope smbfs -FullControl smbtest\administrator <BR /> <BR /> In an elevated PowerShell shell on the cluster node where the clustered file server is online, do the following to verify a file share has the continuous availability property set: <BR /> <BR /> Get-SmbShare -Name AppStorage | Select * <BR /> <BR /> PresetPathAcl : System.Security.AccessControl.DirectorySecurity <BR /> <BR /> ShareState : Online <BR /> <BR /> AvailabilityType : Clustered <BR /> <BR /> ShareType : FileSystemDirectory <BR /> <BR /> FolderEnumerationMode : Unrestricted <BR /> <BR /> CachingMode : None <BR /> <BR /> CATimeout : 0 <BR /> <BR /> ConcurrentUserLimit : 0 <BR /> <BR /> ContinuouslyAvailable : True <BR /> <BR /> CurrentUsers : 0 <BR /> <BR /> Description : <BR /> <BR /> EncryptData : False <BR /> <BR /> Name : appstorage <BR /> <BR /> Path : F:\Shares\appstorage <BR /> <BR /> Scoped : True <BR /> <BR /> ScopeName : SMBFS <BR /> <BR /> SecurityDescriptor : O:BAG:DUD:(A;OICI;FA;;;WD) <BR /> <BR /> ShadowCopy : False <BR /> <BR /> Special : False <BR /> <BR /> Temporary : False <BR /> <BR /> Volume : \\?\Volume{266f94b0-9640-4e1f-b056-6a3e999e6ecf}\ <BR /> <BR /> Note that we didn’t request the continuous availability property to be set. This is because the property is set by default.
If you want to create a file share without the property set, do the following: <BR /> <BR /> New-SmbShare -Name AppStorage -Path f:\appstorage -Scope smbfs -FullControl smbtest\administrator -ContinuouslyAvailable:$false <BR /> <H3> Using a file share that is continuously available </H3> <BR /> Now that we have created a clustered file server with a file share that is continuously available, let’s go ahead and use it. <BR /> <BR /> The below diagram illustrates the setup that I will be using in this section. <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107407i1FB90B37169A7CBA" /> <BR /> <BR /> Figure 9 – Clustered File Server <BR /> <BR /> On the file share is a 10GB data file (testfile.dat) that is being accessed by an application on the SMB client computer (FSF-260403-10). The below screenshot shows the SMB Client Shares performance counters for the <A href="#" target="_blank"> \\smbfs\appstorage </A> share as seen from the SMB client. As you can see, the application is doing 8KB reads and writes.
<BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107408iD53EF82DD598A70F" /> <BR /> <BR /> Figure 10 – Data Access <BR /> <BR /> Zeroing in on Data Requests/sec in graph form, we see the following: <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107409i3E6A2B866E4DB963" /> <BR /> <BR /> In an elevated PowerShell shell on the cluster node where the clustered file server is online, do the following to view the open files: <BR /> <BR /> Get-SmbOpenFile | Select * <BR /> <BR /> ClientComputerName : [2001:4898:e0:32af:890b:6268:df3b:bf8] <BR /> <BR /> ClientUserName : SMBTEST\Administrator <BR /> <BR /> ClusterNodeName : <BR /> <BR /> ContinuouslyAvailable : True <BR /> <BR /> Encrypted : False <BR /> <BR /> FileId : 4415226380557 <BR /> <BR /> Locks : 0 <BR /> <BR /> Path : F:\Shares\appstorage\testfile.dat <BR /> <BR /> Permissions : 1180059 <BR /> <BR /> ScopeName : SMBFS <BR /> <BR /> SessionId : 4415226380341 <BR /> <BR /> ShareRelativePath : testfile.dat <BR /> Planned move of the cluster group <BR /> With assurance that the file handle is indeed continuously available, let’s go ahead and move the cluster group to another cluster node. In an elevated PowerShell shell on one of the cluster nodes, do the following to move the cluster group: <BR /> <BR /> Move-ClusterGroup -Name smbfs -Node FSF-260403-08 <BR /> <BR /> Name&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; OwnerNode&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; State <BR /> <BR /> ----&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ---------&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ----- <BR /> <BR /> smbfs&nbsp;&nbsp;&nbsp;&nbsp; FSF-260403-08&nbsp;&nbsp; Online <BR /> <BR /> Looking at Data Requests/sec in Performance Monitor, we see that there is a short brown-out of a few seconds where IO is stalled while the cluster group is moved, but IO continues uninterrupted once the cluster group has completed the move.
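<BR /> <BR /> The Data Requests/sec trace shown in Performance Monitor can also be sampled directly from PowerShell on the SMB client. This is a minimal sketch; the counter instance name is an assumption based on the \\smbfs\appstorage share used in this example, and must match what Get-Counter lists on your client: <BR /> <BR />

```powershell
# Sample the SMB Client Shares counter once per second on the SMB client.
# The instance name (\\smbfs\appstorage) is taken from this example setup.
Get-Counter -Counter '\SMB Client Shares(\\smbfs\appstorage)\Data Requests/sec' `
            -SampleInterval 1 -MaxSamples 30 |
    ForEach-Object { $_.CounterSamples | Select-Object Path, CookedValue }
```

<BR /> A run of samples where CookedValue drops to zero corresponds to the brief brown-out visible in the Performance Monitor graph. <BR />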
<BR /> <BR /> The tear down and setup of SMB sessions, connections and active handles between the SMB client and the SMB server on the cluster nodes is handled completely transparently to the application. The application does not see any errors during this transition, only a brief stall in IO. <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107410iFC395822AECB4F60" /> <BR /> <BR /> Figure 11 – Move Cluster Group <BR /> <BR /> Let’s take a look at the operational log for the SMB client in Event Viewer (Applications and Services Log – Microsoft – Windows – SMB Client – Operational) on the SMB client computer. <BR /> <BR /> In the event log we see a series of warning events around 9:36:01PM. These warning events signal the tear down of SMB connections, sessions and shares. There is also a series of information events around 9:36:07PM. These information events signal the recovery of SMB sessions, connections and shares. These events are very useful in understanding the activities during the recovery and in confirming that the recovery was successful. <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107411iA9B55324F81C6B69" /> <BR /> <BR /> Figure 12 – Events for planned move <BR /> <BR /> So how does SMB Transparent Failover actually work? When the SMB client initially connects to the file share, the client determines whether the file share has the continuous availability property set. If it does, this means the file share is a clustered file share and supports SMB Transparent Failover. When the SMB client subsequently opens a file on the file share on behalf of the application, it requests a persistent file handle. When the SMB server receives a request to open a file with a persistent handle, the SMB server interacts with the Resume Key filter to persist sufficient information about the file handle, along with a unique key (resume key) supplied by the SMB client, to stable storage.
<BR /> <BR /> If a planned move or a failure occurs on the file server cluster node to which the SMB client is connected, the SMB client attempts to reconnect to another file server cluster node. Once the SMB client successfully reconnects to another node in the cluster, the SMB client starts the resume operation using the resume key. When the SMB server receives the resume key, it interacts with the Resume Key filter to recover the handle state to the same state it was in prior to the failure, with end-to-end support (SMB client, SMB server and Resume Key filter) for operations that can be replayed, as well as operations that cannot be replayed. The Resume Key filter also protects the handle state after failover to ensure namespace consistency and that the client can reconnect. The application running on the SMB client computer does not experience any failures or errors during this operation. From an application perspective, it appears the I/O operations are stalled for a small amount of time. <BR /> <BR /> To protect against data loss from writing data into an unstable cache, persistent file handles are always opened with write through. <BR /> Unplanned failure of the active cluster node <BR /> Now, let’s introduce an unplanned failure. The cluster group was moved to FSF-260403-08. Since all these machines are running as virtual machines in a Hyper-V setup, I can use Hyper-V Manager to reset FSF-260403-08. <BR /> <BR /> Looking at Data Requests/sec in Performance Monitor, we see that there is a slightly longer brown-out where IO is stalled. In this time period, the cluster detects that FSF-260403-08 has failed and starts the cluster group on another node. Once started, SMB can perform transparent recovery.
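<BR /> <BR /> After the reset, you can confirm from one of the surviving cluster nodes where the cluster group restarted, and review the recovery from the client side. This is a sketch using the group name from this example; Get-WinEvent reads the same SMB Client operational channel discussed earlier: <BR /> <BR />

```powershell
# On a surviving cluster node: confirm which node now owns the file server group
Get-ClusterGroup -Name smbfs | Format-Table Name, OwnerNode, State

# On the SMB client: review recent SMB Client operational events covering
# the tear down and recovery of sessions, connections and shares
Get-WinEvent -LogName Microsoft-Windows-SmbClient/Operational -MaxEvents 25 |
    Select-Object TimeCreated, Id, LevelDisplayName |
    Format-Table -AutoSize
```

<BR />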
<BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107412i9D992927F75B31A1" /> <BR /> <BR /> Figure 13 – Unplanned Failure <BR /> <BR /> And again the SMB Client event log shows events related to the failure: <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107413i9974BF8A10C28F37" /> <BR /> <BR /> Figure 14 – Events for unplanned failure <BR /> <BR /> Now you will probably ask yourself: “Wait a minute. SMB is running over TCP, and the TCP timeout is typically 20 seconds, and SMB uses a couple of them before determining the cluster node failed. So how come the recovery is ~10 seconds and not 40 or 60 seconds?” <BR /> <BR /> Enter the Witness service. <BR /> <BR /> The Witness service was created to enable faster recovery from unplanned failures, allowing the SMB client to not have to wait for TCP timeouts. Witness is a new service that is installed automatically with the failover clustering feature. When the SMB client initially connects to a cluster node, the SMB client notifies the Witness client, which is running on the same computer. The Witness client obtains a list of cluster nodes from the Witness service running on the cluster node it is connected to. The Witness client picks a different cluster node and issues a registration request to the Witness service on that cluster node. The Witness service then listens for cluster events related to the clustered file server the SMB client is connected to. <BR /> <BR /> If an unplanned failure occurs on the file server cluster node the SMB client is connected to, the Witness service on the other cluster node receives a notification from the cluster service. The Witness service notifies the Witness client, which in turn notifies the SMB client that the cluster node has failed.
Upon receiving the Witness notification, the SMB client immediately starts reconnecting to a different file server cluster node, which significantly speeds up recovery from unplanned failures. <BR /> <BR /> You can examine the state of the Witness service across the cluster using the Get-SmbWitnessClient command. Notice that Get-SmbWitnessClient can be run on any cluster node and provides a cluster-aggregate view of the Witness service, similar to Get-SmbOpenFile and Get-SmbSession. In an elevated PowerShell shell on one of the cluster nodes, do the following to examine the Witness state: <BR /> <BR /> Get-SmbWitnessClient | select * <BR /> <BR /> State : RequestedNotifications <BR /> <BR /> ClientName : FSF-260403-10 <BR /> <BR /> FileServerNodeName : FSF-260403-08 <BR /> <BR /> IPAddress : 2001:4898:E0:32AF:3256:8C83:59E5:BDB5 <BR /> <BR /> NetworkName : SMBFS <BR /> <BR /> NotificationsCancelled : 0 <BR /> <BR /> NotificationsSent : 0 <BR /> <BR /> QueuedNotifications : 0 <BR /> <BR /> ResourcesMonitored : 1 <BR /> <BR /> WitnessNodeName : FSF-260403-07 <BR /> <BR /> Examining the above output (captured before the unplanned failure), we can see the SMB client (FSF-260403-10) is currently connected to cluster node FSF-260403-08 (SMB connection) and has registered for witness notification for SMBFS with the Witness service on FSF-260403-07. <BR /> <BR /> Looking at Event Viewer (Applications and Services Log – Microsoft – Windows – SMBWitnessClient – Operational) on the SMB client computer, we see that the Witness client received a notification for SMBFS. Since the cluster group was moved to FSF-260403-07, which is also the Witness node for the Witness client, the following event shows the Witness client unregistering from FSF-260403-07 and registering with FSF-260403-09.
<BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107414i8F72DA7A8ADDB9A3" /> <BR /> <BR /> Figure 15 – Witness event log <BR /> <BR /> <BR /> <H3> Tips and Tricks </H3> <BR /> Protecting file server services <BR /> LanmanServer and LanmanWorkstation run in service hosts with other services. In extreme cases, other services running in the same service host can affect the availability of LanmanServer and LanmanWorkstation. You can configure these services to run in their own service host using the following commands: <BR /> <BR /> sc config lanmanserver type= own <BR /> <BR /> sc config lanmanworkstation type= own <BR /> <BR /> The computer needs to be restarted for this change to take effect. <BR /> Loopback configurations <BR /> Accessing a file share that has the continuously available property set as a loopback share is not supported. <BR /> <BR /> For example, SQL Server or Hyper-V storing their data files on SMB file shares must run on computers that are not members of the file server cluster hosting the SMB file shares. <BR /> Using legacy tools <BR /> When creating file shares, the continuous availability property is set by default in tools introduced in Windows Server 2012, including the new file share creation wizard and the New-SmbShare command. If you have automation built around older tools, such as NET SHARE, Explorer, or the NET APIs, the continuous availability property will not be set by default, and these tools do not support setting it. To work around this issue you can set the following registry key, which will cause all shares to be created with the property set, regardless of whether the tools support setting it: <BR /> <BR /> Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" -Name EnableCaAlways -Value 1 -Force <BR /> Witness service <BR /> By default, the network traffic between the Witness client and the Witness service requires mutual authentication and is signed.
However, the traffic is not encrypted, as it doesn’t contain any user data. It is possible to enable encryption of Witness network traffic. <BR /> <BR /> To configure the Witness client to send traffic encrypted, set the following registry key on each client: <BR /> <BR /> Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" -Name WitnessFlags -Value 1 -Force <BR /> <BR /> To configure the Witness service to not accept unencrypted traffic, set the following registry key on each cluster node: <BR /> <BR /> Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\SMBWitness\Parameters" -Name Flags -Value 1 -Force <BR /> Disabling NetBIOS over TCP/IP <BR /> I have seen disabling NetBIOS over TCP/IP speed up failover times. To disable NetBIOS over TCP/IP for an interface, do the following in Network Connections: <BR /> <UL> <BR /> <LI> Select the interface you want to modify, <B> right-click </B> and select <B> Properties </B> </LI> <BR /> <LI> In interface properties, select <B> Internet Protocol Version 4 (TCP/IPv4) </B> and click <B> Properties </B> </LI> <BR /> <LI> In <B> Internet Protocol Version 4 (TCP/IPv4) </B> Properties, click <B> Advanced </B> </LI> <BR /> <LI> In <B> Advanced TCP/IP Settings </B> , click the <B> WINS </B> tab </LI> <BR /> <LI> On the <B> WINS </B> tab, select the <B> Disable NetBIOS over TCP/IP </B> radio button </LI> <BR /> </UL> <BR /> When disabling NetBIOS over TCP/IP, configure it for all network interfaces on all cluster nodes. <BR /> <BR /> <IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107415i8F0BE862AA10EFA5" /> <BR /> <BR /> Figure 16 – Disable NetBIOS over TCP/IP <BR /> Disable 8.3 name generation <BR /> SMB Transparent Failover does not support cluster disks with 8.3 name generation enabled. In Windows Server 2012, 8.3 name generation is disabled by default on any data volumes created.
However, if you import volumes created on down-level versions of Windows, or accidentally create a volume with 8.3 name generation enabled, SMB Transparent Failover will not work. An event will be logged in (Applications and Services Log – Microsoft – Windows – ResumeKeyFilter – Operational) notifying you that the filter failed to attach to the volume because 8.3 name generation is enabled. <BR /> <BR /> You can use <A href="#" target="_blank"> fsutil </A> to query and set the state of 8.3 name generation system-wide and on individual volumes. You can also use fsutil to remove previously generated short names from a volume. <BR /> <H3> Conclusion </H3> <BR /> I hope you enjoyed this introduction to SMB Transparent Failover and agree that this feature is useful for providing continued access to file shares, despite the need to occasionally restart servers for software or hardware maintenance, or in the unfortunate event that a cluster node fails. Providing continued access to file shares during these events is extremely important, especially for workloads such as Microsoft Hyper-V and Microsoft SQL Server. <BR /> <BR /> I am looking forward to diving into Scale-Out File Server in a future post. <BR /> <BR /> Claus Joergensen <BR /> <BR /> Principal Program Manager <BR /> <BR /> Windows File Server Team </LI> <BR /> </UL> </BODY></HTML> Wed, 10 Apr 2019 11:10:58 GMT https://gorovian.000webhostapp.com/?exam=t5/storage-at-microsoft/smb-transparent-failover-8211-making-file-shares-continuously/ba-p/425693 Claus Joergensen 2019-04-10T11:10:58Z