Containers articles Mon, 18 Oct 2021 13:50:52 GMT Containers 2021-10-18T13:50:52Z Nano Server x Server Core x Server - Which base image is the right one for you? <P>Almost every week I'm faced with this question from customers: Which Windows base container image is the right one for me? For that reason, I decided to write this blog post to help customers understand the differences between the three Windows container base images, their use cases, and the pros and cons of each one.</P> <P>&nbsp;</P> <P>Before we get started, it's important to note that in many cases, there is a framework image that builds on one of these base images. The most common examples are <A href="#" target="_blank" rel="noopener">.Net Framework</A>, <A href="#" target="_blank" rel="noopener">ASP.Net</A>, <A href="#" target="_blank" rel="noopener">.Net</A> (formerly .Net Core), among many others.</P> <P>&nbsp;</P> <P><FONT size="5">What are the base container images?</FONT></P> <P>Base images are used (as the name suggests) as the basis for the framework and application you want to host. They dictate which OS APIs are available to your application, and that can be a large or a small set of APIs. The more APIs are available, the more binaries are needed, resulting in a larger base image. For that reason, when we started to produce these base images we had Server Core and Nano Server, with the Server image being added later to address scenarios not supported by the other two.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="LTSC2022 Images.png" style="width: 999px;"><img src=";px=999" role="button" title="LTSC2022 Images.png" alt="LTSC2022 Images.png" /></span></P> <P>&nbsp;</P> <P><FONT size="5">Nano Server base container image</FONT></P> <P>This is our smallest base container image. As mentioned above, this means fewer APIs are available.
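</P> <P>As a quick sketch of what that looks like in practice, here is a hypothetical Dockerfile for a new, self-contained .Net app on the Nano Server base image (the app name and paths are placeholders, not from this post):</P> <LI-CODE lang="dockerfile"># Hypothetical example: a self-contained .Net app on Nano Server
FROM
WORKDIR /app
# Copy the app's published output into the image (placeholder path)
COPY ./publish/ .
ENTRYPOINT ["MyApp.exe"]</LI-CODE> <P>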
For Nano Server, we focused on scenarios where developers write new applications whose framework can target the specific APIs of Nano Server. Examples of frameworks, languages, and apps that are supported on Nano Server are .Net Core (now called .Net), Apache, NodeJS, Python, Tomcat, Java runtime, JBoss, and Redis, among others.</P> <P>Once pulled and extracted, the Nano Server base image is around 290MB in size. That means its pull time is extremely fast, allowing for faster scale-up. The flip side is the requirement, mentioned above, that only specific frameworks are supported on this base image.</P> <P>To provide some history, the Nano Server image is actually based on the previously available Nano Server installation option. We discontinued that option a few years ago specifically to focus on the Nano Server base container image.&nbsp;Overall, when developing new applications, I recommend customers always check whether Nano Server provides the necessary APIs for the application. If so, choose this image, as the benefits of a smaller image are significant.</P> <P>&nbsp;</P> <P><FONT size="5">Server Core base container image</FONT></P> <P>In this case, the expected scenario is different from that of the Nano Server base image. The Server Core base container image is based on the Server Core installation. This image is focused on lift-and-shift scenarios, where the expectation is to take the same application that was working on a VM and put it in a container as-is, with no code changes. Of course, there are limitations here. For example, if the application did not work on Server Core already, it most likely won't work on the Server Core base container image.</P> <P>More importantly, the Server Core base image supports .Net Framework, which is the framework used by most of the existing (not to say legacy) applications out there.
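</P> <P>As a sketch of that lift-and-shift flow, a Dockerfile for an existing ASP.Net application might build on one of the Server Core-based framework images mentioned earlier (the site folder and image tag here are placeholders, not from this post):</P> <LI-CODE lang="dockerfile"># Hypothetical example: an existing ASP.Net app on a Server Core-based framework image
FROM
# Copy the existing site content into the default IIS site (placeholder path)
COPY ./MyLegacyApp/ /inetpub/wwwroot</LI-CODE> <P>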
We see a lot of customers using this image to containerize applications from the Windows Server 2008 era - not surprisingly, many of them web applications written in ASP.Net 3.5.&nbsp;</P> <P>The Server Core image is around 4.8GB once pulled and extracted, which means longer pull times, adding to the total time of scale-up operations. However, the larger API surface is what you are after here. Just like with Nano Server, I have some guidance I provide to customers for this image: First, check if the application you're trying to containerize works on regular Server Core deployments. If not, containers might not be an option here. If it does, you should still try out the application in a Server Core container to check that the app is not trying to use something from the OS that was removed when the base container image was built.</P> <P>&nbsp;</P> <P><FONT size="5">Server base container image</FONT></P> <P>This is our largest image, by a lot - but for good reason. This image is based on the "Server with Desktop Experience" installation mode. Of course, the GUI is not present, but the UI APIs are there. This image enables a whole new set of scenarios, including GPU support via DirectX.&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="GPU_Server2022.png" style="width: 861px;"><img src=";px=999" role="button" title="GPU_Server2022.png" alt="GPU_Server2022.png" /></span></P> <P>Another interesting scenario enabled by this image is UI automation tests for developers. As mentioned before, you don't get a GUI with this image - rather, the UI APIs are present so that UI automation tests can work. Also, some UI automation tools might not be supported on Windows containers at all.
If you are familiar with such tools and know of one that doesn't work with Windows containers, please let us know, as we're interested in these scenarios.</P> <P>Of course, the compromise with the Server image is its total size of around 11.2GB once its layers are downloaded and extracted. On the other hand, think of this base image as an alternative for containerizing an existing app that is not supported on Server Core. The question I ask customers here is: does the app work on Server Core? If not, try the Server image, as that will probably work. Note that we start by giving Server Core a try, as its smaller size is a great benefit. If that doesn't work, the Server image should do the job.</P> <P>&nbsp;</P> <P><FONT size="5">Why not say the exact size of these images?</FONT></P> <P>It's easy to fall into the temptation of comparing image sizes for comparison's sake. However, just like with regular servers, you should not simply install your system and let it go. Containers are subject to Patch Tuesday, and in fact Microsoft updates these base images every month with security updates. That means you should be updating your production systems composed of containers every month to leverage these security updates. How you perform that is a topic for another blog post, but the point here is that, given these monthly changes, the size of the base container images will vary. Some months we gain some, some months we lose some. The fact that these images have around 290MB (Nano Server), 4.8GB (Server Core), and 11.2GB (Server) should give you a ballpark of what to expect in terms of size.
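</P> <P>To check the current numbers for yourself, you can pull a base image and then list its uncompressed size on disk with the standard docker CLI:</P> <LI-CODE lang="bash"># Pull a base image and check its size after extraction
docker pull
docker images</LI-CODE> <P>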
Furthermore, these image layers are compressed for pull operations (download), so the size that goes over the wire is smaller than what you see on disk.</P> <P>&nbsp;</P> <P><FONT size="5">Summary</FONT></P> <TABLE border="1" width="100%"> <TBODY> <TR> <TD width="25%">&nbsp;</TD> <TD width="25%">Nano Server</TD> <TD width="25%">Server Core</TD> <TD width="25%">Server</TD> </TR> <TR> <TD>Size (uncompressed on disk)</TD> <TD>~290MB</TD> <TD>~4.8GB</TD> <TD>~11.2GB</TD> </TR> <TR> <TD width="25%">Based on (installation mode)</TD> <TD width="25%">Nano Server</TD> <TD width="25%">Server Core</TD> <TD width="25%">Server with Desktop Experience</TD> </TR> <TR> <TD width="25%">Scenarios</TD> <TD width="25%">New applications being developed for Nano Server</TD> <TD width="25%">Lift and shift, web applications</TD> <TD width="25%">Lift and shift, GPU-dependent apps, UI automation</TD> </TR> <TR> <TD width="25%">Examples</TD> <TD width="25%">.Net (formerly .Net Core)</TD> <TD width="25%">.Net Framework, ASP.Net Framework</TD> <TD width="25%">.Net Framework, DirectX, ML-based applications</TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <P>As always, let us know if you have questions or comments, either in the comments section below or in our GitHub repo.</P> <P>&nbsp;</P> <P>Vinicius</P> <P>Twitter:&nbsp;<A href="#" target="_blank" rel="noopener">@vrapolinario</A></P> <P>&nbsp;</P> <P><FONT size="5">Edit:</FONT></P> <P>Some of you might have noticed I left the already known "Windows" image out of the article. That was on purpose, but I wanted to comment on that image so it doesn't create any confusion for current customers. Currently, Microsoft has four main base images for Windows containers: the already covered Nano Server, Server Core, and Server, plus the Windows image, which is not available for the Windows Server 2022 wave - the reason I chose not to cover it originally.</P> <P>The Windows image has the same essence as the Server image discussed above.
Its base, however, was different, and over time we decided to reshape it into what is the Server image today.&nbsp;</P> <P>If you are a customer using Windows containers for the purposes explained in the Server image section above (lift and shift, GPU support, ML, or anything that might need a larger API set from the OS), you have two options:</P> <UL> <LI>If you are running a Windows Server 2019 or SAC container host, you should use the Windows image.</LI> <LI>If you are running a Windows Server 2022 container host, you should use the new Server image.</LI> </UL> <P>Hopefully that clarifies the use case as well as which image to use in this scenario.</P> Wed, 13 Oct 2021 22:04:01 GMT Vinicius Apolinario 2021-10-13T22:04:01Z Contain your excitement: Updates on Windows Server 2022 and Containerd <P>We have seen a great deal of interest from the community around our September 1 announcement of the <A href="" target="_blank" rel="noopener">General Availability of Windows Server 2022</A>. This blog will answer common questions as well as provide an update on a topic the community is talking about: Windows Server 2022 support on Azure Kubernetes Service (AKS) and Azure Kubernetes Service on Azure Stack HCI (AKS on Azure Stack HCI).</P> <P>&nbsp;</P> <P><STRONG>Why are AKS and AKS on Azure Stack HCI only supporting Windows Server 2022 with containerd?</STRONG></P> <P>&nbsp;</P> <P><A href="#" target="_self">Containerd</A> is a popular container runtime widely used in Kubernetes. A container runtime is software that executes containers and manages container images on a node in a Kubernetes cluster.
On each node, kubelet – the primary ‘node agent’ – uses the container runtime interface as an abstraction to orchestrate and schedule the pods made of containers.</P> <P>&nbsp;</P> <P>As explained <A href="#" target="_blank" rel="noopener">here</A>, the Kubernetes community is aiming “at a deprecation and subsequent removal of dockershim from kubelet.” Dockershim is the <A href="#" target="_blank" rel="noopener">Container Runtime Interface (CRI)</A> implementation for docker. Currently the Kubernetes community aims to release kubelet without dockershim in Kubernetes version 1.24, around April 2022.</P> <P>&nbsp;</P> <P>Both AKS and AKS on Azure Stack HCI are managed Kubernetes services. Hence, one of their most important goals is to take care of changes in the underlying platform, be it from the OS, Kubernetes, or other open-source software, so customers don’t have to deal with them and can focus on their applications instead. The same applies to handling this upcoming dockershim deprecation.&nbsp;</P> <P>&nbsp;</P> <P>AKS and AKS on Azure Stack HCI <SPAN>today support Windows Server 2019.
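</SPAN></P> <P>To make the kubelet-to-runtime relationship concrete: on a Windows node using containerd, kubelet is pointed at containerd’s CRI endpoint, typically a named pipe. The flags and pipe name below are an assumption based on common containerd setups of this era, not from this post:</P> <LI-CODE lang="bash"># Hypothetical kubelet configuration on a Windows node with containerd as the CRI runtime
kubelet --container-runtime=remote --container-runtime-endpoint="npipe:////./pipe/containerd-containerd"</LI-CODE> <P><SPAN>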
</SPAN> <STRONG>Rather than requiring customers to change infrastructure twice</STRONG> (from Windows Server 2019 to Windows Server 2022, and from docker to containerd, with the associated validation and conformance costs for the Kubernetes community), <STRONG>we are streamlining the experience</STRONG> by aligning the latest version of Windows Server, Windows Server 2022, with the latest and best container runtime for Kubernetes: containerd. For example, Kubernetes version 1.22 introduced a new alpha feature, <A href="#" target="_blank" rel="noopener">HostProcess containers</A>, which is supported only on containerd. HostProcess containers aim to extend the Windows container model to enable a wider range of Kubernetes cluster management scenarios, such as deploying <A href="#" target="_self">CNI</A> and monitoring tools. We believe this containerd focus will help us give end users a great experience.</P> <P>&nbsp;</P> <P>This aligns with other containerd-related efforts for AKS and AKS on Azure Stack HCI.
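</P> <P>As an illustration of the HostProcess feature mentioned above, a minimal pod spec might look like the following sketch (the container name and image are placeholders; the feature is alpha in Kubernetes 1.22, so fields may change):</P> <LI-CODE lang="yaml">apiVersion: v1
kind: Pod
metadata:
  name: hostprocess-example
spec:
  securityContext:
    windowsOptions:
      hostProcess: true
      runAsUserName: 'NT AUTHORITY\SYSTEM'
  hostNetwork: true    # required for HostProcess pods
  containers:
  - name: node-setup                        # placeholder name
    image: example.registry/node-setup:v1   # placeholder image
  nodeSelector:
    kubernetes.io/os: windows</LI-CODE> <P>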
For example, on the Windows side, <A href="#" target="_blank" rel="noopener">AKS has been previewing Windows Server 2019 on containerd since July 2021</A>. AKS on Azure Stack HCI is also actively working on containerd support for Windows. On the Linux side, <A href="#" target="_blank" rel="noopener">AKS already uses containerd by default on Kubernetes version 1.19 and greater</A>. In addition, AKS on Azure Stack HCI recently announced that, in the Aug 2021 update, <A href="" target="_blank" rel="noopener">containerd is now the default container runtime for Linux worker nodes</A>.</P> <P>&nbsp;</P> <P><STRONG>What remains to be done for AKS and AKS on Azure Stack HCI to support Windows Server 2022?</STRONG></P> <P>&nbsp;</P> <P>AKS and AKS on Azure Stack HCI are certified distributors of Kubernetes. To support Windows Server 2022, there are two main steps: 1) enable support in the upstream Kubernetes community; and 2) integrate and validate the upstream Kubernetes support on AKS and AKS on Azure Stack HCI.</P> <P>&nbsp;</P> <P data-unlink="true">For the first, in addition to what we announced at the <A href="" target="_self">general availability (GA) of Windows Server 2022</A>, we have been working closely with the <A href="#" target="_self">Sig-Windows Community</A> on the following:&nbsp;</P> <UL> <LI>In the <A href="#" target="_blank" rel="noopener">Sig-Windows Meeting on Sep 7th, 2021</A>, we shared an overview of Windows Server 2022 and addressed questions. We also aligned with the community that validation of Windows Server 2022 will target the active branch of Kubernetes, version 1.23, and containerd only. <SPAN>Anyone can run validations on older versions of Kubernetes or other runtimes and file bugs.
The community will triage and resolve bugs as needed.</SPAN></LI> <LI>We submitted 8 Pull Requests (PRs) on <A href="#" target="_blank" rel="noopener">Container Storage Interface (CSI)</A> in Kubernetes. Here is one PR as an example: <A href="#" target="_blank" rel="noopener">adding Windows Server 2022 support to CSI build progress</A>.</LI> <LI>We submitted a <A href="#" target="_self">Pull Request to add Windows 2022 support on image builder</A>, which will enable VHD creation and eventually allow us to add Windows Server 2022 to the Azure Marketplace.</LI> <LI>In our private testing, we achieved a 99.6% pass rate on upstream Kubernetes tests using Windows Server 2022 worker nodes on a simulated AKS cluster built with <A href="#" target="_blank" rel="noopener">AKS Engine</A>.</LI> <LI>We are currently working to onboard Windows Server 2022 test pass reporting to <A href="#" target="_blank" rel="noopener">TestGrid</A>.</LI> </UL> <P>&nbsp;</P> <P>For the second, while working with upstream Kubernetes, we are integrating and validating the end-to-end customer experience with our own distributions on AKS and AKS on Azure Stack HCI.</P> <P>&nbsp;</P> <P>While Microsoft is working to go above and beyond in bringing Windows Server 2022 support to upstream Kubernetes, like any other Kubernetes distributor we also rely on the collective effort of the community to reach the ready state. Kubernetes is an open-source project with many distributions.
It’s great to see Windows Server 2022 customers benefiting from the broader ecosystem.</P> <P>&nbsp;</P> <P><STRONG>A sneak preview demo…</STRONG></P> <P>&nbsp;</P> <P>Below is a short demo using the simulated AKS cluster, created with AKS Engine for our private testing, to deploy a Windows Server 2022 based IIS workload.</P> <P>&nbsp;</P> <UL> <LI>The container host, aka the Windows node, is a Windows Server 2022 Datacenter SKU.</LI> <LI>The container image is <A href="#" target="_blank" rel="noopener">an IIS image</A>:</LI> </UL> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="ws2022_demo_smallest.gif" style="width: 500px;"><img src=";px=999" role="button" title="ws2022_demo_smallest.gif" alt="ws2022_demo_smallest.gif" /></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <P>It is a journey. It takes a village. We appreciate the interest and value the feedback from everyone in the community. Please feel free to post in our <A href="#" target="_blank" rel="noopener">Windows Container GitHub community</A>.<SPAN> If you have a Windows Server 2022 use case, please share it <A href="#" target="_blank" rel="noopener">here</A>. It will help us make sure we are doing the right validation and delivering to your needs. We look forward to hearing more from you.
</SPAN></P> <P>&nbsp;</P> <P><SPAN>Weijuan&nbsp;</SPAN></P> Fri, 08 Oct 2021 01:15:46 GMT Weijuan Shi Davis 2021-10-08T01:15:46Z New Microsoft Learn courses for Azure Migrate App Containerization <P>We have covered Azure Migrate App Containerization here on the blog already, and we continue to improve it, bringing new capabilities such as support for Azure App Service as a destination for your containerized application.</P> <P>&nbsp;</P> <P>However, moving an application from its regular deployment to a container (either on Azure Kubernetes Service or Azure App Service) does change how things usually work, and operators might find it tricky. For that reason, the team did an amazing job and recently published four new courses on <A href="#" target="_blank" rel="noopener">Microsoft Learn</A>!</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="MS Learn 01.png" style="width: 999px;"><img src=";px=999" role="button" title="MS Learn 01.png" alt="MS Learn 01.png" /></span></P> <P>The courses are focused on the scenarios supported by App Containerization:</P> <UL> <LI><A href="#" target="_blank" rel="noopener">Containerize and migrate ASP.NET applications to Azure Kubernetes Service</A></LI> <LI><A href="#" target="_blank" rel="noopener">Containerize and migrate Java web applications to Azure Kubernetes Service</A></LI> <LI><A href="#" target="_blank" rel="noopener">Containerize and migrate ASP.NET applications to Azure App Service</A></LI> <LI><A href="#" target="_blank" rel="noopener">Containerize and migrate Java web applications to Azure App Service</A></LI> </UL> <P>The courses are extremely easy to follow and explain key concepts of containers, Kubernetes, and the App Containerization service itself.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="MS Learn 02.png" style="width: 999px;"><img src=";px=999" role="button" title="MS Learn 02.png" alt="MS Learn 02.png" /></span></P>
<P>Since App Containerization targets scenarios in which customers are moving existing applications to AKS or App Service, it might be complex for customers to create a test environment to try out the functionality. Because of that, to me the most interesting thing about these courses is that they allow you to try out App Containerization directly in Azure. The courses have a dedicated section to walk you through the configuration of your Azure subscription, and a handy "Deploy to Azure" button which deploys all the necessary infrastructure for you to try App Containerization - including a VM running the application to be containerized and another VM with the App Containerization tooling.</P> <P>&nbsp;</P> <P><FONT size="5">Let us know what you think!</FONT>&nbsp;</P> <P>Try out the courses and check for yourself! We hope you like the content and the easy approach to trying the App Containerization capabilities. Let us know what you think!</P> Mon, 04 Oct 2021 16:00:00 GMT Vinicius Apolinario 2021-10-04T16:00:00Z Updates to the Windows Container Runtime support <P>Over the next year, Microsoft will transition support for the Mirantis Container Runtime (previously known as Docker Engine – Enterprise) to Mirantis support services. Windows Server containers will continue to function regardless of the runtime. The difference will be the coordination of associated technical support, previously provided by Microsoft and Mirantis. The Mirantis Container Runtime will continue to be available from and supported by Mirantis. For more information, see Mirantis’s <A href="#" target="_blank" rel="noopener">blog here</A>.</P> <P>&nbsp;</P> <P><FONT size="5">What happens if my production workloads run on the Mirantis Container Runtime?</FONT><BR />Windows Server containers are a feature of the Windows and Windows Server operating systems and are supported in accordance with the documented <A href="#" target="_blank" rel="noopener">support lifecycles</A>.
There is no change to the support for any customer experiencing an issue with the operating system functionality. The change is only in reference to the container runtime that builds on top of the operating system functionality.</P> <P>&nbsp;</P> <P>Customers running Windows Server containers in AKS and AKS-HCI will continue to be supported during the transition to containerd. Microsoft’s decision is in alignment with the recent move <A href="#" target="_blank" rel="noopener">made by the Kubernetes community</A> to drop maintenance of dockershim as part of the effort for Kubernetes to remain runtime agnostic through the container runtime interface (CRI).</P> <P>&nbsp;</P> <P><FONT size="5">Best practices for a smooth transition</FONT></P> <P>For customers who cannot immediately transition to AKS or AKS-HCI, there are several options:</P> <UL> <LI>A supported version of the Mirantis Container Runtime with the same features and capabilities as had been available from Docker is available from Mirantis. This includes support for Docker Swarm. Customers will benefit from updates, patches for security vulnerabilities, as well as global support. Customers using Windows containers in production environments are encouraged to <A href="#" target="_blank" rel="noopener">contact Mirantis</A> to pursue this option.<BR /><BR />For Kubernetes environments, Mirantis will continue to provide a supported, CRI compliant container runtime that gives developers the familiar Docker CLI and container management features.<BR /><BR /></LI> <LI>Customers can choose to run their Windows Server containers using the runtime from the open-source <A href="#" target="_blank" rel="noopener">Moby project</A>. 
Note that the Moby project does not provide formal support for several enterprise features present in <A href="#" target="_blank" rel="noopener">the Mirantis Container Runtime</A>.</LI> </UL> <P>At the end of September 2022, Microsoft will no longer maintain DockerMsftProvider, and customers should instead go to the <A href="#" target="_blank" rel="noopener">Mirantis site</A> for installation. Customers looking to Microsoft for support will still have access to Mirantis services at no cost until September 2022, after which customers may purchase Mirantis’ annual support contract.</P> <P>&nbsp;</P> <P>The build process for Windows containers using Docker will remain unaffected. Customers will still be able to interact with their container images using the Docker Desktop CLI. Windows containers will continue to work; only the Microsoft-supported runtime for Windows Server is changing.</P> <P>&nbsp;</P> <P>Both Microsoft and Mirantis are committed to a smooth transition. We encourage you to reach out to us with questions and concerns so we can find the best way to help you through this process.</P> Mon, 27 Sep 2021 21:44:08 GMT Brandon_Smith 2021-09-27T21:44:08Z Windows Server 2022 and beyond for containers <P>Following the <A href="#" target="_blank" rel="noopener">General Availability (GA) of Windows Server 2022</A>, we’re sharing more on what’s on the horizon for Windows containers customers.</P> <P>&nbsp;</P> <P><FONT size="5">Kubernetes community engagement</FONT></P> <P>As developers ourselves, we know that the containers and Kubernetes world requires a faster cadence of innovation. Our team continues to engage with these communities, and the work we’re doing to improve the Windows containers platform can be seen in Kubernetes releases.
One example of that is the new <A href="#" target="_blank" rel="noopener">HostProcess feature</A>, which became available in alpha with the release of Kubernetes 1.22.</P> <P>Our approach is grounded in having the new features our communities want and need supported not only on the latest release of Windows containers, but on any supported Windows Server release. And in cases where additional work is needed to support a feature on previous releases, we’re listening to customer feedback to implement and backport where we hear business needs.</P> <P>&nbsp;</P> <P><FONT size="5">Long-Term Servicing Channel</FONT></P> <P>Based on customer feedback and adoption patterns, we’ll move to the Long-Term Servicing Channel (LTSC) as our primary release channel. Current Semi-Annual Channel (SAC) releases will continue through their mainstream support end dates, which are May 10, 2022 for Windows Server version 20H2 and December 14, 2021 for Windows Server version 2004.</P> <P>The focus on container and microservices innovation previously released in the Semi-Annual Channel will now continue with&nbsp;Azure Kubernetes Service (AKS), AKS on Azure Stack HCI, and other platform improvements made in collaboration with the Kubernetes community. And with the Long-Term Servicing Channel, a major new version of Windows Server will be released every 2-3 years, so customers can expect both container hosts and container images to align with that cadence.</P> <P>&nbsp;</P> <P><FONT size="5">5+5 years of support for all container images</FONT></P> <P>Customers will receive five years of mainstream support and an additional five years of extended support for all Windows Server 2022 images: Server Core, Nano Server, and the recently announced <A href="" target="_blank" rel="noopener">Server image</A>.
This will ensure customers have time to implement, use, and upgrade or migrate at the right time.</P> <P>&nbsp;</P> <P><FONT size="5">Down-level compatibility between host and container images with process isolation</FONT></P> <P>Down-level compatibility between container and host versions has long been an important request from our customers, and we have been working hard to deliver this functionality. Starting with Windows Server 2022, customers will be able to run their Windows Server 2022 container images with either process or Hyper-V isolation on any build of Windows Server 2022 or Windows 11. Customers won’t need to worry whether their container image will run on a newer Windows Server 2022 or Windows 11 host build.<BR />From the beginning, the Windows OS was designed in such a way that its user and kernel modes always shipped together. Modern container infrastructure breaks this paradigm. Microsoft has undertaken a monumental effort to stabilize the user/kernel boundary to ensure that Windows answers your needs. We've put the OS through rigorous testing to ensure interactions across the Windows ABI work in down-level scenarios. It is for this reason, however, that we are releasing this feature as a preview for customers. We want to ensure flexibility for customer workloads and are committed to addressing any concerns that arise from apps running in down-level scenarios.<BR />This helps address the alternating cadences of Windows Server and Windows, previously addressed with Hyper-V isolation. Customers have the flexibility to test out their container workloads directly on a Windows 11 machine using either process or Hyper-V isolation and then deploy the same workload to the cloud.
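</P> <P>For example, on a Windows 11 or Windows Server 2022 host, the same Windows Server 2022 base image can be started with either isolation mode using the standard docker CLI (shown here as a sketch):</P> <LI-CODE lang="bash"># Process isolation: the container shares the host kernel
docker run -it --isolation=process powershell

# Hyper-V isolation: the container runs inside a lightweight VM boundary
docker run -it --isolation=hyperv powershell</LI-CODE> <P>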
</P> <P>&nbsp;</P> <P><FONT size="5">Windows and developer scenarios</FONT></P> <P>Speaking of developer scenarios using Windows containers, with the launch of Windows Server 2022 we are temporarily enabling forward compatibility for Windows 10 21H1 container hosts to run Windows Server 2022 images using Hyper-V isolation. This will be possible via a specific tag for the Windows Server 2022 container images for Server Core, Nano Server, and Server that will be published specifically for running on Windows 10 21H1 hosts with Hyper-V isolation. This functionality will allow developers running Windows 10 21H1 to kick the tires on Windows Server 2022 images until <A href="#" target="_blank" rel="noopener">Windows 11 is generally available (GA) on October 5</A>. Once Windows 11 becomes GA, we will remove these specific images. Customers running Windows 11 and Windows Server 2022 hosts will be able to use the regular images.</P> <P>&nbsp;</P> <P>If you are running a Windows 10 21H1 container host, you can pull these temporary images via:</P> <LI-CODE lang="bash">docker pull
docker pull
docker pull</LI-CODE> <P>&nbsp;</P> <P>To run a new container based on these images, remember you have to specify the Hyper-V isolation mode:</P> <LI-CODE lang="bash">docker run -it --isolation=hyperv powershell</LI-CODE> <P><FONT size="5">Keep sending your feedback!</FONT></P> <P>We hope you are as excited about the release of Windows Server 2022 and the updates for the Windows container and Kubernetes community as we are.
We strive to address your feedback, so as always, please keep the comments coming via our <A href="#" target="_blank" rel="noopener">GitHub repo</A> or in the comments section below.</P> <P>&nbsp;</P> Thu, 02 Sep 2021 16:09:15 GMT Vinicius Apolinario 2021-09-02T16:09:15Z Windows Server 2022 Now Generally Available <H1>Overview</H1> <P>Today we announced the <A href="#" target="_blank" rel="noopener">General Availability (GA) of Windows Server 2022</A>. It is such an exciting milestone for the Windows Server community and the broader ecosystem. As someone who has shipped the last few Windows Server releases and has been working with teams across Microsoft to bring Windows Server 2022 to GA, I am thrilled to share this with our Windows Server container community!</P> <P>&nbsp;</P> <P>As outlined in the&nbsp;<A href="" target="_blank" rel="noopener">What’s new for Windows Containers on Windows Server 2022</A>&nbsp;blog, Windows Server 2022 brings innovations and improvements to the Windows Server container platform, application compatibility, and containerization tooling. In addition, a new Server container image that provides better application compatibility for server applications was introduced with this release, as shared in the <A href="" target="_blank" rel="noopener">Announcing a New Windows Server Container Image Preview</A>&nbsp;blog.</P> <P>&nbsp;</P> <H1>Partnering with the Kubernetes Community</H1> <P>The most exciting part of this release is that we are working with the Kubernetes community to enable Windows Server 2022 support there, bring it to Azure Kubernetes Service (AKS) and Azure Kubernetes Service on Azure Stack HCI (AKS-HCI), and support the whole ecosystem in adopting it as well.
We are excited to share that the following PRs have already been submitted in the Kubernetes community:</P> <OL> <LI><A href="#" target="_self">Pause Images: Added base image for Windows Server 2022</A></LI> <LI><A href="#" target="_self">test images: Adds Windows Server 2022 to the BASEIMAGEs </A></LI> <LI><A href="#" target="_self">test images: Adds Windows Server 2022 to the BASEIMAGEs (part 2)</A></LI> </OL> <P>In addition, these test jobs on networking are being added:</P> <OL> <LI><A href="#" target="_self"></A></LI> <LI><A href="#" target="_self"></A></LI> </OL> <P>There will be more coming as we continue to work with the Kubernetes community on this journey. You are welcome to contribute as well.</P> <P>&nbsp;</P> <P>With the <A href="#" target="_self">upcoming deprecation of dockershim in Kubernetes</A>, we intend to support containerd as the only container runtime for Windows Server 2022 on Microsoft’s first-party Kubernetes services, aka AKS and AKS-HCI. We started the containerd work back in Windows Server 2019 and have been supporting containerd internally for various Azure services. This containerd-only focus starting with Windows Server 2022 aligns with where the ecosystem is going and will give our customers a more robust and performant experience. Containerd support for Windows Server 2019 on AKS is already in preview per this update: <A href="#" target="_self">Azure Kubernetes Service (AKS) support for containerd runtime is in preview</A>.</P> <P>&nbsp;</P> <H1>Container Images</H1> <P><A href="#" target="_self">Windows base OS images by Microsoft</A> is the landing page for the product family of all Windows Server base OS container images. As usual, the actual images are available on the Microsoft Container Registry (MCR).
This release comes with these 3 container image types:</P> <UL> <LI><A href="#" target="_self">windows/nanoserver</A>: <STRONG>Nano Server</STRONG> base OS image</LI> <LI><A href="#" target="_blank" rel="noopener">windows/servercore</A>: Windows <STRONG>Server Core</STRONG> base OS image</LI> <LI><A href="#" target="_blank" rel="noopener">windows/server</A>: Windows <STRONG>Server</STRONG> base OS image; <EM>new for this release.</EM></LI> </UL> <P>For those of you using the <A href="#" target="_self"><STRONG>Windows</STRONG> base OS image</A> and wanting to run the image on a Windows Server 2022 host in <A href="#" target="_self">Process Isolation</A>, you will need to switch to the new Server base OS image instead. Otherwise, you can run the Windows base OS image with <A href="#" target="_self">Hyper-V isolation</A> on a Windows Server 2022 host. You can always check out more details on container host and container image version compatibility at the <A href="#" target="_self">Windows container version compatibility documentation</A> page.</P> <P>&nbsp;</P> <P>If you are interested in container images from other teams at Microsoft, the following are also available for Windows Server 2022:</P> <UL> <LI><A href="#" target="_self">dotnet/core</A></LI> <LI><A href="#" target="_blank" rel="noopener">dotnet/framework</A></LI> <LI><A href="#" target="_self">iis</A></LI> <LI><A href="#" target="_blank" rel="noopener">powershell</A></LI> </UL> <P>&nbsp;</P> <H1>Image Tags</H1> <P>There will be 3 types of tags for this release:</P> <OL> <LI>Featured Tag: <STRONG>“ltsc2022”</STRONG>. This will always be the latest release with the latest patches going forward. Note: We don’t have the “latest” tag for any Windows base OS container images.</LI> <LI>Featured Tag + a KB number: “<STRONG>ltsc2022-KBxxxxxxx</STRONG>”, e.g. “ltsc2022-KB5005039”.</LI> <LI>Build number: “<STRONG>10.0.20348.XX”</STRONG>, e.g., 10.0.20348.169.
Note: 20348 is the build number of the Windows Server 2022 release.</LI> </OL> <P>For example, this is how you pull Windows Server 2022 container images:</P> <UL> <LI>docker pull;&nbsp;</LI> <LI>docker pull</LI> <LI>docker pull</LI> </UL> <P>As we move forward with monthly patches, you can still use the featured tag and get the latest images; or you can specify the specific KB number or the build number for a specific release.&nbsp; You can use the&nbsp;<A href="#" target="_self">Windows Server container update history</A>&nbsp;for reference.</P> <P>&nbsp;</P> <H1>How to get started?</H1> <UL> <LI>To get Windows Server 2022 as your container host on Azure, start here: <A href="#" target="_self">Windows Server on Azure Marketplace.&nbsp;</A>&nbsp;<SPAN>Be sure to follow the instructions to&nbsp;<A href="#" target="_self">Install Docker</A>.</SPAN></LI> <LI>To get a Windows Server 2022 base OS container image, start here: <A href="#" target="_self">Windows base OS images by Microsoft</A>.</LI> <LI>To get started with Windows containers in general, start with our&nbsp;<A href="#" target="_self">Containers on Windows documentation </A>.</LI> </UL> <P>&nbsp;</P> <H1>What’s Next?</H1> <P>We are actively working with the Kubernetes community as well as our own AKS and AKS-HCI teams to bring up Windows Server 2022 support. To keep up to date on when Windows Server 2022 will be available on AKS and AKS-HCI, please follow these GitHub threads:</P> <UL> <LI><A href="#" target="_self">[Feature] Support WS2022 on Windows · Issue #2115 · Azure/AKS </A></LI> <LI><A href="#" target="_self">Windows Server 2022 Support on AKS-HCI · Issue #123 · Azure/aks-hci </A></LI> </UL> <P>&nbsp;</P> <H1>Feedback and Issues</H1> <P>We would love for our customers and community to try it out and let us know your feedback.
Please feel free to post in our <A href="#" target="_self">Windows Container GitHub community.</A></P> <P>&nbsp;</P> <P>Thank you!</P> <P>Weijuan</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> Wed, 01 Sep 2021 15:17:17 GMT Weijuan Shi Davis 2021-09-01T15:17:17Z A closer look into Azure Migrate App Containerization <P>A while ago, the Azure Migrate team released the App Containerization functionality under the umbrella of services of Azure Migrate. The goal of App Containerization is to provide an end-to-end solution for IT Admins and developers containerizing existing applications that are currently deployed on web servers, with minimal effort and no code changes.&nbsp;</P> <P>&nbsp;</P> <P><IFRAME src="" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen" title="YouTube video player" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"></IFRAME></P> <P>In the video above, I sat down with Damian from the DevOps Lab to show the functionalities of App Containerization. The tool is in preview and supports containerization of ASP.Net Framework on IIS for Windows containers and Java on Apache Tomcat for Linux containers.</P> <P>The tool takes you through a wizard in which you target a running server and provide appropriate credentials, and the tool can then extract the content to be containerized. Then, using Azure Container Registry (ACR) Tasks, it creates a new container image based on the extracted content. From there, we start the process of deploying the image as an application on Azure Kubernetes Service.</P> <P>Throughout the process, the tool also helps you create both the ACR registry on which your image will be stored as well as the AKS cluster to run your application.
In that process it also takes care of authentication between these Azure services.&nbsp;To learn more about the tool and its capabilities, check out our <A href="#" target="_blank" rel="noopener">documentation</A>.</P> <P>&nbsp;</P> <P>Take a look at the video above and let us know what you think! As a service in preview, we'd love to get your feedback on what is working, what can be improved, and what other scenarios you'd like to see available on the tool.</P> <P>&nbsp;</P> <P>Vinicius.</P> <P>You can find me on Twitter&nbsp;<A href="#" target="_blank" rel="noopener">@vrapolinario</A></P> Tue, 13 Jul 2021 17:02:16 GMT Vinicius Apolinario 2021-07-13T17:02:16Z May 2021 update to Containers extension on Windows Admin Center <P>Since last year, we've been hard at work on the Containers extension adding <A href="" target="_blank" rel="noopener">new functionality</A> to make the management of Windows containers easier. Today we have yet another update for you!</P> <P>&nbsp;</P> <P>The journey of containerizing existing applications using Windows Admin Center starts by creating a new container image - Windows Admin Center can create a container image based on existing Visual Studio solutions, Web Deploy files, and more. Next, you can run the container image locally or push your image to Azure Container Registry (ACR) and run it in the cloud with Azure Container Instance (ACI). However, these options for running the container image are options for single instances - no container orchestrator in place.</P> <P>&nbsp;</P> <P>As we listened to customer feedback, it became clear that many IT admins out there were looking for simple solutions to get their apps running - but the barrier to entry for Kubernetes is high. Creating your workload definition in YAML is not trivial.
Our goal with this update is to make the process of deploying applications on Azure Kubernetes Service (AKS) and AKS on Azure Stack HCI (AKS-HCI) as easy as possible.</P> <P>&nbsp;</P> <P><FONT size="5">Creating new Workload Definitions</FONT></P> <P>With the updated version of the Containers extension you'll see a new Kubernetes section on the side menu. The first option is the Workload Definition, where you can create YAML files that specify how your application should be deployed. If you want to create a new Workload Definition, you can click the option to create a new one, and this will start a wizard-like experience:</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="01.png" style="width: 999px;"><img src=";px=999" role="button" title="01.png" alt="01.png" /></span></P> <P>In the wizard you'll provide some information about your application, such as the container image to use, the CPU and memory configuration, the number of pod replicas, etc. With this information, Windows Admin Center will create a new YAML file - which you can edit if you want to. With the YAML file created you can then deploy it to AKS or AKS-HCI.</P> <P>&nbsp;</P> <P><FONT size="5">Deploying workloads</FONT></P> <P>Once you have a YAML file created, you can select the option to apply the workload definition to AKS or AKS-HCI. In the background, Windows Admin Center will use kubectl to connect to the cluster you specified. If you select an AKS cluster, the kubectl configuration will be retrieved from Azure. If you select AKS-HCI, you need to specify the host cluster name and credentials, so Windows Admin Center can retrieve the configuration for your target cluster.&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="02.png" style="width: 999px;"><img src=";px=999" role="button" title="02.png" alt="02.png" /></span></P> <P>&nbsp;</P> <P>However, there's more happening under the covers.
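</P> <P>For reference, the workload definition Windows Admin Center generates from that wizard input is, at its core, a standard Kubernetes Deployment manifest. Below is a minimal hand-written sketch of such a manifest - the image path, names, and resource values are illustrative placeholders, not necessarily what the tool emits:</P>

```yaml
# Illustrative workload definition: two replicas of a Windows container
# pulled from an ACR registry (all names here are hypothetical examples).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-iis-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-iis-app
  template:
    metadata:
      labels:
        app: sample-iis-app
    spec:
      nodeSelector:
        kubernetes.io/os: windows    # schedule onto Windows nodes only
      containers:
      - name: sample-iis-app
        resources:
          limits:
            cpu: "1"
            memory: 800Mi
      imagePullSecrets:
      - name: acr-secret             # ACR credentials stored as a K8s secret
```

<P>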
When you specify the container image to use, the nodes will have to authenticate against ACR, and configuring this authentication is not trivial. Luckily, Windows Admin Center handles that for you. Basically, what it is doing is safely retrieving the configuration for ACR directly from Azure and storing it securely on your nodes as a Kubernetes secret.</P> <P>&nbsp;</P> <P><FONT size="5">Check your workloads</FONT></P> <P>Once you deploy your workloads, you can check whether they are running correctly. You can click the Kubernetes Service option on the menu, which allows you to specify the cluster on which you want to check the deployments.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="03.png" style="width: 999px;"><img src=";px=999" role="button" title="03.png" alt="03.png" /></span></P> <P>The image above shows the deployment of an application on AKS-HCI. The display is similar to running kubectl with the "get deployments" option. It shows some details on the deployment, including the number of pods and how many are ready.</P> <P>&nbsp;</P> <P><FONT size="5"><STRONG>Feedback</STRONG></FONT></P> <P>As always, please continue sending us your feedback! Each new feature we design and ship is informed by your input.
You can send your comments and feedback our way either via the comments below, or via our&nbsp;<A href="#" target="_blank" rel="noopener noreferrer">GitHub repo</A>&nbsp;by opening a new issue.</P> <P>&nbsp;</P> Tue, 25 May 2021 16:00:00 GMT Vinicius Apolinario 2021-05-25T16:00:00Z Announcing Active Directory Identity Improvement on AKS on Azure Stack HCI <P>We’re very pleased to announce that the Group Managed Service Account (gMSA) for Windows containers with non-domain joined host solution is now available in the recently announced AKS on Azure Stack HCI <A href="#" target="_self">Release Candidate</A>!</P> <P>&nbsp;</P> <P><STRONG>The Journey</STRONG></P> <P>Since the team started the journey of bringing containers to Windows Server several years ago, we have heard from customers that the majority of traditional Windows Server apps rely on Active Directory (AD). We have made a lot of investments in our OS platform, such as leveraging <A href="#" target="_self">Group Managed Service Accounts (gMSA)</A> to give containers an identity that can be authenticated with Active Directory. For example, this blog showcased improvements in the Windows Server 2019 release wave:&nbsp;<A href="" target="_self">What's new for container identity</A>. We have also partnered with the Kubernetes community and enabled <A href="#" target="_self">gMSA for Windows pods and containers in Kubernetes v1.18</A>. This is extremely exciting news. But this solution needs Windows worker nodes to be domain joined with an Active Directory domain. In addition, multiple steps need to be executed to install the webhook and configure gMSA Credential Spec resources to make the scenario work end to end.</P> <P>&nbsp;</P> <P>To ease these complexities, as announced in the blog on <A href="" target="_self">What’s new for Windows Containers on Windows Server 2022</A>, improvements were made in the OS platform to support gMSA with a non-domain joined host.
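</P> <P>Under either mode, the gMSA reaches the container through a credential spec file. For orientation, here is a minimal sketch of what such a JSON file looks like - the SID, GUID, domain, and account names are hypothetical placeholders for values taken from your own domain:</P>

```json
{
  "CmsPlugins": [ "ActiveDirectory" ],
  "DomainJoinConfig": {
    "Sid": "S-1-5-21-0000000000-0000000000-0000000000",
    "MachineAccountName": "WebApp01",
    "Guid": "00000000-0000-0000-0000-000000000000",
    "DnsTreeName": "contoso.com",
    "DnsName": "contoso.com",
    "NetBiosName": "CONTOSO"
  },
  "ActiveDirectoryConfig": {
    "GroupManagedServiceAccounts": [
      { "Name": "WebApp01", "Scope": "contoso.com" },
      { "Name": "WebApp01", "Scope": "CONTOSO" }
    ]
  }
}
```

<P>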
We have been working hard to light up this innovation in AKS and AKS on Azure Stack HCI. We are very happy to share that AKS on Azure Stack HCI is the first Kubernetes-based container platform that supports this “<A href="#" target="_self">gMSA with non-domain joined host</A>” end-to-end solution. No domain joined Windows worker nodes anymore, plus a couple of cmdlets to simplify the end-to-end user experience!</P> <P>&nbsp;</P> <P>&nbsp;</P> <P><STRONG>“gMSA with non-domain joined host” vs. “gMSA with domain-joined host”</STRONG></P> <TABLE border="1" width="100%"> <TBODY> <TR> <TD width="50%"><SPAN>gMSA with non-domain joined host</SPAN></TD> <TD width="50%"><SPAN>gMSA with domain-joined host</SPAN></TD> </TR> <TR> <TD width="50%"> <UL> <LI>Credentials are stored as K8s secrets and authenticated parties can retrieve the secrets. These credentials are used to retrieve the gMSA identity from AD.</LI> <LI>This eliminates the need for the container host to be domain joined and solves challenges with container host updates.&nbsp;</LI> </UL> </TD> <TD width="50%"> <UL> <LI><SPAN>Updates to the Windows container host can pose considerable challenges.</SPAN></LI> <LI><SPAN>All previous settings need to be reconfigured to domain join the new container host.</SPAN></LI> </UL> </TD> </TR> <TR> <TD width="50%"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="msjingli_0-1621557740287.png" style="width: 476px;"><img src="" width="476" height="321" role="button" title="msjingli_0-1621557740287.png" alt="msjingli_0-1621557740287.png" /></span> <P>&nbsp;</P> </TD> <TD width="50%"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="msjingli_1-1621557781952.png" style="width: 425px;"><img src="" width="425" height="311" role="button" title="msjingli_1-1621557781952.png" alt="msjingli_1-1621557781952.png" /></span> <P>&nbsp;</P> </TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <P><STRONG>Simplified end-to-end gMSA configuration process by
built-in cmdlets</STRONG></P> <P>In AKS on Azure Stack HCI, even though you don't need to domain join Windows worker nodes anymore, there are other configuration steps that you can't skip. These steps include installing the webhook, the custom resource definition (CRD), and the credential spec, as well as enabling role-based access control (RBAC). We provide a few PowerShell cmdlets to simplify the end-to-end experience. Please refer to <A href="#" target="_self">Configure Group Managed Service Account with AKS on Azure Stack HCI.</A></P> <P>&nbsp;</P> <P><STRONG>Getting started </STRONG></P> <P>We have provided detailed <A href="#" target="_self">documentation</A> on how to integrate your gMSA with containers in AKS-HCI with the non-domain joined solution:</P> <OL> <LI>Prepare the gMSA in the domain controller.&nbsp;</LI> <LI>Prepare the gMSA credential spec JSON file (This is a one-time action. Please use the gMSA account in your domain.)</LI> <LI>Install the webhook and the Kubernetes secret, and add the Credential Spec.</LI> <LI>Deploy your application.</LI> </OL> <P>&nbsp;</P> <P>If you are looking for this support on AKS, you can follow this entry on the AKS Roadmap:&nbsp;<A href="#" target="_self">[Feature] gMSA v2 support on Windows AKS · Issue #1680</A>.</P> <P>&nbsp;</P> <P><SPAN>As always, we love to see you try it out, and give us feedback. You can share your feedback at our <A href="#" target="_self">GitHub community Issues · microsoft/Windows-Containers </A> , or contact us directly at <A href="" target="_self"></A>.</SPAN></P> <P>&nbsp;</P> <P>&nbsp;</P> <P><SPAN>Jing</SPAN></P> <P><SPAN>Twitter:&nbsp;<A href="#" target="_self"></A></SPAN></P> Fri, 21 May 2021 03:21:37 GMT msjingli 2021-05-21T03:21:37Z Announcing a New Windows Server Container Image Preview <P>Today we are very pleased to announce a new Windows Server base OS container image <STRONG>preview</STRONG> built from Windows Server 2022 with Desktop Experience.
To try it out, on a <A href="" target="_self">Windows Server 2022 Insider Build 20344</A> container host, run this command to start:</P> <P class="lia-indent-padding-left-30px">&nbsp;</P> <P class="lia-indent-padding-left-30px"><EM>docker pull<STRONG>server</STRONG>/insider:10.0.20344.1</EM></P> <P class="lia-indent-padding-left-30px">&nbsp;</P> <P>The direct link to the image repo on Docker Hub is here: <A href="#" target="_self"></A>.&nbsp;&nbsp;</P> <P>&nbsp;</P> <H1>Why did we build this new image?</H1> <P>There are <A href="#" target="_blank" rel="noopener">3 Windows Base OS container images</A> today that nicely cover the broad spectrum of customer needs: <STRONG>Nano Server</STRONG> – ultralight, modern Windows offering for new app development; <STRONG>Server Core</STRONG> - medium size, best fit for lift-and-shift Windows Server apps; <STRONG>Windows </STRONG>- largest size, almost full Windows API support for special workloads. Nano Server and Server Core container image adoption has been steadily growing, and both images have been widely used for a while. In the last year or so, we have also seen growing adoption of the Windows image. Meanwhile, in the <A href="#" target="_self">Windows Container community on GitHub</A> and through our Customer Support, we have received feedback regarding constraints when using that <A href="#" target="_self">Windows base OS container image</A>. For example:</P> <P>&nbsp;</P> <UL> <LI><A href="#" target="_self">Windows server 2004 has IIS application pool connection limit of 10</A></LI> <LI><A href="#" target="_self">Windows Server Container - should have been a Server OS Container</A></LI> </UL> <P>&nbsp;</P> <P>Some of the constraints are by design because that Windows container image is built from a full Windows Client edition and enabled to run on Windows Server.
As we are committed to investing in the Windows containers business, we believe it is the right thing at the right time to build a new image based on a “full” Windows Server edition to enable more capabilities. “Full” in the sense that we chose to use the Windows Server 2022 with Desktop Experience edition. Some of you may informally refer to this as “Server Core” + “Desktop UI”. That’s how this new container image was born and built. It will be added to all the relevant repos on the Microsoft Container Registry (MCR) and Docker Hub pages. I should note that, though this image is built from an edition with Desktop Experience, Windows containers today by design do not have a GUI. That’s not changed with this new image. &nbsp;</P> <P>&nbsp;</P> <H1>What’s the name again?</H1> <P>Windows containers by themselves are not stand-alone products. They are considered features of Windows Server. Whatever name we choose needs to show that connection but also avoid potential confusion or duplication. That leaves us limited room for creativity <img class="lia-deferred-image lia-image-emoji" src="" alt=":smile:" title=":smile:" />. As you can see from the path on MCR, “<STRONG>server</STRONG>”, this image is referred to as the “Windows <STRONG><EM>Server</EM></STRONG> base OS image”, or, for short, the “Server base image” or “Server image”.</P> <P>&nbsp;</P> <P>&nbsp;</P> <H1>What about this new image?</H1> <P>This new image will be available with the Windows Server 2022 release only. For those of you who are using the <EM>Windows </EM>images from previous releases that are still in support, such as Windows Server SAC v1809, SAC v1909, SAC v2004 and SAC v20H2, those images are not changed and have their respective <A href="#" target="_blank" rel="noopener">support cycles</A>. This new image is <STRONG><EM>not</EM></STRONG> available with those previous releases.
We encourage you to adopt Windows Server 2022 and move to use this new <EM>Server</EM> image.</P> <P>&nbsp;</P> <P>Here is a quick comparison between all the 4 images:</P> <TABLE> <TBODY> <TR> <TD rowspan="2" width="90"> <P><STRONG>Container Image</STRONG></P> </TD> <TD rowspan="2" width="168"> <P><STRONG>Main Use Case</STRONG></P> </TD> <TD rowspan="2" width="60"> <P><STRONG>Compressed Size</STRONG></P> <P>&nbsp;</P> </TD> <TD colspan="3" width="306"> <P><STRONG>Supported Versions today *</STRONG></P> </TD> </TR> <TR> <TD width="102"> <P>Windows Server 2022</P> </TD> <TD width="84"> <P>Windows Server 2016, 2019</P> </TD> <TD width="120"> <P>Windows Server SAC v1809<STRONG>**</STRONG>, v1909, v2004, v20H2</P> </TD> </TR> <TR> <TD width="90"> <P><STRONG>Nano Server</STRONG></P> </TD> <TD width="168"> <P>Mainly for modern apps such as .NET Core apps;</P> <P>Limited App Compatibility</P> </TD> <TD width="60"> <P>112MB</P> </TD> <TD width="102"> <P>X</P> </TD> <TD width="84"> <P>&nbsp;</P> </TD> <TD width="120"> <P>X</P> </TD> </TR> <TR> <TD width="90"> <P><STRONG>Server Core</STRONG></P> </TD> <TD width="168"> <P>Mainly for .NET Framework apps;</P> <P>Better App Compatibility</P> </TD> <TD width="60"> <P>1.2GB</P> </TD> <TD width="102"> <P>X</P> </TD> <TD width="84"> <P>X</P> </TD> <TD width="120"> <P>X</P> </TD> </TR> <TR> <TD width="90"> <P><STRONG>Windows</STRONG></P> </TD> <TD width="168"> <P>Mainly for .NET Framework apps;</P> <P>Best App Compatibility with by-design constraints</P> </TD> <TD width="60"> <P>3.4GB</P> </TD> <TD width="102"> <P>&nbsp;</P> </TD> <TD width="84"> <P>&nbsp;</P> </TD> <TD width="120"> <P>X</P> </TD> </TR> <TR> <TD width="90"> <P><STRONG>Server</STRONG></P> </TD> <TD width="168"> <P>Mainly for .NET Framework apps;</P> <P>Best App Compatibility</P> </TD> <TD width="60"> <P>3.1GB</P> </TD> <TD width="102"> <P>X</P> </TD> <TD width="84"> <P>&nbsp;</P> </TD> <TD width="120"> <P>&nbsp;</P> </TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> 
<P><STRONG>Note:&nbsp;</STRONG></P> <P><STRONG>*&nbsp;</STRONG>“<STRONG>Supported Versions today</STRONG>” lists the Windows Server releases with which the container image was or will be released and in which it is or will be supported. For example, the first row means the Nano Server image was released with the Windows Server SAC v1809, v1909, v2004 and v20H2 releases and will be in the Windows Server 2022 release. That list can change as some releases reach their end of support.</P> <P>&nbsp;</P> <P><STRONG>**</STRONG>In the release wave of Windows Server 2019 and <STRONG>SAC v1809</STRONG>, the Nano Server container and Windows container images were only shipped as an SAC with an 18-month support cycle. Based on customer feedback, last year we extended the Nano Server container image in the SAC v1809 release to be supported for 5 years. The Windows container image is currently set to reach its end-of-life support (EOL) in May, so we will be sure to share the update on the extension before the EOL. On the other hand, the Server Core container image in that release wave was shipped both as an LTSC and an SAC. In an oversimplified way, you can think of all 3 images released during the Windows Server 2019 and SAC v1809 wave as now aligned to be supported for 5 years, to Jan 2024. We understand this is a confusing topic. We’ll come back with details in future blogs.</P> <H1>&nbsp;</H1> <H1>What are the key benefits and capabilities of the new image?</H1> <P>Compared to the current <EM>Windows</EM> image:</P> <UL> <LI><STRONG>Size Smaller: </STRONG>slightly smaller, from 3.4GB down to 3.1GB.</LI> <LI><STRONG>Performance and Reliability Improved: </STRONG>Over the years we have improved the performance and reliability of the Server Core container images thanks to the large adoption internally and externally.
This image inherits all those improvements from Server Core.</LI> <LI><STRONG>LTSC Support from the get-go:</STRONG> we are planning to support this image as an LTSC with 5 years of mainstream support.</LI> <LI><STRONG>Server functionality: </STRONG>we are still validating, so the list here is not complete, but we expect this image to enable more Server scenarios/features. <UL> <LI><STRONG>IIS Connection: </STRONG>as mentioned earlier, there was a 10-connection limit. This new image should no longer have this limit. We have customers validating it with their scenarios.</LI> <LI><STRONG>Web APIs e.g. Web Management Services (WMSVC): </STRONG>in that same GitHub issue related to IIS, it was reported that this feature is not supported. We have yet to validate this, but we believe it should be supported.</LI> </UL> </LI> <LI><STRONG>Fuller API support</STRONG> <UL> <LI><STRONG>GPU Support</STRONG>: We announced GPU support back in April 2019 in the blog <A href="" target="_self">Bringing GPU acceleration to Windows containers</A>, with this accompanying GitHub page on setting up the demo: <A href="#" target="_self">Virtualization-Documentation/windows-container-samples/directx at live </A>. We are very glad to share that GPU support is validated on this new image. Below is a screenshot:</LI> </UL> </LI> </UL> <P class="lia-indent-padding-left-120px"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Server Image GPU Validation Result.jpg" style="width: 544px;"><img src=";px=999" role="button" title="Server Image GPU Validation Result.jpg" alt="Server Image GPU Validation Result.jpg" /></span></P> <H1>OK, that’s interesting.
How do I get started?</H1> <P><STRONG>Step 1: Install a Windows Server 2022 Insider </STRONG></P> <P>To get started, you’ll need a Windows Server 2022 installation based on the Insiders preview build <STRONG>20344</STRONG>. You can download the bits from the Insiders page here: <A href="#" target="_self">Download Windows Server Insider Preview</A>. Once you download the ISO (or VHD), create a new VM based on this image.</P> <P>&nbsp;</P> <P><STRONG>Step 2: Install Docker</STRONG></P> <P>Once you have a working Windows Server 2022 Preview deployment, follow this to install Docker:</P> <P>&nbsp;</P> <P><EM>Install-Module -Name DockerMsftProvider -Repository PSGallery -Force</EM></P> <P><EM>Install-Package -Name docker -ProviderName DockerMsftProvider</EM></P> <P><EM>Restart-Computer -Force</EM></P> <P>&nbsp;</P> <P>Once the machine is restarted, run ‘docker info’ to ensure Docker was correctly installed.</P> <P>&nbsp;</P> <P><STRONG>Step 3: Pull the new image</STRONG></P> <P><EM>docker pull</EM></P> <P>&nbsp;</P> <P><STRONG>Step 4: Run the new image</STRONG></P> <P><EM>docker run -it cmd</EM></P> <P>&nbsp;</P> <P>Here is a screenshot for your reference:</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Server Image Insider Validation Screenshot.jpg" style="width: 780px;"><img src=";px=999" role="button" title="Server Image Insider Validation Screenshot.jpg" alt="Server Image Insider Validation Screenshot.jpg" /></span></P> <P>&nbsp;</P> <P><STRONG>Note:</STRONG></P> <UL> <LI>On a Windows Server host, containers are by default run in the <STRONG>process-isolation</STRONG> mode. You can find more at this doc page <SPAN><A href="#" target="_self">Windows Server Container Isolation Modes</A>.</SPAN></LI> <LI>If you would like to try out on a Windows 10 machine, please follow the instructions here <A href="#" target="_self">Use Containers with Windows Insider Program</A>. Be sure to use the latest Insider release. 
For example, I used a Windows 10 21364 Insider build. On a Windows 10 host, containers by default run in the <STRONG>Hyper-V isolation</STRONG> mode.</LI> </UL> <H1>&nbsp;</H1> <H1>References:</H1> <UL> <LI><A href="#" target="_self">Use Windows containers with the Windows Insider Program</A></LI> <LI><A href="" target="_blank" rel="noopener">Windows Server Insiders Community on Tech Community</A></LI> <LI><A href="#" target="_blank" rel="noopener">Windows Container community on GitHub</A></LI> <LI><A href="#" target="_blank" rel="noopener">Windows Server Base OS Image Insider Build on Docker Hub</A></LI> </UL> <H1>&nbsp;</H1> <H1>Closing</H1> <P>It’s been such a great journey since we started working on this new image early this year. Your feedback really propelled the direction and the innovations. We are so excited to share this with the community. We may get it right or we may get it wrong. Fail fast if needed. But we are in <img class="lia-deferred-image lia-image-emoji" src="" alt=":smile:" title=":smile:" />!</P> <P>&nbsp;</P> <P>As always, we love to see you try it out, and give us feedback. You can share your feedback at our GitHub community, or contact us directly.</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>Weijuan</P> <P>Twitter: @WeijuanLand</P> <P>Email:</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> Thu, 29 Apr 2021 14:11:07 GMT Weijuan Shi Davis 2021-04-29T14:11:07Z Busting the Myths around Kubernetes Deprecation of Dockershim – Windows Edition <P>You may have heard that the Kubernetes v1.20 release <A href="#" target="_blank" rel="noopener">deprecated dockershim</A>.
Our friends in the community published a <A href="#" target="_blank" rel="noopener">DON’T PANIC blog</A> that does a great job of clarifying, since a lot of people kind of freaked out.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="crpeters_1-1617311301237.jpeg" style="width: 400px;"><img src=";px=400" role="button" title="crpeters_1-1617311301237.jpeg" alt="crpeters_1-1617311301237.jpeg" /></span></P> <P><EM><A href="#" target="_blank" rel="noopener">Jim Linwood</A>, <A href="#" target="_blank" rel="noopener">CC BY 2.0</A>, via Wikimedia Commons</EM></P> <P>This didn’t quite do the trick, so they wrote a <A href="#" target="_blank" rel="noopener">FAQ</A> too. Still, the message hasn’t landed everywhere it needs to. And none of these publications address the specialness that is Windows head-on. So, we felt we needed to share what this means to you as a user of Windows Server containers in Kubernetes (K8s). &nbsp;Even as dockershim slowly exits Kubernetes, building containers is no different. For both Windows and Linux, containers built with different toolsets can be run with different runtimes. This is no different for Kubernetes. <EM>Containers built with Docker will run without modification in Kubernetes with containerd.</EM> Microsoft contributes to containerd to ensure that running those containers on Windows takes advantage of the latest and greatest the platform has to offer. For fun, we thought we’d share some of the myths about what all this means for Windows containers and bust (dispel) them for you.</P> <P>&nbsp;</P> <P><STRONG><EM>Myth</EM></STRONG> – the K8s docker shim deprecation will break my Windows container builds for Kubernetes!</P> <P><STRONG><EM>Busted</EM></STRONG> – Docker Desktop for Windows will continue to build containers! That is what Docker built it to do! Kubernetes can run those containers using containerd.
(The small print: if your containers depend on Docker sockets (aka docker in docker), you’re out of luck.)</P> <P>&nbsp;</P> <P><STRONG><EM>Myth</EM></STRONG> – Docker Desktop for Windows uses containerd already!</P> <P><STRONG><EM>Busted</EM></STRONG> – Docker Desktop for Windows uses Docker Engine which is built on moby. Moby, as of this writing, partially depends on containerd. There is <A href="#" target="_blank" rel="noopener">ongoing work</A> to adapt moby to use more of containerd on Windows.</P> <P>&nbsp;</P> <P><STRONG><EM>Myth</EM></STRONG> – All the Docker CLIs I depend on on my local machine for my build process are broken!</P> <P><STRONG><EM>Busted</EM></STRONG> – Docker CLIs on your dev box are not being affected, and you may continue to use them to build container images. All this works thanks to the way Docker, containerd, and other tools conform to the <A href="#" target="_blank" rel="noopener">Open Container Initiative</A> (OCI) – a set of standards which help ensure tools used to build, publish, and run containers all interoperate.</P> <P>&nbsp;</P> <P><STRONG><EM>Myth</EM></STRONG> – If I upgrade my Azure Kubernetes Service (AKS) cluster to Kubernetes v1.24 (<A href="#" target="_blank" rel="noopener">when dockershim is currently planned for removal from kubelet</A>) my Windows containers won’t run!</P> <P><STRONG><EM>Busted</EM></STRONG> – Your upgrade will deploy the new containerd runtime on the Windows nodes. But the containers will run just fine.</P> <P>&nbsp;</P> <P><STRONG><EM>Myth</EM></STRONG> - I must rebuild all my containers and K8s clusters to use containerd!</P> <P><STRONG><EM>Busted</EM></STRONG> – The containerd change is only on the host runtime. Container images built with Docker and other tools that are OCI compliant do not require you to rebuild. You can still use the same container image to run with Kubernetes and containerd.
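</P>
<P>As a quick illustration of that interoperability (the image name here is hypothetical), an image built by Docker can be handed to containerd unchanged, because both sides speak OCI:</P>

```shell
# Build and export the image with Docker (any OCI-compliant builder works).
docker build -t myapp:v1 .
docker save myapp:v1 -o myapp.tar

# Import and run the same image with containerd's ctr client - no rebuild.
# (On a Kubernetes node, use the k8s.io namespace so the kubelet can see it.)
ctr --namespace k8s.io images import myapp.tar
ctr --namespace k8s.io run --rm docker.io/library/myapp:v1 myapp-test
```

<P>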
If you are using AKS, all you need to do is deploy your workload on a host which has containerd runtime. For more detail read the <A href="#" target="_blank" rel="noopener">Don’t Panic blog</A>.</P> <P>&nbsp;</P> <P><STRONG><EM>Myth</EM></STRONG> – I’m running my own DIY (do it yourself - unmanaged) K8s cluster, not using a distro, and removing dockershim will break me!</P> <P><STRONG><EM>Busted</EM></STRONG> – The K8s community has tested the containerd container runtime for both Linux and Windows to ensure that containers that work with the Docker Engine runtime work with containerd. Before dockershim is removed, you should replace the Docker runtime on both your Windows Server nodes and Linux nodes with containerd. You can find instructions on <A href="#" target="_blank" rel="noopener">how to configure runtimes in the community documentation</A>.</P> <P>&nbsp;</P> <P><STRONG><EM>Myth</EM></STRONG> – My air-gapped Kubernetes cluster will break with the move to containerd!</P> <P><STRONG><EM>Busted</EM></STRONG> – Air-gapped k8s operation still requires your container images to be available to the Windows host in the same way, either from a local container registry, or baked into the OS image’s local containerd image store. This is no different whether you are using dockershim or not.</P> <P>&nbsp;</P> <P>Finally, a note for customers looking into adopting <A href="#" target="_blank" rel="noopener">AKS-HCI</A>: The current preview release uses dockershim as the runtime on Windows. Containerd will be the default runtime in a future release and just like AKS, customers can expect a smooth transition – along with documented instructions on how to upgrade.</P> <P>&nbsp;</P> <P>As a part of the Kubernetes community, we are working to make sure you are covered. Docker and other tools that build OCI containers will work with the containerd runtime in Kubernetes.
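</P>
<P>For the DIY clusters mentioned above, the switch boils down to a node-level configuration change: point the kubelet at containerd’s endpoint instead of dockershim. A sketch of the flags involved (endpoints as documented upstream; verify against your Kubernetes and containerd versions):</P>

```shell
# Linux node: kubelet talks to containerd over its Unix socket.
kubelet --container-runtime=remote \
        --container-runtime-endpoint=unix:///run/containerd/containerd.sock

# Windows Server node: containerd exposes a named pipe instead.
kubelet.exe --container-runtime=remote ^
            --container-runtime-endpoint=npipe:////./pipe/containerd-containerd
```

<P>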
These topics, and more, are covered in the Kubernetes Special Interest Group for Windows (<A href="#" target="_blank" rel="noopener">SIG Windows</A>) where all are welcome. Please reach out to us if you have questions or feedback.</P> Mon, 05 Apr 2021 16:00:00 GMT crpeters 2021-04-05T16:00:00Z What’s new for Windows Containers on Windows Server 2022 <P>Today we announced the upcoming release of <A href="#" target="_blank" rel="noopener">Windows Server 2022</A> - and yes, it is packed with many new features across the board, including Security, Hybrid, and Application Platform. In this blog, we’ll go over some details about what’s new for Windows Containers in this next release.<BR />Before we dive into the bits, it’s important to reiterate that we’ve been listening to customer feedback and your input has been driving our work across platform improvements, application compatibility, and a better Kubernetes experience. In fact, many of the new features below are the result of work done upstream in direct collaboration with the Kubernetes community, benefiting not only Windows Server 2022, but also Windows Server 2019. Furthermore, we’ve been hard at work enhancing our tooling to help you containerize existing applications with Windows Containers.<BR />With that, let’s take a look at what’s new for containers in Windows Server 2022:<BR /><BR /><FONT size="6">Platform improvements</FONT><BR /><FONT size="5">General size improvements</FONT><BR />Image size plays a big role in the container world. When you deploy a containerized application, you want it to start as fast as it can, but before the container can start, the image layers need to be downloaded and extracted on the container host. Since the launch of Windows containers in Windows Server 2016, we’ve made huge improvements to the Server Core container image, which is recommended for Lift and Shift scenarios.
In Windows Server 2022, we continued to make progress in reducing the size of that image:</P> <TABLE width="624"> <TBODY> <TR> <TD width="118"> <P>Server Core image</P> </TD> <TD width="298"> <P>Insider build 10.0.20292.1</P> </TD> <TD width="208"> <P>LTSC 2019 (RTM)</P> </TD> </TR> <TR> <TD width="118"> <P>Size uncompressed, on disk (GB)</P> </TD> <TD width="298"> <P>2.73</P> </TD> <TD width="208"> <P>3.7</P> </TD> </TR> <TR> <TD width="118"> <P>Image name</P> </TD> <TD width="298"> <P></P> </TD> <TD width="208"> <P> ltsc2019</P> <P>&nbsp;</P> </TD> </TR> </TBODY> </TABLE> <P><BR />As you can see from the table above, the image has shrunk by almost a full gigabyte. Keep in mind that the image used for comparison is the latest Insiders build for Windows Server 2022 and the final size can change until general availability.<BR />Comparing these images is not trivial, as every month said images are serviced with feature and security updates - which end up increasing the image size. In a future blog, we will go into more details on how we compare the image sizes, how monthly updates affect them, and explain how the team is constantly working to reduce the Server Core image size overall.<BR /><FONT size="5">Nano Server support lifecycle changes and update</FONT><BR />Nano Server has been our recommended base container image for new modern applications since its launch with Windows Server 2016. Over the years we’ve seen the adoption of this image grow, and one recurring piece of feedback was about its support lifecycle - for customers running on a Windows Server 2019 host, the corresponding Nano Server 1809 image had its support lifecycle ending back in November 2020.
We heard your feedback and worked to extend that support period until Jan-9-2024 - aligning with the Server Core LTSC 2019/1809 base container image.<BR />With the upcoming release of Windows Server 2022, we also decided to release the Nano Server base container image with a longer support cycle - that image will be supported until 2026. This aligns with the mainstream support of Windows Server 2022 and gives customers peace of mind to use that image for that period.<BR /><FONT size="5">Virtualized time zone</FONT><BR />With Windows Server 2022, you can now configure the time zone of a container without needing access to the host. Previously, containers were required to mirror the time zone of the host, but now these configurations are virtualized entirely within the container. Upon boot, the container starts by referencing the host time zone to set its virtual configuration. If at any time you configure the container to follow a specific time zone, that configuration will persist across restarts through the end of the container's lifetime. This feature is essential for many applications which localize time-sensitive data to the region they are serving.<BR />To configure the time zone within a container you can simply use the tzutil command or the Set-TimeZone PowerShell cmdlet.<BR /><FONT size="5">Scalability improvements enhancing overlay networking support</FONT><BR />Windows Server 2022 aggregates several performance and scale improvements which have been made across the last 4 Semi-Annual Channel (SAC) releases of Windows Server but have not been backported into Windows Server 2019.
The areas of improvement are:</P> <UL> <LI>Fixed a port exhaustion issue with hundreds of Kubernetes services and pods on a node.</LI> <LI>Improved packet forwarding performance in vSwitch.</LI> <LI>Increased reliability across CNI restarts in Kubernetes.</LI> <LI>Several other improvements in the HNS control plane and in the data plane used by Windows Server Containers and Kubernetes networking.</LI> </UL> <P><FONT size="5">Direct Server Return (DSR) routing for overlay and l2bridge networks</FONT><BR />DSR is an implementation of asymmetric network load distribution in load balanced systems, meaning that the request and response traffic use different network paths. The use of different network paths helps avoid extra hops and reduces latency, which not only speeds up the response time between the client and the service but also removes some extra load from the load balancer.<BR />Using DSR is a transparent way to achieve increased network performance for your applications with little to no infrastructure changes.<BR /><BR /><FONT size="6">Application Compatibility</FONT><BR /><FONT size="5">gMSA updates</FONT><BR />Group Managed Service Accounts (gMSA) can be used with Windows containers to enable Active Directory (AD) authentication scenarios. Support for gMSA with containers was introduced in Windows Server 2019 and is <A href="#" target="_blank" rel="noopener">documented</A>. So far, it has required domain joining the container host to retrieve the gMSA’s password from AD.<BR />With Windows Server 2022, you can now use gMSAs with Windows containers without having to domain join the host. We have introduced a new model where an AD identity protected in a secret store can be used by the un-joined host to retrieve the gMSA password. Not having to domain join the host will make usage of gMSA in Kubernetes environments much more manageable and scalable.
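</P>
<P>For reference, the existing, domain-joined gMSA flow looks like this today (account and file names are illustrative; see the gMSA documentation for the full walkthrough):</P>

```shell
# On the (currently domain-joined) container host, generate a credential
# spec file for the gMSA using the CredentialSpec PowerShell module:
Install-Module CredentialSpec
New-CredentialSpec -AccountName WebApp01

# Start the container with that identity; the app then runs as the gMSA:
docker run --security-opt "credentialspec=file://WebApp01.json" `
    --hostname webapp01 -d myregistry/my-aspnet-app:v1
```

<P>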
We will update our documentation when this feature is available.<BR /><FONT size="5">IPv6 support</FONT><BR />As part of the upstream work mentioned above, we have introduced the first step toward full dual-stack IPv6 support for Kubernetes in Windows. The implementation enables IPv6 dual-stack support for L2Bridge based networks in the platform. Enabling IPv6 support end to end depends on the CNI used in Kubernetes and on a Kubernetes version &gt; 1.20. We’ll share more details about this implementation and how you can leverage it in a dedicated blog post soon.<BR /><BR /><FONT size="6">Better Kubernetes experience</FONT><BR /><FONT size="5">Multi-subnet support for Windows worker nodes with Calico for Windows</FONT><BR />The Host Networking Service (HNS) was restricting Kubernetes container endpoint configurations to only use the prefix length of the underlying subnet. We have improved HNS to allow the use of more restrictive subnets (i.e. subnets with a longer prefix length) as well as multiple subnets per Windows worker node. The first CNI making use of this functionality is Calico for Windows. You can find more information on Calico for Windows in <A href="" target="_blank" rel="noopener">this blog post</A>.<BR /><FONT size="5">HostProcess containers for node management</FONT><BR />Alongside Windows Server 2022 we will be introducing a new container type called HostProcess containers, which aims to extend the Windows container model to enable a wider range of Kubernetes cluster management scenarios. HostProcess containers run directly on the host and maintain behavior and access similar to that of a regular process. With HostProcess containers, users can package and distribute management operations and functionalities that require host access while retaining versioning and deployment methods provided by containers.
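</P>
<P>Details may change before release, but based on the upstream Kubernetes enhancement proposal, a HostProcess pod spec is expected to look roughly like this (names here are illustrative):</P>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-configuration
spec:
  securityContext:
    windowsOptions:
      hostProcess: true                      # run directly on the host
      runAsUserName: "NT AUTHORITY\\SYSTEM"  # any account available on the host
  hostNetwork: true     # HostProcess pods use the host's network namespace
  containers:
  - name: configure-node
    image: mcr.microsoft.com/windows/servercore:ltsc2019
    command: ["powershell.exe", "-Command", "Get-Service"]
  nodeSelector:
    kubernetes.io/os: windows
```

<P>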
This allows Windows containers to be used for a variety of device plugin, storage, and networking management scenarios in Kubernetes. With this comes the enablement of host network mode - allowing HostProcess containers to be created within the host's network namespace instead of their own.<BR />Using HostProcess containers, cluster operators no longer need to log onto and individually configure each Windows node for administrative tasks and management of Windows services. Operators can now utilize the container model to deploy management logic to as many clusters as needed with ease. HostProcess containers can be built on top of existing Windows Server 2019 (or later) base images, managed through the Windows container runtime, and run as any user that is available on or in the domain of the host machine. All in all, HostProcess containers are the best way to manage Windows nodes in Kubernetes and we’ll share more information as we approach the release of Windows Server 2022.<BR /><BR /><FONT size="6">Containerization tooling</FONT><BR /><FONT size="5">Updates to Windows Admin Center experience</FONT><BR />Back in June last year we embarked on a journey to create new tooling to help you containerize your existing applications. This work started after we talked to many customers who asked us how they could leverage Windows containers not only to modernize existing applications that are no longer maintained by developers, but also to retire legacy systems such as Windows Server 2003 and 2008. We then added new functionality to the Containers extension on Windows Admin Center to help you containerize existing web applications based on ASP.Net and the .Net Framework. You could either provide a static folder or a Visual Studio solution from your developer.
However, we also heard that in many cases the web application was deployed to a server and that was all Operations teams had, so we added support for Web Deploy files, allowing you to extract the app and its configuration from a running server to then containerize the application. We also added mechanisms to validate the image locally and push that image to Azure Container Registry.<BR />Next, we added basic management of Azure Container Registry and Azure Container Instances. This allows you to create and delete registries, manage images, start and stop new container instances - all directly from the Windows Admin Center UI.<BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="WS2022_Blog_01.png" style="width: 999px;"><img src=";px=999" role="button" title="WS2022_Blog_01.png" alt="WS2022_Blog_01.png" /></span><BR />The functionality above on the Containers extension is already available and documented <A href="#" target="_blank" rel="noopener">here</A>. This allows you to go from having an application back in a legacy system all the way to having a container image ready to be deployed on AKS or other Azure services. We have more to come for Windows Admin Center, so stay tuned for new updates soon.<BR /><FONT size="5">Azure Migrate App Containerization tooling now in Public Preview</FONT><BR />Windows Admin Center provides a modular approach to containerizing existing applications: you choose the source type, decide what to do with the image you create, and pick where to deploy it. However, we also heard from customers that a focused solution for AKS was necessary. Azure Migrate App Containerization is an end-to-end solution to containerize and move existing web applications to AKS.
Its step-by-step approach provides functionality to assess existing web servers, create a container image, push the image to ACR, create a Kubernetes deployment, and finally deploy it to AKS.<BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="WS2022_Blog_02.png" style="width: 999px;"><img src=";px=999" role="button" title="WS2022_Blog_02.png" alt="WS2022_Blog_02.png" /></span><BR />This functionality is reaching Public Preview today and you can try it out <A href="#" target="_blank" rel="noopener">here</A>. For this public preview, we’re targeting ASP.Net applications on IIS for Windows, and Java applications on Tomcat for Linux. More scenarios and functionalities will be added soon.<BR /><BR /><FONT size="7">Keep sending your feedback!</FONT><BR />Please continue sending us your feedback! Each new feature we design and ship is informed by your input. Some of the features above are still being worked on and your help and support in validating them is extremely valuable!<BR />You can send your comments and feedback our way either via the comments below or by opening a new issue on our <A href="#" target="_blank" rel="noopener">GitHub repo</A>.</P> Mon, 29 Mar 2021 16:18:58 GMT Vinicius Apolinario 2021-03-29T16:18:58Z New Year, New Resolution and New Era of Windows Containers! <P>My New Year’s resolution for 2021 is to write more. Before I knew it, it was already February, and I even celebrated the Chinese Lunar New Year two weeks ago. OK, I guess I can count the clock that way. This year is the Year of the Ox. In Chinese, Ox is “牛“. That same Chinese character also means super cool, super awesome.
I hope this speaks to the year of Windows containers too <img class="lia-deferred-image lia-image-emoji" src="" alt=":smile:" title=":smile:" />.</P> <P>&nbsp;</P> <P>There has been a lot to share and celebrate in the first two months.</P> <P>&nbsp;</P> <P><STRONG>AKS on Azure Stack HCI February Update</STRONG></P> <P>The biggest news is <A href="" target="_blank" rel="noopener">AKS on Azure Stack HCI February Update</A> released last Friday! Ben Armstrong led the team and delivered lots of new changes and fixes in this release. Nice job! The crown jewel for me is the <A href="#" target="_blank" rel="noopener">guide for evaluating AKS-HCI inside an Azure VM</A> authored by Matt McSpirit. Anyone with an Azure subscription can try out AKS on Azure Stack HCI in an Azure VM and of course spin up your Windows containers on it. You won’t be constrained by hardware availability. Matt’s team runs the customer engagement program. They are actively looking for customers who are interested in enrolling in the Early Access Program (EAP) of AKS-HCI. You can <A href="#" target="_blank" rel="noopener">fill out this survey</A> if you are interested.</P> <P>&nbsp;</P> <P><A href="" target="_blank" rel="noopener">AKS-HCI now supports strong authentication using Active Directory Credentials</A> is <SPAN>another great improvement that enables Active Directory authentication, fully integrated into Kubernetes authentication and configuration workflows. Sumit Lahiri did an excellent job explaining the architecture and what’s under the hood. The new scenario was delivered by </SPAN>the same team that we’ve been partnering with closely to bring more gMSA-related innovations to Windows containers to make the Lift and Shift experience easier for workloads that need Active Directory. If you are new to gMSA – Group Managed Service Account, it’s a service account that enables Windows containers to have an identity on the wire to allow Active Directory authentications.
Our documentation <A href="#" target="_blank" rel="noopener">gMSA for Windows containers</A> has more details. We’ll share gMSA-related improvements for Windows containers in coming months.</P> <P>&nbsp;</P> <P><STRONG>Docs. Docs. Docs.</STRONG></P> <P>The team took the quiet time during the Christmas holiday and made improvements to <A href="" target="_blank" rel="noopener">Windows container documentation.</A> I want to list them so everyone can see and benefit. &nbsp;</P> <P>&nbsp;</P> <UL> <LI>We added a new page <A href="#" target="_blank" rel="noopener">Lift and Shift to containers&nbsp;</A>under “Get Started”.&nbsp;&nbsp;It shares high-level benefits of using containers, applications supported in Windows containers, and a decision tree. This page will help those of you who just started looking at moving your Windows applications to containers.</LI> </UL> <P>&nbsp;</P> <UL> <LI>We added a new section under “Tutorials” - Manage containers with Windows Admin Center starting with this page “<A href="#" target="_blank" rel="noopener">Configure the Container extension on Windows Admin Center</A>”. If you are looking for some tooling to help containerize your apps and deploy them, this will be a great starting point.</LI> </UL> <P>&nbsp;</P> <UL> <LI>We updated the <A href="#" target="_blank" rel="noopener">Base image servicing cycle</A> page under “Reference” to reflect that we have extended the Nano Server container SAC1809 release to be supported to 1/9/2024, and the Windows container SAC1809 to be supported to 5/11/2021. &nbsp;There is a bit more on this that I’ll cover later in the blog.</LI> </UL> <P>&nbsp;</P> <UL> <LI>We added the <A href="#" target="_blank" rel="noopener">Events</A> page under “Reference” where all related content from the last 2-3 years of major Microsoft and industry conferences is now compiled. We spent a lot of effort and time on building quality slides and demos for events.
So even though some content could be slightly outdated, it can still be valuable if you are just starting on Windows containers.</LI> </UL> <P>&nbsp;</P> <UL> <LI>We added the GitHub page of <A href="#" target="_blank" rel="noopener">Windows Server container roadmap</A> under “Resources”.</LI> </UL> <P>&nbsp;</P> <P>Overall, we aim to make this documentation page a one-stop shop for you to find all the relevant resources no matter which stage you are in leveraging Windows containers to lift and shift and modernize your Windows applications. We welcome your feedback and would love to see you contribute directly to the documentation.</P> <P>&nbsp;</P> <P><STRONG>Lifecycle Management Update</STRONG></P> <P>We streamlined Server Core container and Nano Server container support and lifecycle management. Some of you may recall the Nano Server container SAC1809 release was about to reach its end of life (EOL) in Nov 2020. We listened to your feedback, especially from those of you in the Kubernetes community who moved to Nano Server containers for conformance testing in addition to regular workload use. Nano Server container SAC1809 now has the same EOL as Server Core container LTSC2019/1809 on 1/9/2024.</P> <P>&nbsp;</P> <P><A href="#" target="_blank" rel="noopener">The Windows container</A>, sometimes also referred to as “the 3<SUP>rd</SUP> Windows container”, has been gaining popularity thanks to its broader Windows API support. Recently it was brought to our attention that its SAC1809 release is going to reach its EOL on 5/11/2021. We understand customers who need to stay on Windows Server 2019 as the container host are concerned.
That is because those customers can only use containers released in the same wave with the same Build number (also referred to as "Major release number") due to <A href="#" target="_blank" rel="noopener">Windows container host and guest version compatibility</A>.&nbsp;We are actively looking at options and will update when we are ready.</P> <P><STRONG>&nbsp;</STRONG></P> <P><STRONG>New Development from the .NET Team</STRONG></P> <P>We work very closely with the .NET Team, and I know that many of you run .NET apps on Windows containers. There are two recent blogs from Richard Lander of the .NET team that I want to share.</P> <P>&nbsp;</P> <UL> <LI><A href="#" target="_blank" rel="noopener">Staying safe with .NET containers | .NET Blog (</A>.</LI> </UL> <P>You will notice this blog is mainly about Linux with only a small portion on Windows. I take it as a positive thing for Windows because it mostly just works with our current company-wide security practices. But we stay vigilant and keep innovating. There will be updates to our docs and blogs coming related to Windows Server container security.</P> <P>&nbsp;</P> <UL> <LI><A href="#" target="_blank" rel="noopener">Announcing .NET 6 Preview 1 | .NET Blog (</A>.</LI> </UL> <P>In the Container section, you will notice this is mentioned:&nbsp; “Improve&nbsp;<A href="#" target="_blank" rel="noopener">scaling in containers</A>, and better support for&nbsp;<A href="#" target="_blank" rel="noopener">Windows process-isolated containers</A>.” This issue in <A href="#" target="_self">process-isolated containers</A>&nbsp;was reported by a few of our AKS customers. In a nutshell, the issue is that CPU and memory limits are not honored by the .NET runtime in process-isolated containers.
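</P>
<P>The host/guest compatibility rule mentioned above can be sketched as a quick check (build numbers are the third field of the Windows version, e.g. 17763 for Windows Server 2019; this is an illustration of the documented rule, not an official API):</P>

```python
def image_runs_on_host(host_build: int, image_build: int,
                       hyperv_isolation: bool = False) -> bool:
    """Process isolation requires the container image's build number to
    match the host's; Hyper-V isolation also allows older images to run
    on newer hosts."""
    if image_build == host_build:
        return True
    return hyperv_isolation and image_build < host_build
```

<P>For example, a Windows Server 2019 (17763) host cannot run a newer 20348 image at all, while a 20348 host can still run a 17763 image under Hyper-V isolation.</P>
<P>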
<SPAN>We are happy to see the .NET Team </SPAN><SPAN>making improvements that will better take advantage of the capabilities of Windows containers.</SPAN></P> <P>&nbsp;</P> <P>I also wanted to call out this update <A href="#" target="_blank" rel="noopener">.NET 5.0 Support for Windows Server Core Containers</A> that was made available last November. Both the .NET team and our team are very curious about your feedback on this.</P> <P>&nbsp;</P> <P><STRONG>What’s Ahead</STRONG></P> <P>Two exciting events are coming up on the horizon.</P> <UL> <LI><A href="#" target="_blank" rel="noopener"><STRONG>Microsoft Spring Ignite 2021</STRONG> </A>is next week March 2-4. I am excited that quite a few new things our team has been working on will be showcased.</LI> <LI><A href="#" target="_blank" rel="noopener"><STRONG>Microsoft Global MVP Summit 2021</STRONG></A> is also coming on March 29-31. MVPs are our best customers and friends. I am excited to see some old and new friends again.</LI> </UL> <P>&nbsp;</P> <P>That reminds me: recently I had a few email exchanges and GitHub discussions with one of our MVPs, Tobias Fenster, CTO of COSMO CONSULT Group based in Germany. To my pleasant surprise, Tobias has been writing blogs on Windows containers, like this one <A href="#" target="_blank" rel="noopener">”Building Docker images for multiple Windows Server versions using self hosted containerized Github runners”</A>. Hidden gems! Go check out Tobias’s <A href="#" target="_blank" rel="noopener">presentation list</A>.</P> <P>&nbsp;</P> <P>To close, I really liked what Satya said in the <A href="#" target="_self">Microsoft Fiscal Year 2021 2nd Quarter Earnings Conference Call in January</A>:</P> <P>&nbsp;</P> <P><EM>“What we are witnessing is the dawn of a second wave of digital transformation sweeping every company and every industry.</EM></P> <P><EM>&nbsp;Digital capability is key to both resilience and growth.</EM></P> <P><EM>&nbsp;It’s no longer enough to just adopt technology.
Businesses need to&nbsp;build&nbsp;their own technology to compete and grow. “ </EM></P> <P><EM>&nbsp;</EM></P> <P>Borrowing that perspective, it’s no longer just about adopting Windows containers to lift and shift and modernize with AKS and AKS on Azure Stack HCI. It’s about leveraging Windows containers to differentiate your company and grow to new heights.</P> <P>&nbsp;</P> <P>As always, we’d love to hear from you: how you use Windows containers - on AKS, AKS on Azure Stack HCI, or other environments - and what we can do better to help you make your digital transformation journey easier.&nbsp;</P> <P>&nbsp;</P> <P>Weijuan</P> <P>Twitter: @WeijuanLand</P> <P>Email:</P> Fri, 26 Feb 2021 19:57:04 GMT Weijuan Shi Davis 2021-02-26T19:57:04Z MSMQ and Windows Containers <P>Ever since we introduced Windows Containers in Windows Server 2016, we’ve seen customers do amazing things with it - both with new applications that leverage the latest and greatest of .Net Core and other cloud technologies, and with existing applications that were migrated to run on Windows Containers. MSMQ falls into this second scenario.</P> <P>MSMQ is a message queuing service launched in 1997 that became extremely popular in the 2000s with enterprises using .Net and WCF applications. Today, as companies look to modernize existing applications with Windows Containers, many customers have been trying to run these MSMQ-dependent applications on containers “as is” - which means no code changes or any adjustments to the application.
However, MSMQ has different deployment options and not all are currently supported on Windows Containers.</P> <P>In the past year, our team of developers have tested and validated some of the scenarios for MSMQ and we have made amazing progress on this. This blog post will focus on the scenarios that work today on Windows Containers and some details on these scenarios. In the future we’ll publish more information on how to properly set up and configure MSMQ for these scenarios using Windows Containers.</P> <P>&nbsp;</P> <P><FONT size="5">Supported Scenarios</FONT></P> <P>MSMQ can be deployed on different modes to support different needs from customers. Between private and public queues, transactional or not, and anonymous or with authentication, MSMQ can fit different scenarios - but not all can be easily moved to Windows Containers. The table below lists the currently supported scenarios:</P> <TABLE width="642"> <TBODY> <TR> <TD width="70px"> <P><STRONG>Scope</STRONG></P> </TD> <TD width="111px"> <P><STRONG>Transactional</STRONG></P> </TD> <TD width="218px"> <P><STRONG>Queue location</STRONG></P> </TD> <TD width="124px"> <P><STRONG>Authentication</STRONG></P> </TD> <TD width="118px"> <P><STRONG>Send and receive</STRONG></P> </TD> </TR> <TR> <TD width="70px"> <P>Private</P> </TD> <TD width="111px"> <P>Yes</P> </TD> <TD width="218px"> <P>Same container (single container)</P> </TD> <TD width="124px"> <P>Anonymous</P> </TD> <TD width="118px"> <P>Yes</P> </TD> </TR> <TR> <TD width="70px"> <P>Private</P> </TD> <TD width="111px"> <P>Yes</P> </TD> <TD width="218px"> <P>Persistent volume</P> </TD> <TD width="124px"> <P>Anonymous</P> </TD> <TD width="118px"> <P>Yes</P> </TD> </TR> <TR> <TD width="70px"> <P>Private</P> </TD> <TD width="111px"> <P>Yes</P> </TD> <TD width="218px"> <P>Domain Controller</P> </TD> <TD width="124px"> <P>Anonymous</P> </TD> <TD width="118px"> <P>Yes</P> </TD> </TR> <TR> <TD width="70px"> <P>Private</P> </TD> <TD width="111px"> <P>Yes</P> </TD> <TD 
width="218px"> <P>Single host (two containers)</P> </TD> <TD width="124px"> <P>Anonymous</P> </TD> <TD width="118px"> <P>Yes</P> </TD> </TR> <TR> <TD width="70px"> <P>Public</P> </TD> <TD width="111px"> <P>No</P> </TD> <TD width="218px"> <P>Two hosts</P> </TD> <TD width="124px"> <P>Anonymous</P> </TD> <TD width="118px"> <P>Yes</P> </TD> </TR> <TR> <TD width="70px"> <P>Public</P> </TD> <TD width="111px"> <P>Yes</P> </TD> <TD width="218px"> <P>Two hosts</P> </TD> <TD width="124px"> <P>Anonymous</P> </TD> <TD width="118px"> <P>Yes</P> </TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <P>The scenarios above have been tested and validated by our internal teams. In fact, here is some other important information about the results of these tests:</P> <UL> <LI>Isolation mode: All tests worked fine with both isolation modes for Windows containers, process and Hyper-V isolation.</LI> <LI>Minimum OS and container image: We validated the scenarios above with Windows Server 2019 (or Windows Server, version 1809 for SAC), so that is the minimum version recommended for use with MSMQ.</LI> <LI>Persistent volume: Our testing with persistent volumes worked fine. In fact, we were able to run MSMQ on Azure Kubernetes Service (AKS) using Azure Files.</LI> </UL> <P><FONT size="5">Authentication with gMSA</FONT></P> <P>From the table above, you can deduce that the only scenario we don’t support is for queues that require authentication with Active Directory. The integration of gMSA with MSMQ is currently not supported as MSMQ has dependencies on Active Directory that are not in place at this point. Our team will continue to listen to customer feedback, so let us know if this is a scenario you and your company are interested in. You can file a request/issue on our <A href="#" target="_blank">GitHub repo</A> and we’ll track customer feedback there.</P> <P>&nbsp;</P> <P>Let us know how the validation of MSMQ goes with your applications.
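</P>
<P>As a starting point for your own validation, here is a minimal Dockerfile sketch for the single-container, private-queue scenario (feature installation only; the application setup comments are placeholders for your own app):</P>

```dockerfile
# escape=`
FROM mcr.microsoft.com/windows/servercore:ltsc2019

# Add the MSMQ feature to the image; the service starts with the container.
RUN powershell -Command Add-WindowsFeature MSMQ-Server

# Copy in your application and create its private queues at startup, e.g.:
# COPY app/ C:/app/
# ENTRYPOINT ["C:/app/MyMsmqApp.exe"]
```

<P>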
We’re looking forward to hearing back from you as you continue to modernize your applications with Windows containers.</P> Mon, 30 Nov 2020 17:00:00 GMT Vinicius Apolinario 2020-11-30T17:00:00Z November 2020 Containers extension updates on Windows Admin Center <P>We are back with more features to make working with containers easier with Windows Admin Center. The features in this update focus on making your experience working with containers on-premises and in Azure easier.</P> <P>&nbsp;</P> <P><STRONG>New Containers extension layout</STRONG></P> <P>With the growing set of functionalities in the Containers extension, we have reformatted the tool to be easier to use and navigate. The tool now features its own extension tool bar instead of tabs, formatted to group tools based on their scenarios. We have added a section specifically for Azure to house the functionality coming in this and future releases.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Nov2020-UpdateBlog Img01.png" style="width: 999px;"><img src=";px=999" role="button" title="Nov2020-UpdateBlog Img01.png" alt="Nov2020-UpdateBlog Img01.png" /></span></P> <P>&nbsp;</P> <P><STRONG>Manage Azure Container Registry</STRONG></P> <P>You can now manage your Azure Container Registry without having to navigate to the portal. 
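</P> <P>For reference, the same registry operations are scriptable with the Azure CLI. Here is a sketch, where the resource group, registry, and image names are hypothetical placeholders:</P>

```shell
# Sketch: Azure Container Registry operations (hypothetical names throughout).
az acr create --resource-group myGroup --name myregistry --sku Basic   # create a registry
az acr login --name myregistry                                         # authenticate Docker to it
docker pull myregistry.azurecr.io/myapp:v1                             # pull into the local repository
az container create --resource-group myGroup --name myapp-instance \
  --image myregistry.azurecr.io/myapp:v1 --os-type Windows             # run as an Azure Container Instance
```

<P>The Containers extension surfaces these same operations in its UI.</P> <P>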
From this tool, you can create new registries, remove images, pull images into your local image repository, or run an instance of an image as an Azure Container Instance.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Nov2020-UpdateBlog Img02.png" style="width: 999px;"><img src=";px=999" role="button" title="Nov2020-UpdateBlog Img02.png" alt="Nov2020-UpdateBlog Img02.png" /></span></P> <P><STRONG>&nbsp;</STRONG></P> <P><STRONG>Manage Azure Container Instances</STRONG></P> <P>In tandem with the new features to manage and run container images out of your Azure Container Registry, you can now also manage your Azure Container Instances from Windows Admin Center. From here you can check the instance details, such as FQDN and IP address, as well as stop, delete, or restart container instances. In addition, you have a quick link to the Azure Portal for advanced management of each instance.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Nov2020-UpdateBlog Img03.png" style="width: 999px;"><img src=";px=999" role="button" title="Nov2020-UpdateBlog Img03.png" alt="Nov2020-UpdateBlog Img03.png" /></span></P> <P>&nbsp;</P> <P><STRONG>Containers extension available on public extension feed</STRONG></P> <P>Since our last update, the Containers extension has been available on the public extension feed. Navigate from the top right in Windows Admin Center to Settings &gt; Extensions &gt; Available extensions to install the Containers tool. If you already have the tool installed, you can update it from the Installed extensions tab.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Nov2020-UpdateBlog Img04.png" style="width: 999px;"><img src=";px=999" role="button" title="Nov2020-UpdateBlog Img04.png" alt="Nov2020-UpdateBlog Img04.png" /></span></P> <P>&nbsp;</P> <P><STRONG>Feedback</STRONG></P> <P>As before, please continue sending us your feedback! 
Each new feature we design and ship is informed by your input. You can send your comments and feedback our way either via comments below, or our&nbsp;<A href="#" target="_blank" rel="noopener">GitHub repo</A>&nbsp;by opening a new issue.</P> Mon, 30 Nov 2020 17:00:00 GMT Vinicius Apolinario 2020-11-30T17:00:00Z Microsoft Ignite 2020: Windows Containers and Azure Kubernetes Service on Azure Stack HCI! <P>If Azure Kubernetes Service (AKS) was the first Kubernetes-based home on Azure for Windows containers, I am so happy that we now have a brand-new second home: AKS on Azure Stack HCI, announced at Ignite this week! As the driver building and executing the product strategy to help customers lift and shift, and modernize, traditional Windows apps with Windows containers, I am thrilled we are bringing AKS on Azure Stack HCI to customers with on-premises or hybrid needs for Windows containers.</P> <P>&nbsp;</P> <P>As a starter, from the official doc <A href="#" target="_self">here</A>, “Azure Kubernetes Service on Azure Stack HCI is an on-premises implementation of Azure Kubernetes Service (AKS), which automates running containerized applications at scale. Azure Kubernetes Service is now in preview on Azure Stack HCI, making it quicker to get started hosting Linux and Windows containers in your datacenter.”</P> <P>&nbsp;</P> <P>I had the honor of participating as an SME in a few Digital Breakout sessions and the Ask the Experts sessions related to AKS on Azure Stack HCI this Ignite. I was blown away by the strong interest from the community. Lots of great questions were asked. 
To make it easy, I compiled the following relevant links for folks who want to get started:</P> <P>&nbsp;</P> <P>General materials:</P> <UL> <LI>Julia White’s Blog <A href="#" target="_blank">Bring innovation anywhere with Azure’s multi-cloud, multi-edge hybrid capabilities</A> where AKS on Azure Stack HCI was announced.</LI> <LI>AKS on Azure Stack HCI documentation: <A href="#" target="_blank"></A> <UL> <LI>Specifically, here is the tutorial for Windows containers on AKS on Azure Stack HCI:&nbsp;<A href="#" target="_blank"></A></LI> <LI>As a reference, here is the tutorial to run Windows containers on AKS:&nbsp;<A href="#" target="_blank"></A></LI> </UL> </LI> <LI>Download the preview: <A href="#" target="_blank"></A></LI> </UL> <P>Ignite Sessions:</P> <UL> <LI>Ben Armstrong is the product owner of AKS on Azure Stack HCI. So definitely check out his session (4 demos!!!): <A href="#" target="_blank">Azure Kubernetes Service on Azure Stack HCI</A>.</LI> <LI>Roanne Sones’s session <A href="#" target="_blank">Transform your Windows Server workloads on Azure</A> which demoed how to quickly modernize a .NET app and deploy it onto AKS on Azure Stack HCI.</LI> <LI>Brendan Burns’s session on <A href="#" target="_blank">Enterprise-grade Kubernetes on Azure</A> where he’ll also cover AKS on Azure Stack HCI.</LI> <LI>All three sessions above have accompanying Ask the Expert sessions. So be sure to check them out too. 
Someone might be asking or answering your questions.</LI> <LI>Skilling session <A href="#" target="_blank">Quick app containerization with Azure Kubernetes Service on Azure Stack HCI</A> by Subodh Bhargava and Vinicius Apolinario where Vinicius showed you how to use WAC to containerize a .NET app and deploy to AKS on Azure Stack HCI with ease.</LI> </UL> <P>If you are interested in learning more about WAC tooling related to containers, check out two previous blog posts from Vinicius:</P> <UL> <LI><SPAN><A href="" target="_blank">Announcement: New updates to the Containers extension in Windows Admin Center</A></SPAN></LI> <LI><SPAN><A href="" target="_blank">September-2020 updates to Containers extension on Windows Admin Center</A></SPAN></LI> </UL> <P>&nbsp;</P> <P>Have fun at Ignite. Have fun trying out Windows containers on AKS on Azure Stack HCI. Keep your questions coming in. Share your feedback with us. Thank you!</P> <P>&nbsp;</P> <P>Weijuan</P> <P>Twitter: @WeijuanLand</P> <P>Email:</P> Thu, 24 Sep 2020 06:38:42 GMT Weijuan Shi Davis 2020-09-24T06:38:42Z September-2020 updates to Containers extension on Windows Admin Center <P>It’s time for some more container goodness coming your way! As you probably know, we have been adding some <A href="" target="_blank" rel="noopener">new capabilities</A> into the Containers extension of Windows Admin Center. In recent months we added capabilities to help you not only better manage container images and containers, but also build new container images based on the source of your application. 
With Windows Admin Center, customers can now containerize existing applications, <A href="" target="_blank" rel="noopener">even if you don’t have the code</A> from which the app was built, and with no developer involvement.</P> <P>&nbsp;</P> <P>Today we are adding some cool new functionality to the extension again!</P> <P>&nbsp;</P> <P><STRONG>Install and configure the container host</STRONG></P> <P>Until now, the Containers extension assumed the container host was already configured and ready to go. If that was not the case, users would have to go to the server and install Docker and its dependencies. Now, we have a totally streamlined process inside of Windows Admin Center itself:</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sep2020-UpdateBlog Img01.png" style="width: 723px;"><img src=";px=999" role="button" title="Sep2020-UpdateBlog Img01.png" alt="Sep2020-UpdateBlog Img01.png" /></span></P> <P>When you select the option to install Docker from Windows Admin Center, we will download the package, install the Docker module and the Containers feature, and restart the server. When the server is back online, we will ensure the Docker service is up and running so your container host is properly configured.</P> <P>&nbsp;</P> <P><STRONG>Common base container images to pull</STRONG></P> <P>When getting started with containers, the first thing you want to do is to ensure you have the base images pulled so that when you run a new container or create a new image, you don’t have to wait for the image pull times. However, sometimes we’re not even sure which images to pull. 
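</P> <P>The common Windows base images are all hosted on the Microsoft Container Registry, so pre-pulling them is a one-liner each. A sketch (the :1809 tag matches a Windows Server 2019 host; adjust the tag to your host's version):</P>

```shell
# Sketch: pre-pull the most common Windows base container images.
# The tag chosen must not be newer than the container host's OS version.
docker pull mcr.microsoft.com/windows/nanoserver:1809   # smallest base image
docker pull mcr.microsoft.com/windows/servercore:1809   # maximum app compatibility
docker pull mcr.microsoft.com/windows:1809              # largest API surface
```

<P>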
To help with that, we’re adding an option to check the most common Windows base container images:</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sep2020-UpdateBlog Img02.png" style="width: 530px;"><img src=";px=999" role="button" title="Sep2020-UpdateBlog Img02.png" alt="Sep2020-UpdateBlog Img02.png" /></span></P> <P>Keep in mind that while the Windows Admin Center UI allows you to select any of the images available, the pull will fail if you try to pull an image that has an OS version newer than that of the container host you’re targeting. For example: If you have a Windows Server 2019 container host, you can pull LTSC 2019 images or older - not newer.</P> <P>&nbsp;</P> <P><STRONG>Disabling functionalities on Kubernetes nodes</STRONG></P> <P>One of the flexibilities of the Containers extension on Windows Admin Center is that you can target any Windows container host - even ones that are part of a Kubernetes cluster. However, some actions on the Containers extension might cause issues in your Kubernetes environment.</P> <P>For that reason, we are disabling destructive functionalities when Windows Admin Center finds the “kubelet.exe” process running on a container host. The disabled functionalities when targeting a Kubernetes node are:</P> <UL> <LI>On the containers tab: <UL> <LI>End containers</LI> <LI>Delete containers</LI> </UL> </LI> <LI>On the images tab: <UL> <LI>Delete container images</LI> <LI>Run container images</LI> </UL> </LI> </UL> <P>&nbsp;</P> <P><STRONG>Containers extension now available on public extension feed</STRONG></P> <P>In previous updates, the Containers extension was available on the Insiders feed, which required users to manually add that feed to Windows Admin Center. 
As of today, new updates will go to the public extension feed, so you don’t have to do anything - other than install/update the extension:</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sep2020-UpdateBlog Img04.png" style="width: 999px;"><img src=";px=999" role="button" title="Sep2020-UpdateBlog Img04.png" alt="Sep2020-UpdateBlog Img04.png" /></span></P> <P>&nbsp;</P> <P><STRONG>We want your feedback!</STRONG></P> <P>We’ve been hammering this message over and over, but it is never too much! We need your feedback!</P> <P>The team would love to understand how you are using the Containers extension, what is working and what is not, as well as what you would like to see added! Do you have an app you’d like to see containerized with Windows Admin Center that you currently can't? Great! Let us know!</P> <P>You can send your comments and feedback our way either via comments below, or our <A href="#" target="_blank" rel="noopener">GitHub repo</A> by opening a new issue.</P> <P>&nbsp;</P> <P>Find on Twitter <A href="#" target="_blank" rel="noopener">@vrapolinario</A></P> Wed, 16 Sep 2020 22:49:37 GMT Vinicius Apolinario 2020-09-16T22:49:37Z Announcement: Adding WebDeploy support to container image creation in Windows Admin Center <DIV id="tinyMceEditorambguo_2" class="mceNonEditable lia-copypaste-placeholder">&nbsp;</DIV> <DIV id="tinyMceEditorambguo_3" class="mceNonEditable lia-copypaste-placeholder">&nbsp;</DIV> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="WebDeploy_LI.jpg" style="width: 999px;"><img src=";px=999" role="button" title="WebDeploy_LI.jpg" alt="WebDeploy_LI.jpg" /></span></P> <P>&nbsp;</P> <P>Last month we <A href="" target="_self">announced a collection</A> of updates to our containers extension tool in Windows Admin Center. 
These updates drew a lot of excitement and feedback from users, including requests for new ways to take web applications that are already deployed on Windows Server IIS servers and containerize them into a Windows container.</P> <P>&nbsp;</P> <P>With this new preview release, we have included new functionality for users to build new Windows container images from WebDeploy files. WebDeploy is an officially supported Microsoft tool that simplifies the migration, management, and deployment of IIS applications. With WebDeploy, a user can extract a running IIS application into a package through IIS Manager or PowerShell and use the containers extension to create a new container image from this package. Existing documentation on how to extract an application is available on <A href="#" target="_blank" rel="noopener">Microsoft Docs</A>.</P> <P>&nbsp;</P> <H2>How to get the new functionality</H2> <P>The new functionality is being made available for Windows Admin Center under a new Insiders extension feed. If you don’t have Windows Admin Center installed, you can download it from&nbsp;<A href="#" target="_blank" rel="noopener">here</A>. While the new feed is for Insiders, you do not need the Insiders build of Windows Admin Center.</P> <P>&nbsp;</P> <P>After installing Windows Admin Center, you can add the new Insiders feed by going to Settings&gt;Extensions&gt;Feeds and adding the new feed “<A href="#" target="_blank" rel="noopener">”</A>:</P> <P>&nbsp;</P> <P class="lia-align-center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="ambguo_1-1597083336899.png" style="width: 999px;"><img src=";px=999" role="button" title="ambguo_1-1597083336899.png" alt="ambguo_1-1597083336899.png" /></span></P> <P>&nbsp;</P> <P>Once you have added the new feed, you will be able to see the updated Containers extension available under Available extensions. 
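</P> <P>As a reference for the extraction step mentioned above, packaging a running IIS site with WebDeploy from the command line looks roughly like this. The site name and output path are hypothetical, and the msdeploy.exe location shown is its default install path:</P>

```shell
REM Sketch: export a running IIS site into a WebDeploy package (hypothetical names).
"C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" ^
  -verb:sync ^
  -source:appHostConfig="Default Web Site" ^
  -dest:package=C:\temp\mysite.zip
```

<P>The resulting .zip package is what the Containers extension consumes to build the new container image.</P> <P>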
You can now go ahead and install the extension to enable the new functionality.</P> <P>&nbsp;</P> <P>Notice that the Containers extension will only show up under Server Manager, and only if you are targeting an existing container host. For more information on how to install Docker on Windows Server, check out our&nbsp;<A href="#" target="_blank" rel="noopener">documentation</A>.</P> <P>&nbsp;</P> <H2>Let us know what you think</H2> <P>This new functionality was built based on your feedback from our last update. Thank you for engaging with us, and we hope to continue to provide useful tools for your work with Windows containers. You can continue to share your feedback either here in the comments, suggest a new feature via&nbsp;<A href="#" target="_blank" rel="noopener">User Voice</A>, or start a conversation via&nbsp;<A href="" target="_blank" rel="noopener">Tech Community</A>.</P> <P>&nbsp;</P> Mon, 10 Aug 2020 19:02:34 GMT ambguo 2020-08-10T19:02:34Z New Windows Containers GitHub repo - Roadmap and more! <P>Over the years the Windows Containers team has strived to listen to customer feedback to implement features and allow new scenarios based on what we hear from you. Today we’re taking another step in expanding our ability to interact directly with customers! We are proud to announce a new GitHub repository dedicated to customer engagement and detailing our product roadmap.</P> <P>The GitHub repo is available at <A href="#" target="_blank" rel="noopener"></A>. This is where you can interact with the product team. If you’re having trouble with Windows Containers, want to ask for new features, track the status of existing issues, or check out our plan for the future, this is the place to do so!</P> <P>&nbsp;</P> <H2>Interacting with the product team</H2> <P>One of the greatest things about repositories on GitHub is that they allow for interaction between repo owners and the public. 
The Issues tab allows you to add new items to our repository that will then be triaged by the product team. You can open issues to track bugs, feature requests, general questions, and more.</P> <P>The product team, composed of members across the platform (networking, security, and kernel), will regularly monitor this channel and triage new items as they come in. We will interact with users on these issues and place each item at the appropriate stage on our roadmap.</P> <P>&nbsp;</P> <H2>Check out our roadmap</H2> <P>For the first time, the Windows Containers team will be publishing a roadmap of features and scenarios that are in progress, planned, or under consideration. We hope that this model will demonstrate our dedication to the investment we’re driving into the platform alongside our desire to meet your needs as the customer. Many of your production and mission-critical workloads rely on the quality and stability of our product, so it is our utmost priority that the issues you care about most are addressed directly.</P> <P>The Projects tab on our repository will display a Windows Containers roadmap project on which you can check backlog, planned items, what is in progress, what is currently in public preview, and what has been released:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="WindowsContainersRoadmap-Img01.png" style="width: 999px;"><img src=";px=999" role="button" title="WindowsContainersRoadmap-Img01.png" alt="WindowsContainersRoadmap-Img01.png" /></span></P> <P>&nbsp;</P> <P>The items in our roadmap might come from our customers or directly from the product team. You can then open the items to check the details, comment, up-vote, etc.</P> <P>&nbsp;</P> <P>Let us know what you think about this new channel. 
We hope you enjoy it and more importantly: that you use it to interact with us!</P> Tue, 23 Jun 2020 16:00:00 GMT Vinicius Apolinario 2020-06-23T16:00:00Z Announcement: New updates to the Containers extension in Windows Admin Center <P>If you are accustomed to building and managing applications in virtual machines (VMs), getting started with containers can be a daunting proposition. Containers offer a different approach to packaging and instantiating your workloads, along with benefits such as faster startup and better resource utilization. For a developer, tools like Visual Studio and Visual Studio Code are filled with add-ons to package the application being developed into a container. However, up until now IT pros have been left to their own devices on how to modernize their applications, and more importantly, how to take the leap from VMs to containers.</P> <P>Today we are pleased to announce an update to the containers extension within Windows Admin Center to address that. Windows Admin Center has become the number one tool that Windows admins have come to love and use in their day-to-day activities. The containers extension has been providing a great experience for troubleshooting containers running on your container host, such as opening a console connection to a container, checking logs, monitoring resource consumption, and more. However, it does assume you have everything in place and already running. With the new functionalities introduced today, we are enabling Windows admins, Ops, and IT operators to easily get started with Windows containers and containerize their first application. Let’s take a look at these new possibilities:</P> <P>&nbsp;</P> <H2>How to get the new functionality</H2> <P>The new functionalities are being made available for Windows Admin Center under a new Insiders extension feed. If you don’t have Windows Admin Center installed, you can download it from <A href="#" target="_blank" rel="noopener">here</A>. 
While the new feed is for Insiders, you don’t need the Insiders build of Windows Admin Center.</P> <P>After installing Windows Admin Center, you can add the new Insiders feed by going to Settings&gt;Extensions&gt;Feeds and adding the new feed “<A href="#" target="_blank" rel="noopener">”</A>:</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Image01.png" style="width: 999px;"><img src=";px=999" role="button" title="Image01.png" alt="Image01.png" /></span></P> <P>Once you have added the new feed, you will be able to see the new Containers extension available under Available extensions. You can now go ahead and install the extension to enable the new functionality.</P> <P>Notice that the Containers extension will only show up under Server Manager, and only if you are targeting an existing container host. For more information on how to install Docker on Windows Server, check out our <A href="#" target="_blank" rel="noopener">documentation</A>.</P> <P>&nbsp;</P> <H2>Pulling container images</H2> <P>Your journey with Windows containers will most likely start by pulling container images to your container host. These images might be on Docker Hub where we host base images, such as Server Core, Nano Server, and many others. In addition, other images you might need may require you to log in to their container registry. With this update, Windows Admin Center makes it easier to pull container images:</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Image02.png" style="width: 999px;"><img src=";px=999" role="button" title="Image02.png" alt="Image02.png" /></span></P> <P>All the information needed to pull an image is there. If you have to authenticate to pull the image, you can do that as well, and we’ll store that information for future use.</P> <P>&nbsp;</P> <H2>Spinning up a new container</H2> <P>If you have a container image ready to go, you can now start a new container from Windows Admin Center. 
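</P> <P>Under the hood, the Run dialog maps to a plain docker run invocation. An equivalent CLI sketch, where the port mapping and resource limits are hypothetical examples (the IIS image shown is one of the standard Windows images):</P>

```shell
# Sketch: run a Windows container with isolation, port, and resource options.
docker run -d --isolation=process -p 8080:80 -m 2g --cpus 1 \
  mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
```

<P>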
Simply select the container image and click Run:</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Image03.png" style="width: 999px;"><img src=";px=999" role="button" title="Image03.png" alt="Image03.png" /></span></P> <P>Once the required information is specified, you can fire up a new container. Here you have the option to select which isolation mode you want to use, which ports to open, and memory and CPU allocation. However, you are not limited to that. The “Add” option gives you the ability to add other “docker run” parameters, such as environment variables, persistent storage, and much more.</P> <P>&nbsp;</P> <H2>Creating new container images</H2> <P>Probably one of the most complex concepts to grasp, creating new container images requires you to write a dockerfile with the instructions on how to build a container image. There is a virtually infinite number of ways to write a dockerfile, but here we want to make the process as simple as possible for Windows admins who just want to take an application and put it on a container image.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Image04.png" style="width: 999px;"><img src=";px=999" role="button" title="Image04.png" alt="Image04.png" /></span></P> <P>In this first update to the Containers extension we are limiting the type of applications you can containerize to IIS Web Applications. While we plan to add more options in the future, we do want to hear customer feedback on how we’re doing on this, what can be improved for the existing options, and which applications customers want to containerize.</P> <P>There are a few options and scenarios we are enabling in this first update:</P> <UL> <LI>Static Web Applications: A static application running on IIS that is not dependent on any framework.</LI> <LI>ASP.Net applications: ASP.Net applications can be containerized using the ASP.Net container images. 
As part of this process we will look for the Visual Studio solution to map which projects you want to containerize.</LI> <LI>Existing dockerfiles: If you have a dockerfile ready, you can use this tool to rebuild your container image. This is especially useful when you need to rebuild the container image because of a change in the application, or because you want to update the base container image.</LI> <LI>PowerShell script: If your application requires any other additional configuration and you already have a PowerShell script to take care of that, you can add that to your dockerfile and container image.</LI> </UL> <P>An important aspect of this tool is that the dockerfile has a preview at the bottom, allowing you to see the changes to it as you enter your application's configuration. This will allow you to learn how the dockerfile is built for future use.</P> <P>As you build the container image using Windows Admin Center, we will store the dockerfile in the same place your application resides. You can then use this dockerfile for other purposes - even integrating it into your DevOps pipeline in the future.</P> <P>&nbsp;</P> <H2>Pushing container images to registries</H2> <P>You might want to run your container images on other container hosts, so pushing these images to external registries will allow for broader use. In Windows Admin Center you can now easily push images to Azure Container Registry or registries such as Docker Hub.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Image05.png" style="width: 999px;"><img src=";px=999" role="button" title="Image05.png" alt="Image05.png" /></span></P> <P>For Azure Container Registry you can use your Azure account, which is integrated with your Windows Admin Center installation. 
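</P> <P>The equivalent CLI workflow for pushing to a registry is a tag-and-push sequence; a sketch with hypothetical registry and image names:</P>

```shell
# Sketch: tag a local image for a target registry and push it (hypothetical names).
docker login myregistry.azurecr.io          # prompts for username and password
docker tag myapp:v1 myregistry.azurecr.io/myapp:v1
docker push myregistry.azurecr.io/myapp:v1
```

<P>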
For other registries, you can follow the same process as pulling container images, by just providing the URL, username, and password.</P> <P>&nbsp;</P> <H2>Let us know what you think</H2> <P>All this new functionality was built based on your feedback. We strive to deliver the tools you need to perform your job, so let us know how we can improve the functionalities above, or what is missing for you to be successful in using Windows containers! You can share your feedback either here in the comments, suggest a new feature via <A href="#" target="_blank" rel="noopener">User Voice</A>, or start a conversation via <A href="" target="_blank" rel="noopener">Tech Community</A>!</P> <P>&nbsp;</P> <P>You can find me on Twitter at&nbsp;<A href="#" target="_blank" rel="noopener">@vrapolinario</A>.</P> Tue, 09 Jun 2020 20:19:02 GMT Vinicius Apolinario 2020-06-09T20:19:02Z Windows Server, version 2004 Now Available <P>Last week was exciting with <A href="#" target="_self">//build</A>. This week, the excitement continues with the general availability of Windows Server, version 2004 today.</P> <P>&nbsp;</P> <P>Windows Server, version 2004 is a <A href="#" target="_blank" rel="noopener">Semi-Annual Channel (SAC)</A> release. In our most recent Windows Server SAC releases, we’ve optimized for containers. In this release, we continued improving fundamentals for the core container platform, such as performance and reliability. We’ve also worked with the .NET and PowerShell teams to further optimize image size and performance for Server Core containers. We will share more details below.&nbsp;On the container networking side, we implemented several improvements to allow for better scalability, robustness, and reliability. 
One example is additional changes and improvements to <A href="" target="_blank" rel="noopener">Direct Server Return (DSR)</A>.&nbsp;</P> <P>&nbsp;</P> <P>Here’s how you can pull the new Windows Server, version 2004 base OS container images from MCR:</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="shell">docker pull
docker pull
docker pull </LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>The Server Core container image is one of <A href="#" target="_blank" rel="noopener">four Windows Base OS Images</A>. &nbsp;It’s designed for maximum application compatibility so customers can modernize their traditional Windows Server applications. The majority of those apps are ASP.NET-based web apps. In Windows Server, version 2004, the Server Core container image no longer optimizes the .NET Framework for performance, which saves a lot of space. Instead, .NET Framework optimization (aka “NGEN”) is done in the higher-level .NET Framework runtime image.</P> <P>&nbsp;</P> <P>The following table gives a quick overview of the image size reduction of Server Core container images among the three recent SAC releases. The Download size (or “compressed size”) numbers were captured when running “docker pull” and Size on disk (or “uncompressed size”) numbers were captured when running “docker images.” All values in this table are based on the latest images available today, including the RTM and monthly update bits. In this case, the table reflects the <A href="#" target="_blank" rel="noopener">May, 2020 monthly security updates</A>, or the so-called “5B” updates. 
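</P> <P>As a quick arithmetic check on the table below, you can compute the download-size reduction from the reported numbers (the two values are copied from the table; this is just a sanity check of the math):</P>

```shell
# Percent reduction in Server Core download size, version 1903 -> 2004,
# using the values reported in the size table (2.311 GB -> 1.830 GB).
awk 'BEGIN { printf "%.0f%%\n", (2.311 - 1.830) / 2.311 * 100 }'
```

<P>This prints 21%, a roughly one-fifth smaller download than 1903; the on-disk reduction (5.1 GB to 3.98 GB) works out to about 22%.</P> <P>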
For more information about Windows Container updates, see <A href="#" target="_blank" rel="noopener">Update Windows Server containers</A>.</P> <P>&nbsp;</P> <TABLE width="594px"> <TBODY> <TR> <TD width="143px"> <P>&nbsp;</P> </TD> <TD width="144px"> <P>Windows Server, version 1903</P> </TD> <TD width="132px"> <P>Windows Server, version 1909</P> </TD> <TD width="174px"> <P>Windows Server, version 2004</P> </TD> </TR> <TR> <TD width="143px"> <P>Download size (GB)</P> <P>&nbsp;</P> </TD> <TD width="144px"> <P>2.311</P> </TD> <TD width="132px"> <P>2.257</P> </TD> <TD width="174px"> <P>1.830</P> </TD> </TR> <TR> <TD width="143px"> <P>Size on disk (GB)</P> <P>&nbsp;</P> </TD> <TD width="144px"> <P>5.1</P> </TD> <TD width="132px"> <P>4.97</P> </TD> <TD width="174px"> <P>3.98</P> </TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <P>.NET Framework container images are also smaller. .NET Framework NGEN optimization in containers is now more targeted to ASP.NET applications and Windows PowerShell scripts. In addition, the change to optimizing assemblies in the .NET Framework Runtime image (and not the Server Core base image) led to technical benefits that also enabled us to reduce container size. For more details about the improvements, see the .NET Team <A href="#" target="_blank" rel="noopener">blog</A> published in Dec 2019.</P> <P>&nbsp;</P> <P>In conversations with customers, we understand that Windows containers have provided a bright path forward to modernize traditional Server apps and leverage Kubernetes and other cutting-edge technologies. However, we also hear that the size of Windows containers, especially Server Core containers, is large enough to impact the time to download and decompress locally. We’ve heard your feedback, which is why we looked closely at multiple ways to optimize. 
This release is yet another leap forward for customers looking at scaling applications in production, CI/CD, and any other workflow that benefits from faster startup or pulling uncached images.</P> <P>&nbsp;</P> <P>While we are all adjusting to a new normal for work and life, I’m always amazed by the innovations and new possibilities brought by technologies, and more importantly, by the amazing people behind the technologies: both my colleagues at Microsoft and you, our customers.</P> <P>&nbsp;</P> <P>Please give this new release a try and let us know what you think! You can contact us at <A href="" target="_blank" rel="noopener"></A>. Thank you!</P> <P>&nbsp;</P> <P>Weijuan Davis</P> <P>Twitter: @WeijuanLand</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> Wed, 27 May 2020 23:17:22 GMT Weijuan Shi Davis 2020-05-27T23:17:22Z Making Windows Server Core Containers 40% Smaller <P>Windows Server Insider Preview builds have brought an exciting change to the Windows Server Core Container base image that we’d like to share with you.&nbsp;<STRONG>The Server Core Insider container base image download size is over 40% smaller than the 1903 base image, and container startup into Windows PowerShell is 30% faster!</STRONG></P> <P>&nbsp;</P> <P><STRONG><SPAN>This should be a big win for scaling applications in production, CI/CD, and any other workflow that benefits from faster startup or pulls uncached images.</SPAN></STRONG></P> <P>&nbsp;</P> <P class="code-line" data-line="6">The new images are currently available on the<SPAN>&nbsp;</SPAN><A title="" href="#" target="_blank" rel="noopener" data-href="#">windows/servercore/insider</A><SPAN>&nbsp;</SPAN>repo on Docker Hub, and will land on the<SPAN>&nbsp;</SPAN><A title="" href="#" target="_blank" rel="noopener" data-href="#">windows/servercore</A><SPAN>&nbsp;</SPAN>repo with our 20H1 release.</P> <P class="code-line" data-line="6">&nbsp;</P> <H2 id="what-changed" class="code-line" data-line="8">What changed?</H2> <P 
class="code-line" data-line="10">For performance benefits, Server Core containers have always included a set of .NET pre-compiled native images generated by the<SPAN>&nbsp;</SPAN><A title="" href="#" target="_blank" rel="noopener" data-href="#">Native Image Generator tool</A><SPAN>&nbsp;</SPAN>(Ngen.exe) to improve startup performance. Starting with our Insider images, the Server Core base image will include a much smaller set of NGEN images.</P> <P class="code-line" data-line="12">A larger set is included in<SPAN>&nbsp;</SPAN><A title="" href="#" target="_blank" rel="noopener" data-href="#">.NET Framework runtime images</A><SPAN>&nbsp;</SPAN>(which are based on Windows Server Core); however, the .NET Framework runtime images are also much smaller because we can now ensure that there is only one copy of each NGEN image. Additionally, the NGEN images in the .NET Framework runtime image are now more intentionally selected to target<SPAN>&nbsp;</SPAN><A title="http://ASP.NET" href="#" target="_blank" rel="noopener" data-href="#">ASP.NET</A><SPAN>&nbsp;</SPAN>and PowerShell performance. Tests have shown that container startup into Windows PowerShell has improved by 45% when using the Insider-based .NET Framework runtime image compared to the 1903 .NET Framework image.</P> <P class="code-line " data-line="14">How exactly have the sizes changed over time? That depends on what you measure. 
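One common definition of "download size" is the sum of the compressed layer sizes listed in the image's registry manifest. A minimal sketch of that calculation follows; the manifest fragment is illustrative (made-up digests and sizes, not real registry data), and this is not necessarily the exact method used in the measurements below:

```python
# Illustrative fragment following the Docker/OCI image manifest schema;
# the digests and sizes are made up for the example.
manifest = {
    "config": {"size": 7023},
    "layers": [
        {"digest": "sha256:layer1", "size": 1_200_000_000},
        {"digest": "sha256:layer2", "size": 30_000_000},
    ],
}

def download_size_gb(manifest):
    """Sum the compressed layer sizes reported by the registry."""
    total = sum(layer["size"] for layer in manifest["layers"])
    return round(total / 10**9, 2)

print(download_size_gb(manifest))  # 1.23
```

Size on disk, by contrast, is measured after the layers are decompressed and extracted, which is why the two numbers differ so much.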
Using the .NET team's<SPAN>&nbsp;</SPAN><A title="" href="#" target="_blank" rel="noopener" data-href="#">method for retrieving Docker Image Sizes</A>, here is how the latest Insider image compares to the first generally available* 1903 image, released in May 2019:</P> <P class="code-line " data-line="14">&nbsp;</P> <P class="code-line " data-line="14">*[May 22, 2020 Update]: Note that the 1903 image at time of general availability includes an RTM ("Release to Manufacturing") layer, as well as a second layer containing a collection of monthly updates that accumulate over time.</P> <P class="code-line " data-line="14">&nbsp;</P> <TABLE style="width: 60%; border-collapse: collapse;" border="1"> <TBODY> <TR> <TD style="width: 25%;">&nbsp;</TD> <TD style="width: 25%;"><STRONG>1903 GA</STRONG></TD> <TD style="width: 25%;"><STRONG>20H1</STRONG></TD> <TD style="width: 25%;"><STRONG>%Δ</STRONG></TD> </TR> <TR> <TD style="width: 25%;">Download Size (GB)</TD> <TD style="width: 25%;">1.92</TD> <TD style="width: 25%;">1.13</TD> <TD style="width: 25%;">-41%</TD> </TR> <TR> <TD style="width: 25%;">Size on Disk (GB)</TD> <TD style="width: 25%;">4.63</TD> <TD style="width: 25%;">2.63</TD> <TD style="width: 25%;">-43%</TD> </TR> </TBODY> </TABLE> <P class="code-line " data-line="21">&nbsp;</P> <P class="code-line " data-line="21">Note that running <STRONG>docker pull</STRONG> on the latest Insider image at the time of writing (Insider Build 19023) will display a download size of 1.22 GB. This is because Docker's method for calculating download size differs slightly from the method linked above.</P> <P class="code-line " data-line="21">&nbsp;</P> <BLOCKQUOTE class="code-line " data-line="23"> <P class="code-line" data-line="23">Image sizes fluctuate month-to-month as they receive monthly servicing updates. The results of the metrics referenced in this and other related blogs will fluctuate accordingly. 
Note that the size of 1903 used as a benchmark in the related blogs below is based on a more recent version of 1903, one that has been serviced with several monthly patches that each cause size increases.</P> </BLOCKQUOTE> <P class="code-line" data-line="25"><STRONG>Check out the<SPAN>&nbsp;</SPAN><A title="" href="#" target="_blank" rel="noopener" data-href="#">.NET Blog</A><SPAN>&nbsp;</SPAN>for a deeper dive into these changes, and the<SPAN>&nbsp;</SPAN><A title=";preview=1&amp;_ppp=46b0526056" href="#" target="_blank" rel="noopener" data-href="#">PowerShell Blog</A><SPAN>&nbsp;</SPAN>for PowerShell-centric guidance.</STRONG></P> <P class="code-line" data-line="25">&nbsp;</P> <H2 id="what-does-this-mean" class="code-line" data-line="27">What does this mean?</H2> <P class="code-line" data-line="29">Two things to keep in mind:</P> <UL> <LI class="code-line" data-line="30">These changes will remain available for preview to registered Insiders on all current and upcoming builds featured on the<SPAN>&nbsp;</SPAN><A title="" href="#" target="_blank" rel="noopener" data-href="#">windows/servercore/insider</A><SPAN>&nbsp;</SPAN>repo, and are expected to be Generally Available in the 20H1<SPAN>&nbsp;</SPAN><A title="" href="#" target="_blank" rel="noopener" data-href="#">windows/servercore</A> release in 2020</LI> <LI class="code-line" data-line="31">Going forward, container images that make significant use of .NET Framework or Windows PowerShell should be built on top of the<SPAN>&nbsp;</SPAN><A title="" href="#" target="_blank" rel="noopener" data-href="#">dotnet/framework/runtime&nbsp;</A>container images, which will include additional .NET pre-compiled native images to maintain performance for those scenarios.</LI> </UL> <BLOCKQUOTE class="code-line" data-line="33"> <P class="code-line" data-line="33">Note that while tests show that container startup<SPAN>&nbsp;</SPAN><EM>into</EM><SPAN>&nbsp;</SPAN>Windows PowerShell is 30% faster when using the Insider Server 
Core image when compared to 1903, Windows PowerShell startup<SPAN>&nbsp;</SPAN><EM>within</EM><SPAN>&nbsp;</SPAN>a running container is 100ms (15%) slower for the Insider image and 20ms (15%) slower on the Insider-based .NET Framework runtime image, when compared to their respective 1903 equivalents.</P> </BLOCKQUOTE> <P class="code-line" data-line="35">We’d love for you to test out the smaller Server Core image today on<SPAN>&nbsp;</SPAN><A title="" href="#" target="_blank" rel="noopener" data-href="#">windows/servercore/insider</A><SPAN>&nbsp;</SPAN>and give us your feedback! Also, remember to check out the<SPAN>&nbsp;</SPAN><A title="" href="#" target="_blank" rel="noopener" data-href="#">.NET blog</A><SPAN>&nbsp;</SPAN>and look out for corresponding .NET Framework images when 20H1 Windows Server Core containers are Generally Available.</P> <P class="code-line" data-line="35">&nbsp;</P> <H2 id="related-blogs" class="code-line" data-line="37">Related Blogs</H2> <UL> <LI class="code-line" data-line="38"><A title="" href="#" target="_self" data-href="#">.NET Blog: We made Windows Server Core container images &gt;40% smaller</A></LI> <LI class="code-line" data-line="39"><A title=";preview=1&amp;_ppp=46b0526056" href="#" target="_self" data-href="#">PowerShell Blog: Improvements in Windows PowerShell Container Images</A></LI> <LI class="code-line " data-line="40"><A title="" href="" target="_blank" rel="noopener" data-href="">A smaller Windows Server Core Container with better Application Compatibility (published January 2018)</A></LI> </UL> Fri, 22 May 2020 21:13:42 GMT markosmezgebu 2020-05-22T21:13:42Z Modernize your IT environment and applications with Windows Containers <P>This is Ignite week and part of the team is in Orlando talking to customers and delivering sessions. 
It's amazing to see the enthusiasm for Windows Containers, with more and more people stopping by to ask how they can get started and how they can containerize their first application.</P> <P>That's why Craig and I are doing a session today called "<A href="#" target="_blank" rel="noopener">Modernize your IT environment and applications with Windows Containers</A>". The idea of this blog post is to serve as a reference for that session, with all the instructions for the demos we presented today.</P> <P>During the session we cover how you can containerize an ASP.Net 3.5 web application running on Windows Server 2008 R2 with Windows containers on Windows Server 2019.</P> <P>&nbsp;</P> <P><FONT size="5">Test proofing your application on Windows Server 2019</FONT></P> <P>Before we get into the containers portion, it's important to understand how your application behaves, what its components are, and how you can validate that the application actually runs on Windows Server 2019 - or at least Windows Server 2016 for that matter - so you know it will work in a container.</P> <P>In order to do that, you'll have to check how you deploy that application. In our case, since this is a web application running on IIS, you have a few options:</P> <UL> <LI>If you have access to the source code, you should probably work with the dev team to containerize the application directly from Visual Studio. Visual Studio has all the tooling embedded to create a container image with your application on it.</LI> <LI>If you have a package with your application, you need the instructions on how to deploy it. Traditionally, developers will provide minimal instructions on how to deploy the application and you can leverage that.</LI> <LI>If the application is already deployed, you need a way to extract the application from the current server. For web applications on IIS there are, thankfully, a couple of options here. 
You could simply export the application using native IIS commands from the command prompt. Another option is to use Web Deploy to export the application into a Zip file that can be used to import the application on the other side.</LI> </UL> <P>Once you have your strategy to get the application from the current server to the new Windows Server 2016 or 2019, you can deploy the application and validate it runs fine.</P> <P>&nbsp;</P> <P><FONT size="5">Containerizing your application</FONT></P> <P>The process to containerize the application is a bit different than most admins are used to with VMs. With VMs you deploy an OS, then the app dependencies, then the app, and finally you configure the app. Oftentimes, all manually. After all that, you generalize the image with Sysprep and store the VHD file in a place where consumers of this app can deploy a new VM.</P> <P>With containers, you actually declare how your application will be composed in a Docker file (which is just a text file called dockerfile - no extension - in the same folder you have your application).</P> <P>The Docker file provides the instructions for the command "docker build" to build your application and store it in a container image.</P> <P>Here's the example of the docker file Craig and I used in our session today:</P> <LI-CODE lang="markup">FROM
WORKDIR /ViniBeer
COPY . .
RUN PowerShell Install-WindowsFeature NET-Framework-45-ASPNET; \
    Install-WindowsFeature Web-Asp-Net45
RUN PowerShell Import-Module WebAdministration; \
    New-WebApplication "ViniBeer" -Site 'Default Web Site' -ApplicationPool "DefaultAppPool" -PhysicalPath "C:\ViniBeer"</LI-CODE> <P>From top to bottom, here's what you should know about the above:</P> <UL> <LI>FROM: This is where you start your application from. 
There are multiple container images available in the <A href="#" target="_blank" rel="noopener">Docker Hub</A>, so instead of deploying Windows components, you can most likely find an image that already has what you need to deploy your app. In this case, we're starting with the IIS container image that has IIS pre-installed.</LI> <LI>WORKDIR: This is your working directory. It assumes the context of the C:\ drive, so in the case above it will create a new folder at C:\ViniBeer and set that folder as the working directory. Throughout the file, you can simply reference this folder with a ".".</LI> <LI>COPY: As the name says, this will copy content into the container. The way the context works here is that the first path is on the local container host from which you are running docker build, and the second path is the path inside the container. In our case, we're copying the content from the folder we're running docker build in to the C:\ViniBeer working directory we referenced earlier.</LI> <LI>RUN: Run will execute commands inside the container to get it prepared. The example above is using PowerShell and then the commands to run on it. The first one deploys the ASP.Net IIS dependencies and the second one imports the IIS PowerShell module to then install the web application - just like you would do on a regular Windows Server with Server Core.</LI> </UL> <P>The docker build command you run to execute all this is:</P> <LI-CODE lang="markup">docker build -t vinibeerimage .</LI-CODE> <P>This will tag the image as "vinibeerimage" and store the image locally. Now all you have to do is deploy new containers based on that container image.</P> <P>&nbsp;</P> <P>Try it yourself with our Vini Beer sample app.</P> <P>If you want to try the exact same demo we ran in our session at Microsoft Ignite, you can use the Vini Beer sample ASP.Net 3.5 app that was running in a Windows Server 2008 R2 machine. 
The app is stored in this <A href="#" target="_blank" rel="noopener">GitHub repository</A> - along with the docker file above.</P> <P>The <A href="#" target="_blank" rel="noopener">recording for the session</A> will be available later today.</P> <P>&nbsp;</P> <P>We hope this helps in your containerization journey!</P> Thu, 07 Nov 2019 13:47:01 GMT Vinicius Apolinario 2019-11-07T13:47:01Z Windows Containers Log Monitor Opensource Release <P><FONT size="4">TL;DR: The Windows Containers Log Monitor is now open source <A href="#" target="_blank" rel="noopener">on Github.</A></FONT></P> <P>&nbsp;</P> <P><FONT size="4">In this <A href="" target="_blank" rel="noopener">blog post</A> we announced our efforts to improve the tooling experience for containers. The first step in this journey was a log tool concept we demoed during our container session at //build. This demo was met with clear excitement from attendees and interest from the community. After gathering customer feedback, we set out to bring this tool to production quality and make it available as open source for our Windows Containers users.</FONT></P> <P>&nbsp;</P> <P><FONT size="4">Today, we are happy to announce the open-source release of the Windows Container Log Monitor, <A href="#" target="_blank" rel="noopener">now available on Github</A>! This blog offers a deep dive into the architecture and usage of the tool.</FONT></P> <P>&nbsp;</P> <P><FONT size="4">To recap: unlike Linux applications, which log to STDOUT, Windows applications log to Windows log locations such as ETW, the Event Log, and custom log files. Since many container ecosystem logging solutions are built to pull from the STDOUT pipeline as is standard with Linux, Windows container app logs historically have not been accessible via these solutions. The Log Monitor bridges this gap between Windows log locations and STDOUT, as depicted in the diagram below. 
The scope of the Log Monitor tool is to bridge Windows application logs to the STDOUT pipeline.</FONT></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="ignitelogblog.png" style="width: 999px;"><img src=";px=999" role="button" title="ignitelogblog.png" alt="ignitelogblog.png" /></span></P> <P>&nbsp;</P> <P><FONT size="4">The Log Monitor tool consists of two primary elements: LogMonitor.exe and LogMonitorConfig.json. Like the //build demo POC, the Log Monitor is authored with an observer (publisher-subscriber) design pattern; however, instead of subscribing to hardcoded log source providers as in the original concept, the Log Monitor is configured to subscribe to a set of sources defined by the user in the Log Monitor Config file.</FONT></P> <P><FONT size="4">The log sources supported by the Log Monitor tool include:</FONT></P> <UL> <LI><FONT size="4">ETW</FONT></LI> <LI><FONT size="4">Event Logs</FONT></LI> <LI><FONT size="4">Log files</FONT></LI> </UL> <P><FONT size="4">Note that these sources differ from the original concept, which collected Perf Counters and did not collect Event Logs. From feedback we concluded that Perf Counters, while useful, fall closer to system metrics than application logs, and were thus out of scope for this release. 
Additionally, the Log Monitor tool does not monitor the lifecycle of services within the container as in the original concept, but instead provides a nesting pattern so users can define their own service monitor or desired application.</FONT></P> <P>&nbsp;</P> <P><FONT size="4">The Log Monitor tool is supported on the Windows, Server Core, and Nano Server base images.</FONT></P> <P>&nbsp;</P> <P><U><STRONG><FONT size="4">Usage</FONT></STRONG></U></P> <P>&nbsp;</P> <P><FONT size="4">A key piece of feedback we received at //build and in the post-build survey was that an entry point-only solution restricts users from using their own app as the entry point process. We mitigate this with several usage options:</FONT></P> <OL> <LI><FONT size="4">The Log Monitor tool can be used as the SHELL or the ENTRYPOINT</FONT></LI> <LI><FONT size="4">As either the SHELL or ENTRYPOINT, another application can be indicated as a parameter to the Log Monitor tool and run as a nested process. The Log Monitor tool will spin up that process and monitor its STDOUT so that it also continues to show up in the container STDOUT. 
This mitigates the foreground/background process issue that existed in the original concept.</FONT></LI> </OL> <P><FONT size="4">An example dockerfile for the SHELL usage pattern (with nesting):</FONT></P> <P>&nbsp;</P> <LI-CODE lang="markup">FROM
# LogMonitor directory contains LogMonitor.exe and LogMonitorConfig.json file
COPY LogMonitor/ C:/LogMonitor
WORKDIR /LogMonitor
SHELL ["C:\\LogMonitor\\LogMonitor.exe", "cmd", "/S", "/C"]
CMD c:\windows\system32\ping.exe -n 20 localhost</LI-CODE> <P>&nbsp;</P> <P><FONT size="4">An example dockerfile for the ENTRYPOINT usage pattern (with nesting):</FONT></P> <P>&nbsp;</P> <LI-CODE lang="markup">FROM
# LogMonitor directory contains LogMonitor.exe and LogMonitorConfig.json file
COPY LogMonitor/ C:/LogMonitor
WORKDIR /LogMonitor
ENTRYPOINT C:\LogMonitor\LogMonitor.exe c:\windows\system32\ping.exe -n 20 localhost</LI-CODE> <P>&nbsp;</P> <P><FONT size="4">Note that in the SHELL usage pattern the CMD/ENTRYPOINT instruction should be specified in shell form and not exec form. When the exec form of the CMD/ENTRYPOINT instruction is used, SHELL is not launched, and the Log Monitor tool will not be launched inside the container.</FONT></P> <P>&nbsp;</P> <P><FONT size="4">Additionally, if you are building from an image that pre-defines an entrypoint (e.g. an IIS image that pre-defines <A href="#" target="_blank" rel="noopener">IIS.ServiceMonitor</A> as the entry point), you must redefine the entrypoint in your dockerfile, either as the Log Monitor if you are using the ENTRYPOINT pattern, or as the pre-defined entrypoint. If you do not redefine the ENTRYPOINT of your dockerfile and are using the SHELL usage pattern, the Log Monitor tool will not work. This is because the SHELL is tied to the entrypoint definition.</FONT></P> <P>&nbsp;</P> <P><FONT size="4">Both example usages wrap the ping.exe application. 
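Conceptually, the nesting pattern is just "spawn the wrapped process and relay its output." The sketch below illustrates that idea in Python; it is illustrative only - LogMonitor.exe is a native Windows tool, not this script:

```python
import subprocess
import sys

def run_nested(cmd):
    """Spawn the wrapped process and relay its STDOUT line by line,
    the way a log-bridging wrapper keeps child output visible in the
    container's STDOUT."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    lines = []
    for line in proc.stdout:
        sys.stdout.write(line)            # relay to our own STDOUT
        lines.append(line.rstrip("\n"))
    proc.wait()
    return lines

# Wrap a trivial child process (stand-in for ping.exe in the examples above).
output = run_nested([sys.executable, "-c", "print('hello from the nested process')"])
```

The real tool does the same relaying while also pumping the configured Windows log sources into the same STDOUT stream.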
Other applications such as IIS.ServiceMonitor can be nested with Log Monitor in a similar fashion:</FONT></P> <P>&nbsp;</P> <LI-CODE lang="markup">COPY LogMonitor.exe LogMonitorConfig.json C:\LogMonitor\
WORKDIR /LogMonitor
SHELL ["C:\\LogMonitor\\LogMonitor.exe", "powershell.exe"]
# Start IIS Remote Management and monitor IIS
ENTRYPOINT Start-Service WMSVC; `
    C:\ServiceMonitor.exe w3svc;</LI-CODE> <P>&nbsp;</P> <P><U><STRONG><FONT size="4">Configuration</FONT></STRONG></U></P> <P>&nbsp;</P> <P><FONT size="4">A sample Log Monitor Config file would be structured as follows:</FONT></P> <P>&nbsp;</P> <LI-CODE lang="markup">{
  "LogConfig": {
    "sources": [
      {
        "type": "EventLog",
        "startAtOldestRecord": true,
        "eventFormatMultiLine": false,
        "channels": [
          { "name": "system", "level": "Error" }
        ]
      },
      {
        "type": "File",
        "directory": "c:\\inetpub\\logs",
        "filter": "*.log",
        "includeSubdirectories": true
      },
      {
        "type": "ETW",
        "providers": [
          {
            "providerName": "IIS: WWW Server",
            "ProviderGuid": "3A2A4E84-4C21-4981-AE10-3FDA0D9B0F83",
            "level": "Information"
          },
          {
            "providerName": "Microsoft-Windows-IIS-Logging",
            "ProviderGuid": "7E8AD27F-B271-4EA2-A783-A47BDE29143B",
            "level": "Information",
            "keywords": "0xFF"
          }
        ]
      }
    ]
  }
}</LI-CODE> <P>&nbsp;</P> <P><FONT size="4">In the config file there are three log types you can configure the Log Monitor tool to pull from:</FONT></P> <UL> <LI><FONT size="4"><A href="#" target="_blank" rel="noopener">Event Logs</A>. The event log provides a standard, centralized way for apps and the OS to record important software and hardware events. Each event is in a structured data format, which makes it easy to parse for event log view panes (e.g. Event Viewer) or examine with a programmatic interface. The configuration file supports the following fields:</FONT></LI> <UL> <LI><FONT size="4">startAtOldestRecord. 
This Boolean field indicates whether the Log Monitor tool should output event logs from the start of the container boot (true) or from the start of the Log Monitor tool itself (false).</FONT></LI> <LI><FONT size="4">eventFormatMultiLine. This Boolean field indicates whether the Log Monitor should format the logs to STDOUT as multi-line or single-line. If set to true, the tool does not format the event messages to a single line (and thus event messages can span multiple lines). If set to false, the tool formats the event log messages to a single line and removes new line characters. Many container tools that build upon docker logs and the STDOUT pipe assume each log entry is on a single line, as Linux event messages are typically single-line.</FONT></LI> <LI><FONT size="4"><A href="#" target="_blank" rel="noopener">channels</A>. A channel is a sink that collects events. The <A href="#" target="_blank" rel="noopener">wevtutil tool</A> can be used to enumerate channels via the command “wevtutil el”.</FONT></LI> <UL> <LI><FONT size="4">Name. Name of the event channel.</FONT></LI> <LI><FONT size="4">Level. This string field specifies the verbosity of the events collected. Possible values in ascending order of verbosity are Critical, Error, Warning, Information, and Verbose. Verbose encompasses Critical, Error, Warning, and Information.</FONT></LI> </UL> </UL> <LI><FONT size="4">Custom Log File. Many apps log to custom app log files. Some services such as the IIS service also log to files in well-known locations. The Log Monitor tool can be configured to parse through custom log files and bridge these entries to STDOUT. The configuration file supports the following fields:</FONT></LI> <UL> <LI><FONT size="4">directory. This string field indicates the directory that the Log Monitor tool should monitor. 
Note that there can be more than one directory, though additional directories should be added as additional json source objects with type=File.</FONT></LI> <LI><FONT size="4">filter. This string field indicates which files under the directory the Log Monitor should monitor. The string supports wildcard patterns such as *.log.</FONT></LI> <LI><FONT size="4">includeSubdirectories. This Boolean field indicates whether the Log Monitor should recurse into the subdirectories of the indicated directory (true) or remain in the indicated directory (false).</FONT></LI> </UL> <LI><FONT size="4"><A href="#" target="_blank" rel="noopener">ETW (Event Tracing for Windows).</A> ETW is an efficient kernel-level tracing facility that lets you log system or application-defined events to a log file or access them in real time. Applications often instrument providers that are enabled to generate events as relevant to the application. The Log Monitor tool can be configured to watch certain relevant providers. The configuration file supports the following fields:</FONT></LI> <UL> <LI><FONT size="4">eventFormatMultiLine. The behavior is the same as described above. This does not have to match the Boolean chosen for the Event Log source.</FONT></LI> <LI><FONT size="4">Providers. This field is a list that indicates what providers the Log Monitor tool should monitor. The following fields are supported for each provider:</FONT></LI> <UL> <LI><FONT size="4">providerName.</FONT></LI> <LI><FONT size="4">ProviderGuid. For Server Core or Windows base images, note that either providerName or ProviderGuid can be provided. For Nano Server, ProviderGuid is required, as resolving a provider name to its GUID is a known issue.</FONT></LI> <LI><FONT size="4">level: a 1-byte integer that enables filtering based on the severity or verbosity of events.</FONT></LI> <LI><FONT size="4">keywords: This string field is a bitmask that specifies what events to collect. 
Only events with keywords matching the bitmask are collected. This is an optional parameter; the default is 0, in which case all events are collected.</FONT></LI> </UL> </UL> </UL> <P><FONT size="4">Note that which events, files, and ETW providers an application logs to is defined by the application. There are some standard patterns for some common applications such as IIS. Some sample config files for these popular scenarios are included in the <A href="#" target="_blank" rel="noopener">Github repo</A>. If information about which events, files, and ETW providers a particular application logs to is not readily available, wevtutil or Event Viewer can help.</FONT></P> <P>&nbsp;</P> <P><U><STRONG><FONT size="4">What is next?</FONT></STRONG></U></P> <P>&nbsp;</P> <P><FONT size="4">While we are excited to bring this functionality to our users, this is just the beginning of our journey. By open sourcing this tool, we hope to continuously gather feedback and improve the tool to meet customer needs. Some areas we have already identified and are investigating for future releases:</FONT></P> <UL> <LI><FONT size="4">Rotating log support</FONT></LI> <LI><FONT size="4">Environment variable configuration support</FONT></LI> <LI><FONT size="4">ConfigMap support</FONT></LI> <LI><FONT size="4">Integrations with log aggregation services at scale</FONT></LI> <LI><FONT size="4">Configuration updates during container runtime</FONT></LI> <LI><FONT size="4">Performance</FONT></LI> <LI><FONT size="4">Sidecar usage patterns</FONT></LI> <LI><FONT size="4">Log driver support</FONT></LI> </UL> Mon, 04 Nov 2019 13:02:02 GMT ambguo 2019-11-04T13:02:02Z Containers for ITPros - Resource limits on containers <P>One of the first things that strikes admins when making the transition from managing Virtual Machines (VMs) to containers is resource constraints. 
Whenever I demonstrate containers to an audience of ITPros, and I use the docker run command to spin up a container, the question that comes back is "How much memory is this container using?" or something like "How do I set how many processors this container can use?". This is a natural concern for admins, since ensuring their applications and hosts have the proper amount of resources and don't interfere with one another is one of the primary tasks of a System Admin.</P> <P>The main difference between VMs and containers in terms of resource utilization is that for VMs you set a limit of how much CPU and memory that VM can get/see. Over the years we evolved that to over-provisioning, sharing, quota limits, etc. This is primarily because VMs emulate hardware, which is presented to the OS running on that given VM. With containers, however, this is not true. Instead, since containers are a virtualization of the OS, not the hardware, the container can use as many resources as the container host can provide. Of course, you don't want to get to a case where one container uses all the resources of a host - especially if you are running multiple containers on a single node. In today's blog, we'll look at simple ways to limit how much CPU and memory a container can use.</P> <P>&nbsp;</P> <P><FONT size="5">Setting CPU limits</FONT></P> <P>The simplest way to limit the CPU of a container is by specifying the --cpus parameter on docker run:</P> <P>&nbsp;</P> <LI-CODE lang="markup">docker run -d --cpus=1 --name testcontainer</LI-CODE> <P>&nbsp;</P> <P>In this example, we are limiting the container to use only 1 CPU from the container host. You can set a limit of up to the number of CPUs the host has. 
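Docker enforces these bounds itself when you pass --cpus, but the rule is easy to sketch. The function below is illustrative, not part of docker; it uses the 0.1 lower bound mentioned later in this post:

```python
import os

def validate_cpus(requested, host_cpus=None):
    """Mimic docker's --cpus bounds check: at least 0.1 of a CPU,
    at most the number of CPUs on the host."""
    if host_cpus is None:
        host_cpus = os.cpu_count()
    if requested < 0.1:
        raise ValueError("--cpus must be at least 0.1 (10% of one CPU)")
    if requested > host_cpus:
        raise ValueError(f"--cpus cannot exceed the {host_cpus} CPUs on this host")
    return requested

print(validate_cpus(1.5, host_cpus=4))  # 1.5
```

With host_cpus=4, asking for 1.5 CPUs passes, while asking for 8 raises an error, which mirrors what docker run reports.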
If you try to configure more CPUs than the host has, you'll get an error.</P> <P>Another option is to set a fraction of a CPU:</P> <P>&nbsp;</P> <LI-CODE lang="markup">docker run -d --cpus=1.5 --name testcontainer</LI-CODE> <P>&nbsp;</P> <P>In this case the container can use 1 CPU and 50% of another CPU. The lowest amount you can specify for --cpus is 0.1, which gives the container 10% of one CPU.</P> <P>&nbsp;</P> <P><FONT size="5">Setting memory limits</FONT></P> <P>To set memory limits, the simplest way is to specify the -m or --memory parameter on docker run:</P> <P>&nbsp;</P> <LI-CODE lang="markup">docker run -d -m 2048 --name testcontainer</LI-CODE> <P>&nbsp;</P> <P>This command sets a memory limit for that container. Note that a value without a unit suffix is interpreted as bytes, so to enforce a 2 GB limit you would use -m 2g (or -m 2048m). Just like with CPUs, you can't over-provision memory for containers. In fact, docker will let you create the container, but you won't be able to start it.</P> <P>&nbsp;</P> <P><FONT size="5">Checking your configuration on running containers</FONT></P> <P>Once you configure the limits and your containers are running, you can check if the limits are being enforced. 
The way to do that on docker is by using the command docker inspect:</P> <P>&nbsp;</P> <LI-CODE lang="markup">docker inspect testcontainer</LI-CODE> <P>&nbsp;</P> <P>The command above will return a very unfriendly JSON log:</P> <P>&nbsp;</P> <LI-CODE lang="markup">[ { "Id": "109d65ac9ab7cdc9de5996e2e0853b67ea0d461d723a7c18920f9c176a45ffbe", "Created": "2019-10-22T17:08:59.6228671Z", "Path": "c:\\windows\\system32\\cmd.exe", "Args": [], "State": { "Status": "exited", "Running": false, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 0, "ExitCode": 0, "Error": "", "StartedAt": "2019-10-22T17:16:15.5314257Z", "FinishedAt": "2019-10-22T10:16:15.56805-07:00" }, "Image": "sha256:739b21bd02e70d010b6b75d41aad12b941ab49aeda66d7930dd7b3745511ca9c", "ResolvConfPath": "", "HostnamePath": "", "HostsPath": "", "LogPath": "C:\\ProgramData\\docker\\containers\\109d65ac9ab7cdc9de5996e2e0853b67ea0d461d723a7c18920f9c176a45ffbe\\109d65ac9ab7cdc9de5996e2e0853b67ea0d461d723a7c18920f9c176a45ffbe-json.log", "Name": "/testcontainer", "RestartCount": 0, "Driver": "windowsfilter", "Platform": "windows", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "", "ExecIDs": null, "HostConfig": { "Binds": null, "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "default", "PortBindings": {}, "RestartPolicy": { "Name": "no", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": null, "CapDrop": null, "Capabilities": null, "Dns": [], "DnsOptions": [], "DnsSearch": [], "ExtraHosts": null, "GroupAdd": null, "IpcMode": "", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": false, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": null, "UTSMode": "", "UsernsMode": "", "ShmSize": 0, "ConsoleSize": [ 0, 0 ], "Isolation": "process", "CpuShares": 0, "Memory": 2048, "NanoCpus": 2000000000, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": [], 
"BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [], "DeviceCgroupRules": null, "DeviceRequests": null, "KernelMemory": 0, "KernelMemoryTCP": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": null, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": null, "ReadonlyPaths": null }, "GraphDriver": { "Data": { "dir": "C:\\ProgramData\\docker\\windowsfilter\\109d65ac9ab7cdc9de5996e2e0853b67ea0d461d723a7c18920f9c176a45ffbe" }, "Name": "windowsfilter" }, "Mounts": [], "Config": { "Hostname": "109d65ac9ab7", "Domainname": "", "User": "", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "Tty": false, "OpenStdin": false, "StdinOnce": false, "Env": null, "Cmd": [ "c:\\windows\\system32\\cmd.exe" ], "Image": "", "Volumes": null, "WorkingDir": "", "Entrypoint": null, "OnBuild": null, "Labels": {} }, "NetworkSettings": { "Bridge": "", "SandboxID": "109d65ac9ab7cdc9de5996e2e0853b67ea0d461d723a7c18920f9c176a45ffbe", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": {}, "SandboxKey": "109d65ac9ab7cdc9de5996e2e0853b67ea0d461d723a7c18920f9c176a45ffbe", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { "nat": { "IPAMConfig": null, "Links": null, "Aliases": null, "NetworkID": "6994e855c400ea285d24e30bb9f2c6d5d749014ab5a8771e011c506b24edff0b", "EndpointID": "", "Gateway": "", "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "", "DriverOpts": null } } } } ]</LI-CODE> <P>&nbsp;</P> 
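Since docker inspect emits plain JSON, you can also save its output and pull out just the fields you care about with any JSON parser. A minimal sketch follows; it uses a trimmed-down copy of the output above rather than a live docker call:

```python
import json

# Trimmed-down stand-in for `docker inspect testcontainer` output;
# only the fields relevant to resource limits are kept here.
inspect_output = json.loads("""
[{"Name": "/testcontainer",
  "HostConfig": {"Memory": 2048, "NanoCpus": 2000000000}}]
""")

host_config = inspect_output[0]["HostConfig"]
print(host_config["Memory"])                    # memory limit, in bytes
print(host_config["NanoCpus"] / 1_000_000_000)  # CPU limit: 2.0 CPUs
```

Dividing NanoCpus by one billion recovers the --cpus value the container was started with.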
<P>Luckily, there's an option to extract only the information you want:</P> <P>&nbsp;</P> <LI-CODE lang="markup">docker inspect --format='{{.HostConfig.Memory}}' testcontainer</LI-CODE> <P>&nbsp;</P> <P>The command above will return the total memory allocated for this container. If you want to check the CPU limit, you can use:</P> <P>&nbsp;</P> <LI-CODE lang="markup">docker inspect --format='{{.HostConfig.NanoCpus}}' testcontainer</LI-CODE> <P>&nbsp;</P> <P>Note that this returns the CPU limit in "nano CPUs" - the number of CPUs multiplied by 10^9, which is what Docker derives from the CPU period and quota. Although you can't configure CPU period and quota individually on a Windows container host, the value is still present. In our case, the result for this container's configuration (2 CPUs) is "2000000000".</P> <P>&nbsp;</P> <P><FONT size="5">Windows limitations on resource constraints</FONT></P> <P>When checking the <A href="#" target="_blank" rel="noopener">Docker documentation</A>, you'll notice that there are a few other options for limiting CPU and memory on your containers. Today, these are available for Linux only. If you have a strong case for using one of these options that is not available on Windows, please let us know.</P> <P>&nbsp;</P> <P>Hopefully this is useful. Let us know in the comments!</P> Tue, 22 Oct 2019 18:43:04 GMT Vinicius Apolinario 2019-10-22T18:43:04Z Containers for ITPros - Containers & Windows Admin Center <P>If you're a Windows admin and you haven't heard of <A href="#" target="_blank" rel="noopener">Windows Admin Center</A>, you're absolutely missing out on an awesome new way to manage your servers - so for this blog post, I'll assume you're familiar with it.
If you're not, take a moment to review the link above.</P> <P>What many Windows Admin Center users don't know is that there is a Containers extension available:</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="WAC01.PNG" style="width: 999px;"><img src=";px=999" role="button" title="WAC01.PNG" alt="WAC01.PNG" /></span></P> <P>Once you install the extension, you can target your container hosts:</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="WAC02.PNG" style="width: 999px;"><img src=";px=999" role="button" title="WAC02.PNG" alt="WAC02.PNG" /></span></P> <P>The Summary page gives you an overview of how your container host is operating, how many containers are running on that host, etc. However, Windows Admin Center can do much more...</P> <P><FONT size="5">Using Windows Admin Center to troubleshoot containers</FONT></P> <P>When you click the Containers tab inside the Containers extension, you get more details about your running containers:</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="WAC03.PNG" style="width: 999px;"><img src=";px=999" role="button" title="WAC03.PNG" alt="WAC03.PNG" /></span></P> <P>In the example above, you can note a few things:</P> <UL> <LI>On the container list, Windows Admin Center gives you the port configuration for that container. This is particularly helpful for troubleshooting cases where you have a container that is not responding correctly to TCP calls. In this example, you can see port 80 is open in the container but mapped to port 8080 on the host.</LI> <LI>On the bottom pane, you get a nice visualization of how your container is performing.
If your application is consuming too much memory, compute, or network resources, you can quickly identify that and take action.</LI> </UL> <P>However, the thing I like most here is the ability to see the events from inside the container:</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="WAC04.PNG" style="width: 999px;"><img src=";px=999" role="button" title="WAC04.PNG" alt="WAC04.PNG" /></span></P> <P>Windows Admin Center can do more. You can check the container images you have stored, the networking configuration, and the volumes available for your containers. In future blog posts, we'll look into some of these configurations.</P> <P>&nbsp;</P> <P><FONT size="5">We need your feedback</FONT></P> <P>We have some amazing ideas on how to improve the Containers extension for Windows Admin Center, but we'd like to hear from you! If you have an idea, found a bug, or have something you'd like us to look into, please let us know either in this blog post or on our <A href="#" target="_blank" rel="noopener">User Voice page</A>.</P> Wed, 16 Oct 2019 17:28:02 GMT Vinicius Apolinario 2019-10-16T17:28:02Z Containers for ITPros - Working with containers interactively <P>Continuing our series of blog posts to help ITPros use containers, today's topic falls in the "Tips and Tricks" area. In our <A href="" target="_blank" rel="noopener">last blog post</A>, I talked about how important it is to spin up a container and verify that the commands you run inside it work correctly before you use them in your dockerfile.</P> <P>Usually, when running containers - especially if you are running multiple containers - you won't be opening an interactive session into them. Rather, you'll run them detached - which means the container runs in the background (yes, just like a virtual machine running on Hyper-V or VMware). However, because of this assumption that containers will run detached, interacting with containers is quite different from interacting with VMs.
In addition, since containers don't have a GUI, you actually have to be very specific about how you want to interactively access a container. Let's look into it:</P> <P>&nbsp;</P> <P><FONT size="5">Opening an interactive session from Docker Run</FONT></P> <P>The simplest way to open an interactive session is when you run a container for the first time with docker run. Here's an example:</P> <P>&nbsp;</P> <LI-CODE lang="markup">docker run --entrypoint powershell -it --name testcontainer</LI-CODE> <P>&nbsp;</P> <P>In this case, we're creating, starting, and entering a PowerShell session all with the command above. However, if you have a stopped container, you have to run another command:</P> <P>&nbsp;</P> <LI-CODE lang="markup">docker start --interactive testcontainer</LI-CODE> <P>&nbsp;</P> <P>Yet another option is when you have a container already running and you want to execute a command inside that container. For that, you can use docker exec:</P> <P>&nbsp;</P> <LI-CODE lang="markup">docker exec testcontainer powershell dir</LI-CODE> <P>&nbsp;</P> <P>The command above is obviously an example and will simply show the output of "dir" from the container. Another variant of docker exec is:</P> <P>&nbsp;</P> <LI-CODE lang="markup">docker exec --interactive testcontainer powershell</LI-CODE> <P>&nbsp;</P> <P>This will open an interactive PowerShell session in a running container, so you don't need to specify what PowerShell command you want to run.</P> <P><EM>Note: I'm assuming you'll want to open a PowerShell session when operating a container interactively, but you could replace "powershell" with "CMD" in all the commands above.</EM></P> <P>&nbsp;</P> <P><FONT size="5">Don't fall into the --entrypoint trap!</FONT></P> <P>The commands above are all fine for the container image I'm using - the Server Core base container image. However, you'll see there's a variant of the commands above out there, which omits the "--entrypoint" parameter.
This is because you don't actually need to specify it in some cases. When browsing the web for examples, you'll see something like this:</P> <P>&nbsp;</P> <LI-CODE lang="markup">docker run -it --name testcontainer powershell</LI-CODE> <P>&nbsp;</P> <P>Note that in this example we're still opening an interactive session (as specified by the -it parameter), but we completely removed the --entrypoint parameter and moved the powershell instruction to the end. As I mentioned, this will work fine, and the reason is that the Server Core base container image was created with no predefined entry point.</P> <P>However, some container images do have a predefined entry point already, such as the IIS container image. In fact, you can see for yourself how the IIS container image was created by looking at its <A href="#" target="_blank" rel="noopener">dockerfile</A>.</P> <P>If you try to open an interactive session on the IIS container image without the --entrypoint parameter, it will fail. Instead, make sure you use it, just like in the examples above.</P> <P>&nbsp;</P> <LI-CODE lang="markup">docker run --entrypoint powershell -it --name testiiscontainer</LI-CODE> <P>&nbsp;</P> <P>By putting --entrypoint right after the run command, you explicitly specify which process the container should start.</P> <P>&nbsp;</P> <P>Hopefully this information is helpful to you. If you've been working with containers and have other questions and topics you'd like me to cover, feel free to use the comments section! :)</P> Fri, 11 Oct 2019 22:26:37 GMT Vinicius Apolinario 2019-10-11T22:26:37Z Containers for ITPros - PowerShell and Dockerfile <P>Hello folks,</P> <P>Before we jump into the content, I wanted to take a moment to explain this blog series: As containers' popularity continues to increase, I see more and more ITPros (particularly Microsoft-focused ITPros) struggle to use them and even to understand how some settings and configurations work for containers.
This happens either because a lot of container-focused technology was created with developers in mind or because the documentation is not exactly focused on Windows scenarios. With that in mind, I wanted to use this channel to share what I've learned about using containers. Here I'll be sharing tips and tricks for Microsoft ITPros using containers, from small things to deep dives... My hope is that this helps more and more people start their journey with containers!</P> <P>&nbsp;</P> <P>If you have no idea what containers are, fear not! We have a great set of <A href="#" target="_blank" rel="noopener">documentation</A> to help you get started. Later this year, we'll also be sharing a few things from our Ignite content here. Stay tuned!</P> <P>Let's get to it!</P> <P>&nbsp;</P> <P><FONT size="5">PowerShell and Dockerfile</FONT></P> <P>When creating a new container image, you'll probably be using a dockerfile. A dockerfile is a declarative way to tell docker how to prepare the container image you want to create. All you have to do is include the instructions in the file and docker does the rest.</P> <P>One of the best practices when creating a dockerfile is to start a new container, run the commands you want to use to prepare the container image, and then use those same commands in your dockerfile. That reduces the chances of errors when writing your dockerfile.</P> <P>To run a new container based on, let's say, the Server Core base container image, you can use:</P> <P>&nbsp;</P> <LI-CODE lang="markup">docker run --entrypoint powershell -it --name testcontainer</LI-CODE> <P>&nbsp;</P> <P>We'll save the details of the command above for another blog post. For now, you need to know that this command will create a new container based off the Server Core base container image and open an interactive PowerShell session.
This way, you can run all the commands you want and validate that 1) they run as expected and 2) the series of commands prepares the image the way you want.</P> <P>&nbsp;</P> <P><FONT size="5">Escaping double quotes</FONT></P> <P>One of the issues with the process above is that on PowerShell you'll use double quotes (") to indicate a value that has spaces between words. For example, if you're managing an IIS website via PowerShell, you'll use:</P> <P>&nbsp;</P> <LI-CODE lang="markup">New-WebApplication "TestWebApp" -Site "Default Web Site" -ApplicationPool "DefaultAppPool" -PhysicalPath "C:\TestWebApp"</LI-CODE> <P>&nbsp;</P> <P>However, when you transfer this to the dockerfile, you need to change the double quotes in "Default Web Site" to single quotes: 'Default Web Site'. Otherwise, the docker build command will exit with an error saying that Web Site" is not a recognized command. The root cause is that on Windows the shell form of a RUN instruction is processed by cmd.exe first, and cmd.exe's quote handling breaks the nested double quotes before PowerShell ever sees them.</P> <LI-CODE lang="markup">New-WebApplication "TestWebApp" -Site 'Default Web Site' -ApplicationPool "DefaultAppPool" -PhysicalPath "C:\TestWebApp"</LI-CODE> <P>&nbsp;</P> <P>I confess I had to dedicate some time to understanding what was happening when I hit this the first time. The problem is that everything works fine in PowerShell, even inside an interactive session in the container, but fails in the dockerfile when docker build is executed. In fact, Docker documented it <A href="#" target="_blank" rel="noopener">here</A>.</P> <P>&nbsp;</P> <P>I hope this helps you folks out there trying containers for the first time! I have much more to share, so stay tuned!</P> Wed, 02 Oct 2019 23:44:41 GMT Vinicius Apolinario 2019-10-02T23:44:41Z Schedule Builder is live - Here are the Virtualization sessions at Microsoft Ignite <P>Microsoft Ignite is right around the corner and today the <A href="#" target="_blank" rel="noopener">Schedule Builder</A> tool went live. That means you can now add sessions to your agenda and better prepare for a week full of content!
To help you with this, we listed the sessions being delivered by the Virtualization team (includes Hyper-V, security, and containers):</P> <TABLE width="853"> <TBODY> <TR> <TD width="127"> <P><STRONG>Session Code</STRONG></P> </TD> <TD width="330"> <P><STRONG>Session Title</STRONG></P> </TD> <TD width="183"> <P><STRONG>Speaker(s)</STRONG></P> </TD> <TD width="213"> <P><STRONG>Time</STRONG></P> </TD> </TR> <TR> <TD width="127"> <P>BRK3176</P> </TD> <TD width="330"> <P>Windows container and the Azure Kubernetes Service</P> </TD> <TD width="183"> <P>Taylor Brown, Weijuan Davis</P> </TD> <TD width="213"> <P>Tuesday, November 5</P> <P>1:00 PM - 1:45 PM</P> </TD> </TR> <TR> <TD width="127"> <P>BRK3193</P> </TD> <TD width="330"> <P>Maximize security with Windows Server 2019 and Azure</P> </TD> <TD width="183"> <P>Ryan Puffer</P> </TD> <TD width="213"> <P>Thursday, November 7</P> <P>10:30 AM - 11:15 AM</P> </TD> </TR> <TR> <TD width="127"> <P>BRK3173</P> </TD> <TD width="330"> <P>Hyper-V roadmap</P> </TD> <TD width="183"> <P>Ben Armstrong</P> </TD> <TD width="213"> <P>Thursday, November 7</P> <P>1:00 PM - 1:45 PM</P> </TD> </TR> <TR> <TD width="127"> <P>BRK2147</P> </TD> <TD width="330"> <P>Modernize your IT environment and applications with Windows Containers</P> </TD> <TD width="183"> <P>Craig Wilhite, Vinicius Apolinario</P> </TD> <TD width="213"> <P>Thursday, November 7</P> <P>2:15 PM - 3:00 PM</P> </TD> </TR> <TR> <TD width="127"> <P>THR2191</P> </TD> <TD width="330"> <P>Navigate common pitfalls encountered when containerizing Windows Server applications</P> </TD> <TD width="183"> <P>Amber Guo</P> </TD> <TD width="213"> <P>Friday, November 8</P> <P>10:10 AM - 10:30 AM</P> </TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <P>We're preparing a lot of great content - and more importantly, lots of demos :) - so be sure to add these sessions!
See you at Ignite!</P> Wed, 02 Oct 2019 22:13:01 GMT Vinicius Apolinario 2019-10-02T22:13:01Z Windows base container image now cached into Azure VMs <P>Image pull is a major concern in container environments – when a new image is needed to deploy a container-based service, the pull can take a long time and impact the availability of the service in cases where a new container host is needed to support an application. With that in mind, a while ago we started to offer a new Azure VM image:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Windows container image_img01.png" style="width: 999px;"><img src=";px=999" role="button" title="Windows container image_img01.png" alt="Windows container image_img01.png" /></span></P> <P>&nbsp;</P> <P>In addition to having the Container feature enabled and Docker pre-configured, the Nano Server and Server Core base container images are also pre-cached inside this Azure VM image – so when you need to spin up a container based off these container images, the process is faster than if you had to pull them from the Microsoft Container Registry (MCR).</P> <P>With the addition of the <A href="" target="_blank" rel="noopener">Windows container image</A>, customers asked us to update this Azure VM image so they could have the same experience with this additional base container image. Finally, we did it:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Windows container image_img02.png" style="width: 999px;"><img src=";px=999" role="button" title="Windows container image_img02.png" alt="Windows container image_img02.png" /></span></P> <P>&nbsp;</P> <P>One important note: this applies only to the non-[smalldisk] VM images above. The [smalldisk] images have 30GB OS disks and cannot host the Windows base container image.
If you want to use these VM images with the Windows base container image, we recommend you add a data disk to your VM to host it.</P> <P>Using these VM images already? <A href="" target="_blank" rel="noopener">Tell us</A> what your experience has been so far!</P> <P>Want to give it a try? Check out how to <A href="#" target="_blank" rel="noopener">get started with Azure for free</A>!</P> Wed, 24 Jul 2019 16:56:52 GMT Vinicius Apolinario 2019-07-24T16:56:52Z Windows Server, version 1903 Now Available, Windows Server Containers in AKS in Preview <P><STRONG>Windows Server, version 1903 is Generally Available</STRONG></P> <P>We are excited to announce that Windows Server, version 1903 is now <A href="#" target="_self">generally available</A>. Container-related enhancements include <A href="" target="_blank" rel="noopener">GPU acceleration in Windows containers</A> and scalability improvements in the latest release of Flannel and Kubernetes v1.14. You can pull the new Windows Server, version 1903 container images via:</P> <P>&nbsp;</P> <PRE>docker pull
docker pull
docker pull </PRE> <P>&nbsp;</P> <P><STRONG>Windows Server Containers Support in Azure Kubernetes Service </STRONG></P> <P>Following the release of <A href="#" target="_blank" rel="noopener">production-level support for Windows Server containers</A> in Kubernetes comes our announcement that <STRONG>Windows Server containers support in Azure Kubernetes Service (AKS) is now in public preview! </STRONG>Check out preview details in our <A href="#" target="_blank" rel="noopener">announcement</A>.</P> <P>&nbsp;</P> <P>To better support AKS, Azure Monitoring has also announced several new container insight features. Details can be found in our <A href="#" target="_blank" rel="noopener">documentation</A>. Azure Monitoring is one example of our efforts to provide the best experience for our container customers.
Read more about these efforts in our <A href="" target="_blank" rel="noopener">container tooling blog post</A>, and fill out this <A href="#" target="_blank" rel="noopener">survey</A> to help us continue to improve our tooling offerings.</P> <P>&nbsp;</P> <P><STRONG>FAQ</STRONG></P> <P><STRONG>Q: </STRONG>I am seeing “Error response from daemon: manifest not found”</P> <P><STRONG>A: </STRONG>You will see this error if you try to pull an image using :latest or without specifying a tag, as we have deprecated and removed the ‘latest’ tag on Windows base images to support improved container practices. Instead, declare a specific tag like :1903. More information on tags is available in this <A href="" target="_blank" rel="noopener">blog post</A>.</P> <P>&nbsp;</P> <P><STRONG>Q: </STRONG>Where can I find Windows Server container images?</P> <P><STRONG>A: </STRONG>Start with the <A href="#" target="_blank" rel="noopener">Windows base image product family repo</A> on Docker Hub.</P> <P>&nbsp;</P> <P>For more information, please visit our Windows Server container documentation at <A href="#" target="_blank" rel="noopener"></A>.</P> <P>&nbsp;</P> Wed, 22 May 2019 21:40:26 GMT markosmezgebu 2019-05-22T21:40:26Z Announcing: Container Log Tooling and Survey <P>Update: The <SPAN>Windows Containers Log Monitor has been released as open source&nbsp;</SPAN><A href="#" target="_blank" rel="noopener noreferrer">on GitHub.</A>&nbsp;More information is included in this <A href="" target="_self">recent blog post announcing the release.</A></P> <P>&nbsp;</P> <P>_____________</P> <P>&nbsp;</P> <P>Help us improve your experiences with Windows Containers by filling out <A href="#" target="_blank" rel="noopener">this survey</A>.</P> <P>&nbsp;</P> <P><FONT size="3" color="#000000">Session recording: <A href="#" target="_self">[18:00 - 31:00] Tooling</A></FONT></P> <P>&nbsp;</P> <P>&nbsp;</P> <P>As containers have become increasingly popular, the demand for better tooling to help developers and their organizations containerize their applications has grown. At Microsoft, we are kicking off efforts to improve the tooling experience with containers, from developers to ITPros, on-premises and in the cloud, Windows or Linux, and beyond. This blog post offers a deep-dive look into one of the first areas we decided to tackle – Logging.</P> <P>&nbsp;</P> <P>Logs are one of the central methods used for debugging software. They are used to diagnose incidents, understand KPIs such as performance, reliability, and availability, as well as observe user statistics and access patterns. However, while useful, logs tend to be scattered in different locations, are unstructured, and come in wildly different formats. While locating, gathering, and parsing log content is complex even for host processes, doing the same for containers adds complexity due to the extra isolation and virtualization layer.
Fortunately, many tools have been developed to work with containers; however, many of these tools do not have sufficient log access for Windows Containers.</P> <P>&nbsp;</P> <P>Docker logs and many container ecosystem monitoring solutions rely on STDOUT as the pipeline to receive logs and information from the processes running inside the container. This fits in line with the Linux application model, which directs all log output to STDOUT; however, Windows applications and services do not output to STDOUT, and instead deposit information in Windows facilities such as ETW, performance counters, custom log files, etc. As a result, much of the wealth of logging information produced by Windows systems in containers is inaccessible to most container ecosystem logging solutions.</P> <P>&nbsp;</P> <P>The goal of our proof-of-concept tool is to address this accessibility gap between the logs produced within the Windows container and the container ecosystem tools that pull from STDOUT. This tool is designed as a container entry point executable with the following behavior:</P> <OL> <LI>Monitor container services</LI> <LI>Observe container ETW events</LI> <LI>Observe container Perf Counters</LI> <LI>Observe container custom application log files</LI> <LI>Tail observed container log sources to STDOUT</LI> </OL> <P style="text-align: center;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="image.png" style="width: 632px;"><img src=";px=999" role="button" title="image.png" alt="image.png" /></span></P> <P>The POC is authored with an observer (publisher-subscriber) design pattern. Loggers subscribe to providers, and each provider watches a data source and publishes events to loggers upon discovering new data.
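As a rough sketch of that publisher-subscriber shape (purely illustrative - the class and source names here are made up, not taken from the actual Log Monitor code):

```python
# Minimal observer/pub-sub sketch: providers watch a log source and push new
# records to every subscribed logger (all names are illustrative).
class StdoutLogger:
    def __init__(self):
        self.lines = []
    def publish(self, source, record):
        # In the real tool this would be written to STDOUT for docker logs
        self.lines.append(f"[{source}] {record}")

class LogProvider:
    def __init__(self, source):
        self.source = source
        self.subscribers = []
    def subscribe(self, logger):
        self.subscribers.append(logger)
    def emit(self, record):
        # Called when the watched data source has new data
        for logger in self.subscribers:
            logger.publish(self.source, record)

logger = StdoutLogger()
etw = LogProvider("ETW")
perf = LogProvider("PerfCounter")
for provider in (etw, perf):
    provider.subscribe(logger)

etw.emit("service started")
perf.emit("cpu=12%")
print(logger.lines)  # → ['[ETW] service started', '[PerfCounter] cpu=12%']
```

The key property is that new data sources only require a new provider and new destinations only a new logger; neither side needs to know about the other's internals.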
This model provides touchpoints to scale the tool to additional data sources and output destinations as desired.</P> <P>&nbsp;</P> <P style="text-align: center;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Capture.PNG" style="width: 400px;"><img src=";px=400" role="button" title="Capture.PNG" alt="Capture.PNG" /></span></P> <P>The POC fits into the larger ecosystem by giving existing tooling access to Windows logs via STDOUT. One such tooling stack is the popular Fluentd, Elasticsearch, Kibana (EFK) stack. The tool also integrates with other container monitoring solutions such as Azure Monitoring.</P> <P>&nbsp;</P> <P>The tool integrates with the EFK stack as below:</P> <P style="text-align: center;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Capture2.PNG" style="width: 999px;"><img src=";px=999" role="button" title="Capture2.PNG" alt="Capture2.PNG" /></span></P> <P>A deployment of an application with the logging tool on AKS with Azure Monitoring would display as below:</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="tmp.jpg" style="width: 999px;"><img src=";px=999" role="button" title="tmp.jpg" alt="tmp.jpg" /></span></P> <P>&nbsp;</P> <P>This POC has some known design limitations:</P> <OL> <LI>The tool does not collect from all critical log sources (e.g., IIS logs, Event Logs, etc.)</LI> <LI>To redirect to STDOUT, the tool must run as the foreground entry point process. This prevents developers from wrapping this entry point within their application, as the tool would lose the handle to STDOUT.</LI> </OL> <P>This POC log tool is the first step Microsoft is taking toward improving the tooling experience with Windows Containers. We are still in the process of iterating on this tool and extending our efforts into other tooling areas. We are seeking feedback on this concept and would appreciate responses to the survey below.
At the end of the survey, there is a field to leave contact information if you would like further follow-up from the team.</P> <P>&nbsp;</P> <P><A href="#" target="_blank" rel="noopener"><STRONG>Survey</STRONG></A></P> Tue, 03 Dec 2019 15:37:31 GMT ambguo 2019-12-03T15:37:31Z Bringing GPU acceleration to Windows containers <P>At the release of Windows Server 2019 last year, we <A href="" target="_blank" rel="noopener">announced support for a set of hardware devices</A> in Windows containers. One popular type of device missing support at the time: GPUs. We’ve heard frequent feedback that you want hardware acceleration for your Windows container workloads, so today, we’re pleased to announce the first step on that journey: starting in Windows Server 2019, we now support GPU acceleration for DirectX-based apps and frameworks in Windows containers.</P> <P>&nbsp;</P> <P>The best part is, you can use the Windows Server 2019 build you have today—no new OS patches or configuration is necessary. All you need is a new build of Docker and the latest display drivers. Read on for detailed requirements and to learn how you can get started with GPU accelerated DirectX in Windows containers today.</P> <P>&nbsp;</P> <H2>Background: Why GPU acceleration?</H2> <P>&nbsp;</P> <P>Containers are an excellent tool for packaging and deploying many kinds of workloads. For many of these, traditional CPU compute resources are sufficient. However, for a certain class of workload, the massively parallel compute power offered by GPUs (graphics processing units) can speed up operations by orders of magnitude, bringing down cost and improving throughput immensely.</P> <P>&nbsp;</P> <P>GPUs are already a common tool for many popular workloads, from traditional rendering and simulation to machine learning training and inference.
With today’s announcement, we’re unlocking new app scenarios for Windows containers and enabling more applications to be successfully shifted into Windows containers.</P> <P>&nbsp;</P> <H2>GPU-accelerated DirectX, Windows ML, and more</H2> <P>&nbsp;</P> <P>For some users, DirectX conjures associations with gaming. But DirectX is about more than games—it also powers a large ecosystem of multimedia, design, computation, and simulation frameworks and applications.</P> <P>&nbsp;</P> <P>As we looked at adding GPU support to Windows containers, it was clear that starting with the DirectX APIs—the foundation of accelerated graphics, compute, and AI on Windows—was a natural first step.</P> <P>&nbsp;</P> <P>By enabling GPU acceleration for DirectX, we’ve also enabled GPU acceleration for the frameworks built on top of it. One such framework is <A href="#" target="_blank" rel="noopener">Windows ML</A>, a set of APIs providing fast and efficient AI inferencing capabilities. With GPU acceleration in Windows containers, developers now have access to a first-class inferencing runtime that can be accelerated across a broad set of capable GPU acceleration hardware.<span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="overview-diagram.png" style="width: 329px;"><img src=";px=400" role="button" title="overview-diagram.png" alt="overview-diagram.png" /></span></P> <P>&nbsp;</P> <H2>Usage</H2> <P>&nbsp;</P> <P>On a system meeting the requirements (see below), start a container with hardware-accelerated DirectX support by specifying the <STRONG>--device</STRONG> option at container runtime, as follows:</P> <P>&nbsp;</P> <PRE>docker run --isolation process --device class/5B45201D-F2F2-4F3B-85BB-30FF1F953599 &lt;your Docker image&gt;</PRE> <P>&nbsp;</P> <P>Note that this does not assign GPU resources <EM>exclusively</EM> to the container, nor does it prevent GPU access on the host. 
Rather, GPU resources are scheduled dynamically across the host and containers in much the same way as they are scheduled among apps running on your personal device today. You can have several Windows containers running on a host, each with hardware-accelerated DirectX capabilities.</P> <P>&nbsp;</P> <H2>Requirements</H2> <P>&nbsp;</P> <P>For this feature to work, your environment must meet the following requirements:</P> <UL> <LI>The container host must be running Windows Server 2019 or Windows 10, version 1809 or newer.</LI> <LI>The container base image must be <A href="#" target="_blank" rel="noopener"></A> or newer. Windows Server Core and Nano Server container images are not currently supported.</LI> <LI>The container must be run in process isolation mode. Hyper-V isolation mode is not currently supported.</LI> <LI>The container host must be running Docker Engine 19.03 or newer.</LI> <LI>The container host must have a GPU running display drivers version WDDM 2.5 or newer.</LI> </UL> <P>To check the WDDM version of your display drivers, run the DirectX Diagnostic Tool (dxdiag.exe) on your container host. In the tool’s “Display” tab, look in the “Drivers” section as indicated below.</P> <P>&nbsp;</P> <P style="text-align: center;"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="dxdiag.png" style="width: 999px;"><img src=";px=999" role="button" title="dxdiag.png" alt="dxdiag.png" /></span></P> <P>&nbsp;</P> <H2>Getting started</H2> <P>&nbsp;</P> <P>Operating system support for this feature is already complete and broadly available as part of Windows Server 2019 and Windows 10, version 1809. Formal Docker support is scheduled for the upcoming Docker EE Engine 19.03 release. Until then, if you’re eager to try out the feature early, you can check out our <A href="#" target="_blank" rel="noopener">sample on GitHub</A> and follow the README instructions to get started. 
We’ll show you how to acquire a nightly build of Docker and use it to run a containerized Windows ML inferencing app with GPU acceleration.</P> <P>&nbsp;</P> <H2>Going forward</H2> <P>&nbsp;</P> <P>We look forward to getting your feedback on this experience. Please leave a comment below or tweet us with your thoughts. What are the next things you’d like to be able to do with GPU acceleration in containers on Windows?</P> <P>&nbsp;</P> <P>Cheers,</P> <P>Rick Manning, Graphics PM</P> <P><A href="#" target="_blank" rel="noopener">@CraigWilhite</A>, Windows Container PM</P> <P>&nbsp;</P> Wed, 10 Apr 2019 15:11:05 GMT Craig Wilhite 2019-04-10T15:11:05Z Removing the ‘latest’ Tag + An Update on MCR <P><STRONG><FONT size="5">Removing the 'latest' tag</FONT></STRONG></P> <P>&nbsp;</P> <P>With our<SPAN>&nbsp;</SPAN><A href="" target="_blank" rel="noopener">announcement</A>&nbsp;of the availability of Windows Server 2019, we informed you that the ‘latest’ tag was deprecated and would be removed from the<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener nofollow noopener noreferrer">Windows base OS image repos</A>. We have already deprecated the ‘latest’ tag and have not been updating it to point to the latest version of an image.<SPAN>&nbsp;</SPAN><STRONG>The next step is for us to remove it from the list of available tags</STRONG>.<SPAN>&nbsp;</SPAN><STRONG>We will be removing 'latest' from the list of available tags on April 16, 2019.</STRONG></P> <P>&nbsp;</P> <P>Removing the tag will prevent scenarios where users end up downloading an image based on a version of Windows that is several versions older than they had expected. 
For instance, say you’re using Windows Server 2019 and want to pull the Server Core container base image with the following pull command:</P> <P>&nbsp;</P> <PRE>docker pull</PRE> <P>&nbsp;</P> <P>Because a tag wasn’t specified in the pull command, the Docker client will default to pulling whichever version of the image is tagged :latest, which in this case is a Server Core image based on Windows Server 2016. This scenario is not ideal and would also occur each time someone explicitly used :latest in their pull command.</P> <P>&nbsp;</P> <P>After the removal, pulling :latest will result in the following behavior:</P> <P>&nbsp;</P> <PRE># Try pulling with latest:
docker pull

# Outcome:
Error response from daemon: manifest for not found</PRE> <P>&nbsp;</P> <P><EM>Note that the default pull command displayed in the upper-right corner of the Docker Hub repos under “Copy and paste to pull this image” will also result in the above behavior since they are untagged (as of April 2, 2019). We are working towards improving that experience.</EM></P> <P>&nbsp;</P> <P><STRONG>Instead of using :latest or untagged pull commands, please declare</STRONG> <STRONG>the specific container tag (such as :1709, :1803, :1809, :ltsc2019, etc.)
you would like to run in production.</STRONG> You can find our full tag listings at each image’s<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener nofollow noopener noreferrer">Docker Hub Repo</A>.</P> <P>&nbsp;</P> <P><FONT size="5"><STRONG>MCR is the exclusive download source for new tags</STRONG></FONT></P> <P>&nbsp;</P> <P>Also announced in our November<SPAN>&nbsp;</SPAN><A href="" target="_blank" rel="noopener">blog post</A><SPAN>&nbsp;</SPAN>was the news that <STRONG>Microsoft Container Registry has become the official host for Microsoft-published container images, while Docker Hub remains the official source for customers to discover and acquire Microsoft-published container images.</STRONG></P> <P>&nbsp;</P> <P>This is an example of how a pull reference should change to point to the MCR source:</P> <P>&nbsp;</P> <PRE># Here’s the old string for pulling a container
docker pull microsoft/windowsservercore:ltsc2016

# Change the string to the new syntax and use the same tag
docker pull</PRE> <P>&nbsp;</P> <P>In January, MCR<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener noopener noreferrer">announced</A><SPAN>&nbsp;</SPAN>that as a part of the migration,<SPAN>&nbsp;</SPAN><STRONG>existing tags from the original Docker-hosted source would also be available from the MCR source.</STRONG><SPAN>&nbsp;</SPAN><STRONG>However, new tags, beginning with tags created for the Windows Server 2019/1809 container images, can only be pulled/referenced by including the MCR repo prefix (see example above) in the Docker pull command or Dockerfile reference.
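As a concrete illustration of the prefix change, a pull against the MCR source takes the shape below. The image path and tag are shown as a typical example of the published MCR naming scheme; check each repo's tag list for what is actually available.

```shell
# Old, Docker Hub-prefixed form:
docker pull microsoft/windowsservercore:ltsc2016

# New, MCR-prefixed form; new tags such as ltsc2019 are published only here:
docker pull mcr.microsoft.com/windows/servercore:ltsc2019
```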
The appropriate pull commands are reflected on Docker Hub today.</STRONG></P> <P>&nbsp;</P> <P>As we suggested in November, you should still change your pull/Dockerfile references to the MCR source, even if you’re using a container based on earlier versions of Windows Server.</P> <P>&nbsp;</P> <P>For more information on the move to MCR and its value proposition, check out Steve Lasker’s<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener noopener noreferrer">blog post</A>.</P> <P>&nbsp;</P> <P><FONT size="5"><STRONG>Conclusion</STRONG></FONT></P> <P>&nbsp;</P> <P>For more information, please visit our container docs at&nbsp;<A href="#" target="_blank" rel="noopener noopener noreferrer"></A>&nbsp;as well as our other<SPAN>&nbsp;</SPAN><A href="" target="_blank" rel="noopener">blog</A>&nbsp;on container tagging changes from earlier this year. Let us know your thoughts in the comments below.</P> <P>&nbsp;</P> <P>Best,</P> <P>Markos</P> Wed, 26 Jun 2019 21:49:51 GMT markosmezgebu 2019-06-26T21:49:51Z What's new for container identity <P><FONT size="3"><SPAN>Identity is a crucial component of any application. Whether you’re authenticating users on a web app or trying to query data from a back-end server, chances are you’ll need to integrate with an identity provider. Containerized applications are no exception, which is why we’ve included support for Active Directory identities in Windows Containers from the beginning. 
Now that Windows Server 2019 has been released, we’d like to show you what we’ve been working on over the last three years to make Windows container identity easier and more reliable.</SPAN></FONT></P> <P>&nbsp;</P> <P><FONT size="3"><SPAN>If you’d like to jump straight into the documentation on container identity, head on over to&nbsp;<A href="#" target="_self"></A></SPAN></FONT></P> <P>&nbsp;</P> <P><FONT size="5"><SPAN>Improved Reliability</SPAN></FONT></P> <P>When we launched support for containers in Windows Server 2016, we set off on an adventure to redefine how people manage their apps. One of those innovations was the use of a group managed service account (gMSA) to replace the computer identity in containers. Before containers were a thing, you would typically domain-join your computer and use its implicit identity or a service account to run the app. With containers, we wanted to avoid the complexity of domain join since it would quickly become difficult to manage short-lived computer objects in Active Directory. But we knew apps would still need to use AD identities, so we came up with a solution to assign a gMSA to the container computer account at runtime. This gave the container a similar experience to being domain joined, but let multiple containers use the same identity and avoided having to store sensitive credentials in the container image.</P> <P>&nbsp;</P> <P>As more customers started using gMSA with a wide variety of applications, we identified two issues that affected the reliability of gMSA with containers:</P> <P>&nbsp;</P> <OL> <LI>If the hostname of the container did not match the gMSA name, certain functionality like inbound NTLM authentication and ASP.NET Membership role lookups would fail.
This was an easy doc fix, but led to a new problem…</LI> <LI>When multiple containers used the same hostname to talk to the same domain controller, the last container would supersede the others and terminate their connections, resulting in random authentication failures.</LI> </OL> <P>To address these issues, we changed how the container identifies itself on the network to ensure it uses its gMSA name for authentication regardless of its hostname and made sure multiple connections with the same identity are properly supported. All you need to do to take advantage of this new behavior is upgrade your container host and images to Windows Server 2019 or Windows 10 version 1809.</P> <P>&nbsp;</P> <P>Additionally, if you were unable to use gMSA identities with Hyper-V isolated containers in Windows versions 1703, 1709, and 1803, you’ll be glad to know that we’ve fixed the underlying issue in Windows Server 2019 and Windows 10 version 1809. If you can’t upgrade to the latest version of Windows, you can also use gMSAs with Hyper-V isolation on Windows Server 2016 and Windows 10 version 1607.</P> <P>&nbsp;</P> <P><FONT size="5">Better docs and tooling</FONT></P> <P>We’ve invested in improving our documentation to make it easier for you to get started using gMSAs with your Windows containers. From creating your first gMSA account and updating your Dockerfile to help your app use the gMSA, to troubleshooting tips for when things go wrong, you’ll find it all at<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"></A>.</P> <P>&nbsp;</P> <P>As part of the documentation upgrade, we’ve also made it easier to get the Credential Spec PowerShell module.
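To sketch how the pieces fit together at run time: a credential spec JSON generated for the gMSA is handed to the container when it starts. The account, file, and image names below are illustrative, not from this post:

```shell
# Run a container under the gMSA described by WebApp01.json
# (a credential spec generated beforehand, e.g. with the
# CredentialSpec PowerShell module's New-CredentialSpec cmdlet).
docker run --security-opt "credentialspec=file://WebApp01.json" \
  mcr.microsoft.com/windows/servercore:ltsc2019
```

On Windows Server 2019 and Windows 10 version 1809, the container hostname no longer has to match the gMSA name for authentication to work, per the reliability fix described above.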
The<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">source code</A><SPAN>&nbsp;</SPAN>still lives on GitHub, but you can now easily download it from the<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">PowerShell Gallery</A><SPAN>&nbsp;</SPAN>by running<SPAN>&nbsp;</SPAN><STRONG>Install-Module CredentialSpec</STRONG>. There are also a few improvements under the hood, including better support for child domains and improved validation of the account information.</P> <P>&nbsp;</P> <P><FONT size="5">Kubernetes Support</FONT></P> <P><FONT size="3">F<SPAN>inally, we’re excited to announce that alpha support for gMSA with Windows containers is shipping with Kubernetes version 1.14! Kubernetes takes care of copying credential specs automatically to worker nodes and adds role-based access controls to limit which gMSAs can be scheduled by users. While gMSA support is not yet ready for production use, you can try it by enabling alpha features as described in the&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">Kubernetes gMSA docs</A><SPAN>.</SPAN></FONT></P> <P>&nbsp;</P> Thu, 28 Mar 2019 21:15:44 GMT RyanWillis 2019-03-28T21:15:44Z Updates to Our Container Tagging Format <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jan 26, 2019 </STRONG> <BR /> We're introducing&nbsp;several minor changes to the tagging format of Windows containers. These changes will begin taking effect next Patch Tuesday, February 12, 2019. <BR /> <H2> Build Number Tags </H2> <BR /> We’re re-introducing a build tag across all Windows base images. This will complement the existing KB-formatted tag and release channel tags. Here’s an example: <BR /> #The equivalent build number for the latest KB4480116, which went live for January Patch Tuesday: <BR /> docker pull <BR /> We released Windows Server 2016 container images with two tag forms. The first was the release channels, ltsc2016 and sac2016. The second was a build number.
With the release of Windows Server, version 1709, we moved away from the build number to a Knowledge Base number system. That way, users could search the KB number and find the associated <A href="#" target="_blank"> support article </A> to understand what patches were in the image. <BR /> <BR /> Feedback from users indicated that the KB tag alone was not clear enough in communicating which image was in fact newer or whether the container image was an exact match to their host OS version. We believe users will have an easier time distinguishing these things with the reintroduction of the build tag. <BR /> <H2> Consistent Tags Across all Microsoft Repos </H2> <BR /> To have consistency across all Microsoft container repos, going forward, all container image tags that were previously published with an underscore will now be published with a hyphen. Example: <BR /> #This is how it was before: <BR /> docker pull <BR /> <BR /> #This is how it will be going forward: <BR /> docker pull <BR /> This change will affect all image tags published going forward. Old tags that used an underscore will continue to exist. <BR /> <BR /> We’re also updating our arch-specific ARM images to align with the rest of the Docker community, changing from a tag of ‘arm’ for 32-bit arm to ‘arm32v7’. As an example of this change: <BR /> #This is how it was before: <BR /> docker pull <BR /> <BR /> #This is how it will be going forward: <BR /> docker pull <BR /> <BR /> <H2> Container Repo Structure </H2> <BR /> In related news, Docker <A href="#" target="_blank"> introduced </A> changes to Docker Hub. Docker Hub is the home to our new repository structure. <BR /> <BR /> <IMG src="" /> <BR /> <BR /> The new structure allows us to have a repository family page—the “Windows base OS Images”—displayed above. From there, the family page links to the individual repos for all Windows container base images and points users to related repos. 
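Putting the tagging changes together, pulls under the new conventions look roughly like the sketch below. The tags shown are representative examples of the formats described above, not a guaranteed listing; consult each repo's tag list for what is actually published.

```shell
# Build-number tag, reintroduced alongside the KB-formatted tag:
docker pull mcr.microsoft.com/windows/servercore:10.0.17763.253

# KB-formatted tag, now hyphenated instead of underscored:
docker pull mcr.microsoft.com/windows/servercore:1809-KB4480116

# 32-bit ARM images now follow the Docker community's arm32v7 naming:
docker pull mcr.microsoft.com/windows/nanoserver:1809-arm32v7
```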
You can read more in a <A href="#" target="_blank"> blog post </A> from Rohit Tatachar, PM for Container Registry. <BR /> <H2> Conclusion </H2> <BR /> For more information, please visit our container docs at <A href="#" target="_blank"> </A> . Let us know your thoughts about the new repo structure in the comments below! </BODY></HTML> Fri, 22 Mar 2019 00:18:31 GMT Virtualization-Team 2019-03-22T00:18:31Z KubeCon, Windows Containers on Kubernetes, and 101 Materials for Your Holiday Reading... <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Dec 15, 2018 </STRONG> <BR /> <IMG src="" /> <IMG src="" /> <BR /> <BR /> I attended <A href="#" target="_blank"> KubeCon </A> earlier this week in Seattle and had a lot of fun there. It was eye-opening to see the vibrant community there. The energy is enormous. I feel very fortunate to witness this part of history. Windows container presence on the showcase floor or in sessions appeared small, but the Azure booth was busy with lots of customers asking about them. I am as excited and proud as the rest of the container ecosystem that I can play a role at the core of Windows container technology to make a difference: building and growing <EM> this </EM> community step by step. And we definitely need a lot of you to join us in this journey! <BR /> <BR /> In the <A href="#" target="_blank"> Sig-Windows </A> maintainers meetup on Wednesday, hosted by the Co-Chairs Patrick Lang and Michael Michael (see pictures above), I was very happy to meet some keen customers who have been testing or even deploying Windows containers in production. I have also seen growing interest in other recent customer meetings. I thought I should share a compiled list of info related to Windows containers and Kubernetes we have today so we can all learn together. In addition, lots of you have also asked for some Windows container 101 materials.
There are tons of materials out there. The sessions our team presented at the Microsoft Ignite conference back in September are good starters. I added notes to some interesting demos embedded in our sessions. The list below is not meant to be a full list but a good holiday reading list in case you get bored opening presents :). <BR /> <BR /> <STRONG> KubeCon 2018 Sessions: </STRONG> <BR /> <UL> <BR /> <LI> <A href="#" target="_blank"> Tutorial “Deploying Windows Apps with Kubernetes, Draft and Helm” </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Breakout Session "Understanding Windows Container Networking in Kubernetes" </A> (in fact it’s a KubeCon Shanghai session!) </LI> <BR /> </UL> <BR /> <STRONG> General Docs: </STRONG> <BR /> <UL> <BR /> <LI> Windows containers docs: <A href="#" target="_blank"> </A> . <BR /> <UL> <BR /> <LI> This is the portal to all docs. If you see anything incorrect or missing, let us know or contribute directly! </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> <UL> <BR /> <LI> Windows on Kubernetes: <BR /> <UL> <BR /> <LI> Get-started docs: <A href="#" target="_blank"> </A> </LI> <BR /> <LI> Windows-SIG in Kubernetes community: <A href="#" target="_blank"> </A> </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> <STRONG> Windows Container 101 Sessions from Ignite 2018: </STRONG> <BR /> <UL> <BR /> <LI> BRK2234 - Getting started with Windows Server containers in Windows Server 2019: <BR /> <UL> <BR /> <LI> <A href="#" target="_blank"> Video </A> <BR /> <UL> <BR /> <LI> 23:10: High level intro on container identity/gMSA with a demo.
</LI> <BR /> <LI> 36:00: a quick demo on Docker for Windows with local Kubernetes supporting Windows &amp; Linux containers side by side </LI> <BR /> </UL> <BR /> </LI> <BR /> <LI> <A href="#" target="_blank"> Slides </A> </LI> <BR /> </UL> <BR /> </LI> <BR /> <LI> BRK2237 - From Ops to DevOps with Windows Server containers and Windows Server 2019 <BR /> <UL> <BR /> <LI> <A href="#" target="_blank"> Video </A> <BR /> <UL> <BR /> <LI> 45:30: Orchestrators including Kubernetes </LI> <BR /> <LI> 1:00:00: Windows Admin Center &amp; Container demo </LI> <BR /> </UL> <BR /> </LI> <BR /> <LI> <A href="#" target="_blank"> Slides </A> </LI> <BR /> </UL> <BR /> </LI> <BR /> <LI> BRK2236 - Take the next step with Windows Server container orchestration <BR /> <UL> <BR /> <LI> <A href="#" target="_blank"> Video </A> </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> You can search for other Ignite sessions here: <A href="#" target="_blank"> </A> <BR /> <BR /> Happy Reading, Happy Holidays! <BR /> <BR /> Weijuan <BR /> <BR /> @WeijuanLand <BR /> <BR /> </BODY></HTML> Fri, 22 Mar 2019 00:18:16 GMT Virtualization-Team 2019-03-22T00:18:16Z Windows Server 2019 Now Available <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Nov 13, 2018 </STRONG> <BR /> <H2> Introduction </H2> <BR /> Windows Server 2019 is once again <A href="#" target="_blank"> generally available </A> . You can pull the new Windows Server 2019 images—including the new ‘Windows’ base image—via: <BR /> <BR /> <CODE> docker pull <BR /> docker pull <BR /> docker pull </CODE> <BR /> <BR /> Just like the Windows Server 2016 release, the Windows Server Core container image is the only Windows base image in our <A href="#" target="_blank"> Long-Term Servicing Channel </A> . For this image we also have an ‘ltsc2019’ tag available to use: <BR /> <BR /> <CODE> docker pull </CODE> <BR /> <BR /> The Nanoserver and Windows base images continue to be Semi-Annual Channel releases only. 
<BR /> <BR /> <STRONG> (2:02PM PST - All new Windows base images are live) </STRONG> <BR /> <H2> FAQ </H2> <BR /> <STRONG> Q </STRONG> : <EM> I am seeing "no matching manifest for unknown in the manifest list entries" when I try to pull the image. What do I do? </EM> <BR /> <STRONG> A </STRONG> : Users will need to be running the latest version of Windows--Windows Server 2019 or Windows 10, October 2018 update--in order to pull and run the container images. Since older versions of Windows do not support running newer versions of containers, we disallow a user from pulling an image that they could not run. <BR /> <H2> MCR is the De Facto container source </H2> <BR /> You can now <STRONG> pull any Windows base image:tag combination from the MCR (Microsoft Container Registry) </STRONG> . Whether you’re using a container based on the Windows Server 2016 release, version 1709, version 1803 or any tag in between, you should change your container pull references to the MCR source. Example: <BR /> <BR /> <CODE> #Here’s the old string for pulling a container <BR /> docker pull microsoft/windowsservercore:ltsc2016 <BR /> docker pull microsoft/nanoserver:1709 </CODE> <BR /> <BR /> <CODE> #Change the string to the new syntax and use the same tag <BR /> docker pull <BR /> docker pull </CODE> <BR /> <BR /> Or, update your dockerfiles to reference the new image location: <BR /> <BR /> <CODE> #Here’s the old string to specify the base image <BR /> FROM microsoft/windowsservercore:ltsc2016 </CODE> <BR /> <BR /> <CODE> #Here’s the new, recommended string to specify your base image. Use whichever tag you’d like <BR /> FROM </CODE> <BR /> <BR /> We want to emphasize the MCR is not the place to browse for container images; it’s where you pull images from. Docker Hub continues to be the preferred medium for container image <EM> discovery </EM> .
Steve Lasker’s <A href="#" target="_blank"> blog post </A> does a great job outlining the unique value proposition the MCR will bring for our customers. <BR /> <BR /> The Windows Server 2019 VM images for the Azure gallery will be rolling out within the next few days and will come packaged with the most up-to-date Windows Server 2019 container images. <BR /> <H2> Deprecating the ‘latest’ tag </H2> <BR /> We are deprecating the ‘latest’ tag across all our Windows base images to encourage better container practices. <STRONG> At the beginning of the 2019 calendar year, we will no longer publish the tag </STRONG> ; we’ll yank it from the available tags list. <BR /> <BR /> We strongly encourage you to instead declare the <STRONG> specific </STRONG> container tag you’d like to run in production. The ‘latest’ tag is the opposite of specific; it doesn’t tell the user anything about what version the container <EM> actually </EM> is apart from the image name. You can read more about version compatibility and selecting the appropriate tag on our <A href="#" target="_blank"> container docs </A> . <BR /> <H2> Conclusion </H2> <BR /> For more information, please visit our container docs at <A href="#" target="_blank"> </A> . What other container topics &amp; content would you like to see covered? Let us know in the comments below or send me a tweet. <BR /> <BR /> Cheers, <BR /> <BR /> Craig Wilhite ( <A href="#" target="_blank"> @CraigWilhite </A> ) </BODY></HTML> Fri, 22 Mar 2019 00:17:50 GMT Virtualization-Team 2019-03-22T00:17:50Z Bringing Device Support to Windows Containers <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Aug 13, 2018 </STRONG> <BR /> When we introduced containers to Windows with the release of Windows Server 2016, our primary goal was to support traditional server-oriented applications and workloads.
As time has progressed and the utility of containers as a technology has become more fully realized, we’re now seeing containers playing an important role in enabling Internet of Things (IoT) workloads <A href="#" target="_blank"> through Azure IoT Edge </A> . We’ve seen new types of workloads getting containerized—workloads that rely on talking to peripheral devices and sensors (a problem for Windows containers). Beginning in Insider Build 17735, to enable new <A href="#" target="_blank"> Windows IoT </A> use cases and transform capabilities for existing server workloads, <STRONG> we’re introducing support for <EM> select </EM> host device access from Windows Server containers (see table below). </STRONG> <BR /> <BR /> We’ve contributed these changes back to the Open Containers Initiative (OCI) specification for Windows. We will be submitting changes to Docker to enable this functionality soon. Watch the video below for a simple example of this work in action (hint: maximize the video). <BR /> <BR /> [video width="1748" height="746"][/video] <BR /> <H3> What's Happening </H3> <BR /> To provide a simple demonstration of the workflow, we have a client application that listens on a COM port and reports incoming integer values (powershell console on the right). We did not have any devices on hand to speak over physical COM, so we ran the application inside of a VM and assigned the VM's virtual COM port to the container. To mimic a COM device, an application was created to generate random integer values and send them over a named pipe to the VM's virtual COM port (this is the powershell console on the left).
<BR /> <BR /> As we see in the video at the beginning, if we do not assign COM ports to our container, when the application runs in the container and tries to open a handle to the COM port, it fails with an IOException (because as far as the container knew, the COM port didn't exist!). On our second run of the container, we assign the COM port to the container and the application successfully gets and prints out the incoming random ints generated by our app running on the host. <BR /> <H3> How It Works </H3> <BR /> Let’s look at how it will work in Docker. From a shell, a user will type: <BR /> <BR /> <CODE> docker run --device="&lt;IdType&gt;/&lt;Id&gt;" &lt;windows container image&gt; </CODE> <BR /> <BR /> For example, if you wanted to pass a COM port to your container: <BR /> <BR /> <CODE> docker run --device="class/86E0D1E0-8089-11D0-9CE4-08003E301F73" </CODE> <BR /> <BR /> The value we’re passing to the <EM> device </EM> argument is simple: it looks for an <STRONG> IdType </STRONG> and an <STRONG> Id. </STRONG> For this coming release of Windows, we only support an IdType of “class”. For Id, this is a <A href="#" target="_blank"> device interface class GUID </A>. The values are delimited by a slash, “/”. Whereas in Linux a user assigns individual devices by specifying a file path in the "/dev/" namespace, in Windows we’re adding support for a user to specify an <EM> interface class </EM>, and all devices which identify as implementing this class will be plumbed into the container. <BR /> <BR /> If a user wants to specify multiple classes to assign to a container: <BR /> <BR /> <CODE> docker run --device="class/86E0D1E0-8089-11D0-9CE4-08003E301F73" --device="class/DCDE6AF9-6610-4285-828F-CAAF78C424CC" --device="…" </CODE> <BR /> <H3> What are the Limitations?
</H3> <BR /> <STRONG> Process isolation only: </STRONG> We only support passing devices to containers running in <EM> process isolation </EM> ; <EM> Hyper-V isolation </EM> is not supported, nor do we support host device access for Linux Containers on Windows (LCOW). <BR /> <BR /> <STRONG> We support a distinct list of devices: </STRONG> In this release, we targeted enabling a <EM> specific </EM> set of features and a <EM> specific </EM> set of host device classes. We're starting with simple buses. The complete list that we currently support is below. <BR /> <TABLE> <TBODY><TR> <TD> <STRONG> Device Type </STRONG> </TD> <TD> <STRONG> Interface Class </STRONG> <STRONG> GUID </STRONG> </TD> </TR> <TR> <TD> <EM> GPIO </EM> </TD> <TD> 916EF1CB-8426-468D-A6F7-9AE8076881B3 </TD> </TR> <TR> <TD> <EM> I2C Bus </EM> </TD> <TD> A11EE3C6-8421-4202-A3E7-B91FF90188E4 </TD> </TR> <TR> <TD> <EM> COM Port </EM> </TD> <TD> 86E0D1E0-8089-11D0-9CE4-08003E301F73 </TD> </TR> <TR> <TD> <EM> SPI Bus </EM> </TD> <TD> DCDE6AF9-6610-4285-828F-CAAF78C424CC </TD> </TR> </TBODY></TABLE> <BR /> Stay tuned for Part 2 of this blog, which explores the architectural decisions we made in Windows to add this support. <BR /> <H3> What’s Next? </H3> <BR /> These changes will enable direct access to a wide variety of sensors for Azure IoT Edge modules running on Windows IoT devices. Look for it with the next release of Windows. We’re eager to get your feedback. What specific devices are most interesting to you, and what workload would you hope to accomplish with them? Are there other ways you’d like to be able to access devices in containers? Leave a comment below or feel free to tweet at me. <BR /> <BR /> Cheers, <BR /> <BR /> Craig Wilhite (@CraigWilhite) </BODY></HTML> Fri, 22 Mar 2019 00:16:55 GMT Virtualization-Team 2019-03-22T00:16:55Z Hello World MSMQ!
...from Windows Containers <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jul 25, 2018 </STRONG> <BR /> <IMG src="" /> <BR /> <BR /> Hello from Microsoft Hackathon!!! <BR /> <BR /> What are we hacking? Containerizing MSMQ! Let me tell you more… <BR /> <BR /> MSMQ (Microsoft Message Queuing) is one of the top asks from Enterprise customers who are lifting and shifting traditional apps with Windows Containers. We are hearing that in our conversations with customers directly, especially with major financial institutions. It is a blocker in their containerization journey. We are also seeing the same ask in online communities like the following on User Voice and MSDN Forum: <BR /> <UL> <BR /> <LI> <A href="#" target="_blank"> </A> </LI> <BR /> <LI> <A href="#" target="_blank"> </A> </LI> <BR /> </UL> <BR /> Back in January, I <A href="#" target="_blank"> blogged </A> that we fixed the issue where MSMQ didn’t install at all in Windows Server Core containers. With the Windows Server version 1803 release, MSMQ now installs. As the old saying goes, the devil is in the details. In our effort to make the end-to-end scenarios work with MSMQ in containers, we learned that there is a long list of components, features, technologies and teams involved to make it fully work end-to-end for typical Enterprise customer scenarios. To name a few: <A href="#" target="_blank"> Container Networking </A> , <A href="#" target="_blank"> Active Directory </A> , <A href="#" target="_blank"> Group Managed Service Accounts </A> . And it also brings another <A href="" target="_blank"> top customer ask supporting MSDTC in Windows containers </A> into the spotlight. So, I decided to take this challenge to our internal Hackathon! <BR /> <BR /> In the first few hours of the Hackathon, we identified a prioritized list of scenarios to target. I will share the details of the scenarios in future blogs.
But first, I am happy to share that today we validated the first scenario. It’s a “Hello World”-like scenario that sends messages from a Windows container using MSMQ. Here are the basics of the scenario: <BR /> <BR /> <IMG src="" /> <BR /> <BR /> <BR /> <UL> <BR /> <LI> Set-up <BR /> <UL> <BR /> <LI> Container Host: Windows Server v1803 or higher, OR, Windows 10 April 2018 Updates or higher <BR /> <UL> <BR /> <LI> A quick note: In today’s Hackathon, my teammate who validated the scenario was actually on the most recent build of Windows 10, which is not even out for Insiders yet, but we did test on Windows Server version 1803 and version 1809 pre-release builds before :smiling_face_with_smiling_eyes: </LI> <BR /> </UL> <BR /> </LI> <BR /> <LI> Container Image: Windows Server v1803 Server Core container </LI> <BR /> </UL> <BR /> </LI> <BR /> <LI> MSMQ Configurations <BR /> <UL> <BR /> <LI> Queue Scope: Private </LI> <BR /> <LI> Message Type: Transactional </LI> <BR /> <LI> 2 Simple apps using MSMQ to simulate the most basic send/receive operations: <BR /> <UL> <BR /> <LI> MSMQSenderTest.exe </LI> <BR /> <LI> MSMQReceiveTest.exe </LI> <BR /> </UL> <BR /> </LI> <BR /> <LI> Queue Location: on the same container as the Sender/Receiver. </LI> <BR /> <LI> Where the sender and receiver sit: to simplify things, both the sender and receiver testing apps run directly inside the above mentioned Container Image. This is far from a real-world scenario, but remember we are starting with baby steps. </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> And Voila… we can send and receive messages in a Windows container! <BR /> <P> <IMG src="" /> </P> <BR /> <P> </P> <BR /> Is it too basic? Yes it is. We know. As mentioned early on, we have worked on this on and off, and have tested a few other scenarios. Here is a quick glimpse of what we have tested or are still testing. We will share more details in future blogs.
<BR /> <TABLE> <TBODY><TR> <TD> <STRONG> Scope </STRONG> </TD> <TD> <STRONG> Transactional </STRONG> </TD> <TD> <STRONG> Container OS </STRONG> </TD> <TD> <STRONG> Container Host OS </STRONG> </TD> <TD> <STRONG> Queue Location </STRONG> </TD> <TD> <STRONG> Send Works </STRONG> </TD> <TD> <STRONG> Receive Works </STRONG> </TD> </TR> <TR> <TD> Private </TD> <TD> Yes </TD> <TD> Windows Server version 1803 </TD> <TD> Windows Server version 1803 </TD> <TD> On the container </TD> <TD> Yes. </TD> <TD> Yes. </TD> </TR> <TR> <TD> Public </TD> <TD> Yes </TD> <TD> Windows Server version 1803 </TD> <TD> Windows Server version 1803 </TD> <TD> On the Domain Controller (DC) </TD> <TD> Yes.&nbsp; There are some security caveats we'd like to address. </TD> <TD> Haven't tested </TD> </TR> <TR> <TD> Public </TD> <TD> No </TD> <TD> Windows Server version 1803 </TD> <TD> Windows Server version 1803 </TD> <TD> Not on the container, or the container host, or the DC, but another box </TD> <TD> Yes.&nbsp; There are some security caveats we'd like to address. </TD> <TD> Haven't tested </TD> </TR> <TR> <TD> Public </TD> <TD> Yes </TD> <TD> Windows Server version 1803 </TD> <TD> Windows Server version 1803 </TD> <TD> Not on the container, or the container host, or the DC, but another box </TD> <TD> No. </TD> <TD> Haven't tested </TD> </TR> </TBODY></TABLE> <BR /> <BR /> <BR /> The top 2 rows are more basic scenarios for us to get started and troubleshoot. The last 2 rows, in our view, are closest to what Enterprise customers might be using. Is it true? We’d love to hear your validation. <BR /> <BR /> We didn’t test the scenario where the queue is a Private one and the messages are non-transactional. We believe that since the scenario of a private queue with transactional messages works, a private queue with non-transactional messages should work too, as it’s less complex.
<BR /> <BR /> As we peel the onion layer by layer, it’s becoming more and more obvious that we can use all the help inside and outside Microsoft to get this going: MSMQ is a technology that has been around for a long time, and there are lots of you who are experts on it. We’ll look into ways we can “crowdsource” or “group-think” to get the community involved. We will share more details on that in future blogs. <BR /> <BR /> OK, that’s it for now; I need to get back to the Hackathon! Wish me and my teammates (Andy, Jane, and Ross) good luck! <BR /> <BR /> <BR /> <BR /> Weijuan <BR /> <BR /> P.S. I am building up my Twitter presence, which seems to be a popular place to keep people updated. I'll share updates more frequently there. Follow me @WeijuanLand </BODY></HTML> Fri, 22 Mar 2019 00:16:52 GMT Virtualization-Team 2019-03-22T00:16:52Z Insider preview: Windows container image <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jun 27, 2018 </STRONG> <BR /> Earlier this year at Microsoft Build 2018, we announced a third container base image for applications that have additional API dependencies beyond <EM> nano server </EM> and <EM> Windows Server Core </EM> . Now the time has finally come, and the <EM> Windows </EM> container image is available for Windows Insiders. <BR /> <BR /> <H2> Why another container image? </H2> <BR /> In conversations with IT pros and developers, some themes came up that went beyond the <CODE> nanoserver </CODE> and <CODE> windowsservercore </CODE> container images: <BR /> Quite a few customers were interested in moving their legacy applications into containers to benefit from container orchestration and management technologies like Kubernetes. However, not all applications could be easily containerized, in some cases due to missing components like proofing support, which is not included in Windows Server Core. 
<BR /> Others wanted to leverage containers to run automated UI tests as part of their CI/CD processes, or to use other graphics capabilities like DirectX, which are not available within the other container images. <BR /> <BR /> With the new <CODE> windows </CODE> container image, we're now offering a third option to choose from based on the requirements of the workload. We look forward to seeing what you build! <BR /> <H2> How can you get it? </H2> <BR /> If you are running a container host on a Windows Insider build, you can get the matching container image using the following commands: <BR /> <BR /> <BR /> <BR /> To simply get the latest available version of the container image, you can use the following command: <BR /> <BR /> <CODE> docker pull </CODE> <BR /> <BR /> Please note that for compatibility reasons, we recommend running the same build version for the container host and the container itself. <BR /> <BR /> Since this image is currently part of the Windows Insider preview, we look forward to your feedback, bug reports, and comments. We will be publishing newer builds of this container image along with the Insider builds. <BR /> <BR /> All the best, <BR /> Lars <BR /> <BR /> (Update: Added a PowerShell gist to download the matching container image instead of a static docker pull command.) <BR /> <BR /> </BODY></HTML> Fri, 22 Mar 2019 00:16:23 GMT Lars Iwer 2019-03-22T00:16:23Z A smaller Windows Server Core Container with better Application Compatibility <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jan 22, 2018 </STRONG> <BR /> In <A href="#" target="_blank"> Windows Server Insider Preview Build 17074 </A> , released on Tuesday, Jan 16, 2018, there are some exciting improvements to Windows Server containers that we’d like to share with you. We’d love for you to test out the build, especially the <A href="#" target="_blank"> Windows Server Core container image </A> , and give us feedback! 
<BR /> <BR /> <STRONG> Windows Server Core Container Base Image Size Reduced to 1.58GB! </STRONG> <BR /> <BR /> You told us that the size of the Server Core container image affects your deployment times, takes too long to pull down, and takes up too much space on your laptops and servers alike.&nbsp; In our first Semi-Annual Channel release, Windows Server, version 1709, we made some great progress, reducing the size by 60%, and your excitement was noted.&nbsp; We’ve continued to actively look for additional space savings while balancing application compatibility. It’s not easy, but we are committed. <BR /> <BR /> There are two main directions we looked at: <BR /> <BR /> <STRONG> 1) Architecture optimization to reduce duplicate payloads </STRONG> <BR /> <BR /> We are always looking for ways to optimize our architecture. In Windows Server, version 1709, along with the substantial reduction in the Server Core container image, we also made substantial reductions in the Nano Server container image (dropping it below 100MB).&nbsp; In doing that work, we identified that some of the same architecture could be leveraged with the Server Core container. In partnership with other teams in Windows, we were able to implement changes in our build process to take advantage of those improvements.&nbsp; The great part about this work is that you should not notice any differences in application compatibility or experience, other than a nice reduction in size and some performance improvements. <BR /> <BR /> <STRONG> 2) Removing unused optional components </STRONG> <BR /> <BR /> We looked at all the various roles, features, and optional components available in Server Core and broke them down into a few buckets in terms of usage:&nbsp; those used frequently in containers, those used rarely in containers, those we don’t believe are being used, and those that are not supported in containers.&nbsp; We leveraged several data sources to help categorize this list. 
First, for those of you who have telemetry enabled: thank you! That anonymized data is invaluable to these exercises. Second, we looked at publicly available dockerfiles and, of course, feedback from GitHub issues and forums.&nbsp; Third, roles and features that are not supported in containers at all were an easy call to remove. Lastly, we also removed roles and features for which we see no evidence of customer use.&nbsp; We could do more in this space in the future, but we really need your feedback (telemetry is also very much appreciated) to help guide what can be removed or separated. <BR /> <BR /> [ <STRONG> Added on Jan 14, 2019 </STRONG> : The list in this Gist shows what we removed to optimize the image size: <A href="#" target="_blank"> </A> .] <BR /> <BR /> So, here are the numbers on the Windows Server Core container size, if you are curious: <BR /> <UL> <BR /> <LI> 1.58GB download size, a 30% reduction from Windows Server, version 1709 </LI> <BR /> <LI> 3.61GB on-disk size, a 20% reduction from Windows Server, version 1709 </LI> <BR /> </UL> <BR /> <STRONG> MSMQ now installs in a Windows Server Core container </STRONG> <BR /> <BR /> MSMQ has been one of the top asks we heard from you, and it ranks very high on Windows Server User Voice <A href="#" target="_blank"> here </A> . In this release, we were able to partner with our kernel team and make the change, which was not trivial. We are happy to announce that it now installs, and it passed our in-house application compatibility tests. Woohoo! <BR /> <BR /> However, there are many different use cases and ways customers have used MSMQ. So please do try it out and let us know if it indeed works for you. <BR /> <BR /> <STRONG> A Few Other Key App Compatibility Bug Fixes: </STRONG> <BR /> <UL> <BR /> <LI> We fixed the issue reported on GitHub that services running in containers do not receive <STRONG> shutdown notifications </STRONG> . 
</LI> <BR /> </UL> <BR /> <UL> <BR /> <LI> We fixed this issue reported on GitHub and User Voice related to the <STRONG> BitLocker </STRONG> and <STRONG> FDVDenyWriteAccess </STRONG> policy: users were not able to run basic docker commands like docker pull. </LI> <BR /> </UL> <BR /> <UL> <BR /> <LI> We fixed a few issues reported on GitHub related to <STRONG> mounting directories </STRONG> between hosts and containers. </LI> <BR /> </UL> <BR /> We are so excited about and proud of what we have done so far to listen to your voice, continuously optimize the Server Core container size and performance, and fix top application compatibility issues to make your Windows container experience better and meet your business needs. We love hearing how you are using Windows containers, and we know there are still plenty of opportunities ahead of us to make them even faster and better. It's a fun journey ahead! <BR /> <BR /> Thank you. <BR /> <BR /> Weijuan </BODY></HTML> Fri, 22 Mar 2019 00:15:14 GMT Virtualization-Team 2019-03-22T00:15:14Z Tar and Curl Come to Windows! <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Dec 19, 2017 </STRONG> <BR /> Beginning in Insider Build <STRONG> 17063 </STRONG> , we’re introducing two command-line tools to the Windows toolchain: curl and bsdtar. It’s been a long time coming, I know. We'd like to give credit to the folks who’ve created and maintain <A href="#" target="_blank"> bsdtar </A> and <A href="#" target="_blank"> curl </A> —awesome open-source tools used by millions of humans every day. Let's take a look at two impactful ways these tools will make developing on Windows an even better experience. <BR /> <H2> 1. Developers! Developers! Developers! 
</H2> <BR /> Tar and curl are staples in a developer’s toolbox; beginning today, you’ll find these tools available from the command line in all SKUs of Windows. And yes, they're the same tools you've come to know and love! If you're unfamiliar with these tools, here's an overview of what they do: <BR /> <UL> <BR /> <LI> <STRONG> Tar: </STRONG> A command-line tool for extracting files from, and creating, archives. Outside of PowerShell or the installation of third-party software, there was no way to extract a file from an archive using cmd.exe. We're correcting this behavior :) The implementation we're shipping in Windows uses <A href="#" target="_blank"> libarchive </A> . </LI> <BR /> <LI> <STRONG> Curl: </STRONG> Another command-line tool, for transferring files to and from servers (so you can, say, now download a file from the internet). </LI> <BR /> </UL> <BR /> Now not only will you be able to perform file transfers from the command line, you'll also be able to extract files in formats in addition to .zip (like .tar.gz, for example). PowerShell <EM> does </EM> already offer similar functionality (it has curl and its own file extraction utilities), but we recognize that there might be instances where PowerShell is not readily available or the user wants to stay in cmd. <BR /> <BR /> <H2> 2. The Containers Experience </H2> <BR /> Now that we’re shipping these tools in-box, you no longer need to worry about using a separate container image as the builder when targeting nanoserver-based containers. Instead, we can invoke the tools like so: <BR /> <BR /> <BR /> <H3> Background </H3> <BR /> We offer two base images for our containers: <EM> windowsservercore </EM> and <EM> nanoserver </EM> . The servercore image is the larger of the two and has support for such things as the full .NET Framework. 
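As a quick aside, the tar and curl usage described in section 1 boils down to a couple of one-liners. Here is a minimal sketch of a tar round trip, shown in a POSIX shell for brevity; the same flags work with the in-box bsdtar from cmd.exe, and the curl URL in the comment is a placeholder, not a real package:

```shell
# Create some sample content to archive.
mkdir -p demo
echo "hello from tar" > demo/file.txt

# Pack the directory into a gzip-compressed tarball...
tar -czf demo.tar.gz demo

# ...then unpack it into a separate directory, just as you would for an
# archive fetched with: curl -L -o demo.tar.gz https://example.com/demo.tar.gz
mkdir -p extracted
tar -xzf demo.tar.gz -C extracted

cat extracted/demo/file.txt
```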
On the opposite end of the spectrum is nanoserver, which is built to be lightweight, with as minimal a memory footprint as possible. It’s capable of running .NET Core but, in keeping with the minimalism, we’ve tried to slim down the image size as much as possible. We threw out all components we felt were not mission-critical for the container image. <BR /> <BR /> PowerShell was one of the components that was put on the chopping block for our nanoserver image. PowerShell is a whopping 56 MB (given that the total size of the nanoserver image is 200 MB… that’s quite the savings!). But the consequence of removing PowerShell was that there was no way to pull down a package and unzip it from within the container. <BR /> <BR /> <BR /> <BR /> If you’re familiar with writing dockerfiles, you’ll know that it’s common practice to pull in all the packages (node, mongo, etc.) you need and install them. Without these tools, users would have to rely on a separate image with PowerShell as the “builder” image to construct an image. This is clearly not the experience we want our users to have when targeting nanoserver—they’d end up having to download the much larger servercore image. <BR /> <BR /> This is all resolved with the addition of curl and tar. You can call these tools from servercore images as well. <BR /> <BR /> <BR /> <BR /> <H2> We want your Feedback! </H2> <BR /> Are there other developer tools you would like to see added to the command line? Drop a comment below with your thoughts! In the meantime, go grab <STRONG> Insider Build 17063 </STRONG> and get busy curl’ing and tar’ing to your heart’s desire. </BODY></HTML> Fri, 22 Mar 2019 00:14:53 GMT Virtualization-Team 2019-03-22T00:14:53Z WSL Interoperability with Docker <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Dec 08, 2017 </STRONG> <BR /> We frequently get asked about running docker from within the Windows Subsystem for Linux (WSL). 
We don’t support running the docker daemon directly in WSL. But what you <I> can </I> do is call into the daemon running under Windows from WSL. What does this let you do? You can create dockerfiles, build them, and run them in the daemon—Windows or Linux, depending on which runtime you have selected—all from the comfort of WSL. <BR /> <BR /> <DIV> <BR /> <DIV> <BR /> <H3> <B> Overview </B> </H3> <BR /> The architectural design of docker is split into three components: a client, a REST API, and a server (the daemon). At a high level: <BR /> <UL> <BR /> <LI> <B> Client: </B> Interacts with the REST API. The primary purpose of this piece is to allow a user to interface with the daemon. </LI> <BR /> <LI> <B> REST API: </B> Acts as the interface between the client and server, allowing a flow of communication. </LI> <BR /> <LI> <B> Daemon: </B> Responsible for actually managing the containers—starting, stopping, etc. The daemon listens for API requests from docker clients. </LI> <BR /> </UL> <BR /> The daemon has very close ties to the kernel. Today in Windows, when you’re running Windows Server containers, a daemon process runs in Windows. When you switch to Linux container mode, the daemon actually runs inside a VM called the Moby Linux VM. With the <A href="#" target="_blank"> upcoming release </A> of Docker, you’ll be able to run Windows Server containers and Linux containers side by side, and the daemon will always run as a Windows process. <BR /> <BR /> The client, however, doesn’t have to sit in the same place as the daemon. For example, you could have a local docker client on your dev machine communicating with Docker up in Azure. This allows us to have a client in WSL talking to the daemon running on the host. <BR /> <H3> <B> What's the Proposal? 
</B> </H3> <BR /> This method is made available thanks to a tool called <STRONG> npiperelay </STRONG> , built by John Starks ( <A href="#" target="_blank"> @gigastarks </A> ), a dev lead on Hyper-V. Getting communication up and running between WSL and the daemon isn't new; there have been several great blog posts ( <A href="#" target="_blank"> this blog </A> by Nick Janetakis comes to mind) which recommend going the TCP route by opening a port without TLS (like below): <BR /> <BR /> <BR /> <BR /> While I would consider the port 2375 method to be more robust than the tutorial we're about to walk through, you <I> do </I> expose your system to potential attack vectors for malicious code. We don't like exposing attack vectors :) <BR /> <BR /> What about opening another port for docker to listen on and protecting <I> that </I> with TLS? Well, Docker for Windows <A href="#" target="_blank"> doesn’t support </A> the requirements needed to make this happen.&nbsp; So this brings us back to npiperelay. <BR /> <BR /> <I> Note: </I> the tool we are about to use works best with Insider builds--it can be a little buggy on version 1709. Your mileage may vary. <BR /> <H3> Installing Go </H3> <BR /> We're going to build the relay from within WSL. If you do not have WSL installed, you'll need to download it from the <A href="#" target="_blank"> Microsoft Store </A> . Once you have WSL running, we need to download Go. To do this: <BR /> <BR /> <CODE> #Make sure we have the latest package lists <BR /> sudo apt-get update <BR /> #Download Go. You should change the version if there's a newer one. 
Check at: <A href="#" target="_blank"></A> <BR /> sudo wget <A href="#" target="_blank"></A> </CODE> <BR /> <BR /> Now we need to unzip Go and add the binary to our PATH: <BR /> <BR /> <CODE> #Unzip Go <BR /> sudo tar -C /usr/local -xzf go1.9.2.linux-amd64.tar.gz <BR /> #Put it in the path <BR /> export PATH=$PATH:/usr/local/go/bin </CODE> <BR /> <DIV> <BR /> <H3> Building the Relay </H3> <BR /> With Go now installed, we can build the relay. In the command below, make sure to replace &lt;your_user_name&gt; with your Windows username: <BR /> <BR /> <CODE> go get -d github.com <BR /> GOOS=windows go build -o /mnt/c/Users/&lt;your_user_name&gt;/go/bin/npiperelay.exe github.com </CODE> <BR /> <BR /> We've now built the relay for Windows, but we want it callable from within WSL. To do this, we make a symlink. Again, make sure to replace &lt;your_user_name&gt; with your Windows username: <BR /> <BR /> <CODE> sudo ln -s /mnt/c/Users/&lt;your_user_name&gt;/go/bin/npiperelay.exe /usr/local/bin/npiperelay.exe </CODE> <BR /> <BR /> We'll be using socat to help enable the relay. Install <A href="#" target="_blank"> socat </A> , a tool that allows for bidirectional flow of data between two points (more on this later): <BR /> <BR /> <CODE> sudo apt install socat </CODE> <BR /> <BR /> We also need to install the docker client on WSL. To do this: <BR /> <BR /> <CODE> sudo apt install </CODE> <BR /> <H3> Last Steps </H3> <BR /> With socat installed and the executable built, we just need to string a few things together. We're going to make a shell script to activate the functionality for us, placed in the user's home directory. To do this: <BR /> <BR /> <CODE> #make the file <BR /> touch ~/docker-relay <BR /> #add execution privileges <BR /> chmod +x ~/docker-relay </CODE> <BR /> <BR /> Open the file we've created with your favorite text editor (like vim). 
Paste this into the file: <BR /> <BR /> <CODE> #!/bin/sh <BR /> exec socat UNIX-LISTEN:/var/run/docker.sock,fork,group=docker,umask=007 EXEC:"npiperelay.exe -ep -s //./pipe/docker_engine",nofork </CODE> <BR /> <BR /> Save the file and close it. The docker-relay script configures the Docker pipe to allow access by the docker group. To run as an ordinary user (without having to attach 'sudo' to every docker command), add your WSL user to the docker group. In Ubuntu: <BR /> <BR /> <CODE> sudo adduser ${USER} docker </CODE> <BR /> <H3> Test it Out! </H3> <BR /> Open a new WSL shell to ensure your group membership is reset. Launch the relay in the background: <BR /> <BR /> <CODE> sudo ~/docker-relay &amp; </CODE> <BR /> <BR /> Now, run a docker command to test the waters. You should be greeted by the same output as if you ran the command from Windows (and note that you don't need 'sudo' prefixed to the command, either!). <BR /> <BR /> <H3> Volume Mounting </H3> <BR /> If you're wondering how volume mounting works with npiperelay, you'll need to use the <STRONG> Windows path </STRONG> when you specify your volume. See the comparison below: <BR /> <BR /> <CODE> #this is CORRECT <BR /> docker run <STRONG> -v C:/Users/crwilhit.REDMOND/tmp/ </STRONG> microsoft/nanoserver cmd.exe </CODE> <BR /> <BR /> <CODE> #this is INCORRECT <BR /> docker run <STRONG> -v /mnt/c/Users/crwilhit.REDMOND/tmp/ </STRONG> microsoft/nanoserver cmd.exe </CODE> <BR /> <H3> How Does it Work? </H3> <BR /> There's a fundamental problem with getting the docker client running under WSL to communicate with Docker for Windows: the WSL client understands IPC via unix sockets, whereas Docker for Windows understands IPC via named pipes. This is where socat and npiperelay.exe come into play--as the mediators between these two disjoint forms of IPC. Socat understands how to communicate via unix sockets and npiperelay understands how to communicate via named pipes. 
Socat and npiperelay both understand how to communicate via stdio, hence they can talk to each other. <BR /> <BR /> <IMG src="" /> <BR /> <H3> Conclusion </H3> <BR /> </DIV> <BR /> Congratulations, you can now talk to Docker for Windows via WSL. With the recent <A href="#" target="_blank"> addition of background processes </A> in WSL, you can close out of WSL, open it later, and the relay we've built will continue to run. However, if you kill the socat process or do a hard reboot of your system, you'll need to make sure you launch the relay in the background again when you first launch WSL. <BR /> <BR /> You can use the npiperelay tool for other things as well. Check out the <A href="#" target="_blank"> GitHub repo </A> to learn more. Try it out and let us know how this works out for you. <BR /> <BR /> <BR /> <BR /> </DIV> <BR /> </DIV> </BODY></HTML> Fri, 22 Mar 2019 00:14:26 GMT Virtualization-Team 2019-03-22T00:14:26Z Available to Windows 10 Insiders Today: Access to published container ports via “localhost”/ <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Nov 06, 2017 </STRONG> <BR /> <EM> This is a cross-post </EM> ; click the link below to be rerouted to our main post on the Networking Blog :) <BR /> <BR /> <A href="#" target="_blank"></A> <BR /> </BODY></HTML> Fri, 22 Mar 2019 00:12:15 GMT Virtualization-Team 2019-03-22T00:12:15Z Container Images are now out for Windows Server version 1709! <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Oct 18, 2017 </STRONG> <BR /> With the release of Windows Server version 1709 also come Windows Server Core and Nano Server base OS container images. <BR /> <BR /> It is important to note that while older versions of the base OS container images will work on a newer host (with Hyper-V isolation), the opposite is not true. 
Container images based on Windows Server version 1709 will <STRONG> not </STRONG> work on a host using Windows Server 2016. <A href="#" title="Container base image versioning" target="_blank"> Read more about the different versions of Windows Server </A> . <BR /> <BR /> We’ve also made some changes to our tagging scheme so you can more easily specify which version of the container images you want to use.&nbsp; From now on, the “latest” tag will follow the releases of the current LTSC product, Windows Server 2016. If you want to keep up with the latest patches for Windows Server 2016, you can use: <BR /> <BR /> “microsoft/nanoserver” <BR /> or <BR /> “microsoft/windowsservercore” <BR /> <BR /> in your dockerfiles to get the most up-to-date version of the Windows Server 2016 base OS images. You can also continue using specific versions of the Windows Server 2016 base OS container images by using the tags specifying the build, like so: <BR /> <BR /> “microsoft/nanoserver:10.0.14393.1770” <BR /> or <BR /> “microsoft/windowsservercore:10.0.14393.1770”. <BR /> <BR /> If you would like to use base OS container images based on Windows Server version 1709, you will have to specify that with the tag. In order to get the most up-to-date base OS container images of Windows Server version 1709, use the tags: <BR /> <BR /> “microsoft/nanoserver:1709” <BR /> or <BR /> “microsoft/windowsservercore:1709” <BR /> <BR /> And if you would like a specific version of these base OS container images, you can specify the KB number that you need on the tag, like this: <BR /> <BR /> “microsoft/nanoserver:1709_KB4043961” <BR /> or <BR /> “microsoft/windowsservercore:1709_KB4043961”. <BR /> <BR /> We hope that this tagging scheme will ensure that you always choose the image that you want and need for your environment. Please let us know in the comments if you have any feedback for us. 
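To see how these tags are consumed in practice, here is a minimal dockerfile sketch pinning a specific servicing level; the tags come straight from the list above, and any application steps that would follow are placeholders:

```dockerfile
# Pin the base image to an exact 1709 servicing level via its KB tag,
# so rebuilds are reproducible until the tag is changed deliberately.
FROM microsoft/windowsservercore:1709_KB4043961

# To track the newest 1709 patches on every build instead, use:
# FROM microsoft/windowsservercore:1709
```

Pinning the KB tag trades automatic patching for reproducible builds; moving to a newer servicing level then becomes an explicit, reviewable change.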
<BR /> <BR /> <B> Note: </B> We currently do not intend to use build numbers to specify Windows Server version 1709 container images. We will only be using the KB schema specified above for tagging these images. Let us know if you have feedback about this as well. <BR /> <BR /> Regards, <BR /> Ender </BODY></HTML> Fri, 22 Mar 2019 00:12:07 GMT scooley 2019-03-22T00:12:07Z Docker's routing mesh available with Windows Server version 1709 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Sep 26, 2017 </STRONG> <BR /> The Windows Core Networking team, along with our friends at Docker, is thrilled to announce that Docker's <A href="#" target="_blank"> ingress <STRONG> routing mesh </STRONG> </A> will be supported with Windows Server version 1709. <BR /> <BR /> Ingress routing mesh is part of <A href="#" target="_blank"> swarm mode </A> --Docker's built-in orchestration solution for containers. Swarm mode first became available on Windows <A href="#" target="_blank"> early this year </A> , along with support for the Windows overlay network driver. With swarm mode, users have the ability to create container services and deploy them to a cluster of container hosts. With this, of course, also comes the ability to <STRONG> define published ports for services </STRONG> , so that the apps those services run can be accessed by endpoints outside of the swarm cluster (for example, a user might want to access a containerized web service via a web browser from their laptop or phone). 
<BR /> <BR /> To place routing mesh in context, it's useful to understand that Docker currently provides it alongside another option for publishing services with swarm mode--host mode service publishing:* <BR /> <UL> <BR /> <LI> <STRONG> Host mode </STRONG> is an approach to service publishing that's <EM> optimal for production environments </EM> , where system administrators value maximum <STRONG> performance </STRONG> and full <STRONG> control </STRONG> over their container network configuration. With host mode, <EM> each container of a service is published directly to the host where it is running </EM> . </LI> <BR /> <LI> <STRONG> Routing mesh </STRONG> is an approach to service publishing that's <EM> optimized for the developer experience </EM> , or for production cases where a <STRONG> simple configuration experience </STRONG> is valued above performance, or above control over how incoming requests are routed to the specific replicas/containers for a service. With ingress routing mesh, the containers for a published service can all be accessed through a single "swarm port"--one port, published on every swarm host (even the hosts where no container for the service is currently running!). </LI> <BR /> </UL> <BR /> <P> <EM> While our support for routing mesh is new with Windows Server version 1709, host mode service publishing has been supported since swarm mode was originally made available on Windows. </EM> </P> <BR /> <P> <EM> *For more information on how host mode and routing mesh work, visit Docker's documentation on <A href="#" target="_blank"> routing mesh </A> and <A href="#" target="_blank"> publishing services with swarm mode. </A> </EM> </P> <BR /> <STRONG> So, what does it take to use routing mesh on Windows? </STRONG> Routing mesh is Docker's default service publishing option. It has always been the default behavior on Linux, and now it's also supported as the <EM> default on Windows </EM> ! 
This means that all you need to do to use routing mesh is create your services using the <CODE> --publish </CODE> flag with the <CODE> docker service create </CODE> command, as described in <A href="#" target="_blank"> Docker's documentation </A> . <BR /> <BR /> For example, assume you have a basic web service, defined by a container image called <CODE> web-frontend </CODE> . If you wanted to publish port 80 of each container to port 8080 on all of your swarm nodes, you'd create the service with a command like this: <BR /> <BR /> <CODE> C:\&gt; docker service create --name web --replicas 3 --publish 8080:80 web-frontend </CODE> <BR /> <BR /> In this case, the <CODE> web </CODE> app, running on a pre-configured swarm cluster along with a <CODE> db </CODE> backend service, might look like the app depicted below. As shown, because of routing mesh, clients outside of the swarm cluster (in this example, web browsers) are able to access the <CODE> web </CODE> service via its published port--8080. And in fact, each client can access the web service via its published port on <EM> any swarm host </EM> ; no matter which host receives an original incoming request, that host will use routing mesh to route the request to a <CODE> web </CODE> container instance that can ultimately service that request. <BR /> <BR /> <BR /> <BR /> Once again, we at Microsoft and our partners at Docker are proud to make ingress mode available to you on Windows. Try it out on <EM> Windows Server version 1709 using Docker EE Preview </EM> *, and let us know what you think! We appreciate your engagement and support in making features like routing mesh possible, and we encourage you to continue reaching out with feedback. 
Please provide your questions/comments/feature requests by posting issues to the <A href="#" target="_blank"> Docker for Windows GitHub repo </A> or by emailing the Windows Core Networking team directly, at <BR /> <BR /> *Note: Ingress mode on Windows currently has the following system requirements: <BR /> <UL> <BR /> <LI> Windows Server version 1709 -- Coming Soon <EM> (** <A href="#" target="_blank"> Available to Windows Insiders today </A> !**) </EM> </LI> <BR /> <LI> Docker Enterprise Edition (EE) Preview; get more info and find download/install instructions <A href="#" target="_blank"> here </A> </LI> <BR /> </UL> <BR /> </BODY></HTML> Fri, 22 Mar 2019 00:12:02 GMT Virtualization-Team 2019-03-22T00:12:02Z Delivering Safer Apps with Windows Server 2016 and Docker Enterprise Edition <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Sep 05, 2017 </STRONG> <BR /> <EM> Windows Server 2016 and Docker Enterprise Edition are revolutionizing the way Windows developers can create, deploy, and manage their applications on-premises and in the cloud. Microsoft and Docker are committed to providing secure containerization technologies and enabling developers to implement security best practices in their applications. This blog post highlights some of the security features in Docker Enterprise Edition and Windows Server 2016 designed to help you deliver safer applications. </EM> <BR /> <BR /> <EM> For more information on Docker and Windows Server 2016 Container security, check out the <A href="#" target="_blank"> full whitepaper </A> on Docker's site. </EM> <BR /> <H2> Introduction </H2> <BR /> Today, many organizations are turning to Docker Enterprise Edition (EE) and Windows Server 2016 to deploy IT applications consistently and efficiently using containers. 
Container technologies can play a pivotal role in ensuring the applications being deployed in your enterprise are safe -- free of malware, up-to-date with security patches, and known to come from a trustworthy source. Docker EE and Windows each play a hand in helping you develop and deploy safer applications according to the following three characteristics: <BR /> <OL> <BR /> <LI> <STRONG> Usable Security: </STRONG> Secure defaults with tooling that is native to both developers and operators. </LI> <BR /> <LI> <STRONG> Trusted Delivery: </STRONG> Everything needed to run an application is delivered safely and guaranteed not to be tampered with. </LI> <BR /> <LI> <STRONG> Infrastructure Independent: </STRONG> Application and security configurations are portable and can move between developer workstations, testing environments, and production deployments regardless of whether those environments are running in Azure or your own datacenter. </LI> <BR /> </OL> <BR /> <IMG src="" /> <BR /> <H2> Usable Security </H2> <BR /> <H3> Resource Isolation </H3> <BR /> Windows Server 2016 ships with support for Windows Server Containers, which are powered by Docker Enterprise Edition. Docker EE for Windows Server is the result of a joint engineering effort between Microsoft and Docker. When you run a Windows Server Container, key system resources are sandboxed for each container and isolated from the host operating system. This means the container does not see the resources available on the host machine, and any changes made within the container will not affect the host or other containers. Some of the resources that are isolated include: <BR /> <UL> <BR /> <LI> File system </LI> <BR /> <LI> Registry </LI> <BR /> <LI> Certificate stores </LI> <BR /> <LI> Namespace (privileged API access, system services, task scheduler, etc.) 
</LI> <BR /> <LI> Local users and groups </LI> <BR /> </UL> <BR /> Additionally, you can limit a Windows Server Container's use of the CPU, memory, disk usage, and disk throughput to protect the performance of other applications and containers running on the same host. <BR /> <H3> Hyper-V Isolation </H3> <BR /> For even greater isolation, Windows Server Containers can be deployed using Hyper-V isolation. In this configuration, the container runs inside a specially optimized Hyper-V virtual machine with a completely isolated Windows kernel instance. Docker EE handles creating, managing, and deleting the VM for you. Better yet, the same Docker container images can be used for both process isolated and Hyper-V isolated containers, and both types of containers can run side by side on the same host. <BR /> <H3> Application Secrets </H3> <BR /> Starting with Docker EE 17.06, support for delivering secrets to Windows Server Containers at runtime is now available. Secrets are simply blobs of data that may contain sensitive information best left out of a container image. Common examples of secrets are SSL/TLS certificates, connection strings, and passwords. <BR /> <BR /> Developers and security operators use and manage secrets in the exact same way -- by registering them on manager nodes (in an encrypted store), granting applicable services access to obtain the secrets, and instructing Docker to provide the secret to the container at deployment time. Each environment can use unique secrets without having to change the container image. The container can just read the secrets at runtime from the file system and use them for their intended purposes. <BR /> <H2> Trusted Delivery </H2> <BR /> <H3> Image Signing and Verification </H3> <BR /> Knowing that the software running in your environment is authentic and came from a trusted source is critical to protecting your information assets. 
With Docker Content Trust, which is built into Docker EE, container images are cryptographically signed to record the contents present in the image at the time of signing. Later, when a host pulls the image down, it will validate the signature of the downloaded image and compare it to the expected signature from the metadata. If the two do not match, Docker EE will not deploy the image since it is likely that someone tampered with the image. <BR /> <H3> Image Scanning and Antimalware </H3> <BR /> Beyond checking if an image has been modified, it's important to ensure the image doesn't contain malware or libraries with known vulnerabilities. When images are stored in Docker Trusted Registry, Docker Security Scanning can analyze images to identify libraries and components in use that have known vulnerabilities in the Common Vulnerabilities and Exposures (CVE) database. <BR /> <BR /> Further, when the image is pulled on a Windows Server 2016 host with Windows Defender enabled, the image will automatically be scanned for malware to prevent malicious software from being distributed through container images. <BR /> <H3> Windows Updates </H3> <BR /> Working alongside Docker Security Scanning, Microsoft Windows Update can ensure that your Windows Server operating system is up to date. Microsoft publishes two pre-built Windows Server base images to Docker Hub: <A href="#" target="_blank"> microsoft/nanoserver </A> and <A href="#" target="_blank"> microsoft/windowsservercore </A> . These images are updated the same day as new Windows security updates are released. When you use the "latest" tag to pull these images, you can rest assured that you're working with the most up-to-date version of Windows Server. This makes it easy to integrate updates into your continuous integration and deployment workflow. 
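To illustrate how content trust fits into a workflow, signature verification can be switched on for a session with the <CODE> DOCKER_CONTENT_TRUST </CODE> environment variable before pulling an image. The sketch below follows the command style used elsewhere in this post; the image tag shown is only an example:

```
:: Require signed images for every pull/run in this session
C:\temp> set DOCKER_CONTENT_TRUST=1

:: With content trust enabled, the pull is rejected unless the tag has a valid signature
C:\temp> docker pull microsoft/windowsservercore:latest
```

Clearing the variable ( <CODE> set DOCKER_CONTENT_TRUST= </CODE> ) restores the default, unverified pull behavior.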
<BR /> <H2> Infrastructure Independent </H2> <BR /> <H3> Active Directory Service Accounts </H3> <BR /> Windows workloads often rely on Active Directory for authentication of users to the application and authentication between the application itself and other resources like Microsoft SQL Server. Windows Server Containers can be configured to use a Group Managed Service Account when communicating over the network to provide a native authentication experience with your existing Active Directory infrastructure. You can select a different service account (even belonging to a different AD domain) for each environment where you deploy the container, without ever having to update the container image. <BR /> <H3> Docker Role Based Access Control </H3> <BR /> Docker Enterprise Edition allows administrators to apply fine-grained role based access control to a variety of Docker primitives, including volumes, nodes, networks, and containers. IT operators can grant users predefined permission roles to collections of Docker resources. Docker EE also provides the ability to create custom permission roles, providing IT operators tremendous flexibility in how they define access control policies in their environment. <BR /> <H2> Conclusion </H2> <BR /> With Docker Enterprise Edition and Windows Server 2016, you can develop, deploy, and manage your applications more safely using the variety of built-in security features designed with developers and operators in mind. To read more about the security features available when running Windows Server Containers with Docker Enterprise Edition, check out the <A href="#" target="_blank"> full whitepaper </A> and learn more about using <A href="#" target="_blank"> Docker Enterprise Edition in Azure </A> . 
</BODY></HTML> Fri, 22 Mar 2019 00:11:42 GMT Virtualization-Team 2019-03-22T00:11:42Z Windows Server 2016 Adds Native Overlay Network Driver, enabling mixed Linux + Windows Docker Swarm Mode Clusters <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Apr 18, 2017 </STRONG> <BR /> Based on customer and partner feedback, we are happy to announce the Windows networking team released a native overlay network driver for Windows Server 2016 to enable admins to create a <A href="#" target="_blank"> Docker Swarm </A> cluster spanning <STRONG> multiple Windows Server and Linux container hosts </STRONG> without worrying about configuring the underlying network fabric <STRONG> . </STRONG> Windows Server containers and those with Hyper-V Isolation powered by Docker are available natively in Windows Server 2016 and enable developers and IT admins to work together in building and deploying both modern, cloud-native applications as well as supporting lift-and-shift of workloads from a virtual machine (VM) into a container. Previously, an admin would be limited to scaling out these containers on a <STRONG> single </STRONG> Windows Docker host. <STRONG> </STRONG> With Docker Swarm and overlay, your containerized workloads can now communicate seamlessly across hosts, and scale fluidly, on-demand. <STRONG> </STRONG> <BR /> <BR /> <EM> How did we do it? </EM> The Docker engines, running in Swarm mode, are able to scale-out services by launching multiple container instances across&nbsp;all nodes in a cluster. When one of the&nbsp;"master" Swarm mode nodes schedules a container instance to run on a particular host, the Docker engine on that host will call&nbsp;the Windows Host Networking Service (HNS) to create the container endpoint and attach it&nbsp;to the&nbsp;overlay networks referenced by that particular service. 
HNS will then program this policy into the Virtual Filtering Platform (VFP) Hyper-V switch extension, where it is enforced by creating network overlays using VXLAN encapsulation. <BR /> <BR /> The flexibility and agility enjoyed by applications already being managed by Docker Swarm is one thing, but what about the up-front work of getting those applications developed, tested, and deployed? Customers can re-use their Docker Compose file from their development environment to deploy and scale out a multi-service/tier application across the cluster using <EM> docker stack deploy </EM> command syntax. <STRONG> It’s easy to leverage the power of running both Linux and Windows services in a single application </STRONG> by deploying individual services on the OS for which they are optimized. Simply use constraints and labels to specify the OS for a Docker service, and Docker Swarm will take care of scheduling tasks for that service to be run only on the correct host OS. In addition, customers can use Docker Datacenter (via Docker Enterprise Edition Standard) to provide integrated container management and security from development to production. <BR /> <BR /> <EM> Ready to get your hands on Docker Swarm and Docker Datacenter with Windows Server 2016? </EM> This feature has already been validated by beta customers, who successfully deployed workloads using swarm mode and Docker Datacenter (via Docker Enterprise Edition Standard), and we are now excited to release it to all Windows Server customers through <A href="#" target="_blank"> Windows Update KB4015217 </A> . This feature is <A href="#" target="_blank"> also available </A> in the Windows 10 Creators Update (with Docker Community Edition) so that developers can have a consistent experience developing apps on both Windows client and server. <BR /> <BR /> To learn more about Docker Swarm on Windows, start here ( <A href="#" target="_blank"> </A> ). 
To learn more about Docker Datacenter, start with Docker's documentation on Docker Enterprise Edition ( <A href="#" target="_blank"> </A> ). <BR /> <BR /> <EM> Feature requests? Bugs? General feedback? We would love to hear from you! </EM> Please email us with feedback at <A href="" target="_blank"> </A> . </BODY></HTML> Fri, 22 Mar 2019 00:10:05 GMT Virtualization-Team 2019-03-22T00:10:05Z Use NGINX to load balance across your Docker Swarm cluster <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Apr 19, 2017 </STRONG> <BR /> <H2> A practical walkthrough, in six steps </H2> <BR /> <EM> This basic example demonstrates NGINX and swarm mode in action, to provide the foundation for you to apply these concepts to your own configurations. </EM> <BR /> <BR /> This document walks through several steps for setting up a containerized NGINX server and using it to load balance traffic across a swarm cluster. For clarity, these steps are designed as an end-to-end tutorial for setting up a three node cluster and running two docker services on that cluster; by completing this exercise, you will become familiar with the general workflow required to use swarm mode and to load balance across Windows Container endpoints using an NGINX load balancer. <BR /> <H3> The basic setup </H3> <BR /> This exercise requires three container hosts--two of which will be joined to form a <STRONG> two-node swarm cluster </STRONG> , and one which will be used to host a <STRONG> containerized NGINX load balancer </STRONG> . In order to demonstrate the load balancer in action, two docker services will be deployed to the swarm cluster, and the NGINX server will be configured to load balance across the container instances that define those services. The services will both be web services, hosting simple content that can be viewed via web browser. 
With this setup, the load balancer will be easy to see in action, as traffic is routed between the two services each time the web browser view displaying their content is refreshed. <BR /> <BR /> <EM> The figure below provides a visualization of this three-node setup </EM> . Two of the nodes, the "Swarm Manager" node and the "Swarm Worker" node, together form a two-node swarm mode cluster, running two Docker web services, "S1" and "S2". A third node (the "NGINX Host" in the figure) is used to host a containerized NGINX load balancer, and the load balancer is configured to route traffic across the container endpoints for the two container services. This figure includes example IP addresses and port numbers for the two swarm hosts and for each of the six container endpoints running on the hosts. <BR /> <BR /> <IMG src="" /> <BR /> <H3> System requirements </H3> <BR /> <STRONG> Three* or more computer systems </STRONG> running either <STRONG> Windows 10 Creators Update </STRONG> or <STRONG> Windows Server 2016 </STRONG> <EM> with all of the latest updates </EM> *, set up as container hosts (see the topic <A href="#" target="_blank"> Windows Containers on Windows 10 </A> or <A href="#" target="_blank"> Windows Containers on Windows Server </A> for more details on how to get started with Docker containers on Windows). <BR /> <P> * <STRONG> Note </STRONG> : Docker Swarm on Windows Server 2016 requires <A href="#" target="_blank"> KB4015217 </A> </P> <BR /> Additionally, each host system should be configured with the following: <BR /> <UL> <BR /> <LI> The <A href="#" target="_blank"> microsoft/windowsservercore </A> container image </LI> <BR /> <LI> Docker Engine v1.13.0 or later </LI> <BR /> <LI> Open ports: Swarm mode requires that the following ports be available on each host. 
<BR /> <UL> <BR /> <LI> TCP port 2377 for cluster management communications </LI> <BR /> <LI> TCP and UDP port 7946 for communication among nodes </LI> <BR /> <LI> TCP and UDP port 4789 for overlay network traffic </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> <STRONG> *Note on using two nodes rather than three: </STRONG> <BR /> <EM> These instructions can be completed using just two nodes. However, currently there is a known bug on Windows which prevents containers from accessing their hosts using localhost or even the host’s external IP address (for more background on this, see Caveats and Gotchas below). This means that in order to access docker services via their exposed ports on the swarm hosts, the NGINX load balancer must not reside on the same host as any of the service container instances. </EM> <BR /> <EM> Put another way, if you use only two nodes to complete this exercise, one of them will need to be dedicated to hosting the NGINX load balancer, leaving the other to be used as a swarm container host (i.e. you will have a single-host swarm cluster, a host dedicated to hosting your containerized NGINX load balancer). </EM> <BR /> <H2> Step 1: Build an NGINX container image </H2> <BR /> In this step, we'll build the container image required for your containerized NGINX load balancer. Later we will run this image on the host that you have designated as your NGINX container host. <BR /> <P> <EM> <STRONG> Note: </STRONG> To avoid having to transfer your container image later, complete the instructions in this section on the container host that you intend to use for your NGINX load balancer. </EM> </P> <BR /> NGINX is available for <A href="#" target="_blank"> download from </A> . An NGINX container image can be built using a simple Dockerfile that installs NGINX onto a Windows base container image and configures the container to run as an NGINX executable. The content of such a Dockerfile is shown below. 
<BR /> FROM microsoft/windowsservercore <BR /> RUN powershell Invoke-WebRequest <A href="#" target="_blank"></A> -UseBasicParsing -outfile c:\\ <BR /> RUN powershell Expand-Archive c:\\ -Dest c:\\nginx <BR /> WORKDIR c:\\nginx\\nginx-1.12.0 <BR /> ENTRYPOINT powershell .\\nginx.exe <BR /> Create a Dockerfile from the content provided above, and save it to some location (e.g. C:\temp\nginx) on your NGINX container host machine. From that location, build the image using the following command: <BR /> C:\temp\nginx&gt; docker build -t nginx . <BR /> Now the image should appear with the rest of the docker images on your system (check using the <CODE> docker images </CODE> command). <BR /> <H3> (Optional) Confirm that your NGINX image is ready </H3> <BR /> First, run the container: <BR /> C:\temp&gt; docker run -it -p 80:80 nginx <BR /> Next, open a new command prompt window and use the <CODE> docker ps </CODE> command to see that the container is running. Note its ID. The ID of your container is the value of <CODE> &lt;CONTAINERID&gt; </CODE> in the next command. <BR /> <BR /> Get the container’s IP address: <BR /> C:\temp&gt; docker exec <CODE> &lt;CONTAINERID&gt; </CODE> ipconfig <BR /> For example, your container’s IP address may be, as in the example output shown below. <BR /> <BR /> <IMG src="" /> <BR /> <BR /> Next, open a browser on your container host and put your container’s IP address in the address bar. You should see a confirmation page, indicating that NGINX is successfully running in your container. <BR /> <BR /> <IMG src="" /> <BR /> <BR /> <BR /> <H2> Step 2: Build images for two containerized IIS Web services </H2> <BR /> In this step, we'll build container images for two simple IIS-based web applications. Later, we'll use these images to create two docker services. <BR /> <P> <EM> Note: Complete the instructions in this section on one of the container hosts that you intend to use as a swarm host. 
</EM> </P> <BR /> <BR /> <H3> Build a generic IIS Web Server image </H3> <BR /> Below are the contents of a simple Dockerfile that can be used to create an IIS Web server image. The Dockerfile simply enables the <A href="#" target="_blank"> Internet Information Services (IIS) </A> Web server role within a microsoft/windowsservercore container. <BR /> FROM microsoft/windowsservercore <BR /> RUN dism.exe /online /enable-feature /all /featurename:iis-webserver /NoRestart <BR /> Create a Dockerfile from the content provided above, and save it to some location (e.g. C:\temp\iis) on one of the host machines that you plan to use as a swarm node. From that location, build the image using the following command: <BR /> C:\temp\iis&gt; docker build -t iis-web . <BR /> <H3> (Optional) Confirm that your IIS Web server image is ready </H3> <BR /> First, run the container: <BR /> C:\temp&gt; docker run -it -p 80:80 iis-web <BR /> Next, use the <CODE> docker ps </CODE> command to see that the container is running. Note its ID. The ID of your container is the value of <CODE> &lt;CONTAINERID&gt; </CODE> <CODE> </CODE> in the next command. <BR /> <BR /> Get the container's IP address: <BR /> C:\temp&gt; docker exec <CODE> &lt;CONTAINERID&gt; </CODE> ipconfig <BR /> Now open a browser on your container host and put your container’s IP address in the address bar. You should see a confirmation page, indicating that the IIS Web server role is successfully running in your container. <BR /> <BR /> <IMG src="" /> <BR /> <H3> Build two custom IIS Web server images </H3> <BR /> In this step, we’ll be replacing the IIS landing/confirmation page that we saw above with custom HTML pages--two different images, corresponding to two different web container images. In a later step, we’ll be using our NGINX container to load balance across instances of these two images. 
<EM> Because the images will be different, we will easily see the load balancing in action as it shifts between the content being served by the containers we’ll define in this step. </EM> <BR /> <BR /> First, on your host machine create a simple file called index_1.html. In the file, type any text. For example, your index_1.html file might look like this: <BR /> <BR /> <IMG src="" /> <BR /> <BR /> <BR /> Now create a second file, index_2.html. Again, type any text in the file. For example, your index_2.html file might look like this: <BR /> <BR /> <IMG src="" /> <BR /> <BR /> Now we’ll use these HTML documents to make two custom web service images. <BR /> <BR /> If the iis-web container instance that you just built is not still running, run a new one, then get the ID of the container using: <BR /> C:\temp&gt; docker ps <BR /> Now, copy your index_1.html file from your host onto the running IIS container instance, using the following command: <BR /> C:\temp&gt; docker cp index_1.html &lt;CONTAINERID&gt;:C:\inetpub\wwwroot\index.html <BR /> Next, stop and commit the container in its current state. This will create a container image for the first web service. Let’s call this first image "web_1." <BR /> C:\&gt; docker stop &lt;CONTAINERID&gt; <BR /> C:\&gt; docker commit &lt;CONTAINERID&gt; web_1 <BR /> Now, start the container again and repeat the previous steps to create a second web service image, this time using your index_2.html file. Do this using the following commands: <BR /> C:\&gt; docker start &lt;CONTAINERID&gt; <BR /> C:\&gt; docker cp index_2.html &lt;CONTAINERID&gt;:C:\inetpub\wwwroot\index.html <BR /> C:\&gt; docker stop &lt;CONTAINERID&gt; <BR /> C:\&gt; docker commit &lt;CONTAINERID&gt; web_2 <BR /> You have now created images for two unique web services; if you view the Docker images on your host by running <CODE> docker images </CODE> , you should see that you have two new container images—“web_1” and “web_2”. 
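As a side note, the same pair of images can also be produced with a Dockerfile instead of <CODE> docker commit </CODE> ; a Dockerfile build is easier to reproduce and keep under version control. Below is a hypothetical Dockerfile for "web_1", assuming the iis-web image built above and an index_1.html file in the build context ("web_2" would be identical except for the file name):

```
FROM iis-web
COPY index_1.html C:/inetpub/wwwroot/index.html
```

Build it with <CODE> docker build -t web_1 . </CODE> from the directory containing the Dockerfile and index_1.html.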
<BR /> <H3> Put the IIS container images on all of your swarm hosts </H3> <BR /> To complete this exercise you will need the custom web container images that you just created to be on all of the host machines that you intend to use as swarm nodes. There are two ways for you to get the images onto additional machines: <BR /> <BR /> <STRONG> <EM> Option 1: </EM> </STRONG> Repeat the steps above to build the "web_1" and "web_2" containers on your second host. <BR /> <EM> <STRONG> Option 2 [recommended]: </STRONG> </EM> Push the images to your repository on <A href="#" target="_blank"> Docker Hub </A> then pull them onto additional hosts. <BR /> <BR /> <EM> Using Docker Hub is a convenient way to leverage the lightweight nature of containers across all of your machines, and to share your images with others. Visit the following Docker resources to get started with pushing/pulling images with Docker Hub: <BR /> <A href="#" target="_blank"> Create a Docker Hub account and repository <BR /> </A> <A href="#" target="_blank"> Tag, push and pull your image </A> </EM> <BR /> <H2> Step 3: Join your hosts to a swarm </H2> <BR /> As a result of the previous steps, one of your host machines should have the nginx container image, and the rest of your hosts should have the Web server images, "web_1" and "web_2". In this step, we'll join the latter hosts to a swarm cluster. <BR /> <BR /> <EM> <STRONG> Note: </STRONG> The containerized NGINX load balancer cannot run on the same host as any container endpoints for which it is performing load balancing; the host with your nginx container image must be reserved for load balancing only. For more background on this, see Caveats and Gotchas below </EM> . <BR /> <BR /> First, run the following command from any machine that you intend to use as a swarm host. The machine that you use to execute this command will become a manager node for your swarm cluster. 
<BR /> <UL> <BR /> <LI> Replace <CODE> &lt;HOSTIPADDRESS&gt; </CODE> with the public IP address of your host machine </LI> <BR /> </UL> <BR /> C:\temp&gt; docker swarm init --advertise-addr=&lt;HOSTIPADDRESS&gt; --listen-addr &lt;HOSTIPADDRESS&gt;:2377 <BR /> Now run the following command from each of the other host machines that you intend to use as swarm nodes, joining them to the swarm as worker nodes. <BR /> <UL> <BR /> <LI> Replace <CODE> &lt;MANAGERIPADDRESS&gt; </CODE> with the public IP address of the manager machine (i.e. the value of <CODE> &lt;HOSTIPADDRESS&gt; </CODE> that you used to initialize the swarm from the manager node) </LI> <BR /> <LI> Replace <CODE> &lt;WORKERJOINTOKEN&gt; </CODE> with the worker join-token provided as output by the <CODE> docker swarm init </CODE> command (you can also obtain the join-token by running <CODE> docker swarm join-token worker </CODE> from the manager host) </LI> <BR /> </UL> <BR /> C:\temp&gt; docker swarm join --token &lt;WORKERJOINTOKEN&gt; &lt;MANAGERIPADDRESS&gt;:2377 <BR /> Your nodes are now configured to form a swarm cluster! You can see the status of the nodes by running the following command from your manager node: <BR /> C:\temp&gt; docker node ls <BR /> <H2> Step 4: Deploy services to your swarm </H2> <BR /> <EM> <STRONG> Note: </STRONG> Before moving on, <CODE> stop </CODE> and <CODE> remove </CODE> any NGINX or IIS containers running on your hosts. This will help avoid port conflicts when you define services. To do this, simply run the following commands for each container, replacing <CODE> &lt;CONTAINERID&gt; </CODE> with the ID of the container you are stopping/removing: </EM> <BR /> C:\temp&gt; docker stop &lt;CONTAINERID&gt; <BR /> C:\temp&gt; docker rm &lt;CONTAINERID&gt; <BR /> Next, we’re going to use the "web_1" and "web_2" container images that we created in previous steps of this exercise to deploy two container services to our swarm cluster. 
<BR /> <BR /> To create the services, run the following commands from your swarm manager node: <BR /> C:\ &gt; docker service create --name=s1 --publish mode=host,target=80 --endpoint-mode dnsrr web_1 powershell -command {echo sleep; sleep 360000;} <BR /> C:\ &gt; docker service create --name=s2 --publish mode=host,target=80 --endpoint-mode dnsrr web_2 powershell -command {echo sleep; sleep 360000;} <BR /> You should now have two services running, s1 and s2. You can view their status by running the following command from your swarm manager node: <BR /> C:\ &gt; docker service ls <BR /> Additionally, you can view information on the container instances that define a specific service with the following commands (where <CODE> &lt;SERVICENAME&gt; </CODE> is replaced with the name of the service you are inspecting, for example <CODE> s1 </CODE> or <CODE> s2 </CODE> ): <BR /> # List all services <BR /> C:\ &gt; docker service ls <BR /> # List info for a specific service <BR /> C:\ &gt; docker service ps &lt;SERVICENAME&gt; <BR /> <H3> (Optional) Scale your services </H3> <BR /> The commands in the previous step will deploy one container instance/replica for each service, <CODE> s1 </CODE> and <CODE> s2 </CODE> . To scale the services to be backed by multiple replicas, run the following command: <BR /> C:\ &gt; docker service scale &lt;SERVICENAME&gt;=&lt;REPLICAS&gt; <BR /> # e.g. docker service scale s1=3 <BR /> <H2> Step 5: Configure your NGINX load balancer </H2> <BR /> Now that services are running on your swarm, you can configure the NGINX load balancer to distribute traffic across the container instances for those services. <BR /> <BR /> <EM> Of course, generally load balancers are used to balance traffic across instances of a single service, not multiple services. 
For the purpose of clarity, this example uses two services so that the function of the load balancer can be easily seen; because the two services are serving different HTML content, we’ll clearly see how the load balancer is distributing requests between them. </EM> <BR /> <H3> The nginx.conf file </H3> <BR /> First, the nginx.conf file for your load balancer must be configured with the IP addresses and service ports of your swarm nodes and services. The NGINX download from step 1 (obtained as a part of building your NGINX container image) includes an example nginx.conf file. For the purpose of this exercise, a version of that file was copied and adapted to create a simple template for you to adapt with your specific node/container information. <STRONG> Get the template file <A href="#" target="_blank"> here </A> and save it onto your NGINX container host machine. </STRONG> In this step, we'll adapt the template file and use it to replace the default nginx.conf file that was originally downloaded onto your NGINX container image. <BR /> <BR /> You will need to adjust the file by adding the information for your hosts and container instances. The template nginx.conf file provided contains the following section: <BR /> upstream appcluster { <BR /> server &lt;HOSTIP&gt;:&lt;HOSTPORT&gt;; <BR /> server &lt;HOSTIP&gt;:&lt;HOSTPORT&gt;; <BR /> server &lt;HOSTIP&gt;:&lt;HOSTPORT&gt;; <BR /> server &lt;HOSTIP&gt;:&lt;HOSTPORT&gt;; <BR /> server &lt;HOSTIP&gt;:&lt;HOSTPORT&gt;; <BR /> server &lt;HOSTIP&gt;:&lt;HOSTPORT&gt;; <BR /> } <BR /> To adapt the file for your configuration, you will need to adjust the <CODE> &lt;HOSTIP&gt;:&lt;HOSTPORT&gt; </CODE> entries in the config file. You will have an entry for each container endpoint that defines your web services. For any given container endpoint, the value of <CODE> &lt;HOSTIP&gt; </CODE> will be the IP address of the container host upon which that container is running. 
The value of <CODE> &lt;HOSTPORT&gt; </CODE> will be the port on the container host upon which the container endpoint has been published. <BR /> <BR /> <EM> When the services, s1 and s2, were defined in the previous step of this exercise, the <CODE> --publish mode=host,target=80 </CODE> parameter was included. This parameter specified that the container instances for the services should be exposed via published ports on the container hosts. More specifically, by including <CODE> --publish mode=host,target=80 </CODE> in the service definitions, each service was configured to be exposed on port 80 of each of its container endpoints, as well as a set of automatically defined ports on the swarm hosts (i.e. one port for each container running on a given host). </EM> <BR /> <H3> First, identify the host IPs and published ports for your container endpoints </H3> <BR /> Before you can adjust your nginx.conf file, you must obtain the required information for the container endpoints that define your services. To do this, run the following commands (again, run these from your swarm manager node): <BR /> C:\ &gt; docker service ps s1 <BR /> C:\ &gt; docker service ps s2 <BR /> The above commands will return details on every container instance running for each of your services, across all of your swarm hosts. <BR /> <UL> <BR /> <LI> One column of the output, the “ports” column, includes port information for each host of the form <CODE> *:&lt;HOSTPORT&gt;-&gt;80/tcp </CODE> . The values of <CODE> &lt;HOSTPORT&gt; </CODE> will be different for each container instance, as each container is published on its own host port. </LI> <BR /> <LI> Another column, the “node” column, will tell you which machine the container is running on. This is how you will identify the host IP information for each endpoint. </LI> <BR /> </UL> <BR /> You now have the port information and node for each container endpoint. 
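With many replicas, reading the published ports out of the console output by hand gets tedious. As a rough, unofficial illustration, the <CODE> &lt;HOSTPORT&gt; </CODE> values can be scraped from saved <CODE> docker service ps </CODE> output with standard text tools (shown here as a portable shell sketch; the sample output is made up, and the real column layout varies by Docker version):

```shell
# Made-up sample of `docker service ps s1` output; real layout varies by Docker version.
sample='ID    NAME  IMAGE  NODE    DESIRED STATE  CURRENT STATE  PORTS
x1f9  s1.1  web_1  node-a  Running        Running        *:32768->80/tcp
p3k2  s1.2  web_1  node-b  Running        Running        *:32769->80/tcp'

# Extract the published host ports from entries of the form *:PORT->80/tcp
ports=$(printf '%s\n' "$sample" | grep -o '[0-9]*->80/tcp' | cut -d'-' -f1)
echo "$ports"
```

Each extracted port, paired with the IP of the node shown on the same line, gives one server entry for the upstream block.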
Next, use that information to populate the upstream field of your nginx.conf file; for each endpoint, add a server to the upstream field of the file, replacing the <CODE> &lt;HOSTIP&gt; </CODE> field with the IP address of each node (if you don’t have this, run ipconfig on each host machine to obtain it), and the <CODE> &lt;HOSTPORT&gt; </CODE> field with the corresponding host port. <BR /> <BR /> For example, if you have two swarm hosts (with hypothetical IP addresses and, each running three containers, your list of servers will end up looking something like this: <BR /> upstream appcluster { <BR /> server; <BR /> server; <BR /> server; <BR /> server; <BR /> server; <BR /> server; <BR /> } <BR /> <BR /> Once you have changed your nginx.conf file, save it. Next, we'll copy it from your host to the NGINX container image itself. <BR /> <H3> Replace the default nginx.conf file with your adjusted file </H3> <BR /> If your nginx container is not already running on its host, run it now: <BR /> C:\temp&gt; docker run -it -p 80:80 nginx <BR /> Get the ID of the container using: <BR /> C:\temp&gt; docker ps <BR /> With the container running, use the following command to replace the default nginx.conf file with the file that you just configured (run the following command from the directory in which you saved your adjusted version of the nginx.conf on the host machine): <BR /> C:\temp&gt; docker cp nginx.conf &lt;CONTAINERID&gt;:C:\nginx\nginx-1.12.0\conf <BR /> Now use the following command to reload the NGINX server running within your container: <BR /> C:\temp&gt; docker exec &lt;CONTAINERID&gt; nginx.exe -s reload <BR /> <H2> Step 6: See your load balancer in action </H2> <BR /> Your load balancer should now be fully configured to distribute traffic across the various instances of your swarm services. To see it in action, open a browser and <BR /> <UL> <BR /> <LI> If accessing from the NGINX host machine: Type the IP address of the nginx container running on the machine into the browser address bar. 
(This is the internal IP address returned by the <CODE> ipconfig </CODE> command above.) </LI> <BR /> <LI> If accessing from another host machine (with network access to the NGINX host machine): Type the IP address of the NGINX host machine into the browser address bar. </LI> <BR /> </UL> <BR /> Once you’ve typed the applicable address into the browser address bar, press enter and wait for the web page to load. Once it loads, you should see one of the HTML pages that you created in step 2. <BR /> <BR /> Now press refresh on the page. You may need to refresh more than once, but after just a few times you should see the other HTML page that you created in step 2. <BR /> <BR /> If you continue refreshing, you will see the two different HTML pages that you used to define the services, web_1 and web_2, being accessed in a round-robin pattern (round-robin is the default load balancing strategy for NGINX, <A href="#" target="_blank"> but there are others </A> ). The animated image below demonstrates the behavior that you should see. <BR /> <BR /> <IMG src="" /> <BR /> <BR /> As a reminder, below is the full configuration with all three nodes. When you're refreshing your web page view, you're repeatedly accessing the NGINX node, which is distributing your GET request to the container endpoints running on the swarm nodes. Each time you resend the request, the load balancer has the opportunity to route you to a different endpoint, resulting in your being served a different web page, depending on whether your request was routed to an s1 or s2 endpoint. <BR /> <BR /> <IMG src="" /> <BR /> <H2> Caveats and gotchas </H2> <BR /> Is there a way to publish a single port for my service, so that I can load balance across just a few endpoints rather than all of my container instances? <BR /> Unfortunately, we do not yet support publishing a single port for a service on Windows. 
This capability is provided by swarm mode’s routing mesh feature—a feature that allows you to publish ports for a service, so that the service is accessible to external resources via that port on every swarm node. <BR /> <BR /> <A href="#" target="_blank"> Routing mesh </A> for swarm mode on Windows is not yet supported, but will be coming soon. <BR /> Why can’t I run my containerized load balancer on one of my swarm nodes? <BR /> Currently, there is a known bug on Windows, which prevents containers from accessing their hosts using localhost or even the host’s external IP address. This means containers cannot access their host’s exposed ports—they can only access exposed ports on other hosts. <BR /> <BR /> In the context of this exercise, this means that the NGINX load balancer must be running on its own host, and never on the same host as any services that it needs to access via exposed ports. Put another way, for the containerized NGINX load balancer to balance across the two web services defined in this exercise, s1 and s2, it cannot be running on a swarm node—if it were running on a swarm node, it would be unable to access any containers on that node <EM> via host exposed ports. </EM> <BR /> <BR /> Of course, an additional caveat here is that containers do not need to be accessed via host exposed ports. It is also possible to access containers directly, using the container IP and published port. If this instead were done for this exercise, the NGINX load balancer would need to be configured to access: <BR /> <UL> <BR /> <LI> containers that share its host by their container IP and port </LI> <BR /> <LI> containers that do not share its host by their host’s IP and exposed port </LI> <BR /> </UL> <BR /> There is no problem with configuring the load balancer in this way, other than the added complexity that it introduces compared to simply putting the load balancer on its own machine, so that containers can be uniformly accessed via their hosts. 
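Under that direct-access alternative, the upstream block configured earlier would mix the two addressing modes. A minimal sketch (the addresses are placeholders; this assumes one endpoint shares the load balancer's host and one does not):

```nginx
upstream appcluster {
    # Container sharing the NGINX host: container IP + container port
    server <CONTAINERIP>:80;
    # Container on another host: host IP + published host port
    server <HOSTIP>:<HOSTPORT>;
}
```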
</BODY></HTML> Fri, 22 Mar 2019 00:09:03 GMT Virtualization-Team 2019-03-22T00:09:03Z Overlay Network Driver with Support for Docker Swarm Mode Now Available to Windows Insiders on Windows 10 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Feb 09, 2017 </STRONG> <BR /> <EM> Windows 10 Insiders can now take advantage of <A href="#" target="_blank"> overlay networking and Docker swarm mode </A> </EM> <EM> to manage containerized applications in both single-host and clustering scenarios. </EM> <BR /> <BR /> Containers are a rapidly growing technology, and as they evolve so must the technologies that support them as members of a broader collection of compute, storage and networking infrastructure components. For networking, in particular, this means continually striving to achieve better connectivity, higher reliability and easier management for container networking. Less than six months ago, Microsoft released Windows 10 Anniversary Edition and Windows Server 2016, and even as our first versions of Windows with container support were being celebrated we were already hard at work on new container features, including several container networking features. <BR /> <BR /> Our last Windows release showcased <A href="#" target="_blank"> Docker Compose and service discovery </A> —two key features for <EM> single-host </EM> container deployment and networking scenarios. Now, we’re expanding the reach of Windows container networking to <EM> multi-host (clustering) </EM> scenarios with the addition of a native overlay network driver and support for <A href="#" target="_blank"> Docker swarm mode </A> , available today to <A href="#" target="_blank"> Windows Insiders </A> as part of the upcoming Windows 10, Creators Update. <BR /> <BR /> Docker swarm mode is Docker’s native orchestration tool, designed to simplify the experience of declaring, managing and scaling container services. 
The Windows overlay network driver (which uses VXLAN and virtual overlay networking technology) makes it possible to connect container endpoints running on separate hosts to the same, isolated network. <EM> Together, swarm mode and overlay enable easy management and complete scalability of your containerized applications, allowing you to leverage the full power of your infrastructure hosts. </EM> <BR /> <H2> What is “swarm mode”? </H2> <BR /> Swarm mode is a Docker feature that provides built-in container orchestration capabilities, including native clustering of Docker hosts and scheduling of container workloads. A group of Docker hosts forms a “swarm” cluster when their Docker engines are running together in “swarm mode.” <BR /> <BR /> A swarm is composed of two types of container hosts: <STRONG> manager nodes, </STRONG> and <STRONG> worker nodes </STRONG> . Every swarm is initialized via a manager node, and all Docker CLI commands for controlling and monitoring a swarm must be executed from one of its manager nodes. Manager nodes can be thought of as “keepers” of the swarm state—together, they form a consensus group that maintains awareness of the state of services running on the swarm, and it’s their job to ensure that the swarm’s <EM> actual state </EM> always matches its <EM> intended state </EM> , as defined by the developer or admin. <BR /> <P> <STRONG> Note: </STRONG> Any given swarm can have multiple manager nodes, but it must always have <EM> at least one </EM> . </P> <BR /> Worker nodes are orchestrated by Docker swarm via manager nodes. To join a swarm, a worker node must use a “join token” that was generated by the manager node when the swarm was initialized. Worker nodes simply receive and execute tasks from manager nodes, and so they require (and possess) no awareness of the swarm state. 
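The manager/worker workflow described above maps onto two Docker CLI commands. A sketch (the IP address below is a placeholder; the actual join token is generated and printed for you by docker swarm init):

```shell
# On the intended manager node: initialize the swarm
docker swarm init --advertise-addr <MANAGERIP>
# init prints a ready-made "docker swarm join" command containing a worker
# join token; run that command on each worker node, e.g.:
docker swarm join --token <WORKER-JOIN-TOKEN> <MANAGERIP>:2377
```

Port 2377 is the default swarm cluster-management port.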
<BR /> <BR /> [caption id="attachment_9625" align="alignnone" width="600"] <IMG src="" /> Figure 1: A four-node swarm cluster running two container services on isolated overlay networks.[/caption] <BR /> <BR /> <STRONG> Figure 1 </STRONG> offers a simple visualization of a four-node cluster running in swarm mode, leveraging the overlay network driver. In this swarm, Host A is the manager node and Hosts B-D are worker nodes. Together, these manager and worker nodes are running two Docker services which are backed by a total of ten container instances, or “replicas.” The yellow in this figure distinguishes the first service, <EM> Service 1 </EM> ; the containers for Service 1 are connected by an overlay network. Similarly, the blue in this figure represents the second service, <EM> Service 2 </EM> ; the containers for Service 2 are also attached by an overlay network. <BR /> <P> <STRONG> Note: </STRONG> In this case, the two Docker services happen to be connected by separate/isolated overlay networks. It is also possible, however, for multiple container services to be attached to <EM> the same </EM> overlay network. </P> <BR /> <BR /> <H2> Windows Network Stack Implementation </H2> <BR /> Under the covers, Swarm and overlay are enabled by enhancements to the Host Network Service (HNS) and Windows libnetwork plugin for the Docker engine, which leverage the Azure Virtual Filtering Platform (VFP) forwarding extension in the Hyper-V Virtual Switch. <STRONG> Figure 2 </STRONG> shows how these components work together on a given Windows container host, to enable overlay and swarm mode functionality. 
<BR /> <BR /> [caption id="attachment_9655" align="alignnone" width="550"] <IMG src="" /> Figure 2: Key components involved in enabling swarm mode and overlay networking on Windows container hosts.[/caption] <BR /> <H2> The HNS overlay network driver plugin and VFP forwarding extension </H2> <BR /> Overlay networking was enabled with the addition of an overlay network driver plugin to the HNS service, which creates encapsulation rules using the VFP forwarding extension in the Hyper-V Virtual Switch; the HNS overlay plugin communicates with the VFP forwarding extension to perform the VXLAN encapsulation required to enable overlay networking functionality. <BR /> <P> On Windows, the <STRONG> Azure Virtual Filtering Platform (VFP) </STRONG> is a <A href="#" target="_blank"> software defined networking (SDN) </A> element, installed as a programmable Hyper-V Virtual Switch forwarding extension. It is a shared component with the Azure platform, and was added to Windows 10 with Windows 10 Anniversary Edition. It is designed as a high performance, rule-flow based engine, to specify per-endpoint rules for forwarding, transforming, or blocking network traffic. The VFP extension has been used for implementing the l2bridge and l2tunnel <A href="#" target="_blank"> Windows container networking </A> modes and is now also used to implement the overlay networking mode. As we continue to expand container networking capabilities on Windows, we plan to further leverage the VFP extension to enable more fine-grained policy. </P> <BR /> <BR /> <H2> Enhancements to the Windows libnetwork plugin </H2> <BR /> Overlay networking support was the main hurdle that needed to be overcome to achieve Docker swarm mode support on Windows. Aside from that, additions also needed to be made to the Windows libnetwork Plugin—the plugin to the Docker engine that enables container networking functionality on Windows by facilitating communication between the Docker engine and the HNS service. 
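With these components in place, overlay networks and swarm services are driven through the ordinary Docker CLI. A hypothetical sketch (the network, service, and image names are invented for illustration):

```shell
# On a swarm manager node: create an overlay network
docker network create --driver overlay myOverlayNet
# Create a service whose container replicas are attached to that network
docker service create --name s1 --replicas 3 --network myOverlayNet <myWebImage>
```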
<BR /> <H2> Load balancing: Windows routing mesh <EM> coming soon </EM> </H2> <BR /> Currently, Windows supports DNS Round-Robin load balancing between services. The <A href="#" target="_blank"> routing mesh </A> for Windows Docker hosts is not yet supported, but will be coming soon. Users seeking an alternative load balancing strategy today can set up an external load balancer (e.g. NGINX) and use Swarm’s <A href="#" target="_blank"> publish-port mode </A> to expose container host ports over which to load balance. <BR /> <H2> Boost your DevOps cycle and manage containers across Windows hosts by leveraging Docker swarm mode today </H2> <BR /> Together, Docker Swarm and support for overlay container networks enable multi-host scenarios and rapid scalability of your Windows containerized applications and services. This new support, combined with service discovery and the rest of the capabilities that you are used to leveraging in single-host configurations, makes for a clean and straightforward experience developing containerized apps on Windows for multi-host environments. <BR /> <BR /> <STRONG> <EM> To get started with Docker Swarm and overlay networking on Windows, <A href="#" target="_blank"> start here </A> . </EM> </STRONG> <BR /> <BR /> The Datacenter and Cloud Networking team worked alongside our partners internally and at Docker to bring overlay networking mode and Docker swarm mode support to Windows. Again, this is an exciting milestone in our ongoing work to bring better container networking support to Windows users. We’re constantly seeking more ways to improve your experience working with containers on Windows, and it’s only with your feedback that we can best decide what to do next to enable you and your DevOps teams. 
<BR /> <BR /> <EM> We encourage you to share your experiences, questions and feedback with us, to help us learn more about what you’re doing with </EM> <A href="#" target="_blank"> <EM> container networking on Windows </EM> </A> <EM> today, and to understand what you’d like to achieve in the future. Visit our <A href="#" target="_blank"> Contact Page </A> to learn more about the forums that you can use to be in touch with us. </EM> </BODY></HTML> Fri, 22 Mar 2019 00:07:37 GMT Virtualization-Team 2019-03-22T00:07:37Z Introducing the Host Compute Service (HCS) <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Jan 27, 2017 </STRONG> <BR /> <H3> Summary </H3> <BR /> <DIV> This post introduces a low level container management API in Hyper-V called the Host Compute Service (HCS).&nbsp; It tells the story behind its creation, and links to a few open source projects that make it easier to use. </DIV> <BR /> <H3> Motivation and Creation </H3> <BR /> Building a great management API for Docker was important for Windows Server Containers.&nbsp; There's a ton of really cool low-level technical work that went into enabling containers on Windows, and we needed to make sure they were easy to use.&nbsp; This seems very simple, but figuring out the right approach was surprisingly tricky. <BR /> <BR /> Our first thought was to extend our existing management technologies (e.g. WMI, PowerShell) to containers.&nbsp; After investigating, we concluded that they weren’t optimal for Docker, and started looking at other options. <BR /> <BR /> Next, we considered mirroring the way Linux exposes containerization primitives (e.g. 
control groups, namespaces, etc.).&nbsp; Under this model, we could have exposed each underlying feature independently, and asked Docker to call into them individually.&nbsp; However, there were a few questions about that approach that caused us to consider alternatives: <BR /> <OL> <BR /> <LI> The low level APIs were evolving (and improving) rapidly.&nbsp; Docker (and others) wanted those improvements, but also needed a stable API to build upon.&nbsp; Could we stabilize the underlying features fast enough to meet our release goals? </LI> <BR /> <LI> The low level APIs were interesting and useful because they made containers possible.&nbsp; Would anyone actually want to call them independently? </LI> <BR /> </OL> <BR /> After a bit of thinking, we decided to go with a third option.&nbsp; We created a new management service called the Host Compute Service (HCS), which acts as a layer of abstraction above the low level functionality.&nbsp; The HCS was a stable API Docker could build upon, and it was also easier to use.&nbsp; Making a Windows Server Container with the HCS is just a single API call.&nbsp; Making a Hyper-V Container instead just means adding a flag when calling into the API.&nbsp; Figuring out how those calls translate into actual low-level implementation is something the Hyper-V team has already figured out. 
<BR /> <DIV> </DIV> <BR /> <DIV> <IMG src="" /> <IMG src="" /> </DIV> <BR /> <H3> Getting Started with the HCS </H3> <BR /> If you think this is nifty, and would like to play around with the HCS, here's some information to help you get started.&nbsp; Instead of calling our C API directly, I recommend using one of the friendly wrappers we've built around the HCS.&nbsp; These wrappers make it easy to call the HCS from higher level languages, and are released open source on GitHub.&nbsp; They're also super handy if you want to figure out how to use the C API.&nbsp; We've released two wrappers thus far.&nbsp; One is written in Go (and used by Docker), and the other is written in C#. <BR /> <BR /> You can find the wrappers here: <BR /> <UL> <BR /> <LI> <A href="#" target="_blank"> </A> </LI> <BR /> <LI> <A href="#" target="_blank"> </A> </LI> <BR /> </UL> <BR /> If you want to use the HCS (either directly or via a wrapper), or you want to make a Rust/Haskell/InsertYourLanguage wrapper around the HCS, please drop a comment below.&nbsp; I'd love to chat. <BR /> <BR /> For a deeper look at this topic, I recommend taking a look at John Stark’s DockerCon presentation: <A href="#" target="_blank"> </A> <BR /> <BR /> John Slack <BR /> Program Manager <BR /> Hyper-V Team </BODY></HTML> Fri, 22 Mar 2019 00:04:49 GMT Virtualization-Team 2019-03-22T00:04:49Z Use Docker Compose and Service Discovery on Windows to scale-out your multi-service container application <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Oct 18, 2016 </STRONG> <BR /> Article by&nbsp;Kallie Bracken and Jason Messer <BR /> <BR /> <EM> The containers revolution popularized by Docker has come to Windows so that developers on Windows 10 ( </EM> <A href="#" target="_blank"> Anniversary Edition </A> <EM> ) or IT Pros using </EM> <A href="#" target="_blank"> Windows Server 2016 </A> <EM> can rapidly build, test, and deploy Windows “containerized” applications! 
</EM> <BR /> <BR /> <EM> Based on community feedback, we have made several improvements to the Windows containers networking stack to enable multi-container, multi-service application scenarios. Support for Service Discovery and the ability to create (or re-use existing) networks are at the center of the improvements that were made to bring the efficiency of Docker Compose to Windows. Docker Compose enables developers to instantly build, deploy and scale-out their “containerized” applications running in Windows containers with just a few simple commands. Developers define their application using a ‘Compose file’ to specify the services, corresponding container images, and networking infrastructure required to run their application. Service Discovery itself is a key requirement to scale-out multi-service applications using DNS-based load-balancing and we are proud to announce support for Service Discovery in the most recent versions of Windows 10 and Windows Server 2016. </EM> <BR /> <BR /> <EM> Take your next step in mastering development with Windows Containers, and keep letting us know what great capabilities you would like to see next! </EM> <BR /> <BR /> <BR /> <BR /> When it comes to using Docker to manage Windows containers, with just a little background it’s easy to <A href="#" target="_blank"> get simple container instances up and running </A> . Once you’ve covered the basics, the next step is to build your own custom container images using Dockerfiles to install features, applications and other configuration layers on top of the Windows base container images. From there, the next step is to get your hands dirty building multi-tier applications, composed of multiple services running in multiple container instances. It’s here—in the modularization and scaling-out of your application—that Docker Compose comes in; Compose is the perfect tool for streamlining the specification and deployment of multi-tier, multi-container applications. 
Docker Compose registers each container instance by service name through the Docker engine thereby allowing containers to ‘discover' each other by name when sending intra-application network traffic. Application services can also be scaled-out to multiple container instances using Compose. Network traffic destined to a multi-container service is then round-robin’d using DNS load-balancing across all container instances implementing that service. <BR /> <BR /> This post walks through the process of creating and deploying a multi-tier blog application using Docker Compose (Compose file and application shown in Figure 1). <BR /> <BR /> [caption id="attachment_8705" align="alignnone" width="1429"] <IMG src="" /> Figure 1: The Compose File used to create the blog application, including its BlogEngine.NET front-end (the ‘web’ service) and SQL Server back-end (the ‘db’ service).[/caption] <BR /> <BR /> <STRONG> Note: </STRONG> Docker Compose can be used to scale-out applications on a single host which is the scope of this post. To scale-out your ‘containerized’ application across multiple hosts, the application should be deployed on a multi-node cluster using a tool such as Docker Swarm. Look for multi-host networking support in Docker Swarm on Windows in the near future. <BR /> <BR /> The first tier of the application is an ASP.NET web app, <A href="#" target="_blank"> BlogEngine.NET </A> , and the back-end tier is a database built on SQL Server Express 2014. The database is created to manage and store blog posts from different users which are subsequently displayed through the Blog Engine app. <BR /> <BR /> <STRONG> New to Docker or Windows Containers? </STRONG> <BR /> <BR /> This post assumes familiarity with the basics of Docker, Windows containers and ‘containerized’ ASP.NET applications. Here are some good places to start if you need to brush up on your knowledge: <BR /> <UL> <BR /> <LI> Intro to Docker on Windows (i.e. 
using Docker to run Windows base containers) <BR /> <UL> <BR /> <LI> Microsoft MSDN: <A href="#" target="_blank"> Windows Containers </A> and <A href="#" target="_blank"> Containers 101 on Channel 9 </A> </LI> <BR /> </UL> <BR /> </LI> <BR /> <LI> Building a containerized .NET app on Docker (i.e. using Docker to configure/deploy custom containers): <A href="#" target="_blank"> Windows Containers – How to Containerize an ASP.NET Web API Application in Windows using Docker </A> </LI> <BR /> </UL> <BR /> <H2> <STRONG> Setup </STRONG> </H2> <BR /> <H2> <STRONG> System Prerequisites </STRONG> </H2> <BR /> Before you walk through the steps described in this post, check that your environment meets the following requirements and has the most recent versions of Docker and Windows updates installed: <BR /> <UL> <BR /> <LI> Windows 10 Anniversary Edition (Professional or Enterprise) or Windows Server 2016 <BR /> <EM> Windows Containers requires your system to have critical updates installed. Check your OS version by running </EM> <EM> winver.exe </EM> <EM> , and ensure you have installed the latest </EM> <A href="#" target="_blank"> <EM> KB 3192366 </EM> </A> <EM> and/or </EM> <A href="#" target="_blank"> <EM> Windows 10 </EM> </A> <EM> updates. </EM> </LI> <BR /> </UL> <BR /> <UL> <BR /> <LI> The <STRONG> Windows Container Feature </STRONG> and <STRONG> Docker </STRONG> must be enabled/installed on your system as described in the Quickstarts below. 
<BR /> <EM> Make sure you are running the most recent version of Docker - either </EM> <A href="#" target="_blank"> <EM> Docker v1.13.0-dev </EM> </A> <EM> or later for Windows 10 clients OR Commercially Supported (CS) </EM> <A href="#" target="_blank"> <EM> Docker v1.12.2 </EM> </A> <EM> or later for Windows Server 2016 </EM> <BR /> <UL> <BR /> <LI> <A href="#" target="_blank"> Windows Server QuickStart </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Windows 10 Quickstart </A> </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> <UL> <BR /> <LI> The latest version of <STRONG> Docker-Compose </STRONG> (available with <A href="#" target="_blank"> Docker-for-Windows </A> ) must be installed on your system. </LI> <BR /> </UL> <BR /> <STRONG> NOTE </STRONG> : The current version of Docker Compose on Windows requires that the Docker daemon be configured to listen to a TCP socket for new connections. A Pull Request (PR) to fix this issue is in review and will be merged soon. For now, please ensure that you do the following: <BR /> <P> Please configure the Docker Engine by adding a “hosts” key to the daemon.json file (example shown below) following the instructions here. Be sure to restart the Docker service after making this change. </P> <BR /> <BR /> { <BR /> … <BR /> "hosts": ["tcp://localhost:2375", "npipe:////./pipe/win_engine"] <BR /> … <BR /> } <BR /> <P> When running docker-compose, you will either need to explicitly reference the host port by adding the option “-H tcp://localhost:2375” to the end of this command (e.g. docker-compose -H "tcp://localhost:2375") or by setting your DOCKER_HOST environment variable to always use this port (e.g. 
$env:DOCKER_HOST="tcp://localhost:2375") </P> <BR /> <BR /> <H2> Blog Application Source with Compose and Dockerfiles </H2> <BR /> This blog application is based on the Blog Engine ASP.NET web app available publicly here: <A href="#" target="_blank"> </A> .&nbsp; To follow this post and build the described application, a complete set of files is available on GitHub. Download the <A href="#" target="_blank"> Blog Application files from GitHub </A> and extract them to a location somewhere on your machine, e.g. ‘C:\build’ directory. <BR /> <BR /> The blog application directory includes: <BR /> <UL> <BR /> <LI> A ‘web’ folder that contains the Dockerfile and resources that you’ll need to build the image for the blog application’s ASP.NET front-end. </LI> <BR /> <LI> A ‘db’ folder that contains the Dockerfile and resources that you’ll need to build the blog application’s SQL database back-end. </LI> <BR /> <LI> A ‘docker-compose.yml’ file that you will use to build and run the application using Docker Compose. </LI> <BR /> </UL> <BR /> The top-level of the blog application source folder is the main working directory for the directions in this post. <EM> <STRONG> Open an elevated PowerShell session and navigate there now - </STRONG> e.g. </EM> <BR /> <EM> PS C:\&gt; cd c:\build\ </EM> <BR /> <H2> The Blog Application Container Images </H2> <BR /> <H2> Database Back-End Tier: The ‘db’ Service </H2> <BR /> The database back-end Dockerfile is located in the ‘db’ sub-folder of the blog application source files and can be referenced here: <A href="#" target="_blank"> The Blog Database Dockerfile </A> <STRONG> . </STRONG> The main function of this Dockerfile is to run two scripts over the Windows Server Core base OS image to define a new database as well as the tables required by the BlogEngine.NET application. 
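As a rough sketch of the shape of such a Dockerfile (the script names and commands below are illustrative assumptions, not the actual contents of the ‘db’ Dockerfile; refer to the linked file for the real version):

```dockerfile
# Hypothetical sketch of a SQL back-end Dockerfile
FROM microsoft/windowsservercore
# Copy the database-definition scripts from the host into the image
COPY CreateDatabase.sql CreateTables.sql C:/db/
# (the real Dockerfile also obtains and installs SQL Server Express here)
# Run the scripts to create the blog database and its tables
RUN sqlcmd -i C:\db\CreateDatabase.sql
RUN sqlcmd -i C:\db\CreateTables.sql
```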
<BR /> <BR /> The SQL scripts referenced by the Dockerfile to construct the blog database are included in the ‘db’ folder, and copied from host to container when the container image is created so that they can be run on the container. <BR /> <H2> <STRONG> BlogEngine.NET Front-End </STRONG> </H2> <BR /> <A href="#" target="_blank"> The BlogEngine.NET Dockerfile </A> is in the ‘web’ sub-folder of the blog application source files. <BR /> <BR /> This Dockerfile refers to a PowerShell script (buildapp.ps1) that does the majority of the work required to configure the web service image. The <A href="#" target="_blank"> buildapp.ps1 PowerShell Script </A> obtains the BlogEngine.NET project files using a download link from <A href="#" target="_blank"> Codeplex </A> , configures the blog application using the default IIS site, grants full permission over the BlogEngine.NET project files (something that is required by the application) and executes the commands necessary to build an IIS web application from the BlogEngine.NET project files. <BR /> <BR /> After running the script to obtain and configure the BlogEngine.NET web application, the Dockerfile finishes by copying the <A href="#" target="_blank"> Web.config </A> file included in the ‘web’ sub-folder to the container, to overwrite the file that was downloaded from Codeplex. The config file provided has been altered to point the ‘web’ service to the ‘db’ back-end service. <BR /> <H2> <STRONG> Streamlining with Docker Compose </STRONG> </H2> <BR /> When dealing with only one or two independent containers, it is simple to use the ‘docker run’ command to create and start a container image. However, as soon as an application begins to gain complexity, perhaps by including several inter-dependent services or by deploying multiple instances of any one service, the notion of configuring and running that app “manually” becomes impractical. 
To simplify the definition and deployment of an application, we can use <A href="#" target="_blank"> Docker Compose. </A> <BR /> <BR /> A <A href="#" target="_blank"> Compose file </A> is used to define our “containerized” application using two services—a ‘web’ service and a ‘db’ service. &nbsp;The blog application’s Compose File (available <A href="#" target="_blank"> here </A> for reference) defines the ‘web’ service which runs the BlogEngine.NET web front-end tier of the application and the ‘db’ service which runs the SQL Server 2014 Express back-end database tier. The compose file also handles network configuration for the blog application (with both application-level and service-level granularity). <BR /> <BR /> Something to note in the blog application Compose file is that the ‘expose’ option is used in place of the ‘ports’ option for the ‘db’ service. The ‘ports’ option is analogous to using the ‘-p’ argument in a ‘docker run’ command, and specifies HOST:CONTAINER port mapping for a service. However, this ‘ports’ option specifies a specific container host port to use for the service thereby limiting the service to only one container instance since multiple instances can’t re-use the same host port. The ‘expose’ option, on the other hand, can be used to define the internal container port with a dynamic, external port selected automatically by Docker through the Windows Host Networking Service – HNS. This allows for the creation of multiple container instances to run a single service; where the ‘ports’ option requires that every container instance for a service be mapped as specified, the ‘expose’ option allows Docker Compose to handle port mapping as required for scaled-out scenarios. <BR /> <BR /> <A href="#" target="_blank"> The ‘networks’ key </A> in the Compose file specifies the network to which the application services will be connected. 
In this case, we define the default network for all services to use as external meaning a network will not be created by Docker Compose. The ‘nat’ network referenced is the default NAT network created by the Docker Engine when Docker is originally installed. <BR /> <H2> ‘docker-compose build’ </H2> <BR /> In this step, Docker Compose is used to build the blog application. The Compose file references the Dockerfiles for the ‘web’ and ‘db’ services and uses them to build the container image for each service. <BR /> <BR /> From an elevated PowerShell session, navigate to the top level of the Blog Application directory. For example, <BR /> cd C:\build\ <BR /> Now use Docker Compose to build the blog application: <BR /> docker-compose build <BR /> <H2> ‘docker-compose up’ </H2> <BR /> Now use Docker Compose to run the blog application: <BR /> docker-compose up <BR /> This will cause a container instance to be run for each application service. Execute the command to see that the blog application is now up and running. <BR /> docker-compose ps <BR /> You can access the blog application through a browser on your local machine, as described below. <BR /> <BR /> <STRONG> Define Multiple, Custom NAT Networks </STRONG> <BR /> <BR /> In previous Windows Server 2016 technical previews, Windows was limited to a single NAT network per container host. While this is still technically the case, it is possible to define custom NAT networks by segmenting the default NAT network’s large, internal prefix into multiple subnets. <BR /> <BR /> For instance, a custom NAT network could be carved out of the default NAT network’s internal prefix. 
The ‘networks’ section in the Compose file could be replaced with the following: <BR /> networks: <BR /> default: <BR /> driver: nat <BR /> ipam: <BR /> driver: default <BR /> config: <BR /> - subnet: <BR /> This would create a user-defined NAT network with a user-defined IP subnet prefix. <A href="#" target="_blank"> The ipam option </A> is used to specify this custom IPAM configuration. <BR /> <BR /> <STRONG> Note: </STRONG> Ensure that any custom nat network defined is a subset of the larger nat internal prefix previously created. To obtain your host nat network’s internal prefix, run ‘docker network inspect nat’. <BR /> <H2> View the Blog Application </H2> <BR /> Now that the containers for the ‘web’ and ‘db’ services are running, the blog application can be accessed from the local container host using the internal container IP and port (80). Use the command <EM> docker inspect &lt;web container instance&gt; </EM> to determine this internal IP address. <BR /> <BR /> To access the application, open an internet browser on the container host and navigate to “http://&lt;container ip&gt;/BlogEngine/”. <BR /> <BR /> To access the application from an <STRONG> <EM> external host </EM> </STRONG> that is connected to the container host’s network, you must use the container host IP address and the mapped port of the web container. The mapped port of the web container endpoint is displayed by the <EM> docker-compose ps </EM> or <EM> docker ps </EM> commands. <BR /> <BR /> The blog application may take a moment to load, but soon your browser should present the following page. 
<BR /> <BR /> [caption id="attachment_8735" align="alignnone" width="1203"] <IMG src="" /> Screenshot of page[/caption] <BR /> <H2> <STRONG> Taking Advantage of Service Discovery </STRONG> </H2> <BR /> Built into Docker is Service Discovery, which offers two key benefits: service registration and service name to IP (DNS) mapping. Service Discovery is especially valuable in the context of scaled-out applications, as it allows multi-container services to be discovered and referenced in the same way as single container services; with Service Discovery, intra-application communication is simple and concise—any service can be referenced by name, regardless of the number of container instances that are being used to run that service. <BR /> <BR /> Service registration is the piece of Service Discovery that makes it possible for containers/services on a given network to discover each other by name. As a result of service registration, every application service is registered with a set of internal IP addresses for the container endpoints that are running that service. With this mapping, DNS resolution in the Docker Engine responds to any application endpoint seeking to communicate with a given service by sending a randomly ordered list of the container IP addresses associated with that service. The DNS client in the requesting container then chooses one of these IPs for container-container communication. This is referred to as DNS load-balancing. <BR /> <BR /> Through DNS mapping, Docker abstracts away the added complexity of managing multiple container endpoints; because of this piece of Service Discovery, a single service can be treated as an atomic entity, no matter how many container instances it has running behind the scenes. <BR /> <BR /> <EM> Note: </EM> For further context on Service Discovery, visit <A href="#" target="_blank"> this Docker resource </A> . However, note that Windows does not support the “--link” option. 
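Once the ‘db’ service has been scaled out to several instances (as in the next section), DNS load-balancing can be observed directly. A sketch, assuming `nslookup` is available in the image and using an illustrative container name: <BR /> <BR />

```shell
# Resolve the 'db' service name from inside a running 'web' container.
# With three 'db' instances, the answer lists all three container IPs.
docker exec blogapp_web_1 nslookup db

# Repeat the query: the IP list comes back in a different (random) order,
# which is how the DNS client ends up spreading traffic across instances.
docker exec blogapp_web_1 nslookup db
```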
<BR /> <H2> Scale-Out with ‘docker-compose scale’ </H2> <BR /> <IMG src="" /> <BR /> <BR /> While the service registration benefit of Service Discovery is leveraged by an application even when one container instance is running for each application service, a scaled-out scenario is required for the benefit of DNS load-balancing to truly take effect. <BR /> <BR /> To run a scaled-out version of the blog application, use the following command (either in place of ‘docker-compose up’ or even after the compose application is up and running). This command will run the blog application with one container instance for the ‘web’ service and three container instances for the ‘db’ service. <BR /> docker-compose scale web=1 db=3 <BR /> Recall that the docker-compose.yml file provided with the blog application project files does not allow for scaling multiple instances of the 'web' service. To scale the web service, the 'ports' option for the web service must be replaced with the 'expose' option. However, without a load-balancer in front of the web service, a user would need to reference individual container endpoint IPs and mapped ports for external access into the web front-end of this application. An improvement to this application would be to use volume mapping so that all ‘db’ container instances reference the same SQL database files. Stay tuned for a follow-on post on these topics. <BR /> <H2> <STRONG> Service Discovery in action </STRONG> </H2> <BR /> In this step, Service Discovery will be demonstrated through a simple interaction between the ‘web’ and ‘db’ application services. The idea here is to ping different instances of the ‘db’ service to see that Service Discovery allows it to be accessed as a single service, regardless of how many container instances are implementing the service. <BR /> <BR /> <STRONG> <EM> Before you begin: </EM> </STRONG> <EM> Run the blog application using the ‘docker-compose scale’ instruction described above. 
</EM> <BR /> <BR /> <STRONG> Return </STRONG> to your PowerShell session, and run the following command to ping the ‘db’ back-end service from your web service. Notice the IP address from which you receive a reply. <BR /> docker run blogengine ping db <BR /> Now run the ping command again, and notice whether or not you receive a reply from a different IP address (i.e. a different ‘db’ container instance).* <BR /> docker run blogengine ping db <BR /> The image below demonstrates the behavior you should see—after pinging 2-3 times, you should receive replies from at least two different ‘db’ container instances: <BR /> <BR /> <IMG src="" /> <BR /> <BR /> * There is a chance that Docker will return the set of IPs making up the ‘db’ service in the same order as your first request. In this case, you may not see a different IP address. Repeat the ping command until you receive a reply from a new instance. <BR /> <BR /> <STRONG> <EM> Technical Note: Service Discovery implemented in Windows </EM> </STRONG> <BR /> <BR /> On Linux, the Docker daemon starts a new thread in each container namespace to catch service name resolution requests. These requests are sent to the Docker engine, which implements a DNS resolver and responds back to the thread in the container with the IP address/es of the container instance/s which correspond to the service name. <BR /> <BR /> In Windows, service discovery is implemented differently due to the need to support both Windows Server Containers (shared Windows kernel) and Hyper-V Containers (isolated Windows kernel). Instead of starting a new thread in each container, the primary DNS server for the container endpoint’s IP interface is set to the default gateway of the (NAT) network. A request to resolve the service name will be sent to the default gateway IP, where it is caught by the Windows Host Networking Service (HNS) in the container host. 
The HNS service then sends the request to the Docker engine, which replies with the IP address/es of the container instance/s for the service. HNS then returns the DNS response to the container. </BODY></HTML> Fri, 22 Mar 2019 00:01:10 GMT scooley 2019-03-22T00:01:10Z General Availability of Windows Server and Hyper-V Containers in Windows Server 2016 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Oct 13, 2016 </STRONG> <BR /> The general availability of Windows Server 2016 marks a major milestone in our journey to bring world class container technologies to Windows customers. From the first time we showcased this technology at //build in 2015, through the first public preview in Technical Preview 3, to today's general availability, our team has been hard at work creating and refining this technology.&nbsp; So today we are excited that it is available for you to use in production. <BR /> <BR /> With each preview release we have tried hard to continue improving these technologies, and today is no exception. We have increased the performance and density of our containers, lowered their start-up times, and even added Active Directory support. For example, with Hyper-V Containers, we are now taking advantage of new cloning technology designed specifically to reduce start-up times and increase density. We heard your feedback, and we are excited to be expanding cross-SKU support such that you can run Windows Server Core containers using our Hyper-V Container technology, including on Windows 10 Anniversary Update, as well as Windows Server Containers with Nano Server on a Windows Server 2016 host installed with Server Core or Desktop. These are just a few of the enhancements we are excited to be bringing you with Windows Server 2016; our documentation site has more information on features as well as guides to get you started. <BR /> <BR /> Along with this release, in partnership with Docker Inc. 
the CS Docker Engine will also be available to Windows Server 2016 customers. This provides users of Windows Server 2016, at no additional cost, with enterprise support for Windows containers and Docker.&nbsp; Please read more about this announcement on the <A href="#" target="_blank"> Hybrid Cloud blog </A> . <BR /> <BR /> In the coming days, we will also be releasing a OneGet provider that will simplify the experience of installing and setting up the Containers feature, including the CS Docker Engine, on Windows Server 2016 machines. Please stay tuned for more from our team, and also remember to give us feedback on your experience in our <A href="#" target="_blank"> forums </A> , or head over to our <A href="#" target="_blank"> UserVoice </A> page if you have any feature requests. <BR /> <BR /> Ender Barillas <BR /> Program Manager and Release Captain for Windows Server and Hyper-V Containers </BODY></HTML> Fri, 22 Mar 2019 00:00:18 GMT Virtualization-Team 2019-03-22T00:00:18Z Windows Container Networking <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on May 05, 2016 </STRONG> <BR /> Actual Author: &nbsp;Jason Messer <BR /> <BR /> All of the technical documentation corresponding to this post is available <A href="#" title="Container Networking Documentation" target="_blank"> here </A> . <BR /> <BR /> <EM> There is a lot of excitement and energy around the introduction of Windows containers and Microsoft’s partnership with Docker. For Windows Server Technical Preview 5, we invested heavily in the container network stack to better align with the Docker management experience and brought our own networking expertise to add additional features and capabilities for Windows containers! This article will describe the Windows container networking stack, how to attach your containers to a network using Docker, and how Microsoft is making containers first-class citizens in the modern datacenter with Microsoft Azure Stack. 
</EM> <BR /> <H2> Introduction </H2> <BR /> Windows Containers can be used to host all sorts of different applications, from web servers running Node.js, to databases, to video streaming. These applications all require network connectivity in order to expose their services to external clients. So what does the network stack look like for Windows containers? How do we assign an IP address to a container or attach a container endpoint to a network? How do we apply advanced network policy such as maximum bandwidth caps or access control list (ACL) rules to a container? <BR /> <BR /> Let’s dive into this topic by first looking at a picture of the container’s network stack in Figure 1. <BR /> <BR /> [caption id="attachment_8505" align="alignnone" width="919"] <IMG src="" /> Figure 1 – Windows Container Network Stack[/caption] <BR /> <BR /> All containers run inside a container host, which could be a physical server, a Windows client, or a virtual machine. It is assumed that this container host already has network connectivity through a NIC (WiFi or Ethernet), which it needs to extend to the containers themselves. The container host uses a Hyper-V virtual switch to provide this connectivity to the containers and connects the containers to the virtual switch (vSwitch) using either a Host virtual NIC (Windows Server Containers) or a Synthetic VM NIC (Hyper-V Containers). Compare this with Linux containers, which use a <STRONG> <EM> bridge </EM> </STRONG> device instead of the Hyper-V Virtual Switch and <STRONG> <EM> veth pairs </EM> </STRONG> instead of vNICs / vmNICs to provide this basic Layer-2 (Ethernet) connectivity to the containers themselves. <BR /> <BR /> The Hyper-V virtual switch alone does not allow network services running in a container to be accessible from the outside world, however. We also need Layer-3 (IP) connectivity to correctly route packets to their intended destination. 
In addition to IP, we need higher-level networking protocols such as TCP and UDP to correctly address specific services running in a container using a port number (e.g. TCP port 80 is typically used to access a web server). Additional Layer 4-7 services such as DNS, DHCP, HTTP, SMB, etc. are also required for containers to be useful. All of these options and more are supported with Windows container networking. <BR /> <H2> Docker Network Configuration and Management Stack </H2> <BR /> New in Windows Server Technical Preview 5 (TP5) is the ability to set up container networking using the Docker client and the Docker engine’s RESTful API. Network configuration settings can be specified either at container network creation time or at container creation time, depending upon the scope of the setting. Reference the MSDN article <A href="#" target="_blank"> here </A> for more information. <BR /> <BR /> The Windows Container Network management stack uses Docker as the management surface and the Windows Host Network Service (HNS) as a servicing layer to create the network “plumbing” underneath (e.g. vSwitch, WinNAT, etc.). The Docker engine communicates with HNS through a network plug-in (libnetwork). Reference Figure 2 to see the updated management stack. <BR /> <BR /> [caption id="attachment_8515" align="alignnone" width="1014"] <IMG src="" /> Figure 2 – Management Stack[/caption] <BR /> <BR /> With this Docker network plugin interfacing with the Windows network stack through HNS, users no longer have to create their own static port mappings or custom Windows Firewall rules for NAT, as these are automatically created for you. 
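As a sketch of what that looks like in practice (the image name and port numbers here are illustrative), a single `docker run` with a port mapping is enough; no manual NetNat or firewall configuration is required: <BR /> <BR />

```shell
# Map host port 8080 to container port 80 on the default 'nat' network.
# HNS creates the NetNatStaticMapping and the Windows Firewall rule for you.
docker run -d -p 8080:80 --name web my-iis-image

# From an elevated PowerShell prompt, verify the plumbing that was created:
#   Get-NetNatStaticMapping    # shows the 8080 -> 80 mapping added on your behalf
```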
<BR /> <BR /> <STRONG> Example: Create static Port Mapping through Docker <BR /> </STRONG> <EM> Note: NetNatStaticMapping (and Firewall Rule) created automatically </EM> <BR /> <BR /> <IMG src="" /> <BR /> <BR /> <BR /> <H2> Networking Modes </H2> <BR /> Windows containers will attach to a container host network using one of four different network modes (or drivers). The networking mode used determines how the containers will be accessible to external clients, how IP addresses will be assigned, and how network policy will be enforced. <BR /> <BR /> Each of these networking modes uses an internal or external VM Switch – created automatically by HNS – to connect containers to the container host’s physical (or virtual) network. Briefly, the four networking modes are given below with recommended usage. Please refer to the MSDN article ( <A href="#" title="Container Networking Documentation" target="_blank"> here </A> ) for more in-depth information about each mode: <BR /> <UL> <BR /> <LI> NAT – this is the default network mode and attaches containers to a private IP subnet. This mode is quick and easy to use in any environment. </LI> <BR /> <LI> Transparent – this networking mode attaches containers directly to the physical network without performing any address translation. Use this mode with care, as it can quickly cause problems in the physical network when too many containers are running on a particular host. </LI> <BR /> <LI> L2 Bridge / L2 Tunnel – these networking modes should usually be reserved for private and public cloud deployments when containers are running on a tenant VM. </LI> <BR /> </UL> <BR /> <EM> Note: The <STRONG> “NAT” VM Switch Type will no longer be available </STRONG> in Windows Server 2016 or Windows 10 client builds. NAT container networks can be created by specifying the “nat” driver in Docker or NAT Mode in PowerShell. 
</EM> <BR /> <BR /> <STRONG> Example: Create Docker ‘nat’ network <BR /> </STRONG> <EM> Notice how VM Switch and NetNat are created automatically </EM> <BR /> <BR /> <IMG src="" /> <BR /> <H2> Container Networking + Software Defined Networking (SDN) </H2> <BR /> Containers are increasingly becoming first-class citizens in the datacenter and enterprise alongside virtual machines. IaaS cloud tenants or enterprise business units need to be able to programmatically define network policy (e.g. ACLs, QoS, load balancing) for both VM network adapters as well as container endpoints. The Software Defined Networking (SDN) Stack ( <A href="#" target="_blank"> TechNet topic </A> ) in Windows Server 2016 allows customers to do just that by creating network policy for a specific container endpoint through the Windows Network Controller using either PowerShell scripts, SCVMM, or the new Azure Portal in the Microsoft Azure Stack. <BR /> <BR /> In a virtualized environment, the container host will be a virtual machine running on a physical server. The Network Controller will send policy down to a Host Agent running on the physical server using standard SouthBound channels (e.g. OVSDB). The Host Agent will then program this policy into the VFP extension in the vSwitch on the physical server where it will be enforced. This network policy is specific to an IP address (e.g. container end-point) so that even though multiple container endpoints are attached through a container host VM using a single VM network adapter, network policy can still be granularly defined. <BR /> <BR /> Using the L2 tunnel networking mode, all container network traffic from the container host VM will be forwarded to the physical server’s vSwitch. The VFP forwarding extension in this vSwitch will enforce the policy received from the Network Controller and higher-levels of the Azure Stack (e.g. Network Resource Provider, Azure Resource Manager, Azure Portal). Reference Figure 3 to see how this stack looks. 
<BR /> <BR /> [caption id="attachment_8536" align="alignnone" width="1330"] <IMG src="" /> Figure 3 – Containers attaching to SDN overlay virtual network[/caption] <BR /> <BR /> This will allow containers to join overlay virtual networks (e.g. VxLAN) created by individual cloud tenants to communicate across multi-node clusters and with other VMs, as well as receive network policy. <BR /> <H2> Future Goodness </H2> <BR /> We will continue to innovate in this space not only by adding code to the Windows OS but also by contributing code to the open source Docker project on GitHub. We want Windows container users to have full access to the rich set of network policy and be able to create this policy through the Docker client. We’re also looking at ways to apply network policy as close to the container endpoint as possible in order to shorten the data-path and thereby improve network throughput and decrease latency. <BR /> <BR /> Please continue to offer your feedback and comments on how we can continue to improve Windows Containers! <BR /> <BR /> ~ Jason Messer </BODY></HTML> Thu, 21 Mar 2019 23:59:20 GMT scooley 2019-03-21T23:59:20Z //build 2016 Container Announcements: Hyper-V Containers and Windows 10 and PowerShell For Docker! <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Apr 01, 2016 </STRONG> <BR /> <I> 4/26 - Quick update to this post: the GitHub repo for the new PowerShell module for Docker is now public. </I> <BR /> <BR /> <BR /> <BR /> //Build/ will always be a special place for Windows containers; this was the stage where last year we first showed the world a Windows Server Container.&nbsp; So it’s fitting that back home at //build/ this year we have two new announcements to make. <BR /> <BR /> First, as we all know, Windows is an operating system that users love to customize! From backgrounds and icon locations to font sizes and window layouts. 
Everyone has their own preferences.&nbsp; This is even more true for developers: source code locations, debugger configuration, color schemes, environment variables, and default tool configurations are important for optimal efficiency.&nbsp; Whether you are a front-end engineer building a highly scalable presentation layer on top of a multi-layer middle tier, or you are a platform developer building the next amazing database engine, being able to do all of your development in your environment is crucial to your productivity. <BR /> <BR /> For developers using containers, this has typically required running a server virtual machine on their development machine and then running containers inside that virtual machine.&nbsp; This leads to complex and sometimes problematic cross-machine orchestration and cumbersome scenarios for code sharing and debugging, fracturing that personal and optimized developer experience.&nbsp; Today we are incredibly excited to be ending this pain for Windows developers by bringing Hyper-V Containers natively into Windows 10! &nbsp;This will further empower developers to build amazing cloud applications benefiting from native container capabilities right in Windows.&nbsp; Since Hyper-V Containers utilize their own instance of the Windows kernel, your container is truly a server container all the way down to the kernel.&nbsp; Plus, with the flexibility of Windows container runtimes, containers built on Windows 10 can be run on Windows Server 2016 as either Windows Server Containers or Hyper-V Containers. 
<BR /> <BR /> Windows Insiders will start to see a new “Containers” feature in the Windows Features dialog in upcoming flights, and with the upcoming release of Windows Server 2016 Technical Preview 5, the Nano Server container OS image will be made available for download along with an updated Docker engine for Windows.&nbsp; Stay tuned to <A href="#" target="_blank"> </A> for all of the details and a new quick start guide that will get you up and running with Hyper-V Containers on Windows 10 in the near future! <BR /> <BR /> Secondly, since last year at //build/ we’ve been asking for your thoughts and feedback on Windows containers, and we can’t thank you enough for the forum posts, tweets, GitHub comments, in-person conversations, etc… (please keep them coming!).&nbsp; Among all of these comments, one request has come up more than any other - why can’t I see Docker containers from PowerShell? <BR /> <BR /> As we’ve discussed the pros, cons, and various options with you, we’ve come to the conclusion that our current container PowerShell module needs an update… So today we are announcing that we are deprecating the container PowerShell module that has been shipping in the preview builds of Windows Server 2016 and replacing it with a new PowerShell module for Docker.&nbsp; We have already started development on this new module and will be open sourcing it in the near future as part of what we hope will be a community collaboration on building a great PowerShell experience for containers through the Docker engine.&nbsp; This new module builds directly on top of the Docker Engine’s REST interface, enabling user choice between the Docker CLI, PowerShell, or both. 
<BR /> <BR /> Building a great PowerShell module is no easy task: getting all of the code right and striking the right balance of objects, parameter sets, and cmdlet names are all super important.&nbsp; So as we embark on this new module, we are going to be looking to you – our end users and the vast PowerShell and Docker communities – to help shape this module.&nbsp; What parameter sets are important to you?&nbsp; Should we have an equivalent to “docker run”, or should you pipe new-container to start-container – what would you want?&nbsp; To learn more about this module and participate in its development, we will be launching a new page at <A href="#" target="_blank"> </A> in the next few days, so head on over, take a look, and let us know what you think! <BR /> <BR /> -Taylor </BODY></HTML> Thu, 21 Mar 2019 23:58:15 GMT scooley 2019-03-21T23:58:15Z Announcing the release of Hyper-V Containers in Windows Server 2016 Technical Preview 4 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on Nov 19, 2015 </STRONG> <BR /> <P> Today marks an exciting moment on the journey to bring world class container technologies and options to our customers. With the release of <A href="#" target="_blank"> Windows Server 2016 Technical Preview 4 </A> , we’re excited to showcase the next major step in our container journey with the first public preview of Hyper-V Containers, as well as significant enhancements to both Windows Server Containers and the Docker engine for Windows. </P> <P> We heard from you that you want the advantages of containers, including speed, simplified DevOps, and increased flexibility in application development, on Windows Server with support for existing Windows applications and technologies. 
A few months ago, we released a preview of both Windows Server Containers and, in partnership with the Docker community, a preview of the Docker engine for Windows.&nbsp; Expanding your choices with containers, today’s release of Hyper-V Containers brings you a new deployment option with increased isolation. Ensuring applications are hosted with the appropriate level of isolation is vital to the security of your infrastructure. Hyper-V Containers isolate applications with the guarantees associated with traditional virtualization, but with the ease, image format, and management model of Windows Server Containers, including support for the Docker Engine.&nbsp; You can make the choice at deployment of whether your application needs the isolation provided by Hyper-V Containers or not, without having to make any changes to the container image or the container configuration.&nbsp; To learn more, check out Mark Russinovich’s post <A href="#" target="_blank"> Containers: Docker, Windows and Trends </A> , and watch the Microsoft Mechanics episode, <A href="#" target="_blank"> Early look at containers in Windows Server, Hyper-V and Azure </A> , which covers Hyper-V container scenarios. </P> <P> Since August, we’ve also been hard at work enhancing Windows Server Containers. Your feedback on the Windows container forum has been immensely valuable, and I’m happy to say we’ve addressed many of the issues you reported.&nbsp; Application compatibility was a key focus, and while we still have some work ahead of us, a number of key applications and application frameworks now work in Windows Server Containers, including ASP.Net 3.5 and 4.6.&nbsp; You can now use <A href="#" target="_blank"> Nano Server </A> as both a container host and as a container runtime (the operating system that runs inside the container), providing a lean, efficient installation of Windows Server ideal for born-in-the-cloud applications. 
We also added shared folder support (also known as volumes in Docker), hostname configuration, and much more. You can find more details on these and other feature improvements, along with updated content and guidance, on the <A href="#" target="_blank"> Windows Server Containers </A> site. </P> <P> Reading blog posts and watching demos is never quite the same as getting your hands on new technology, playing with it, and building an application with it.&nbsp; As of today, you can run both Windows Server and Hyper-V Containers by <A href="#" target="_blank"> downloading Windows Server 2016 Technical Preview 4 </A> to install on your own systems, and you can experiment with Windows Server Containers by spinning up a Windows Server 2016 Technical Preview 4 virtual machine in Azure straight from the <A href="#" target="_blank"> marketplace </A> . We look forward to your continued <A href="#" target="_blank"> feedback </A> ! </P> <P> </P> <P> Taylor Brown <BR /> Lead Program Manager, Windows Server and Hyper-V Containers <BR /> </P> <P> <IFRAME frameborder="0" height="360" src="" width="640"> </IFRAME> </P> <P> </P> <BR /> </BODY></HTML> Thu, 21 Mar 2019 23:55:02 GMT Virtualization-Team 2019-03-21T23:55:02Z Windows Server Containers and Hyper-V Containers Debut at Ignite and Build <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TECHNET on May 11, 2015 </STRONG> <BR /> Build and Ignite were packed with information about Windows Server Containers (announced in October) and Hyper-V Containers (announced in April).&nbsp; We also launched the official Windows containers documentation site here: <BR /> <A href="#" target="_blank"> </A> <BR /> <BR /> <BR /> Ignite and Build sessions as well as Channel9 interviews are available on the front page and in the Community section. <BR /> <BR /> Keep checking back in for the latest announcements, community tools, and information as we move forward. 
<BR /> <BR /> Right now you can read about Windows containers, the container ecosystem, and our most frequently asked questions. <BR /> <BR /> Cheers, <BR /> Sarah <BR /> <BR /> <P> </P> </BODY></HTML> Thu, 21 Mar 2019 23:50:43 GMT Virtualization-Team 2019-03-21T23:50:43Z